Designing great AI products — Personality and emotion

Kore
Becoming Human: Artificial Intelligence Magazine
9 min read · Mar 16, 2023


The following post is an excerpt from my book ‘Designing Human-Centric AI Experiences’ on applied UX design for artificial intelligence.


We tend to anthropomorphize AI systems, i.e., we impute them with human-like qualities. Consumer demand for personality in AI dates back many decades in Hollywood and the video game industry¹. Many popular depictions of AI, like Samantha in the movie Her or Ava in Ex Machina, show a personality and sometimes even display emotions. Many AI systems, like Alexa or Siri, are designed with a personality in mind. However, choosing to give your AI system a personality has its advantages and disadvantages. While an AI that appears human-like might feel more trustworthy, your users might overtrust the system or expose sensitive information because they think they are talking to a human. An AI with a personality can also set unrealistic expectations about its capabilities. If a user forms an emotional bond with an AI system, turning it off can be difficult, even when it is no longer useful. It is generally not a good idea to imbue your AI with human-like characteristics, especially if it is meant to act as a tool for tasks like translating languages, recognizing objects in images, or calculating distances.


We’re forming tight relationships with our cars, our phones, and our smart-enabled devices². Many of these bonds are not intentional. Some argue that we’re building a lot of smartness into our technologies but not a lot of emotional intelligence³. Affect is a core aspect of intelligence. Our emotions give cues to our mental states. Emotions are one mechanism humans evolved to accomplish what needs to be done in the time available with the information at hand, that is, to satisfice. Emotions are not an impediment to rationality; arguably, they are integral to rationality in humans. We are designing AI systems that simulate emotions in their interactions. According to Rana El Kaliouby, the founder of Affectiva, this kind of interface between humans and machines is going to become so ubiquitous that it will be ingrained in future human-machine interfaces, whether in our car, our phone, or the smart devices in our home or office. We will simply coexist and collaborate with these new devices and new kinds of interfaces. The goal of disclosing the agent’s ‘personality’ is to allow a person without any knowledge of AI technology to form a meaningful understanding of the agent’s likely behavior.

Here are some scenarios where it makes sense to personify AI systems:

  1. Avatars in games, chatbots, and voice assistants.
  2. Collaborative settings where humans and machines partner up and help each other. E.g., cobots in factories might use emotional cues to motivate workers and signal errors. An AI assistant that collaborates and works alongside people may need to display empathy.
  3. If your AI is involved with caregiving activities like therapy, nursing, etc., it might make sense to display emotional cues.
  4. If AI is pervasive in your product or a suite of products and you want to communicate it under an umbrella term, having a consistent brand, tone of voice, and personality is important. E.g., almost all Google Assistant capabilities have a consistent voice across touchpoints like Google Lens, smart speakers, and the Assistant within Google Maps.
  5. If building a tight relationship between your AI and the user is a core feature of your product.


Guidelines for designing an AI personality

Designing your AI’s personality is an opportunity for building trust. Sometimes it makes sense to imbue your AI features with a personality and simulate emotions. The job of designing a persona for your AI is complicated and needs to be done carefully.

Here are some guidelines to help you design better AI personas:

Don’t pretend to be human

People tend to trust human-like responses from AI interfaces involving voice and conversation. However, if the algorithmic nature and limits of these products are not explicitly communicated, they can set unrealistic expectations and eventually lead to user disappointment or even unintended deception. For example, I have a cat, and I sometimes talk to her. I never think she is an actual human, but I do believe she is capable of giving me a response. When users confuse an AI with a human being, they can sometimes disclose more information than they would otherwise or rely on the system more than they should. While it can be tempting to simulate humans and try to pass the Turing test, when building a product that real people will use, you should avoid emulating humans completely. We don’t want to dupe our users and break their trust. For example, Microsoft’s Cortana doesn’t think it’s human; it knows it isn’t a girl, and it has a team of writers who write for what it’s engineered to do. Your users should always be aware that they are interacting with an AI. Good design does not sacrifice transparency in creating a seamless experience. Imperceptible AI is not ethical AI¹⁰.
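A minimal sketch of this idea, assuming a hypothetical assistant name and greeting copy (nothing here comes from a real product): disclose the AI’s nature in the very first message, so transparency never depends on the user asking.

```python
# Hypothetical sketch: disclose up front that the user is talking to an AI.
# The assistant name and wording are illustrative assumptions.

def build_greeting(assistant_name: str) -> str:
    """Return an opening message that identifies the assistant as an AI."""
    return (
        f"Hi, I'm {assistant_name}, an automated assistant. "
        "I'm not a person, but I can try to help with your questions."
    )

print(build_greeting("HealthBot"))
```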


Clearly communicate boundaries

You should clearly communicate your AI’s limits and capabilities. When interacting with an AI that has a personality and emotions, people can struggle to build accurate mental models of what’s possible and what’s not. While the idea of a general AI that can answer any question is easy to grasp and more inviting, it can set the wrong expectations and lead to mistrust. For example, an ‘ask me anything’ call-out in a healthcare chatbot is misleading since you can’t actually ask it anything: it can’t get you groceries or call your mom. A better call-out would be ‘ask me about medicines, diseases, or doctors.’ When users can’t accurately map the system’s abilities, they may over-trust the system at the wrong times, or miss out on the greatest value-add of all: better ways to do a task they take for granted¹².

Healthcare chatbot: Clearly communicate boundaries. (left) Aim to explain what the AI can do. In this example, the bot indicates its capabilities and boundaries. (right) Avoid open-ended statements. In this example, saying ‘ask me anything’ is misleading since users can’t ask anything they want.
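One hedged way to put this guideline into code, assuming a hypothetical healthcare bot whose supported topics are known up front (the function names and copy are illustrative):

```python
# Hypothetical sketch: state concrete capabilities instead of 'ask me anything'.
# Assumes at least two supported topics.

SUPPORTED_TOPICS = ["medicines", "diseases", "doctors"]

def capability_prompt(topics: list[str]) -> str:
    """Build an opening prompt that sets accurate expectations."""
    return f"Ask me about {', '.join(topics[:-1])}, or {topics[-1]}."

def out_of_scope_reply(topics: list[str]) -> str:
    """Restate the boundaries when a request falls outside them."""
    return "Sorry, that's outside what I can help with. " + capability_prompt(topics)

print(capability_prompt(SUPPORTED_TOPICS))
# -> Ask me about medicines, diseases, or doctors.
```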

Consider your user

When crafting your AI’s personality, consider who you are building it for and why they would use your product. Knowing this can help you make decisions about your AI’s brand, tone of voice, and appropriateness within the target user’s context. Here are some recommendations:

  1. Define your target audience and their preferences. Your user persona should consider their job profiles, backgrounds, characteristics, and goals.
  2. Understand your users’ purpose and expectations when interacting with your AI. Consider the reason they use your AI product. For example, an empathetic tone might be necessary if users come to your AI for customer service, while your AI can take a more authoritative tone when delivering information.

Consider cultural norms

When deploying AI solutions with a personality, you should consider the social and cultural values of the community within which it operates. This can affect the type of language your AI uses, whether to include small-talk responses, amount of personal space, tone of voice, gestures, non-verbal communications, amount of eye contact, speed of speech, and other culture-specific interactions. For instance, although a “thumbs-up” sign is commonly used to indicate approval, in some countries, this gesture can be considered an insult¹³.
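As a sketch of how culture-specific behavior might be configured rather than hard-coded, here is a hypothetical per-locale settings table; the locales, fields, and values are assumptions for illustration, not recommendations for any real market:

```python
# Hypothetical sketch: keep culture-specific interaction choices in
# per-locale configuration so they can be reviewed by local experts.

from dataclasses import dataclass

@dataclass
class LocaleStyle:
    small_talk: bool       # include small-talk responses?
    formal_address: bool   # prefer formal forms of address?
    speech_rate: float     # relative speed of synthesized speech

LOCALE_STYLES = {
    "en-US": LocaleStyle(small_talk=True, formal_address=False, speech_rate=1.0),
    "ja-JP": LocaleStyle(small_talk=False, formal_address=True, speech_rate=0.9),
}

def style_for(locale: str) -> LocaleStyle:
    """Fall back to a conservative default for unknown locales."""
    return LOCALE_STYLES.get(locale, LocaleStyle(False, True, 1.0))
```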

Designing responses

Leveraging human-like characteristics within your AI product can be helpful, especially if product interactions rely on emulating human-to-human behaviors like conversation and delegation. Here are some considerations when designing responses for your AI persona:

Grammatical person

The grammatical person is the distinction between first-person (I, me, we, us), second-person (you), and third-person (he, she, they) perspectives. Using the first person is useful in chat and voice interactions. Users can intuitively understand a conversational system since it mimics human interactions. However, using first-person can sometimes set wrong expectations of near-perfect natural language understanding, which your AI might not be able to pull off. In many cases, like providing movie recommendations, it is better to use second-person responses like ‘you may like’ or third-person responses like ‘people also watched.’
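To make the distinction concrete, here is a small sketch with hypothetical response templates; the template copy is illustrative, and the point is simply that grammatical person is a deliberate, per-surface choice:

```python
# Hypothetical sketch: grammatical person as an explicit design choice.

RECOMMENDATION_TEMPLATES = {
    "first_person": "I think you'd enjoy {title}.",                   # chat/voice
    "second_person": "You may like {title}.",                         # recommendation rows
    "third_person": "People who watched this also watched {title}.",  # social proof
}

def recommend(title: str, person: str = "second_person") -> str:
    return RECOMMENDATION_TEMPLATES[person].format(title=title)

print(recommend("The Matrix"))                  # You may like The Matrix.
print(recommend("The Matrix", "third_person"))
```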

Tone of voice

What we say is the message, and how we say it is our voice¹⁴. When you go to the dentist, you expect a different tone than when you see your chartered accountant or your driving instructor. Like a person, your AI’s voice should express its personality in a particular way; its tone should adjust based on the context. For example, you would want to express happiness in a different tone than an error. Having the right tone is critical to setting the right expectations and to ease of use. It shows users that you understand their expectations and goals when interacting with your AI assistant. An AI assistant focused on healthcare may require some compassion, whereas an AI assistant for an accountant may require a more authoritative, professional tone, and an AI assistant for a real estate agency should convey some excitement and enthusiasm¹⁵.
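A minimal sketch of context-dependent tone, assuming hypothetical contexts and copy (one persona, several tones):

```python
# Hypothetical sketch: same persona, different tone per situation.

TONE_BY_CONTEXT = {
    "success": "Done! Your appointment is booked.",
    "error": "Sorry, something went wrong on my end. Let's try that again.",
    "sensitive": "I understand this may be stressful. Take your time.",
}

def respond(context: str) -> str:
    """Pick the tone that matches the moment; default to a neutral reply."""
    return TONE_BY_CONTEXT.get(context, "Okay, let's continue.")
```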

Strive for inclusivity

In most cases, try to make your AI’s personality as inclusive as possible. Be mindful of how the AI responds to users. While you may not be in the business of teaching users how to behave, it is good to establish certain morals for your AI’s personality. Here are some considerations:

  1. Consider your AI’s gender, or whether it should have one at all. By giving it a name, you are already creating an image of the persona. For example, Google Assistant is a digital helper that seems human without pretending to be one. That’s part of the reason Google’s version doesn’t have a human-ish name like Siri or Alexa¹⁶. Ascribing your AI a gender can perpetuate negative stereotypes and introduce bias. For example, giving a doctor persona a male name and a nurse persona a female name can contribute to harmful stereotypes.
  2. Consider how you would respond to abusive language. Don’t make a game of abusive language, and don’t ignore bad behavior. For example, if you say ‘fuck you’ to Apple’s Siri, it declines to respond, saying ‘I won’t respond to that’ in a firm, assertive tone.
  3. When users display inappropriate behavior, like asking for a sexual relationship with your AI, respond with a firm no. Don’t shame people, but don’t encourage, allow, or perpetuate bad behavior. You can acknowledge the request and say that you don’t want to go there.
  4. While it can be tempting to make your AI’s personality fun and humorous, humor should only be applied selectively and in very small doses¹⁷. Humor is hard. Don’t throw anyone under the bus, and consider if you are marginalizing anyone.
  5. You will run into tricky situations where users say they are sad, depressed, in need of help, or suicidal. In such cases, your users expect a response. Your AI’s ethics will guide the type of response you design; a sketch of one possible routing follows this list.
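Here is a deliberately simple sketch of that routing. The keyword lists are stand-ins for a real classifier, and the crisis copy is a placeholder; a production system would need locally appropriate resources and expert review:

```python
# Hypothetical sketch: route abusive or crisis inputs to firm, safe responses
# before normal conversation handling. Keyword matching is a stand-in for a
# proper classifier.

ABUSE_KEYWORDS = {"fuck you", "idiot"}
CRISIS_KEYWORDS = {"suicidal", "kill myself", "want to die"}

def triage(user_text: str) -> str | None:
    text = user_text.lower()
    if any(k in text for k in CRISIS_KEYWORDS):
        # Never joke or deflect; point to real help (placeholder resource).
        return ("It sounds like you're going through something serious. "
                "You're not alone. Please consider reaching out to a "
                "local crisis helpline.")
    if any(k in text for k in ABUSE_KEYWORDS):
        # Firm, non-playful refusal: no shaming, no encouragement.
        return "I won't respond to that."
    return None  # hand off to normal conversation handling
```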

Don’t leave the user hanging

Ensure that your users have a path forward when interacting with your AI. You should be able to take any conversation to its logical conclusion, even if that means admitting you don’t have a proper response. Never leave users feeling confused about the next steps when they’re given a response.
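A small sketch of that principle, with hypothetical topics and copy: even the fallback response names concrete next steps instead of a dead end.

```python
# Hypothetical sketch: every fallback offers a path forward.

def fallback_reply(topics: list[str]) -> str:
    """Admit the gap, then offer concrete options."""
    return (
        "I don't have an answer for that yet. "
        f"I can help with {', '.join(topics)}, or connect you to a human agent."
    )

print(fallback_reply(["medicines", "diseases", "doctors"]))
```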

Risks of personification

While a human-like AI can feel more trustworthy, imbuing your AI with a personality comes with its own risks. The following are some risks you need to be mindful of:

  1. We should think twice before allowing AI to take over interpersonal services. You need to ensure that your AI’s behavior doesn’t cross legal or ethical bounds. A human-like AI can appear to act as a trusted friend ready with sage or calming advice but might also be used to manipulate users. Should an AI system be used to nudge users for the user’s benefit or for the organization building it?
  2. When affective systems are deployed across cultures, they could adversely affect the cultural, social, or religious values of the community in which they interact¹⁸. Consider the cultural and societal implications of deploying your AI.
  3. AI personas can perpetuate or contribute to negative stereotypes and gender or racial inequality, for example, by suggesting that an engineer is male and a school teacher is female.
  4. AI systems that appear human-like might engage in the psychological manipulation of users without their consent. Ensure that users are aware of this and consent to such behavior. Provide them an option to opt-out.
  5. Privacy is a major concern. For example, ambient recordings from an Amazon Echo were submitted as evidence in an Arkansas murder trial, the first time data recorded by an artificial-intelligence-powered gadget was used in a U.S. courtroom¹⁹. Some AI systems are constantly listening to and monitoring user input and behavior. Users should be explicitly informed when their data is being captured and provided with an easy way to opt out of using the system.
  6. Anthropomorphized AI systems can have side effects such as interfering with the relationship dynamics between human partners, causing attachments between the user and the AI that are distinct from the human partnership²⁰.

The above post is an excerpt from my book ‘Designing Human-Centric AI Experiences’ on applied UX design for artificial intelligence.
