
4 ways organisations can move beyond instrumental efficiency of conversational AI

The article emphasises the importance of incorporating human naturalness, rather than pure instrumental efficiency, into conversational AI. Here are four key takeaways for organisations.

Published: Apr 6, 2023 12:33:28 PM IST
Updated: Apr 6, 2023 12:46:42 PM IST


Driven by the need to provide continuous, timely, and efficient customer service, firms have shown an increased interest in designing and implementing artificial intelligence (AI)-based interactional technologies, such as conversational AI or chatbots. Chatbots are typically available through messaging applications or chat windows on company websites, providing inquiry-based customer service. These conversational AI agents remove the need for human service agents to handle such inquiries.

However, the viability of conversational AI agents as a business solution depends on users' willingness to engage with them. AI agents must create a natural, human-like interpersonal environment that facilitates user engagement. Understanding which human-like competencies in conversational AI agents foster user engagement is therefore essential.

Human-like conversational AI competencies

The three desirable interactive competencies in conversational AI are cognitive, relational, and emotional competencies. Cognitive competency is an AI agent's ability to apply problem-solving and decision-making skills to complete assigned tasks effectively. Cognitive competency in AI reduces the cognitive effort of the user.

Relational competency is the AI agent's interpersonal skills, such as supporting, cooperating, and collaborating with its users. Relational competency in AI results in fewer communication ambiguities for the user. Emotional competency is the ability of an AI agent to self-manage and moderate its interactions with users, accounting for their moods, feelings, and reactions through appropriate expressions and behaviour. Emotional competency in AI results in a high degree of physiological arousal for the user. Together, the cognitive, relational, and emotional competencies in conversational AI agents lower the user's cognitive effort, diminish communication ambiguities, and enhance physiological arousal. These user-AI interactions therefore feel human-like and natural, resulting in deeper user engagement.

How do human-like conversational AI competencies foster user engagement?

Uncertainty related to the "black box" that hides AI inputs and operations from users may discourage them from using it. Trust is the backbone of all social interactions, so user trust in conversational AI is of utmost importance. AI's decision-making processes are far too complex for users to understand, making them feel vulnerable and anxious due to a perceived loss of control. Attributing human qualities to nonhuman technological agents builds user trust. Human-like competencies in conversational AI agents can therefore enhance user trust by overcoming users' uncertainties and keeping them engaged. The cognitive, relational, and emotional human-like competencies are expected to foster naturalness in users' interactions with AI, and this increased naturalness reassures the user of conversational AI's validity as an interactional partner. Thus, human-like interactional competencies in AI provide users with appropriate trust-building cues, alleviating their perceptions of risk and vulnerability. This enhanced level of trust fosters deeper user engagement with conversational AI.


Key takeaways for organisations

The article emphasises the importance of incorporating human naturalness, rather than pure instrumental efficiency, into conversational AI. There are four key takeaways for organisations from this article.

1) In their efforts to anthropomorphise AI, organisations should create the appearance that their conversational AI can think. AI designers should develop human-like cognitive capabilities in conversational AI by paying particular attention to how humans think.

2) Organisations should plan and design their AI services by building conversational AI agents embedded with an "artificial brain" as well as an "artificial heart." Thus, AI designers should design conversational AI that can understand the range of emotions experienced by humans and provide personalised, emotionally sensitive experiences that impact users physiologically. In addition, based on customer responses, an emotionally equipped AI chatbot should be able to sense complex emotions such as frustration and, when a customer is unusually frustrated, decide to forward the conversation to a live service agent (a minimal sketch of this escalation logic follows the list below).

3) AI designers should strengthen users' perception of personalised yet fair service by first understanding their users and then building bias-free social connectedness into the AI.

4) AI designers should be mindful of the unique user needs across industries as they build the necessary competencies into their agents, so that the resulting conversational AI is trustworthy and engaging.
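To make the escalation idea in takeaway 2 concrete, here is a minimal sketch in Python. The article does not prescribe any particular sentiment model or chatbot platform; the keyword-based scoring, the threshold value, and the names detect_frustration, handle_customer_message, and "escalate_to_live_agent" are hypothetical stand-ins used purely for illustration.

```python
# Minimal sketch of frustration-based escalation, assuming a simple
# keyword-scoring heuristic. A production system would use a proper
# emotion-recognition model and conversation context instead.

FRUSTRATION_CUES = {
    "frustrated", "annoyed", "useless", "ridiculous", "angry", "waste of time",
}


def detect_frustration(message: str) -> float:
    """Return a crude frustration score between 0 and 1 from simple text cues."""
    text = message.lower()
    cue_hits = sum(cue in text for cue in FRUSTRATION_CUES)
    emphasis = text.count("!")  # exclamation marks as a rough intensity signal
    return min(1.0, 0.3 * cue_hits + 0.1 * emphasis)


def handle_customer_message(message: str, threshold: float = 0.6) -> str:
    """Decide whether the chatbot continues or hands off to a live agent."""
    if detect_frustration(message) >= threshold:
        return "escalate_to_live_agent"   # unusually frustrated customer
    return "continue_with_chatbot"        # chatbot keeps handling the inquiry


if __name__ == "__main__":
    print(handle_customer_message("Thanks, that answered my question."))    # continue_with_chatbot
    print(handle_customer_message("This is useless, I'm so frustrated!!"))  # escalate_to_live_agent
```

The keyword list and threshold here are placeholders; the design point is simply that the escalation decision is made from a sensed emotional signal rather than from task progress alone.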

Dr Shalini Chandra, Associate Professor, S P Jain School of Global Management

The article is based on the research published in the December 2022 issue of the Journal of Management Information Systems, co-authored by Dr Shalini Chandra from S P Jain School of Global Management in Singapore, Dr Anuragini Shirish from Institut Mines-Télécom Business School in Paris, and Dr Shirish C Srivastava from HEC Business School in Paris.
