The power of AI to shape negotiations
How to harness the vast potential of AI to enhance negotiation outcomes — while navigating its challenges

Wouldn’t it be wonderful if we could outsource the complex, laborious and often emotionally intense process of negotiating to technology? Until recently, the idea of merging negotiation and AI was but a dream. However, the launch of more sophisticated AI systems has raised the bar for how negotiations could evolve and how we approach the process.
Negotiation has traditionally been seen by many as an art – an intensely human-centric task that requires mixing collaborative and competitive moves to overcome complexity, information asymmetry and suspicion to arrive at an acceptable outcome. Only in recent decades has it evolved into a science focused on codifying a systematic approach to problem-solving.
The recent interplay between AI and negotiation marks a paradigm shift in the latter. Today, agents powered by large language models (LLMs) emulate human behaviour based on social science techniques, while other AI tools draw on economics and game theory methods. As the technology advances, it is helpful to anticipate how it could shape negotiation strategies, alongside the possible risks involved.
These tools have significantly lowered the cost of preparation: obtaining relevant information, absorbing essential advice and assembling a robust strategy. When employed as a negotiation preparation assistant, generic LLMs can increase the average quality of the output at a fraction of the usual time and cost. At present, however, the advice can be generic, the information can be wrong and the role-plays are still glitchy. Future LLMs are expected to fix these weaknesses.
The adoption of AI-supported negotiation systems by corporations can result in competitive advantage, short-term gains and an early learning curve. This requires investing time and money to change existing processes and systems. It could therefore take some time before we see a marked shift towards semi-automated or automated negotiation agents.
Human negotiators are limited by emotions, cognitive biases and ignorance of best practices, all of which can hinder our ability to craft and agree to optimal solutions. While AI systems trained on historical data can also develop biases, these can be reduced or eradicated more easily than with humans. Indeed, LLMs have an easier time remaining rational and sticking to best practices, as they don’t experience emotions (although they can mimic them).
Negotiations today are seldom recorded, leaving us ignorant of what took place – including potential unethical or illegal practices. Using AI could increase the traceability and transparency of the process and allow for audits and learning loops. This can help organisations improve negotiation skills, outcomes, fairness and accountability.
At the moment, those employing AI in negotiations are using it to help them negotiate better, but the processes are essentially the same. An exciting opportunity would be using AI to rethink or redesign how we negotiate. For instance, AI can handle so much data at once that each side can share its interests and preferences with an AI “black box” or mediator, where neither party learns the limits or secrets of the other. AI can use the huge volume of information to produce optimal solutions that humans are unlikely to craft on their own via standard negotiation practices.
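The mediator idea described above can be sketched in a few lines. This is a hypothetical, simplified illustration (the scoring scale, issue names and reservation thresholds are invented for the example): each party privately submits per-issue weights and a minimum acceptable score, and the mediator searches package deals for the one that maximises joint value, without revealing either side’s inputs to the other.

```python
# Hypothetical sketch of an AI "black box" mediator. Each option on an
# issue is a (value_to_A, value_to_B) pair on a 0..1 scale; each party
# privately supplies weights over issues and a reservation score.
from itertools import product

def mediate(issues, weights_a, weights_b, reserve_a, reserve_b):
    """Return the feasible package deal with the highest joint score."""
    best, best_joint = None, float("-inf")
    for combo in product(*issues.values()):
        deal = dict(zip(issues.keys(), combo))
        score_a = sum(weights_a[i] * opt[0] for i, opt in deal.items())
        score_b = sum(weights_b[i] * opt[1] for i, opt in deal.items())
        # A deal is feasible only if it clears both reservation scores.
        if score_a >= reserve_a and score_b >= reserve_b:
            if score_a + score_b > best_joint:
                best, best_joint = deal, score_a + score_b
    return best  # None means no zone of possible agreement exists

# Invented example data: two issues, a handful of options each.
issues = {
    "price":    [(1.0, 0.2), (0.6, 0.6), (0.2, 1.0)],
    "delivery": [(0.3, 0.9), (0.9, 0.4)],
}
deal = mediate(issues,
               weights_a={"price": 0.7, "delivery": 0.3},
               weights_b={"price": 0.4, "delivery": 0.6},
               reserve_a=0.4, reserve_b=0.4)
```

Because only the mediator sees the weights and reservation values, neither side can infer the other’s walk-away point from the outcome alone – which is the confidentiality property the “black box” is meant to provide.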
The complexity of negotiating too many issues at once can be cognitively overwhelming for humans, which reduces or caps value creation. But with AI’s vast computational ability, negotiations could juggle an enormous number of issues simultaneously to identify trade-offs and find optimal solutions quickly – and with fewer communication or relationship risks.
What’s more, AI may have an easier time sticking with best practices, such as tit-for-tat moves. It can start positively (as it does not feel fear), reciprocate negative moves (not to punish or escalate, but to teach the counterparty) and forgive and return to collaboration in response to a positive move (as it does not feel the need for revenge or retribution). AI can also resist bias exploitation, power moves or manipulation, which could make it a great negotiator against win-lose tactics.
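The tit-for-tat pattern described above is simple enough to state as code. This is a minimal sketch, not a production agent, with moves abstracted to cooperate (‘C’) or defect (‘D’): open collaboratively, mirror the counterparty’s previous move, and forgive the moment they return to cooperation.

```python
# Minimal tit-for-tat agent illustrating the three properties named in
# the text: a positive opening, proportionate reciprocation, and
# immediate forgiveness (no memory of grievances beyond the last move).
def tit_for_tat(history):
    """history: list of the counterparty's past moves ('C' or 'D')."""
    if not history:
        return "C"       # start positively: no fear on the first move
    return history[-1]   # reciprocate the last move, then let it go

moves = [tit_for_tat([]),           # opens cooperatively
         tit_for_tat(["D"]),        # answers defection to teach, not punish
         tit_for_tat(["D", "C"])]   # forgives as soon as they cooperate
```

Because the agent consults only the most recent move, it cannot hold a grudge or be baited into escalation, which is exactly why such mechanical consistency is hard for emotional human negotiators to match.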
Additionally, as automated negotiations become commonplace, some companies or individuals may be incentivised to discover, hack and exploit the virtual agents’ rules, decision trees, patterns or weaknesses. Semi-automated processes, or those that put the final decision in the hands of human negotiators, may prevent such exploitation, though at the cost of efficiency.
Another hurdle may be automated agents created to intentionally negotiate using win-lose strategies or to exploit collaborative agents and humans. Currently, most designers of automated or semi-automated agents claim to promote value creation and optimisation to increase gains for all parties. Unfortunately, such environments could invite exploitation. Companies that claim, accurately or otherwise, that their agents are superior can become a tempting proposition for powerful clients, who may impose their choice of agent on smaller counterparts.
Even if not deliberate, AI-powered negotiation agents are likely to develop biases and create unfair deals or unethical interactions, especially when trained to be purely utilitarian. It’s therefore necessary to instil ethical, legal and optimisation principles in upcoming AI algorithms to avoid the negative consequences of AI biases.
AI-powered agents can also hallucinate or be too sensitive. For instance, an agent may stop the conversation at the slightest (mis)perception of an ethical violation. Or it may walk away after receiving a threat, an insult or even just a persistent request it had denied once before. Ending negotiations at the slightest infraction or disagreement might be necessary for compliance purposes and could raise the ethical bar for future negotiations. However, in the short term, it may significantly reduce the number of closed deals, which is a luxury some organisations cannot afford.
The expansion of AI’s role in negotiations also brings legal concerns like data privacy, confidentiality and compliance. For example, informally disclosing confidential details is a common way to build trust or untangle impasses in a negotiation. However, in a semi-automated or automated negotiation, any confidential information captured risks being divulged, leveraged or exploited at another time without consent.
Another legal concern revolves around liability for AI-issued decisions or AI misbehaviour. The technology can make mistakes that result in unacceptable or illegal behaviours or extremely unprofitable outcomes. In such cases, can an individual sue a company for being discriminated against? If an unprofitable deal is closed by a company’s AI, can it blame the AI’s mistake to excuse itself from performing its obligation? Who is responsible for such errors?
In short, AI negotiation agents still have several shortcomings and face significant challenges – but none seem insurmountable. The reliability of technology-based solutions tends to increase with time, as problems are continuously identified and addressed to improve the system. Eventually, the balance will likely tilt towards the success of automated and semi-automated processes, even if they may not fully substitute human-to-human negotiations.
As the technology continues to evolve, researchers, practitioners and developers need to think about how to navigate these challenges carefully. The integration of AI into negotiation processes requires a balanced approach to harness its advantages while mitigating risks, all while ensuring that the technology is beneficial, ethical and effective. If this is achieved, this exciting collaboration will continue to blossom.
First Published: Dec 27, 2024, 11:14