Artificial intelligence and the evolution of conflict resolution

A look at what role artificial intelligence might play in conflict resolution

Henry J. Bongiovi
August 2020

Artificial intelligence has arrived, and the legal profession is not immune. The seeds of AI have been planted in the world of alternative dispute resolution; however, its full impact is yet to be realized. Perhaps we will become accustomed to dealing with holographic hotel check-in clerks or concierges. But a complete transition to AI in fields where human contact is essential, such as nursing or mediation, is far more doubtful.

Any significant technological advance that reaches widely into society provokes discussion. Does the method of conflict resolution matter, or does the end justify the means? Is there a ‘best’ way to resolve conflict: a primitive mano-a-mano duel, or a futuristic system of interacting with a computer? Or perhaps some hybrid of the two extremes? If the dispute is resolved, does the journey really matter?

While the dispute may be concluded, the healing is only beginning. Much like a bedside nurse, who aids healing in ways that lie beyond the technology of medicine, a mediator aids healing in ways that the law or technology cannot.

In Peru: A basic form of conflict resolution

The Peruvian province of Chumbivilcas, perched high in the fog-shrouded Andes, holds a unique festival at the end of the year. During the celebratory season, throngs of people gather to partake in what many would consider typical Christmas pastimes – playing music, singing, drinking, eating, and wearing elaborately colorful attire. It is their own version of caroling, eggnog, and ugly sweaters. On Christmas Day, however, all of that takes a backseat to Takanakuy, the community’s form of conflict resolution.

Takanakuy is a series of public fistfights – watched, and joined, by the community – aimed at settling grievances. It is as if mediation met Fight Club. Anyone can enter the fights, from the young to the elderly, and they are open to both men and women. Takanakuy is a year’s worth of frustration boiling over into one explosive day, when a member of the community will approach another to address a past offense, and the two will exchange a flurry of punches and kicks.

However, that is not to say the whole ordeal lacks a sense of order or civility. To the contrary, each Takanakuy bout starts and ends with a hug – a simple reminder that each fighter is a member of the community, and that the end of the fight will also signify the end of the conflict and the beginning of the healing process. The fights are supervised by a neutral third party who plays the same role as a referee in any combative sport, except that these mediators arm themselves with whips in case the violence gets too out of hand. The fighting itself, while intense, proceeds in an orderly fashion. Moves like biting, hair pulling, and hitting someone who is already down are strictly forbidden. These are not barroom brawls, where the inebriated participants will ultimately forget the reason for their fighting halfway through the altercation. The goal of each Takanakuy fistfight is simple: healing, so that afterwards the participants can move forward in a productive manner that will benefit the community. The entire festival centers on resolving conflict through human interaction in the hope that it will heal the community.

Evolving into basic form

Those outside the Chumbivilcas community may find it easy to dismiss Takanakuy as a legitimate form of conflict resolution. The logic of resolving conflict through physical battle may seem appalling to many. That is not to say, however, that this form of conflict resolution is any less effective or appropriate when one considers the setting. Nestled atop the altitude-sickness-inducing Andes, communities like Chumbivilcas are relatively cut off from the rest of the world. And while the author of this article claims no expertise in history, anthropology, archeology, or any science that could offer a definitive explanation of the origin of Takanakuy, it seems reasonable that the community’s geographic isolation played a part in the evolution of the practice. Generally speaking, tough places breed tough people, and the stony mountains of the Andes are a tough place to live. Being cut off from the country’s power, financial, and judicial centers can lead to a more self-reliant outlook.

So, rather than traverse the Andes every time a conflict arises, the people of the community save their disputes for Christmas and Takanakuy, where they can settle them amongst themselves. In a certain light, Takanakuy embodies very democratic values: it is a system created by the people, for the people, and all people can participate. Either way, Takanakuy displays conflict resolution in its most primal state – two people, four fists, and a problem. The far-sightedness of the Chumbivilcas people is that they realize individual disputes do not affect just the participants, but also the community.

The people of Chumbivilcas remind us that the core of conflict resolution is human interaction: two parties coming together and settling a grievance so that both can move forward and begin to heal. Whether in a frenzied Takanakuy fight or in a conference room, the centerpiece of conflict resolution is human contact if there is to be true healing.

Emergence of artificial intelligence

Artificial intelligence, and similar thought-processing programs, are already prevalent. Cars now boast self-driving modes or, at a minimum, some form of accident-avoidance system and smart brakes – brakes that apply automatically without the driver pressing the pedal. Machine-learning programs, and similarly based algorithms, are gaining more traction in the public eye every day.

For years, the public has feared the loss of blue-collar jobs to automation: factory workers replaced by machines; waiters, and other members of the service industry, replaced by touchscreens and automated voice recordings; truckers, and members of the delivery industry, replaced by self-driving trucks and self-flying drones. Just take a look at the Samsung-funded Neon project. This endeavor aims to develop computer-generated digital humans – virtual avatars that act and respond like actual humans, even learning and mimicking emotions without the need for motion-capture actors. Imagine Amazon’s Alexa with a life-like and fully emotive avatar. Concierges, tourism guides, and even jobs like news anchors and tutors could all be replaced. The entities are disturbingly humanlike; and through the lens of a video feed, distinguishing them from real people could be a challenge.

A few years back, United States Supreme Court Chief Justice Roberts was asked by the president of a college in upstate New York whether he could foresee a day when smart machines, driven by artificial intelligence, would assist with courtroom fact-finding or judicial decision-making. Chief Justice Roberts responded: “It’s a day that’s here . . .” He went on to say, “The impact of technology has been across the board and we haven’t yet really absorbed how it’s going to change the way we do business.”

So, what does the emergence of artificial intelligence mean for the future of the legal profession, and more specifically, dispute resolution?

Already making an impact

Contracts, too, have evolved in this digital age with the appearance of smart contracts. These are digital agreements stored across a computer network – commonly a blockchain – in which contract provisions once drafted by humans are expressed as computer code that provides remedies for breaches at specific junctures. These programs can also facilitate the exchange of money, property, shares, or anything of similar value. The key to the rise of these new forms of contract is the speed and individual control associated with electronic communication. In this system, the contract is supervised by the computer network, which automatically enforces the obligations of the agreement without the risk of human error or bias. Decisions are made, and remedies provided, along the path of the contract’s performance. At each fork in the smart-contract road, lawyers, judges, arbitrators, and mediators become redundant, because the program itself provides the remedy.
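To make the idea concrete, the sketch below models a hypothetical escrow-style smart contract in plain Python. It is not the code of any actual blockchain platform; the class name, deadline, and remedy logic are illustrative assumptions meant only to show how an agreement’s terms, and the remedies for breaching them, can be reduced to executable rules that run without a lawyer, judge, arbitrator, or mediator.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class EscrowContract:
    """Hypothetical smart contract: funds are released to the seller on
    timely delivery, or refunded to the buyer automatically on breach."""
    buyer: str
    seller: str
    amount: float
    delivery_deadline: date
    delivered: bool = False

    def record_delivery(self, delivered_on: date) -> None:
        # The network, not a person, records whether performance occurred.
        if delivered_on <= self.delivery_deadline:
            self.delivered = True

    def settle(self, today: date) -> str:
        # The remedy applies automatically at the contractual juncture:
        # payment on performance, refund on breach. No human intervenes.
        if self.delivered:
            return f"release {self.amount} to {self.seller}"
        if today > self.delivery_deadline:
            return f"refund {self.amount} to {self.buyer}"
        return "awaiting performance"


# Example: a missed deadline triggers the refund remedy by itself.
contract = EscrowContract("buyer", "seller", 1000.0, date(2020, 6, 1))
print(contract.settle(date(2020, 6, 15)))  # -> refund 1000.0 to buyer
```

On a real blockchain the same logic would be deployed as contract code and executed by the network itself, but the principle is the same: the remedy is triggered by the code, not by a human decision-maker.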

Smarter than a human?

Cambridge University’s 2017 Case Crunch project, a custom legal artificial-intelligence program aimed at predictive analytics, further exemplifies the emergence of AI in the legal field. The program was pitted against over one hundred commercial London lawyers in a competition to see who could correctly predict the outcomes of over 700 financial ombudsman cases. The artificial intelligence finished with an accuracy rate of 86.6%; the humans mustered an accuracy rate of 66.3%. No, this does not herald the end of the need for humans in the legal profession. It does, however, offer insight into the trajectory of the legal field: predictive programs are already leagues ahead of most human capabilities at discerning patterns in data, and they will only become easier to obtain and, consequently, more prevalent. To wit, Case Crunch has a part of its website dedicated to user-submitted inquiries, where banks, insurance companies, law firms, and litigation funders can submit an inquiry and receive predictive and analytical data faster, and more accurately, than human performance could provide.
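For readers curious what such a predictive program looks like in outline, here is a minimal sketch, assuming a handful of invented complaint summaries and outcomes. It is not Case Crunch’s actual system, whose data and methods are proprietary; it only shows the general technique of turning past decisions into features, fitting a statistical model, and using it to score a new case.

```python
# A toy outcome predictor. The complaint texts and labels are invented;
# a real system would train on thousands of decisions and richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_complaints = [
    "bank failed to warn customer about a mis-sold insurance product",
    "insurer rejected a claim because the policyholder missed a payment",
    "lender charged fees that were clearly disclosed in the signed agreement",
    "adviser recommended an unsuitable high-risk product to a retired client",
]
past_outcomes = ["upheld", "rejected", "rejected", "upheld"]

# Convert each summary into word features and fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_complaints, past_outcomes)

# Score a new, unseen complaint against the patterns in past decisions.
new_case = "bank sold the customer an insurance product she did not need"
print(model.predict([new_case])[0])
```

The point of the sketch is only the shape of the approach; the accuracy of a real system comes from the scale and quality of its training data, neither of which a toy example can show.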

However, there are two aspects that should be pointed out before flocking to this artificial intelligence. For one, Case Crunch is reserved solely for complaint handling, legal decision predictions, and merit-based claim reviews. Those are the only spheres that the program has knowledge of. Secondly, it, along with many other similar artificial intelligences, is solely a predictive program. It can predict the outcome of hundreds of decisions, but it cannot make one for itself. The power to make decisions still lies with humans, at least for now.

Most importantly for mediation, the ability to empower parties to make decisions about how to create their own outcome is still best left to humans. Empathy and respect for another human’s suffering are the essential tools of a mediator, and something a non-human cannot contribute. It is only through this journey of self-empowerment that the parties can heal.

Further examples

A few years ago, a man’s six-year prison sentence was arrived at partly through the use of risk-assessment software. While technically not an artificial intelligence, the software utilized algorithms with predictive properties – not wholly unlike Case Crunch. In that case, the software identified the defendant as likely to reoffend and, consequently, a danger to the community. The software’s report was based on statistics and analytical data, not human testimony. And a man was sentenced.

A core issue in the case was that the software was proprietary and closed-source. The case was appealed to the Wisconsin Supreme Court, and later to the United States Supreme Court. In Loomis v. Wisconsin, it was asserted that the proprietary nature of the software violated the right to due process because the validity and accuracy of the software’s report could not be challenged. The software belonged to a private company and was not open to public inspection. The defendant had no access to the algorithm itself, and so he could not inspect, question, or explain any potential flaws in the report.

In the end, the Wisconsin Supreme Court denied the appeal, and the United States Supreme Court ultimately declined to hear the case. The Wisconsin Supreme Court reasoned that, since the software’s report was essentially deemed correct, the defendant would probably have received a similar sentence without it. But does the accuracy of the algorithm justify its use? Because of the closed and secretive nature of the program’s algorithm, the public will never know how the software reached its assessment. What factors or variables did it take into consideration? What did it fail to take into consideration?
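To make the transparency concern concrete, consider a deliberately simplified, entirely hypothetical risk score written in Python. This is not the algorithm at issue in Loomis, whose inner workings were never disclosed; the factors and weights below are invented solely to illustrate that when such choices are hidden inside proprietary code, a defendant has no way to inspect, question, or challenge them.

```python
# A purely hypothetical risk score; the factors and weights are invented
# to illustrate the transparency problem, not to describe any real product.
def risk_score(prior_offenses: int, age: int, employed: bool) -> float:
    """Return a 0-10 'risk' number built from a handful of hidden choices."""
    score = 2.0 * prior_offenses          # why 2.0? the defendant cannot ask
    score += 3.0 if age < 25 else 0.0     # why 25? also hidden from scrutiny
    score -= 1.5 if employed else 0.0     # is employment even a fair factor?
    return max(0.0, min(10.0, score))

# Different weightings of the same facts yield different "risks", yet with
# closed-source software only one number is ever reported to the court.
print(risk_score(prior_offenses=2, age=23, employed=True))  # -> 5.5
```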

In Loomis, a criminal defendant’s sentence was partially decided by a computerized thought process. While not technically an artificial intelligence, the program shares many important properties with AI, making it, in effect, a primitive form of artificial intelligence. Many artificial-intelligence programs, both realized and theorized, rely heavily on analytical and predictive algorithms – algorithms akin to the one used in the Loomis case.

What does this all mean?

The use of closed, proprietary software and algorithms in a judge’s sentencing of a criminal defendant effected a loss of liberty. This is profound. The Loomis case can be seen as a significant step toward allowing judicial decisions to be made in conjunction with computerized thought-processing systems. A person’s capacity to be rehabilitated could, one day, hinge entirely on a series of ones and zeros. The Loomis case illustrates the use of computerized thought processing in the legal world and asks us to trust a non-human, non-transparent system with stakes as fundamental as the loss of liberty.

The concept of artificial intelligence once conjured up fears of lost free will and the emergence of determinism – humanity against machine. Those old enough to remember characters like HAL, in 2001: A Space Odyssey, can attest to the general fear that surrounded the idea of relinquishing control to artificial intelligence. Now, people allow machine-learning programs to tell us where to go, curate our music libraries, suggest our movie preferences, and even control what news we receive – thereby letting us avoid those pesky alternative views. The once-feared villains are now viewed as convenient friends whom we voluntarily invite to track our every move.

Legal systems are inherently human and deeply embedded in the fabric of humanity. What else on this planet holds court? Humans created the legal world for humans. It requires human interaction and distinctly human experience. An artificial intelligence can access the entire knowledge bank accrued by humanity; yet knowledge does not equate to empathy, compassion, or the human experience. The unique pathos of humanity is lost on the pure logic of machines. Our legal system, and especially mediation, is built upon humans tapping into their own, and other people’s, sense of humanity and shared experience. Justice Potter Stewart’s famous line, “But I know it when I see it . . .”, almost perfectly sums up the need for the human touch in the law. Yielding the legal process to artificial intelligence strips away not only autonomy but also empathy, respect, and the shared experience of the human condition. The legal world could become nothing more than a scene ripped from Minority Report, with humans reduced to statistical data points that artificial intelligence merely tries to predict.

With COVID-19 isolation, many of us are turning to technology as a solution and conducting mediations remotely. Many of these mediations result in settlement, and many people appreciate not having to travel to the mediation. However, the use of remote technology also reinforces the sense that the parties have not fully connected, even when the mediation is statistically successful. It has likewise reinforced fears that technology is the weakest link in confidentiality, which is the bedrock of mediation. The use of AI in mediation will further amplify the fear of hacks, data breaches, and the loss of confidentiality.

Why is mediation so special?

When done correctly, mediation provides each participant with a sense of fairness in the process. This is accomplished when participants are afforded respect and empathy – by being listened to, and heard, by another human and by connecting through shared experience. When fairness, empathy, and respect are achieved, closure and healing can begin, and the ultimate goal of mediation can be attained.

The COVID-19 pandemic is reinforcing the importance of human interaction in the healing process. Scared patients are being met by medical professionals behind masks and shields, are being isolated from their loved ones, and some must face death alone. These frightened patients are not being afforded the sense of empathy and compassion that humans rely on in difficult and uncertain times. Suffering is occurring because of a lack of human interaction.

Human interaction is what makes Takanakuy so effective at achieving the goal of healing – at the end of the fight, both participants can look each other in the eye and agree that a fair fight was fought. AI lacks that level of human interaction.

This does not take away from AI’s effectiveness in resolving some disputes, such as through blockchain technology and smart contracts, or in providing statistical analysis of potential outcomes. Indeed, AI has never been more important to humanity than now: it is currently being applied to the search for a vaccine or cure for COVID-19, an imperative role for the future of humanity. However, AI will never replace a nurse holding the hand of a patient who is facing a difficult and uncertain road ahead, or a mediator who is guiding a fearful litigant down an alternative path.

Henry J. Bongiovi

Henry J. Bongiovi has almost 30 years’ experience as a litigator and mediator. Mr. Bongiovi served on the Santa Barbara Court’s first ADR Committee, has acted as a Special Settlement Master and Discovery Referee, and is a mediator for the CADRe and CMADRESS programs and the US Bankruptcy Court.

Copyright © 2024 by the author.
For reprint permission, contact the publisher: Advocate Magazine