
The Dangers of Legal AI

Uwais Iqbal · 2022-11-22


What are some of the dangers and risks involved with using a Legal AI solution?

While AI is all the rage, it’s all too easy to get swept up in the hype without carefully considering some of the risks and potential dangers around the use of Legal AI.

Here are three potential risks around the use of Legal AI:

  1. Liability
  2. Bias
  3. Trust

Liability

If an AI model makes a mistake, who is responsible? If a self-driving car malfunctions and causes an accident, is the driver responsible? If a Legal AI solution trained to predict judicial outcomes wrongly labels an innocent person as guilty, who is responsible? The lawyers presenting the case, the judges presiding over it, or the developers who wrote the code for the models?

Bias

Bias in data is inherent. Data is ultimately a reflection of our reality, so the discrimination out there gets captured and quantified within data. AI models amplify whatever patterns are found in the data, scaling that discrimination along with them. It’s well known that GPT-3, one of the most popular language models, is inherently discriminatory because of the bias in the data it was trained on [1]. If such discriminatory models come to power Legal AI solutions around the drafting of cases, contracts, and even legislation, that bias will present itself in subtle ways that could have real-world impact.

A few years back, Apple trained a model to assist with credit card applications. Without anyone realising it, the model discriminated against women, and many women were denied credit simply because of their gender [2].

Trust

Most Legal AI solutions are black boxes. There’s no way to peer inside the brain of the model to see what the AI is thinking or how it arrived at its outcome. Take a Legal AI solution designed to accelerate contract review: if the model flags a clause as risky for human review, how much trust or confidence can the lawyer place in that prediction? What if the model misses extremely risky clauses during review and no one manually checks the outcomes? If an AI model makes a mistake and lawyers lose trust, that trust is very difficult to regain.

Mitigating Risk

As with any technology, risks will always present themselves. The way to mitigate them is through awareness and careful planning across the design, development, and deployment of Legal AI solutions.

Liability can be mitigated by keeping a human involved in the process to some degree and by not giving models full autonomy.
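As a minimal sketch of what "a human in the loop" can mean in practice, the snippet below routes any prediction whose confidence falls under a threshold to a review queue instead of acting on it. The threshold value, function names, and queue are illustrative assumptions, not part of any specific Legal AI product.

```python
REVIEW_THRESHOLD = 0.85  # hypothetical cut-off: below this, a human must review

def triage(prediction: str, confidence: float, review_queue: list):
    """Return the prediction only when confidence is high enough;
    otherwise enqueue it for mandatory human review and return None."""
    if confidence >= REVIEW_THRESHOLD:
        return prediction           # surfaced to the user, but still auditable
    review_queue.append((prediction, confidence))
    return None                     # no autonomous decision is made

queue = []
triage("clause is risky", 0.95, queue)   # confident: surfaced directly
triage("clause is safe", 0.60, queue)    # uncertain: lands in the review queue
```

The point of the design is that the model never has the final word on a low-confidence call; a lawyer does.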

Bias can be mitigated by improving data capture and spending time refining and filtering bias from data.
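One simple way to surface bias before refining the data is a demographic-parity style audit: compare the model's positive-outcome rate across groups. The sketch below is illustrative; the group labels and decisions are made-up assumptions, not real data.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical audit data: group "a" is approved twice as often as group "b".
decisions = [("a", True), ("a", True), ("a", False),
             ("b", True), ("b", False), ("b", False)]
rates = approval_rates(decisions)
# A large gap between groups is a signal to go back and examine the training data.
```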

Mistrust can be mitigated by incorporating explainability into Legal AI solutions so the model shows how it arrived at its answer.

Approaching Legal AI

Lawyers and legal professionals are trained to be risk-averse, since unmanaged risk can have dire, real consequences for people’s lives. The same attitude should carry over to Legal AI solutions.

Legal AI shouldn’t be designed to replace legal professionals and automate everything without thought for the consequences. Legal AI solutions should be designed to minimise risk and to aid and assist legal professionals, rather than replace them, so that they can be better at what they do.

[1] - AI’s Islamophobia problem

[2] - The Apple Card Didn’t See Gender