Generative AI will have transformative implications for the legal sector. As the technology's capabilities grow, questions inevitably arise about the lawyer-client experience, disruption to the law firm business model, and more. What will be the biggest change? Empowering individual lawyers to do more? A shift in expectations around the level of customer service? Tapping into a bigger pool of unmet or underserved legal need? And will those changes be limited only by the capabilities of the technology, or are there ethical and regulatory limits that need to be put in place? During a roundtable dinner co-hosted by The Lawyer and Thomson Reuters, managing partners pondered questions concerning generative AI, regulation and ethics.

Using an AI assistant

We are already seeing a tipping point away from the AI assistant as a generic tool towards one that is personalised to you and your needs. By automating task delivery under human oversight, legal professionals can focus on delivering their expertise rather than spending hours searching for answers that a virtual assistant can find in seconds, allowing for streamlined collaboration and client interactions. For the law firm this brings a variety of benefits: standardisation goes beyond following rules to encompass adaptation to your house style and consistency of approach across similar matters. Individuality is still important, but this is where the human touch comes in, bringing a personal understanding of client needs and preferences.

A stepping stone towards this might be to think of your legal co-pilot as a tool that runs in the background. Its recommendations or predictions do not have to be used, or even shown to a lawyer, but serve as a valuable point of comparison for learning from the actual decisions the lawyer takes. As recommendations improve, this naturally raises questions about at what point, and how, they are acted on. Even if we decide that it is not ethically desirable to have a machine making legal decisions, these recommendations can become the 'sounding boards' we use to explore options and alternatives before the lawyer makes the final legal recommendation.

As technology gets ever better at guiding what a lawyer can do, the responsibility still sits with the lawyer to decide what they should do. The question, then, remains: how do we manage the tipping point?

Minimising bias

Legal professionals are rightly concerned about bias. The Thomson Reuters Future of Professionals Report found that, among individuals across a variety of professions, the biggest concerns include compromised accuracy (25 per cent), job loss (19 per cent) and ethics (15 per cent). Building AI that solves customers' biggest pain points in a transparent and responsible way, while providing trusted results, will help instil confidence and alleviate fears.

The roundtable discussion covered the Foundation Model Transparency Index, produced by Stanford, and the general lack of transparency in methodology, data use and other dimensions, particularly among the most popular commercial models. This lack of transparency naturally raises questions and concerns about bias.

However, it is important to think of this in context. As humans we are ourselves biased, so we need to find ways to identify and manage bias in our own decision making. The same should apply to generative AI and how we use the outputs it generates, as well as putting guardrails in place when applying those outputs to use cases that should trigger ethical or legal concerns. The best way to identify and mitigate bias in the software is to encode the same approach we would take if the advice came from a person. We can also deliberately capture examples of biased and unbiased results and use them to train models to flag future results that warrant concern.
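The idea of training on captured examples can be sketched in miniature. The snippet below is an illustration only, not a production approach: the example sentences, word-overlap scoring and threshold are all invented here, standing in for real labelled outputs and a properly trained classifier.

```python
from collections import Counter

# Hypothetical labelled examples; in practice these would be real
# model outputs that lawyers flagged during review.
BIASED = [
    "the candidate is too old for a demanding role",
    "women are less suited to partnership track work",
]
UNBIASED = [
    "the candidate meets the stated experience requirements",
    "partnership decisions should rest on documented performance",
]

def bag_of_words(text):
    """Represent a text as a multiset of lower-cased words."""
    return Counter(text.lower().split())

def overlap(a, b):
    """Count shared word occurrences between two bags of words."""
    return sum((a & b).values())

def flag_for_review(text, threshold=1):
    """Flag a result when it resembles the biased examples more
    closely than the unbiased ones."""
    words = bag_of_words(text)
    biased_score = max(overlap(words, bag_of_words(e)) for e in BIASED)
    unbiased_score = max(overlap(words, bag_of_words(e)) for e in UNBIASED)
    return biased_score - unbiased_score >= threshold
```

The point of the sketch is the workflow, not the scoring: flagged results go back to a human reviewer, mirroring the oversight we would apply to advice from a person.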

Another response to bias is cleaning up the data, given that this ultimately determines the output that AI assistants generate. If we see a trend towards more transparent models, this could lead to increased scrutiny and auditability of the data used to train them – something currently left to the model creators themselves. A further way to correct biases in the data is reinforcement learning from human feedback (RLHF), where people are paid to curate and adjust biases in the output of models, as well as to improve performance on valuable tasks. This is already common practice in the creation of the LLMs we see in the market today.


However, even with LLMs, the output generated cannot be linked back to the specific training data used; instead, techniques such as retrieval augmented generation can be applied to ground the output generated to a trusted source that can be validated. By being able to link back to a source, it is possible to have traceability, to check for bias and also to audit your decision making.
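The grounding step can be sketched as follows. This is a minimal illustration under stated assumptions: the corpus, source identifiers and prompt wording are invented for the sketch, and real systems would use embedding-based retrieval over vetted content rather than word overlap.

```python
from collections import Counter

# Hypothetical trusted corpus keyed by citable source ids.
SOURCES = {
    "limitation-act": "claims in contract must generally be brought within six years",
    "ucta": "liability for negligence causing death cannot be excluded by contract terms",
}

def score(query, passage):
    """Rough relevance: shared word occurrences between query and passage."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query):
    """Return the best-matching (source id, passage) pair."""
    return max(SOURCES.items(), key=lambda kv: score(query, kv[1]))

def grounded_prompt(query):
    """Build a prompt that constrains the model to a cited source,
    so the answer can be traced back and audited."""
    source_id, passage = retrieve(query)
    return (
        f"Answer using only this source [{source_id}]: {passage}\n"
        f"Question: {query}\n"
        f"Cite the source id in your answer."
    )
```

Because the source id travels with the answer, a reviewer can check the cited passage directly, which is what makes the bias check and decision audit described above possible.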

Explainable AI is increasingly important to teams; they want to know how models function and arrive at their output. As it stands, ethics in decision making rests on there being a person responsible for a decision. Will that change in the future? If we enter a future where the output of a machine is used or acted on automatically, is there a world where corporations become responsible for automated decision making?

Understandability and confidentiality are also important to clients. Anyone using an LLM needs to know how their confidential data is being used and stored. They need confidence that an answer is generated from a particular set of content. Pre-hoc and post-hoc explainability – enabling clients to look at a set of data and understand why an answer has been given – will set tech providers apart.

Changes to the sector

There was debate among the attendees regarding the value of AI. Dialogue with clients about how and where firms are using the technology, and the value it adds, is crucial. Views vary on the types and scale of value gained. Ultimately, value needs to come from being able to point to how AI is assisting. Can the same result be achieved faster? Or a different result that is objectively better? Or, for example, can an answer be arrived at by reviewing a much larger set of information than could be handled manually, helping to reduce risk or increase confidence?

The legal sector has absorbed a lot of change. AI will take years to take full effect: clients create demand because they want cheaper legal services, but firms still have to make huge investments. The skills gap – a lack of CTOs and data scientists – will dictate the speed with which law firms can adapt. Moreover, the legal market structure may change as smaller firms gain access to the technology, creating more of a level playing field than previous waves of AI adoption did.

In summary, AI is not going to replace experienced legal professionals. Instead, generative AI is the next evolution of legal AI tools, helping practitioners manage many of the mundane, repetitive tasks involved in legal work more efficiently. However, putting fundamental AI guardrails in place now will be critical in addressing key areas of concern such as transparency, bias and accuracy, while also helping to unlock the possible benefits in a clear and trusted way.

AI @ Thomson Reuters

Learn more about how Thomson Reuters is informing the future with AI, and sign up for early access to insights, updates, and all things AI @ Thomson Reuters.