The surge in hype surrounding the intersection of AI and law over the past year is undeniable. At a recent roundtable co-hosted by Thomson Reuters and The Lawyer, attendees set out to navigate the practical application of AI for in-house legal teams.

The roundtable began with a call to arms. Discovering AI’s practical use for in-house legal teams is a journey, and we are only in the foothills. We must embrace this new challenge if we wish to realise the benefits. Making AI cost-efficient, restructuring the team and encouraging digital adoption emerged as the key challenges for in-house legal teams.

Cost-saving benefits

One practical obstacle to legal teams adopting AI is the training gap around large language models. In most cases, a legal head’s digital understanding does not compare to that of an external AI adviser. While the latter may stress the benefits of adopting new technologies, the reality is that AI is expensive to get right in a business environment without adequate training. As a result, the cost-effectiveness of AI remains contested in these early stages of adoption.

As in-house teams come under increasing pressure to deliver to tight timelines, they are pushed to try things they wouldn’t historically have done. It is therefore critical to use the right tech for the right task. In some scenarios, the conventional legal approach simply won’t scale: AI, for example, enables legal teams to review and summarise thousands of contracts in a fraction of the time.

However, clients are already starting to demand a share of the financial benefits brought by AI. This leaves in-house teams exposed to over-lawyering: lawyers billing hours to recheck AI output. The law firms’ justification is frustratingly pragmatic: ‘it’s not how long it took; it’s how long it would’ve taken,’ said one delegate. This raises concerns for in-house teams, as general counsel are under increasing pressure to demonstrate that they add value to the business.

Restructuring the team

How AI is used in legal work raises broader organisational questions. There is scope for AI to be used in both legal and business teams, and there is certainly overlap, so who drives the use of AI in the organisation? Equally, who leads policy, governance and adoption?

One solution is to introduce a digital transformation officer with executive authority to act as the central pillar of digital adoption. A dedicated legal operations department can also take the lead in fostering understanding of AI across the business. Legal operations teams are particularly focused on processes and efficiency, which positions them to weigh the pros and cons of AI for the business. Carving out a specialised function takes the strain off general counsel, who typically place more importance on safeguarding and compliance.

AI is also useful in automating admin. Junior lawyers will be freed from mundane, commoditised work to perform tasks of higher value, allowing the legal team to demonstrate greater operational worth. It will also change legal career paths, although it is more likely to change the nature of the work than the size of legal teams. There is uncertainty as to whether more jobs will be created or lost, and adjustments to training remain to be worked out. It was noted, however, that in-house teams can still expect to rely on external counsel for the foreseeable future.

Getting lawyers onboard

Caution and risk-awareness come naturally to lawyers, which means AI adoption does not. Regulators have stayed quiet amid the hubbub around legal tech, which compounds that sense of risk; indeed, the ethics of AI have been largely ignored. This, however, gives tech providers an opportunity to be more forthcoming with guidance on projects. Responsible AI is equally vital if lawyers are to board the AI train. It was agreed at the roundtable that legal teams must provide the governance for AI to be implemented safely: highlight the risks and find the mitigations. Ultimately, embracing AI does not mean letting an unsupervised robot take over legal work. We are not yet in a position to trust a machine more than an individual.

Change management is crucial in helping lawyers get comfortable with AI. Best practice is to embed the tech where lawyers already work. It was noted that the vast majority of software features go unused, so AI should be introduced through continual nudges that drive behavioural change. Ultimately, the success of software is measured by the competency of its users, and giving people the confidence to use tools lowers the barrier to entry.

Another practical strategy for efficient in-house legal teams is process mapping. The designated in-house tech leader can discuss and visualise the software with platform providers to illustrate how its functions will operate. This can be done across the entire team – from junior lawyers to senior management. Guest-speaker slots in legal meetings also allow AI to be introduced gradually, without scaremongering; instead, teams will be asking ‘what can this software do for me?’.

Commentary from Thomson Reuters:

At Thomson Reuters, our AI assistant experience is bringing trusted content and expertise to the point of need. We are delivering transformative efficiency and productivity gains, integrated where users work for a seamless experience.

This roundtable was an opportunity to share the experiences of the transformation journeys and change-management processes that in-house teams are on. We heard examples from organisations that have their own labs teams and sandboxes to experiment with, while others are joining forces with trusted partners to take part in beta programmes.

My top tip: get involved, experiment, and you will be amazed by the game changing productivity gains your teams can benefit from. Start small, think big!

Debbie Cunningham, chief of staff, Thomson Reuters LegalTech

Generative AI is having a transformational impact on the work professionals do and how it’s done, but foundational to that transformation is trust. These capabilities work best when they augment our experts, not replace them. At Thomson Reuters we are finding that bringing in trusted content through a Retrieval Augmented Generation (RAG) approach is foundational to increasing reliability, and therefore confidence.

It was great to hear so much pragmatism from in-house teams, who need to understand data flows to identify both the risks and the opportunities. I was also struck by a comment that a professional’s use of tools tends to plateau; we need to make new AI capabilities easy to use, embedded where people work.

Andrew Fletcher, director of strategy, Thomson Reuters Labs