AI in the legal industry: Overcoming the challenges of adopting AI technology

Automation and AI have become integral to the tech industry — tools that eliminate menial repetition allow an employee to spend more time on tasks that require real human ingenuity, thus improving overall efficiency and productivity. As a tool for automating data analysis, contract drafting and research, AI is already saving general counsel a great deal of tedium.

A new report indicates that legal teams are recognising the revolutionary potential of AI, with many legal professionals believing AI is set to redesign legal service delivery.

The Thomson Reuters Institute surveyed more than 2,200 professionals across a range of industries for its 2nd annual Future of Professionals Report. More than 750 of the respondents worked in law firms.

A remarkable 79% of legal professionals believed AI would have a substantial or transformative impact on their work over the next five years, despite the existing flaws in the technology.

Obstacles facing the implementation of artificial intelligence in the legal industry

While there are many different AI technologies available, one of the most pervasive is generative AI. According to Wikipedia, “generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models often generate output in response to specific prompts.”

While ChatGPT is common, you may have also heard of other tools like CoCounsel, Harvey AI and Diligen (or used them in your own legal team).

The Future of Professionals Report indicated that “generative AI will not replace highly trained lawyers…but a lawyer…using generative AI will certainly replace one who isn’t using the technology.”

Yet, while adoption rates are increasing, there are still concerns about this evolving technology. Common issues cited in other sectors also impact legal teams, including inaccuracy, inconsistency, bias, and privacy.

Inaccuracy

There are abundant examples of incidents that have exposed the inaccuracies of AI. In 2023, a lawyer was caught out using ChatGPT to prepare a legal brief that referenced a string of prior legal cases. It turned out the “cases” had been fabricated by the AI tool and never fact-checked by the lawyer.

More recently, when a German journalist typed his own name into Microsoft’s Copilot AI, the technology named him as the perpetrator of the crimes on which he had reported.

It’s clear that inaccuracy and misinformation generated by AI could have serious consequences in the legal field. Thorough oversight and scrutiny at individual and organizational levels, along with robust company policies, must accompany AI adoption.

Lack of independent verification

Any AI-generated content must be carefully verified to be sound and dependable. Much legal work is ripe for technology, such as analyzing documents, comparing a contract to standard legal forms or (increasingly) extracting data and providing semantic background. But AI can’t replicate human judgment.

Informed by deep business relationships, a rich work history and real-world experience, human intelligence is able to pick up on subtle cues and take calculated risks. AI outputs, when examined, are a valuable complement to that process; they’re not the final answer.

Limited data availability

A cornerstone of machine learning is raw information. Data is the fuel a machine ingests and learns from, which means large data sets are needed to make AI function effectively. In handwriting recognition software, for instance, the more examples of the letter “B” the AI is given, the greater its accuracy in recognising that letter.
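As an illustrative sketch of this data-volume effect, the toy classifier below distinguishes two letter shapes and tends to become more reliable as the number of noisy training examples per letter grows. The bit patterns, noise level and nearest-neighbour approach are all invented for the example; this is not how a real handwriting recogniser works at scale.

```python
import random

random.seed(0)

# Toy "handwriting": each letter is a 25-bit pattern; noise randomly flips bits.
PATTERNS = {
    "B": [1,1,1,1,0, 1,0,0,0,1, 1,1,1,1,0, 1,0,0,0,1, 1,1,1,1,0],
    "D": [1,1,1,0,0, 1,0,0,1,0, 1,0,0,0,1, 1,0,0,1,0, 1,1,1,0,0],
}

def noisy(pattern, flip_prob=0.3):
    """Return a copy of the pattern with each bit flipped with some probability."""
    return [bit ^ (random.random() < flip_prob) for bit in pattern]

def make_set(n_per_letter):
    return [(noisy(p), letter) for letter, p in PATTERNS.items()
            for _ in range(n_per_letter)]

def classify(sample, training):
    # 1-nearest-neighbour: the training example with the fewest
    # differing bits decides the label.
    return min(training,
               key=lambda t: sum(a != b for a, b in zip(sample, t[0])))[1]

def accuracy(training, test):
    return sum(classify(s, training) == label for s, label in test) / len(test)

test_set = make_set(100)
small = accuracy(make_set(3), test_set)    # 3 examples of each letter
large = accuracy(make_set(60), test_set)   # 60 examples of each letter
print(f"3 examples/letter: {small:.2f}  60 examples/letter: {large:.2f}")
```

Running it typically shows the larger training set scoring higher; production systems follow the same curve, only with orders of magnitude more data.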

Hundreds of thousands or even millions of data points are required to create accurate machine learning. Not only do legal departments rarely have such vast swaths of data, but most corporate legal matters are highly confidential and not suitable for sharing in a central pool at the nationwide, let alone worldwide, level. Any legal department hoping to develop useful AI software would have difficulty feeding the AI enough raw data to make it consistent and accurate.

Resistance and uncertainty impacting in-house adoption rates

Although using technology to facilitate workflow is one of the top three priorities in legal departments (per the latest Legal Department Operations (LDO) Index from Thomson Reuters), 90% report slow to moderate progress when it comes to using AI in corporate legal departments. Only about 4% of respondents reported using generative AI tools at any stage of their operations.

The quality of a system’s data determines what you can expect from its results. Legal departments that invest time in implementing AI technologies carefully can save time and money in the long run.

Lack of in-house guidelines

Corporate legal teams can begin to make their work more AI-friendly by standardizing clause libraries and starting with small automated tasks, such as clause choice. Many corporate groups will eventually benefit from small, in-house AIs because each corporate law department is quite different.

Not having clear rules leaves things open to ambiguity and inconsistency. Often the first step is to regularise document template formatting to facilitate AI review. By combining knowledge from millions of standardised agreement or contract templates, AI can suggest clauses to use in particular documents and template types.
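A minimal sketch of clause suggestion against a standardised library, assuming a hypothetical in-house clause set (the clause names and wording below are invented): a drafted passage is matched to its closest standard clause by simple text similarity.

```python
import difflib

# Hypothetical in-house clause library (names and wording invented
# for illustration).
CLAUSE_LIBRARY = {
    "confidentiality": "Each party shall keep the other party's information "
                       "strictly confidential and use it only for this agreement.",
    "termination": "Either party may terminate this agreement by providing "
                   "30 days written notice to the other party.",
    "governing_law": "This agreement is governed by and construed in accordance "
                     "with the laws of the governing jurisdiction.",
}

def suggest_clause(draft_text, library=CLAUSE_LIBRARY):
    """Return the (name, similarity) of the library clause closest to the draft."""
    scores = {
        name: difflib.SequenceMatcher(None, draft_text.lower(), text.lower()).ratio()
        for name, text in library.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

name, score = suggest_clause(
    "Either party can end this agreement by giving thirty days notice in writing."
)
print(name, round(score, 2))
```

Real contract-review tools use far richer models than string similarity, but the workflow is the same: standardise the library first, then let the tool map drafts onto it.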

Bias

AI can be predisposed to propagate prejudice present in existing data. AI tools learn from training data drawn from a variety of inputs and are only as balanced as those inputs. Skewed sources can lead to legally impermissible results or poor decision-making that undermines the integrity of legal outcomes.

Blind spots in datasets, unexplained and unreproducible materials, and poor-quality input need to be identified as rules are formulated or interpreted.

Intellectual property and copyright

AI systems often rely on datasets that include copyrighted content. Not having source references or explanations for outputs presents risks for legal professionals relying on AI-powered content, including unknowingly violating copyright laws. AI outputs should be your starting point, or a source of ideas, rather than being used verbatim.

Essential safeguards for using AI

As with integrating any new technology or system, the introduction of AI into legal processes comes with new risks. Legal teams or law firms considering implementing legal AI should consider these safeguards to manage risks effectively:

Standardize training and usage protocols:

We all have our unique ways of doing things, and using AI is no different. Inconsistent application produces variable results. Establishing standardized AI training and processes helps legal departments mitigate inconsistency.

Data privacy and security:

To protect client data, avoid entering personal or confidential information into AI tools, limiting inputs to information that is non-sensitive and in the public sphere.
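As one illustrative safeguard, inputs can be screened for obvious identifiers before any text leaves the firm. The patterns below are simplified examples only; a production system would use a vetted PII-detection library with far broader coverage (names, addresses, matter numbers and so on).

```python
import re

# Illustrative redaction patterns only, not exhaustive PII detection.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\+?\d{1,2}[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"), "[PHONE]"),
]

def redact(text):
    """Replace obvious identifiers with placeholders before text is sent to an AI tool."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise the dispute: contact jane.doe@example.com or 555-867-5309."
print(redact(prompt))
```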

Compliance and regulation:

Keep up-to-date with legislation as AI tools evolve. Failure to comply with regulations pertaining to the likes of copyright, confidentiality and privacy may result in legal consequences or reputational risks for your organization.

Generative AI in the legal industry

The question of how AI will affect lawyers is becoming increasingly relevant as automation technology continues to evolve and pervade legal services.

“We are just at the very tip of the iceberg in thinking about the implications of AI,” said Professor David Wilkins ’80, director of the Center on the Legal Profession at Harvard Law School, noting that at the beginning of last year, most people “had never heard of generative AI, let alone used ChatGPT. Now, it’s everywhere.”

During an interview with Harvard Law Today in February, he said: “In my conversations with lawyers, people started out very skeptical that you could use it for anything useful in the day-to-day work of a practicing lawyer. And now, increasingly, people are more comfortable using it in a wide range of settings.”

Referencing analysis released by the International Monetary Fund (IMF), which said that almost 40% of global employment could be affected by AI, Professor Wilkins noted there will be a heightened impact in the professional ranks. “That clearly has implications for lawyers,” he added.

Broad application and less dubious outputs

The legal profession is built on precedent and legislation, and generating text, reviewing documents and managing legal contracts are among its core activities. Legal work is therefore highly suited to generative AI applications.

Large language models (LLMs) were created for natural language processing tasks. These tools, which underpin generative AI, have opened legal tasks to the technology-driven metamorphosis many industries and corporate functions have already experienced.

The publicly available models are built on extensive historical data. Over time, they have improved, producing results that are less dubious. As Professor Wilkins explained: “AI is also getting much better and hallucinating less. The industry is moving from non-specialized AI to AI trained on legal materials, designed to tackle specific, complex legal problems.”

Using AI in corporate legal departments

Rather than a tool to replace lawyers, AI is a complementary asset. The potential benefits of AI in law and legal departments are wide-ranging and diverse. AI is poised to help legal leaders with contract analysis, drafting legal briefs, document review and conducting legal research.

Well-known benefits of AI include improved efficiency and effectiveness, with respondents of The Future of Professionals Report highlighting AI as a tool to boost productivity and free up time. But the report also underscored new areas of value:

  1. Managing large volumes of data proficiently
  2. Reducing inaccuracies caused by human error
  3. Improved decision-making through advanced analytics

Optimistic about using AI tools to automate routine tasks, the legal respondents believed the technology would pave the way for spending more time on high-value work. Time-consuming menial tasks, such as reviewing and comparing legal documents or summarizing large amounts of text, could be handed to AI.

Preparing for the future

With the growing impact of AI in mind, legal leaders must proactively prepare for the future. The Future of Professionals Report suggests there are three essential ingredients to effective adoption:

  1. Embracing AI and emerging technology to enhance productivity and client value.
  2. Encouraging a culture of innovation, inspiring your team to explore new approaches.
  3. Staying informed about industry and technology trends, as the future of law favors continuous learners.

Reshaping the legal service delivery model

Artificial intelligence is reshaping the legal service delivery model. With teams expected to do more in less time, AI-powered tools can help yours get the work done.

Conduct legal research, manage caseloads, identify patterns and draft language for briefs or contracts with AI assistance. Legal professionals who combine technological proficiency with traditional expertise will be best positioned to navigate the changing terrain and deliver superior results.

Dazychain offers a smart tech solution for legal teams

Dazychain transforms in-house legal operations with smart matter management software that streamlines workflows, automates routine tasks, enhances risk management and provides clear visibility across all matters. With Dazychain, your team can focus on complex legal issues instead of administrative tasks. We don’t just help you manage legal matters; we elevate your entire operation. Book a demo today!
