
Generative AI for Legal Work: The Art of Prompt Engineering

The art of prompt engineering is tough to master; here's where to start for legal professionals.
Written by
Jamie Fonarev
Published on
August 20, 2024
Introduction

Generative AI is starting to transform the legal industry, and powerful new tools are yielding powerful new results. But getting good results from generative AI isn’t as simple as it might seem. Writing the right prompt and asking the right questions will change the quality of the results you receive.

Generative AI has the potential to supercharge legal professionals’ productivity and skills, but it also carries the risk of giving misleading or even false information. It is crucial to know how to create good prompts so that you can accelerate outcomes and minimize the risk of poor results.

In this article, we will explore the crucial role of prompt engineering, the art of writing the right query. We’ll dig into the basics of what prompt engineering is and why it’s important, discuss a few approaches to prompt writing, and share some best practices to employ when working with generative AI, specifically for legal tasks.

What is Prompt Engineering and Why is it Important?

Prompt engineering is the art of crafting precise instructions to input into a generative AI model so that it returns optimal answers. Just as in a regular conversation with another person, asking good questions leads to better answers. If you were to ask a stranger on the street vague, meandering questions with no context, you might get a vague, unintelligible answer (or a lie). The same goes for generative AI: asking poor questions can lead to poor results.

Prompts, or questions, are the foundation of generative AI models, and learning how to write good prompts will allow you to harness their full potential.

This is particularly important for legal professionals, for whom missing key case details, misrepresenting information, or receiving made-up (hallucinated) answers could be especially costly.

Types of Prompts 

As you work with generative AI, the prompts and questions that you ask can generally be categorized into a few types.

  • Analysis: Generative AI can enable legal professionals to analyze vast amounts of information, helping identify patterns, inconsistencies, and relevant insights across complex legal documents.
  • Education: AI can be leveraged as a learning tool: legal professionals can deepen their understanding of specific legal topics, statutes, and case precedents, enabling them to stay up-to-date with the latest developments in the field.
  • Creative Thinking/Brainstorming: AI can serve as a valuable partner in expanding on legal ideas, generating new perspectives, and facilitating creative problem-solving within the legal realm. Legal professionals can also use generative AI to take on non-legal, creative tasks such as marketing and communication drafting. 

Each prompt type has a distinct purpose, and knowing which type of task you are taking on can help guide you to the right ways to formulate your prompt. 

Some Different Approaches to Prompt Engineering

There are many approaches to prompt engineering. Each has its own strengths and weaknesses, but they mostly apply to broader projects: creating lengthy premade prompts and crafting multi-step AI engagements.

In this article we’ll only highlight a few, as they lay the foundation for further learning. Note that these approaches are most useful for crafting extensive prompts, not one-off requests.

  • Chain-of-Thought Prompting: Building upon previous outputs to generate coherent and logical chains of thought.
  • Complexity-Based Prompting: Adjusting the complexity level of prompts to elicit outputs of varying depths or levels of detail.
  • Generated Knowledge Prompting: Leveraging AI-generated knowledge to enhance the prompt and increase the quality of outputs. (For example - prompting the AI to create the next prompt - saying “Generate a prompt for me that asks for ____”)
  • Least-to-Most Prompting: Gradually refining prompts to guide the AI system from basic to more elaborate responses.
  • Self-Refine Prompting: Iteratively refining prompts based on feedback from AI-generated outputs. (For example - use the AI to get feedback “How can I improve my prompt to get ____?”)
  • Directional-Stimulus Prompting: Providing specific directions or constraints to guide the AI system towards desired outputs. In this approach, prompts are given strict guardrails and directions to keep a narrow window of responses. 
  • Tree-of-Thought Prompting: Creating branching paths of exploration that allow for diverse and expansive outputs. With this approach bigger tasks are broken up into smaller, discrete components which can then be stitched together to perform a larger process. 
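One of these approaches, Self-Refine Prompting, can be sketched as a short loop. In the sketch below, `generate` is a placeholder for a call to your AI provider, not a real API; the point is the loop structure of asking the model for feedback on a prompt and folding that feedback back in before asking the final question.

```python
def generate(prompt: str) -> str:
    """Placeholder for a real generative-AI call; returns canned text here."""
    return f"[model output for: {prompt[:40]}...]"

def self_refine(task: str, rounds: int = 2) -> str:
    """Iteratively ask the model to critique the prompt, then fold the feedback back in."""
    prompt = task
    for _ in range(rounds):
        # Ask the model how the current prompt could be improved...
        critique = generate(f"How can I improve this prompt to get a more complete answer? Prompt: {prompt}")
        # ...then revise the prompt using that feedback.
        prompt = f"{task}\n\nRevise your answer per this feedback: {critique}"
    # Finally, run the refined prompt.
    return generate(prompt)

answer = self_refine("Summarize the key events of this case.")
```

In practice you would paste the model's actual feedback into your next prompt by hand, or wire the loop up to your provider's API; either way, the refinement cycle is the same.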

Key Elements of a Prompt

Before we dive into specific tips and tricks, let’s review the anatomy of a prompt. Crafting an effective prompt requires attention to all of its key elements. Think of each prompt as a short recipe with key ingredients. Failing to include an ingredient might go unnoticed in a small dish (say, forgetting the salt in a baked good), but it could also lead to some very unsavory results (say, forgetting the eggs in your cookies).

Here is what all good prompts are made of: 

  • Instructions: Clear and unambiguous instructions, providing clarity on the desired outcome.
  • Context: Providing adequate context within the prompt, ensuring the AI system understands the specific legal domain or case at hand.
  • Input Data: Choosing relevant and targeted input data that aligns with the legal issue or analysis required.
  • Output Indicator: Defining desired output indicators, allowing the AI system to generate outputs that meet specific criteria or objectives.
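These four ingredients can be combined quite mechanically. The sketch below is a minimal illustration (the element names simply mirror the list above) that assembles them into a single prompt string, including the "if there are none" contingency discussed later in this article:

```python
def build_prompt(instructions: str, context: str, input_data: str, output_indicator: str) -> str:
    """Combine the four key elements of a prompt into one query string."""
    parts = [
        f"Context: {context}",
        f"Instructions: {instructions}",
        f"Input: {input_data}",
        f"Output format: {output_indicator}",
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    instructions="List every typo in the document. If there are none, say 'none found'.",
    context="I am a paralegal reviewing a filing in an employment law case.",
    input_data="[paste the document text here]",
    output_indicator="A numbered list, one typo per line.",
)
```

You don't need code to do this, of course; the same four-part structure works just as well typed directly into a chat window.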

Best Practices for Prompt Engineering

To enhance prompt engineering and optimize the use of generative AI in the legal field, consider the following best practices:

  • Craft unambiguous prompts that leave no room for misinterpretation. Use clear, concise instructions formatted in the style of a command, and avoid jargon and abbreviations.
  • Provide adequate context within the prompt to guide the AI system's understanding. For legal work, this includes the case details, your location and practice area, and an outline of the key task at hand.
  • Strike a balance between targeted information and desired output to ensure the AI system generates relevant results. Asking for too much information in one go can confuse the AI and give you unfocused, less useful results.
  • Experiment and refine your prompts based on feedback and insights gained through iterative interactions with the AI system. You can rephrase your questions, ask follow ups (“go deeper”, “focus on [this] topic”), or start from scratch (clear your conversation history to refocus the AI) with a new approach. 

Bonus Tip: If conducting legal research, try to avoid basic generative AI prompts. Even with great prompt engineering, the risk that the AI hallucinates (makes up) a case is too high. Ask your AI provider whether it offers a dedicated legal research functionality that lets you prompt the models against a curated dataset of legal precedent.

What can go wrong?

Now that we know the basics of prompts and the best practices surrounding them, let’s discuss what could go wrong for a legal professional using generative AI without good prompt engineering.

There are three main categories of problems that can arise with poor prompt engineering:

  1. Vague or incomplete answers. This is most likely to happen when prompts aren’t detailed enough or use vague terminology, or, on the flip side, are overly specific. For legal work, the consequences of an incomplete answer could be profound. For example, if you are looking to summarize all of the key events of a case and your prompt surfaces only some of them, you might miss crucial information needed to move your case along.
  2. Incorrect answers. There are a few ways that poor prompts can lead to incorrect answers. The highest risk is hallucination, where the AI makes up answers that are not present in the information provided. A good example of a prompt that can lead to hallucination is one that doesn’t provide a contingency for results not being present: compare “Find all of the typos in this document” with “Find all of the typos in this document. If there are none, say ‘none found’”. With the first version, the AI could invent typos that are not present in the text, just to satisfy the request. The second version provides an “out” that the AI can use instead.
  3. Incorrect conclusions. Lastly, some prompts can lead to incorrect conclusions. In this case, even if all the information the platform analyzed and found is correct, it might draw the wrong conclusion from it. This can be a symptom of the vague or incomplete results above: if the AI identified some but not all of the case facts, it might draw the wrong conclusions about the facts’ impact on the case. It can also occur when prompts lack the context that would guide the results to the right conclusions. For example, if you do not mention the state in which you are practicing law, the AI might perform its analysis and give results for a different state, where the statutes and laws differ from those of your jurisdiction.

Understanding the risks of poor, thoughtless prompting will help legal professionals create a better experience and better outputs as they enhance their workflows. Although good prompting can alleviate some of these risks, it is still important to check and verify the work that AI is doing, especially for key legal work.

A Legal Example 

Let’s cover a quick example for legal work, to illustrate what good vs poor prompting could look like. 

Let’s consider the example of using a generative AI tool to identify inconsistencies across multiple depositions (or pieces of evidence). This is a particularly apt example, since it combines elements of analysis and creative thinking. We’ve found that analyzing a document and extracting key facts, although time consuming, is a relatively easy task to take on. But combing through two or more documents, extracting details from them, and cross-referencing each fact to find differences is an exponentially harder (and lengthier) task.

So how would you go about prompting your generative AI to identify inconsistencies between your depositions? 

First let’s do it the “wrong” way. 

“Identify the inconsistencies between these two documents.” 

Unfortunately, this simple prompt might not yield the best results. Let’s review our prompt ingredients: although this prompt gives clear and unambiguous instructions, it is missing context and doesn’t give any output guidelines.

This prompt is likely to yield incomplete results, such as missing key details from the case, or it might make up inconsistencies that were not present in the documents.

So how could we improve this prompt? 

We can break this task into smaller pieces: first, we identify all of the facts mentioned in the depositions; then we ask the AI to parse through the case, with the facts identified as context, and pick out the inconsistencies. And of course, we don’t want to forget our context.

“I am working on an [employment law] case, where my client claims that [he was wrongfully terminated]. I have provided you with two depositions from our case. I am analyzing transcripts of depositions that recently occurred. I’d like you to help me identify any inconsistent testimony that may have occurred in these depositions.

Please read through any depositions I have provided you with thoroughly before identifying the inconsistent testimony. For each deposition, please list out all of the different facts and statements made by the witness.”

The first section of our prompt is context: what case we’re working on (for our example we mentioned just the basics of an employment law case, but you could enter all the pertinent details, including your state, your claims, etc.). The second section is the clear, concise, and unambiguous instruction, telling the AI to perform one specific task. It also balances getting clear information in a targeted way, making sure the task doesn’t bite off more than it can chew (for example, by not asking the AI to identify key facts AND analyze them for inconsistencies all in one go).

Now we can move onto the second prompt that brings the task to a close. 

“Now review the depositions again and identify any major inconsistencies in the deposition transcripts. Please only highlight inconsistencies that may indicate the witnesses may not be telling the truth.”

This prompt is also clear and concise. It gives parameters for the results, keeping the work focused and most meaningful. In this second prompt we won’t need to provide the context, since the model should be referencing our first message (or better yet - a specific “instructions” tab that includes case details and contexts). 

You can also see that this sequence of prompts follows a few of the approaches to prompt engineering: it starts broad (list all the facts) and then narrows in on the task (identify inconsistencies), as in Least-to-Most Prompting; and it limits the scope of the prompt to the most impactful results (only identify major inconsistencies), as in Directional-Stimulus Prompting.
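This two-step sequence can also be pictured as a running conversation, where each prompt is appended to a shared history so the model retains the case context from the first message. The message format below mirrors common chat interfaces but is an assumption, and `send` is a placeholder rather than a real API call; check your own provider’s documentation.

```python
# Conversation history: the case context is supplied once, then reused.
history = []

def send(prompt: str) -> str:
    """Append a user prompt to the history; a real provider call would go here."""
    history.append({"role": "user", "content": prompt})
    reply = "[model response]"  # placeholder for the provider's answer
    history.append({"role": "assistant", "content": reply})
    return reply

# Step 1: broad task -- list all facts, with the case context up front.
send(
    "I am working on an employment law case about wrongful termination. "
    "I have provided two deposition transcripts. For each deposition, "
    "list all of the facts and statements made by the witness."
)

# Step 2: narrow task -- the context from step 1 is already in the history.
send(
    "Now review the depositions again and identify any major inconsistencies "
    "that may indicate a witness is not telling the truth."
)
```

The same principle applies in a chat window: as long as you stay in one conversation (or use a dedicated instructions tab), you don’t need to restate the context for each follow-up.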

Closing Thoughts 

Prompt engineering is the cornerstone of using generative AI effectively, in and out of the legal industry. By understanding the significance of prompt engineering and adopting best practices, legal professionals can unleash the full potential of AI systems to analyze information, expand knowledge, and facilitate creative thinking. 

Knowing about prompt engineering, understanding the different approaches and the core components of a prompt is a great first step in becoming a good prompter and a powerful AI user. 

Remember, prompt engineering is a skill that improves with practice and experimentation. Where you can, ask for support from your generative AI providers to help you craft the best prompts. Stay curious, explore new approaches, and continue refining your prompts to unlock the true potential of generative AI in your legal work.  
