As 2023 comes to a close, we reflect on what we’ve learned about this new world of generative AI. New tools can be hard to learn, and this is particularly true of brand-new technology like Large Language Models (LLMs). The trick is that it’s not only about how good the model is; it’s also about how good you (the user) are at using it. Poorly framed questions can diminish the value you get, and could even land you in hot water if the model generates incorrect information (often called “hallucination”). But when “prompted”, or asked, the right way, generative AI platforms can redefine how work gets done.
To close out the calendar year, we’ve rounded up our top tips for users of generative AI - with a particular focus on legal professionals’ application of LLMs.
When interacting with Large Language Models, it is helpful to think of the technology not as a program but as another person. LLMs were built to understand plain language, and they function much like another team member helping out with a particular question or task. Throughout this list we try to provide relatable examples that help you conceptualize our advice.
Think of it this way: Imagine you're at a hardware store looking to buy a tool for a home improvement project. If you approach the store clerk and ask, "Can you help me find a tool?" the clerk will likely be puzzled by the lack of specifics and will have to ask a series of follow-up questions to narrow down what you actually need.
However, if you approach the clerk and say, "I'm installing kitchen cabinets and I need a tool to help me precisely measure and level them. What would you recommend?" the clerk can immediately guide you to the right tools for your specific task, such as a laser level and a tape measure.
Think of it this way: Imagine you're trying to get advice from a fitness trainer for a workout plan. If you simply ask, "Can you give me a workout routine?" without providing any context, the trainer might give you a generic plan that doesn't suit your needs.
However, if you tell the trainer, "I'm training for a marathon that's happening in six months, and I have a history of knee injuries," the trainer can create a personalized workout plan that takes into account your specific goal and health background, ensuring the routine is tailored to help you prepare for the marathon while avoiding injury.
Remember that although people, and LLMs, are good at reading through documents and making judgements about what is important and what is not, providing an explanation of the work lets them focus on the right parts of the case. Dropping a case file on a team member's desk and asking for a summary will yield a less focused answer than handing over the same file with a note on top saying “this case is about a wrongful termination and we need to get key details to draft a demand letter; you’re going to help me get those key details.”
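If your team accesses a model programmatically rather than through a chat interface, the same principle applies. Below is a minimal sketch using the OpenAI Python client; the model name, the casefile.txt path, and the wrongful-termination framing are placeholders for illustration, not a prescription for any particular platform.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Hypothetical case file; the path and its contents are placeholders.
with open("casefile.txt") as f:
    case_text = f.read()

# Unfocused request: the document alone, with no explanation of the goal.
vague = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model
    messages=[{"role": "user", "content": f"Summarize this:\n\n{case_text}"}],
)

# Focused request: the "note on top" that explains what the summary is for.
focused = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting on a wrongful termination matter. "
                "Extract the key details needed to draft a demand letter: "
                "parties, dates, stated reasons for termination, and claimed damages."
            ),
        },
        {"role": "user", "content": case_text},
    ],
)

print(focused.choices[0].message.content)
```

The only difference between the two requests is the framing message - but that framing is what tells the model which details matter.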
Think of it this way: Imagine you're at a busy coffee shop trying to order a drink from a barista who is handling multiple orders. If you start explaining all the reasons why you need a caffeine boost that day, the barista might get distracted or confused by the unnecessary details, which could lead to a mistake in your order. Instead, if you simply and clearly state, "I'd like a medium latte, please," the barista can quickly and accurately process your request.
Think of it this way: Imagine you were trying to cook a meal and asked a chef, or a friend, for their recipe. You’d want them to write out the steps, and then you’d perform them in order. If they were to give you all of the instructions in one go (“preheat the oven, and make the sauce, and cook the vegetables, and also mix everything together, bake…”) you might get confused. And your end results might not be as good as if you took each task one at a time, starting at the beginning and systematically moving through the next steps.
Think of it this way: Imagine you're at a doctor's office, and after a check-up, the doctor says, "You might want to watch your diet." This advice is somewhat vague and doesn't give you specific guidance on what exactly you should change about your eating habits. You’d want to ask follow-up questions: "Can you specify which foods I should eat more of and which to avoid?" By asking these questions, you're prompting the doctor to give you more detailed and actionable advice. This iterative process of inquiry and response helps you understand precisely what steps you need to take to improve your diet and overall health.
Think of it this way: Imagine you're receiving a recipe from a friend who is known for their culinary expertise. They verbally give you instructions for making a complex dish. While you trust your friend's cooking skills, you decide to write down the recipe. Before you start cooking, you double-check the ingredients and steps by looking at a cookbook or a cooking website to ensure you understood everything correctly and that nothing was miscommunicated or forgotten.
For legal work, the stakes are high. You’re likely to check and double-check your work, whether it’s your own or your team’s output - you want to make sure it’s right. Just as with other projects, check the output of your AI - because even the smartest members of your team can make a mistake.
Think of it this way: Imagine you're a photographer tasked with capturing high-quality images of a nighttime landscape. You have a smartphone with a camera, which can take decent photos. However, for this specialized task, you also have access to a professional DSLR camera with settings that can be adjusted for low light conditions and a tripod to stabilize the shot.
If you choose to use your smartphone, you might end up with grainy, blurry images because it's not equipped to handle the nuances of nighttime photography. On the other hand, using the DSLR camera and tripod, which are the right tools for this specific task, you'll likely capture stunning, clear images that meet the professional standards required.
As for a legal example - imagine you needed to verify a particular law as it applied to your case. If you were to ask an average (smart) person walking down the street for legal advice, they’d probably give you some good answers, but they could make a grave mistake. They might be from another state and give you information that is inaccurate for your jurisdiction, or they might conflate several things together, remembering two or three pieces of news they’d heard and telling them to you as one story. You would be much better off asking someone - even a non-lawyer - who has a repository of case law in front of them, or a book of the local laws, to help you out.
Think of it this way: Imagine you're on a road trip with an old GPS navigation system that doesn't have real-time traffic updates. It's great for providing directions based on static maps, but it won't inform you about current road closures, traffic jams, or construction work. To successfully reach your destination without delays, you would need to be aware of this limitation and perhaps listen to live traffic radio updates or check a real-time traffic app on your smartphone to supplement the information from your GPS.
LLMs are much like us - the way that we learn is by seeing (and doing). We’ve experienced and learned about a certain (large but not unending) set of facts, and we draw upon that knowledge base in our daily lives. This is how LLMs have been trained - by seeing a vast amount of information and being able to draw on it to answer questions. But if a person were asked to talk on a topic they’d never seen or learned about, they’d falter. Similarly, LLMs are only good at working through things they’ve seen before in their training.
Think of it this way: Imagine you're at a restaurant trying to explain to the chef exactly how you want your steak cooked. You could say, "I'd like my steak well-done," but your idea of "well-done" might differ from the chef's interpretation, which could lead to an unsatisfactory meal. To ensure you get exactly what you want, you might describe the color and texture of what you're expecting: "I'd like the steak to have no pink inside and a slightly charred exterior."
Think of it this way: Imagine you are working with a colleague on a case, trying to come up with the best case strategy. You wouldn’t stop your meeting after a few questions and continue working alone. You’d discuss, ask follow-up questions, run ideas by each other, and approach the problem from different angles. Each question builds on the last, deepening your understanding. You're not just collecting disjointed facts; you're engaging in a dynamic learning process that evolves with each new perspective you discuss. The deeper you go with your LLM, similarly to Tip #5, the higher the quality of your answers.
Think of it this way: Imagine you're at a large, bustling farmers' market looking for ingredients to make a salsa. You could wander around aimlessly, hoping to stumble upon tomatoes, onions, and cilantro. However, this approach might be time-consuming and frustrating. Instead, you decide to ask a vendor, "Where can I find heirloom tomatoes?" By using the keyword "heirloom tomatoes," you've made it clear what you're looking for, and the vendor can direct you to the exact stall that sells them. Then, you might ask, "Who has the freshest cilantro here?" Again, the keyword "freshest cilantro" helps the vendor understand your specific interest in quality herbs.
Think of it this way: Think of LLMs as recent immigrants. Although they may be highly educated, they might not pick up on your slang, abbreviations, or other colloquialisms. The simpler the language, and the clearer the request, the better the model will understand, and the more accurately it will process your questions.
For example, imagine you're trying to assemble a piece of furniture with the help of a friend over the phone. If you say, "I'm stuck with this thingamajig here," your friend might have no idea which part or step you're referring to, leading to confusion and an unassembled piece of furniture.
To get effective help, you would instead say, "I'm on step 5 of the instructions, trying to fit the shelf into the side panel, but it won't align properly." By being specific and avoiding ambiguous terms, you give your friend a clear understanding of your situation, allowing them to provide precise and helpful guidance.
Think of it this way: Imagine you're planning a trip and you're consulting a travel agent. You wouldn't start by asking, "What are the best restaurants to eat at in Paris?" if you haven't yet decided that Paris is your destination. Instead, you begin with, "Which European cities are recommended for art lovers?" After the agent suggests Paris based on your interest in art, you might then ask, "What are the must-visit art museums in Paris?" Once you have a list of museums, you finally ask, "Can you recommend restaurants near these museums?"
This sequence of questions allows the travel agent to provide you with information that is not only relevant but also builds upon your travel plans step by step. It ensures that the advice you receive is tailored to your interests and logically structured, making your trip planning more efficient and enjoyable.
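For readers who work with a model through an API rather than a chat window, this step-by-step approach can be scripted by carrying the conversation history forward so each answer informs the next question. The sketch below again uses the OpenAI Python client; the model name and the sample questions are illustrative assumptions only.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Illustrative questions, asked in a logical order; each reply is appended
# to the history so the next question builds on the previous answer.
questions = [
    "What legal theories are commonly raised in wrongful termination claims?",
    "For the retaliation theory, what elements would we need to establish?",
    "Suggest three categories of documents to request that bear on those elements.",
]

messages = []
for question in questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

Keeping the full history in the messages list is what lets the later questions refer back to earlier answers, much like the travel agent remembering that you settled on Paris.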
Think of it this way: Imagine you're in a cooking class, and the chef presents a complex recipe that spans several pages, detailing numerous steps and ingredients. Feeling a bit overwhelmed by the intricacy and length of the recipe, you ask the chef, "Can you give me an overview of the cooking process and the key flavors we're aiming for?"
The chef then provides a simplified version, saying, "We'll start by preparing our base ingredients, which will give us a rich umami flavor, then we'll move on to constructing the dish in layers to enhance texture, and we'll finish with a garnish that adds a fresh aroma." This high-level summary gives you a clear idea of the cooking journey you're about to embark on, without getting bogged down in the details right away.
Think of it this way: Imagine you're a seasoned driver used to navigating with traditional paper maps. Over the years, GPS technology has advanced, and now there are navigation apps that offer real-time traffic updates, route optimization, and voice-guided directions. If you continue to rely solely on paper maps, you might find yourself stuck in traffic jams or taking longer routes, unaware of the tools that could make your journey faster and more efficient.
By staying updated with the latest navigation apps and their features, you can enhance your driving experience. For example, a new app update might include a feature that alerts you to road closures or suggests the best departure times to avoid traffic, saving you time and frustration.
A bonus thing to think about - choose a secure platform. Not all LLM providers treat your data the same way. If you are dealing with sensitive, confidential, or unredacted data, you will want to ensure that the software you are leveraging isn’t misusing your data - selling it, exposing it, or training on it. You can read more about the security concerns for generative AI in this blog post.
We hope these tips help get you on the right track and make you more confident when using Large Language Models and generative AI tools. Stay tuned for more education on how to get the most out of your legal generative AI models.