AI Summit_Sept. 13 2024
Navigating a Shifting Landscape | AAJ
Mandatory training on the AI tools you choose to implement, and on your firm's policies related to those tools, is very important. Everyone who is going to use this technology needs to understand how it works and its benefits and limitations. Because all of this is moving at such a fast pace, firms will need ongoing monitoring and compliance checks to make sure people are using the technology properly, new hires get the appropriate training, and firm policies stay current.

Firms will also need to think about what to communicate to clients. Will it be via an engagement letter that says, "We use AI for X, Y, and Z purposes. If you would like to discuss the use of AI on your case, please raise this with your attorney"? Or will it be a conversation you have with every new client? And if you're using AI only to correct grammar, to make a paragraph a little tighter, or to generate an initial draft of deposition questions, is that discussion even necessary? Clients probably don't care about those sorts of uses, but they might very well care if you are drafting court pleadings or an opening argument using AI.

Let's shift to how AI could affect evidence in court. What are the main issues?

There are two different circumstances where parties may seek to admit AI or purported AI evidence. The first is where both parties agree that the evidence is AI based. For instance, both parties agree that an AI tool was used in hiring and the plaintiff didn't get the job because the algorithm said, "X is in the bottom quartile and doesn't qualify for this job." In that case, the process for admitting related evidence tends to look the same as the process for most technical and scientific evidence. The questions are going to be: How does this tool work? Are there standards for its operation? How was it trained? What data was it trained on? Has it been tested? What is its error rate? Has it been peer reviewed and adopted by others in the industry?

In the employment example, if you are a woman of color and the training data was gathered primarily from white males, the tool likely won't make an accurate prediction. We want to know about the training data and whether it's representative of the groups the tool is being used to make predictions about. What due diligence was done, and what was done to test that the tool is both valid and reliable? "Valid" means it measures or predicts what it's supposed to; "reliable" means it does so consistently under similar circumstances. We also want to know about bias: Is the tool biased against certain protected groups?

The second situation is different. It involves disputed AI evidence or deepfakes. You say you have audio of me saying "X, Y, Z," and I say, "That's not me. I never said that." Under ordinary circumstances, to get that audio admitted, all you have to do is find somebody who knows my voice really well to testify that it's my voice.
AI Roundtable Page 50