By Laura Lin and Rachel June-Graber

Popular generative AI tools can now create complicated written and visual works – including, if courts allow it, trial exhibits and demonstratives. Lawyers and the general public are increasingly aware of AI’s ability to create complex works such as essays, poems, summaries of written works, and detailed images conditioned on text descriptions or other parameters. AI tools are trained on vast amounts of online data, including images, visual art, literature, and other written works, and can mimic the style and form of existing creative works. This means that generative AI can aggregate or summarize existing information and data, or create new analyses or visual representations.

Late last month, the High Court of Delhi at New Delhi considered the admissibility of AI-generated evidence in a trademark case, Christian Louboutin SAS & ANR. v. M/S The Shoe Boutique. The famous shoemaker argued that its shoes, with their corresponding red soles and other distinctive design elements, have acquired “enormous reputation and goodwill” and have been depicted in popular culture and the media. Turning to AI, the plaintiffs alleged that “the reputation that the plaintiffs have garnered can also be evaluated on the basis of a ChatGPT query.”

The Louboutin plaintiffs sought to proffer evidence of ChatGPT’s response to the query: “Is Christian Louboutin known for spiked men’s shoes?” ChatGPT responded: “Yes, Christian Louboutin is known for their iconic red-soled shoes, including spiked styles for both men and women. The brand’s spike-adorned footwear has become quite popular and is often associated with their unique and edgy designs.”

The Indian court rejected this evidence and concluded that ChatGPT is a “tool [that] cannot be the basis of adjudication of legal or factual issues in a court of law.” But the court did not immediately brush off the plaintiffs’ suggestion that a ChatGPT query can provide evidence of reputation. Instead, the court seemed to recognize that ChatGPT has the ability to confirm that specific brands are known for having particular design attributes.

The Indian court ran its own queries through ChatGPT, asking (i) “Is there any brand known for manufacturing & selling shoes with spikes and studs on the outer body?”; and (ii) “Give a list of brands that make shoes with spikes and studs on the outer body of the shoe.” In response, ChatGPT identified Christian Louboutin shoes first but also listed additional brands of competitor shoes. Both answers included a caveat that fashion trends, brands, and designs change over time, and that the user should check brands and retailers for current shoe options.

The court’s skepticism about ChatGPT evidence arose from its conclusion that the response of generative AI chatbots “depends upon a host of factors including the nature and structure of the query put by the user, the training data, etc.” The court seemingly blamed “the present stage of technological development” and noted that, at present, “the tool could be utilized for a preliminary understanding” at best. The court expressed concern, too, that “there are possibilities of incorrect responses, fictional case laws, imaginative data, etc. generated by AI chatbots. Accuracy and reliability of AI-generated data is still in the grey area.”

While the Indian court found the proffered ChatGPT response inadmissible, its analysis suggests that AI-generated evidence is on the way. In the trademark context, this could mean asking AI about a brand’s reputation or whether a knock-off could create a likelihood of confusion. Similarly, AI could opine about what a “reasonable person” would do in a certain situation based on the AI’s extensive information database. In the criminal context, the parties might ask generative AI to determine whether law enforcement had reasonable suspicion for a traffic stop or probable cause for a search warrant.

In each of the above examples, generative AI evidence is more likely to be admitted where similar inquiries by the court (or opposing counsel) produce the same results as the proffering party’s inquiry. Admissibility might hinge, too, on the use of an expert to sponsor the ChatGPT responses or explain the extent of the AI tool’s reliability.
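To illustrate what such reproducibility could look like in practice, the following is a minimal sketch, assuming the OpenAI Python SDK with an illustrative model name and seed, of how a party might record a query and its sampling parameters so that the court or opposing counsel can re-run the same inquiry. Even with a fixed seed and zero temperature, identical outputs are not guaranteed, which is itself a point an expert may need to explain.

```python
# Sketch only: records a ChatGPT-style query with deterministic-leaning
# settings so the same inquiry can be re-run by the court or opposing counsel.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and seed below are illustrative.
import json
from openai import OpenAI

client = OpenAI()

query = {
    "model": "gpt-4o",  # illustrative model name
    "messages": [
        {"role": "user",
         "content": "Is Christian Louboutin known for spiked men's shoes?"}
    ],
    "temperature": 0,   # minimize sampling randomness
    "seed": 12345,      # best-effort determinism; not a guarantee
}

response = client.chat.completions.create(**query)

# Preserve everything needed to re-run and compare the inquiry later.
record = {
    "request": query,
    "response": response.choices[0].message.content,
    "system_fingerprint": response.system_fingerprint,  # backend version marker
}
with open("chatgpt_query_record.json", "w") as f:
    json.dump(record, f, indent=2)
```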

For demonstratives, too, generative AI offers a potentially powerful tool in the courtroom. A litigant could input large amounts of data into a generative AI tool to aggregate or summarize it into a written or visual demonstrative. Similarly, a party could feed a set of facts into an AI image generator to create a visual representation. As an example, feeding the text “two cars approaching each other at the intersection of Haight and Ashbury in San Francisco” into an AI image generator produced, as expected, a visual representation of the cars at the specified cross streets that could be viewed from multiple angles.
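The article does not identify which image generator was used; as one hedged example, the request might look like the following sketch, which assumes OpenAI’s image generation endpoint via its Python SDK.

```python
# Sketch only: generates a demonstrative image from a factual prompt.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the model name is illustrative, and other image generators work similarly.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # illustrative; any text-to-image model could be used
    prompt=("two cars approaching each other at the intersection of "
            "Haight and Ashbury in San Francisco"),
    n=1,               # DALL-E 3 returns one image per request
    size="1024x1024",
)

# The API returns a temporary URL to the generated image, which could then
# be downloaded and preserved alongside the exact prompt for disclosure.
print(result.data[0].url)
```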

There are safeguards that can be put into place to protect against AI-created misinformation or misleading the judge or jury as to the origin of the evidence. For example, experts or counsel may be duty-bound to inform the court when AI has been used to generate evidence presented. At least three federal judges have already put into place certain obligations, such as standing orders, requiring counsel to certify that generative AI was not used in connection with court filings, or to disclose when such tools have been used. E.g., June 6, 2023 Standing Order of Judge Michael M. Baylson of the United States District Court for the Eastern District of Pennsylvania. Revealing the use of a generative AI platform, and how that platform works, to the judge or jury will allow the decision-maker to make its own determination as to how much weight that evidence should be afforded.

To what extent courts will allow works created by generative AI into the courtroom remains to be seen. But growing acceptance appears likely with time, particularly as advocates learn the upsides of these tools, generative AI programs improve to better eliminate bias and increase reliability, and experts emerge to explain the reliability of the technology in the courtroom.

© Daily Journal 2023
