The U.S. Judicial Conference's Advisory Committee on Evidence Rules met and struggled to determine whether or how to draft rules that would allow courts to ensure the authenticity and reliability of trial evidence generated by artificial intelligence (article available here).
The Committee on Evidence Rules heard from computer scientists and academics about the risks of AI being used to manipulate videos and images and create "deepfakes" that could taint a trial. However, by the end of the discussion, the eight-member panel charged with drafting amendments to the Federal Rules of Evidence decided that proponents of one AI-related proposal needed to go back to the drawing board.
Some judges questioned whether existing rules, which predate the current AI boom, are sufficient to ensure the reliability of evidence.
The meeting came amid broader efforts by federal and state courts nationwide to address the rise of generative AI, including programs capable of learning patterns from large datasets and then generating text, images, and videos.
The published 358-page agenda for the meeting offers a definition of a deepfake and describes the problems AI-generated media may pose in legal trials.
The committee considered several deepfake-related rule changes. In the agenda, U.S. District Judge Paul Grimm and attorney Maura Grossman proposed modifying Federal Rule of Evidence 901(b)(9) (see page 5), which governs authenticating or identifying evidence.
For now, no definitive rule changes have been made, and the process continues.