Judges to AI: We object!

With help from Derek Robertson

You know the fears about AI automation are real when even the chief justice of the United States starts to sound nervous.

John Roberts’ year-end report on the federal judiciary has caused a stir with its defense of the value of human judges in a world where AI models have started passing bar exams.

While encouraging members of the stodgy legal profession to sit up and pay attention to AI’s advances, Roberts made the case that the job of judging involves an irreducible human element. “Legal determinations often involve gray areas that still require application of human judgment,” he argued.

Even if human judges aren’t going anywhere soon, the evidence suggests Roberts is right to be raising the alarm on AI. The technology is poised, or in some cases already starting, to collide with the practice of law in several arenas, many of which might not be obvious but could have long-term effects.

One particularly thorny issue will be the admission of evidence that is an output of an AI model, according to James Baker, a former federal appeals judge and the co-author of a 2023 judges’ guide to AI published by the Federal Judicial Center, a research agency run by and for the judiciary.

The report anticipates that outputs like AI-generated analyses of medical tests or AI-screened job applicant pools will soon start posing legal dilemmas for judges.

Baker told DFD that he expects the complexity of models to make controversies over AI evidence more vexing than debates over DNA evidence, which overcame initial skepticism to become a mainstay in American legal proceedings.

“The challenge with AI is every AI model is different,” he said. “What’s more, AI models are constantly learning and changing.”

For now, judges have discretion to steer clear of that confusion: Baker pointed to Rule 403 of the Federal Rules of Evidence, which says that a judge can exclude relevant evidence at trial if it’s likely to cause too much confusion or distraction.

Of course, courts won’t be able to sidestep the complexity of AI models when they’re central to the dispute being litigated. Already, generative AI has become the subject of several ongoing copyright cases, including one in which the New York Times is challenging OpenAI’s use of copyrighted material to train its models. Baker said he also expects to start seeing cases that will force judges to grapple with the role of AI in automated driving and medical malpractice.

While the constitutionally mandated role of judges offers a certain level of job security, other positions in the legal profession are already starting to feel the heat.

Last week, former Donald Trump lieutenant Michael Cohen, himself a disbarred lawyer, offered a memorable lesson in how not to use AI in the practice of law. On Friday, court records were unsealed showing that Cohen had provided his legal team with nonexistent legal precedents generated by Google’s Bard chatbot, which his lawyers then cited in a motion to end his supervised release early.

But specialized AI legal research tools are improving rapidly, according to one litigator at a prominent mid-sized law firm who was granted anonymity to discuss what is becoming an increasingly touchy subject inside the legal profession.

He said one research tool he tried out last month accomplished in three or four minutes what would take a junior associate 10 hours to do. He predicted that smaller law firms will be able to adopt the technologies more quickly than the large firms that dominate the industry.

The litigator said clients at startups and in the tech industry have already started pushing lawyers to make use of the automated tools: “There’s an expectation now that you’d use AI to reduce costs.”

And the tools look poised to get better at automating human work. Last month, Harvey, a startup that bills itself as “generative AI for elite law firms,” announced it had raised $80 million from investors including Kleiner Perkins, Sequoia and OpenAI.

If summer associates soon start to sweat, chief justices may not be too far behind.

Matt Henshon, chair of the American Bar Association’s Artificial Intelligence and Robotics Committee, pointed DFD to a notable “dichotomy” between Roberts’ “gray area” commentary and his other memorable remarks about the role of the judiciary.

At his confirmation hearing in 2005, Roberts famously described judging in more black-and-white terms. “Judges are like umpires,” he said, adding, “it’s my job to call balls and strikes.”

There’s good reason for Roberts to ditch the umpire comparison in favor of a vaguer, more touchy-feely conception of judging (his latest report also emphasized judges’ ability to interpret “a quivering voice,” “a moment’s hesitation,” or “a fleeting break in eye contact”).

If litigating is America’s favorite pastime, baseball might be its second favorite. And in 2019, Major League Baseball began experimenting with automated “umpires” to call balls and strikes in the minor leagues. Last year, the robo-umps came to every Triple-A ballpark, the last stop before getting called up to the big leagues.

the new (kind of scary) frontier

European regulators are sounding the alarm for 2024 about cybersecurity risks posed by some of the most cutting-edge new technologies.

As POLITICO’s Cyber Insights (for Pro subscribers) reported this morning, European officials are particularly worried about cyber threats from quantum computing, AI-powered attacks, and data breaches in the cloud. On quantum, the European Union hopes this year to roll out a coordinated, bloc-wide network of quantum-proof communications systems; the EU AI Act aims to address threats posed by the corruption or misuse of powerful AI models; and a planned cloud certification scheme will tackle threats there. (And as noted in the stateside edition of Morning Cybersecurity today, crypto heists and hacks remain a major threat as well, with North Korea increasingly using the spoils to fuel its rocket program.)

Still, these threats are infiltrating the European policy conversation amid a year already expected to be busy fending off election hacking, illegal targeted ads, and disinformation. — Derek Robertson

way mo' safer

In case you missed it before the holiday: Google subsidiary Waymo is boasting that its self-driving cars are now safer than ever.

In two recently published research papers, Waymo favorably compares its automated drivers’ crash rates to those of humans, then shows its work by explaining what those safety benchmarks actually are. Overall, Waymo claims an 85 percent reduction in its rate of crashes involving injury compared with human drivers, and a 57 percent reduction in crashes reported to police, an indicator of more significant incidents.

The good-news papers come as Waymo plans to expand from its existing service areas of San Francisco and Phoenix to Los Angeles and Austin, even as regulators in the U.S. and abroad worry about the technology and its safety. — Derek Robertson

