OpenAI Has Little Legal Recourse Against DeepSeek, Tech Law Experts Say
OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Experts in tech law say OpenAI has little recourse under copyright and contract law.
- OpenAI's terms of use may apply but are largely unenforceable, they say.
This week, OpenAI and the White House accused DeepSeek of something akin to theft.
In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI's chatbots with queries and hoovered up the resulting trove of data to quickly and cheaply train a model that's now nearly as good.
The Trump administration's top AI czar said this training process, called "distilling," amounted to copyright theft. OpenAI, meanwhile, told Business Insider and other outlets that it's investigating whether "DeepSeek may have inappropriately distilled our models."
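Distillation itself is easy to caricature: a "student" model is trained on a "teacher" model's answers rather than on the teacher's original training data. The sketch below is purely illustrative — the functions and canned answers are invented for this example and bear no relation to OpenAI's or DeepSeek's actual systems.

```python
# Toy sketch of "distillation": train a student on a teacher's outputs.
# The "teacher" here is a stand-in with canned answers, not a real chatbot.

def teacher(prompt: str) -> str:
    # Hypothetical teacher model: returns canned answers to known prompts.
    answers = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return answers.get(prompt, "I don't know")

def collect_training_set(prompts):
    # Step 1: query the teacher repeatedly and record its responses.
    return [(p, teacher(p)) for p in prompts]

def student_from(pairs):
    # Step 2: "train" the student -- here, trivially, by memorizing pairs.
    table = dict(pairs)
    return lambda prompt: table.get(prompt, "I don't know")

pairs = collect_training_set(["capital of France?", "2 + 2?"])
student = student_from(pairs)
print(student("capital of France?"))  # mimics the teacher's behavior
```

The point of the caricature: the student never sees the teacher's underlying training data, only its outputs — which is why the legal question turns on whether those outputs are protected at all.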
OpenAI isn't saying whether the company plans to pursue legal action, instead promising what a spokesperson described as "aggressive, proactive countermeasures to protect our technology."
But could it? Could it sue DeepSeek on "you stole our content" grounds, much like the grounds on which OpenAI itself is being sued in an ongoing copyright case filed in 2023 by The New York Times and other news outlets?
BI put this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.
OpenAI would have a hard time proving an IP or copyright claim, these lawyers said.
"The question is whether ChatGPT outputs" - meaning the answers it generates in response to queries - "are copyrightable at all," said Mason Kortz of Harvard Law School.
That's because it's unclear whether the answers ChatGPT spits out qualify as "creativity," he said.
"There's a doctrine that says creative expression is copyrightable, but facts and ideas are not," said Kortz, who teaches at Harvard's Cyberlaw Clinic.
"There's a big question in copyright law right now about whether the outputs of a generative AI can ever constitute creative expression or whether they are necessarily unprotected facts," he added.
Could OpenAI roll those dice anyway and claim that its outputs are protected?
That's unlikely, the lawyers said.
OpenAI is already on the record in The New York Times' copyright case arguing that training AI is a permissible "fair use" exception to copyright protection.
If they do a 180 and tell DeepSeek that training is not fair use, "that might come back to sort of bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"
There may be a distinction between the Times and DeepSeek cases, Kortz added.
"Maybe it's more transformative to turn news articles into a model" - as the Times accuses OpenAI of doing - "than it is to turn the outputs of a model into another model," as DeepSeek is said to have done, Kortz said.
"But this still puts OpenAI in a pretty tough position with regard to the line it's been toeing regarding fair use," he added.
A breach-of-contract lawsuit is more likely
A breach-of-contract suit is much likelier than an IP-based suit, though it comes with its own set of problems, said Anupam Chander, who teaches technology law at Georgetown University.
The terms of service for Big Tech chatbots like those built by OpenAI and Anthropic forbid using their output as training fodder for a competing AI model.
"So maybe that's the suit you might potentially bring - a contract-based claim, not an IP-based claim," Chander said.
"Not, 'You copied something from me,' but that you learned from my model to do something that you were not allowed to do under our agreement."
There may be a drawback, Chander and Kortz said. OpenAI's terms of service require that most claims be resolved through arbitration, not lawsuits. There's an exception for lawsuits "to stop unauthorized use or abuse of the Services or intellectual property infringement or misappropriation."
There's a bigger drawback, though, experts said.
"You should know that the great scholar Mark Lemley and a coauthor argue that AI terms of use are likely unenforceable," Chander said. He was referring to a January 10 paper, "The Mirage of Artificial Intelligence Terms of Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for Information Technology Policy.
To date, "no model developer has actually tried to enforce these terms with monetary penalties or injunctive relief," the paper says.
"This is likely for good reason: we think that the legal enforceability of these licenses is questionable," it adds. That's in part because model outputs "are largely not copyrightable" and because laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "provide limited recourse," it says.
"I think they are likely unenforceable," Lemley told BI of OpenAI's terms of service, "because DeepSeek didn't take anything copyrighted by OpenAI and because courts generally won't enforce agreements not to compete in the absence of an IP right that would prevent that competition."
Lawsuits between parties in different countries, each with its own legal and enforcement systems, are always tricky, Kortz said.
Even if OpenAI cleared all the above hurdles and won a judgment from a US court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, the enforcement would come down to the Chinese legal system," he said.
Here, OpenAI would be at the mercy of another extremely complicated area of law - the enforcement of foreign judgments and the balancing of individual and corporate rights and national sovereignty - that stretches back to before the founding of the US.
"So this is a long, complicated, fraught process," Kortz added.
Could OpenAI have protected itself better from a distilling attack?
"They could have used technical measures to block repeated access to their site," Lemley said. "But doing so would also interfere with normal customers."
He added: "I don't think they could, or should, have a valid legal claim against the scraping of uncopyrightable information from a public site."
Representatives for DeepSeek did not immediately respond to a request for comment.
"We're aware that groups in the PRC are actively working to use methods, including what's known as distillation, to try to replicate advanced U.S. AI models," Rhianna Donaldson, an OpenAI spokesperson, told BI in an emailed statement.