OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation

The EU is finalizing new AI regulations, but OpenAI CEO Sam Altman says he has ‘many concerns’ about the law. The EU AI Act would require the company to disclose details of its training methods and data sources.

Illustration: The Verge

OpenAI CEO Sam Altman has warned that the company might pull its services from the European market in response to AI regulation being developed by the EU.

Speaking to reporters after a talk in London, Altman said he had “many concerns” about the EU AI Act, which is currently being finalized by lawmakers. The terms of the Act have been expanded in recent months to include new obligations for makers of so-called “foundation models” — large-scale AI systems that power services like OpenAI’s ChatGPT and DALL-E.

“The details really matter,” said Altman, according to a report from The Financial Times. “We will try to comply, but if we can’t comply we will cease operating.”

A day later, Altman tried to temper his initial comments, saying OpenAI had had productive conversations about AI regulation in Europe and, “of course, [has] no plans to leave.”

Nevertheless, in comments reported earlier by Time, Altman said the concern was that systems like ChatGPT would be designated “high risk” under the EU legislation. This means OpenAI would have to meet a number of safety and transparency requirements. “Either we’ll be able to solve those requirements or not,” said Altman. “[T]here are technical limits to what’s possible.”

In addition to technical challenges, disclosures required under the EU AI Act also present potential business threats to OpenAI. One provision in the current draft requires creators of foundation models to disclose details about their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”) and provide “summaries of copyrighted data used for training.”

OpenAI used to share this sort of information but has stopped as its tools have become increasingly commercially valuable. In March, OpenAI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work from being copied by rivals.

In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Altman’s recent comments help fill out a more nuanced picture of the company’s stance on regulation. He has told US politicians that rules should mostly apply to future, more powerful AI systems. By contrast, the EU AI Act is focused on the current capabilities of AI software.

Update May 26th, 5:29AM ET: Added Altman’s more tempered statement issued a day later.