AI Companies Battle Over Europe's AI Act as Creatives Push Back

Nine months ago, the European Union passed a landmark AI Act that was hailed as groundbreaking. It became the world’s first law aimed at regulating artificial intelligence technology, and specifically demanded that AI companies start informing the public when a piece of content is AI-generated. 

But one of the most divisive aspects of the law concerns transparency in the crucial upstream phase: it requires AI companies to notify rightsholders when their works are used as training data for their generative AI systems' algorithms. That notification obligation — the implementation of which is still up in the air but is due to kick in on Aug. 2 — is the crux of the battle for rightsholders seeking compensation and new revenue streams, as well as for AI companies including OpenAI (the maker of ChatGPT), Meta and French banner Mistral AI. These companies have unanimously called out the AI Act for creating a cumbersome environment and slowing down innovation on the so-called "old continent."

OpenAI boss Sam Altman even penned an op-ed in French newspaper Le Monde on Feb. 8 to argue that "European regulators, who are working on the application of the AI Act, must think about the consequences of their decisions on tomorrow's opportunities, especially at a time when the rest of the world is advancing." He cited Mario Draghi, the former president of the European Central Bank, who claimed there was an "innovation gap" when comparing Europe with the U.S. and China.

It's far from the first time tech giants have clashed with EU regulators. In 2018, the EU was heavily criticized by U.S. banners, including Meta, for enforcing the world's toughest data privacy law, the infamous GDPR (General Data Protection Regulation). But that law has become something of a template overseas, including in the U.S., where at least five states now have some kind of consumer privacy law.

This time around, when it comes to AI — even in the U.S., where technology is scarcely regulated — a group of news outlets led by The New York Times has taken OpenAI to federal court over copyright infringement. It's just one of dozens of AI-related cases raising copyright challenges that have erupted around the world.

In France, where OpenAI only has a licensing deal with Le Monde newspaper, the LVMH-owned press group Les Echos-Le Parisien is threatening to follow in The New York Times' footsteps and take legal action against OpenAI after failing to reach an agreement, the French group's president, Pierre Louette, told Variety.

Louette — who is also now president of Alliance Presse, which reps roughly 40% of French journalists — voiced his concerns over the plundering of journalistic works at the AI Action Summit in Paris, appearing on a panel alongside Jane C. Ginsburg, a revered professor of literary and artistic property law at Columbia University, and French-Moroccan filmmaker Nabil Ayouch.

Elsewhere, OpenAI has agreements with News Corp, whose titles include the Wall Street Journal and the Daily Telegraph, and Axel Springer, which owns Die Welt and Politico. The French startup Mistral AI, meanwhile, has a deal with the newswire AFP (Agence France-Presse), and Google has a pact with the Associated Press.

While only a handful of licensing deals have been signed globally, AI companies have been able to "ingurgitate millions of works by scraping the internet and downloading collections of works, sometimes from illegal sources, particularly databases of pirated books," to train their models, most times without compensating rightsholders, Ginsburg said on the panel. AI entrepreneurs have claimed their copies were "excused under exceptions for 'text and data mining' in the EU, and 'fair use' in the U.S.," she added.

The "fair use" exception essentially allows AI companies to use copyrighted content as long as they produce something different (for educational or commentary purposes) that does not compete with the original. The EU's "text and data mining" exception goes much further, as it gives rightsholders the possibility to opt out when their work is copied for commercial purposes. As such, more and more EU rightsholders, including news organizations, have started opting out to prevent AI companies from accessing their works.

In theory, such a constraint should give AI companies an incentive to sign licensing agreements, since access to high-quality data is a prerequisite for training their machines and, as Ginsburg says, relying on copyright-expired works or low-quality data could compromise an AI system. But so far, that prospect has not compelled AI entrepreneurs to take out their checkbooks and sign deals with rightsholders. The latter, meanwhile, don't feel adequately protected by an opt-out option that seems almost obsolete.

Louette said the work of journalists was plundered by AI companies before publishers opted out a couple of years ago, and he is seeking compensation for past, present and future use, since there is no way of purging the machines.

"We know you're harvesting us. We didn't know it before. Now we tell you we refuse it and we need to have a protection and we need to have a remuneration," Louette said on the panel. "Certain companies sell subscriptions. It is exactly the situation in which you harvest someone else's field and you sell what you've harvested to others. It's like stealing." He also noted the irony of seeing OpenAI accuse China's DeepSeek of copying its advanced AI models.

"Whenever one of these companies is under competition from another, they say, 'Hey, we have IP rights! You can't take away from us!' That's when it becomes a topic for them. This is ridiculous in many ways," Louette said.

Ayouch, meanwhile, who recently sat on the Berlin Film Festival jury which was presided over by Todd Haynes, raised the alarm over AI companies’ total opacity regarding the content they use as training data.

"Some say: 'Drill, baby, drill.' I say: 'Regulate, baby, regulate!'" Ayouch said. "Every technological innovation, to thrive, has required a protective regulatory framework. Without regulation, innovation collapses — history has proven this, time and time again. We want to move forward, to embrace progress, but not at any cost and not at any price."

Ayouch is demanding that AI companies be fully transparent about every work they use to train their machines. AI companies, especially those in the U.S., are fighting back to avoid having to communicate granular details about training data in the EU, and there are rumors that they may only be required to notify rightsholders about a fraction of what they use. One thing that's certain, however, is that once the obligation kicks in on Aug. 2, the EU will fine companies — wherever they're from — for infringements, with penalties that could add up to billions, according to a France-based industry source.

“When the Lumière brothers invented cinema, people predicted the end of painting and photography. Yet, both survived and became stronger than ever. The same thing happened when television emerged — supposedly set to kill cinema — and later VOD and streaming platforms were expected to wipe out TV,” Ayouch said. “But here we are today: all these remarkable innovations have endured, and artists have embraced them. Author’s rights have never ceased to evolve, adapting to each of these changes.”

France, which was the first country in Europe to get streamers to invest a percentage of their local turnover in French content, has been the backdrop of fiery debates between cultural figures and AI players. The country has taken an ambivalent position on regulating AI, even though it has a long legacy of protecting authors and boasts one of the world's most comprehensive copyright laws.

While it’s deeply attached to the world of creation and the concept of authorship, France also holds the status of Europe’s biggest hub for AI innovation. French President Emmanuel Macron has worked to strengthen the EU’s positioning in the AI ecosystem in order to compete with the U.S., where the Trump administration has vowed to fast-track strides in AI, and with China. France is also home to the EU’s biggest AI startup, Mistral AI, and earlier this month Macron unveiled a massive investment of €109 billion from the private sector to be injected into the country’s AI industry. That plan, unveiled by Macron during the AI Action Summit in Paris, is meant to rival Trump’s $500 billion Stargate Project.

As Macron told Variety in his cover story in October, AI is also a matter of soft power that raises geopolitical issues. As such, he said developing a generative AI system in Europe is key because "there's a lot of bias that can be created right from the start in these models" and the EU needs to "develop broad language models that match our preference."

“There’s a race to innovate, so we have to be part of it. We need to continue to train and retain talented people, invest more public and private money — that’s one of my European battles,” he said.

The bias is indeed obvious when it comes to news dissemination by AI tools. Banijay France CEO Alexia Laroche-Joubert drew some buzz over the weekend with a LinkedIn post comparing the answers to the simple question "Can you give me the latest news in the world?" from ChatGPT, Mistral AI's Le Chat and Microsoft. The answers were radically different. ChatGPT, for instance, led with two news items about the Trump administration, while Mistral AI's Le Chat led with the UN's call for donations in Haiti and the homecoming of Syrian refugees.

Smaller countries face a different problem. In the digital-savvy Baltic country of Estonia, President Alar Karis said on an AI panel in Paris that he is so worried about the absence of the Estonian language and culture in the training data for major AI models that members of his government are in talks with Meta to hand over Estonian content for free — a move that has sparked an uproar among local creators.

But as Louette says, allowing AI companies to pull content from the internet for free is a slippery slope that could have large repercussions. “A friend of mine in the room asked ChatGPT this question … ‘Can plundering a country’s culture lead to the country’s disappearance?’ ChatGPT’s answer was, ‘When a civilization is under massive plundering, it loses a part of its memory and its identity.’ Well, that’s a good answer,” he said.

The panel at the AI Action Summit was meant to highlight the virtuous cycle that could be created between AI companies and content creators. Such a symbiotic bond already exists in France between local cinema and U.S. blockbusters, thanks to a scheme that allows the National Film Board to levy a tax on each theater admission and fund subsidies for the French film industry.

While the prospect of seeing big tech companies financing local players seems rather unrealistic, Laroche-Joubert told Variety that she’s confident creators will not be left out of the AI ecosystem.

“Right now, we’re on this mad race for innovation but there will be some normalization, as it always happens after the frenzy,” she said. “One thing is sure: we’ll never work without creators.”


