
Now the AI pioneers are facing trial


While politicians debate how to legislate on artificial intelligence, the questions of greatest economic significance will be decided in court. So writes the Wall Street Journal, citing a series of ongoing legal cases.

Since ChatGPT launched at the end of last year, a storm of lawsuits has already been directed at companies such as OpenAI, Microsoft, Google and Meta, that is, the first to roll out AI tools on a broad front.

The lawsuits raise questions such as who bears responsibility when an AI commits defamation or infringes someone's copyright. Who is liable when an AI makes decisions on insurance claims? Is it the company that provides the AI, or is it the user who should be held responsible?

The Wall Street Journal

Some of the Thorniest Questions About AI Will Be Answered in Court

Companies selling ChatGPT-like tools face lawsuits alleging defamation, unfair business practices, copyright infringement and privacy violations.

By Ryan Tracy

The Wall Street Journal, 23 August 2023

WASHINGTON. Congress and the White House are talking about regulating artificial intelligence, but courts might well decide some of the most economically significant questions about the booming technology.

Since the late 2022 launch of ChatGPT, the viral AI-powered chatbot, a flurry of suits has targeted AI purveyors including OpenAI, Microsoft, Google and Meta Platforms. The cases involve questions such as who is accountable when an AI system libels someone, whether artists should get paid when AI developers use their work to train machines, and what responsibilities a business owner bears when using AI to inform a decision affecting a customer’s life.

Most of these suits are in early stages and could take months or years to play out, and some may reach the Supreme Court. But each has the potential to significantly shape the legal landscape of artificial intelligence.

Defamation: What happens when AI lies?

In the past, tech companies have been shielded from many defamation claims by the law known as Section 230, which immunizes them from publishing content created by someone else. But that immunity might not apply to so-called generative-AI systems, such as ChatGPT, which generate their own texts, photos, videos and other media.

That potentially exposes the companies to significant legal liability when AI systems make false statements—as they often do.

In a Georgia court, a syndicated radio host is accusing OpenAI of libel. Mark Walters’s complaint says that ChatGPT, in response to a prompt typed in by another person, spat out an answer that falsely accuses him of embezzling funds from a nonprofit group.

“It’s not like it’s a bad rendition of the actual facts. It’s just all completely made up,” said John Monroe, Walters’s lawyer, referring to the answer ChatGPT gave. “He was shocked because it just came out of left field.”


OpenAI recently moved to dismiss the suit, arguing that it makes the limitations of its tools clear to users. “By its very nature, AI-generated content is probabilistic and not always factual, and there is near universal consensus that responsible use of AI includes fact-checking prompted outputs before using or sharing them,” the company said in a July filing.

Matt Perault, who consults for some tech companies and heads the Center on Technology Policy at the University of North Carolina at Chapel Hill, said legal uncertainty will complicate companies’ internal discussions about rolling out new AI apps. An app that helps users write social-media posts, for example, could face a flood of lawsuits if it makes up defamatory statements.

“Until that’s clarified, those product meetings and interaction between engineers and lawyers is going to be really fraught,” Perault said.

Copyright: When should humans be compensated for AI-generated work?

Courts have rejected creative rights for nonhumans before—rejecting copyright for a monkey’s selfies in 2018—but introducing AI into the intellectual property sphere opens a new world of questions about who will reap the potential profits of materials generated or assisted by AI.

“It’s really a question of how the economics are going to shake out, and who is going to get paid for the money that will undoubtedly be made,” said Aaron Moss, a copyright lawyer based in California.

A suit filed in July from author and comedian Sarah Silverman argues that her book was scraped from a “shadow library” website and used to train ChatGPT, providing an AI-generated summary of her book as evidence. Creatives have filed at least three other major proposed class-action lawsuits this year, raising concerns on two fronts: that AI tools might be illegally trained on copyrighted works, and that they might be later prompted to produce imitations that compete with the originals.


“My entire industry is holding our collective breaths to see how far and how quickly this technology will come to replace us,” said Karla Ortiz, an artist and class-action plaintiff who appeared before a July Senate hearing on copyright and AI.

In another high-profile case, Getty Images is suing the popular image generator Stable Diffusion, arguing its maker, Stability AI, scraped millions of images from Getty’s archives without permission.

Some companies, including Adobe, which is selling a package of generative-AI tools, claim to use only licensed materials for training. Others argue that training the tools with large data sets is allowed under copyright law and improves the quality of their output.

“Training these models is an acceptable, transformative and socially beneficial use of existing content,” Ben Brooks, head of public policy for Stability AI, said at the Senate hearing.

Several lawyers warned that a U.S. ruling restricting how AI models can be trained might push companies to move portions of their development to other countries with more permissive laws, such as Israel or South Korea.

Responsibility: What is AI’s proper role in medical decisions?

Not all the lawsuits target generative-AI systems.

In a proposed class-action lawsuit in California, plaintiffs say health insurer Cigna breached its duty to patients by allowing doctors to “instantly reject massive amounts of claims” via an algorithm. More than 300,000 preapproved insurance claims were rejected over two months, and doctors never opened or reviewed the claims, the suit alleges.

“By replacing licensed doctors with an unchecked algorithm, Cigna is not only breaking the law, but is supplanting experienced workers’ jobs and sacrificing patient care to cut costs,” said Shireen Clarkson, a partner at public-interest law firm Clarkson, which has filed several AI-related suits in California.

The suit raises broader questions about what constitutes fair business conduct in the context of an AI system—a topic also being examined by Congress and the Federal Trade Commission.

In a statement, Cigna Healthcare described the lawsuit as “highly questionable” and denied using AI or algorithms in its review process.

“The review takes place after patients have received treatment, so it does not result in any denials of care,” the statement said.

Privacy: Do ChatGPT-like models violate privacy laws?

Another set of suits takes a different line of attack on tech companies’ practice of collecting massive amounts of data from the internet and using it to train AI systems. In scraping all that data, the plaintiffs allege, the companies collected personal information in violation of federal and state privacy laws. The 1998 Children’s Online Privacy Protection Act, for example, bars the collection of personal data about children under 13 without a parent’s permission.

Google and OpenAI have each been hit with proposed class-action suits, which also allege unfair business practices, negligence and a series of other claims. The companies have said they follow privacy laws and noted that users can now opt out of having their personal information used for training.

“American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” Halimah DeLaine Prado, Google’s general counsel, said.

Rulings on some of these AI-related lawsuits could come before Congress passes any major AI legislation.

Omni is politically unaffiliated and independent. We strive to offer multiple perspectives on the news. Do you have questions or comments about our reporting? Contact the editorial team.