July 30, 2024

Lawmakers want to carve out intimate AI deepfakes from Section 230 immunity

/ The Intimate Privacy Protection Act would require platforms to have a ‘reasonable process’ to address cyberstalking and digital forgeries.

A bipartisan pair of House lawmakers is proposing a bill that would strip Section 230 protection from tech companies that fail to remove intimate AI deepfakes from their platforms.

Reps. Jake Auchincloss (D-MA) and Ashley Hinson (R-IA) unveiled the Intimate Privacy Protection Act, Politico first reported, “to combat cyberstalking, intimate privacy violations, and digital forgeries,” as the bill says. The bill amends Section 230 of the Communications Act of 1934, which currently shields online platforms from being held legally responsible for what their users post on their services. Under the Intimate Privacy Protection Act, that immunity could be taken away in cases where platforms fail to combat the kinds of harms listed. It does this by creating a duty of care for platforms — a legal term that basically means they are expected to act responsibly — which includes having a “reasonable process” for addressing cyberstalking, intimate privacy violations, and digital forgeries.

Digital forgeries would seem to include AI deepfakes since they’re defined in part as “digital audiovisual material” that was “created, manipulated, or altered to be virtually indistinguishable from an authentic record of the speech, conduct, or appearance of an individual.” The process mandated by the duty of care must include measures to prevent these kinds of privacy violations, a clear way to report them, and a process to remove them within 24 hours.

In statements, both Auchincloss and Hinson said tech platforms shouldn’t be able to use Section 230 as an excuse not to protect users from these harms. “Congress must prevent these corporations from evading responsibility over the sickening spread of malicious deepfakes and digital forgeries on their platforms,” Auchincloss said. Hinson added, “Big Tech companies shouldn’t be able to hide behind Section 230 if they aren’t protecting users from deepfakes and other intimate privacy violations.”

Combating intimate (in other words, sexually explicit) AI deepfakes is one area of AI policy where lawmakers around the country seem motivated to move ahead. While much of AI policy remains in an early stage, the Senate recently managed to pass the DEFIANCE Act, which would let victims of nonconsensual intimate images created by AI pursue civil remedies against those who made them. Several states have enacted laws combating intimate AI deepfakes, particularly when they involve minors. Some companies are on board as well: Microsoft on Tuesday called for Congress to regulate how AI-generated deepfakes could be used for fraud and abuse.

Lawmakers on both sides of the aisle have long wished to narrow Section 230 protection for platforms they fear have abused a legal shield created for the industry when it was made up of much smaller players. But most of the time, Republicans and Democrats can’t agree on how exactly the statute should be changed. One notable exception was when Congress passed FOSTA-SESTA, carving out sex trafficking charges from Section 230 protection.

The duty of care in the Intimate Privacy Protection Act is the same mechanism used in the Kids Online Safety Act, which is expected to pass the Senate on Tuesday with overwhelming support. That might suggest it's becoming a popular way to create new protections on the internet.


By: Lauren Feiner
Source: The Verge