Auchincloss, Maloy introduce bipartisan bill carving out bots & deepfakes from social media platforms' Section 230 liability shield
WASHINGTON, D.C. – Today, U.S. Representatives Jake Auchincloss (D, MA-04) and Celeste Maloy (R, UT-02) introduced the Deepfake Liability Act to tackle the steep rise of deepfake pornography created without the consent of the person depicted. 98% of online deepfake imagery is pornographic, and 99% of that content targets women. This has led to the rise of widely available ‘nudification apps’ used to create convincing deepfakes of female teenagers.
Section 230’s liability shield for online platforms removes their incentive to ensure users’ safety or to address harmful content on their platforms. The Deepfake Liability Act conditions the applicability of Section 230’s liability shield on a platform implementing a duty of care: a set of basic steps requiring companies to respond to complaints about cyberstalking and abusive deepfakes. It also amends the definition of “information content provider” in Section 230 to clarify that AI-generated content is not covered by Section 230 protections.
This legislation implements the notice and removal provisions of the TAKE IT DOWN Act, which became law this Congress. The bill also requires that this duty of care include:
- A process to prevent, to the extent practicable, cyberstalking and abusive deepfakes.
- A clear and accessible process to report instances of these harms, as well as a process for investigating them.
- A process to remove information that the provider knows or has reason to know constitutes one of these harms.
- Data logging requirements to ensure victims’ access to data for legal proceedings.
- A process for the removal or blocking of content determined to be unlawful by a court.
“AI shouldn’t have special privileges & immunities that journalists don’t get,” said Congressman Jake Auchincloss. “Using bots or deepfakes to violate or stalk another person is reprehensible, and it needs to be a CEO-level problem for the trillion-dollar social media corporations that platform it. Congress needs to get ahead of this growing problem, instead of being left in the dust like we were with social media.”
“Abusive deepfakes and cyberstalking are harming people across the country, and victims deserve real help. Our bill creates a straightforward duty of care and a reliable process to remove harmful content when victims ask for help,” said Congresswoman Celeste Maloy. “Companies that take this seriously will keep their protections under the law. Those that do nothing will be held accountable.”
“The time is now to reform Section 230. For too long, online platforms have been shielded from liability for online abuse that we know silences victims and ruins lives. Nearly every industry owes basic duties to prevent foreseeable harm; with this bill, so will the tech industry. This bill imposes a well-defined duty of care on online platforms to prevent, investigate, and remove cyberstalking, nonconsensual intimate images, and digital forgeries,” said Danielle Keats Citron, Vice President of the Cyber Civil Rights Initiative. “The bill also corrects an overbroad judicial interpretation of Section 230 that lets platforms solicit or encourage online activity without accountability. With this bill, online intermediaries will be responsible not only for online speech activity they helped create or develop but also for online speech activity that they solicit or encourage. This is the bill that we need to protect civil rights and liberties online.”