New U.S. Law Targets AI Deepfakes with Take It Down Act

Key Takeaways

1. The Take It Down Act makes it illegal to share sexually explicit images without consent, including real photos and AI-generated deepfakes.
2. Social media companies must remove such content within 48 hours of being notified by the victim.
3. Deepfake incidents have risen sharply, with nearly 180 cases reported in the first quarter of 2025, 19% more than in all of 2024.
4. The Electronic Frontier Foundation criticizes the law for lacking protections against misuse and relying on automated filters that may mistakenly flag legal content.
5. Smaller online platforms face challenges complying with the law’s 48-hour removal requirement, which makes it harder to verify whether flagged content is actually illegal.


U.S. President Donald Trump has recently enacted a new law that prohibits the sharing of sexually explicit images without consent online. This legislation applies to both real photographs and AI-generated deepfakes.

New Regulations on Explicit Content

The Take It Down Act (as reported by Engadget) makes it a crime to “knowingly publish” or threaten to publish sexually explicit images of individuals online without their consent, whether the images are real or AI-generated. The law requires social media companies to take down such content within 48 hours of receiving a notice from the victim.
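As a purely illustrative sketch of that 48-hour window (nothing here is specified by the statute, and every name below is hypothetical), the obligation amounts to attaching a removal deadline to each notice a platform receives:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical illustration of the 48-hour removal window described above.
# The statute defines the obligation; this does not reflect any actual
# platform's implementation.

REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownNotice:
    content_id: str
    reported_by_victim: bool
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def removal_deadline(self) -> datetime:
        # Content must be taken down within 48 hours of the notice.
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now > self.removal_deadline

# Example: a notice received now must be acted on within two days.
notice = TakedownNotice(content_id="img-12345", reported_by_victim=True)
print(notice.removal_deadline)
```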

Rise in Deepfake Incidents

According to Surfshark, deepfake incidents have surged in 2025: nearly 180 cases were reported in the first quarter of the year, 19% more than were reported in all of 2024.

Of the 179 incidents reported in those first three months, 53 were sexually explicit, 48 involved online fraud, 40 were political, and the remaining 38 were miscellaneous.
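As a rough back-of-the-envelope check on these figures (the category counts and the 19% comparison come from the report; the implied 2024 total is a derived estimate, not a reported number):

```python
# Surfshark's Q1 2025 deepfake figures as reported above.
q1_2025_by_category = {
    "explicit": 53,
    "online fraud": 48,
    "political": 40,
    "miscellaneous": 38,
}

q1_2025_total = sum(q1_2025_by_category.values())
print(q1_2025_total)  # 179, matching the reported quarterly total

# If Q1 2025 is 19% higher than all of 2024, the 2024 total would be roughly:
implied_2024_total = q1_2025_total / 1.19
print(round(implied_2024_total))  # ~150 incidents (estimate, not reported)
```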

Obligations for Online Platforms

The new law mandates that online platforms establish a process for removing images upon request. The Electronic Frontier Foundation (EFF), a non-profit that advocates for individual rights on the internet, argues that the law has “major flaws.”

The EFF argues that the legislation “lacks important protections against frivolous or bad-faith takedown requests.” It also points out that many services will depend on “automated filters,” which can wrongly flag lawful content, including fair-use commentary and news reporting.

The brief 48-hour time frame also poses challenges for smaller platforms, making it difficult “to verify if the content is actually illegal.” The EFF contends that in its current form, the act compels platforms “to actively monitor speech, including that which is currently encrypted.”

