Tag: deepfakes

  • New U.S. Law Targets AI Deepfakes with Take It Down Act

    Key Takeaways

    1. The Take It Down Act makes it illegal to share sexually explicit images without consent, including real photos and AI-generated deepfakes.
    2. Social media companies must remove such content within 48 hours after being notified by the victim.
    3. Deepfake incidents have surged, with 179 cases reported in the first quarter of 2025, 19% more than in all of 2024.
    4. The Electronic Frontier Foundation criticizes the law for lacking protections against misuse and relying on automated filters that may mistakenly flag legal content.
    5. Smaller online platforms face challenges in complying with the law’s 48-hour removal requirement, complicating the verification of illegal content.


    U.S. President Donald Trump has recently enacted a new law that prohibits the sharing of sexually explicit images without consent online. This legislation applies to both real photographs and AI-generated deepfakes.

    New Regulations on Explicit Content

    The Take It Down Act (as reported by Engadget) makes it a crime to “knowingly publish” or threaten to release fake images of individuals in explicit scenarios online. The law requires social media companies to take down such content within 48 hours of receiving a notice from the victim.
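The 48-hour window is a hard deadline counted from the moment a platform receives a valid notice. As a rough illustration of how a platform's moderation queue might track that deadline (a hypothetical helper, not language from the law itself):

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # removal deadline set by the Take It Down Act

def removal_deadline(notice_received_at: datetime) -> datetime:
    """Return the latest time the flagged content may remain online."""
    return notice_received_at + REMOVAL_WINDOW

def is_overdue(notice_received_at: datetime, now: datetime) -> bool:
    """True if the platform has missed the 48-hour removal window."""
    return now > removal_deadline(notice_received_at)

# Example: a notice received at noon (UTC) on June 1 must be acted on by noon on June 3.
notice = datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)
print(removal_deadline(notice))  # 2025-06-03 12:00:00+00:00
```

A real system would also have to record proof of the notice and of the removal, but the deadline arithmetic itself is this simple.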

    Rise in Deepfake Incidents

    According to Surfshark, the occurrence of deepfake incidents has surged in 2025. The report indicates that nearly 180 deepfake cases were reported in the first quarter of 2025, which is 19% higher than the total for all of 2024.

    Of the 179 incidents reported in those first three months, 53 involved explicit content, 48 online fraud, and 40 politics; 38 miscellaneous reports made up the remainder.

    Obligations for Online Platforms

    The new law mandates that online platforms establish a process for removing images upon request. The Electronic Frontier Foundation (EFF), a non-profit that advocates for individual rights on the internet, highlighted that the law has “Major Flaws.”

    The EFF argues that the legislation “lacks important protections against frivolous or bad-faith takedown requests.” They pointed out that many services will depend on “automated filters,” which might result in the wrongful flagging of legal content, including fair-use commentary and news articles.

    The brief 48-hour time frame also poses challenges for smaller platforms, making it difficult “to verify if the content is actually illegal.” The EFF contends that in its current form, the act compels platforms “to actively monitor speech, including that which is currently encrypted.”


  • 15 Teens Get Year Probation for Deepfake Photos of Classmates

    A Spanish court has placed 15 teenagers on a one-year probation for using deepfake technology to create and disseminate indecent images of their female classmates. This incident, highlighted in July 2023, has raised significant concerns about the misuse of deepfakes and the severe impact on victims.

    Incident Overview

    The perpetrators, aged between 13 and 15, were discovered when photoshopped nude images of their classmates began circulating on WhatsApp. Parents, alarmed by the situation, reported it to the police, prompting an investigation. The authorities identified the teenagers responsible for the deepfake images.

    Court’s Verdict

    The Badajoz court in Spain found each teenager guilty of 20 counts of generating child abuse material and 20 counts of infringing on the moral integrity of their victims. The court imposed one year of probation on each teen, along with mandatory gender- and equality-awareness programs. Additionally, they must attend courses on the responsible use of technology.

    Broader Implications

    This case underscores the alarming potential for deepfakes to generate harmful content, especially among young individuals. It also highlights the critical need for education on responsible technology usage and digital literacy, particularly for teenagers. The court’s ruling, which includes specific awareness programs, indicates a growing emphasis on tackling the misuse of technology and its effects on others.

    Under Spanish law, minors under 14 cannot face criminal charges; their cases are instead referred to child protection services, which can require participation in rehabilitation programs.

  • Microsoft Addresses the Security Loophole Behind Taylor Swift’s Nude Deepfakes

    The Misuse of Microsoft’s AI: A Lesson in Caution

    The use of artificial intelligence (AI) continues to grow rapidly, with many companies integrating the technology into their services. Microsoft, a prominent player in the field, aims to enhance user experiences through generative AI that can create text and images. However, caution is essential, as improper use of AI can lead to significant harm. Microsoft recently faced this firsthand when one of its AI image-generation tools was misused to create inappropriate content involving celebrities, including Taylor Swift.

    Microsoft’s Swift Action Against Taylor Swift Deepfake Exploits

    Artificial intelligence has opened up new possibilities in photo editing, allowing users to create any photo they desire without needing extensive Photoshop skills. However, not all individuals use this technology for innocent purposes. In the case of Microsoft’s Azure Face API, some malicious users exploited its capabilities to generate nude photos and videos of celebrities, including Taylor Swift. Recognizing the severity of the situation, Microsoft took swift action to address the security vulnerability that enabled the creation of such deepfake content.

    Fixing the Security Vulnerability

    Microsoft discovered a security loophole that allowed attackers to manipulate certain API parameters and swap one person’s face for another’s. To rectify the issue, Microsoft promptly released an update that blocks invalid parameters in the API. While this step is commendable, it alone will not resolve the escalating deepfake crisis.
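Microsoft has not published the details of its fix, but rejecting any request whose parameters fall outside an explicit allow-list is the standard mitigation for this class of loophole. A minimal sketch of the idea, using hypothetical parameter names rather than the actual Azure Face API surface:

```python
# Hypothetical server-side validation: only parameters on an explicit
# allow-list, with known-good values, are accepted; everything else is rejected.
ALLOWED_PARAMS = {
    "detection_model": {"detection_01", "detection_03"},
    "return_face_id": {"true", "false"},
}

def validate_request(params: dict) -> list:
    """Return a list of validation errors (an empty list means the request is OK)."""
    errors = []
    for name, value in params.items():
        if name not in ALLOWED_PARAMS:
            errors.append(f"unknown parameter: {name}")
        elif value not in ALLOWED_PARAMS[name]:
            errors.append(f"invalid value for {name}: {value}")
    return errors

print(validate_request({"detection_model": "detection_01"}))  # []
print(validate_request({"swap_target": "celebrity_face"}))    # ['unknown parameter: swap_target']
```

Validating against an allow-list (rather than a deny-list of known-bad inputs) is the safer default, since it fails closed when attackers find parameter combinations the developers never anticipated.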

    The Growing Threat of Deepfake Content

    Advancements in artificial intelligence and technology have made it incredibly easy to create deepfake content. These manipulated photos and videos are frequently used to spread fake news or conduct smear campaigns. In this specific instance, Taylor Swift became a victim of such misuse.

    Tech Companies Taking Action

    Fortunately, tech companies are actively working to combat these problems. Microsoft has patched the vulnerability in its Azure Face API, making this route to deepfake creation far harder to exploit. In addition, X, under Elon Musk’s leadership, temporarily blocked searches for Taylor Swift to slow the spread of the videos on the platform. Sharing explicit images, deepfake or not, can carry serious consequences. For ethical and legal reasons alike, avoid creating, sharing, or otherwise contributing to the dissemination of such content.