Tag: cybersecurity

  • HONOR Launches Global AI Deepfake Detection in April 2025

    Key Takeaways

    1. HONOR’s AI Deepfake Detection feature will launch globally in April 2025 to help users identify manipulated audio and video content in real time.
    2. Deepfake technology is a growing concern, with incidents reported every five minutes and 59% of people struggling to differentiate between human and AI-generated content.
    3. The detection system uses advanced AI algorithms to find subtle inconsistencies in media, alerting users when altered content is detected.
    4. There has been a significant rise in deepfake attacks, with digital document forgeries increasing by 244% and some industries seeing deepfake incidents surge by as much as 1,520%.
    5. Experts, including Marco Kamiya from UNIDO, praise the technology as a vital security feature for mobile devices to combat digital manipulation.


    HONOR has revealed that its AI Deepfake Detection feature will launch globally in April 2025. This initiative is designed to assist users in recognizing manipulated audio and video content in real time.

    Growing Concern of Deepfake Technology

    Deepfake technology, which employs AI to create highly convincing but fake media, is becoming an increasing worry for both individuals and businesses. According to the Entrust Cybersecurity Institute, in 2024, a deepfake incident occurred every five minutes. Deloitte’s 2024 Connected Consumer Study also discovered that 59% of participants found it difficult to distinguish between human-created content and that generated by AI. Furthermore, 84% of those using generative AI expressed a desire for clear labels on AI-produced content.

    Advanced Detection Features

    HONOR first showcased its AI Deepfake Detection technology at the IFA 2024 event. This system utilizes sophisticated AI algorithms that detect subtle inconsistencies that are often unnoticed by the human eye. These inconsistencies may include pixel-level errors, problems with border blending, irregularities across video frames, and unusual facial traits like face-to-ear proportions or odd hairstyle features. When the system detects altered content, it issues an alert, allowing users to avoid potential risks.
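    The frame-to-frame inconsistency check described above can be illustrated with a minimal sketch. The toy scorer below (an illustration only, not HONOR's actual algorithm, whose details are unpublished) flags transitions whose average pixel change deviates sharply from the clip's own norm:

    ```python
    from statistics import mean, pstdev

    def frame_inconsistency_scores(frames):
        """Z-score each transition between consecutive frames by its mean
        absolute pixel change relative to the clip's typical change."""
        diffs = []
        for a, b in zip(frames, frames[1:]):
            total = sum(abs(pa - pb) for pa, pb in zip(a, b))
            diffs.append(total / len(a))  # mean absolute change per pixel
        mu, sigma = mean(diffs), pstdev(diffs) or 1e-9
        return [(d - mu) / sigma for d in diffs]

    def flag_suspicious(frames, threshold=3.0):
        """Indices of transitions whose change deviates sharply from the rest."""
        return [i for i, z in enumerate(frame_inconsistency_scores(frames))
                if abs(z) > threshold]

    # A smoothly changing clip with one tampered frame (index 25):
    frames = [[i] * 64 for i in range(50)]
    frames[25] = [200] * 64
    print(flag_suspicious(frames))  # the transitions into and out of frame 25
    ```

    A production detector would combine many such signals (blending artifacts, facial geometry, texture statistics) with learned models, but the statistical idea is the same: flag whatever deviates from the clip's own baseline.
    
    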

    Increasing Incidents of Deepfake Attacks

    This global launch comes amid a rising number of deepfake attacks. Between 2023 and 2024, digital document forgeries surged by 244%. Industries like iGaming, fintech, and crypto have faced significant challenges, with deepfake occurrences increasing by 1,520%, 533%, and 217% year over year, respectively.

    HONOR’s efforts are part of a broader industry movement to tackle deepfake issues. Groups such as the Coalition for Content Provenance and Authenticity (C2PA), established by Adobe, Arm, Intel, Microsoft, and Truepic, are developing technical standards to confirm the authenticity of digital content. Microsoft is also rolling out AI tools to help prevent deepfake misuse, including an automatic face-blurring feature for images uploaded to Copilot. Additionally, Qualcomm’s Snapdragon X Elite NPU enables local deepfake detection using McAfee’s AI models, preserving user privacy.

    Expert Praise for Deepfake Detection

    Marco Kamiya from the United Nations Industrial Development Organization (UNIDO) commended this technology, stating that AI Deepfake Detection is an essential security feature for mobile devices and can protect users from digital manipulation.


  • Verizon and AT&T Hit by Major Chinese Cyberattack

    A Chinese state-backed hacking group, referred to as Salt Typhoon, has allegedly infiltrated the systems of prominent U.S. broadband companies, including Verizon, AT&T, and Lumen Technologies. The intrusion is reported to have lasted several months, and authorities are treating it as a major national security concern. The hackers may have accessed systems used for legal wiretapping requests, raising alarm about the potential compromise of U.S. intelligence and communication data.

    Ongoing Investigations

    Although the breach was only recently uncovered, the full scope of it remains under investigation by U.S. government agencies and private cybersecurity companies. Investigators think that the hackers focused on network infrastructure to capture internet traffic, which could impact millions of Americans. There are also signs that providers outside the U.S. might have faced similar threats.

    Verizon’s Response

    In reaction to the breach, Verizon has established a “war room” at its facility in Ashburn, Virginia, collaborating with the FBI, Microsoft, and Google’s Mandiant—a cybersecurity firm that specializes in threat detection, incident response, and security consulting—to evaluate the situation. While U.S. officials have not yet verified whether the attackers accessed lists of surveillance targets or their communications, the severity of the incident warranted a briefing for President Joe Biden, according to reports.

    Broader Implications

    The Salt Typhoon operation, which has been active since 2020, is part of a wider Chinese espionage campaign, with signs indicating possible involvement from China’s Ministry of State Security. The FBI and U.S. intelligence agencies continue to probe the extent of the breach and what sensitive data may have been taken. Microsoft, along with other cybersecurity companies, is helping to assess the level of data compromise.

  • Your iPhone could be infected with a virus

    In an unexpected development, malware that originated on Android has turned out to target the iOS operating system as well. This dangerous trojan can leak your personal information online or take over your bank account, so read on carefully and let’s take a look at what we can do against these emerging threats.

    Dangerous virus affects everyone

    Based on the GoldDigger Android trojan, the GoldPickaxe malware now targets iOS as well, putting users of both operating systems at risk. Users need to take precautions against such threats. Group-IB’s research has confirmed that this newly emerged malware harvests facial recognition data and steals bank account credentials.

    GoldPickaxe virus

    The GoldPickaxe.iOS malware can use social engineering to gain access to your bank accounts, which can end badly for victims. The new threat has so far been found mainly in regions such as Vietnam and Thailand, and it is expected to spread to other regions, including the US, in the future. Group-IB researchers continue to investigate GoldPickaxe and say they have sent reports to the relevant brands.

    Whether you use Android or iOS, we recommend that you do not install applications or files from unknown sources. Newly emerged malware like this can lead to your information being stolen, so take steps to protect your smartphone.

  • Romania Experiences Widespread Ransomware Attack Targeting 18 Hospitals

    A recent ransomware attack has caused significant disruption to 18 hospitals across Romania, putting a halt to their operations. The attack targeted the Hipocrate Information System (HIS), which is essential for managing patient care and medical records. As a result, the system is currently down, leaving hospitals struggling to maintain their usual level of care.

    The patient care and medical records are currently unavailable

    The attack occurred overnight between February 11 and 12, 2024, leading to encrypted databases and files. The Romanian Ministry of Health has acknowledged the severity of the situation and is actively working on a solution. Efforts to recover the affected systems are in progress, with IT and cybersecurity experts from the National Cyber Security Directorate (DNSC) leading the charge.

    The impact of the ransomware attack is widespread, affecting a variety of medical facilities including regional hospitals and cancer treatment centers. To prevent further damage, the Ministry of Health has enhanced security measures for other hospitals that were not affected by the attack.

    Details about the attack and compromised data remain unclear

    Currently, the details about the ransomware group behind the attack or the specifics of the data compromised are not clear. The provider of the HIS system, RSC, has not yet made a public statement regarding the incident.

    Incidents such as these expose the vulnerability of healthcare systems to cyberattacks and underscore the importance of robust cybersecurity measures to protect sensitive patient data and keep critical healthcare services running. This is one sector that can’t afford serious downtime, and for obvious reasons!

  • Play Protect’s real-time update effectively combats financial fraud

    Since its launch, Google Play Protect has scanned installed apps for malware, but this still does not guarantee that customers’ banking apps are 100% safe. Hackers merely need to gain access to the one-time password (OTP) a user receives through SMS, enter the correct verification code, and they can easily access the victim’s bank account.

    Fraud protection on Google Play Protect

    Play Protect will now check the permissions an app requests, focusing on the ones hackers most frequently abuse: RECEIVE_SMS, READ_SMS, BIND_Notifications, and Accessibility. This is a brand-new capability that Google has revealed for Play Protect. With the SMS and notification permissions, hackers can view incoming SMS messages and notifications, and with the accessibility permission they can even operate the device without the user’s awareness.

    Fraud protection on Play Protect

    Since this functionality was developed in collaboration with the Cyber Security Agency of Singapore, Google is making it available only in Singapore for now, so users there will be the first to receive this fraud prevention tool. The feature continuously keeps an eye on what permissions apps are using in the background.

    Google states that this allows users to use banking apps safely. Play Protect’s fraud protection kicks in when a user installs a third-party application, such as an APK file downloaded from the internet. If the app requests all four of these permissions, a report will be provided to the user.

    Checking these four permissions, RECEIVE_SMS, READ_SMS, BIND_Notifications, and Accessibility, is a really smart move. It stops hackers from spying on the SMS messages and notifications coming into the user’s phone, limiting the data handed to the app so hackers can’t break into the user’s bank account.
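    The four-permission check described above can be sketched as a simple classifier over an app’s requested permissions. This is an illustration, not Google’s implementation; the manifest identifiers for the notification-listener and accessibility permissions are the standard Android names, assumed here because the article abbreviates them:

    ```python
    # The four permissions the article says Play Protect inspects.
    RISKY_PERMISSIONS = {
        "android.permission.RECEIVE_SMS",
        "android.permission.READ_SMS",
        "android.permission.BIND_NOTIFICATION_LISTENER_SERVICE",
        "android.permission.BIND_ACCESSIBILITY_SERVICE",
    }

    def assess_app(requested_permissions):
        """Return which risky permissions an app requests, and whether it
        requests all four (the condition that triggers a user warning)."""
        hits = RISKY_PERMISSIONS & set(requested_permissions)
        return {"risky": sorted(hits), "warn_user": hits == RISKY_PERMISSIONS}

    # An app asking for everything trips the warning; a benign one does not.
    sideloaded = list(RISKY_PERMISSIONS) + ["android.permission.CAMERA"]
    print(assess_app(sideloaded)["warn_user"])   # True
    print(assess_app(["android.permission.CAMERA"])["warn_user"])  # False
    ```

    The real check presumably also weighs the install source (sideloaded APK vs. Play Store) before warning, as the article notes.
    
    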

  • Inappropriate Content Generation Exploit Uncovered by Microsoft Staffer in OpenAI’s DALL-E 3

    Shane Jones, a manager in Microsoft’s software engineering department, recently discovered a vulnerability in OpenAI’s DALL-E 3 model, which generates images from text prompts. The flaw allows the model to bypass its AI guardrails and produce inappropriate NSFW (Not Safe for Work) content. Jones stumbled upon the issue during independent research in December and reported it to both Microsoft and OpenAI. However, instead of receiving a satisfactory response, he was met with a gag order from Microsoft prohibiting him from publicly disclosing the vulnerability.

    Concerned about the potential security risks, Jones decided to share the information publicly despite Microsoft’s directive. He took to LinkedIn to write an open letter urging OpenAI to temporarily suspend the DALL-E 3 model until the flaw could be addressed; Microsoft responded by instructing him to remove the post without explanation, downplaying the severity of the vulnerability and questioning its success rate.

    Despite his efforts to communicate internally with Microsoft about the issue, Jones received no response. Frustrated by the lack of action, he made the decision to disclose the vulnerability to the media and relevant authorities. Jones also linked the vulnerability to recent incidents of AI-generated inappropriate content featuring Taylor Swift, which were allegedly created using Microsoft’s Designer AI function, which relies on the DALL-E 3 model.

    Microsoft’s legal department and senior executives warned Jones to stop disclosing information externally, but the vulnerability remained unpatched. As media outlets like Engadget sought an official response from Microsoft, the company finally acknowledged the concerns raised by Jones. Microsoft assured the public that it would address the issues and work towards fixing the vulnerabilities.


    It is crucial for organizations to take vulnerabilities seriously and prioritize their resolution to ensure the security and integrity of their products and services. While the exact nature and impact of this vulnerability are not explicitly stated, it is clear that Jones’s concerns should be acknowledged and addressed promptly. The incident also highlights the importance of responsible disclosure and effective communication between researchers and companies to mitigate potential security risks.

  • OnePlus Collaborates with App Defense Alliance to Enhance User Security

    OnePlus, a leading smartphone manufacturer, has announced its collaboration with the App Defense Alliance (ADA), making it the first smartphone maker to join this important security group. This strategic move underlines OnePlus’s unwavering commitment to safeguarding user data and enhancing the overall security of their devices.

    Joining Forces for a Safer Google Play Store

    The App Defense Alliance was established in November 2019, with industry giants Google, ESET, Lookout, and Zimperium coming together to ensure the safety of the Google Play Store. The primary objective of this alliance is to conduct thorough scans of applications for malware before they are made available on the store. By pooling the expertise and technology of these renowned companies, the ADA aims to efficiently identify and prevent harmful apps from reaching users’ devices.

    OnePlus’s Dedication to Security

    OnePlus has consistently prioritized security in its product offerings. Their latest flagship device, the OnePlus 12, powered by OxygenOS 14, is equipped with an array of cutting-edge security features. Notable features include the Device Security Engine 3.0, an enhanced Security Center, Strong Box for chip-level data protection, Auto Pixelate 2.0, and photo permission management. These features work in harmony to provide users with comprehensive protection against potential threats. Moreover, OnePlus’s Intelligent Shield program has proven effective in identifying and blocking malicious applications.

    Strengthening Efforts in App Security

    Kinder Liu, the head of OnePlus, emphasized the significance of safeguarding user information as an integral part of the company’s mission. By partnering with the ADA, OnePlus aims to further fortify its endeavors in ensuring the security of apps against online dangers. The collaboration will enable OnePlus to leverage the collective knowledge and expertise of the ADA members, thereby enhancing the security measures implemented in their devices and applications.

    Conclusion

    OnePlus’s association with the App Defense Alliance marks a significant milestone for the company and the entire smartphone industry. By joining forces with industry leaders in app security, OnePlus is demonstrating its steadfast commitment to protecting user data and providing a secure user experience. As the first smartphone maker to enter the ADA, OnePlus is setting a precedent for others to prioritize security and work collaboratively towards a safer digital ecosystem.

  • Microsoft Addresses the Security Loophole Behind Taylor Swift’s Nude Deepfakes

    The Misuse of Microsoft’s AI: A Lesson in Caution

    The use of artificial intelligence (AI) continues to grow rapidly, with many companies integrating this technology into their services. Microsoft, a prominent player in the field, aims to enhance user experiences through generative AI technologies that can create text and images. However, it is crucial to exercise caution with AI, as improper use can lead to significant issues. Microsoft recently faced challenges when a tool initially used for creating AI-generated images was misused to create inappropriate content involving celebrities, including Taylor Swift.

    Microsoft’s Swift Action Against Taylor Swift Deepfake Exploits

    Artificial intelligence has opened up new possibilities in photo editing, allowing users to create any photo they desire without needing extensive Photoshop skills. However, not all individuals use this technology for innocent purposes. In the case of Microsoft’s Azure Face API, some malicious users exploited its capabilities to generate nude photos and videos of celebrities, including Taylor Swift. Recognizing the severity of the situation, Microsoft took swift action to address the security vulnerability that enabled the creation of such deepfake content.

    Fixing the Security Vulnerability

    Microsoft discovered a security loophole that allowed attackers to manipulate certain API parameters, enabling them to replace Taylor Swift’s face with that of another person. To rectify this issue, Microsoft promptly released an update that blocks the use of invalid parameters in the API. While this step is commendable, it is important to note that it alone may not effectively address the escalating deepfake crisis.
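    Microsoft has not published the details of its fix, but blocking invalid parameters is typically done with an allow-list check along these lines. The parameter names below are purely illustrative assumptions, not the Face API’s actual schema:

    ```python
    # Hypothetical allow-list of parameters a request may carry.
    ALLOWED_PARAMS = {"detection_model", "recognition_model", "return_face_id"}

    def validate_request(params):
        """Reject any request carrying parameters outside the documented
        allow-list, so undocumented knobs cannot be abused."""
        unknown = set(params) - ALLOWED_PARAMS
        if unknown:
            raise ValueError(f"invalid parameters rejected: {sorted(unknown)}")
        return params

    validate_request({"detection_model": "detection_03"})        # accepted
    # validate_request({"swap_face_target": "..."}) would raise ValueError
    ```

    Allow-listing (reject anything not explicitly documented) is generally safer than deny-listing known-bad parameters, since it also blocks abuse paths the vendor has not anticipated yet.
    
    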

    The Growing Threat of Deepfake Content

    Advancements in artificial intelligence and technology have made it incredibly easy to create deepfake content. These manipulated photos and videos are frequently used to spread fake news or conduct smear campaigns. In this specific instance, Taylor Swift became a victim of such misuse.

    Tech Companies Taking Action

    Fortunately, tech companies are actively working to combat these problems. Microsoft has fixed the vulnerability in its Azure Face API so that such deepfake content can no longer be created this way. Additionally, X, under Elon Musk’s leadership, temporarily restricted searches for Taylor Swift to prevent these videos from spreading on social media. It is crucial to recognize that sharing explicit images, whether deepfake or not, can have serious consequences. For ethical reasons and more, it is advisable to avoid creating, sharing, or contributing to the dissemination of such content.