Category: Artificial intelligence

  • AI Showdown: Grok Impresses Mrwhosetheboss, ChatGPT Triumphs

    AI Showdown: Grok Impresses Mrwhosetheboss, ChatGPT Triumphs

    Key Takeaways

    1. Grok performed well initially but struggled before finishing second to ChatGPT.
    2. ChatGPT and Gemini had an advantage with a video generation feature not available to other models.
    3. In a real-world problem-solving task, Grok gave the most direct answer, while Perplexity became confused.
    4. In the cake-making challenge, Grok correctly identified the odd item out, while the other models misidentified it.
    5. All models experienced “hallucinations,” confidently stating incorrect information during various tests.


    In a recent video, Mrwhosetheboss put several AI models to the test: Grok (Grok 3), Gemini (2.5 Pro), ChatGPT (GPT-4o), and Perplexity (Sonar Pro). Throughout the video, he repeatedly voiced his admiration for Grok's performance. Grok started strongly, stumbled in the middle of the comparison, then recovered and ultimately finished in second place behind ChatGPT. It is worth noting that ChatGPT and Gemini had an advantage the other models lacked: video generation.

    Testing Real-World Problem Solving

    To start the evaluation, Mrwhosetheboss examined the AI models’ ability to solve real-world problems. He presented each model with the following prompt: “I drive a Honda Civic 2017, how many of the Aerolite 29″ Hard Shell (79x58x31cm) suitcases would I be able to fit in the boot?” Grok gave the most direct answer, stating “2”. ChatGPT and Gemini suggested that theoretically, it could fit 3, but realistically, it would be 2. On the other hand, Perplexity got confused and, after doing simple math, mistakenly concluded that it could fit “3 or 4” without considering the suitcase’s shape.
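
    For a sense of the arithmetic behind those answers, here is a minimal sketch of the volume-only estimate. The boot capacity is an assumption (roughly 420 litres is commonly quoted for the 2017 Civic hatchback, and the video does not state a figure); the point is simply that dividing volumes overestimates how much rigid luggage will actually fit.

        # Volume-only estimate; the boot figure is an assumption, not from the video.
        length_cm, width_cm, depth_cm = 79, 58, 31                  # Aerolite 29" hard shell
        suitcase_litres = length_cm * width_cm * depth_cm / 1000    # ~142 L per case

        boot_litres = 420                                           # assumed boot volume

        print(f"One suitcase: {suitcase_litres:.0f} L")
        print(f"Naive estimate: {boot_litres / suitcase_litres:.1f} suitcases")  # ~3.0

        # A rigid 79 x 58 x 31 cm shell cannot fill the boot's irregular corners,
        # so the realistic answer is closer to 2, the nuance the better models caught.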

    Challenging Cake-Making Skills

    Next, Mrwhosetheboss didn’t hold back as he asked the chatbots for cake-making advice. He also included an image of five items, one of which was out of place for baking: a jar of dried porcini mushrooms. Most of the models fell for the ruse. ChatGPT misidentified it as a jar of ground mixed spice, Gemini thought it was crispy fried onions, and Perplexity guessed it was instant coffee. Grok, however, correctly recognized it as a jar of dried mushrooms from Waitrose.

    Universal Hallucinations

    Continuing with the testing, he challenged the AIs with math, product suggestions, accounting, language translations, logical reasoning, and more. A common issue across all the models was hallucination; each of them showed some degree of it at various points in the video, confidently discussing things that simply weren’t real. By the end of the video, ChatGPT took the top spot, with Grok finishing second.

    Source:
    Link

  • Microsoft Shifts Focus to AI Agents Amid Mass Layoffs

    Microsoft Shifts Focus to AI Agents Amid Mass Layoffs

    Key Takeaways

    1. Microsoft is laying off 9,000 workers across various departments, framed as organizational adjustments.
    2. CEO Satya Nadella revealed that about 30% of Microsoft’s code is now generated by AI, indicating a push for AI integration.
    3. There are concerns about employee morale, with some expressing frustration over job cuts despite the company’s profitability.
    4. The memo from management suggested that the layoffs aim to increase “agility and effectiveness,” hinting at a potential replacement of human roles with AI.
    5. While Microsoft has not officially linked AI agents to the layoffs, the trend of using AI in development is evident in projects and internal initiatives.


    Microsoft is currently undergoing another significant wave of layoffs, letting go of 9,000 workers across different departments. Although the company’s leadership frames these layoffs as organizational adjustments, a recent report quoting a developer familiar with the situation suggests that Microsoft aims to replace human employees with artificial intelligence.

    AI Integration on the Rise

    At the LlamaCon AI developer event held by Meta in May, Microsoft CEO Satya Nadella noted that around 30% of the code produced by the company is now generated by AI. It appears that Microsoft is accelerating its AI integration efforts, which may be linked to the recent job cuts. An Engadget report quotes a developer at Microsoft, who received a memo from Phil Spencer but was not among those laid off, indicating that management is “trying their hardest to replace as many jobs as they can with AI agents.”

    Concerns About Employee Morale

    While the report didn’t go into much detail about this assertion, the memo did cite the need to “increase agility and effectiveness” as one of the reasons for the layoffs. Properly deployed AI agents can greatly streamline many processes, which lends credibility to the developer’s claim that Microsoft is striving to replace human roles with AI technology. This trend isn’t isolated: Amazon has also recently told its staff that AI agents will take over certain human job functions.

    The developer also expressed frustration over the memo and the overall state of development at Microsoft. “I’m personally really angry that Phil’s email to us highlighted that this was the most profitable year ever for Xbox while also announcing layoffs. I wasn’t clear on what part of that I was meant to feel proud about,” the developer reportedly told Engadget. They further mentioned that employees are dissatisfied with the product quality and that there’s a lot of motivational talk being used to boost morale.

    The Future of AI at Microsoft

    It’s important to highlight that Microsoft has not directly connected AI agents to the layoffs. However, given the prevalence of AI-generated code, the use of generative AI in projects like Call of Duty: Black Ops 6, and other internal AI initiatives, it seems that AI agents are a key objective for the company.

    Source:
    Link

  • AI Prefers Fresh Content: Study Shows Recency Boosts Visibility

    AI Prefers Fresh Content: Study Shows Recency Boosts Visibility

    Key Takeaways

    1. Fresh content is becoming crucial for SEO as large language models (LLMs) favor newer articles.
    2. Changing publication dates may positively impact Google rankings, contrary to previous beliefs.
    3. Content recency’s importance varies by topic; rapidly changing fields like finance prioritize the latest information.
    4. Older, high-quality content can still be effective if regularly updated and maintained.
    5. Trustworthiness and relevance are important factors considered by LLMs, alongside content freshness.


    In the age of ChatGPT, Perplexity, and Google AI Overviews, a traditional SEO factor is making a comeback: content freshness. A new study from Seer Interactive indicates that large language models (LLMs) prefer newer content over older pieces. This recency bias has important consequences for anyone shaping a content strategy.

    The Myth of Publication Dates

    For many years, the idea that merely changing the publication date could lift a page’s Google ranking was dismissed as an SEO myth. In the era of LLMs, however, there may be some validity to it. A recent analysis looked at over 5,000 URLs from the log files of AI tools including ChatGPT, Perplexity, and Google AI Overviews, examining the relationship between publication dates and visibility. The findings are striking: 89% of the content surfaced by LLMs was published between 2023 and 2025, whereas only 6% of AI interactions involved material older than six years.
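
    The study’s raw data is not public, but the aggregation it describes boils down to bucketing cited URLs by publication year and measuring each bucket’s share. A purely hypothetical sketch (made-up data, assumed bucket boundaries) might look like this:

        # Hypothetical sketch: group cited URLs by publication year and report
        # each freshness bucket's share of citations. The data below is made up.
        from collections import Counter

        citations = [                          # (url, publication_year), illustrative only
            ("https://example.com/a", 2025),
            ("https://example.com/b", 2024),
            ("https://example.com/c", 2023),
            ("https://example.com/d", 2018),
        ]

        def bucket(year, current_year=2025):
            if year >= 2023:
                return "published 2023-2025"
            if current_year - year > 6:
                return "older than six years"
            return "in between"

        shares = Counter(bucket(year) for _, year in citations)
        total = sum(shares.values())
        for label, count in shares.items():
            print(f"{label}: {100 * count / total:.0f}% of citations")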

    Topic Variability in Content Recency

    The impact of recency differs depending on the subject matter. In rapidly changing areas like finance, there is a high demand for the latest information – content that was published before 2020 is seldom shown. The travel sector also has a tendency to highlight more current pieces. On the other hand, more stable fields, such as energy or DIY topics like building a patio, still allow for older, well-crafted articles to be featured by AI systems.

    In fast-evolving industries like finance or technology, being visible means needing regular updates and a continuous flow of new content. This is where the recency of content becomes very important – new or recently updated pieces usually rank better in AI systems. However, in more stable areas, having lasting evergreen content can still work well, as long as it continues to be of high quality and relevant.

    Enhancing Visibility of Older Content

    Older, high-quality articles still have worth – well-thought-out updates can greatly improve their visibility. To keep a strong presence in AI overviews and chatbot results, it is crucial to continuously update content while maintaining its depth and detail. Although large language models generally prefer recent pieces, they also consider factors like trustworthiness and relevance.

    Source:
    Link

  • AI Disrupts Entry-Level Job Market in the UK

    AI Disrupts Entry-Level Job Market in the UK

    Key Takeaways

    1. AI growth is significantly impacting the job market, leading to concerns about job replacement for entry-level positions.
    2. There has been a 32% decrease in entry-level job openings in the UK since the introduction of ChatGPT in 2022.
    3. Entry-level positions are expected to shrink to 25% of the overall UK job market by 2025.
    4. Major companies like Microsoft and Google are rapidly integrating AI into their operations, increasing reliance on technology.
    5. Experts warn that governments need to act now to manage AI regulations and mitigate potential unemployment during this technological shift.


    The rapid growth of AI in recent years is beginning to reshape entire industries. At the same time, experts have raised concerns that businesses will increasingly replace human labor with AI. Dario Amodei, the CEO of Anthropic, for example, has cautioned that AI may lead to a 50% reduction in entry-level positions within the next five years.

    Job Market Shift

    Amodei’s forecasts now appear to be alarmingly correct, as Adzuna, a job search platform based in the UK, has noted a significant decline in job openings for entry-level positions. According to their research, ever since ChatGPT was introduced in 2022, the UK job market has experienced a 32% decrease in new “graduate jobs, apprenticeships, internships, and junior positions that don’t require a degree,” as reported by The Guardian.

    Decline in Opportunities

    This decline has also shrunk entry-level roles to just 25% of the overall UK job market in 2025, a drop of roughly four percentage points from 2022.

    Adzuna’s results seem to align with findings from Indeed. The Guardian cites Indeed’s data, stating that university graduates in the UK are facing the “most challenging job market since 2018,” with “advertised roles for recent grads having decreased by 33% in mid-June compared to last year.”

    The Unstoppable AI

    The ongoing advancement of AI is unlikely to slow down, as companies of all sizes are quickly adapting to integrate more AI into their teams. For instance, Microsoft is already creating 20-30% of its code using AI, while Google is producing even more.

    In light of this, Dario Amodei’s statement that “you can’t just step in front of the train and stop it” is quite relevant. According to Amodei, the best we can do is “steer the train—steer it 10 degrees in a different direction from where it was going. That’s possible, but we need to act now.”

    It is uncertain how governments globally will manage AI regulations to ensure that the temporary surge in unemployment that often accompanies the beginning of a new “Industrial Revolution” doesn’t inflict as much pain this time around.

    Source:
    Link

  • Humanoid Robots Compete in Unique Soccer Tournament Showcase

    Humanoid Robots Compete in Unique Soccer Tournament Showcase

    Key Takeaways

    1. Humanoid robots participated in a fully autonomous soccer tournament at the Smart E-Sports Center in Beijing, highlighting advancements in AI technology.
    2. Four university teams programmed identical T1 robots to compete, showcasing their unique AI algorithms in direct matches.
    3. The tournament featured a 3-on-3 format with two ten-minute halves and a halftime, marking a significant departure from traditional human-influenced matches.
    4. While entertaining, the robots displayed clumsy movements, often stumbling, but the event celebrated their ability to complete matches independently.
    5. The success of this tournament suggests a future where friendly games between humans and robots could become a reality.


    Beijing, June 28, 2025 – In a groundbreaking event, humanoid robots took part in a fully autonomous soccer tournament held at the Smart E-Sports Center in Beijing. Four teams from universities were involved – Tsinghua University (THU Robotics), China Agricultural University (Mountain Sea), Beijing Information Science & Technology University (Blaze), and another Tsinghua team (Power Lab). Each team programmed identical T1 robots from Booster Robotics using their own AI algorithms to compete in direct matches.

    Unique Tournament Format

    This event wasn’t entirely without precedent, as humanoid robots have been participating in soccer at RoboCup events for years. What set this tournament apart was the total lack of human interference. The games were structured in a 3-on-3 format, featuring two halves of ten minutes each and a halftime of five minutes. In the championship match, THU Robotics triumphed over Mountain Sea with a score of 5–3, claiming the first title.

    Entertaining Yet Clumsy

    The matches leaned more towards being entertaining rather than showcasing top-tier sport. The 45-kilogram (99-pound) robots often stumbled, bumped into each other, and sometimes needed assistance to get back on their feet when they fell. Nevertheless, the audience of about 300 erupted in cheers for every successful play and save, viewing it as a sign of genuine advancement.

    For now, the robot players move more like penguins on ice than elite athletes such as Kylian Mbappé. Still, completing full matches without human control marks a major achievement: AI-powered machines are now capable of handling complex tasks in real time. If this progress continues, the next exciting development, friendly games between humans and robots, may not be far off.

    Source:
    Link

  • Apple Drops Siri for Private ChatGPT and Claude AI Models

    Apple Drops Siri for Private ChatGPT and Claude AI Models

    Key Takeaways

    1. Apple is negotiating with OpenAI and Anthropic to use their large language models for Siri after struggling with its own AI development.
    2. The launch of Siri’s upgrades has been delayed twice until 2026, causing frustration within Apple’s AI team.
    3. Leadership changes were made in Apple’s AI team following the failure of Siri to drive iPhone upgrades and delays in the internal LLM.
    4. Apple found that Anthropic’s Claude LLM performed better than its Foundation Models for enhancing Siri’s capabilities.
    5. Privacy remains a top concern, with Apple exploring options to run third-party models on its own servers to maintain user privacy.


    After a time of testing its own Siri AI against competitors like ChatGPT, Claude, and Google Gemini, Apple is reportedly ready to give up on its Foundation Models.

    Talks with OpenAI and Anthropic

    The company is currently negotiating with OpenAI and Anthropic to use their large language models (LLMs) for the AI features it has been promising for Siri since introducing Apple Intelligence nine months ago.

    Unfortunately, Apple has struggled to deliver the Siri AI upgrade with its own LLM, even on its most advanced devices such as the iPhone 16 Pro Max, and has now postponed it twice, to 2026. The AI team appears to be either disheartened by unclear direction or tempted by the huge paychecks that companies like Meta and OpenAI dangle to lure its engineers away.

    Changes in Leadership

    When the mostly unoriginal Apple Intelligence features failed to drive an upgrade cycle for the iPhone and the internal Siri LLM faced delays, Apple decided to replace the leader of its AI team and initiated a performance review. The findings suggested that the Siri AI capabilities it intended to deliver would be better served by Anthropic’s Claude LLM instead of its Foundation Models.

    ChatGPT was the next-best performer, which is why Apple is in discussions with both Anthropic and OpenAI about a partnership to enhance Siri with their chatbot technologies.

    Privacy Concerns

    Apple’s primary concern when considering a third-party service is the privacy of iPhone users. It has explored the option of running Anthropic or OpenAI’s code on its own Private Cloud Compute server clusters, asking them to create tailored ChatGPT and Claude models that would allow Apple to maintain control over the privacy settings for future Siri AI users.

    Source:
    Link

  • Cloudflare Blocks Unpaid AI Web Scrapers from Accessing Data

    Cloudflare Blocks Unpaid AI Web Scrapers from Accessing Data

    Key Takeaways

    1. Cloudflare’s CEO Matthew Prince announced that all AI web crawler bots will be blocked by default to protect content creators.
    2. The online search environment is increasingly dominated by AI chatbots, making it harder for content creators to gain traffic and recognition for their work.
    3. AI crawlers are extracting data without compensating original content creators, leading to a sense of unfairness in the web ecosystem.
    4. Cloudflare plans to launch a marketplace to connect content creators with AI companies, focusing on content quality and knowledge enhancement.
    5. Recent disruptions caused by aggressive AI crawlers have led platforms like SourceHut to block major cloud service providers due to excessive traffic.


    Declaring “Content Independence Day,” Cloudflare’s CEO Matthew Prince shared significant updates to the company’s web service system. From now on, all AI web crawler bots will be blocked by default.

    In a blog post, Prince explained how the current online search environment is dominated by AI chatbots such as Google’s Gemini and OpenAI’s ChatGPT. While these tools provide value, they also extract data from the internet without giving anything back, leaving the original content creators unrewarded.

    Challenges for Content Creators

    Prince pointed out that recent modifications in Google Search have made it ten times “more difficult for a content creator to get the same volume of traffic” as they did a decade ago.

    He stated, “Instead of being a fair trade, the web is being stripmined by AI crawlers, with content creators seeing almost no traffic and thus almost no value.”

    Prince argued that the scraped content serves as “the fuel that powers AI engines,” and that it is only fair that the original creators receive compensation for their work.

    New Marketplace Initiative

    Cloudflare also unveiled plans for a new marketplace designed to connect creators with AI companies. This marketplace will evaluate available content not just based on the traffic it brings in but also “on how much it furthers knowledge.” Prince is optimistic that this will help AI engines improve swiftly, potentially ushering in a new golden age of high-quality content creation.

    He acknowledged that he doesn’t have all the solutions right now, but the company is collaborating with “leading computer scientists and economists to find them.”

    Recent Issues with AI Crawlers

    Recently, SourceHut, a platform for hosting open-source Git repositories, reported disruptions caused by “aggressive LLM crawlers.” They have blocked multiple cloud service providers, including Google Cloud and Microsoft Azure, due to the overwhelming traffic coming from their networks.

    In January, DoubleVerify, a web analytics platform, noted an 86% rise in General Invalid Traffic (GIVT) from AI scrapers and other automated tools compared to 2024.

    Despite previous commitments, OpenAI’s GPTBot has also reportedly found ways to ignore or bypass a site’s robots.txt file entirely, leading to an enormous increase in traffic for domain owners and potentially high costs.
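
    For context, robots.txt rules are purely advisory: a crawler that chooses to ignore them faces no technical barrier, which is why network-level blocking of the kind Cloudflare now applies goes further. The minimal sketch below, using Python’s standard urllib.robotparser, shows how such rules are meant to work; the rules themselves are illustrative, not taken from any specific site.

        # Illustrative robots.txt asking OpenAI's crawler to stay away; honouring
        # it is entirely up to the crawler, hence the appeal of blocking at the edge.
        from urllib import robotparser

        ROBOTS_TXT = """\
        User-agent: GPTBot
        Disallow: /

        User-agent: *
        Allow: /
        """

        parser = robotparser.RobotFileParser()
        parser.parse(ROBOTS_TXT.splitlines())

        print(parser.can_fetch("GPTBot", "https://example.com/article"))   # False: asked to stay out
        print(parser.can_fetch("SomeOtherBot", "https://example.com/"))    # True: everyone else allowed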

    Source:
    Link

  • MIT Study: ChatGPT’s Impact on Brain Health and Cognition

    MIT Study: ChatGPT’s Impact on Brain Health and Cognition

    Key Takeaways

    1. Overusing generative AI can harm critical thinking skills and creativity.
    2. Reliance on AI leads to diminished cognitive abilities and a decline in brain function.
    3. Users of AI may develop confirmation bias, only engaging with information that supports their beliefs.
    4. The study found that participants using AI experienced higher frustration and produced less satisfactory essays.
    5. The long-term effects of AI on cognitive abilities are still unknown and could have serious societal implications.


    A recent study titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” (PDF) by researchers from the MIT Media Lab examined how large language models affect the human mind.

    Generative AI and Its Effects on Thinking

    In April, Microsoft commissioned a study that revealed that overusing generative AI harms critical thinking skills and stifles creativity. The MIT research not only backs up these claims but indicates that the repercussions could be even more extensive.

    The primary finding is a decline in brain function: individuals who rely heavily on AI displayed diminished cognitive abilities. The researchers also observed a reduced tendency to critically evaluate information, producing an “echo chamber effect” in which people only see content that supports their existing beliefs, a pattern known as “confirmation bias.” As a result, engagement with differing perspectives declines, fueling greater political division and loneliness, among other issues.

    Experiment Details

    A total of 54 participants, aged between 18 and 39, were split into three separate groups, each tasked with writing an essay. The first group utilized generative AI models like ChatGPT or Google Gemini for their work.

    The second group was allowed to use search engines to gather information, while the third group had no tools at all. In a follow-up phase, all participants wrote again without any tools.

    Frustration and Outcomes

    Besides the effects mentioned earlier, it was observed that those in the first group experienced higher frustration levels and produced less satisfactory results compared to the other groups.

    It’s important to mention that the findings of this study have not been peer-reviewed yet. Additionally, long-term effects are still unknown. However, if ongoing research continues to show that the use of generative AI models results in reduced brain activity and critical thinking over time, there could be serious implications for society and human progress in the future.

    Source:
    Link

  • Try Virtual Clothing with Google Labs Doppl App

    Try Virtual Clothing with Google Labs Doppl App

    Key Takeaways

    1. Doppl is a new app by Google Labs for Android and Apple devices that allows users to virtually try on clothes using AI technology.
    2. The app analyzes user photos to understand body shape and fit, enhancing the shopping experience by showing how clothes might look on the user.
    3. Doppl assesses clothing images to understand fabric types and how garments drape and fit on different body types.
    4. The app saves time and effort in shopping by reducing the need for returns due to poor fit, but raises concerns about privacy and surveillance.
    5. Doppl is currently available for testing on both the Google Play Store and Apple App Store.


    Google Labs has launched Doppl, a new app for both Android and Apple devices that lets users virtually try on clothes. This app utilizes advanced artificial intelligence to swap outfits in photos with the chosen clothing.

    AI in Fashion

    Artificial intelligence has been applied in various fields, from military targeting to creating amazing art that has never been seen before. Once an AI model has analyzed millions of images of people in different clothing, it learns how garments should fit and flow on a person’s body.

    How Doppl Works

    Doppl begins by capturing a picture of the user. While the exact algorithms are not revealed, the AI examines this image to determine the user’s body shape, whether they are slim or heavier. After that, pictures of the clothing intended for virtual fitting are uploaded. The AI also studies these images to comprehend the fabric types used.

    The AI employs undisclosed techniques to assess how the clothing drapes and fits the user’s body, taking into account their curves, and then creates both still images and motion videos to show how these outfits might appear in real life.

    Advantages and Concerns

    This app helps users save time and effort when shopping, reducing the hassle of buying and returning items that don’t fit or look right. There is a potential downside, however: the same technology could make it easier to identify people, even when they try to hide from surveillance cameras.

    The app is currently available for testing on the Google Play Store for Android smartphones and on the Apple App Store for Apple devices.

    Source:
    Link


  • German Authorities Urge Google and Apple to Remove Deepseek App

    German Authorities Urge Google and Apple to Remove Deepseek App

    Key Takeaways

    1. Deepseek has been prohibited in Italy, and German officials are taking action to remove it from Google and Apple’s platforms.
    2. Allegations include violations of EU data protection laws, particularly regarding user data transfer to China without adequate safeguards.
    3. The app collects sensitive user information, raising concerns about potential access by Chinese authorities.
    4. The Berlin data protection authority may impose fines of up to 4% of Deepseek’s global revenue, but enforcement against a foreign entity is challenging.
    5. The request to block Deepseek follows a previous warning to halt data transfers, and while it may be removed from app stores, it will still be accessible via web browsers.


    Deepseek has been prohibited in Italy, and now German data protection officials are taking action against the widely-used AI application from China. According to Der Spiegel, the Berlin Commissioner for Data Protection and Freedom of Information has filed a complaint with both Google and Apple, formally asking them to remove the Deepseek app from their platforms, making it unavailable to users in Germany.

    Allegations of Data Violations

    The basis for this request is alleged violations of data protection law, particularly concerning the transfer of user data from Europe to China. The company has not provided adequate proof that user data is safeguarded in China to a standard comparable with Europe. Under the EU’s GDPR (General Data Protection Regulation), such protection is a fundamental requirement for transferring user data to countries outside the EU, and there is no assurance that other Chinese firms or the Chinese government cannot access the data of European users.

    Concerns Over User Data

    This situation is particularly alarming because the chatbot app gathers a wide variety of potentially sensitive information about its users, such as text inputs, chat histories, uploaded files, location details, and device data. Chinese authorities could potentially gain access to all of this information, as the state already has access to the data held by domestic companies.

    Possible Penalties and Future Actions

    The Berlin data protection authority has the option to impose a fine that could reach up to 4% of the company’s worldwide revenue. Nonetheless, as officials have indicated, enforcing this against a foreign entity would be a challenging task. It is worth noting that this action did not come without prior warning; in May, Berlin’s data protection officials had already set a deadline for the company to halt data transfers to China. Since the Deepseek developers failed to meet this deadline, the request for blocking has been made under the Digital Services Act. Apple and Google are now required to make a decision regarding the blocking very soon. However, the model will still remain accessible through web browsers in the future.

    Source:
    Link