Category: Artificial intelligence

  • Siri Needs Apple Intelligence Updates to Answer Simple Questions

    Key Takeaways

    1. Apple confirmed that updates to Siri with Apple Intelligence will take longer than expected.
    2. Users reported that Siri struggles to answer basic questions, often responding with “Sorry, I don’t understand.”
    3. A Reddit thread highlighted Siri’s incorrect responses, including a bizarre answer regarding the current month.
    4. Siri provided confusing replies, such as claiming the Apple Podcasts app was not installed, despite it being pre-installed.
    5. Siri has been around for 14 years, yet its performance has not improved significantly and it remains less capable than Google Assistant.


    Apple recently confirmed that updates to Siri with Apple Intelligence will take longer than they initially thought. While users are waiting for these improvements, it appears that Siri is struggling to answer even basic queries. A Reddit thread highlighted this issue, starting with a simple question: “what month is it?” Siri failed to respond correctly, replying, “Sorry, I don’t understand.”

    User Experiences Shared

    Many other Reddit users joined the conversation, sharing their own frustrating experiences with Siri. The assistant repeatedly gave the same “Sorry, I don’t understand” answer when users asked the same question on various iPhone models and different iOS versions. When someone reworded the question to “what is the current month?”, Siri bizarrely replied, “Saturday, March 1, 2025.”

    Surprising Responses

    In another instance, a user asked Siri to play a podcast, but the assistant replied, “I’m trying to play from Apple Podcasts but it doesn’t look like you have it installed.” This response is puzzling, since the Apple Podcasts app is pre-installed and cannot be removed. Similarly, another user asked Siri to open an app that was clearly installed on their device; instead of opening it, Siri pointed to the app’s App Store listing, where the ‘Open’ button (rather than ‘Get’) confirmed it was already installed.

    Siri has been around for 14 years, and although it has never been as capable as Google Assistant, this latest performance seems particularly disappointing. Apple has been working on enhancements with the Apple Intelligence updates, but those have been delayed until later in the year.

    Source:
    Link

  • ChatGPT’s Odd Defamation of Norwegian Man After Simple Inquiry

    Key Takeaways

    1. ChatGPT provided a false and damaging response to a query about a man named Arve Hjalmar Holmen, wrongly accusing him of murdering his sons.
    2. Holmen was shocked by the AI’s claims, which included fabricated details about a murder case that supposedly took place in December 2020.
    3. He is pursuing legal action against OpenAI, supported by the privacy rights organization Noyb, due to the unfounded allegations made by the AI.
    4. Noyb highlighted that while the AI includes a disclaimer about accuracy, this does not exempt OpenAI from responsibility under GDPR for generating false statements.
    5. The incident raises concerns about AI accountability and the potential impacts of misinformation generated by such technologies.


    ChatGPT developer OpenAI is once again in hot water over a privacy issue after it provided a false and damaging answer to a seemingly innocent question. A man from Norway, named Arve Hjalmar Holmen, asked ChatGPT, “who is Arve Hjalmar Holmen?” and the AI wrongfully accused him of killing two of his sons and trying to kill a third. While there were some factual elements in the answer, the murder allegation was entirely untrue.

    Shocking Response

    According to a report from TechCrunch, Holmen was left stunned after submitting what he thought was a simple inquiry to ChatGPT. The AI claimed that Holmen had attracted attention due to a horrific incident in which he supposedly murdered his two sons, aged 7 and 10, in Trondheim in December 2020, and that he was later charged with attempting to murder his third son.

    The AI further claimed that the case was “widely reported in the media” and that Holmen received a 21-year prison sentence.

    Legal Action Begins

    In response, Holmen is pursuing legal action against OpenAI, supported by the privacy rights organization Noyb. The group noted that while the murder allegations were completely unfounded, the AI did correctly identify Holmen’s hometown and the fact that he has three sons. Noyb also tried to determine why such a misleading response was generated but could not pinpoint a specific cause.

    Responsibility Under GDPR

    While ChatGPT does contain a notice that its answers may not always be accurate and encourages users to verify important information, Noyb asserts that this disclaimer doesn’t absolve the company from responsibility. They emphasize that OpenAI has an obligation under the European Union’s General Data Protection Regulation (GDPR) to avoid generating seriously false statements in the first place.

    Source:
    Link


  • Infinix AI∞ Beta Plan Launches with Note 50 Series for Smart Living

    Key Takeaways

    1. Infinix launched the “Infinix AI∞ Beta Plan,” integrating AI into daily life and user experiences.
    2. The initiative, named “Gen Beta,” focuses on AI as a core part of creativity, gaming, and communication.
    3. Key features include AI-enhanced gaming capabilities like the XBoost AI gaming engine and AI Magic Box for smoother gameplay.
    4. The AI∞ system enhances communication with over 1,000 smart features, such as live translations and creative tools.
    5. Infinix is expanding its AI ecosystem beyond smartphones with devices like AI Buds, AI Ring, and AI Glasses for smart health and assistance.


    Infinix has launched its new “Infinix AI∞ Beta Plan,” representing a major step forward in the company’s AI efforts. The announcement was made during a launch event presented in a vertical video format, showcasing the brand’s ambition to weave AI into daily life. The project, named “Gen Beta,” aspires to a future in which AI is a fundamental part of creativity, gaming, and communication rather than just a tool. The initiative coincides with the release of the Note 50 series, which further embeds AI features into the company’s offerings.

    The Infinix AI∞ Beta Plan: Enhancing User Experiences

    The “Infinix AI∞ Beta Plan” aims to improve the way users interact with their devices by introducing AI functions that grow through ongoing updates and early access to new technology. The initiative goes beyond traditional AI uses by making AI a core component of entertainment, gaming, and daily activities. Central to this change is the Infinix AI∞ Lab, which ensures that AI evolves with user preferences and enhances experiences in real time.

    Key Features: Gaming and Beyond

    A notable feature of this plan is the addition of AI-enhanced gaming capabilities. The XBoost AI gaming engine optimizes gameplay with smart enhancements, while the AI Magic Box allows for automated actions during games. Tools like One-Tap Infinix AI∞ provide immediate access to gaming utilities, creating a smooth and engaging experience. ZoneTouch Master adjusts controls for accuracy, and the Magic Voice Changer introduces a playful element by altering voices during gaming sessions.

    Expanding AI in Communication

    Infinix AI∞ also aims to improve user interactions by providing a fluid AI-driven experience across various apps. Users can quickly launch Folax to unlock over 1,000 smart features, such as live translations, object recognition, and conversations powered by deep learning. The AI∞ creative assistant boosts usability with options like AI Eraser, AI Cutout, and AI Wallpaper Generator, simplifying creative tasks and making them more efficient.

    Infinix AIoT Ecosystem: Beyond Smartphones

    Infinix’s advancement in AI goes past just mobile devices, introducing an AI-powered ecosystem featuring AI Buds, AI Ring, and AI Glasses. These gadgets use AI for immediate translations, smart health monitoring, and greater user ease. The AI Buds provide hybrid ANC and come with a touchscreen charging case, while the AI Ring combines style and technology, featuring all-week battery life and AI fitness tracking. AI Glasses and AI Glasses Pro offer smart assistance, scene recognition, and instant translation, further broadening the AI experience.

    Moreover, Infinix has unveiled AI Blind Box Figures—collectible figures with unique personalities and interactive options, blending tech with entertainment. The Note 50 series plays a significant role in this AI-driven evolution, featuring advanced AI enhancements for an improved user experience.


  • Nvidia RTX Pro 6000 GPU: 96GB VRAM for Desktops, 24GB for Laptops

    Key Takeaways

    1. The RTX Pro 6000 is designed for professionals, featuring 96GB of GDDR7 VRAM and a bandwidth of 1.6 TB/s, surpassing the GeForce RTX 5090’s 32GB VRAM.
    2. It excels in AI workloads, rivaling AMD’s Ryzen Strix Halo, and is built for managing large AI models efficiently.
    3. The GPU has a thermal design power (TDP) of 400 to 600 watts and supports advanced technologies like PCIe 5.0 and DisplayPort 2.1.
    4. A laptop version of the RTX Pro 6000 is available with 24GB of VRAM, while Nvidia offers budget-friendly options with the RTX Pro 3000, 2000, 1000, and 500 series.
    5. The RTX Pro 6000 is expected to start shipping in April, with pre-built systems available from Dell, HP, and Lenovo in May, but pricing details have not yet been revealed.


    The RTX Pro 6000 marks a new high point for Nvidia’s professional graphics cards. The GPU is aimed primarily at AI workloads, game development, and other professional uses that demand substantial amounts of video memory. For comparison, the Nvidia GeForce RTX 5090 has “only” 32GB of GDDR7 VRAM, whereas the desktop and server editions of the RTX Pro 6000 boast an impressive 96GB of GDDR7 along with a bandwidth of 1.6 TB/s.

    Competing in AI Workloads

    With its 96GB of VRAM, the RTX Pro 6000 rivals AMD’s Ryzen Strix Halo in AI workloads, and it is expected to run large AI models at a significantly quicker pace. The GPU operates with a thermal design power (TDP) ranging from 400 to 600 watts and supports modern technologies like PCIe 5.0 and DisplayPort 2.1. The sleeker Max-Q version may appeal to those looking to install multiple graphics cards in the same PC case.
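
    As a rough back-of-the-envelope illustration (not from Nvidia’s announcement), the sketch below estimates how many model parameters fit in a given VRAM budget at common weight precisions. The 20% overhead allowance for activations and KV cache is an assumption chosen purely for illustration.

      # Rough sketch: how many parameters fit in a VRAM budget.
      # The overhead factor is an assumed allowance for activations
      # and KV cache, not a figure from Nvidia.
      BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

      def max_params_billions(vram_gb, precision, overhead=0.2):
          """Parameters (in billions) whose weights fit at the given precision."""
          usable_bytes = vram_gb * 1e9 * (1 - overhead)
          return usable_bytes / BYTES_PER_PARAM[precision] / 1e9

      for vram in (32, 96):  # GeForce RTX 5090 vs. RTX Pro 6000
          for prec in ("fp16", "fp8", "fp4"):
              print(f"{vram} GB @ {prec}: ~{max_params_billions(vram, prec):.0f}B params")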

    Laptop and Other Options

    Nvidia also provides a laptop version of the RTX Pro 6000, although this variant is capped at 24GB of VRAM, similar to the GeForce RTX 5090 for laptops. Additionally, Nvidia offers a range of more budget-friendly professional GPUs, like the RTX Pro 3000, 2000, 1000, and 500, which are built on the Blackwell architecture. However, Nvidia has not yet disclosed specifics about the CUDA core count or clock speeds for these new RTX Pro graphics cards.

    Release Timeline

    As of now, Nvidia has not announced the official pricing for its latest professional graphics cards. The RTX Pro 6000 is anticipated to begin shipping in April, while pre-built systems from Dell, HP, and Lenovo are expected to be available starting in May.

    Source:
    Link

  • Nvidia Launches DGX Station AI Supercomputer with 72-Core CPU

    Key Takeaways

    1. Nvidia’s DGX Station is a powerful AI supercomputer designed for developers and researchers to build and run large language models (LLMs) locally.
    2. The DGX Station features the GB300 Grace Blackwell Ultra Superchip, enabling it to handle models with up to 200 billion parameters and offering significant performance improvements over the smaller DGX Spark.
    3. Its architecture includes a 72-core Grace CPU and Blackwell Ultra GPU, connected via NVLink-C2C, providing seven times the bandwidth of PCIe Gen 5 and enhancing AI processing efficiency.
    4. The DGX Station utilizes the ConnectX-8 SuperNIC for fast networking and runs on a customized version of Ubuntu Linux, facilitating easy transition from local to cloud-based AI model deployment.
    5. The DGX Station is expected to be available from third-party manufacturers in late 2025, while Nvidia’s 5090 GPU is currently available for those looking to develop AI LLMs now, albeit at high prices.


    Nvidia has introduced its latest desktop AI supercomputer, known as the DGX Station. This advanced machine is tailored for AI developers, researchers, and data scientists, enabling them to build and execute their large language models (LLMs) and projects locally.

    Enhanced Power and Performance

    The DGX Station offers significantly greater capability than the smaller DGX Spark (previously referred to as Project DIGITS), as it can run local models with up to 200 billion parameters thanks to the GB300 Grace Blackwell Ultra Desktop Superchip. The Superchip is equipped with 496GB of LPDDR5X CPU memory alongside 288GB of HBM3e GPU memory.

    Cutting-Edge Architecture

    Featuring a 72-core Grace CPU linked via NVLink-C2C to a Blackwell Ultra GPU, the Superchip’s NVLink-C2C connection offers seven times the bandwidth of PCIe Gen 5, reaching speeds of 900 GB/s. The Blackwell Ultra GPU is capable of delivering up to 1.5 times the AI FLOPS compared to the Blackwell GPU, and it is specifically optimized to process FP4 models. This enhancement boosts AI processing efficiency by alleviating memory and computational demands.
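
    Two of the stated figures can be sanity-checked with quick arithmetic. The sketch below assumes a PCIe 5.0 x16 baseline of roughly 128 GB/s bidirectional and FP4 weights at 0.5 bytes each; neither assumption is confirmed in the announcement.

      # Quick arithmetic checks on the stated figures (assumptions noted inline).

      # 1) "Seven times PCIe Gen 5": assumes a PCIe 5.0 x16 baseline of
      #    ~128 GB/s bidirectional; Nvidia's exact baseline is not stated.
      print(900 / 128)           # ~7.0, matching the claimed factor

      # 2) A 200-billion-parameter model at FP4 (0.5 bytes per weight)
      #    needs ~100 GB for weights alone, well within 288 GB of HBM3e.
      print(200e9 * 0.5 / 1e9)   # 100.0 GB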

    Networking and Operating System

    The DGX Station connects with other DGX Stations using the ConnectX-8 SuperNIC, which can transfer data at speeds of up to 800 Gb/s. It operates on a customized version of Ubuntu Linux, known as the DGX operating system, which is tailored to support the complete Nvidia AI software stack. This setup facilitates the transition of LLM AI models from local development to the cloud, simplifying their release and scaling. The DGX Station is expected to be available from third-party computer manufacturers in late 2025.

    For those eager to dive into AI LLM development right now, an Nvidia RTX 5090 GPU (available on Amazon) can run models of up to approximately 30 billion parameters. Note, however, that 5090 cards are currently priced well above their MSRP, exceeding $4,000. The 4060 Ti 16GB GPU, which can manage models of up to around 14 billion parameters, is also overpriced but can be found for under $1,000 (also available on Amazon).

    Nvidia has made this announcement through its news release, highlighting the arrival of both the DGX Spark and DGX Station personal AI computers. These systems, powered by NVIDIA Grace Blackwell, aim to bring accelerated AI capabilities to developers, researchers, and data scientists, with prominent computer manufacturers such as ASUS, Dell Technologies, HP, and Lenovo set to produce them.

    Source:
    Link

  • Gemini Deep Research Now Available for Free Users with New AI Models

    Key Takeaways

    1. Google has launched experimental versions of its Gemini models, including 2.0 Flash, 2.0 Pro, and Personalisation, available on Android, iOS, and the web.
    2. Gemini 2.0 Flash offers faster performance and better efficiency compared to the older version, while details on 2.0 Pro enhancements are not specified.
    3. The Personalisation model uses users’ Google Search history for more customized and accurate responses, improving user experience.
    4. The Deep Research feature, previously limited to Gemini Advanced users, is now free for all users, providing in-depth, easy-to-understand reports on complex subjects.
    5. These updates empower users with free access to advanced AI tools for both personal and work-related tasks, with ongoing enhancements expected in the Gemini AI ecosystem.


    Google has begun rolling out new experimental versions of its Gemini models: 2.0 Flash (experimental), 2.0 Pro (experimental), and Personalisation (experimental). These are accessible on Android, iOS, and the web. Additionally, the tech giant has made its Deep Research feature free for all users on these platforms.

    What’s New?

    The upgraded models offer improved functionality and efficiency. The Gemini 2.0 Flash (experimental) is said to provide faster performance and better efficiency than the older version, although Google hasn’t shared specific details about the enhancements in 2.0 Pro (experimental).

    The Personalisation (experimental) model takes advantage of users’ Google Search history to give more customized and accurate answers. This can significantly enhance the user experience, since the model has more context about each user’s preferences.

    Deep Research Feature Now Accessible

    Deep Research, which was previously limited to Gemini Advanced users, is now available to everyone due to this update. This tool uses AI to delve into complex subjects and produce thorough, easy-to-understand reports. It’s a fantastic resource for learning the essentials of a topic without spending hours looking for relevant information online. Now, all users can find this feature in the drop-down menu at the top of the Gemini interface.

    Reports have indicated that early access to these updates has been seen on devices like the Galaxy S23 in India and the Galaxy S25, with the latter enjoying the benefits of the Google One AI Premium plan provided by Google and Samsung. Users can now try out the new models and take advantage of the Deep Research capabilities without any extra charges.

    Empowering Users with Free Features

    By making these features available for free, Google is allowing users to utilize advanced AI for both personal and work-related tasks. Keep an eye out for more updates as Google continues to enhance and develop its Gemini AI ecosystem.

    Source:
    Link

  • Apple AirPods Pro: New Features Revealed Early

    Key Takeaways

    1. Apple is enhancing its products with more AI capabilities, focusing on iPhones, iPads, and Macs.
    2. iOS 19 will introduce additional AI functionalities, set to launch this autumn for compatible iPhones.
    3. AI features will also extend to AirPods Pro 2, likely in conjunction with iPadOS 19 and macOS 16.
    4. AirPods Pro will support live translation, allowing users to hear spoken language translated into their preferred language.
    5. Apple aims to compete with Google’s Pixel Buds by adding this translation feature to AirPods Pro.


    A recent report by Mark Gurman has revealed new insights into Apple’s ongoing efforts to enhance its products with additional AI capabilities. The company has recently concentrated on improving its iPhone, iPad, and Mac lines, introducing features like Image Playground and Genmoji under the umbrella of Apple Intelligence.

    AI Integration in Future Updates

    Gurman indicates that Apple intends to incorporate even more AI functionalities into iOS 19, which is anticipated to be released this autumn for all iPhones that have been launched in the past five to six years. The expansion of AI features is also expected to extend to the AirPods Pro 2 (curr. $169.99 on Amazon), likely when used with an iPad updating to iPadOS 19 or a Mac running macOS 16.

    New Features for AirPods Pro

    Specifically, Gurman mentions that the AirPods Pro will receive support for live translation, meaning users will be able to hear any spoken language translated into their preferred language, much like the Babel fish from The Hitchhiker’s Guide to the Galaxy. Essentially, Apple aims to equip the AirPods Pro to compete with Google’s Pixel Buds series, which already offers comparable features. However, it’s uncertain whether this capability will be limited to certain AirPods models, particularly the pricier versions.

    Source:
    Link

  • Manus AI Unveils General AI Agent for Complex Real-World Tasks

    Key Takeaways

    1. Manus AI has launched a new general AI agent that can autonomously seek answers using multiple large language models (LLMs) simultaneously.
    2. Traditional chatbots have limitations due to their reliance on specific training datasets and often struggle with complex, problem-solving questions.
    3. Manus AI’s agent can break tasks into smaller parts and process them simultaneously, improving problem-solving capabilities.
    4. The agent can generate various outputs, including text, spreadsheets, interactive charts, web pages, and video games.
    5. Manus AI’s agent scored 57.7% on Level 3 prompts in the GAIA AI benchmark and over 70% on simpler Level 1 and 2 prompts, outperforming other AI research systems.


    Manus AI has introduced a new general AI agent that can autonomously seek answers to complicated questions by using multiple large language models (LLMs) at the same time. Right now, interested users can get access by requesting an invitation.

    Limitations of Traditional Chatbots

    Regular chatbots like OpenAI’s ChatGPT, Microsoft’s Copilot, and Anthropic’s Claude rely on a specific training dataset, which means their knowledge has boundaries. They cannot answer questions that fall outside their training data, although some companies mitigate this by letting their chatbots browse the Internet for up-to-date information. Even so, these chatbots struggle with complex questions that require problem-solving skills.

    New Approaches to Problem Solving

    To address this limitation, certain AI companies have allowed their AIs to work through problems step by step, look at data found online, and create an answer. One example is OpenAI Deep Research, which launched last month, and now Manus AI has entered the scene with its new agent.

    In contrast to OpenAI’s product, Manus’s agent takes advantage of multiple LLMs, drawing on the strengths of each. Tasks are automatically divided into smaller parts and handled simultaneously, and users can observe the AI’s process as it systematically tackles a problem. The agent can generate not just text responses but also spreadsheets, interactive charts, web pages, and even video games.
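
    Manus has not published implementation details, but the decompose-and-parallelize pattern described above can be sketched in a few lines. The model names, the call_llm helper, and the decomposition stub below are all hypothetical placeholders, not Manus’s actual components.

      import asyncio

      # Hypothetical sketch of the decompose-and-run-in-parallel agent
      # pattern described above; Manus's real implementation is not public.
      async def call_llm(model, prompt):
          await asyncio.sleep(0.1)  # stands in for a real LLM API call
          return f"[{model}] answer to: {prompt}"

      def decompose(task):
          # A real agent would use an LLM to split the task; this is a stub.
          return [f"{task} - subtask {i}" for i in range(1, 4)]

      async def run_agent(task):
          subtasks = decompose(task)
          models = ["model-a", "model-b", "model-c"]  # one LLM per subtask
          partial = await asyncio.gather(
              *(call_llm(m, s) for m, s in zip(models, subtasks))
          )
          # A final synthesis step would normally merge the partial answers.
          return "\n".join(partial)

      print(asyncio.run(run_agent("research the GAIA benchmark")))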

    Performance Metrics

    Manus AI’s agent achieved a score of 57.7% on Level 3 prompts in the GAIA benchmark, which assesses real-world questions that are challenging even for humans, and it answers the simpler Level 1 and 2 prompts correctly more than 70% of the time. Manus AI claims that its agent outperforms the other AI research systems currently available.

    Source:
    Link


  • CheckMag | Protoclone: Humanoid Robot Mimics Human Movement

    Key Takeaways

    1. The Protoclone uses over 1,000 artificial muscle fibers and 500 sensors to mimic the human musculoskeletal system, allowing for fluid and natural movements.
    2. With 200 degrees of freedom, the Protoclone can perform a wide range of complex actions, setting it apart from traditional robots.
    3. Potential applications include assisting people with disabilities, taking on household chores, and serving as caregivers for seniors.
    4. In workplace settings, Protoclones could handle difficult or repetitive tasks, reducing human injury risk and increasing productivity.
    5. The lifelike movements of the Protoclone raise ethical questions and societal concerns, highlighting the dual nature of technological advancements in robotics.


    Unlike typical humanoid robots that depend on stiff metal structures and electric motors, the Protoclone features more than 1,000 artificial muscle fibers and 500 sensors, closely imitating the human body’s musculoskeletal system. Boasting 200 degrees of freedom, this machine can perform movements that are incredibly fluid and natural, distinguishing it from standard robotic designs.

    Potential Benefits

    The potential outcomes of this innovation might be significant. In domestic settings, devices like the Protoclone could aid people with disabilities, take on household tasks, or act as caregivers for seniors. Components of the machine, such as its arms, hands, legs, or feet, could serve as realistic prosthetic limbs for those who have lost theirs. In work environments, these robots could tackle challenging or repetitive tasks, lessening the risk of injuries for humans while boosting productivity.

    Nevertheless, the lifelike movements of the Protoclone have generated diverse opinions: some consider it a technological wonder, while others are disturbed by its realistic features. Clone Robotics seems to be leaning into this discomfort, releasing a video showcasing the twitching, dangling robot accompanied by eerie music in a dimly lit setting.

    Societal Impact

    As humanoid robotics keeps advancing, society might soon encounter both thrilling opportunities and tough ethical dilemmas, quicker than we anticipated.

    Source:
    Link

  • Boston Dynamics Advances Atlas Robotics Development Progress

    Key Takeaways

    1. Boston Dynamics has introduced a new version of Atlas that does not use hydraulic power, addressing issues with leaks and maintenance costs from the previous model, Atlas HD.
    2. The new Atlas design allows for faster and more efficient movements, including 360-degree motion and the ability to walk backwards instead of turning around.
    3. Future applications for Atlas include roles in car manufacturing and other human workspaces, aiming to enhance productivity.
    4. Insights from practical uses of Atlas are helping to refine its functions as a working robot, similar to existing products like Spot and Stretch.
    5. Boston Dynamics is optimistic about integrating artificial intelligence to improve Atlas’ skill development in future applications.


    Almost one year since the introduction of the new Atlas, Boston Dynamics has shared a video showcasing significant advancements with the robot. In this latest update, the company elaborates on the capabilities of Atlas and discusses the benefits of shifting from the hydraulic-powered Atlas HD. Developers have described Atlas HD as “messy,” highlighting the high costs associated with the technology and its maintenance.

    New Features and Advantages

    The latest version of Atlas, which does not rely on hydraulics, has eliminated issues related to hydraulic fluid leaks that were common with Atlas HD. This new design allows the robot to operate more quickly and efficiently, thanks to its ability to perform numerous 360-degree movements without the limitations that come with human-like motion. For instance, instead of needing to turn around, the robot can simply walk backwards.

    Future Applications

    Boston Dynamics is committed to turning Atlas into a productive robot with professional skills. In the future, Atlas is expected to assist in car manufacturing and to be deployed in workspaces originally designed for humans. It remains to be seen whether Atlas will master the remarkable feats that Atlas HD showcased over the years. So far, the new Atlas also lacks a “fun video,” which has become a Boston Dynamics tradition; last year’s Halloween video featured Atlas sorting car parts for Hyundai, Boston Dynamics’ parent company.

    According to Boston Dynamics, insights from practical deployments are helping refine Atlas as a working robot; products like Spot and Stretch are already on the market. The company also addresses artificial intelligence, expressing hope that AI will accelerate Atlas’ skill development.

    Source:
    Link