Category: Artificial intelligence

  • Tesla Uses AI to Improve Service Complaints and Customer Support

    Key Takeaways

    1. Tesla is improving customer service by introducing a new AI Agent for direct communication with vehicle owners.
    2. The AI Agent can monitor response times, gauge customer sentiment, and alert senior staff if necessary.
    3. Customers can escalate unresolved issues by typing ‘Escalate’ if they don’t receive a response within two weeks.
    4. The AI Agent is currently being tested at 10 selected service centers with built-in safeguards to prevent misuse.
    5. Tesla is leveraging its technology and AI innovations, which are also used in features like Full Self-Driving and the Tesla app.


    Tesla has been under fire for its customer service, but it’s working to make things better. The carmaker is using artificial intelligence (AI) to enhance how it communicates with customers.

    New AI Agent Introduction

    Raj Jegannathan, who leads the AI, IT Infrastructure, Cybersecurity, and Vehicle Service teams, announced on X that Tesla has rolled out a new AI Agent that engages with vehicle owners directly. It tracks how long issues take to get a response, gauges the sentiment behind messages, and alerts senior staff when needed.

    Escalation Feature

    Jegannathan further explained that the communication system will enable Tesla owners to escalate their issues by simply typing ‘Escalate’ if they haven’t received a response within two weeks.
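
    As a minimal illustration of the rule Jegannathan describes, the sketch below flags a thread for escalation when it has gone unanswered for two weeks and the customer types the keyword. All names and the exact matching behavior are assumptions for the example; Tesla has not published its implementation.

    ```python
    from datetime import datetime, timedelta

    ESCALATION_WINDOW = timedelta(weeks=2)  # two weeks without a staff reply

    def should_escalate(message: str, last_staff_reply: datetime, now: datetime) -> bool:
        """True when the thread is overdue and the customer typed the keyword."""
        overdue = now - last_staff_reply >= ESCALATION_WINDOW
        return overdue and message.strip().lower() == "escalate"

    # A customer types "Escalate" 15 days after the last staff reply.
    print(should_escalate("Escalate", datetime(2025, 5, 1), datetime(2025, 5, 16)))  # True
    ```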

    Pilot Program Launch

    The AI Agent is currently functioning at 10 selected service centers. To avoid misuse, Tesla has implemented certain safeguards within the system.

    It’s notable that Tesla is tackling one of its biggest customer-facing pain points with its own technology. While the firm is best known for its electric vehicles, it is also a serious AI developer, and it has already woven AI into many parts of its operations. Its Full Self-Driving (FSD) feature, for instance, is continually refined with real-world data from the fleet, AI plays a growing role in manufacturing, and it also drives the Tesla app and parts of the company’s website.

    Source:
    Link

  • Arizona Court Accepts AI Video Testimony from Deceased Victim

    Key Takeaways

    1. An Arizona court allowed an AI-generated video statement of a deceased victim, Christopher Pelkey, to be shown during a sentencing hearing for his killer, Gabriel Horcasitas.
    2. The avatar was created by Pelkey’s sister, Stacey Wales, using image generation, voice cloning, and AI scripting tools, and it delivered a victim impact statement.
    3. The video included a disclaimer about its AI origin and featured actual footage of Pelkey, addressing Horcasitas and expressing the family’s feelings about their loss.
    4. Arizona law permits victim impact statements in various formats, and there were no objections to the inclusion of the AI video during the sentencing.
    5. Judge Todd Lang imposed the maximum sentence of 10.5 years, acknowledging the emotional impact of the AI-generated statement on the case.


    In a groundbreaking event for the U.S. legal system, an Arizona court has allowed an AI-created video statement of a deceased victim to be shown during a sentencing hearing. The video, which showcased a digital version of Christopher Pelkey, who lost his life in a 2021 road rage incident, was presented when Gabriel Horcasitas was being sentenced for his death.

    Creation of the Avatar

    The avatar was crafted by Pelkey’s sister, Stacey Wales, with assistance from her husband. They utilized a mix of image generation, voice cloning, and generative AI scripting tools. The resulting video displayed a digitally animated Pelkey delivering a victim impact statement directed at both the court and Horcasitas. This video was part of the family’s narrative during the sentencing phase.

    Presentation and Reactions

    The video began with a disclaimer indicating its AI origin and included actual footage of Pelkey from his life before returning to the avatar. In the video, the AI version addressed Horcasitas directly, conveying the family’s feelings about their loss and the impact of the past three and a half years on their lives.

    Under Arizona law, victim impact statements can be shared in different formats, and there were no objections to the AI video being included. The family made it clear that they authored the content, which wasn’t meant to be interpreted as Pelkey’s own words.

    Legal Insights

    Jessica Gattuso, the attorney for the family, mentioned that Arizona’s victim rights laws allowed the family to choose how to present their statement. “I didn’t see any issues with the AI and there was no objection,” she remarked.

    Judge Todd Lang, who oversaw the case, recognized the emotional weight of the video during the sentencing. He imposed the maximum sentence of 10.5 years, in line with the family’s wishes.

    Stacey Wales explained that the video was made using Stable Diffusion with LoRA (Low-Rank Adaptation) for generating images, and a different AI model to mimic Pelkey’s voice. She described the project as a way to help the court grasp the profound effect her brother’s death had on their family.
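
    For readers curious about the tooling, the snippet below is a minimal sketch of the kind of pipeline Wales describes, using the open-source diffusers library to run Stable Diffusion with a LoRA adapter fine-tuned on a subject’s photos. The checkpoint id, LoRA path, and prompt are all placeholders; the family’s actual workflow has not been published.

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Base model plus a LoRA adapter trained on photos of the subject (path is hypothetical).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("path/to/subject_lora")

    image = pipe(
        "portrait photo of the subject, natural lighting",  # illustrative prompt
        num_inference_steps=30,
    ).images[0]
    image.save("frame.png")  # frames like this would then be animated and voiced separately
    ```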

    Horcasitas was convicted in March 2025 and received his sentence this month. This case represents the first documented instance of a U.S. court accepting an AI-generated avatar to symbolize a deceased victim during sentencing.

    Source:
    Link

  • Robot Malfunction: Unitree H1 Goes Wild in Creepy Video

    Key Takeaways

    1. A humanoid robot from Unitree Robotics lunged unexpectedly during testing, nearly injuring nearby workers.
    2. The cause of the robot’s behavior is believed to be a programming mistake, with no evidence of autonomous intent.
    3. The incident sparked humor and concern on social media, highlighting public fears about humanoid robot safety.
    4. There are increasing calls for better emergency protocols and accountability in AI and robotics development.
    5. Previous incidents with humanoid robots emphasize the ongoing need for safety and reliability in the field.


    A video circulating on X since May 4 has raised questions about the growing excitement surrounding humanoid robots. The footage captures a concerning moment with the H1 model from the Chinese company Unitree Robotics: during testing, the 5-foot-11-inch android suddenly lunged out of control, nearly striking two nearby workers. An engineer acted quickly to regain control of the machine, and no one was injured.

    Cause of the Outburst

    The precise reason for the robot’s sudden behavior hasn’t been officially determined yet, but a programming mistake is believed to be the culprit. Experts emphasize that there’s no indication the android acted with any autonomous intent; instead, it seems to be a common hazard in the development of self-operating systems. Nonetheless, this incident has ignited a lively discussion on social media.

    Reactions on Social Media

    The video spread rapidly on platforms such as X and Reddit, generating thousands of responses, many filled with humor and satire. Comments like “Skynet is online,” “Temu Optimus,” and “This is the beginning of the end” underline how much pop culture influences how people view humanoid robots. Beneath the humor, though, there’s a strong sense of worry. Numerous users highlighted the dangers that such failures could bring in critical areas like robotic surgery, where dependability is crucial.

    Calls for Better Protocols

    The incident has also led to demands for more defined emergency measures. “There should be a big red button like at gas stations,” one user proposed. Others criticized the developers, stating things like “There are no bad robots, only bad programmers,” highlighting the increasing call for accountability in AI and robotics development.
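
    As a toy rendering of that “big red button” idea, the sketch below shows a latching software guard that cuts actuator commands once a joint-velocity limit is exceeded. Every name and number is invented for the example; real humanoids combine software limits like this with a hardware kill switch.

    ```python
    MAX_JOINT_VEL = 2.0  # rad/s, illustrative safety limit

    class EStopGuard:
        """Latches into a tripped state the first time any joint exceeds the limit."""

        def __init__(self, max_vel: float = MAX_JOINT_VEL):
            self.max_vel = max_vel
            self.tripped = False

        def ok(self, joint_velocities) -> bool:
            if any(abs(v) > self.max_vel for v in joint_velocities):
                self.tripped = True  # stays tripped until a human resets it
            return not self.tripped

    guard = EStopGuard()
    for step, vels in enumerate([[0.5, 1.0], [1.8, 1.9], [2.6, 0.4], [0.1, 0.1]]):
        if not guard.ok(vels):
            print(f"E-stop tripped at step {step}: cutting actuator torque")
            break
    ```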

    This event is one of several concerning incidents involving humanoid robots. In February 2025, a robot at a festival in China headed toward a crowd before being stopped just in time. Incidents like these demonstrate that, even with significant progress in humanoid robotics, ensuring safety and reliability remains one of the field’s most urgent issues.

    Source:
    Link

  • Garmin Confirms New Features Will Be Behind Paywall

    Key Takeaways

    1. Garmin launched the Connect+ subscription service at $6.99/month or $69.99/year, while maintaining free access to current features in Garmin Connect.
    2. Connect+ offers tailored insights powered by AI, improved live tracking, and unique achievement badges.
    3. The service is seen as a key part of Garmin’s long-term fitness strategy, according to CEO Cliff Pemble.
    4. Future features may be exclusive to Connect+ subscribers, with positive feedback from early users noted.
    5. Upcoming updates are expected with the launch of the Forerunner 970, which will replace the Forerunner 965.


    Garmin introduced its Connect+ subscription service at the end of March, sparking some debate among users. The service is priced at $6.99 per month or $69.99 per year. Garmin has tried to calm concerns by stating that all current features and data within Garmin Connect will still be accessible for free.

    Connect+ Developments

    Recently, Garmin has provided more information about the future of Connect+ and how it plans to enhance the service. During the Q1 2025 earnings call, Garmin’s President and CEO, Cliff Pemble, shared some insights. He mentioned that Garmin Connect+ is a premium plan that offers tailored insights using artificial intelligence, improved live tracking, and unique achievement badges.

    Enhanced Health and Fitness Insights

    Connect+ aims to boost users’ understanding of their health and fitness through personalized Active Intelligence insights powered by AI. As noted by the5krunner and other observers, Pemble went into more detail, though these specific remarks are not included in Garmin’s earnings call PDF. Instead, they can be found in the linked video starting from 11:23.

    According to Pemble, Connect+ is a “long-term thing for us [Garmin]” and a “very important part” of their fitness strategy moving forward. He also suggested that upcoming features may be restricted to Connect+ subscribers, mentioning positive feedback from early users. While he did not specify what these paywalled features would be, he indicated they would relate to the company’s premium offerings. The first signs of these updates are likely to arrive with the launch of the Forerunner 970, expected to replace the Forerunner 965 (currently priced at $499.99 on Amazon) later this season.

    Source:
    Link

  • Apple Exec Warns: iPhone May Become Obsolete Due to AI

    Key Takeaways

    1. Eddy Cue’s remark about a drop in search volume on Safari indicates a potential shift in how iPhone users will engage with web search.
    2. Apple plans to integrate generative AI search tools into Safari, moving away from traditional Google search interfaces.
    3. Google’s dominance in search is being challenged by AI platforms like ChatGPT, impacting its market value.
    4. The iPhone’s role as a primary access point for search is changing, potentially affecting Google’s advertising revenue.
    5. The rise of AI may disrupt Google’s search monopoly more effectively than legal actions from regulators.


    At a pivotal moment in an important antitrust trial in Washington, D.C., Eddy Cue, Apple’s Senior Vice President of Services, shared a remark that could signal a significant change in the iPhone’s usual function in web search. “For the first time in 22 years, we’ve seen search volume drop on Safari,” he declared. Cue’s testimony aimed to support Apple’s multi-billion-dollar agreement with Google, but it also suggested a larger transition that might alter how users engage with the iPhone. The main catalyst behind this change is artificial intelligence.

    AI Integration in Safari

    Reports indicate that Apple is looking to incorporate generative AI search tools from providers such as OpenAI and Perplexity into Safari. This move could fundamentally transform how iPhone users search for information, shifting from the conventional Google search box to interfaces powered by AI. The trend has already begun influencing the market: shares of Google’s parent company Alphabet fell by more than 7% this week, wiping out about $150 billion in market capitalization.

    Changes in Revenue Dynamics

    Cue remarked in court, “It just seems crazy to me,” in reference to potential revenue losses linked to Google’s antitrust challenges, acknowledging that the true disruption may arise more from AI than from legal decisions. His statements mirror a rising sentiment within the industry. Google’s long-held supremacy in search is now being challenged by generative AI platforms. With ChatGPT reportedly managing over 1 billion search queries each week and Apple working on redesigning Safari with AI capabilities, the iPhone’s role as a gateway to Google’s ecosystem might be fading.

    The Redefinition of the iPhone

    “The iPhone’s role as the portal to search is being redefined,” stated Yory Wurmser, an analyst at eMarketer. “When your default gateway no longer leads to Google but to AI, the device becomes less of a command centre and more of a conduit.” This strategic shift by Apple could have extensive implications for the smartphone and advertising sectors. Analysts caution that changing Google’s default search position on iOS could destabilize Google’s advertising revenue and redirect billions in global advertising expenditure. As U.S. regulators deliberate on actions regarding default search engine agreements, Apple seems to be gearing up its platform for a future dominated by AI.

    At this moment, the iPhone continues to hold a dominant position. However, as Cue pointed out, the emergence of AI may achieve what courts cannot — diminish Google’s search monopoly and, in doing so, reshape the iPhone itself.

    Source:
    Link

  • Humanoid RoboCop: China Deploys Real-Life Police Robot

    Key Takeaways

    1. PM01 is a humanoid police robot designed to assist police officers with monitoring public areas and aiding tourists.
    2. The robot features advanced capabilities, including facial recognition and multilingual communication, but is not an independent law enforcement agent.
    3. Priced at approximately $14,000, PM01 aims for widespread deployment in various sectors, including education and retail, in addition to policing.
    4. The introduction of PM01 has sparked mixed reactions online, with some viewing it as dystopian or “scary,” while others see it as a positive step in technological advancement.
    5. The robot gained significant attention after a video demonstration, highlighting both its abilities and the public’s concerns about automation in society.


    Humanoid robots have already pulled off impressive feats such as Kung Fu routines and even finishing a half marathon, but now they might soon be seen in police uniforms. In Shenzhen, China, a humanoid police robot has begun its first patrol. Named PM01 and created by Engine AI, the robot was unveiled as part of a larger effort to integrate robotics into public safety measures. PM01 stands 4 feet 7 inches tall, weighs 88 pounds, and its main job is to assist police officers with tasks like monitoring public areas and aiding tourists.

    Supportive Functionality

    PM01 is not an independent law enforcement agent but a support tool meant to help police with their regular responsibilities. It wears a bright vest similar to that of human officers and is driven by industrial-grade actuators. It has a 320° rotating hip that allows it to scan its environment without moving its feet. With facial recognition capabilities and microphones that can understand both Mandarin and Cantonese, PM01 is able to greet people, report lost children, and communicate suspicious activities to a central command center. A video on YouTube shows the robot even doing a forward somersault, showcasing some flair in its abilities.

    Affordable Robotics

    The price tag for PM01 is approximately 14,000 US dollars, making it relatively budget-friendly and aimed at future widespread deployment in cities—not just for police use, but also in civilian contexts like education and retail.

    The PM01 was first revealed in a 40-second clip from the Chinese news agency CGTN on February 26, 2025. Initially, it didn’t attract much attention, but the discussion has since picked up significantly on platforms like YouTube and Reddit. The reactions on YouTube have been particularly striking, with many users labeling the robot as “scary,” “dystopian,” or even “sick.” Comparisons to RoboCop and the Terminator are frequently made, often paired with worries about dehumanization, a lack of empathy, and larger ethical concerns. Only a few see PM01 as a natural progression toward an ever-more automated world.

    Mixed Reactions on Reddit

    Conversely, Reddit users, a community that tends to embrace technology and innovation, have taken a more optimistic view. A post from April 5, 2025, gained notable attention for questioning whether the robot was real, sparked by an interaction with streamer IShowSpeed. Many users were astonished and impressed to find out that the robot was indeed real, especially since initial rumors had suggested it could be CGI.

    Source:
    Link

  • 44 New Earth-Like Exoplanet Candidates Discovered in Breakthrough

    Key Takeaways

    1. A research group from the University of Bern developed a machine learning model to identify planetary systems likely to host Earth-like exoplanets, achieving 99% accuracy.
    2. The model was trained using synthetic data from the “Bern Model of Planet Formation and Evolution.”
    3. After training, the model identified 44 planetary systems that may contain unknown Earth-like planets.
    4. The findings are crucial for upcoming space missions like ESA’s PLATO, which launches in 2026, to discover habitable exoplanets.
    5. The model will help select the best targets for future missions like LIFE, aimed at studying the atmospheres of distant planets for signs of life.


    A research group hailing from the University of Bern and the National Center of Competence in Research PlanetS has reached a key point in the quest for planets that may support life. Announced on April 9, 2025, this group has created a machine learning model that can accurately identify planetary systems likely to have Earth-like exoplanets. This advancement not only propels the search for habitable planets but also represents an exciting step toward finding alien life.

    Development of the AI Model

    The machine learning model was crafted under the direction of Dr. Jeanne Davoult during her doctoral studies at the University of Bern, with assistance from Prof. Dr. Yann Alibert and Romain Eltschinger at the Center for Space and Habitability (CSH). It underwent training using synthetic data produced by the acclaimed “Bern Model of Planet Formation and Evolution,” which mimics the physical mechanisms involved in forming planetary systems. The results are impressive: the model boasts a 99% accuracy rate in identifying systems that are very likely to host at least one Earth-like planet.

    Application to Real Data

    Once the training was complete, the model was tested on real observational data, leading to the identification of 44 planetary systems that might harbor previously unknown Earth-like planets. These results hold great importance for future space missions, including ESA’s PLATO and the proposed LIFE project, both of which aim to find and analyze Earth-like worlds.
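
    To make that two-stage workflow concrete, here is a minimal sketch of the train-on-synthetic, apply-to-real pattern using scikit-learn. The features, labels, and random-forest choice are illustrative assumptions; the Bern team’s actual model architecture and inputs are not detailed in the announcement.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in for synthetic systems from a formation model; columns might be
    # stellar mass, metallicity, innermost-planet period, and so on.
    X_synth = rng.normal(size=(10_000, 5))
    # Label: does the simulated system host an Earth-like planet? (toy rule)
    y_synth = (X_synth[:, 0] + 0.5 * X_synth[:, 2] > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X_synth, y_synth, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))

    # Applied to observed systems, the scores rank candidates for follow-up.
    X_real = rng.normal(size=(1_600, 5))  # placeholder for real survey data
    probs = clf.predict_proba(X_real)[:, 1]
    print("top candidates:", np.argsort(probs)[::-1][:10])
    ```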

    PLATO (PLAnetary Transits and Oscillations of stars), scheduled to launch in 2026, will employ the transit method along with asteroseismology to discover potentially habitable exoplanets, particularly focusing on those orbiting stars similar to our Sun. The best candidates chosen by PLATO will serve as the groundwork for future missions like LIFE (Large Interferometer For Exoplanets), which plans to study the atmospheres of distant planets through infrared spectroscopy and nulling interferometry in order to search for biosignatures such as water or methane. The novel machine learning model could significantly aid in pre-selecting the most viable targets, thus improving the effectiveness and success rates of these missions.

    Source:
    Link

  • Gemini AI Introduces Image Editing, Widget, and Kids’ Version

    Key Takeaways

    1. Enhanced memory and learning abilities for users in Google Gemini.
    2. New generative AI-based image editing features for modifying existing images.
    3. Introduction of a homescreen widget for easy access to the assistant.
    4. Multiple size options for the widget, following Material 3 design principles.
    5. Gemini will be available for users under 13, with parental controls and data privacy assurances.


    This year, Google Gemini is going to see some big upgrades in important areas. One of the main focus points will be its enhanced memory and learning abilities for users. Additionally, Gemini will introduce generative AI-based image editing features, a new widget, and a special version for kids.

    Improved Image Editing

    If you regularly use Gemini, you might know that the assistant can create images. Until now, however, you couldn’t ask it to modify the finished image. To get the result closer to what you originally envisioned, you had to re-run the prompt with some changes and hope the output wouldn’t stray too far from your expectations.

    Now, Gemini can utilize its AI-driven image creation tools to make edits to existing pictures. You can simply upload an image to Gemini and request changes using prompts. The assistant can add, swap, or take away elements, including backgrounds. This feature is great for adjusting both AI-created images as well as other photos from your collection.
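
    From a developer’s perspective, prompt-based editing of an existing image looks roughly like the sketch below, which uses Google’s google-genai Python SDK to send a source image plus an instruction and save the returned result. The model id and response handling reflect the public API at the time of writing and should be treated as assumptions; the consumer Gemini app exposes this feature without any code.

    ```python
    from io import BytesIO

    from google import genai
    from google.genai import types
    from PIL import Image

    client = genai.Client(api_key="YOUR_API_KEY")
    source = Image.open("portrait.png")

    response = client.models.generate_content(
        model="gemini-2.0-flash-exp-image-generation",  # assumed model id; check current docs
        contents=["Replace the background with a beach at sunset", source],
        config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
    )
    # The edited image comes back as inline data alongside any text parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save("edited.png")
    ```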

    New Widget for Easy Access

    Google is also launching a Gemini homescreen widget that adheres to the Material 3 design principles. This means it will adapt to the main color scheme of your icons and wallpaper. There are different size options available, with 1×1 being the smallest and 5×3 being the largest. The widget serves as an alternative way to access the assistant with just a tap, letting you enable Live mode, activate the camera, or directly upload files.

    Kid-Friendly Features

    Lastly, Gemini will be available for accounts belonging to users under 13 years old. Parents with accounts that have parental controls already set are receiving emails informing them that their children can now access the AI assistant through Google Family Link. Google promises not to use children’s data to train its AI systems. The email also encourages parents to discuss with their kids that Gemini is not a human being.

    Source:
    Link

  • Elon Musk’s Grok AI Launching on Microsoft Azure AI Foundry

    Key Takeaways

    1. Microsoft is working on integrating Elon Musk’s Grok AI model into the Azure AI Foundry for software developers.
    2. Grok will be accessible via Microsoft’s toolkit and hosted on its servers, so developers won’t need to train or run the model themselves.
    3. This integration is part of Microsoft’s strategy to offer diverse AI models and expand its AI platform.
    4. Microsoft is exploring multiple AI models, including those from Meta and Anthropic, amid ongoing tensions with OpenAI.
    5. No formal partnership between Microsoft and xAI has been announced, but more details may emerge at the upcoming Microsoft Build conference on May 19.


    Over the past few weeks, engineers at Microsoft have been working on incorporating Elon Musk’s Grok AI model into the Azure AI Foundry, as reported by The Verge. It is thought that software developers utilizing Azure AI Foundry will eventually have the ability to integrate Grok into their applications, alongside having direct access to the AI itself.

    Integration Details

    If the model makes its way into Azure, it will be accessible via Microsoft’s toolkit and hosted on Microsoft’s servers, so developers won’t need to train or run it themselves. However, the extent of the integration remains unclear at this stage. Notably, Grok is not being trained on Microsoft’s servers; xAI opted to build its own infrastructure after ending a previous agreement with Oracle.
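
    For context, models hosted in Azure AI Foundry are typically called through the azure-ai-inference SDK, so a Grok integration would plausibly look like the sketch below. The endpoint and the deployment name “grok-3” are placeholders, since no Grok model id had been announced at the time of writing.

    ```python
    from azure.ai.inference import ChatCompletionsClient
    from azure.ai.inference.models import SystemMessage, UserMessage
    from azure.core.credentials import AzureKeyCredential

    client = ChatCompletionsClient(
        endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
        credential=AzureKeyCredential("<api-key>"),
    )

    response = client.complete(
        model="grok-3",  # hypothetical deployment name
        messages=[
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="Explain what Azure AI Foundry is in one paragraph."),
        ],
    )
    print(response.choices[0].message.content)
    ```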

    Broader Strategy

    This move aligns with Microsoft’s larger strategy to provide flexible access to a range of AI models. The company has been actively expanding its AI platform, incorporating as many externally developed models as it can. Earlier this year, they swiftly introduced DeepSeek’s R1 model. Moreover, Microsoft is already utilizing or exploring other AI models in internal tests, including those from Meta and Anthropic. Grok is a part of the same objective to diversify the offerings available on the Azure cloud service.

    Current Relationships

    The timing of this decision is particularly interesting, especially given the ongoing tensions between Microsoft and OpenAI. In addition, there are continuing legal disputes between Elon Musk and OpenAI, complicating matters further. While Microsoft maintains a close working relationship with OpenAI, it is also increasingly focusing on other artificial intelligence models.

    No formal announcements have been made yet regarding a partnership between Microsoft and xAI. However, with the Microsoft Build conference scheduled for May 19, there is a possibility that more details about this new collaboration and its specifics will be revealed at that event.

    Source:
    Link

  • Xiaomi Unveils MiMo-7B: First Open-Source LLM for Coding and Reasoning

    Key Takeaways

    1. Xiaomi has launched its first open-source AI system, MiMo-7B, designed for complex reasoning tasks and excelling in mathematics and code generation.
    2. MiMo-7B has 7 billion parameters and competes effectively with larger models from OpenAI and Alibaba, especially in mathematical reasoning and coding contests.
    3. The model’s training involved a comprehensive dataset of 200 billion reasoning tokens, using a multi-token prediction goal to enhance performance and reduce inference times.
    4. Post-training enhancements include unique algorithms for reinforcement learning and infrastructure improvements that significantly boost training and validation speeds.
    5. MiMo-7B is available in four public variants, with notable performance benchmarks in mathematics and coding, and it can be accessed on Hugging Face and GitHub.


    Xiaomi has quietly entered the large language model arena with its new MiMo-7B, marking its first open-source AI system for the public. Created by the recently formed Big Model Core Team, MiMo-7B is designed for complex reasoning tasks and, Xiaomi claims, outperforms rival models from OpenAI and Alibaba in mathematical reasoning and code generation.

    Model Specifications

    As indicated by its name, MiMo-7B has 7 billion parameters. Even though it is much smaller than many leading LLMs, Xiaomi asserts that it holds its own against larger reasoning-capable models such as OpenAI’s o1-mini and Alibaba’s Qwen-32B-Preview.

    Xiaomi MiMo-7B surpasses OpenAI and Alibaba’s models in mathematics reasoning (AIME 24-25) and code contests (LiveCodeBench v5).

    Training Details

    The foundation of MiMo-7B is a rigorous pre-training schedule. Xiaomi claims to have created a comprehensive dataset consisting of 200 billion reasoning tokens and has provided the model with a total of 25 trillion tokens through three phases of training.

    Instead of the conventional next-token prediction, the company opted for a multi-token prediction goal, which they say reduces inference times without compromising the quality of the outputs.
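
    To illustrate the general idea (this is a toy rendering, not Xiaomi’s implementation), the model below attaches one prediction head per future offset, so the hidden state at step t is trained to predict the tokens at t+1 and t+2 simultaneously; at inference time, the extra predictions can be used to draft several tokens per forward pass.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    VOCAB, D_MODEL, HORIZON = 1000, 64, 2  # predict t+1 and t+2

    class ToyMTP(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, D_MODEL)
            self.backbone = nn.GRU(D_MODEL, D_MODEL, batch_first=True)  # stand-in for a transformer
            self.heads = nn.ModuleList([nn.Linear(D_MODEL, VOCAB) for _ in range(HORIZON)])

        def forward(self, tokens):
            h, _ = self.backbone(self.embed(tokens))
            return [head(h) for head in self.heads]  # one logit tensor per offset

    model = ToyMTP()
    tokens = torch.randint(0, VOCAB, (4, 32))
    logits = model(tokens)

    loss = 0.0
    for k, lg in enumerate(logits, start=1):
        # Align positions: the prediction at step t targets the token at t+k.
        loss = loss + F.cross_entropy(
            lg[:, :-k].reshape(-1, VOCAB), tokens[:, k:].reshape(-1)
        )
    print((loss / HORIZON).item())
    ```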

    Post-Training Enhancements

    The post-training phase combines various reinforcement learning methods alongside infrastructure enhancements. Xiaomi developed a unique algorithm called Test Difficulty Driven Reward to mitigate the sparse reward challenges often seen in RL tasks involving intricate algorithms. Moreover, they introduced an Easy Data Re-Sampling technique to ensure stable training.

    On the infrastructure side, Xiaomi has created a Seamless Rollout system to minimize GPU downtime during both training and validation. According to their internal metrics, this results in a 2.29× increase in training speed and almost a 2× boost in validation performance. The rollout engine also supports inference methods like multi-token prediction in vLLM settings.

    Availability and Performance

    Now, MiMo-7B is open source with four public variants available:
    – Base: the unrefined, pre-trained model
    – SFT: a version refined with supervised data
    – RL-Zero: a variant enhanced through reinforcement learning starting from the base
    – RL: a more refined model based on SFT, claimed to offer the best accuracy

    Xiaomi has also shared benchmarks to back up its claims, at least on paper. In mathematics, the MiMo-7B-RL variant is said to achieve 95.8% on MATH-500 and over 68% on the 2024 AIME dataset. Regarding code, it scores 57.8% on LiveCodeBench v5 and nearly 50% on version 6. Other general knowledge tasks like DROP, MMLU-Pro, and GPQA are also included, though scores hover in the mid-to-high 50s—respectable for a model with 7 billion parameters, yet not groundbreaking.

    MiMo-7B can now be accessed on Hugging Face under an open-source license, and all relevant documentation and model checkpoints are available on GitHub.
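
    Loading the released weights should follow the standard transformers pattern sketched below. The repo id “XiaomiMiMo/MiMo-7B-RL” is assumed from the public Hugging Face listing and is worth verifying, as is whether trust_remote_code is required for the model’s custom code.

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "XiaomiMiMo/MiMo-7B-RL"  # assumed repo id; verify on Hugging Face
    tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        repo, trust_remote_code=True, device_map="auto"
    )

    prompt = "Prove that the sum of two even integers is even."
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    print(tok.decode(out[0], skip_special_tokens=True))
    ```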