Tag: Google I/O 2025

  • Garmin Health Connect Integration Coming for Smartwatch Users

    Key Takeaways

    1. Garmin Connect will integrate with Android’s Health Connect in June 2025, allowing data synchronization across devices.
    2. At Google I/O 2025, Google reported a 50% increase in Health Connect’s active users and announced collaborations with multiple fitness apps.
    3. New features for Health Connect include Medical Records APIs and two new read capabilities: Background Reads and History Reads.
    4. Users will have control over data sharing through a smartphone dashboard, enabling them to manage what data is shared and delete it if needed.
    5. Beyond the June 2025 timeframe, Garmin and Google have not yet announced an exact launch date for the integration.


    Garmin Connect is set to join forces with Android’s Health Connect soon. This Google service lets you easily share your health information across different apps and devices, giving you a wider perspective on your health and helping you spot larger trends.

    Announcement at Google I/O

    During the Google I/O 2025 developer conference, it was announced that Garmin would become part of the Health Connect network in June 2025. This development will enable users of Garmin smartwatches to synchronize their workout and health data with other products within the Android ecosystem. Some apps that are already connected to Health Connect include AllTrails, Dexcom G7, Peloton, Oura, and Withings; you can find the complete list on the Google Play Store.

    User Growth and New Features

    Google highlighted that Health Connect has seen a 50% increase in active users over the past six months and pointed to recent collaborations with Flo, Runna, and Mi Fitness. It also shared details about new features for Health Connect, such as Medical Records APIs that work with the new Fitbit medical record navigator. Two new read capabilities are being introduced as well: Background Reads, which help developers deliver timely insights to users, and History Reads, which surface long-term trends in health data.

    User Control Over Data Sharing

    Android users will have the ability to control what data is shared between apps through a dashboard on their smartphones. From this dashboard, they can also delete data or decide to prioritize specific data sources over others. However, Garmin and Google have not yet provided a specific date for when the integration will officially launch.
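    The dashboard’s ability to prioritize one data source over another can be pictured as a simple resolution rule: when two connected apps report the same metric for the same day, the higher-ranked source wins. The Python sketch below is a hypothetical illustration only; the function names and data shapes are invented and are not part of the actual Health Connect API.

```python
# Hypothetical sketch of Health Connect-style source prioritization.
# Names and data shapes are invented for illustration; they are not
# part of the actual Health Connect API.

def resolve_daily_steps(readings, priority):
    """Pick one step count per day, preferring higher-priority sources.

    readings: list of (date, source, steps) tuples from connected apps.
    priority: list of source names, highest priority first.
    """
    rank = {source: i for i, source in enumerate(priority)}
    best = {}  # date -> (source, steps) for the highest-ranked source seen
    for date, source, steps in readings:
        current = best.get(date)
        if current is None or rank.get(source, len(priority)) < rank.get(current[0], len(priority)):
            best[date] = (source, steps)
    return {date: steps for date, (source, steps) in best.items()}

readings = [
    ("2025-06-01", "garmin_connect", 9500),
    ("2025-06-01", "phone_sensor", 7200),
    ("2025-06-02", "phone_sensor", 8100),
]
# The user ranked the watch above the phone in the dashboard.
resolved = resolve_daily_steps(readings, ["garmin_connect", "phone_sensor"])
print(resolved)  # {'2025-06-01': 9500, '2025-06-02': 8100}
```

    On June 1 both sources reported steps, so the Garmin reading wins; on June 2 only the phone reported, so its value is used.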

    Source:
    Link



  • Google Launches Beam: 3D Video Calling with Real-Time Translation

    Key Takeaways

    1. Google introduced Beam, a 3D video calling system that simulates in-person conversations without the need for headsets or glasses, building on Project Starline.
    2. Beam uses advanced technology, including six high-definition cameras, light-field rendering, and head tracking at 60 frames per second, to create lifelike video calls.
    3. The system maintains eye contact, recognizes gestures, and captures facial expressions, enhancing the sense of presence during calls.
    4. Beam features real-time AI translation, supporting multiple languages while preserving the speaker’s tone and voice, initially available for English and Spanish.
    5. Google plans to collaborate with HP to launch Beam by late 2025, targeting both business and broader communication uses, with integration into Google Meet and other services.


    At the I/O 2025 event, Google introduced Beam, a cutting-edge 3D video calling system designed to mimic in-person conversations without headsets or glasses. The platform builds on the earlier Project Starline.

    Origins of Beam

    Project Starline was a trial telepresence initiative that Google launched in 2021, aimed at creating realistic, 3D video calls that would give the impression that the person on the other end was actually in the same room with you.

    Building on the ideas of Project Starline, Beam merges AI-powered depth sensing technology with a compact light field display to produce lifelike, volumetric images of participants during calls in real time.

    Advanced Technology

    Beam is equipped with six high-definition cameras, light-field rendering, and precise head tracking at 60 frames per second, all to create a genuine sense of presence. The system preserves eye contact, conveys gestures, and captures facial expressions, with none of the discomfort of a headset.

    The initial model was quite large and cumbersome, but Beam has now been streamlined into a sleeker, market-ready product. Google is collaborating with HP to launch it by late 2025, with early adopters including Salesforce, Deloitte, and Duolingo.

    Exciting Features

    A notable new feature of Beam is its real-time AI translation. This is powered by Google’s Gemini models, allowing individuals to converse in various languages while keeping the unique tone and voice of the speaker. Currently, it supports English and Spanish, with plans to add Italian, German, and Portuguese soon. Google has also mentioned that this technology will be integrated into Google Meet to enhance collaboration across language differences.

    Sundar Pichai, the CEO of Google, stated that Beam is part of the company’s larger effort to make remote communication feel more intuitive and natural. Google aims to connect it with services like Zoom and eventually expand its use beyond just business settings.

    Source:
    Link


  • Google Gemini 2.5 Update: Agent Mode, Deep Think, and Tools

    Key Takeaways

    1. Gemini 2.5 Flash: Delivers improved reasoning and efficiency while using 20-30% fewer tokens; already available to all users in the Gemini app, with developer access starting May 20.

    2. Deep Think Mode: Focuses on enhanced reasoning capabilities for math, coding, and multimodal evaluations, currently being tested by select API users with further safety evaluations planned.

    3. Agent Mode for Subscribers: Allows Gemini Ultra subscribers to set goals, pulling information from live searches and Google applications through a split interface.

    4. Gemini Canvas Enhancements: Introduces a “Create” menu for generating webpages, infographics, quizzes, audio summaries, and personalized app frameworks.

    5. Deep Research Tool: Enables users to merge public information with private documents, enhancing research capabilities for students and professionals.


    At I/O 2025, Google unveiled a significant set of updates to its Gemini application. The company revealed new features such as Gemini 2.5 Flash, an enhanced Deep Think mode, a ChatGPT-like Agent Mode, and support for quiz creation, audio summaries, and document-based research, all integrated into a single platform.

    Key Upgrade: Gemini 2.5 Flash

    The most notable enhancement at the moment is Gemini 2.5 Flash, which offers improved reasoning abilities and greater efficiency, using 20-30% fewer tokens. It is already available in the Gemini app for all users, and developers and enterprise clients can access an updated version via Google AI Studio and Vertex AI starting May 20. The complete Gemini 2.5 Pro model is anticipated in early June.
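    For developers, access through Google AI Studio typically goes via the google-genai Python SDK. The snippet below is a minimal sketch, assuming `pip install google-genai` and an API key in the GEMINI_API_KEY environment variable; the exact model identifier available to you may differ depending on rollout.

```python
# Minimal sketch of calling Gemini 2.5 Flash with the google-genai SDK.
# Assumes: `pip install google-genai` and a GEMINI_API_KEY environment
# variable; the model identifier may differ depending on rollout.
import os

def ask_gemini(prompt: str, model: str = "gemini-2.5-flash") -> str:
    from google import genai  # imported lazily so the sketch loads without the SDK
    client = genai.Client()   # reads GEMINI_API_KEY from the environment
    response = client.models.generate_content(model=model, contents=prompt)
    return response.text

if os.environ.get("GEMINI_API_KEY"):
    print(ask_gemini("Summarize the Gemini 2.5 Flash announcement in one sentence."))
```

    The same model is also reachable through Vertex AI for enterprise clients; only the client configuration differs.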

    Deep Think Mode and Its Features

    In addition, Gemini 2.5 Deep Think is being promoted as an upgrade centered on reasoning, demonstrating better results in areas like math (USAMO 2025), coding (LiveCodeBench v6), and multimodal evaluations (MMMU). Google mentions this mode will undergo additional safety evaluations before being made public, but it is already being tested by a select group of Gemini API users.

    New Agent Mode for Subscribers

    If you are a Gemini Ultra subscriber, you will soon gain access to Agent Mode, which is powered by something known as Project Mariner. In this mode, you simply tell Gemini your goal, and it pulls information from live web searches, your Google applications, and external resources to accomplish it. This function features a split display where the chat interface is on the left, while a browser-like panel manages content on the right.

    On the creative front, Gemini Canvas has introduced a new “Create” menu, allowing users to generate complete webpages, infographics, quizzes, or audio summaries directly through chat. You can also describe a personalized app and let Gemini create a starting framework. Additionally, for students or working professionals, a Deep Research tool now enables you to merge public information with private PDFs, Google Drive documents, and market insights.

    Source:
    Link