Category: Artificial intelligence

  • Revamped Siri Launching This Fall with iOS 19 Update

    Revamped Siri Launching This Fall with iOS 19 Update

    Key Takeaways

    1. Enhanced Siri updates have been delayed but are expected to launch this fall with iOS 19.
    2. The delay is linked to changes in Apple’s management team overseeing Siri’s development.
    3. A new feature allowing users to edit and send photos is confirmed, but other features remain uncertain.
    4. Apple aims to provide a more personalized Siri experience, with improvements anticipated in the coming year.
    5. The new Siri is designed to address ongoing user concerns about its performance and response accuracy.


    Apple’s long-anticipated Siri updates have missed their initial timeline. According to a recent report, the enhanced Siri is set to launch this fall, coinciding with the expected release of iOS 19. The delay appears to stem from significant changes within Apple’s management team responsible for integrating Apple Intelligence features into Siri. The shift in leadership seems to have put the project back on track, positioning the company to deliver on its commitments.

    Report Details

    The information originates from The New York Times, as reported by 9to5Mac. It suggests that Apple intends to introduce a refreshed Siri in the fall, which will include a feature that allows users to edit and send photos to friends. This insight is attributed to three sources familiar with the company’s strategy. However, since the report highlights only this specific feature, it remains uncertain whether all the other promised Apple Intelligence features for Siri will also debut in the fall.

    Anticipation for More Features

    It seems logical that Apple would not limit itself to a single feature update for Siri, especially considering all the enhancements that were promised last year. Earlier in March, Apple indicated that the more personalized Siri experience would take longer than initially anticipated and is expected to arrive “in the coming year.” While there was uncertainty around the update’s timeline, it now appears that the new features may launch alongside iOS 19, anticipated for September or shortly thereafter, still within this calendar year.

    Addressing User Concerns

    There have been reports about Siri struggling to answer straightforward questions, which has frustrated iPhone users for an extended period. The new Siri experience is designed to tackle these issues and provide substantial improvements to its overall functionality.

    Source:
    Link

  • AI Shopping App Founder Charged with Fraud for Using Humans Instead of AI

    AI Shopping App Founder Charged with Fraud for Using Humans Instead of AI

    Key Takeaways

    1. Albert Saniger, creator of the AI shopping app Nate, was charged with fraud by the DOJ for misleading investors about the app’s capabilities.
    2. Nate aimed to provide a seamless one-click checkout experience across multiple e-commerce platforms, but did not function as advertised.
    3. Transactions in Nate were often manually processed by contractors rather than being handled by AI, contradicting claims of AI-driven operations.
    4. Saniger allegedly promoted Nate as AI-driven to secure $38 million in investments, despite knowing the app’s limitations.
    5. In early 2023, Saniger began selling off Nate’s assets to cover expenses, leading to significant losses for investors.


    Turns out, the creator of a shopping app that uses artificial intelligence wasn’t very smart after all.

    Albert Saniger, who launched an AI shopping application called Nate, was charged with fraud by the United States Department of Justice (DOJ) on Wednesday. According to the DOJ, Saniger “took part in a plan to deceive investors and potential investors in his start-up Nate, Inc. by making significantly false and misleading claims about the company’s proprietary artificial intelligence (“AI”) and its operational functions.”

    The Purpose of Nate

    Nate is an application that was created in 2018 to provide a single checkout experience across various e-commerce platforms, enabling users to make purchases with just one click, regardless of the retailer. However, the DOJ contends that the app did not operate as advertised; instead, “transactions processed through nate [sic] were sometimes manually handled by contractors in the Philippines and Romania, and at other times, they were completed by bots.” The DOJ claims that the actual use of AI in Nate for transaction processing was “essentially 0.”

    Misleading Investors

    The DOJ also argues that Saniger was completely aware that Nate needed manual intervention to work properly but kept promoting the app as “AI-driven” to attract investments, which included a whopping $38 million Series A funding round in 2021.

    Finally, the DOJ claims that Saniger began selling off Nate’s assets in early 2023 to cover expenses after running out of cash, a move that allegedly left investors facing “near total losses.”

    For more details, you can access the complete indictment filed by the DOJ through the link provided in the sources section below.

    Source:
    Link

  • Pangram Launches Enhanced AI Writing Detector for Schools

    Pangram Launches Enhanced AI Writing Detector for Schools

    Key Takeaways

    1. Pangram AI writing detector helps educators identify AI-generated essays that students may submit as their own work.
    2. The updated detector has a 93% accuracy rate in identifying AI-generated content from various chatbot sources.
    3. It features a low false positive rate, ensuring most flagged submissions are genuinely cases of academic dishonesty.
    4. The software can highlight specific text parts likely written by AI versus human, helping educators analyze mixed submissions.
    5. Pangram offers a split-screen display for comparing submitted texts with AI-generated outputs, aiding in thorough analysis.


    The updated Pangram AI writing detector helps stop students from passing off AI-created essays as their own to earn easy grades.

    AI tools like OpenAI’s ChatGPT have become popular among students looking for success with minimal effort. These AI chatbots are capable of researching various topics and producing well-crafted essays. Moreover, new AI humanizer tools can adjust these texts to sound more like they are written by a person. This development complicates the task of spotting plagiarism, leaving educators to depend on a variety of signals to recognize AI-authored writing.

    Tackling the Challenge

    Pangram has addressed this issue by enhancing its AI writing detector to recognize texts produced by chatbots and subsequently modified by humanizer tools to appear more human-like. It boasts a 93% accuracy rate in determining which chatbot family—such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, or xAI’s Grok—generated the content.

    A Teacher’s Ally

    For educators, a crucial aspect is the low false positive rate, indicating that most flagged submissions are indeed cheating cases. The software can pinpoint which parts of a text are likely penned by a human or an AI, aiding in uncovering situations where students blend human and AI text to evade detection. Pangram features a split-screen display, allowing users to juxtapose submitted texts with AI-generated outputs from the selected chatbot for a thorough analysis in tricky scenarios.

    In a different context, professionals who need to use AI at work might find Microsoft Copilot useful; a guide to the tool is available as a book on Amazon. Copilot can boost productivity when responding to sales queries, crafting marketing pitches, and handling various office chores.

    Source:
    Link


  • Google Workspace Flows and Gemini Features for Docs, Sheets, Meet

    Google Workspace Flows and Gemini Features for Docs, Sheets, Meet

    Key Takeaways

    1. Google is enhancing Workspace tools with AI features, integrating Gemini into Docs, Sheets, Meet, and Chat.
    2. Workspace Flows automates multi-step processes using custom AI agents called Gems to streamline tasks like analyzing customer feedback.
    3. New Google Docs features include audio readouts and podcast-style summaries, along with the “Help me refine” tool for writing improvement.
    4. Sheets will gain an on-demand analyst feature called “Help me analyze,” providing insights, trend identification, and data visualization.
    5. Improvements are also being made in Chat and the new Google Vids app, allowing alpha customers to create original video clips.


    Google’s Cloud Next 2025 event is currently taking place, and the company has unveiled a range of enhancements to its Workspace tools, incorporating AI features. To assist businesses in achieving better outcomes, Gemini is being further embedded into Docs, Sheets, Meet, Chat, and other applications. A new method to automate tasks across these tools has also been introduced. This feature, called Workspace Flows, is being rolled out for customers participating in the alpha program.

    Automating Multi-step Processes

    Workspace Flows employs Gems, which are custom AI agents built on Gemini, to automate multi-step tasks. This allows businesses to conduct research, analyze data, and create the necessary content simply by using text prompts. In a demonstration, Google showed how Workspace Flows could manage incoming customer feedback: the flow is triggered by a “When a form response comes in” step, an “Ask Gemini” step directed by a text prompt classifies the feedback by severity, and a final “Post a message in a space” step routes it accordingly.
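
    Workspace Flows itself is a no-code tool, but the pattern it automates is easy to picture in code. The sketch below is a hypothetical Python outline of that same trigger, classify, and route loop; fetch_new_form_responses, ask_gemini, and post_to_space are placeholder helpers standing in for the form trigger, the Gemini call, and the Chat message step, not real Google Workspace APIs.

    ```python
    # Hypothetical outline of the feedback-triage flow described above.
    # The three helpers are placeholders, not real Google Workspace APIs.

    def fetch_new_form_responses() -> list[str]:
        """Stand-in for the 'When a form response comes in' trigger."""
        return [
            "The app crashes every time I open settings.",
            "It would be nice to have a dark mode.",
        ]

    def ask_gemini(prompt: str) -> str:
        """Stand-in for the 'Ask Gemini' step, directed by a text prompt."""
        # A real flow would call the Gemini model here; this fakes a severity label.
        return "high" if "crash" in prompt.lower() else "low"

    def post_to_space(space: str, message: str) -> None:
        """Stand-in for the 'Post a message in a space' step."""
        print(f"[{space}] {message}")

    def run_flow() -> None:
        for feedback in fetch_new_form_responses():
            severity = ask_gemini(
                f"Classify the severity of this customer feedback as high or low: {feedback}"
            )
            space = "urgent-feedback" if severity == "high" else "feedback-backlog"
            post_to_space(space, f"({severity}) {feedback}")

    if __name__ == "__main__":
        run_flow()
    ```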

    Enhanced Customer Interaction

    This AI agent can also be trained to provide appropriate responses to customers. It will review customer feedback and documentation to offer solutions to customer inquiries effectively.

    Workspace Flows is accessible to those in the Gemini for Google Workspace Alpha program.

    New Features Coming to Google Docs

    Google Docs will soon feature full audio readouts for documents and podcast-style summaries. This aims to help users check their scripts or other written material by having a voice read them aloud. This feature is expected to be available in alpha in the upcoming weeks.

    The Help me refine feature is designed to offer “thoughtful suggestions on how to strengthen your argument, improve the structure of your piece, or make your key points clearer,” among other enhancements. The goal is not just to rewrite text but to provide a writing coach to help users enhance their abilities. This feature will also launch in alpha later this quarter.

    Advanced Data Analysis in Sheets

    In Sheets, the new “Help me analyze” feature will act as an on-demand analyst, delivering insights based on a spreadsheet’s data. It can identify trends, generate charts for improved data visualization, and recommend subsequent steps. Help me analyze will be integrated into Sheets later this year.

    Smarter Summaries in Meet

    Later this quarter, Gemini in Google Meet will be able to offer quick summaries of parts users may have missed during meetings, clarify specific topics, generate recaps, and help organize thoughts so users can present their arguments more effectively.

    In addition to these enhancements, improvements to Gemini are also being implemented in Chat and the new Google Vids app. Alpha customers will soon have the ability to create original video clips directly within the Vids app. These features are powered by the Veo 2 model.

    Source:
    Link

  • NotebookLM App for iOS and Android Launching Soon

    NotebookLM App for iOS and Android Launching Soon

    Key Takeaways

    1. Google introduced NotebookLM, an AI tool for summarizing documents and answering questions.
    2. A mobile app for NotebookLM is expected, but no release date or platforms have been announced.
    3. The app may have limited initial access in certain countries before a broader rollout.
    4. The Audio Overview feature will soon support multiple languages and explore various formats.
    5. Excitement is growing among users for the upcoming mobile app and enhanced features.


    It has been nearly two years since Google introduced NotebookLM, a tool that utilizes AI to facilitate learning. This innovative AI Notebook allows users to upload their documents, which it can then summarize or respond to questions about. Additionally, Google enhanced the tool last year by adding a feature called Audio Overview, enabling NotebookLM to convert uploaded documents into an audio dialogue resembling a podcast with two hosts. Despite its usefulness, NotebookLM is currently only accessible as a web application, though changes are expected soon.

    Upcoming Mobile App Announcement

    Recently, a post on X from the official NotebookLM account indicated that a mobile app is on the horizon. This announcement was made as a quoted reply to a user who praised NotebookLM for its effectiveness as a study aid. However, the post did not specify when the app will be released or which platforms will receive it first.

    Limited Initial Release

    Once the NotebookLM app becomes available, it may initially have limited access in certain countries before expanding to a broader audience. Despite this potential limitation, the upcoming mobile app has generated excitement among users who find the tool beneficial.

    Multilingual and Enhanced Features

    In other news, NotebookLM has announced that the Audio Overview feature will soon support languages beyond English. They’re also exploring various hosts, voices, and show formats. However, there is no specified timeline for when these new features will be implemented.

    Source:
    Link

  • Samsung Launches Real-Time Visual AI for Galaxy S25 Series Today

    Samsung Launches Real-Time Visual AI for Galaxy S25 Series Today

    Key Takeaways

    1. Samsung launched the Visual AI feature for the Galaxy S25 series on April 7, 2025, allowing real-time visual chats for free.
    2. Users can easily interact with the AI by pressing and holding the side button to get suggestions based on what they see through their camera.
    3. Gemini Live offers real-time assistance, enabling users to share their screens for tailored advice while shopping or making decisions.
    4. The new feature enhances communication with the AI, moving beyond text-based interactions to a more effective visual experience.
    5. This upgrade raises expectations for smartphone capabilities, with additional updates for Galaxy S24 series and 6th generation foldables also being released.


    Samsung has unveiled a new AI enhancement for the Galaxy S25 series, launching today, April 7, 2025. This upgrade is named Visual AI and introduces real-time visual chats to your device at no cost. Yes, S25 users can now share what they see with the AI and have discussions about it—all for free, at least for the time being.

    Easy Interaction with AI

    This visual feature is super useful for quick suggestions. To activate it, just press and hold the side button, and Gemini will pop up. You can share your camera’s view and ask questions about what you observe. Need help choosing an outfit? Just point your phone and ask your Galaxy S25 for advice. Gemini Live might respond with, “That shirt looks great—try it with these jeans.” It’s a valuable tool when you’re torn between choices, saving you time and simplifying decision-making. You can also seek tips on organizing your closet for better space management.

    Real-Time Assistance

    Gemini Live can access your screen in real time. As the company highlighted, users can “share their screen while shopping online for tailored style suggestions.” It’s like having a friend who’s always ready to assist and can see exactly what you’re looking at. Unlike the older text-based chat, this visual interaction makes communication between users and the AI far more effective and efficient.

    Setting New Standards

    “This is a bold move,” says Jay Kim, Executive Vice President of Samsung’s Mobile eXperience division, referring to the collaboration with Google to make your phone smarter for everyday use. Starting today, every S25 user will benefit from this visual AI upgrade, raising expectations for smartphone capabilities. For those with the Galaxy S24 series or the 6th generation foldables, One UI 7 is also rolling out today.

  • Virtual Production and AI Transforming Film and Theatre

    Virtual Production and AI Transforming Film and Theatre

    Key Takeaways

    1. Virtual Production Revolution: Virtual production, exemplified by The Mandalorian, allows filmmakers to shoot on digital sets with real-time adjustments, making the process more adaptable and economical.

    2. Stagecraft Technology: Stagecraft merges high-quality digital environments with physical sets, enabling quick scene changes and reducing the need for location shoots and expensive physical sets.

    3. AI in Script Evaluation: AI tools like ScriptBook assess scripts based on story elements rather than cast or director data, emphasizing the importance of narrative in predicting a script’s success.

    4. Post-Production Enhancements: AI technologies, such as Adobe’s Sensei, automate video editing tasks, allowing editors to focus on storytelling while speeding up the editing process.

    5. Immersive Theatre Experiences: The integration of AR and VR in live performances, as seen in the Royal Shakespeare Company’s productions, creates interactive and immersive theatre experiences for audiences.


    Real-time technology is changing the entertainment world, impacting both film and theatre in ways that were once thought impossible. Leading this change is virtual production, a technique that enables filmmakers to shoot on digital sets with immediate adjustments. This provides a more adaptable and economical option compared to conventional filming methods. The technology, most notably showcased in the Disney+ series The Mandalorian, is altering the way stories are presented on screen.

    The Impact of Stagecraft

    The Mandalorian made a mark by using Stagecraft, a state-of-the-art virtual production system created by Industrial Light & Magic (ILM). Stagecraft brings together expansive, high-quality digital environments and actual sets, allowing filmmakers to create scenes that seem to occur in far-off places without leaving the studio. This innovative tech allows for quick changes to virtual settings, greatly minimizing the need for location shoots or costly physical sets.

    Jon Favreau, who created the series, shared that his work on The Jungle Book and The Lion King led him to believe “there had been breakthroughs in game-engine technology that were the key to solving this problem.” Favreau saw that the real-time features of these tools could significantly alter the filmmaking method by enabling directors to visualize and modify their scenes instantly on set. This results in a smooth blend of physical and digital components, eliminating many limitations associated with traditional production.

    AI’s Role in Filmmaking

    Artificial intelligence is also transforming the creative side of filmmaking. According to The Guardian, ScriptBook’s AI mainly evaluates scripts without depending on cast or director data. “Most people believe that cast is everything, but we’ve learned that the story has the highest predictive value,” said Nadira Azermai, the company’s founder. The software can evaluate over 400 aspects in just six minutes, including emotional analysis, journeys of protagonists and antagonists, structural elements like the three-act format, and audience appeal. Even though cast details can improve precision, Azermai notes that “if the computer says no to begin with, no additional information will change it to a yes.”

    Advancements in Post-Production

    In post-production, AI is also breaking new ground. Adobe’s Sensei AI, part of Adobe Premiere Pro, helps video editors by automating tasks such as trimming and organizing footage according to mood, pacing and rhythm. AI takes care of much of the technical workload, allowing editors to concentrate on storytelling and creative choices. This speeding up of the editing process lets films be finished on tighter schedules without sacrificing quality.

    Real-Time Technology in Theatre

    Theatre has also jumped on the real-time technology bandwagon. Companies are increasingly using digital innovations like augmented reality (AR) and virtual reality (VR) to enrich live performances and craft immersive experiences. The Royal Shakespeare Company (RSC), for example, used AR in its 2016 production of The Tempest, working with Intel and Imaginarium Studios. The show featured a vibrant digital backdrop that reacted to the actors’ movements in real time, including a digitally created storm that transformed the stage experience for the audience.

    New Formats in Theatre

    The innovation has also spread into immersive formats. In 2021, the Royal Shakespeare Company introduced Dream, a digital theatre initiative based on A Midsummer Night’s Dream. This production utilized motion capture and gaming technology to create a computer-generated forest. Actors performed within a designated space, and the system translated their movements onto avatars instantly. Audiences participated remotely and helped shape the environment. This project opened up new avenues for experiencing theatre.

    Lastly, live-streaming remains a strong tool for broadening access to theatre. The UK’s National Theatre has launched a new worldwide streaming platform, making its celebrated productions available to viewers across the globe. This initiative not only creates new revenue streams but also democratizes access to high-quality theatre, allowing audiences to enjoy compelling performances beyond the physical stage.

    Source:
    Link


     

  • Microsoft Boosts Copilot AI with Personalized Podcasts and More

    Microsoft Boosts Copilot AI with Personalized Podcasts and More

    Key Takeaways

    1. Customized Podcasts: Copilot can generate personalized podcasts based on user interests, enhancing entertainment and learning through audio.

    2. Deep Research Capability: The AI can perform in-depth research and tackle complex questions step-by-step, similar to human reasoning, using various web sources.

    3. Real-Time Assistance: Copilot offers real-time help by observing users’ environments, aiding Windows users with desktop management, file organization, and task completion.

    4. Memory and Personalization: With user permission, Copilot remembers past conversations and personalizes experiences, offering summaries, reminders, and suggestions.

    5. Task Management Features: The new Actions feature allows Copilot to assist with booking flights, making reservations, and other tasks to simplify users’ lives.


    Microsoft has made enhancements to its Copilot AI chatbot, adding new features that boost its capability to respond to inquiries, entertain users, and retain all the information it has discussed.

    Customized Podcasts and Deep Research

    Now, Copilot can generate tailored podcasts based on users’ personal interests and topics, which is great when someone wants to be entertained or learn through audio. The AI is also equipped to perform in-depth research, meaning it can handle complex questions by working through them step by step, much as a human might. It draws on information from various sources across the web and combines the findings into useful reports.

    Real-Time Assistance for Users

    For those using mobile devices, the AI can observe the user’s environment in real-time, helping to answer questions. Windows users get an extra benefit as the AI can view their desktop, assisting them in adjusting settings, managing files, searching for information, and engaging with content to help users complete tasks and projects.

    Remembering Conversations and Personalization

    Copilot is now capable of remembering every chat and interaction, with the user’s permission, along with all relevant information. This feature enables the AI to create pages that summarize personal thoughts and notes on various discussions and projects. Users are also able to personalize the AI’s avatar for a more customized experience.

    Moreover, the chatbot can automatically provide reminders and suggestions based on what it has learned about the user’s life. This includes the ability to search for deals on items that users wish to purchase. The new Actions feature allows the AI to handle tasks like booking flights and making dinner reservations for the user.

    Microsoft Copilot is available for free on the Windows 11 operating system, the Edge web browser, smartphone apps, and the web. Readers who are unfamiliar with Copilot can check out a guide available on Amazon before trying the AI chatbot on any Windows 11 computer, such as the Surface Laptop Copilot+ PC, which is also sold on Amazon.

    Source:
    Link

  • Microsoft 365 Copilot Introduces AI Agents for Business Tasks

    Microsoft 365 Copilot Introduces AI Agents for Business Tasks

    Key Takeaways

    1. Microsoft introduced two new AI reasoning agents, Analyst and Researcher, to its 365 Copilot service for business users.
    2. These AI agents can perform tasks similar to entry-level data analysts and business consultants, offering 24/7 virtual assistance.
    3. Microsoft 365 Copilot enhances efficiency by accessing corporate emails, data, and approved external sources to generate reports and analyze metrics.
    4. The Analyst agent uses a customized version of OpenAI o3-mini to evaluate data and create reports, while the Researcher agent focuses on advanced research and search functionalities.
    5. Access to these AI agents requires a Microsoft 365 Copilot subscription, which costs $30 per user per month in addition to a qualifying Microsoft 365 plan.


    Microsoft has introduced two fresh AI reasoning agents to its 365 Copilot service – Analyst and Researcher. These agents carry out tasks akin to those performed by entry-level data analysts and business consultants, leveraging cutting-edge large language models (LLMs).

    Benefits for Business Users

    Business users can take advantage of virtual assistants that are accessible around the clock, which could potentially perform tasks quicker and at a reduced cost compared to human assistants. However, it’s important to note that users must still verify the work produced by these AI agents.

    Features of Microsoft 365 Copilot

    Microsoft 365 Copilot is a service designed for businesses that employs AI across its features to enhance efficiency. This service has the capability to access corporate emails, data, and files, as well as approved external data sources. The new AI agents can tap into all this information to respond to prompts, assess company metrics, and create reports.

    Details on the AI Agents

    The Analyst AI agent uses a tailored version of OpenAI o3-mini and can write and run Python code to evaluate corporate data. It can summarize its findings in reports, complete with figures and visuals. Each step is logged, and all references are linked, enabling users to verify the results.
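
    Microsoft has not published the code the Analyst agent generates, but the kind of Python it describes is familiar: load a table, aggregate it, and chart the result. The snippet below is a minimal illustrative sketch using pandas and matplotlib on a made-up quarterly revenue table; the data, column names, and output file name are hypothetical, not taken from Microsoft.

    ```python
    # Illustrative only: the sort of pandas/matplotlib analysis an agent like
    # Analyst might generate for a report. All data and names are made up.
    import pandas as pd
    import matplotlib.pyplot as plt

    sales = pd.DataFrame({
        "quarter": ["Q1", "Q2", "Q3", "Q4"],
        "revenue_musd": [12.4, 13.1, 11.8, 15.6],
        "support_tickets": [320, 298, 410, 275],
    })

    # Summary statistics the report's text could cite.
    print(sales.describe().round(2))

    # A simple chart for the report's visuals.
    ax = sales.plot.bar(x="quarter", y="revenue_musd", legend=False)
    ax.set_ylabel("Revenue (M USD)")
    ax.set_title("Quarterly revenue")
    plt.tight_layout()
    plt.savefig("quarterly_revenue.png")
    ```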

    On the other hand, the Researcher AI agent employs a modified version of OpenAI o3 that comes with enhanced research abilities, along with integration into 365 Copilot and advanced search functionalities. Recent progress in AI technologies allows chatbots to address intricate issues by utilizing both internal and external business data, producing well-structured reports. Users can offer feedback and additional prompts to enhance the report quality.

    Subscription Information

    To access these agents, a subscription to Microsoft 365 Copilot is necessary, costing $30 per user per month in addition to a qualifying Microsoft 365 plan. Readers interested in exploring the various features of Microsoft 365 and Copilot can find more information in a guide available on Amazon.

    Source:
    Link


  • OpenAI Launches Accessible “Ghibli” Image Generator for All Users

    OpenAI Launches Accessible “Ghibli” Image Generator for All Users

    Key Takeaways

    1. OpenAI’s new image generator resembles Studio Ghibli art and is now available to all users, but free users are limited to three creations daily.

    2. The “Ghibli” trend has emerged on social media, with users creating and sharing “Ghibli-fied” versions of portraits, causing a spike in image generation requests.

    3. There is ongoing debate about copyright issues related to AI-generated images, with concerns about the legality of using copyrighted material for training models.

    4. Privacy experts have raised concerns that OpenAI may collect high-quality image data through this trend, but this remains speculative.

    5. Some users have faced errors related to copyright issues when generating images, while others continue to share their creations without problems.


    OpenAI has released its new image generator, which has already made headlines for resembling the art from Studio Ghibli. Now, this feature is accessible to all users on the platform.

    Although the company has not made an official statement, a previous post on X by CEO Sam Altman hinted that free users will be limited to three image creations each day.

    The Ghibli Trend

    Since its launch in March, the image generator has sparked the current “Ghibli” phenomenon on social media, where users are posting “Ghibli-fied” versions of their portraits or others’ because, well, why not?

    This trend has gained so much popularity that Altman remarked that OpenAI’s “GPUs are melting” due to the overwhelming number of image generation requests.

    Legal Discussions Around AI

    This surge has led to a fascinating debate online regarding copyright issues and AI responsibilities when it comes to potential infringements.

    In an interview with TechCrunch, intellectual property attorney Evan Brown mentioned that the generator functions in “a legal grey area.” You can’t take legal action for someone copying a style, but you can pursue a case against the use of copyrighted material to train image generation models. This matter is currently hotly contested in courts, as the question of whether training models with copyrighted content falls under fair use is still unresolved.

    Privacy Concerns

    Some privacy experts have speculated that this could be OpenAI’s strategy for collecting high-quality image data. However, it’s still just a theory at this point. If you decide to join the trend, be careful not to share any personal details or private images.

    A number of users on Reddit have mentioned encountering error messages indicating that GPT could not produce the images “due to copyright and intellectual property concerns.” However, these reports seem to vary, as many others continue to share their Ghibli-inspired images across social media platforms.

    Source:
    Link