Category: Artificial intelligence

  • Tesla’s HW5 FSD Computer May Face Export Control Limits

    Tesla’s HW5 FSD Computer May Face Export Control Limits

    Key Takeaways

    1. Tesla’s HW5 AI chip is powerful but will have limited capabilities due to US export restrictions on AI chips.
    2. The new AI5 chip is much stronger than the current AI4 chip used in Tesla’s Model Y robotaxis.
    3. Recent US regulations restrict the export of advanced AI chips, impacting Tesla’s ability to fully utilize the AI5 chip internationally.
    4. Elon Musk is optimistic that export control thresholds will increase, potentially reducing the need to limit the AI5’s capabilities.
    5. Tesla is also developing the AI6 chip, aiming for compatibility with a range of products, including Optimus robots and self-driving vehicles.


    Tesla is set to limit the capabilities of its impressive AI chip designed for the upcoming HW5 computer, which is expected to go into mass production by late 2026.

    Power Meets Restrictions

    The new Hardware 5.0 FSD computer, now referred to as AI5, is said to be incredibly powerful for artificial intelligence tasks. However, it may conflict with the export restrictions the US government has placed on AI chips due to national security concerns.

    Elon Musk claims that Tesla produces the top designs for AI chips, expertly integrating computational prowess with its self-driving software. This is why the company is once again developing the HW5 internally. He stated, “there’s still not a chip that exists that we would prefer to put in our car,” even though the current HW4 computer was created a few years back.

    A Comparison with AI4

    The AI4 chip currently powering Tesla’s Model Y robotaxis for unsupervised FSD is reportedly significantly weaker than the forthcoming AI5. This disparity means that Tesla will have to intentionally limit the capabilities of the HW5 computer to comply with government export rules on AI chips.

    In January, the US government imposed new regulations regarding the quantity and performance of AI chips. Over 100 allied nations faced limits on the advanced AI chips they could procure from American firms. Additionally, countries like China were given restrictions on the processing power of these chips.

    Changes in Export Regulations

    The Biden administration not only prohibited the export of Nvidia’s advanced B100 and B200 AI chips but also restricted the midrange H20 silicon that Nvidia had designed to comply with earlier government thresholds. The ban on H20 chips has since been lifted, and Musk is optimistic that export control thresholds will rise over time, which would allow Tesla to avoid “nerfing” its AI5 computer for international use.

    Musk mentioned that mass production of Tesla vehicles featuring the AI5/HW5 computer and FSD camera system is targeted to start in the fourth quarter of 2026.

    Looking to the Future

    While the AI5 computer is under development, Tesla is also planning for the future with its AI6 hardware. To achieve economies of scale for the AI6 chip, the company aims to make it compatible with a smart ecosystem that includes Optimus robots and self-driving vehicles.

    “Considering Dojo 3 and the AI6 inference chip, it seems like intuitively, we want to converge where it’s basically the same chip but used in, say, two of them in a car or an Optimus and possibly a greater number on a board,” Musk shared.

    Source:
    Link

  • WhoFi Technology: Detecting People with Wi-Fi Radiation

    WhoFi Technology: Detecting People with Wi-Fi Radiation

    Key Takeaways

    1. Traditional re-identification systems rely on video footage, making them vulnerable to issues like masks, poor lighting, and angle changes.
    2. WhoFi technology uses Channel State Information (CSI) from Wi-Fi signals to create unique, identifiable patterns as individuals move through a Wi-Fi zone.
    3. The system filters out irregularities and enhances data using deep learning to generate individual vector signatures for accurate identification.
    4. WhoFi achieved a 95.5% accuracy rate in a study with 14 participants, showing resilience against clothing variations and obstacles like walls.
    5. While WhoFi can improve security monitoring, it raises concerns about potential invisible surveillance and unintended data sharing.


    Traditional re-identification systems depend heavily on video footage, which makes them susceptible to issues like masks, poor lighting, or shifting angles. The WhoFi technology created by researchers at Sapienza University of Rome, by contrast, utilizes Channel State Information (CSI). This data, which is part of the radio signals from modern Wi-Fi routers, delivers precise measurements of signal strength and behavior. As a person moves through a Wi-Fi zone, they cause subtle changes to these signals in a way that is unique and identifiable.

    Enhancing Signal Accuracy

    To ensure the patterns generated are trustworthy, WhoFi filters out irregularities, fixes synchronization problems, and boosts the data with well-targeted variations. Following this, a deep learning model evaluates the signal patterns and generates an individual vector signature for each person.
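    The pipeline described above — clean the CSI measurements, pool them over time, and embed them into a signature vector that can be matched by similarity — can be illustrated with a toy sketch. This is not the researchers’ actual architecture (the paper trains a deep network; a fixed random projection stands in for the learned encoder here), and all data and names are purely illustrative:

    ```python
    import numpy as np

    def preprocess_csi(csi):
        """Filter irregularities: keep signal amplitude and normalize it."""
        amp = np.abs(csi)                                # drop noisy phase, keep amplitude
        return (amp - amp.mean()) / (amp.std() + 1e-8)   # (time, subcarriers)

    def embed(csi_window, proj):
        """Map one CSI window to an L2-normalized signature vector."""
        x = preprocess_csi(csi_window).mean(axis=0)  # pool over time -> per-subcarrier profile
        v = proj @ x                                 # stand-in for the trained deep encoder
        return v / (np.linalg.norm(v) + 1e-8)

    def match(query, gallery):
        """Re-identify: index of the gallery signature with highest cosine similarity."""
        sims = [float(query @ g) for g in gallery]
        return int(np.argmax(sims)), max(sims)
    ```

    In use, each known person contributes a signature to a gallery, and a new CSI capture is re-identified by nearest-cosine match — the same comparison step the real system performs after its deep encoder.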

    High Accuracy in Testing

    In a study that was published, WhoFi was evaluated with 14 participants wearing different types of clothing, achieving an impressive accuracy rate of 95.5%. The system showed resilience against external elements like clothing or line of sight. Walls didn’t hinder performance either, as the approach does not depend on visual contact but rather on how radio waves interact with internal body structures, such as bones.

    Unlike traditional cameras, WhoFi does not capture or process any visual information, making it potentially more efficient in terms of data. However, this also introduces new challenges. Anyone sending out Wi-Fi signals might unintentionally share data about those nearby, even if those individuals lack any transmitting devices.

    Potential Applications and Concerns

    In real-world applications, WhoFi can be extremely beneficial, particularly in monitoring areas that require high security or sensitivity. Nevertheless, this technology also poses risks of invisible, unwanted, and even illegal surveillance.

    Source:
    Link



  • Walker S2 Robot Beats Tesla Optimus with 3-Minute Battery Swap

    Walker S2 Robot Beats Tesla Optimus with 3-Minute Battery Swap

    Key Takeaways

    1. Elon Musk sees the Optimus robot as a future multitrillion-dollar venture for Tesla, but it faces growing competition.
    2. UBTech’s Walker S2 robot features 11 degrees of freedom in its arm, enabling it to handle delicate items effectively and move at a speed of two meters per second.
    3. The Walker S2 can operate autonomously and has the ability to change its own battery, ensuring continuous productivity without downtime.
    4. Battery swap stations for electric vehicles are becoming popular in China, allowing quick battery replacements and enhancing vehicle efficiency.
    5. UBTech’s Walker S2 applies the battery swap concept, allowing it to exchange its battery in just three minutes, contrasting with Tesla’s Optimus which requires stationary charging.


    While Elon Musk believes the Optimus robot will become a multitrillion-dollar venture for Tesla in the future, it faces increasing competition.

    New Developments in Robotics

    One example is the newly launched S2 version of the Walker industrial humanoid robot from UBTech Robotics. Like the second-generation Optimus, it provides 11 degrees of freedom (DoF) in its robotic arm, allowing it to handle small and delicate items more effectively, along with the various features expected from a modern humanoid robot designed for industrial applications.

    The Walker S2 can traverse a warehouse at a speed of two meters per second. It can also bend or squat to lift heavy items, offering a pitch angle range of 170°, and it can twist its torso up to 162°.

    Advanced Features

    The S2 is equipped with a large language model (LLM) that enables voice commands and interactions with humans as it performs its tasks, much like Optimus does.

    However, where it surpasses Tesla’s Optimus is in its ability to operate autonomously around the clock. The second generation of Optimus can locate a charging station on its own, make its way there, and plug itself in to recharge.

    In contrast, the Walker S2 has taken it a step further by not needing to remain inactive during charging. The company asserts that it has developed the first humanoid industrial robot that can change its own battery, ensuring continuous productivity.

    Battery Swap Innovations

    Battery swap stations for electric vehicles are gaining traction in China as a quicker alternative to conventional charging. EV maker NIO, for instance, completes 100,000 swaps daily and has performed 80 million in total, allowing it to sell its vehicles at a 30% lower price through a battery-as-a-service (BaaS) model. When a swap is needed, vehicles can exit the highway, pull into a station, and have their batteries replaced automatically in just a few minutes before continuing their journey.

    This battery swap idea has recently gained a significant advocate in China. The largest battery manufacturer, CATL, is making a substantial investment in battery swaps and intends to establish numerous stations in key urban areas and along major routes, either independently or in collaboration with innovative firms like NIO.

    Inspired by the electric vehicle trend in China, UBTech has effectively mirrored this concept with the Walker S2. As showcased in the product video below, the new Walker robot can travel to a factory swap station, remove its depleted battery, install a fully charged one, and resume work in just three minutes, while Optimus remains stationary, plugged in and charging.

    Source:
    Link

  • Proton Launches Lumo: Privacy-Focused AI Assistant for Users

    Proton Launches Lumo: Privacy-Focused AI Assistant for Users

    Key Takeaways

    1. Privacy-Centric AI: Proton’s AI chatbot, Lumo, prioritizes user privacy, opposing “surveillance capitalism” prevalent in Big Tech.

    2. Strong Security Features: Lumo employs “zero-access” encryption, ensuring that user data is inaccessible to third parties, including Proton itself.

    3. File Handling and Encryption: Lumo analyzes uploaded documents without retaining any information, and linked files from Proton Drive maintain end-to-end encryption.

    4. Web Search Options: Lumo has a web search feature that is off by default, using privacy-friendly search engines if enabled.

    5. Tiered Access and Features: Users can interact with Lumo through various account tiers, with free accounts having limited access and paid subscriptions offering enhanced features.


    Proton, known for its secure email service Proton Mail, has introduced a new AI chatbot focused on privacy, called Lumo.

    Vision for Privacy

    According to Andy Yen, the CEO and founder of Proton, their aim is to create “AI that puts people ahead of profits.” This is a direct challenge to what he refers to as “surveillance capitalism” that dominates Big Tech.

    Security Features

    Lumo is designed with numerous security features to protect user data. This AI assistant can perform various tasks like summarizing documents, coding, and writing emails, with all information saved locally on the user’s device.

    Proton utilizes “zero-access” encryption, meaning only you hold the encryption key needed to access your content.

    This structure ensures that no third party, including Proton itself, can view your information. Thus, your data remains off-limits for advertisers, government agencies, or for training large language models.

    File Handling and Encryption

    You can upload documents for Lumo to analyze; however, the chatbot does not keep any information from those files. Moreover, when you link files from Proton Drive to Lumo, they retain their end-to-end encryption while interacting with the chatbot.

    Lumo works with a variety of open-source large language models hosted on Proton’s servers in Europe, such as Mistral’s Nemo, Mistral Small 3, Nvidia’s OpenHands 32B, and the Allen Institute for AI’s OLMO 2 32B model.

    The system assigns tasks to the model that is best suited for the specific inquiry. A representative from Proton commented, “programming-related questions are managed by OpenHands, which focuses on coding tasks.”

    Web Search and Accessibility

    Lumo incorporates a web search function, but it is turned off by default to prioritize user privacy. If you choose to enable this feature, Lumo uses “privacy-friendly” search engines to gather information from the web.

    You can access Lumo through its website, lumo.proton.me, and through specialized applications for both iOS and Android. Access is organized into various tiers.

    People without a Lumo or Proton account can ask a “limited number” of questions each week, and they won’t have access to their chat histories.

    Users with a free account can utilize an encrypted chat history, upload small files, and save a limited number of chats as favorites.

    For a monthly subscription of $12.99, the Lumo Plus plan offers unlimited chats, extended encrypted chat history, boundless favorites, and the ability to upload larger files.

    Source:
    Link

  • Upcoming WhatsApp Feature to Enhance Chat Convenience

    Upcoming WhatsApp Feature to Enhance Chat Convenience

    Key Takeaways

    1. WhatsApp is developing a new feature called Quick Recap to summarize multiple chats using AI.
    2. Users will be able to select up to five chats to receive an AI-generated summary of unread messages.
    3. The feature will maintain end-to-end encryption, ensuring chat content remains private and secure.
    4. Chats with Advanced Chat Privacy will be excluded from the Quick Recap feature for added protection.
    5. Quick Recap is currently not available, with a phased rollout expected for beta testers before reaching the stable app version.


    WhatsApp is gearing up to enhance user experience, as per recent findings from the WhatsApp tracker WABetaInfo. The messaging platform is working on a feature named Quick Recap, which employs artificial intelligence to summarize multiple chats simultaneously. This innovation aims to assist frequent users in quickly catching up on unread messages.

    New Feature Development

    WABetaInfo, known for its analysis of beta versions, reports that this feature is currently being developed for the Android beta version 2.25.21.12. Users will soon have the option to select up to five chats from their chat tab. By clicking on a new Quick Recap icon, they will get an AI-generated summary of unread messages. This feature is anticipated to be particularly beneficial for individuals who manage numerous group or personal chats, such as professionals or community managers.

    Privacy and Security

    A major highlight of this new feature is its commitment to maintaining end-to-end encryption. As stated by SmartDroid.de, WhatsApp utilizes a method called private processing technology. This ensures that the AI analysis occurs in a secure environment, meaning that neither WhatsApp nor Meta can access the chat content or the summary results. Interestingly, chats that have Advanced Chat Privacy will be excluded from this feature by design, offering additional safety for sensitive discussions.

    Availability Timeline

    Currently, Quick Recap is not available to users in any version of the app, either regular or beta. While there’s no confirmed release date yet, a phased rollout is anticipated. Initially, the feature will likely be available to a select group of beta testers and then gradually widen its reach to more beta users. Eventually, it will be included in the stable version of the app.

    Source:
    Link

  • Introducing Baby Grok: A Child-Friendly Chatbot by xAI

    Introducing Baby Grok: A Child-Friendly Chatbot by xAI

    Key Takeaways

    1. Elon Musk announced the development of “Baby Grok,” a new app focused on kid-friendly content.
    2. The decision to create Baby Grok comes after backlash over the original Grok’s customizable 3D avatars, which were criticized as sexualized.
    3. Specific details about how Baby Grok will differ from the current Grok chatbot are still pending.
    4. The news of Baby Grok has been positively received on social media, with many expressing excitement for a child-appropriate alternative.
    5. The original Grok chatbot, launched in 2023, has faced challenges with inappropriate responses, which will be addressed in the child-friendly version.


    After facing backlash for the launch of customizable 3D avatars in its AI chatbot Grok 4, which some critics deemed sexualized, CEO Elon Musk has decided to create a new option for kids. On July 19, 2025, Musk shared on his official X account that they are developing a fresh app focused solely on content suitable for children. He announced, “We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content.”

    Details Still Pending

    Musk has not yet revealed specifics about how this new app will function or how it will technically differ from the current Grok chatbot. Grok has gained a reputation for giving risqué or overly informal responses when prompted, which raises concerns in educational settings. The launch of Baby Grok can therefore be viewed as a direct response to such feedback.

    Positive Reactions Online

    The news has been mostly well-received on social media. A lot of users expressed their excitement for an AI app that is specifically geared toward children. Some mentioned that they had previously restricted their kids from using Grok. Baby Grok might serve as a proper alternative for families.

    The initial Grok chatbot made its debut in 2023, competing with OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini. Recently, xAI released an updated version, Grok 4, which Musk claims can tackle academic questions at a PhD level. He also noted that while Grok has a solid grasp of technical topics, it sometimes lacks common sense—an issue that will likely receive special focus during the creation of the child-friendly app.

    Source:
    Link

  • Train AI for Free: Why It Doesn’t Thank You

    Train AI for Free: Why It Doesn’t Thank You

    Key Takeaways

    1. Unpaid Workforce: Using free AI tools makes users part of a global unpaid workforce, helping to train AI without compensation.

    2. Reinforcement Learning: AI chatbots improve through user feedback, with interactions recorded to refine their performance, even for paying users.

    3. Human Labor Behind AI: Real people, often low-paid contractors, evaluate AI responses and provide feedback that drives the training process.

    4. Feedback Mechanism: User feedback informs smaller reward models that guide how the main AI responds, shaping its tone and helpfulness.

    5. Growing Market: The market for training data is booming, expected to grow significantly, while many users remain unaware that their interactions are being used for AI development.


    Ever felt like your late-night chats with ChatGPT are making Silicon Valley richer while you struggle with insomnia? Well, they are. If you’re using free AI tools, guess what? You’ve become part of a global unpaid workforce, and no one even gave you a thank-you mug.

    The Reality of AI Training

    Let’s break it down. Free AI chatbots, such as ChatGPT, Claude, and Gemini, rely on something known as Reinforcement Learning from Human Feedback (RLHF) to get better. It may sound complex, but here’s the straightforward explanation:

    You ask a question, the AI responds, and you give it a thumbs up or down. If you like one answer more than another, congratulations—you just helped train the model. Your feedback is recorded, processed, and eventually, the AI adapts to be more “helpful.”

    You’re Part of the Process

    These tools aren’t just floating around in the cloud for no reason. They learn from your interactions. You’re not just having a conversation; you’re essentially a low-cost (read: unpaid) data annotator.

    Think paying for GPT-4 means you’ve escaped the data harvesting? Think again! Unless you’ve opted out in your ChatGPT settings, your chats are still used to refine the AI’s performance. That’s right—you’re shelling out $20 a month to aid in product development. Pretty clever, huh?

    OpenAI, for instance, utilizes discussions from both free and paying users to enhance its models, unless you disable “chat history.” Google’s Gemini has a similar approach. Anthropic’s Claude? It’s also gathering preferences to improve its alignment models.

    Behind the Scenes

    Behind every complex term like RLHF lies a very tangible process involving humans. Companies hire contractors to evaluate responses, flag inaccuracies, and categorize prompts.

    Businesses like Sama (previously linked to OpenAI), Surge AI, and Scale AI provide this labor, often employing low-wage workers who toil long hours, many from developing nations. Reports from 2023 revealed that RLHF labelers earned between $2 and $15 an hour, depending on their location and role. So real people are constantly clicking “this response is better.” It’s this feedback loop that fuels the bots.

    If you’re giving thumbs up feedback, you’re essentially doing a small part of their job… for nothing.

    The Feedback Mechanism

    Here’s where it becomes intriguing. Your feedback doesn’t directly train the main model. Instead, it goes into reward models, which are smaller systems that inform the main AI how to act. So, when you say, “I prefer this answer,” you’re contributing to the internal guide that the bigger model follows. When enough people provide feedback, the AI starts to feel more human-like, more polite, and more helpful… or more like a writer with boundary issues.
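    To make the reward-model idea concrete, here is a minimal numpy sketch of the pairwise (Bradley-Terry) objective that such systems typically optimize. Real reward models are neural networks operating on text embeddings; the linear model and the feature vectors below are purely illustrative:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_reward_model(chosen, rejected, lr=0.1, steps=500):
        """Fit a linear reward r(x) = w @ x so that, for each preference pair,
        the chosen response scores higher than the rejected one.

        chosen, rejected: (n, d) arrays of response features; row i is one pair.
        Minimizes the Bradley-Terry loss: -log sigmoid(r(chosen) - r(rejected)).
        """
        w = np.zeros(chosen.shape[1])
        for _ in range(steps):
            diff = chosen - rejected                      # (n, d)
            margin = diff @ w                             # r(chosen) - r(rejected)
            # gradient of the mean negative log-likelihood
            grad = -((1.0 - sigmoid(margin))[:, None] * diff).mean(axis=0)
            w -= lr * grad
        return w

    def reward(w, x):
        """Score a single response's feature vector."""
        return float(w @ x)
    ```

    Once trained on thumbs-up/thumbs-down pairs, a model like this scores candidate responses, and the main model is then fine-tuned to produce outputs the reward model rates highly — which is why your clicks matter even though they never touch the big model directly.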

    AI keeps track of tone. When you interact with it in a specific style—be it sarcastic, scholarly, or straight to the point—the system learns to reply accordingly. It’s not stealing your writing style and selling it (yet), but your habits help shape the collective training experience, especially when the bot notices that others appreciate your tone or phrasing.

    The Role of CAPTCHA

    It’s less about copying you and more about duplicating what works best. What works often originates from someone who never agreed to style duplication.

    And those CAPTCHA challenges you solve to prove you’re human? You’re not just clicking on traffic lights and crosswalks to access your email. You’re actually labeling data for machine learning systems. Google’s reCAPTCHA, hCaptcha, and Cloudflare’s Turnstile all contribute visual data to training processes, helping AIs understand the world one blurry street sign at a time.

    So yes, even your security checks are now part of the feedback system.

    The Booming Market

    This isn’t some wild conspiracy theory. The market for training data is thriving. As reported by MarketsandMarkets, the global training data market is expected to rise from $1.5 billion in 2023 to over $4.6 billion by 2030. While this includes synthetic data and curated datasets, the significance of human-labeled real-world data—what you casually provide each day—is on the rise.

    Yet, most users still believe their chatbot chats vanish into thin air. Spoiler alert: they don’t. Not unless you’ve specifically turned off logging (and even then… you should verify).

    Your Role in the Future

    Here’s the twist. You’re contributing to the very technology that could one day take your job, surpass your creativity, or turn your tweets into product samples. This doesn’t mean you should stop using AI, but it’s important to understand what you’re helping to create. And perhaps, just perhaps, ask for a bit of transparency in return.

    After all, if your unpaid contributions are shaping the next generation of billion-dollar AI systems, the least they could do is express some gratitude.

    Source:
    Link

  • Mobvoi TicNote Launches AI-Powered Voice Recorder for Easy Notes

    Mobvoi TicNote Launches AI-Powered Voice Recorder for Easy Notes

    Key Takeaways

    1. AI-Driven Transcription Tool: The TicNote turns smartphones into AI-powered transcription tools that gather, display, and analyze notes.
    2. Innovative Study Assistant: Mobvoi’s TicNote acts as a study companion, generating reports and discussing data with users through its active agent feature.
    3. Advanced Insights Generation: The TicNote’s AI agent, called ‘Shadow,’ utilizes advanced models like GPT-4o to create insights and “Aha Moments” from recorded content.
    4. User Data Security: TicNote ensures user information is securely encrypted within its 64GB built-in storage.
    5. Sleek and Long-Lasting Design: The TicNote features a compact design with a battery life of up to 25 hours and offers various connectivity options, including Wi-Fi and Bluetooth.


    The capability to turn a smartphone into an AI-driven transcription tool that can also gather, display, and analyze notes is becoming more popular.

    New Study Companion

    Mobvoi claims that its latest device, the TicNote, serves as a new kind of study assistant. It can create reports for users or even discuss the data through its innovative active agent feature.

    Generating Insights

    The device’s AI agent, known as ‘Shadow,’ is designed to produce “insights” and those enlightening “Aha Moments” by utilizing advanced models like GPT-4o, GPT-4o-mini, and DeepSeek-R1 based on the recorded content.

    User Data Security

    The TicNote is also designed to keep all user information securely encrypted within its 64 GB of built-in storage, though a bit more capacity would have been welcome.

    Sleek Design

    Mobvoi has crafted a sleek chassis measuring 86 x 55 x 3 millimeters (mm) for the TicNote, featuring a matte, textured front. It includes a magnetic ring that serves as a MagSafe-like accessory and incorporates a battery that can last up to 25 hours on a single charge.

    Connectivity Options

    This new competitor to Plaud is compatible with Wi-Fi, Bluetooth, and cable connections. It also features a switch for its recording modes along with a power button.

    Source:
    Link

  • Tesla Diner Launches with Optimus Server and Cybertruck Burgers

    Tesla Diner Launches with Optimus Server and Cybertruck Burgers

    Key Takeaways

    1. Tesla’s new diner and drive-in movie theater, inspired by a 50s theme, is set to open soon with a unique design.
    2. The Optimus humanoid robot will enhance the atmosphere, serving items like popcorn and featuring in a new action figure.
    3. Guests can order food directly from their Tesla’s display interface while watching movies, with audio routed through their car’s sound system.
    4. The soft opening honored first responders and introduced a new fried chicken and waffle sandwich, keeping the menu classic with a modern twist.
    5. The diner will have around 80 Supercharger stalls and aims to blend retro vibes with futuristic elements, with more unique Supercharger concepts planned for the future.


    About two years after Tesla’s unusual Supercharger station design was approved, the diner and drive-in movie theater with a 50s theme is set to open, possibly on Monday, July 21.

    Robot Ambience

    Guests invited to the soft launch have noted that Tesla’s Optimus humanoid robot will play a key role in creating the atmosphere at the diner.

    In addition to serving popcorn, Tesla is introducing a new black Optimus action figure styled as a fast-food server, complete with a box of fries. Speaking of boxes, the diner’s burgers and other menu items will be served in containers shaped like the Cybertruck.

    Movie Experience

    Cars that arrive to watch a film at the theater will have their audio routed through the vehicle’s sound system, which is a standard feature of drive-in experiences.

    However, what’s unique is that when drivers pull into the diner, the menu pops up on Tesla’s display interface. They can easily order burgers, fries, milkshakes, and more, which will then be delivered right to their car.

    Celebrating Community

    The soft opening of the Tesla Diner on Santa Monica Boulevard honored first responders, and a new fried chicken and waffle sandwich seemed to be a favorite among attendees. The diner is sticking to classic fare but adding its own design flair.

    The Tesla Diner Supercharger station will feature around 80 stalls, with Optimus robots waving at guests and a cleverly positioned Cybercab nearby.

    The blend of a 50s diner and drive-in movie theater vibe with futuristic Cybertruck-inspired elements, like food packaging and metal trays, could make the new Tesla Diner a great spot to hang out.

    This isn’t the only distinctive Supercharger station Tesla is developing; they also have plans for a spaceship-themed CyberCanopy in Roswell, New Mexico, for example.

    Source:
    Link


  • Netflix Unveils Generative AI in Sci-Fi Series The Eternaut

    Netflix Unveils Generative AI in Sci-Fi Series The Eternaut

    Key Takeaways

    1. Netflix has used generative AI for the first time in The Eternaut, creating a building collapse scene more efficiently than traditional VFX methods.
    2. Co-CEO Ted Sarandos emphasized that AI tools enhance creativity without replacing human roles, streamlining production for teams with limited resources.
    3. The collaboration with Eyeline Studios and local Argentine creatives highlights Netflix’s commitment to combining AI with human talent in production.
    4. The entertainment industry is cautious about generative AI, with ongoing concerns about job security and the need for clear guidelines to protect creative roles.
    5. Netflix is exploring further applications of generative AI, including natural language search and AI-driven interactive ads, aiming to improve user experience and internal processes.


    Netflix has for the first time incorporated generative AI into one of its productions. In the Argentine sci-fi series The Eternaut, an important moment featuring a building collapsing in Buenos Aires was created using AI tools instead of the usual visual effects techniques.

    Confirmation from Netflix Leadership

    Ted Sarandos, the co-CEO of Netflix, shared this information during the company’s earnings call for the second quarter. The sequence that involved AI was finished approximately ten times quicker than it would have been with traditional VFX methods. Sarandos noted that this efficiency allowed the scene to be created without significantly raising the show’s production costs.

    He mentioned, “The costs of the special effects without AI simply wouldn’t have worked for a show with that budget.”

    A New Era for Production Teams

    Netflix has positioned this choice as a way to enhance creativity for production teams that work with limited resources. Sarandos emphasized that this isn’t about replacing humans with machines. “This is real people doing real work with better tools,” he remarked. He also highlighted how AI is being utilized in pre-visualization and shot planning, areas that audiences don’t typically see.

    The Eternaut is inspired by a renowned Argentine graphic novel and depicts the lives of survivors after a toxic snowfall in a post-apocalyptic Buenos Aires. The building collapse scene is the first occasion where Netflix has openly acknowledged the use of generative AI for visual effects in one of its series.

    Collaborating with Local Talent

    To bring this sequence to life, Netflix collaborated with Eyeline Studios, its internal VFX team, alongside Argentine creatives. While the company hasn’t provided specific details on the financial or time savings, the claim of a tenfold increase in speed offers a general idea of the impact that AI tools have had.

    The larger entertainment industry remains uncertain about how to adapt to generative AI. Last year, writers and actors went on strike partly to establish guidelines for how studios could employ this technology.

    The agreements reached allowed for the use of AI in specific situations but ensured that humans remained in key creative positions.

    Industry Concerns and Future Prospects

    Netflix’s application of AI in The Eternaut appears to align with this strategy. AI was utilized to support rather than replace VFX artists. However, not everyone in the industry is reassured. There are persistent worries about job losses and decreasing demand for skilled labor in post-production roles.

    In addition to VFX, Netflix is also exploring other applications of generative AI. The company is trialing a search function that comprehends natural language and plans to introduce AI-driven interactive ads later this year. These innovations aim to enhance user experience and streamline internal processes.

    During a quarter where Netflix generated $11 billion in revenue—partly thanks to the final season of Squid Game—the company is experiencing positive momentum. Sarandos indicated that a mix of quality content, increased prices, and better advertising performance contributed to results that exceeded expectations.

    Whether generative AI will become a staple in Netflix’s productions relies on its long-term effectiveness and the responses from creators and unions.

    In conclusion, the key point is clear: A show that would have struggled to afford a complex VFX shot managed to achieve it with AI. The sequence appears believable, costs were controlled, and the production timeline was maintained.

    This serves as a small but significant indication of how AI is beginning to influence what viewers see on their screens. You may not recognize it, but it’s there, and for Netflix, that might be precisely the goal.

    Source:
    Link