Category: Artificial intelligence

  • AI Cyberattacks: How LLMs Plan and Execute Operations Autonomously

    AI Cyberattacks: How LLMs Plan and Execute Operations Autonomously

    Key Takeaways

    1. Large language models (LLMs) can simulate network breaches that closely resemble real attacks when enhanced with advanced planning capabilities.
    2. LLMs can autonomously penetrate networks, identify weaknesses, and execute complex attacks without human assistance.
    3. The study indicates that sophisticated AI can adapt to changing network conditions and make independent decisions.
    4. There are serious cybersecurity risks, as malicious actors could misuse LLMs for automated and widespread attacks.
    5. LLMs also have potential benefits, enabling businesses to develop and test cybersecurity strategies through simulations to identify vulnerabilities.


    A recent investigation led by Brian Singer, a PhD student in electrical and computer engineering at Carnegie Mellon University, has shown that large language models (LLMs) can simulate network breaches that are strikingly similar to actual attacks. This is particularly true when these models are enhanced with advanced planning capabilities and tailored agent frameworks.

    Key Findings of the Study

    In this investigation, the LLMs were able to penetrate corporate networks, discover weaknesses, and execute complex attacks, all without human help. The findings highlight that sophisticated AI models can do more than simple tasks; they can make decisions on their own and adapt to changing network conditions.

    Implications for Cybersecurity

    This presents both serious dangers and potential advantages for the field of cybersecurity. On one hand, bad actors could exploit these technologies to make their attacks more automated and widespread. On the other, businesses and security experts could leverage LLMs to create and evaluate cybersecurity strategies, for example by running simulations to find vulnerabilities before they can be exploited.

    The details of this study can be found on Anthropic’s research website, and a preprint of the paper is also available on arXiv. These documents detail the techniques and implications of this novel research on AI-driven cyberattacks.

    Source:
    Link

  • Nvidia B200 Shipments to China Evade US Export Restrictions

    Nvidia B200 Shipments to China Evade US Export Restrictions

    Key Takeaways

    1. Nvidia’s B200 AI processors are still being shipped to China despite U.S. export restrictions, revealing the ineffectiveness of current regulations.
    2. A new reseller, “Gate of the Era,” has emerged, facilitating around $400 million in transactions of B200 hardware through various channels.
    3. Demand for restricted GPUs like the B200 and H100 is driven mainly by smaller Chinese companies and third-party datacenter operators, rather than major tech firms.
    4. The introduction of Nvidia’s lower-spec H20 chip has reduced some demand for the B200, as customers consider this compliant option.
    5. Supply chains are evolving, with Southeast Asian countries serving as staging areas for shipments to China, prompting discussions of further regional export controls by Washington.


    More than $1 billion worth of Nvidia’s flagship B200 artificial-intelligence processors made their way to China in the three months after Washington tightened its export limitations, highlighting how porous the current restrictions are. The B200, the same graphics processing unit (GPU) used by companies like OpenAI, Google, and Meta to train sophisticated models, is a specialized electronic circuit built for rapid image and data processing. Even though it is officially banned from the Chinese market, it has become a key part of a vibrant gray market for high-end U.S. chips.

    Emergence of New Resellers

    Documents from the Financial Times show that a reseller based in Anhui, known as “Gate of the Era,” has quickly become a significant player in this trade. Established in February, the company obtained at least two large shipments of B200 racks, each containing eight GPUs along with necessary components and software. Estimates suggest that the company has facilitated around $400 million in hardware transactions through both direct and indirect channels. Market prices have ranged between RMB 3 million and RMB 3.5 million (approximately US$420,000 to US$489,000) per rack, down from over RMB 4 million (around $560,000) in mid-May. These prices remain about 50% higher than comparable offerings in the U.S.

    Market Dynamics and Demand

    Distributors are promoting the B200, H100, and other restricted GPUs on platforms like Douyin and Xiaohongshu, arranging on-site tests before customers pick up the hardware. One intermediary compared the situation to a “seafood market”: supply is plentiful as long as buyers accept the lack of Nvidia’s official support.

    Because major Chinese tech firms would jeopardize their global compliance programs by using embargoed hardware, demand comes primarily from smaller domestic companies, third-party datacenter operators, and entities already blacklisted by the U.S. The reintroduction of Nvidia’s lower-spec H20—a version of Nvidia’s AI chip designed with reduced capabilities to comply with export regulations—has eased some of the demand pressure. Resellers note a decrease in B200 transactions as customers consider this compliant option.

    Evolving Supply Chains

    Nevertheless, the supply chain is still adapting. Industry professionals have mentioned that Southeast Asian locations, especially Thailand and Malaysia, are acting as temporary staging areas. This has led Washington to consider imposing more regional controls in the upcoming quarter. Even if these routes were to close, distributors claim that new channels through less-restricted European regions are already being utilized, highlighting the financial incentives that continue to facilitate the flow of advanced U.S. AI silicon into China, despite ongoing rounds of export regulations.

    Source:
    Link


     

  • Kimi K2 AI Model from China Surpasses GPT-4.1 in Benchmarks

    Kimi K2 AI Model from China Surpasses GPT-4.1 in Benchmarks

    Key Takeaways

    1. Moonshot AI launched Kimi K2, a new language model for developers and professional users, inspired by OpenAI’s GPT models.
    2. Kimi K2 utilizes a mixture-of-experts architecture with around one trillion parameters, activating only 32 billion at a time for efficiency.
    3. The model has two versions: Instruct for direct interaction and Base for research and fine-tuning, both accessible via an OpenAI-compatible API.
    4. Kimi K2 features a unique training strategy that allows it to independently structure tasks and generate program code, providing reliable answers without clear directives.
    5. The model has shown strong performance compared to GPT-4.1, excelling in mathematics, science, and multilingual capabilities, though it struggles with vague or ambiguous queries.


    Chinese company Moonshot AI has launched a new language model named Kimi K2, targeting developers and professional users. This model takes inspiration from OpenAI’s GPT models but aims to achieve better outcomes in certain areas. Kimi K2 is now accessible through an API.

    Introduction and Specifications

    As reported by Reuters, Kimi K2 made its debut in July 2025. The model utilizes a mixture-of-experts architecture and boasts around one trillion parameters, although only 32 billion of these are actively used at any time. This design choice helps to conserve computing resources and enhances overall efficiency.
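    The mixture-of-experts idea described above can be sketched in a few lines: a router activates only a small subset of expert sub-networks per input, so total parameter count far exceeds the parameters active at any one time. The toy below is an illustration of the general technique, not Kimi K2's actual architecture; all names and sizes are invented.

```python
import random

# Toy mixture-of-experts (MoE) sketch. Each "expert" is a tiny function;
# a router picks the TOP_K highest-scoring experts per input, so only a
# fraction of all parameters does work for any one token.

NUM_EXPERTS = 8
TOP_K = 2  # experts activated per input

def make_expert(seed):
    rng = random.Random(seed)
    weight = rng.uniform(0.5, 1.5)
    return lambda x: x * weight

experts = [make_expert(i) for i in range(NUM_EXPERTS)]

def router_scores(x):
    # A real router is a learned layer; here we use a deterministic stand-in.
    return [(x * (i + 1)) % 7 for i in range(NUM_EXPERTS)]

def moe_forward(x):
    scores = router_scores(x)
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    # Only TOP_K of NUM_EXPERTS experts run: compute scales with K, not N.
    return sum(experts[i](x) for i in top) / TOP_K, top

y, active = moe_forward(3.0)
print(f"active experts: {active} of {NUM_EXPERTS}")
```

    The same principle explains the headline numbers: a trillion total parameters, but only the 32 billion belonging to the selected experts are exercised per token.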

    Version Options and Integration

    Kimi K2 comes in two different versions: Instruct, which is designed for users who want direct interaction with the model, and Base, which is meant for research and personal fine-tuning. Both versions can be integrated using an OpenAI-compatible application programming interface (API). However, Moonshot AI has stated that commercial usage is restricted in situations involving a high number of users or substantial sales figures.

    Unique Training Strategy

    A significant distinction between Kimi K2 and many other language models is its training approach. Kimi K2 was intentionally crafted to independently structure tasks, utilize tools, and generate simple program code. This model is created to deliver dependable answers even without clear chain-of-thought directives.

    Performance Comparison

    According to Shinkai, Kimi K2 has shown strong performance in comparison to GPT-4.1 in various evaluations, including practical programming tasks and knowledge assessments. While specific outcomes may differ based on usage, the new model tends to excel, especially in areas like mathematics, science, and multilingual capabilities. However, Reuters notes that there are weaknesses as well; for instance, vague questions or ambiguous tasks can lead to longer or incomplete responses.

    Growing AI Landscape

    Moonshot AI is part of a rising trend of Chinese companies developing and releasing their own AI models to the public. Kimi K2 stands out as a powerful and adaptable model that might establish its presence in international markets over time.

    Source:
    Link

  • Tesla’s HW5 FSD Computer May Face Export Control Limits

    Tesla’s HW5 FSD Computer May Face Export Control Limits

    Key Takeaways

    1. Tesla’s HW5 AI chip is powerful but will have limited capabilities due to US export restrictions on AI chips.
    2. The new AI5 chip is much stronger than the current AI4 chip used in Tesla’s Model Y robotaxis.
    3. Recent US regulations restrict the export of advanced AI chips, impacting Tesla’s ability to fully utilize the AI5 chip internationally.
    4. Elon Musk is optimistic that export control thresholds will increase, potentially reducing the need to limit the AI5’s capabilities.
    5. Tesla is also developing the AI6 chip, aiming for compatibility with a range of products, including Optimus robots and self-driving vehicles.


    Tesla is set to limit the capabilities of its impressive AI chip designed for the upcoming HW5 computer, which is expected to go into mass production by late 2026.

    Power Meets Restrictions

    The new Hardware 5.0 FSD computer, now referred to as AI5, is said to be incredibly powerful for artificial intelligence tasks. However, it may conflict with the export restrictions the US government has placed on AI chips due to national security concerns.

    Elon Musk claims that Tesla produces the top designs for AI chips, expertly integrating computational prowess with its self-driving software. This is why the company is once again developing the HW5 internally. He stated, “there’s still not a chip that exists that we would prefer to put in our car,” even though the current HW4 computer was created a few years back.

    A Comparison with AI4

    The AI4 chip currently powering Tesla’s Model Y robotaxis for unsupervised FSD is reportedly significantly weaker than the forthcoming AI5. This disparity means that Tesla will have to intentionally limit the capabilities of the HW5 computer to comply with government export rules on AI chips.

    In January, the US government imposed new regulations regarding the quantity and performance of AI chips. Over 100 allied nations faced limits on the advanced AI chips they could procure from American firms. Additionally, countries like China were given restrictions on the processing power of these chips.

    Changes in Export Regulations

    The Biden administration not only prohibited the export of Nvidia’s advanced B100 and B200 AI chips but also restricted its midrange H20 silicon, which had been designed to comply with earlier government standards. The ban on H20 chips has now been lifted, and Musk is optimistic that the thresholds for export controls will rise over time, which would allow Tesla to avoid “nerfing” its AI5 computer for international use.

    Musk mentioned that mass production of Tesla vehicles featuring the AI5/HW5 computer and FSD camera system is targeted to start in the fourth quarter of 2026.

    Looking to the Future

    While the AI5 computer is under development, Tesla is also planning for the future with its AI6 hardware. To achieve economies of scale for the AI6 chip, the company aims to make it compatible with a smart ecosystem that includes Optimus robots and self-driving vehicles.

    “Considering Dojo 3 and the AI6 inference chip, it seems like intuitively, we want to converge where it’s basically the same chip but used in, say, two of them in a car or an Optimus and possibly a greater number on a board,” Musk shared.

    Source:
    Link

  • WhoFi Technology: Detecting People with Wi-Fi Radiation

    WhoFi Technology: Detecting People with Wi-Fi Radiation

    Key Takeaways

    1. Traditional re-identification systems rely on video footage, making them vulnerable to issues like masks, poor lighting, and angle changes.
    2. WhoFi technology uses Channel State Information (CSI) from Wi-Fi signals to create unique, identifiable patterns as individuals move through a Wi-Fi zone.
    3. The system filters out irregularities and enhances data using deep learning to generate individual vector signatures for accurate identification.
    4. WhoFi achieved a 95.5% accuracy rate in a study with 14 participants, showing resilience against clothing variations and obstacles like walls.
    5. While WhoFi can improve security monitoring, it raises concerns about potential invisible surveillance and unintended data sharing.


    Traditional re-identification systems depend heavily on video footage, which makes them susceptible to issues like masks, poor lighting, or shifting angles. By contrast, the WhoFi technology created by researchers at La Sapienza University in Rome utilizes Channel State Information (CSI). This data, embedded in the radio signals of modern Wi-Fi routers, delivers precise measurements of signal strength and behavior. As a person moves through a Wi-Fi zone, they cause subtle changes to these signals in a way that is unique and identifiable.

    Enhancing Signal Accuracy

    To ensure the patterns generated are trustworthy, WhoFi filters out irregularities, fixes synchronization problems, and boosts the data with well-targeted variations. Following this, a deep learning model evaluates the signal patterns and generates an individual vector signature for each person.
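    The pipeline described above can be sketched end to end: clean a CSI amplitude series, reduce it to a fixed-length signature vector, and match people by vector similarity. The code below is a toy stand-in for the paper's deep network, under invented data and a hand-written encoder; it illustrates only the filter-encode-compare shape of the approach.

```python
import math

# Illustrative re-identification sketch: clean a CSI amplitude series,
# encode it as a unit-length "signature" vector, match by cosine similarity.
# The encoder is a toy stand-in for WhoFi's deep model; data is invented.

def clean(series):
    # Filter irregularities: clamp outliers to the series median.
    s = sorted(series)
    median = s[len(s) // 2]
    return [x if abs(x - median) < 3.0 else median for x in series]

def signature(series, bins=4):
    # Stand-in for the deep encoder: window-average the cleaned series,
    # then L2-normalize into a unit-length vector signature.
    series = clean(series)
    step = len(series) // bins
    vec = [sum(series[i * step:(i + 1) * step]) / step for i in range(bins)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Two walks with a similar signal shape ("same person") vs. a different one.
walk_a = [1.0, 1.2, 1.1, 2.0, 2.2, 2.1, 1.0, 1.1]
walk_b = [1.1, 1.2, 1.0, 2.1, 2.3, 2.0, 1.1, 1.0]
other  = [2.0, 0.5, 2.1, 0.4, 2.2, 0.5, 2.0, 0.6]

same = cosine(signature(walk_a), signature(walk_b))
diff = cosine(signature(walk_a), signature(other))
print(f"same person: {same:.3f}, different: {diff:.3f}")
```

    In the real system the encoder is trained so that signatures from the same person land close together and everyone else lands far apart; the cosine comparison step is the same.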

    High Accuracy in Testing

    In a study that was published, WhoFi was evaluated with 14 participants wearing different types of clothing, achieving an impressive accuracy rate of 95.5%. The system showed resilience against external elements like clothing or line of sight. Walls didn’t hinder performance either, as the approach does not depend on visual contact but rather on how radio waves interact with internal body structures, such as bones.

    Unlike traditional cameras, WhoFi does not capture or process any visual information, making it potentially more efficient in terms of data. However, this also introduces new challenges. Anyone sending out Wi-Fi signals might unintentionally share data about those nearby, even if those individuals lack any transmitting devices.

    Potential Applications and Concerns

    In real-world applications, WhoFi can be extremely beneficial, particularly in monitoring areas that require high security or sensitivity. Nevertheless, this technology also poses risks of invisible, unwanted, and even illegal surveillance.

    Source:
    Link


     

  • Walker S2 Robot Beats Tesla Optimus with 3-Minute Battery Swap

    Walker S2 Robot Beats Tesla Optimus with 3-Minute Battery Swap

    Key Takeaways

    1. Elon Musk sees the Optimus robot as a future multitrillion-dollar venture for Tesla, but it faces growing competition.
    2. UBTech’s Walker S2 robot features 11 degrees of freedom in its arm, enabling it to handle delicate items effectively and move at a speed of two meters per second.
    3. The Walker S2 can operate autonomously and has the ability to change its own battery, ensuring continuous productivity without downtime.
    4. Battery swap stations for electric vehicles are becoming popular in China, allowing quick battery replacements and enhancing vehicle efficiency.
    5. UBTech’s Walker S2 applies the battery swap concept, allowing it to exchange its battery in just three minutes, in contrast with Tesla’s Optimus, which requires stationary charging.


    While Elon Musk believes the Optimus robot will become a multitrillion-dollar venture for Tesla in the future, it faces increasing competition.

    New Developments in Robotics

    One example is the newly launched S2 version of the Walker industrial humanoid robot from UBTech Robotics. Like Optimus 2, it provides 11 degrees of freedom (DoF) in its robotic arm, allowing it to handle small and delicate items more effectively, along with the various features expected of a modern humanoid robot designed for industrial applications.

    The Walker S2 can traverse a warehouse at a speed of two meters per second. It can also bend or squat to lift heavy items, offering a pitch angle range of 170°, and it can twist its torso up to 162°.

    Advanced Features

    The S2 is equipped with a large language AI model that enables voice commands and interactions with humans as it performs its tasks, much like Optimus does.

    However, where it surpasses Tesla’s Optimus is in its ability to operate autonomously around the clock. The second-generation Optimus can locate a charging station on its own, travel to it, and plug itself in to recharge.

    In contrast, the Walker S2 has taken it a step further by not needing to remain inactive during charging. The company asserts that it has developed the first humanoid industrial robot that can change its own battery, ensuring continuous productivity.

    Battery Swap Innovations

    Battery swap stations for electric vehicles are gaining traction in China as a quicker alternative to conventional charging. For instance, an EV manufacturer like NIO completes 100,000 swaps daily and has achieved 80 million total, allowing them to sell their vehicles at a 30% lower price through a battery-as-a-service (BaaS) model. When a swap station is required, vehicles can exit the highway, reach the station, and have their batteries replaced automatically in just a few minutes, enabling them to continue their journey with a new battery.

    This battery swap idea has recently gained a significant advocate in China. The largest battery manufacturer, CATL, is making a substantial investment in battery swaps and intends to establish numerous stations in key urban areas and along major routes, either independently or in collaboration with innovative firms like NIO.

    Inspired by the electric vehicle trend in China, UBTech has effectively mirrored this concept with the Walker S2. As showcased in the company’s product video, the new Walker robot can travel to a factory swap station, remove its depleted battery, install a fully charged one, and resume work in just three minutes, while Optimus remains stationary, plugged in and charging.

    Source:
    Link

  • Proton Launches Lumo: Privacy-Focused AI Assistant for Users

    Proton Launches Lumo: Privacy-Focused AI Assistant for Users

    Key Takeaways

    1. Privacy-Centric AI: Proton’s AI chatbot, Lumo, prioritizes user privacy, opposing “surveillance capitalism” prevalent in Big Tech.

    2. Strong Security Features: Lumo employs “zero-access” encryption, ensuring that user data is inaccessible to third parties, including Proton itself.

    3. File Handling and Encryption: Lumo analyzes uploaded documents without retaining any information, and linked files from Proton Drive maintain end-to-end encryption.

    4. Web Search Options: Lumo has a web search feature that is off by default, using privacy-friendly search engines if enabled.

    5. Tiered Access and Features: Users can interact with Lumo through various account tiers, with free accounts having limited access and paid subscriptions offering enhanced features.


    Proton, known for its secure email service Proton Mail, has introduced a new AI chatbot focused on privacy, called Lumo.

    Vision for Privacy

    According to Andy Yen, the CEO and founder of Proton, their aim is to create “AI that puts people ahead of profits.” This is a direct challenge to what he refers to as “surveillance capitalism” that dominates Big Tech.

    Security Features

    Lumo is designed with numerous security features to protect user data. This AI assistant can perform various tasks like summarizing documents, coding, and writing emails, with all information saved locally on the user’s device.

    Proton utilizes “zero-access” encryption, meaning your content is protected with an encryption key that only you hold.

    This structure ensures that no third party, including Proton itself, can view your information. Thus, your data remains off-limits for advertisers, government agencies, or for training large language models.
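    The zero-access property can be sketched in a few lines: content is encrypted client-side with a key derived from a secret only the user knows, so the server stores bytes it cannot read. This is a toy illustration of the concept only; the key derivation uses a real standard (PBKDF2), but the XOR keystream cipher is a stand-in, and production systems, Proton's included, use vetted authenticated ciphers.

```python
import hashlib

# Toy zero-access sketch: encrypt with a key derived from the user's
# passphrase, so the server only ever stores unreadable bytes.
# The XOR keystream below is for illustration only; real systems use
# vetted AEAD ciphers, never hand-rolled constructions like this.

def derive_key(passphrase, salt):
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def keystream(key, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    data = plaintext.encode()
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def decrypt(key, ciphertext):
    plain = bytes(a ^ b for a, b in zip(ciphertext, keystream(key, len(ciphertext))))
    return plain.decode()

user_key = derive_key("correct horse battery staple", b"per-user-salt")
stored = encrypt(user_key, "draft email: quarterly numbers")

# The server (or any third party) only ever sees `stored`, not the text.
print(decrypt(user_key, stored))
```

    Because the key never leaves the user's device, whoever operates the storage cannot decrypt `stored`, which is the property the paragraph above describes.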

    File Handling and Encryption

    You can upload documents for Lumo to analyze; however, the chatbot does not keep any information from those files. Moreover, when you link files from Proton Drive to Lumo, they retain their end-to-end encryption while interacting with the chatbot.

    Lumo works with a variety of open-source large language models hosted on Proton’s servers in Europe, such as Mistral’s Nemo, Mistral Small 3, Nvidia’s OpenHands 32B, and the Allen Institute for AI’s OLMO 2 32B model.

    The system assigns tasks to the model that is best suited for the specific inquiry. A representative from Proton commented, “programming-related questions are managed by OpenHands, which focuses on coding tasks.”
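    The dispatching behavior described above can be sketched as a simple router. The keyword rules below are invented purely for illustration; only the model names come from the article, and Proton's actual routing logic is not public.

```python
# Toy sketch of routing a query to the best-suited model. The keyword
# rules are invented for illustration; only the model names come from
# the article, not Proton's actual routing logic.

ROUTES = {
    "OpenHands 32B": ("code", "bug", "function", "compile"),
    "Mistral Small 3": ("summarize", "email", "write"),
}
DEFAULT_MODEL = "Mistral Nemo"

def route(query):
    q = query.lower()
    for model, keywords in ROUTES.items():
        if any(k in q for k in keywords):
            return model
    return DEFAULT_MODEL

print(route("Why does this function not compile?"))  # OpenHands 32B
print(route("Summarize this document for me"))       # Mistral Small 3
```

    In practice such routing is often done by a small classifier rather than keyword matching, but the shape is the same: inspect the query, pick the specialist model, fall back to a generalist.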

    Web Search and Accessibility

    Lumo incorporates a web search function, but it is turned off by default to prioritize user privacy. If you choose to enable this feature, Lumo uses “privacy-friendly” search engines to gather information from the web.

    You can access Lumo through its website, lumo.proton.me, and through specialized applications for both iOS and Android. Access is organized into various tiers.

    People without a Lumo or Proton account can ask a “limited number” of questions each week, and they won’t have access to their chat histories.

    Users with a free account can utilize an encrypted chat history, upload small files, and save a limited number of chats as favorites.

    For a monthly subscription of $12.99, the Lumo Plus plan offers unlimited chats, extended encrypted chat history, boundless favorites, and the ability to upload larger files.

    Source:
    Link

  • Upcoming WhatsApp Feature to Enhance Chat Convenience

    Upcoming WhatsApp Feature to Enhance Chat Convenience

    Key Takeaways

    1. WhatsApp is developing a new feature called Quick Recap to summarize multiple chats using AI.
    2. Users will be able to select up to five chats to receive an AI-generated summary of unread messages.
    3. The feature will maintain end-to-end encryption, ensuring chat content remains private and secure.
    4. Chats with Advanced Chat Privacy will be excluded from the Quick Recap feature for added protection.
    5. Quick Recap is currently not available, with a phased rollout expected for beta testers before reaching the stable app version.


    WhatsApp is gearing up to enhance user experience, as per recent findings from the WhatsApp tracker WABetaInfo. The messaging platform is working on a feature named Quick Recap, which employs artificial intelligence to summarize multiple chats simultaneously. This innovation aims to assist frequent users in quickly catching up on unread messages.

    New Feature Development

    WABetaInfo, known for its analysis of beta versions, reports that this feature is currently being developed for the Android beta version 2.25.21.12. Users will soon have the option to select up to five chats from their chat tab. By clicking on a new Quick Recap icon, they will get an AI-generated summary of unread messages. This feature is anticipated to be particularly beneficial for individuals who manage numerous group or personal chats, such as professionals or community managers.

    Privacy and Security

    A major highlight of this new feature is its commitment to maintaining end-to-end encryption. As stated by SmartDroid.de, WhatsApp utilizes a method called private processing technology. This ensures that the AI analysis occurs in a secure environment, meaning that neither WhatsApp nor Meta can access the chat content or the summary results. Interestingly, chats that have Advanced Chat Privacy will be excluded from this feature by design, offering additional safety for sensitive discussions.

    Availability Timeline

    Currently, Quick Recap is not available to users in any version of the app, either regular or beta. While there’s no confirmed release date yet, a phased rollout is anticipated. Initially, the feature will likely be available to a select group of beta testers and then gradually widen its reach to more beta users. Eventually, it will be included in the stable version of the app.

    Source:
    Link

  • Introducing Baby Grok: A Child-Friendly Chatbot by xAI

    Introducing Baby Grok: A Child-Friendly Chatbot by xAI

    Key Takeaways

    1. Elon Musk announced the development of “Baby Grok,” a new app focused on kid-friendly content.
    2. The decision to create Baby Grok comes after backlash over the original Grok’s customizable 3D avatars, which were criticized as sexualized.
    3. Specific details about how Baby Grok will differ from the current Grok chatbot are still pending.
    4. The news of Baby Grok has been positively received on social media, with many expressing excitement for a child-appropriate alternative.
    5. The original Grok chatbot, launched in 2023, has faced challenges with inappropriate responses, which will be addressed in the child-friendly version.


    After facing backlash for the launch of customizable 3D avatars in its AI chatbot Grok 4, which some critics deemed sexualized, CEO Elon Musk has decided to create a new option for kids. On July 19, 2025, Musk shared on his official X account that they are developing a fresh app focused solely on content suitable for children. He announced, “We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content.”

    Details Still Pending

    Musk has not yet revealed specifics about how this new app will function or how it will technically differ from the current Grok chatbot. Grok has gained a reputation for giving risqué or overly informal responses when prompted, which raises concerns in educational settings. Launching Baby Grok can therefore be viewed as a direct response to such feedback.

    Positive Reactions Online

    The news has been mostly well-received on social media. Many users expressed excitement for an AI app geared specifically toward children. Some said they had previously restricted their kids from using Grok; Baby Grok might serve as a suitable alternative for families.

    The initial Grok chatbot made its debut in 2023, competing with OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini. Recently, xAI released an updated version, Grok 4, which Musk claims can tackle academic questions at a PhD level. He also noted that while Grok has a solid grasp of technical topics, it sometimes lacks common sense—an issue that will likely receive special focus during the creation of the child-friendly app.

    Source:
    Link

  • Train AI for Free: Why It Doesn’t Thank You

    Train AI for Free: Why It Doesn’t Thank You

    Key Takeaways

    1. Unpaid Workforce: Using free AI tools makes users part of a global unpaid workforce, helping to train AI without compensation.

    2. Reinforcement Learning: AI chatbots improve through user feedback, with interactions recorded to refine their performance, even for paying users.

    3. Human Labor Behind AI: Real people, often low-paid contractors, evaluate AI responses and provide feedback that drives the training process.

    4. Feedback Mechanism: User feedback informs smaller reward models that guide how the main AI responds, shaping its tone and helpfulness.

    5. Growing Market: The market for training data is booming, expected to grow significantly, while many users remain unaware that their interactions are being used for AI development.


    Ever felt like your late-night chats with ChatGPT are making Silicon Valley richer while you struggle with insomnia? Well, they are. If you’re using free AI tools, guess what? You’ve become part of a global unpaid workforce, and no one even gave you a thank-you mug.

    The Reality of AI Training

    Let’s break it down. Free AI chatbots, such as ChatGPT, Claude, and Gemini, rely on something known as Reinforcement Learning from Human Feedback (RLHF) to get better. It may sound complex, but here’s the straightforward explanation:

    You ask a question, the AI responds, and you give it a thumbs up or down. If you like one answer more than another, congratulations—you just helped train the model. Your feedback is recorded, processed, and eventually, the AI adapts to be more “helpful.”
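    The feedback step above can be sketched concretely: each thumbs-up or thumbs-down becomes a stored preference record (a "chosen" vs. "rejected" answer pair) that can later train a reward model. The field names and structure below are invented for illustration; no provider's actual logging schema is public.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch of RLHF feedback collection: a user preference
# becomes a chosen/rejected record. Field names are invented.

feedback_log = []

def record_feedback(prompt, answer_a, answer_b, preferred):
    # `preferred` says which of the two shown answers the user liked.
    assert preferred in ("a", "b")
    feedback_log.append({
        "prompt": prompt,
        "chosen": answer_a if preferred == "a" else answer_b,
        "rejected": answer_b if preferred == "a" else answer_a,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_feedback(
    "Explain RLHF in one sentence.",
    "RLHF tunes a model using human preference signals.",
    "RLHF is a thing.",
    preferred="a",
)
print(json.dumps(feedback_log[0], indent=2))
```

    Aggregated over millions of users, these pairs are exactly the training data the rest of this piece is about.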

    You’re Part of the Process

    These tools aren’t just floating around in the cloud for no reason. They learn from your interactions. You’re not just having a conversation; you’re essentially a low-cost (read: unpaid) data annotator.

    Think paying for GPT-4 means you’ve escaped the data harvesting? Think again! Unless you’ve opted out in your ChatGPT settings, your chats are still used to refine the AI’s performance. That’s right—you’re shelling out $20 a month to aid in product development. Pretty clever, huh?

    OpenAI, for instance, utilizes discussions from both free and paying users to enhance its models, unless you disable “chat history.” Google’s Gemini has a similar approach. Anthropic’s Claude? It’s also gathering preferences to improve its alignment models.

    Behind the Scenes

    Behind every complex term like RLHF lies a very tangible process involving humans. Companies hire contractors to evaluate responses, flag inaccuracies, and categorize prompts.

    Businesses like Sama (previously linked to OpenAI), Surge AI, and Scale AI provide this labor, often employing low-wage workers who toil long hours, many from developing nations. Reports from 2023 revealed that RLHF labelers earned between $2 and $15 an hour, depending on their location and role. So real people are constantly clicking “this response is better.” It’s this feedback loop that fuels the bots.

    If you’re giving thumbs up feedback, you’re essentially doing a small part of their job… for nothing.

    The Feedback Mechanism

    Here’s where it becomes intriguing. Your feedback doesn’t directly train the main model. Instead, it goes into reward models, which are smaller systems that inform the main AI how to act. So, when you say, “I prefer this answer,” you’re contributing to the internal guide that the bigger model follows. When enough people provide feedback, the AI starts to feel more human-like, more polite, and more helpful… or more like a writer with boundary issues.
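    The reward-model idea above has a standard mathematical core: the probability that answer A is preferred over answer B is modeled as sigmoid(r(A) - r(B)), the Bradley-Terry form commonly used in RLHF. The scoring function below is a hand-written stand-in for a learned model, invented purely for illustration.

```python
import math

# Toy reward-model sketch. In RLHF, preference pairs train a scorer r(),
# and P(A preferred over B) is modeled as sigmoid(r(A) - r(B)), the
# Bradley-Terry form. The scorer here is a hand-written stand-in.

def reward(answer):
    # Stand-in scorer: longer, polite answers score a bit higher.
    score = 0.1 * len(answer.split())
    if "please" in answer.lower() or "happy to help" in answer.lower():
        score += 1.0
    return score

def prob_a_preferred(answer_a, answer_b):
    # Bradley-Terry: P(A > B) = sigmoid(r(A) - r(B))
    return 1.0 / (1.0 + math.exp(reward(answer_b) - reward(answer_a)))

p = prob_a_preferred(
    "Happy to help! Here is a step-by-step explanation of the fix.",
    "No.",
)
print(f"P(A preferred) = {p:.2f}")
```

    The main model is then tuned to produce answers the reward model scores highly, which is how thumbs-up clicks end up shaping its tone and helpfulness.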

    AI keeps track of tone. When you interact with it in a specific style—be it sarcastic, scholarly, or straight to the point—the system learns to reply accordingly. It’s not stealing your writing style and selling it (yet), but your habits help shape the collective training experience, especially when the bot notices that others appreciate your tone or phrasing.

    The Role of CAPTCHA

    It’s less about copying you and more about duplicating what works best. What works often originates from someone who never agreed to style duplication.

    And those CAPTCHA challenges you solve to prove you’re human? You’re not just clicking on traffic lights and crosswalks to access your email. You’re actually labeling data for machine learning systems. Google’s reCAPTCHA, hCaptcha, and Cloudflare’s Turnstile all contribute visual data to training processes, helping AIs understand the world one blurry street sign at a time.

    So yes, even your security checks are now part of the feedback system.

    The Booming Market

    This isn’t some wild conspiracy theory. The market for training data is thriving. As reported by MarketsandMarkets, the global training data market is expected to rise from $1.5 billion in 2023 to over $4.6 billion by 2030. While this includes synthetic data and curated datasets, the significance of human-labeled real-world data—what you casually provide each day—is on the rise.

    Yet, most users still believe their chatbot chats vanish into thin air. Spoiler alert: they don’t. Not unless you’ve specifically turned off logging (and even then… you should verify).

    Your Role in the Future

    Here’s the twist. You’re contributing to the very technology that could one day take your job, surpass your creativity, or turn your tweets into product samples. This doesn’t mean you should stop using AI, but it’s important to understand what you’re helping to create. And perhaps, just perhaps, ask for a bit of transparency in return.

    After all, if your unpaid contributions are shaping the next generation of billion-dollar AI systems, the least they could do is express some gratitude.

    Source:
    Link