Tag: Llama 3.1

  • Lenovo AI Stick: Upgrade Your PC with AI at MWC 2025


    Key Takeaways

    1. Lenovo AI Stick: A compact device that allows non-AI PCs to run AI applications without cloud reliance.

    2. Processing Power: Features a Neural Processing Unit (NPU) with 32 TOPS, providing mid-range AI capabilities.

    3. Alternative to Copilot+: Offers Lenovo AI Now as a local solution for large language models and graphics creation, based on open-source Llama 3.1.

    4. Simple Connectivity: Easily connects to older PCs via a single USB-C Thunderbolt port and can be powered directly for enhanced performance.

    5. Cost-Effective Solution: Aims to provide an affordable option for users needing AI capabilities without investing in high-end PCs.


    At MWC 2025, Lenovo is excited to showcase its new AI concept gadgets. The most recent addition is the Lenovo AI Stick, a compact device that plugs into non-AI PCs, enabling them to run AI applications without relying on cloud services.

    Impressive Specifications

    The Lenovo AI Stick features a Neural Processing Unit (NPU) with a processing speed of 32 TOPS (tera operations per second), giving connected PCs solid mid-range AI capability. To put this in perspective, the Qualcomm Snapdragon X Elite NPUs found in many top-tier AI PCs operate at 45 TOPS, while Microsoft requires 40 TOPS for its Copilot+ features. Lenovo, however, offers its AI Now features as an alternative to Copilot+: based on the open-source Llama 3.1, Lenovo AI Now enables local use of large language models and graphics creation.

    Easy Connectivity

    For older PCs lacking NPUs, connecting the Lenovo AI Stick is a breeze via the single USB-C Thunderbolt port located on its side. Users with more demanding AI tasks can plug the AI Stick directly into a power source to enhance processing power, similar to the benefits provided by an external GPU.

    Given the high costs associated with modern high-end PCs, Lenovo’s innovative AI dongle could address a genuine need for users if it eventually becomes available as a consumer product.


  • Lenovo Launches AI Agent and Learning Platform for Next-Gen PCs


    Lenovo AI Now utilizes Llama 3.1 to offer a fresh, chat-oriented method for locating documents, files, and various data on the newest Lenovo PCs. The interface is organized into clear sections that aim to be user-friendly, featuring a PC Assistant that includes automatically created shortcuts for commands like "Turn on Battery Saver Mode."

    Local and Cloud Chat Features

    The AI Now program also includes sections for Local and Cloud Chat. Lenovo claims that it uses Microsoft Azure AI Content Safety and has received certifications like the UL Verified Mark for AI Model Transparency, ensuring user safety and security.

    Learning Zone Tools

    AI Now can run entirely on the device, as can the Lenovo Learning Zone, a brand-new set of tools and services designed to enhance education, whether online or face-to-face. It gathers resources such as audio/video lectures, PDF files, and presentations to create notes and summaries, which can be organized by subject.

    The Learning Zone is also capable of creating its own quizzes to help improve retention and engagement with educational content.

    Availability of Features

    The Learning Zone is expected to be available as a free optional download for select Lenovo AI PCs starting in December 2024, while AI Now is anticipated to launch in the first quarter of 2025.

    Lenovo Press Release

    At Tech World 2024, Lenovo presents "Smarter AI for All" with a comprehensive range of AI devices, solutions, and concepts.


  • Humans Outperform AI, Says Apple-Funded Study


    Earlier this month, a group of six AI experts supported by Apple released a study introducing GSM-Symbolic, a new benchmark for AI that "allows for more controllable assessments, giving important insights and more dependable metrics for evaluating the reasoning abilities of models." Unfortunately, it appears that large language models (LLMs) still face significant limitations and are missing even the most fundamental reasoning skills, as shown by initial tests using GSM-Symbolic with AI systems from major companies like Meta and OpenAI.

    Issues with Current Models

    The research pointed out a major issue with current models, which is their lack of consistency when faced with similar questions. The findings indicated that minor changes in wording, which wouldn’t change the meaning for a human, often result in varied responses from AI systems. No specific model was identified as performing notably well.

    The report stated, "In particular, the effectiveness of all models drops [even] when just the numerical values in the question are modified in the GSM-Symbolic benchmark." It also found that "the weakness of mathematical reasoning in these models [shows] that their performance worsens significantly as the number of clauses in a question goes up."
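    The core idea behind GSM-Symbolic — regenerate the "same" word problem with different numeric values and check whether a model's answers stay correct — can be sketched in a few lines. This is a minimal illustration of the approach, not the benchmark's actual code; the template, names, and values are hypothetical:

    ```python
    import random

    # Hypothetical GSM-Symbolic-style template: the wording stays fixed,
    # only the numeric values vary between generated instances.
    TEMPLATE = ("{name} picked {x} apples on Monday and {y} apples on "
                "Tuesday. How many apples did {name} pick in total?")

    def make_instance(seed):
        """Generate one problem variant plus its ground-truth answer."""
        rng = random.Random(seed)  # seeded, so each variant is reproducible
        x, y = rng.randint(2, 50), rng.randint(2, 50)
        question = TEMPLATE.format(name="Mia", x=x, y=y)
        return question, x + y  # the ground truth is computed symbolically

    # Ten variants of the "same" question; a robust reasoner should answer
    # all of them correctly, since only the numbers change.
    variants = [make_instance(s) for s in range(10)]
    for question, answer in variants[:2]:
        print(answer, "<-", question)
    ```

    The study's finding is that model accuracy drops across such variants even though, to a human, every instance is the same trivial problem.
    
    
    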

    Study Details

    The 22-page study is available as a PDF. Its final two pages include problems with irrelevant details appended at the end — details that shouldn’t change the answer for a human. Yet the AI systems factored in these parts, leading to incorrect answers.

    In conclusion, AI systems remain trapped in pattern recognition and still do not possess general problem-solving skills. This year saw the introduction of several LLMs, including Meta AI’s Llama 3.1, Nvidia’s Nemotron-4, Anthropic’s Claude 3, the Fugaku-LLM from Japan (the largest model ever trained solely on CPU power), and Nova by Rubik’s AI, which was launched earlier this month.

    Upcoming Publication

    Tomorrow, O’Reilly will publish the first edition of "Hands-On Large Language Models: Language Understanding and Generation" by Jay Alammar and Maarten Grootendorst. It is priced at $48.99 for the Kindle edition and $59.13 for the paperback version.

  • Open NotebookLM: Convert PDFs to Podcasts with Open Source


    For those new to Google’s AI project, NotebookLM is a research-assistant platform that lets users upload documents. It uses Gemini 1.5 Pro to support notetaking against the information extracted from those documents: NotebookLM summarizes every document in the user’s notebook and lets users ask questions about the content, returning answers with relevant citations from the uploaded files. One of its standout features is the ability to create podcasts from the uploaded documents. Generated by Gemini, these podcasts consist of AI-curated audio discussions between two speakers about topics found in the materials, with segments lasting between five and thirty minutes. However, some users might hesitate to upload their content to a proprietary large language model (LLM), which is where Open NotebookLM offers a different option.

    A User-Friendly Alternative

    Open NotebookLM offers a simple, user-friendly interface built from open-source language-model and text-to-speech technologies to convert PDFs into podcasts. For PDF processing it employs Llama 3.1, subject to a 100,000-character input limit. While it may not match Gemini’s audio quality, MeloTTS delivers reliable text-to-speech performance and lets users set the AI’s tone to either "fun" or "formal." Open NotebookLM also supports just over ten languages, including Spanish, French, and German. Users can currently try the project on Gabriel Chua’s Hugging Face page or build it locally using the resources in the project’s GitHub repository.
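    Staying under a per-request character limit like the 100,000 characters cited above typically means chunking the extracted PDF text before sending it to the model. A minimal sketch of such a chunker — the function is illustrative, not part of Open NotebookLM's codebase — might look like this:

    ```python
    CHAR_LIMIT = 100_000  # per-request input limit cited for Llama 3.1 here

    def chunk_text(text, limit=CHAR_LIMIT):
        """Split extracted PDF text into chunks under the character limit,
        breaking on paragraph boundaries where possible."""
        chunks, current = [], ""
        for para in text.split("\n\n"):
            # Flush, then hard-split any paragraph that itself exceeds the limit.
            if len(para) > limit:
                if current:
                    chunks.append(current)
                    current = ""
                while len(para) > limit:
                    chunks.append(para[:limit])
                    para = para[limit:]
            if current and len(current) + 2 + len(para) > limit:
                chunks.append(current)  # current chunk is full; start a new one
                current = para
            else:
                current = current + "\n\n" + para if current else para
        if current:
            chunks.append(current)
        return chunks

    # Simulated extracted text: 300 paragraphs of ~900 characters each.
    doc = "\n\n".join(f"Paragraph {i}: " + "x" * 900 for i in range(300))
    pieces = chunk_text(doc)
    print(len(pieces), "chunks, all under the limit")
    ```

    Breaking on paragraph boundaries keeps each chunk self-contained prose, which tends to produce better summaries than cutting mid-sentence at a fixed offset.
    
    
    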

    Accessing the Project

    Gabriel Chua can be found on both Hugging Face and GitHub, where users can explore the Open NotebookLM project further.

  • Meta Launches Llama 3.1: Open-source Model with 128K Token Context


    Meta introduced its latest open-source language model, Llama 3.1, on July 23rd. This version features numerous advancements, such as improved inference capabilities, expanded multilingual support, and an increased context length of 128K tokens.
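    To get a feel for what a 128K-token window holds, a back-of-the-envelope check can help. The ~4-characters-per-token ratio below is a common heuristic for English prose, not an official Llama 3.1 figure, and the function is purely illustrative:

    ```python
    CONTEXT_TOKENS = 128_000  # Llama 3.1 context length (from the announcement)
    CHARS_PER_TOKEN = 4       # rough heuristic for English text (assumption)

    def fits_in_context(text, reserved_for_output=4_000):
        """Estimate whether a prompt fits in the context window,
        leaving headroom for the model's reply."""
        est_tokens = len(text) / CHARS_PER_TOKEN
        return est_tokens <= CONTEXT_TOKENS - reserved_for_output

    short_doc = "hello " * 1_000     # ~6,000 chars -> ~1,500 tokens
    long_doc = "hello " * 200_000    # ~1.2M chars -> ~300,000 tokens
    print(fits_in_context(short_doc))  # True
    print(fits_in_context(long_doc))   # False
    ```

    By this estimate, 128K tokens corresponds to roughly 500,000 characters of English text — enough for several hundred pages of prose in a single prompt.
    
    
    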

    Comparable to Leading Models

    The highlight is the flagship 405B parameter Llama 3.1-405B. Meta claims this robust model matches the performance of top closed-source models in tasks like common-sense reasoning, guidance, mathematics, tool usage, and multilingual translation. Its capabilities are compared to GPT-4, GPT-4o, and Claude 3.5 Sonnet.

    Versatile Model Options

    Enhancements are not limited to the top-tier model. The 8B and 70B parameter versions of Llama 3.1 are also noted to be very competitive with other open-source and closed-source models of similar sizes.

    Availability and Support

    For those keen to explore, Llama 3.1 can now be downloaded from Meta’s official website and Hugging Face. Furthermore, over 25 major partners, including cloud services like AWS, Azure, and Google Cloud, as well as hardware manufacturers such as Nvidia and Dell, are confirmed to support the new model.