Tag: AI Supercomputer

  • MS-C931 Mini PC with 128GB RAM: Nvidia Outperforms Intel and AMD

    Key Takeaways

    1. The MSI EdgeXpert MS-C931 is marketed as an AI supercomputer, designed for local AI model processing, offering cost-effectiveness and better privacy compared to cloud services.
    2. It has compact dimensions (5.9 x 5.9 x 2 inches) and is versatile for various applications, including robotics and traffic monitoring.
    3. The device features an Nvidia Blackwell graphics card, an ARM SoC with 20 cores, and claims an AI performance of 1,000 TOPS.
    4. It can manage large language models (LLMs) with up to 200 billion parameters locally, and this capacity doubles when two units are connected.
    5. The mini PC supports up to four displays, includes WiFi 7 and Bluetooth 5.3, offers 1TB of M.2 SSD storage, and has a 10Gbit/s Ethernet port.


    We’ve talked about MSI products several times before, even if not all of them are meant for regular consumers. MSI also has offerings for businesses and professionals, and the new EdgeXpert MS-C931 fits right in. This mini PC is marketed as an AI supercomputer, which suggests it has the power to run AI models locally. This comes with several benefits. For one, it can be more cost-effective than using cloud-based AI services, and it also offers better privacy. This aspect could be vital for consulting firms that handle sensitive information from clients.

    Design and Versatility

    The MSI EdgeXpert measures 5.9 x 5.9 x 2 inches and can be put to use in a variety of applications, including robotics. Robots, for instance, could leverage real-time image recognition when linked to the MS-C931 over a local network. The mini PC could also be effective for monitoring or controlling traffic. It is powered by an Nvidia Blackwell GPU paired with a 20-core ARM SoC, and its AI performance is claimed to reach 1,000 TOPS, a significant figure compared to the NPUs in Intel and AMD CPUs, which typically deliver AI performance in the double digits. It also includes 128GB of LPDDR5x RAM, with NVLink C2C giving the CPU and GPU fast, shared access to that memory.

    Performance and Connectivity

    MSI states that a single unit of this mini PC can run an LLM with up to 200 billion parameters locally, and connecting two MS-C931 units effectively doubles that figure. Beyond AI workloads, the system can also serve as a regular mini PC without any complications. It supports up to four displays via USB Type-C ports, which can also be used to connect various peripherals. It comes equipped with WiFi 7 and Bluetooth 5.3, the M.2 SSD offers 1TB of storage, the Ethernet port supports a bandwidth of 10Gbit/s, and the whole system runs on a 240W PSU.
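
    As a rough illustration of why 200 billion parameters is a plausible ceiling for 128GB of memory, the sketch below assumes 4-bit quantized weights (about 0.5 bytes per parameter) and roughly 20% overhead for the KV cache and runtime buffers; neither assumption comes from MSI's spec sheet. Linking two units simply doubles the memory budget.

    ```python
    # Back-of-the-envelope check: does a quantized 200B-parameter model fit in
    # 128GB of unified memory? Assumptions (not MSI's figures): 4-bit weights
    # (0.5 bytes per parameter) and ~20% overhead for KV cache and buffers.

    GIB = 1024**3

    def fits_in_memory(params_billions: float, memory_gib: float,
                       bytes_per_param: float = 0.5, overhead: float = 0.20) -> bool:
        """True if quantized weights plus overhead fit within memory_gib."""
        weight_bytes = params_billions * 1e9 * bytes_per_param
        return weight_bytes * (1 + overhead) <= memory_gib * GIB

    print(fits_in_memory(200, 128))      # one MS-C931: ~120GB needed -> True
    print(fits_in_memory(400, 2 * 128))  # two linked units: budget doubles -> True
    print(fits_in_memory(400, 128))      # a 400B model would not fit on one unit -> False
    ```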

    Source:
    Link

  • Nvidia Launches DGX Station AI Supercomputer with 72-Core CPU

    Key Takeaways

    1. Nvidia’s DGX Station is a powerful AI supercomputer designed for developers and researchers to build and run large language models (LLMs) locally.
    2. The DGX Station features the GB300 Grace Blackwell Ultra Superchip, enabling it to handle models with up to 200 billion parameters and offering significant performance improvements over the smaller DGX Spark.
    3. Its architecture includes a 72-core Grace CPU and Blackwell Ultra GPU, connected via NVLink-C2C, providing seven times the bandwidth of PCIe Gen 5 and enhancing AI processing efficiency.
    4. The DGX Station utilizes the ConnectX-8 SuperNIC for fast networking and runs on a customized version of Ubuntu Linux, facilitating easy transition from local to cloud-based AI model deployment.
    5. The DGX Station is expected to be available from third-party manufacturers in late 2025, while Nvidia’s 5090 GPU is currently available for those looking to develop AI LLMs now, albeit at high prices.


    Nvidia has introduced its latest desktop AI supercomputer, known as the DGX Station. This advanced machine is tailored for AI developers, researchers, and data scientists, enabling them to build and run large language models (LLMs) and related projects locally.

    Enhanced Power and Performance

    The DGX Station boasts significantly greater capabilities than the smaller DGX Spark (previously referred to as Project DIGITS) and can handle local models with 200 billion parameters, thanks to the GB300 Grace Blackwell Ultra Desktop Superchip. The Superchip is equipped with 496GB of LPDDR5X CPU memory alongside 288GB of HBM3e GPU memory.

    Cutting-Edge Architecture

    The Superchip pairs a 72-core Grace CPU with a Blackwell Ultra GPU over NVLink-C2C, a link that offers seven times the bandwidth of PCIe Gen 5 and reaches speeds of 900 GB/s. The Blackwell Ultra GPU delivers up to 1.5 times the AI FLOPS of the standard Blackwell GPU and is specifically optimized to process FP4 models, which boosts AI processing efficiency by easing memory and computational demands.
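
    The quoted numbers line up with some quick arithmetic. Assuming PCIe Gen 5 x16 moves roughly 128 GB/s (a commonly cited bidirectional figure, not stated in the article), seven times that is about 896 GB/s, close to the 900 GB/s claimed for NVLink-C2C; and FP4 weights occupy a quarter of the space of FP16 weights, which is where the memory relief comes from. A minimal sketch:

    ```python
    # Sanity-checking the quoted bandwidth and the FP4 memory savings.
    # PCIE5_X16_GBPS is an assumed figure for PCIe Gen 5 x16, not from the article.

    PCIE5_X16_GBPS = 128
    print(7 * PCIE5_X16_GBPS)  # 896 -> roughly the 900 GB/s quoted for NVLink-C2C

    def weight_footprint_gb(params_billions: float, bits_per_param: int) -> float:
        """Raw weight storage in gigabytes at a given precision."""
        return params_billions * 1e9 * bits_per_param / 8 / 1e9

    print(weight_footprint_gb(200, 16))  # FP16: ~400 GB of weights
    print(weight_footprint_gb(200, 4))   # FP4:  ~100 GB, a 4x reduction
    ```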

    Networking and Operating System

    The DGX Station connects with other DGX Stations using the ConnectX-8 SuperNIC, which can transfer data at speeds of up to 800 Gb/s. It operates on a customized version of Ubuntu Linux, known as the DGX operating system, which is tailored to support the complete Nvidia AI software stack. This setup facilitates the transition of LLM AI models from local development to the cloud, simplifying their release and scaling. The DGX Station is expected to be available from third-party computer manufacturers in late 2025.

    Those eager to dive into AI LLM development right now can purchase an Nvidia 5090 GPU (available on Amazon), which can run models of up to approximately 30 billion parameters. However, 5090 cards are currently priced well above their MSRP, exceeding $4,000. The 4060 Ti 16GB GPU, which can manage models of up to around 14 billion parameters, is also overpriced but can be found for under $1,000 (also available on Amazon).
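
    Those parameter figures are consistent with a simple rule of thumb, sketched below under assumptions that are mine rather than Nvidia's: roughly one byte per parameter (8-bit quantized weights), about 10% of VRAM reserved for the KV cache and buffers, and 32GB of VRAM on the 5090 versus 16GB on the 4060 Ti.

    ```python
    # Crude capacity estimate: largest model (in billions of parameters) that a
    # GPU can hold, assuming ~1 byte per parameter (8-bit weights) and ~10% of
    # VRAM reserved for the KV cache and buffers. Both assumptions are mine.

    def max_params_billions(vram_gb: float, bytes_per_param: float = 1.0,
                            reserve_fraction: float = 0.10) -> float:
        usable_bytes = vram_gb * 1e9 * (1 - reserve_fraction)
        return usable_bytes / bytes_per_param / 1e9

    print(round(max_params_billions(32)))  # 5090 with 32GB -> ~29B, near the quoted ~30B
    print(round(max_params_billions(16)))  # 4060 Ti 16GB   -> ~14B, matching the article
    ```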

    Nvidia has made this announcement through its news release, highlighting the arrival of both the DGX Spark and DGX Station personal AI computers. These systems, powered by NVIDIA Grace Blackwell, aim to bring accelerated AI capabilities to developers, researchers, and data scientists, with prominent computer manufacturers such as ASUS, Dell Technologies, HP, and Lenovo set to produce them.

    Source:
    Link