Meta Launches Llama 3.1: Open-source Model with 128K Token Context

Meta introduced its latest open-source language model, Llama 3.1, on July 23rd. This version brings numerous advancements, including improved reasoning capabilities, expanded multilingual support, and a context window extended to 128K tokens.

Comparable to Leading Models

The highlight is the flagship 405B-parameter model, Llama 3.1-405B. Meta claims it matches the performance of leading closed-source models on tasks such as common-sense reasoning, instruction following, mathematics, tool use, and multilingual translation, benchmarking it against GPT-4, GPT-4o, and Claude 3.5 Sonnet.

Versatile Model Options

The enhancements are not limited to the top-tier model. The 8B and 70B parameter versions of Llama 3.1 are also reported to be highly competitive with other open-source and closed-source models of similar size.

Availability and Support

For those keen to explore, Llama 3.1 can now be downloaded from Meta’s official website and Hugging Face. Furthermore, over 25 major partners, including cloud services like AWS, Azure, and Google Cloud, as well as hardware manufacturers such as Nvidia and Dell, are confirmed to support the new model.
