Tag: NVIDIA Chips

  • Nvidia to Reveal New AI Chip at GTC Conference

    Nvidia's Anticipated Showcase at GTC Conference

    Nvidia is set to headline the forthcoming GTC conference, which marks the industry's return to in-person events post-pandemic. All eyes are on the company as it prepares to reveal its latest innovations, and a pivotal question hangs over customers and investors alike: can Nvidia maintain its dominance in the AI chip market?

    The B100 Chip: Setting a New Price Benchmark

    The upcoming B100 chip is anticipated to carry a price tag exceeding $20,000, surpassing that of its predecessor. Nvidia's recent surge has been nothing short of remarkable, propelling its market cap to $2 trillion and positioning it just behind Apple for the number-two spot. Analysts foresee revenue doubling in 2024, driven by soaring demand for its top-tier AI chips.

    Future-Proofing Success with the B100

    The focus isn't solely on current achievements but also on securing future success. Nvidia is expected to unveil the B100, its next-generation AI processor, at the GTC event. This formidable chip is poised to serve as the core of the company's upcoming AI systems, raising the performance bar even higher.

    Challenges and Opportunities on the Horizon

    Demand for Nvidia's current chips already outstrips supply, leaving developers with prolonged wait times. The expected price hike for the B100, beyond the $20,000 mark, adds to analysts' concerns about Nvidia's stock valuation: while the company has grown dramatically in recent years, some worry that overly optimistic forecasts of future earnings could weigh on its standing.

    Apart from hardware, Nvidia's software strategy presents another area of interest. The CUDA platform entices developers with tools tailored for Nvidia chips, fostering developer allegiance (the brief sketch below illustrates this lock-in), and the forthcoming CUDA updates are poised to further solidify that loyalty. Nvidia's venture into cloud services is also a development to monitor closely.

    The geopolitical dimension, particularly concerning China, introduces an additional layer of complexity. US-imposed restrictions on access to Nvidia's cutting-edge chips have compelled China to develop its own AI hardware. While Chinese competitors offer chips on par with Nvidia's A100, they have yet to match the H100, and analysts anticipate that the B100 will widen this gap even further.
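
    To make the lock-in point concrete, here is a minimal, hedged Python sketch using the CuPy library (one example of a CUDA-dependent package; it is not mentioned in the article and stands in for CUDA tooling generally). Because CuPy is built on CUDA, the snippet assumes an Nvidia GPU and an installed CUDA toolkit; moving the same workload to another vendor's hardware would mean rewriting it against a different library.

        # Minimal sketch: a GPU computation written against CuPy, a CUDA-backed,
        # NumPy-compatible library. Because CuPy targets CUDA, this code runs only
        # on Nvidia GPUs; porting it elsewhere means switching libraries.
        import cupy as cp

        # Allocate arrays directly in GPU memory and run an element-wise computation.
        x = cp.arange(1_000_000, dtype=cp.float32)
        y = cp.sqrt(x) * 2.0

        # Copy the result back to host memory for inspection.
        print(cp.asnumpy(y[:5]))

    The convenience of this kind of Nvidia-specific acceleration is precisely what tends to keep developers inside the CUDA ecosystem.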

    As the GTC conference approaches, it is evident that this event will be pivotal for Nvidia. Stay updated with the latest Nvidia insights on Gizmochina as the conference unfolds!

  • NVIDIA Cloud G-SYNC Technology Enhances Cloud Gaming Experience

    NVIDIA recently introduced its Cloud G-SYNC technology designed to enhance the gaming experience for users of GeForce NOW. This feature is exclusive to GeForce NOW RTX 4080 SuperPOD instances under the Ultimate subscription tier. Cloud G-SYNC works by synchronizing display refresh rates with cloud gaming streams to reduce screen tearing and stuttering, resulting in smoother gameplay. Moreover, it incorporates Reflex technology to minimize latency in cloud gaming environments.

    Compatibility and Usage Requirements

    To make use of Cloud G-SYNC, users need a display with a refresh rate above 60Hz and support for a variable refresh rate (VRR) technology such as Nvidia G-SYNC, AMD FreeSync, or Apple’s ProMotion. It's important to note that Cloud G-SYNC is accessible only through the native GeForce NOW applications and is not compatible with browser, mobile, or TV clients.

    System Requirements

    On Windows, the minimum requirement is an NVIDIA graphics card based on the Turing architecture (GTX 16 or RTX 20 series); Intel and AMD GPUs are not supported. On macOS, the application supports all Apple Silicon models and certain Intel-based Macs, including specific MacBook Pro, MacBook Air, iMac, and Mac Pro configurations.

    On systems with GeForce GPUs, driver version R545 or later must be installed to enable Cloud G-SYNC (a quick way to check the installed driver version is sketched below). NVIDIA also specifies that multi-monitor setups are not compatible with this feature on Windows.

    Alongside the launch of Cloud G-SYNC, NVIDIA has introduced a GeForce NOW day pass. It comes in Priority and Ultimate versions priced at $3.99 and $7.99, respectively, offering 6 or 8 hours of gaming at the corresponding tier. These passes cater to users who are temporarily away from their own gaming PCs or who want to try cloud gaming before committing to a subscription.
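
    For reference, the short Python sketch below (an illustrative convenience, not something from NVIDIA's documentation) queries the installed driver version with the standard nvidia-smi command-line tool and compares it against the R545 minimum mentioned above.

        # Sketch: check whether the installed NVIDIA driver meets the R545 minimum
        # branch required for Cloud G-SYNC, using the nvidia-smi utility.
        import subprocess

        MIN_DRIVER_BRANCH = 545  # minimum driver branch cited for Cloud G-SYNC

        def installed_driver_version() -> str:
            # nvidia-smi can report the driver version as plain CSV text.
            result = subprocess.run(
                ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
                capture_output=True, text=True, check=True,
            )
            return result.stdout.strip().splitlines()[0]

        version = installed_driver_version()          # e.g. "545.92"
        if int(version.split(".")[0]) >= MIN_DRIVER_BRANCH:
            print(f"Driver {version} supports Cloud G-SYNC")
        else:
            print(f"Driver {version} is too old; update to R545 or later")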

  • End of the Line for Nvidia GTX GPUs After Nearly Two Decades

    Nvidia is reportedly phasing out its GTX line of graphics cards, with production of the GTX 16-series, the last GTX GPUs (based on the Turing architecture), said to have ceased. While Nvidia has not officially confirmed the move, sources indicate that the remaining GTX 16-series GPUs have been distributed to its partners.

    Once the existing stock of GTX 1630 and GTX 1650 models is depleted, the GTX line is expected to be discontinued. Gamers seeking budget-friendly options may need to look towards alternatives from AMD and Intel, as Nvidia’s entry-level offering in the RTX series is currently the RTX 3050 6GB.

    Transition to RTX

    The GTX brand has a rich history dating back to 2005, when it was first introduced with the GeForce 7800 GTX, establishing itself as Nvidia’s premier GPU line for over a decade. In 2018, Nvidia shifted its focus to the RTX 20-series, emphasizing ray tracing and AI capabilities. Despite this transition, Nvidia acknowledged the popularity of the GTX line by launching the GTX 16-series in 2019, providing a more affordable option based on the Turing architecture without RTX features.

    Future Prospects

    While no GTX equivalent was released in the Ampere (RTX 30-series) generation, the GTX 16-series continued to serve as a cost-effective choice for consumers. Nvidia’s emphasis on advancing ray tracing and AI technologies suggests that the GTX brand may not see a revival. With the discontinuation of the GTX 16-series, Nvidia’s focus now lies on the RTX line, which encompasses its entire consumer GPU range.


  • NVIDIA CEO Predicts AI to Achieve General Intelligence in 5 Years

    NVIDIA CEO Jensen Huang recently expressed a bold opinion during a Stanford forum, suggesting that Artificial General Intelligence (AGI) could be closer than anticipated, potentially emerging within the next five years. However, Huang's assertion is accompanied by important context.

    NVIDIA's Confidence in AI Chips

    Huang's forecast hinges on how AGI is defined. If AGI is understood as the ability to pass human-designed assessments, Huang believes it is on the brink of realization, envisioning AI systems excelling across all such tests within the next five years. This optimism is fueled in part by NVIDIA's pivotal role in building the high-performance AI chips behind platforms like OpenAI's ChatGPT.

    The Definition of AGI

    Yet Huang acknowledges a broader definition of AGI, one that involves comprehending and emulating the workings of the human mind. This version, he concedes, remains elusive because scientists still debate the nature of human intelligence, and without a well-defined objective, engineering such a system is difficult.

    Infrastructure and AI Growth

    The conversation also turned to the infrastructure needed to support AI's growth. Although concerns have been raised about the need for additional chip fabrication plants to meet future demand, Huang suggests this might not be as urgent as some speculate: improvements in AI algorithms and processing efficiency could reduce the overall number of chips required, even as AI applications multiply.
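
    A rough, purely illustrative calculation (the numbers below are assumptions, not figures from Huang or NVIDIA) shows how this trade-off can work: if per-chip efficiency gains outpace demand growth, the total number of chips required can shrink even as AI usage expands.

        # Back-of-envelope illustration with hypothetical numbers only: total AI
        # compute demand grows 4x, while algorithms and hardware together make
        # each chip 5x more effective, so fewer chips are needed overall.
        current_chips = 1_000_000     # assumed installed base of AI accelerators
        demand_growth = 4.0           # assumed growth in total compute demanded
        efficiency_gain = 5.0         # assumed per-chip effective throughput gain

        future_chips = current_chips * demand_growth / efficiency_gain
        print(f"Chips needed: {future_chips:,.0f}")   # 800,000, fewer than today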

    While Huang's forecast captures attention, it is vital to grasp the nuance underpinning his claim. AI's progress may be swift, with impressive performance in specific domains, but the full breadth of human intelligence, which extends beyond test performance, may remain formidably difficult to understand and reproduce.