Tag: GPU Performance

  • Snapdragon 8 Elite 2 Leak: 30% GPU Performance Boost Expected


    Key Takeaways

    1. Snapdragon 8 Elite 2 features a GPU cache increase from 12MB to 16MB, reportedly delivering a performance boost of around 30%.
    2. The chip is expected to utilize Qualcomm’s next-gen custom Oryon cores, providing a CPU performance increase of about 25%.
    3. It will have a 2+6 core setup, with benchmarks predicting around 3.8 million points on AnTuTu, showing a near 30% improvement over current Snapdragon 8 Elite devices.
    4. The new chip will support LPDDR5X and LPDDR6 memory types and will be based on the latest ARM v9 architecture.
    5. Qualcomm is using TSMC’s 3nm N3P process for manufacturing, similar to the technology for Apple’s upcoming A19 chip, with a possible announcement expected around October.


    The Snapdragon 8 Elite 2 is drawing attention even though its official release is still more than a quarter away. Today, it’s making waves again thanks to a recent leak that reveals some exciting advancements in CPU and GPU capabilities.

    CPU and GPU Enhancements

    Recent information comes from Digital Chat Station, a generally reliable industry source in China. The leak indicates that Qualcomm plans to increase the GPU cache from 12MB, as found in the current Snapdragon 8 Elite, to 16MB for the Elite 2. This change is reportedly linked to a performance increase of around 30% in initial tests.

    Performance Specifications

    On the CPU front, the leak hints that Qualcomm’s next-generation custom Oryon cores could provide a performance boost of about 25%. The chip is anticipated to use a 2+6 core setup, featuring two prime cores and six performance cores (details previously mentioned by the same source). Benchmarks suggest the Snapdragon 8 Elite 2 might achieve around 3.8 million points on AnTuTu, indicating a nearly 30% improvement over the highest scores from current devices using the Snapdragon 8 Elite.
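    As a rough sanity check (a back-of-envelope sketch, not a confirmed figure): if 3.8 million points really is about 30% above today’s best Snapdragon 8 Elite results, the implied current top score is roughly 2.9 million points.

    ```python
    leaked_score = 3_800_000   # rumored AnTuTu score for the Snapdragon 8 Elite 2
    claimed_gain = 0.30        # ~30% improvement quoted in the leak

    # Back out the current top score implied by the quoted gain
    implied_current_top = leaked_score / (1 + claimed_gain)
    print(round(implied_current_top))  # ≈ 2.92 million points
    ```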

    Memory and Manufacturing

    Moreover, the new chip is expected to be compatible with both LPDDR5X and LPDDR6 memory types. It will also support SME1 and SVE2 instruction sets, which means it will be based on the latest ARM v9 architecture. Qualcomm is reportedly using TSMC’s 3nm N3P process to manufacture the Snapdragon 8 Elite 2, which is also rumored to be the same technology used for Apple’s forthcoming A19 chip.

    While it’s premature for Qualcomm to officially announce the launch of the 8 Elite 2, based on previous patterns, an announcement might be expected around October.

  • Free Google Pixel Update Boosts GPU Performance for All Phones


    Key Takeaways

    1. Android 16 will introduce a long-awaited desktop mode and may reduce boot times by up to 30%.
    2. Fingerprint authentication will be possible even when the Pixel’s screen is off.
    3. Significant GPU performance improvements have been observed in Pixel devices during benchmark tests.
    4. The performance boosts are primarily due to new GPU drivers from Android 15 QPR2, not Android 16.
    5. Performance increases will mainly benefit apps optimized for the Vulkan graphics API.


    The stable release of Android 16 is still a few months away. However, public beta versions have shown some exciting features that are on the horizon. For instance, it looks like Google’s long-awaited desktop mode may finally come with Android 16, along with a potential improvement that could cut boot times by as much as 30%. Additionally, the tech giant is also working on allowing fingerprint authentication even when a Pixel’s screen is off.

    Performance Improvements Spotted

    In addition, users on Reddit have found significant enhancements in the Geekbench 6 GPU benchmark tests for various models, from the Pixel 6a all the way to the newest Pixel 9 Pro, which is currently priced at $849 on Amazon. Notably, the modest Pixel 7a has been seen performing on par with Google’s flagship devices. At first, this was credited to the advancements made in Android 16.

    New Findings Regarding Performance Gains

    That being said, Android Authority and others have found that these performance improvements are not actually linked to Android 16. Instead, this boost in performance appears to come from new GPU drivers that were reportedly released with Android 15 QPR2, which Google made available earlier this month. As a result, any Pixel device powered by Tensor should see a performance increase. However, this will mainly apply to apps that are optimized for the Vulkan graphics API, like Geekbench 6’s GPU benchmark test.

    Source: Link


  • GeForce RTX 5070 Benchmark vs RTX 4090: Surprising Results


    Key Takeaways

    1. The Nvidia GeForce RTX 5070 samples showed lower performance than expected, with an average G3D Mark of 27,105, placing it just below the RTX 4090 Laptop GPU in benchmarks.

    2. Nvidia’s claim that the RTX 5070 would perform like the RTX 4090 for $549 is considered exaggerated, as the card struggles even to outperform previous models like the RTX 4070 and RTX 4070 Ti.

    3. The introduction of Multi Frame Generation (MFG) allows the RTX 5070 to reach near RTX 4090 performance in some scenarios, but overall comparisons show it lagging behind in most tests.

    4. The desktop RTX 4090 outperforms the RTX 5070 by an average of 41.4%, raising questions about the ethical implications of Nvidia’s marketing claims.

    5. The upcoming AMD Radeon RX 9070 and RX 9070 XT may challenge Nvidia’s position in the GPU market, offering competitive performance and pricing.


    Three Nvidia GeForce RTX 5070 samples have shown up on the PassMark benchmark platform, and their performance is lower than what was anticipated. There have been reports that the benchmark faced problems with Blackwell cards due to the discontinuation of 32-bit framework support; however, a patch is now available. It remains uncertain if the RTX 5070 samples benefited from this update or if they are still unintentionally limited. The samples achieved an average G3D Mark of 27,105 during the testing, which included runs in DirectX 9-12 and a GPU Compute benchmark. This score places the GeForce RTX 5070 just beneath the GeForce RTX 4090 in the overall ranking, but it’s essential to note that this refers to the mobile version of the chip (RTX 4090 Laptop GPU) and not the stronger desktop variant.

    Nvidia’s Promises

    When Nvidia introduced the GeForce RTX 5070, it claimed the desktop graphics card would deliver RTX 4090 performance for only $549. That was clearly an exaggeration on Jensen Huang’s part: to justify such a claim, the RTX 5070 would need to match the RTX 4090 in at least one synthetic or gaming benchmark. With Multi Frame Generation (MFG) helping out, and setting aside potential latency issues, the Blackwell card can indeed approach RTX 4090 performance levels. In our detailed review of the RTX 5070, we found that with MFG x4 the newer card achieved 87 FPS in Star Wars Outlaws (at 4K, with ultra settings and ray tracing). In comparison, the older Ada Lovelace card managed around 90 FPS in our tests with regular frame generation and ray reconstruction. So Nvidia’s statement holds some weight, albeit a flimsy amount.

    Performance Comparison

    The benchmark results for the GeForce RTX 5070 on PassMark highlight the card’s struggle to outperform earlier desktop models like the RTX 4070 and RTX 4070 Ti, although in our own tests the Blackwell card generally performed slightly better than the GeForce RTX 4070 Ti. The desktop version of the RTX 4090 remains significantly ahead, with an average performance advantage of +41.4% over the RTX 5070. GPU enthusiasts may have sensed that Team Green’s assertion was highly subjective, but one can question the ethics of telling the average consumer that the RTX 5070 delivers RTX 4090 performance. Ultimately, it may not matter much for the RTX 5070: the launch of the AMD Radeon RX 9070 and RX 9070 XT, with their attractive prices, solid performance, and fewer availability concerns, could give Team Red a head start in the GPU market.
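    The quoted gap is easy to sanity-check. Using the RTX 5070’s 27,105 average G3D Mark and a desktop RTX 4090 average of about 38,430 (the figure PassMark listed around the same time, quoted elsewhere in this roundup), the percent difference lands close to the article’s +41.4%; averages shift slightly as more samples accrue.

    ```python
    rtx_5070_g3d = 27_105
    rtx_4090_g3d = 38_430  # approximate desktop RTX 4090 average G3D Mark

    # Percent advantage of the 4090 over the 5070
    gap_pct = (rtx_4090_g3d / rtx_5070_g3d - 1) * 100
    print(f"+{gap_pct:.1f}%")  # ≈ +41.8%, in line with the quoted +41.4%
    ```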

    Source: Link


  • GeForce RTX 5090D Outperforms RTX 4090 in Benchmark Tests


    Key Takeaways

    1. The Nvidia GeForce RTX 5090D achieved an impressive average score of 45,948 on PassMark, outperforming both the RTX 5090 and RTX 4090 in benchmarks.
    2. The RTX 5090D has modified specifications for the Chinese market, featuring lower AI performance (2,375 AI TOPS) compared to the international RTX 5090 (3,352 AI TOPS).
    3. Despite potential performance issues due to software compatibility, the RTX 5090D showed a +19.6% improvement over the RTX 4090 in overall benchmarks and a +102.7% increase in DirectX 12 tests.
    4. Comparisons are primarily made with the RTX 4090 instead of the RTX 4090D due to limited sample sizes for the latter, leading to less reliable performance data.
    5. Evidence suggests that the RTX 5090D sample may have been overclocked, achieving higher scores in additional benchmarks compared to the standard RTX 5090.


    Team Green has been under a lot of criticism lately for various reasons, but a recently tested Nvidia GeForce RTX 5090D (known as the “5090 D”) has shown impressive results on PassMark that should bring a smirk to Jensen Huang’s face. This graphics card, which is exclusively available in China, achieved an average score of 45,948 and reached 304 FPS in the site’s DirectX 12 test. For comparison, the next two best performers, the RTX 5090 and RTX 4090, scored 39,209 with 212 FPS and 38,430 with 150 FPS, respectively.

    Specifications Overview

    The GeForce RTX 5090D is Nvidia’s take on the standard RTX 5090, with minor modifications that allow it to be sold in China under US export restrictions. The core specifications, such as the number of CUDA cores and the memory setup, remain unchanged, but the RTX 5090D is limited to 2,375 AI TOPS, while the international model boasts 3,352 AI TOPS (TOPS refers to trillions of operations per second). Even though the RTX 5090D is less capable in AI than the RTX 5090, it has still managed to perform exceptionally well in the G3D Mark benchmark.

    Performance Insights

    Additionally, it’s been pointed out that PassMark may have unintentionally reduced the performance of Blackwell cards due to compatibility issues with a software layer (specifically, Nvidia’s removal of 32-bit framework support). While the site works on a patch to address this, it’s likely that RTX 50-series cards will keep facing challenges. However, the regular RTX 5090 has successfully surpassed the RTX 4090 in the rankings, and the RTX 5090D has exceeded expectations as well. The Blackwell card registered a +19.6% improvement over the RTX 4090 in the overall benchmark and an incredible +102.7% increase over the Ada Lovelace card in the DirectX 12 test.

    Deliberate Comparisons

    The choice to compare with the RTX 4090, rather than the RTX 4090D, is deliberate. Only nine samples of the China-only Ada Lovelace card have been evaluated on the site, compared with more than 14,500 for the standard RTX 4090, making its results (Overall – 28,686, DirectX 12 – 128 FPS) far less reliable. The GeForce RTX 5090D sample was also likely overclocked: a video shared by Tony Yu of Asus shows a modified RTX 5090D achieving 43,372 points in 3DMark Port Royal. In our analysis of the Nvidia GeForce RTX 5090 FE, the 3DMark Port Royal score was 37,335 points, giving the overclocked RTX 5090D a +16.2% advantage.
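    The percentage figures quoted above can be reproduced directly from the raw scores, a quick arithmetic sketch:

    ```python
    # Scores quoted in the article (PassMark G3D Mark, DirectX 12 FPS, 3DMark Port Royal)
    overall_gain = (45_948 / 38_430 - 1) * 100  # RTX 5090D vs RTX 4090, overall
    dx12_gain    = (304 / 150 - 1) * 100        # DirectX 12 test
    oc_gain      = (43_372 / 37_335 - 1) * 100  # overclocked 5090D vs 5090 FE, Port Royal

    print(f"+{overall_gain:.1f}% overall, +{dx12_gain:.1f}% DX12, +{oc_gain:.1f}% Port Royal")
    # → +19.6% overall, +102.7% DX12, +16.2% Port Royal
    ```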

    Source: Link


  • RTX 5070 Ti vs RTX 4070 Ti Super: Leaked 3DMark Scores Reveal Winner


    Key Takeaways

    1. The RTX 5070 Ti shows a performance improvement of about 16.6% over the RTX 4070 Ti Super, but is 13.2% slower than the RTX 5080.
    2. Benchmark results indicate that the RTX 5070 Ti can outperform the RTX 4070 Ti Super by up to 25% at 4K, but only 18% at 1440p.
    3. There are concerns about potential stock shortages and price increases that could affect the RTX 5070 Ti’s availability and value.
    4. Different sources report varying performance gains, with some suggesting a smaller boost of 5-10% compared to the RTX 4070 Ti Super.
    5. The RTX 5070 Ti will only be worth recommending if it is available at its $749 MSRP; otherwise, it may be viewed as another disappointing launch from Nvidia.


    From the details about the specs and pricing to the potential stock issues, there’s not much we don’t know regarding the RTX 5070 and RTX 5070 Ti. Yet, the ultimate performance of these graphics cards is still up for discussion. With the official launch of the RTX 5070 Ti just around the corner, any performance leaks that are now emerging are likely to be quite reliable.

    Performance Insights

    Thanks to VideoCardz, we have some insight into the performance of the RTX 5070 Ti, as a set of 3DMark benchmark results for the GPU have recently leaked. Initially, the performance gains presented by the RTX 5070 Ti look pretty good. Nevertheless, things take a downturn when we factor in the anticipated stock issues along with the expected price increases.

    According to VideoCardz, the RTX 5070 Ti’s average performance is about 16.6% better than the RTX 4070 Ti Super’s, while being 13.2% slower than the RTX 5080’s. Naturally, individual 3DMark tests at various resolutions deviate from that average. For example, in the 3DMark Fire Strike Ultra (4K) test, the RTX 5070 Ti seemingly outperforms the RTX 4070 Ti Super by as much as 25%. However, at 1440p in the same test, this advantage shrinks to just 18%.
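    Those two averages also let us back out an implied gap between the RTX 5080 and the RTX 4070 Ti Super. This is a rough derivation from the quoted percentages, not a leaked figure:

    ```python
    vs_4070_ti_super = 1.166   # RTX 5070 Ti ≈ 16.6% faster than RTX 4070 Ti Super
    vs_5080          = 0.868   # RTX 5070 Ti ≈ 13.2% slower than RTX 5080

    # Implied RTX 5080 advantage over the RTX 4070 Ti Super
    implied_5080_gap = (vs_4070_ti_super / vs_5080 - 1) * 100
    print(f"RTX 5080 vs RTX 4070 Ti Super: +{implied_5080_gap:.0f}%")  # ≈ +34%
    ```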

    Comparing Leaks

    The numbers that VideoCardz has revealed are indeed better than those shared by Moore’s Law Is Dead, which hinted at only a 5-10% rasterization boost compared to the RTX 4070 Ti Super. However, while synthetic results are useful for making direct comparisons with other GPUs, they don’t always accurately represent real gaming performance. Therefore, it’s possible that the RTX 5070 Ti could be either faster or slower than what the 3DMark benchmarks imply.

    That being said, a 16% advantage over the RTX 4070 Ti Super may not be as impressive as it sounds once we consider the potential stock shortages, the consequent pricing issues, and the competition posed by the RX 9070 XT. Leaks suggest that the initial supply of the RTX 5070 Ti will be “very limited,” and buyers could face some price surprises.

    Conclusion

    If the RTX 5070 Ti indeed performs as stated, it will only be worth recommending if it is available at the $749 MSRP. Otherwise, it could just turn out to be another disappointing GPU launch from Nvidia.

    Source: Link

  • Intel Arc B580 GPU Beats RX 7600 XT and Arc A580 in Benchmarks


    After a long wait filled with rumors, Intel finally launched its next-generation Arc Battlemage GPUs, the Arc B580 and Arc B570, last week. The more powerful Arc B580 goes on sale December 13 with a price tag of only $249, while the Arc B570 is expected to be released sometime in 2025.

    Affordable Choice for Gamers

    At just $249, the Arc B580 is likely to attract budget-conscious gamers, as Intel asserts that this GPU outperforms the previous Arc A750 and the RTX 4060. In addition, the Arc B580 comes with 12 GB of VRAM, which is becoming increasingly important since 8 GB is often no longer sufficient for demanding AAA games.

    Performance Claims Under Scrutiny

    Even though Intel’s performance claims for the Arc B580 seem promising, we cannot fully trust what the company says. Luckily, as we approach the release date for the Arc B580, we can expect to see benchmark results that will showcase the GPU’s performance. A recent leak has given us a preliminary insight into the synthetic performance of the Arc B580.

    The Intel Arc B580 achieved OpenCL and Vulkan scores of 98,343 and 103,445, respectively. According to Geekbench’s database, these scores indicate that the Intel Arc B580 is approximately 9% faster than the Arc A580 in OpenCL tests and a significant 30% faster in Vulkan tests.

    Competing Well Against Rivals

    The benchmarks for the Arc B580 also show competitive results against the RX 7600 and RX 7600 XT (which can be found on Amazon). In the OpenCL benchmark, the Arc B580 is 20% faster than the RX 7600 and 17% quicker than the RX 7600 XT. In Vulkan benchmarks, the Arc Battlemage GPU leads by about 18.5% against the RX 7600 XT and 14% against the RX 7600.

    Meanwhile, the Arc B580 sits slightly behind the RTX 4060 in OpenCL performance, by about 3%, though it appears to be 6.5% faster than the RTX 4060 in the Vulkan benchmark.
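    For illustration, the quoted gaps can be inverted to estimate the rival scores that Geekbench’s database implies. These are approximate back-of-envelope values derived from the article’s percentages, not measured results:

    ```python
    b580_opencl, b580_vulkan = 98_343, 103_445  # Arc B580 Geekbench scores

    # Implied RTX 4060 scores from the quoted gaps
    rtx4060_opencl = b580_opencl / (1 - 0.03)   # B580 ~3% behind  → 4060 ≈ 101,400
    rtx4060_vulkan = b580_vulkan / (1 + 0.065)  # B580 ~6.5% ahead → 4060 ≈ 97,100

    print(round(rtx4060_opencl), round(rtx4060_vulkan))
    ```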

    All things considered, the Intel Arc B580 looks to be a significant improvement over the Arc A580. But remember, synthetic benchmarks don’t provide the entire picture. We will need to wait for the retail samples of the Arc B580 to be reviewed, with those reviews expected to arrive this week.

    Source: Link


  • MediaTek Dimensity 9400 Beats Apple A18 Pro in GPU Power


    In a new performance comparison by Golden Reviewer, MediaTek’s latest flagship chipset, the Dimensity 9400, has shown remarkable GPU abilities, possibly creating a new benchmark for graphics performance in high-end Android devices.

    MediaTek Dimensity 9400 vs Apple A18 Pro

    Even though MediaTek has typically lagged behind Apple’s A-series chips and Qualcomm’s Snapdragon family, the Dimensity 9400 is emerging as a strong contender in the GPU sector. The chipset features a 12-core Arm Immortalis-G925 GPU, which displayed both strength and efficiency in various GPU evaluations, outpacing Apple’s A18 Pro GPU in several key areas.

    Detailed breakdown of Golden Reviewer’s tests

    The evaluations focused on the graphics performance of the Dimensity 9400 in comparison to Apple’s A18 Pro, specifically looking at 3DMark benchmarks. In the Solar Bay ray-tracing test, the Dimensity 9400 achieved an impressive score of 11,817 points while consuming 11.4 watts of power. On the other hand, Apple’s A18 Pro scored lower at 8,587, using slightly less power at 11.1 watts. This gives the Dimensity 9400 about a 30% efficiency advantage over the A18 Pro—a notable achievement in a market where Apple typically excels in GPU performance.
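    Reading efficiency as benchmark points per watt, the quoted Solar Bay numbers work out as follows. This is a simple sketch; it yields roughly 34%, in the same ballpark as the article’s ~30% figure, which presumably reflects different rounding or averaging:

    ```python
    d9400_eff = 11_817 / 11.4  # Dimensity 9400: Solar Bay points per watt
    a18_eff   =  8_587 / 11.1  # Apple A18 Pro:  Solar Bay points per watt

    # Efficiency advantage of the Dimensity 9400
    advantage = (d9400_eff / a18_eff - 1) * 100
    print(f"Dimensity 9400 efficiency advantage: +{advantage:.0f}%")  # ≈ +34%
    ```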

    Additional Information on the Dimensity 9400 and A18 Pro

    Other benchmarking tests returned similar results. In 3DMark’s Steel Nomad Light and WildLife Extreme tests, the Dimensity 9400 repeatedly outperformed the A18 Pro while using less energy, showcasing roughly a 40% efficiency edge. These findings highlight the Dimensity 9400’s capability to exceed the A18 Pro in demanding graphics tasks, making it an excellent pick for gaming and other GPU-intensive applications on Android devices.

    The advancements in the Dimensity 9400’s GPU mark a significant upgrade from its earlier model, the Dimensity 9300. Its 12-core Arm Immortalis-G925 GPU brings a 40% improvement in ray-tracing performance and a 41% rise in peak performance. Furthermore, it features MediaTek’s HyperEngine technology, which is intended to enhance super-resolution and visual quality, especially during gaming. This blend of upgrades positions the Dimensity 9400 as a top choice in the premium Android segment, appealing to those who value high-quality graphics and efficient performance.

    Apple’s A18 Pro, although trailing in these specific GPU assessments, is still a powerful chipset. Its six-core GPU presents almost a 40% enhancement over the prior iPhone 15 series, improving its capability for gaming and computational photography. Even with the Dimensity 9400 leading in GPU tests, the A18 Pro continues to hold its ground, particularly regarding overall performance, including CPU efficiency and multitasking skills.

    For an in-depth look at the comparison between the Dimensity 9400 and Apple A18 Pro, check out our comprehensive analysis of both chipsets.


  • Dimensity 9400 GPU Tests Show MediaTek’s Efficiency in Vivo X200 Pro


    While several evaluations have indicated that the MediaTek Dimensity 9400 is slightly trailing behind its competitors like the Apple A18 Pro and Snapdragon 8 Elite in CPU performance and efficiency, things are quite different when it comes to the GPU. A new analysis has assessed the graphics capabilities of the MediaTek chipset, showing it might lead the pack in this area.

    Performance Comparison

    According to testing done by Golden Reviewer, the Dimensity 9400’s Immortalis-G925 GPU seems to be both stronger and more efficient compared to Apple’s A18 Pro. In the 3DMark Solar Bay ray-tracing test, the Dimensity 9400 achieved an impressive score of 11,817 while consuming 11.4 watts. In contrast, the A18 Pro scored 8,587 with a power usage of 11.1 watts. These figures indicate a notable efficiency edge for the Dimensity 9400, being roughly 30% more efficient than the chipset in the iPhone 16 Pro Max during the test.

    Additional Benchmark Results

    The same trend continues across other 3DMark assessments. In the Steel Nomad Light test, the Dimensity 9400 not only surpassed the A18 Pro in scores but also used less power, showcasing about a 40% efficiency superiority over Apple’s chipset. This pattern holds true in the WildLife Extreme test as well, where the Dimensity 9400 outshines the A18 Pro in performance while also drawing less power.

    Overall Impression

    The results are clear: MediaTek’s Dimensity 9400 stands out in GPU performance, outclassing the A18 Pro in most metrics. It remains to be seen how it will compare with the Snapdragon 8 Elite in similar tests, but we will keep you informed of any updates.


  • RTX 5080 Laptop vs Desktop: Core Count vs VRAM Comparison


    We shared some news back in May through Moore’s Law Is Dead, suggesting that Nvidia might equip the RTX 5080 and RTX 5070 laptop versions with more memory than the RTX 40 series. Rumors indicated that the RTX 5080 laptop could feature 16 GB of VRAM, while the RTX 5070 laptop was expected to have 12 GB.

    Updated VRAM Information

    MLID has reinforced his previous leak about the RTX 5080 laptop’s VRAM and has disclosed potential CUDA core counts for the anticipated GB203 GPU that is set to drive the device. The RTX 5080 laptop is said to use a GB203 GPU, which also powers the RTX 5090 laptop and the RTX 5080 desktop variants. It’s likely that the laptop’s GB203 will be a less powerful version, featuring fewer CUDA cores compared to the desktop variant. MLID suggests a possible configuration for the RTX 5080 laptop could include 8,192 CUDA cores along with 16 GB of GDDR7 memory. This marks a notable decrease in the number of cores available on the GB203.

    Comparison with Desktop Variants

    Earlier leaks have indicated that the desktop RTX 5080, using the GB203-400-A1, consists of a total of 10,752 CUDA cores and has 16 GB of memory operating at 28 Gbps over a 256-bit bus. This means the RTX 5080 laptop is experiencing a significant drop of 24% in the CUDA core count.
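    From those figures, the laptop part’s core deficit and the memory bandwidth implied by a 28 Gbps, 256-bit configuration can be worked out directly, assuming the leaked specs hold:

    ```python
    laptop_cores, desktop_cores = 8_192, 10_752  # leaked GB203 CUDA core counts

    # Relative core cut on the laptop variant
    core_cut = (1 - laptop_cores / desktop_cores) * 100  # ≈ 23.8%

    # Memory bandwidth: Gbps per pin × bus width in bits / 8 bits per byte
    bandwidth_gbs = 28 * 256 / 8  # = 896 GB/s

    print(f"core cut: {core_cut:.1f}%, bandwidth: {bandwidth_gbs:.0f} GB/s")
    ```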

    Despite this substantial reduction in specs, MLID has put forward some ambitious predictions regarding the performance of the RTX 5080 laptop. He estimates that, with 16 GB of 28 Gbps VRAM and the corresponding bandwidth, the RTX 5080 laptop may be 45 to 65% faster than the RTX 4080 laptop. This advantage could enable the mobile GPU to compete with the desktop RTX 4070 Ti Super and even the RTX 4080 (available on Amazon).

    Performance Predictions and Future Outlook

    However, it’s important to note that these performance estimates are merely the result of the leaker’s “napkin math.” They should therefore be viewed as educated guesses rather than confirmed leaks. If they do hold up, the RTX 5080 laptop could make for a powerful 1440p/4K gaming machine.

    Source: Moore’s Law Is Dead. Teaser image: Gigabyte, Sean Sinclair on Unsplash (edited).


  • Qualcomm Launches Adreno X1 GPU for Next-Gen Arm Laptops


    Qualcomm has provided more insight into the Adreno X1 GPU, the graphics processor integrated into their forthcoming Snapdragon X Elite/Plus chips. These processors will power Windows on Arm laptops set to launch next week.

    Future Generations and Branding

    The Adreno X1 marks the first custom-designed GPU for Windows on Arm systems from Qualcomm. The “X1” label identifies the initial generation, with future versions likely adopting similar naming conventions, such as Adreno X2 or X3.

    Technical Specifications

    The top-tier configuration of the Adreno X1, named “X1-85,” features up to 6 shader cores and 1,536 FP32 ALUs (arithmetic logic units). This setup can process up to 96 texels per clock and delivers peak compute performance of up to 4.6 TFLOPS (trillion floating-point operations per second), along with a pixel throughput of around 72 gigapixels per second.
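    Those peak-compute numbers are internally consistent: assuming the usual 2 FLOPs per ALU per clock (one fused multiply-add), 4.6 TFLOPS across 1,536 FP32 ALUs implies a peak clock near 1.5 GHz. This is a back-of-envelope check, not a figure Qualcomm has stated:

    ```python
    fp32_alus  = 1536
    peak_flops = 4.6e12            # 4.6 TFLOPS
    flops_per_alu_per_clock = 2    # one FMA counts as 2 FLOPs (assumption)

    # Peak clock implied by the quoted throughput
    clock_ghz = peak_flops / (fp32_alus * flops_per_alu_per_clock) / 1e9
    print(f"implied peak clock: {clock_ghz:.2f} GHz")  # ≈ 1.50 GHz
    ```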

    The Adreno X1 supports popular graphics APIs, including DirectX 12.1 (Shader Model 6.7), DirectX 11, Vulkan 1.3, and OpenCL 3.0. This wide compatibility ensures that developers can utilize the Adreno X1’s features across various applications.

    Performance Comparisons and User Control

    Qualcomm compares the Adreno X1-85 to Intel’s Core Ultra 7 155H mobile processor, which has 8 Xe cores. They assert that the Adreno X1-85 can equal or even exceed Intel’s GPU performance in several games at 1080p resolution. However, detailed information about game settings and testing platforms has not been provided.

    Moreover, Qualcomm is introducing the “Adreno Control Panel,” an application dedicated to managing game optimization and driver updates. This focus on user control and continuous enhancement indicates Qualcomm’s dedication to improving the Adreno X1’s performance over time, with updates promised at least once a month.