Category: Artificial intelligence

  • Most Galaxy AI Features May Stay Free Forever, But There’s a Catch

    Most of the updates in the Galaxy S25 lineup are focused on Galaxy AI. There were rumors that Samsung might start charging for Galaxy AI features in 2026. However, tipster PandaFlash X suggests that key features like Photo Assist, AI Instant Slow-mo, and Writing Tools will likely remain free.

    Free Features for Now

    Initially, these features were described as “free until 2025,” but the new information mentions no time limit, implying that they could remain free indefinitely. That would make sense given where the improvements lie: Galaxy AI is the centerpiece of the S25 generation, which otherwise brings only minor hardware changes.

    Pricing and Subscription Details

    We are uncertain about how much Samsung will charge for Galaxy AI or how its plans will be structured. However, we do know that the Gemini Advanced subscription, which is complimentary for the first six months for Galaxy S25 series users, costs $20 per month. The tipster also noted that features within Gemini Advanced will require payment.

    There has been no clarity on which specific features will rely on the Gemini Advanced subscription.

    What’s Next After Six Months?

    What will you have to pay for, or lose, after the six-month free trial? Samsung has been just as vague about this as it was about the Galaxy S25 Edge. The company highlighted the AI functionality and user-experience improvements the S25 series brings, but did not detail which functions will require the Gemini Advanced subscription.

    The VP of ‘Google Gemini Experiences’ mentioned at the Galaxy Unpacked event, “Gemini Advanced comes with our most capable AI models and priority access to the newest features. It will also include ‘Screen Sharing’ and ‘live video viewing’ in the future.”

    Most of the AI features, like summarization, leverage the latest model available. Even if you don’t subscribe, these features will still work but with somewhat lower quality results.

    According to the Google Store, Gemini Advanced is particularly good at solving complex issues due to its improved reasoning skills. Some of the more challenging queries may not perform as effectively after the first six months, or they might cease to work entirely. It’s unfortunate that Samsung left so many questions open for speculation on the internet instead of clarifying them at the launch.


  • Google Launches Gemini 2.0 Flash: Faster and More Efficient AI

    Google has rolled out Gemini 2.0 Flash as the new standard model for its Gemini application, offering enhancements in both speed and efficiency. This update, which follows its launch in December, boosts performance in crucial areas, making activities like brainstorming, studying, and writing much quicker and smoother. Thanks to faster response times and improved processing, users can effortlessly generate ideas, dive into concepts, and produce written content.

    Features for Advanced Subscribers

    For those who have subscribed to Gemini Advanced, the 1 million token context window is still available, enabling document uploads of up to 1,500 pages. Subscribers also keep access to exclusive features such as Deep Research, which compiles in-depth reports from multi-step web research, and Gems, which lets users build customized versions of Gemini for specific tasks.
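
    As a quick sanity check on those numbers, the 1 million token window works out to roughly 667 tokens, or about 500 words, per page. A back-of-envelope sketch (the 0.75 words-per-token ratio is a common rule of thumb, not an official figure):

```python
# Map Gemini Advanced's advertised limits onto per-page figures.
CONTEXT_TOKENS = 1_000_000  # 1M-token context window
MAX_PAGES = 1_500           # advertised document upload limit

tokens_per_page = CONTEXT_TOKENS / MAX_PAGES   # ~667 tokens per page
words_per_page = tokens_per_page * 0.75        # rule of thumb: ~0.75 words per token

print(f"~{tokens_per_page:.0f} tokens/page, roughly {words_per_page:.0f} words/page")
```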

    Enhanced Image Generation

    Moreover, the image generation now employs Imagen 3, which delivers more intricate textures and precise interpretations of prompts. The updated model excels at handling fine details, realistic elements, and stylistic choices, simplifying the creation of photorealistic images and imaginative artwork.

    Gemini 2.0 Flash is now accessible on both web and mobile platforms. To aid in the transition, Google will still offer support for the earlier models, Gemini 1.5 Flash and 1.5 Pro, for a limited duration, allowing users to finish any ongoing chats.

    Microsoft Updates

    In other news, Microsoft is launching distilled versions of DeepSeek R1 to Windows 11 Copilot+ PCs, enabling AI functionalities to operate efficiently on-device without requiring an internet connection. This change enhances privacy and performance by keeping data processing local. The rollout will commence with Snapdragon X-powered devices from manufacturers such as Asus, Lenovo, Dell, HP, and Acer, and will soon include Intel Core Ultra 200V and AMD Ryzen AI 300 series.



  • Host Your Own AI Image Generator with Invoke AI & Stable Diffusion

    There are plenty of reasons you might consider setting up your own AI image generator. You may want to skip watermarks and ads, create as many images as you like without paying for a subscription, or explore image generation in ways a hosted service’s guidelines might not allow. By hosting your own instance and using openly released models such as Stability AI’s Stable Diffusion, you maintain complete control over what your AI produces.

    Getting Started

    To kick things off, download the Invoke AI community edition from the provided link. For Windows users, most of the installation is now automatic, so all necessary dependencies should install smoothly. However, you might encounter some challenges on Linux or macOS. For our tests, we used a Windows 11 virtual machine with 8 cores of a Ryzen 9 5950X, an RTX 4070, and 24GB of RAM on a 1TB NVMe SSD. AMD GPUs are supported, but only on Linux.

    Once the installation is complete, open Invoke AI to generate the configuration files, and then close it. This step is essential as you’ll need to modify some system settings to enable “Low-VRAM mode.”

    Configuring Low-VRAM Mode

    Invoke AI doesn’t clearly define what “low VRAM” means, but the 12GB of VRAM on the RTX 4070 likely won’t be enough for a 24GB model. To enable the mode, edit the invokeai.yaml file in the installation directory with a text editor and add the following line:

    ```
    enable_partial_loading: true
    ```

    After this change, Windows users with Nvidia GPUs need to adjust the CUDA – Sysmem Fallback Policy to “Prefer No Sysmem Fallback” within the Nvidia control panel global settings. You can tweak the cache amount you want for VRAM, but most users will find that simply enabling “Low-VRAM mode” is sufficient to get things running.
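
    If you would rather script the change than edit the file by hand, a small helper can append the flag safely. This is a sketch, not part of Invoke AI itself, and the path to invokeai.yaml depends on where you installed it:

```python
from pathlib import Path

def enable_low_vram(config_path: str) -> str:
    """Append enable_partial_loading to invokeai.yaml if it isn't already set."""
    p = Path(config_path)
    text = p.read_text() if p.exists() else ""
    if "enable_partial_loading" not in text:
        # Keep the existing settings intact and add the flag on its own line.
        if text and not text.endswith("\n"):
            text += "\n"
        text += "enable_partial_loading: true\n"
        p.write_text(text)
    return text

# Example (adjust the path to your installation):
# enable_low_vram(r"C:\InvokeAI\invokeai.yaml")
```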

    Downloading Models

    Some models, like Dreamshaper and CyberRealistic, can be downloaded right away. However, to access Stable Diffusion, you’ll need to create a Hugging Face account and generate a token for Invoke AI to pull the model. There are also options to add models via URL, local path, or folder scanning. To create the token, click on your avatar in the top right corner and select “Access Tokens.” You can name the token whatever you prefer, but make sure to grant access to the following:

    Once you have the token, copy and paste it into the Hugging Face section of the models tab. You might have to confirm access on the website. There’s no need to sign up for updates, and Invoke AI will notify you if you need to allow access.

    Be aware that some models can take up a significant amount of storage, with Stable Diffusion 3.5 requiring around 19 GB.

    Accessing the Interface

    If everything is set up correctly, you should be ready to start. Access the interface through a web browser on the host machine by navigating to http://127.0.0.1:9090. You can also make this accessible to other devices on your local network.
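
    A quick way to confirm the server is up before opening the browser is a simple port check. The host and port below are Invoke AI’s defaults from the URL above; use the machine’s LAN address when checking from another device:

```python
import socket

def invoke_reachable(host: str = "127.0.0.1", port: int = 9090,
                     timeout: float = 1.0) -> bool:
    """Return True if something is accepting connections on the Invoke AI web port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```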

    In the “canvas” tab, you can enter a text prompt to generate an image. Just below that, you can adjust the resolution for the image; keep in mind that higher resolutions will take longer to process. Alternatively, you can create at a lower resolution and use an upscale tool later. Below that, you can choose which model to use. Among the four models tested—Juggernaut XL, Dreamshaper 8, CyberRealistic v4.8, and Stable Diffusion 3.5 (Large)—Stable Diffusion created the most photorealistic images, yet struggled with text prompts, while the others offered visuals resembling game cut scenes.

    The choice of model really comes down to which one delivers the best results for your needs. While Stable Diffusion was the slowest, taking about 30 to 50 seconds per image, its results were arguably the most realistic and satisfying among the four.

    Exploring More Features

    There’s still a lot to explore with Invoke AI. This tool lets you modify parts of an image, create iterations, refine visuals, and build workflows. You don’t need high-end hardware to run it; the Windows version can operate on any 10xx series Nvidia GPU or newer, though expect slower image generation. Despite mixed opinions on AI model training and the associated energy use, running AI on your own hardware is an excellent means to create royalty-free images for various applications.



  • Irish DPA Targets DeepSeek Over Data Practices Concerns

    Founded in May 2023, DeepSeek develops large language models that it claims are far more efficient than those from OpenAI or Meta. Now the company is facing scrutiny from European regulators. While the European Commission hasn’t gotten involved yet, the Chinese AI firm must explain to Irish officials how its software operates. These authorities are concerned about how DeepSeek handles the personal information of users in Ireland.

    Concerns from Irish Authorities

    According to a report from the Irish Data Protection Commission (DPC), cited by Reuters,
    “The Data Protection Commission (DPC) has written to DeepSeek requesting information on the data processing conducted in relation to data subjects in Ireland.”

    Currently, there’s no fixed timeline for DeepSeek to respond, and no public details about potential penalties the company could face if they fail to reply or if their response indicates non-compliance with European and Irish data laws.

    Launch of DeepSeek-R1 Chatbot

    DeepSeek launched its first chatbot app, based on the DeepSeek-R1 model, earlier this month. The app is free on both iOS and Android, and it has quickly become a rival to ChatGPT: it was the most downloaded free app on the US App Store shortly after its release. One immediate effect of the launch was a significant drop in Nvidia shares, which fell by 18%.

    Open-Source Data Accessibility

    All data connected to DeepSeek-R1, including its generative AI algorithms, is open-source, allowing anyone to develop tailored solutions at low cost. Nevertheless, DeepSeek will need to make its handling of personal data transparent if it wants smoother access to the European market.


  • Why DeepSeek Stands Out Among AI Models

    The AI sector has been largely led by American firms such as OpenAI, Google, and Meta for quite some time. Nevertheless, the rise of DeepSeek, a new AI startup from China, has changed the international AI scene.

    DeepSeek-R1 Model Breakthrough

    DeepSeek’s newest product, the DeepSeek-R1, is gaining attention due to its impressive performance, open-source framework, and affordable development costs. As artificial intelligence continues to play a vital role in tech advancements, it’s important to recognize how DeepSeek sets itself apart from other prominent models.

    Open-Source Advantages

    Unlike U.S. AI leaders like OpenAI, DeepSeek embraces an open-source strategy. By offering its DeepSeek-R1 model under an MIT license, it allows researchers, developers, and businesses to freely access, alter, and use the technology. In contrast, OpenAI has shifted away from its original commitment to open-source, choosing to keep its newer GPT models closed off. This open-source model promotes higher transparency, encourages cooperative enhancements, and reduces the obstacles to AI integration, making DeepSeek an appealing choice for companies and independent developers who wish to avoid being confined to proprietary systems.

  • Free Up to 7GB iPhone Storage by Disabling Apple Intelligence

    With the launch of iOS 18.3, iPadOS 18.3, and macOS 15.3, Apple has introduced its newest AI-driven feature, Apple Intelligence, which is now automatically active on devices that are compatible. This collection of AI capabilities includes features like message summaries, image creation, and improved Siri interactions, but it also requires a substantial amount of storage. According to Apple’s official documentation, Apple Intelligence can take up to 7GB of space on iPhones, iPads, and Macs.

    Managing Storage Needs

    For users facing storage limitations or those who may not find the AI features beneficial, there is a way to completely disable Apple Intelligence. Once you turn it off, the local AI models will be deleted from the device, freeing up the space they occupied. This is particularly helpful for iPhone users who have limited storage, especially for those who don’t actively utilize AI-generated features or content.

    Steps to Disable Apple Intelligence

    To turn off Apple Intelligence, go to Settings on an iPhone or iPad, or System Settings on a Mac, and find the “Apple Intelligence & Siri” section. Toggling off Apple Intelligence brings up a confirmation prompt, after which the system erases the AI-related resources. Some AI capabilities, like Writing Tools, Genmoji, and Image Playground, will no longer be accessible. However, certain tools, like the Clean Up function in the Photos app, might still be available.

    Apple Intelligence is unavailable in mainland China at this time, and the company has not shared a timeline for a launch there. Additionally, only devices with Apple’s newest hardware, including the iPhone 15 Pro, iPhone 16 series, and M-series iPads and Macs, support Apple Intelligence. Users upgrading to iOS 18.2 must also opt in to access the AI features.

    Customizing Your Experience

    For those who are hesitant about fully disabling Apple Intelligence, Apple allows users to selectively turn off specific features through app settings. Options like message summaries in notifications and writing assistance tools can be disabled without shutting down the entire AI suite. By managing these settings, users can customize their experience while still maintaining control over the storage and functionality of their devices.

  • Windows 11 Copilot+ PCs to Feature Distilled DeepSeek R1 Models

    Microsoft is set to bring distilled versions of the DeepSeek R1 AI models to Windows 11 Copilot+ PCs. This announcement follows the recent launch of DeepSeek R1 on Azure AI Foundry, Microsoft’s platform aimed at helping developers innovate with AI tools and machine learning models in a secure and responsible way. Initially, the integration will be available on laptops, tablets, and PCs powered by Snapdragon X, with support for Intel Lunar Lake and AMD Ryzen AI 300 processors to be added later.

    New Optimized Versions

    In a blog post on its Windows Developer Blog, Microsoft revealed that the Neural Processing Unit (NPU) optimized versions of DeepSeek-R1 would be introduced to Copilot+ PCs. These versions will be included in lightweight laptops from leading brands like Asus, Lenovo, Dell, HP, and Acer, all of which offer devices equipped with Snapdragon X Plus and X Elite processors featuring 45 TOPS NPUs. It’s worth noting that all Copilot+ PCs come with NPUs that have 40+ TOPS.

    Developer Access and Future Releases

    The initial release, named DeepSeek-R1-Distill-Qwen-1.5B, will be accessible to developers through the AI Toolkit, with the 7B and 14B versions expected to follow soon after. Microsoft claims that these optimized models will allow developers to create and deploy AI-driven applications that run effectively on devices, leveraging the powerful NPUs found in Copilot+ PCs.

    Timeline and AMD Updates

    Although Microsoft has not provided a specific timeline for the rollout, it appears that Intel Core Ultra 200V devices will receive these enhancements sooner than their AMD Ryzen AI equivalents. Meanwhile, AMD has already released guides related to DeepSeek R1 Distill for its newest Ryzen AI 300 series APUs, along with information for Ryzen 8040/7040 chips, even though the latter only have 16/10 TOPS NPUs, respectively.


  • DeepSeek App Pulled from Italian Stores Due to Privacy Issues

    DeepSeek, a startup from China focusing on artificial intelligence, has recently faced significant regulatory challenges in Italy. Its app has unexpectedly disappeared from both Apple’s App Store and Google Play. This action comes after Italy’s data protection authority, Garante, initiated a formal investigation into how DeepSeek manages and gathers user data. Concerns surrounding data privacy and safety have put the AI firm under a microscope, mirroring similar worries expressed in the United States and Australia.

    Italian Authority Demands Clarity on Data Usage

    Italy’s privacy regulator has granted DeepSeek and its associated companies a 20-day period to reveal essential information related to their data handling practices. Authorities are requesting specifics about the types of personal data collected, how it is sourced, its intended use, and whether the information is stored on servers located in China. Additionally, they have inquired about how DeepSeek communicates data processing practices to both registered and unregistered users, especially when information is sourced via web scraping techniques.

    Privacy Issues Amid Rapid Success

    Concerns regarding privacy have escalated following DeepSeek’s rapid ascent. The launch of its AI assistant, which rivals OpenAI’s ChatGPT, saw the app quickly rise to the top of download lists across various nations, causing unease among competitors in the US tech sector. Concurrently, US officials are evaluating possible national security threats linked to the widespread use of a Chinese AI model, with the US Navy specifically cautioning its personnel against using DeepSeek.

    Data Transparency Under Fire

    Transparency in how data is managed remains a critical point of contention. According to the company’s privacy policy, user data is kept on secure servers in China and might be shared with affiliated organizations and service providers. Despite this, Euroconsumers—a group of European consumer advocates—has raised concerns regarding the sufficiency of these notifications and questioned DeepSeek’s compliance with the European Union’s General Data Protection Regulation (GDPR).

    This is not Italy’s first clash with an AI service: back in 2023, the country temporarily banned ChatGPT over worries about user data protection. In response, OpenAI made several adjustments to its platform, including greater transparency about data processing, opt-out choices for users, and age verification measures aimed at protecting children under 13. These changes ultimately led to the chatbot’s reinstatement.

    Future Implications for DeepSeek

    As DeepSeek continues to expand its presence worldwide, the regulatory hurdles it faces are intensifying. The company is required to provide answers to the Italian regulator by February 17, a deadline that could significantly impact its future operations in the European market. Should authorities determine that privacy laws have been violated, DeepSeek may encounter severe penalties or operational restrictions, potentially setting a precedent for the examination of AI products created outside Western jurisdictions.

  • Alibaba’s Qwen 2.5 Max AI Outperforms DeepSeek in Competition

    Chinese tech giant Alibaba Group Holding Ltd. has rolled out its latest AI model, Qwen 2.5 Max. The company claims this model outshines DeepSeek v3, a claim that comes just weeks after DeepSeek’s much-anticipated launch on January 10th.

    Performance Claims

    Alibaba asserts that Qwen 2.5 Max exceeds the performance of other leading AI models, including those from DeepSeek, OpenAI, and Meta. The model has shown remarkable results in several benchmarks, such as Arena-Hard, LiveBench, LiveCodeBench, MMLU, and GPQA-Diamond. Its performance in MMLU and LiveCodeBench has even set new industry benchmarks, showcasing its advanced capabilities.

    Market Reactions

    The introduction of DeepSeek sent ripples through Silicon Valley, driving down tech stock prices and pushing competitors to step up their AI efforts. In a quick response to DeepSeek’s impactful launch, ByteDance announced advancements to its own AI model, claiming it now outperforms OpenAI’s o1 on the AIME benchmark.

    Strategic Timing

    The launch of Qwen 2.5 Max appears carefully timed, likely a reaction to the rising competition China’s tech sector faces from international players. The announcement came on January 29th, 2025, coinciding with the Lunar New Year, a major holiday when many businesses in China close. By unveiling the model early, Alibaba is signaling its resolve to lead in AI innovation even as competition heats up following DeepSeek’s entry.

    As the field of AI keeps advancing rapidly, Alibaba’s new achievement marks another important milestone in the global AI competition.



  • Radeon RX 7900 XTX Beats RTX 4090 & 4080 Super in DeepSeek AI

    Chinese LLM DeepSeek caused a significant disturbance in the US tech industry, leading to a loss of trillions of dollars from the stock market. Even though it was developed using somewhat older Nvidia hardware, it surprisingly performs quite well on AMD’s consumer product: the Radeon RX 7900 XTX. David McAfee, who oversees AMD’s Radeon division, shared some benchmark results on X.

    Performance Insights

    The Radeon RX 7900 XTX shows performance differences depending on the model and the number of parameters used. Against the RTX 4090, it can be up to 13% faster at 7 billion parameters and about 2% faster at 14 billion. Beyond that point, the RDNA 3 flagship struggles, ultimately falling behind the RTX 4090 at 32 billion parameters. AMD also compared it with the GeForce RTX 4080 Super, against which the 7900 XTX boasts a 34% performance advantage.

    Running DeepSeek Locally

    AMD has also shared comprehensive guidelines on how to run DeepSeek on your own computer. Note, however, that the Radeon RX 7900 XTX tops out at 32 billion parameters. The Strix Halo-based Ryzen AI Max+ 395, equipped with 128 GB of RAM, can handle up to 72 billion parameters. And for those willing to spend around $6,000, Matthew Carrigan has found a way to run the entire model locally on a system with dual AMD Epyc CPUs and 768 GB of RAM.
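
    Those parameter ceilings line up with the memory the weights alone need at the 4-bit quantization typically used for local inference. A rough estimate (real usage runs higher once activations and the KV cache are counted):

```python
def quantized_weight_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    """Approximate memory footprint of a model's weights alone, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# 32B at 4-bit -> ~16 GB, close to the 7900 XTX's 24 GB VRAM once overhead is added.
# 72B at 4-bit -> ~36 GB, which is why it calls for Strix Halo's 128 GB of memory.
for size in (32, 72):
    print(f"{size}B @ 4-bit: ~{quantized_weight_gb(size):.0f} GB of weights")
```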
