Tag: Stable Diffusion

  • Arizona Court Accepts AI Video Testimony from Deceased Victim


    Key Takeaways

    1. An Arizona court allowed an AI-generated video statement of a deceased victim, Christopher Pelkey, to be shown during a sentencing hearing for his killer, Gabriel Horcasitas.

    2. The avatar was created by Pelkey’s sister, Stacey Wales, using image generation, voice cloning, and AI scripting tools, and it delivered a victim impact statement.

    3. The video included a disclaimer about its AI origin and featured actual footage of Pelkey, addressing Horcasitas and expressing the family’s feelings about their loss.

    4. Arizona law permits victim impact statements in various formats, and there were no objections to the inclusion of the AI video during the sentencing.

    5. Judge Todd Lang imposed the maximum sentence of 10.5 years, acknowledging the emotional impact of the AI-generated statement on the case.


    In a groundbreaking event for the U.S. legal system, an Arizona court has allowed an AI-created video statement of a deceased victim to be shown during a sentencing hearing. The video, which showcased a digital version of Christopher Pelkey, who lost his life in a 2021 road rage incident, was presented when Gabriel Horcasitas was being sentenced for his death.

    Creation of the Avatar

    The avatar was crafted by Pelkey’s sister, Stacey Wales, with assistance from her husband. They utilized a mix of image generation, voice cloning, and generative AI scripting tools. The resulting video displayed a digitally animated Pelkey delivering a victim impact statement directed at both the court and Horcasitas. This video was part of the family’s narrative during the sentencing phase.

    Presentation and Reactions

    The video began with a disclaimer indicating its AI origin and included actual footage of Pelkey from his life before returning to the avatar. In the video, the AI version addressed Horcasitas directly, conveying the family’s feelings about their loss and the impact of the past three and a half years on their lives.

    Under Arizona law, victim impact statements can be shared in different formats, and there were no objections to the AI video being included. The family made it clear that they authored the content, which wasn’t meant to be interpreted as Pelkey’s own words.

    Legal Insights

    Jessica Gattuso, the attorney for the family, mentioned that Arizona’s victim rights laws allowed the family to choose how to present their statement. “I didn’t see any issues with the AI and there was no objection,” she remarked.

    Judge Todd Lang, who oversaw the case, recognized the emotional weight of the video during the sentencing. He imposed the maximum sentence of 10.5 years, in line with the family’s wishes.

    Stacey Wales explained that the video was made using Stable Diffusion with LoRA (Low-Rank Adaptation) for generating images, and a different AI model to mimic Pelkey’s voice. She described the project as a way to help the court grasp the profound effect her brother’s death had on their family.

    Horcasitas was convicted in March 2025 and received his sentence this month. This case represents the first documented instance of a U.S. court accepting an AI-generated avatar to symbolize a deceased victim during sentencing.

    Source:
    Link

  • Host Your Own AI Image Generator with Invoke AI & Stable Diffusion


    There are plenty of reasons you might consider setting up your own AI image generator. You may want to skip watermarks and ads, create multiple images without paying for a subscription, or explore image generation in ways that might not align with the ethical guidelines of a hosted service. By hosting your own instance and running models such as Stable Diffusion, released by Stability AI, you keep complete control over what your AI produces.

    Getting Started

    To kick things off, download the Invoke AI community edition from the provided link. For Windows users, most of the installation is now automatic, so all necessary dependencies should install smoothly. However, you might encounter some challenges if you’re using Linux or macOS. For our tests, we used a virtual machine running Windows 11 with 8 cores from a Ryzen 9 5950X, an RTX 4070, and 24GB of RAM on a 1TB NVMe SSD. AMD GPUs are supported, but only on Linux systems.

    Once the installation is complete, open Invoke AI to generate the configuration files, and then close it. This step is essential as you’ll need to modify some system settings to enable “Low-VRAM mode.”

    Configuring Low-VRAM Mode

    Invoke AI doesn’t clearly define what “low VRAM” means, but the 12GB of VRAM on the RTX 4070 is unlikely to be enough for a model that wants around 24GB. To work around this, edit the invokeai.yaml file in the installation directory with a text editor and add the following line:

    ```
    enable_partial_loading: true
    ```

    After this change, Windows users with Nvidia GPUs also need to set the CUDA - Sysmem Fallback Policy to “Prefer No Sysmem Fallback” in the Nvidia Control Panel’s global settings. You can fine-tune how much VRAM is reserved for the model cache, but most users will find that simply enabling “Low-VRAM mode” is enough to get things running.
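    For reference, the low-VRAM portion of invokeai.yaml could end up looking like the sketch below. Only enable_partial_loading comes from the step above; the commented-out cache keys (device_working_mem_gb, max_cache_vram_gb) are assumptions based on Invoke AI’s low-VRAM documentation and may be named differently in your version, so verify them against the docs before uncommenting.

    ```
    # invokeai.yaml: low-VRAM sketch
    enable_partial_loading: true   # stream model weights between RAM and VRAM on demand

    # The keys below are assumptions; confirm the exact names for your version.
    # device_working_mem_gb: 4     # GPU memory reserved as working space during inference
    # max_cache_vram_gb: 8         # cap on VRAM used to cache model weights
    ```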

    Downloading Models

    Some models, like Dreamshaper and CyberRealistic, can be downloaded right away. However, to access Stable Diffusion, you’ll need to create a Hugging Face account and generate a token for Invoke AI to pull the model. There are also options to add models via URL, local path, or folder scanning. To create the token, click on your avatar in the top right corner and select “Access Tokens.” You can name the token whatever you prefer, but make sure to grant access to the following:

    Once you have the token, copy and paste it into the Hugging Face section of the models tab. You may also have to confirm access to the model on the Hugging Face website itself; there’s no need to sign up for updates there, and Invoke AI will notify you if access still needs to be granted.

    Be aware that some models can take up a significant amount of storage, with Stable Diffusion 3.5 (Large) requiring around 19 GB.

    Accessing the Interface

    If everything is set up correctly, you should be ready to start. Access the interface through a web browser on the host machine by navigating to http://127.0.0.1:9090. You can also make this accessible to other devices on your local network.
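    To open the UI up to other devices, one approach, assuming your Invoke AI build exposes the usual host and port settings in invokeai.yaml, is to bind the server to all network interfaces instead of just the loopback address:

    ```
    # invokeai.yaml: network sketch (host/port settings assumed to be available in your version)
    host: 0.0.0.0   # listen on every interface instead of only 127.0.0.1
    port: 9090      # default web UI port; change it if another service already uses 9090
    ```

    Other machines on the LAN can then reach the interface at http://<host-ip>:9090, though this also exposes it to anyone else on the same network.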

    In the “canvas” tab, you can enter a text prompt to generate an image. Just below that, you can adjust the resolution for the image; keep in mind that higher resolutions will take longer to process. Alternatively, you can create at a lower resolution and use an upscale tool later. Below that, you can choose which model to use. Among the four models tested—Juggernaut XL, Dreamshaper 8, CyberRealistic v4.8, and Stable Diffusion 3.5 (Large)—Stable Diffusion created the most photorealistic images, yet struggled with text prompts, while the others offered visuals resembling game cut scenes.

    The choice of model really comes down to which one delivers the best results for your needs. While Stable Diffusion was the slowest, taking about 30 to 50 seconds per image, its results were arguably the most realistic and satisfying among the four.

    Exploring More Features

    There’s still a lot to explore with Invoke AI. This tool lets you modify parts of an image, create iterations, refine visuals, and build workflows. You don’t need high-end hardware to run it; the Windows version can operate on any 10xx series Nvidia GPU or newer, though expect slower image generation. Despite mixed opinions on AI model training and the associated energy use, running AI on your own hardware is an excellent means to create royalty-free images for various applications.

    Source:
    Link