The Stable Diffusion WebUI Docker image packages the WebUI and its dependencies into a single container, so you can generate images without setting up a Python environment by hand. In this article, we will take you through the necessary steps to set up and run the WebUI on your machine, whether you’re using a GPU or just the CPU. Let’s dive in!
Step 1: Prepare the Directory
Before running the Docker image, you need to create a directory structure where the model files and outputs will be stored.
# Create a data directory with subfolders for models and generated outputs
mkdir -p MY-DATA-DIR
cd MY-DATA-DIR
mkdir models outputs
# The WebUI inside the container runs as UID 10000, so give that user
# ownership of the mounted folders and allow group read/write access
sudo chown -R 10000:$UID models outputs
sudo chmod -R 775 models outputs
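If you want to confirm the ownership change took effect, a quick check like the one below (using the directory names from this step) should report UID 10000 as the owner of both folders:
# Numeric listing; expect owner 10000 and mode drwxrwxr-x
ls -ldn models outputs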
Step 2: Running with CUDA (GPU Support)
If you have an NVIDIA GPU, you can take advantage of the GPU acceleration by running:
# --gpus all exposes the NVIDIA GPU(s); the volumes persist models and outputs on the host
docker run -it --name sdw --gpus all --network host \
  -v $(pwd)/models:/app/stable-diffusion-webui/models \
  -v $(pwd)/outputs:/app/stable-diffusion-webui/outputs \
  --rm siutin/stable-diffusion-webui-docker:latest-cuda \
  bash webui.sh --share
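If the WebUI cannot see your GPU, it is worth confirming that Docker’s NVIDIA integration works at all before debugging the WebUI itself. A minimal sanity check, assuming the public nvidia/cuda base image is suitable for your setup, is:
# Should print the nvidia-smi GPU table; an error here points to the
# NVIDIA Container Toolkit or driver setup rather than the WebUI image
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi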
Step 3: Running with CPU Only
If you don’t have a GPU, you can still run the model using CPU with this command:
# --skip-torch-cuda-test and --use-cpu all force all processing onto the CPU
docker run -it --name sdw --network host \
  -v $(pwd)/models:/app/stable-diffusion-webui/models \
  -v $(pwd)/outputs:/app/stable-diffusion-webui/outputs \
  --rm siutin/stable-diffusion-webui-docker:latest-cpu \
  bash webui.sh --skip-torch-cuda-test --use-cpu all --share
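In either mode, the WebUI listens on port 7860 by default, and because the container runs with --network host that port is available directly on your machine. Once the startup logs settle, a quick check from another terminal (assuming the default host and port) is:
# Prints the HTTP status code; expect 200 once the server has finished loading
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7860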
Step 4: Building a Custom Image
You can also build a custom image for your needs. Below you’ll find the commands for building both NVIDIA CUDA and CPU-only images:
Building with NVIDIA CUDA
docker buildx build -f Dockerfile.cuda \
  --platform linux/amd64 \
  --build-arg BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ) \
  --build-arg BUILD_VERSION=custom-cuda \
  -t siutin/stable-diffusion-webui-docker:custom-cuda .
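Once the build completes, the new tag should appear in your local image list:
# Confirm the custom CUDA image exists locally
docker image ls siutin/stable-diffusion-webui-docker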
Building a CPU-Only Image
docker buildx build -f Dockerfile.cpu \
  --platform linux/arm64 \
  --build-arg BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ) \
  --build-arg BUILD_VERSION=custom-cpu \
  -t siutin/stable-diffusion-webui-docker:custom-cpu .
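To use a custom image, substitute its tag into the run commands from Steps 2 and 3. For example, a CPU-only run of the custom image (same volume layout as above, and assuming the image was built for your machine’s architecture) would look roughly like this:
docker run -it --name sdw --network host \
  -v $(pwd)/models:/app/stable-diffusion-webui/models \
  -v $(pwd)/outputs:/app/stable-diffusion-webui/outputs \
  --rm siutin/stable-diffusion-webui-docker:custom-cpu \
  bash webui.sh --skip-torch-cuda-test --use-cpu all --share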
How the Code Works: An Analogy
Imagine you are an artist preparing a new workspace (i.e., the directory preparation). Creating these directories is similar to setting up your studio, organizing your paints and canvases in a way that makes them easy to find and use. When you’re ready to start painting (running the Docker image), you choose the tools to help you—your brushes (GPU or CPU) determine how you will create your masterpiece. Lastly, building custom images is akin to creating a unique paint mixture tailored exactly to your style. Each step ensures you have the right environment to bring your artistic vision to life!
Troubleshooting
If you encounter issues while running the Docker image, here are some common troubleshooting steps (matching commands are sketched after the list):
- Ensure that Docker is installed and running on your system.
- Check that you have proper permissions for the directories you created.
- Ensure your GPU drivers are up-to-date if running with CUDA.
- Review log messages carefully; they often point to the exact issue.
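The hints above map to a handful of quick commands. A rough checklist, assuming the container and directory names used earlier in this guide, might look like this:
# Is the Docker daemon installed and responding?
docker info

# Do the mounted folders have the expected owner (UID 10000) and permissions?
ls -ldn models outputs

# Is the GPU visible to the driver? (CUDA setups only)
nvidia-smi

# What is the WebUI itself reporting while the container runs?
docker logs sdw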
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
Setting up the Stable Diffusion WebUI with Docker is straightforward if you follow the steps outlined above. By leveraging CUDA for GPU acceleration or opting for a CPU-only approach, you can choose what works best for your environment. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

