Welcome to the world of Stable Diffusion with Docker! If you’re seeking a flexible and efficient way to harness the power of AI for your image generation needs, you’ve come to the right place. This guide walks you through the steps to set up the docker-diffusers-api, allowing you to easily run various models and pipelines using a REST API. With support for models like stable-diffusion, waifu-diffusion, and more, you’re on the brink of creating amazing images.
Features of Docker Diffusers API
- Multiple models: stable-diffusion, waifu-diffusion, and easy addition of others (e.g. jp-sd)
- Various pipelines: txt2img, img2img, and inpainting in a single container
- Support for S3 and dreambooth training
- Option to send signed event logs and performance data
Installation and Setup
Your first step is getting the docker-diffusers-api up and running. There are different ways to go about this:
Running Locally or on a Server
To run the application locally or on another server with runtime downloads, execute the following command:
docker run --gpus all -p 8000:8000 -e HF_AUTH_TOKEN=$HF_AUTH_TOKEN gadicc/diffusers-api
Running Serverless
For serverless setups, the model must be baked into the image at build time rather than downloaded when the container starts.
Building from Source
If you prefer to build from the source:
- Fork and clone this repository.
- Run:
docker build -t gadicc/diffusers-api .
- Refer to CONTRIBUTING.md for more details.
Usage of Docker Diffusers API
The API expects an HTTP POST request at http://localhost:8000 with a JSON body of the following form:
{
"modelInputs": {
"prompt": "Super dog",
"num_inference_steps": 50,
"guidance_scale": 7.5,
"width": 512,
"height": 512,
"seed": 3239022079
},
"callInputs": {
"MODEL_ID": "runwayml/stable-diffusion-v1-5",
"PIPELINE": "StableDiffusionPipeline",
"SCHEDULER": "LMSDiscreteScheduler",
"safety_checker": true
}
}
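The request above can be sent from Python with nothing but the standard library. This is a minimal sketch: it assumes the container is listening on localhost:8000 as in the docker run command, and the name of the base64 image field in the response (shown in the usage comment) may differ between versions, so inspect your container's actual response.

```python
import json
import urllib.request


def build_payload(prompt, steps=50, guidance=7.5, width=512, height=512, seed=None):
    """Assemble the JSON body the API expects (matching the example above)."""
    model_inputs = {
        "prompt": prompt,
        "num_inference_steps": steps,
        "guidance_scale": guidance,
        "width": width,
        "height": height,
    }
    if seed is not None:
        model_inputs["seed"] = seed
    return {
        "modelInputs": model_inputs,
        "callInputs": {
            "MODEL_ID": "runwayml/stable-diffusion-v1-5",
            "PIPELINE": "StableDiffusionPipeline",
            "SCHEDULER": "LMSDiscreteScheduler",
            "safety_checker": True,
        },
    }


def generate(prompt, url="http://localhost:8000"):
    """POST a generation request and return the decoded JSON response."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Usage, with the container running:
#   result = generate("Super dog")
#   # The response field holding the base64 PNG (e.g. "image_base64") is an
#   # assumption here -- check your container's output.
```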
Understanding Model Inputs and Call Inputs
Think of modelInputs as the ingredients for a fantastic recipe. Each variable you adjust alters the final dish – or in this case, the image you’ll generate. The callInputs act as the chef’s instructions on which cooking method and specific tools to use. Just as different chefs have preferences for how they make dishes, in this API, you get to choose your pipeline, scheduler, and even the model itself!
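Changing the "chef's instructions" in practice just means swapping values in callInputs while the modelInputs recipe stays the same. The class names below follow the diffusers library's naming; whether a particular pipeline or scheduler is available depends on your container build, so treat them as illustrative.

```python
# Baseline txt2img call inputs, as in the example request above.
txt2img_call = {
    "MODEL_ID": "runwayml/stable-diffusion-v1-5",
    "PIPELINE": "StableDiffusionPipeline",
    "SCHEDULER": "LMSDiscreteScheduler",
    "safety_checker": True,
}

# img2img uses the same model with a different pipeline class:
img2img_call = dict(txt2img_call, PIPELINE="StableDiffusionImg2ImgPipeline")

# Trying a different sampler only means changing SCHEDULER:
ddim_call = dict(txt2img_call, SCHEDULER="DDIMScheduler")
```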
Example Testing
You can find basic examples in test.py. Run it while the container is running:
python test.py
Troubleshooting
While using the docker-diffusers-api, you may encounter a few hiccups. Here are some common issues and solutions:
- 403 Client Error: Forbidden for URL – This could be due to an unaccepted license on the HuggingFace model card. Ensure that you have accepted the license and passed the HF_AUTH_TOKEN correctly.
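A quick way to rule out token problems is to verify the token against HuggingFace's whoami endpoint before blaming the container. A minimal sketch, assuming the token is in HF_AUTH_TOKEN as in the docker run command; the network call only happens when you invoke check_hf_token.

```python
import json
import os
import urllib.request


def build_whoami_request(token):
    """Build the authenticated request to HuggingFace's whoami endpoint."""
    return urllib.request.Request(
        "https://huggingface.co/api/whoami-v2",
        headers={"Authorization": f"Bearer {token}"},
    )


def check_hf_token(token=None):
    """Return account info for the token; raise if it is missing or invalid."""
    token = token or os.environ.get("HF_AUTH_TOKEN")
    if not token:
        raise RuntimeError("HF_AUTH_TOKEN is not set")
    # An invalid token makes urlopen raise an HTTPError (401).
    with urllib.request.urlopen(build_whoami_request(token)) as resp:
        return json.loads(resp.read())


# Usage: check_hf_token()  # raises if the token is bad; returns account info
```

If the token checks out but the 403 persists, make sure you have accepted the model's license on its HuggingFace model card.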
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
Setting up the Docker Diffusers API for Stable Diffusion can be an exhilarating experience, opening a world of possibilities in image generation. Dive in, experiment, and enjoy the art of AI!
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.