How to Generate Images Using Stable Diffusion on Your iPhone or iPad with Expo and React Native

Aug 24, 2022 | Data Science

Have you ever wondered if it’s possible to generate images using Stable Diffusion natively on your iPhone or iPad while taking advantage of Core ML in an Expo and React Native app? Well, now you can!

Stable Diffusion on iOS

Getting Started

To dive into generating images with Stable Diffusion on iOS, follow this step-by-step guide.

Installation

Begin by installing the expo-stable-diffusion module in your Expo-managed project using the following command:

npx expo install expo-stable-diffusion

Configuration

Update iOS Deployment Target

For the project to build successfully, you need to set the iOS Deployment Target to 16.2. Install the expo-build-properties plugin:

npx expo install expo-build-properties

Then configure the plugin in your app.json:

{
  "expo": {
    "plugins": [
      [
        "expo-build-properties",
        {
          "ios": {
            "deploymentTarget": "16.2"
          }
        }
      ]
    ]
  }
}

Enable Increased Memory Limit

Add the Increased Memory Limit capability so that image generation does not exceed the default per-app memory limit and crash. In your app.json, add the entitlement:

{
  "expo": {
    "ios": {
      "entitlements": {
        "com.apple.developer.kernel.increased-memory-limit": true
      }
    }
  }
}
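Both settings live in the same app.json. Merged into one file (other standard keys such as name and slug omitted), the relevant section looks like this:

```json
{
  "expo": {
    "ios": {
      "entitlements": {
        "com.apple.developer.kernel.increased-memory-limit": true
      }
    },
    "plugins": [
      [
        "expo-build-properties",
        {
          "ios": {
            "deploymentTarget": "16.2"
          }
        }
      ]
    ]
  }
}
```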

Building Your iOS App

Finally, build your iOS app using the following commands:

npx expo prebuild --clean --platform ios
npx expo run:ios

Using expo-stable-diffusion

After installation and configuration, you can start generating images. Here’s a basic usage example:

import * as FileSystem from 'expo-file-system';
import * as ExpoStableDiffusion from 'expo-stable-diffusion';

const MODEL_PATH = FileSystem.documentDirectory + 'Model/stable-diffusion-2-1';
const SAVE_PATH = FileSystem.documentDirectory + 'image.jpeg';

// Load the converted Core ML model from the app's document directory.
await ExpoStableDiffusion.loadModel(MODEL_PATH);

// Subscribe to progress updates, one event per inference step.
const subscription = ExpoStableDiffusion.addStepListener((step) =>
  console.log(`Current Step: ${step}`)
);

// Generate the image and write it to SAVE_PATH.
await ExpoStableDiffusion.generateImage({
  prompt: 'a cat coding at night',
  stepCount: 25,
  savePath: SAVE_PATH,
});

// Clean up the listener once generation is done.
subscription.remove();

Make sure the directory for saving the image exists; you can create it using FileSystem.makeDirectoryAsync(fileUri, options).
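As a sketch, here is a small hypothetical helper (not part of the module) that derives the parent directory of a save path, which you could pass to FileSystem.makeDirectoryAsync with { intermediates: true } before generating:

```typescript
// Hypothetical helper: return the parent directory of a file URI,
// including the trailing slash.
function parentDir(fileUri: string): string {
  return fileUri.slice(0, fileUri.lastIndexOf("/") + 1);
}

// In the app, this could be used before calling generateImage, roughly:
//   await FileSystem.makeDirectoryAsync(parentDir(SAVE_PATH), { intermediates: true });

console.log(parentDir("file:///data/Containers/image.jpeg")); // file:///data/Containers/
```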

Obtaining Stable Diffusion Models

To use the expo-stable-diffusion module, you need a Stable Diffusion model converted to Core ML. You can convert a model yourself by following Apple’s official guide, or download pre-converted models from Apple’s Hugging Face repository or my Hugging Face repository.
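For orientation, a conversion run with Apple’s ml-stable-diffusion tools looks roughly like the following; the flag names are taken from that repository’s README and may change, so check the current README before running:

```shell
# Rough sketch of Apple's Core ML conversion step (ml-stable-diffusion).
python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-unet --convert-text-encoder --convert-vae-decoder \
  --model-version stabilityai/stable-diffusion-2-1-base \
  --bundle-resources-for-swift-cli \
  -o ./converted-model
```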

Troubleshooting

If you run into slow model loading or long image generation times, especially on devices with less than 6GB of RAM, see Q6 in the FAQ section of Apple’s ml-stable-diffusion repository.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

Analogy Explanation of Code

Think of the process of using expo-stable-diffusion like organizing a party. First, you need to gather materials (like installing the required modules), prepare the venue (configure your app), and ensure everything is in place for a successful event (build your iOS app). Only when all these steps are efficiently completed can you welcome your guests and enjoy the festivities (start generating images)!
