Welcome to this comprehensive guide on using tflite, a Flutter plugin that gives developers easy access to the TensorFlow Lite API. Whether you’re interested in image classification, object detection, or even advanced image translation with Pix2Pix, this guide will help you navigate it all.
Installation
To begin, add tflite as a dependency in your pubspec.yaml file.
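For example (the version below is only illustrative; check pub.dev for the latest tflite release):

dependencies:
  tflite: ^1.1.2  # illustrative version; use the latest from pub.dev

Then run flutter pub get to fetch the package.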
For Android:
In your android/app/build.gradle file, place the following settings in the android block:
aaptOptions {
noCompress "tflite"
noCompress "lite"
}
For iOS:
If you encounter build errors on iOS:
- Open ios/Runner.xcworkspace in Xcode, navigate to Build Settings, and change “Compile Sources As” to “Objective-C++”.
- Fix the 'tensorflow/lite/kernels/register.h' file not found error with the CONTRIB_PATH setting if you’re using an early TensorFlow version, where the headers live under tensorflow/contrib/lite instead.
Usage
Follow these steps to use the plugin:
- Create an assets folder and place your label file and model file in it, then update pubspec.yaml:

  assets:
   - assets/labels.txt
   - assets/mobilenet_v1_1.0_224.tflite

- Import the library:

  import 'package:tflite/tflite.dart';

- Load the model and labels:

  String res = await Tflite.loadModel(
    model: "assets/mobilenet_v1_1.0_224.tflite",
    labels: "assets/labels.txt",
    numThreads: 1,
    isAsset: true,
    useGpuDelegate: false,
  );

- Run the relevant model function for your use case.
- When done, release resources:

  await Tflite.close();
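If the model is used by a single screen, a common pattern is to load it when the widget is created and release it when the widget is disposed. Here is a minimal sketch, assuming a hypothetical ClassifierPage widget (the class names are illustrative, not part of the plugin):

import 'package:flutter/material.dart';
import 'package:tflite/tflite.dart';

class ClassifierPage extends StatefulWidget {
  @override
  _ClassifierPageState createState() => _ClassifierPageState();
}

class _ClassifierPageState extends State<ClassifierPage> {
  @override
  void initState() {
    super.initState();
    // Load the model once when the widget is created.
    Tflite.loadModel(
      model: "assets/mobilenet_v1_1.0_224.tflite",
      labels: "assets/labels.txt",
    );
  }

  @override
  void dispose() {
    // Release the native interpreter when the widget goes away.
    Tflite.close();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => Container();
}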
Understanding the Code: An Analogy
Think of the Tflite.loadModel method as a chef gathering all ingredients to prepare a dish. Just like a chef selects the right ingredients (model and labels) for a delicious recipe, you need to specify where your model and labels are stored. Setting the number of threads equates to deciding how many kitchen hands will help speed up the cooking process. After the meal (or inference) is prepared, you serve it (or display the results) and clean up the kitchen (release the resources).
Image Classification
The output format will be like this:
{
  index: 0,
  label: "person",
  confidence: 0.629
}
To run image classification on an image:
var recognitions = await Tflite.runModelOnImage(
path: filepath,
imageMean: 0.0,
imageStd: 255.0,
numResults: 2,
threshold: 0.2,
asynch: true,
);
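The call returns a list of maps with the index, label, and confidence keys shown above, so a quick way to inspect the top result is (a rough sketch, assuming at least one recognition was returned):

if (recognitions != null && recognitions.isNotEmpty) {
  final top = recognitions.first;
  // Print the best label with its confidence as a percentage.
  print("${top['label']} (${(top['confidence'] * 100).toStringAsFixed(1)}%)");
}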
Object Detection
The output will give you details about detected objects, such as:
{
  detectedClass: "hot dog",
  confidenceInClass: 0.123,
  rect: {
    x: 0.15,
    y: 0.33,
    w: 0.80,
    h: 0.27
  }
}
SSD MobileNet
To run object detection using the SSD model:
var recognitions = await Tflite.detectObjectOnImage(
path: filepath,
model: "SSDMobileNet",
imageMean: 127.5,
imageStd: 127.5,
threshold: 0.4,
numResultsPerClass: 2,
asynch: true,
);
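The x, y, w, and h values in rect are expressed as fractions of the image width and height, so to draw boxes you scale them by the size of the image you are displaying. A hedged sketch (printPixelBoxes is a hypothetical helper, and you supply the image dimensions yourself):

// Hypothetical helper: convert fractional rects to pixel boxes for drawing.
void printPixelBoxes(List recognitions, double imageWidth, double imageHeight) {
  for (var re in recognitions) {
    final rect = re["rect"];
    final left = rect["x"] * imageWidth;
    final top = rect["y"] * imageHeight;
    final width = rect["w"] * imageWidth;
    final height = rect["h"] * imageHeight;
    print("${re["detectedClass"]}: left=$left, top=$top, ${width}x$height");
  }
}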
Tiny YOLOv2
For running object detection with YOLO, use:
var recognitions = await Tflite.detectObjectOnImage(
path: filepath,
model: "YOLO",
imageMean: 0.0,
imageStd: 255.0,
threshold: 0.3,
numResultsPerClass: 2,
anchors: anchors,
blockSize: 32,
numBoxesPerBlock: 5,
asynch: true,
);
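The anchors argument is a flat list of width/height pairs describing the YOLO anchor boxes. The values below are the commonly used Tiny YOLOv2 (VOC) anchors; treat them as an illustration and verify them against the configuration your model was trained with:

// Commonly cited Tiny YOLOv2 (VOC) anchors; confirm against your model's training config.
var anchors = [
  0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
  5.47434, 7.88282, 3.52778, 9.77052, 9.16828,
];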
Pix2Pix
Here’s how you would run Pix2Pix:
var result = await Tflite.runPix2PixOnImage(
path: filepath,
imageMean: 0.0,
imageStd: 255.0,
asynch: true,
);
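With PNG output the result is raw image bytes, so one way to show it on screen is through Image.memory. A minimal sketch, assuming result is a Uint8List of PNG data (pix2pixPreview is a hypothetical helper, not part of the plugin):

import 'dart:typed_data';
import 'package:flutter/material.dart';

// Hypothetical helper: turn the Pix2Pix output bytes into a widget.
Widget pix2pixPreview(Uint8List? result) {
  if (result == null) return const Text("No output yet");
  return Image.memory(result);
}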
Deeplab
To run the Deeplab model:
var result = await Tflite.runSegmentationOnImage(
path: filepath,
imageMean: 0.0,
imageStd: 255.0,
labelColors: [...],
outputType: "png",
asynch: true,
);
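The segmentation result can likewise come back as PNG bytes (a colored mask when outputType is "png"), which you can lay over the original photo. A rough sketch, assuming the mask bytes and the source file path are available (segmentationOverlay is a hypothetical helper):

import 'dart:io';
import 'dart:typed_data';
import 'package:flutter/material.dart';

// Hypothetical helper: original photo with the segmentation mask on top.
Widget segmentationOverlay(String filepath, Uint8List mask) {
  return Stack(
    fit: StackFit.passthrough,
    children: [
      Image.file(File(filepath)),
      Opacity(opacity: 0.5, child: Image.memory(mask)),
    ],
  );
}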
PoseNet
And for PoseNet:
var result = await Tflite.runPoseNetOnImage(
path: filepath,
imageMean: 125.0,
imageStd: 125.0,
numResults: 2,
threshold: 0.7,
nmsRadius: 10,
asynch: true,
);
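Each entry in the result describes one detected pose with an overall score and a keypoints map (part name, position, score). A hedged sketch that simply prints them, assuming that output shape:

for (var pose in result) {
  print("pose score: ${pose["score"]}");
  // Each keypoint carries a body part name and its position in the image.
  (pose["keypoints"] as Map).forEach((index, keypoint) {
    print("  ${keypoint["part"]}: ${keypoint["x"]}, ${keypoint["y"]}");
  });
}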
Example
Prediction in Static Images
Refer to this example for predictions on static images.
Real-time Detection
You can check out this repository for real-time detection.
Troubleshooting
Here are some common issues you might encounter while utilizing tflite:
- Model not loading: Ensure that your model path is set correctly in the pubspec.yaml file.
- Graphics issues on iOS: Ensure you have set “Compile Sources As” to “Objective-C++”.
- TensorFlow header not found: Adjust the path according to your TensorFlow setup.
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

