Getting Started with Libonnx: Your Ultimate Guide to ONNX Inference Engine for Embedded Devices

Mar 4, 2021 | Educational

Welcome to the world of Libonnx, a lightweight and portable ONNX inference engine designed specifically for embedded devices. With hardware acceleration support, Libonnx empowers developers to run ONNX models efficiently. This guide will walk you through the steps to get started, compile, run examples, and troubleshoot common issues.

How to Set Up Libonnx

Libonnx is straightforward to integrate into your projects. Here’s a step-by-step guide on how to do it.

1. Initial Setup

Start by adding the Libonnx `.c` and `.h` files to your project directory. Once that’s done, you can allocate the necessary structures for running the inference engine.

To allocate the context struct from a model file:

```c
struct onnx_context_t * ctx = onnx_context_alloc_from_file(filename, NULL, 0);
```

2. Accessing Tensors

Next, you need to get the input and output tensors.

To retrieve the input tensor, search for it by name:

```c
struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input-tensor-name");
```

And likewise for the output tensor:

```c
struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output-tensor-name");
```

3. Running Inference

After setting the input tensor, you can run the inference engine!

```c
onnx_run(ctx);
```

4. Clean Up

Finally, don’t forget to free the allocated context using:

```c
onnx_context_free(ctx);
```
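Putting the four steps together, here is a minimal end-to-end sketch. The tensor names `"input"` and `"output"` are placeholders that depend entirely on your model, and error handling is kept deliberately simple:

```c
#include <stdio.h>
#include <onnx.h>

int main(int argc, char * argv[])
{
    /* 1. Allocate an inference context from a model file
     *    (no custom resolvers, so pass NULL and 0) */
    struct onnx_context_t * ctx = onnx_context_alloc_from_file("model.onnx", NULL, 0);
    if(!ctx)
    {
        printf("Failed to load model\n");
        return -1;
    }

    /* 2. Look up tensors by name; "input" and "output" are
     *    placeholders for the names your model actually uses */
    struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input");
    struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output");

    /* ... fill the input tensor's data buffer here ... */

    /* 3. Run inference */
    onnx_run(ctx);

    /* ... read results from the output tensor here ... */

    /* 4. Clean up */
    onnx_context_free(ctx);
    return 0;
}
```

Compile and link it against the static library produced by `make` in the libonnx root directory.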

Compilation Instructions

To compile Libonnx, follow these instructions:

  • Navigate to the root directory and run `make`. This generates a static library along with the example and test binaries.
  • To compile the MNIST example, you need to install SDL2 and SDL2_gfx first:

```shell
apt-get install libsdl2-dev libsdl2-gfx-dev
```

  • For cross-compilation (e.g., for arm64), point `CROSS_COMPILE` at your toolchain prefix:

```shell
make CROSS_COMPILE=path/to/toolchain/aarch64-linux-gnu-
```

How to Run Examples

Once everything is compiled, run an example:

```shell
cd libonnx/examples && ./hello
```

Screenshots

Here is a glimpse of the MNIST handwritten digit prediction:

![Mnist handwritten digit prediction](documents/images/mnist.gif)

Running Tests

To verify that everything is functioning correctly, run the test binary against a model folder in the tests directory:

```shell
cd libonnx/tests && ./tests model
```

The output indicates whether each test passes. Note that some tests may fail due to unsupported operators.

Notes

  • This library supports ONNX version 1.9.1 with opset 14.
  • To embed a model directly in a firmware image, you can convert it into an unsigned char array with `xxd -i filename.onnx`.
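The array produced by `xxd -i` can be compiled into your program and the model loaded from memory instead of from disk. A minimal sketch, assuming a model file named `mnist.onnx` and libonnx's in-memory allocator `onnx_context_alloc()`:

```c
/* Generated beforehand with: xxd -i mnist.onnx > mnist_model.h
 * This header defines:
 *   unsigned char mnist_onnx[];
 *   unsigned int  mnist_onnx_len;
 */
#include "mnist_model.h"
#include <onnx.h>

int main(void)
{
    /* Allocate the context from the embedded buffer rather than a file */
    struct onnx_context_t * ctx = onnx_context_alloc(mnist_onnx, mnist_onnx_len, NULL, 0);
    if(!ctx)
        return -1;

    onnx_run(ctx);
    onnx_context_free(ctx);
    return 0;
}
```

This is handy on embedded targets without a filesystem, since the model ships inside the binary itself.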

Troubleshooting

If you encounter any issues during the setup or execution process, consider the following steps:

  • Ensure all necessary dependencies (like SDL2) are correctly installed.
  • Double-check your paths during cross-compilation to confirm they point to the right toolchains.
  • Refer back to the supported operator table to ensure your ONNX model is compatible.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
