How to Set Up and Use FastMOT for Object Tracking

Nov 14, 2020 | Data Science

If you’re looking to implement a cutting-edge multiple object tracking system capable of working in real-time, FastMOT is your solution. This blog post will guide you through the setup and usage of FastMOT, ensuring you get started quickly and efficiently.

What is FastMOT?

FastMOT is an advanced multiple object tracker that incorporates a variety of detectors and trackers to provide efficient and effective tracking even under challenging conditions such as moving cameras.

Key Features of FastMOT

  • Multi-class tracking support
  • Detection using YOLO, SSD, and more
  • Deep SORT plus OSNet Re-Identification
  • Camera motion compensation for enhanced tracking

System Requirements

Before installing FastMOT, ensure that your system meets the following requirements:

  • CUDA >= 10
  • cuDNN >= 7
  • TensorRT >= 7
  • OpenCV >= 3.3
  • Numpy >= 1.17
  • Scipy >= 1.5
  • Numba == 0.48
  • CuPy == 9.2
  • TensorFlow < 2.0 (for SSD support)

Installation Guide

For x86 Ubuntu

```bash
# Ensure nvidia-docker is installed
docker build -t fastmot:latest .
docker run --gpus all --rm -it \
  -v $(pwd):/usr/src/app/FastMOT \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=unix$DISPLAY \
  --env TZ=$(cat /etc/timezone) \
  fastmot:latest
```

For Jetson Nano/TX2/Xavier NX

```bash
# Ensure JetPack 4.4 is installed
./scripts/install_jetson.sh
```

Usage Instructions

Once FastMOT is installed, you can use it to track objects with ease. Here's how to run the application:

```bash
python3 app.py --input-uri <input-uri>
```

Replace `<input-uri>` according to your video source:

  • For image sequences, use: --input-uri %06d.jpg
  • For video files, use: --input-uri file.mp4
  • For a USB webcam, use: --input-uri /dev/video0
  • For MIPI CSI cameras, use: --input-uri csi:0
  • For RTSP streams, use: --input-uri rtsp://user:password@ip:port/path
  • For HTTP streams, use: --input-uri http://user:password@ip:port/path
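To make the URI options above concrete, here is a small illustrative helper (not part of FastMOT) that classifies an input string into the source types listed above, so you can sanity-check an argument before launching the tracker:

```python
# Illustrative helper (hypothetical, not part of FastMOT): map an input
# URI string to one of the source types accepted by --input-uri.
def classify_input_uri(uri: str) -> str:
    if uri.startswith("rtsp://"):
        return "rtsp"
    if uri.startswith("http://") or uri.startswith("https://"):
        return "http"
    if uri.startswith("csi:"):
        return "csi"
    if uri.startswith("/dev/video"):
        return "v4l2"
    # printf-style patterns like %06d.jpg denote image sequences
    if "%" in uri and uri.endswith((".jpg", ".png")):
        return "image_sequence"
    return "video_file"

print(classify_input_uri("%06d.jpg"))       # image_sequence
print(classify_input_uri("/dev/video0"))    # v4l2
```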

Understanding the Code

The code you’ll implement can be likened to a driving experience with a skilled driver (FastMOT) navigating through a busy city (the various detection and tracking algorithms) while ensuring not to miss any lanes (the gaps between frames) and following traffic rules (the configuration settings). Here’s briefly how the components work together:

  • **YOLO and SSD Detectors:** Act as the eyes of the driver, identifying objects in the scene.
  • **Deep SORT + OSNet ReID:** This helps the driver remember cars they have passed earlier, even if they disappear from the view momentarily.
  • **KLT Tracker:** This component fills in the gaps between detector runs: on frames where the detector is skipped, optical flow estimates how objects have moved, ensuring smooth navigation.
  • **Motion Compensation:** If the car (camera) moves, FastMOT adjusts like a skilled driver changing lanes, maintaining focus on the objects.
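The interplay between the detector and the lightweight tracker can be sketched as follows. This is a minimal toy loop, not FastMOT's actual code: `detect` and `propagate` are stand-ins for the YOLO/SSD pass and the KLT optical-flow step, and the skip interval is a hypothetical value:

```python
# Minimal sketch of frame-skipped detection: run the expensive detector
# every DETECTOR_FRAME_SKIP frames and propagate tracks cheaply in between.

DETECTOR_FRAME_SKIP = 5  # hypothetical value; FastMOT exposes this in its config

def detect(frame):
    """Stand-in for a YOLO/SSD pass: returns fresh bounding boxes."""
    return [("person", (10, 10, 50, 80))]

def propagate(tracks, frame):
    """Stand-in for KLT optical flow: shifts existing boxes slightly."""
    return [(label, (x + 1, y, w, h)) for label, (x, y, w, h) in tracks]

def run(frames):
    tracks = []
    schedule = []  # records which step ran on each frame, for illustration
    for i, frame in enumerate(frames):
        if i % DETECTOR_FRAME_SKIP == 0:
            tracks = detect(frame)             # expensive, accurate refresh
            schedule.append("detect")
        else:
            tracks = propagate(tracks, frame)  # cheap, keeps tracks alive
            schedule.append("track")
    return schedule

print(run(range(7)))
# ['detect', 'track', 'track', 'track', 'track', 'detect', 'track']
```

Raising the skip interval trades detection freshness for speed, which is why it is the first knob to turn when tracking is too slow.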

Troubleshooting Common Issues

If you encounter any problems during installation or usage, try the following troubleshooting tips:

  • Ensure your CUDA and TensorRT versions are correctly installed and compatible with your hardware.
  • If you cannot visualize inside the Docker container, run `xhost local:root` on the host before executing the `docker run` command.
  • If your tracking speed is slow, consider using lighter models or increasing the `detector_frame_skip` parameter.
  • Check for any missing packages as listed in the system requirements.
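FastMOT reads its tuning parameters from a JSON config file in the repository's cfg/ directory. The exact keys vary between versions, so treat this fragment as an illustration of where a setting like `detector_frame_skip` lives rather than a verbatim config:

```json
{
  "resize_to": [1280, 720],
  "detector_frame_skip": 5
}
```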

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

FastMOT provides a robust solution for multi-object tracking, utilizing advanced techniques to ensure real-time performance. By following this guide, you should be well-equipped to install and utilize FastMOT effectively.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
