How to Implement Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose

Jun 11, 2023 | Data Science

Are you ready to dive into the world of pose estimation? Using the Lightweight OpenPose model, a highly optimized version of OpenPose, you can achieve real-time inference on CPU with minimal accuracy drop. In this guide, we’ll walk you through setting up this incredible technology, including troubleshooting tips, to help you get that skeleton detection up and running smoothly.

Table of Contents

  • Requirements
  • Prerequisites
  • Training
  • Validation
  • Pre-trained Model
  • C++ Demo
  • Python Demo
  • Troubleshooting

Requirements

  • Ubuntu 16.04
  • Python 3.6
  • PyTorch 0.4.1 (should also work with 1.0, but this is untested)
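
Before installing anything else, it can help to confirm that your interpreter and PyTorch build match the versions above. A minimal check, assuming PyTorch is already installed:

# check_env.py -- quick sanity check of the Python and PyTorch versions
import sys
import torch

print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
# Lightweight OpenPose targets real-time inference on CPU,
# so a CUDA device is optional here.
print("CUDA available:", torch.cuda.is_available())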

Prerequisites

  1. Download the COCO 2017 dataset from cocodataset.org (train images, val images, and annotations) and unpack it into a folder referred to below as COCO_HOME (a small layout check is sketched after this list).
  2. Install the requirements by running
    pip install -r requirements.txt
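
Before training, you may want to confirm the dataset was unpacked where the scripts expect it. The sketch below checks the standard COCO 2017 layout; reading COCO_HOME from an environment variable is only an illustrative assumption, so adjust the path to wherever you unpacked the archives.

# check_coco.py -- verify that COCO_HOME contains the expected COCO 2017 layout
import os

COCO_HOME = os.environ.get("COCO_HOME", "/data/coco")  # adjust to your unpack location

expected = [
    "train2017",                                    # training images
    "val2017",                                      # validation images
    "annotations/person_keypoints_train2017.json",  # training keypoint annotations
    "annotations/person_keypoints_val2017.json",    # validation keypoint annotations
]

for rel_path in expected:
    path = os.path.join(COCO_HOME, rel_path)
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{status:7s} {path}")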

Training

Training consists of three key steps:

  1. Train from the MobileNet weights; expected AP after this step is ~38%.
  2. Train from the weights obtained in the previous step; expected AP is ~39%.
  3. Train from the weights obtained in the previous step, with the number of refinement stages increased to 3; expected AP is ~40%.

Here’s how to get started:

  1. Download pre-trained MobileNet v1 weights from GitHub or from Google Drive.
  2. Convert the training annotations to the internal format by running
    python scripts/prepare_train_labels.py --labels COCO_HOME/annotations/person_keypoints_train2017.json
    This produces the prepared_train_annotation.pkl file used in the next step; a sketch after this list shows what the raw annotations contain.
  3. Train from the MobileNet weights using
    python train.py --train-images-folder COCO_HOME/train2017 --prepared-train-labels prepared_train_annotation.pkl --val-labels val_subset.json --val-images-folder COCO_HOME/val2017 --checkpoint-path path_to/mobilenet_sgd_68.848.pth.tar --from-mobilenet
  4. For the second and third training stages, run train.py again, each time initializing from the checkpoint produced by the previous stage, until you reach the target AP.
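
If you are curious what the raw keypoint annotations look like before conversion, the sketch below reads a few of them with pycocotools (install it separately if it is not already in your environment). Each annotated person carries 17 keypoints stored as flat (x, y, visibility) triples; the COCO_HOME environment variable is again just an illustrative assumption.

# inspect_annotations.py -- peek at COCO person keypoint annotations before conversion
import os
from pycocotools.coco import COCO

COCO_HOME = os.environ.get("COCO_HOME", "/data/coco")  # adjust to your unpack location
labels = os.path.join(COCO_HOME, "annotations", "person_keypoints_train2017.json")

coco = COCO(labels)
person_cat = coco.getCatIds(catNms=["person"])
img_ids = coco.getImgIds(catIds=person_cat)

# Look at the annotations of the first image containing people.
ann_ids = coco.getAnnIds(imgIds=img_ids[0], catIds=person_cat, iscrowd=None)
for ann in coco.loadAnns(ann_ids):
    kps = ann["keypoints"]  # flat list: x1, y1, v1, x2, y2, v2, ...
    triples = list(zip(kps[0::3], kps[1::3], kps[2::3]))
    print(f"person {ann['id']}: {ann['num_keypoints']} labelled of {len(triples)} keypoints")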

Validation

To validate your model, run:

python val.py --labels COCO_HOME/annotations/person_keypoints_val2017.json --images-folder COCO_HOME/val2017 --checkpoint-path CHECKPOINT
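
val.py handles the evaluation end to end, so the following is only a rough sketch of how COCO keypoint AP is computed with pycocotools. The detections.json file is a hypothetical results file in the standard COCO keypoint results format.

# coco_keypoint_eval.py -- minimal sketch of COCO keypoint AP evaluation with pycocotools
import os
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

COCO_HOME = os.environ.get("COCO_HOME", "/data/coco")
gt_file = os.path.join(COCO_HOME, "annotations", "person_keypoints_val2017.json")
dt_file = "detections.json"  # hypothetical: keypoint results in COCO results format

coco_gt = COCO(gt_file)
coco_dt = coco_gt.loadRes(dt_file)

evaluator = COCOeval(coco_gt, coco_dt, "keypoints")
evaluator.evaluate()    # per-image matching of detections to ground truth
evaluator.accumulate()  # aggregate precision/recall over thresholds
evaluator.summarize()   # prints AP/AR; the first line is the reported AP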

Pre-trained Model

You can leverage the pre-trained model available for download via OpenVINO. It reaches approximately 40% AP on the COCO validation set.
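
Whichever checkpoint you end up with, your own or the downloaded one, you can inspect it directly with PyTorch before wiring it into anything. The exact layout of the saved file depends on how it was written, so this sketch simply prints what it finds; the file name is a hypothetical example.

# inspect_checkpoint.py -- peek inside a saved checkpoint before using it
import torch

checkpoint_path = "checkpoint_iter_370000.pth"  # hypothetical path; point at your checkpoint
checkpoint = torch.load(checkpoint_path, map_location="cpu")  # keep everything on CPU

if isinstance(checkpoint, dict) and "state_dict" in checkpoint:
    state_dict = checkpoint["state_dict"]  # checkpoint wraps the weights
else:
    state_dict = checkpoint                # checkpoint is the raw state_dict

print(f"{len(state_dict)} tensors, e.g.:")
for name, tensor in list(state_dict.items())[:5]:
    print(f"  {name:60s} {tuple(tensor.shape)}")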

C++ Demo

Explore the C++ options within the Intel® OpenVINO™ toolkit. To run the C++ demo, follow the official OpenVINO documentation.

Python Demo

For quick results, run the Python demo from a webcam with the following command:

python demo.py --checkpoint-path path_to/checkpoint_iter_370000.pth --video 0
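
demo.py already connects the camera to the network for you. If you want to embed the demo in your own application, the loop below sketches only the webcam side with OpenCV; run_inference is a hypothetical placeholder for whatever pose-estimation call you plug in.

# webcam_loop.py -- minimal webcam capture loop to wrap around pose inference
import time
import cv2

def run_inference(frame):
    """Hypothetical placeholder: replace with your pose-estimation call."""
    return frame

cap = cv2.VideoCapture(0)  # 0 = default webcam, as in `--video 0`
if not cap.isOpened():
    raise RuntimeError("Could not open webcam")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    start = time.time()
    output = run_inference(frame)
    fps = 1.0 / max(time.time() - start, 1e-6)
    cv2.putText(output, f"{fps:.1f} FPS", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Lightweight OpenPose demo", output)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()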

Troubleshooting

One common issue you might encounter relates to the maximum number of open files. If you see an error such as:

RuntimeError: received 0 items of ancdata

You can resolve this by increasing the limit with the command:

ulimit -n 65536
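
Note that ulimit only affects the current shell session. If you prefer to handle this from Python, the resource module can raise the soft limit at the start of training, and switching PyTorch's tensor-sharing strategy is another commonly used workaround; both are sketched below.

# raise_fd_limit.py -- two common workarounds for "received 0 items of ancdata"
import resource
import torch.multiprocessing as mp

# Option 1: raise the soft open-file limit for this process
# (the Python equivalent of running `ulimit -n 65536` in the shell).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = 65536 if hard == resource.RLIM_INFINITY else min(65536, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print("open-file limit:", resource.getrlimit(resource.RLIMIT_NOFILE))

# Option 2: let DataLoader workers share tensors through the file system
# instead of file descriptors, which avoids hitting the limit at all.
mp.set_sharing_strategy("file_system")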

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

In Summary

By following these steps, you can effectively implement Lightweight OpenPose for real-time 2D multi-person pose estimation. Remember, training is vital, and minor tweaks along the way can lead to significant performance improvements. Happy coding!
