The Point Completion Network, or PCN, is a groundbreaking learning-based method designed for shape completion of point clouds. With this technology, we can efficiently convert partial point cloud data into dense, complete point clouds without the cumbersome intermediate step of voxelization. This guide is your friendly companion to navigating the PCN setup and usage!
Introduction
PCN was introduced in our 3DV 2018 publication, PCN: Point Completion Network. For further insights into the project, feel free to visit our project website or check the paper linked above.
Usage Instructions
Let’s dive into how you can set up and utilize the PCN step-by-step. Here’s your roadmap:
1) Prerequisites
- Install the necessary dependencies with `pip3 install -r requirements.txt`.
- Follow this guide to install Open3D for point cloud input/output.
- Build the point cloud distance operations by running `make` under the `pc_distance` directory (ensure the paths in the makefile are correct).
- Download pre-trained models from Google Drive.
Please note that the setup requires TensorFlow 1.12 with CUDA 9.0 and has been tested on Ubuntu 16.04 with Python 3.5.
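The ops compiled under `pc_distance` implement the permutation-invariant losses PCN is trained with, such as the Chamfer Distance. As a rough illustration of what that loss measures (a brute-force NumPy sketch of one common squared-distance variant, not the project's CUDA implementation):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3).

    For each point, take the squared distance to its nearest neighbor in the
    other set, average each direction, and sum. Brute force, O(N * M).
    """
    # Pairwise squared distances, shape (N, M).
    diff = a[:, None, :] - b[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical clouds have zero distance.
pts = np.random.rand(128, 3)
print(chamfer_distance(pts, pts))  # 0.0
```

Because each direction of the loss only asks for a nearest neighbor, the metric needs no point-to-point correspondence, which is why it suits unordered point clouds.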
2) Running a Demo
To see the magic of PCN in action, run:

`python3 demo.py`

Use the `--input_path` option to switch among the various demo input examples.
3) ShapeNet Completion
- First, download the ShapeNet test data from Google Drive. Ensure you have test, test_novel, test.list, and test_novel.list files.
- Next, execute the completion with `python3 test_shapenet.py`, using `--model_type` to choose among different model architectures. Use `python3 test_shapenet.py -h` to explore more options.
4) KITTI Completion
- Download the KITTI data from Google Drive.
- Run `python3 test_kitti.py`, and type `python3 test_kitti.py -h` for additional options.
5) KITTI Registration
- Start with the KITTI completion experiment to achieve complete point clouds.
- Then run `python3 kitti_registration.py`. For more options, type `python3 kitti_registration.py -h`.
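The registration step aligns pairs of completed point clouds with a rigid transform. The core of each iteration of such an alignment, recovering the best rotation and translation for matched points via SVD (the classic Kabsch solution), can be sketched as below; `best_rigid_transform` is an illustrative helper, and the actual pipeline in `kitti_registration.py` may differ:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.

    src, dst: (N, 3) arrays of corresponding points. Closed-form SVD solution.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:       # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

# Recover a known rotation about z plus a translation.
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
src = np.random.rand(50, 3)
dst = src @ rot.T + np.array([1.0, -2.0, 0.5])
r, t = best_rigid_transform(src, dst)
print(np.allclose(r, rot), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```

This is exactly why completion helps registration: alignment quality depends on overlapping geometry, and complete clouds overlap far more than sparse partial scans.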
6) Training Your Model
- First, download the training (train.lmdb, train.lmdb-lock) and validation (valid.lmdb, valid.lmdb-lock) data from the ShapeNet folder on Google Drive. Be aware that the training data for all 8 ShapeNet categories requires approximately 49GB of disk space, while the car category alone occupies about 9GB.
- Execute the training process by running `python3 train.py`. For more options, type `python3 train.py -h`.
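As described in the paper, the model being trained here decodes coarse-to-fine: it first predicts a sparse set of coarse points, then refines each one by attaching a small 2D grid of offsets (a folding-style operation) to produce the dense output. The learned part is an MLP, but the tiling itself can be sketched in NumPy; `fold_grid` and the sizes below are illustrative, not PCN's actual hyperparameters:

```python
import numpy as np

def fold_grid(coarse, grid_size=2, scale=0.05):
    """Expand coarse points (N, 3) into a dense cloud (N * grid_size**2, 3).

    Each coarse point is replicated over a small 2D grid of offsets in the
    xy-plane. The real decoder learns the refinement with an MLP; this only
    illustrates the coarse-to-fine tiling, not the learned deformation.
    """
    lin = np.linspace(-scale, scale, grid_size)
    gx, gy = np.meshgrid(lin, lin)
    offsets = np.stack([gx.ravel(), gy.ravel(), np.zeros(grid_size ** 2)], axis=1)
    # (N, 1, 3) + (1, G, 3) -> (N, G, 3) -> (N * G, 3)
    dense = coarse[:, None, :] + offsets[None, :, :]
    return dense.reshape(-1, 3)

coarse = np.random.rand(1024, 3)
dense = fold_grid(coarse, grid_size=4)
print(dense.shape)  # (16384, 3)
```

The payoff of this design is that the network only has to predict a small number of coarse points directly, while the dense resolution comes almost for free from the grid expansion.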
7) Data Generation
To generate your own data from ShapeNet, begin by downloading ShapeNetCore.v1. Create partial point clouds from depth images (instructions in the render directory) and corresponding ground truths by sampling from CAD models (instructions in the sample directory). Finally, serialize the data using lmdb_writer.py.
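The ground-truth side of this pipeline samples points uniformly from the surface of each CAD model. The standard recipe, which the scripts in the `sample` directory presumably follow in some form, is to pick triangles with probability proportional to their area and then draw uniform barycentric coordinates; `sample_mesh` below is an illustrative NumPy sketch, not the project's actual script:

```python
import numpy as np

def sample_mesh(vertices, faces, n_points, rng=None):
    """Sample n_points uniformly from the surface of a triangle mesh.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    """
    rng = rng or np.random.default_rng(0)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas via the cross product; pick faces in proportion to area.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates (the sqrt trick avoids corner bias).
    r1 = np.sqrt(rng.random((n_points, 1)))
    r2 = rng.random((n_points, 1))
    return (1 - r1) * v0[idx] + r1 * (1 - r2) * v1[idx] + r1 * r2 * v2[idx]

# Sample from a unit square built from two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
pts = sample_mesh(verts, faces, 1000)
print(pts.shape)  # (1000, 3)
```

Sampling faces by area first is what makes the result uniform over the surface; sampling faces with equal probability would oversample small triangles.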
Troubleshooting
Facing issues while setting up or during execution? Here are some troubleshooting ideas:
- Ensure all dependencies are correctly installed and compatible versions of Python and TensorFlow are used.
- If a specific error message arises, trace it back to the relevant section of the code and verify that inputs, versions, and paths line up.
- Don’t forget to double-check the paths in your makefile—incorrect paths can lead to a host of compilation errors.
- For persistent issues, debugging with print statements can often clarify where things might be going wrong.
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

