Welcome to the world of IterMVS! Presented at CVPR 2022, IterMVS is a learning-based Multi-View Stereo (MVS) method that pairs high efficiency with competitive reconstruction quality. This guide walks you through installation, reproducing results, evaluating on the supported datasets, and training your own models. Follow along to get the most out of this powerful tool!

Installation

Before jumping in, you’ll need to ensure that your setup is ready for IterMVS. Here’s what you need:

  • Python 3.6
  • CUDA 10.1

To install the necessary requirements, run:

pip install -r requirements.txt
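
If you want to confirm the environment before running anything, a quick check along these lines can help (this snippet is an illustrative addition, not part of the IterMVS repository):

# Illustrative environment check (not shipped with IterMVS).
import sys
import torch  # installed via requirements.txt

print("Python:", sys.version.split()[0])            # expect 3.6.x
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)            # expect 10.1 to match the system toolkit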

Reproducing Results

To reproduce results efficiently, follow these steps:

1. Download Pre-Processed Datasets

Start by downloading the pre-processed datasets used for evaluation.

Organize the datasets as follows:

root_directory
├── scan1 (scene_name1)
├── scan2 (scene_name2)
│   ├── images
│   ├── cams_1
│   └── pair.txt
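
To double-check that each scene folder follows this layout, a small script along these lines can be used (root_directory is a placeholder for your own dataset location):

# Illustrative layout check for the evaluation data described above.
import os

root = "root_directory"  # placeholder: point this at your dataset root
for scan in sorted(os.listdir(root)):
    scan_dir = os.path.join(root, scan)
    if not os.path.isdir(scan_dir):
        continue
    for item in ("images", "cams_1", "pair.txt"):
        if not os.path.exists(os.path.join(scan_dir, item)):
            print(f"{scan}: missing {item}")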

2. Camera Parameters

The cam.txt file stores the camera parameters, including the 4x4 extrinsic matrix, the 3x3 intrinsic matrix, and the minimum and maximum depth values:

extrinsic
E00 E01 E02 E03
E10 E11 E12 E13
E20 E21 E22 E23
E30 E31 E32 E33

intrinsic
K00 K01 K02
K10 K11 K12
K20 K21 K22
DEPTH_MIN
DEPTH_MAX
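
Given the layout above, a cam.txt file can be parsed with a few lines of NumPy. This is a minimal sketch (the read_cam helper is our own illustration, it only assumes the format shown here, and it tolerates the depth values being on one line or two):

# Minimal cam.txt parser, assuming the layout shown above.
import numpy as np

def read_cam(path):
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip()]
    e = lines.index("extrinsic")
    k = lines.index("intrinsic")
    extrinsic = np.array([list(map(float, lines[e + 1 + i].split())) for i in range(4)])  # 4x4 extrinsic
    intrinsic = np.array([list(map(float, lines[k + 1 + i].split())) for i in range(3)])  # 3x3 intrinsic
    depth_vals = " ".join(lines[k + 4:]).split()  # works whether min/max share a line or not
    depth_min, depth_max = float(depth_vals[0]), float(depth_vals[1])
    return extrinsic, intrinsic, depth_min, depth_max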

The pair.txt file holds information about the ten best source views for each reference image.
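
The exact layout of pair.txt is not spelled out here; the sketch below assumes the commonly used MVSNet-style convention (a view count on the first line, then alternating lines of reference-view id and "num_sources src_id score ..."), so verify it against your own files before relying on it. The read_pairs helper name is our own illustration:

# Hedged pair.txt reader, assuming the MVSNet-style layout described above.
def read_pairs(path):
    pairs = {}
    with open(path) as f:
        num_views = int(f.readline())
        for _ in range(num_views):
            ref = int(f.readline())
            tokens = f.readline().split()
            src_ids = [int(t) for t in tokens[1::2]]  # skip the leading count and the scores
            pairs[ref] = src_ids
    return pairs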

Evaluation on Different Datasets

Evaluating on various datasets can be accomplished with respective scripts:

DTU Evaluation

Follow these steps for the DTU evaluation:

  1. Download the processed camera parameters from here.
  2. Unzip them and replace the old camera files in the cams_1 folders.
  3. Modify the eval_dtu.sh script so that its directory paths match your local setup.
  4. Run the evaluation script: bash eval_dtu.sh.

Tanks & Temples Evaluation

For Tanks & Temples:

  1. Modify eval_tanks.sh by setting the directories for your local setup.
  2. Run: bash eval_tanks.sh.

ETH3D Evaluation

For ETH3D:

  1. Adjust the directory paths in the eval_eth.sh script.
  2. Execute with: bash eval_eth.sh.

Custom Dataset Evaluation

Custom dataset evaluations can utilize COLMAP results:

  1. Run colmap_input.py to convert the COLMAP results into the required format.
  2. Test with: bash eval_custom.sh.
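
Before running the conversion, it can be worth confirming that your COLMAP workspace is complete. The folder names below reflect a typical COLMAP dense workspace produced by image_undistorter; what colmap_input.py actually expects may differ, so treat this purely as a sanity-check sketch:

# Illustrative check of a typical COLMAP dense workspace before conversion.
import os

workspace = "path/to/colmap/dense"  # placeholder: your COLMAP dense folder
for item in ("images", "sparse"):   # undistorted images and the sparse reconstruction
    full = os.path.join(workspace, item)
    print(f"{item}: {'found' if os.path.exists(full) else 'MISSING'}")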

Training Models

Let’s dive into training for DTU and BlendedMVS:

Training on DTU

Prepare the DTU dataset:

  1. Download the pre-processed DTU training set from here.
  2. Unzip it so that the Cameras_1 folder sits under the root directory (root_directory/Cameras_1).
  3. Edit train_dtu.sh for paths.
  4. Run: bash train_dtu.sh.

Training on BlendedMVS

Follow these steps to train on BlendedMVS:

  1. Download the dataset from here.
  2. Edit train_blend.sh as necessary.
  3. Execute: bash train_blend.sh.

Troubleshooting

If you encounter issues during installation or execution, try the following:

  • Ensure your Python and CUDA versions match the requirements above (Python 3.6, CUDA 10.1).
  • Check your dataset paths in scripts for accuracy.
  • If the evaluation scripts fail, ensure dependencies are installed as per requirements.
  • Restart your machine and verify the GPU settings if you face runtime errors.

For additional assistance or insights, feel free to connect with others on the subject or visit our collaborative platform at fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

About the Author

Hemen Ashodia

Hemen has over 14 years in data science, contributing to hundreds of ML projects. He is the founder of haveto.com and fxis.ai, which has been doing data science since 2015. He has worked with notable companies such as Bitcoin.com, Tala, Johnson & Johnson, and AB InBev, and he possesses hard-to-find expertise in artificial neural networks, deep learning, reinforcement learning, and generative adversarial networks. He has a proven track record of leading projects and teams for Fortune 500 companies and startups, delivering innovative and scalable solutions. Hemen also worked for cruxbot, later acquired by Intel, mainly on their machine learning development.
