In the world of computer vision and 3D reconstruction, the paper on Neural Deferred Shading (NDS) presents a groundbreaking approach: reconstructing triangle meshes from multi-view images using a learned, deferred shader. This tutorial will guide you through setting up the environment, installing the necessary packages, and running reconstructions with NDS.
Installation
To harness the power of Neural Deferred Shading, follow the steps outlined below.
Setting Up the Environment
Begin by setting up the Conda environment to install the basic requirements. Open your terminal and run:
conda env create -f environment.yml
conda activate nds
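Before moving on, it can help to confirm that the environment actually sees a CUDA-capable GPU. This quick check is not part of the official instructions; it assumes the environment provides PyTorch with CUDA support, which NDS relies on:
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"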
Installing Nvdiffrast
Next, you need to install Nvdiffrast from source by running the following commands in the main directory:
git clone https://github.com/NVlabs/nvdiffrast.git
cd nvdiffrast
python -m pip install .
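To verify that nvdiffrast built correctly, a simple import check (an informal sanity test, not from the original instructions) is usually enough:
python -c "import nvdiffrast.torch as dr; print('nvdiffrast OK')"
Note that actually rasterizing requires a CUDA-capable GPU, so a successful import does not guarantee a working runtime.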
Installing Pyremesh
You have two options for installing pyremesh:
- Option 1 (preferred): Install from the pre-built packages in the pyremesh subdirectory. From the main directory, run:
python -m pip install --no-index --find-links ./ext/pyremesh pyremesh
- Option 2: Build pyremesh from source by following the instructions in the pyremesh repository.
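Either way, you can check that the package is importable; remesh_botsch is the function NDS calls during remeshing (see the troubleshooting section below):
python -c "from pyremesh import remesh_botsch; print('pyremesh OK')"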
Reconstructing DTU Scans
Now that you have everything set up, it’s time to start reconstructing! Download the full dataset (2.3 GB) or the two samples (300 MB) and unzip the contents in your main directory. Your structure should look like ./data/65_skull/.
Starting Reconstruction
To initiate reconstruction of specific views, execute the following command:
python reconstruct.py --input_dir ./data/65_skull/views --input_bbox ./data/65_skull/bbox.txt
For other scans, substitute the scan ID and name accordingly:
python reconstruct.py --input_dir ./data/SCAN-ID_SCAN-NAME/views --input_bbox ./data/SCAN-ID_SCAN-NAME/bbox.txt
After running the reconstruction script, you’ll find your output meshes in the directory ./out/SCAN-ID_SCAN-NAME/meshes.
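The filenames encode the iteration at which a mesh was written (e.g., mesh_002000.obj, as used in the viewer example below). As a small convenience, not part of the repository, you can list the meshes in iteration order with a short Python snippet:
from pathlib import Path

# List reconstructed meshes in iteration order (filenames like mesh_002000.obj)
mesh_dir = Path("./out/65_skull/meshes")
for mesh_path in sorted(mesh_dir.glob("mesh_*.obj")):
    print(mesh_path.name)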
Data Conversion from IDR to NDS Format
The DTU dataset in NDS format can be derived from the IDR format. After downloading the necessary IDR dataset, use the following command to convert:
python import_dtu_from_idr.py PATH_TO_IDR_DATASET_DIRECTORY PATH_TO_OUTPUT_DIRECTORY
Reconstructing Custom Scenes
Our pipeline expects a specific input structure, so make sure your data is laid out as follows:
views/
|-- 1.png
|-- 1_k.txt
|-- 1_r.txt
|-- 1_t.txt
|-- ...
|-- N.png
|-- N_k.txt
|-- N_r.txt
|-- N_t.txt
Each view N consists of an RGB(A) image (N.png) and the corresponding camera parameters: the intrinsic matrix (N_k.txt), the rotation matrix (N_r.txt), and the translation vector (N_t.txt).
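The parameter files are plain text matrices, so if your cameras come from another calibration pipeline you can export them with NumPy. The snippet below is a hypothetical sketch with placeholder values; the exact matrix conventions (such as world-to-camera vs. camera-to-world) should be verified against the NDS repository before use:
import numpy as np
from pathlib import Path

Path("views").mkdir(exist_ok=True)

# Placeholder calibration values; replace with your own.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])  # 3x3 intrinsic matrix
R = np.eye(3)                    # 3x3 rotation matrix
t = np.zeros(3)                  # translation vector

np.savetxt("views/1_k.txt", K)
np.savetxt("views/1_r.txt", R)
np.savetxt("views/1_t.txt", t)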
Bounding Box Input
You’ll also need to provide a bounding box for the scene as a text file with two rows, the minimum and the maximum corner of an axis-aligned box, which looks something like this:
-1 -1 -1
1 1 1
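If you do not know the bounds in advance, one hypothetical way to produce the file is to compute them from a rough point cloud of the scene (points.xyz is an assumed input here, with one x y z triple per line):
import numpy as np

points = np.loadtxt("points.xyz")  # Nx3 point coordinates (assumed input)
padding = 0.05                     # small margin so the scene fits comfortably
bbox = np.stack([points.min(axis=0) - padding,
                 points.max(axis=0) + padding])
np.savetxt("bbox.txt", bbox)       # row 1: min corner, row 2: max corner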
Viewing Results
An interactive viewer based on OpenGL allows you to inspect the reconstructed meshes. Install additional dependencies via:
conda activate nds
pip install glfw==2.5.3 moderngl==5.6.4 pyrr==0.10.3 pyopengl==3.1.6
Launch the viewer using:
python view.py --mesh ./out/65_skull/meshes/mesh_002000.obj --shader ./out/65_skull/shaders/shader_002000.pt --bbox ./out/65_skull/bbox.txt
Troubleshooting Issues
If you encounter issues during your exploration of Neural Deferred Shading, consider these troubleshooting suggestions:
- CUDA Out of Memory: If the reconstruction consumes too much GPU memory, try reducing your image resolution by passing --image_scale 2 or --image_scale 4 to the reconstruction script (see the example after this list). This will result in lower memory usage but may compromise accuracy.
- Reconstruction Hangs during Remeshing: If you find that the reconstruction stalls during remeshing, this may be due to the remesh_botsch function in the pyremesh package. If this happens, you will need to abort and restart the reconstruction process.
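For example, to run the skull sample at a reduced input resolution (assuming --image_scale acts as a downscaling factor, as the option name and the memory advice above suggest):
python reconstruct.py --input_dir ./data/65_skull/views --input_bbox ./data/65_skull/bbox.txt --image_scale 2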
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
With these steps, you are well on your way to utilizing Neural Deferred Shading for fascinating multi-view mesh reconstructions. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

