Welcome to the world of Differentiable Blocks, where we decompose complex 3D structures using elegant rendering primitives. This guide will walk you through the setup and usage of the official PyTorch implementation of [**Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives**](https://arxiv.org/abs/2307.05473), presented at NeurIPS 2023.
Installation Requirements
To begin our journey, you’ll need to first install the necessary environment. Here’s how you can do that:
1. Create Conda Environment
```shell
conda env create -f environment.yml
conda activate dbw
```
These commands create and activate a new Conda environment for your project.
Optional Live Monitoring
If you want to monitor your processes while they’re running, you can enable this feature by specifying the visdom port in the config file. First, make sure to install visdom:
```shell
git clone https://github.com/facebookresearch/visdom
cd visdom
pip install -e .
```
Optional Nerfstudio Data Loading
Want to load data processed by Nerfstudio? Here’s how to do that:
```shell
pip install ninja==1.10.2.3
pip install git+https://github.com/NVlabs/tiny-cuda-nn#subdirectory=bindings
pip install nerfstudio==0.1.15
```
Downloading Datasets
To train your models, you’ll need to download datasets. Execute the following command:
```shell
bash scripts/download_data.sh
```
This will download the scenes detailed in the original paper.
If you encounter issues with gdown hanging, you can manually download the files and move them to the datasets folder.
Running the Model
Now, let’s launch the training from scratch. Follow these steps:
```shell
cuda=gpu_id config=filename.yml tag=run_tag ./scripts/pipeline.sh
```
Replace gpu_id with your device ID and filename.yml with the desired config from the configs folder. The results will be saved based on your specified parameters.
Available Configurations
Choose from the available configurations:
- dtu*.yml for DTU scenes.
- bmvs*.yml for BlendedMVS scenes.
- nerfstudio*.yml for Nerfstudio scenes.
Note that for running on Nerfstudio scenes, installing the [nerfstudio](https://github.com/nerfstudio-project) library is necessary.
Reproducing Quantitative Results
To reproduce results from the DTU dataset, here’s what you need to run:
```shell
cuda=gpu_id config=dtuscanXX.yml tag=default_scanXX ./scripts/multi_pipeline.sh
```
Training on a Custom Scene
If you wish to use the model on a custom scene, leveraging Nerfstudio is advised:
- Process the multi-view images and obtain the camera poses.
- Move your resulting data to the datasets/nerfstudio folder.
- Add the model path to the Nerfstudio dataloader.
The initialization of R_world and T_world can be estimated visually using tools like Plotly or Blender.
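Before eyeballing the result in Plotly or Blender, it helps to build a candidate rotation programmatically and sanity-check it. The sketch below is a hedged illustration only: the names R_world and T_world follow the text above, but the Euler-angle convention and value ranges are assumptions, not the repository's actual config schema.

```python
# Hedged sketch: compose a candidate world rotation from Euler angles and
# verify it is a valid rotation before using it as an initialization.
# The Z-Y-X convention and the example values are assumptions for illustration.
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Compose a 3x3 rotation matrix from Z-Y-X Euler angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

# Guess an initial pose, then check orthonormality before plotting the
# resulting camera frame in Plotly or Blender to refine it visually.
R_world = euler_to_rotation(np.pi / 6, 0.0, 0.0)
T_world = np.array([0.0, 0.0, 2.0])
assert np.allclose(R_world @ R_world.T, np.eye(3))
```

Iterating on the angles while re-plotting the transformed axes is usually enough to land on a reasonable visual initialization.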
Troubleshooting
If you encounter any errors or hiccups during your setup, try the following troubleshooting tips:
- Make sure all dependencies are properly installed in the Conda environment you created.
- Check for correct paths when running scripts to avoid file not found errors.
- If the dataset download hangs, download the dataset manually and ensure it’s in the correct folder.
- For any further assistance, don’t hesitate to consult the documentation or search for solutions based on the error messages you receive.
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
In this blog post, we’ve walked through the core steps to set up and run the Differentiable Blocks World implementation. By visualizing and decomposing 3D scenes, we are advancing our capabilities in understanding and interpreting complex spatial relationships.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

