If you’re diving into the fascinating world of 6-DoF visual localization, you’re in for a treat with hloc! This modular toolbox combines years of research on image matching and Structure-from-Motion (SfM) to deliver fast, accurate, and scalable localization results. Ready to start? Let’s break it down and make this guide user-friendly!
What You Can Achieve with hloc
- Replicate state-of-the-art results on visual localization benchmarks.
- Run Structure-from-Motion using SuperPoint and SuperGlue with your datasets.
- Evaluate local features or image retrieval methods for visual localization.
- Implement and debug new localization pipelines effortlessly.
Quick Start
With hloc, you can easily build 3D maps and localize images directly from your browser—no installation hassles! You can even use Google Colab to run hloc and COLMAP with GPU support for free. Follow the steps below:
- Open the demo notebook to see how it works in just a few clicks.
- Experiment with your data and let us know how it goes!
Installation
To install hloc locally, make sure you have Python >= 3.7 and PyTorch >= 1.1, then run:
git clone --recursive https://github.com/cvg/Hierarchical-Localization
cd Hierarchical-Localization
python -m pip install -e .
Then make sure the submodules are initialized and up to date:
git submodule update --init --recursive
If you prefer a containerized setup, you can build and run a Docker image (note that GPU access inside the container requires the NVIDIA container runtime on the host):
docker build -t hloc:latest .
docker run -it --rm -p 8888:8888 hloc:latest
jupyter notebook --ip 0.0.0.0 --port 8888 --no-browser --allow-root
The General Pipeline
The hloc toolbox operates through a series of organized scripts. Let’s explore it using an analogy: think of this process as building a puzzle. Each step is part of assembling the pieces into a coherent picture.
- Extract local features: Just as you might identify and sort pieces by color and shape, hloc extracts local features (such as SuperPoint or DISK) from all database and query images.
- Build a reference 3D SfM model: Like assembling the border first, it matches database images against each other (using SuperGlue or LightGlue) and triangulates a 3D model from those matches.
- Find database images relevant to each query: Here, image retrieval shortlists candidate database images, akin to selecting which pieces might fit into the section you’re working on.
- Match query images: The query’s local features are matched against the shortlisted database images, finding the pieces that belong in your ongoing construction.
- Run localization: This step places the pieces correctly on the board, estimating the query’s 6-DoF camera pose from 2D-3D correspondences.
- Visualize and debug: Finally, check your work, ensuring every piece appears where it should.
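The steps above can be sketched end to end in a few lines. The following is a minimal, self-contained toy sketch of the pipeline's logic, not hloc's actual API: descriptors are random vectors, retrieval compares a crude global descriptor (the role NetVLAD plays in hloc), and matching uses mutual nearest neighbors (the criterion classic matchers apply; SuperGlue and LightGlue learn this step instead).

```python
import random

random.seed(0)

def rand_desc(dim=8):
    # Stand-in for a local feature descriptor (e.g., from SuperPoint).
    return [random.random() for _ in range(dim)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Step 1: "extract" local features for all database and query images.
db_images = {f"db_{i}": [rand_desc() for _ in range(5)] for i in range(3)}
query = [rand_desc() for _ in range(5)]

# Step 3: retrieval -- rank database images by a global descriptor
# (here simply the mean of the local descriptors).
def global_desc(feats):
    dim = len(feats[0])
    return [sum(f[d] for f in feats) / len(feats) for d in range(dim)]

ranked = sorted(
    db_images,
    key=lambda name: dist(global_desc(db_images[name]), global_desc(query)),
)

# Step 4: match the query against the top-ranked database image using
# mutual nearest neighbors.
def mutual_nn(feats_a, feats_b):
    nn_ab = [min(range(len(feats_b)), key=lambda j: dist(a, feats_b[j]))
             for a in feats_a]
    nn_ba = [min(range(len(feats_a)), key=lambda i: dist(feats_a[i], b))
             for b in feats_b]
    # Keep (i, j) only if i -> j and j -> i agree.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

matches = mutual_nn(query, db_images[ranked[0]])

# Step 5 would lift these 2D-2D matches to 2D-3D correspondences via the
# SfM model and solve for the 6-DoF pose with PnP + RANSAC.
print(ranked[0], matches)
```

In the real pipeline each of these stages is a separate hloc script operating on saved feature and match files, which is what makes it easy to swap in different extractors, matchers, or retrieval methods.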
Troubleshooting
While you navigate this robust toolbox, you may run into a few bumps along the way. Here are some troubleshooting tips:
- Ensure all installations and dependencies are up-to-date.
- Check that you’re using compatible versions of Python (>= 3.7) and PyTorch (>= 1.1).
- If you encounter issues specific to Docker, ensure your Linux environment has Docker installed correctly.
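A quick sanity check for the version bullet above can be scripted. This is a small sketch, with the PyTorch import guarded since it may not be installed yet:

```python
import sys

# hloc requires Python >= 3.7.
ok_python = sys.version_info >= (3, 7)
print("Python OK:", ok_python)

# PyTorch >= 1.1 is required; guard the import in case it's missing.
try:
    import torch
    print("PyTorch version:", torch.__version__)
except ImportError:
    print("PyTorch not installed")
```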
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Getting Started with Supported Datasets
To localize using specific datasets, such as Aachen or InLoc, follow the provided notebooks for detailed guides:
- Aachen – Outdoor Localization: pipeline_Aachen.ipynb
- InLoc – Indoor Localization: pipeline_InLoc.ipynb
Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
Now that you’re equipped with the knowledge to utilize hloc, get to exploring the fascinating world of hierarchical localization and make your computer vision projects shine!

