Welcome to the exciting world of High-Fidelity GAN Inversion for Image Attribute Editing! This innovative technology allows you to edit images while preserving intricate details like background, illumination, and overall appearance. As of October 31, the inference code and pre-trained models were released, making it easier than ever to get started with this cutting-edge approach. In this guide, we’ll walk through the setup process, quick start, and troubleshooting tips.
Getting Started
Before you start editing images, a few setup and installation steps are required:
Installation
```shell
git clone https://github.com/Tengfei-Wang/HFGI.git
cd HFGI
```
Setting Up Your Environment
The environment is easiest to set up with Anaconda (this covers inference only):

```shell
conda create -n HFGI python=3.7
conda activate HFGI
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
pip install matplotlib
conda install ninja
conda install -c 3dhubs gcc-5
```
Alternatively, you can set up your environment through the provided environment.yml file:
```shell
conda env create -f environment.yml
```
Quick Start
Download Pre-trained Models
To use the model, download the pre-trained model and place it in the `./checkpoint` directory:
Model Description:
- Face Editing: trained on FFHQ.
Preparing Images
Sample images from the CelebA-HQ dataset are provided in the `./test_imgs` directory. For your own images:
- Pre-process and align the images prior to editing for optimal results.
- Refer to the [FFHQ dataset](https://github.com/NVlabs/ffhq-dataset) for alignment details.
Running Inference
Modify the inference.sh script to match your setup, then run:

```shell
bash inference.sh
```

Arguments:
- `--images_dir`: path to the input images.
- `--n_sample`: number of images to infer.
- `--edit_attribute`: one of inversion, age, smile, eyes, lip, or beard.
- `--edit_degree`: controls the editing strength (applies to age and smile).
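Under the hood, inference.sh passes these flags to a Python script. The sketch below shows how such an argument set can be handled with argparse; only the flag names come from the list above, while the defaults and overall structure are assumptions, not the repository's actual code.

```python
import argparse

def build_parser():
    # Flag names mirror those documented for inference.sh;
    # the defaults used here are hypothetical.
    parser = argparse.ArgumentParser(description="HFGI-style inference arguments")
    parser.add_argument("--images_dir", type=str, default="./test_imgs",
                        help="path to the input images")
    parser.add_argument("--n_sample", type=int, default=4,
                        help="number of images to infer")
    parser.add_argument("--edit_attribute", type=str, default="inversion",
                        choices=["inversion", "age", "smile", "eyes", "lip", "beard"],
                        help="attribute to edit")
    parser.add_argument("--edit_degree", type=float, default=1.0,
                        help="editing strength (used for age and smile)")
    return parser

# Example: equivalent to `--edit_attribute smile --edit_degree 1.5` on the CLI.
args = build_parser().parse_args(["--edit_attribute", "smile", "--edit_degree", "1.5"])
```

Using `choices` for `--edit_attribute` means an unsupported attribute fails fast with a clear error instead of producing undefined behavior later in the pipeline.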
Training the GAN
If you’re looking to train your model:
- Download the dataset and adjust the dataset path in `./configs/paths_config.py`.
- Download the pre-trained models and place them in `./pretrained`.
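For orientation, a paths file in codebases of this kind typically maps dataset names and pre-trained weights to locations on disk. The sketch below is hypothetical: the keys and filenames may not match the actual contents of `./configs/paths_config.py`.

```python
# Hypothetical sketch of a paths_config.py; the real keys and
# filenames in the repository may differ.
dataset_paths = {
    "ffhq": "/path/to/ffhq/images",            # training images
    "celeba_test": "/path/to/celeba_hq/test",  # evaluation images
}

model_paths = {
    # Pre-trained weights placed in ./pretrained, as described above.
    "stylegan_weights": "./pretrained/stylegan2-ffhq.pt",
    "ir_se50": "./pretrained/model_ir_se50.pth",
}
```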
Start training by modifying train.sh as needed and executing:

```shell
bash train.sh
```
Video Editing
In addition to static images, you can edit videos by pre-processing (aligning and cropping) each frame before editing with the pre-trained model. You can find the source videos and edited results [here](https://drive.google.com/drive/folders/1h3swFCV9mEAsz3qIRlyVQynaxSQeH7II?usp=sharing).
Troubleshooting
If you encounter any issues during installation or model inference, consider the following troubleshooting strategies:
- Ensure that your environment is properly set up with the correct versions of Python and required libraries as detailed in the steps above.
- Check your paths in the configuration files for any discrepancies.
- Verify that your images are correctly pre-processed as per the guidelines.
- If issues persist, seek assistance from the community or check resources provided on the project website.
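The first three checks above can be partially automated before launching inference. This is a minimal sketch that follows the `./checkpoint` and `./test_imgs` layout described earlier; it is a convenience helper, not part of the repository.

```python
import os

def check_setup(checkpoint_dir="./checkpoint", images_dir="./test_imgs"):
    """Return a list of human-readable problems found before running inference."""
    problems = []
    # The pre-trained model should live in the checkpoint directory.
    if not os.path.isdir(checkpoint_dir) or not os.listdir(checkpoint_dir):
        problems.append("no pre-trained model found in %s" % checkpoint_dir)
    # Input images should be pre-processed and placed in the images directory.
    if not os.path.isdir(images_dir):
        problems.append("images directory %s not found" % images_dir)
    else:
        images = [f for f in os.listdir(images_dir)
                  if f.lower().endswith((".png", ".jpg", ".jpeg"))]
        if not images:
            problems.append("no .png/.jpg images in %s" % images_dir)
    return problems
```

Calling `check_setup()` from the repository root prints nothing useful on its own; an empty returned list means the basic layout looks correct, while each entry in a non-empty list names one thing to fix.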
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Final Thoughts
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

