The world of mobile photography is evolving rapidly, and one of the frontier advancements in this domain is the replacement of traditional Image Signal Processors (ISP) with deep learning models. High-quality images can now be produced directly from RAW Bayer data captured from mobile camera sensors, thanks to the innovative PyNET model.
1. Overview
The PyNET model offers a remarkable ability to transform RAW image files into high-quality photographs akin to those captured by professional cameras. To get started, work through the prerequisites and steps outlined in the sections below.
2. Prerequisites
Before beginning the implementation, ensure you have the following setup:
- Python with the scipy, numpy, imageio, and pillow packages.
- PyTorch along with the TorchVision library.
- An Nvidia GPU for optimal performance.
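If you want to confirm the environment before going further, a quick check along these lines can help (a minimal sketch; the package names mirror the list above, with pillow imported as PIL):

```python
import importlib

# Check that each required package can be imported
for pkg in ("scipy", "numpy", "imageio", "PIL", "torch", "torchvision"):
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: OK")
    except ImportError:
        print(f"{pkg}: missing, install it before continuing")

# Confirm that an Nvidia GPU is visible to PyTorch (optional but recommended)
try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    pass
```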
3. First Steps
Here’s how to initiate the use of PyNET:
- Download the pre-trained PyNET model and place it in the models/original folder.
- Download the Zurich RAW to RGB mapping dataset and extract it into the raw_images folder. Ensure it contains three subdirectories: train, test, and full_resolution.
Note: To avoid Google Drive’s download quota limitation, log in to your Google account and select “Add to My Drive” instead of direct download. For additional information, see this issue.
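Once both downloads are in place, a short check run from the repository root can confirm the expected layout (a minimal sketch using only the folder names listed above):

```python
import os

# Expected layout after downloading the model and the Zurich dataset
expected = [
    "models/original",
    "raw_images/train",
    "raw_images/test",
    "raw_images/full_resolution",
]

for path in expected:
    status = "found" if os.path.isdir(path) else "MISSING"
    print(f"{path}: {status}")
```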
4. Understanding PyNET CNN
To visualize how PyNET operates, think of a chef in a multi-layered kitchen:
- The layers are like different zones in a kitchen where ingredients are prepared at various stages.
- Just as a chef starts from basic ingredients at the bottom layer, PyNET processes images sequentially from the lowest resolution.
- Each subsequent layer refines these ingredients (images) to add more detail, akin to how a chef polishes a dish at each successive step.
The PyNET architecture processes the image at five different scales, upscaling and refining features as it moves from the coarsest level to the finest. This implementation also incorporates three modifications relative to the original TensorFlow model that improve performance and reconstruction accuracy.
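To make the analogy concrete, here is a deliberately simplified PyTorch sketch of a coarse-to-fine pyramid network. It is not the actual PyNET architecture (the layer types, channel counts, and input size are invented for illustration), but it shows the same idea of starting at the coarsest scale and refining upward:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Two 3x3 convolutions with LeakyReLU (illustrative building block)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class TinyPyramidNet(nn.Module):
    """Toy 5-scale pyramid: coarse features are refined and upsampled level by level."""
    def __init__(self, in_ch=4, feat=16, num_levels=5):
        super().__init__()
        self.num_levels = num_levels
        self.encoders = nn.ModuleList([ConvBlock(in_ch, feat) for _ in range(num_levels)])
        self.refiners = nn.ModuleList([ConvBlock(feat * 2, feat) for _ in range(num_levels - 1)])
        self.to_rgb = nn.Conv2d(feat, 3, kernel_size=1)

    def forward(self, raw):
        # Build an image pyramid: index 0 is the finest scale, the last is the coarsest.
        pyramid = [raw]
        for _ in range(self.num_levels - 1):
            pyramid.append(F.avg_pool2d(pyramid[-1], 2))

        # Start at the coarsest scale (the "basic ingredients").
        x = self.encoders[-1](pyramid[-1])

        # Walk back up: upsample the coarse features and refine them together
        # with features extracted at the next finer scale.
        for level in range(self.num_levels - 2, -1, -1):
            x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
            finer = self.encoders[level](pyramid[level])
            x = self.refiners[level](torch.cat([x, finer], dim=1))

        return torch.sigmoid(self.to_rgb(x))

# Example: one packed 4-channel Bayer patch (128x128 is an arbitrary demo size).
model = TinyPyramidNet()
print(model(torch.randn(1, 4, 128, 128)).shape)  # torch.Size([1, 3, 128, 128])
```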
5. Training the Model
Training is performed one level at a time by specifying the level to train:
python train_model.py level=<level>
The mandatory level parameter takes the values 5, 4, 3, 2, 1, and 0, trained sequentially in that order from the coarsest scale to the finest. Refer to the repository's documented commands for the detailed parameters used at each level; a sketch of the sequential run follows below.
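For orientation, a sequential run might look like the sketch below. Only the level argument is taken from the command shown above; any additional parameters the script accepts (batch size, number of epochs, checkpoint restoration) are intentionally omitted, so consult the repository's documented commands for those values.

```python
import subprocess

# Train the levels one after another, from the coarsest (5) to the finest (0).
# Each call invokes the same script shown above with only the level argument;
# pass the remaining parameters exactly as documented in the repository.
for level in [5, 4, 3, 2, 1, 0]:
    subprocess.run(["python", "train_model.py", f"level={level}"], check=True)
```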
6. Testing Pre-Trained Models
To validate your model, run the following command:
python test_model.py level=0 orig=true
Options include specifying whether to use a GPU or CPU and providing the data directory.
7. Testing Your Model on RAW Images
To check your trained model on full-resolution images, utilize:
python test_model.py level=<level>
Set the mandatory level value and any other options as needed.
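Full-resolution testing feeds the network the same kind of input as training: a Bayer mosaic packed into four channels. The sketch below illustrates that packing for an RGGB layout; the channel order is an assumption for illustration only, the repository's own data-loading code defines the exact format it expects, and the file name used here is hypothetical.

```python
import numpy as np
import imageio

def pack_bayer(raw_path):
    """Pack a single-channel Bayer mosaic into a 4-channel (R, G, G, B) array."""
    raw = np.asarray(imageio.imread(raw_path), dtype=np.float32)
    ch_r  = raw[0::2, 0::2]   # red sites
    ch_g1 = raw[0::2, 1::2]   # first green sites
    ch_g2 = raw[1::2, 0::2]   # second green sites
    ch_b  = raw[1::2, 1::2]   # blue sites
    return np.stack([ch_r, ch_g1, ch_g2, ch_b], axis=0)

packed = pack_bayer("raw_images/full_resolution/example.png")  # hypothetical file
print(packed.shape)  # (4, H/2, W/2)
```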
8. Folder Structure
Maintain the following directory structure for the best results:
- models – for logs and saved models.
- models/original – pre-trained PyNET model folder.
- raw_images – for the Zurich dataset.
- results – to visualize image results during training.
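If you are setting the project up from scratch, the skeleton can be created in one go (a minimal sketch; existing folders are left untouched):

```python
import os

# Create the directory skeleton described above; models/ is created implicitly.
for folder in ["models/original", "raw_images", "results"]:
    os.makedirs(folder, exist_ok=True)
```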
9. Bonus Files for Further Experiments
Helpful files include:
- Conversion script: dng_to_png.py
- Accuracy evaluation: evaluate_accuracy.py
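As a rough stand-in for the accuracy evaluation, a plain PSNR comparison between a produced image and its target can be computed as sketched below. This is not the repository's evaluate_accuracy.py; the metric and the file names are illustrative only.

```python
import numpy as np
import imageio

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two images of identical shape."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical file names: substitute the images produced and targeted by your own runs.
produced = imageio.imread("results/example_output.png")
target = imageio.imread("raw_images/test/example_target.png")
print("PSNR:", psnr(produced, target))
```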
10. License
This project is licensed under the CC BY-NC-SA 4.0 International License, which permits its use for academic research.
11. Troubleshooting
If you run into problems while setting up or running the model, here are some tips:
- Ensure all prerequisite libraries are correctly installed.
- If you encounter a download error, double-check your Google account permissions.
- Adjust batch sizes according to your available GPU memory.
- Refer to the issues listed on the respective GitHub repository for similar challenges faced by others.
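If you are unsure how much GPU memory is available when tuning the batch size, PyTorch can report it directly (a minimal sketch):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB total memory")
else:
    print("No CUDA device visible; expect much slower CPU-only runs.")
```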
For further assistance, feel free to reach out to Andrey Ignatov at andrey@vision.ee.ethz.ch. For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

