Welcome to the world of tensor compilers! This guide surveys open-source compiler projects and research papers that advance tensor computation and deep learning, leaving you with a well-stocked map of the field. Let's dive in!
Contents
- Open Source Projects
- Papers
- Tutorials
- Contribute
- Troubleshooting Tips
Open Source Projects
Here are some outstanding open-source projects in the realm of tensor computation:
- TVM: An End-to-End Machine Learning Compiler Framework
- MLIR: Multi-Level Intermediate Representation
- XLA: Optimizing Compiler for Machine Learning
- Halide: A Language for Fast, Portable Computation on Images and Tensors
- Glow: Compiler for Neural Network Hardware Accelerators
- nnfusion: A Flexible and Efficient Deep Neural Network Compiler
- Hummingbird: Compiling Trained ML Models into Tensor Computation
- Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations
- AITemplate: A Python framework which renders neural networks into high-performance CUDA/HIP C++ code
- Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code
- TensorComprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions
- PlaidML: A Platform for Making Deep Learning Work Everywhere
- BladeDISC: An End-to-End Dynamic Shape Compiler for Machine Learning Workloads
- TACO: The Tensor Algebra Compiler
- Nebulgym: Easy-to-use Library to Accelerate AI Training
- Speedster: Automatically apply SOTA optimization techniques for maximum inference speed-up on your hardware
- NN-512: A Compiler That Generates C99 Code for Neural Net Inference
- DaCeML: A Data-Centric Compiler for Machine Learning
- Mirage: A Multi-level Superoptimizer for Tensor Algebra
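To make the shared idea behind these projects concrete, here is a toy, pure-Python sketch of loop tiling, one classic transformation that compilers such as TVM and Halide apply automatically. The function names and the tile size are illustrative choices, not any project's actual API:

```python
# Toy illustration of loop tiling: splitting loops into blocks so the
# working set stays cache-resident. Tensor compilers derive and tune
# such schedules automatically; this sketch only shows the reshaping.

def matmul_naive(a, b, n):
    """Plain triple loop: C = A @ B for n x n row-major lists of lists."""
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for k in range(n):
                acc += a[i][k] * b[k][j]
            c[i][j] = acc
    return c

def matmul_tiled(a, b, n, tile=4):
    """Same computation, with the i/j/k loops split into tile-sized blocks."""
    c = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, n, tile):
            for k0 in range(0, n, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, n)):
                        acc = c[i][j]
                        for k in range(k0, min(k0 + tile, n)):
                            acc += a[i][k] * b[k][j]
                        c[i][j] = acc
    return c

# The schedule changes, the result does not:
n = 6
a = [[float(i * n + j) for j in range(n)] for i in range(n)]
b = [[float((i + j) % 5) for j in range(n)] for i in range(n)]
assert matmul_naive(a, b, n) == matmul_tiled(a, b, n)
```

In a real tensor compiler the computation and the schedule (tiling, vectorization, parallelization) are described separately, so many schedules can be explored without rewriting the algorithm.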
Papers
Several academic papers have greatly contributed to the field. Here’s a selection to enrich your understanding:
Survey
- The Deep Learning Compiler: A Comprehensive Survey
- An In-depth Comparison of Compilers for Deep Neural Networks on Hardware
Compiler and IR Design
- BladeDISC: Optimizing Dynamic Shape Machine Learning Workloads via Compiler Approach
- Hidet: Task-Mapping Programming Paradigm for Deep Learning Tensor Programs
- TensorIR: An Abstraction for Automatic Tensorized Program Optimization
- Exocompilation for Productive Programming of Hardware Accelerators
- DaCeML: A Data-Centric Compiler for Machine Learning
- FreeTensor: A Free-Form DSL with Holistic Optimizations for Irregular Tensor Programs
- Roller: Fast and Efficient Tensor Compilation for Deep Learning
Auto-tuning and Auto-scheduling
- Accelerated Auto-Tuning of GPU Kernels for Tensor Computations
- Enabling Tensor Language Model to Assist in Generating High-Performance Tensor Programs for Deep Learning
- The Droplet Search Algorithm for Kernel Scheduling
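The papers above revolve around the same basic loop: enumerate candidate schedules, score them, and keep the best. Here is a minimal sketch of that search, assuming a made-up analytic cost model in place of the real hardware measurements or learned models these systems use:

```python
# Toy sketch of an auto-tuning search loop. The "schedule" here is just
# a tile size, and toy_cost is a hypothetical stand-in for measuring a
# candidate kernel on real hardware.

def toy_cost(tile, cache_lines=64):
    """Hypothetical cost model: tiny tiles waste reuse, oversized tiles
    spill out of the 'cache'."""
    if tile > cache_lines:
        return float("inf")       # working set no longer fits
    reuse = 1.0 / tile            # less data reuse with tiny tiles
    overhead = tile / cache_lines # larger tiles leave less slack
    return reuse + overhead

def auto_tune(candidates):
    """Exhaustive search: return the candidate with the lowest cost."""
    return min(candidates, key=toy_cost)

best = auto_tune([1, 2, 4, 8, 16, 32, 64, 128])
print(best)  # the model above is minimized at tile=8
```

Real auto-tuners replace the exhaustive loop with smarter search (evolutionary search, learned cost models, transfer across workloads), which is exactly what these papers study.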
Think of this wealth of tensor-compiler work as an orchestra: each project or paper is a different instrument, and just as an orchestra only makes music when the instruments are in tune with one another, these layers of optimization must compose cleanly to yield efficient deep learning computations.
Tutorials
Want to get hands-on? Most of the projects listed above, including TVM, MLIR, and Halide, maintain official tutorials in their documentation and repositories.
Contribute
Your input is valuable! If you want to contribute, feel free to open an issue or send a pull request.
Troubleshooting Tips
Here are some troubleshooting suggestions if you encounter issues:
- Make sure your software dependencies are installed correctly.
- Check the documentation for the specific compiler project you are using for insights on common problems.
- If you run into performance bottlenecks, revisit the auto-tuning configurations.
- For query resolution or additional help, consult forums or the community associated with the project.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

