In an increasingly globalized world, the ability to communicate across languages is essential. One powerful tool that enables seamless multilingual translation is the Transformers.js library, which allows developers to leverage advanced machine learning models in JavaScript. In this guide, we’ll walk you through the steps to install and use the NLLB-200 Distilled Model with ONNX weights for translation using Transformers.js.
Step 1: Installing Transformers.js
If you haven’t already installed the Transformers.js library, you can do so via npm. Open your terminal and run the following command:
npm i @xenova/transformers
Step 2: Set Up Your Translation Pipeline
Once you have the library installed, it’s time to create a translation pipeline. This pipeline will help you translate text from one language to another effortlessly. Below is a code snippet that demonstrates how to set this up:
import { pipeline } from '@xenova/transformers';
// Create a translation pipeline
const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
// Translate text from Hindi to French
const output = await translator('जीवन एक चॉकलेट बॉक्स की तरह है।', { src_lang: 'hin_Deva', tgt_lang: 'fra_Latn' });
console.log(output);
// Output: [ { translation_text: 'La vie est comme une boîte à chocolat.' } ]
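The pipeline resolves to an array of result objects, so in practice you will usually read the translated string from the first element. The lines below are a minimal follow-up sketch that assumes the output shape shown above:

// Pull the translated string out of the first result object.
const translatedText = output[0].translation_text;
console.log(translatedText); // 'La vie est comme une boîte à chocolat.'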
Understanding the Code: An Analogy
Imagine you are in a large library filled with books in multiple languages. Each book represents a different language and contains unique stories to tell. The Transformers.js pipeline acts as a highly efficient librarian. When you provide the librarian with a text (in this case, a Hindi quote about life), the librarian—using her extensive knowledge—translates it into French seamlessly. The ONNX weights serve as her training, allowing her to understand the nuances of multiple languages and deliver accurate translations swiftly.
Step 3: Exploring Available Languages
To discover which languages you can translate between, refer to the complete list of languages and their corresponding FLORES-200 codes in the model card for Xenova/nllb-200-distilled-600M on the Hugging Face Hub.
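As an illustration, the same translator instance can be reused for any supported pair of codes. The snippet below is a sketch that assumes 'eng_Latn' and 'deu_Latn' are the FLORES-200 codes for English and German; verify the codes you need against the official list before using them:

// Reuse the translator created above with a different language pair.
const englishToGerman = await translator('Life is like a box of chocolates.', {
  src_lang: 'eng_Latn', // English (Latin script)
  tgt_lang: 'deu_Latn', // German (Latin script)
});
console.log(englishToGerman[0].translation_text);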
Troubleshooting
While using Transformers.js, you may encounter some issues. Here are a few troubleshooting tips:
- Model Not Loading: Ensure that the model name ('Xenova/nllb-200-distilled-600M') is spelled exactly as it appears on the Hugging Face Hub in your pipeline call (see the error-handling sketch after this list).
- Language Codes Not Working: Double-check the language codes against the official FLORES-200 list; the pipeline expects codes such as 'hin_Deva' and 'fra_Latn', not plain language names.
- Network Issues: The model files are downloaded from the Hugging Face Hub on first use, so make sure your internet connection is stable during that initial load.
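As a starting point for diagnosing these issues, you can wrap the pipeline creation and the translation call in a try/catch block so the underlying error is visible. This is only a minimal sketch; the exact error messages depend on your environment:

import { pipeline } from '@xenova/transformers';

try {
  // Model files are fetched from the Hugging Face Hub on first use, so this call
  // can fail if the model name is misspelled or the network is unavailable.
  const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
  const result = await translator('जीवन एक चॉकलेट बॉक्स की तरह है।', {
    src_lang: 'hin_Deva',
    tgt_lang: 'fra_Latn',
  });
  console.log(result[0].translation_text);
} catch (error) {
  // Inspect the error to see whether the model name, the language codes,
  // or the connection is the problem.
  console.error('Translation failed:', error);
}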
For further insights or to collaborate on AI development projects, stay connected with fxis.ai.
Notes on ONNX Weights
The separation of repositories for ONNX weights is intended as a temporary solution until WebML gains more traction. If you wish to make your models web-ready, it’s recommended to convert them to ONNX using 🤗 Optimum and to structure your repository like this one, with the ONNX weights placed in a subfolder named "onnx".
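If you want to convert a model yourself, 🤗 Optimum ships a CLI for exporting models to ONNX. The command below is a sketch, with 'your-model-id' and 'onnx_output/' as placeholder values; check the Optimum documentation for the options that apply to your specific model:

optimum-cli export onnx --model your-model-id onnx_output/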
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

