How to Use AutoTrain Models for Multi-Class Classification

Mar 28, 2022 | Educational

AutoTrain offers a user-friendly way to train machine learning models without diving deep into the complexities of machine learning frameworks. In this article, we’ll guide you through how to use an AutoTrain model designed for multi-class classification, leveraging the XLM-RoBERTa base reviews model.

Understanding the Trained Model

This model, identified as Model ID: 672119800, is a multi-class text classifier fine-tuned from XLM-RoBERTa base on a reviews dataset; the CO2 figure below is an estimate of the emissions produced during its training run. Here are some key aspects of this model:

  • Problem Type: Multi-class Classification
  • CO2 Emissions (in grams): 2011.65
  • Validation Metrics:
    • Loss: 0.957
    • Accuracy: 58.31%
    • Macro F1: 57.89%
    • Micro F1: 58.31%
    • Weighted F1: 57.89%
    • Macro Precision: 58.08%
    • Micro Precision: 58.31%
    • Weighted Precision: 58.08%
    • Macro Recall: 58.31%
    • Micro Recall: 58.31%
    • Weighted Recall: 58.31%
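Notice that accuracy, micro F1, micro precision, and micro recall are all identical (58.31%). That is not a coincidence: in single-label multi-class classification, every misclassified example counts as exactly one false positive (for the predicted class) and one false negative (for the true class), so the micro-averaged precision, recall, and F1 all collapse to plain accuracy. A minimal sketch in plain Python, with made-up labels, illustrates this:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def micro_f1(y_true, y_pred):
    # Micro-averaging sums TP/FP/FN over all classes. In single-label
    # multi-class classification, each wrong prediction is one FP for
    # the predicted class and one FN for the true class, so FP == FN.
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = fn = len(y_true) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [0, 1, 2, 2, 1, 0]  # illustrative ground-truth labels
y_pred = [0, 2, 2, 2, 0, 0]  # illustrative predictions

print(accuracy(y_true, y_pred))  # 0.6666666666666666
print(micro_f1(y_true, y_pred))  # same value as accuracy
```

Macro-averaged metrics, by contrast, compute per-class scores first and then average them, which is why the macro and weighted numbers in the table can differ from the micro ones.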

How to Access the Model

You can access the trained model in two ways: via cURL against the hosted Inference API, or locally in Python with the transformers library. Let’s explore both options.

Using cURL

To make a request via cURL, use the following command:

$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119800

Replace YOUR_API_KEY with your actual API key. This command sends a POST request with the input text to the model endpoint.
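If you'd rather issue the same request from Python without any extra dependencies, the cURL call above can be mirrored with the standard library. This is a sketch under the article's assumptions: the endpoint and repo id (here written as `YXHugging/autotrain-xlm-roberta-base-reviews-672119800`) come from the cURL example, and `YOUR_API_KEY` is a placeholder for your real token:

```python
import json
import urllib.request

API_URL = ("https://api-inference.huggingface.co/models/"
           "YXHugging/autotrain-xlm-roberta-base-reviews-672119800")

def build_request(text, api_key):
    # Mirror the cURL call: a JSON body with an "inputs" field and a
    # bearer-token Authorization header, sent as an HTTP POST.
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("I love AutoTrain", "YOUR_API_KEY")
# response = urllib.request.urlopen(req)  # uncomment once you have a real key
```

Keeping the request construction in a small helper like this makes it easy to swap in different input texts or tokens later.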

Using Python API

If you prefer using Python, here’s how to do it:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The repo id matches the cURL endpoint above; newer versions of
# transformers accept `token=` in place of the older `use_auth_token=`.
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119800", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119800", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)  # outputs.logits holds one raw score per class

This script loads the model and tokenizer, tokenizes your input text, and runs a forward pass to obtain raw prediction scores. Think of it like a library: you have to check out the book (load the model) before you can read it (run predictions).
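The model's output is a vector of logits, one per class, rather than a label. To turn logits into a prediction you apply softmax (to get probabilities) and take the highest-scoring class. A minimal sketch in plain Python, using made-up logits for a hypothetical 3-class model:

```python
import math

def softmax(logits):
    # Convert raw model scores into probabilities that sum to 1.
    # Subtracting the max first keeps exp() numerically stable.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.2, 0.3, -0.8]  # illustrative values, not real model output
probs = softmax(logits)
predicted = max(range(len(probs)), key=probs.__getitem__)
print(predicted)  # index of the highest-probability class
```

In practice, the class index maps back to a human-readable label through the model's configuration (e.g. the `id2label` mapping stored with a transformers model).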

Troubleshooting Tips

If you encounter any issues while accessing the model, consider the following troubleshooting ideas:

  • Double-check your API key to ensure it is valid.
  • Make sure you have an active internet connection, as requests are made over the web.
  • Ensure that your code syntax is correct, particularly in JSON formatting and the methods used.
  • Examine error messages carefully; they often provide insights into what might be going wrong.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
