How to Perform Extractive Question Answering with MiniLM

Oct 28, 2024 | Educational

If you’re interested in leveraging artificial intelligence to extract answers from text, you’ve landed in the right place! This guide walks you through setting up and using the MiniLM model for extractive question answering with the Haystack framework.

Overview

The deepset/minilm-uncased-squad2 model, a microsoft/MiniLM-L12-H384-uncased model fine-tuned on the SQuAD 2.0 dataset, is well suited to extractive question answering: given a question and an English passage, it extracts the span of the passage that answers the question. Before diving into the code, let’s clarify some key components:

  • Language model: microsoft/MiniLM-L12-H384-uncased
  • Fine-tuned model: deepset/minilm-uncased-squad2
  • Downstream task: Extractive QA
  • Evaluation data: SQuAD 2.0

Setting Up Your Environment

To get your project started, first install the necessary packages (the quotes around the extras keep your shell from mangling the brackets):

pip install haystack-ai "transformers[torch,sentencepiece]"
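
Before moving on, it can help to confirm that both libraries import cleanly. The snippet below is a quick sanity check (the printed version will depend on what pip installed):

# Quick sanity check: both libraries should import without errors
import haystack
import transformers

print(transformers.__version__)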

Usage in Haystack

Now, let’s dive into the implementation using Haystack. If you think of Haystack as a toolbox, MiniLM will be one of the key tools within it for extracting valuable insights from text.

Here’s a simple code snippet that demonstrates how to use MiniLM to answer questions based on given documents:

from haystack import Document
from haystack.components.readers import ExtractiveReader

# The documents the reader will search for answers
docs = [
    Document(content="Python is a popular programming language."),
    Document(content="Python ist eine beliebte Programmiersprache."),
]

# Load the MiniLM model fine-tuned on SQuAD 2.0
reader = ExtractiveReader(model="deepset/minilm-uncased-squad2")
reader.warm_up()  # downloads and initializes the model

question = "What is a popular programming language?"
result = reader.run(query=question, documents=docs)
# result["answers"] is a list of ExtractedAnswer objects, each carrying
# the extracted span (data), a confidence score, and the source document
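
Once run() returns, you can iterate over the answers. The sketch below prints each extracted span with its score; note that ExtractiveReader also appends a "no answer" candidate whose data is None, representing the probability that none of the documents answer the question:

for answer in result["answers"]:
    if answer.data is not None:  # skip the "no answer" candidate
        print(f"{answer.data!r} (score: {answer.score:.2f})")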

Understanding the Code: An Analogy

Imagine you are at a library (your documents) looking for specific information. The MiniLM model acts like a librarian who remembers where the important passages are located. When you ask about “popular programming languages,” the librarian quickly scans the books and hands you the relevant extracts, so you get direct answers without sifting through entire chapters.

Usage in Transformers

If you prefer working directly with the Transformers library, you can also execute the following code:

from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/minilm-uncased-squad2"

# a) Get predictions with a pipeline
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
QA_input = {
    "question": "Why is model conversion important?",
    "context": "The option to convert models between FARM and transformers gives freedom to the user.",
}
res = nlp(QA_input)

# b) Load the model and tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
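
The pipeline returns a plain dictionary containing the extracted answer, its confidence score, and the character offsets of the span within the context, so inspecting the result is straightforward:

print(res)
# {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
print(res["answer"])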

Performance Metrics

When you test the MiniLM model on SQuAD 2.0, you’ll find impressive metrics:

  • Exact Match: 76.13%
  • F1 Score: 79.50%
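
These numbers come from the standard SQuAD 2.0 evaluation. If you want to reproduce this kind of scoring on your own predictions, the Hugging Face evaluate library ships a squad_v2 metric; below is a minimal sketch where the id, prediction, and reference values are hypothetical placeholders:

import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Hypothetical example: one prediction and its gold reference
predictions = [{
    "id": "q1",
    "prediction_text": "Python",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "q1",
    "answers": {"text": ["Python"], "answer_start": [0]},
}]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])  # Exact Match and F1, on a 0-100 scale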

Troubleshooting

In case you run into issues during the setup or implementation, here are some troubleshooting tips:

  • Library Not Found: Ensure that you have installed all required packages; recheck your installation command.
  • Model Not Loading: Make sure you have typed the model name correctly, and verify your internet connection so the model weights can be downloaded. If your connection is unreliable, you can also pre-download the model, as shown in the sketch after this list.
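
One way to rule out download problems is to fetch the model files ahead of time with the huggingface_hub library (installed as a dependency of transformers). This is a minimal sketch:

from huggingface_hub import snapshot_download

# Download (or reuse from the local cache) all files for the model
local_dir = snapshot_download("deepset/minilm-uncased-squad2")
print(local_dir)  # path to the cached files, usable even when offline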

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Final Thoughts

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
