How to Create a Question Generator Model with Docker and Python

Oct 5, 2022 | Data Science

In the age of machine learning, models that deepen our understanding of text have become increasingly popular. One such tool is a Question Generator Model, which generates relevant questions (with answers) from a given text. In this guide, we’ll walk you through setting up this model from scratch.

Environment Setup

Before diving into coding, you need to prepare your environment. Here are the requirements:

  • Docker version 17.03 or higher
  • Docker-compose version 1.13.0 or higher
  • Python 3
  • pyzmq dependencies:
    • For Ubuntu: sudo apt-get install libzmq3-dev
    • For Mac: brew install zeromq (note: recent Homebrew versions no longer accept install options such as --with-libpgm)
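To confirm the version requirements above, a small Python helper can compare a reported version string against a minimum (a sketch; the example strings in the docstring follow the usual docker --version and docker-compose --version output formats):

```python
import re

def meets_minimum(version_output, minimum):
    """Return True if the first dotted version number found in
    `version_output` is at least `minimum`.

    Works on strings such as:
      "Docker version 20.10.7, build f0df350"
      "docker-compose version 1.29.2, build 5becea4c"
    """
    match = re.search(r"(\d+(?:\.\d+)+)", version_output)
    if not match:
        raise ValueError(f"no version number in: {version_output!r}")
    found = tuple(int(p) for p in match.group(1).split("."))
    wanted = tuple(int(p) for p in minimum.split("."))
    # Pad the shorter tuple so 17.03 compares cleanly against 17.03.0.
    width = max(len(found), len(wanted))
    found += (0,) * (width - len(found))
    wanted += (0,) * (width - len(wanted))
    return found >= wanted
```

For example, `meets_minimum(subprocess.check_output(["docker", "--version"], text=True), "17.03")` checks the Docker requirement.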

Docker Installation

If Docker and Docker Compose are not already installed, follow the official installation guides at docs.docker.com for your platform.

Setup Script

Now that your environment is prepared, run the setup script, which takes care of everything you need:

./setup

This script downloads the Torch Question Generation model, installs the necessary Python requirements, pulls the required Docker images, and starts the OpenNMT and CoreNLP servers. Note that the first query may take noticeably longer while the CoreNLP modules load.
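Because the servers can take a while to come up, it helps to poll until they accept connections before sending the first request. A minimal sketch (the port numbers in the comments are assumptions; check your docker-compose configuration for the actual ones):

```python
import socket
import time

def wait_for_port(host, port, timeout=120.0, interval=2.0):
    """Poll a TCP port until it accepts connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # server is accepting connections
        except OSError:
            time.sleep(interval)
    return False

# Hypothetical ports -- adjust to match your docker-compose setup:
# wait_for_port("localhost", 9000)  # CoreNLP
# wait_for_port("localhost", 5000)  # OpenNMT
```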

Usage of the Model

Once you’ve configured the environment and set up your model, you can start using the question generation functionality. To get the questions, run:

./get_qnas text

This command takes input text and outputs the result in TSV (Tab-Separated Values) format. Here’s the structure of the output:

  • First column: Question
  • Second column: Answer
  • Third column: Score
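From Python, the script can be invoked with subprocess (a sketch; it assumes the get_qnas script lives in the current directory and prints its TSV result to stdout):

```python
import subprocess

def generate_questions(text, command="./get_qnas"):
    """Run the question-generation script on `text` and return its raw
    TSV output (question, answer, score per line)."""
    result = subprocess.run(
        [command, text],
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError if the script exits non-zero
    )
    return result.stdout
```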

Example

Let’s look at an example for clarity:

./get_qnas "Waiting had its world premiere at the Dubai International Film Festival on 11 December 2015 to positive reviews from critics. It was also screened at the closing gala of the London Asian Film Festival, where Menon won the Best Director Award."

Following this input, you can expect the output to be something like:

who won the best director award ? menon -2.38472032547
when was the location premiere ?  11 december 2015  -6.1178450584412
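The TSV output is straightforward to post-process. A sketch that parses lines like the ones above and orders them by score (assuming the score is a log-likelihood, so values closer to zero indicate higher confidence):

```python
def parse_qnas(tsv_output):
    """Parse get_qnas TSV output into (question, answer, score) tuples,
    best-scoring first."""
    rows = []
    for line in tsv_output.strip().splitlines():
        question, answer, score = line.split("\t")
        rows.append((question.strip(), answer.strip(), float(score)))
    # Higher (less negative) scores first.
    return sorted(rows, key=lambda r: r[2], reverse=True)
```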

Troubleshooting

If you encounter issues during setup or usage, here are some common troubleshooting tips:

  • Ensure you have installed all the dependencies correctly, particularly the pyzmq dependencies.
  • Verify that Docker and Docker-compose versions meet the requirements. You can check your version with docker --version and docker-compose --version.
  • Check the logs of your Docker containers for specific error messages with docker logs <container-name>.
  • If you experience longer-than-expected loading times, don’t worry. This could be due to the CoreNLP modules initializing.


Conclusion

Implementing a question generator model enhances your ability to interact with text data efficiently. By following the steps outlined above, you can create your own model using Docker and Python seamlessly. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
