WikiChat

Stopping the Hallucination of Large Language Models

Introduction

Large language models (LLMs) such as ChatGPT and GPT-4 sometimes hallucinate, producing inaccurate responses, particularly when asked about recent events or less popular topics. WikiChat mitigates this by grounding its responses in Wikipedia through a 7-stage pipeline, where each stage involves one or more LLM calls.

Announcements

  • (August 22, 2024) WikiChat 2.0 is now available! Key updates include:
    • Multilingual Support: Now retrieves information from 10 different Wikipedias.
    • Improved Information Retrieval: Supports retrieval from structured data alongside text.
    • Free Multilingual Wikipedia Search API: Provides access to over 180M vector embeddings.
    • Optimized Pipeline: Faster and more cost-effective options are available.
  • (June 20, 2024) WikiChat won the 2024 Wikimedia Research Award!

Installation

To install WikiChat, follow these steps:

  1. Install dependencies
  2. Configure the LLM of your choice (WikiChat supports over 100 models)
  3. Select an information retrieval source (choose from available options)
  4. Run WikiChat with your chosen configuration
  5. [Optional] Deploy WikiChat for multi-user access

System Requirements

  • Basic Usage: Minimal hardware requirements, suitable for most systems.
  • Local LLM: Requires GPU for local model hosting.
  • Creating a New Retrieval Index: A GPU is also needed for embedding documents.

Install Dependencies

Start by cloning the repository and setting up the environment:

git clone https://github.com/stanford-oval/WikiChat.git
cd WikiChat
conda env create --file conda_env.yaml
conda activate wikichat
python -m spacy download en_core_web_sm  # Required for certain configurations

Configure the LLM of Your Choice

WikiChat is compatible with a wide range of LLMs. To configure the model of your choice:

  1. Edit llm_config.yaml with the necessary information.
  2. Create an API_KEYS file to store your API keys (see the sketch below).
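
As a rough sketch, the API_KEYS file is a plain list of key-value pairs; the variable names below are placeholders, and the keys you actually need depend on the endpoints you enable in llm_config.yaml:

# API_KEYS (placeholder variable names; use the ones your configured endpoints require)
OPENAI_API_KEY=sk-...
TOGETHER_API_KEY=...

Keep this file out of version control, since it contains secrets.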

Configure Information Retrieval

You can choose from multiple options for information retrieval:

  • Option 1: Use the free rate-limited Wikipedia search API.
  • Option 2: Download and host your own Wikipedia index.
  • Option 3: Build your custom index using your documents.

Run WikiChat in Terminal

Use the command line to run WikiChat with various configurations, such as:

inv demo --engine gpt-4o                   # select the LLM engine (configured in llm_config.yaml)
inv demo --pipeline generate_and_correct   # select the WikiChat pipeline variant
inv demo --temperature 0.9                 # adjust sampling temperature
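
Assuming the usual invoke behavior, these flags can also be combined in a single command, for example:

inv demo --engine gpt-4o --pipeline generate_and_correct --temperature 0.9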

[Optional] Deploy WikiChat for Multi-user Access

For multi-user deployment, you can integrate with Azure Cosmos DB and serve the chat interface through Chainlit:

inv chainlit --backend-port 5001

The Free Rate-limited Wikipedia Search API

You can use this API endpoint for prototyping high-quality RAG systems. Refer to the API documentation for more details.
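
The sketch below only illustrates what a request might look like; the endpoint URL, field names, and parameters are placeholders, so consult the API documentation for the actual interface:

curl -X POST https://<search-api-endpoint>/wikipedia \
  -H "Content-Type: application/json" \
  -d '{"query": "When was the Eiffel Tower built?", "num_results": 3}'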

Wikipedia Preprocessing: Why is it Difficult?

(Coming soon…)

Other Commands

WikiChat provides various commands for additional functionalities.

License

This project is released under the Apache-2.0 license.

Citation

If you use any code or data from this repository, please cite WikiChat using the citation entry provided in the repository.

Troubleshooting

If you encounter issues like “Error: Redis lookup failed,” the most likely cause is that Redis is not installed or not running correctly; reinstall it by following the official documentation.
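
A quick way to check whether a local Redis server is installed and reachable (assuming a default installation on localhost) is:

redis-cli ping  # prints PONG if the server is running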

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
