How to Use the GTE-Qwen2-1.5B-Instruct Model for Text Embedding

Aug 9, 2024 | Educational

In this guide, you’ll learn how to use the GTE-Qwen2-1.5B-Instruct model for common natural language processing tasks such as generating sentence embeddings, scoring semantic similarity, and retrieving relevant documents. This advanced language model offers powerful capabilities thanks to its expansive training on a diverse multilingual text corpus.

Getting Started

Before jumping in, make sure you have a Python environment ready, along with the necessary libraries:

  • transformers version 4.39.2
  • flash_attn version 2.5.6
  • sentence-transformers (used by the first example below)

To install these libraries, you can use:

pip install transformers==4.39.2 flash_attn==2.5.6 sentence-transformers
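
Note that flash_attn requires a CUDA-capable GPU and an existing PyTorch installation. Before installing, it can help to confirm that PyTorch can see your GPU; here is a minimal check:

import torch

# flash_attn only works with a CUDA GPU; confirm PyTorch can see one
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())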

Loading the Model

To begin using the GTE-Qwen2-1.5B-Instruct model, you can load it as follows:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)

This imports the SentenceTransformer class and loads the model, downloading the weights on first use. The trust_remote_code=True flag is required because the model repository ships custom modeling code.
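
The examples in this guide cap inputs at 8192 tokens. If you want to set that limit explicitly on the loaded sentence-transformers model (for example, before embedding long documents), a minimal sketch:

# Cap inputs at 8192 tokens, matching the max_length used in the transformers example later on
model.max_seq_length = 8192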

Preparing Your Data

For a retrieval-style task you need a set of queries and a set of candidate documents to score against them. Here’s an example of how you might set them up:

queries = [
    "how much protein should a female eat?",
    "summit define"
]

documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon.",
    "Definition of summit for English Language Learners: 1) the highest point of a mountain; 2) the highest level; 3) a meeting or series of meetings between the leaders of two or more governments."
]

Generating Embeddings

Now that your model and data are ready, it’s time to generate embeddings:

query_embeddings = model.encode(queries, prompt_name='query')
document_embeddings = model.encode(documents)

# Calculate scores based on embeddings
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())

Here, model.encode produces one embedding vector per input. Passing prompt_name='query' applies the query prompt the model was trained with, so queries and documents land in a shared embedding space; the matrix product then gives pairwise similarity scores (scaled by 100 for readability), with higher values indicating greater relevance.
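
If your goal is retrieval rather than raw scores, you can rank the documents for each query directly from the score matrix. A small sketch, reusing the queries, documents, and scores defined above:

import numpy as np

# scores has shape (num_queries, num_documents); sort each row from best to worst
ranking = np.argsort(-np.asarray(scores), axis=1)
for query, order in zip(queries, ranking):
    best = documents[order[0]]
    print(f"{query!r} -> {best[:60]}...")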

Utilizing Transformers

If you prefer to work directly with the transformers library, you can reproduce the same embeddings with manual tokenization and pooling:

import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel

def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Pool each sequence by taking the hidden state of its last real (non-padding) token
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        # With left padding, the last position is always a real token
        return last_hidden_states[:, -1]
    else:
        # With right padding, index the last non-padding position in each sequence
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]

# Input texts preparation
input_texts = queries + documents

# Loading the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)

# Tokenization
max_length = 8192
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')

# Model outputs
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
# Compare the two query embeddings against the two document embeddings
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())

Working at this level gives you direct access to the model’s hidden states, so you can adjust the pooling strategy or plug the embeddings into a larger PyTorch pipeline.
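
One difference from the sentence-transformers path is worth calling out: there, prompt_name='query' prepends a task instruction to each query before encoding. With the raw transformers API you need to add that instruction yourself, or retrieval quality may drop. A minimal sketch of the usual pattern (the task wording here is illustrative):

def get_detailed_instruct(task_description: str, query: str) -> str:
    # Queries are wrapped with a task instruction; documents are embedded as-is
    return f'Instruct: {task_description}\nQuery: {query}'

task = 'Given a web search query, retrieve relevant passages that answer the query'
input_texts = [get_detailed_instruct(task, q) for q in queries] + documents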

Troubleshooting Tips

As you work with this model, you may encounter some challenges. Here are a few tips that could help:

  • Library Conflicts: Ensure you are using compatible versions of transformers and flash_attn (the versions listed above are the ones this guide assumes).
  • Memory Issues: If you run out of GPU memory, reduce the model's max_seq_length to a lower value, such as 4096, or encode in smaller batches (see the sketch after this list).
  • Embedding Quality: If results look poor, check that queries are encoded with the query prompt (or instruction) while documents are encoded without it, as shown above.
  • Token Limit Exceeded: If an input exceeds the token limit, split the text into smaller segments and embed each one separately.
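
For the memory tip above, both the sequence length and the encoding batch size can be lowered on the SentenceTransformer model from the first example; a minimal sketch:

# Shorter sequences and smaller batches reduce peak GPU memory during encoding
model.max_seq_length = 4096
document_embeddings = model.encode(documents, batch_size=8)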

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

This guide walked you through loading the GTE-Qwen2-1.5B-Instruct model, generating and scoring embeddings with both sentence-transformers and the raw transformers API, and addressing some common challenges along the way.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
