How to Use Snowflake’s Arctic-Embed Models for Sentence Similarity

In the age of artificial intelligence, sentence-similarity models underpin applications such as chatbots, recommendation systems, and search engines. In this article, we explore how to get the most out of Snowflake's Arctic-Embed models for sentence similarity tasks.

Introduction to the Arctic-Embed Models

The snowflake-arctic-embed models are a family of state-of-the-art text embedding models that excel at retrieval tasks. Think of these models as chefs who specialize in creating delicious dishes (sentence embeddings) from a whole kitchen of ingredients (text data). Just as chefs need to understand flavor profiles to combine ingredients successfully, these models understand language well enough to create representations that machines can easily digest.

Setting Up the Environment

  • Install the required libraries: for Python, either sentence-transformers or transformers (whichever you prefer); for JavaScript, transformers.js. Typical install commands are shown below.
  • Ensure you have a working Python or JavaScript environment in which to run the code.
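
A typical setup, assuming pip for Python and npm for JavaScript as your package managers, looks like this:


pip install sentence-transformers   # Python, option 1
pip install transformers torch      # Python, option 2
npm install @xenova/transformers    # JavaScript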

Using Sentence Transformers

The sentence-transformers package lets you encode sentences with the Arctic-Embed models in just a few lines. Here's how to do it:


from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-xs")

# Sample data
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']

# Generate embeddings (prompt_name="query" applies the model's built-in query prompt)
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

# Compute similarity scores
scores = query_embeddings @ document_embeddings.T

# Display results
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) 
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)

This code prints the documents ranked by their similarity to each query. It's as if you're asking a question and the chef meticulously prepares the dishes that best match your tastes!
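
A note on the raw dot product: the sentence-transformers pipeline for these models normalizes embeddings, so the dot product above is effectively cosine similarity. If you would rather be explicit (or you're unsure whether a given model normalizes), the library's cos_sim utility is a safe alternative; a minimal sketch:


from sentence_transformers import util

# cos_sim normalizes internally, so it works whether or not
# the embeddings are already unit-length
cos_scores = util.cos_sim(query_embeddings, document_embeddings)
print(cos_scores)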

Using Hugging Face Transformers

If you prefer to work directly with the Hugging Face transformers library, follow this approach:


import torch
from transformers import AutoModel, AutoTokenizer

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-xs')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-xs', add_pooling_layer=False)
model.eval()

# Prepare data
query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)
documents = ['The Data Cloud!', 'Mexico City of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Generate embeddings (the first token, CLS, serves as the sentence embedding)
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]
    document_embeddings = model(**document_tokens)[0][:, 0]

# Normalize the embeddings and compute scores
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)
scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))

# Display results
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)

This method emphasizes precision in processing, akin to a chef who diligently weighs every ingredient to achieve the best possible flavor.
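
If you plan to embed more than a handful of texts, it helps to wrap the steps above in a reusable helper. Below is a minimal sketch that assumes the tokenizer and model loaded above; the function name embed_texts and the batch size are our own choices, not part of the library:


def embed_texts(texts, batch_size=32):
    """Encode a list of strings into normalized CLS embeddings."""
    all_embeddings = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        tokens = tokenizer(batch, padding=True, truncation=True,
                           return_tensors='pt', max_length=512)
        with torch.no_grad():
            embeddings = model(**tokens)[0][:, 0]  # CLS token
        all_embeddings.append(torch.nn.functional.normalize(embeddings, p=2, dim=1))
    return torch.cat(all_embeddings)

# Queries still need the prefix; documents do not:
# query_embeddings = embed_texts([query_prefix + q for q in queries])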

Using Transformers.js

For those who are more comfortable in a JavaScript environment, using transformers.js is straightforward:


import { pipeline, dot } from '@xenova/transformers';

// Create a feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-xs', { quantized: false });

// Generate embeddings for sentences
const sentences = [
    'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
    'The Data Cloud!',
    'Mexico City of Course!',
];
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });

// Compute similarity scores
const [source_embeddings, ...document_embeddings] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities);

Transformers.js leverages JavaScript’s capabilities to offer a simple interface for processing sentences seamlessly, just like a modern kitchen outfitted with the latest appliances for efficiency!
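
To mirror the ranked output of the Python examples, you can pair each similarity score with its document using plain JavaScript; a small sketch, continuing from the code above:


// Pair each document with its score, then sort by descending similarity
const documents = ['The Data Cloud!', 'Mexico City of Course!'];
const ranked = documents
    .map((doc, i) => ({ doc, score: similarities[i] }))
    .sort((a, b) => b.score - a.score);
for (const { doc, score } of ranked) {
    console.log(score, doc);
}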

Troubleshooting

  • If you run into runtime errors, ensure all required libraries are installed and up to date.
  • Check your environment setup to ensure compatibility with the library versions used.
  • If embedding results seem off, double-check your input shapes and make sure sequences are encoded properly (see the sanity-check snippet after this list).
  • For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
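
As a quick sanity check (a sketch, assuming the sentence-transformers example above, where encode returns NumPy arrays):


import numpy as np

# Both matrices must share the embedding dimension (their second axis)
print(query_embeddings.shape)     # e.g. (2, embedding_dim)
print(document_embeddings.shape)  # e.g. (2, embedding_dim)

# Normalized embeddings should have unit length
print(np.linalg.norm(query_embeddings, axis=1))  # expect values near 1.0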

Conclusion

By following this guide, you should now be equipped to harness the powerful snowflake-arctic-embed models for a variety of sentence similarity tasks. With these tools, you're not just cooking in your kitchen; you're creating culinary masterpieces with every combination of ingredients.

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
