Snowflake Arctic Embed: A Comprehensive Guide

Welcome to the exciting world of text embedding with the Snowflake Arctic Embed models! In this article, we will explore how to use these models for various tasks like retrieval, classification, and more. Whether you’re a seasoned programmer or a beginner, this guide aims to provide user-friendly insights into utilizing these models effectively.

What is Snowflake Arctic Embed?

The Snowflake Arctic Embed model family is a suite of text embedding models aimed at state-of-the-art retrieval quality. The models have been benchmarked on standard retrieval datasets (such as the MTEB retrieval tasks) and show strong performance relative to similarly sized open models.

Understanding Snowflake Arctic Embed Models

Think of the Snowflake Arctic Embed models as a library of well-structured books in a grand library. Each book (model variant) has been meticulously crafted to cater to different needs (tasks). Just like readers (users) can choose a book based on their preferences, developers can select a model variant based on their specific requirements.

  • Smallest Model: The snowflake-arctic-embed-xs is compact yet powerful, perfect for scenarios with stringent latency requirements.
  • Medium Model: The snowflake-arctic-embed-m serves as a versatile workhorse, effectively balancing performance and resource consumption.
  • Large Model: The snowflake-arctic-embed-l offers the highest retrieval accuracy, suitable for the most complex tasks.
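As a quick reference, the family can be captured in a small lookup table. The Hugging Face model IDs are the official ones; the embedding dimensions below are taken from the model cards as of this writing and should be verified on the Hub before relying on them:

```python
# Arctic Embed variants: Hugging Face model IDs and (assumed) embedding dims.
ARCTIC_EMBED_VARIANTS = {
    "xs": {"model_id": "Snowflake/snowflake-arctic-embed-xs", "dim": 384},
    "s":  {"model_id": "Snowflake/snowflake-arctic-embed-s",  "dim": 384},
    "m":  {"model_id": "Snowflake/snowflake-arctic-embed-m",  "dim": 768},
    "l":  {"model_id": "Snowflake/snowflake-arctic-embed-l",  "dim": 1024},
}

def pick_variant(max_dim: int) -> str:
    """Return the largest variant whose embedding dimension fits a budget.

    Falls back to "xs" if no variant fits. This is a hypothetical helper
    for illustration, not part of the Snowflake libraries.
    """
    candidates = [name for name, info in ARCTIC_EMBED_VARIANTS.items()
                  if info["dim"] <= max_dim]
    return candidates[-1] if candidates else "xs"

print(pick_variant(768))   # "m"
print(pick_variant(1024))  # "l"
```

A helper like this can be handy when an application needs to trade retrieval quality against vector-store storage costs.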

How to Use Snowflake Arctic Embed Models

Using Sentence Transformers

To get started with the snowflake-arctic-embed-s model via the sentence-transformers package, follow these steps:

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('Snowflake/snowflake-arctic-embed-s')

queries = ['What is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City, of Course!']

query_embeddings = model.encode(queries, prompt_name="query")  # applies the query prefix configured in the model card
document_embeddings = model.encode(documents)

scores = query_embeddings @ document_embeddings.T

for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

    print('Query:', query)
    for document, score in doc_score_pairs:
        print(score, document)
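Conceptually, the scoring step above is just a matrix of dot products between L2-normalized vectors, which for unit-length vectors equals cosine similarity. Here is a minimal NumPy sketch with toy two-dimensional embeddings standing in for real model output (actual Arctic Embed vectors have hundreds of dimensions):

```python
import numpy as np

# Toy stand-ins for model.encode(...) output: two query vectors and two
# document vectors. In practice these come from the embedding model.
query_embeddings = np.array([[0.9, 0.1], [0.2, 0.8]])
document_embeddings = np.array([[1.0, 0.0], [0.0, 1.0]])

# Normalize rows to unit length so dot product == cosine similarity.
q = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
d = document_embeddings / np.linalg.norm(document_embeddings, axis=1, keepdims=True)

scores = q @ d.T               # shape (num_queries, num_documents)
best_doc = scores.argmax(axis=1)  # index of the best document per query
print(best_doc)                # [0 1]
```

Each row of `scores` ranks all documents for one query, which is exactly what the sorting loop in the example above does.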

Using Hugging Face Transformers

You can also use the transformers package directly. Note that for retrieval quality, queries must be prefixed with the instruction string shown below, while documents are encoded as-is:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-s')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-s', add_pooling_layer=False)
model.eval()

query_prefix = "Represent this sentence for searching relevant passages: "
queries = ['What is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = [f"{query_prefix}{q}" for q in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)
documents = ['The Data Cloud!', 'Mexico City, of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)

with torch.no_grad():
    # Use the hidden state of the first (CLS) token as the sentence embedding
    query_embeddings = model(**query_tokens)[0][:, 0]
    document_embeddings = model(**document_tokens)[0][:, 0]

query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)

scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))

for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    
    print('Query:', query)
    for document, score in doc_score_pairs:
        print(score, document)
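The indexing `model(**query_tokens)[0][:, 0]` can look cryptic: the model output's first element is the last hidden state of shape (batch, seq_len, hidden_dim), and `[:, 0]` selects the first (CLS) token's vector for every sequence. A small NumPy sketch of the same slicing, using dummy data in place of real model output:

```python
import numpy as np

# Dummy stand-in for a transformer's last_hidden_state:
# shape (batch, seq_len, hidden_dim).
batch, seq_len, hidden = 2, 5, 4
last_hidden_state = np.arange(batch * seq_len * hidden, dtype=float)
last_hidden_state = last_hidden_state.reshape(batch, seq_len, hidden)

# Equivalent of model(**tokens)[0][:, 0]: one CLS vector per sequence.
cls_embeddings = last_hidden_state[:, 0]
print(cls_embeddings.shape)  # (2, 4)
```

This "CLS pooling" is what the Transformers.js example below requests via `pooling: 'cls'`.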

Using Transformers.js

To use the Transformers.js library, install it via NPM, as shown below:

npm i @xenova/transformers

Then, compute embeddings as follows:

import { pipeline, dot } from '@xenova/transformers';

const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-s');
const sentences = [
    'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
    'The Data Cloud!',
    'Mexico City, of Course!'
];
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });
const [source_embeddings, ...document_embeddings] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // Outputs similarity scores

Troubleshooting

If you encounter issues while implementing or using the Snowflake Arctic Embed models, consider the following troubleshooting ideas:

  • Ensure all library dependencies are correctly installed and up to date.
  • Check your internet connection for any interruptions while accessing remote models.
  • Refer to the [GitHub repository](https://github.com/Snowflake-Labs/arctic-embed) for more detailed documentation and community assistance.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Final Thoughts

At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
