
Optimizing AI Applications with Vector Search in Weaviate and Hugging Face Transformers on AWS Lambda

AI applications increasingly rely on vector search for fast, efficient retrieval of highly relevant results. Vector search finds similar items by comparing embeddings: numerical representations of data such as text, images, or videos. In this article, we’ll explore how to optimize AI applications using **Weaviate**, an open-source vector search engine, and **Hugging Face Transformers**, a library for Natural Language Processing (NLP). To make the architecture scalable, we will deploy the solution on **AWS Lambda**, AWS’s serverless computing service.

What is Vector Search?

Vector search is a technique for finding the most relevant results in a dataset by comparing embeddings. Unlike traditional keyword-based search, it matches on the semantic meaning of the data rather than on exact terms, which makes it particularly useful for NLP tasks such as semantic document retrieval, question answering, and recommendation.
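
To make this concrete, here is a minimal sketch of how similarity between embeddings is typically scored with cosine similarity. The vectors below are toy 3-dimensional stand-ins; real models produce hundreds of dimensions:

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means identical direction, ~0.0 means unrelated
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

query = np.array([0.9, 0.1, 0.3])
doc_a = np.array([0.8, 0.2, 0.4])  # points in a similar direction to the query
doc_b = np.array([0.1, 0.9, 0.1])  # points in a different direction

print(cosine_similarity(query, doc_a))  # high score -> relevant
print(cosine_similarity(query, doc_b))  # lower score -> less relevant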

Why Use Weaviate and Hugging Face Transformers?
  • Weaviate: an open-source vector search engine designed for scalability and ease of use. It stores and queries embeddings generated by any machine learning model.
  • Hugging Face Transformers: a library of pre-trained models that can generate embeddings from text. These embeddings are then indexed in Weaviate for retrieval.
  • AWS Lambda: a serverless compute service, ensuring the application scales with demand while minimizing infrastructure costs.
Architecture Overview

The architecture consists of three main components:

  1. Embedding Generation: Using Hugging Face Transformers to convert text data into vector embeddings.
  2. Vector Indexing and Search: Storing and querying embeddings in Weaviate.
  3. Serverless Deployment: Running the entire pipeline on AWS Lambda for scalability.
Step-by-Step Implementation

Step 1: Install Dependencies

First, install the required libraries in your development environment (the examples below use the v3 Weaviate Python client, and the model code requires PyTorch):

pip install "weaviate-client<4" transformers torch boto3
Step 2: Generate Embeddings with Hugging Face Transformers

Use a pre-trained model from Hugging Face to generate embeddings for input text data:

from transformers import AutoTokenizer, AutoModel
import torch

# Load pre-trained model and tokenizer
model_name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def generate_embedding(text):
    # Tokenize the input and run the model in inference mode (no gradients)
    tokens = tokenizer(text, return_tensors='pt', padding=True, truncation=True)
    with torch.no_grad():
        # Mean-pool the token embeddings into a single sentence vector
        embeddings = model(**tokens).last_hidden_state.mean(dim=1)
    return embeddings.squeeze().numpy()

# Example usage
text = "Optimizing AI workflows with vector search."
embedding = generate_embedding(text)
print("Generated Embedding:", embedding)
Step 3: Index Embeddings in Weaviate

Use the Weaviate Python client to create a schema and upload embeddings for vector search.

Create a Schema in Weaviate
import weaviate

client = weaviate.Client("http://localhost:8080")

# Define the schema; "vectorizer": "none" tells Weaviate that we supply
# our own vectors instead of having Weaviate compute them
schema = {
    "classes": [{
        "class": "TextDocument",
        "vectorizer": "none",
        "properties": [{
            "name": "content",
            "dataType": ["text"]
        }]
    }]
}

# Create the schema
client.schema.create(schema)
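
As a quick sanity check, you can read the schema back and confirm the class was registered:

# Fetch the schema from Weaviate and verify our class is present
classes = [c["class"] for c in client.schema.get()["classes"]]
assert "TextDocument" in classes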
Upload Embeddings
def upload_embedding(client, text, embedding):
    # The vector is passed as a separate argument; it is not a schema property
    client.data_object.create(
        data_object={"content": text},
        class_name="TextDocument",
        vector=embedding.tolist()
    )

# Upload example embedding
upload_embedding(client, text, embedding)
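
For more than a handful of documents, the client's batching interface is much faster than creating objects one at a time. A minimal sketch, assuming documents is a list of strings:

def upload_batch(client, documents):
    # The batch groups many objects into fewer HTTP requests
    client.batch.configure(batch_size=50)
    with client.batch as batch:
        for doc in documents:
            batch.add_data_object(
                data_object={"content": doc},
                class_name="TextDocument",
                vector=generate_embedding(doc).tolist()
            )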
Step 4: Query Weaviate for Vector Search

Retrieve the most relevant results based on a query embedding:

def search_similar_texts(client, query_embedding, top_k=3):
    # nearVector query: return the top_k stored objects closest to the embedding
    results = client.query.get("TextDocument", ["content"])\
        .with_near_vector({"vector": query_embedding.tolist()})\
        .with_limit(top_k)\
        .do()
    return results["data"]["Get"]["TextDocument"]

# Example query
query_text = "AI workflows"
query_embedding = generate_embedding(query_text)
search_results = search_similar_texts(client, query_embedding)
print("Search Results:", search_results)
Step 5: Deploy the Solution on AWS Lambda

Create a Lambda function to handle embedding generation, indexing, and querying. Below is an example Lambda handler:

import json
import weaviate
from transformers import AutoTokenizer, AutoModel
import torch

# Initialize the model and Weaviate client at module load time, so warm
# Lambda invocations reuse them instead of reloading the model each call
model_name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
client = weaviate.Client("http://your-weaviate-instance-url")

def lambda_handler(event, context):
    # API Gateway proxy integrations wrap the payload in a JSON "body" string;
    # direct invocations pass the payload as the event itself
    payload = json.loads(event["body"]) if "body" in event else event
    text = payload["text"]
    task = payload["task"]
    
    if task == "embed_and_index":
        embedding = generate_embedding(text)
        upload_embedding(client, text, embedding)
        return {"message": "Embedding indexed successfully."}
    
    elif task == "search":
        query_embedding = generate_embedding(text)
        results = search_similar_texts(client, query_embedding)
        return {"results": results}
    
    else:
        return {"error": "Invalid task"}

def generate_embedding(text):
    tokens = tokenizer(text, return_tensors='pt', padding=True, truncation=True)
    with torch.no_grad():
        embeddings = model(**tokens).last_hidden_state.mean(dim=1)
    return embeddings.squeeze().numpy()

def upload_embedding(client, text, embedding):
    # As above, the vector is passed separately from the object's properties
    client.data_object.create(
        data_object={"content": text},
        class_name="TextDocument",
        vector=embedding.tolist()
    )

def search_similar_texts(client, query_embedding, top_k=3):
    results = client.query.get("TextDocument", ["content"])\
        .with_near_vector({"vector": query_embedding.tolist()})\
        .with_limit(top_k)\
        .do()
    return results["data"]["Get"]["TextDocument"]
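
Note that transformers and torch together are far larger than Lambda's 250 MB limit for unzipped deployment packages, so in practice you would likely package this function as a container image (Lambda supports images up to 10 GB) or load the model from Amazon EFS. The function also needs network access to your Weaviate instance, for example by running both inside the same VPC.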
Testing the Solution

After configuring your AWS Lambda function, test it using AWS API Gateway or the AWS Management Console. Send a request payload like the following:

Example Request Payload for Indexing:
{ "text": "Optimizing AI workflows with vector search.", "task": "embed_and_index" }
Example Request Payload for Searching:
{ "text": "AI workflows", "task": "search" }
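
Since boto3 was installed in Step 1, you can also invoke the function programmatically. A minimal sketch, assuming the function is deployed under the hypothetical name vector-search-handler:

import json
import boto3

lambda_client = boto3.client("lambda")

# Invoke the deployed function directly; the function name is a placeholder
response = lambda_client.invoke(
    FunctionName="vector-search-handler",
    Payload=json.dumps({"text": "AI workflows", "task": "search"})
)
print(json.loads(response["Payload"].read()))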
Conclusion

By combining the power of Hugging Face Transformers, Weaviate, and AWS Lambda, you can build scalable AI applications that leverage vector search for efficient data retrieval. This stack is particularly useful for tasks involving large-scale text datasets and real-time querying.