
Asked: November 27, 2024

Implementing semantic image search with Amazon Titan and Supabase Vector


Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon. Each model is accessible through a common API which implements a broad set of features to help build generative AI applications with security, privacy, and responsible AI in mind.

Amazon Titan is a family of foundation models (FMs) for text and image generation, summarization, classification, open-ended Q&A, information extraction, and text or image search.

In this post, we'll look at how to get started with Amazon Bedrock and Supabase Vector in Python using the Amazon Titan multimodal model and the vecs client.

You can find the full application code as a Python Poetry project on GitHub.

🚀 Learn more about Supabase

Create a new Python project with Poetry

Poetry provides packaging and dependency management for Python. If you haven't already, install poetry via pip:

pip install poetry  

Then initialize a new project:

poetry new aws_bedrock_image_search  

Spin up a Postgres Database with pgvector

If you haven't already, head over to database.new and create a new project. Every Supabase project comes with a full Postgres database and the pgvector extension preconfigured.

When creating your project, make sure to note down your database password as you will need it to construct the DB_URL in the next step.

You can find the database connection string in your Supabase Dashboard database settings. Select “Use connection pooling” with Mode: Session for a direct connection to your Postgres database. It will look something like this:

postgresql://postgres.[PROJECT-REF]:[YOUR-PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres  

Install the dependencies

We will need to add the following dependencies to our project:

  • vecs: Supabase Vector Python Client.
  • boto3: AWS SDK for Python.
  • matplotlib: for displaying our image result.
poetry add vecs boto3 matplotlib  

Import the necessary dependencies

At the top of your main python script, import the dependencies and store your DB URL from above in a variable:

import sys
import boto3
import vecs
import json
import base64
from matplotlib import pyplot as plt
from matplotlib import image as mpimg
from typing import Optional

DB_CONNECTION = "postgresql://postgres.[PROJECT-REF]:[YOUR-PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres"

Next, get the credentials to your AWS account and instantiate the boto3 client:

bedrock_client = boto3.client(
    'bedrock-runtime',
    region_name='us-west-2',
    # Credentials from your AWS account
    aws_access_key_id='<replace_your_own_credentials>',
    aws_secret_access_key='<replace_your_own_credentials>',
    aws_session_token='<replace_your_own_credentials>',
)

Create embeddings for your images

In the root of your project, create a new folder called images and add some images. You can use the images from the example project on GitHub, or you can find license-free images on Unsplash.

To send images to the Amazon Bedrock API, we need to encode them as base64 strings. Create the following helper methods:

def readFileAsBase64(file_path):
    """Encode image as base64 string."""
    try:
        with open(file_path, "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode("utf8")
        return input_image
    except OSError:
        print(f"bad file name: {file_path}")
        sys.exit(1)


def construct_bedrock_image_body(base64_string):
    """Construct the request body.

    https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-titan-embed-mm.html
    """
    return json.dumps(
        {
            "inputImage": base64_string,
            "embeddingConfig": {"outputEmbeddingLength": 1024},
        }
    )


def get_embedding_from_titan_multimodal(body):
    """Invoke the Amazon Titan model via API request."""
    response = bedrock_client.invoke_model(
        body=body,
        modelId="amazon.titan-embed-image-v1",
        accept="application/json",
        contentType="application/json",
    )
    response_body = json.loads(response.get("body").read())
    return response_body["embedding"]


def encode_image(file_path):
    """Generate embedding for the image at file_path."""
    base64_string = readFileAsBase64(file_path)
    body = construct_bedrock_image_body(base64_string)
    return get_embedding_from_titan_multimodal(body)
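To see what the encoding step produces without calling Bedrock, here is a small self-contained round trip. The request body schema matches the Titan documentation linked above; the file contents are placeholder bytes standing in for a real JPEG:

```python
import base64
import json
import os
import tempfile

# Write a tiny fake "image" file (placeholder bytes, not a real JPEG)
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
    f.write(b"\xff\xd8\xffplaceholder")
    path = f.name

# Encode the file exactly as readFileAsBase64 does
with open(path, "rb") as image_file:
    b64_string = base64.b64encode(image_file.read()).decode("utf8")

# Build the Titan request body, as in construct_bedrock_image_body
body = json.dumps({
    "inputImage": b64_string,
    "embeddingConfig": {"outputEmbeddingLength": 1024},
})

# Decoding the base64 field restores the original file bytes
decoded = base64.b64decode(json.loads(body)["inputImage"])
print(decoded == b"\xff\xd8\xffplaceholder")  # True
os.remove(path)
```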

Next, create a seed method, which will create a new Supabase Vector Collection, generate embeddings for your images, and upsert the embeddings into your database:

def seed():
    # create vector store client
    vx = vecs.create_client(DB_CONNECTION)

    # get or create a collection of vectors with 1024 dimensions
    images = vx.get_or_create_collection(name="image_vectors", dimension=1024)

    # Generate image embeddings with the Amazon Titan model
    img_emb1 = encode_image('./images/one.jpg')
    img_emb2 = encode_image('./images/two.jpg')
    img_emb3 = encode_image('./images/three.jpg')
    img_emb4 = encode_image('./images/four.jpg')

    # add records to the *images* collection
    images.upsert(
        records=[
            (
                "one.jpg",       # the vector's identifier
                img_emb1,        # the vector. list or np.array
                {"type": "jpg"}  # associated metadata
            ), (
                "two.jpg",
                img_emb2,
                {"type": "jpg"}
            ), (
                "three.jpg",
                img_emb3,
                {"type": "jpg"}
            ), (
                "four.jpg",
                img_emb4,
                {"type": "jpg"}
            )
        ]
    )
    print("Inserted images")

    # index the collection for fast search performance
    images.create_index()
    print("Created index")

Add this method as a script in your pyproject.toml file:

[tool.poetry.scripts]
seed = "image_search.main:seed"
search = "image_search.main:search"

After activating the virtual environment with poetry shell, you can run your seed script via poetry run seed. You can inspect the generated embeddings in your Supabase Dashboard by visiting the Table Editor, selecting the vecs schema, and opening the image_vectors table.

Perform an image search from a text query

With Supabase Vector we can easily query our embeddings. We can use either an image as the search input or we can generate an embedding from a string input and use that as the query input:

def search(query_term: Optional[str] = None):
    if query_term is None:
        query_term = sys.argv[1]

    # create vector store client
    vx = vecs.create_client(DB_CONNECTION)
    images = vx.get_or_create_collection(name="image_vectors", dimension=1024)

    # Encode the text query
    text_emb = get_embedding_from_titan_multimodal(json.dumps(
        {
            "inputText": query_term,
            "embeddingConfig": {"outputEmbeddingLength": 1024},
        }
    ))

    # query the collection, filtering metadata for "type" = "jpg"
    results = images.query(
        data=text_emb,                      # required
        limit=1,                            # number of records to return
        filters={"type": {"$eq": "jpg"}},   # metadata filters
    )
    result = results[0]
    print(result)
    plt.title(result)
    image = mpimg.imread('./images/' + result)
    plt.imshow(image)
    plt.show()

By limiting the query to one result, we retrieve only the most relevant image, which we then display to the user with matplotlib.

That's it! Go ahead and test it out by running poetry run search with a query like “bike in front of a red brick wall”, and you will be presented with the matching image.

Conclusion

With just a couple of lines of Python you can implement image search as well as reverse image search using the Amazon Titan multimodal model and Supabase Vector.
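For the reverse image search mentioned above, the same collection can be queried with an image embedding instead of a text embedding. A minimal sketch, reusing the encode_image helper from earlier (search_by_image is a hypothetical name, not part of the vecs API):

```python
def search_by_image(file_path, db_connection, limit=3):
    """Reverse image search sketch: embed a query *image* with Titan,
    then find its nearest neighbors among the stored image vectors.
    Assumes encode_image() from earlier in this post is in scope."""
    import vecs  # imported lazily so this sketch parses without the dependency

    vx = vecs.create_client(db_connection)
    images = vx.get_or_create_collection(name="image_vectors", dimension=1024)

    # Same 1024-dimensional Titan embedding space as the text query
    query_emb = encode_image(file_path)

    # Returns the identifiers of the closest stored images
    return images.query(
        data=query_emb,
        limit=limit,
        filters={"type": {"$eq": "jpg"}},
    )
```

Because text and images share one embedding space in the Titan multimodal model, this is the only change needed to switch the query modality.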

More Supabase

  • Getting started with Amazon Bedrock and vecs
  • Matryoshka embeddings: faster OpenAI vector search using Adaptive Retrieval

Tags: programming, python, tutorial, webdev