Redis offers low-latency vector search capabilities. This tutorial shows how to add Redis VSS to a Relevance AI chain.

Prerequisites

A running Redis instance with the RediSearch (v2.6+) and RedisJSON (v2.4+) modules installed. See Redis VSS for more information.

You can also run Redis on a cloud provider such as Redis Enterprise Cloud.

Alternatively, you can run Redis locally using Docker (we recommend redis-stack):

docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest # to run Redis with RediSearch and RedisJSON modules

ngrok tcp 6379 # to expose Redis port to the internet
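
To confirm that both modules are loaded, one option (a quick sketch, assuming redis-cli is installed on your machine) is to list the modules the server reports:

redis-cli MODULE LIST # the output should include entries for "search" (RediSearch) and "ReJSON" (RedisJSON)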

Setting up the integration

To connect Relevance AI to your Redis cluster, you need to provide the connection string. This will be in the following format:

redis://<username>:<password>@<host>:<port>/<db>
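
For example, a connection string might look like the following (hypothetical values; substitute your own credentials, host, and port, such as the forwarding address printed by ngrok if you exposed Redis that way):

redis://default:mypassword@my-redis-host.example.com:6379/0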

Once you have your connection string, head to Keys in the Relevance AI dashboard and look for the Redis connection string field. Paste in your connection string and press Enter.

Adding Redis VSS to a chain

If you’re using the SDK, you can add Redis VSS with a single step:

const { results } = step('redis_search', {
    index: 'redis-index-prefix',          // name or prefix of the Redis index to search
    query: 'what is the capital of France?', // the query text to embed and search with
    vector_field: 'embedding_field',      // field in your documents that holds the vector embedding
    model: 'text-embedding-ada-002',      // embedding model; make sure to provide your OpenAI API key to use this model
    page_size: 4                          // number of results to return
});
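
For context, here is a minimal sketch of how this step might sit inside a full chain definition. It assumes the defineChain helper from the @relevanceai/chain package and a params block of your own naming; treat the surrounding scaffolding as illustrative rather than exact.

import { defineChain } from '@relevanceai/chain';

export default defineChain({
  title: 'Redis VSS lookup',              // hypothetical chain title
  params: {
    question: { type: 'string' },         // the user question to search for
  },
  setup({ params, step }) {
    // Run the Redis VSS step against an existing index (index and field names are placeholders)
    const { results } = step('redis_search', {
      index: 'redis-index-prefix',
      query: params.question,
      vector_field: 'embedding_field',
      model: 'text-embedding-ada-002',
      page_size: 4,
    });

    return { results };
  },
});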

If you’re using the Notebook, open the transformation library from the sidebar and search for “Redis”. You will see the Redis VSS step, where you can fill in the same variables.

Screenshot of the Redis VSS step in the Notebook