Redis

Software Development

Mountain View, CA · 284,546 followers

The world's fastest data platform.

About us

Redis is the world's fastest data platform. We provide cloud and on-prem solutions for caching, vector search, and more that seamlessly fit into any tech stack. With fast setup and fast support, we make it simple for digital customers to build, scale, and deploy the fast apps our world runs on.

Website
http://redis.io
Industry
Software Development
Company size
501-1,000 employees
Headquarters
Mountain View, CA
Type
Privately Held
Founded
2011
Specialties
In-Memory Database, NoSQL, Redis, Caching, Key Value Store, real-time transaction processing, Real-Time Analytics, Fast Data Ingest, Microservices, Vector Database, Vector Similarity Search, JSON Database, Search Engine, Real-Time Index and Query, Event Streaming, Time-Series Database, DBaaS, Serverless Database, Online Feature Store, and Active-Active Geo-Distribution

Locations

  • Primary

    700 E. El Camino Real

    Suite 250

    Mountain View, CA 94041, US

  • Bridge House, 4 Borough High Street

    London, England SE1 9QQ, GB

  • 94 Yigal Alon St.

    Alon 2 Tower, 32nd Floor

    Tel Aviv, Tel Aviv 6789140, IL

  • 316 West 12th Street, Suite 130

    Austin, Texas 78701, US



Updates

  • Redis

    Make the most of your downtime this Thanksgiving weekend. In just 75 minutes, learn how to build a semantic cache to make your AI agents faster. Check out the Redis course built with DeepLearning.AI. Your future self will be extra thankful. https://lnkd.in/gxGSKuwj

    Via DeepLearning.AI:

    🚀 New Course: Semantic Caching for AI Agents
    Taught by Tyler Hutcherson and Iliya Zhechev from Redis.
    AI agents often make redundant API calls for questions that mean the same thing. Semantic caching helps your agents recognize when different queries share the same meaning, reducing costs and speeding up responses.
    In this course, you'll:
    - Build a semantic cache that reuses responses based on meaning, not exact text matches
    - Measure cache performance using hit rate, precision, and latency metrics
    - Enhance accuracy with cross-encoders, LLM validation, and fuzzy matching
    - Integrate caching into an AI agent that gets faster and more cost-effective over time
    Start building AI agents that respond faster and cost less to run.
    👉 Enroll now: https://hubs.la/Q03T__XB0
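Stripped of the database layer, the semantic-caching pattern the course teaches boils down to "embed, compare, reuse". A minimal self-contained sketch, with a toy bag-of-words embedder standing in for a real embedding model (the class name, vocabulary, and threshold are illustrative, not the course's actual API):

```python
import math

def embed(text):
    # Toy embedding: bag-of-words counts over a tiny fixed vocabulary.
    # A real semantic cache would call an embedding model here instead.
    vocab = ["capital", "france", "weather", "paris", "what", "is", "the", "of"]
    words = text.lower().replace("?", "").split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ToySemanticCache:
    """Reuse a stored response when a new query means the same thing."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def check(self, query):
        vec = embed(query)
        for entry_vec, response in self.entries:
            if cosine(vec, entry_vec) >= self.threshold:
                return response  # cache hit: the expensive LLM call is skipped
        return None  # cache miss: caller falls through to the LLM

    def store(self, query, response):
        self.entries.append((embed(query), response))

cache = ToySemanticCache(threshold=0.8)
cache.store("What is the capital of France?", "Paris")
hit = cache.check("The capital of France is what?")  # same meaning, new wording
miss = cache.check("What is the weather?")           # unrelated question
```

The threshold is the key tuning knob: too low and unrelated queries get wrong cached answers (false hits), too high and rephrasings miss the cache, which is exactly the precision/latency trade-off the course's metrics lessons cover.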

  • Redis

    Redis Open Source 8.4 is GA today. It’s faster, more scalable, and now has hybrid search. It’s the most advanced version of Redis yet, continuing our mission to make Redis faster, simpler, and more powerful. New features in 8.4 include:
    🔎 Hybrid Search: Combines full-text and vector results in one query, powering more accurate AI, RAG, and semantic search experiences.
    📈 Performance improvements: Multi-threaded I/O, improved memory handling, and smarter JSON storage deliver over 30% higher throughput and up to 91% lower memory use.
    ⚛️ Atomic Slot Migration (ASM): Enables easier, more reliable cluster scaling.
    💬 Stream enhancements: Allow clients to process new and pending messages in a single step.
    🔑 Atomic key operations: Provide safer, script-free key updates and expirations.
    Read more here: https://lnkd.in/ghGUD6tc
    Download Redis 8.4 here: https://lnkd.in/gYcp3gcv

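"Hybrid search" means fusing a lexical (full-text) relevance score and a vector similarity score into one ranking. Redis 8.4 does this server-side in a single query; the sketch below only illustrates one common fusion strategy, a linear combination, with made-up scores (the actual fusion Redis applies may differ):

```python
def hybrid_score(text_score, vector_score, alpha=0.5):
    """Linear fusion: alpha weights full-text relevance against vector
    similarity. Both inputs are assumed normalized to [0, 1]."""
    return alpha * text_score + (1 - alpha) * vector_score

# Candidate docs with a (toy) keyword-match score and a cosine vector score.
docs = {
    "doc1": (0.9, 0.2),   # strong keyword match, weak semantic match
    "doc2": (0.3, 0.95),  # weak keyword match, strong semantic match
    "doc3": (0.6, 0.6),   # middling on both
}

# Rank by the fused score: doc2's semantic strength outweighs doc1's keywords.
ranked = sorted(docs, key=lambda d: hybrid_score(*docs[d]), reverse=True)
```

With `alpha=0.5` the semantically strong document wins the top spot even though it matches fewer keywords, which is why hybrid queries tend to serve RAG and semantic-search workloads better than either signal alone.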
  • Redis reposted this

    Gabriel Lipnik

    AI Engineering | Mathematical Optimization | Explainable Systems

    🔍 Reduce LLM Calls with Vector Similarity Search – Design Patterns for Faster, Cheaper and Greener AI
    One of my absolute highlights at Codemotion Milano 2025 some weeks ago was the talk by Raphael De Lio (Redis). It was a perfect mix of technical clarity and practical impact.
    The idea: instead of sending every query through an LLM, he showed how semantic routing, vector similarity search (VSS), and semantic caching can dramatically reduce token usage, latency, and energy consumption, while keeping quality and context intact.
    The three use cases he presented: text classification, function calling, and caching responses.
    It's a mindset shift: don't scale by calling bigger models, but by calling them smarter. Efficiency here isn't just optimisation, it's responsibility.
    Highly recommend the material:
    Presentation: https://lnkd.in/dMYA2y2k
    Paper: arxiv.org/abs/2504.02268
    Thanks, Raphael, for a talk that connects technical depth with a vision for sustainable AI infrastructure!
    #Codemotion #Redis #AI #LLM #VectorSearch #SemanticCaching #AIEngineering #ResponsibleAI

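The semantic-routing idea from the talk: embed each route's reference utterances once, then send an incoming query to the nearest route instead of asking an LLM to classify it. A toy sketch using character-trigram counts as a stand-in embedding (the routes, reference phrases, and embedding are all illustrative; a real router would use a sentence-embedding model and a vector index):

```python
import math
from collections import Counter

def embed(text):
    # Toy sparse embedding: character-trigram counts.
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each route is defined by reference utterances, embedded once up front.
routes = {
    "weather": embed("what is the weather forecast temperature rain today"),
    "math": embed("calculate sum multiply divide what is 2 plus 2"),
}

def route(query):
    """Classify by nearest route embedding: no LLM call, no tokens spent."""
    return max(routes, key=lambda r: cosine(embed(query), routes[r]))
```

Because routing is a single similarity lookup, it costs microseconds instead of an LLM round trip, which is where the token, latency, and energy savings the post describes come from.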
  • Redis

    Redis 🤝 Google Cloud
    We recently announced the general availability of Google Cloud Spanner support in Redis Data Integration for both self-managed and Redis Cloud on GCP deployments. With this integration, enterprises can now:
    ⤷ Sync data between Redis and Google Cloud Spanner
    ⤷ Deliver real-time caching
    ⤷ Achieve sub-millisecond response times for high-performance apps
    ⤷ Reduce infrastructure costs by offloading reads from Spanner
    Read more: https://lnkd.in/gbk8YX6U

  • Redis

    We’re packing our bags for AWS re:Invent, and let’s just say we have a few surprises in store. Starting at booth #1520 and the Hallucination Hub after-hours, you’ll want to keep an eye on what’s coming next. We heard there might even be a new version of Redis involved. 👀 Stay tuned—it’s going to be a real-time kind of week from Dec 1-5: https://lnkd.in/gbyuNsDr


  • Redis reposted this

    Creators Corner

    Creators Corner is back! SF - Production Agents - Hack is here. ⚡🤖
    The Production Agents - Hack Kickoff starts November 21st. Join us to build the next generation of intelligent, self-improving AI agents and shape the future.
    Join innovators from Amazon Web Services (AWS), Skyflow, Postman, Parallel Web Systems, Forethought, Finster AI, Senso, Sanity, TRM Labs, Coder, Cleric, Anthropic, Redis, Lightpanda, Lightning AI. We advance the frontier of how AI agents learn, adapt, and build.
    📅 Friday, November 21
    📍 AWS Startup Loft, San Francisco
    🏆 Over $50k in prizes!
    👉 Register here: https://luma.com/ai-hack
    Hosted by: Murtaza M., Giovanni Amenta, Alessandro Amenta, Jacopo P., Simon Tiu, Saroop Bharwani

  • Redis

    We worked with our friends over at DeepLearning.AI to bring you "Semantic Caching for AI Agents," a 75-minute course led by Tyler Hutcherson and Iliya Zhechev that teaches you how to build a semantic cache using Redis to make AI systems faster and more cost-effective. Across seven video lessons and four code examples, you’ll learn to:
    ➡️ Build your first semantic cache from scratch – Build a working cache to see how each component works, then implement it using Redis’ open source tools.
    ➡️ Measure cache effectiveness with key metrics – Track cache hit rate, precision, recall, and latency to understand your cache’s real impact.
    ➡️ Enhance cache accuracy with advanced techniques – Use threshold tuning, cross-encoders, LLM validation, and fuzzy matching to make your cache more effective.
    ➡️ Build a fast AI agent with semantic caching – Integrate semantic caching into an AI agent that reuses results, skips redundant work, and gets faster over time.
    Start building today: https://lnkd.in/gxGSKuwj

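The metrics mentioned in the course outline reduce to standard definitions: hit rate measures how often the cache answers at all, while precision and recall need ground-truth labels saying whether each hit or miss was actually correct. A small sketch of those computations (the event-log format here is invented for illustration, not the course's data model):

```python
# Each event: (cache_hit, should_have_hit). The second flag is ground truth,
# e.g. a human label on whether the cached answer actually fit the query.
events = [
    (True,  True),   # correct hit
    (True,  False),  # false hit: cache returned a wrong or stale answer
    (False, True),   # missed a query the cache should have served
    (False, False),  # correct miss
    (True,  True),   # correct hit
]

hits = sum(1 for hit, _ in events if hit)
true_hits = sum(1 for hit, ok in events if hit and ok)
should_hit = sum(1 for _, ok in events if ok)

hit_rate = hits / len(events)    # fraction of queries served from cache
precision = true_hits / hits     # of the hits, how many were right
recall = true_hits / should_hit  # of the servable queries, how many hit
```

Threshold tuning moves these against each other: raising the similarity threshold trades hit rate and recall for precision, so tracking all three (plus latency) is what tells you whether a tuning change actually helped.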
  • Redis

    Chatbots are reshaping customer experience and productivity across industries, but many teams hit roadblocks with latency, memory, and cost. That’s where we come in. Redis is the AI infrastructure layer behind some of the fastest, most intelligent chatbots used today, powering RAG, semantic caching, and long-term memory. Here’s how we’re helping companies deliver real-time, intelligent chatbot experiences that scale smarter and cost less: https://lnkd.in/g6Ax57Ex


Funding

Redis: 10 total rounds

Last Round

Secondary market

US$ 1.2M

See more info on Crunchbase