
Meet Contextual AI

Changing How The World Works Through AI

AREAS OF FOCUS
  • Generative AI
  • RAG
  • Cloud
  • GCP
REGION
  • Global
Headquartered in Mountain View, Contextual AI is on a mission to change how the world works through AI. The company provides a turnkey platform for building enterprise AI applications powered by its state-of-the-art RAG 2.0 technology.

Solving AI Model Hallucination and Currency

Contextual AI is tackling the critical blockers to enterprise AI at scale for Fortune 500 companies, including hallucinations, staleness, and data privacy. Large language models (LLMs) are adept at generating clear and coherent responses based on their pretraining data, but they often lack the critical context and timely, use-case-specific information needed to be effective in production. When these off-the-shelf models lack proper context for a query or task, they confidently produce false but plausible-sounding answers, known as hallucinations. Hallucinations compromise the accuracy and trustworthiness of a model’s answers and are thus a meaningful blocker to deploying AI into production in enterprises.

RAG 2.0: Driving Enterprise AI Adoption at Scale

Today, many developers leverage retrieval-augmented generation (RAG) to feed external data into model responses in the hope of increasing the accuracy of their large language models. However, a typical RAG system uses a frozen off-the-shelf model for embeddings, a vector database for retrieval, and a black-box language model for generation, all stitched together through prompting or an orchestration framework. The result is a “Frankenstein’s monster” of sorts: brittle, lacking domain-specific knowledge, dependent on extensive prompting and ongoing maintenance, and prone to cascading errors. As a result, these Frankenstein RAG systems rarely pass the bar to make it into production.
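To make that failure mode concrete, here is a minimal sketch of such a stitched-together pipeline. The embed() and generate() functions are hypothetical stand-ins for off-the-shelf components (a hosted embedding API and a hosted chat model); the point is that no component is trained with any awareness of the others, and the only coupling between retriever and generator is a prompt string.

```python
# A minimal sketch of a "Frankenstein" RAG pipeline: frozen embeddings,
# a vector index, and a black-box LLM glued together by a prompt.
# embed() and generate() are illustrative stand-ins, not real APIs.
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a frozen, off-the-shelf embedding model."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(64)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    """Stand-in for a black-box LLM served behind an API."""
    return f"[LLM response conditioned on: {prompt[:60]}...]"

# "Vector database": document embeddings computed once and never updated.
documents = [
    "Q3 revenue grew 12% year over year.",
    "The refund policy allows returns within 30 days.",
    "On-call rotations switch every Monday at 09:00 UTC.",
]
index = np.stack([embed(d) for d in documents])

def answer(query: str, k: int = 2) -> str:
    # Retrieval: cosine similarity against the frozen index.
    scores = index @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    context = "\n".join(documents[i] for i in top)
    # Generation: the prompt string is the ONLY link between components.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(answer("What is the refund window?"))
```

Because each piece is frozen and optimized in isolation, an error anywhere in the chain (a poor embedding, a missed retrieval, a misread prompt) cascades straight into the final answer with no mechanism to correct it.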

Contextual AI’s CEO and co-founder Douwe Kiela led the team that pioneered RAG at Facebook AI Research (FAIR) in 2020. Today, he and the team at Contextual AI are developing RAG 2.0 to address the inherent challenges of the original RAG approach.

The company’s approach rests on two pillars: systems over models, and specialization over AGI. Its Contextual Language Models (CLMs) enable production-grade AI by optimizing the entire system end-to-end: with RAG 2.0, Contextual AI pre-trains, fine-tunes, and aligns all components as a single integrated system. As a result, customers can go from brittle generic chatbots to highly accurate, specialized AI applications, with improvements of over 4x compared to the baseline.
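Contextual AI has not published RAG 2.0’s training recipe, but the core idea of optimizing retriever and generator as one system can be illustrated in the spirit of the original RAG formulation: marginalize the generator’s likelihood over retrieved passages so a single loss trains both components. The toy Retriever and Generator modules below are placeholders for illustration only, not Contextual AI’s architecture.

```python
# A hedged sketch of joint, end-to-end RAG training in PyTorch.
# One loss marginalizes over retrieved docs, so gradients reach the
# retriever as well as the generator. All modules are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, N_DOCS, K = 100, 32, 8, 4

class Retriever(nn.Module):
    def __init__(self):
        super().__init__()
        self.q_enc = nn.Embedding(VOCAB, DIM)                  # trainable query encoder
        self.doc_emb = nn.Parameter(torch.randn(N_DOCS, DIM))  # trainable doc index

    def forward(self, query_ids):
        q = self.q_enc(query_ids).mean(dim=1)        # (B, DIM)
        scores = q @ self.doc_emb.T                  # (B, N_DOCS)
        top = scores.topk(K, dim=-1)
        # Return retrieved doc ids and log p(doc | query) over the top-K.
        return top.indices, F.log_softmax(top.values, dim=-1)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(N_DOCS, DIM)  # placeholder: embeds retrieved doc ids
        self.out = nn.Linear(DIM, VOCAB)

    def log_likelihood(self, doc_ids, target_ids):
        logits = self.out(self.emb(doc_ids))         # (B, K, VOCAB)
        logp = F.log_softmax(logits, dim=-1)
        # log p(answer token | doc), one value per retrieved doc.
        idx = target_ids[:, None, None].expand(-1, K, -1)
        return logp.gather(-1, idx).squeeze(-1)      # (B, K)

retriever, generator = Retriever(), Generator()
opt = torch.optim.Adam([*retriever.parameters(), *generator.parameters()], lr=1e-3)

query = torch.randint(0, VOCAB, (2, 5))   # toy batch of query token ids
target = torch.randint(0, VOCAB, (2,))    # toy single-token answers

doc_ids, log_p_doc = retriever(query)
log_p_ans = generator.log_likelihood(doc_ids, target)
# Marginalize: loss = -log sum_d p(d | q) * p(answer | d).
loss = -torch.logsumexp(log_p_doc + log_p_ans, dim=-1).mean()
loss.backward()  # gradients flow into BOTH retriever and generator
opt.step()
```

Because the retrieval distribution appears inside the loss, training can reshape what gets retrieved, not just how retrieved text is used, which is what distinguishes a jointly optimized system from a frozen, stitched-together pipeline.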

“Training large-scale AI models in the cloud requires a modern data management solution that can deliver high GPU utilization and accelerate the wall clock time for model development.”

Amanpreet Singh, CTO & co-founder of Contextual AI

“With the WEKA Data Platform, we now have the robust data pipelines needed to power next-gen GPUs and build state-of-the-art generative AI solutions at scale. It works like magic to turn fast, ephemeral storage into persistent, affordable data.”

Amanpreet Singh, CTO & co-founder of Contextual AI

Case Study

Building Production-Ready Enterprise AI

Learn more about Contextual AI and WEKA in Google Cloud