WEKA

Meet Center for AI Safety

Accelerating AI Safety Research at Lower Cost in the Cloud

AREAS OF FOCUS
  • Generative AI
  • Cloud
  • OCI
REGION
  • Global
The Center for AI Safety (CAIS, pronounced "case") is a San Francisco-based nonprofit that supports research and field building to promote safe and responsible artificial intelligence (AI). CAIS believes that while AI has the potential to benefit the world profoundly, many fundamental problems in AI safety have yet to be solved. CAIS's mission is to reduce societal-scale risks from AI by conducting research, building the field of AI safety, and advocating for safety standards.

The Rising Importance of AI Safety Research

CAIS-supported research spans topics central to the development of safe AI. Research on the robustness of safety guardrails in large language models (LLMs) highlights the need to prevent third-party developers from bypassing safety controls. Research on technical methods to identify and measure the tendency of LLMs to hallucinate can help make AI systems more truthful and reliable. And work on determining the extent to which AI systems act on reward signals versus ethical principles will help researchers understand when and how AI systems behave according to ethical considerations.

Artificial Intelligence Research in an Era of GPU Scarcity

AI safety researchers who want to experiment on the latest LLMs face a dilemma. Conducting relevant research requires access to the latest GPU infrastructure to run experiments that resemble real-world scenarios. However, the cost, complexity, space, and infrastructure skill sets needed to build an AI research cluster create high barriers for most AI safety researchers.

The CAIS Compute Cluster is a dedicated GPU-accelerated cluster that provides AI safety researchers with subsidized, on-demand access to state-of-the-art infrastructure for LLM training and other AI safety projects. The CAIS Compute Cluster is specifically designed for researchers working on the safety of machine learning systems and supports a diverse range of research interests and collaborators.

“We immediately saw our cloud storage costs drop by 90% when we switched to WEKA.”

Stephen Basart, R&D Lead, Center for AI Safety

“80% of the storage attached to our GPU cluster was going unused.”

Stephen Basart, R&D Lead, Center for AI Safety

“We would need a faster network to even stress the WEKA environment.”

Stephen Basart, R&D Lead, Center for AI Safety