WEKA
Meet WEKApod™

Unlock the full potential of the WEKA Data Platform in a turnkey appliance. WEKApod delivers best-in-class performance and density for AI, GPU, and other data-intensive workloads, driving faster training cycles, reduced time to convergence, and higher throughput.

Scalable, high-performance data infrastructure for every AI project need

Whether you’re starting small, training complex models, running inference workloads, or scaling large GPU environments, WEKApod provides the ideal infrastructure for your AI needs. It ensures seamless data mobility across cloud, edge, and on-premises environments, delivering faster time to insight and more efficient AI workflows at lower costs.

The WEKApod Data Platform Appliance

Explore WEKApod, a high-performance, scalable appliance designed to improve model performance, reduce time to convergence, increase throughput, accelerate time to first token, and maximize GPU utilization for AI and data-intensive workloads.

WEKApod Prime:
AI-Ready Performance at a Competitive Price

WEKApod Prime is the perfect solution for organizations that need to balance high performance and cost for initial or smaller projects. Designed for AI workloads that require fast, scalable, and flexible storage, WEKApod Prime delivers the power you need while future-proofing your infrastructure for evolving demands.

Exceptional Price/Performance

Get class-leading performance for AI and mixed file workloads while maximizing the value of your investment.

Scalable and Flexible

WEKApod Prime starts at lower capacities, allowing you to start small and scale seamlessly as your workloads grow.

WEKApod Nitro:
Certified for NVIDIA DGX SuperPOD

WEKApod Nitro is our NVIDIA DGX SuperPOD certified solution designed for organizations that need to meet the most demanding AI and machine learning requirements. Built to support GPU-intensive environments, WEKApod Nitro comes with a full certification for NVIDIA DGX SuperPOD, ensuring you have the infrastructure required for top-tier AI model training and deployment.

NVIDIA DGX SuperPOD Certified Performance

Full compliance with the NVIDIA DGX SuperPOD certification means that WEKApod Nitro is ready to power large-scale, AI-driven projects right out of the box.

Optimized for Large GPU Environments

Purpose-built for environments that demand maximum GPU performance, providing unmatched speed and reliability for your AI workloads.

A Quantum Leap in Enterprise AI Performance

The WEKApod data platform appliances deliver industry-leading performance and unmatched performance density, scaling effortlessly to meet the demands of AI, GenAI, and other data-intensive environments.

Efficient High Performance for AI Workloads

An 8-node, 1U rack-dense WEKApod configuration delivers exceptional performance, improving GPU and workload processing efficiency, optimizing rack space utilization, and reducing idle energy consumption and carbon emissions. The result is lower cost and power consumption, helping organizations meet their sustainability goals and improve their bottom line.

Extend Enterprise AI Workloads to the Cloud

The WEKA Data Platform runs in all major clouds, offering customers a choice of hyperscaler clouds and several emerging GPU infrastructure-as-a-service clouds. WEKApod uses WEKA Data Platform software to seamlessly connect on-premises AI workloads to hyperscale and GPU cloud environments, so customers can use the cloud or clouds of their choice for backup, archiving, and hybrid cloud workflows.

Offering the Ultimate Choice in Deployment Options

The WEKA Data Platform gives customers the flexibility to deploy data-intensive AI projects at scale wherever they want – on-premises and in the cloud – using a broad selection of major server vendors and public cloud marketplaces. With WEKApod, organizations can consume the WEKA Data Platform as a fully integrated, turnkey data platform appliance.

WEKApod Technical Specifications

All configurations start with a minimum of 8 servers and scale to hundreds of servers.

Prime 120
  • Initial Performance: 120 GB/s read BW, 32 GB/s write BW, 3.6 million IOPS
  • Initial Capacity: 0.4 PB usable, PCIe Gen4 pTLC

Prime 140
  • Initial Performance: 200 GB/s read BW, 56 GB/s write BW, 6.0 million IOPS
  • Initial Capacity: 0.7 PB usable, PCIe Gen4 pTLC

Prime 160
  • Initial Performance: 320 GB/s read BW, 96 GB/s write BW, 12.0 million IOPS
  • Initial Capacity: 1.4 PB usable, PCIe Gen4 pTLC

Nitro 150
  • Initial Performance: 720 GB/s read BW, 186 GB/s write BW, 18.3 million IOPS
  • Initial Capacity: 0.5 PB usable, PCIe Gen5 TLC

Nitro 170
  • Initial Performance: 720 GB/s read BW, 186 GB/s write BW, 18.3 million IOPS
  • Initial Capacity: 1.0 PB usable, PCIe Gen5 TLC

Start solving the big problems