Published
October 28, 2025

One-click NVIDIA AI Data Platform (AIDP) with PaletteAI and WEKA

Pedro Oliveira
Sr. Solution Architect

Building an AI data platform fit for enterprise

While the latest GPU benchmarks get all the limelight in the media, real enterprise AI architects know that when you’re building out AI infrastructure, you need more than just compute. 

You need to take full advantage of the latest networking fabrics, using DPUs and SmartNICs alongside high-performance Ethernet platforms like NVIDIA Spectrum-X.

And you need a data infrastructure too: one that can feed GPUs at line-rate speeds, scaling seamlessly across hybrid and edge environments, while remaining simple enough for teams to deploy and operate.

In this blog we’ll introduce you to the NVIDIA AI Data Platform (AIDP), and show you how Spectro Cloud is working with NVIDIA-certified data platform vendor WEKA to bring the AIDP reference design to life at scale and speed.

NVIDIA AI Data Platform: what is it and why does it matter?

Throughput is the key to modern AI. Today’s AI pipelines require retrieval-augmented generation, vector search, multi-modal ingestion, and distributed training — all of which depend on large amounts of data being moved, indexed, and processed at high speed and with low latency. In traditional architectures, GPUs often sit idle while data is staged from slower storage systems. Storage becomes the bottleneck.

NVIDIA’s AI Data Platform (AIDP) is a customizable reference design that sets out to solve this challenge. It provides a unified foundation for AI data pipelines, tightly integrating storage, networking, and compute into a single data fabric that delivers extremely high throughput and ensures that GPUs are never starved for data. 

The AIDP reference design calls for technologies such as GPUDirect Storage (GDS), NVLink, and RDMA-capable fabrics, so that data moves from storage directly into GPU memory with virtually no CPU intervention or added latency.

For enterprise and research environments alike, this translates into real competitive advantages:

  • Faster model training cycles and reduced time-to-insight.
  • A single, scalable data plane that supports both training and inference.
  • Consistent performance regardless of where the workloads live: on-premises, in the cloud, or at the edge.

TL;DR: AIDP transforms enterprises’ data infrastructure from a limiting factor into an accelerator for AI innovation.

WEKA NeuralMesh Axon: data infrastructure for your AIDP

WEKA is one of the certified storage vendors working with NVIDIA to build AIDP in the real world. Its NeuralMesh Axon storage solution delivers the high-performance, low-latency data access that modern AI workloads require. Built on a unique software-defined microservices architecture, Axon provides near-linear scalability and ultra-low latency for I/O-intensive operations.

Axon leverages WEKA’s Augmented Memory Grid™ to offload key-value (KV) cache and metadata operations from GPU memory, freeing up capacity for computation while reducing the need for oversized memory footprints. This helps ensure that large language models and distributed training frameworks operate at peak efficiency, regardless of scale.

WEKA’s architecture also supports hybrid and edge topologies, allowing enterprises to extend AIDP deployments closer to where data is generated. This capability is crucial for use cases like autonomous vehicles, healthcare imaging, and real-time analytics.

PaletteAI: One-click deployment for AI data platforms

NVIDIA and WEKA deliver the building blocks for a high-performance AIDP. Spectro Cloud PaletteAI makes your AIDP practical to deploy and manage in the real world.

PaletteAI’s declarative approach to provisioning and configuration takes care of deploying all the components you need in your AI infrastructure, based on Kubernetes. 

This includes the operating system on each host, Kubernetes itself, networking, storage, NVIDIA drivers and all the required Operators. With PaletteAI, complex infrastructure stacks such as a WEKA-based AIDP can be pre-validated and deployed with a single click. Each cluster will have the WEKA operator, WEKA CSI (container storage interface) plug-in, secrets and configuration manifests deployed consistently.
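As an illustrative sketch of what such a declarative profile could look like, the manifest below bundles the layers described above into one deployable unit. The API group, kinds, layer names, and field names here are hypothetical, invented for this example — they are not PaletteAI’s actual schema.

```yaml
# Hypothetical cluster profile sketch. All names and fields are
# illustrative, not PaletteAI's real API.
apiVersion: example.spectrocloud.com/v1
kind: ClusterProfile
metadata:
  name: aidp-weka-baseline
spec:
  layers:
    - name: os                  # host operating system for each node
      pack: ubuntu-22.04
    - name: kubernetes
      pack: k8s-1.30
    - name: networking
      pack: spectrum-x-cni      # RDMA-capable fabric configuration
    - name: gpu
      pack: nvidia-gpu-operator # NVIDIA drivers and device plugin
    - name: storage
      pack: weka-operator       # WEKA operator plus CSI plug-in
      values:
        csi:
          storageClass: weka-fs
        secretsRef: weka-credentials  # pre-created credentials secret
```

The point of bundling everything into one declarative artifact is that the whole stack is versioned and validated together, so every cluster stamped from the profile receives an identical WEKA operator, CSI driver, and configuration.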

PaletteAI is a key enabler for operationalizing AIDP across hybrid and edge environments. Its decentralized architecture allows it to manage tens of thousands of clusters from a single management plane, making it ideal for enterprises rolling out AIDPs and workloads at global scale. At the edge, PaletteAI ensures each site receives a consistent, policy-driven deployment, which is crucial for regulated industries and sovereign data environments. 

From AIDP to AI workloads

As well as provisioning the core AIDP infrastructure foundations, PaletteAI manages the provisioning and lifecycle of AI workloads through WorkloadProfiles. These are declarative configurations that model how AI platforms and workloads are deployed on a Kubernetes cluster, combining the concerns of platform engineers (security, resource allocation, and performance best practices) with those of AI engineers and data science practitioners.
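To make the split of concerns concrete, a WorkloadProfile of this kind might separate platform-owned guardrails from practitioner-owned workload settings roughly as sketched below. This is a purely illustrative fragment — the API group, kind, and every field name are assumptions for this example, not PaletteAI’s actual WorkloadProfile schema.

```yaml
# Hypothetical WorkloadProfile sketch. Names and fields are
# illustrative only, not PaletteAI's real schema.
apiVersion: example.spectrocloud.com/v1
kind: WorkloadProfile
metadata:
  name: llm-inference-approved
spec:
  platform:                    # owned by platform engineering
    security:
      podSecurityStandard: restricted
      imageRegistry: registry.internal.example.com
    resources:
      gpuClass: nvidia-h100
      gpusPerReplica: 1
  workload:                    # owned by AI practitioners
    framework: vllm
    replicas: 2
    storage:
      storageClass: weka-fs    # backed by the WEKA CSI driver
```

Splitting the spec this way lets platform teams enforce security and resource policy once, while practitioners vary only the workload section across environments.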

At its core, PaletteAI provides a shared operational model where data scientists, platform engineers, and MLOps teams can collaborate seamlessly. Platform teams retain control over AI deployments while preserving choice for their practitioner teams to run repeatable, reliable AI/ML workloads. Practitioners use a simple interface and WorkloadProfile templates to provision and deploy enterprise-approved stacks across different environments, accelerating innovation while ensuring consistency, security, and scale.

Bringing it all together

Modern AI infrastructure needs to be architected for strong performance in every respect, from GPUs to networking to data. There’s no other way to maximize utilization and throughput. NVIDIA’s AIDP reference design helps enterprises accelerate infrastructure buildouts with the right performance, and vendors like WEKA have created innovative data solutions that align closely with the AIDP reference. 

Spectro Cloud and WEKA have collaborated to make it easy to deploy and configure WEKA’s NeuralMesh Axon solution through Spectro Cloud’s PaletteAI platform, for fast, consistent exascale AIDP rollouts. 

AIDP provides the high-performance foundation. NeuralMesh Axon ensures data flows with precision. And PaletteAI makes it all operationally seamless for AI practitioners and MLOps teams through Kubernetes and automation.

To learn more about how Spectro Cloud and WEKA are simplifying AIDP initiatives, come meet us at NVIDIA GTC in Washington DC, or at KubeCon AI day in Atlanta.

Learn more

Dynamic provisioning of NVIDIA Spectrum-X Ethernet with SR-IOV and NV-IPAM on CNCF Kubernetes

Spectro Cloud PaletteAI brings Physical AI and robotics to the edge with NVIDIA Jetson Thor

Spectro Cloud PaletteAI Now Supports NVIDIA RTX PRO 6000 Blackwell Server Edition, bringing AI to every enterprise

Hardening AI Factories with Spectro Cloud’s Secure AI-Native Architecture (SAINA)

Build your own bare-metal cloud with NVIDIA DPF Zero Trust

Spectro Cloud: building trusted AI factories for government with NVIDIA