Published
October 28, 2025

Spectro Cloud PaletteAI Now Supports NVIDIA RTX PRO 6000 Blackwell Server Edition, bringing AI to every enterprise

Jeremy Oakey
CTO, Field Engineering

A new platform for the next generation of AI factories

As AI adoption moves from pilot to production, enterprises need on-premises technologies that not only offer high performance, but also fit within the available space, power and cooling footprint and are easy to deploy, reliable and manageable.

NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are built on the NVIDIA Blackwell architecture, with fifth-generation Tensor Cores and 96 GB of GDDR7 memory to power popular models for AI inference applications.

The NVIDIA RTX PRO 6000 Blackwell Server Edition GPU will help enterprises bring high-performance, on-premises AI factory capabilities into environments where space, power and cooling are limiting factors. It also supports GPU partitioning through Multi-Instance GPU (MIG) and vGPU, so multiple workloads can run simultaneously on a single card.
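
To illustrate what that partitioning looks like from the host, here is a minimal sketch, assuming the NVIDIA driver and the pynvml (nvidia-ml-py) Python bindings are installed, that enumerates GPUs and reports whether MIG mode is enabled. It is a read-only illustration, not PaletteAI tooling.

```python
import pynvml

# Query each GPU's name, memory and MIG mode via NVML (read-only).
pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        mem_gib = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 2**30
        try:
            current, _pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig = "not supported"
        print(f"GPU {i}: {name}, {mem_gib:.0f} GiB, MIG {mig}")
finally:
    pynvml.nvmlShutdown()
```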

Ready when you are

Following its launch by NVIDIA in March 2025, leading server providers announced shipping support for this GPU around August 2025. Spectro Cloud has now qualified the RTX PRO 6000 Blackwell Server Edition, and PaletteAI makes it easy to deploy it on servers from your preferred hardware OEM.

PaletteAI offers deployment options such as one-click bare metal provisioning. RTX PRO servers can also be shipped directly from several OEMs and reseller channels, already imaged and ready at power-on to connect back to the PaletteAI management platform.

This enables an appliance-like experience: the solution bundles components and technical disciplines that aren't always available locally, which lowers the technical skill required on site where the physical deployment takes place.

From GPU all the way to the top of the stack

PaletteAI’s architecture enables centralized configuration and day-2 lifecycle management, even at very large scale.

Its management capabilities support not only the NVIDIA AI Enterprise software stack, but also the underlying infrastructure’s lifecycle requirements, such as the operating system, network configuration and critical security vulnerability patching. For ease of use, PaletteAI embeds and licenses NVIDIA AI Enterprise, giving enterprises a seamless experience with NVIDIA AI software, including NVIDIA NIM inference microservices for the latest AI models (such as NVIDIA Nemotron open models) and NVIDIA NeMo for training and customization.
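
For example, once a NIM microservice is running, applications talk to it over its OpenAI-compatible HTTP API. Below is a minimal sketch, assuming the requests library and a NIM already deployed; the endpoint URL and model name are placeholders for your own environment.

```python
import requests

# Placeholders: point these at your own NIM deployment and model.
NIM_ENDPOINT = "http://nim.example.internal:8000/v1/chat/completions"
MODEL_NAME = "meta/llama-3.1-8b-instruct"

payload = {
    "model": MODEL_NAME,
    "messages": [{"role": "user", "content": "Give me three edge AI use cases for video analytics."}],
    "max_tokens": 200,
}

# NIM exposes an OpenAI-compatible chat completions API.
response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```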

There are many management options for AI workloads that claim to simplify operations, but look deeper into the deployment details and it becomes clear there are many manual steps, a long list of prerequisites to be met, and no automation to remediate missing configurations.

In contrast, PaletteAI makes it easy to upgrade software components all the way up the AI stack, making the new GPU available and fully supported.

PaletteAI changes the operating paradigm with a truly full-stack, desired-state management capability. It removes the manual steps, giving the platform engineering team an easy-to-use interface while also supporting entirely API-driven or Infrastructure-as-Code (IaC) management methods.
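
Conceptually, desired-state management boils down to declaring the full stack once and letting the platform reconcile reality against it. The toy sketch below shows the pattern only; it is not PaletteAI code, and the component names and versions are illustrative placeholders.

```python
# Toy illustration of the desired-state pattern; component versions are placeholders.
DESIRED_STATE = {
    "operating-system": "ubuntu-22.04",
    "kubernetes": "1.31.x",
    "nvidia-gpu-operator": "v25.x",
    "nvidia-ai-enterprise": "latest-supported",
}

def reconcile(observed: dict) -> list[str]:
    """Compare observed component versions against the desired state
    and return the remediation actions needed to converge."""
    actions = []
    for component, wanted in DESIRED_STATE.items():
        actual = observed.get(component)
        if actual != wanted:
            actions.append(f"upgrade {component}: {actual or 'missing'} -> {wanted}")
    return actions

# A node that drifted: older GPU operator, NVIDIA AI Enterprise not yet installed.
print(reconcile({
    "operating-system": "ubuntu-22.04",
    "kubernetes": "1.31.x",
    "nvidia-gpu-operator": "v24.x",
}))
```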

Giving AI practitioners the latest and greatest

When your business invests in new GPUs like the RTX PRO 6000 Blackwell, ultimately you want to get that power into the hands of those who build innovation and value: your data science teams and other AI practitioners.

That’s why PaletteAI doesn’t just serve platform teams; it also gives AI practitioners an easy-to-use interface focused on the AI stack. With it, your teams can deploy shared or dedicated resources backed by RTX PRO 6000 Blackwell GPUs, freeing them to experiment with new innovations while staying within the enterprise’s established governance and guardrails. Your AI teams can deploy the latest NVIDIA reference designs and models, and experiment with the newest community and open source projects, to leverage the full capabilities of the RTX PRO 6000 Blackwell Server Edition GPU.
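
Under the hood, a practitioner’s request for GPU capacity typically lands as a Kubernetes workload. Here is a minimal sketch using the official Kubernetes Python client, assuming the NVIDIA GPU Operator and device plugin are already running on the cluster; the namespace, container image tag and MIG resource name are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

# One-off pod that requests a whole GPU and runs nvidia-smi as a smoke test.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="rtx-pro-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="smoke-test",
                image="nvcr.io/nvidia/cuda:12.8.0-base-ubuntu22.04",  # placeholder tag
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    # Request a full GPU, or a MIG slice instead (profile names vary
                    # by GPU and device plugin configuration, e.g. "nvidia.com/mig-2g.24gb").
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ai-team", body=pod)  # "ai-team" is a placeholder
```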

From data center to the edge

The RTX PRO 6000 Blackwell is also a strong fit for edge AI use cases, where inferencing, physical AI and computer vision/video applications are well suited to the power and efficiency of this GPU. PaletteAI makes deploying your edge servers simple and secure, and scale is no challenge either: thousands of locations can be managed effectively from a single instance of the PaletteAI management platform.

Each new NVIDIA GPU generation, like the RTX PRO 6000 Blackwell Server Edition, is a reminder that AI is the key to limitless business transformation, but only if you can put it to work. You shouldn’t have to burn time and budget getting the technology working and keeping it running.

Deploying and operating AI at scale requires a management platform that frees you from manual bottlenecks and human error. PaletteAI gives you both speed and consistency, so you can slash time to value from months to days.

Next steps

To learn more about NVIDIA’s RTX PRO 6000 Blackwell Server Edition GPU, check out this blog. To find out how PaletteAI can accelerate your AI initiatives and to request a demo, visit palette-ai.com.

Learn more

Dynamic provisioning of NVIDIA Spectrum-X Ethernet with SR-IOV and NV-IPAM on CNCF Kubernetes

Spectro Cloud PaletteAI brings Physical AI and robotics to the edge with NVIDIA Jetson Thor

Hardening AI Factories with Spectro Cloud’s Secure AI-Native Architecture (SAINA)

Build your own bare-metal cloud with NVIDIA DPF Zero Trust

One-click NVIDIA AI Data Platform (AIDP) with PaletteAI and WEKA

Spectro Cloud: building trusted AI factories for government with NVIDIA