NVIDIA and Oracle to Accelerate AI and Data Processing for Enterprises
At the Oracle CloudWorld conference today, Oracle Cloud Infrastructure announced the first zettascale OCI Supercluster, accelerated by NVIDIA Blackwell.
Enterprises are looking for increasingly powerful compute to support their AI workloads and accelerate data processing. The efficiency gained can translate to better returns on their investments in AI training and fine-tuning, and to improved user experiences for AI inference.
At the Oracle CloudWorld conference today, Oracle Cloud Infrastructure (OCI) announced the first zettascale OCI Supercluster, accelerated by the NVIDIA Blackwell platform, to help enterprises train and deploy next-generation AI models using more than 100,000 of NVIDIA’s latest-generation GPUs.
OCI Superclusters allow customers to choose from a wide range of NVIDIA GPUs and deploy them anywhere: on premises, in the public cloud or in sovereign clouds. Set for availability in the first half of next year, the Blackwell-based systems can scale up to 131,072 Blackwell GPUs with NVIDIA ConnectX-7 NICs for RoCEv2 or NVIDIA Quantum-2 InfiniBand networking to deliver an astounding 2.4 zettaflops of peak AI compute to the cloud. (Read the press release to learn more about OCI Superclusters.)
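For a rough sense of what those figures imply per accelerator, and assuming the 2.4-zettaflop number refers to low-precision peak throughput (such as sparse FP4, the way peak AI compute is typically quoted; the precision is not specified here), a back-of-the-envelope division gives:

\[
\frac{2.4 \times 10^{21}\ \text{FLOPS}}{131{,}072\ \text{GPUs}} \approx 1.8 \times 10^{16}\ \text{FLOPS} \approx 18\ \text{petaflops per GPU}
\]

This is consistent with the low-precision throughput NVIDIA cites for individual Blackwell GPUs.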
At the show, Oracle also previewed NVIDIA GB200 NVL72 liquid-cooled bare-metal instances to help power generative AI applications. The instances are capable of large-scale training with Quantum-2 InfiniBand and real-time inference of trillion-parameter models within the expanded 72-GPU NVIDIA NVLink domain, which can act as a single, massive GPU.
This year, OCI will offer NVIDIA HGX H200 — connecting eight NVIDIA H200 Tensor Core GPUs in a single bare-metal instance via NVLink and NVLink Switch, and scaling to 65,536 H200 GPUs with NVIDIA ConnectX-7 NICs over RoCEv2 cluster networking. The instance is available to order for customers looking to deliver real-time inference at scale and accelerate their training workloads. (Read a blog on OCI Superclusters with NVIDIA B200, GB200 and H200 GPUs.)
OCI also announced general availability of NVIDIA L40S GPU-accelerated instances for midrange AI workloads, NVIDIA Omniverse and visualization. (Read a blog on OCI Superclusters with NVIDIA L40S GPUs.)
From single-node to multi-rack deployments, Oracle’s edge offerings provide scalable, NVIDIA GPU-accelerated AI even in disconnected and remote locations. For example, smaller-scale deployments with Oracle’s Roving Edge Device v2 will now support up to three NVIDIA L4 Tensor Core GPUs.