ASUS unveils powerful, cost-effective AI servers based on modular design – CIO


For successful AI deployments, IT leaders need more than the latest GPU/CPU silicon; they also need artificial intelligence (AI) servers that provide a solid foundation. That means hardware designed from the ground up for maximum performance, data center integration, AI development support, optimal cooling, and easy vertical and horizontal scaling.

ASUS’ collaboration with AI chip leader NVIDIA makes this all possible. IT leaders attending NVIDIA’s GTC 2024 AI developer conference (March 18-21, 2024, in San Jose, CA) can explore these capabilities with ASUS, one of the global leaders in high-performance AI servers based on NVIDIA’s MGX server reference architecture. That architecture lets ASUS servers exploit the latest NVIDIA advances in GPUs, CPUs, NVMe storage, and PCIe Gen5 interfaces.

ASUS has adopted NVIDIA’s MGX server reference architecture to develop a new line of ASUS AI and high-performance computing servers designed for accelerated computing. MGX is a modular architecture that can be applied in an array of server configurations, with a mix of GPU, DPU, and CPU options, to address specific workloads. This allows ASUS to fully exploit the most advanced NVIDIA technologies, including the Grace Hopper Superchip and the Grace CPU Superchip, and NVIDIA’s NVLink-C2C, a direct chip-to-chip interconnect that scales multi-GPU input/output (IO) within the server.

ASUS MGX servers easily integrate with enterprise and cloud data centers.

The MGX architecture, with NVIDIA’s chips, is just the starting point. ASUS optimizes these servers with three key capabilities.

1. Performance boost technology

ASUS developed three capabilities to further improve processor performance.

The Core Optimizer maximizes processor frequency during multi-core operations while minimizing frequency jitter across all cores. The result: reduced latency.

Engine Boost is an ASUS-created voltage design that enables automatic power acceleration. The result: overall server TFLOPS performance improves by up to 24.5% (as benchmarked by MP LINPACK on an ASUS ESC8000 G4 server). This feature also enables a CPU to remain at maximum frequency even when it exceeds its thermal design power, the maximum amount of heat the cooling system is designed to dissipate.

Workload Presets are ASUS-authored BIOS server profiles. These include preconfigured settings for varying workloads and benchmarks. Workload presets match the profiles with specific applications to improve performance.

2. Advanced cooling options

Liquid cooling has become the energy-efficient, scalable solution to AI systems’ high heat output. Liquid cooling’s much higher thermal efficiency improves the data center’s power usage effectiveness (PUE) ratio by reducing the demand for conventional air conditioning.
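The PUE ratio mentioned above is simply total facility power divided by IT equipment power, so cutting cooling overhead pushes it toward the ideal of 1.0. A minimal sketch of that arithmetic, using illustrative numbers that are assumptions rather than ASUS or NVIDIA figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to compute).
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical air-cooled data center: heavy air-conditioning overhead.
air_cooled = pue(total_facility_kw=1500, it_equipment_kw=1000)     # 1.5

# Hypothetical liquid-cooled data center: same IT load, less cooling draw.
liquid_cooled = pue(total_facility_kw=1200, it_equipment_kw=1000)  # 1.2
```

In this illustration, moving the same 1,000 kW IT load to liquid cooling drops facility overhead from 500 kW to 200 kW, improving PUE from 1.5 to 1.2.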

ASUS crafted three cooling options: efficient air systems, direct-to-chip liquid cooling, and full immersion cooling. By enabling greater computational density for servers, these options reduce operational costs while sustaining maximum AI system performance.

3. Optimized software

ASUS servers include a no-code AI platform with a complete in-house AI software stack. The software enables any business to accelerate AI development across large language model (LLM) pre-training, fine-tuning, and inference with lower risk and a faster ramp-up. For many customers, regardless of business size, the platform minimizes or even eliminates the need to start from scratch.

To learn more, visit

