SHARK A.I. – High Performance 10-GPU Server for Machine Learning, Deep Learning and Artificial Intelligence (AI)
Artificial intelligence (AI) is no longer only an academic subject; it is moving fast into the real world, with applications in facial recognition, robotics, advanced analytics, disease prevention, and smart-city construction. This groundbreaking scientific progress calls for faster machine-learning (ML) and deep-learning (DL) training, and the growing adoption of GPUs is meeting the demand for tremendous computing power.
SHARK A.I. is a carrier-grade, multi-purpose platform designed for edge applications. Combining a server node with a PCIe expansion box that provides PCIe switching, the system can host up to ten NVIDIA® GPUs, depending on the needs of the application. As a configurable edge platform for diverse workloads, the SHARK A.I. supports multiple topologies and bandwidths between GPUs and CPUs with simple cable-routing adjustments. Moreover, InfiniBand support allows it to scale out easily to multiple GPU clusters.
Framework Flexibility for Various AI Applications
The SHARK A.I. Server supports both single and dual root complexes for a range of AI applications. For deep-learning applications, a single root complex can devote the entire GPU cluster to large data-training jobs while the CPU handles small tasks; for machine-learning applications, a dual root complex can allocate more work to the CPUs and distribute fewer data-training jobs among the GPUs. This flexible framework makes the SHARK A.I. a highly adaptable AI platform. A switching option that maps specific PCIe lanes of the GPUs to specific I/O and CPU cores further improves the overall flow of information to and from multiple virtualized applications. This gives developers a broad range of configurability and manageability options without racking and stacking additional systems that consume valuable space, power, and cooling.
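The single- vs. dual-root-complex trade-off above can be illustrated with a small sketch. This is not vendor software, and the function name is hypothetical; it simply models how ten GPUs might be distributed across one or two PCIe root complexes, assuming a plain round-robin assignment:

```python
def assign_gpus(num_gpus: int, root_complexes: int) -> dict:
    """Round-robin GPU indices across the given number of PCIe root complexes.

    Illustrative only: real topology depends on the PCIe switch wiring.
    """
    topology = {rc: [] for rc in range(root_complexes)}
    for gpu in range(num_gpus):
        topology[gpu % root_complexes].append(gpu)
    return topology

# Single root complex: all ten GPUs sit behind one CPU, suiting large
# deep-learning training jobs with heavy GPU-to-GPU traffic.
single = assign_gpus(10, 1)

# Dual root complex: GPUs split across two CPUs, suiting machine-learning
# workloads that lean more heavily on the CPUs.
dual = assign_gpus(10, 2)

print(single)  # {0: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}
print(dual)    # {0: [0, 2, 4, 6, 8], 1: [1, 3, 5, 7, 9]}
```

On real hardware, the effective topology could be inspected rather than modeled, but the sketch captures the allocation choice the two modes represent.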
Flexible System Built for Edge High-Performance Computing (EHPC)
Moving high-compute systems to the edge for GPU acceleration is critical for solutions that must optimize high-performance computing (HPC) applications and remote virtualization. The SHARK A.I. Server brings cloud-scale flexibility and agility to the edge: it supports different head nodes and lets operators choose the number of GPUs per virtual machine (VM). It is an ideal hardware system that can support a wide variety of configurations through software.
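The "GPUs per VM" freedom described above amounts to carving a fixed GPU pool into per-VM groups. The following sketch is a hypothetical illustration (the function name and error handling are assumptions, not part of any SHARK A.I. software) of how such a partition might be expressed:

```python
def partition_gpus(total_gpus: int, gpus_per_vm: list) -> list:
    """Carve a pool of GPU indices into contiguous per-VM groups.

    Raises ValueError if the VMs request more GPUs than the chassis holds.
    """
    if sum(gpus_per_vm) > total_gpus:
        raise ValueError("requested more GPUs than the chassis provides")
    groups, next_gpu = [], 0
    for count in gpus_per_vm:
        groups.append(list(range(next_gpu, next_gpu + count)))
        next_gpu += count
    return groups

# Example: ten GPUs split 4 + 4 + 2 across three VMs.
vms = partition_gpus(10, [4, 4, 2])
print(vms)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

In practice the grouping would be handled by the hypervisor's PCIe passthrough or vGPU configuration; the sketch only shows the allocation decision the platform leaves open.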
Rugged and Carrier-Grade for Reliability and Serviceability
Designed for reduced OPEX and high system reliability, the SHARK A.I. is fully hot-swappable and front-accessible, with 5+1 redundant fan modules and 3+1 redundant hot-swappable power supplies. The server node can be removed from the front of the chassis, and GPU cards can be installed easily after removing the top cover. The SHARK A.I. promotes efficient serviceability while delivering optimal performance.