FPGA for Deep Learning Inference
The FPGA-based Mustang-F100 accelerator card from ICP Deutschland is designed for industrial inference systems. It is primarily used for real-time deep learning inference, for video and image processing, and for the analysis of machine and sensor data.
The Mustang-F100 is based on the Intel® Arria® 10 GX1150 FPGA and is equipped with 8GB of on-board DDR4 RAM. Thanks to the parallelism and high configurability inherent to FPGAs, the Mustang-F100 can handle changing workloads and varying floating-point precisions. It also supports a variety of network topologies such as AlexNet, ResNet or Tiny YOLO. From classic object recognition to video and image classification to facial recognition or image segmentation, there are virtually no limits to the target application.
Thanks to the integrated Intel® Enpirion® power solution, the Mustang-F100 combines high efficiency (TDP below 60W) with high power density and performance (up to 1.5 TFLOPS). The performance of the Mustang-F100 is further enhanced by its compatibility with the Intel® OpenVINO™ toolkit, whose library functions, pre-optimized kernels and pre-trained models significantly shorten time-to-market.
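As an illustration of the OpenVINO™ workflow described above, the following Python sketch shows how an application could target an FPGA card such as the Mustang-F100 through OpenVINO's legacy Inference Engine API (the API generation contemporary with this card), using the HETERO plugin so that layers unsupported on the FPGA fall back to the CPU. The model file names are hypothetical placeholders, and the exact device naming depends on the installed OpenVINO version and FPGA plugin.

```python
# Hedged sketch: loading an IR model onto an FPGA accelerator via the
# OpenVINO legacy Inference Engine API (openvino.inference_engine).
# "model.xml"/"model.bin" are hypothetical placeholder paths.

def hetero_target(devices):
    """Prefer the FPGA with CPU fallback for unsupported layers;
    fall back to plain CPU when no FPGA plugin is available."""
    return "HETERO:FPGA,CPU" if "FPGA" in devices else "CPU"

def load_model(model_xml="model.xml", model_bin="model.bin"):
    # Import locally so the helper above stays usable without OpenVINO.
    from openvino.inference_engine import IECore  # legacy API (<= 2021.x)

    ie = IECore()
    device = hetero_target(ie.available_devices)
    net = ie.read_network(model=model_xml, weights=model_bin)
    # Compiles the network for the chosen device and returns an
    # executable network ready for exec_net.infer(...) calls.
    return ie.load_network(network=net, device_name=device)
```

The HETERO device string is the usual way to pair an FPGA with a CPU in OpenVINO, since not every layer of every topology maps onto the FPGA bitstream.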
Its low-profile form factor with dimensions of 170x68x34mm and its standard PCIe Gen3 x8 interface make the AI accelerator card easy to integrate. Assigning each card an individual ID allows several Mustang-F100 cards to be used flexibly in one inference system. The FLEX-BX and the TANK-870AI are offered as suitable inference systems. ICP supports customers in selecting and setting up the appropriate hardware.