Conferences and supporting programme
Efficient Workflow for Designing, Training and Deploying Deep Learning Models with MATLAB
Deep learning is revolutionizing industries across the world and introducing new challenges in the development of artificial intelligence systems: efficient tools are needed to manage and label large datasets, the training process demands high computational performance and hyperparameter tuning, and trained architectures must finally be deployed on high-performance hardware to deliver the near real-time inference that most safety-critical applications require.

This paper introduces a MATLAB-based framework for the efficient design, training and deployment of neural networks. We show tools for handling large datasets and for semi-automated ground-truth labeling, as well as techniques for developing and debugging neural networks. Functionality for inspecting internal structures and weights makes it possible to understand the role of each layer in the architecture and how the data are transformed during training. Pre-trained networks can be imported directly into MATLAB and re-purposed for new applications using new data; this process, known as transfer learning, greatly reduces both the training time and the amount of data required. Neural networks can also be developed from scratch and then shared with ONNX-compatible frameworks. We first show how models can be trained easily on high-performance GPUs, either locally or in the cloud, and then how efficient, embeddable native CUDA code can be generated automatically, enabling rapid prototyping.
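As a brief illustration of the transfer-learning and deployment workflow the abstract describes, the following MATLAB sketch re-purposes a stock pre-trained network and exports the result to ONNX. It is an assumed example, not code from the talk: the dataset path and the number of classes are placeholders, while the layer names match the GoogLeNet model shipped with Deep Learning Toolbox.

```matlab
% Transfer-learning sketch (assumed example, not the authors' code).
net = googlenet;                                    % import a pre-trained network
lgraph = layerGraph(net);

% Re-purpose the final layers for a new task with numClasses categories.
numClasses = 5;                                     % placeholder class count
lgraph = replaceLayer(lgraph, 'loss3-classifier', ...
    fullyConnectedLayer(numClasses, 'Name', 'fc_new'));
lgraph = replaceLayer(lgraph, 'output', ...
    classificationLayer('Name', 'out_new'));

% Label images semi-automatically from folder names and train on a GPU.
imds = imageDatastore('dataset', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');                  % 'dataset' is a placeholder path
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'ExecutionEnvironment', 'gpu');                 % local GPU; cloud clusters also work
trainedNet = trainNetwork(imds, lgraph, options);

% Share the result with ONNX-compatible frameworks.
exportONNXNetwork(trainedNet, 'model.onnx');
```

From here, GPU Coder can generate embeddable native CUDA code for the trained network, which is the deployment step the abstract refers to.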
--- Date: 27.02.2019 Time: 14:30 - 15:00 Location: Conference Counter NCC Ost