A16 - ASCI Winterschool on Efficient Deep Learning
|Year|Expected winter 2023|
On Thursday, Nov 25, 2021 we concluded our Winter School on Efficient Deep Learning. We look back on a successful winter school with very good presentations and plenty of discussion and interaction. The speakers' presentations can be found via the links below; the missing presentations will be added as soon as possible.
Henk Corporaal / Opening and intro to the course
Jan van Gemert / Intro DL
Floran de Putter / Optimizing neural networks for inference: quantization from float32 to int8
Floran de Putter / Optimizing neural networks for inference: extreme quantization beyond int8
Pascal Mettes / Hyperbolic deep learning
Henk Corporaal – Floran de Putter / Optimizing neural networks for inference: exploiting data reuse
Hyperparameter Tuning for Minimum Overhead and Optimal Inference Experience
Willem Sanberg – Sebastian Vogel – Hiram Rodriguez / Challenges of scaling and applying NAS for efficient NNs without multipliers and ML-enabled RISC-V for NNs
Gigapixel Semantic Segmentation for Histopathology
Introduction to Edge AI Systems
Neuromorphic Computing with the μBrain Chip
Jan van Gemert
Update: The provisional schedule for the A16 Winter School on Efficient Deep Learning has been released!
Please have a look at it here.
Update: Maximum number of participants reached.
The maximum number of participants has been reached. If you want to be placed on the reserve list, please send an e-mail to email@example.com. Note that only ASCI or EDL members can apply.
Location: Kasteel Oud-Poelgeest, link to website.
This winter school is co-organized with, and joined by, EDL (an NWO perspective program on Efficient Deep Learning). Please be advised that only a limited number of places are available.
Machine learning has numerous important applications in intelligent systems across many areas, such as automotive, avionics, robotics, healthcare, well-being, and security. Recent progress in machine learning, and particularly in Deep Learning (DL), has dramatically advanced the state of the art in object detection, classification, and recognition, and in many other domains. Whether it is superhuman performance in object recognition or beating human players at Go, the astonishing success of DL is achieved by deep neural networks. However, the complexity of DL networks for many practical applications can be enormous, and processing them may demand substantial computing effort and excessive energy consumption. Training requires huge data sets, making it orders of magnitude more compute-intensive than the already very demanding inference phase. A new development is to move intelligence from the cloud to the IoT edge; this further stresses the need to tame the complexity of DL and deep neural networks.
This joint ASCI-EDL winter school covers various topics on reducing the complexity of DL, including:
- Architectural and Hardware accelerator support for DL, with emphasis on energy reduction, computation efficiency and/or computation flexibility, both for inference and/or for learning;
- Spiking and brain-inspired neural networks and their implementation;
- Efficient mapping of DL applications to target architectures, including many-core, GPGPU, SIMD, FPGA, and HW accelerators;
- Exploiting temporal and spatial data reuse, sparsity, quantization and approximate computing, dynamic neural networks, and other methods, to decrease the complexity and energy demands of DL;
- Efficient learning approaches, including data reduction, online learning, and quality of learning;
- Tools, Frameworks and High-level programming language support for DL;
- NAS: Neural Architecture Search, including Hardware aware NAS;
- Advanced applications exploiting DL.
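As a small illustration of one of these topics, the sketch below shows symmetric per-tensor quantization from float32 to int8, of the kind the list above (and the inference-optimization talks) refer to. This is a minimal sketch using NumPy; the function names are illustrative and not taken from any speaker's material.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map float32 values to int8
    using a single scale derived from the maximum absolute value."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight tensor and check the rounding error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# With the scale chosen from the max magnitude, nothing is clipped,
# so the error is bounded by half a quantization step (scale / 2).
assert np.abs(w - w_hat).max() <= s / 2 + 1e-6
```

The per-tensor scale keeps the scheme hardware-friendly (one multiplier per tensor); finer-grained per-channel scales, covered in the quantization lectures, trade a little extra bookkeeping for lower error.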
The above topics will be treated by experts from the Netherlands and abroad.
Required background: Basic knowledge of deep learning and computer architecture.
|ASCI students can get 5 ECTS credits for this course. To get these credits they have to complete a lab/research study related to one or more of the treated topics.|