Daniel Barley, M.Sc.
Daniel Barley is a PhD candidate in the Computing Systems Group at the Institute of Computer Engineering at Heidelberg University. He works primarily on resource-efficient deep learning, with a focus on memory consumption, data movement, and the efficient utilization of compute resources on GPUs. His work centers on the training stage of deep neural networks and to that end considers pruning and compression of input activations, which account for the vast majority of the training memory footprint.
Daniel’s work is also part of the “Model-Based AI” project, which is funded by the Carl Zeiss Foundation.
Research interests
- Hardware-efficient training of deep neural networks
- Pruning/compression
- Efficient (block-)sparse operators
- GPU architecture
Recent news (2-year horizon)
- 09/2024: Paper presentation at the Workshop on IoT, Edge, and Mobile for Embedded Machine Learning (ITEM), co-located with ECML-PKDD 2024 in Vilnius, Lithuania - “Less Memory Means smaller GPUs: Backpropagation with Compressed Activations”
- 01/2024: Paper presentation at the 6th Workshop on Accelerated Machine Learning (AccML), co-located with the HiPEAC 2024 Conference in Munich - “Compressing the Backward Pass of Large-Scale Neural Architectures by Structured Activation Pruning”
General information
- Short CV: pdf
Recent Teaching (4-year horizon)
- Winter term 2024/25
- Organizer and lecturer; undergraduate practical “Binary Hacking”
- Summer term 2024
- Organizer and lecturer; undergraduate practical “Coding for Interviews”
- Winter term 2022/23
- Teaching assistant; graduate course “Introduction to High Performance Computing”
- Summer term 2022
- Teaching assistant; graduate course “Parallel Computer Architecture”
Publications
- Less Memory Means smaller GPUs: Backpropagation with Compressed Activations. CoRR, abs/2409.11902, 2024
@article{barley2024,
  author  = {Barley, Daniel and Fr{\"{o}}ning, Holger},
  title   = {Less Memory Means smaller GPUs: Backpropagation with Compressed Activations},
  journal = {CoRR},
  volume  = {abs/2409.11902},
  year    = {2024},
  url     = {https://arxiv.org/abs/2409.11902},
}
- Compressing the Backward Pass of Large-Scale Neural Architectures by Structured Activation Pruning. CoRR, abs/2311.16883, 2023
@article{DBLP:journals/corr/abs-2311-16883,
  author     = {Barley, Daniel and Fr{\"{o}}ning, Holger},
  title      = {Compressing the Backward Pass of Large-Scale Neural Architectures by Structured Activation Pruning},
  journal    = {CoRR},
  volume     = {abs/2311.16883},
  year       = {2023},
  url        = {https://arxiv.org/abs/2311.16883},
  doi        = {10.48550/ARXIV.2311.16883},
  eprinttype = {arXiv},
  eprint     = {2311.16883},
}