Research at the intersection of Machine Learning, High-Performance Computing and Hardware

The Hardware and Artificial Intelligence (HAWAII) Lab (previously: Computing Systems Lab) at the Institute of Computer Engineering at Ruprecht-Karls University of Heidelberg focuses on vertically integrated research (thus considering the complete computing system) that bridges demanding applications such as machine learning (ML), artificial intelligence (AI), high-performance computing (HPC) and high-performance data analytics (HPDA) with various forms of specialized computer hardware.

Group photo: HAWAII Lab, located at Im Neuenheimer Feld 368

Today, research in computing systems is most concerned with specialized forms of computing in combination with seamless integration into existing systems. Specialized computing, for instance based on GPUs (as known from gaming), FPGAs (field-programmable gate arrays), or ASICs (not the shoe brand, but application-specific integrated circuits), is motivated by diminishing returns from CMOS technology scaling and hard power constraints. Notably, for a given fixed power budget, energy efficiency defines performance: performance [ops/s] = power [W] × energy efficiency [ops/J].

As energy efficiency is usually improved by using specialized architectures (processor, memory, network), our research is geared towards bringing emerging technologies and architectures to demanding applications.
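As a brief, illustrative calculation (the numbers are assumptions chosen for this sketch, not measurements from the lab): since 1 W = 1 J/s, a fixed power budget multiplied by the achieved energy efficiency directly bounds the attainable performance,

\[
\underbrace{300\,\mathrm{W}}_{\text{power budget}} \times \underbrace{50\,\tfrac{\mathrm{GFLOP}}{\mathrm{J}}}_{\text{energy efficiency}} = 15\,000\,\tfrac{\mathrm{GFLOP}}{\mathrm{s}} = 15\,\mathrm{TFLOP/s},
\]

so at a fixed power budget, higher performance can only come from improving energy efficiency.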

Particular research fields include:

  • Embedded Machine Learning: bringing state-of-the-art DNNs to resource-constrained embedded devices, as well as embedding DNNs in the real world, which requires a treatment of uncertainty
  • Advanced hardware architecture and technology: understanding specialized forms such as GPU and FPGA accelerators, analog electrical and photonic processors, as well as resistive memory

To close the semantic gap between demanding applications and the various specializations of hardware, we are most concerned with creating abstractions, models, and associated tools that facilitate reasoning about various optimizations and decisions. Overall, this results in vertically integrated approaches to fast and efficient ML, HPC, and HPDA.

We gratefully acknowledge the generous sponsorship we receive. Current and recent sponsors include DFG, Carl-Zeiss Stiftung, FWF, SAP, Helmholtz, BMBF, NVIDIA, and XILINX.

On this website you will find information about our team members, research projects, publications, teaching, and tools. For administrative questions, please contact Andrea Seeger; for research and teaching questions, Holger Fröning.

We frequently organize workshops on topics of interest (see events under resources) and are happy to advise undergraduate and graduate students (see student work on master theses and bachelor theses).

Latest news

Teaching offering for winter 2025/26 is now online!

Workshop on Architectures for Resource and Energy Efficiency in LLMs for Future HPC and Data Centres!

Workshop on “Architectures for Resource and Energy Efficiency in LLMs for Future HPC and Data Centres” accepted at HiPEAC2026! Stay tuned for updates!

Annoyed by the noise in your analog computer?

New article on “Variance-Aware Noisy Training: Hardening DNNs against Unstable Analog Computations” accepted for publication at ECML2025! Read more here

Nature Computational Science article accepted for publication!

New joint article in Nature Computational Science with the Pernice Lab on “Probabilistic photonic computing for AI”! Read more here!

Invited talk on Green ML and Bayesian Machines at Heidelberg-Chile Workshop!

Invited talk on “Green Machine Learning by Accelerating Deep Neural Architectures” and “Bayesian Machines: Unlocking the Potential of Bayesian Neural Networks for Enhanced Uncertainty Reasoning” at the 1st Heidelberg-Chile Workshop on Scientific Computing, Santiago, Chile, March 25-28!

Older news can be found in the News Archive.