Consistency and Coherence (COCO)
Course Overview
This course deepens knowledge in the area of parallel computing, with a focus on shared-memory architectures, as current trends indicate their increasing use. It begins with synchronization primitives (locks, barriers) and how they impact cache-coherent shared-memory architectures. Subsequent lectures cover snooping coherence, scalable coherence, the foundations of consistency models, and relaxed consistency models. From a research perspective, transactional memory, token-based coherence, and non-uniform cache architectures are of particular interest. The course concludes with a review of current CMOS trends and constraints, their implications, and a brief look at deep learning as an emerging workload.
Lecturer (current)
Contents
- Shared memory architectures
- Communication and synchronization concepts and algorithms
- Consistency models and scalable cache coherence
- Multi-/many-core and multi-threading architectures
Requirements
A solid knowledge of parallel programming principles, C, C++, OS basics, and the basics of computer architecture is recommended (e.g., from “GPU Computing”, “High Performance and Distributed Computing”, or “Advanced Computer Architecture”).
Notes
- Frequency: summer term
- Among others, the course qualifies for the following programs (please double-check the listing in heiCO and possible specialization constraints)
Next/current edition
- The next edition of this course is scheduled for summer 2025
- Course start: tbd
- Room: tbd
- Enrollment in Moodle is unrestricted; official course participation is determined via heiCO.