ELEC 470 Computer System Architecture Units: 3.50
This course covers advanced topics in computer architecture with a quantitative perspective. Topics include: instruction set design; memory hierarchy design; instruction-level parallelism (ILP), pipelining, superscalar processors, hardware multithreading; thread-level parallelism (TLP), multiprocessors, cache coherency; clusters; introduction to shared-memory and message-passing parallel programming; data-level parallelism (DLP), GPU architectures.
(Lec: 3, Lab: 0, Tut: 0.5)
Offering Term: W
CEAB Units:
Mathematics 0
Natural Sciences 0
Complementary Studies 0
Engineering Science 11
Engineering Design 31
Offering Faculty: Smith Engineering
Course Learning Outcomes:
- Understand quantitative design and analysis of computing systems, as well as instruction set design for RISC architectures.
- Describe the concepts of hierarchical memory subsystems, including multi-level caches, advanced optimization techniques and integration with pipelined processors, as well as virtual memory.
- Understand trade-offs in processor design, including pipelined processors, in-order vs. out-of-order execution, branch prediction techniques, and different cache and TLB organizations, and explore them through simulation.
- Describe software multithreading and multicore computing by writing parallel programs using shared-memory and message-passing programming models (a shared-memory sketch follows this list).
- Analyze/design a single-issue pipelined datapath and control unit, recognize how structural, data, and control hazards can affect performance, and understand how they can be handled statically at compile time or dynamically at runtime.
- Analyze/design advanced instruction-level parallelism (ILP) techniques, including multiple-issue pipelined processors with static scheduling (VLIW), dynamic scheduling and speculation (superscalar), and hardware multithreading.
- Analyze/design multicore architectures and shared-memory multiprocessors, with a focus on thread-level parallelism (TLP) and snooping and directory-based cache coherency protocols, as well as data-level parallelism (DLP) and GPU architectures.
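As an illustrative sketch of the shared-memory programming model named in the outcomes above (assuming POSIX threads in C; the course may use a different toolkit, and the array size and thread count here are arbitrary example choices), a parallel array sum with per-thread partial results looks like this:

#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4             /* arbitrary thread count for the example */

static double data[N];
static double partial[NTHREADS];

/* Each thread sums its own contiguous slice of the shared array. */
static void *sum_slice(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS);
    long hi = (id == NTHREADS - 1) ? N : lo + (N / NTHREADS);
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;           /* one slot per thread: no write conflicts */
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];
    for (long i = 0; i < N; i++)
        data[i] = 1.0;         /* dummy data so the expected sum is N */

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, sum_slice, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial[t];   /* combine after joining: no data race */
    }
    printf("sum = %f\n", total);
    return 0;
}

All threads read the same shared array directly (compile with -pthread); in a message-passing model such as MPI, each process would instead hold its own slice and exchange partial sums explicitly.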