CS 633: Parallel Computing

Credits: 3-0-0-9
Prerequisites: Instructor’s consent

Exposure to CS330 (Operating Systems), CS422 (Computer Architecture) and CS425 (Computer Networks) is desirable.

Who can take the course:

PhD, Master's, and 3rd- and 4th-year UG students

Departments that may be interested:

CSE, EE

Course objective:

Parallel programming is ubiquitous in today's multi-core era and is used to solve many real-world scientific problems. Massive parallelism entails significant hardware and software challenges. The course is structured so that participants understand the challenges in the efficient execution of large-scale parallel applications. The assignments will be designed to strengthen understanding of parallel programming, and the course will also involve a research-based project component.

Course contents:
S. No. | Broad Title | Topics | No. of Lectures
1. | Introduction | Why parallel computing? Shared memory and distributed memory parallelism, Amdahl's law, speedup and efficiency, supercomputers. | 2
2. | Message passing | MPI basics, point-to-point communication, collective communication, synchronous/asynchronous send/recv, algorithms for gather, scatter, broadcast, reduce. | 8
3. | Parallel communication | Network topologies, network evaluation metrics, communication cost, routing in interconnection networks, static and adaptive routing, process-to-processor mapping. | 6
4. | Performance | Scalability, benchmarking, performance modeling, impact of network topologies, parallel code analysis and profiling. | 7
5. | Designing parallel codes | Domain decomposition, communication-to-computation ratio, load balancing, adaptivity, case studies: weather and material simulation codes. | 7
6. | Parallel I/O | MPI I/O algorithms, contemporary large-scale I/O architecture, I/O bottlenecks. | 6
7. | Additional topics | Job scheduling, RDMA, one-sided communication, NVM, extreme scale computing: issues and trends. | 4
Books:
  1. Peter S. Pacheco, An Introduction to Parallel Programming, Morgan Kaufmann, 2011.
  2. D. E. Culler, A. Gupta and J. P. Singh, Parallel Computer Architecture: A Hardware/Software Approach, Morgan Kaufmann, 1998.
  3. Marc Snir, Steve W. Otto, Steven Huss-Lederman, David W. Walker and Jack Dongarra, MPI: The Complete Reference, 2nd Ed., Volume 1: The MPI Core.
  4. William Gropp, Ewing Lusk and Anthony Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, 3rd Ed., MIT Press, 2014.
  5. A. Grama, A. Gupta, G. Karypis and V. Kumar, Introduction to Parallel Computing, 2nd Ed., Addison-Wesley, 2003.

Reference books:
  1. J. L. Hennessy and D. A. Patterson, Computer Architecture: A Quantitative Approach, 4th Ed., Morgan Kaufmann/Elsevier India, 2006.
  2. M. J. Quinn, Parallel Computing: Theory and Practice, Tata McGraw-Hill, 2002.