CS 733: Topics in Parallel Computing

Credits: 3-0-0-9

 

Prerequisites: CS 633 or instructor's consent

 

Who can take the course

PhD, Master's, and 3rd- and 4th-year UG students

 

Course objective

Parallel programming is ubiquitous in today's multicore era and underpins the solution of many real-world scientific problems, but massive parallelism entails significant hardware and software challenges. This will be a research-paper-discussion-based course: participants will read and review recent papers on advanced topics in the field, drawn from top-tier conferences and journals such as SC, HPDC, TPDS, and JPDC. The course will also involve a research-based major project component.

 

Course contents

 

S. No | Broad Title | Topics | No. of Lectures
------|-------------|--------|----------------
1 | Resource management | Job scheduling, Slurm, hwloc | 4
2 | Parallel file systems | Lustre, I/O optimizations, I/O parameter selection | 4
3 | One-sided communications | Remote memory access, windows | 4
4 | Performance modeling | Computation and communication models; LogP and LogGP models | 4
5 | Parallel profiling | Profiling and tracing; popular tools such as TAU and HPCToolkit; I/O profiling using Darshan | 4
6 | Topology-aware mapping | Mapping heuristics, performance improvement with mapping, visual representation of topology | 4
7 | Scientific simulation and visualization | Simulation, visualization, and analysis workflows; large-scale data movement optimizations | 4
8 | Spark and MPI | Apache Spark; large-scale data analysis using Spark and MPI | 4
9 | Grid, Cloud, Fog, Edge | Trends from Grid to Cloud to Fog and Edge | 4
10 | Project review | Projects based on the above topics | 4

 

 

Books / Textbooks:
  1. DE Culler, A Gupta, and JP Singh, Parallel Computer Architecture: A Hardware/Software Approach, Morgan Kaufmann, 1998.
  2. A Grama, A Gupta, G Karypis, and V Kumar, Introduction to Parallel Computing, 2nd Ed., Addison-Wesley, 2003.
  3. William Gropp, Torsten Hoefler, Ewing Lusk, and Rajeev Thakur, Using Advanced MPI: Modern Features of the Message-Passing Interface, MIT Press, 2014.

References:
  4. MJ Quinn, Parallel Computing: Theory and Practice, Tata McGraw-Hill, 2002.