Introduction to Parallel Programming with MPI: Welcome and practicals


Background

As processor technology develops, it has become harder to increase clock speeds. Instead, new processors tend to provide more processing units (cores). To take advantage of these additional resources, programs need to be written to run in parallel.

Figure: Moore's law (transistor count over time). Image by Max Roser, https://ourworldindata.org/uploads/2019/05/Transistor-Count-over-time-to-2018.png, CC BY-SA 4.0, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=79751151

In High Performance Computing (HPC), a large number of state-of-the-art computers are joined together with a fast network. Using an HPC system efficiently requires a well-designed parallel algorithm.

MPI stands for Message Passing Interface. It is a straightforward standard for communicating between the individual processes that make up a program. There are several implementations of the standard for nearly all platforms (Linux, Windows, OS X…) and many popular languages (C, C++, Fortran, Python…).
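To give a flavour of what this looks like in practice, below is a minimal sketch of an MPI "hello world" in C. Every process runs the same executable and identifies itself by its rank; the filename hello_mpi.c is just an example, but the four MPI calls shown are standard and are covered in this workshop.

```c
/* hello_mpi.c -- minimal MPI example (filename is illustrative).
 * Every process runs this same program; each one learns its own
 * rank (ID) and the total number of ranks, then prints a greeting. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down MPI */
    return 0;
}
```

Most MPI implementations provide a compiler wrapper and a launcher; with Open MPI or MPICH this typically looks like:

    mpicc hello_mpi.c -o hello_mpi
    mpirun -n 4 ./hello_mpi

which prints one greeting per rank, in no guaranteed order.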

This workshop introduces general concepts in parallel programming and the most important functions of the Message Passing Interface.

The material here is derived from a lesson by Jarno Rantaharju, Seyong Kim, Ed Bennett and Tom Pritchard from the Swansea Academy of Advanced Computing. Further inspiration comes from a Python-MPI tutorial.

Prerequisites

This course assumes you are familiar with C, Fortran or Python. It is useful to bring your own code, either a serial code you wish to make parallel or a parallel code you wish to understand better.

Schedule

Time   Episode                                         Key question
       Setup                                           Install the software required for the lesson
00:00  1. Introduction to Parallel Computing           How does MPI work?
00:45  2. Serial and Parallel Regions                  What is a good parallel algorithm?
01:05  3. MPI_Send and MPI_Recv                        How do I send data from one rank to another?
01:30  4. Coffee Break
01:50  5. Parallel Paradigms and Parallel Algorithms   How do I split the work?
02:10  6. Non-blocking Communication                   How do I interleave communication and computation?
02:40  7. Collective Operations                        What other useful functions does MPI have?
03:10  8. Lunch Break
04:10  9. (Optional) Serial to Parallel                What is the best way to write a parallel code?
                                                       How do I parallelise my serial code?
04:10  10. (Optional) Profiling Parallel Applications  It works, but what is taking time?
04:10  11. (Optional) Coffee Break
04:10  12. (Optional) Do it yourself                   What is the best way to write parallel code from serial?
04:10  13. Tips and Best Practices                     What best practices should I know before I write my program?
04:30  Finish

The actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.