From SNIC Documentation
MPI (Message Passing Interface) is a library interface designed for information exchange between the tasks of a distributed-memory parallel program. It is currently the de facto standard for message passing in programs written in Fortran, C or C++.
It consists of a group of functions that support a variety of communication operations.
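To illustrate the message-passing model described above, here is a minimal C sketch (assuming an MPI implementation such as Open MPI is installed, with the `mpicc` compiler wrapper available): each task reports its rank, and rank 1 sends an integer to rank 0 with a point-to-point send/receive pair.

```c
/* Minimal MPI example: compile with e.g. `mpicc hello.c -o hello`
 * and run with e.g. `mpirun -np 2 ./hello`. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this task's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of tasks */

    printf("Hello from rank %d of %d\n", rank, size);

    if (size > 1) {
        int value = 0;
        if (rank == 1) {
            value = 42;                     /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 0 received %d from rank 1\n", value);
        }
    }

    MPI_Finalize();                         /* shut down cleanly */
    return 0;
}
```

The same pattern extends to collective operations such as `MPI_Bcast` or `MPI_Reduce`, which the MPI standard also provides.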
Experts

These experts have registered specific competence on this subject:
| Name | Centre | Field | AE FTE (%) | General activities |
|---|---|---|---|---|
| Adam Peplinski | PDC | Computational fluid dynamics | 100 | NEK5000 support |
| Anders Sjöström | LUNARC | Technical acoustics | 50 | Helps users with Matlab usage on clusters; maintainer of the GPU resource Erik at LUNARC |
| Chandan Basu | NSC | Computational science | 100 | Working on climate and weather codes; EU projects IS-ENES and PRACE |
| Jing Gong | PDC | Computational fluid dynamics | 100 | Application expert |
| Joachim Hein | LUNARC | | 85 | |
| Lilit Axner | PDC | Computational fluid dynamics | 50 | |
| Marcus Lundberg | UPPMAX | Performance tuning | 100 | I help users with productivity, program performance, and parallelisation. |
All SNIC resources
Tutorials and slide sets
- A slide set by Pavan Balaji and Torsten Hoefler from 2013 introducing basic MPI concepts
- Lecture slides from the course on extreme-scale systems by William Gropp at the University of Illinois. Lectures 22 through 37 deal with MPI. This is an extremely comprehensive introduction to MPI.
Both free (e.g. Open MPI) and commercial (e.g. Intel MPI) implementations are available.