Different computation paradigms, including sequential and parallel programming, each with the corresponding discretised domain shown on the left. In sequential programming, the code performs two tasks, A and B, one after the other on a single thread that has access to all of the computer’s memory. When the same code is executed in parallel using OpenMP, each core of the computer concurrently carries out a part of tasks A and B, so the wall-clock compute time is shorter. With MPI-based parallelisation, the domain is usually broken up so that each process ‘knows’ only a part of the domain. Tasks A and B are again executed in parallel by all the CPUs, but now on a distributed architecture of processors and memory interlinked by a dedicated network.

  • Creator: Fabio Crameri
  • This version: 11.11.2021
  • License: Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
  • Specific citation: This graphic by Fabio Crameri from van Zelst et al. (2021) is available via the open-access s-Ink repository.
  • Related reference: van Zelst, I., F. Crameri, A.E. Pusok, A.C. Glerum, J. Dannberg, C. Thieulot (2021, in review), 101 Geodynamic modelling: How to design, carry out, and interpret numerical studies, Solid Earth Discuss. [preprint], doi:10.5194/se-2021-14
  • Transparent background
  • Vector format
  • Colour-vision deficiency friendly
  • Readable in black&white

