
This assessment is targeted toward university-level faculty and students interested in assessing their knowledge of the Message Passing Interface (MPI). Two badges are under development: a beginner-level assessment and an intermediate-level assessment. Questions and practical exercises are designed to assess knowledge of basic MPI concepts and commands.

Learning Competencies

Our set of learning competencies to be assessed was guided in part by the PSC workshops, the CI-Tutor tutorials, and the CVW tutorials on introductory MPI. The MPI Beginner Badge is a low-stakes metric of whether a user has the following abilities:

MPI Beginner Badge

  • Able to briefly explain the history and purpose of MPI
  • Able to define and apply basic MPI commands (such as MPI_Init, MPI_Finalize, MPI_Comm_rank, MPI_Comm_size, etc.)
  • Able to describe examples of appropriate applications of MPI
  • Able to demonstrate how to write, compile, and run an MPI program
  • Able to explain parallel computing fundamentals including concepts such as parallel computing hardware design, parallel programming models, and parallel program design.
  • Able to explain the purpose of and the basic concepts behind communicators, including the difference between intra- and inter-communicators and how to create them, how to create and use groups of processes and communicators, and how to access information from each of these.
  • Able to list basic concepts of point-to-point communication including source and destination, sending and receiving messages, blocking and nonblocking send and receive, and describe how to use point-to-point message-passing routines to write actual parallel MPI code.
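To make the beginner competencies above concrete, here is a minimal sketch of the kind of program the badge targets. It is only an illustration — the file name, payload values, and process count are arbitrary choices, not part of the badge — and it assumes an MPI installation that provides mpicc and mpirun.

```c
/* hello_mpi.c — a minimal MPI program touching the commands listed above.
 *
 * Compile and run (assuming an MPI installation is available):
 *   mpicc -O2 -o hello_mpi hello_mpi.c
 *   mpirun -np 4 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id within the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (rank == 0) {
        /* Point-to-point: rank 0 receives one message from every other rank. */
        for (int src = 1; src < size; src++) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank %d\n", payload, src);
        }
    } else {
        int payload = rank * rank;          /* arbitrary example data */
        MPI_Send(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```

With 4 processes, rank 0 would print one line for each of ranks 1–3; the source, destination, and tag arguments to MPI_Send/MPI_Recv illustrate the point-to-point concepts in the last competency.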

MPI Intermediate Badge

  • Able to describe features of collective communication routines in MPI, including barrier synchronization, broadcasts, global reduction operations, and gather/scatter operations.
  • Able to describe issues that arise when working with non-contiguous data, mixed datatypes, or data scattered within an array, and explain related concepts including data decomposition, sending multiple messages, buffering, packing and unpacking, and derived datatypes.
  • Able to explain the concept of 'deadlock' and conditions which might cause it. 
  • Able to explain what the routine MPI_Barrier is for and provide examples of how it might be used.
  • Able to explain the role of MPI_Bcast and describe how the same operation might be performed with non-MPI code. 


MPI Beginner Badge

The MPI Beginner Badge consists of a relatively simple 10-question quiz on basic MPI concepts and commands. The quiz has no time limit and allows up to 5 submissions.

Update, October 29, 2018: 

We are working on the creation of a question bank with a sufficient number of questions to allow for the creation of badges of varying levels of difficulty or topic focus. 

Update, March 13, 2020:

The MPI Beginner Badge has been successfully reviewed and released. You can view it here.


MPI Intermediate Badge

Update, October 23, 2020:

The MPI Intermediate Badge and the Matlab for HPC Systems Badge have been successfully reviewed and released. Thanks to Victor Eijkhout and John Urbanic for their help. 

You can access the badge for review here.

About the MPI Intermediate Badge

The badge consists of two parts as described below.

Part 1: Knowledge Assessment (In Development)

This part consists of a 10- to 15-question quiz with more difficult questions than the beginner badge. It has a time limit and allows only 2 submissions. A passing grade of 80% is needed to successfully complete Part 1.

Part 2: Practical Assessment (In Development)

For this part of the badge, the user will need to complete a practical assessment of their MPI skills.  

In order to assess the user's performance, the user will need to submit code files and any required output to the badge assignment. Reviewers will be alerted and grade the submission. A passing grade of 80% is needed to successfully complete Part 2. 

When Parts 1 and 2 are complete, the user will receive the badge.  

Update, July 2020:

The first draft of the MPI Intermediate Badge is near completion and should be ready for review in August 2020.

Update on MPI Beginner Badge Awards

July 2020: 2 badges awarded. 

October 2020: 4 badges awarded. 

Review of Learning Objectives and Alignment with Competencies for the MPI Beginner Badge

Victor Eijkhout and Jerome Vienne reviewed the MPI Beginner Badge competencies and their alignment with the learning objectives and the assessment questions. 

Victor's comments:

"> MPI can dramatically reduce the amount of time required to run a computationally-intensive job.

Do students need to understand that one parallelization is not equal to another? Running MPI adds overhead, so speedup is not always linear. Load imbalance is another factor. Using blocking send/recv can lead to serial behaviour, meaning that the MPI code can actually run slower than a serial version.

> MPI compilers are the same across all operating systems.

I don’t know what you are aiming at. Every mpi compiler supports

mpicc -c -O2 -I/some/dir -L/some/dir myfile.c

et cetera.

> MPI code must include calls to the mpi libraries installed on the system. 

No. It must contain library calls. If you write on system A but run on system B, then you are not using the libraries on A. 

Badly worded question, in other words.

> Which of the following is not part of Flynn's Taxonomy

I find Flynn completely useless and have stopped mentioning it a long time ago. It is better to ask about dynamic process creation (possible with OpenMP, not with MPI, at least until they learn advanced stuff), or shared/distributed memory.

> Consider a communicator with 4 processes. How many total MPI_Send()'s and MPI_Recv()'s would be required to accomplish the following:

> MPI_ALLREDUCE ( &a, &x, 1, MPI_REAL, MPI_SUM, comm );

Don’t like this question. I don’t want students to think about the implementation, beyond the fact that there are trees. Besides, trees are only used in the short-message case: in the large-message case you don’t use trees."

Jerome provided the following feedback:

"Victor already covered most of the points; let me add a few things.

For me, it is important for a beginner to understand:
  • The difference between OpenMP and MPI, for example (people also confuse OpenMP and Open MPI).
  • The history/context of MPI. Why was MPI created?
     Both points can be inside the section:
    “Able to briefly explain the history and purpose of MPI”
My MPI interest is more at the implementation level. This aspect should not be part of the basic level but can be part of a more advanced one, as there are a lot of differences between the different implementations (the binding of the MPI tasks, for example).
I don’t know what more I can add. It could be good perhaps to have a quick conf call to talk about that."