Chapter 1


Overview of MPI

1.1   Introduction

The Message-Passing Interface (MPI) is a standard specification that supports the coding of distributed-memory parallel programs by means of message passing (point-to-point and one-sided) and collective communication operations among processes.
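As a minimal illustration of these operation classes, the following C program sends an integer from one process to another (point-to-point) and then sums a value across all processes (collective). The program itself is a sketch for this overview, not part of the standard text; it should run under any conforming MPI implementation.

    /* Minimal MPI example: point-to-point send/receive, then a
     * collective reduction.  Compile with an MPI C compiler and
     * run with at least two processes. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank       */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        if (size >= 2) {
            int value = 42;
            if (rank == 0) {
                /* point-to-point: send one int to rank 1 with tag 0 */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 1 received %d from rank 0\n", value);
            }
        }

        /* collective: sum the ranks of all processes onto rank 0 */
        int sum = 0;
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, sum);

        MPI_Finalize();                       /* shut down the runtime     */
        return 0;
    }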

Before the MPI standard was created, universities, government research institutions, and computer manufacturers each developed their own message-passing libraries. Those libraries, however, depended on platform-specific functions, proprietary operating-system features, and special calling conventions. Applications parallelized with those libraries therefore had poor portability and compatibility.

The MPI message-passing standard was created with the objective of retaining the useful features of existing message-passing libraries while permitting the coding of portable parallel applications that run on various platforms without modification.

The standardization effort was initiated by a group led by Dr. Jack Dongarra at the University of Tennessee in the United States. The effort was then carried on by the MPI Forum, a voluntary organization composed of participants from American and European universities, government research institutions, corporate research laboratories, and computer vendors. Their work culminated in the publication of the first edition of the standard, MPI-1.0, in June 1994. The specification has since been revised several times for clarification and expansion: MPI Version 2, published in July 1997, added one-sided communication and MPI-IO; MPI Version 3.0, published in September 2012, added non-blocking collective operations and new one-sided functions; and MPI Version 3.1, published in June 2015, contains mostly corrections and clarifications to MPI Version 3.0.


1.2   Configuration of NEC MPI

NEC MPI is an implementation of MPI Version 3.1 that achieves high-performance communication by using the shared-memory feature of a VH and InfiniBand functions for inter-node communication.

1.2.1   Components of NEC MPI

NEC MPI consists of the commands and libraries required to compile, link, and execute MPI programs. The Fortran compiler (nfort), C compiler (ncc), or C++ compiler (nc++) is required in order to compile and link MPI programs.
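For illustration, building and launching an MPI program might look like the following. This is a sketch only: it assumes the NEC MPI compiler wrapper commands mpincc and mpinfort (which invoke ncc and nfort, respectively) and the mpirun launcher, and the source file names are hypothetical; consult your installation for the exact command names.

    $ mpincc mpi_hello.c -o mpi_hello      # C program (wrapper around ncc)
    $ mpinfort mpi_prog.f90 -o mpi_prog    # Fortran program (wrapper around nfort)
    $ mpirun -np 4 ./mpi_hello             # run with 4 MPI processes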

1.2.2   NEC MPI/Scalar-Vector Hybrid

NEC MPI supports communication among processes on VEs. Furthermore, with NEC MPI/Scalar-Vector Hybrid, processes on a VH or on scalar nodes can communicate with processes on VE nodes; in other words, heterogeneous environments are supported. NEC MPI automatically selects an appropriate communication method between VH or scalar nodes and VE nodes, taking the system configuration and other factors into account. In general, MPI communication is fastest when InfiniBand, which NEC MPI can use directly, is available. For hybrid execution, it is necessary to compile and link the MPI program for the VH or scalar node, and to specify the form of the mpirun command that corresponds to hybrid execution, as sketched below. See Chapter 3 for details.
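As a rough sketch of what such a hybrid launch could look like, the colon-separated MPMD form of the launch command defined by the MPI standard can start VH and VE executables in a single job. The -vh and -ve placement options and the executable names shown here are assumptions for illustration; the authoritative command forms are given in Chapter 3.

    $ mpirun -vh -np 1 ./prog_vh : -ve 0 -np 8 ./prog_ve
      # 1 process of prog_vh on the VH, 8 processes of prog_ve on VE number 0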

