Click on Popup Symbol

This is a demo of a click-on pop-up window offering extra help on a topic.





Associative and Commutative Rules

An operator ¤ satisfies the associative rule if:

a ¤ (b ¤ c) = (a ¤ b) ¤ c

For example, addition (+) satisfies the associative rule but subtraction (-) does not: 2 + (3 + 4) = (2 + 3) + 4 = 9, whereas 2 - (3 - 4) = 3 but (2 - 3) - 4 = -5.

An operator ¤ satisfies the commutative rule if:

a ¤ b = b ¤ a

For example, multiplication (*) satisfies the commutative rule but division (/) does not: 2 * 3 = 3 * 2 = 6, whereas 6 / 3 = 2 but 3 / 6 = 0.5.





MPI_COMM_WORLD

A communicator dictates which processes can participate in a message passing operation. MPI_COMM_WORLD is a commonly used communicator pre-defined in mpif.h (for FORTRAN) or mpi.h (for C). It enables all processes to participate in message passing operations such as MPI_Recv. Alternatively, a programmer can define a communicator that restricts any message passing operation using it to specific (e.g., odd- or even-numbered) processes.

In FORTRAN, a communicator has type INTEGER; in C, it has type MPI_Comm.
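
As an illustrative sketch of such a restriction, the routine MPI_Comm_split can derive a communicator containing only the processes of one parity. The C code below is a minimal example; the variable names are ours, not from the example program.

#include <mpi.h>

int main(int argc, char *argv[])
{
  int myid;
  MPI_Comm parity_comm;   /* communicator for this process's parity group */

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  /* color 0 groups the even-numbered processes, color 1 the odd-numbered
     ones; ranks within the new communicator are ordered by myid */
  MPI_Comm_split(MPI_COMM_WORLD, myid % 2, myid, &parity_comm);

  /* parity_comm may now be passed to any routine expecting a communicator */

  MPI_Comm_free(&parity_comm);
  MPI_Finalize();
  return 0;
}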





MPI_Init

Initializes MPI on a processor. It must be called exactly once in the entire program, before any other MPI routine. If there is no error, ierr returns 0 (MPI_SUCCESS); in C, the routine's return value plays the role of ierr.
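
A minimal C sketch of the required structure; the error check is shown for illustration only.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int ierr;

  ierr = MPI_Init(&argc, &argv);   /* start MPI; must precede all other MPI calls */
  if (ierr != MPI_SUCCESS) {
    printf("MPI_Init failed\n");
    return 1;
  }

  /* ... parallel work goes here ... */

  MPI_Finalize();   /* shut down MPI after all parallel processing */
  return 0;
}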





MPI_Comm_rank

Query the identity (rank) of the current process, myid. Knowing myid, the user program may act on different data or perform different tasks accordingly. In this example, myid is used to determine each process's range of integration, so each process acts on its own data (see ai). In addition, the total integral sum is computed only on the process with myid = 0.
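
A minimal C sketch of the usual pattern; the rank-0 work shown is illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int myid;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);   /* myid = 0, 1, ..., p-1 */

  if (myid == 0) {
    /* only the process with rank 0 executes this block, e.g., to
       accumulate the total integral sum */
    printf("I am process 0\n");
  }

  MPI_Finalize();
  return 0;
}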





MPI_Comm_size

Query the number of processes. This number is supplied by the user at run time when launching the executable a.out, for example via the command
katana% mpirun -np 4 a.out

In this example, MPI_Comm_size returns p = 4.
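
A C sketch of how p and myid together partition the work, in the spirit of the integration example; the values of a, b, and n below are illustrative assumptions, not taken from the example program.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int myid, p;
  int n = 500;               /* assumed: integration intervals per process */
  float a = 0.0, b = 1.0;    /* assumed: limits of integration */
  float h, ai;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  MPI_Comm_size(MPI_COMM_WORLD, &p);   /* p = 4 for "mpirun -np 4 a.out" */

  h  = (b - a) / (n * p);    /* width of one interval */
  ai = a + myid * n * h;     /* left end of this process's sub-range */
  printf("process %d of %d integrates from %f\n", myid, p, ai);

  MPI_Finalize();
  return 0;
}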





Buffer, buffer size, data type triplet properties for MPI_Send/MPI_Recv

All MPI message passing routines, such as MPI_Send and MPI_Recv, require these three arguments to define the (send or receive) buffer, its size, and its MPI data type. Examples of MPI data types are:
MPI_REAL, MPI_INTEGER, MPI_CHARACTER
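
As a sketch, the C program below passes ten floats from process 0 to process 1; the triplet (buf, 10, MPI_FLOAT) appears identically in the send and the receive. Run it with at least two processes (e.g., mpirun -np 2 a.out).

#include <mpi.h>

int main(int argc, char *argv[])
{
  int myid, j, tag = 1;
  float buf[10];            /* the buffer of the triplet */
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  if (myid == 0) {
    for (j = 0; j < 10; j++) buf[j] = (float) j;
    /* triplet: buffer buf, buffer size 10, data type MPI_FLOAT */
    MPI_Send(buf, 10, MPI_FLOAT, 1, tag, MPI_COMM_WORLD);
  }
  else if (myid == 1) {
    /* the receive specifies a matching triplet */
    MPI_Recv(buf, 10, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &status);
  }

  MPI_Finalize();
  return 0;
}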





MPI_Finalize

Exits MPI. Like MPI_Init, this routine should be called only once in the entire program, after all MPI parallel processing is done.





MPI_Send

Performs a point-to-point blocking send. The call to this routine continues to block until the send buffer can be safely overwritten (i.e., the content of the send buffer has been received at the destination). A matched send/receive pair is sketched under the triplet entry above.





MPI_Recv

Performs a point-to-point blocking receive. The call to this routine continues to block until the receive buffer contains the intended data (or message).





MPI_Isend

Performs a point-to-point nonblocking send. The call returns immediately, but the send buffer should not be overwritten until the operation is confirmed to be complete by way of MPI_Wait.
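
A C sketch pairing MPI_Isend with MPI_Wait (run with at least two processes); the region marked in the comment is where useful computation could overlap the communication.

#include <mpi.h>

int main(int argc, char *argv[])
{
  int myid;
  float buf[10] = {0.0};
  MPI_Request request;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  if (myid == 0) {
    MPI_Isend(buf, 10, MPI_FLOAT, 1, 1, MPI_COMM_WORLD, &request);
    /* ... computation not touching buf may overlap the send here ... */
    MPI_Wait(&request, &status);   /* buf must not be overwritten before this */
  }
  else if (myid == 1) {
    MPI_Recv(buf, 10, MPI_FLOAT, 0, 1, MPI_COMM_WORLD, &status);
  }

  MPI_Finalize();
  return 0;
}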





MPI_Gather

This is one of a handful of MPI utilities for collective communications. This function performs a many-to-one gathering operation to a specified destination. No explicit MPI_Send and MPI_Recv are required from the programmer. If MPI_COMM_WORLD is used as the communicator, all processes send their respective send buffers to the destination process. The operation can be more selective if a user-defined communicator is used instead. Note that all collective communication utilities must be invoked on all processes.
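
A C sketch in which every process contributes one float and process 0 collects them, ordered by rank; P_MAX and the per-process datum are illustrative assumptions.

#include <mpi.h>
#include <stdio.h>

#define P_MAX 32   /* assumed upper bound on the number of processes */

int main(int argc, char *argv[])
{
  int myid, p;
  float my_value, all_values[P_MAX];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  MPI_Comm_size(MPI_COMM_WORLD, &p);

  my_value = (float) myid;   /* illustrative per-process datum */

  /* every process sends my_value; process 0 receives all of them,
     ordered by rank, into all_values */
  MPI_Gather(&my_value, 1, MPI_FLOAT,
             all_values, 1, MPI_FLOAT, 0, MPI_COMM_WORLD);

  if (myid == 0)
    printf("gathered %d values\n", p);

  MPI_Finalize();
  return 0;
}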





MPI_Bcast

This is one of a handful of MPI utilities for collective communications. This function performs a one-to-many operation to broadcast a buffer from a specified source to all processes enabled through the communicator comm. No explicit MPI_Send and MPI_Recv are required from the programmer. If MPI_COMM_WORLD is used as the communicator, the source's buffer is delivered to every process. The operation can be more selective if a user-defined communicator is used instead. Note that all collective communication utilities must be invoked on all processes.
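
A C sketch in which process 0 broadcasts an integer n to all processes; the value 500 is an illustrative assumption.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int myid;
  int n = 0;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  if (myid == 0) n = 500;   /* only the source knows n beforehand */

  /* after the call, every process in MPI_COMM_WORLD has n = 500 */
  MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
  printf("process %d has n = %d\n", myid, n);

  MPI_Finalize();
  return 0;
}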





MPI_Reduce

This is one of a handful of MPI utilities for collective communications. This function performs a many-to-one reduction operation to a specified destination. In addition to gathering buffers from processes into a single destination process, it also performs an operation, such as summing (via the specified pre-defined operation MPI_SUM), on the buffers gathered. Note that all collective communication utilities must be invoked on all processes.
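
A C sketch summing one float from every process into total on process 0, in the spirit of the final step of the integration example; the value of my_int is an illustrative stand-in for a partial integral sum.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int myid;
  float my_int, total;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  my_int = 1.0;   /* stand-in for this process's partial integral sum */

  /* sum my_int over all processes; the result lands in total
     on process 0 only */
  MPI_Reduce(&my_int, &total, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

  if (myid == 0)
    printf("total = %f\n", total);

  MPI_Finalize();
  return 0;
}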





MPI_SUM

MPI_SUM is a constant pre-defined in the MPI header file (mpi.h for C and mpif.h for FORTRAN). When MPI_Reduce is used to perform a reduction operation such as summing over multiple processors, the programmer has the option to use pre-defined operations (for example, MPI_SUM) or user-defined operations.





MPI_Wait

A blocking operation. It blocks until the operation specified by request (for example, a nonblocking send started with MPI_Isend) is completed.





Message Source

Specifies the process from which the message is sent.





MPI_ANY_SOURCE

MPI_ANY_SOURCE is a constant pre-defined in mpif.h. This is essentially a source "wild card." For the parallel numerical integration example, the integral is the sum of all partial integral sums from all processors. Because summation satisfies the associative and commutative rules, the result does not depend on any specific order of summation, so the use of MPI_ANY_SOURCE can potentially be more efficient (first come, first served) as well as less likely to deadlock. When a message is received with MPI_ANY_SOURCE, the source can be retrieved via status(MPI_SOURCE).
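
A C sketch of a first-come, first-served collection of partial sums on process 0; in C the source is retrieved as status.MPI_SOURCE rather than status(MPI_SOURCE), and the value of partial is an illustrative stand-in.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int myid, p, j;
  float partial, total;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  MPI_Comm_size(MPI_COMM_WORLD, &p);

  partial = (float) myid;   /* stand-in for a partial integral sum */

  if (myid == 0) {
    total = partial;
    for (j = 1; j < p; j++) {   /* accept the p-1 messages in any order */
      MPI_Recv(&partial, 1, MPI_FLOAT, MPI_ANY_SOURCE, MPI_ANY_TAG,
               MPI_COMM_WORLD, &status);
      total += partial;
      printf("received from process %d\n", status.MPI_SOURCE);
    }
  }
  else {
    MPI_Send(&partial, 1, MPI_FLOAT, 0, myid, MPI_COMM_WORLD);
  }

  MPI_Finalize();
  return 0;
}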





Message Destination

Specifies the process to which the message is sent.





Message Tag

The tag serves as a secondary means to identify a message, if needed; the primary means is the processor rank, myid. A tag is needed, for example, when multiple messages are expected from the same source and the receiver must distinguish one message from another in order to determine what to do with each, as sketched below.
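
A C sketch in which process 0 sends two messages to process 1 and the receiver tells them apart by tag; the tag values and array names are illustrative assumptions. Note that the receives may even be posted in the opposite order from the sends, since MPI matches messages on source and tag.

#include <mpi.h>

#define TAG_DATA   1   /* illustrative tag values */
#define TAG_WEIGHT 2

int main(int argc, char *argv[])
{
  int myid;
  float data[10] = {0.0}, weight[10] = {0.0};
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  if (myid == 0) {
    /* two messages to the same destination; the tags tell them apart */
    MPI_Send(data,   10, MPI_FLOAT, 1, TAG_DATA,   MPI_COMM_WORLD);
    MPI_Send(weight, 10, MPI_FLOAT, 1, TAG_WEIGHT, MPI_COMM_WORLD);
  }
  else if (myid == 1) {
    /* receive by tag, independent of arrival order */
    MPI_Recv(weight, 10, MPI_FLOAT, 0, TAG_WEIGHT, MPI_COMM_WORLD, &status);
    MPI_Recv(data,   10, MPI_FLOAT, 0, TAG_DATA,   MPI_COMM_WORLD, &status);
  }

  MPI_Finalize();
  return 0;
}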





Message Status

Returns the status of a message receive operation. This contains information on where the message came from and its tag. These are especially informative when wild cards MPI_ANY_SOURCE and MPI_ANY_TAG are used as source and tag, respectively.



For FORTRAN, the status is declared as an integer array: integer status(MPI_STATUS_SIZE).

For C, the status has the data type MPI_Status: MPI_Status status;





MPI_ANY_TAG

MPI_ANY_TAG is a constant pre-defined in mpif.h. This represents a "wild card" tag. Generally, a tag is used as a secondary means to identify a message -- the primary means is myid. An example that requires a tag in addition to myid is when multiple messages are passed between a pair of processors: upon receipt of these messages, if the receiver needs to distinguish them in order to place them or act on them accordingly, the tag can be used to differentiate the messages. When a message is received with MPI_ANY_TAG, the tag can be retrieved via status(MPI_TAG).





User Function (FORTRAN version)


real function integral(ai, h, n)
implicit none
integer n, j
real h, ai, aij

integral = 0.0         ! initialize integral
do j=0,n-1             ! sum integrals
  aij = ai + (j+0.5)*h ! abscissa mid-point
  integral = integral + cos(aij)*h
enddo

return
end




User Function (C version)


#include <math.h>   /* needed for cos */

float integral(float ai, float h, int n)
{
  int j;
  float aij, integ;

  integ = 0.0;               /* initialize integral */
  for (j = 0; j < n; j++) {  /* sum integrals */
    aij = ai + (j + 0.5)*h;  /* abscissa mid-point */
    integ += cos(aij)*h;
  }
  return integ;
}