Index of /examples/mpi/tutorials/legacy/intro2mpi/F77

  Readme
  batch_bgl
  batch_scc
  example1.f
  example1_1.f
  example1_2.f
  example1_3.f
  example1_4.f
  example1_5.f
  example1_52.f
  make.bgl
  make.scc
  myinput
  readme.html.detail
  sample.jcf

Multiprocessing by Message Passing MPI Tutorial Example Programs

This tutorial comes with six examples. They are all based on the numerical integration of a cosine function over the range [a, b]. A mid-point rule is used to perform the integration.
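
For reference, the mid-point rule sums cos(x)*h over the mid-points of n equal panels of width h = (b - a)/n. A minimal serial sketch (the basis of Example 1 below) is given here; the variable names and the choices a = 0, b = pi/2, n = 1000 are illustrative and need not match example1.f.

c     Mid-point rule for the integral of cos(x) over [a, b].
c     Illustrative sketch; names need not match example1.f.
      program serial_midpoint
      implicit none
      integer n, i
      real*8 a, b, h, x, my_int
      a = 0.0d0
c     upper limit b = pi/2, so the exact answer is 1
      b = 0.5d0*dacos(-1.0d0)
      n = 1000
      h = (b - a)/n
      my_int = 0.0d0
      do 10 i = 1, n
c        x is the mid-point of panel i; each panel contributes
c        cos(x)*h to the running sum
         x = a + (i - 0.5d0)*h
         my_int = my_int + dcos(x)*h
   10 continue
      write(*,*) 'integral = ', my_int
      end
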
  1. Example 1. Performs the integration serially. MPI is not used.
  2. Example 1_1. Performs the same integral in parallel with the help of the six most fundamental MPI utility functions/subroutines: MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, and MPI_Finalize (all six appear in the point-to-point sketch after this list).
  3. Example 1_2. A closer look at the previous algorithm reveals that it is redundant for process 0 to send its share of the integral, my_int, to itself. Skipping this self-send eliminates the potential for deadlock.
  4. Example 1_3. The use of MPI_ANY_SOURCE, essentially a wild card, lets messages sent to process 0 (from all other processes) be handled on a first-come, first-served basis. This makes the receiving of incoming messages more efficient.
  5. Example 1_4. The MPI_Send and MPI_Recv pair of point-to-point communication functions is replaced by a single collective communication function, MPI_Gather (see the MPI_Gather sketch after this list).
  6. Example 1_5. The simplest, and perhaps most efficient, parallel implementation of this numerical integration is achieved with the reduction function MPI_Reduce (see the MPI_Reduce sketch after this list).
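
The point-to-point scheme of Examples 1_1 through 1_3 corresponds roughly to the following sketch: each process integrates its own sub-range, process 0 keeps its own share rather than sending to itself, and the remaining partial sums are received with MPI_ANY_SOURCE in whatever order they arrive. The variable names, the message tag, and the problem parameters are illustrative and need not match the example files.

      program midpoint_p2p
      implicit none
      include 'mpif.h'
      integer myid, p, ierr, i, n, master
      integer status(MPI_STATUS_SIZE)
      real*8 a, b, h, x, my_int, other, total
      parameter (master = 0)
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, p, ierr)
c     each of the p processes integrates n panels of [a, b]
      a = 0.0d0
      b = 0.5d0*dacos(-1.0d0)
      n = 1000
      h = (b - a)/(p*n)
      my_int = 0.0d0
      do 10 i = 1, n
         x = a + (myid*n + i - 0.5d0)*h
         my_int = my_int + dcos(x)*h
   10 continue
      if (myid .eq. master) then
c        process 0 keeps its own share (no self-send) and accepts
c        the other partial sums in arrival order via MPI_ANY_SOURCE
         total = my_int
         do 20 i = 1, p - 1
            call MPI_RECV(other, 1, MPI_DOUBLE_PRECISION,
     &           MPI_ANY_SOURCE, 123, MPI_COMM_WORLD, status, ierr)
            total = total + other
   20    continue
         write(*,*) 'integral = ', total
      else
         call MPI_SEND(my_int, 1, MPI_DOUBLE_PRECISION, master,
     &        123, MPI_COMM_WORLD, ierr)
      endif
      call MPI_FINALIZE(ierr)
      end
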
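For Example 1_4, the send/receive loop is replaced by a single MPI_Gather call. The short sketch below shows only the call pattern: the local integral is replaced by a stand-in value, and the array bound maxp is an assumption made here, not something taken from example1_4.f.

      program gather_demo
      implicit none
      include 'mpif.h'
      integer maxp
      parameter (maxp = 128)
      integer myid, p, ierr, i
      real*8 my_int, parts(maxp), total
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, p, ierr)
c     stand-in for the local integral each process would compute
      my_int = dble(myid)
c     every process contributes one value; process 0 receives all
c     p values into parts (assumes p <= maxp)
      call MPI_GATHER(my_int, 1, MPI_DOUBLE_PRECISION,
     &     parts, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
      if (myid .eq. 0) then
         total = 0.0d0
         do 10 i = 1, p
            total = total + parts(i)
   10    continue
         write(*,*) 'total = ', total
      endif
      call MPI_FINALIZE(ierr)
      end
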
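For Example 1_5, MPI_Reduce with the MPI_SUM operation both collects and sums the local values in one call, so process 0 needs no explicit loop over partial results. Again, the sketch below shows only the call pattern with a stand-in local value.

      program reduce_demo
      implicit none
      include 'mpif.h'
      integer myid, p, ierr
      real*8 my_int, total
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, p, ierr)
c     stand-in for the local integral each process would compute
      my_int = dble(myid)
c     sum the local values across all processes; the result lands
c     in total on process 0 only
      call MPI_REDUCE(my_int, total, 1, MPI_DOUBLE_PRECISION,
     &     MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      if (myid .eq. 0) write(*,*) 'total = ', total
      call MPI_FINALIZE(ierr)
      end
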

Program Compilation

You can compile the set of examples for the Shared Computing Cluster (SCC) or the IBM Blue Gene using the supplied makefiles, make.scc and make.bgl, respectively.

Program Execution

Sample batch scripts are provided for submitting the examples on each system: batch_scc for the SCC and batch_bgl (together with the job command file sample.jcf) for the Blue Gene.

Notes

Contact Info

Kadin Tseng (kadin@bu.edu)

References

Dates

  1. Created: November 24, 2013
  2. Modified: