RCS examples are provided to assist you in learning the software and developing your applications on the Shared Computing Cluster (SCC). The instructions provided with the code assume that the underlying OS is Linux. If these examples are run on a different platform, you may need to change the code and/or the way the program is built and executed.
hello.c: A hello-world MPI C program.
hello.f90: A hello-world MPI Fortran 90 program.
calc_pi_mpi.c: An MPI C program for computing the value of pi.
calc_pi_mpi.f90: An MPI Fortran 90 program for computing the value of pi.
laplace_mpi.c: An MPI C program for solving the Laplace equation.
laplace_mpi.f90: An MPI Fortran 90 program for solving the Laplace equation.
job.1node: A batch job script to run the hello-world program on one node.
job.2node: A batch job script to run the hello-world program on two nodes.
job.1pernode: A batch job script to run the hello-world program on two nodes with one MPI task per node.
Take the hello-world program as an example; the other examples are built and run similarly. To compile the C or Fortran code, use the MPI compiler wrappers for the GCC compilers:
mpicc hello.c -o hello
mpif90 hello.f90 -o hello
To run the program interactively with 4 MPI tasks (one per CPU core), execute:
mpirun -np 4 ./hello
To submit a batch job, use one of the provided job scripts (job.1node, job.2node, or job.1pernode).
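The contents of the job scripts are not shown here, but as a rough sketch, a job.1node-style script for the SCC's Grid Engine batch system might look like the following. The parallel environment name, core count, and module name below are assumptions; check the current SCC documentation for the correct values:

```shell
#!/bin/bash -l
#$ -N hello_mpi                      # job name
#$ -pe mpi_16_tasks_per_node 16     # assumed parallel environment; verify on SCC
#$ -l h_rt=00:10:00                  # 10-minute wall-clock limit

# Load an MPI module if your environment requires one (module name is an assumption)
module load openmpi

# $NSLOTS is set by Grid Engine to the number of allocated slots
mpirun -np $NSLOTS ./hello
```

A script like this would typically be submitted with qsub, e.g. "qsub job.1node".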
Note: Research Computing Services (RCS) example programs are provided
"as is" without any warranty of any kind. The user assumes the entire risk of
quality, performance, and repair of any defects. You are encouraged to copy
and modify any of the given examples for your own use.