Provided by: lam-runtime_7.1.4-7.2_amd64

NAME

       mpiexec - Run MPI programs on LAM nodes.

SYNOPSIS


       mpiexec [global_args] local_args1 [: local_args2 [...]]

       mpiexec [global_args] -configfile filename

OPTIONS

       Global  arguments  apply to all commands that will be launched by mpiexec.  They come at the beginning of
       the command line.

       -boot     Boot the LAM run-time environment before running the  MPI  program.   If  -machinefile  is  not
                 specified,  use  the  default  boot  schema.   When  the MPI processes finish, the LAM run-time
                 environment will be shut down.
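
                  For example, the following boots the default LAM universe, runs four copies  of
                  my_mpi_program, and then shuts the universe down:

                    mpiexec -boot -n 4 my_mpi_program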

       -boot-args args
                 Pass arguments to the back-end lamboot command  when  booting  the  LAM  run-time  environment.
                 Implies -boot.
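
                  For example, to make the back-end lamboot run verbosely (lamboot's -v flag; see
                  lamboot(1)), quoting the arguments so they are passed through intact:

                    mpiexec -boot-args "-v" -n 4 my_mpi_program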

       -d        Enable lots of debugging output.  Implies -v.

       -machinefile hostfile
                 Enable  "one  shot"  MPI  executions;  boot  the  LAM run-time environment with the boot schema
                 specified by hostfile (see bhost(5)), run the MPI program, and then shut down the LAM  run-time
                 environment.  Implies -boot.
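
                  For example, a minimal hostfile might contain lines such as the following (see
                  bhost(5) for the full syntax, including per-node CPU counts like "cpu=2"):

                    node1.example.com
                    node2.example.com cpu=2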

       -prefix lam/install/path
                 Use the LAM installation specified in /lam/install/path/.  Not compatible with LAM/MPI versions
                 prior to 7.1.

       -ssi key value
                 Set the SSI parameter key to the value value.
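
                  For example, to select the TCP RPI module for MPI message  passing  (assuming
                  the tcp module is available in your installation; see laminfo(1)):

                    mpiexec -ssi rpi tcp -n 4 my_mpi_program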

       -tv       Launch the MPI processes under the TotalView debugger.

        -v        Be verbose.

       One  or  more  sets  of local arguments must be specified (or a config file; see below).  Local arguments
       essentially include everything allowed in an appschema(5) as well as the following options  specified  by
       the MPI-2 standard (note that the options listed below must be specified before appschema arguments):

       -n numprocs
                 Number of copies of the process to start.

       -host hostname
                 Specify  the  hostname  to  start  the  MPI process on.  The hostname must be resolvable by the
                 lamnodes command after the LAM run-time environment is booted (see lamnodes(1)).
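
                  For example, to run a single copy of my_mpi_program on a specific node:

                    mpiexec -n 1 -host node1.example.com my_mpi_program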

       -arch architecture
                 Specify the architecture to start the MPI process on.  mpiexec essentially  uses  the  provided
                 architecture  as  a  pattern  match  against the output of the GNU config.guess utility on each
                 machine in the LAM run-time environment.  Any subset will match.  See EXAMPLES, below.

       -wdir directory
                 Set the working directory of the executable.
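
                  For example, assuming /tmp exists on every node where the program will run:

                    mpiexec -wdir /tmp -n 4 my_mpi_program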

       -soft     Not yet supported.

       -path     Not yet supported.

       -file     Not yet supported.

       other_arguments
                 When mpiexec first encounters an argument that it doesn't recognize (such  as  an  appschema(5)
                 argument,  or  the  MPI executable name), the remainder of the arguments will be passed back to
                 mpirun to actually start the process.  As such, all of mpiexec's arguments that  are  described
                 above  must  come  before  appschema  arguments and/or the MPI executable name.  Similarly, all
                  arguments after the MPI executable name will be transparently passed as command line arguments
                  to the MPI process, and will be effectively ignored by mpirun.
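
                  For example, in "mpiexec -v -n 4 my_mpi_program arg1 arg2", the -v and -n 4
                  options are interpreted by mpiexec itself, my_mpi_program is handed back  to
                  mpirun to launch, and arg1 and arg2 are passed untouched to each MPI process.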

DESCRIPTION

       mpiexec is loosely defined in the Miscellany chapter of the MPI-2 standard (see
       http://www.mpi-forum.org/).  It is meant to be a portable mechanism for starting MPI  processes.   The
       MPI-2 standard recommends several command line options, but does not mandate any.   LAM's  mpiexec
       currently supports several of these options, but not all.

       LAM's mpiexec is actually a Perl script that wraps several underlying  LAM  commands,  most
       notably  lamboot,  mpirun,  and  lamhalt.   As  such, the functionality provided by mpiexec can always be
       performed manually.  Unless otherwise specified in arguments that are passed back to mpirun, mpiexec will
       use the per-CPU scheduling as described in mpirun(1) (i.e., the "cX" and "C" notation).
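
       For example (a sketch, assuming a booted LAM universe whose boot schema lists per-node CPU counts),
       "mpiexec C my_mpi_program" should behave like "mpirun C my_mpi_program", starting one copy of
       my_mpi_program on every CPU in the universe.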

       mpiexec can either use an already-existing LAM  universe  (i.e.,  a  booted  LAM  run-time  environment),
       similar  to  mpirun,  or  can  be  used  for  "one-shot"  MPI  executions where it boots the LAM run-time
       environment, runs the MPI executable(s), and then shuts down the LAM run-time environment.

       mpiexec can also be used to launch MPMD MPI jobs from the command line.  mpirun also  supports
       launching MPMD MPI jobs, but requires the user to first write an appschema(5) file.
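
       For example, "mpiexec -n 4 program1 : -n 4 program2" launches an MPMD job directly, whereas with
       mpirun the user would first have to write an appschema(5) file along these lines (a sketch only;
       see appschema(5) for the exact location syntax):

         # hypothetical appschema equivalent
         c0-3 program1
         c0-3 program2

       and then pass that file to mpirun.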

       Perhaps one of mpiexec's most useful features is the command-line ability to launch different executables
       on  different architectures using the -arch flag (see EXAMPLES, below).  Essentially, the string argument
       that is given to -arch is used as a pattern match against the output of the GNU config.guess  utility  on
       each node.  If the user-provided architecture string matches any subset of the output of config.guess, it
       is  ruled  a  match.   Wildcards are not possible.  The GNU config.guess utility is available both in the
       LAM/MPI     source     code     distribution     (in     the     config     subdirectory)     and      at
       ftp://ftp.gnu.org/gnu/config/config.guess.

       Some sample outputs from config.guess include:

       sparc-sun-solaris2.8
                 Solaris 2.8 running on a SPARC platform.

       i686-pc-linux-gnu
                 Linux running on an i686 architecture.

       mips-sgi-irix6.5
                 IRIX 6.5 running on an SGI/MIPS architecture.

       You  might  want  to  run the laminfo command on your available platforms to see what string config.guess
       reported.  See laminfo(1) for more details (e.g., the -arch flag to laminfo).
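
       For example, running "laminfo -arch" on a node prints the config.guess architecture string for
       that node, which is the string that -arch patterns are matched against.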

   Configfile option
       It is possible to specify any set of local parameters in a configuration file rather than on the  command
       line using the -configfile option.  This option is typically used when the number of command line options
       is  too  large for some shells, or when automated processes generate the command line arguments and it is
       simply more convenient to put them in a file for later processing by mpiexec.

       The config file can contain both comments and one or more sets of local arguments.  Lines beginning  with
       "#"  are  considered  comments  and  are ignored.  Other lines are considered to be one or more groups of
       local arguments.  Each group must be separated by either a newline or a colon (":").  For example:

         # Sample mpiexec config file
         # Launch foo on two nodes
         -host node1.example.com foo : -host node2.example.com foo
         # Launch two copies of bar on a third node
          -host node3.example.com -n 2 bar
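
        Assuming the above is saved in a file named mpiexec.cfg (a hypothetical name), the job can then
        be launched with:

          mpiexec -configfile mpiexec.cfg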

ERRORS

       In the event of an error, mpiexec will do its best to shut everything down and return to the state before
       it was executed.  For example, if mpiexec was used to boot a LAM run-time environment,  mpiexec  will  do
       its best to take down whatever portion of the run-time environment  successfully  booted  (including
       invoking lamhalt and/or lamwipe).

EXAMPLES

       The following are some examples of how to use mpiexec.  Note  that  all  examples  assume  the  CPU-based
       scheduling (which does NOT map to physical CPUs) as described in mpirun(1).

       mpiexec -n 4 my_mpi_program
                 Launch 4 copies of my_mpi_program in an already-existing LAM universe.

       mpiexec -n 4 my_mpi_program arg1 arg2
                 Similar  to  the previous example, but pass "arg1" and "arg2" as command line arguments to each
                 copy of my_mpi_program.

       mpiexec -ssi rpi gm -n 4 my_mpi_program
                 Similar to the previous example, but pass "-ssi  rpi  gm"  back  to  mpirun  to  tell  the  MPI
                 processes to use the Myrinet (gm) RPI for MPI message passing.

       mpiexec -n 4 program1 : -n 4 program2
                 Launch  4 copies of program1 and 4 copies of program2 in an already-existing LAM universe.  All
                 8 resulting processes will share a common MPI_COMM_WORLD.

       mpiexec -machinefile hostfile -n 4 my_mpi_program
                 Boot the LAM run-time environment with the nodes listed  in  the  hostfile,  run  4  copies  of
                 my_mpi_program in the resulting LAM universe, and then shut down the LAM universe.

       mpiexec -machinefile hostfile my_mpi_program
                 Similar to above, but run my_mpi_program on all available CPUs in the LAM universe.

       mpiexec -arch solaris2.8 sol_program : -arch linux linux_program
                 Run  as  many  copies  of  sol_program as there are CPUs on Solaris machines in the current LAM
                  universe, and as many copies of linux_program as there  are  CPUs  on  Linux  machines  in  the
                 current LAM universe.  All resulting processes will share a common MPI_COMM_WORLD.

       mpiexec -arch solaris2.8 sol2.8_prog : -arch solaris2.9 sol2.9_program
                 Similar  to  the  above example, except distinguish between Solaris 2.8 and 2.9 (since they may
                 have different shared libraries, etc.).

SEE ALSO

       appschema(5), bhost(5), lamboot(1), lamexec(1), mpirun(1)

LAM 7.1.4                                          July, 2007                                         MPIEXEC(1)