Provided by: slurm-client_21.08.5-2ubuntu1_amd64

NAME

       salloc  -  Obtain  a  Slurm  job  allocation  (a  set  of nodes), execute a command, and then release the
       allocation when the command is finished.

SYNOPSIS

       salloc [OPTIONS(0)...] [ : [OPTIONS(N)...]] [command(0) [args(0)...]]

       Option(s) define multiple jobs in a co-scheduled heterogeneous job.  For more details about heterogeneous
       jobs see the document
       https://slurm.schedmd.com/heterogeneous_jobs.html

DESCRIPTION

       salloc is used to allocate a Slurm job allocation, which is a set of  resources  (nodes),  possibly  with
       some  set  of  constraints  (e.g.  number  of processors per node).  When salloc successfully obtains the
       requested allocation, it then runs the command specified by the user.  Finally, when the  user  specified
       command is complete, salloc relinquishes the job allocation.

       The  command  may  be  any  program  the  user  wishes.   Some typical commands are xterm, a shell script
       containing srun commands, and srun (see the EXAMPLES section). If no command is  specified,  then  salloc
       runs the user's default shell.

       The  following  document describes the influence of various options on the allocation of cpus to jobs and
       tasks.
       https://slurm.schedmd.com/cpu_management.html

       NOTE: The salloc logic includes support to save and restore the terminal line settings and is designed to
       be executed in the foreground. If you need to execute salloc in the background, set its standard input to
       some file, for example: "salloc -n16 a.out </dev/null &"

RETURN VALUE

       If salloc is unable to execute the user command, it will return 1 and print errors to stderr.
       Otherwise, on success or if killed by the signals HUP, INT, KILL, or QUIT, it will return 0.

COMMAND PATH RESOLUTION

       If provided, the command is resolved in the following order:

       1. If command starts with ".", then the path is constructed as: current working directory / command
       2. If command starts with a "/", then the path is considered absolute.
       3. If command can be resolved through PATH (see path_resolution(7)).
       4. If command is in the current working directory.

       Current working directory is the calling process working directory unless the --chdir argument is passed,
       which will override the current working directory.

OPTIONS

       -A, --account=<account>
              Charge  resources  used by this job to specified account.  The account is an arbitrary string. The
              account name may be changed after job submission using the scontrol command.

       --acctg-freq=<datatype>=<interval>[,<datatype>=<interval>...]
              Define the job accounting and profiling sampling intervals  in  seconds.   This  can  be  used  to
              override  the  JobAcctGatherFrequency  parameter  in  the  slurm.conf  file. <datatype>=<interval>
              specifies the task sampling interval for the jobacct_gather plugin or a sampling  interval  for  a
              profiling  type  by the acct_gather_profile plugin. Multiple comma-separated <datatype>=<interval>
              pairs may be specified. Supported datatype values are:

              task        Sampling interval for the  jobacct_gather  plugins  and  for  task  profiling  by  the
                          acct_gather_profile plugin.
                          NOTE:  This  frequency  is used to monitor memory usage. If memory limits are enforced
                          the highest frequency a user can request is what is configured in the slurm.conf file.
                          It can not be disabled.

              energy      Sampling interval for energy profiling using the acct_gather_energy plugin.

              network     Sampling interval for infiniband profiling using the acct_gather_interconnect plugin.

              filesystem  Sampling interval for filesystem profiling using the acct_gather_filesystem plugin.

              The default value for the task sampling interval is 30 seconds.  The default value for  all  other
              intervals  is  0.  An interval of 0 disables sampling of the specified type.  If the task sampling
              interval is 0, accounting information  is  collected  only  at  job  termination  (reducing  Slurm
              interference with the job).
              Smaller (non-zero) values have a greater impact upon job performance, but a value of 30 seconds is
              not likely to be noticeable for applications having less than 10,000 tasks.
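
               For illustration, the following sketch (the program name a.out is a placeholder) samples
               task accounting every 15 seconds and energy use every 60 seconds:
                  salloc --acctg-freq=task=15,energy=60 -n4 a.out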

       --bb=<spec>
               Burst buffer specification. The form of the specification is system dependent. Note that the
               burst buffer may not be accessible from a login node and may require that salloc spawn a
               shell on one of its allocated compute nodes. When the --bb option is used, Slurm parses this
               option and creates a
              temporary  burst  buffer  script  file  that  is  used internally by the burst buffer plugins. See
              Slurm's burst buffer guide for more information and examples:
              https://slurm.schedmd.com/burst_buffer.html

       --bbf=<file_name>
               Path of a file containing the burst buffer specification. The form of the specification is
               system dependent. Also see --bb. Note that the burst buffer may not be accessible from a
               login node and may require that salloc spawn a shell on one of its allocated compute nodes.
               See Slurm's burst buffer
              guide for more information and examples:
              https://slurm.schedmd.com/burst_buffer.html

       --begin=<time>
              Defer eligibility of this job allocation until the specified time.

              Time may be of the form HH:MM:SS to run a job at a specific time of day  (seconds  are  optional).
              (If  that  time  is  already past, the next day is assumed.)  You may also specify midnight, noon,
              fika (3 PM) or teatime (4 PM) and you can have a time-of-day suffixed with AM or PM for running in
              the morning or the evening.  You can also say what day the job will be run, by specifying  a  date
               of the form MMDDYY or MM/DD/YY or YYYY-MM-DD.  Combine date and time using the following format
              YYYY-MM-DD[THH:MM[:SS]]. You can also give times like now + count time-units, where the time-units
              can be seconds (default), minutes, hours, days, or weeks and you can tell Slurm  to  run  the  job
              today with the keyword today and to run the job tomorrow with the keyword tomorrow.  The value may
              be changed after job submission using the scontrol command.  For example:
                 --begin=16:00
                 --begin=now+1hour
                 --begin=now+60           (seconds by default)
                 --begin=2010-01-20T12:34:00

              Notes on date/time specifications:
               -  Although  the  'seconds' field of the HH:MM:SS time specification is allowed by the code, note
              that the poll time of the Slurm scheduler is not precise enough to guarantee dispatch of  the  job
              on  the  exact second.  The job will be eligible to start on the next poll following the specified
              time. The exact poll interval depends on the Slurm scheduler (e.g., 60 seconds  with  the  default
              sched/builtin).
               - If no time (HH:MM:SS) is specified, the default is (00:00:00).
               -  If  a  date is specified without a year (e.g., MM/DD) then the current year is assumed, unless
              the combination of MM/DD and HH:MM:SS has already passed for that year, in  which  case  the  next
              year is used.

       --bell Force salloc to ring the terminal bell when the job allocation is granted (and only if stdout is a
              tty).   By  default,  salloc  only  rings  the bell if the allocation is pending for more than ten
              seconds (and only if stdout is a tty). Also see the option --no-bell.

       -D, --chdir=<path>
              Change directory to path before beginning execution. The path can be specified  as  full  path  or
              relative path to the directory where the command is executed.

       --cluster-constraint=<list>
              Specifies features that a federated cluster must have to have a sibling job submitted to it. Slurm
              will  attempt  to  submit  a  sibling  job  to  a  cluster if it has at least one of the specified
              features.

       -M, --clusters=<string>
              Clusters to issue commands to.  Multiple cluster names may be comma separated.  The  job  will  be
              submitted  to  the  one  cluster  providing the earliest expected job initiation time. The default
              value is the current cluster. A value of 'all' will query to run on all clusters.  Note  that  the
              SlurmDBD must be up for this option to work properly.

       --comment=<string>
              An arbitrary comment.

       -C, --constraint=<list>
              Nodes  can  have features assigned to them by the Slurm administrator.  Users can specify which of
              these features are required by their job using the constraint option.  Only nodes having  features
              matching  the  job  constraints  will be used to satisfy the request.  Multiple constraints may be
              specified with AND, OR, matching OR, resource counts, etc. (some operators are  not  supported  on
              all system types).  Supported constraint options include:

              Single Name
                     Only   nodes   which   have   the   specified   feature   will   be   used.   For  example,
                     --constraint="intel"

              Node Count
                     A request can specify the number of nodes needed with some feature by appending an asterisk
                     and count after the feature name.  For example,  --nodes=16  --constraint="graphics*4  ..."
                     indicates  that  the  job requires 16 nodes and that at least four of those nodes must have
                     the feature "graphics."

               AND    Only nodes with all of the specified features will be used.  The ampersand is used as
                      the AND operator.  For example, --constraint="intel&gpu"

               OR     Only nodes with at least one of the specified features will be used.  The vertical
                      bar is used as the OR operator.  For example, --constraint="intel|amd"

              Matching OR
                     If only one of a set of possible options should be used for all allocated nodes,  then  use
                     the   OR   operator   and  enclose  the  options  within  square  brackets.   For  example,
                     --constraint="[rack1|rack2|rack3|rack4]" might be used to specify that all  nodes  must  be
                     allocated on a single rack of the cluster, but any of those four racks can be used.

              Multiple Counts
                     Specific  counts  of  multiple  resources  may  be  specified by using the AND operator and
                     enclosing     the     options     within      square      brackets.       For      example,
                     --constraint="[rack1*2&rack2*4]"  might be used to specify that two nodes must be allocated
                     from nodes with the feature of "rack1" and four nodes must be allocated from nodes with the
                     feature "rack2".

                     NOTE: This construct does not support multiple Intel KNL NUMA or MCDRAM modes. For example,
                     while       --constraint="[(knl&quad)*2&(knl&hemi)*4]"       is       not        supported,
                     --constraint="[haswell*2&(knl&hemi)*4]"  is supported.  Specification of multiple KNL modes
                     requires the use of a heterogeneous job.

              Brackets
                     Brackets can be used to indicate that you are looking for a set of nodes with the different
                     requirements      contained       within       the       brackets.       For       example,
                     --constraint="[(rack1|rack2)*1&(rack3)*2]" will get you one node with either the "rack1" or
                     "rack2"  features  and  two  nodes  with the "rack3" feature.  The same request without the
                     brackets will try to find a single node that meets those requirements.

                     NOTE: Brackets are only reserved for Multiple Counts and Matching OR syntax.  AND operators
                     require a count for each feature inside square brackets (i.e. "[quad*2&hemi*1]").

               Parentheses
                      Parentheses can be used to group like node features together.  For example,
                      --constraint="[(knl&snc4&flat)*4&haswell*1]" might be used to specify that four nodes
                      with the features "knl", "snc4" and "flat" plus one node with the feature "haswell"
                      are required. All options within parentheses should be joined with AND (e.g. "&")
                      operators.

       --container=<path_to_container>
              Absolute path to OCI container bundle.

       --contiguous
              If set, then the allocated nodes must form a contiguous set.

              NOTE:   If   SelectPlugin=cons_res  this  option  won't  be  honored  with  the  topology/tree  or
              topology/3d_torus plugins, both of which can modify the node ordering.

       -S, --core-spec=<num>
              Count of specialized cores per node reserved by the job for system operations and not used by  the
              application.  The  application will not use these cores, but will be charged for their allocation.
              Default value is dependent upon the node's configured CoreSpecCount value.  If a value of zero  is
              designated  and the Slurm configuration option AllowSpecResourcesUsage is enabled, the job will be
              allowed to override CoreSpecCount and use the specialized resources  on  nodes  it  is  allocated.
              This option can not be used with the --thread-spec option.

       --cores-per-socket=<cores>
              Restrict  node  selection  to  nodes  with at least the specified number of cores per socket.  See
               additional information under the -B option below when the task/affinity plugin is enabled.
              NOTE: This option may implicitly set the number of tasks (if -n was not specified) as one task per
              requested thread.

       --cpu-freq=<p1>[-p2[:p3]]

              Request that job steps initiated by srun commands inside this allocation be run at some  requested
              frequency if possible, on the CPUs selected for the step on the compute node(s).

              p1 can be  [#### | low | medium | high | highm1] which will set the frequency scaling_speed to the
              corresponding value, and set the frequency scaling_governor to UserSpace. See below for definition
              of the values.

              p1  can be [Conservative | OnDemand | Performance | PowerSave] which will set the scaling_governor
              to the corresponding value. The governor has to be in  the  list  set  by  the  slurm.conf  option
              CpuFreqGovernors.

              When  p2  is  present, p1 will be the minimum scaling frequency and p2 will be the maximum scaling
              frequency.

               p2 can be [#### | medium | high | highm1].  p2 must be greater than p1.

              p3 can be [Conservative | OnDemand | Performance | PowerSave | SchedUtil | UserSpace]  which  will
              set the governor to the corresponding value.

              If  p3 is UserSpace, the frequency scaling_speed will be set by a power or energy aware scheduling
              strategy to a value between p1 and p2 that lets the job run within the site's power goal. The  job
              may be delayed if p1 is higher than a frequency that allows the job to run within the goal.

              If  the current frequency is < min, it will be set to min. Likewise, if the current frequency is >
              max, it will be set to max.

              Acceptable values at present include:

              ####          frequency in kilohertz

              Low           the lowest available frequency

              High          the highest available frequency

              HighM1        (high minus one) will select the next highest available frequency

              Medium        attempts to set a frequency in the middle of the available range

              Conservative  attempts to use the Conservative CPU governor

              OnDemand      attempts to use the OnDemand CPU governor (the default value)

              Performance   attempts to use the Performance CPU governor

              PowerSave     attempts to use the PowerSave CPU governor

              UserSpace     attempts to use the UserSpace CPU governor

               The following informational environment variable is set in the job step when the --cpu-freq
               option is requested:
                       SLURM_CPU_FREQ_REQ

              This environment variable can also be used to supply the value for the CPU frequency request if it
              is set when the 'srun' command is issued.  The --cpu-freq on the command line  will  override  the
               environment variable value.  The form of the environment variable is the same as the command line.
              See the ENVIRONMENT VARIABLES section for a description of the SLURM_CPU_FREQ_REQ variable.

              NOTE:  This parameter is treated as a request, not a requirement.  If the job step's node does not
              support setting the CPU frequency, or the requested value is  outside  the  bounds  of  the  legal
              frequencies, an error is logged, but the job step is allowed to continue.

              NOTE:  Setting the frequency for just the CPUs of the job step implies that the tasks are confined
              to those CPUs.  If task confinement (i.e., TaskPlugin=task/affinity or TaskPlugin=task/cgroup with
              the "ConstrainCores" option) is not configured, this parameter is ignored.

              NOTE: When the step completes, the frequency and governor of each selected CPU  is  reset  to  the
              previous values.

               NOTE: Submitting jobs with the --cpu-freq option when linuxproc is the ProctrackType can
               cause jobs to run too quickly, before accounting is able to poll for job information. As a
               result, not all accounting information will be present.
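
               For example, a request of the following form (a sketch; legal values depend on the node's
               available frequencies and the configured CpuFreqGovernors) asks that job steps run between
               1.0 and 2.4 GHz under the OnDemand governor:
                  salloc --cpu-freq=1000000-2400000:OnDemand -n8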

       --cpus-per-gpu=<ncpus>
              Advise  Slurm  that  ensuing  job  steps  will  require  ncpus  processors per allocated GPU.  Not
              compatible with the --cpus-per-task option.

       -c, --cpus-per-task=<ncpus>
              Advise Slurm that ensuing job steps will require ncpus processors per task. By default Slurm  will
              allocate one processor per task.

               For instance, consider an application that has 4 tasks, each requiring 3 processors.  If our
               cluster is comprised of quad-processor nodes and we simply ask for 12 processors, the
               controller might give us only 3 nodes.  However, by using the --cpus-per-task=3 option, the
               controller knows
              that each task requires 3 processors on the same node, and the controller will grant an allocation
              of 4 nodes, one for each of the 4 tasks.
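
               On such a cluster, that scenario corresponds to a request like the following, which yields a
               four-node allocation (assuming the quad-processor nodes described above):
                  salloc --ntasks=4 --cpus-per-task=3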

       --deadline=<OPT>
               Remove the job if no ending is possible before this deadline (start > (deadline - time[-min])).
              Default is no deadline.  Valid time formats are:
              HH:MM[:SS] [AM|PM]
              MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
              MM/DD[/YY]-HH:MM[:SS]
               YYYY-MM-DD[THH:MM[:SS]]
              now[+count[seconds(default)|minutes|hours|days|weeks]]
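
               For example, the following sketch (times are illustrative) abandons the request unless a
               two-hour job can still complete by 6 PM:
                  salloc --deadline=18:00 --time=120 -n4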

       --delay-boot=<minutes>
               Do not reboot nodes in order to satisfy this job's feature specification if the job has been
              eligible to run for less than this time period.  If the job has waited for less than the specified
              period,  it  will  use  only  nodes which already have the specified features.  The argument is in
              units of minutes.  A default value may be set by  a  system  administrator  using  the  delay_boot
              option  of  the  SchedulerParameters configuration parameter in the slurm.conf file, otherwise the
              default value is zero (no delay).

       -d, --dependency=<dependency_list>
               Defer the start of this job until the specified dependencies have been satisfied.
              <dependency_list>    is    of    the    form    <type:job_id[:job_id][,type:job_id[:job_id]]>   or
              <type:job_id[:job_id][?type:job_id[:job_id]]>.  All dependencies must  be  satisfied  if  the  ","
              separator  is  used.   Any  dependency  may  be  satisfied if the "?" separator is used.  Only one
              separator may be used.  Many jobs can share the same dependency and these jobs may even belong  to
              different   users.  The   value  may  be  changed after job submission using the scontrol command.
              Dependencies on remote jobs are allowed in a federation.  Once a job dependency fails due  to  the
              termination  state  of a preceding job, the dependent job will never be run, even if the preceding
              job is requeued and has a different termination state in a subsequent execution.

              after:job_id[[+time][:jobid[+time]...]]
                     After the specified jobs start or are cancelled and 'time' in minutes  from  job  start  or
                     cancellation  happens, this job can begin execution. If no 'time' is given then there is no
                     delay after start or cancellation.

              afterany:job_id[:jobid...]
                     This job can begin execution after the specified jobs have terminated.

              afterburstbuffer:job_id[:jobid...]
                     This job can begin execution after the specified jobs have terminated  and  any  associated
                     burst buffer stage out operations have completed.

              aftercorr:job_id[:jobid...]
                     A  task  of  this  job  array  can  begin  execution after the corresponding task ID in the
                     specified job has completed successfully (ran to completion with an exit code of zero).

              afternotok:job_id[:jobid...]
                     This job can begin execution after the specified jobs have terminated in some failed  state
                     (non-zero exit code, node failure, timed out, etc).

              afterok:job_id[:jobid...]
                     This  job  can  begin execution after the specified jobs have successfully executed (ran to
                     completion with an exit code of zero).

              singleton
                     This job can begin execution after any previously launched jobs sharing the same  job  name
                     and user have terminated.  In other words, only one job by that name and owned by that user
                     can  be running or suspended at any point in time.  In a federation, a singleton dependency
                     must be fulfilled on all clusters unless  DependencyParameters=disable_remote_singleton  is
                     used in slurm.conf.
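
               For example, to defer an allocation until job 12345 (a placeholder job ID) has completed
               successfully:
                  salloc --dependency=afterok:12345 -n2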

       -m,
       --distribution={*|block|cyclic|arbitrary|plane=<size>}[:{*|block|cyclic|fcyclic}[:{*|block|cyclic|fcyclic}]][,{Pack|NoPack}]

              Specify  alternate  distribution  methods  for  remote  processes.   For job allocation, this sets
              environment variables that will be used by subsequent srun requests and also affects  which  cores
              will be selected for job allocation.

              This  option  controls  the  distribution  of  tasks  to  the  nodes  on which resources have been
              allocated, and the distribution of those resources to tasks for binding (task affinity). The first
              distribution method (before the first ":") controls the  distribution  of  tasks  to  nodes.   The
              second  distribution  method  (after  the  first  ":") controls the distribution of allocated CPUs
              across sockets for binding to tasks. The third distribution method (after the second ":") controls
              the distribution of allocated CPUs across cores for  binding  to  tasks.   The  second  and  third
              distributions apply only if task affinity is enabled.  The third distribution is supported only if
              the task/cgroup plugin is configured. The default value for each distribution type is specified by
              *.

              Note  that  with select/cons_res and select/cons_tres, the number of CPUs allocated to each socket
              and node may be different. Refer to https://slurm.schedmd.com/mc_support.html for more information
              on resource allocation, distribution of tasks to nodes, and binding of tasks to CPUs.
              First distribution method (distribution of tasks across nodes):

              *      Use the default method for distributing tasks to nodes (block).

              block  The block distribution method will distribute tasks to a node such that  consecutive  tasks
                     share  a  node.  For  example,  consider an allocation of three nodes each with two cpus. A
                     four-task block distribution request will distribute those tasks to the  nodes  with  tasks
                     one  and  two  on the first node, task three on the second node, and task four on the third
                     node.  Block distribution is the default behavior if the number of tasks exceeds the number
                     of allocated nodes.

              cyclic The cyclic distribution method will distribute tasks to a node such that consecutive  tasks
                     are distributed over consecutive nodes (in a round-robin fashion). For example, consider an
                     allocation  of three nodes each with two cpus. A four-task cyclic distribution request will
                     distribute those tasks to the nodes with tasks one and four on the first node, task two  on
                     the  second  node,  and  task  three  on  the  third  node.   Note  that when SelectType is
                     select/cons_res, the same  number  of  CPUs  may  not  be  allocated  on  each  node.  Task
                     distribution will be round-robin among all the nodes with CPUs yet to be assigned to tasks.
                     Cyclic  distribution  is  the default behavior if the number of tasks is no larger than the
                     number of allocated nodes.

              plane  The  tasks  are  distributed  in  blocks  of  size  <size>.  The  size  must  be  given  or
                     SLURM_DIST_PLANESIZE  must be set. The number of tasks distributed to each node is the same
                     as for cyclic distribution, but the taskids assigned to each node depend on the plane size.
                     Additional distribution specifications cannot be  combined  with  this  option.   For  more
                     details        (including        examples       and       diagrams),       please       see
                     https://slurm.schedmd.com/mc_support.html and https://slurm.schedmd.com/dist_plane.html

              arbitrary
                      The arbitrary method of distribution will allocate processes in order as listed in
                      the file designated by the environment variable SLURM_HOSTFILE.  If this variable is
                      set, it will override any other method specified.  If not set, the method will
                      default to block.  The hostfile must contain at minimum the number of hosts
                      requested, one per line or comma separated.  If specifying a task count (-n,
                      --ntasks=<number>), your tasks will be laid out on the nodes in the order of the
                      file.
                     NOTE: The arbitrary distribution option on a job allocation only controls the nodes  to  be
                     allocated  to  the  job and not the allocation of CPUs on those nodes. This option is meant
                     primarily to control a job step's task layout in an existing job allocation  for  the  srun
                     command.
                     NOTE:  If  the  number  of  tasks is given and a list of requested nodes is also given, the
                     number of nodes used from that list will be reduced to match that of the number of tasks if
                     the number of nodes in the list is greater than the number of tasks.

              Second distribution method (distribution of CPUs across sockets for binding):

              *      Use the default method for distributing CPUs across sockets (cyclic).

              block  The block distribution method will distribute allocated CPUs consecutively  from  the  same
                     socket for binding to tasks, before using the next consecutive socket.

              cyclic The  cyclic  distribution method will distribute allocated CPUs for binding to a given task
                     consecutively from the same socket, and from the next consecutive socket for the next task,
                     in a round-robin fashion across sockets.  Tasks requiring more than one CPU will  have  all
                     of those CPUs allocated on a single socket if possible.

              fcyclic
                     The  fcyclic  distribution  method will distribute allocated CPUs for binding to tasks from
                     consecutive sockets in a round-robin fashion across the sockets.  Tasks requiring more than
                      one CPU will have each CPU allocated in a cyclic fashion across sockets.

              Third distribution method (distribution of CPUs across cores for binding):

              *      Use  the  default  method  for  distributing  CPUs  across  cores  (inherited  from  second
                     distribution method).

              block  The  block  distribution  method will distribute allocated CPUs consecutively from the same
                     core for binding to tasks, before using the next consecutive core.

              cyclic The cyclic distribution method will distribute allocated CPUs for binding to a  given  task
                     consecutively  from the same core, and from the next consecutive core for the next task, in
                     a round-robin fashion across cores.

              fcyclic
                     The fcyclic distribution method will distribute allocated CPUs for binding  to  tasks  from
                     consecutive cores in a round-robin fashion across the cores.

              Optional control for task distribution over nodes:

               Pack   Rather than distributing a job step's tasks evenly across its allocated nodes, pack
                     them as tightly as possible on  the  nodes.   This  only  applies  when  the  "block"  task
                     distribution method is used.

              NoPack Rather than packing a job step's tasks as tightly as possible on the nodes, distribute them
                     evenly.    This   user   option   will  supersede  the  SelectTypeParameters  CR_Pack_Nodes
                     configuration parameter.
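
               For example, the following sketch distributes tasks to nodes round-robin and binds each
               task's CPUs block-wise within sockets:
                  salloc -N2 -n8 --distribution=cyclic:block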

       -x, --exclude=<node_name_list>
              Explicitly exclude certain nodes from the resources granted to the job.

       --exclusive[={user|mcs}]
              The job allocation can not share nodes with other running jobs  (or  just  other  users  with  the
              "=user" option or with the "=mcs" option).  If user/mcs are not specified (i.e. the job allocation
              can  not share nodes with other running jobs), the job is allocated all CPUs and GRES on all nodes
              in the allocation, but is only allocated as much memory as it requested.  This  is  by  design  to
              support  gang scheduling, because suspended jobs still reside in memory. To request all the memory
              on a node, use --mem=0.  The default shared/exclusive behavior depends on system configuration and
              the partition's OverSubscribe option takes precedence over the job's option.
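
               For example, to take whole nodes together with all of their memory (per the note above,
               --mem=0 requests all memory on a node):
                  salloc --exclusive --mem=0 -N2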

       -B, --extra-node-info=<sockets>[:cores[:threads]]
              Restrict node selection to nodes with at least the specified number of sockets, cores  per  socket
              and/or threads per core.
              NOTE:  These  options  do  not  specify  the  resource  allocation  size.  Each value specified is
              considered a minimum.  An asterisk (*) can be used as a placeholder indicating that all  available
              resources of that type are to be utilized. Values can also be specified as min-max. The individual
              levels can also be specified in separate options if desired:
                  --sockets-per-node=<sockets>
                  --cores-per-socket=<cores>
                  --threads-per-core=<threads>
               If the task/affinity plugin is enabled, then specifying an allocation in this manner also
               results in subsequently launched tasks being bound to threads if the -B option specifies a
               thread count, to cores if a core count is specified, or to sockets otherwise.  If
              SelectType is configured to select/cons_res, it must have a parameter of CR_Core,  CR_Core_Memory,
              CR_Socket, or CR_Socket_Memory for this option to be honored.  If not specified, the scontrol show
              job will display 'ReqS:C:T=*:*:*'. This option applies to job allocations.
              NOTE: This option is mutually exclusive with --hint, --threads-per-core and --ntasks-per-core.
              NOTE: This option may implicitly set the number of tasks (if -n was not specified) as one task per
              requested thread.
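
               For example, the following sketch restricts selection to nodes with at least two sockets and
               four cores per socket:
                  salloc -B 2:4 -n1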

       --get-user-env[=timeout][mode]
              This option will load login environment variables for the user specified in the --uid option.  The
              environment  variables  are  retrieved  by  running  something  of  this  sort "su - <username> -c
              /usr/bin/env" and parsing the output.  Be aware that any  environment  variables  already  set  in
              salloc's  environment  will  take  precedence  over  any environment variables in the user's login
               environment.  The optional timeout value is in seconds. Default value is 3 seconds.  The
               optional mode value controls the "su" options.  With a mode value of "S", "su" is executed
               without the "-" option.  With a mode value of "L", "su" is executed with the "-" option,
               replicating the login environment.  If mode is not specified, the mode established at Slurm
               build time is used.  Examples of use include "--get-user-env", "--get-user-env=10",
               "--get-user-env=10L", and "--get-user-env=S".
              NOTE: This option only works if the caller has an effective uid of "root".

       --gid=<group>
              Submit the job with the specified group's group access permissions.  group may be the  group  name
              or  the  numerical  group  ID.  In the default Slurm configuration, this option is only valid when
              used by the user root.

       --gpu-bind=[verbose,]<type>
              Bind tasks to specific GPUs.  By default every spawned task can access every GPU allocated to  the
              step.   If  "verbose," is specified before <type>, then print out GPU binding debug information to
              the stderr of the tasks. GPU binding is ignored if there is only one task.

              Supported type options:

              closest   Bind each task to the GPU(s) which are closest.  In a NUMA environment, each task may be
                        bound to more than one GPU (i.e.  all GPUs in that NUMA environment).

              map_gpu:<list>
                        Bind  by  setting  GPU  masks  on  tasks  (or  ranks)  as  specified  where  <list>   is
                        <gpu_id_for_task_0>,<gpu_id_for_task_1>,...  GPU  IDs  are interpreted as decimal values
                         unless they are preceded with '0x', in which case they are interpreted as hexadecimal values.
                        If the number of tasks (or ranks) exceeds the number of elements in this list,  elements
                        in  the  list  will  be  reused  as  needed  starting from the beginning of the list. To
                        simplify support for large task counts, the lists may follow a map with an asterisk  and
                        repetition count.  For example "map_gpu:0*4,1*4".  If the task/cgroup plugin is used and
                        ConstrainDevices is set in cgroup.conf, then the GPU IDs are zero-based indexes relative
                        to  the GPUs allocated to the job (e.g. the first GPU is 0, even if the global ID is 3).
                        Otherwise, the GPU IDs are global IDs, and all GPUs on each node in the  job  should  be
                        allocated for predictable binding results.

              mask_gpu:<list>
                        Bind   by  setting  GPU  masks  on  tasks  (or  ranks)  as  specified  where  <list>  is
                        <gpu_mask_for_task_0>,<gpu_mask_for_task_1>,... The mapping is specified for a node  and
                        identical mapping is applied to the tasks on every node (i.e. the lowest task ID on each
                        node  is  mapped  to  the  first mask specified in the list, etc.). GPU masks are always
                        interpreted as hexadecimal values but can be preceded with an optional '0x'. To simplify
                        support for large task counts,  the  lists  may  follow  a  map  with  an  asterisk  and
                        repetition  count.   For example "mask_gpu:0x0f*4,0xf0*4".  If the task/cgroup plugin is
                        used and ConstrainDevices is set in cgroup.conf, then the GPU IDs are zero-based indexes
                        relative to the GPUs allocated to the job (e.g. the first GPU is 0, even if  the  global
                        ID  is  3).  Otherwise, the GPU IDs are global IDs, and all GPUs on each node in the job
                        should be allocated for predictable binding results.

              none      Do not bind tasks to GPUs (turns off binding if --gpus-per-task is requested).

              per_task:<gpus_per_task>
                         Each task will be bound to the number of GPUs specified in <gpus_per_task>.  GPUs
                         are assigned to tasks in order: the first task is assigned the first
                         <gpus_per_task> GPUs on the node, and so on.

              single:<tasks_per_gpu>
                        Like --gpu-bind=closest, except that each task can only be bound to a single  GPU,  even
                        when  it  can  be  bound to multiple GPUs that are equally close.  The GPU to bind to is
                        determined by <tasks_per_gpu>, where the first <tasks_per_gpu> tasks are  bound  to  the
                        first  GPU  available,  the  second  <tasks_per_gpu>  tasks  are bound to the second GPU
                        available, etc.  This is basically a block distribution of tasks  onto  available  GPUs,
                        where  the  available  GPUs  are  determined  by the socket affinity of the task and the
                        socket affinity of the GPUs as specified in gres.conf's Cores parameter.
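
               For example, the following sketch allocates one GPU per task and reports the binding chosen
               on each task's stderr (counts are illustrative):
                  salloc --ntasks=4 --gpus-per-task=1 --gpu-bind=verbose,per_task:1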

        --gpu-freq=[<type>=]<value>[,<type>=<value>][,verbose]
              Request that GPUs allocated to the job are configured with specific frequency values.  This option
              can be used to independently configure the GPU and its  memory  frequencies.   After  the  job  is
              completed,  the frequencies of all affected GPUs will be reset to the highest possible values.  In
              some cases, system power caps may override the requested values.  The field type can be  "memory".
              If  type  is  not  specified,  the GPU frequency is implied.  The value field can either be "low",
              "medium", "high", "highm1" or a numeric value in megahertz (MHz).  If the specified numeric  value
              is  not  possible,  a  value  as  close  as possible will be used. See below for definition of the
              values.  The verbose option causes current GPU frequency information to be  logged.   Examples  of
              use include "--gpu-freq=medium,memory=high" and "--gpu-freq=450".

              Supported value definitions:

              low       the lowest available frequency.

              medium    attempts to set a frequency in the middle of the available range.

              high      the highest available frequency.

              highm1    (high minus one) will select the next highest available frequency.

       -G, --gpus=[type:]<number>
              Specify  the total number of GPUs required for the job.  An optional GPU type specification can be
              supplied.  For example "--gpus=volta:3".  Multiple options can be requested in a  comma  separated
              list, for example: "--gpus=volta:3,kepler:1".  See also the --gpus-per-node, --gpus-per-socket and
              --gpus-per-task options.

       --gpus-per-node=[type:]<number>
              Specify  the  number  of  GPUs  required  for  the job on each node included in the job's resource
              allocation.    An   optional   GPU   type   specification   can   be   supplied.    For    example
              "--gpus-per-node=volta:3".   Multiple  options  can  be  requested  in a comma separated list, for
              example:  "--gpus-per-node=volta:3,kepler:1".   See  also  the   --gpus,   --gpus-per-socket   and
              --gpus-per-task options.

       --gpus-per-socket=[type:]<number>
              Specify  the  number  of  GPUs  required for the job on each socket included in the job's resource
              allocation.    An   optional   GPU   type   specification   can   be   supplied.    For    example
              "--gpus-per-socket=volta:3".   Multiple  options  can  be requested in a comma separated list, for
              example: "--gpus-per-socket=volta:3,kepler:1".  Requires job to specify a sockets per node count (
              --sockets-per-node).  See also the --gpus, --gpus-per-node and --gpus-per-task options.

       --gpus-per-task=[type:]<number>
              Specify the number of GPUs required for the job on each task to be spawned in the  job's  resource
              allocation.     An   optional   GPU   type   specification   can   be   supplied.    For   example
              "--gpus-per-task=volta:1". Multiple options can be  requested  in  a  comma  separated  list,  for
              example:   "--gpus-per-task=volta:3,kepler:1".   See   also   the  --gpus,  --gpus-per-socket  and
              --gpus-per-node options.  This option requires an  explicit  task  count,  e.g.  -n,  --ntasks  or
              "--gpus=X  --gpus-per-task=Y"  rather  than  an  ambiguous  range of nodes with -N, --nodes.  This
              option will implicitly set --gpu-bind=per_task:<gpus_per_task>, but that can be overridden with an
              explicit --gpu-bind specification.

       --gres=<list>
              Specifies a comma-delimited list of generic consumable resources.  The format of each entry on the
              list is "name[[:type]:count]".  The name is that of the consumable resource.   The  count  is  the
              number  of  those  resources with a default value of 1.  The count can have a suffix of "k" or "K"
              (multiple of 1024), "m" or "M" (multiple of 1024 x 1024), "g" or "G" (multiple of 1024  x  1024  x
              1024),  "t"  or "T" (multiple of 1024 x 1024 x 1024 x 1024), "p" or "P" (multiple of 1024 x 1024 x
              1024 x 1024 x 1024).  The specified resources will be allocated to the  job  on  each  node.   The
               available generic consumable resources are configurable by the system administrator.  A list of
              available generic consumable resources will be printed and the command will  exit  if  the  option
              argument   is   "help".   Examples  of  use  include  "--gres=gpu:2",  "--gres=gpu:kepler:2",  and
              "--gres=help".

       --gres-flags=<type>
              Specify generic resource task binding options.

              disable-binding
                     Disable filtering of CPUs with respect  to  generic  resource  locality.   This  option  is
                     currently required to use more CPUs than are bound to a GRES (i.e. if a GPU is bound to the
                     CPUs  on  one  socket,  but resources on more than one socket are required to run the job).
                     This option may permit a job to be allocated resources sooner than otherwise possible,  but
                     may result in lower job performance.
                     NOTE: This option is specific to SelectType=cons_res.

              enforce-binding
                     The  only CPUs available to the job will be those bound to the selected GRES (i.e. the CPUs
                     identified in the gres.conf file will be strictly enforced).  This  option  may  result  in
                     delayed  initiation  of  a  job.   For example a job requiring two GPUs and one CPU will be
                     delayed until both GPUs on a single socket are available rather than using  GPUs  bound  to
                      separate sockets; however, the application performance may be improved due to improved
                     communication speed.  Requires the node to be configured with  more  than  one  socket  and
                     resource filtering will be performed on a per-socket basis.
                     NOTE: This option is specific to SelectType=cons_tres.

       -h, --help
              Display help information and exit.

       --hint=<type>
              Bind tasks according to application hints.
              NOTE:  This option cannot be used in conjunction with --ntasks-per-core, --threads-per-core or -B.
              If --hint is specified as a command line argument, it will take precedence over the environment.

              compute_bound
                     Select settings for compute bound applications: use all cores in each  socket,  one  thread
                     per core.

              memory_bound
                     Select settings for memory bound applications: use only one core in each socket, one thread
                     per core.

              [no]multithread
                     [don't]  use  extra  threads  with  in-core multi-threading which can benefit communication
                     intensive applications.  Only supported with the task/affinity plugin.

              help   show this help message

       -H, --hold
              Specify the job is to be submitted in a held state (priority of zero).  A  held  job  can  now  be
              released using scontrol to reset its priority (e.g. "scontrol release <job_id>").

       -I, --immediate[=<seconds>]
               Exit if resources are not available within the time period specified.  If no argument is given
              (seconds defaults to 1), resources must be available immediately for the request  to  succeed.  If
              defer  is  configured  in  SchedulerParameters  and  seconds=1  the  allocation  request will fail
              immediately; defer conflicts and takes precedence over this option.  By  default,  --immediate  is
              off,  and the command will block until resources become available. Since this option's argument is
              optional, for proper parsing the single letter option must be followed immediately with the  value
              and not include a space between them. For example "-I60" and not "-I 60".

       -J, --job-name=<jobname>
              Specify a name for the job allocation. The specified name will appear along with the job id number
              when  querying  running  jobs  on  the  system.  The default job name is the name of the "command"
              specified on the command line.

       -K, --kill-command[=signal]
              salloc always runs a user-specified command once the allocation  is  granted.   salloc  will  wait
              indefinitely  for that command to exit.  If you specify the --kill-command option salloc will send
              a signal to your command any time that the Slurm controller tells salloc that its  job  allocation
              has  been revoked. The job allocation can be revoked for a couple of reasons: someone used scancel
              to revoke the allocation, or the allocation reached its time limit.   If  you  do  not  specify  a
              signal  name  or  number and Slurm is configured to signal the spawned command at job termination,
              the default signal is SIGHUP for interactive and SIGTERM for non-interactive sessions. Since  this
              option's  argument  is  optional,  for  proper  parsing  the single letter option must be followed
              immediately with the value and not include a space between them. For example "-K1" and not "-K 1".

       -L, --licenses=<license>[@db][:count][,license[@db][:count]...]
              Specification of licenses (or other resources available on all nodes of the cluster) which must be
              allocated to this job.  License names can be followed by a colon and count (the default  count  is
              one).  Multiple license names should be comma separated (e.g.  "--licenses=foo:4,bar").

       --mail-type=<type>
              Notify  user  by  email  when  certain event types occur.  Valid type values are NONE, BEGIN, END,
              FAIL, REQUEUE, ALL (equivalent to BEGIN,  END,  FAIL,  INVALID_DEPEND,  REQUEUE,  and  STAGE_OUT),
              INVALID_DEPEND  (dependency  never  satisfied),  STAGE_OUT  (burst  buffer  stage out and teardown
              completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit),  TIME_LIMIT_80  (reached
              80  percent  of  time limit), and TIME_LIMIT_50 (reached 50 percent of time limit).  Multiple type
              values may be specified in a comma separated list.  The user to  be  notified  is  indicated  with
              --mail-user.
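
               For example, to be notified when the job starts and on any failure (the address is a
               placeholder):
                  salloc --mail-type=BEGIN,FAIL --mail-user=user@example.com -n1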

       --mail-user=<user>
              User  to receive email notification of state changes as defined by --mail-type.  The default value
              is the submitting user.

       --mcs-label=<mcs>
               Used only when the mcs/group plugin is enabled.  This parameter is a group among the groups
               of the user.  The default value is calculated by the mcs plugin if it is enabled.

       --mem=<size>[units]
              Specify the real memory required per node.  Default units are megabytes.  Different units  can  be
              specified  using  the  suffix  [K|M|G|T].  Default value is DefMemPerNode and the maximum value is
               MaxMemPerNode. If configured, both parameters can be seen using the scontrol show config
              command.   This  parameter  would  generally  be  used  if  whole  nodes  are  allocated  to  jobs
              (SelectType=select/linear).  Also see --mem-per-cpu and --mem-per-gpu.  The  --mem,  --mem-per-cpu
              and  --mem-per-gpu  options  are  mutually exclusive. If --mem, --mem-per-cpu or --mem-per-gpu are
              specified as command line arguments, then they will take precedence over the environment.

              NOTE: A memory size specification of zero is treated as a special case and grants the  job  access
              to  all  of  the  memory  on each node.  If the job is allocated multiple nodes in a heterogeneous
              cluster, the memory limit on each node will be that  of  the  node  in  the  allocation  with  the
              smallest memory size (same limit will apply to every node in the job's allocation).

              NOTE:  Enforcement  of  memory  limits currently relies upon the task/cgroup plugin or enabling of
              accounting, which samples memory  use  on  a  periodic  basis  (data  need  not  be  stored,  just
              collected).  In  both cases memory use is based upon the job's Resident Set Size (RSS). A task may
              exceed the memory limit until the next periodic accounting sample.
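
               For example, to request 16 gigabytes of real memory on each of two nodes (sizes are
               illustrative):
                  salloc --mem=16G -N2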

       --mem-bind=[{quiet|verbose},]<type>
              Bind tasks to memory. Used only when the task/affinity plugin  is  enabled  and  the  NUMA  memory
              functions  are  available.   Note that the resolution of CPU and memory binding may differ on some
              architectures. For example, CPU binding may be performed at  the  level  of  the  cores  within  a
              processor  while  memory  binding will be performed at the level of nodes, where the definition of
              "nodes" may differ from system to system.  By default no memory binding  is  performed;  any  task
              using  any CPU can use any memory. This option is typically used to ensure that each task is bound
              to the memory closest to its assigned CPU. The use of any type other than "none" or "local" is not
              recommended.

              NOTE: To have Slurm always report on the selected memory binding for all commands  executed  in  a
              shell,  you  can  enable  verbose mode by setting the SLURM_MEM_BIND environment variable value to
              "verbose".

              The following informational environment variables are set when --mem-bind is in use:

                   SLURM_MEM_BIND_LIST
                   SLURM_MEM_BIND_PREFER
                   SLURM_MEM_BIND_SORT
                   SLURM_MEM_BIND_TYPE
                   SLURM_MEM_BIND_VERBOSE

              See the  ENVIRONMENT  VARIABLES  section  for  a  more  detailed  description  of  the  individual
              SLURM_MEM_BIND* variables.

              Supported options include:

              help   show this help message

              local  Use memory local to the processor in use

              map_mem:<list>
                     Bind   by  setting  memory  masks  on  tasks  (or  ranks)  as  specified  where  <list>  is
                     <numa_id_for_task_0>,<numa_id_for_task_1>,...  The mapping is  specified  for  a  node  and
                     identical  mapping  is  applied to the tasks on every node (i.e. the lowest task ID on each
                     node is mapped to the first ID specified in the list, etc.).  NUMA IDs are  interpreted  as
                      decimal values unless they are preceded with '0x', in which case they are interpreted as
                     hexadecimal values.  If the number of tasks (or ranks) exceeds the number  of  elements  in
                     this list, elements in the list will be reused as needed starting from the beginning of the
                     list.   To  simplify  support  for  large  task  counts, the lists may follow a map with an
                     asterisk and repetition  count.   For  example  "map_mem:0x0f*4,0xf0*4".   For  predictable
                     binding results, all CPUs for each node in the job should be allocated to the job.

              mask_mem:<list>
                     Bind   by  setting  memory  masks  on  tasks  (or  ranks)  as  specified  where  <list>  is
                     <numa_mask_for_task_0>,<numa_mask_for_task_1>,...  The mapping is specified for a node  and
                     identical  mapping  is  applied to the tasks on every node (i.e. the lowest task ID on each
                     node is mapped to the first mask specified in the  list,  etc.).   NUMA  masks  are  always
                     interpreted  as  hexadecimal  values.  Note that masks must be preceded with a '0x' if they
                     don't begin with [0-9] so they are seen as numerical values.  If the number  of  tasks  (or
                     ranks)  exceeds the number of elements in this list, elements in the list will be reused as
                      needed starting from the beginning of the list.  To simplify support for large task
                      counts, a mask in the list may be followed by an asterisk and a repetition count.  For
                      example "mask_mem:0*4,1*4".  For predictable binding results, all CPUs for each node in the job
                     should be allocated to the job.

              no[ne] don't bind tasks to memory (default)

               p[refer]
                      Prefer use of first specified NUMA node, but permit use of other available NUMA nodes.

              q[uiet]
                     quietly bind before task runs (default)

              rank   bind by task rank (not recommended)

              sort   sort free cache pages (run zonesort on Intel KNL nodes)

              v[erbose]
                     verbosely report binding before task runs
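
               For example, the following illustrative request (the program name is a placeholder) binds each
               task to the memory local to its assigned CPU and reports the binding before the tasks run:

                    $ salloc -N1 -n4 --mem-bind=verbose,local srun ./a.out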

       --mem-per-cpu=<size>[units]
              Minimum memory required per allocated CPU.  Default units are megabytes.  Different units  can  be
              specified  using the suffix [K|M|G|T].  The default value is DefMemPerCPU and the maximum value is
              MaxMemPerCPU (see exception below). If configured, both parameters can be seen using the  scontrol
              show  config  command.   Note  that  if  the  job's  --mem-per-cpu  value  exceeds  the configured
              MaxMemPerCPU, then the user's limit will be treated as a memory limit per task; --mem-per-cpu will
              be reduced to a value no larger than MaxMemPerCPU; --cpus-per-task will be set and  the  value  of
              --cpus-per-task  multiplied  by  the new --mem-per-cpu value will equal the original --mem-per-cpu
              value specified by the user.  This parameter would generally be used if individual processors  are
              allocated  to  jobs  (SelectType=select/cons_res).  If resources are allocated by core, socket, or
              whole nodes, then the number of CPUs allocated to a job may be higher than the task count and  the
              value  of  --mem-per-cpu  should  be adjusted accordingly.  Also see --mem and --mem-per-gpu.  The
              --mem, --mem-per-cpu and --mem-per-gpu options are mutually exclusive.

              NOTE: If the final amount of memory requested by a job can't be satisfied  by  any  of  the  nodes
              configured in the partition, the job will be rejected.  This could happen if --mem-per-cpu is used
              with  the  --exclusive option for a job allocation and --mem-per-cpu times the number of CPUs on a
              node is greater than the total memory of that node.
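
               For example, an illustrative request for 16 tasks with 2 gigabytes of memory per allocated CPU:

                    $ salloc -n16 --mem-per-cpu=2G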

       --mem-per-gpu=<size>[units]
              Minimum memory required per allocated GPU.  Default units are megabytes.  Different units  can  be
              specified  using  the  suffix [K|M|G|T].  Default value is DefMemPerGPU and is available on both a
              global and per partition basis.  If configured, the parameters can be seen using the scontrol show
              config and scontrol show partition commands.   Also  see  --mem.   The  --mem,  --mem-per-cpu  and
              --mem-per-gpu options are mutually exclusive.
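
               For example, an illustrative request for 2 GPUs with 16 gigabytes of memory per allocated GPU:

                    $ salloc --gpus=2 --mem-per-gpu=16G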

       --mincpus=<n>
              Specify a minimum number of logical cpus/processors per node.

       --network=<type>
              Specify  information  pertaining  to  the switch or network.  The interpretation of type is system
              dependent.  This option is supported when running Slurm on a Cray natively.  It is used to request
               using Network Performance Counters.  Only one value per request is valid.  All options are
               case-insensitive.  In this configuration supported values include:

               system
                     Use the system-wide network performance counters. Only nodes requested will be marked in use
                     for the job allocation.  If the job does not fill up the entire system, the rest of the
                     nodes cannot be used by other jobs using NPC; if idle, their state will appear as PerfCnts.
                     These nodes are still available for other jobs not using NPC.

               blade Use the blade network performance counters. Only nodes requested will be marked in use for
                     the job allocation.  If the job does not fill up the entire blade(s) allocated to the job,
                     those blade(s) cannot be used by other jobs using NPC; if idle, their state will appear as
                     PerfCnts.  These nodes are still available for other jobs not using NPC.

              In  all  cases  the  job  allocation  request  must specify the --exclusive option.  Otherwise the
              request will be denied.

              Also with any of these options steps are not allowed to share blades, so  resources  would  remain
              idle  inside  an  allocation  if the step running on a blade does not take up all the nodes on the
              blade.

              The network option is also supported on systems with IBM's Parallel Environment (PE).   See  IBM's
              LoadLeveler  job  command  keyword documentation about the keyword "network" for more information.
               Multiple values may be specified in a comma separated list.  All options are case-insensitive.
              Supported values include:

              BULK_XFER[=<resources>]
                          Enable  bulk  transfer of data using Remote Direct-Memory Access (RDMA).  The optional
                          resources specification is a numeric value which can have a suffix of "k",  "K",  "m",
                          "M",  "g"  or  "G"  for  kilobytes,  megabytes  or  gigabytes.   NOTE:  The  resources
                          specification is not supported by the underlying IBM  infrastructure  as  of  Parallel
                          Environment version 2.2 and no value should be specified at this time.

               CAU=<count> Number of Collective Acceleration Units (CAU) required.  Applies only to IBM Power7-IH
                           processors.  Default value is zero.  Independent CAU will be allocated for each
                           programming interface (MPI, LAPI, etc.)

              DEVNAME=<name>
                          Specify the device name to use for communications (e.g. "eth0" or "mlx4_0").

               DEVTYPE=<type>
                           Specify the device type to use for communications.  The supported values of type are:
                           "IB" (InfiniBand), "HFI" (P7 Host Fabric Interface), "IPONLY" (IP-Only interfaces),
                           "HPCE" (HPC Ethernet), and "KMUX" (Kernel Emulation of HPCE).  The devices allocated
                           to a job must all be of the same type.  The default value depends upon what hardware
                           is available and, in order of preference, is IPONLY (which is not considered in User
                           Space mode), HFI, IB, HPCE, and KMUX.

               IMMED=<count>
                           Number of immediate send slots per window required.  Applies only to IBM Power7-IH
                           processors.  Default value is zero.

               INSTANCES=<count>
                           Specify the number of network connections for each task on each network.  The default
                           instance count is 1.

              IPV4        Use Internet Protocol (IP) version 4 communications (default).

              IPV6        Use Internet Protocol (IP) version 6 communications.

              LAPI        Use the LAPI programming interface.

              MPI         Use the MPI programming interface.  MPI is the default interface.

              PAMI        Use the PAMI programming interface.

              SHMEM       Use the OpenSHMEM programming interface.

              SN_ALL      Use all available switch networks (default).

              SN_SINGLE   Use one available switch network.

              UPC         Use the UPC programming interface.

              US          Use User Space communications.

       Some examples of network specifications:

              Instances=2,US,MPI,SN_ALL
                     Create  two  user space connections for MPI communications on every switch network for each
                     task.

              US,MPI,Instances=3,Devtype=IB
                     Create three user space connections for MPI communications on every InfiniBand network  for
                     each task.

              IPV4,LAPI,SN_Single
                      Create an IP version 4 connection for LAPI communications on one switch network for each
                      task.

              Instances=2,US,LAPI,MPI
                     Create two user space connections each for LAPI and  MPI  communications  on  every  switch
                     network  for  each  task. Note that SN_ALL is the default option so every switch network is
                     used. Also note that Instances=2 specifies that two connections are  established  for  each
                     protocol  (LAPI  and  MPI)  and each task.  If there are two networks and four tasks on the
                     node then a total of 32 connections are established (2 instances x 2 protocols x 2 networks
                     x 4 tasks).

       --nice[=adjustment]
              Run the job with an adjusted scheduling priority  within  Slurm.  With  no  adjustment  value  the
               scheduling priority is decreased by 100. A negative nice value increases the priority; a positive
               value decreases it. The adjustment range is +/- 2147483645. Only privileged users can specify a negative
              adjustment.

       --no-bell
              Silence salloc's use of the terminal bell. Also see the option --bell.

       -k, --no-kill[=off]
              Do not automatically terminate a job if one of the nodes it has been allocated  fails.   The  user
              will  assume  the  responsibilities  for fault-tolerance should a node fail.  When there is a node
              failure, any active job steps (usually MPI jobs) on that node will almost certainly suffer a fatal
              error, but with --no-kill, the job allocation will not be revoked so the user may launch  new  job
              steps on the remaining nodes in their allocation.

               Specifying an optional argument of "off" disables the effect of the SALLOC_NO_KILL environment
               variable.

              By default Slurm terminates the entire job allocation if any node fails in its range of  allocated
              nodes.

       --no-shell
               Immediately exit after allocating resources, without running a command. However, the Slurm job
               will still be created, will remain active, and will own the allocated resources as long as it is
               active.  You will have a Slurm job id with no associated processes or tasks. You can submit srun
              commands  against  this resource allocation, if you specify the --jobid= option with the job id of
              this Slurm job.  Or, this can be used to temporarily reserve a set of resources so that other jobs
              cannot use them for some period of time.  (Note that the  Slurm  job  is  subject  to  the  normal
              constraints  on  jobs,  including  time  limits, so that eventually the job will terminate and the
              resources will be freed, or you can terminate the job manually using the scancel command.)
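
               For example, an illustrative session (the job id shown is arbitrary) that reserves two nodes and
               later runs a job step against the allocation:

                    $ salloc -N2 --no-shell
                    salloc: Granted job allocation 65538
                    $ srun --jobid=65538 -N2 hostname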

       -F, --nodefile=<node_file>
               Much like --nodelist, but the list is contained in a file of name node_file.  The node names of
              the  list  may  also  span multiple lines in the file.    Duplicate node names in the file will be
              ignored.  The order of the node names in the list is not important; the node names will be  sorted
              by Slurm.
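
               For example, given an illustrative file hosts.txt containing the placeholder node names node01
               and node02, one per line:

                    $ salloc --nodefile=hosts.txt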

       -w, --nodelist=<node_name_list>
              Request a specific list of hosts.  The job will contain all of these hosts and possibly additional
              hosts  as needed to satisfy resource requirements.  The list may be specified as a comma-separated
              list of hosts, a range of hosts (host[1-5,7,...] for example), or a filename.  The host list  will
              be  assumed  to  be  a  filename if it contains a "/" character.  If you specify a minimum node or
              processor count larger than can be satisfied by the supplied host list, additional resources  will
              be  allocated  on  other  nodes as needed.  Duplicate node names in the list will be ignored.  The
              order of the node names in the list is not important; the node names will be sorted by Slurm.

       -N, --nodes=<minnodes>[-maxnodes]
              Request that a minimum of minnodes nodes be allocated to this job.  A maximum node count may  also
              be specified with maxnodes.  If only one number is specified, this is used as both the minimum and
              maximum  node  count.   The  partition's  node limits supersede those of the job.  If a job's node
              limits are outside of the range permitted for its associated partition, the job will be left in  a
              PENDING  state.   This  permits  possible  execution  at a later time, when the partition limit is
              changed.  If a job node limit exceeds the number of nodes configured in  the  partition,  the  job
              will be rejected.  Note that the environment variable SLURM_JOB_NUM_NODES will be set to the count
              of  nodes  actually  allocated  to  the  job.  See  the  ENVIRONMENT  VARIABLES   section for more
              information.  If -N is not specified, the default behavior is to allocate enough nodes to  satisfy
              the  requirements  of  the -n and -c options.  The job will be allocated as many nodes as possible
              within the range specified and without delaying  the  initiation  of  the  job.   The  node  count
              specification may include a numeric value followed by a suffix of "k" (multiplies numeric value by
              1,024) or "m" (multiplies numeric value by 1,048,576).
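
               For example, an illustrative request for at least 2 and at most 4 nodes:

                    $ salloc -N 2-4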

       -n, --ntasks=<number>
               salloc does not launch tasks; it requests an allocation of resources and executes some command.
              This option advises the Slurm controller that job steps run within this allocation will  launch  a
              maximum of number tasks and sufficient resources are allocated to accomplish this.  The default is
              one task per node, but note that the --cpus-per-task option will change this default.

       --ntasks-per-core=<ntasks>
              Request  the  maximum  ntasks be invoked on each core.  Meant to be used with the --ntasks option.
              Related to --ntasks-per-node except at the core level instead  of  the  node  level.   NOTE:  This
              option is not supported when using SelectType=select/linear.

       --ntasks-per-gpu=<ntasks>
              Request  that  there are ntasks tasks invoked for every GPU.  This option can work in two ways: 1)
              either specify --ntasks in  addition,  in  which  case  a  type-less  GPU  specification  will  be
              automatically  determined  to  satisfy  --ntasks-per-gpu,  or 2) specify the GPUs wanted (e.g. via
              --gpus or --gres) without specifying --ntasks, and the total  task  count  will  be  automatically
              determined.   The  number of CPUs needed will be automatically increased if necessary to allow for
              any calculated task count.  This option will implicitly set --gpu-bind=single:<ntasks>,  but  that
              can be overridden with an explicit --gpu-bind specification.  This option is not compatible with a
              node  range  (i.e.  -N<minnodes-maxnodes>).   This  option is not compatible with --gpus-per-task,
              --gpus-per-socket, or --ntasks-per-node.  This option is not supported unless SelectType=cons_tres
              is configured (either directly or indirectly on Cray systems).
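
               For example, assuming SelectType=cons_tres is configured, an illustrative request for 4 GPUs
               with 2 tasks per GPU, yielding 8 tasks in total:

                    $ salloc --gpus=4 --ntasks-per-gpu=2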

       --ntasks-per-node=<ntasks>
              Request that ntasks be invoked on each node.  If used  with  the  --ntasks  option,  the  --ntasks
              option  will take precedence and the --ntasks-per-node will be treated as a maximum count of tasks
              per node.  Meant to be used with the --nodes option.  This is  related  to  --cpus-per-task=ncpus,
              but  does  not  require knowledge of the actual number of cpus on each node.  In some cases, it is
              more convenient to be able to request that no more than a specific number of tasks be  invoked  on
              each  node.   Examples  of  this  include  submitting  a  hybrid MPI/OpenMP app where only one MPI
              "task/rank" should be assigned to each node while allowing the OpenMP portion to  utilize  all  of
              the  parallelism  present in the node, or submitting a single setup/cleanup/monitoring job to each
              node of a pre-existing allocation as one step in a larger job script.
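
               For example, an illustrative hybrid MPI/OpenMP request (the program name is a placeholder) that
               places one task on each of 4 nodes and gives each task 16 CPUs for its OpenMP threads:

                    $ salloc -N4 --ntasks-per-node=1 --cpus-per-task=16 srun ./hybrid_app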

       --ntasks-per-socket=<ntasks>
              Request the maximum ntasks be invoked on each socket.  Meant to be used with the --ntasks  option.
              Related  to  --ntasks-per-node  except  at the socket level instead of the node level.  NOTE: This
              option is not supported when using SelectType=select/linear.

       -O, --overcommit
              Overcommit resources.

              When applied to a job allocation (not including jobs requesting exclusive access to the nodes) the
              resources are allocated as if only one task per node is requested. This means that  the  requested
              number  of cpus per task (-c, --cpus-per-task) are allocated per node rather than being multiplied
              by the number of tasks. Options used to specify the number of tasks per node, socket,  core,  etc.
              are ignored.

              When  applied  to  job  step  allocations  (the  srun command when executed within an existing job
              allocation), this option can be used to launch more than one task per CPU.   Normally,  srun  will
              not  allocate  more  than  one  process  per  CPU.   By specifying --overcommit you are explicitly
              allowing more than one process  per  CPU.  However  no  more  than  MAX_TASKS_PER_NODE  tasks  are
              permitted to execute per node.  NOTE: MAX_TASKS_PER_NODE is defined in the file slurm.h and is not
              a variable, it is set at Slurm build time.

       -s, --oversubscribe
              The  job  allocation  can  over-subscribe  resources with other running jobs.  The resources to be
              over-subscribed can be nodes, sockets, cores, and/or hyperthreads  depending  upon  configuration.
              The   default  over-subscribe  behavior  depends  on  system  configuration  and  the  partition's
              OverSubscribe option takes precedence over the job's  option.   This  option  may  result  in  the
              allocation  being  granted  sooner than if the --oversubscribe option was not set and allow higher
              system utilization, but  application  performance  will  likely  suffer  due  to  competition  for
              resources.  Also see the --exclusive option.

       -p, --partition=<partition_names>
              Request  a specific partition for the resource allocation.  If not specified, the default behavior
              is to allow the slurm controller to select the default  partition  as  designated  by  the  system
               administrator. If the job can use more than one partition, specify their names in a comma separated
              list  and  the one offering earliest initiation will be used with no regard given to the partition
              name ordering (although higher priority partitions will be considered first).   When  the  job  is
              initiated, the name of the partition used will be placed first in the job record partition string.

       --power=<flags>
              Comma separated list of power management plugin options.  Currently available flags include: level
              (all  nodes  allocated  to  the job should have identical power caps, may be disabled by the Slurm
              configuration option PowerParameters=job_no_level).

       --priority=<value>
              Request a specific job priority.  May be subject to  configuration  specific  constraints.   value
              should  either be a numeric value or "TOP" (for highest possible value).  Only Slurm operators and
              administrators can set the priority of a job.

       --profile={all|none|<type>[,<type>...]}
              Enables detailed data collection by the acct_gather_profile plugin.  Detailed data  are  typically
              time-series  that  are stored in an HDF5 file for the job or an InfluxDB database depending on the
              configured plugin.

               All       All data types are collected. (Cannot be combined with other values.)

               None      No data types are collected. This is the default.  (Cannot be combined with other
                         values.)

       Valid type values are:

              Energy Energy data is collected.

              Task   Task (I/O, Memory, ...) data is collected.

              Lustre Lustre data is collected.

              Network
                     Network (InfiniBand) data is collected.
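
               For example, assuming the acct_gather_profile plugin is configured, an illustrative request that
               collects task and energy time-series for the job's steps (the program name is a placeholder):

                    $ salloc -n8 --profile=task,energy srun ./a.out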

       -q, --qos=<qos>
              Request a quality of service for the job.  QOS values can be defined for each user/cluster/account
              association in the Slurm database.  Users will be limited to their association's  defined  set  of
              qos's  when  the  Slurm  configuration  parameter, AccountingStorageEnforce, includes "qos" in its
              definition.

       -Q, --quiet
              Suppress informational messages from salloc. Errors will still be displayed.

       --reboot
              Force the allocated nodes to reboot before starting the job.  This is  only  supported  with  some
              system  configurations  and will otherwise be silently ignored. Only root, SlurmUser or admins can
              reboot nodes.

       --reservation=<reservation_names>
              Allocate resources for the job from the named reservation. If  the  job  can  use  more  than  one
               reservation, specify their names in a comma separated list and the one offering earliest
               initiation will be used. Each reservation will be considered in the order it was requested.  All reservations
              will  be  listed  in  scontrol/squeue  through  the  life  of  the  job.   In accounting the first
              reservation will be seen and after the job starts the reservation used will replace it.

       --signal=[R:]<sig_num>[@sig_time]
              When a job is within sig_time seconds of its end time, send it the signal  sig_num.   Due  to  the
              resolution  of  event  handling  by  Slurm,  the  signal may be sent up to 60 seconds earlier than
              specified.  sig_num may either be a signal number or name (e.g. "10" or  "USR1").   sig_time  must
              have  an  integer  value  between 0 and 65535.  By default, no signal is sent before the job's end
              time.  If a sig_num is specified without any sig_time, the default time will be 60  seconds.   Use
              the  "R:"  option to allow this job to overlap with a reservation with MaxStartDelay set.  To have
              the signal sent at preemption time see the preempt_send_user_signal SlurmctldParameter.
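
               For example, an illustrative request asking Slurm to send SIGUSR1 approximately 120 seconds
               before the 30-minute job limit is reached:

                    $ salloc -t 30 --signal=USR1@120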

       --sockets-per-node=<sockets>
              Restrict node selection to nodes with at least the specified number of  sockets.   See  additional
              information under -B option above when task/affinity plugin is enabled.
              NOTE: This option may implicitly set the number of tasks (if -n was not specified) as one task per
              requested thread.

       --spread-job
              Spread  the  job  allocation over as many nodes as possible and attempt to evenly distribute tasks
              across the allocated nodes.  This option disables the topology/tree plugin.

       --switches=<count>[@max-time]
              When a tree topology is used, this defines the maximum count of leaf switches desired for the  job
              allocation  and optionally the maximum time to wait for that number of switches. If Slurm finds an
              allocation containing more switches than the count specified, the job  remains  pending  until  it
               either finds an allocation with the desired switch count or the time limit expires.  If there is no
              switch count limit, there is no delay in  starting  the  job.   Acceptable  time  formats  include
              "minutes",  "minutes:seconds",  "hours:minutes:seconds",  "days-hours",  "days-hours:minutes"  and
              "days-hours:minutes:seconds".  The  job's  maximum  time  delay  may  be  limited  by  the  system
              administrator  using  the  SchedulerParameters  configuration  parameter  with the max_switch_wait
              parameter option.  On a dragonfly network the only switch count supported is 1 since communication
               performance will be highest when a job is allocated resources on one leaf switch or more than 2
               leaf switches.  The default max-time is the max_switch_wait SchedulerParameters value.
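
               For example, an illustrative request confining a 16-node allocation to a single leaf switch,
               waiting up to 60 minutes for such an allocation to become available:

                    $ salloc -N16 --switches=1@60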

       --thread-spec=<num>
              Count  of  specialized  threads per node reserved by the job for system operations and not used by
              the application. The application will not use  these  threads,  but  will  be  charged  for  their
               allocation.  This option cannot be used with the --core-spec option.

       --threads-per-core=<threads>
              Restrict  node  selection to nodes with at least the specified number of threads per core. In task
              layout, use the specified maximum number of threads per core. NOTE: "Threads" refers to the number
              of processing units on each core rather than the number of application tasks to  be  launched  per
              core.  See additional information under -B option above when task/affinity plugin is enabled.
              NOTE: This option may implicitly set the number of tasks (if -n was not specified) as one task per
              requested thread.

       -t, --time=<time>
              Set  a limit on the total run time of the job allocation.  If the requested time limit exceeds the
              partition's time limit, the job will be left in a  PENDING  state  (possibly  indefinitely).   The
              default  time  limit  is the partition's default time limit.  When the time limit is reached, each
              task in each job step is sent SIGTERM followed  by  SIGKILL.   The  interval  between  signals  is
              specified  by  the  Slurm  configuration  parameter  KillWait.   The  OverTimeLimit  configuration
              parameter may permit the job to run longer than scheduled.  Time  resolution  is  one  minute  and
              second values are rounded up to the next minute.

              A  time  limit  of  zero  requests that no time limit be imposed.  Acceptable time formats include
              "minutes",  "minutes:seconds",  "hours:minutes:seconds",  "days-hours",  "days-hours:minutes"  and
              "days-hours:minutes:seconds".

       --time-min=<time>
              Set  a  minimum time limit on the job allocation.  If specified, the job may have its --time limit
              lowered to a value no lower than --time-min if doing so permits the job to begin execution earlier
              than otherwise possible.  The job's time limit will not be changed  after  the  job  is  allocated
              resources.   This  is performed by a backfill scheduling algorithm to allocate resources otherwise
              reserved for higher priority jobs.  Acceptable time formats include "minutes",  "minutes:seconds",
              "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".

       --tmp=<size>[units]
              Specify  a  minimum  amount  of  temporary  disk  space  per  node.   Default units are megabytes.
              Different units can be specified using the suffix [K|M|G|T].

       --uid=<user>
              Attempt to submit and/or run a job as user instead of the invoking user id.  The  invoking  user's
              credentials will be used to check access permissions for the target partition. This option is only
               valid for user root. For example, user root may use this option to run jobs as a normal user in
               a RootOnly partition. If run as root, salloc will drop its permissions
              to  the  uid specified after node allocation is successful. user may be the user name or numerical
              user ID.

       --usage
              Display brief help message and exit.

       --use-min-nodes
              If a range of node counts is given, prefer the smaller count.

       -v, --verbose
              Increase the verbosity of salloc's informational messages.  Multiple -v's  will  further  increase
              salloc's verbosity.  By default only errors will be displayed.

       -V, --version
              Display version information and exit.

       --wait-all-nodes=<value>
              Controls  when  the  execution  of the command begins with respect to when nodes are ready for use
              (i.e. booted).  By default, the salloc command will return as soon  as  the  allocation  is  made.
              This  default  can  be  altered  using  the  salloc_wait_nodes  option  to the SchedulerParameters
              parameter in the slurm.conf file.

              0    Begin execution as soon as allocation can be made.  Do not wait for all nodes to be ready for
                   use (i.e. booted).

              1    Do not begin execution until all nodes are ready for use.

       --wckey=<wckey>
              Specify wckey to be used with job.  If TrackWCKey=no (default) in the  slurm.conf  this  value  is
              ignored.

       --x11[={all|first|last}]
              Sets up X11 forwarding on "all", "first" or "last" node(s) of the allocation.  This option is only
              enabled  if  Slurm was compiled with X11 support and PrologFlags=x11 is defined in the slurm.conf.
              Default is "all".

PERFORMANCE

       Executing salloc sends a remote procedure call to slurmctld. If enough calls from salloc or  other  Slurm
       client  commands  that send remote procedure calls to the slurmctld daemon come in at once, it can result
       in a degradation of performance of the slurmctld daemon, possibly resulting in a denial of service.

       Do not run salloc or other Slurm client commands that send remote procedure calls to slurmctld from loops
       in shell scripts or other programs. Ensure that programs limit calls to salloc to the  minimum  necessary
       for the information you are trying to gather.

INPUT ENVIRONMENT VARIABLES

       Upon  startup,  salloc  will  read and handle the options set in the following environment variables. The
       majority of these variables are set the same way the options are set, as defined above. For flag  options
       that  are  defined  to  expect no argument, the option can be enabled by setting the environment variable
       without a value (empty or NULL string), the string 'yes', or a non-zero number. Any other value  for  the
        environment variable will result in the option not being set.  There are a couple of exceptions to
        these rules that are noted below.
       NOTE: Command line options always override environment variables settings.

       SALLOC_ACCOUNT        Same as -A, --account

       SALLOC_ACCTG_FREQ     Same as --acctg-freq

       SALLOC_BELL           Same as --bell

       SALLOC_BURST_BUFFER   Same as --bb

       SALLOC_CLUSTERS or SLURM_CLUSTERS
                             Same as --clusters

       SALLOC_CONSTRAINT     Same as -C, --constraint

        SALLOC_CONTAINER      Same as --container

       SALLOC_CORE_SPEC      Same as --core-spec

       SALLOC_CPUS_PER_GPU   Same as --cpus-per-gpu

       SALLOC_DEBUG          Same as -v, --verbose. Must be set to 0 or 1 to disable or enable the option.

       SALLOC_DELAY_BOOT     Same as --delay-boot

       SALLOC_EXCLUSIVE      Same as --exclusive

       SALLOC_GPU_BIND       Same as --gpu-bind

       SALLOC_GPU_FREQ       Same as --gpu-freq

       SALLOC_GPUS           Same as -G, --gpus

       SALLOC_GPUS_PER_NODE  Same as --gpus-per-node

       SALLOC_GPUS_PER_TASK  Same as --gpus-per-task

       SALLOC_GRES           Same as --gres

       SALLOC_GRES_FLAGS     Same as --gres-flags

       SALLOC_HINT or SLURM_HINT
                             Same as --hint

       SALLOC_IMMEDIATE      Same as -I, --immediate

       SALLOC_KILL_CMD       Same as -K, --kill-command

       SALLOC_MEM_BIND       Same as --mem-bind

       SALLOC_MEM_PER_CPU    Same as --mem-per-cpu

       SALLOC_MEM_PER_GPU    Same as --mem-per-gpu

       SALLOC_MEM_PER_NODE   Same as --mem

       SALLOC_NETWORK        Same as --network

       SALLOC_NO_BELL        Same as --no-bell

       SALLOC_NO_KILL        Same as -k, --no-kill

       SALLOC_OVERCOMMIT     Same as -O, --overcommit

       SALLOC_PARTITION      Same as -p, --partition

       SALLOC_POWER          Same as --power

       SALLOC_PROFILE        Same as --profile

       SALLOC_QOS            Same as --qos

       SALLOC_REQ_SWITCH     When a tree topology is used, this defines the maximum count  of  switches  desired
                             for  the  job allocation and optionally the maximum time to wait for that number of
                             switches. See --switches.

       SALLOC_RESERVATION    Same as --reservation

       SALLOC_SIGNAL         Same as --signal

       SALLOC_SPREAD_JOB     Same as --spread-job

       SALLOC_THREAD_SPEC    Same as --thread-spec

       SALLOC_THREADS_PER_CORE
                             Same as --threads-per-core

       SALLOC_TIMELIMIT      Same as -t, --time

       SALLOC_USE_MIN_NODES  Same as --use-min-nodes

       SALLOC_WAIT_ALL_NODES Same as --wait-all-nodes. Must be set to 0 or 1 to disable or enable the option.

        SALLOC_WAIT4SWITCH    Max time waiting for requested switches. See --switches.

       SALLOC_WCKEY          Same as --wckey

       SLURM_CONF            The location of the Slurm configuration file.

       SLURM_EXIT_ERROR      Specifies the exit code generated when a Slurm error occurs (e.g. invalid options).
                             This can be used by a script to distinguish application  exit  codes  from  various
                             Slurm error conditions.  Also see SLURM_EXIT_IMMEDIATE.

       SLURM_EXIT_IMMEDIATE  Specifies the exit code generated when the --immediate option is used and resources
                             are  not  currently  available.   This  can  be  used  by  a  script to distinguish
                             application  exit  codes  from  various   Slurm   error   conditions.    Also   see
                             SLURM_EXIT_ERROR.
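
        For example, defaults may be supplied through the environment and then overridden on the command line
        (the partition name and time limit below are illustrative):

               $ export SALLOC_PARTITION=debug
               $ export SALLOC_TIMELIMIT=30
               $ salloc -N1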

OUTPUT ENVIRONMENT VARIABLES

       salloc will set the following environment variables in the environment of the executed program:

       SLURM_*_HET_GROUP_#
              For  a  heterogeneous  job  allocation,  the  environment  variables  are  set separately for each
              component.

       SLURM_CLUSTER_NAME
              Name of the cluster on which the job is executing.

       SLURM_CONTAINER
              OCI Bundle for job.  Only set if --container is specified.

       SLURM_CPUS_PER_GPU
              Number of CPUs requested per allocated GPU.  Only set if the --cpus-per-gpu option is specified.

       SLURM_CPUS_PER_TASK
              Number of CPUs requested per task.  Only set if the --cpus-per-task option is specified.

       SLURM_DIST_PLANESIZE
              Plane distribution size. Only set for plane distributions.  See -m, --distribution.

       SLURM_DISTRIBUTION
              Only set if the -m, --distribution option is specified.

       SLURM_GPU_BIND
              Requested binding of tasks to GPU.  Only set if the --gpu-bind option is specified.

       SLURM_GPU_FREQ
              Requested GPU frequency.  Only set if the --gpu-freq option is specified.

       SLURM_GPUS
              Number of GPUs requested.  Only set if the -G, --gpus option is specified.

       SLURM_GPUS_PER_NODE
              Requested GPU count per allocated node.  Only set if the --gpus-per-node option is specified.

       SLURM_GPUS_PER_SOCKET
              Requested GPU count per allocated socket.  Only set if the --gpus-per-socket option is specified.

       SLURM_GPUS_PER_TASK
              Requested GPU count per allocated task.  Only set if the --gpus-per-task option is specified.

       SLURM_HET_SIZE
              Set to count of components in heterogeneous job.

       SLURM_JOB_ACCOUNT
               Account name associated with the job allocation.

       SLURM_JOB_ID
              The ID of the job allocation.

       SLURM_JOB_CPUS_PER_NODE
              Count  of  CPUs  available  to  the  job  on  the  nodes  in  the  allocation,  using  the  format
              CPU_count[(xnumber_of_nodes)][,CPU_count      [(xnumber_of_nodes)]     ...].      For     example:
              SLURM_JOB_CPUS_PER_NODE='72(x2),36' indicates that on the first and second  nodes  (as  listed  by
              SLURM_JOB_NODELIST)  the  allocation  has  72  CPUs,  while the third node has 36 CPUs.  NOTE: The
              select/linear plugin allocates entire nodes to jobs, so the value indicates  the  total  count  of
              CPUs on allocated nodes. The select/cons_res and select/cons_tres plugins allocate individual CPUs
              to jobs, so this number indicates the number of CPUs allocated to the job.

       SLURM_JOB_NODELIST
              List of nodes allocated to the job.

       SLURM_JOB_NUM_NODES
              Total number of nodes in the job allocation.

       SLURM_JOB_PARTITION
              Name of the partition in which the job is running.

       SLURM_JOB_QOS
              Quality Of Service (QOS) of the job allocation.

       SLURM_JOB_RESERVATION
              Advanced reservation containing the job allocation, if any.

       SLURM_JOBID
              The ID of the job allocation. See SLURM_JOB_ID. Included for backwards compatibility.

       SLURM_MEM_BIND
              Set to value of the --mem-bind option.

       SLURM_MEM_BIND_LIST
              Set to bit mask used for memory binding.

       SLURM_MEM_BIND_PREFER
              Set to "prefer" if the --mem-bind option includes the prefer option.

       SLURM_MEM_BIND_SORT
              Sort free cache pages (run zonesort on Intel KNL nodes)

       SLURM_MEM_BIND_TYPE
              Set  to the memory binding type specified with the --mem-bind option.  Possible values are "none",
              "rank", "map_map", "mask_mem" and "local".

       SLURM_MEM_BIND_VERBOSE
              Set to "verbose" if the --mem-bind option includes the verbose option.  Set to "quiet" otherwise.

       SLURM_MEM_PER_CPU
              Same as --mem-per-cpu

       SLURM_MEM_PER_GPU
              Requested memory per allocated GPU.  Only set if the --mem-per-gpu option is specified.

       SLURM_MEM_PER_NODE
              Same as --mem

       SLURM_NNODES
              Total number of nodes in the job allocation.  See  SLURM_JOB_NUM_NODES.   Included  for  backwards
              compatibility.

       SLURM_NODELIST
               List of nodes allocated to the job. See SLURM_JOB_NODELIST. Included for backwards compatibility.

       SLURM_NODE_ALIASES
              Sets  of  node  name,  communication  address and hostname for nodes allocated to the job from the
               cloud. Each element in the set is colon separated and each set is comma separated. For example:
              SLURM_NODE_ALIASES=ec0:1.2.3.4:foo,ec1:1.2.3.5:bar

       SLURM_NTASKS
              Same as -n, --ntasks

       SLURM_NTASKS_PER_CORE
              Set to value of the --ntasks-per-core option, if specified.

       SLURM_NTASKS_PER_GPU
              Set to value of the --ntasks-per-gpu option, if specified.

       SLURM_NTASKS_PER_NODE
              Set to value of the --ntasks-per-node option, if specified.

       SLURM_NTASKS_PER_SOCKET
              Set to value of the --ntasks-per-socket option, if specified.

       SLURM_OVERCOMMIT
              Set to 1 if --overcommit was specified.

       SLURM_PROFILE
              Same as --profile

       SLURM_SUBMIT_DIR
              The  directory from which salloc was invoked or, if applicable, the directory specified by the -D,
              --chdir option.

       SLURM_SUBMIT_HOST
              The hostname of the computer from which salloc was invoked.

       SLURM_TASKS_PER_NODE
              Number of tasks to be initiated on each node. Values are comma separated and in the same order  as
              SLURM_JOB_NODELIST.   If two or more consecutive nodes are to have the same task count, that count
              is   followed   by   "(x#)"   where    "#"    is    the    repetition    count.    For    example,
              "SLURM_TASKS_PER_NODE=2(x3),1"  indicates  that  the first three nodes will each execute two tasks
              and the fourth node will execute one task.

       SLURM_THREADS_PER_CORE
              This is only set if --threads-per-core or SALLOC_THREADS_PER_CORE were specified. The  value  will
              be  set  to  the value specified by --threads-per-core or SALLOC_THREADS_PER_CORE. This is used by
              subsequent srun calls within the job allocation.
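
        For example, within an allocation (the job id shown is arbitrary) these variables describe the granted
        resources:

               $ salloc -N2 -n4
               salloc: Granted job allocation 65539
               $ echo $SLURM_JOB_ID $SLURM_JOB_NUM_NODES $SLURM_NTASKS
               65539 2 4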

SIGNALS

       While salloc is waiting for a PENDING job allocation, most  signals  will  cause  salloc  to  revoke  the
       allocation request and exit.

       However  if  the  allocation  has been granted and salloc has already started the specified command, then
       salloc will ignore most signals.  salloc will not exit or release the allocation until the command exits.
       One notable exception is SIGHUP. A SIGHUP signal will cause salloc to release  the  allocation  and  exit
       without  waiting for the command to finish.  Another exception is SIGTERM, which will be forwarded to the
       spawned process.

EXAMPLES

       To get an allocation, and open a new xterm in which srun commands may be typed interactively:

              $ salloc -N16 xterm
              salloc: Granted job allocation 65537
              # (at this point the xterm appears, and salloc waits for xterm to exit)
              salloc: Relinquishing job allocation 65537

       To grab an allocation of nodes and launch a parallel application on one command line:

              $ salloc -N5 srun -n10 myprogram

       To create a heterogeneous job with 3 components, each allocating a unique set of nodes:

              $ salloc -w node[2-3] : -w node4 : -w node[5-7] bash
              salloc: job 32294 queued and waiting for resources
              salloc: job 32294 has been allocated resources
              salloc: Granted job allocation 32294

COPYING

       Copyright (C) 2006-2007 The Regents of the University of  California.   Produced  at  Lawrence  Livermore
       National Laboratory (cf, DISCLAIMER).
       Copyright (C) 2008-2010 Lawrence Livermore National Security.
       Copyright (C) 2010-2021 SchedMD LLC.

       This    file    is    part    of    Slurm,   a   resource   management   program.    For   details,   see
       <https://slurm.schedmd.com/>.

       Slurm is free software; you can redistribute it and/or modify it under  the  terms  of  the  GNU  General
       Public License as published by the Free Software Foundation; either version 2 of the License, or (at your
       option) any later version.

       Slurm  is  distributed  in  the  hope  that it will be useful, but WITHOUT ANY WARRANTY; without even the
       implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.   See  the  GNU  General  Public
       License for more details.

SEE ALSO

        sinfo(1),  sattach(1),  sbatch(1),  squeue(1),  scancel(1), scontrol(1), slurm.conf(5),
        sched_setaffinity(2), numa(3)

November 2021                                    Slurm Commands                                        salloc(1)