Provided by: pacemaker_2.1.6-5ubuntu2_amd64

NAME

       pacemaker-controld - Pacemaker controller options

SYNOPSIS

       [dc-version=string] [cluster-infrastructure=string] [cluster-name=string] [dc-deadtime=time]
       [cluster-recheck-interval=time] [load-threshold=percentage] [node-action-limit=integer]
       [fence-reaction=string] [election-timeout=time] [shutdown-escalation=time]
       [join-integration-timeout=time] [join-finalization-timeout=time] [transition-delay=time]
       [stonith-watchdog-timeout=time] [stonith-max-attempts=integer] [no-quorum-policy=select]
       [shutdown-lock=boolean] [shutdown-lock-limit=time]

DESCRIPTION

       Cluster options used by Pacemaker's controller
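
       These options are stored as cluster-wide properties in the CIB. They are typically managed with a
       higher-level tool such as pcs or crmsh, or directly with Pacemaker's crm_attribute command; as a general
       sketch (the option name and value below are placeholders):

           crm_attribute --type crm_config --name <option> --update <value>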

SUPPORTED PARAMETERS

       dc-version = string [none]
            Pacemaker version on the cluster node elected as the Designated Controller (DC)

           Includes a hash which identifies the exact changeset the code was built from. Used for diagnostic
           purposes.
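
            For example, the value reported by the current DC can be read back with a query such as the
            following (crm_attribute is shown for illustration; output formatting may vary by version):

                crm_attribute --type crm_config --name dc-version --query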

       cluster-infrastructure = string [corosync]
           The messaging stack on which Pacemaker is currently running

           Used for informational and diagnostic purposes.

       cluster-name = string
           An arbitrary name for the cluster

            This optional value is mostly for users' convenience in administration, but it may also be used in
            Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and
            resource agents.

       dc-deadtime = time [20s]
           How long to wait for a response from other nodes during start-up

           The optimal value will depend on the speed and load of your network and the type of switches used.

       cluster-recheck-interval = time [15min]
           Polling interval to recheck cluster state and evaluate rules with date specifications

           Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for
           failure timeouts and most time-based rules. However, it will also recheck the cluster after this
           amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain
            types of scheduler bugs. Allowed values: zero disables polling, while positive values are an
            interval in seconds (unless other units are specified, for example "5min").
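
            For example, the polling interval could be shortened so that rules with date specifications are
            re-evaluated more often (the 5-minute value is purely illustrative):

                crm_attribute --type crm_config --name cluster-recheck-interval --update 5min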

       load-threshold = percentage [80%]
           Maximum amount of system load that should be used by cluster nodes

            The cluster will slow down its recovery process when the amount of system resources used (currently
            CPU) approaches this limit.

       node-action-limit = integer [0]
            Maximum number of jobs that can be scheduled per node (0 means twice the number of CPU cores)

       fence-reaction = string [stop]
           How a cluster node should react if notified of its own fencing

           A cluster node may receive notification of its own fencing if fencing is misconfigured, or if fabric
           fencing is in use that doesn't cut cluster communication. Allowed values are "stop" to attempt to
           immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local
           node, falling back to stop on failure.

       election-timeout = time [2min]
           *** Advanced Use Only ***

           Declare an election failed if it is not decided within this much time. If you need to adjust this
           value, it probably indicates the presence of a bug.

       shutdown-escalation = time [20min]
           *** Advanced Use Only ***

           Exit immediately if shutdown does not complete within this much time. If you need to adjust this
           value, it probably indicates the presence of a bug.

       join-integration-timeout = time [3min]
           *** Advanced Use Only ***

           If you need to adjust this value, it probably indicates the presence of a bug.

       join-finalization-timeout = time [30min]
           *** Advanced Use Only ***

           If you need to adjust this value, it probably indicates the presence of a bug.

       transition-delay = time [0s]
           *** Advanced Use Only *** Enabling this option will slow down cluster recovery under all conditions

           Delay cluster recovery for this much time to allow for additional events to occur. Useful if your
           configuration is sensitive to the order in which ping updates arrive.

       stonith-watchdog-timeout = time [0]
           How long before nodes can be assumed to be safely down when watchdog-based self-fencing via SBD is in
           use

           If this is set to a positive value, lost nodes are assumed to self-fence using watchdog-based SBD
           within this much time. This does not require a fencing resource to be explicitly configured, though a
            fence_watchdog resource can be configured to limit use to specific nodes. If this is set to 0 (the
           default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative
           value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable
           if that is positive, or otherwise treat this as 0. WARNING: When used, this timeout must be larger
           than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to
           start on any of those nodes where this is not true for the local value or SBD is not active. When
           this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes
           that use SBD, otherwise data corruption or loss could occur.
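
            As a sketch, assuming SBD_WATCHDOG_TIMEOUT is set to 10 seconds on every node (the configuration
            file path and the values below are illustrative only), this option could then be set to a larger
            value:

                # On each node, e.g. in /etc/default/sbd:
                #   SBD_WATCHDOG_TIMEOUT=10

                crm_attribute --type crm_config --name stonith-watchdog-timeout --update 20s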

       stonith-max-attempts = integer [10]
           How many times fencing can fail before it will no longer be immediately re-attempted on a target

       no-quorum-policy = select [stop]
           What to do when the cluster does not have quorum

            Allowed values: stop (stop all resources in the affected cluster partition), freeze (continue
            running existing resources, but do not recover resources from other partitions), ignore (continue
            managing all resources as if quorum were held), demote (demote promotable resources and stop all
            other resources), suicide (fence all nodes in the affected partition)
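
            For example, a cluster whose resources should not be stopped on quorum loss could freeze them
            instead (illustrative command; crm_attribute or crmsh could be used equivalently):

                pcs property set no-quorum-policy=freeze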

       shutdown-lock = boolean [false]
           Whether to lock resources to a cleanly shut down node

           When true, resources active on a node when it is cleanly shut down are kept "locked" to that node
           (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most
           shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked.
           Clone and bundle instances and the promoted role of promotable clones are currently never locked,
           though support could be added in a future release.

       shutdown-lock-limit = time [0]
           Do not lock resources to a cleanly shut down node longer than this

           If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after
           this much time has passed since the shutdown was initiated, even if the node has not rejoined.
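
            For example, to keep resources locked to a cleanly shut down node for at most 30 minutes (both
            values are illustrative):

                crm_attribute --type crm_config --name shutdown-lock --update true
                crm_attribute --type crm_config --name shutdown-lock-limit --update 30min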

AUTHOR

       Andrew Beekhof <andrew@beekhof.net>
           Author.

Pacemaker Configuration                            04/01/2024                              PACEMAKER-CONTROLD(7)