Provided by: nova-common_29.2.0-0ubuntu1_all

NAME

       nova-manage - Management tool for the OpenStack Compute services.

SYNOPSIS

          nova-manage <category> [<action> [<options>...]]

DESCRIPTION

       nova-manage controls cloud computing instances by managing various admin-only aspects of Nova.

       The standard pattern for executing a nova-manage command is:

          nova-manage <category> <command> [<args>]

       Run without arguments to see a list of available command categories:

          nova-manage

       You can also run with a category argument such as db to see a list of all commands in that category:

          nova-manage db

OPTIONS

       These  options  apply to all commands and may be given in any order, before or after commands. Individual
       commands may provide additional options. Options without an argument can be combined after a single dash.

       -h, --help
              Show a help message and exit

       --config-dir <dir>
               Path to a config directory to pull *.conf files from. This file set is sorted, so as to provide a
               predictable parse order if individual options are overridden. The set is parsed after the file(s)
               specified via previous --config-file arguments, hence overridden options in the directory take
               precedence. This option must be set from the command-line.

       --config-file <path>
              Path to a config file to use. Multiple config files can be specified, with values in  later  files
              taking precedence. Defaults to None. This option must be set from the command-line.
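
       For example, configuration can be split across several files, with later files overriding earlier ones
       (the file names below are illustrative):

```shell
# Values in nova-db.conf override any duplicate options in nova.conf.
nova-manage --config-file /etc/nova/nova.conf \
            --config-file /etc/nova/nova-db.conf \
            db version
```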

       --log-config-append <path>, --log-config <path>, --log_config <path>
              The  name  of  a  logging  configuration  file.  This  file  is  appended  to any existing logging
              configuration files.  For details about logging configuration files, see the Python logging module
              documentation. Note that when logging configuration files are used then all logging  configuration
              is set in the configuration file and other logging configuration options are ignored (for example,
              --log-date-format).

       --log-date-format <format>
              Defines the format string for %(asctime)s in log records. Default: None. This option is ignored if
              --log-config-append is set.

       --log-dir <dir>, --logdir <dir>
              The   base   directory   used   for   relative   log_file   paths.   This  option  is  ignored  if
              --log-config-append is set.

       --log-file PATH, --logfile <path>
              Name of log file to send logging output to.  If no default is set, logging will go  to  stderr  as
              defined by use_stderr.  This option is ignored if --log-config-append is set.

       --syslog-log-facility SYSLOG_LOG_FACILITY
              Syslog facility to receive log lines.  This option is ignored if --log-config-append is set.

       --use-journal
              Enable  journald  for  logging. If running in a systemd environment you may wish to enable journal
              support.  Doing so will use the journal native protocol  which  includes  structured  metadata  in
              addition to log messages. This option is ignored if --log-config-append is set.

       --nouse-journal
              The inverse of --use-journal.

       --use-json
              Use JSON formatting for logging. This option is ignored if --log-config-append is set.

       --nouse-json
              The inverse of --use-json.

       --use-syslog
              Use  syslog  for  logging. Existing syslog format is DEPRECATED and will be changed later to honor
              RFC5424.  This option is ignored if --log-config-append is set.

       --nouse-syslog
              The inverse of --use-syslog.

       --watch-log-file
               Uses a logging handler designed to watch the file system. When the log file is moved or removed,
               this handler will open a new log file at the specified path instantaneously. It makes sense only
               if the --log-file option is specified and the platform is Linux. This option is ignored if
               --log-config-append is set.

       --nowatch-log-file
              The inverse of --watch-log-file.

       --debug, -d
              If enabled, the logging level will be set to DEBUG instead of the default INFO level.

       --nodebug
              The inverse of --debug.

       --post-mortem
              Allow post-mortem debugging.

       --nopost-mortem
              The inverse of --post-mortem.

       --version
              Show program's version number and exit

DATABASE COMMANDS

   db version
          nova-manage db version

       Print the current main database version.

   db sync
          nova-manage db sync [--local_cell] [VERSION]

       Upgrade  the main database schema up to the most recent version or VERSION if specified. By default, this
       command will also attempt to upgrade the schema for the cell0 database if it is mapped.  If  --local_cell
       is  specified, then only the main database in the current cell is upgraded. The local database connection
       is determined by  database.connection  in  the  configuration  file,  passed  to  nova-manage  using  the
       --config-file option(s).

       Refer  to  the nova-manage cells_v2 map_cell0 or nova-manage cells_v2 simple_cell_setup commands for more
       details on mapping the cell0 database.

       This command should be run after nova-manage api_db sync.
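
       A typical schema upgrade therefore runs the two sync commands in order (a sketch of the documented
       sequence; the configuration file path is illustrative):

```shell
# Upgrade the API database schema first, then the main (cell) databases.
nova-manage --config-file /etc/nova/nova.conf api_db sync
nova-manage --config-file /etc/nova/nova.conf db sync
```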

       Options

       --local_cell
              Only sync db in the local cell: do not attempt to fan-out to all cells.

       Return codes
                                ┌─────────────┬───────────────────────────────────────┐
                                │ Return code │ Description                           │
                                ├─────────────┼───────────────────────────────────────┤
                                │ 0           │ Successfully synced database schema.  │
                                ├─────────────┼───────────────────────────────────────┤
                                │ 1           │ Failed to access cell0.               │
                                └─────────────┴───────────────────────────────────────┘
--

API DATABASE COMMANDS

   api_db version
          nova-manage api_db version

       Print the current API database version.

       New in version 2015.1.0: (Kilo)

   api_db sync
          nova-manage api_db sync [VERSION]

       Upgrade  the API database schema up to the most recent version or VERSION if specified. This command does
       not create the API database, it runs schema migration scripts. The API database connection is  determined
       by api_database.connection in the configuration file passed to nova-manage.

       This command should be run before nova-manage db sync.

       New in version 2015.1.0: (Kilo)

       Changed in version 18.0.0: (Rocky)

       Added  support  for  upgrading  the  optional  placement  database  if [placement_database]/connection is
       configured.

       Changed in version 20.0.0: (Train)

       Removed support for upgrading the optional placement database as placement is now a separate project.

       Removed support for the legacy --version <version> argument.

       Changed in version 24.0.0: (Xena)

       Migrated versioning engine  to  alembic.  The  optional  VERSION  argument  is  now  expected  to  be  an
       alembic-based version. sqlalchemy-migrate-based versions will be rejected.

CELLS V2 COMMANDS

   cell_v2 simple_cell_setup
          nova-manage cell_v2 simple_cell_setup [--transport-url <transport_url>]

       Setup  a  fresh cells v2 environment. If --transport-url is not specified, it will use the one defined by
       transport_url in the configuration file.

       New in version 14.0.0: (Newton)

       Options

       --transport-url <transport_url>
              The transport url for the cell message queue.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ Setup is completed.                   │
                               ├─────────────┼───────────────────────────────────────┤
                                │ 1           │ No hosts are reporting, meaning  none │
                                │             │ can be mapped, or the  transport  URL │
                                │             │ is missing or invalid.                │
                               └─────────────┴───────────────────────────────────────┘

   cell_v2 map_cell0
          nova-manage cell_v2 map_cell0 [--database_connection <database_connection>]

       Create a cell mapping to the database connection for the cell0 database.  If a database_connection is not
       specified, it will use the one defined  by  database.connection  in  the  configuration  file  passed  to
       nova-manage.  The  cell0  database  is  used for instances that have not been scheduled to any cell. This
       generally applies to instances that have encountered an error before they have been scheduled.

       New in version 14.0.0: (Newton)

       Options

       --database_connection <database_connection>
              The database connection URL for cell0. This is optional. If  not  provided,  a  standard  database
              connection will be used based on the main database connection from nova configuration.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ cell0  is created successfully or has │
                               │             │ already been set up.                  │
                               └─────────────┴───────────────────────────────────────┘

   cell_v2 map_instances
          nova-manage cell_v2 map_instances --cell_uuid <cell_uuid>
            [--max-count <max_count>] [--reset]

       Map instances to the provided cell. Instances in the nova database will be queried from oldest to  newest
       and mapped to the provided cell. A --max-count can be set on the number of instances to map in a single
       run. Repeated runs of the command will start from where the last run finished so it is not  necessary  to
       increase  --max-count to finish.  A --reset option can be passed which will reset the marker, thus making
       the command start from the beginning as opposed to the default behavior of starting from where  the  last
       run finished.

       If --max-count is not specified, all instances in the cell will be mapped in batches of 50. If you have a
       large number of instances, consider specifying a custom value and run the command until it exits with 0.
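
       The batching behaviour described above can be scripted by re-running the command until it exits 0 (the
       cell UUID below is a placeholder):

```shell
#!/bin/sh
# Map instances in batches of 1000 until none remain (exit code 0).
# Exit code 1 means more instances are left; anything else is an error.
CELL_UUID="00000000-0000-0000-0000-000000000000"
while :; do
    nova-manage cell_v2 map_instances --cell_uuid "$CELL_UUID" --max-count 1000
    rc=$?
    case $rc in
        0) break ;;                        # all instances mapped
        1) ;;                              # more remain: run the next batch
        *) echo "map_instances failed: $rc" >&2; exit "$rc" ;;
    esac
done
```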

       New in version 12.0.0: (Liberty)

       Options

       --cell_uuid <cell_uuid>
              Unmigrated instances will be mapped to the cell with the UUID provided.

       --max-count <max_count>
              Maximum  number  of  instances  to  map.  If  not set, all instances in the cell will be mapped in
              batches of 50. If you have a large number of instances, consider specifying a custom value and run
              the command until it exits with 0.

       --reset
              The command will start from the beginning as opposed to the  default  behavior  of  starting  from
              where the last run finished.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ All instances have been mapped.       │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ There   are  still  instances  to  be │
                               │             │ mapped.                               │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 127         │ Invalid value for --max-count.        │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 255         │ An unexpected error occurred.         │
                               └─────────────┴───────────────────────────────────────┘

   cell_v2 map_cell_and_hosts
          nova-manage cell_v2 map_cell_and_hosts [--name <cell_name>]
            [--transport-url <transport_url>] [--verbose]

       Create a cell mapping to the database connection and message queue transport URL, and map hosts  to  that
       cell. The database connection comes from the database.connection defined in the configuration file passed
       to  nova-manage. If --transport-url is not specified, it will use the one defined by transport_url in the
       configuration file. This command is idempotent (can be run multiple times), and the verbose  option  will
       print out the resulting cell mapping UUID.

       New in version 13.0.0: (Mitaka)

       Options

       --transport-url <transport_url>
              The transport url for the cell message queue.

       --name <cell_name>
              The name of the cell.

       --verbose
              Output the cell mapping uuid for any newly mapped hosts.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ Successful completion.                │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ The   transport  url  is  missing  or │
                                │             │ invalid.                              │
                               └─────────────┴───────────────────────────────────────┘

   cell_v2 verify_instance
          nova-manage cell_v2 verify_instance --uuid <instance_uuid> [--quiet]

       Verify instance mapping to a cell. This command is useful to determine if the  cells  v2  environment  is
       properly setup, specifically in terms of the cell, host, and instance mapping records required.

       New in version 14.0.0: (Newton)

       Options

       --uuid <instance_uuid>
              The instance UUID to verify.

       --quiet
              Do not print anything.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ The  instance was successfully mapped │
                               │             │ to a cell.                            │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ The instance is not mapped to a cell. │
                               │             │ See the map_instances command.        │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 2           │ The cell mapping is missing. See  the │
                                │             │ map_cell_and_hosts command if you are │
                               │             │ upgrading    from    a    cells    v1 │
                               │             │ environment,          and         the │
                               │             │ simple_cell_setup command if you  are │
                               │             │ upgrading   from   a   non-cells   v1 │
                               │             │ environment.                          │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 3           │ The instance is  a  deleted  instance │
                               │             │ that still has an instance mapping.   │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 4           │ The  instance is an archived instance │
                               │             │ that still has an instance mapping.   │
                               └─────────────┴───────────────────────────────────────┘

   cell_v2 create_cell
          nova-manage cell_v2 create_cell [--name <cell_name>]
            [--transport-url <transport_url>]
            [--database_connection <database_connection>] [--verbose] [--disabled]

       Create  a  cell  mapping  to  the  database  connection  and  message   queue   transport   URL.   If   a
       database_connection  is  not  specified,  it  will  use  the  one  defined  by database.connection in the
       configuration file passed to nova-manage. If --transport-url is  not  specified,  it  will  use  the  one
       defined  by transport_url in the configuration file. The verbose option will print out the resulting cell
       mapping UUID. Cells are created enabled by default; passing the --disabled option creates a pre-disabled
       cell, to which no instances will be scheduled.
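
       A typical invocation supplies both URLs explicitly (the cell name, credentials and host names below are
       placeholders for your deployment):

```shell
nova-manage cell_v2 create_cell --name cell1 \
    --transport-url rabbit://nova:secret@rabbitmq-host:5672/ \
    --database_connection mysql+pymysql://nova:secret@db-host/nova_cell1 \
    --verbose
```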

       New in version 15.0.0: (Ocata)

       Changed in version 18.0.0: (Rocky)

       Added --disabled option.

       Options

       --name <cell_name>
              The name of the cell.

       --database_connection <database_connection>
              The database URL for the cell database.

       --transport-url <transport_url>
              The transport url for the cell message queue.

       --verbose
              Output the UUID of the created cell.

       --disabled
              Create a pre-disabled cell.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ The  cell  mapping  was  successfully │
                               │             │ created.                              │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ The   transport   URL   or   database │
                               │             │ connection was missing or invalid.    │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 2           │ Another  cell  is  already  using the │
                               │             │ provided   transport    URL    and/or │
                               │             │ database connection combination.      │
                               └─────────────┴───────────────────────────────────────┘

   cell_v2 discover_hosts
          nova-manage cell_v2 discover_hosts [--cell_uuid <cell_uuid>] [--verbose]
            [--strict] [--by-service]

       Searches  cells,  or  a  single cell, and maps found hosts. This command will check the database for each
       cell (or a single one if passed in) and map any hosts which are  not  currently  mapped.  If  a  host  is
       already  mapped,  nothing  will  be  done.  You  need to re-run this command each time you add a batch of
       compute hosts to a cell (otherwise the scheduler will never place instances there and the  API  will  not
       list  the  new  hosts).  If --strict is specified, the command will only return 0 if an unmapped host was
       discovered and mapped successfully.  If  --by-service  is  specified,  this  command  will  look  in  the
       appropriate  cell(s)  for  any nova-compute services and ensure there are host mappings for them. This is
       less efficient and is only necessary when using compute drivers that  may  manage  zero  or  more  actual
       compute nodes at any given time (currently only ironic).

       This  command  should  be  run  once  after all compute hosts have been deployed and should not be run in
       parallel. When run in parallel, the commands will collide with each other trying to map the same hosts in
       the database at the same time.
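
       For example, after adding a batch of compute hosts to a specific cell (the cell UUID below is a
       placeholder):

```shell
# Map any new unmapped hosts in one cell only, with detailed output.
nova-manage cell_v2 discover_hosts \
    --cell_uuid 00000000-0000-0000-0000-000000000000 --verbose
```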

       New in version 14.0.0: (Newton)

       Changed in version 16.0.0: (Pike)

       Added --strict option.

       Changed in version 18.0.0: (Rocky)

       Added --by-service option.

       Options

       --cell_uuid <cell_uuid>
              If provided only this cell will be searched for new hosts to map.

       --verbose
              Provide detailed output when discovering hosts.

       --strict
              Considered successful (exit code 0) only when an unmapped host is discovered.  Any  other  outcome
              will be considered a failure (non-zero exit code).

       --by-service
              Discover hosts by service instead of compute node.

       Return codes
                                ┌─────────────┬───────────────────────────────────────┐
                                │ Return code │ Description                           │
                                ├─────────────┼───────────────────────────────────────┤
                                │ 0           │ Hosts were successfully mapped or no  │
                                │             │ hosts needed to be mapped. If         │
                                │             │ --strict is specified, returns 0 only │
                                │             │ if an unmapped host was discovered    │
                                │             │ and mapped.                           │
                                ├─────────────┼───────────────────────────────────────┤
                                │ 1           │ If --strict is specified and no       │
                                │             │ unmapped hosts were found. Also       │
                                │             │ returns 1 if an exception was raised  │
                                │             │ while running.                        │
                                ├─────────────┼───────────────────────────────────────┤
                                │ 2           │ The command was aborted because of a  │
                                │             │ duplicate host mapping found. This    │
                                │             │ means the command collided with       │
                                │             │ another running discover_hosts        │
                                │             │ command or scheduler periodic task    │
                                │             │ and is safe to retry.                 │
                                └─────────────┴───────────────────────────────────────┘
    cell_v2 list_cells
--
PLACEMENT COMMANDS

    placement heal_allocations
--

VOLUME ATTACHMENT COMMANDS

   volume_attachment get_connector
          nova-manage volume_attachment get_connector

       Show the host connector for this compute host.

       When  called with the --json switch this dumps a JSON string containing the connector information for the
       current host, which can be saved to a file and  used  as  input  for  the  nova-manage  volume_attachment
       refresh command.
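
       For example (as described above, --json makes the connector information machine-readable; the file name
       is arbitrary):

```shell
# Run on the compute host currently hosting the instance.
nova-manage volume_attachment get_connector --json > connector.json
```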

       New in version 24.0.0: (Xena)

       Return codes
                                    ┌─────────────┬──────────────────────────────┐
                                    │ Return code │ Description                  │
                                    ├─────────────┼──────────────────────────────┤
                                    │ 0           │ Success                      │
                                    ├─────────────┼──────────────────────────────┤
                                    │ 1           │ An unexpected error occurred │
                                    └─────────────┴──────────────────────────────┘

   volume_attachment show
          nova-manage volume_attachment show [INSTANCE_UUID] [VOLUME_ID]

       Show the details of the volume attachment between VOLUME_ID and INSTANCE_UUID.

       New in version 24.0.0: (Xena)

       Return codes
                                 ┌─────────────┬────────────────────────────────────┐
                                 │ Return code │ Description                        │
                                 ├─────────────┼────────────────────────────────────┤
                                 │ 0           │ Success                            │
                                 ├─────────────┼────────────────────────────────────┤
                                 │ 1           │ An unexpected error occurred       │
                                 ├─────────────┼────────────────────────────────────┤
                                 │ 2           │ Instance not found                 │
                                 ├─────────────┼────────────────────────────────────┤
                                 │ 3           │ Instance is not attached to volume │
                                 └─────────────┴────────────────────────────────────┘

   volume_attachment refresh
          nova-manage volume_attachment refresh [INSTANCE_UUID] [VOLUME_ID] [CONNECTOR_PATH]

       Refresh the connection info associated with a given volume attachment.

       The instance must be attached to the volume, have a vm_state of stopped and not be locked.

       CONNECTOR_PATH should be the path to a JSON-formatted file containing up-to-date connector information
       for the compute host currently hosting the instance, as generated using the nova-manage
       volume_attachment get_connector command.
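
       Putting it together (the UUIDs below are placeholders; connector.json is a file previously produced by
       the get_connector command on the relevant compute host):

```shell
# The instance must be stopped and unlocked before refreshing.
nova-manage volume_attachment refresh \
    11111111-1111-1111-1111-111111111111 \
    22222222-2222-2222-2222-222222222222 \
    connector.json
```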

       New in version 24.0.0: (Xena)

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ Success                               │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ An unexpected error occurred          │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 2           │ Connector path does not exist         │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 3           │ Failed to open connector path         │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 4           │ Instance does not exist               │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 5           │ Instance   state   invalid  (must  be │
                               │             │ stopped and unlocked)                 │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 6           │ Volume  is  not   attached   to   the │
                               │             │ instance                              │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 7           │ Connector host is not correct         │
                               └─────────────┴───────────────────────────────────────┘

LIBVIRT COMMANDS

   libvirt get_machine_type
          nova-manage libvirt get_machine_type [INSTANCE_UUID]

       Fetch and display the recorded machine type of a libvirt instance identified by INSTANCE_UUID.

       New in version 23.0.0: (Wallaby)

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ Successfully completed                │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ An unexpected error occurred          │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 2           │ Unable  to  find instance or instance │
                               │             │ mapping                               │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 3           │ No machine type found for instance    │
                               └─────────────┴───────────────────────────────────────┘

   libvirt update_machine_type
          nova-manage libvirt update_machine_type \
              [INSTANCE_UUID] [MACHINE_TYPE] [--force]

       Set or update the recorded machine type of instance INSTANCE_UUID to machine type MACHINE_TYPE.

       The following criteria must be met when using this command:

       • The instance must have a vm_state of STOPPED, SHELVED or SHELVED_OFFLOADED.

       • The machine type must be supported. The supported list  includes  alias  and  versioned  types  of  pc,
         pc-i440fx, pc-q35, q35, virt or s390-ccw-virtio.

       • The update will not move the instance between underlying machine types.  For example, pc to q35.

       • The  update  will  not move the instance between an alias and versioned machine type or vice versa. For
         example, pc to pc-1.2.3 or pc-1.2.3 to pc.

       A --force flag is provided to skip the above checks but caution should be taken as this could easily lead
       to the underlying ABI of the instance changing when moving between machine types.
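
       A minimal invocation looks like the following (the UUID and versioned machine type are illustrative;
       per the criteria above, --force is required when crossing the alias/versioned boundary or moving between
       underlying types):

```shell
# Pin the recorded machine type of a stopped instance.
nova-manage libvirt update_machine_type \
    11111111-1111-1111-1111-111111111111 pc-q35-5.2
```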

       New in version 23.0.0: (Wallaby)

       Options

       --force
              Skip machine type compatibility checks and force machine type update.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ Update completed successfully         │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ An unexpected error occurred          │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 2           │ Unable to find instance  or  instance │
                               │             │ mapping                               │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 3           │ The instance has an invalid vm_state  │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 4           │ The  proposed  update  of the machine │
                               │             │ type is invalid                       │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 5           │ The   provided   machine   type    is │
                               │             │ unsupported                           │
                               └─────────────┴───────────────────────────────────────┘
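       As a sketch, a wrapper script can branch on these return codes. The helper below is illustrative (it is
       not part of nova-manage), and the commented invocation uses placeholder values:

```shell
# Map "nova-manage libvirt update_machine_type" return codes (from the
# table above) to messages. Illustrative helper, not part of nova-manage.
describe_update_rc() {
    case "$1" in
        0) echo "update completed successfully" ;;
        2) echo "unable to find instance or instance mapping" ;;
        3) echo "invalid vm_state: stop or shelve the instance first" ;;
        4) echo "the proposed machine type update is invalid" ;;
        5) echo "the provided machine type is unsupported" ;;
        *) echo "an unexpected error occurred" ;;
    esac
}

# Typical use (INSTANCE_UUID and the machine type are placeholders):
#   openstack server stop "$INSTANCE_UUID"
#   nova-manage libvirt update_machine_type "$INSTANCE_UUID" pc-q35-6.2
#   describe_update_rc $?
```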

   libvirt list_unset_machine_type
          nova-manage libvirt list_unset_machine_type [--cell_uuid <cell_uuid>]

       List the UUIDs of any instances that do not have hw_machine_type set.

       This  command  is  useful  for  operators  attempting  to  determine  when  it  is  safe  to  change  the
       libvirt.hw_machine_type option within an environment.

       New in version 23.0.0: (Wallaby)

       Options

       --cell_uuid <cell_uuid>
              The UUID of the cell to list instances from.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ Completed successfully, no  instances │
                               │             │ found without hw_machine_type set     │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ An unexpected error occurred          │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 2           │ Unable to find cell mapping           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 3           │ Instances        found        without │
                               │             │ hw_machine_type set                   │
                               └─────────────┴───────────────────────────────────────┘
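       A sketch of the pre-flight check this command enables: the function below only interprets the documented
       return code (rc 0 is the only "safe to proceed" outcome), and the commented invocation uses a placeholder
       cell UUID:

```shell
# Interpret the "list_unset_machine_type" return code: only rc 0 means
# it is safe to change [libvirt]hw_machine_type for that cell.
# Illustrative helper, not part of nova-manage.
unset_check_result() {
    case "$1" in
        0) echo "safe: no instances without hw_machine_type" ;;
        2) echo "error: cell mapping not found" ;;
        3) echo "unsafe: instances found without hw_machine_type" ;;
        *) echo "error: unexpected failure" ;;
    esac
}

# Typical use (CELL_UUID is a placeholder):
#   nova-manage libvirt list_unset_machine_type --cell_uuid "$CELL_UUID"
#   unset_check_result $?
```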

IMAGE PROPERTY COMMANDS

   image_property show
          nova-manage image_property show [INSTANCE_UUID] [IMAGE_PROPERTY]

       Fetch and display the recorded image property IMAGE_PROPERTY of an instance identified by INSTANCE_UUID.

       New in version 25.0.0: (Yoga)

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ Successfully completed                │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ An unexpected error occurred          │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 2           │ Unable to find instance  or  instance │
                               │             │ mapping                               │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 3           │ No image property found for instance  │
                               └─────────────┴───────────────────────────────────────┘

   image_property set
          nova-manage image_property set \
              [INSTANCE_UUID] [--property] [IMAGE_PROPERTY]=[VALUE]

       Set or update the recorded image property IMAGE_PROPERTY of instance INSTANCE_UUID to value VALUE.

       The following criteria must be met when using this command:

       • The instance must have a vm_state of STOPPED, SHELVED or SHELVED_OFFLOADED.

       This command is useful for operators who need to update stored instance image properties that have become
       invalidated, for example by a change of instance machine type.

       New in version 25.0.0: (Yoga)

       Options

       --property
              Image  property  to  set  using  the format name=value. For example: --property hw_disk_bus=virtio
              --property hw_cdrom_bus=sata.

       Return codes
                               ┌─────────────┬───────────────────────────────────────┐
                               │ Return code │ Description                           │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 0           │ Update completed successfully         │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 1           │ An unexpected error occurred          │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 2           │ Unable to find instance  or  instance │
                               │             │ mapping                               │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 3           │ The instance has an invalid vm_state  │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 4           │ The  provided  image property name is │
                               │             │ invalid                               │
                               ├─────────────┼───────────────────────────────────────┤
                               │ 5           │ The provided image property value  is │
                               │             │ invalid                               │
                               └─────────────┴───────────────────────────────────────┘
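       The usual workflow pairs set with show to verify the result. The commands below are illustrative (the
       instance UUID is a placeholder), and the small helper simply mirrors the name=value format that
       --property expects:

```shell
# Check that a --property argument has the name=value shape expected by
# "image_property set". Illustrative helper, not part of nova-manage.
is_property_pair() {
    case "$1" in
        ?*=*) return 0 ;;   # non-empty name, then '=', then a value
        *)    return 1 ;;
    esac
}

# Typical use (INSTANCE_UUID is a placeholder; the instance must be
# stopped or shelved first):
#   nova-manage image_property set "$INSTANCE_UUID" \
#       --property hw_disk_bus=virtio --property hw_cdrom_bus=sata
#   nova-manage image_property show "$INSTANCE_UUID" hw_disk_bus
```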

LIMITS COMMANDS

   limits migrate_to_unified_limits
          nova-manage limits migrate_to_unified_limits [--project-id <project-id>]
          [--region-id <region-id>] [--verbose] [--dry-run]

       Migrate quota limits from the Nova database to unified limits in Keystone.

       This command is useful for operators migrating from legacy quotas to unified limits. Limits are migrated
       by reading them from the Nova database and creating them in Keystone via the Keystone API.

       The  Nova  configuration  file  used  by  nova-manage  must  have  a  [keystone]  section  that  contains
       authentication settings in order for the Keystone API calls to succeed. As an example:

          [keystone]
          region_name = RegionOne
          user_domain_name = Default
          auth_url = http://127.0.0.1/identity
          auth_type = password
          username = admin
          password = <password>
          system_scope = all

       Under the default Keystone policy configuration, access to create, update, and delete operations in the
       unified limits API is restricted to callers with system-scoped authorization tokens. The system_scope =
       all setting indicates the scope for system operations. You will need to ensure that the user configured
       under [keystone] has the necessary role and scope.

       WARNING:
          The limits migrate_to_unified_limits command creates limits only for resources that exist in the
          legacy quota system; any resource that does not have a unified limit in Keystone will be treated as
          having a quota limit of 0.

          For  resource  classes  that are allocated by the placement service and have no default limit set, you
          will need to create default limits manually. The most common example is class:DISK_GB.  All  Nova  API
          requests that need to allocate DISK_GB will fail quota enforcement until a default limit for it is set
          in Keystone.

          See the unified limits documentation about creating limits using the OpenStackClient.
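       For example, a default registered limit for class:DISK_GB can be created with the OpenStackClient. The
       value and region below are illustrative, the exact flags depend on your OpenStackClient version, and the
       call requires system-scoped credentials:

```shell
# Illustrative only: create a default limit for class:DISK_GB so that
# DISK_GB allocations pass quota enforcement under unified limits.
openstack registered limit create \
    --service nova \
    --default-limit 1500 \
    --region RegionOne \
    class:DISK_GB
```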

       New in version 28.0.0: (2023.2 Bobcat)

       Options

       --project-id <project-id>
              The project ID for which to migrate quota limits.

       --region-id <region-id>
              The region ID for which to migrate quota limits.

       --verbose
              Provide verbose output during execution.

       --dry-run
              Show what limits would be created without actually creating them.

       Return codes
                                 ┌─────────────┬───────────────────────────────────┐
                                 │ Return code │ Description                       │
                                 ├─────────────┼───────────────────────────────────┤
                                 │ 0           │ Command completed successfully    │
                                 ├─────────────┼───────────────────────────────────┤
                                 │ 1           │ An unexpected error occurred      │
                                 ├─────────────┼───────────────────────────────────┤
                                 │ 2           │ Failed to connect to the database │
                                 └─────────────┴───────────────────────────────────┘
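       A cautious migration previews the result with --dry-run before writing anything to Keystone. The commands
       below are shown as comments because they require a deployed Nova and the [keystone] configuration
       described above; the project and region values are placeholders:

```shell
# Preview what would be created in Keystone, without creating anything:
#   nova-manage limits migrate_to_unified_limits --dry-run --verbose
#
# Then migrate for real, optionally scoped to one project and region:
#   nova-manage limits migrate_to_unified_limits \
#       --project-id "$PROJECT_ID" --region-id "$REGION_ID" --verbose
```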

SEE ALSO

       nova-policy(1), nova-status(1)

BUGS

       • Nova bugs are managed at Launchpad

AUTHOR

       openstack@lists.openstack.org

COPYRIGHT

       2010-present, OpenStack Foundation

29.2.0                                            Feb 05, 2025                                    NOVA-MANAGE(1)