Provided by: rclone_1.60.1+dfsg-4_amd64

NAME

       Rclone - syncs your files to cloud storage

       • About rclone

       • What can rclone do for you?

       • What features does rclone have?

       • What providers does rclone support?

       • Download

       • Install

       • Donate

   About rclone
       Rclone  is  a command-line program to manage files on cloud storage.  It is a feature-rich alternative to
       cloud vendors’ web storage interfaces.  Over 40 cloud storage products support rclone including S3 object
       stores, business & consumer file storage services, as well as standard transfer protocols.

DESCRIPTION

       Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm,  and
       cat.   Rclone’s familiar syntax includes shell pipeline support, and --dry-run protection.  It is used at
       the command line, in scripts or via its API.

       Users call rclone “The Swiss army knife of cloud storage”, and “Technology indistinguishable from magic”.

       Rclone really looks after your data.  It preserves timestamps and verifies checksums at all times.
       Transfers over limited bandwidth, over intermittent connections, or subject to quota can be
       restarted from the last good file transferred.  You can check the integrity of your files.  Where
       possible, rclone employs server-side transfers to minimise local bandwidth use, and transfers from
       one provider to another without using local disk.

       Virtual backends  wrap  local  and  cloud  file  systems  to  apply  encryption,  compression,  chunking,
       hashing and joining.

       Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, Linux and
       FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA.

       Rclone is mature, open-source software originally inspired by rsync and written in Go.  The
       friendly support community is familiar with varied use cases.  Official Ubuntu, Debian, Fedora,
       Brew and Chocolatey repos include rclone.  For the latest version, downloading from rclone.org is
       recommended.

       Rclone is widely used on Linux, Windows  and  Mac.   Third-party  developers  create  innovative  backup,
       restore, GUI and business process solutions using the rclone command line or API.

       Rclone does the heavy lifting of communicating with cloud storage.

   What can rclone do for you?
       Rclone helps you:

       • Backup (and encrypt) files to cloud storage

       • Restore (and decrypt) files from cloud storage

       • Mirror cloud data to other cloud services or locally

       • Migrate data to the cloud, or between cloud storage vendors

       • Mount multiple, encrypted, cached or diverse cloud storage as a disk

       • Analyse and account for data held on cloud storage using lsf, lsjson, size, ncdu

       • Union file systems together to present multiple local and/or cloud file systems as one

   Features
       • Transfers

         • MD5, SHA1 hashes are checked at all times for file integrity

         • Timestamps are preserved on files

         • Operations can be restarted at any time

         • Transfers can be made entirely over the network, e.g. between two different cloud providers

         • Can use multi-threaded downloads to local disk

       • Copy new or changed files to cloud storage

       • Sync (one way) to make a directory identical

       • Move files to cloud storage deleting the local after verification

       • Check hashes and for missing/extra files

       • Mount your cloud storage as a network disk

       • Serve local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA

       • Experimental Web based GUI

   Supported providers
       (There are many others, built on standard protocols such as WebDAV or S3, that work out of the box.)

       • 1Fichier

       • Akamai Netstorage

       • Alibaba Cloud (Aliyun) Object Storage System (OSS)

       • Amazon Drive

       • Amazon S3

       • Backblaze B2

       • Box

       • Ceph

       • China Mobile Ecloud Elastic Object Storage (EOS)

       • Arvan Cloud Object Storage (AOS)

       • Citrix ShareFile

       • Cloudflare R2

       • DigitalOcean Spaces

       • Digi Storage

       • Dreamhost

       • Dropbox

       • Enterprise File Fabric

       • FTP

       • Google Cloud Storage

       • Google Drive

       • Google Photos

       • HDFS

       • Hetzner Storage Box

       • HiDrive

       • HTTP

       • Internet Archive

       • Jottacloud

       • IBM COS S3

       • IDrive e2

       • IONOS Cloud

       • Koofr

       • Mail.ru Cloud

       • Memset Memstore

       • Mega

       • Memory

       • Microsoft Azure Blob Storage

       • Microsoft OneDrive

       • Minio

       • Nextcloud

       • OVH

       • OpenDrive

       • OpenStack Swift

       • Oracle Cloud Storage Swift

       • Oracle Object Storage

       • ownCloud

       • pCloud

       • premiumize.me

       • put.io

       • QingStor

       • Qiniu Cloud Object Storage (Kodo)

       • Rackspace Cloud Files

       • rsync.net

       • Scaleway

       • Seafile

       • Seagate Lyve Cloud

       • SeaweedFS

       • SFTP

       • Sia

       • SMB / CIFS

       • StackPath

       • Storj

       • SugarSync

       • Tencent Cloud Object Storage (COS)

       • Uptobox

       • Wasabi

       • WebDAV

       • Yandex Disk

       • Zoho WorkDrive

       • The local filesystem

   Virtual providers
       These backends adapt or modify other storage providers:

       • Alias: Rename existing remotes

       • Cache: Cache remotes (DEPRECATED)

       • Chunker: Split large files

       • Combine: Combine multiple remotes into a directory tree

       • Compress: Compress files

       • Crypt: Encrypt files

       • Hasher: Hash files

       • Union: Join multiple remotes to work together

   Links
       • Home page

       • GitHub project page for source and bug tracker

       • Rclone Forum

       • Downloads

Usage

       Rclone is a command line program to manage files on cloud storage.  After download and install,
       continue here to learn how to use it: initial configuration, what the basic syntax looks like, the
       various subcommands, the various options, and more.

   Configure
       First, you’ll need to configure rclone.  As the object storage systems have quite complicated
       authentication, the credentials are kept in a config file.  (See the --config entry for how to find
       the config file and choose its location.)

       The easiest way to make the config is to run rclone with the config subcommand:

              rclone config

       See the following for detailed instructions for

       • 1Fichier

       • Akamai Netstorage

       • Alias

       • Amazon Drive

       • Amazon S3

       • Backblaze B2

       • Box

       • Chunker - transparently splits large files for other remotes

       • Citrix ShareFile

       • Compress

       • Combine

       • Crypt - to encrypt other remotes

       • DigitalOcean Spaces

       • Digi Storage

       • Dropbox

       • Enterprise File Fabric

       • FTP

       • Google Cloud Storage

       • Google Drive

       • Google Photos

       • Hasher - to handle checksums for other remotes

       • HDFS

       • HiDrive

       • HTTP

       • Internet Archive

       • Jottacloud

       • Koofr

       • Mail.ru Cloud

       • Mega

       • Memory

       • Microsoft Azure Blob Storage

       • Microsoft OneDrive

       • OpenStack Swift / Rackspace Cloudfiles / Memset Memstore

       • OpenDrive

       • Oracle Object Storage

       • pCloud

       • premiumize.me

       • put.io

       • QingStor

       • Seafile

       • SFTP

       • Sia

       • SMB

       • Storj

       • SugarSync

       • Union

       • Uptobox

       • WebDAV

       • Yandex Disk

       • Zoho WorkDrive

       • The local filesystem

   Basic syntax
       Rclone syncs a directory tree from one storage system to another.

       Its syntax is like this

               rclone subcommand [options] <parameters> <parameters...>

       Source and destination paths are specified by the name you gave the storage system in the config
       file, then the sub path, e.g.  “drive:myfolder” to look at “myfolder” in Google Drive.

       You can define as many storage paths as you like in the config file.

       Please use the -i / --interactive flag while learning rclone to avoid accidental data loss.

   Subcommands
       rclone uses a system of subcommands.  For example

              rclone ls remote:path # lists a remote
              rclone copy /local/path remote:path # copies /local/path to the remote
              rclone sync -i /local/path remote:path # syncs /local/path to the remote

rclone config

       Enter an interactive configuration session.

   Synopsis
       Enter an interactive configuration session where you can set up new remotes and manage existing
       ones.  You may also set or remove a password to protect your configuration.

              rclone config [flags]

   Options
                -h, --help   help for config

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

       • rclone config create - Create a new remote with name, type and options.

       • rclone config delete - Delete an existing remote.

       • rclone config disconnect - Disconnects user from remote

       • rclone config dump - Dump the config file as JSON.

       • rclone config file - Show path of configuration file in use.

       • rclone config password - Update password in an existing remote.

       • rclone config paths - Show paths used for configuration, cache, temp etc.

       • rclone config providers - List in JSON format all the providers and options.

       • rclone config reconnect - Re-authenticates user with remote.

       • rclone config show - Print (decrypted) config file, or the config for a single remote.

       • rclone config touch - Ensure configuration file exists.

       • rclone config update - Update options in an existing remote.

       • rclone config userinfo - Prints info about logged in user of remote.

rclone copy

       Copy files from source to dest, skipping identical files.

   Synopsis
       Copy the source to  the  destination.   Does  not  transfer  files  that  are  identical  on  source  and
       destination, testing by size and modification time or MD5SUM.  Doesn’t delete files from the destination.
       If you want to also delete files from destination, to make it match source, use the sync command instead.

       Note  that  it is always the contents of the directory that is synced, not the directory itself.  So when
       source:path is a directory, it’s the contents of source:path that are copied, not the directory name  and
       contents.

       To copy single files, use the copyto command instead.

       If dest:path doesn’t exist, it is created and the source:path contents go there.

       For example

              rclone copy source:sourcepath dest:destpath

       Let’s say there are two files in sourcepath

              sourcepath/one.txt
              sourcepath/two.txt

       This copies them to

              destpath/one.txt
              destpath/two.txt

       Not to

              destpath/sourcepath/one.txt
              destpath/sourcepath/two.txt

       If  you  are  familiar with rsync, rclone always works as if you had written a trailing / - meaning “copy
       the contents of this directory”.  This applies to all commands and whether  you  are  talking  about  the
       source or destination.

       See the --no-traverse option for controlling whether rclone lists the destination directory or
       not.  Supplying this option when copying a small number of files into a large destination can speed
       transfers up greatly.

       For example, if you have many files in /path/to/src but only a few of them change every day, you can copy
       all the files which have changed recently very efficiently like this:

              rclone copy --max-age 24h --no-traverse /path/to/src remote:

       Note: Use the -P/--progress flag to view real-time transfer statistics.

       Note: Use the --dry-run or the --interactive/-i flag to test without copying anything.
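
       For example, a cautious first run might combine these flags (the paths here are placeholders):

               rclone copy --dry-run -P /path/to/src remote:backup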

              rclone copy source:path dest:path [flags]

   Options
                    --create-empty-src-dirs   Create empty source dirs on destination after copy
                -h, --help                    help for copy

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone sync

       Make source and dest identical, modifying destination only.

   Synopsis
       Sync  the  source  to  the  destination,  changing the destination only.  Doesn’t transfer files that are
       identical on source and destination, testing by size and modification time  or  MD5SUM.   Destination  is
       updated to match source, including deleting files if necessary (except duplicate objects, see below).  If
       you don’t want to delete files from destination, use the copy command instead.

       Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

              rclone sync -i SOURCE remote:DESTINATION

       Note  that  files  in  the destination won’t be deleted if there were any errors at any point.  Duplicate
       objects (files with the same name, on those providers that support it) are also not yet handled.

       It is always the contents of the directory that is synced, not the directory itself.  So when source:path
       is a directory, it’s the contents of source:path that are copied, not the directory  name  and  contents.
       See extended explanation in the copy command if unsure.

       If dest:path doesn’t exist, it is created and the source:path contents go there.

       It  is  not possible to sync overlapping remotes.  However, you may exclude the destination from the sync
       with a filter rule or by putting an exclude-if-present file inside the destination directory and sync  to
       a destination that is inside the source directory.
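
       For example, a sketch of syncing into a destination nested inside the source (the marker filename
       is arbitrary):

               touch /path/to/src/backup/.ignore
               rclone sync --exclude-if-present .ignore /path/to/src /path/to/src/backup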

       Note: Use the -P/--progress flag to view real-time transfer statistics

       Note:  Use the rclone dedupe command to deal with “Duplicate object/directory found in source/destination
       - ignoring” errors.  See this forum post for more info.

              rclone sync source:path dest:path [flags]

   Options
                    --create-empty-src-dirs   Create empty source dirs on destination after sync
                -h, --help                    help for sync

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone move

       Move files from source to dest.

   Synopsis
       Moves the contents of the source directory to the destination directory.  Rclone will error if the source
       and destination overlap and the remote does not support a server-side directory move operation.

       To move single files, use the moveto command instead.

       If no filters are in use and if possible, this will server-side move source:path into dest:path.
       After this, source:path will no longer exist.

       Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path.
       If  possible  a  server-side  move will be used, otherwise it will copy it (server-side if possible) into
       dest:path then delete the original (if no errors on copy) in source:path.

       If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
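
       For example, to move the contents of a local directory to a remote and remove the emptied source
       directories afterwards (the paths here are placeholders):

               rclone move --delete-empty-src-dirs /path/to/src remote:dest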

       See the --no-traverse option for controlling whether rclone lists the destination directory or
       not.  Supplying this option when moving a small number of files into a large destination can speed
       transfers up greatly.

       Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

       Note: Use the -P/--progress flag to view real-time transfer statistics.

              rclone move source:path dest:path [flags]

   Options
                    --create-empty-src-dirs   Create empty source dirs on destination after move
                    --delete-empty-src-dirs   Delete empty source dirs after move
                -h, --help                    help for move

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone delete

       Remove the files in path.

   Synopsis
       Remove  the  files  in path.  Unlike purge it obeys include/exclude filters so can be used to selectively
       delete files.

       rclone delete only deletes files but leaves the directory structure alone.   If  you  want  to  delete  a
       directory and all of its contents use the purge command.

       If you supply the --rmdirs flag, it will remove all empty directories as well.  Alternatively, use
       the separate command rmdir or rmdirs to delete empty directories only.

       For example, to delete all files bigger than 100 MiB, you may first want to check what would  be  deleted
       (use either):

              rclone --min-size 100M lsl remote:path
              rclone --dry-run --min-size 100M delete remote:path

       Then proceed with the actual delete:

              rclone --min-size 100M delete remote:path

       That  reads  “delete  everything  with a minimum size of 100 MiB”, hence delete all files bigger than 100
       MiB.

       Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

              rclone delete remote:path [flags]

   Options
                -h, --help     help for delete
                    --rmdirs   rmdirs removes empty directories but leaves root intact

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone purge

       Remove the path and all of its contents.

   Synopsis
       Remove the path and all of its contents.   Note  that  this  does  not  obey  include/exclude  filters  -
       everything  will  be removed.  Use the delete command if you want to selectively delete files.  To delete
       empty directories only, use command rmdir or rmdirs.

       Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

              rclone purge remote:path [flags]

   Options
                -h, --help   help for purge

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone mkdir

       Make the path if it doesn’t already exist.

              rclone mkdir remote:path [flags]

   Options
                -h, --help   help for mkdir

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone rmdir

       Remove the empty directory at path.

   Synopsis
       This removes the empty directory given by path.  It will not remove the path if it has any objects
       in it, not even empty subdirectories.  Use command rmdirs (or delete with option --rmdirs) to do
       that.

       To delete a path and any objects in it, use purge command.

              rclone rmdir remote:path [flags]

   Options
                -h, --help   help for rmdir

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone check

       Checks the files in the source and destination match.

   Synopsis
       Checks  the  files  in  the source and destination match.  It compares sizes and hashes (MD5 or SHA1) and
       logs a report of files that don’t match.  It doesn’t alter the source or destination.

       For the crypt remote there is a dedicated command, cryptcheck, that is able to check the checksums
       of the encrypted files.

       If you supply the --size-only flag, it will only compare the sizes, not the hashes.  Use this for a
       quick check.

       If you supply the --download flag, it will download the data from both remotes  and  check  them  against
       each other on the fly.  This can be useful for remotes that don’t support hashes or if you really want to
       check all the data.

       If you supply the --checkfile HASH flag with a valid hash name, the source:path must point to a text file
       in the SUM format.

       If  you  supply  the  --one-way  flag, it will only check that files in the source match the files in the
       destination, not the other way around.  This means that extra files in the destination that  are  not  in
       the source will not be detected.

       The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to
       the  file  name  (or  stdout  if it is -) supplied.  What they write is described in the help below.  For
       example --differ will write all paths which are present on both the source and destination but different.

       The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then  a
       space and then the path to tell you what happened to it.  These are reminiscent of diff files.

       • = path means path was found in source and destination and was identical

       • - path means path was missing on the source, so only in the destination

       • + path means path was missing on the destination, so only in the source

       • * path means path was present in source and destination but different.

       • ! path means there was an error reading or hashing the source or dest.
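
       For example, to write a combined report to stdout (the remotes here are placeholders):

               rclone check source:path dest:path --combined -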

               rclone check source:path dest:path [flags]

   Options
                -C, --checkfile string        Treat source:path as a SUM file with hashes of given type
                    --combined string         Make a combined report of changes to this file
                    --differ string           Report all non-matching files to this file
                    --download                Check by downloading rather than with hash
                    --error string            Report all files with errors (hashing or reading) to this file
                -h, --help                    help for check
                    --match string            Report all matching files to this file
                    --missing-on-dst string   Report all files missing from the destination to this file
                    --missing-on-src string   Report all files missing from the source to this file
                    --one-way                 Check one way only, source files must exist on remote

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone ls

       List the objects in the path with size and path.

   Synopsis
       Lists  the  objects  in the source path to standard output in a human readable format with size and path.
       Recurses by default.

       Eg

              $ rclone ls swift:bucket
                  60295 bevajer5jef
                  90613 canole
                  94467 diwogej7
                  37600 fubuwic

       Any of the filtering options can be applied to this command.

       There are several related list commands

       • ls to list size and path of objects only

       • lsl to list modification time, size and path of objects only

       • lsd to list directories only

       • lsf to list objects and directories in easy to parse format

       • lsjson to list objects and directories in JSON format

       ls, lsl and lsd are designed to be human-readable.  lsf is designed to be human- and
       machine-readable.  lsjson is designed to be machine-readable.

       Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

       The other list commands lsd, lsf and lsjson do not recurse by default - use -R to make them recurse.

       Listing  a  nonexistent  directory  will  produce  an  error  except  for  remotes which can’t have empty
       directories (e.g. s3, swift, or gcs - the bucket-based remotes).

              rclone ls remote:path [flags]

   Options
                -h, --help   help for ls

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone lsd

       List all directories/containers/buckets in the path.

   Synopsis
       Lists the directories in the source path to standard output.  Does not recurse by default.   Use  the  -R
       flag to recurse.

       This command lists the total size of the directory (if known, -1 if not), the modification time (if
       known, the current time if not), the number of objects in the directory (if known, -1 if not) and
       the name of the directory, e.g.

              $ rclone lsd swift:
                    494000 2018-04-26 08:43:20     10000 10000files
                        65 2018-04-26 08:43:20         1 1File

       Or

              $ rclone lsd drive:test
                        -1 2016-10-17 17:41:53        -1 1000files
                        -1 2017-01-03 14:40:54        -1 2500files
                        -1 2017-07-08 14:39:28        -1 4000files

       If you just want the directory names use rclone lsf --dirs-only.

       Any of the filtering options can be applied to this command.

       There are several related list commands

       • ls to list size and path of objects only

       • lsl to list modification time, size and path of objects only

       • lsd to list directories only

       • lsf to list objects and directories in easy to parse format

       • lsjson to list objects and directories in JSON format

       ls, lsl and lsd are designed to be human-readable.  lsf is designed to be human- and
       machine-readable.  lsjson is designed to be machine-readable.

       Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

       The other list commands lsd, lsf and lsjson do not recurse by default - use -R to make them recurse.

       Listing a nonexistent directory will  produce  an  error  except  for  remotes  which  can’t  have  empty
       directories (e.g. s3, swift, or gcs - the bucket-based remotes).

              rclone lsd remote:path [flags]

   Options
                -h, --help        help for lsd
                -R, --recursive   Recurse into the listing

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone lsl

       List the objects in path with modification time, size and path.

   Synopsis
       Lists  the  objects  in  the  source path to standard output in a human readable format with modification
       time, size and path.  Recurses by default.

       Eg

              $ rclone lsl swift:bucket
                  60295 2016-06-25 18:55:41.062626927 bevajer5jef
                  90613 2016-06-25 18:55:43.302607074 canole
                  94467 2016-06-25 18:55:43.046609333 diwogej7
                  37600 2016-06-25 18:55:40.814629136 fubuwic

       Any of the filtering options can be applied to this command.

       There are several related list commands

       • ls to list size and path of objects only

       • lsl to list modification time, size and path of objects only

       • lsd to list directories only

       • lsf to list objects and directories in easy to parse format

       • lsjson to list objects and directories in JSON format

       ls, lsl and lsd are designed to be human-readable.  lsf is designed to be human- and
       machine-readable.  lsjson is designed to be machine-readable.

       Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

       The other list commands lsd, lsf and lsjson do not recurse by default - use -R to make them recurse.

       Listing  a  nonexistent  directory  will  produce  an  error  except  for  remotes which can’t have empty
       directories (e.g. s3, swift, or gcs - the bucket-based remotes).

              rclone lsl remote:path [flags]

   Options
                -h, --help   help for lsl

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone md5sum

       Produces an md5sum file for all the objects in the path.

   Synopsis
       Produces an md5sum file for all the objects in the path.  This is in the  same  format  as  the  standard
       md5sum tool produces.

       By default, the hash is requested from the remote.  If MD5 is not supported by the remote, no hash
       will be returned.  With the --download flag, the file will be downloaded from the remote and hashed
       locally, enabling MD5 for any remote.

       For  other  algorithms,  see  the  hashsum command.   Running  rclone md5sum remote:path is equivalent to
       running rclone hashsum MD5 remote:path.

       This command can also hash data received on standard input (stdin), by not passing a remote:path,  or  by
       passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally,
       as a relative path).
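
       For example, to hash a string from standard input (the input is arbitrary; both forms are
       equivalent):

               echo "hello, world" | rclone md5sum
               echo "hello, world" | rclone md5sum -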

              rclone md5sum remote:path [flags]

   Options
                    --base64               Output base64 encoded hashsum
                -C, --checkfile string     Validate hashes against a given SUM file instead of printing them
                    --download             Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
                -h, --help                 help for md5sum
                    --output-file string   Output hashsums to a file rather than the terminal

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone sha1sum

       Produces an sha1sum file for all the objects in the path.

   Synopsis
       Produces  an  sha1sum  file  for all the objects in the path.  This is in the same format as the standard
       sha1sum tool produces.

       By default, the hash is requested from the remote.  If SHA-1 is not supported by the remote, no
       hash will be returned.  With the --download flag, the file will be downloaded from the remote and
       hashed locally, enabling SHA-1 for any remote.

       For  other  algorithms,  see  the  hashsum command.   Running rclone sha1sum remote:path is equivalent to
       running rclone hashsum SHA1 remote:path.

       This command can also hash data received on standard input (stdin), by not passing a remote:path,  or  by
       passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally,
       as a relative path).

              rclone sha1sum remote:path [flags]

   Options
                    --base64               Output base64 encoded hashsum
                -C, --checkfile string     Validate hashes against a given SUM file instead of printing them
                    --download             Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
                -h, --help                 help for sha1sum
                    --output-file string   Output hashsums to a file rather than the terminal

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone size

       Prints the total size and number of objects in remote:path.

   Synopsis
       Counts objects in the path and calculates the total size.  Prints the result to standard output.

       The output shows values both in human-readable format and as raw numbers (the global option
       --human-readable is not considered).  Use option --json to format output as JSON instead.

       Recurses by default, use --max-depth 1 to stop the recursion.

       Some backends do not always provide file sizes, see for example Google Photos and  Google Drive.   Rclone
       will  then show a notice in the log indicating how many such files were encountered, and count them in as
       empty files in the output of the size command.
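
       For example (illustrative output; the actual figures depend on the remote):

               $ rclone size remote:path
               Total objects: 400 (400)
               Total size: 19.783 MiB (20745297 Byte)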

              rclone size remote:path [flags]

   Options
                -h, --help   help for size
                    --json   Format output as JSON

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone version

       Show the version number.

   Synopsis
       Show the rclone version number, the go version, the build target OS and architecture, the runtime OS  and
       kernel version and bitness, build tags and the type of executable (static or dynamic).

       For example:

              $ rclone version
              rclone v1.55.0
              - os/version: ubuntu 18.04 (64 bit)
              - os/kernel: 4.15.0-136-generic (x86_64)
              - os/type: linux
              - os/arch: amd64
              - go/version: go1.16
              - go/linking: static
              - go/tags: none

       Note: before rclone version 1.55 the os/type and os/arch lines were merged, and the “go/version” line was
       tagged as “go version”.

       If you supply the --check flag, then it will do an online check to compare your version with the
       latest release and the latest beta.

              $ rclone version --check
              yours:  1.42.0.6
              latest: 1.42          (released 2018-06-16)
              beta:   1.42.0.5      (released 2018-06-17)

       Or

              $ rclone version --check
              yours:  1.41
              latest: 1.42          (released 2018-06-16)
                upgrade: https://downloads.rclone.org/v1.42
              beta:   1.42.0.5      (released 2018-06-17)
                upgrade: https://beta.rclone.org/v1.42-005-g56e1e820

              rclone version [flags]

   Options
                    --check   Check for new version
                -h, --help    help for version

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone cleanup

       Clean up the remote if possible.

   Synopsis
       Clean up the remote if possible.  Empty the trash or delete old file  versions.   Not  supported  by  all
       remotes.

              rclone cleanup remote:path [flags]

   Options
                -h, --help   help for cleanup

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone dedupe

       Interactively find duplicate filenames and delete/rename them.

   Synopsis
       By  default  dedupe  interactively  finds  files with duplicate names and offers to delete all but one or
       rename them to be different.  This is known as deduping by name.

       Deduping by name is only useful with a small group of backends (e.g. Google Drive,  Opendrive)  that  can
       have  duplicate file names.  It can be run on wrapping backends (e.g. crypt) if they wrap a backend which
       supports duplicate file names.

       However if --by-hash is passed in then dedupe will find files with duplicate hashes  instead  which  will
       work  on  any  backend  which  supports at least one hash.  This can be used to find files with duplicate
       content.  This is known as deduping by hash.
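
       For example, a non-destructive run that only lists duplicates found by hash (the remote is a
       placeholder; --dedupe-mode is described below):

               rclone dedupe --by-hash --dedupe-mode list remote:path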

       If deduping by name, first rclone will merge directories with the same name.  It will do this iteratively
       until all the identically named directories have been merged.

       Next, if deduping by name, for every group of duplicate file names / hashes, it will delete all  but  one
       identical  file  it  finds  without  confirmation.   This means that for most duplicated files the dedupe
       command will not be interactive.

       dedupe considers files to be identical if they have the same file path and the same hash.  If the backend
       does not support hashes (e.g. crypt wrapping Google Drive) then they will never be found to be identical.
       If you use the --size-only flag then files will be considered identical if they have the same  size  (any
       hash will be ignored).  This can be useful on crypt backends which do not support hashes.

       Next  rclone  will resolve the remaining duplicates.  Exactly which action is taken depends on the dedupe
       mode.  By default, rclone will interactively query the user for each one.

       Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

       Here is an example run.

       Before - with duplicates

              $ rclone lsl drive:dupes
                6048320 2016-03-05 16:23:16.798000000 one.txt
                6048320 2016-03-05 16:23:11.775000000 one.txt
                 564374 2016-03-05 16:23:06.731000000 one.txt
                6048320 2016-03-05 16:18:26.092000000 one.txt
                6048320 2016-03-05 16:22:46.185000000 two.txt
                1744073 2016-03-05 16:22:38.104000000 two.txt
                 564374 2016-03-05 16:22:52.118000000 two.txt

       Now the dedupe session

              $ rclone dedupe drive:dupes
              2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
              one.txt: Found 4 files with duplicate names
              one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36")
              one.txt: 2 duplicates remain
                1:      6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
                2:       564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
              s) Skip and do nothing
              k) Keep just one (choose which in next step)
              r) Rename all to be different (by changing file.jpg to file-1.jpg)
              s/k/r> k
              Enter the number of the file to keep> 1
              one.txt: Deleted 1 extra copies
              two.txt: Found 3 files with duplicate names
              two.txt: 3 duplicates remain
                1:       564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
                2:      6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
                3:      1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802
              s) Skip and do nothing
              k) Keep just one (choose which in next step)
              r) Rename all to be different (by changing file.jpg to file-1.jpg)
              s/k/r> r
              two-1.txt: renamed from: two.txt
              two-2.txt: renamed from: two.txt
              two-3.txt: renamed from: two.txt

       The result being

              $ rclone lsl drive:dupes
                6048320 2016-03-05 16:23:16.798000000 one.txt
                 564374 2016-03-05 16:22:52.118000000 two-1.txt
                6048320 2016-03-05 16:22:46.185000000 two-2.txt
                1744073 2016-03-05 16:22:38.104000000 two-3.txt

       Dedupe can be run non-interactively using the --dedupe-mode flag or by using an extra parameter
       with the same value

       • --dedupe-mode interactive - interactive as above.

       • --dedupe-mode skip - removes identical files then skips anything left.

       • --dedupe-mode first - removes identical files then keeps the first one.

       • --dedupe-mode newest - removes identical files then keeps the newest one.

       • --dedupe-mode oldest - removes identical files then keeps the oldest one.

       • --dedupe-mode largest - removes identical files then keeps the largest one.

       • --dedupe-mode smallest - removes identical files then keeps the smallest one.

       • --dedupe-mode rename - removes identical files then renames the rest to be different.

       • --dedupe-mode list - lists duplicate dirs and files only and changes nothing.

       For example, to rename all the identically named photos in your Google Photos directory, do

              rclone dedupe --dedupe-mode rename "drive:Google Photos"

       Or

              rclone dedupe rename "drive:Google Photos"

              rclone dedupe [mode] remote:path [flags]

   Options
                    --by-hash              Find identical hashes rather than names
                    --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename (default "interactive")
                -h, --help                 help for dedupe

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone about

       Get quota information from the remote.

   Synopsis
       rclone about prints quota information about a remote to standard output.  The output typically
       shows used, free, quota and trash contents.

       Typical output from rclone about remote: is:

              Total:   17 GiB
              Used:    7.444 GiB
              Free:    1.315 GiB
              Trashed: 100.000 MiB
              Other:   8.241 GiB

       Where the fields are:

       • Total: Total size available.

       • Used: Total size used.

       • Free: Total space available to this user.

       • Trashed: Total space used by trash.

       • Other: Total amount in other storage (e.g. Gmail, Google Photos).

       • Objects: Total number of objects in the storage.

       All sizes are in number of bytes.

       Applying a --full flag to the command prints the bytes in full, e.g.

              Total:   18253611008
              Used:    7993453766
              Free:    1411001220
              Trashed: 104857602
              Other:   8849156022

       A --json flag generates conveniently machine-readable output, e.g.

              {
                  "total": 18253611008,
                  "used": 7993453766,
                  "trashed": 104857602,
                  "other": 8849156022,
                  "free": 1411001220
              }

       Not all backends print all fields.  Information is not included if it  is  not  provided  by  a  backend.
       Where the value is unlimited it is omitted.

       Some backends do not support the rclone about command at all; see the complete list in the
       documentation.

              rclone about remote: [flags]

   Options
                    --full   Full numbers instead of human-readable
                -h, --help   help for about
                    --json   Format output as JSON

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone authorize

       Remote authorization.

   Synopsis
       Remote  authorization.  Used to authorize a remote or headless rclone from a machine with a browser - use
       as instructed by rclone config.

       Use the --auth-no-open-browser flag to prevent rclone from automatically opening the auth link in
       the default browser.
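
       For example, when configuring a headless machine, rclone config will ask you to run something like
       the following on a machine with a browser (the backend name here is illustrative):

               rclone authorize "drive"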

              rclone authorize [flags]

   Options
                    --auth-no-open-browser   Do not automatically open auth link in default browser
                -h, --help                   help for authorize

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone backend

       Run a backend-specific command.

   Synopsis
       This runs a backend-specific command.  The commands themselves (except for  “help”  and  “features”)  are
       defined by the backends and you should see the backend docs for definitions.

       You can discover what commands a backend implements by using

              rclone backend help remote:
              rclone backend help <backendname>

       You can also discover information about the backend using the following (see operations/fsinfo in
       the remote control docs for more info):

              rclone backend features remote:

       Pass options to the backend command with -o.  This should be key=value or key, e.g.:

               rclone backend stats remote:path -o format=json -o long

       Pass arguments to the backend by placing them on the end of the line

              rclone backend cleanup remote:path file1 file2 file3

       Note: to run these commands on a running backend, see backend/command in the rc docs.

              rclone backend <command> remote:path [opts] <args> [flags]

   Options
                -h, --help                 help for backend
                    --json                 Always output in JSON format
                -o, --option stringArray   Option in the form name=value or name

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone bisync

       Perform bidirectional synchronization between two paths.

   Synopsis
       Perform bidirectional synchronization between two paths.

       Bisync provides a bidirectional cloud sync solution in rclone.  It retains the Path1 and Path2
       filesystem listings from the prior run.  On each successive run it will:

       • list files on Path1 and Path2, and check for changes on each side.  Changes include New, Newer,
         Older, and Deleted files.

       • propagate changes on Path1 to Path2, and vice-versa.

       See full bisync description for details.
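
       For example, the first run on a new pair of paths typically uses --resync; a cautious sketch with
       placeholder remotes:

               rclone bisync remote1:path1 remote2:path2 --resync --dry-run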

              rclone bisync remote1:path1 remote2:path2 [flags]

   Options
                    --check-access            Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
                    --check-filename string   Filename for --check-access (default: RCLONE_TEST)
                    --check-sync string       Controls comparison of final listings: true|false|only (default: true) (default "true")
                    --filters-file string     Read filtering patterns from a file
                    --force                   Bypass --max-delete safety check and run the sync. Consider using with --verbose
                -h, --help                    help for bisync
                    --localtime               Use local time in listings (default: UTC)
                    --no-cleanup              Retain working files (useful for troubleshooting and testing).
                    --remove-empty-dirs       Remove empty directories at the final cleanup step.
                -1, --resync                  Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.
                    --workdir string          Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone cat

       Concatenates any files and sends them to stdout.

   Synopsis
       rclone cat sends any files to standard output.

       You can use it like this to output a single file

              rclone cat remote:path/to/file

       Or like this to output any file in dir or its subdirectories.

              rclone cat remote:path/to/dir

       Or like this to output any .txt files in dir or its subdirectories.

              rclone --include "*.txt" cat remote:path/to/dir

       Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to
       print a section in the middle.  Note that if offset is negative it will count from the end,  so  --offset
       -1 --count 1 is equivalent to --tail 1.
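
       For example (the path is a placeholder):

               rclone cat --head 128 remote:path/to/file
               rclone cat --offset -1 --count 1 remote:path/to/file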

              rclone cat remote:path [flags]

   Options
                    --count int    Only print N characters (default -1)
                    --discard      Discard the output instead of printing
                    --head int     Only print the first N characters
                -h, --help         help for cat
                    --offset int   Start printing at offset N (or from end if -ve)
                    --tail int     Only print the last N characters

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone checksum

       Checks the files in the source against a SUM file.

   Synopsis
       Checks  that hashsums of source files match the SUM file.  It compares hashes (MD5, SHA1, etc) and logs a
       report of files which don’t match.  It doesn’t alter the file system.

       If you supply the --download flag, it will download the data from remote and calculate the contents  hash
       on  the fly.  This can be useful for remotes that don’t support hashes or if you really want to check all
       the data.

       Note that hash values in the SUM file are treated as case insensitive.
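
       For example, to verify files on a remote against a local SUM file (the paths are placeholders; sha1
       is one of the supported hash names):

               rclone checksum sha1 SHA1SUMS remote:path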

       If you supply the --one-way flag, it will only check that files in the source  match  the  files  in  the
       destination,  not  the  other way around.  This means that extra files in the destination that are not in
       the source will not be detected.

       The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to
       the file name (or stdout if it is -) supplied.  What they write is described  in  the  help  below.   For
       example --differ will write all paths which are present on both the source and destination but different.

       The  --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a
       space and then the path to tell you what happened to it.  These are reminiscent of diff files.

       • = path means path was found in source and destination and was identical

       • - path means path was missing on the source, so only in the destination

       • + path means path was missing on the destination, so only in the source

       • * path means path was present in source and destination but different.

       • ! path means there was an error reading or hashing the source or dest.

               rclone checksum <hash> sumfile src:path [flags]

   Options
                    --combined string         Make a combined report of changes to this file
                    --differ string           Report all non-matching files to this file
                    --download                Check by hashing the contents
                    --error string            Report all files with errors (hashing or reading) to this file
                -h, --help                    help for checksum
                    --match string            Report all matching files to this file
                    --missing-on-dst string   Report all files missing from the destination to this file
                    --missing-on-src string   Report all files missing from the source to this file
                    --one-way                 Check one way only, source files must exist on remote

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone completion

       Generate the autocompletion script for the specified shell

   Synopsis
       Generate the autocompletion script for rclone for the specified shell.  See each sub-command’s  help  for
       details on how to use the generated script.

   Options
                -h, --help   help for completion

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

       • rclone completion bash - Generate the autocompletion script for bash

       • rclone completion fish - Generate the autocompletion script for fish

       • rclone completion powershell - Generate the autocompletion script for powershell

       • rclone completion zsh - Generate the autocompletion script for zsh

rclone completion bash

       Generate the autocompletion script for bash

   Synopsis
       Generate the autocompletion script for the bash shell.

       This script depends on the 'bash-completion' package.  If it is not installed already, you can
       install it via your OS’s package manager.

       To load completions in your current shell session:

              source <(rclone completion bash)

       To load completions for every new session, execute once:

   Linux:
              rclone completion bash > /etc/bash_completion.d/rclone

   macOS:
              rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone

       You will need to start a new shell for this setup to take effect.

              rclone completion bash

   Options
                -h, --help              help for bash
                    --no-descriptions   disable completion descriptions

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone completion - Generate the autocompletion script for the specified shell

rclone completion fish

       Generate the autocompletion script for fish

   Synopsis
       Generate the autocompletion script for the fish shell.

       To load completions in your current shell session:

              rclone completion fish | source

       To load completions for every new session, execute once:

              rclone completion fish > ~/.config/fish/completions/rclone.fish

       You will need to start a new shell for this setup to take effect.

              rclone completion fish [flags]

   Options
                -h, --help              help for fish
                    --no-descriptions   disable completion descriptions

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone completion - Generate the autocompletion script for the specified shell

rclone completion powershell

       Generate the autocompletion script for powershell

   Synopsis
       Generate the autocompletion script for powershell.

       To load completions in your current shell session:

              rclone completion powershell | Out-String | Invoke-Expression

       To  load  completions  for  every  new  session,  add  the output of the above command to your powershell
       profile.

              rclone completion powershell [flags]

   Options
                -h, --help              help for powershell
                    --no-descriptions   disable completion descriptions

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone completion - Generate the autocompletion script for the specified shell

rclone completion zsh

       Generate the autocompletion script for zsh

   Synopsis
       Generate the autocompletion script for the zsh shell.

       If shell completion is not already enabled in your environment you will  need  to  enable  it.   You  can
       execute the following once:

              echo "autoload -U compinit; compinit" >> ~/.zshrc

       To load completions in your current shell session:

              source <(rclone completion zsh); compdef _rclone rclone

       To load completions for every new session, execute once:

   Linux:
              rclone completion zsh > "${fpath[1]}/_rclone"

   macOS:
              rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone

       You will need to start a new shell for this setup to take effect.

              rclone completion zsh [flags]

   Options
                -h, --help              help for zsh
                    --no-descriptions   disable completion descriptions

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone completion - Generate the autocompletion script for the specified shell

rclone config create

       Create a new remote with name, type and options.

   Synopsis
       Create a new remote with the given name, type and options.  The options should be passed in pairs of key
       value or as key=value.

       For example, to make a swift remote of name myremote using auto config you would do:

              rclone config create myremote swift env_auth true
              rclone config create myremote swift env_auth=true

       For example, if you wanted to configure a Google Drive remote using remote authorization, you would do
       this:

              rclone config create mydrive drive config_is_local=false

       Note that if the config process would normally ask a question, the default is taken (unless
       --non-interactive is used).  Each time that happens, rclone will print (or log at DEBUG level) a message
       saying how to affect the value taken.

       If any of the parameters passed are password fields, then rclone will automatically obscure them, if
       they aren’t already obscured, before putting them in the config file.

       NB  If  the  password  parameter  is  22 characters or longer and consists only of base64 characters then
       rclone can get confused about whether the  password  is  already  obscured  or  not  and  put  unobscured
       passwords  into the config file.  If you want to be 100% certain that the passwords get obscured then use
       the --obscure flag, or if you are 100% certain you  are  already  passing  obscured  passwords  then  use
       --no-obscure.  You can also set obscured passwords using the rclone config password command.

       The  flag  --non-interactive  is for use by applications that wish to configure rclone themselves, rather
       than using rclone’s text based configuration questions.  If this flag is set, and rclone needs to ask the
       user a question, a JSON blob will be returned with the question in it.

       This will look something like (some irrelevant detail removed):

              {
                  "State": "*oauth-islocal,teamdrive,,",
                  "Option": {
                      "Name": "config_is_local",
                      "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
                      "Default": true,
                      "Examples": [
                          {
                              "Value": "true",
                              "Help": "Yes"
                          },
                          {
                              "Value": "false",
                              "Help": "No"
                          }
                      ],
                      "Required": false,
                      "IsPassword": false,
                      "Type": "bool",
                      "Exclusive": true,
                  },
                  "Error": "",
              }

       The format of Option is the same as returned by rclone config providers.  The question should be asked to
       the user and returned to rclone as the --result option along with the --state parameter.

       The keys of Option are used as follows:

       • Name - name of variable - show to user

       • Help - help text.  Hard wrapped at 80 chars.  Any URLs should be rendered as clickable links.

       • Default - default value - return this if the user just wants the default.

       • Examples - the user should be able to choose one of these

       • Required - the value should be non-empty

       • IsPassword - the value is a password and should be edited as such

       • Type - type of value, e.g. bool, string, int and others

       • Exclusive - if set, no free-form entry is allowed; the user must choose one of the Examples

       • The keys Provider, ShortOpt, Hide, NoPrefix and Advanced are irrelevant here and can be ignored

       If Error is set then it should be shown to the user at the same time as the question.

              rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"

       Note that when using --continue all passwords should be passed in the clear (not obscured).  Any  default
       config values should be passed in with each invocation of --continue.

       At the end of the non-interactive process, rclone will return a result with State as an empty string.

       If --all is passed then rclone will ask all the config questions, not just the post-config questions.
       Any parameters are used as defaults for questions as usual.

       Note that bin/config.py in the rclone source implements this protocol as a readable demonstration.
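
       As a minimal sketch of this protocol (the remote name and type are illustrative; the state and result
       values come from the JSON that rclone returns):

              # First call: rclone answers with a JSON question blob instead of prompting
              rclone config create mydrive drive --non-interactive

              # Answer each question, repeating until the returned State is ""
              rclone config create mydrive drive --continue \
                  --state "*oauth-islocal,teamdrive,," --result "true"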

              rclone config create name type [key value]* [flags]

   Options
                    --all               Ask the full set of config questions
                    --continue          Continue the configuration process with an answer
                -h, --help              help for create
                    --no-obscure        Force any passwords not to be obscured
                    --non-interactive   Don't interact with user and return questions
                    --obscure           Force any passwords to be obscured
                    --result string     Result - use with --continue
                    --state string      State - use with --continue

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config delete

       Delete an existing remote.

              rclone config delete name [flags]

   Options
                -h, --help   help for delete

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config disconnect

       Disconnects user from remote

   Synopsis
       This disconnects the remote: passed in to the cloud storage system.

       This normally means revoking the oauth token.

       To reconnect use “rclone config reconnect”.

              rclone config disconnect remote: [flags]

   Options
                -h, --help   help for disconnect

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config dump

       Dump the config file as JSON.
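
       For example, to extract the settings of a single remote with the jq tool (assuming jq is installed and
       a remote named myremote exists):

              rclone config dump | jq '.myremote'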

              rclone config dump [flags]

   Options
                -h, --help   help for dump

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config edit

       Enter an interactive configuration session.

   Synopsis

       Enter an interactive configuration session where you can set up new remotes and manage existing ones.
       You may also set or remove a password to protect your configuration.

              rclone config edit [flags]

   Options

                -h, --help   help for edit

       See the global flags page for global options not listed here.

   SEE ALSO

       • rclone config - Enter an interactive configuration session.

rclone config file

       Show path of configuration file in use.

              rclone config file [flags]

   Options
                -h, --help   help for file

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config password

       Update password in an existing remote.

   Synopsis
       Update an existing remote’s password.  The password should be passed in pairs of key password or as
       key=password.  The password should be passed in the clear (unobscured).

       For example, to set password of a remote of name myremote you would do:

              rclone config password myremote fieldname mypassword
              rclone config password myremote fieldname=mypassword

       This  command  is  obsolete now that “config update” and “config create” both support obscuring passwords
       directly.
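
       For example, the equivalent of the command above using config update (the field name is illustrative
       and depends on the backend):

              rclone config update myremote fieldname=mypassword --obscure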

              rclone config password name [key value]+ [flags]

   Options
                -h, --help   help for password

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config paths

       Show paths used for configuration, cache, temp etc.

              rclone config paths [flags]

   Options
                -h, --help   help for paths

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config providers

       List in JSON format all the providers and options.

              rclone config providers [flags]

   Options
                -h, --help   help for providers

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config reconnect

       Re-authenticates user with remote.

   Synopsis
       This reconnects remote: passed in to the cloud storage system.

       To disconnect the remote use “rclone config disconnect”.

       This normally means going through the interactive oauth flow again.

              rclone config reconnect remote: [flags]

   Options
                -h, --help   help for reconnect

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config show

       Print (decrypted) config file, or the config for a single remote.

              rclone config show [<remote>] [flags]

   Options
                -h, --help   help for show

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config touch

       Ensure configuration file exists.

              rclone config touch [flags]

   Options
                -h, --help   help for touch

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config update

       Update options in an existing remote.

   Synopsis
       Update an existing remote’s options.  The options should be passed in pairs of key value or as key=value.

       For example, to update the env_auth field of a remote of name myremote you would do:

              rclone config update myremote env_auth true
              rclone config update myremote env_auth=true

       If the remote uses OAuth the token will be updated; if you don’t require this, add an extra parameter
       thus:

              rclone config update myremote env_auth=true config_refresh_token=false

       Note that if the config process would normally ask a question, the default is taken (unless
       --non-interactive is used).  Each time that happens, rclone will print (or log at DEBUG level) a message
       saying how to affect the value taken.

       If any of the parameters passed are password fields, then rclone will automatically obscure them, if
       they aren’t already obscured, before putting them in the config file.

       NB If the password parameter is 22 characters or longer and  consists  only  of  base64  characters  then
       rclone  can  get  confused  about  whether  the  password  is  already obscured or not and put unobscured
       passwords into the config file.  If you want to be 100% certain that the passwords get obscured then  use
       the  --obscure  flag,  or  if  you  are  100% certain you are already passing obscured passwords then use
       --no-obscure.  You can also set obscured passwords using the rclone config password command.

       The flag --non-interactive is for use by applications that wish to configure  rclone  themselves,  rather
       than using rclone’s text based configuration questions.  If this flag is set, and rclone needs to ask the
       user a question, a JSON blob will be returned with the question in it.

       This will look something like (some irrelevant detail removed):

              {
                  "State": "*oauth-islocal,teamdrive,,",
                  "Option": {
                      "Name": "config_is_local",
                      "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
                      "Default": true,
                      "Examples": [
                          {
                              "Value": "true",
                              "Help": "Yes"
                          },
                          {
                              "Value": "false",
                              "Help": "No"
                          }
                      ],
                      "Required": false,
                      "IsPassword": false,
                      "Type": "bool",
                      "Exclusive": true,
                  },
                  "Error": "",
              }

       The format of Option is the same as returned by rclone config providers.  The question should be asked to
       the user and returned to rclone as the --result option along with the --state parameter.

       The keys of Option are used as follows:

       • Name - name of variable - show to user

       • Help - help text.  Hard wrapped at 80 chars.  Any URLs should be rendered as clickable links.

       • Default - default value - return this if the user just wants the default.

       • Examples - the user should be able to choose one of these

       • Required - the value should be non-empty

       • IsPassword - the value is a password and should be edited as such

       • Type - type of value, e.g. bool, string, int and others

       • Exclusive - if set, no free-form entry is allowed; the user must choose one of the Examples

       • The keys Provider, ShortOpt, Hide, NoPrefix and Advanced are irrelevant here and can be ignored

       If Error is set then it should be shown to the user at the same time as the question.

              rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"

       Note  that when using --continue all passwords should be passed in the clear (not obscured).  Any default
       config values should be passed in with each invocation of --continue.

       At the end of the non-interactive process, rclone will return a result with State as an empty string.

       If --all is passed then rclone will ask all the config questions, not just the post-config questions.
       Any parameters are used as defaults for questions as usual.

       Note that bin/config.py in the rclone source implements this protocol as a readable demonstration.

              rclone config update name [key value]+ [flags]

   Options
                    --all               Ask the full set of config questions
                    --continue          Continue the configuration process with an answer
                -h, --help              help for update
                    --no-obscure        Force any passwords not to be obscured
                    --non-interactive   Don't interact with user and return questions
                    --obscure           Force any passwords to be obscured
                    --result string     Result - use with --continue
                    --state string      State - use with --continue

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone config userinfo

       Prints info about logged in user of remote.

   Synopsis
       This prints the details of the person logged in to the cloud storage system.

              rclone config userinfo remote: [flags]

   Options
                -h, --help   help for userinfo
                    --json   Format output as JSON

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone config - Enter an interactive configuration session.

rclone copyto

       Copy files from source to dest, skipping identical files.

   Synopsis
       If source:path is a file or directory then it copies it to a file or directory named dest:path.

       This can be used to upload single files under a name other than their current one.  If the source is a
       directory then it acts exactly like the copy command.

       So

              rclone copyto src dst

       where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows.

       This will:

              if src is file
                  copy it to dst, overwriting an existing file if it exists
              if src is directory
                  copy it to dst, overwriting existing files if they exist
                  see copy command for full details

       This doesn’t transfer files that are identical on src and dst, testing by size and modification  time  or
       MD5SUM.  It doesn’t delete files from the destination.
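
       For example, to upload a single local file under a different name (paths are illustrative):

              rclone copyto /home/user/report.txt remote:backup/report-latest.txt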

       Note: Use the -P/--progress flag to view real-time transfer statistics

              rclone copyto source:path dest:path [flags]

   Options
                -h, --help   help for copyto

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone copyurl

       Copy url content to dest.

   Synopsis
       Download a URL’s content and copy it to the destination without saving it in temporary storage.

       Setting --auto-filename will attempt to automatically determine the filename from the URL (after any
       redirections) and use it in the destination path.  With --header-filename in addition, if a specific
       filename is set in HTTP headers, it will be used instead of the name from the URL.  With --print-filename
       in addition, the resulting file name will be printed.

       Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name.

       Setting --stdout, or using - as the output file name, will cause the output to be written to standard output.
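
       For example, to download a file keeping the name from the URL, without overwriting an existing copy
       (URL and destination are illustrative):

              rclone copyurl -a --no-clobber https://example.com/file.zip remote:downloads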

              rclone copyurl https://example.com dest:path [flags]

   Options
                -a, --auto-filename     Get the file name from the URL and use it for destination file path
                    --header-filename   Get the file name from the Content-Disposition header
                -h, --help              help for copyurl
                    --no-clobber        Prevent overwriting file with same name
                -p, --print-filename    Print the resulting name from --auto-filename
                    --stdout            Write the output to stdout rather than a file

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone cryptcheck

       Cryptcheck checks the integrity of a crypted remote.

   Synopsis
       rclone  cryptcheck  checks  a  remote against a crypted remote.  This is the equivalent of running rclone
       check, but able to check the checksums of the crypted remote.

       For it to work the underlying remote of the cryptedremote must support some kind of checksum.

       It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on
       the remote:.  It then checks the checksum of the  underlying  file  on  the  cryptedremote:  against  the
       checksum of the file it has just encrypted.

       Use it like this

              rclone cryptcheck /path/to/files encryptedremote:path

       You can use it like this also, but that will involve downloading all the files in remote:path.

              rclone cryptcheck remote:path encryptedremote:path

       After it has run it will log the status of the encryptedremote:.

       If  you  supply  the  --one-way  flag, it will only check that files in the source match the files in the
       destination, not the other way around.  This means that extra files in the destination that  are  not  in
       the source will not be detected.

       The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to
       the  file  name  (or  stdout  if it is -) supplied.  What they write is described in the help below.  For
       example --differ will write all paths which are present on both the source and destination but different.

       The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then  a
       space and then the path to tell you what happened to it.  These are reminiscent of diff files.

       • = path means path was found in source and destination and was identical

       • - path means path was missing on the source, so only in the destination

       • + path means path was missing on the destination, so only in the source

       • * path means path was present in source and destination but different.

       • ! path means there was an error reading or hashing the source or dest.
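
       For example, to write a combined report of all paths to a file (the file name is illustrative):

              rclone cryptcheck --combined report.txt /path/to/files encryptedremote:path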

               rclone cryptcheck remote:path cryptedremote:path [flags]

   Options
                    --combined string         Make a combined report of changes to this file
                    --differ string           Report all non-matching files to this file
                    --error string            Report all files with errors (hashing or reading) to this file
                -h, --help                    help for cryptcheck
                    --match string            Report all matching files to this file
                    --missing-on-dst string   Report all files missing from the destination to this file
                    --missing-on-src string   Report all files missing from the source to this file
                    --one-way                 Check one way only, source files must exist on remote

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone cryptdecode

       Cryptdecode returns unencrypted file names.

   Synopsis
       rclone  cryptdecode  returns  unencrypted  file  names when provided with a list of encrypted file names.
       List limit is 10 items.

       If you supply the --reverse flag, it will return encrypted file names.

       Use it like this

              rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2

              rclone cryptdecode --reverse encryptedremote: filename1 filename2

       Another way to accomplish this is by using the rclone  backend  encode  (or  decode)  command.   See  the
       documentation on the crypt overlay for more info.
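
       For example, using the crypt backend command instead (remote and file names illustrative):

              rclone backend decode encryptedremote: encryptedfilename1 encryptedfilename2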

              rclone cryptdecode encryptedremote: encryptedfilename [flags]

   Options
                -h, --help      help for cryptdecode
                    --reverse   Reverse cryptdecode, encrypts filenames

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone deletefile

       Remove a single file from remote.

   Synopsis
       Remove  a  single file from remote.  Unlike delete it cannot be used to remove a directory and it doesn’t
       obey include/exclude filters - if the specified file exists, it will always be removed.
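
       For example (the path is illustrative):

              rclone deletefile remote:path/to/file.txt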

              rclone deletefile remote:path [flags]

   Options
                -h, --help   help for deletefile

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone genautocomplete

       Output completion script for a given shell.

   Synopsis
       Generates a shell completion script for rclone.  Run with --help to list the supported shells.

   Options
                -h, --help   help for genautocomplete

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

       • rclone genautocomplete bash - Output bash completion script for rclone.

       • rclone genautocomplete fish - Output fish completion script for rclone.

       • rclone genautocomplete zsh - Output zsh completion script for rclone.

rclone genautocomplete bash

       Output bash completion script for rclone.

   Synopsis
       Generates a bash shell autocompletion script for rclone.

       This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo  or  as
       root, e.g.

              sudo rclone genautocomplete bash

       Log out and log in again to use the autocompletion scripts, or source them directly

              . /etc/bash_completion

       If you supply a command line argument the script will be written there.

       If output_file is “-”, then the output will be written to stdout.
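
       For example, to write the script to a user-writable location and source it (path illustrative):

              rclone genautocomplete bash ~/rclone_completion.bash
              . ~/rclone_completion.bash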

              rclone genautocomplete bash [output_file] [flags]

   Options
                -h, --help   help for bash

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone genautocomplete - Output completion script for a given shell.

rclone genautocomplete fish

       Output fish completion script for rclone.

   Synopsis
       Generates a fish autocompletion script for rclone.

       This  writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or
       as root, e.g.

              sudo rclone genautocomplete fish

       Log out and log in again to use the autocompletion scripts, or source them directly

              . /etc/fish/completions/rclone.fish

       If you supply a command line argument the script will be written there.

       If output_file is “-”, then the output will be written to stdout.

              rclone genautocomplete fish [output_file] [flags]

   Options
                -h, --help   help for fish

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone genautocomplete - Output completion script for a given shell.

rclone genautocomplete zsh

       Output zsh completion script for rclone.

   Synopsis
       Generates a zsh autocompletion script for rclone.

       This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run  with
       sudo or as root, e.g.

              sudo rclone genautocomplete zsh

       Log out and log in again to use the autocompletion scripts, or source them directly

              autoload -U compinit && compinit

       If you supply a command line argument the script will be written there.

       If output_file is “-”, then the output will be written to stdout.

              rclone genautocomplete zsh [output_file] [flags]

   Options
                -h, --help   help for zsh

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone genautocomplete - Output completion script for a given shell.

rclone gendocs

       Output markdown docs for rclone to the directory supplied.

   Synopsis
       This  produces  markdown  docs  for the rclone commands to the directory supplied.  These are in a format
       suitable for hugo to render into the rclone.org website.
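
       For example (the output directory is illustrative):

              rclone gendocs /tmp/rclone-docs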

              rclone gendocs output_directory [flags]

   Options
                -h, --help   help for gendocs

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone hashsum

       Produces a hashsum file for all the objects in the path.

   Synopsis
       Produces a hash file for all the objects in the path using the hash named.  The output  is  in  the  same
       format as the standard md5sum/sha1sum tool.

       By default, the hash is requested from the remote.  If the hash is not supported by the remote, no hash
       will be returned.  With the --download flag, the file will be downloaded from the remote and hashed
       locally, enabling any hash for any remote.

       For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum and sha1sum.

       This command can also hash data received on standard input (stdin), by not passing a remote:path,  or  by
       passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally,
       as a relative path).
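
       For example, hashing data from standard input (a sketch; the output shown is the MD5 of "hello\n"):

              $ echo "hello" | rclone hashsum MD5 -
              b1946ac92492d2347c6235b4d2611184  -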

       Run without a hash to see the list of all supported hashes, e.g.

              $ rclone hashsum
              Supported hashes are:
                * md5
                * sha1
                * whirlpool
                * crc32
                * sha256
                * dropbox
                * hidrive
                * mailru
                * quickxor

       Then

              $ rclone hashsum MD5 remote:path

       Note that hash names are case insensitive and values are output in lower case.
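
       To verify files against a previously generated SUM file with --checkfile (the file name is
       illustrative):

              rclone hashsum MD5 --output-file md5sums.txt remote:path
              rclone hashsum MD5 -C md5sums.txt remote:path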

              rclone hashsum <hash> remote:path [flags]

   Options
                    --base64               Output base64 encoded hashsum
                -C, --checkfile string     Validate hashes against a given SUM file instead of printing them
                    --download             Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
                -h, --help                 help for hashsum
                    --output-file string   Output hashsums to a file rather than the terminal

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone link

       Generate public link to file/folder.

   Synopsis
       rclone link will create, retrieve or remove a public link to the given file or folder.

              rclone link remote:path/to/file
              rclone link remote:path/to/folder/
              rclone link --unlink remote:path/to/folder/
              rclone link --expire 1d remote:path/to/file

       If you supply the --expire flag, it will set the expiration time, otherwise it will use the default
       (100 years).  Note that not all backends support the --expire flag - if the backend doesn’t support it
       then the link returned won’t expire.

       Use the --unlink flag to remove existing public links to the file or folder.  Note that not all
       backends support the --unlink flag - those that don’t will just ignore it.

       If successful, the last line of the output will contain the link.  Exact capabilities depend on the
       remote, but the link will always by default be created with the least constraints, e.g. no expiry, no
       password protection, accessible without account.

              rclone link remote:path [flags]

   Options
                    --expire Duration   The amount of time that the link will be valid (default off)
                -h, --help              help for link
                    --unlink            Remove existing public link to file/folder

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone listremotes

       List all the remotes in the config file.

   Synopsis
       rclone listremotes lists all the available remotes from the config file.

       When used with the --long flag it lists the types too.
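
       For example (remote names and types are illustrative):

              $ rclone listremotes --long
              gdrive:              drive
              s3backup:            s3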

              rclone listremotes [flags]

   Options
                -h, --help   help for listremotes
                    --long   Show the type as well as names

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone lsf

       List directories and objects in remote:path formatted for parsing.

   Synopsis
       List the contents of the source path (directories and objects) to standard output in a form which is easy
       to  parse  by  scripts.   By  default this will just be the names of the objects and directories, one per
       line.  The directories will have a / suffix.

       Eg

              $ rclone lsf swift:bucket
              bevajer5jef
              canole
              diwogej7
              ferejej3gux/
              fubuwic

       Use the --format option to control what gets listed.  By default this is just the path, but you  can  use
       these parameters to control the output:

              p - path
              s - size
              t - modification time
              h - hash
              i - ID of object
              o - Original ID of underlying object
              m - MimeType of object if known
              e - encrypted name
              T - tier of storage if known, e.g. "Hot" or "Cool"
              M - Metadata of object in JSON blob format, eg {"key":"value"}

       So  if  you  wanted the path, size and modification time, you would use --format "pst", or maybe --format
       "tsp" to put the path last.

       Eg

              $ rclone lsf  --format "tsp" swift:bucket
              2016-06-25 18:55:41;60295;bevajer5jef
              2016-06-25 18:55:43;90613;canole
              2016-06-25 18:55:43;94467;diwogej7
              2018-04-26 08:50:45;0;ferejej3gux/
              2016-06-25 18:55:40;37600;fubuwic

       If you specify “h” in the format you will get the MD5 hash by default, use  the  --hash  flag  to  change
       which  hash  you  want.   Note  that this can be returned as an empty string if it isn’t available on the
       object (and for directories), “ERROR” if there was an error reading it from the object and  “UNSUPPORTED”
       if that object does not support that hash type.

       For example, to emulate the md5sum command you can use

              rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .

       Eg

              $ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket
              7908e352297f0f530b84a756f188baa3  bevajer5jef
              cd65ac234e6fea5925974a51cdd865cc  canole
              03b5341b4f234b9d984d03ad076bae91  diwogej7
              8fd37c3810dd660778137ac3a66cc06d  fubuwic
              99713e14a4c4ff553acaf1930fad985b  gixacuh7ku

       (Though “rclone md5sum .” is an easier way of typing this.)

       By default the separator is “;”, but this can be changed with the --separator flag.  Note that
       separators aren’t escaped in the path so putting it last is a good strategy.

       Eg

              $ rclone lsf  --separator "," --format "tshp" swift:bucket
              2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
              2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
              2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
              2018-04-26 08:52:53,0,,ferejej3gux/
              2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic

       You can output in CSV standard format.  This will quote fields in " characters if they contain a ,

       Eg

              $ rclone lsf --csv --files-only --format ps remote:path
              test.log,22355
              test.sh,449
              "this file contains a comma, in the file name.txt",6

       Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the
       --files-from-raw flag.

       For example, to find all the files modified within one day and copy those only  (without  traversing  the
       whole directory structure):

              rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
              rclone copy --files-from-raw new_files /path/to/local remote:path

       Any of the filtering options can be applied to this command.

       There are several related list commands

       • ls to list size and path of objects only

       • lsl to list modification time, size and path of objects only

       • lsd to list directories only

       • lsf to list objects and directories in easy to parse format

       • lsjson to list objects and directories in JSON format

       ls, lsl, lsd are designed to be human-readable.  lsf is designed to be human and machine-readable.
       lsjson is designed to be machine-readable.

       Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

       The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.

       Listing a nonexistent directory will  produce  an  error  except  for  remotes  which  can’t  have  empty
       directories (e.g. s3, swift, or gcs - the bucket-based remotes).

              rclone lsf remote:path [flags]

   Options
                    --absolute           Put a leading / in front of path names
                    --csv                Output in CSV format
                -d, --dir-slash          Append a slash to directory names (default true)
                    --dirs-only          Only list directories
                    --files-only         Only list files
                -F, --format string      Output format - see  help for details (default "p")
                    --hash h             Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
                -h, --help               help for lsf
                -R, --recursive          Recurse into the listing
                -s, --separator string   Separator for the items in the format (default ";")

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone lsjson

       List directories and objects in the path in JSON format.

   Synopsis
       List directories and objects in the path in JSON format.

       The output is an array of Items, where each Item looks like this

              {
                "Hashes" : {
                   "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
                   "MD5" : "b1946ac92492d2347c6235b4d2611184",
                   "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
                },
                "ID": "y2djkhiujf83u33",
                "OrigID": "UYOJVTUW00Q1RzTDA",
                "IsBucket" : false,
                "IsDir" : false,
                "MimeType" : "application/octet-stream",
                "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
                "Name" : "file.txt",
                "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
                "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
                "Path" : "full/path/goes/here/file.txt",
                "Size" : 6,
                "Tier" : "hot",
              }

       If --hash is not specified the Hashes property won’t be emitted.  The types of hash can be specified with
       the --hash-type parameter (which may be repeated).  If --hash-type is set then it implies --hash.

       If  --no-modtime  is  specified  then  ModTime  will be blank.  This can speed things up on remotes where
       reading the ModTime takes an extra request (e.g. s3, swift).

       If --no-mimetype is specified then MimeType will be blank.  This can speed things  up  on  remotes  where
       reading the MimeType takes an extra request (e.g. s3, swift).

       If --encrypted is not specified, the Encrypted property won’t be emitted.

       If --dirs-only is not specified, files in addition to directories will be returned.

       If --files-only is not specified, directories in addition to the files will be returned.

       If --metadata is set then an additional Metadata key will be returned.  This will have metadata in rclone
       standard format as a JSON object.

       If --stat is set then a single JSON blob will be returned about the item pointed to.  This will return
       an error if the item isn’t found.  However, on bucket-based backends (like s3, gcs, b2, azureblob etc),
       if the item isn’t found it will return an empty directory, as it isn’t possible to tell empty
       directories from missing directories there.

       The Path field will only show folders below the remote path being listed.  If “remote:path” contains  the
       file    “subfolder/file.txt”,    the    Path   for   “file.txt”   will   be   “subfolder/file.txt”,   not
       “remote:path/subfolder/file.txt”.  When used without --recursive the Path will  always  be  the  same  as
       Name.

       If  the  directory  is a bucket in a bucket-based backend, then “IsBucket” will be set to true.  This key
       won’t be present unless it is “true”.

       The time is in RFC3339 format with up to nanosecond precision.  The number of decimal digits in the
       seconds will depend on the precision to which the remote stores times, so if times are accurate to the
       nearest millisecond (e.g. Google Drive) then 3 digits will always be shown
       (“2017-05-31T16:15:57.034+01:00”) whereas if the times are accurate to the nearest second (Dropbox, Box,
       WebDav, etc.)  no digits will be shown (“2017-05-31T16:15:57+01:00”).

       The whole output can be processed as a JSON blob, or alternatively it can be processed line by line, as
       each item is written on a separate line.
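
       For example, to extract just the paths with the jq tool (assuming jq is installed):

              rclone lsjson remote:path | jq -r '.[].Path'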

       Any of the filtering options can be applied to this command.

       There are several related list commands

       • ls to list size and path of objects only

       • lsl to list modification time, size and path of objects only

       • lsd to list directories only

       • lsf to list objects and directories in easy to parse format

       • lsjson to list objects and directories in JSON format

       ls, lsl, lsd are designed to be human-readable.  lsf is designed to be human and machine-readable.
       lsjson is designed to be machine-readable.

       Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

       The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.

       Listing a nonexistent directory will  produce  an  error  except  for  remotes  which  can’t  have  empty
       directories (e.g. s3, swift, or gcs - the bucket-based remotes).

              rclone lsjson remote:path [flags]

   Options
                    --dirs-only               Show only directories in the listing
                    --encrypted               Show the encrypted names
                    --files-only              Show only files in the listing
                    --hash                    Include hashes in the output (may take longer)
                    --hash-type stringArray   Show only this hash type (may be repeated)
                -h, --help                    help for lsjson
                    --no-mimetype             Don't read the mime type (can speed things up)
                    --no-modtime              Don't read the modification time (can speed things up)
                    --original                Show the ID of the underlying Object
                -R, --recursive               Recurse into the listing
                    --stat                    Just return the info for the pointed to file

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone mount

       Mount the remote as a file system on a mountpoint.

   Synopsis
       rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone’s cloud storage systems as a
       file system with FUSE.

       First set up your remote using rclone config.  Check it works with rclone ls etc.

       On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode.  Mount runs
       in foreground mode by default.  Use the --daemon flag to force background mode.  On Windows you can run
       mount in foreground mode only; the flag is ignored.

       In background mode rclone acts as a generic Unix mount program: the main program starts, spawns a
       background rclone process to set up and maintain the mount, waits until success or timeout, and exits
       with an appropriate code (killing the child process if it fails).

       On  Linux/macOS/FreeBSD  start  the  mount  like  this,  where  /path/to/local/mount is an empty existing
       directory:

              rclone mount remote:path/to/files /path/to/local/mount

       On Windows you can start a mount in different ways.  See below for details.  If foreground mount is  used
       interactively from a console window, rclone will serve the mount and occupy the console so another window
       should be used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C.

       The  following  examples  will  mount to an automatically assigned drive, to specific drive letter X:, to
       path C:\path\parent\mount (where parent directory or drive must exist, and mount must not exist,  and  is
       not  supported  when  mounting  as  a  network  drive),  and the last example will mount as network share
       \\cloud\remote and map it to an automatically assigned drive:

              rclone mount remote:path/to/files *
              rclone mount remote:path/to/files X:
              rclone mount remote:path/to/files C:\path\parent\mount
              rclone mount remote:path/to/files \\cloud\remote

       When the program ends while in foreground mode, either via  Ctrl+C  or  receiving  a  SIGINT  or  SIGTERM
       signal, the mount should be automatically stopped.

       When running in background mode the user will have to stop the mount manually:

              # Linux
              fusermount -u /path/to/local/mount
              # OS X
              umount /path/to/local/mount

       The  umount  operation  can  fail, for example when the mountpoint is busy.  When that happens, it is the
       user’s responsibility to stop the mount manually.

       The size of the mounted file system will be set according to information retrieved from the  remote,  the
       same  as  returned  by the rclone about command.  Remotes with unlimited storage may report the used size
       only, then an additional 1 PiB of free space is assumed.   If  the  remote  does  not  support the  about
       feature at all, then 1 PiB is set as both the total and the free size.

   Installing on Windows
       To run rclone mount on Windows, you will need to download and install WinFsp.

       WinFsp is an open-source Windows File System Proxy which makes it easy to write user space file systems
       for Windows.  It provides a FUSE emulation layer which rclone uses in combination with cgofuse.  Both of
       these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount
       for Windows.

   Mounting modes on windows
       Unlike other operating systems, Microsoft Windows provides a different filesystem type  for  network  and
       fixed  drives.   It  optimises  access  on  the assumption fixed disk drives are fast and reliable, while
       network  drives  have  relatively  high  latency  and  less  reliability.   Some  settings  can  also  be
       differentiated between the two types, for example that Windows Explorer should just display icons and not
       create preview thumbnails for image and video files on network drives.

       In  most  cases, rclone will mount the remote as a normal, fixed disk drive by default.  However, you can
       also choose to mount it as a remote network drive, often described as a network share.  If you  mount  an
       rclone  remote  using  the default, fixed drive mode and experience unexpected program errors, freezes or
       other issues, consider mounting as a network drive instead.

       When mounting as a fixed disk drive you can either mount  to  an  unused  drive  letter,  or  to  a  path
       representing  a  nonexistent  subdirectory  of  an existing parent directory or drive.  Using the special
       value * will tell rclone to automatically assign the next available drive letter, starting  with  Z:  and
       moving backward.  Examples:

              rclone mount remote:path/to/files *
              rclone mount remote:path/to/files X:
              rclone mount remote:path/to/files C:\path\parent\mount

       Option  --volname can be used to set a custom volume name for the mounted file system.  The default is to
       use the remote name and path.

       To mount as network drive, you can add option --network-mode to your mount command.  Mounting to a
       directory path is not supported in this mode; it is a limitation Windows imposes on junctions, so the
       remote must always be mounted to a drive letter.

              rclone mount remote:path/to/files X: --network-mode

       A volume name specified with --volname will be used to create the network share  path.   A  complete  UNC
       path,  such  as \\cloud\remote, optionally with path \\cloud\remote\madeup\path, will be used as is.  Any
       other string will be used as the share part, after a default prefix \\server\.   If  no  volume  name  is
       specified  then  \\server\share  will be used.  You must make sure the volume name is unique when you are
       mounting more than one drive, or else the mount command will fail.  The share name will be treated as
       the volume label for the mapped drive, shown in Windows Explorer etc, while the complete \\server\share
       will be reported as the remote UNC path by net use etc, just like a normal network drive mapping.

       If you specify a full network share UNC path with --volname, this will implicitly set the  --network-mode
       option, so the following two examples have same result:

              rclone mount remote:path/to/files X: --network-mode
              rclone mount remote:path/to/files X: --volname \\server\share

       You may also specify the network share UNC path as the mountpoint itself.  Then rclone will automatically
       assign  a drive letter, same as with * and use that as mountpoint, and instead use the UNC path specified
       as the volume name, as if it were specified with the --volname option.  This will also implicitly set the
       --network-mode option.  This means the following two examples have same result:

              rclone mount remote:path/to/files \\cloud\remote
              rclone mount remote:path/to/files * --volname \\cloud\remote

       There is yet another way to enable network mode, and to set the share path, and that is to pass the
       “native” libfuse/WinFsp option directly: --fuse-flag --VolumePrefix=\server\share.  Note that in this
       case the path must have just a single backslash prefix.

       Note: In previous versions of rclone this was the only supported method.

       Read more about drive mapping

       See also Limitations section below.

   Windows filesystem permissions
       The FUSE emulation layer on Windows must convert between the POSIX-based permission model used  in  FUSE,
       and the permission model used in Windows, based on access-control lists (ACL).

       The  mounted  filesystem  will  normally get three entries in its access-control list (ACL), representing
       permissions for the POSIX permission scopes: Owner, group and others.  By default, the  owner  and  group
       will  be taken from the current user, and the built-in group “Everyone” will be used to represent others.
       The user/group can be customized with FUSE options “UserName” and “GroupName”,  e.g. -o  UserName=user123
       -o GroupName="Authenticated Users".  The permissions on each entry will be set according to options
       --dir-perms and --file-perms, which take a value in traditional numeric notation
       (https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation).

       The default permissions correspond to --file-perms 0666 --dir-perms 0777, i.e. read and write
       permissions to everyone.  This means you will not be able to start any programs from the  mount.   To  be
       able  to  do  that you must add execute permissions, e.g. --file-perms 0777 --dir-perms 0777 to add it to
       everyone.  If the program needs to write files, chances are you will have to enable VFS File  Caching  as
       well (see also limitations).

       Note  that  the  mapping of permissions is not always trivial, and the result you see in Windows Explorer
       may not be exactly like you expected.  For example, when setting a value that includes write access, this
       will be mapped to individual permissions “write attributes”, “write data”  and  “append  data”,  but  not
       “write  extended  attributes”.   Windows  will  then  show  this as basic permission “Special” instead of
       “Write”, because “Write” includes the “write extended attributes” permission.

       If you set POSIX permissions for only allowing access to the owner, using --file-perms  0600  --dir-perms
       0700, the user group and the built-in “Everyone” group will still be given some special permissions, such
       as “read attributes” and “read permissions”, in Windows.  This is done for compatibility reasons, e.g. to
       allow  users  without  additional permissions to be able to read basic metadata about files like in UNIX.
       One case that may arise is that other programs (incorrectly) interpret this as the file being accessible
       by everyone.  For example, an SSH client may warn about “unprotected private key file”.

       WinFsp 2021 (version 1.9) introduces a new FUSE option “FileSecurity”, that allows the complete
       specification of file security descriptors using SDDL
       (https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).  With this
       you can work around issues such as the mentioned “unprotected private key file” by specifying
       -o FileSecurity="D:P(A;;FA;;;OW)", for file all access (FA) to the owner (OW).

   Windows caveats
       Drives  created as Administrator are not visible to other accounts, not even an account that was elevated
       to Administrator with the User Account Control (UAC) feature.  A result of this is that if you mount to a
       drive letter from a Command Prompt run as Administrator, and then try  to  access  the  same  drive  from
       Windows Explorer (which does not run as Administrator), you will not be able to see the mounted drive.

       If  you  don’t  need  to  access  the drive from applications running with administrative privileges, the
       easiest way around this is to always create the mount from a non-elevated command prompt.

       To make mapped drives available to the user account that created them regardless if elevated or not,
       there is a special Windows setting called linked connections that can be enabled
       (https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry).

       It  is  also  possible  to make a drive mount available to everyone on the system, by running the process
       creating it as the built-in SYSTEM account.  There are several ways  to  do  this:  One  is  to  use  the
       command-line  utility PsExec, from Microsoft’s Sysinternals suite, which has option -s to start processes
       as the SYSTEM account.  Another alternative is to run the mount command from a Windows Scheduled Task, or
       a Windows Service, configured to run as the SYSTEM account.  A third alternative is to use the
       WinFsp.Launcher infrastructure.  Note that when running rclone as another user, it will not use the
       configuration file from your profile unless you tell it to with the --config option.  Read more in the
       install documentation.

       Note  that  mapping  to  a  directory  path,  instead  of  a  drive letter, does not suffer from the same
       limitations.

   Limitations
       Without the use of --vfs-cache-mode this can only  write  files  sequentially,  it  can  only  seek  when
       reading.   This  means  that  many  applications  won’t  work with their files on an rclone mount without
       --vfs-cache-mode writes or --vfs-cache-mode full.  See the VFS File Caching section for more info.

       The bucket-based remotes (e.g. Swift, S3, Google Cloud Storage, B2) do not support the concept of empty
       directories, so empty directories will have a tendency to disappear once they fall out of the directory
       cache.

       When rclone mount is invoked on Unix with the --daemon flag, the main rclone program will wait for the
       background mount to become ready or until the timeout specified by the --daemon-wait flag.  On Linux it
       can check mount status using ProcFS so the flag in fact sets the maximum time to wait, while the real
       wait can be less.  On macOS / BSD the time to wait is constant and the check is performed only at the
       end.  You should therefore set a reasonable wait time on macOS.
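
       For example, to mount in the background and allow up to 30 seconds for the mount to become ready (the
       value is illustrative):

              rclone mount remote:path /mnt/data --daemon --daemon-wait 30s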

       Only supported on Linux, FreeBSD, OS X and Windows at the moment.

   rclone mount vs rclone sync/copy
       File systems expect things to be 100% reliable, whereas cloud storage systems are a long  way  from  100%
       reliable.  The rclone sync/copy commands cope with this with lots of retries.  However rclone mount can’t
       use retries in the same way without making local copies of the uploads.  Look at the VFS File Caching for
       solutions to make mount more reliable.

   Attribute caching
       You  can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification
       time, etc.)  for directory entries.

       The default is 1s which caches files just long enough to avoid too many  callbacks  to  rclone  from  the
       kernel.

       In  theory  0s  should  be  the correct value for filesystems which can change outside the control of the
       kernel.  However this causes quite  a  few  problems  such  as  rclone using too much memory,  rclone not
       serving files to samba and excessive time listing directories.

       The  kernel can cache the info about a file for the time given by --attr-timeout.  You may see corruption
       if the remote file changes length during this window.  It will show up as either a truncated  file  or  a
       file  with  garbage  on  the  end.  With --attr-timeout 1s this is very unlikely but not impossible.  The
       higher you set --attr-timeout the more likely it is.  The default setting of “1s” is the  lowest  setting
       which mitigates the problems above.

       If  you  set it higher (10s or 1m say) then the kernel will call back to rclone less often making it more
       efficient, however there is more chance of the corruption issue above.

       If files don’t change on the remote outside of  the  control  of  rclone  then  there  is  no  chance  of
       corruption.

       This is the same as setting the attr_timeout option in mount.fuse.
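
       For example, to trade a larger corruption window for fewer kernel callbacks (the 10s value is just the
       illustration used above):

              rclone mount remote:path /mnt/data --attr-timeout 10s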

   Filters
       Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

   systemd
       When  running  rclone  mount  as  a systemd service, it is possible to use Type=notify.  In this case the
       service will enter the started state after the mountpoint has been successfully set up.  Units having the
       rclone mount service specified as a requirement will see all files and folders immediately in this mode.

       Note that systemd runs mount units without any environment variables including PATH or HOME.  This  means
       that  tilde  (~)  expansion  will  not work and you should provide --config and --cache-dir explicitly as
       absolute paths via rclone arguments.  Since mounting requires the fusermount program, rclone will use the
       fallback PATH of /bin:/usr/bin in this scenario.  Please ensure that fusermount is present on this PATH.
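
       A minimal service unit using Type=notify might look like the following sketch (the unit name, remote and
       paths are assumptions to adapt, not defaults):

              # /etc/systemd/system/rclone-mnt-data.service  (illustrative)
              [Unit]
              Description=rclone mount of remote: at /mnt/data
              After=network-online.target
              [Service]
              Type=notify
              ExecStart=/usr/bin/rclone mount remote: /mnt/data --config /etc/rclone.conf --cache-dir /var/cache/rclone
              ExecStop=/bin/fusermount -u /mnt/data
              [Install]
              WantedBy=multi-user.target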

   Rclone as Unix mount helper
       The core Unix program /bin/mount normally takes the -t FSTYPE argument then runs  the  /sbin/mount.FSTYPE
       helper  program  passing it mount options as -o key=val,... or --opt=....  Automount (classic or systemd)
       behaves in a similar way.

       rclone by default expects GNU-style flags --key val.  To run it as a mount helper you should symlink the
       rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone
       /sbin/mount.rclone.  rclone will detect it and translate command-line arguments appropriately.

       Now you can run classic mounts like this:

              mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem

       or create systemd mount units:

              # /etc/systemd/system/mnt-data.mount
              [Unit]
              After=network-online.target
              [Mount]
              Type=rclone
              What=sftp1:subdir
              Where=/mnt/data
              Options=rw,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone

       optionally accompanied by systemd automount unit

              # /etc/systemd/system/mnt-data.automount
              [Unit]
              After=network-online.target
              Before=remote-fs.target
              [Automount]
              Where=/mnt/data
              TimeoutIdleSec=600
              [Install]
              WantedBy=multi-user.target

       or add in /etc/fstab a line like

              sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0

       or use classic Automountd.  Remember to provide explicit config=...,cache-dir=...  as  a  workaround  for
       mount units being run without HOME.

       Rclone  in the mount helper mode will split -o argument(s) by comma, replace _ by - and prepend -- to get
       the command-line flags.  Options containing commas or spaces can be wrapped in single or  double  quotes.
       Any inner quotes inside outer quotes of the same type should be doubled.
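
       For example, the classic mount shown above, with -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem, is
       translated into roughly the following invocation:

              rclone mount sftp1:subdir /mnt/data --vfs-cache-mode writes --sftp-key-file /path/to/pem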

       Mount option syntax includes a few extra options treated specially:

       • env.NAME=VALUE  will set an environment variable for the mount process.  This helps with Automountd and
         Systemd.mount which don’t allow setting custom environment for mount helpers.  Typically you  will  use
         env.HTTPS_PROXY=proxy.host:3128 or env.HOME=/root

       • command=cmount can be used to run cmount or any other rclone command rather than the default mount.

       • args2env will pass mount options to the mount helper running in background via environment variables
         instead of command line arguments.  This allows hiding secrets from commands such as ps or pgrep.

       • vv... will be transformed into appropriate --verbose=N

       • standard mount options like x-systemd.automount, _netdev,  nosuid  and  alike  are  intended  only  for
         Automountd and ignored by rclone.

   VFS - Virtual File System
       This  command  uses the VFS layer.  This adapts the cloud storage objects that rclone uses into something
       which looks much more like a disk filing system.

       Cloud storage objects have lots of properties which aren’t like disk files - you  can’t  extend  them  or
       write  to  the middle of them, so the VFS layer has to deal with that.  Because there is no one right way
       of doing this there are various options explained below.

       The VFS layer also implements a directory cache - this caches info about files and directories  (but  not
       the data) in memory.

   VFS Directory Cache
       Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and
       not  refreshed  from the backend.  Changes made through the VFS will appear immediately or invalidate the
       cache.

              --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
              --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

       However, changes made directly on the cloud storage by the web interface or a different  copy  of  rclone
       will  only  be  picked  up  once  the  directory cache expires if the backend configured does not support
       polling for changes.  If the backend supports polling, changes will  be  picked  up  within  the  polling
       interval.

       You  can  send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they
       are.  Assuming only one rclone instance is running, you can reset the cache like this:

              kill -SIGHUP $(pidof rclone)

       If you configure rclone with a remote control then you can use rclone rc to  flush  the  whole  directory
       cache:

              rclone rc vfs/forget

       Or individual files or directories:

              rclone rc vfs/forget file=path/to/file dir=path/to/dir

   VFS File Buffering
       The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

       Each  open  file will try to keep the specified amount of data in memory at all times.  The buffered data
       is bound to one open file and won’t be shared.

       This flag is an upper limit for the memory used per open file.  The buffer will only use memory for data
       that is downloaded but not yet read.  If the buffer is empty, only a small amount of memory will be
       used.

       The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

   VFS File Caching
       These flags control the VFS file caching options.  File caching is necessary to make the VFS layer appear
       compatible with a normal file system.  It can be disabled at the cost of some compatibility.

       For example you’ll need to enable VFS caching if you want to read and write  simultaneously  to  a  file.
       See below for more details.

       Note  that  the  VFS  cache  is separate from the cache backend and you may find that you need one or the
       other or both.

              --cache-dir string                   Directory rclone will use for caching.
              --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
              --vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
              --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
              --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
              --vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

       If run with -vv rclone will print the location of the file cache.  The files are stored in the user cache
       file area which is OS dependent but can  be  controlled  with  --cache-dir  or  setting  the  appropriate
       environment variable.

       The  cache  has  4  different  modes  selected  by  --vfs-cache-mode.  The higher the cache mode the more
       compatible rclone becomes at the cost of using disk space.

       Note that files are written back to the remote only when  they  are  closed  and  if  they  haven’t  been
       accessed  for --vfs-write-back seconds.  If rclone is quit or dies with files that haven’t been uploaded,
       these will be uploaded next time rclone is run with the same flags.

       If using --vfs-cache-max-size note that the cache may exceed this size for two reasons.  Firstly  because
       it  is  only checked every --vfs-cache-poll-interval.  Secondly because open files cannot be evicted from
       the cache.

       You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes  if
       using --vfs-cache-mode > off.  This can potentially cause data corruption if you do.  You can work around
       this  by giving each rclone its own cache hierarchy with --cache-dir.  You don’t need to worry about this
       if the remotes in use don’t overlap.
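
       For example, two rclone instances can be kept safely apart by giving each its own cache directory (the
       remotes and paths here are illustrative):

              rclone mount remote:path /mnt/a --vfs-cache-mode writes --cache-dir /var/cache/rclone-a
              rclone mount remote:path2 /mnt/b --vfs-cache-mode writes --cache-dir /var/cache/rclone-b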

    --vfs-cache-mode off
       In this mode (the default) the cache will read directly from the remote and write directly to the  remote
       without caching anything on disk.

       This will mean some operations are not possible

       • Files can’t be opened for both read AND write

       • Files opened for write can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files open for read with O_TRUNC will be opened write only

       • Files open for write only will behave as if O_TRUNC was supplied

       • Open modes O_APPEND, O_TRUNC are ignored

       • If an upload fails it can’t be retried

    --vfs-cache-mode minimal
       This is very similar to “off” except that files opened for read AND write will be buffered to disk.  This
       means that files opened for write will be a lot more compatible, but uses the minimal disk space.

       These operations are not possible

       • Files opened for write only can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files opened for write only will ignore O_APPEND, O_TRUNC

       • If an upload fails it can’t be retried

    --vfs-cache-mode writes
       In  this  mode  files  opened  for  read  only  are  still  read directly from the remote, write only and
       read/write files are buffered to disk first.

       This mode should support all normal file system operations.

       If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

    --vfs-cache-mode full
       In this mode all reads and writes are buffered to and from disk.  When data is read from the remote  this
       is buffered to disk as well.

       In  this mode the files in the cache will be sparse files and rclone will keep track of which bits of the
       files it has downloaded.

       So if an application only reads the starts of each file, then rclone will only buffer the  start  of  the
       file.   These  files  will  appear to be their full size in the cache, but they will be sparse files with
       only the data that has been downloaded present in them.

       This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
       writes.

       When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead.  The  --buffer-size
       is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

       When  using  this  mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is
       set large if required.

       IMPORTANT: not all file systems support sparse files.  In particular FAT/exFAT do not.  Rclone will
       perform very badly if the cache directory is on a filesystem which doesn’t support sparse files and it
       will log an ERROR message if one is detected.
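
       For example, a full-mode mount keeping the in-memory buffer modest while reading well ahead on disk (the
       sizes are illustrative, not recommendations):

              rclone mount remote:path /mnt/data --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M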

   Fingerprinting
       Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a  remote
       file.  Fingerprints are made from:

       • size

       • modification time

       • hash

       where available on an object.

       On  some  backends  some of these attributes are slow to read (they take an extra API call per object, or
       extra work per object).

       For example hash is slow with the local and sftp backends as they have to read the entire file and hash
       it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra
       API call to fetch it.

       If you use the --vfs-fast-fingerprint flag then rclone will  not  include  the  slow  operations  in  the
       fingerprint.   This  makes  the fingerprinting less accurate but much faster and will improve the opening
       time of cached files.

       If you are running a VFS cache over local, s3 or swift backends then using this flag is recommended.

       Note that if you change the value of this flag, the fingerprints  of  the  files  in  the  cache  may  be
       invalidated and the files will need to be downloaded again.
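
       For example, a cached mount over an s3 remote (here named s3:bucket purely for illustration) following
       this recommendation:

              rclone mount s3:bucket /mnt/s3 --vfs-cache-mode full --vfs-fast-fingerprint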

   VFS Chunked Reading
       When  rclone  reads  files from a remote it reads them in chunks.  This means that rather than requesting
       the whole file rclone reads the chunk specified.  This can  reduce  the  used  download  quota  for  some
       remotes  by  requesting  only  chunks from the remote that are actually read, at the cost of an increased
       number of requests.

       These flags control the chunking:

              --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
              --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

       Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each  read.
       When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for
       each  open file will get doubled only until the specified value is reached.  If the value is “off”, which
       is the default, the limit is disabled and the chunk size will grow indefinitely.

       With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded:
       0-100M, 100M-200M, 200M-300M, 300M-400M and so on.  When --vfs-read-chunk-size-limit 500M  is  specified,
       the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

       Setting --vfs-read-chunk-size to 0 or “off” disables chunked reading.
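
       For example, the chunk sizes discussed above would be requested with:

              rclone mount remote:path /mnt/data --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M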

   VFS Performance
       These flags may be used to enable/disable features of the VFS for performance or other reasons.  See also
       the chunked reading feature.

       In  particular  S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a
       slightly different effect) as each read of the modification time takes a transaction.

              --no-checksum     Don't compare checksums on up/download.
              --no-modtime      Don't read/write the modification time (can speed things up).
              --no-seek         Don't allow seeking in files.
              --read-only       Only allow read-only access.

       Sometimes rclone is delivered reads or writes out of order.  Rather than seeking rclone will wait a short
       time for the in sequence read or write to come in.  These flags only come into effect when not  using  an
       on disk cache file.

              --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
              --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

       When  using  VFS  write caching (--vfs-cache-mode with value writes or full), the global flag --transfers
       can be set to adjust the number of parallel uploads of modified files from the cache (the related  global
       flag --checkers has no effect on the VFS).

              --transfers int  Number of file transfers to run in parallel (default 4)
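
       For example, a write-cached mount that skips modification times and uploads up to 8 modified files in
       parallel (the values are illustrative):

              rclone mount remote:path /mnt/data --vfs-cache-mode writes --no-modtime --transfers 8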

   VFS Case Sensitivity
       Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used
       when opening a file.

       File  systems  in modern Windows are case-insensitive but case-preserving: although existing files can be
       opened using any case, the exact case used to create the file is preserved and available for programs  to
       query.  It is not allowed for two files in the same directory to differ only by case.

       Usually  file  systems  on  macOS  are  case-insensitive.   It  is  possible  to  make macOS file systems
       case-sensitive but that is not the default.

       The --vfs-case-insensitive VFS flag controls how rclone  handles  these  two  cases.   If  its  value  is
       “false”, rclone passes file names to the remote as-is.  If the flag is “true” (or appears without a value
       on the command line), rclone may perform a “fixup” as explained below.

       The  user  may specify a file name to open/delete/rename/etc with a case different than what is stored on
       the remote.  If an argument refers to an existing file with exactly the same name, then the case  of  the
       existing  file on the disk will be used.  However, if a file name with exactly the same name is not found
       but a name differing only by case exists, rclone will transparently fixup the name.  This  fixup  happens
       only  when  an  existing  file  is  requested.   Case sensitivity of file names created anew by rclone is
       controlled by the underlying remote.

       Note that case sensitivity of the operating system running rclone  (the  target)  may  differ  from  case
       sensitivity  of  a  file  system  presented by rclone (the source).  The flag controls whether “fixup” is
       performed to satisfy the target.

       If the flag is not provided on the command line, then its default value depends on the  operating  system
       where  rclone  runs:  “true”  on Windows and macOS, “false” otherwise.  If the flag is provided without a
       value, then it is “true”.

   VFS Disk Options
       This flag allows you to manually set the statistics about the filing system.  It can be useful when those
       statistics cannot be read correctly automatically.

              --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

   Alternate report of used bytes
       Some backends, most notably S3, do not report the amount of bytes used.  If you need this information  to
       be  available  when  running df on the filesystem, then pass the flag --vfs-used-is-size to rclone.  With
       this flag set, instead of relying on the backend to report this information, rclone will scan  the  whole
       remote similar to rclone size and compute the total used space itself.

       WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate.  However,
       this is very inefficient and may cost lots of API calls resulting in extra charges.  Use it as a last
       resort and only with caching.

              rclone mount remote:path /path/to/mountpoint [flags]

   Options
                    --allow-non-empty                        Allow mounting over a non-empty directory (not supported on Windows)
                    --allow-other                            Allow access to other users (not supported on Windows)
                    --allow-root                             Allow access to root user (not supported on Windows)
                    --async-read                             Use asynchronous reads (not supported on Windows) (default true)
                    --attr-timeout duration                  Time for which file/directory attributes are cached (default 1s)
                    --daemon                                 Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
                    --daemon-timeout duration                Time limit for rclone to respond to kernel (not supported on Windows)
                    --daemon-wait duration                   Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
                    --debug-fuse                             Debug the FUSE internals - needs -v
                    --default-permissions                    Makes kernel enforce access control based on the file mode (not supported on Windows)
                    --devname string                         Set the device name - default is remote:path
                    --dir-cache-time duration                Time to cache directory entries for (default 5m0s)
                    --dir-perms FileMode                     Directory permissions (default 0777)
                    --file-perms FileMode                    File permissions (default 0666)
                    --fuse-flag stringArray                  Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
                    --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
                -h, --help                                   help for mount
                    --max-read-ahead SizeSuffix              The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
                    --network-mode                           Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
                    --no-checksum                            Don't compare checksums on up/download
                    --no-modtime                             Don't read/write the modification time (can speed things up)
                    --no-seek                                Don't allow seeking in files
                    --noappledouble                          Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
                    --noapplexattr                           Ignore all "com.apple.*" extended attributes (supported on OSX only)
                -o, --option stringArray                     Option for libfuse/WinFsp (repeat if required)
                    --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
                    --read-only                              Only allow read-only access
                    --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
                    --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
                    --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
                    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
                    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
                    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
                    --vfs-case-insensitive                   If a file name not found, find a case insensitive match
                    --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
                    --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
                    --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
                    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
                    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
                    --vfs-read-wait duration                 Time to wait for in-sequence read before seeking (default 20ms)
                    --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
                    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)
                    --vfs-write-wait duration                Time to wait for in-sequence write before giving error (default 1s)
                    --volname string                         Set the volume name (supported on Windows and OSX only)
                    --write-back-cache                       Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone moveto

       Move file or directory from source to dest.

   Synopsis
       If source:path is a file or directory then it moves it to a file or directory named dest:path.

       This can be used to rename files or upload single files under a name different from their existing one.
       If the source is a directory then it acts exactly like the move command.

       So

              rclone moveto src dst

       where src and dst are rclone paths, either remote:path or /path/to/local or C:.

       This will:

              if src is file
                  move it to dst, overwriting an existing file if it exists
              if src is directory
                  move it to dst, overwriting existing files if they exist
                  see move command for full details

       This  doesn’t  transfer files that are identical on src and dst, testing by size and modification time or
       MD5SUM.  src will be deleted on successful transfer.
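
       For example, to rename a single file on the remote (the file names here are illustrative):

              rclone moveto remote:path/old-name.csv remote:path/new-name.csv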

       Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

       Note: Use the -P/--progress flag to view real-time transfer statistics.

              rclone moveto source:path dest:path [flags]

   Options
                -h, --help   help for moveto

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone ncdu

       Explore a remote with a text based user interface.

   Synopsis
       This displays a text based user interface allowing the navigation of a remote.  It  is  most  useful  for
       answering the question - “What is using all my disk space?”.

       To make the user interface it first scans the entire remote given and builds an in memory representation.
       rclone  ncdu  can  be  used  during  this  scanning  phase  and you will see it building up the directory
       structure as it goes along.

       You can interact with the user interface using key presses, press `?' to toggle the help on and off.  The
       supported keys are:

               ↑,↓ or k,j to Move
               →,l to enter
               ←,h to return
               c toggle counts
               g toggle graph
               a toggle average size in directory
               u toggle human-readable format
               n,s,C,A sort by name,size,count,average size
               d delete file/directory
               v select file/directory
               V enter visual select mode
               D delete selected files/directories
               y copy current path to clipboard
               Y display current path
               ^L refresh screen (fix screen corruption)
               ? to toggle help on and off
               q/ESC/^c to quit

       Listed files/directories may be  prefixed  by  a  one-character  flag,  some  of  them  combined  with  a
       description in brackets at end of line.  These flags have the following meaning:

              e means this is an empty directory, i.e. contains no files (but
                may contain empty subdirectories)
              ~ means this is a directory where some of the files (possibly in
                subdirectories) have unknown size, and therefore the directory
                size may be underestimated (and average size inaccurate, as it
                is average of the files with known sizes).
              . means an error occurred while reading a subdirectory, and
                therefore the directory size may be underestimated (and average
                size inaccurate)
              ! means an error occurred while reading this directory

       This is an homage to the ncdu tool but for rclone remotes.  It is missing lots of features at the moment
       but is useful as it stands.

       Note  that it might take some time to delete big files/directories.  The UI won’t respond in the meantime
       since the deletion is done synchronously.

       For a non-interactive listing of the remote, see the tree command.  To just get the  total  size  of  the
       remote you can also use the size command.

              rclone ncdu remote:path [flags]

   Options
                -h, --help   help for ncdu

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone obscure

       Obscure password for use in the rclone config file.

   Synopsis
       In  the  rclone config file, human-readable passwords are obscured.  Obscuring them is done by encrypting
       them and writing them out in base64.  This is not a secure way of encrypting these  passwords  as  rclone
       can decrypt them - it is to prevent “eyedropping” - namely someone seeing a password in the rclone config
       file by accident.

       Many  equally  important  things (like access tokens) are not obscured in the config file.  However it is
       very hard to shoulder surf a 64 character hex token.

       This command can also accept a password through STDIN instead of an argument by passing a  hyphen  as  an
       argument.  This will use the first line of STDIN as the password not including the trailing newline.

              echo "secretpassword" | rclone obscure -

       If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.

       If  you  want  to  encrypt the config file then please use config file encryption - see rclone config for
       more info.

              rclone obscure password [flags]

   Options
                -h, --help   help for obscure

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone rc

       Run a command against a running rclone.

   Synopsis
       This runs a command against a running rclone.  Use the --url flag to specify a non-default URL to
       connect on.  This can be either a “:port” which is taken to mean “http://localhost:port” or a
       “host:port” which is taken to mean “http://host:port”.

       A username and password can be passed in with --user and --pass.

       Note that --rc-addr, --rc-user, --rc-pass will also be read for --url, --user, --pass.

       Arguments should be passed in as parameter=value.

       The result will be returned as a JSON object by default.

       The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments.  This
       is the only way of passing in more complicated values.

       The -o/--opt option can be used to set a key “opt” with key, value options in the form -o key=value or -o
       key.   It can be repeated as many times as required.  This is useful for rc commands which take the “opt”
       parameter which by convention is a dictionary of strings.

              -o key=value -o key2

       Will place this in the “opt” value

               {"key": "value", "key2": ""}

       The -a/--arg option can be used to set strings in the “arg” value.  It can be repeated as many  times  as
       required.  This is useful for rc commands which take the “arg” parameter which by convention is a list of
       strings.

              -a value -a value2

       Will place this in the “arg” value

              ["value", "value2"]

       Use  --loopback  to  connect  to  the rclone instance running rclone rc.  This is very useful for testing
       commands without having to run an rclone rc server, e.g.:

              rclone rc --loopback operations/about fs=/

       Use rclone rc to see a list of all possible commands.

              rclone rc commands parameter [flags]

   Options
                -a, --arg stringArray   Argument placed in the "arg" array
                -h, --help              help for rc
                    --json string       Input JSON - use instead of key=value args
                    --loopback          If set connect to this rclone instance not via HTTP
                    --no-output         If set, don't output the JSON result
                -o, --opt stringArray   Option in the form name=value or name placed in the "opt" array
                    --pass string       Password to use to connect to rclone remote control
                    --url string        URL to connect to rclone remote control (default "http://localhost:5572/")
                    --user string       Username to use to rclone remote control

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone rcat

       Copies standard input to file on remote.

   Synopsis
       rclone rcat reads from standard input (stdin) and copies it to a single remote file.

              echo "hello world" | rclone rcat remote:path/to/file
              ffmpeg - | rclone rcat remote:path/to/file

       If the remote file already exists, it will be overwritten.

       rcat will try to upload small files in a single request, which is usually more efficient than the
       streaming/chunked upload endpoints, which use multiple requests.  Exact behaviour depends on the remote.
       What is considered a small file may be set through --streaming-upload-cutoff.  Uploading only starts
       after the cutoff is reached or if the file ends before that.  The data must fit into RAM.  The cutoff
       needs to be small enough to adhere to the limits of your remote; please see the remote’s documentation.
       Generally speaking, setting this cutoff too high will decrease your performance.

       Use the --size flag to preallocate the file in advance at the remote end and actually stream it, even if
       the remote backend doesn’t support streaming.

       --size should be the exact size of the input stream in bytes.  If the size of the stream differs from
       the --size passed in, the transfer will likely fail.
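
       For example, streaming exactly 1 MiB of data with a matching size hint (the data source and file name
       are illustrative):

              head -c 1048576 /dev/urandom | rclone rcat --size 1048576 remote:random.bin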

       Note that the upload cannot be retried either, because the data is not kept around until the upload
       succeeds.  If you need to transfer a lot of data, you’re better off caching it locally and then using
       rclone move to send it to the destination.

              rclone rcat remote:path [flags]

   Options
                -h, --help       help for rcat
                    --size int   File size hint to preallocate (default -1)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone rcd

       Run rclone listening to remote control commands only.

   Synopsis
       This runs rclone so that it only listens to remote control commands.

       This is useful if you are controlling rclone via the rc API.

       If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed
       in.  It will also open the URL in the browser when rclone is run.

       See the rc documentation for more info on the rc flags.

              rclone rcd <path to files to serve>* [flags]

   Options
                -h, --help   help for rcd

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone rmdirs

       Remove empty directories under the path.

   Synopsis
       This recursively removes any empty directories (including directories that only contain empty
       directories) that it finds under the path.  The root path itself will also be removed if it is empty,
       unless you supply the --leave-root flag.

       Use the rmdir command to delete just the empty directory given by path, without recursing.

       This  is  useful  for tidying up remotes that rclone has left a lot of empty directories in.  For example
       the delete command will delete  files  but  leave  the  directory  structure  (unless  used  with  option
       --rmdirs).
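
       For example, one illustrative clean-up sequence is to delete the files and then remove the now-empty
       directories, keeping the root:

              rclone delete remote:path
              rclone rmdirs remote:path --leave-root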

       To delete a path and any objects in it, use the purge command.

              rclone rmdirs remote:path [flags]

   Options
                -h, --help         help for rmdirs
                    --leave-root   Do not remove root directory if empty

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone selfupdate

       Update the rclone binary.

   Synopsis
       This  command  downloads  the  latest  release  of rclone and replaces the currently running binary.  The
       download is verified with a hashsum and cryptographically signed signature.

       If used without flags (or with implied --stable flag),  this  command  will  install  the  latest  stable
       release.  However, some issues may be fixed (or features added) only in the latest beta release.  In such
       cases  you  should run the command with the --beta flag, i.e. rclone selfupdate --beta.  You can check in
       advance what version would be installed by adding the --check flag, then repeat the  command  without  it
       when you are satisfied.

       Sometimes the rclone team may recommend a specific beta or stable rclone release to troubleshoot your
       issue or add a bleeding edge feature.  The --version VER flag, if given, will update to that specific
       version instead of the latest one.  If you omit the micro version from VER (for example 1.53), the
       latest matching micro version will be used.
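
       For example, to preview and then install a specific version (1.53 is the example version from above):

              rclone selfupdate --version 1.53 --check
              rclone selfupdate --version 1.53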

       Upon successful update rclone will print a message that contains the previous version number.  You will
       need it if you later decide to revert your update for some reason.  Then you’ll have to note the
       previous version and run the following command: rclone selfupdate [--beta] OLDVER.  If the old version
       contains only dots and digits (for example v1.54.0) then it’s a stable release so you won’t need the
       --beta flag.  Beta releases have additional version information, similar to v1.54.0-beta.5111.06f1c0c61.
       (If you are a developer and use a locally built rclone, the version number will end with -DEV; you will
       have to rebuild it as it obviously can’t be distributed.)

       If you previously installed rclone via a package manager, the package may include local documentation or
       configure services.  You may wish to update with the flag --package deb or --package rpm (whichever is
       correct for your OS) to update these too.  This command with the default --package zip will update only
       the rclone executable, so the local manual may become out of date after it.

       The rclone mount command (https://rclone.org/commands/rclone_mount/) may or may not support extended FUSE
       options depending on the build and OS.  selfupdate will refuse to  update  if  the  capability  would  be
       discarded.

       Note:  Windows  forbids  deletion  of  a currently running executable so this command will rename the old
       executable to `rclone.old.exe' upon success.

       Please note that this command was not available before rclone version 1.55.  If it fails for you with the
       message unknown command "selfupdate" then  you  will  need  to  update  manually  following  the  install
       instructions located at https://rclone.org/install/

              rclone selfupdate [flags]

   Options
                    --beta             Install beta release
                    --check            Check for latest release, do not download
                -h, --help             help for selfupdate
                    --output string    Save the downloaded binary at a given path (default: replace running binary)
                    --package string   Package format: zip|deb|rpm (default: zip)
                    --stable           Install stable release (this is the default)
                    --version string   Install the given rclone version (default: latest)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone serve

       Serve a remote over a protocol.

   Synopsis
       Serve a remote over a given protocol.  Requires the use of a subcommand to specify the protocol, e.g.

              rclone serve http remote:

       Each subcommand has its own options which you can see in their help.

              rclone serve <protocol> [opts] <remote> [flags]

   Options
                -h, --help   help for serve

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

       • rclone serve dlna - Serve remote:path over DLNA

       • rclone serve docker - Serve any remote on docker’s volume plugin API.

       • rclone serve ftp - Serve remote:path over FTP.

       • rclone serve http - Serve the remote over HTTP.

       • rclone serve restic - Serve the remote for restic’s REST API.

       • rclone serve sftp - Serve the remote over SFTP.

       • rclone serve webdav - Serve remote:path over WebDAV.

rclone serve dlna

       Serve remote:path over DLNA

   Synopsis
       Run  a  DLNA  media  server  for  media  stored  in an rclone remote.  Many devices, such as the Xbox and
       PlayStation, can automatically discover this server in the LAN and play audio/video from it.  VLC is also
       supported.  Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.

       Rclone will list all files present in the remote, without  filtering  based  on  media  formats  or  file
       extensions.   Additionally,  there  is  no media transcoding support.  This means that some players might
       show files that they are not able to play back correctly.

   Server options
       Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000  or
       --addr :8080 to listen to all IPs.

       Use --name to choose the friendly server name, which is by default “rclone (hostname)”.

       Use --log-trace in conjunction with -vv to enable additional debug logging of all UPNP traffic.
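
       For example (the remote, port and friendly name are illustrative):

              rclone serve dlna remote:media --addr :8080 --name "rclone-media" --log-trace -vv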

   VFS - Virtual File System
       This  command  uses the VFS layer.  This adapts the cloud storage objects that rclone uses into something
       which looks much more like a disk filing system.

       Cloud storage objects have lots of properties which aren’t like disk files - you  can’t  extend  them  or
       write  to  the middle of them, so the VFS layer has to deal with that.  Because there is no one right way
       of doing this there are various options explained below.

       The VFS layer also implements a directory cache - this caches info about files and directories  (but  not
       the data) in memory.

   VFS Directory Cache
       Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and
       not  refreshed  from the backend.  Changes made through the VFS will appear immediately or invalidate the
       cache.

              --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
              --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

       However, changes made directly on the cloud storage by the web interface or a different  copy  of  rclone
       will  only  be  picked  up  once  the  directory cache expires if the backend configured does not support
       polling for changes.  If the backend supports polling, changes will  be  picked  up  within  the  polling
       interval.

       You  can  send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they
       are.  Assuming only one rclone instance is running, you can reset the cache like this:

              kill -SIGHUP $(pidof rclone)

       If you configure rclone with a remote control then you can use rclone rc to  flush  the  whole  directory
       cache:

              rclone rc vfs/forget

       Or individual files or directories:

              rclone rc vfs/forget file=path/to/file dir=path/to/dir

   VFS File Buffering
       The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

       Each  open  file will try to keep the specified amount of data in memory at all times.  The buffered data
       is bound to one open file and won’t be shared.

       This flag is an upper limit for the memory used per open file.  The buffer will only use memory for data
       that is downloaded but not yet read.  If the buffer is empty, only a small amount of memory will be
       used.

       The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

   VFS File Caching
       These flags control the VFS file caching options.  File caching is necessary to make the VFS layer appear
       compatible with a normal file system.  It can be disabled at the cost of some compatibility.

       For example you’ll need to enable VFS caching if you want to read and write  simultaneously  to  a  file.
       See below for more details.

       Note  that  the  VFS  cache  is separate from the cache backend and you may find that you need one or the
       other or both.

              --cache-dir string                   Directory rclone will use for caching.
              --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
              --vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
              --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
              --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
              --vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

       If run with -vv rclone will print the location of the file cache.  The files are stored in the user cache
       file area which is OS dependent but can  be  controlled  with  --cache-dir  or  setting  the  appropriate
       environment variable.

       The  cache  has  4  different  modes  selected  by  --vfs-cache-mode.  The higher the cache mode the more
       compatible rclone becomes at the cost of using disk space.

       Note that files are written back to the remote only when  they  are  closed  and  if  they  haven’t  been
       accessed  for --vfs-write-back seconds.  If rclone is quit or dies with files that haven’t been uploaded,
       these will be uploaded next time rclone is run with the same flags.

       If using --vfs-cache-max-size note that the cache may exceed this size for two reasons.  Firstly  because
       it  is  only checked every --vfs-cache-poll-interval.  Secondly because open files cannot be evicted from
       the cache.

       You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes  if
       using --vfs-cache-mode > off.  This can potentially cause data corruption if you do.  You can work around
       this  by giving each rclone its own cache hierarchy with --cache-dir.  You don’t need to worry about this
       if the remotes in use don’t overlap.

    --vfs-cache-mode off
       In this mode (the default) the cache will read directly from the remote and write directly to the  remote
       without caching anything on disk.

       This will mean some operations are not possible

       • Files can’t be opened for both read AND write

       • Files opened for write can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files open for read with O_TRUNC will be opened write only

       • Files open for write only will behave as if O_TRUNC was supplied

       • Open modes O_APPEND, O_TRUNC are ignored

       • If an upload fails it can’t be retried

    --vfs-cache-mode minimal
       This is very similar to “off” except that files opened for read AND write will be buffered to disk.  This
       means that files opened for write will be a lot more compatible, but uses the minimal disk space.

       These operations are not possible

       • Files opened for write only can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files opened for write only will ignore O_APPEND, O_TRUNC

       • If an upload fails it can’t be retried

    --vfs-cache-mode writes
       In  this  mode  files  opened  for  read  only  are  still  read directly from the remote, write only and
       read/write files are buffered to disk first.

       This mode should support all normal file system operations.

       If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

    --vfs-cache-mode full
       In this mode all reads and writes are buffered to and from disk.  When data is read from the remote  this
       is buffered to disk as well.

       In  this mode the files in the cache will be sparse files and rclone will keep track of which bits of the
       files it has downloaded.

       So if an application only reads the starts of each file, then rclone will only buffer the  start  of  the
       file.   These  files  will  appear to be their full size in the cache, but they will be sparse files with
       only the data that has been downloaded present in them.

       This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
       writes.

       When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead.  The  --buffer-size
       is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

       When  using  this  mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is
       set large if required.

       IMPORTANT: not all file systems support sparse files.  In particular FAT/exFAT do not.  Rclone will
       perform very badly if the cache directory is on a filesystem which doesn’t support sparse files and it
       will log an ERROR message if one is detected.

   Fingerprinting
       Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a  remote
       file.  Fingerprints are made from:

       • size

       • modification time

       • hash

       where available on an object.

       On  some  backends  some of these attributes are slow to read (they take an extra API call per object, or
       extra work per object).

       For example hash is slow with the local and sftp backends as they have to read the entire file and hash
       it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra
       API call to fetch it.

       If you use the --vfs-fast-fingerprint flag then rclone will  not  include  the  slow  operations  in  the
       fingerprint.   This  makes  the fingerprinting less accurate but much faster and will improve the opening
       time of cached files.

       If you are running a VFS cache over local, s3 or swift backends then using this flag is recommended.

       Note that if you change the value of this flag, the fingerprints  of  the  files  in  the  cache  may  be
       invalidated and the files will need to be downloaded again.

   VFS Chunked Reading
       When  rclone  reads  files from a remote it reads them in chunks.  This means that rather than requesting
       the whole file rclone reads the chunk specified.  This can  reduce  the  used  download  quota  for  some
       remotes  by  requesting  only  chunks from the remote that are actually read, at the cost of an increased
       number of requests.

       These flags control the chunking:

              --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
              --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

       Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each  read.
       When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for
       each  open file will get doubled only until the specified value is reached.  If the value is “off”, which
       is the default, the limit is disabled and the chunk size will grow indefinitely.

       With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded:
       0-100M, 100M-200M, 200M-300M, 300M-400M and so on.  When --vfs-read-chunk-size-limit 500M  is  specified,
       the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

       Setting --vfs-read-chunk-size to 0 or “off” disables chunked reading.
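
       As an illustration, the following sketch starts reading in 64M chunks and lets them double up to 1G:

              rclone serve dlna remote:path --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G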

   VFS Performance
       These flags may be used to enable/disable features of the VFS for performance or other reasons.  See also
       the chunked reading feature.

       In  particular  S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a
       slightly different effect) as each read of the modification time takes a transaction.

              --no-checksum     Don't compare checksums on up/download.
              --no-modtime      Don't read/write the modification time (can speed things up).
              --no-seek         Don't allow seeking in files.
              --read-only       Only allow read-only access.
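
       For example, a read-only server over an S3 remote (the name s3: is illustrative) can avoid the
       modification time transactions entirely:

              rclone serve dlna s3:bucket --no-modtime --read-only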

       Sometimes rclone is delivered reads or writes out of order.  Rather than seeking rclone will wait a short
       time for the in sequence read or write to come in.  These flags only come into effect when not  using  an
       on disk cache file.

              --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
              --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

       When  using  VFS  write caching (--vfs-cache-mode with value writes or full), the global flag --transfers
       can be set to adjust the number of parallel uploads of modified files from the cache (the related  global
       flag --checkers has no effect on the VFS).

              --transfers int  Number of file transfers to run in parallel (default 4)

   VFS Case Sensitivity
       Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used
       when opening a file.

       File  systems  in modern Windows are case-insensitive but case-preserving: although existing files can be
       opened using any case, the exact case used to create the file is preserved and available for programs  to
       query.  It is not allowed for two files in the same directory to differ only by case.

       Usually  file  systems  on  macOS  are  case-insensitive.   It  is  possible  to  make macOS file systems
       case-sensitive but that is not the default.

       The --vfs-case-insensitive VFS flag controls how rclone  handles  these  two  cases.   If  its  value  is
       “false”, rclone passes file names to the remote as-is.  If the flag is “true” (or appears without a value
       on the command line), rclone may perform a “fixup” as explained below.

       The user may specify a file name to open/delete/rename/etc with a case different from what is stored on
       the remote.  If an argument refers to an existing file with exactly the same name, then the case of the
       existing file on the disk will be used.  However, if a file with exactly the same name is not found but
       a name differing only by case exists, rclone will transparently fix up the name.  This fixup happens
       only when an existing file is requested.  Case sensitivity of file names created anew by rclone is
       controlled by the underlying remote.

       Note that case sensitivity of the operating system running rclone  (the  target)  may  differ  from  case
       sensitivity  of  a  file  system  presented by rclone (the source).  The flag controls whether “fixup” is
       performed to satisfy the target.

       If the flag is not provided on the command line, then its default value depends on the  operating  system
       where  rclone  runs:  “true”  on Windows and macOS, “false” otherwise.  If the flag is provided without a
       value, then it is “true”.

   VFS Disk Options
       This flag allows you to manually set the statistics about the filing system.  It can be useful when those
       statistics cannot be read correctly automatically.

              --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

   Alternate report of used bytes
       Some backends, most notably S3, do not report the number of bytes used.  If you need this information to
       be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone.  With
       this flag set, instead of relying on the backend to report this information, rclone will scan the whole
       remote similar to rclone size and compute the total used space itself.

       WARNING.  Unlike rclone size, this flag ignores filters so that the result is accurate.  However, this
       is very inefficient and may cost lots of API calls resulting in extra charges.  Use it as a last resort
       and only with caching.

              rclone serve dlna remote:path [flags]

   Options
                    --addr string                            The ip:port or :port to bind the DLNA http server to (default ":7879")
                    --announce-interval duration             The interval between SSDP announcements (default 12m0s)
                    --dir-cache-time duration                Time to cache directory entries for (default 5m0s)
                    --dir-perms FileMode                     Directory permissions (default 0777)
                    --file-perms FileMode                    File permissions (default 0666)
                    --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
                -h, --help                                   help for dlna
                    --interface stringArray                  The interface to use for SSDP (repeat as necessary)
                    --log-trace                              Enable trace logging of SOAP traffic
                    --name string                            Name of DLNA server
                    --no-checksum                            Don't compare checksums on up/download
                    --no-modtime                             Don't read/write the modification time (can speed things up)
                    --no-seek                                Don't allow seeking in files
                    --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
                    --read-only                              Only allow read-only access
                    --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
                    --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
                    --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
                    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
                    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
                    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
                     --vfs-case-insensitive                   If a file name is not found, find a case-insensitive match
                    --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
                    --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
                    --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
                    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
                    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
                    --vfs-read-wait duration                 Time to wait for in-sequence read before seeking (default 20ms)
                     --vfs-used-is-size                       Use the rclone size algorithm for Used size
                    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)
                    --vfs-write-wait duration                Time to wait for in-sequence write before giving error (default 1s)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone serve - Serve a remote over a protocol.

rclone serve docker

       Serve any remote on docker’s volume plugin API.

   Synopsis
       This command implements the Docker volume plugin API, allowing docker to use rclone as a data storage
       mechanism for various cloud providers.  Rclone provides a docker volume plugin based on it.

       To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use
       the plugin.  The plugin then listens on it for commands from the docker daemon and runs the
       corresponding code when necessary.  Docker plugins can run as a managed plugin under control of the
       docker daemon or as an independent native service.  For testing, you can just run it directly from the
       command line, for example:

              sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv

       Running rclone serve docker will create the socket described above, listening for commands from Docker
       to create the necessary volumes.  Normally you need not give the --socket-addr flag; the API will then
       listen on the unix domain socket at /run/docker/plugins/rclone.sock.  In the example above rclone will
       instead create a TCP socket and a small file /etc/docker/plugins/rclone.spec containing the socket
       address.  We use sudo because both paths are writable only by the root user.

       If you later decide to change the listening socket, the docker daemon must be restarted to reconnect to
       /run/docker/plugins/rclone.sock or parse the new /etc/docker/plugins/rclone.spec.  Until you restart,
       any volume-related docker commands will time out trying to access the old socket.  Running directly is
       supported on Linux only, not on Windows or macOS.  This is not a problem with the managed plugin mode
       described in detail in the full documentation.

       The  command  will  create  volume  mounts   under   the   path   given   by   --base-dir   (by   default
       /var/lib/docker-volumes/rclone   available   only   to   root)  and  maintain  the  JSON  formatted  file
       docker-plugin.state in the rclone cache directory  with  book-keeping  records  of  created  and  mounted
       volumes.

       All  mount  and VFS options are submitted by the docker daemon via API, but you can also provide defaults
       on the command line as well as set path to  the  config  file  and  cache  directory  or  adjust  logging
       verbosity.
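
       For example, a sketch which gives the plugin its own config file and cache directory (the paths shown
       are illustrative) and sets a VFS default:

              sudo rclone serve docker --config /etc/rclone-docker.conf \
                  --cache-dir /var/cache/rclone-docker --vfs-cache-mode writes -v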

   VFS - Virtual File System
       This  command  uses the VFS layer.  This adapts the cloud storage objects that rclone uses into something
       which looks much more like a disk filing system.

       Cloud storage objects have lots of properties which aren’t like disk files - you  can’t  extend  them  or
       write  to  the middle of them, so the VFS layer has to deal with that.  Because there is no one right way
       of doing this there are various options explained below.

       The VFS layer also implements a directory cache - this caches info about files and directories  (but  not
       the data) in memory.

   VFS Directory Cache
       Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and
       not  refreshed  from the backend.  Changes made through the VFS will appear immediately or invalidate the
       cache.

              --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
              --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

       However, changes made directly on the cloud storage by the web interface or a different  copy  of  rclone
       will  only  be  picked  up  once  the  directory cache expires if the backend configured does not support
       polling for changes.  If the backend supports polling, changes will  be  picked  up  within  the  polling
       interval.

       You  can  send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they
       are.  Assuming only one rclone instance is running, you can reset the cache like this:

              kill -SIGHUP $(pidof rclone)

       If you configure rclone with a remote control then you can use rclone rc to  flush  the  whole  directory
       cache:

              rclone rc vfs/forget

       Or individual files or directories:

              rclone rc vfs/forget file=path/to/file dir=path/to/dir

   VFS File Buffering
       The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

       Each  open  file will try to keep the specified amount of data in memory at all times.  The buffered data
       is bound to one open file and won’t be shared.

       This flag is an upper limit for the memory used per open file.  The buffer will only use memory for
       data that is downloaded but not yet read.  If the buffer is empty, only a small amount of memory will
       be used.

       The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
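
       For example, with --buffer-size 32M and 10 files open at once, read buffering alone may use up to 320M
       of memory.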

   VFS File Caching
       These flags control the VFS file caching options.  File caching is necessary to make the VFS layer appear
       compatible with a normal file system.  It can be disabled at the cost of some compatibility.

       For example you’ll need to enable VFS caching if you want to read and write  simultaneously  to  a  file.
       See below for more details.

       Note  that  the  VFS  cache  is separate from the cache backend and you may find that you need one or the
       other or both.

              --cache-dir string                   Directory rclone will use for caching.
              --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
              --vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
              --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
              --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
              --vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

       If run with -vv rclone will print the location of the file cache.  The files are stored in the user cache
       file area which is OS dependent but can  be  controlled  with  --cache-dir  or  setting  the  appropriate
       environment variable.

       The  cache  has  4  different  modes  selected  by  --vfs-cache-mode.  The higher the cache mode the more
       compatible rclone becomes at the cost of using disk space.

       Note that files are written back to the remote only when  they  are  closed  and  if  they  haven’t  been
       accessed  for --vfs-write-back seconds.  If rclone is quit or dies with files that haven’t been uploaded,
       these will be uploaded next time rclone is run with the same flags.

       If using --vfs-cache-max-size note that the cache may exceed this size for two reasons.  Firstly  because
       it  is  only checked every --vfs-cache-poll-interval.  Secondly because open files cannot be evicted from
       the cache.

       You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes  if
       using --vfs-cache-mode > off.  This can potentially cause data corruption if you do.  You can work around
       this  by giving each rclone its own cache hierarchy with --cache-dir.  You don’t need to worry about this
       if the remotes in use don’t overlap.
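
       For example, two instances could be kept apart like this sketch (the socket addresses and cache paths
       are illustrative; each instance needs its own socket as well as its own cache):

              rclone serve docker --socket-addr localhost:8787 --cache-dir /var/cache/rclone-a
              rclone serve docker --socket-addr localhost:8788 --cache-dir /var/cache/rclone-b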

   --vfs-cache-mode off
       In this mode (the default) the cache will read directly from the remote and write directly to the remote
       without caching anything on disk.

       This will mean some operations are not possible:

       • Files can’t be opened for both read AND write

       • Files opened for write can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files open for read with O_TRUNC will be opened write only

       • Files open for write only will behave as if O_TRUNC was supplied

       • Open modes O_APPEND, O_TRUNC are ignored

       • If an upload fails it can’t be retried

   --vfs-cache-mode minimal
       This is very similar to “off” except that files opened for read AND write will be buffered to disk.  This
       means that files opened for write will be a lot more compatible, while using minimal disk space.

       These operations are not possible:

       • Files opened for write only can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files opened for write only will ignore O_APPEND, O_TRUNC

       • If an upload fails it can’t be retried

   --vfs-cache-mode writes
       In this mode files opened for read only are still read directly from the remote; write only and
       read/write files are buffered to disk first.

       This mode should support all normal file system operations.

       If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

   --vfs-cache-mode full
       In this mode all reads and writes are buffered to and from disk.  When data is read from the remote  this
       is buffered to disk as well.

       In  this mode the files in the cache will be sparse files and rclone will keep track of which bits of the
       files it has downloaded.

       So if an application only reads the start of each file, then rclone will only buffer the start of the
       file.  These files will appear to be their full size in the cache, but they will be sparse files with
       only the data that has been downloaded present in them.

       This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
       writes.

       When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead.  The  --buffer-size
       is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

       When  using  this  mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is
       set large if required.

       IMPORTANT: not all file systems support sparse files.  In particular FAT/exFAT do not.  Rclone will
       perform very badly if the cache directory is on a filesystem which doesn’t support sparse files, and it
       will log an ERROR message if one is detected.

   Fingerprinting
       Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a  remote
       file.  Fingerprints are made from:

       • size

       • modification time

       • hash

       where available on an object.

       On  some  backends  some of these attributes are slow to read (they take an extra API call per object, or
       extra work per object).

       For example hash is slow with the local and sftp backends as they have to read the entire file and hash
       it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra
       API call to fetch it.

       If you use the --vfs-fast-fingerprint flag then rclone will  not  include  the  slow  operations  in  the
       fingerprint.   This  makes  the fingerprinting less accurate but much faster and will improve the opening
       time of cached files.

       If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

       Note that if you change the value of this flag, the fingerprints  of  the  files  in  the  cache  may  be
       invalidated and the files will need to be downloaded again.

   VFS Chunked Reading
       When  rclone  reads  files from a remote it reads them in chunks.  This means that rather than requesting
       the whole file rclone reads the chunk specified.  This can  reduce  the  used  download  quota  for  some
       remotes  by  requesting  only  chunks from the remote that are actually read, at the cost of an increased
       number of requests.

       These flags control the chunking:

              --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
              --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

       Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each  read.
       When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for
       each  open file will get doubled only until the specified value is reached.  If the value is “off”, which
       is the default, the limit is disabled and the chunk size will grow indefinitely.

       With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded:
       0-100M, 100M-200M, 200M-300M, 300M-400M and so on.  When --vfs-read-chunk-size-limit 500M  is  specified,
       the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

       Setting --vfs-read-chunk-size to 0 or “off” disables chunked reading.

   VFS Performance
       These flags may be used to enable/disable features of the VFS for performance or other reasons.  See also
       the chunked reading feature.

       In  particular  S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a
       slightly different effect) as each read of the modification time takes a transaction.

              --no-checksum     Don't compare checksums on up/download.
              --no-modtime      Don't read/write the modification time (can speed things up).
              --no-seek         Don't allow seeking in files.
              --read-only       Only allow read-only access.

       Sometimes rclone is delivered reads or writes out of order.  Rather than seeking rclone will wait a short
       time for the in sequence read or write to come in.  These flags only come into effect when not  using  an
       on disk cache file.

              --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
              --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

       When  using  VFS  write caching (--vfs-cache-mode with value writes or full), the global flag --transfers
       can be set to adjust the number of parallel uploads of modified files from the cache (the related  global
       flag --checkers has no effect on the VFS).

              --transfers int  Number of file transfers to run in parallel (default 4)
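
       For example, to upload up to 8 modified files from the cache in parallel:

              rclone serve docker --vfs-cache-mode writes --transfers 8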

   VFS Case Sensitivity
       Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used
       when opening a file.

       File  systems  in modern Windows are case-insensitive but case-preserving: although existing files can be
       opened using any case, the exact case used to create the file is preserved and available for programs  to
       query.  It is not allowed for two files in the same directory to differ only by case.

       Usually  file  systems  on  macOS  are  case-insensitive.   It  is  possible  to  make macOS file systems
       case-sensitive but that is not the default.

       The --vfs-case-insensitive VFS flag controls how rclone  handles  these  two  cases.   If  its  value  is
       “false”, rclone passes file names to the remote as-is.  If the flag is “true” (or appears without a value
       on the command line), rclone may perform a “fixup” as explained below.

       The user may specify a file name to open/delete/rename/etc with a case different from what is stored on
       the remote.  If an argument refers to an existing file with exactly the same name, then the case of the
       existing file on the disk will be used.  However, if a file with exactly the same name is not found but
       a name differing only by case exists, rclone will transparently fix up the name.  This fixup happens
       only when an existing file is requested.  Case sensitivity of file names created anew by rclone is
       controlled by the underlying remote.

       Note that case sensitivity of the operating system running rclone  (the  target)  may  differ  from  case
       sensitivity  of  a  file  system  presented by rclone (the source).  The flag controls whether “fixup” is
       performed to satisfy the target.

       If the flag is not provided on the command line, then its default value depends on the  operating  system
       where  rclone  runs:  “true”  on Windows and macOS, “false” otherwise.  If the flag is provided without a
       value, then it is “true”.

   VFS Disk Options
       This flag allows you to manually set the statistics about the filing system.  It can be useful when those
       statistics cannot be read correctly automatically.

              --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

   Alternate report of used bytes
       Some backends, most notably S3, do not report the number of bytes used.  If you need this information to
       be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone.  With
       this flag set, instead of relying on the backend to report this information, rclone will scan the whole
       remote similar to rclone size and compute the total used space itself.

       WARNING.  Unlike rclone size, this flag ignores filters so that the result is accurate.  However, this
       is very inefficient and may cost lots of API calls resulting in extra charges.  Use it as a last resort
       and only with caching.

              rclone serve docker [flags]

   Options
                    --allow-non-empty                        Allow mounting over a non-empty directory (not supported on Windows)
                    --allow-other                            Allow access to other users (not supported on Windows)
                    --allow-root                             Allow access to root user (not supported on Windows)
                    --async-read                             Use asynchronous reads (not supported on Windows) (default true)
                    --attr-timeout duration                  Time for which file/directory attributes are cached (default 1s)
                    --base-dir string                        Base directory for volumes (default "/var/lib/docker-volumes/rclone")
                    --daemon                                 Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
                    --daemon-timeout duration                Time limit for rclone to respond to kernel (not supported on Windows)
                    --daemon-wait duration                   Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
                    --debug-fuse                             Debug the FUSE internals - needs -v
                    --default-permissions                    Makes kernel enforce access control based on the file mode (not supported on Windows)
                    --devname string                         Set the device name - default is remote:path
                    --dir-cache-time duration                Time to cache directory entries for (default 5m0s)
                    --dir-perms FileMode                     Directory permissions (default 0777)
                    --file-perms FileMode                    File permissions (default 0666)
                    --forget-state                           Skip restoring previous state
                    --fuse-flag stringArray                  Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
                    --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
                -h, --help                                   help for docker
                    --max-read-ahead SizeSuffix              The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
                    --network-mode                           Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
                    --no-checksum                            Don't compare checksums on up/download
                    --no-modtime                             Don't read/write the modification time (can speed things up)
                    --no-seek                                Don't allow seeking in files
                    --no-spec                                Do not write spec file
                    --noappledouble                          Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
                    --noapplexattr                           Ignore all "com.apple.*" extended attributes (supported on OSX only)
                -o, --option stringArray                     Option for libfuse/WinFsp (repeat if required)
                    --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
                    --read-only                              Only allow read-only access
                    --socket-addr string                     Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
                    --socket-gid int                         GID for unix socket (default: current process GID) (default 1000)
                    --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
                    --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
                    --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
                    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
                    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
                    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
                     --vfs-case-insensitive                   If a file name is not found, find a case-insensitive match
                    --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
                    --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
                    --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
                    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
                    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
                    --vfs-read-wait duration                 Time to wait for in-sequence read before seeking (default 20ms)
                     --vfs-used-is-size                       Use the rclone size algorithm for Used size
                    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)
                    --vfs-write-wait duration                Time to wait for in-sequence write before giving error (default 1s)
                    --volname string                         Set the volume name (supported on Windows and OSX only)
                    --write-back-cache                       Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone serve - Serve a remote over a protocol.

rclone serve ftp

       Serve remote:path over FTP.

   Synopsis
       Run a basic FTP server to serve a remote over the FTP protocol.  This can be viewed with an FTP client
       or you can make a remote of type FTP to read and write it.

   Server options
       Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or
       --addr :8080 to listen to all IPs.  By default it only listens on localhost.  You can use port :0 to let
       the OS choose an available port.

       If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised
       - see the next section for info.

   Authentication
       By default this will serve files without needing a login.

       You can set a single username and password with the --user and --pass flags.
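
       For example, to require a login (the credentials shown are placeholders):

              rclone serve ftp remote:path --addr :2121 --user me --pass mypassword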

   VFS - Virtual File System
       This  command  uses the VFS layer.  This adapts the cloud storage objects that rclone uses into something
       which looks much more like a disk filing system.

       Cloud storage objects have lots of properties which aren’t like disk files - you  can’t  extend  them  or
       write  to  the middle of them, so the VFS layer has to deal with that.  Because there is no one right way
       of doing this there are various options explained below.

       The VFS layer also implements a directory cache - this caches info about files and directories  (but  not
       the data) in memory.

   VFS Directory Cache
       Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and
       not  refreshed  from the backend.  Changes made through the VFS will appear immediately or invalidate the
       cache.

              --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
              --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

       However, changes made directly on the cloud storage by the web interface or a different  copy  of  rclone
       will  only  be  picked  up  once  the  directory cache expires if the backend configured does not support
       polling for changes.  If the backend supports polling, changes will  be  picked  up  within  the  polling
       interval.

       You  can  send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they
       are.  Assuming only one rclone instance is running, you can reset the cache like this:

              kill -SIGHUP $(pidof rclone)

       If you configure rclone with a remote control then you can use rclone rc to  flush  the  whole  directory
       cache:

              rclone rc vfs/forget

       Or individual files or directories:

              rclone rc vfs/forget file=path/to/file dir=path/to/dir

   VFS File Buffering
       The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

       Each  open  file will try to keep the specified amount of data in memory at all times.  The buffered data
       is bound to one open file and won’t be shared.

       This flag is an upper limit for the memory used per open file.  The buffer will only use memory for
       data that is downloaded but not yet read.  If the buffer is empty, only a small amount of memory will
       be used.

       The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

   VFS File Caching
       These flags control the VFS file caching options.  File caching is necessary to make the VFS layer appear
       compatible with a normal file system.  It can be disabled at the cost of some compatibility.

       For example you’ll need to enable VFS caching if you want to read and write  simultaneously  to  a  file.
       See below for more details.

       Note  that  the  VFS  cache  is separate from the cache backend and you may find that you need one or the
       other or both.

              --cache-dir string                   Directory rclone will use for caching.
              --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
              --vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
              --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
              --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
              --vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

       If run with -vv rclone will print the location of the file cache.  The files are stored in the user cache
       file area which is OS dependent but can  be  controlled  with  --cache-dir  or  setting  the  appropriate
       environment variable.

       The  cache  has  4  different  modes  selected  by  --vfs-cache-mode.  The higher the cache mode the more
       compatible rclone becomes at the cost of using disk space.

       Note that files are written back to the remote only when  they  are  closed  and  if  they  haven’t  been
       accessed  for --vfs-write-back seconds.  If rclone is quit or dies with files that haven’t been uploaded,
       these will be uploaded next time rclone is run with the same flags.

       If using --vfs-cache-max-size note that the cache may exceed this size for two reasons.  Firstly  because
       it  is  only checked every --vfs-cache-poll-interval.  Secondly because open files cannot be evicted from
       the cache.

       You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes  if
       using --vfs-cache-mode > off.  This can potentially cause data corruption if you do.  You can work around
       this  by giving each rclone its own cache hierarchy with --cache-dir.  You don’t need to worry about this
       if the remotes in use don’t overlap.

   --vfs-cache-mode off
       In this mode (the default) the cache will read directly from the remote and write directly to the remote
       without caching anything on disk.

       This will mean some operations are not possible:

       • Files can’t be opened for both read AND write

       • Files opened for write can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files open for read with O_TRUNC will be opened write only

       • Files open for write only will behave as if O_TRUNC was supplied

       • Open modes O_APPEND, O_TRUNC are ignored

       • If an upload fails it can’t be retried

   --vfs-cache-mode minimal
       This is very similar to “off” except that files opened for read AND write will be buffered to disk.  This
       means that files opened for write will be a lot more compatible, while using minimal disk space.

       These operations are not possible:

       • Files opened for write only can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files opened for write only will ignore O_APPEND, O_TRUNC

       • If an upload fails it can’t be retried

   --vfs-cache-mode writes
       In this mode files opened for read only are still read directly from the remote; write only and
       read/write files are buffered to disk first.

       This mode should support all normal file system operations.

       If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

   --vfs-cache-mode full
       In this mode all reads and writes are buffered to and from disk.  When data is read from the remote  this
       is buffered to disk as well.

       In  this mode the files in the cache will be sparse files and rclone will keep track of which bits of the
       files it has downloaded.

       So if an application only reads the start of each file, then rclone will only buffer the start of the
       file.  These files will appear to be their full size in the cache, but they will be sparse files with
       only the data that has been downloaded present in them.

       This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
       writes.

       When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead.  The  --buffer-size
       is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

       When  using  this  mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is
       set large if required.

       IMPORTANT: not all file systems support sparse files.  In particular FAT/exFAT do not.  Rclone will
       perform very badly if the cache directory is on a filesystem which doesn’t support sparse files, and it
       will log an ERROR message if one is detected.

   Fingerprinting
       Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a  remote
       file.  Fingerprints are made from:

       • size

       • modification time

       • hash

       where available on an object.

       On  some  backends  some of these attributes are slow to read (they take an extra API call per object, or
       extra work per object).

       For example hash is slow with the local and sftp backends as they have to read the entire file and hash
       it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra
       API call to fetch it.

       If you use the --vfs-fast-fingerprint flag then rclone will  not  include  the  slow  operations  in  the
       fingerprint.   This  makes  the fingerprinting less accurate but much faster and will improve the opening
       time of cached files.

       If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

       Note that if you change the value of this flag, the fingerprints  of  the  files  in  the  cache  may  be
       invalidated and the files will need to be downloaded again.

   VFS Chunked Reading
       When  rclone  reads  files from a remote it reads them in chunks.  This means that rather than requesting
       the whole file rclone reads the chunk specified.  This can  reduce  the  used  download  quota  for  some
       remotes  by  requesting  only  chunks from the remote that are actually read, at the cost of an increased
       number of requests.

       These flags control the chunking:

              --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
              --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

       Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each  read.
       When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for
       each  open file will get doubled only until the specified value is reached.  If the value is “off”, which
       is the default, the limit is disabled and the chunk size will grow indefinitely.

       With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded:
       0-100M, 100M-200M, 200M-300M, 300M-400M and so on.  When --vfs-read-chunk-size-limit 500M  is  specified,
       the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

       Setting --vfs-read-chunk-size to 0 or “off” disables chunked reading.

   VFS Performance
       These flags may be used to enable/disable features of the VFS for performance or other reasons.  See also
       the chunked reading feature.

       In  particular  S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a
       slightly different effect) as each read of the modification time takes a transaction.

              --no-checksum     Don't compare checksums on up/download.
              --no-modtime      Don't read/write the modification time (can speed things up).
              --no-seek         Don't allow seeking in files.
              --read-only       Only allow read-only access.

       Sometimes rclone is delivered reads or writes out of order.  Rather than seeking rclone will wait a short
       time for the in sequence read or write to come in.  These flags only come into effect when not  using  an
       on disk cache file.

              --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
              --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

       When  using  VFS  write caching (--vfs-cache-mode with value writes or full), the global flag --transfers
       can be set to adjust the number of parallel uploads of modified files from the cache (the related  global
       flag --checkers has no effect on the VFS).

              --transfers int  Number of file transfers to run in parallel (default 4)

   VFS Case Sensitivity
       Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used
       when opening a file.

       File  systems  in modern Windows are case-insensitive but case-preserving: although existing files can be
       opened using any case, the exact case used to create the file is preserved and available for programs  to
       query.  It is not allowed for two files in the same directory to differ only by case.

       Usually  file  systems  on  macOS  are  case-insensitive.   It  is  possible  to  make macOS file systems
       case-sensitive but that is not the default.

       The --vfs-case-insensitive VFS flag controls how rclone  handles  these  two  cases.   If  its  value  is
       “false”, rclone passes file names to the remote as-is.  If the flag is “true” (or appears without a value
       on the command line), rclone may perform a “fixup” as explained below.

       The user may specify a file name to open/delete/rename/etc with a case different from what is stored on
       the remote.  If an argument refers to an existing file with exactly the same name, then the case of the
       existing file on the disk will be used.  However, if a file with exactly the same name is not found but
       a name differing only by case exists, rclone will transparently fix up the name.  This fixup happens
       only when an existing file is requested.  Case sensitivity of file names created anew by rclone is
       controlled by the underlying remote.

       Note that case sensitivity of the operating system running rclone  (the  target)  may  differ  from  case
       sensitivity  of  a  file  system  presented by rclone (the source).  The flag controls whether “fixup” is
       performed to satisfy the target.

       If the flag is not provided on the command line, then its default value depends on the  operating  system
       where  rclone  runs:  “true”  on Windows and macOS, “false” otherwise.  If the flag is provided without a
       value, then it is “true”.
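
       For example, to enable the fixup behaviour explicitly regardless of the platform rclone runs on:

              rclone serve ftp remote:path --vfs-case-insensitive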

   VFS Disk Options
       This flag allows you to manually set the statistics about the filing system.  It can be useful when those
       statistics cannot be read correctly automatically.

              --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

   Alternate report of used bytes
       Some backends, most notably S3, do not report the number of bytes used.  If you need this information to
       be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone.  With
       this flag set, instead of relying on the backend to report this information, rclone will scan the whole
       remote similar to rclone size and compute the total used space itself.

       WARNING.  Unlike rclone size, this flag ignores filters so that the result is accurate.  However, this
       is very inefficient and may cost lots of API calls resulting in extra charges.  Use it as a last resort
       and only with caching.
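
       For example, a sketch combining the flag with caching as advised (the remote name s3: is illustrative):

              rclone serve ftp s3:bucket --vfs-cache-mode full --vfs-used-is-size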

   Auth Proxy
       If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate
       backends on the fly, which are then used to authenticate incoming requests.  This uses a simple JSON
       based protocol with input on STDIN and output on STDOUT.

       PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set the
       authorized keys option will be ignored.

       There is an example program bin/test_proxy.py in the rclone source code.

       The program’s job is to take a user and pass from the input and turn those into the config for a
       backend, printed on STDOUT in JSON format.  This config will have any default parameters for the
       backend added, but it won’t use configuration from environment variables or command line options - it
       is the job of the proxy program to make a complete config.

       The generated config must have this extra parameter: _root - the root to use for the backend.

       It may also have this parameter: _obscure - a comma separated list of parameters to obscure.

       If password authentication was used by the client, input to the  proxy  process  (on  STDIN)  would  look
       similar to this:

              {
                  "user": "me",
                  "pass": "mypassword"
              }

       If  public-key  authentication  was  used by the client, input to the proxy process (on STDIN) would look
       similar to this:

              {
                  "user": "me",
                  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
              }

       The proxy program might then, as an example, return this on STDOUT:

              {
                  "type": "sftp",
                  "_root": "",
                  "_obscure": "pass",
                  "user": "me",
                  "pass": "mypassword",
                  "host": "sftp.example.com"
              }

       This would mean that an SFTP backend would be created  on  the  fly  for  the  user  and  pass/public_key
       returned  in  the output to the host given.  Note that since _obscure is set to pass, rclone will obscure
       the pass parameter before creating the backend (which is required for sftp backends).

       The program can manipulate the supplied user in any way.  For example, to proxy to many different sftp
       backends, you could make the user be user@example.com, then set the host to example.com in the output
       and the user to user.  For security you’d probably want to restrict the host to a limited list.

       Note that an internal cache is keyed on user, so only use that for configuration; don’t use pass or
       public_key.  This also means that if a user’s password or public key is changed the cache will need to
       expire (which takes 5 minutes) before it takes effect.

       This can be used to build general purpose proxies to any kind of backend that rclone supports.
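
       When developing a proxy program it can be exercised by hand, feeding it the same JSON on STDIN that
       rclone would send and checking the backend config it prints on STDOUT, for example:

              echo '{"user": "me", "pass": "mypassword"}' | /path/to/program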

              rclone serve ftp remote:path [flags]

   Options
                    --addr string                            IPaddress:Port or :Port to bind server to (default "localhost:2121")
                    --auth-proxy string                      A program to use to create the backend from the auth
                    --cert string                            TLS PEM key (concatenation of certificate and CA certificate)
                    --dir-cache-time duration                Time to cache directory entries for (default 5m0s)
                    --dir-perms FileMode                     Directory permissions (default 0777)
                    --file-perms FileMode                    File permissions (default 0666)
                    --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
                -h, --help                                   help for ftp
                    --key string                             TLS PEM Private key
                    --no-checksum                            Don't compare checksums on up/download
                    --no-modtime                             Don't read/write the modification time (can speed things up)
                    --no-seek                                Don't allow seeking in files
                    --pass string                            Password for authentication (empty value allow every password)
                    --passive-port string                    Passive port range to use (default "30000-32000")
                    --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
                    --public-ip string                       Public IP address to advertise for passive connections
                    --read-only                              Only allow read-only access
                    --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
                    --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
                    --user string                            User name for authentication (default "anonymous")
                    --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
                    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
                    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
                    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
                     --vfs-case-insensitive                   If a file name is not found, find a case-insensitive match
                    --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
                    --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
                    --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
                    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
                    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
                    --vfs-read-wait duration                 Time to wait for in-sequence read before seeking (default 20ms)
                     --vfs-used-is-size                       Use the rclone size algorithm for Used size
                    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)
                    --vfs-write-wait duration                Time to wait for in-sequence write before giving error (default 1s)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone serve - Serve a remote over a protocol.

rclone serve http

       Serve the remote over HTTP.

   Synopsis
       Run a basic web server to serve a remote over HTTP.  This can be viewed in a web browser or you can make
       a remote of type http to read from it.

       You can use the filter flags (e.g. --include, --exclude) to control what is served.

       The server will log errors.  Use -v to see access logs.

       --bwlimit will be respected for file transfers.  Use --stats to control the stats printing.
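
       For example, a minimal invocation (remote:path and the filter pattern are placeholders) serving
       only PDF files, read-only, with access logging might look like this:

               rclone serve http --addr :8080 --include "*.pdf" --read-only -v remote:path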

   Server options
       Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or
       --addr  :8080 to listen to all IPs.  By default it only listens on localhost.  You can use port :0 to let
       the OS choose an available port.

       If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised
       - see the next section for info.

       --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server.  Note
       that this is the total time for a transfer.

       --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

       --baseurl controls the URL prefix that rclone serves from.  By default rclone will serve from  the  root.
       If  you  used  --baseurl  "/rclone" then rclone would serve from a URL starting with “/rclone/”.  This is
       useful if you wish to proxy rclone serve.  Rclone automatically  inserts  leading  and  trailing  “/”  on
       --baseurl,  so  --baseurl  "rclone",  --baseurl  "/rclone"  and  --baseurl  "/rclone/"  are  all  treated
       identically.
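
       As a sketch, to serve behind a reverse proxy that forwards /rclone/ to rclone (the proxy
       configuration itself is assumed), you might run:

               rclone serve http --addr 127.0.0.1:8080 --baseurl /rclone remote:path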

   SSL/TLS
       By default this will serve over http.  If you want you can serve over https.  You will need to supply the
       --cert and --key flags.  If you wish to do client side certificate  validation  then  you  will  need  to
       supply --client-ca also.

       --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate.
       --key should be the PEM encoded private key and --client-ca should be the PEM encoded client  certificate
       authority certificate.

       --min-tls-version is the minimum TLS version that is acceptable.  Valid values are “tls1.0”, “tls1.1”,
       “tls1.2” and “tls1.3” (default “tls1.0”).
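
       For instance, assuming you already have a certificate and key in server.crt and server.key
       (hypothetical file names), you could serve over HTTPS and require at least TLS 1.2:

               rclone serve http --addr :8443 --cert server.crt --key server.key --min-tls-version tls1.2 remote:path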

   Template
       --template allows a user to specify a custom markup template for HTTP and WebDAV  serve  functions.   The
       server exports the following markup to be used within the template to serve pages:

       Parameter        Description
       ──────────────────────────────────────────────────────────────────────────
       .Name            The full path of a file/directory.
       .Title           Directory listing of .Name
       .Sort            The current sort used.  This is changeable via the
                        ?sort= parameter.  Sort options: namedirfirst, name,
                        size, time (default namedirfirst)
       .Order           The current ordering used.  This is changeable via
                        the ?order= parameter.  Order options: asc, desc
                        (default asc)
       .Query           Currently unused.
       .Breadcrumb      Allows for creating a relative navigation
       – .Link          The relative-to-the-root link of the Text.
       – .Text          The Name of the directory.
       .Entries         Information about a specific file/directory.
       – .URL           The `url' of an entry.
       – .Leaf          Currently the same as `URL' but intended to be `just'
                        the name.
       – .IsDir         Boolean for whether an entry is a directory or not.
       – .Size          Size in bytes of the entry.
       – .ModTime       The UTC timestamp of an entry.

   Authentication
       By default this will serve files without needing a login.

       You  can  either use an htpasswd file which can take lots of users, or set a single username and password
       with the --user and --pass flags.

       Use --htpasswd /path/to/htpasswd to provide an htpasswd file.  This is in standard Apache format and
       supports MD5, SHA1 and BCrypt for basic authentication.  BCrypt is recommended.

       To create an htpasswd file:

              touch htpasswd
              htpasswd -B htpasswd user
              htpasswd -B htpasswd anotherUser

       The password file can be updated while rclone is running.

       Use --realm to set the authentication realm.

       Use --salt to change the password hashing salt from the default.
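
       Putting this together, a sketch of an authenticated server using the htpasswd file created
       above might be:

               rclone serve http --addr :8080 --htpasswd ./htpasswd --realm "rclone" remote:path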

   VFS - Virtual File System
       This  command  uses the VFS layer.  This adapts the cloud storage objects that rclone uses into something
       which looks much more like a disk filing system.

       Cloud storage objects have lots of properties which aren’t like disk files - you  can’t  extend  them  or
       write  to  the middle of them, so the VFS layer has to deal with that.  Because there is no one right way
       of doing this there are various options explained below.

       The VFS layer also implements a directory cache - this caches info about files and directories  (but  not
       the data) in memory.

   VFS Directory Cache
       Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and
       not  refreshed  from the backend.  Changes made through the VFS will appear immediately or invalidate the
       cache.

              --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
              --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

       However, changes made directly on the cloud storage by the web interface or a different  copy  of  rclone
       will  only  be  picked  up  once  the  directory cache expires if the backend configured does not support
       polling for changes.  If the backend supports polling, changes will  be  picked  up  within  the  polling
       interval.

       You  can  send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they
       are.  Assuming only one rclone instance is running, you can reset the cache like this:

              kill -SIGHUP $(pidof rclone)

       If you configure rclone with a remote control then you can use rclone rc to  flush  the  whole  directory
       cache:

              rclone rc vfs/forget

       Or individual files or directories:

              rclone rc vfs/forget file=path/to/file dir=path/to/dir

   VFS File Buffering
       The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

       Each  open  file will try to keep the specified amount of data in memory at all times.  The buffered data
       is bound to one open file and won’t be shared.

       This flag is an upper limit for the used memory per open file.  The buffer will only use memory for data
       that is downloaded but not yet read.  If the buffer is empty, only a small amount of memory will be
       used.

       The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
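
       For example, to cap the per-file buffer at 32 MiB (so ten open files could use up to roughly
       320 MiB of memory), you might run:

               rclone serve http --buffer-size 32M remote:path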

   VFS File Caching
       These flags control the VFS file caching options.  File caching is necessary to make the VFS layer appear
       compatible with a normal file system.  It can be disabled at the cost of some compatibility.

       For example you’ll need to enable VFS caching if you want to read and write  simultaneously  to  a  file.
       See below for more details.

       Note  that  the  VFS  cache  is separate from the cache backend and you may find that you need one or the
       other or both.

              --cache-dir string                   Directory rclone will use for caching.
              --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
              --vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
              --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
              --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
              --vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

       If run with -vv rclone will print the location of the file cache.  The files are stored in the user cache
       file area which is OS dependent but can  be  controlled  with  --cache-dir  or  setting  the  appropriate
       environment variable.

       The  cache  has  4  different  modes  selected  by  --vfs-cache-mode.  The higher the cache mode the more
       compatible rclone becomes at the cost of using disk space.

       Note that files are written back to the remote only when  they  are  closed  and  if  they  haven’t  been
       accessed  for --vfs-write-back seconds.  If rclone is quit or dies with files that haven’t been uploaded,
       these will be uploaded next time rclone is run with the same flags.

       If using --vfs-cache-max-size note that the cache may exceed this size for two reasons.  Firstly  because
       it  is  only checked every --vfs-cache-poll-interval.  Secondly because open files cannot be evicted from
       the cache.

       You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes  if
       using --vfs-cache-mode > off.  This can potentially cause data corruption if you do.  You can work around
       this  by giving each rclone its own cache hierarchy with --cache-dir.  You don’t need to worry about this
       if the remotes in use don’t overlap.
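
       For example, a sketch of two rclone instances kept safe by giving each one its own cache
       hierarchy (the directory paths are placeholders):

               rclone serve http --addr :8081 --vfs-cache-mode writes --cache-dir /var/cache/rclone-a remote:path
               rclone serve http --addr :8082 --vfs-cache-mode writes --cache-dir /var/cache/rclone-b remote:path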

   --vfs-cache-mode off
       In this mode (the default) the cache will read directly from the remote and write directly to the  remote
       without caching anything on disk.

       This will mean some operations are not possible:

       • Files can’t be opened for both read AND write

       • Files opened for write can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files open for read with O_TRUNC will be opened write only

       • Files open for write only will behave as if O_TRUNC was supplied

       • Open modes O_APPEND, O_TRUNC are ignored

       • If an upload fails it can’t be retried

   --vfs-cache-mode minimal
       This is very similar to “off” except that files opened for read AND write will be buffered to disk.  This
       means that files opened for write will be a lot more compatible, while using minimal disk space.

       These operations are not possible:

       • Files opened for write only can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files opened for write only will ignore O_APPEND, O_TRUNC

       • If an upload fails it can’t be retried

   --vfs-cache-mode writes
       In  this  mode  files  opened  for  read  only  are  still  read directly from the remote, write only and
       read/write files are buffered to disk first.

       This mode should support all normal file system operations.

       If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

   --vfs-cache-mode full
       In this mode all reads and writes are buffered to and from disk.  When data is read from the remote  this
       is buffered to disk as well.

       In  this mode the files in the cache will be sparse files and rclone will keep track of which bits of the
       files it has downloaded.

       So if an application only reads the starts of each file, then rclone will only buffer the  start  of  the
       file.   These  files  will  appear to be their full size in the cache, but they will be sparse files with
       only the data that has been downloaded present in them.

       This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
       writes.

       When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead.  The  --buffer-size
       is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

       When  using  this  mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is
       set large if required.
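
       For example, a sketch of a full-mode cache with a modest memory buffer and a large on-disk
       read-ahead might be:

               rclone serve http --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M remote:path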

       IMPORTANT not all file systems support sparse files.   In  particular  FAT/exFAT  do  not.   Rclone  will
       perform  very  badly  if the cache directory is on a filesystem which doesn’t support sparse files and it
       will log an ERROR message if one is detected.

   Fingerprinting
       Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a  remote
       file.  Fingerprints are made from:

       • size

       • modification time

       • hash

       where available on an object.

       On  some  backends  some of these attributes are slow to read (they take an extra API call per object, or
       extra work per object).

       For example hash is slow with the local and sftp backends as they have to read the entire file  and  hash
       it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra
       API call to fetch it.

       If you use the --vfs-fast-fingerprint flag then rclone will  not  include  the  slow  operations  in  the
       fingerprint.   This  makes  the fingerprinting less accurate but much faster and will improve the opening
       time of cached files.

       If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.
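
       For example (s3:bucket is a placeholder remote):

               rclone serve http --vfs-cache-mode full --vfs-fast-fingerprint s3:bucket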

       Note that if you change the value of this flag, the fingerprints  of  the  files  in  the  cache  may  be
       invalidated and the files will need to be downloaded again.

   VFS Chunked Reading
       When  rclone  reads  files from a remote it reads them in chunks.  This means that rather than requesting
       the whole file rclone reads the chunk specified.  This can  reduce  the  used  download  quota  for  some
       remotes  by  requesting  only  chunks from the remote that are actually read, at the cost of an increased
       number of requests.

       These flags control the chunking:

               --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128Mi)
              --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

       Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each  read.
       When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for
       each  open file will get doubled only until the specified value is reached.  If the value is “off”, which
       is the default, the limit is disabled and the chunk size will grow indefinitely.

       With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded:
       0-100M, 100M-200M, 200M-300M, 300M-400M and so on.  When --vfs-read-chunk-size-limit 500M  is  specified,
       the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

       Setting --vfs-read-chunk-size to 0 or “off” disables chunked reading.
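
       For example, to start reading in 64 MiB chunks and stop the doubling at 1 GiB, you might use:

               rclone serve http --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G remote:path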

   VFS Performance
       These flags may be used to enable/disable features of the VFS for performance or other reasons.  See also
       the chunked reading feature.

       In  particular  S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a
       slightly different effect) as each read of the modification time takes a transaction.

              --no-checksum     Don't compare checksums on up/download.
              --no-modtime      Don't read/write the modification time (can speed things up).
              --no-seek         Don't allow seeking in files.
              --read-only       Only allow read-only access.

       Sometimes rclone is delivered reads or writes out of order.  Rather than seeking rclone will wait a short
       time for the in sequence read or write to come in.  These flags only come into effect when not  using  an
       on disk cache file.

              --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
              --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

       When  using  VFS  write caching (--vfs-cache-mode with value writes or full), the global flag --transfers
       can be set to adjust the number of parallel uploads of modified files from the cache (the related  global
       flag --checkers has no effect on the VFS).

              --transfers int  Number of file transfers to run in parallel (default 4)
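
       For example, a sketch of a write cache uploading up to 8 modified files in parallel:

               rclone serve http --vfs-cache-mode writes --transfers 8 remote:path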

   VFS Case Sensitivity
       Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used
       when opening a file.

       File  systems  in modern Windows are case-insensitive but case-preserving: although existing files can be
       opened using any case, the exact case used to create the file is preserved and available for programs  to
       query.  It is not allowed for two files in the same directory to differ only by case.

       Usually  file  systems  on  macOS  are  case-insensitive.   It  is  possible  to  make macOS file systems
       case-sensitive but that is not the default.

       The --vfs-case-insensitive VFS flag controls how rclone  handles  these  two  cases.   If  its  value  is
       “false”, rclone passes file names to the remote as-is.  If the flag is “true” (or appears without a value
       on the command line), rclone may perform a “fixup” as explained below.

       The  user  may specify a file name to open/delete/rename/etc with a case different than what is stored on
       the remote.  If an argument refers to an existing file with exactly the same name, then the case  of  the
       existing  file on the disk will be used.  However, if a file name with exactly the same name is not found
       but a name differing only by case exists, rclone will transparently fixup the name.  This  fixup  happens
       only  when  an  existing  file  is  requested.   Case sensitivity of file names created anew by rclone is
       controlled by the underlying remote.

       Note that case sensitivity of the operating system running rclone  (the  target)  may  differ  from  case
       sensitivity  of  a  file  system  presented by rclone (the source).  The flag controls whether “fixup” is
       performed to satisfy the target.

       If the flag is not provided on the command line, then its default value depends on the  operating  system
       where  rclone  runs:  “true”  on Windows and macOS, “false” otherwise.  If the flag is provided without a
       value, then it is “true”.

   VFS Disk Options
       This flag allows you to manually set the statistics about the filing system.  It can be useful when those
       statistics cannot be read correctly automatically.

              --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

   Alternate report of used bytes
       Some backends, most notably S3, do not report the number of bytes used.  If you need this information to
       be  available  when  running df on the filesystem, then pass the flag --vfs-used-is-size to rclone.  With
       this flag set, instead of relying on the backend to report this information, rclone will scan  the  whole
       remote similar to rclone size and compute the total used space itself.

       WARNING.  Contrary  to  rclone  size, this flag ignores filters so that the result is accurate.  However,
       this is very inefficient and may cost lots of API calls resulting in extra charges.  Use  it  as  a  last
       resort and only with caching.
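
       For example, bearing the warning above in mind:

               rclone serve http --vfs-cache-mode full --vfs-used-is-size remote:path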

              rclone serve http remote:path [flags]

   Options
                    --addr string                            IPaddress:Port or :Port to bind server to (default "127.0.0.1:8080")
                    --baseurl string                         Prefix for URLs - leave blank for root
                    --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
                    --client-ca string                       Client certificate authority to verify clients with
                    --dir-cache-time duration                Time to cache directory entries for (default 5m0s)
                    --dir-perms FileMode                     Directory permissions (default 0777)
                    --file-perms FileMode                    File permissions (default 0666)
                    --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
                -h, --help                                   help for http
                    --htpasswd string                        A htpasswd file - if not provided no authentication is done
                    --key string                             SSL PEM Private key
                    --max-header-bytes int                   Maximum size of request header (default 4096)
                    --min-tls-version string                 Minimum TLS version that is acceptable (default "tls1.0")
                    --no-checksum                            Don't compare checksums on up/download
                    --no-modtime                             Don't read/write the modification time (can speed things up)
                    --no-seek                                Don't allow seeking in files
                    --pass string                            Password for authentication
                    --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
                    --read-only                              Only allow read-only access
                    --realm string                           Realm for authentication
                    --salt string                            Password hashing salt (default "dlPL2MqE")
                    --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
                    --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
                    --template string                        User-specified template
                    --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
                    --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
                    --user string                            User name for authentication
                    --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
                    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
                    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
                    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
                    --vfs-case-insensitive                   If a file name is not found, find a case-insensitive match
                    --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
                    --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
                    --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
                    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
                    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
                    --vfs-read-wait duration                 Time to wait for in-sequence read before seeking (default 20ms)
                    --vfs-used-is-size                       Use the rclone size algorithm for Used size
                    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)
                    --vfs-write-wait duration                Time to wait for in-sequence write before giving error (default 1s)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone serve - Serve a remote over a protocol.

rclone serve restic

       Serve the remote for restic’s REST API.

   Synopsis
       Run a basic web server to serve a remote over restic’s REST backend API over HTTP.  This allows restic to
       use rclone as a data storage mechanism for cloud providers that restic does not support directly.

       Restic is a command-line program for doing backups.

       The server will log errors.  Use -v to see access logs.

       --bwlimit will be respected for file transfers.  Use --stats to control the stats printing.

   Setting up rclone for use by restic
       First set up a remote for your chosen cloud provider.

       Once you have set up the remote, check it is working with, for example, “rclone lsd remote:”.  You may
       have called the remote something other than “remote:” - just substitute whatever you  called  it  in  the
       following instructions.

       Now start the rclone restic server

              rclone serve restic -v remote:backup

       You can replace “backup” in the above with whatever path in the remote you wish to use.

       By default this will serve on “localhost:8080”; you can change this using the --addr flag.

       You might wish to start this server on boot.

       Adding  --cache-objects=false  will  cause  rclone  to  stop caching objects returned from the List call.
       Caching is normally desirable as it speeds up downloading  objects,  saves  transactions  and  uses  very
       little memory.

   Setting up restic to use rclone
       Now you can follow the restic instructions on setting up restic:
       http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server

       Note that you will need restic 0.8.2 or later to interoperate with rclone.

       For the example above you will want to use “http://localhost:8080/” as the URL for the REST server.

       For example:

              $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
              $ export RESTIC_PASSWORD=yourpassword
              $ restic init
              created restic backend 8b1a4b56ae at rest:http://localhost:8080/

              Please note that knowledge of your password is required to access
              the repository. Losing your password means that your data is
              irrecoverably lost.
              $ restic backup /path/to/files/to/backup
              scan [/path/to/files/to/backup]
              scanned 189 directories, 312 files in 0:00
              [0:00] 100.00%  38.128 MiB / 38.128 MiB  501 / 501 items  0 errors  ETA 0:00
              duration: 0:00
              snapshot 45c8fdd8 saved

   Multiple repositories
       Note that you can use the endpoint to host multiple repositories.  Do this by adding a directory name or
       path after the URL.  Note that these must end with /.  E.g.

              $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
              # backup user1 stuff
              $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
              # backup user2 stuff

   Private repositories
       The --private-repos flag can be used to limit users to repositories starting with a path of /<username>/.
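
       For example, a sketch of a multi-user server where each authenticated user is confined to
       their own repository path (the htpasswd file is assumed to exist already):

               rclone serve restic --addr :8080 --htpasswd ./htpasswd --private-repos remote:backups

       A user “user1” would then point restic at rest:http://localhost:8080/user1/.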

   Server options
       Use  --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or
       --addr :8080 to listen to all IPs.  By default it only listens on localhost.  You can use port :0 to  let
       the OS choose an available port.

       If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised
       - see the next section for info.

       --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server.  Note
       that this is the total time for a transfer.

       --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

       --baseurl  controls  the URL prefix that rclone serves from.  By default rclone will serve from the root.
       If you used --baseurl "/rclone" then rclone would serve from a URL starting  with  “/rclone/”.   This  is
       useful  if  you  wish  to  proxy  rclone serve.  Rclone automatically inserts leading and trailing “/” on
       --baseurl,  so  --baseurl  "rclone",  --baseurl  "/rclone"  and  --baseurl  "/rclone/"  are  all  treated
       identically.

       --template  allows  a  user to specify a custom markup template for HTTP and WebDAV serve functions.  The
       server exports the following markup to be used within the template to serve pages:

       Parameter        Description
       ──────────────────────────────────────────────────────────────────────────
       .Name            The full path of a file/directory.
       .Title           Directory listing of .Name
       .Sort            The current sort used.  This is changeable via the
                        ?sort= parameter.  Sort options: namedirfirst, name,
                        size, time (default namedirfirst)
       .Order           The current ordering used.  This is changeable via
                        the ?order= parameter.  Order options: asc, desc
                        (default asc)
       .Query           Currently unused.
       .Breadcrumb      Allows for creating a relative navigation
       – .Link          The relative-to-the-root link of the Text.
       – .Text          The Name of the directory.
       .Entries         Information about a specific file/directory.
       – .URL           The `url' of an entry.
       – .Leaf          Currently the same as `URL' but intended to be `just'
                        the name.
       – .IsDir         Boolean for whether an entry is a directory or not.
       – .Size          Size in bytes of the entry.
       – .ModTime       The UTC timestamp of an entry.

   Authentication
       By default this will serve files without needing a login.

       You can either use an htpasswd file which can take lots of users, or set a single username  and  password
       with the --user and --pass flags.

       Use --htpasswd /path/to/htpasswd to provide an htpasswd file.  This is in standard Apache format and
       supports MD5, SHA1 and BCrypt for basic authentication.  BCrypt is recommended.

       To create an htpasswd file:

              touch htpasswd
              htpasswd -B htpasswd user
              htpasswd -B htpasswd anotherUser

       The password file can be updated while rclone is running.

       Use --realm to set the authentication realm.

   SSL/TLS
       By default this will serve over HTTP.  If you want you can serve over HTTPS.  You will need to supply the
       --cert and --key flags.  If you wish to do client side certificate  validation  then  you  will  need  to
       supply --client-ca also.

       --cert  should  be  either  a PEM encoded certificate or a concatenation of that with the CA certificate.
       --key should be the PEM encoded private key and --client-ca should be the PEM encoded client  certificate
       authority certificate.

       --min-tls-version is the minimum TLS version that is acceptable.  Valid values are “tls1.0”, “tls1.1”,
       “tls1.2” and “tls1.3” (default “tls1.0”).

              rclone serve restic remote:path [flags]

   Options
                    --addr string                     IPaddress:Port or :Port to bind server to (default "localhost:8080")
                    --append-only                     Disallow deletion of repository data
                    --baseurl string                  Prefix for URLs - leave blank for root
                    --cache-objects                   Cache listed objects (default true)
                    --cert string                     SSL PEM key (concatenation of certificate and CA certificate)
                    --client-ca string                Client certificate authority to verify clients with
                -h, --help                            help for restic
                    --htpasswd string                 htpasswd file - if not provided no authentication is done
                    --key string                      SSL PEM Private key
                    --max-header-bytes int            Maximum size of request header (default 4096)
                    --min-tls-version string          Minimum TLS version that is acceptable (default "tls1.0")
                    --pass string                     Password for authentication
                    --private-repos                   Users can only access their private repo
                    --realm string                    Realm for authentication (default "rclone")
                    --server-read-timeout duration    Timeout for server reading data (default 1h0m0s)
                    --server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
                    --stdio                           Run an HTTP2 server on stdin/stdout
                    --template string                 User-specified template
                    --user string                     User name for authentication

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone serve - Serve a remote over a protocol.

rclone serve sftp

       Serve the remote over SFTP.

   Synopsis
       Run an SFTP server to serve a remote over SFTP.  This can be used with an SFTP client or you can  make  a
       remote of type sftp to use with it.

       You can use the filter flags (e.g. --include, --exclude) to control what is served.

       The  server will respond to a small number of shell commands, mainly md5sum, sha1sum and df, which enable
       it to provide support for checksums and the about feature when accessed from an sftp remote.

       Note that this server uses a standard 32 KiB packet payload size, which means you must not configure the
       client to expect anything else, e.g.  with the chunk_size option on an sftp remote.

       The server will log errors.  Use -v to see access logs.

       --bwlimit will be respected for file transfers.  Use --stats to control the stats printing.

       You  must  provide  some  means  of  authentication,  either  with --user/--pass, an authorized keys file
       (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or  set  the
       --no-auth flag for no authentication when logging in.
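
       For example, a minimal password-protected server (the username and password are placeholders):

               rclone serve sftp --user sftpuser --pass mysecret remote:path

       Or, using an authorized keys file (the path is a placeholder):

               rclone serve sftp --authorized-keys /etc/rclone/authorized_keys remote:path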

       If  you  don’t  supply  a host --key then rclone will generate rsa, ecdsa and ed25519 variants, and cache
       them for later use in rclone’s cache directory (see rclone help  flags  cache-dir)  in  the  “serve-sftp”
       directory.

       By  default  the  server  binds to localhost:2022 - if you want it to be reachable externally then supply
       --addr :2022 for example.

       Note that the default of --vfs-cache-mode off is fine for the rclone sftp backend, but it may not be with
       other SFTP clients.

       If --stdio is  specified,  rclone  will  serve  SFTP  over  stdio,  which  can  be  used  with  sshd  via
       ~/.ssh/authorized_keys, for example:

              restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...

       On the client you need to set --transfers 1 when using --stdio.  Otherwise multiple instances of the
       rclone server are started by OpenSSH, which can lead to “corrupted on transfer” errors.  This happens
       because the client chooses indiscriminately which server to send commands to, while the servers each
       have a different view of the state of the filing system.

       The “restrict” in authorized_keys prevents SHA1SUMs and MD5SUMs from being used.  Omitting “restrict”
       and using --sftp-path-override to enable checksumming is possible but less secure, and you could use the
       SFTP server provided by OpenSSH in this case.

   VFS - Virtual File System
       This command uses the VFS layer.  This adapts the cloud storage objects that rclone uses  into  something
       which looks much more like a disk filing system.

       Cloud  storage  objects  have  lots of properties which aren’t like disk files - you can’t extend them or
       write to the middle of them, so the VFS layer has to deal with that.  Because there is no one  right  way
       of doing this there are various options explained below.

       The  VFS  layer also implements a directory cache - this caches info about files and directories (but not
       the data) in memory.

   VFS Directory Cache
       Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and
       not refreshed from the backend.  Changes made through the VFS will appear immediately or  invalidate  the
       cache.

              --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
              --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

       However,  changes  made  directly on the cloud storage by the web interface or a different copy of rclone
       will only be picked up once the directory cache expires  if  the  backend  configured  does  not  support
       polling  for  changes.   If  the  backend  supports polling, changes will be picked up within the polling
       interval.

       You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how  old  they
       are.  Assuming only one rclone instance is running, you can reset the cache like this:

              kill -SIGHUP $(pidof rclone)

       If  you  configure  rclone  with a remote control then you can use rclone rc to flush the whole directory
       cache:

              rclone rc vfs/forget

       Or individual files or directories:

              rclone rc vfs/forget file=path/to/file dir=path/to/dir

   VFS File Buffering
       The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

       Each open file will try to keep the specified amount of data in memory at all times.  The  buffered  data
       is bound to one open file and won’t be shared.

       This flag is an upper limit for the used memory per open file.  The buffer will only use memory for data
       that is downloaded but not yet read.  If the buffer is empty, only a small amount of memory will be
       used.

       The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

   VFS File Caching
       These flags control the VFS file caching options.  File caching is necessary to make the VFS layer appear
       compatible with a normal file system.  It can be disabled at the cost of some compatibility.

       For  example  you’ll  need  to enable VFS caching if you want to read and write simultaneously to a file.
       See below for more details.

       Note that the VFS cache is separate from the cache backend and you may find that  you  need  one  or  the
       other or both.

              --cache-dir string                   Directory rclone will use for caching.
              --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
              --vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
              --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
              --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
              --vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

       If run with -vv rclone will print the location of the file cache.  The files are stored in the user cache
       file  area  which  is  OS  dependent  but  can  be controlled with --cache-dir or setting the appropriate
       environment variable.

       The cache has 4 different modes selected by  --vfs-cache-mode.   The  higher  the  cache  mode  the  more
       compatible rclone becomes at the cost of using disk space.

       Note  that  files  are  written  back  to  the  remote only when they are closed and if they haven’t been
       accessed for --vfs-write-back seconds.  If rclone is quit or dies with files that haven’t been  uploaded,
       these will be uploaded next time rclone is run with the same flags.

       If  using --vfs-cache-max-size note that the cache may exceed this size for two reasons.  Firstly because
       it is only checked every --vfs-cache-poll-interval.  Secondly because open files cannot be  evicted  from
       the cache.

       You  should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if
       using --vfs-cache-mode > off.  This can potentially cause data corruption if you do.  You can work around
       this by giving each rclone its own cache hierarchy with --cache-dir.  You don’t need to worry about  this
       if the remotes in use don’t overlap.

   --vfs-cache-mode off
       In  this mode (the default) the cache will read directly from the remote and write directly to the remote
       without caching anything on disk.

       This will mean some operations are not possible:

       • Files can’t be opened for both read AND write

       • Files opened for write can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files open for read with O_TRUNC will be opened write only

       • Files open for write only will behave as if O_TRUNC was supplied

       • Open modes O_APPEND, O_TRUNC are ignored

       • If an upload fails it can’t be retried

   --vfs-cache-mode minimal
       This is very similar to “off” except that files opened for read AND write will be buffered to disk.  This
       means that files opened for write will be a lot more compatible, while using minimal disk space.

       These operations are not possible:

       • Files opened for write only can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files opened for write only will ignore O_APPEND, O_TRUNC

       • If an upload fails it can’t be retried

   --vfs-cache-mode writes
       In this mode files opened for read only  are  still  read  directly  from  the  remote,  write  only  and
       read/write files are buffered to disk first.

       This mode should support all normal file system operations.

       If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

   --vfs-cache-mode full
       In  this mode all reads and writes are buffered to and from disk.  When data is read from the remote this
       is buffered to disk as well.

       In this mode the files in the cache will be sparse files and rclone will keep track of which bits of  the
       files it has downloaded.

       So  if  an  application only reads the starts of each file, then rclone will only buffer the start of the
       file.  These files will appear to be their full size in the cache, but they will  be  sparse  files  with
       only the data that has been downloaded present in them.

       This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
       writes.

       When  reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead.  The --buffer-size
       is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

       When using this mode it is recommended that --buffer-size is not set too large  and  --vfs-read-ahead  is
       set large if required.

       IMPORTANT  not  all  file  systems  support  sparse  files.  In particular FAT/exFAT do not.  Rclone will
       perform very badly if the cache directory is on a filesystem which doesn’t support sparse  files  and  it
       will log an ERROR message if one is detected.

   Fingerprinting
       Various  parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote
       file.  Fingerprints are made from:

       • size

       • modification time

       • hash

       where available on an object.

       On some backends some of these attributes are slow to read (they take an extra API call  per  object,  or
       extra work per object).

       For  example  hash is slow with the local and sftp backends as they have to read the entire file and hash
       it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra
       API call to fetch it.

       If  you  use  the  --vfs-fast-fingerprint  flag  then  rclone will not include the slow operations in the
       fingerprint.  This makes the fingerprinting less accurate but much faster and will  improve  the  opening
       time of cached files.

       If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

       Note  that  if  you  change  the  value  of  this flag, the fingerprints of the files in the cache may be
       invalidated and the files will need to be downloaded again.

   VFS Chunked Reading
       When rclone reads files from a remote it reads them in chunks.  This means that  rather  than  requesting
       the  whole  file  rclone  reads  the  chunk  specified.  This can reduce the used download quota for some
       remotes by requesting only chunks from the remote that are actually read, at the  cost  of  an  increased
       number of requests.

       These flags control the chunking:

               --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128Mi)
              --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

       Rclone  will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read.
       When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for
       each open file will get doubled only until the specified value is reached.  If the value is “off”,  which
       is the default, the limit is disabled and the chunk size will grow indefinitely.

       With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded:
       0-100M,  100M-200M,  200M-300M, 300M-400M and so on.  When --vfs-read-chunk-size-limit 500M is specified,
       the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

       Setting --vfs-read-chunk-size to 0 or “off” disables chunked reading.

   VFS Performance
       These flags may be used to enable/disable features of the VFS for performance or other reasons.  See also
       the chunked reading feature.

       In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime  for  a
       slightly different effect) as each read of the modification time takes a transaction.

              --no-checksum     Don't compare checksums on up/download.
              --no-modtime      Don't read/write the modification time (can speed things up).
              --no-seek         Don't allow seeking in files.
              --read-only       Only allow read-only access.

       Sometimes rclone is delivered reads or writes out of order.  Rather than seeking rclone will wait a short
       time  for  the in sequence read or write to come in.  These flags only come into effect when not using an
       on disk cache file.

              --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
              --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

       When using VFS write caching (--vfs-cache-mode with value writes or full), the  global  flag  --transfers
       can  be set to adjust the number of parallel uploads of modified files from the cache (the related global
       flag --checkers has no effect on the VFS).

              --transfers int  Number of file transfers to run in parallel (default 4)

   VFS Case Sensitivity
       Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used
       when opening a file.

       File systems in modern Windows are case-insensitive but case-preserving: although existing files  can  be
       opened  using any case, the exact case used to create the file is preserved and available for programs to
       query.  It is not allowed for two files in the same directory to differ only by case.

       Usually file systems on  macOS  are  case-insensitive.   It  is  possible  to  make  macOS  file  systems
       case-sensitive but that is not the default.

       The  --vfs-case-insensitive  VFS  flag  controls  how  rclone  handles  these two cases.  If its value is
       “false”, rclone passes file names to the remote as-is.  If the flag is “true” (or appears without a value
       on the command line), rclone may perform a “fixup” as explained below.

       The user may specify a file name to open/delete/rename/etc with a case different than what is  stored  on
       the  remote.   If an argument refers to an existing file with exactly the same name, then the case of the
       existing file on the disk will be used.  However, if a file name with exactly the same name is not  found
       but  a  name differing only by case exists, rclone will transparently fixup the name.  This fixup happens
       only when an existing file is requested.  Case sensitivity of  file  names  created  anew  by  rclone  is
       controlled by the underlying remote.

       Note  that  case  sensitivity  of  the  operating system running rclone (the target) may differ from case
       sensitivity of a file system presented by rclone (the source).  The  flag  controls  whether  “fixup”  is
       performed to satisfy the target.

       If  the  flag is not provided on the command line, then its default value depends on the operating system
       where rclone runs: “true” on Windows and macOS, “false” otherwise.  If the flag  is  provided  without  a
       value, then it is “true”.

   VFS Disk Options
       This flag allows you to manually set the statistics about the filing system.  It can be useful when those
       statistics cannot be read correctly automatically.

              --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

   Alternate report of used bytes
       Some backends, most notably S3, do not report the number of bytes used.  If you need this information to
       be available when running df on the filesystem, then pass the flag --vfs-used-is-size  to  rclone.   With
       this  flag  set, instead of relying on the backend to report this information, rclone will scan the whole
       remote similar to rclone size and compute the total used space itself.

       WARNING. Contrary to rclone size, this flag ignores filters so that the  result  is  accurate.   However,
       this  is  very  inefficient  and may cost lots of API calls resulting in extra charges.  Use it as a last
       resort and only with caching.

   Auth Proxy
       If you supply the parameter --auth-proxy /path/to/program then rclone will use that program  to  generate
       backends on the fly which then are used to authenticate incoming requests.  This uses a simple JSON based
       protocol with input on STDIN and output on STDOUT.

       PLEASE  NOTE:  --auth-proxy  and  --authorized-keys  cannot  be used together, if --auth-proxy is set the
       authorized keys option will be ignored.

       There is an example program bin/test_proxy.py in the rclone source code.

       The program’s job is to take a user and pass on its input and turn those into the config for a backend,
       written to STDOUT in JSON format.  This config will have any default parameters for the backend added,
       but it won’t use configuration from environment variables or command line options - it is the job of the
       proxy program to make a complete config.

       The config generated must have this extra parameter - _root - the root to use for the backend.

       It may also have this parameter - _obscure - a comma separated list of the parameters to obscure.

       If  password  authentication  was  used  by  the client, input to the proxy process (on STDIN) would look
       similar to this:

              {
                  "user": "me",
                  "pass": "mypassword"
              }

       If public-key authentication was used by the client, input to the proxy process  (on  STDIN)  would  look
       similar to this:

              {
                  "user": "me",
                  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
              }

       As an example, the program might return this on STDOUT:

              {
                  "type": "sftp",
                  "_root": "",
                  "_obscure": "pass",
                  "user": "me",
                  "pass": "mypassword",
                  "host": "sftp.example.com"
              }

       This  would  mean  that  an  SFTP  backend  would  be created on the fly for the user and pass/public_key
       returned in the output to the host given.  Note that since _obscure is set to pass, rclone  will  obscure
       the pass parameter before creating the backend (which is required for sftp backends).

       The program can manipulate the supplied user in any way.  For example, to proxy to many different sftp
       backends you could make the user be user@example.com, then set the host to example.com in the output
       and the user to user.  For security you’d probably want to restrict the host to a limited list.

       Note that an internal cache is keyed on user so only use  that  for  configuration,  don’t  use  pass  or
       public_key.   This  also  means that if a user’s password or public-key is changed the cache will need to
       expire (which takes 5 mins) before it takes effect.

       This can be used to build general purpose proxies to any kind of backend that rclone supports.
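
       As an illustrative sketch only (the bundled bin/test_proxy.py is the reference example), a
       proxy program could be as simple as a shell script that discards the supplied credentials and
       always returns the same backend config - a real proxy must, of course, validate the user and
       pass it receives on STDIN:

               #!/bin/sh
               # Hypothetical auth proxy sketch: read and discard the JSON
               # request from rclone, then emit a fixed sftp backend config.
               # A real proxy must parse and check "user"/"pass" instead.
               cat > /dev/null
               printf '%s\n' '{
                   "type": "sftp",
                   "_root": "",
                   "_obscure": "pass",
                   "user": "me",
                   "pass": "mypassword",
                   "host": "sftp.example.com"
               }'

       You would then start the server with --auth-proxy pointing at the script, e.g.
       --auth-proxy /path/to/proxy.sh (the script path is a placeholder).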

              rclone serve sftp remote:path [flags]

   Options
                    --addr string                            IPaddress:Port or :Port to bind server to (default "localhost:2022")
                    --auth-proxy string                      A program to use to create the backend from the auth
                    --authorized-keys string                 Authorized keys file (default "~/.ssh/authorized_keys")
                    --dir-cache-time duration                Time to cache directory entries for (default 5m0s)
                    --dir-perms FileMode                     Directory permissions (default 0777)
                    --file-perms FileMode                    File permissions (default 0666)
                    --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
                -h, --help                                   help for sftp
                    --key stringArray                        SSH private host key file (Can be multi-valued, leave blank to auto generate)
                    --no-auth                                Allow connections with no authentication if set
                    --no-checksum                            Don't compare checksums on up/download
                    --no-modtime                             Don't read/write the modification time (can speed things up)
                    --no-seek                                Don't allow seeking in files
                    --pass string                            Password for authentication
                    --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
                    --read-only                              Only allow read-only access
                    --stdio                                  Run an sftp server on stdin/stdout
                    --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
                    --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
                    --user string                            User name for authentication
                    --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
                    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
                    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
                    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
                    --vfs-case-insensitive                   If a file name not found, find a case insensitive match
                    --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
                    --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
                    --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
                    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
                    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
                    --vfs-read-wait duration                 Time to wait for in-sequence read before seeking (default 20ms)
                    --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
                    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)
                    --vfs-write-wait duration                Time to wait for in-sequence write before giving error (default 1s)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone serve - Serve a remote over a protocol.

rclone serve webdav

       Serve remote:path over WebDAV.

   Synopsis
       Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol.  This can be viewed with a
       WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write it.

   WebDAV options
   –etag-hash
       This controls the ETag header.  Without this flag the ETag will be based on the ModTime and Size  of  the
       object.

       If  this flag is set to “auto” then rclone will choose the first supported hash on the backend or you can
       use a named hash such as “MD5” or “SHA-1”.  Use the hashsum command to see the full list.
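
       For example, to base the ETag on the MD5 hash (assuming the backend supports it):

              rclone serve webdav --etag-hash MD5 remote:path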

   Server options
       Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000  or
       --addr  :8080 to listen to all IPs.  By default it only listens on localhost.  You can use port :0 to let
       the OS choose an available port.

       If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised
       - see the next section for info.

       --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server.  Note
       that this is the total time for a transfer.

       --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

       --baseurl controls the URL prefix that rclone serves from.  By default rclone will serve from  the  root.
       If  you  used  --baseurl  "/rclone" then rclone would serve from a URL starting with “/rclone/”.  This is
       useful if you wish to proxy rclone serve.  Rclone automatically  inserts  leading  and  trailing  “/”  on
       --baseurl,  so  --baseurl  "rclone",  --baseurl  "/rclone"  and  --baseurl  "/rclone/"  are  all  treated
       identically.
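
       For example, to serve under the /rclone prefix behind a reverse proxy, you might run:

              rclone serve webdav --baseurl /rclone --addr localhost:8080 remote:path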

       --template allows a user to specify a custom markup template for the HTTP and WebDAV serve
       functions.  The server exports the following markup to be used within the template to serve pages:

       Parameter     Description
       ──────────────────────────────────────────────────────────────────────────
       .Name         The full path of a file/directory.
       .Title        Directory listing of .Name
       .Sort         The current sort used.  This is changeable via the ?sort=
                     parameter.  Sort options: namedirfirst, name, size, time
                     (default namedirfirst)
       .Order        The current ordering used.  This is changeable via the
                     ?order= parameter.  Order options: asc, desc (default asc)
       .Query        Currently unused.
       .Breadcrumb   Allows for creating a relative navigation
       – .Link       The relative to the root link of the Text.
       – .Text       The Name of the directory.
       .Entries      Information about a specific file/directory.
       – .URL        The `url' of an entry.
       – .Leaf       Currently same as `URL' but intended to be `just' the name.
       – .IsDir      Boolean for if an entry is a directory or not.
       – .Size       Size in Bytes of the entry.
       – .ModTime    The UTC timestamp of an entry.
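
       As an illustration, a minimal template using some of these parameters might look like the sketch
       below (the built-in template is more elaborate); it would be passed to rclone as a file via
       --template.

              <html><body>
              <h1>{{ .Title }}</h1>
              <ul>
              {{ range .Entries }}
              <li><a href="{{ .URL }}">{{ .Leaf }}</a>{{ if not .IsDir }} ({{ .Size }} bytes){{ end }}</li>
              {{ end }}
              </ul>
              </body></html>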

   Authentication
       By default this will serve files without needing a login.

       You  can  either use an htpasswd file which can take lots of users, or set a single username and password
       with the --user and --pass flags.

       Use --htpasswd /path/to/htpasswd to provide an htpasswd file.  This is  in  standard  apache  format  and
       supports MD5, SHA1 and BCrypt for basic authentication.  Bcrypt is recommended.

       To create an htpasswd file:

              touch htpasswd
              htpasswd -B htpasswd user
              htpasswd -B htpasswd anotherUser

       The password file can be updated while rclone is running.

       Use --realm to set the authentication realm.
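
       For example, to require a single login (credentials illustrative):

              rclone serve webdav --user me --pass mypassword remote:path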

   SSL/TLS
       By default this will serve over HTTP.  If you want you can serve over HTTPS.  You will need to supply the
       --cert  and  --key  flags.   If  you  wish to do client side certificate validation then you will need to
       supply --client-ca also.

       --cert should be either a PEM encoded certificate or a concatenation of that  with  the  CA  certificate.
       --key  should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate
       authority certificate.

       --min-tls-version is the minimum TLS version that is acceptable.  Valid values are “tls1.0”,
       “tls1.1”, “tls1.2” and “tls1.3” (default “tls1.0”).
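
       For example, to serve over HTTPS requiring at least TLS 1.2 (certificate and key file names
       illustrative):

              rclone serve webdav --cert server.pem --key server.key --min-tls-version tls1.2 remote:path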

   VFS - Virtual File System
       This  command  uses the VFS layer.  This adapts the cloud storage objects that rclone uses into something
       which looks much more like a disk filing system.

       Cloud storage objects have lots of properties which aren’t like disk files - you  can’t  extend  them  or
       write  to  the middle of them, so the VFS layer has to deal with that.  Because there is no one right way
       of doing this there are various options explained below.

       The VFS layer also implements a directory cache - this caches info about files and directories  (but  not
       the data) in memory.

   VFS Directory Cache
       Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and
       not  refreshed  from the backend.  Changes made through the VFS will appear immediately or invalidate the
       cache.

              --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
              --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

       However, changes made directly on the cloud storage by the web interface or a different  copy  of  rclone
       will  only  be  picked  up  once  the  directory cache expires if the backend configured does not support
       polling for changes.  If the backend supports polling, changes will  be  picked  up  within  the  polling
       interval.

       You  can  send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they
       are.  Assuming only one rclone instance is running, you can reset the cache like this:

              kill -SIGHUP $(pidof rclone)

       If you configure rclone with a remote control then you can use rclone rc to  flush  the  whole  directory
       cache:

              rclone rc vfs/forget

       Or individual files or directories:

              rclone rc vfs/forget file=path/to/file dir=path/to/dir

   VFS File Buffering
       The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

       Each  open  file will try to keep the specified amount of data in memory at all times.  The buffered data
       is bound to one open file and won’t be shared.

       This flag is an upper limit for the memory used per open file.  The buffer will only use memory for
       data that is downloaded but not yet read.  If the buffer is empty, only a small amount of memory
       will be used.

       The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
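
       For example, with the default --buffer-size of 16Mi, ten open files could between them use up to
       160Mi of memory for buffering.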

   VFS File Caching
       These flags control the VFS file caching options.  File caching is necessary to make the VFS layer appear
       compatible with a normal file system.  It can be disabled at the cost of some compatibility.

       For example you’ll need to enable VFS caching if you want to read and write  simultaneously  to  a  file.
       See below for more details.

       Note  that  the  VFS  cache  is separate from the cache backend and you may find that you need one or the
       other or both.

              --cache-dir string                   Directory rclone will use for caching.
              --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
              --vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
              --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
              --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
              --vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

       If run with -vv rclone will print the location of the file cache.  The files are stored in the user cache
       file area which is OS dependent but can  be  controlled  with  --cache-dir  or  setting  the  appropriate
       environment variable.

       The  cache  has  4  different  modes  selected  by  --vfs-cache-mode.  The higher the cache mode the more
       compatible rclone becomes at the cost of using disk space.

       Note that files are written back to the remote only when  they  are  closed  and  if  they  haven’t  been
       accessed  for --vfs-write-back seconds.  If rclone is quit or dies with files that haven’t been uploaded,
       these will be uploaded next time rclone is run with the same flags.

       If using --vfs-cache-max-size note that the cache may exceed this size for two reasons.  Firstly  because
       it  is  only checked every --vfs-cache-poll-interval.  Secondly because open files cannot be evicted from
       the cache.

       You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes  if
       using --vfs-cache-mode > off.  This can potentially cause data corruption if you do.  You can work around
       this  by giving each rclone its own cache hierarchy with --cache-dir.  You don’t need to worry about this
       if the remotes in use don’t overlap.
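
       For example, to run this serve command with write caching and a dedicated cache directory (path
       illustrative):

              rclone serve webdav --vfs-cache-mode writes --cache-dir /var/cache/rclone-webdav remote:path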

   –vfs-cache-mode off
       In this mode (the default) the cache will read directly from the remote and write directly to the  remote
       without caching anything on disk.

       This will mean some operations are not possible

       • Files can’t be opened for both read AND write

       • Files opened for write can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files open for read with O_TRUNC will be opened write only

       • Files open for write only will behave as if O_TRUNC was supplied

       • Open modes O_APPEND, O_TRUNC are ignored

       • If an upload fails it can’t be retried

   –vfs-cache-mode minimal
       This is very similar to “off” except that files opened for read AND write will be buffered to disk.
       This means that files opened for write will be a lot more compatible, while using minimal disk
       space.

       These operations are not possible

       • Files opened for write only can’t be seeked

       • Existing files opened for write must have O_TRUNC set

       • Files opened for write only will ignore O_APPEND, O_TRUNC

       • If an upload fails it can’t be retried

   –vfs-cache-mode writes
       In this mode files opened for read only are still read directly from the remote; write only and
       read/write files are buffered to disk first.

       This mode should support all normal file system operations.

       If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

   –vfs-cache-mode full
       In this mode all reads and writes are buffered to and from disk.  When data is read from the remote  this
       is buffered to disk as well.

       In  this mode the files in the cache will be sparse files and rclone will keep track of which bits of the
       files it has downloaded.

       So if an application only reads the start of each file, then rclone will only buffer the start of
       the file.  These files will appear to be their full size in the cache, but they will be sparse files
       with only the downloaded data present in them.

       This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
       writes.

       When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead.  The  --buffer-size
       is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

       When  using  this  mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is
       set large if required.

       IMPORTANT: not all file systems support sparse files.  In particular FAT/exFAT do not.  Rclone will
       perform very badly if the cache directory is on a filesystem which doesn’t support sparse files and
       it will log an ERROR message if one is detected.

   Fingerprinting
       Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a  remote
       file.  Fingerprints are made from:

       • size

       • modification time

       • hash

       where available on an object.

       On  some  backends  some of these attributes are slow to read (they take an extra API call per object, or
       extra work per object).

       For example hash is slow with the local and sftp backends as they have to read the entire file and
       hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do
       an extra API call to fetch it.

       If you use the --vfs-fast-fingerprint flag then rclone will  not  include  the  slow  operations  in  the
       fingerprint.   This  makes  the fingerprinting less accurate but much faster and will improve the opening
       time of cached files.

       If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

       Note that if you change the value of this flag, the fingerprints  of  the  files  in  the  cache  may  be
       invalidated and the files will need to be downloaded again.
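
       For example, when running a cache over one of those backends:

              rclone serve webdav --vfs-cache-mode full --vfs-fast-fingerprint remote:path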

   VFS Chunked Reading
       When  rclone  reads  files from a remote it reads them in chunks.  This means that rather than requesting
       the whole file rclone reads the chunk specified.  This can  reduce  the  used  download  quota  for  some
       remotes  by  requesting  only  chunks from the remote that are actually read, at the cost of an increased
       number of requests.

       These flags control the chunking:

              --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
              --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

       Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each  read.
       When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for
       each  open file will get doubled only until the specified value is reached.  If the value is “off”, which
       is the default, the limit is disabled and the chunk size will grow indefinitely.

       With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded:
       0-100M, 100M-200M, 200M-300M, 300M-400M and so on.  When --vfs-read-chunk-size-limit 500M  is  specified,
       the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

       Setting --vfs-read-chunk-size to 0 or “off” disables chunked reading.

   VFS Performance
       These flags may be used to enable/disable features of the VFS for performance or other reasons.  See also
       the chunked reading feature.

       In  particular  S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a
       slightly different effect) as each read of the modification time takes a transaction.

              --no-checksum     Don't compare checksums on up/download.
              --no-modtime      Don't read/write the modification time (can speed things up).
              --no-seek         Don't allow seeking in files.
              --read-only       Only allow read-only access.

       Sometimes rclone is delivered reads or writes out of order.  Rather than seeking rclone will wait a short
       time for the in sequence read or write to come in.  These flags only come into effect when not  using  an
       on disk cache file.

              --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
              --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

       When  using  VFS  write caching (--vfs-cache-mode with value writes or full), the global flag --transfers
       can be set to adjust the number of parallel uploads of modified files from the cache (the related  global
       flag --checkers has no effect on the VFS).

              --transfers int  Number of file transfers to run in parallel (default 4)

   VFS Case Sensitivity
       Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used
       when opening a file.

       File  systems  in modern Windows are case-insensitive but case-preserving: although existing files can be
       opened using any case, the exact case used to create the file is preserved and available for programs  to
       query.  It is not allowed for two files in the same directory to differ only by case.

       Usually  file  systems  on  macOS  are  case-insensitive.   It  is  possible  to  make macOS file systems
       case-sensitive but that is not the default.

       The --vfs-case-insensitive VFS flag controls how rclone  handles  these  two  cases.   If  its  value  is
       “false”, rclone passes file names to the remote as-is.  If the flag is “true” (or appears without a value
       on the command line), rclone may perform a “fixup” as explained below.

       The  user  may specify a file name to open/delete/rename/etc with a case different than what is stored on
       the remote.  If an argument refers to an existing file with exactly the same name, then the case  of  the
       existing  file on the disk will be used.  However, if a file name with exactly the same name is not found
       but a name differing only by case exists, rclone will transparently fixup the name.  This  fixup  happens
       only  when  an  existing  file  is  requested.   Case sensitivity of file names created anew by rclone is
       controlled by the underlying remote.

       Note that case sensitivity of the operating system running rclone  (the  target)  may  differ  from  case
       sensitivity  of  a  file  system  presented by rclone (the source).  The flag controls whether “fixup” is
       performed to satisfy the target.

       If the flag is not provided on the command line, then its default value depends on the  operating  system
       where  rclone  runs:  “true”  on Windows and macOS, “false” otherwise.  If the flag is provided without a
       value, then it is “true”.

   VFS Disk Options
       This flag allows you to manually set the statistics about the filing system.  It can be useful when those
       statistics cannot be read correctly automatically.

              --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
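
       For example, to report a fixed total size of 256 GiB:

              rclone serve webdav --vfs-disk-space-total-size 256G remote:path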

   Alternate report of used bytes
       Some backends, most notably S3, do not report the amount of bytes used.  If you need this information  to
       be  available  when  running df on the filesystem, then pass the flag --vfs-used-is-size to rclone.  With
       this flag set, instead of relying on the backend to report this information, rclone will scan  the  whole
       remote similar to rclone size and compute the total used space itself.

       WARNING.  Contrary  to  rclone  size, this flag ignores filters so that the result is accurate.  However,
       this is very inefficient and may cost lots of API calls resulting in extra charges.  Use  it  as  a  last
       resort and only with caching.

   Auth Proxy
       If  you  supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate
       backends on the fly which then are used to authenticate incoming requests.  This uses a simple JSON based
       protocol with input on STDIN and output on STDOUT.

       PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used  together,  if  --auth-proxy  is  set  the
       authorized keys option will be ignored.

       There is an example program bin/test_proxy.py in the rclone source code.

       The program’s job is to take a user and pass on its input and turn those into the config for a
       backend, which it writes to STDOUT in JSON format.  This config will have any default parameters for
       the backend added, but it won’t use configuration from environment variables or command line options
       - it is the job of the proxy program to make a complete config.

       The generated config must contain this extra parameter - _root - the root to use for the backend

       It may also contain this parameter - _obscure - a comma separated list of the parameters to obscure

       If password authentication was used by the client, input to the  proxy  process  (on  STDIN)  would  look
       similar to this:

              {
                  "user": "me",
                  "pass": "mypassword"
              }

       If  public-key  authentication  was  used by the client, input to the proxy process (on STDIN) would look
       similar to this:

              {
                  "user": "me",
                  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
              }

       The proxy program could then, for example, return this on STDOUT:

              {
                  "type": "sftp",
                  "_root": "",
                  "_obscure": "pass",
                  "user": "me",
                  "pass": "mypassword",
                  "host": "sftp.example.com"
              }

       This would mean that an SFTP backend would be created  on  the  fly  for  the  user  and  pass/public_key
       returned  in  the output to the host given.  Note that since _obscure is set to pass, rclone will obscure
       the pass parameter before creating the backend (which is required for sftp backends).

       The program can manipulate the supplied user in any way.  For example, to proxy to many different
       sftp backends, you could make the user be user@example.com, then set the host to example.com in the
       output and the user to user.  For security you’d probably want to restrict the host to a limited
       list.

       Note that an internal cache is keyed on user, so only use that for configuration - don’t use pass or
       public_key.  This also means that if a user’s password or public key is changed, the cache will need
       to expire (which takes 5 minutes) before the change takes effect.

       This can be used to build general purpose proxies to any kind of backend that rclone supports.

              rclone serve webdav remote:path [flags]

   Options
                    --addr string                            IPaddress:Port or :Port to bind server to (default "localhost:8080")
                    --auth-proxy string                      A program to use to create the backend from the auth
                    --baseurl string                         Prefix for URLs - leave blank for root
                    --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
                    --client-ca string                       Client certificate authority to verify clients with
                    --dir-cache-time duration                Time to cache directory entries for (default 5m0s)
                    --dir-perms FileMode                     Directory permissions (default 0777)
                    --disable-dir-list                       Disable HTML directory list on GET request for a directory
                    --etag-hash string                       Which hash to use for the ETag, or auto or blank for off
                    --file-perms FileMode                    File permissions (default 0666)
                    --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
                -h, --help                                   help for webdav
                    --htpasswd string                        htpasswd file - if not provided no authentication is done
                    --key string                             SSL PEM Private key
                    --max-header-bytes int                   Maximum size of request header (default 4096)
                    --min-tls-version string                 Minimum TLS version that is acceptable (default "tls1.0")
                    --no-checksum                            Don't compare checksums on up/download
                    --no-modtime                             Don't read/write the modification time (can speed things up)
                    --no-seek                                Don't allow seeking in files
                    --pass string                            Password for authentication
                    --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
                    --read-only                              Only allow read-only access
                    --realm string                           Realm for authentication (default "rclone")
                    --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
                    --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
                    --template string                        User-specified template
                    --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
                    --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
                    --user string                            User name for authentication
                    --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
                    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
                    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
                    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
                    --vfs-case-insensitive                   If a file name not found, find a case insensitive match
                    --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
                    --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
                    --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
                    --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
                    --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
                    --vfs-read-wait duration                 Time to wait for in-sequence read before seeking (default 20ms)
                    --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
                    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)
                    --vfs-write-wait duration                Time to wait for in-sequence write before giving error (default 1s)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone serve - Serve a remote over a protocol.

rclone settier

       Changes storage class/tier of objects in remote.

   Synopsis
       rclone settier changes the storage tier or class of objects at the remote, if supported.  A few
       cloud storage services provide different storage classes for objects, for example AWS S3 and
       Glacier, Azure Blob storage (Hot, Cool and Archive) and Google Cloud Storage (Regional Storage,
       Nearline, Coldline etc.).

       Note that certain tier changes make objects unavailable for immediate access.  For example tiering
       to archive in azure blob storage puts objects into a frozen state; the user can restore them by
       setting the tier to Hot/Cool.  Similarly, tiering S3 objects to Glacier makes them inaccessible.

       You can use it to tier a single object

              rclone settier Cool remote:path/file

       Or use rclone filters to set the tier on only specific files

              rclone --include "*.txt" settier Hot remote:path/dir

       Or just provide a remote directory and all files in the directory will be tiered

              rclone settier tier remote:path/dir

              rclone settier tier remote:path [flags]

   Options
                -h, --help   help for settier

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone test

       Run a test command

   Synopsis
       Rclone test is used to run test commands.

       Select which test command you want with the subcommand, e.g.

              rclone test memory remote:

       Each subcommand has its own options which you can see in their help.

       NB: Be careful running these commands - they may do strange things, so reading their documentation
       first is recommended.

   Options
                -h, --help   help for test

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

       • rclone test changenotify - Log any change notify requests for the remote passed in.

       • rclone test histogram - Makes a histogram of file name characters.

       • rclone test info - Discovers file name or other limitations for paths.

       • rclone test makefile - Make files with random contents of the size given

       • rclone test makefiles - Make a random file hierarchy in a directory

       • rclone test memory - Load all the objects at remote:path into memory and report memory stats.

rclone test changenotify

       Log any change notify requests for the remote passed in.

              rclone test changenotify remote: [flags]

   Options
                -h, --help                     help for changenotify
                    --poll-interval duration   Time to wait between polling for changes (default 10s)

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone test - Run a test command

rclone test histogram

       Makes a histogram of file name characters.

   Synopsis
       This command outputs JSON which shows the histogram of characters used in filenames  in  the  remote:path
       specified.

       The  data  doesn’t  contain  any  identifying  information  but  is useful for the rclone developers when
       developing filename compression.

              rclone test histogram [remote:path] [flags]

   Options
                -h, --help   help for histogram

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone test - Run a test command

rclone test info

       Discovers file name or other limitations for paths.

   Synopsis
       rclone info discovers what filenames and upload methods are possible to write to the paths passed in  and
       how  long  they can be.  It can take some time.  It will write test files into the remote:path passed in.
       It outputs a bit of go code for each one.

       NB this can create undeletable files and other hazards - use with care

              rclone test info [remote:path]+ [flags]

   Options
                    --all                    Run all tests
                    --check-control          Check control characters
                    --check-length           Check max filename length
                    --check-normalization    Check UTF-8 Normalization
                    --check-streaming        Check uploads with indeterminate file size
                -h, --help                   help for info
                    --upload-wait duration   Wait after writing a file
                    --write-json string      Write results to file

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone test - Run a test command

rclone test makefile

       Make files with random contents of the size given

              rclone test makefile <size> [<file>]+ [flags]

   Options
                    --ascii      Fill files with random ASCII printable bytes only
                    --chargen    Fill files with a ASCII chargen pattern
                -h, --help       help for makefile
                    --pattern    Fill files with a periodic pattern
                    --seed int   Seed for the random number generator (0 for random) (default 1)
                    --sparse     Make the files sparse (appear to be filled with ASCII 0x00)
                    --zero       Fill files with ASCII 0x00

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone test - Run a test command

rclone test makefiles

       Make a random file hierarchy in a directory

              rclone test makefiles <dir> [flags]

   Options
                    --ascii                      Fill files with random ASCII printable bytes only
                    --chargen                    Fill files with a ASCII chargen pattern
                    --files int                  Number of files to create (default 1000)
                    --files-per-directory int    Average number of files per directory (default 10)
                -h, --help                       help for makefiles
                    --max-file-size SizeSuffix   Maximum size of files to create (default 100)
                    --max-name-length int        Maximum size of file names (default 12)
                    --min-file-size SizeSuffix   Minimum size of file to create
                    --min-name-length int        Minimum size of file names (default 4)
                    --pattern                    Fill files with a periodic pattern
                    --seed int                   Seed for the random number generator (0 for random) (default 1)
                    --sparse                     Make the files sparse (appear to be filled with ASCII 0x00)
                    --zero                       Fill files with ASCII 0x00

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone test - Run a test command

rclone test memory

       Load all the objects at remote:path into memory and report memory stats.

              rclone test memory remote:path [flags]

   Options
                -h, --help   help for memory

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone test - Run a test command

rclone touch

       Create new file or change file modification time.

   Synopsis
       Set the modification time on file(s) as specified by remote:path to have the current time.

       If remote:path does not exist then a zero sized file will be created, unless --no-create  or  --recursive
       is provided.

       If --recursive is used then it recursively sets the modification time on all existing files found
       under the path.  Filters are supported, and you can test with the --dry-run or the --interactive
       flag.

       If --timestamp is used then it sets the modification time to that time instead of the current time.
       Times may be specified as one of:

       • `YYMMDD' - e.g. 17.10.30

       • `YYYY-MM-DDTHH:MM:SS' - e.g. 2006-01-02T15:04:05

       • `YYYY-MM-DDTHH:MM:SS.SSS' - e.g. 2006-01-02T15:04:05.123456789

       Note that the value of --timestamp is in UTC.  If you want local time then add the --localtime flag.
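
       For example, to set a specific time in UTC on a single file (time illustrative):

              rclone touch --timestamp 2006-01-02T15:04:05 remote:path/file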

              rclone touch remote:path [flags]

   Options
                -h, --help               help for touch
                    --localtime          Use localtime for timestamp, not UTC
                -C, --no-create          Do not create the file if it does not exist (implied with --recursive)
                -R, --recursive          Recursively touch all files
                -t, --timestamp string   Use specified time instead of the current time of day

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

rclone tree

       List the contents of the remote in a tree like fashion.

   Synopsis
       rclone tree lists the contents of a remote in a similar way to the unix tree command.

       For example

              $ rclone tree remote:path
              /
              ├── file1
              ├── file2
              ├── file3
              └── subdir
                  ├── file4
                  └── file5

              1 directories, 5 files

       You can use any of the filtering options with the tree command (e.g. --include and --exclude).  You
       can also use --fast-list.

       The rclone tree command has many options for controlling the listing which are compatible with the
       unix tree command, for example you can include file sizes with --size.  Note that not all of them
       have short options as they conflict with rclone’s short options.

       For a more interactive navigation of the remote see the ncdu command.

              rclone tree remote:path [flags]

   Options
                -a, --all             All files are listed (list . files too)
                -C, --color           Turn colorization on always
                -d, --dirs-only       List directories only
                    --dirsfirst       List directories before files (-U disables)
                    --full-path       Print the full path prefix for each file
                -h, --help            help for tree
                    --level int       Descend only level directories deep
                -D, --modtime         Print the date of last modification.
                    --noindent        Don't print indentation lines
                    --noreport        Turn off file/directory count at end of tree listing
                -o, --output string   Output to file instead of stdout
                -p, --protections     Print the protections for each file.
                -Q, --quote           Quote filenames with double quotes.
                -s, --size            Print the size in bytes of each file.
                    --sort string     Select sort: name,version,size,mtime,ctime
                    --sort-ctime      Sort files by last status change time
                -t, --sort-modtime    Sort files by last modification time
                -r, --sort-reverse    Reverse the order of the sort
                -U, --unsorted        Leave files unsorted
                    --version         Sort files alphanumerically by version

       See the global flags page for global options not listed here.

   SEE ALSO
       • rclone - Show help for rclone commands, flags and backends.

   Copying single files
       rclone normally syncs or copies directories.  However, if the source remote points to a file, rclone will
       just copy that file.  The destination remote must point to a directory  -  rclone  will  give  the  error
       Failed to create file system for "remote:file": is a file not a directory if it isn’t.

       For example, suppose you have a remote with a file in it called test.jpg; you could copy just that
       file like this

              rclone copy remote:test.jpg /tmp/download

       The file test.jpg will be placed inside /tmp/download.

       This is equivalent to specifying

              rclone copy --files-from /tmp/files remote: /tmp/download

       Where /tmp/files contains the single line

              test.jpg

       It  is  recommended  to use copy when copying individual files, not sync.  They have pretty much the same
       effect but copy will use a lot less memory.

   Syntax of remote paths
       The syntax of the paths passed to the rclone command is as follows.

   /path/to/dir
       This refers to the local file system.

       On Windows \ may be used instead of / in local paths only; non-local paths must use /.  See the
       local filesystem documentation for more about Windows-specific paths.

       These  paths  needn’t  start  with  a leading / - if they don’t then they will be relative to the current
       directory.

   remote:path/to/dir
       This refers to a directory path/to/dir on remote: as defined in the config file (configured  with  rclone
       config).

   remote:/path/to/dir
       On most backends this refers to the same directory as remote:path/to/dir and that format should be
       preferred.  On a very small number of remotes (FTP, SFTP, Dropbox for business)  this  will  refer  to  a
       different  directory.   On these, paths without a leading / will refer to your “home” directory and paths
       with a leading / will refer to the root.

   :backend:path/to/dir
       This is an advanced form for creating remotes on the fly.  backend should be the  name  or  prefix  of  a
       backend (the type in the config file) and all the configuration for the backend should be provided on the
       command line (or in environment variables).

       Here are some examples:

              rclone lsd --http-url https://pub.rclone.org :http:

       To list all the directories in the root of https://pub.rclone.org/.

              rclone lsf --http-url https://example.com :http:path/to/dir

       To list files and directories in https://example.com/path/to/dir/

              rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir

       To copy files and directories in https://example.com/path/to/dir to /tmp/dir.

              rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir

       To  copy  files  and directories from example.com in the relative directory path/to/dir to /tmp/dir using
       sftp.

   Connection strings
       The above examples can also be written using a connection string syntax,  so  instead  of  providing  the
       arguments  as  command line parameters --http-url https://pub.rclone.org they are provided as part of the
       remote specification as a kind of connection string.

              rclone lsd ":http,url='https://pub.rclone.org':"
              rclone lsf ":http,url='https://example.com':path/to/dir"
              rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir
              rclone copy :sftp,host=example.com:path/to/dir /tmp/dir

       These can be used to modify existing remotes as well as to create new remotes with the on-the-fly syntax.
       This example is equivalent to adding the --drive-shared-with-me parameter to the remote gdrive:.

              rclone lsf "gdrive,shared_with_me:path/to/dir"

       The major advantage of the connection string syntax is that it only applies to that one remote, not
       to all the remotes of that type on the command line.  A common confusion is this attempt to copy a
       file shared on google drive to the normal drive, which does not work because the
       --drive-shared-with-me flag applies to both the source and the destination.

              rclone copy --drive-shared-with-me gdrive:shared-file.txt gdrive:

       However using the connection string syntax, this does work.

              rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive:

       Note that the connection string only affects the options  of  the  immediate  backend.   If  for  example
       gdriveCrypt  is  a  crypt  based on gdrive, then the following command will not work as intended, because
       shared_with_me is ignored by the crypt backend:

              rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt:

       The connection strings have the following syntax

              remote,parameter=value,parameter2=value2:path/to/dir
              :backend,parameter=value,parameter2=value2:path/to/dir

       If the parameter has a : or , then it must be placed in quotes " or ', so

              remote,parameter="colon:value",parameter2="comma,value":path/to/dir
              :backend,parameter='colon:value',parameter2='comma,value':path/to/dir

       If a quoted value needs to include that quote, then it should be doubled, so

              remote,parameter="with""quote",parameter2='with''quote':path/to/dir

       This will make parameter be with"quote and parameter2 be with'quote.

       If you leave off the =parameter then rclone will substitute =true which works very well with flags.   For
       example, to use s3 configured in the environment you could use:

              rclone lsd :s3,env_auth:

       Which is equivalent to

              rclone lsd :s3,env_auth=true:

       Note that on the command line you might need to surround these connection strings with " or ' to stop the
       shell interpreting any special characters within them.

       If  you are a shell master then you’ll know which strings are OK and which aren’t, but if you aren’t sure
       then enclose them in " and use ' as the inside quote.  This syntax works on all OSes.

              rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir

       On Linux/macOS some characters are still interpreted inside " strings in the shell (notably \ and  $  and
       ")  so  if your strings contain those you can swap the roles of " and ' thus.  (This syntax does not work
       on Windows.)

              rclone copy ':http,url="https://example.com":path/to/dir' /tmp/dir

   Connection strings, config and logging
       If you supply extra configuration to a backend by command line flag, environment variable or
       connection string then rclone will add a suffix based on the hash of the config to the name of the
       remote, e.g.

              rclone -vv lsf --s3-chunk-size 20M s3:

       Has the log message

              DEBUG : s3: detected overridden config - adding "{Srj1p}" suffix to name

       This  is  so  rclone  can  tell  the  modified  remote  apart from the unmodified remote when caching the
       backends.

       This should only be noticeable in the logs.

       This means that on the fly backends such as

              rclone -vv lsf :s3,env_auth:

       Will get their own names

              DEBUG : :s3: detected overridden config - adding "{YTu53}" suffix to name

   Valid remote names
       Remote names are case sensitive, and must adhere to the following rules:

       • May only contain 0-9, A-Z, a-z, _, -, . and space.

       • May not start with - or space.

   Quoting and the shell
       When  you  are  typing  commands  to your computer you are using something called the command line shell.
       This interprets various characters in an OS specific way.

       Here are some gotchas which may help users unfamiliar with the shell rules

   Linux / OSX
       If your names have spaces or shell metacharacters (e.g. *, ?, $, ', ", etc.)  then you must  quote  them.
       Use single quotes ' by default.

              rclone copy 'Important files?' remote:backup

       If you want to send a ' you will need to use ", e.g.

              rclone copy "O'Reilly Reviews" remote:backup

       The  rules  for  quoting  metacharacters  are complicated and if you want the full details you’ll have to
       consult the manual page for your shell.

   Windows
       If your names have spaces in you need to put them in ", e.g.

              rclone copy "E:\folder name\folder name\folder name" remote:backup

       If you are using the root directory on its own then don’t quote it (see #464 for why), e.g.

              rclone copy E:\ remote:backup

   Copying files or directories with : in the names
       rclone uses : to mark a remote name.  This is, however, a valid filename component in  non-Windows  OSes.
       The  remote  name  parser  will  only search for a : up to the first / so if you need to act on a file or
       directory like this then use the full path starting with a /, or use ./ as a current directory prefix.

       So to sync a directory called sync:me to a remote called remote: use

              rclone sync -i ./sync:me remote:path

       or

              rclone sync -i /full/path/to/sync:me remote:path

   Server Side Copy
       Most remotes (but not all - see the overview) support server-side copy.

       This means if you want to copy one folder to another  then  rclone  won’t  download  all  the  files  and
       re-upload them; it will instruct the server to copy them in place.

       Eg

              rclone copy s3:oldbucket s3:newbucket

       Will copy the contents of oldbucket to newbucket without downloading and re-uploading.

       Remotes which don’t support server-side copy will download and re-upload in this case.

       Server side copies are used with sync and copy and will be identified in the log when using the -v
       flag.  The move command may also use them if the remote doesn’t support server-side move directly.
       This is done by issuing a server-side copy then a delete, which is much quicker than a download and
       re-upload.

       Server side copies will only be attempted if the remote names are the same.

       This can be used when scripting to make aged backups efficiently, e.g.

              rclone sync -i remote:current-backup remote:previous-backup
              rclone sync -i /path/to/files remote:current-backup

   Metadata support
       Metadata  is  data about a file which isn’t the contents of the file.  Normally rclone only preserves the
       modification time and the content (MIME) type where possible.

       Rclone supports preserving all  the  available  metadata  on  files  (not  directories)  when  using  the
       --metadata or -M flag.

       Exactly what metadata is supported and what that support means depends on the backend.  Backends
       that support metadata have a metadata section in their docs and are listed in the features table
       (e.g. local, s3).

       Rclone  only  supports  a  one-time  sync  of metadata.  This means that metadata will be synced from the
       source object to the destination object only  when  the  source  object  has  changed  and  needs  to  be
       re-uploaded.   If  the  metadata  subsequently  changes  on the source object without changing the object
       itself then it won’t be synced to the destination object.  This is in line  with  the  way  rclone  syncs
       Content-Type without the --metadata flag.

       Using --metadata when syncing from local to local will preserve file attributes such as file mode, owner,
       extended attributes (not Windows).

       Note  that  arbitrary  metadata  may be added to objects using the --metadata-set key=value flag when the
       object is first uploaded.  This flag can be repeated as many times as necessary.
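
       For example, to upload files while attaching two extra metadata items (keys and values
       illustrative):

              rclone copy -M --metadata-set "cache-control=no-cache" --metadata-set "owner=user1" /path/to/files remote:path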

   Types of metadata
       Metadata is divided into two types: system metadata and user metadata.

       Metadata which the backend uses itself is called system metadata.  For example on the local  backend  the
       system metadata uid will store the user ID of the file when used on a unix based platform.

       Arbitrary metadata is called user metadata and this can be set in any way desired.

       When objects are copied from backend to backend, the backends will attempt to interpret system metadata if it is
       supplied.  Metadata may change from being user metadata to system metadata as objects are copied  between
       different  backends.  For example copying an object from s3 sets the content-type metadata.  In a backend
       which understands this (like azureblob) this will become the Content-Type of the object.   In  a  backend
       which  doesn’t  understand  this (like the local backend) this will become user metadata.  However should
       the local object be copied back to s3, the Content-Type will be set correctly.

   Metadata framework
       Rclone implements a metadata framework which can read metadata from an object and write it to the  object
       when (and only when) it is being uploaded.

       This metadata is stored as a dictionary with string keys and string values.

       There are some limits on the names of the keys (these may be clarified further in the future).

       • must be lower case

       • may contain only a-z, 0-9, and the characters .  - or _

       • length is backend dependent

       Each  backend  can  provide  system metadata that it understands.  Some backends can also store arbitrary
       user metadata.

       Where possible the key names are standardized, so it is possible, for example, to copy object metadata
       from s3 to azureblob and the metadata will be translated appropriately.

       Some  backends  have limits on the size of the metadata and rclone will give errors on upload if they are
       exceeded.

   Metadata preservation
       The goal of the implementation is to

       1. Preserve metadata if at all possible

       2. Interpret metadata if at all possible

       The consequence of 1 is that you can copy an S3 object to a local disk then back to S3 losslessly.
       Likewise you can copy a local file with file attributes and xattrs from local disk to s3 and back again
       losslessly.

       The consequence of 2 is that you can copy an S3 object with metadata to  Azureblob  (say)  and  have  the
       metadata appear on the Azureblob object also.

   Standard system metadata
       Here is a table of standard system metadata which, if appropriate, a backend may implement.

       key                   description                                 example
       ─────────────────────────────────────────────────────────────────────────────────────────────────
       mode                  File type and mode: octal, unix style       0100664
       uid                   User ID of owner: decimal number            500
       gid                   Group ID of owner: decimal number           500
       rdev                  Device ID (if special file): hexadecimal    0
       atime                 Time of last access: RFC 3339               2006-01-02T15:04:05.999999999Z07:00
       mtime                 Time of last modification: RFC 3339         2006-01-02T15:04:05.999999999Z07:00
       btime                 Time of file creation (birth): RFC 3339     2006-01-02T15:04:05.999999999Z07:00
       cache-control         Cache-Control header                        no-cache
       content-disposition   Content-Disposition header                  inline
       content-encoding      Content-Encoding header                     gzip
       content-language      Content-Language header                     en-US
       content-type          Content-Type header                         text/plain

       If supplied in the metadata, the keys mtime and content-type will take precedence over the modification
       time and Content-Type read from the source object.

       Hashes are not included in system metadata as there is a well defined way of reading those already.

   Options
       Rclone has a number of options to control its behaviour.

       Options that take parameters can have the values passed in two ways, --option=value  or  --option  value.
       However  boolean  (true/false) options behave slightly differently to the other options in that --boolean
       sets the option to true and the absence of the flag sets it to false.  It is  also  possible  to  specify
       --boolean=false  or --boolean=true.  Note that --boolean false is not valid - this is parsed as --boolean
       and the false is parsed as an extra command line argument for rclone.

   Time or duration options
       TIME or DURATION options can be specified as a duration string or a time string.

       A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a
       unit suffix, such as "300ms", "-1.5h" or "2h45m".  The default unit is seconds, and the following
       abbreviations are valid:

       • ms - Milliseconds

       • s - Seconds

       • m - Minutes

       • h - Hours

       • d - Days

       • w - Weeks

       • M - Months

       • y - Years

       These can also be specified as an absolute time in the following formats:

       • RFC3339 - e.g. 2006-01-02T15:04:05Z or 2006-01-02T15:04:05+07:00

       • ISO8601 Date and time, local timezone - 2006-01-02T15:04:05

       • ISO8601 Date and time, local timezone - 2006-01-02 15:04:05

       • ISO8601 Date - 2006-01-02 (YYYY-MM-DD)
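
       For example, the filtering flags --min-age and --max-age (documented elsewhere in this manual) accept
       both forms (the remote name is illustrative):

              rclone ls remote:path --max-age 2h45m
              rclone ls remote:path --min-age 2006-01-02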

   Size options
       Options which use SIZE use KiB (multiples of 1024 bytes) by default.  However, a suffix of B for Byte, K
       for KiB, M for MiB, G for GiB, T for TiB and P for PiB may be used.  These are binary units: the suffixes
       denote 1, 2**10, 2**20, 2**30, 2**40 and 2**50 bytes respectively.
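
       For example, SIZE-valued flags described later in this section, such as --buffer-size and --max-transfer,
       accept these suffixes (the paths are illustrative):

              rclone copy /path/to/files remote:path --buffer-size 32M --max-transfer 10G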

   –backup-dir=DIR
       When using sync, copy or move, any files which would have been overwritten or deleted are moved in their
       original hierarchy into this directory.

       If --suffix is set, then the moved files will have the suffix added to them.  If there is a file with the
       same path (after the suffix has been added) in DIR, then it will be overwritten.

       The  remote  in  use  must  support  server-side  move  or  copy  and you must use the same remote as the
       destination of the sync.  The backup directory must not overlap  the  destination  directory  without  it
       being excluded by a filter rule.

       For example

              rclone sync -i /path/to/local remote:current --backup-dir remote:old

       will sync /path/to/local to remote:current, and any files which would have been updated or deleted will
       be stored in remote:old.

       If running rclone from a script you might want to use today’s  date  as  the  directory  name  passed  to
       --backup-dir to store the old files, or you might want to pass --suffix with today’s date.
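
       A minimal sketch of that pattern, assuming a POSIX shell (the remote name and paths are illustrative):

              rclone sync -i /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)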

       See --compare-dest and --copy-dest.

   –bind string
       Local  address  to  bind  to  for  outgoing  connections.  This can be an IPv4 address (1.2.3.4), an IPv6
       address (1234::789A) or host name.  If the host name doesn’t resolve or resolves  to  more  than  one  IP
       address it will give an error.
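
       For example, to force outgoing connections through a particular local interface (the address and paths
       are illustrative):

              rclone sync -i /path/to/files remote:path --bind 192.168.1.10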

   –bwlimit=BANDWIDTH_SPEC
       This option controls the bandwidth limit.  For example

              --bwlimit 10M

       would mean limit the upload and download bandwidth to 10 MiB/s.  NB this is bytes per second not bits per
       second.  To use a single limit, specify the desired bandwidth in KiB/s, or use a suffix B|K|M|G|T|P.  The
       default is 0 which means to not limit bandwidth.

       The upload and download bandwidth can be specified separately, as --bwlimit UP:DOWN, so

              --bwlimit 10M:100k

       would  mean limit the upload bandwidth to 10 MiB/s and the download bandwidth to 100 KiB/s.  Either limit
       can be “off” meaning no limit, so to just limit the upload bandwidth you would use

              --bwlimit 10M:off

       this would limit the upload bandwidth to 10 MiB/s but the download bandwidth would be unlimited.

       When specified as above the bandwidth limits last for the duration of run of the rclone binary.

       It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at
       certain times.  To specify a timetable, format your entries as WEEKDAY-HH:MM,BANDWIDTH
       WEEKDAY-HH:MM,BANDWIDTH..., where:

       • BANDWIDTH can be a single number, e.g. 100k, or a pair of numbers for upload:download, e.g. 10M:1M.

       • WEEKDAY can be written as the whole word or only using the first 3 characters.  It is optional.

       • HH:MM is an hour from 00:00 to 23:59.

       An example of a typical timetable to avoid link saturation during daytime working hours could be:

       --bwlimit "08:00,512k 12:00,10M 13:00,512k 18:00,30M 23:00,off"

       In this example, the transfer bandwidth will be set to 512 KiB/s at 8am every day.  At noon, it will rise
       to 10 MiB/s, and drop back to 512 KiB/sec at 1pm.  At 6pm, the bandwidth limit will be set to  30  MiB/s,
       and  at  11pm  it  will  be  completely disabled (full speed).  Anything between 11pm and 8am will remain
       unlimited.

       An example of timetable with WEEKDAY could be:

       --bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"

       It means that the transfer bandwidth will be set to 512 KiB/s on Monday.  It will rise to 10 MiB/s
       before the end of Friday.  At 10:00 on Saturday it will be set to 1 MiB/s.  From 20:00 on Sunday it will
       be unlimited.

       Timeslots without WEEKDAY are extended to the whole week.  So this example:

       --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"

       Is equivalent to this:

       --bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M  Wed-12:00,1M  Thu-12:00,1M  Fri-12:00,1M  Sat-12:00,1M
       Sun-12:00,1M Sun-20:00,off"

       Bandwidth limits apply to the data transfer for all backends.  For most backends the directory listing
       bandwidth is also included (exceptions being the non-HTTP backends, ftp, sftp and storj).

       Note that the units are Byte/s, not bit/s.  Typically connections are measured  in  bit/s  -  to  convert
       divide  by  8.  For example, let’s say you have a 10 Mbit/s connection and you wish rclone to use half of
       it - 5 Mbit/s.  This is 5/8 = 0.625 MiB/s so you would use a --bwlimit 0.625M parameter for rclone.

       On Unix systems (Linux, macOS, ...)  the bandwidth limiter can be toggled by sending a SIGUSR2 signal to
       rclone.  This allows you to remove the limit from a long-running rclone transfer and to restore it to the
       value specified with --bwlimit quickly when needed.  Assuming there is only one rclone instance running,
       you can toggle the limiter like this:

              kill -SIGUSR2 $(pidof rclone)

       If you configure rclone with a remote control then you can change the bwlimit dynamically:

              rclone rc core/bwlimit rate=1M

   –bwlimit-file=BANDWIDTH_SPEC
       This option controls the per-file bandwidth limit.  For the options see the --bwlimit flag.

       For example use this to allow no transfers to be faster than 1 MiB/s

              --bwlimit-file 1M

       This can be used in conjunction with --bwlimit.

       Note that if a schedule is provided, each file will use the schedule in effect at the start of its
       transfer.
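
       For example, to cap the total bandwidth at 10 MiB/s while limiting each individual file to 1 MiB/s (the
       paths are illustrative):

              rclone copy /path/to/files remote:path --bwlimit 10M --bwlimit-file 1M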

   –buffer-size=SIZE
       Use this sized buffer to speed up file  transfers.   Each  --transfer  will  use  this  much  memory  for
       buffering.

       When  using  mount  or cmount each open file descriptor will use this much memory for buffering.  See the
       mount documentation for more details.

       Set to 0 to disable the buffering for the minimum memory usage.

       Note that the memory allocation of the buffers is influenced by the --use-mmap flag.

   –cache-dir=DIR
       Specify the directory rclone will use for caching, to override the default.

       The default value depends on the operating system:

       • Windows %LocalAppData%\rclone, if LocalAppData is defined.

       • macOS $HOME/Library/Caches/rclone if HOME is defined.

       • Unix $XDG_CACHE_HOME/rclone if XDG_CACHE_HOME is defined, else $HOME/.cache/rclone if HOME is defined.

       • Fallback (on all OS) to $TMPDIR/rclone, where TMPDIR is the value from --temp-dir.

       You can use the config paths command to see the current value.

       The cache directory is heavily used by the VFS File Caching mount feature, but also by serve, GUI and
       other parts of rclone.

   –check-first
       If this flag is set then in a sync, copy or move, rclone will do all the checks to see whether files need
       to be transferred before doing any of the transfers.  Normally rclone would start  running  transfers  as
       soon as possible.

       This flag can be useful on IO limited systems where transfers interfere with checking.

       It can also be useful to ensure perfect ordering when using --order-by.

       Using  this  flag  can use more memory as it effectively sets --max-backlog to infinite.  This means that
       all the info on the objects to transfer is held in memory before the transfers start.

   –checkers=N
       Originally this controlled just the number of file checkers to run in parallel, e.g. by rclone copy.
       Now it is a fairly universal parallelism control used by rclone in several places.

       Note:  checkers  do  the  equality  checking  of files during a sync.  For some storage systems (e.g. S3,
       Swift, Dropbox) this can take a significant amount of time so they are run in parallel.

       The default is to run 8 checkers in parallel.  However, in case of slow-reacting backends you may need
       to lower (rather than increase) this default by setting --checkers to 4 or fewer threads.  This is
       especially advised if you are experiencing backend server crashes during the file checking phase
       (e.g. on subsequent or top-up backups where little or no file copying is done and checking takes up most
       of the time).  Increase this setting only with utmost care, while monitoring your server health and file
       checking throughput.
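
       For example, to reduce the checking load on a slow-reacting backend (the paths are illustrative):

              rclone sync -i /path/to/files remote:path --checkers 4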

   -c, –checksum
       Normally  rclone  will  look at modification time and size of files to see if they are equal.  If you set
       this flag then rclone will check the file hash and size to determine if files are equal.

       This is useful when the remote doesn’t support setting modified time and a more accurate sync is  desired
       than just checking the file size.

       This  is  very  useful  when  transferring  between remotes which store the same hash type on the object,
       e.g. Drive and Swift.  For details of which remotes support which hash type see the table in the overview
       section.

       Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker  than  without  the  --checksum
       flag.

       When  using  this  flag,  rclone  won’t  update  mtimes of remote files if they are incorrect as it would
       normally.

   –compare-dest=DIR
       When using sync, copy or move DIR is checked in addition  to  the  destination  for  files.   If  a  file
       identical  to the source is found that file is NOT copied from source.  This is useful to copy just files
       that have changed since the last backup.

       You must use the same remote as the destination of the sync.  The compare directory must not overlap  the
       destination directory.
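
       For example, to copy only the files that have changed since the last backup (the remote name and paths
       are illustrative):

              rclone copy /path/to/files remote:new-backup --compare-dest remote:old-backup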

       See --copy-dest and --backup-dir.

   –config=CONFIG_FILE
       Specify  the  location  of  the  rclone configuration file, to override the default.  E.g.  rclone config
       --config="rclone.conf".

       The exact default is a bit complex to describe, due to changes introduced through different  versions  of
       rclone while preserving backwards compatibility, but in most cases it is as simple as:

       • %APPDATA%/rclone/rclone.conf on Windows

       • ~/.config/rclone/rclone.conf on other

       The  complete  logic  is  as  follows:  Rclone will look for an existing configuration file in any of the
       following locations, in priority order:

       1. rclone.conf (in program directory, where rclone executable is)

       2. %APPDATA%/rclone/rclone.conf (only on Windows)

       3. $XDG_CONFIG_HOME/rclone/rclone.conf (on all systems, including Windows)

       4. ~/.config/rclone/rclone.conf (see below for explanation of ~ symbol)

       5. ~/.rclone.conf

       If no existing configuration file is found, then a new one will be created in the following location:

       • On Windows: Location 2 listed above, except in the unlikely event that APPDATA  is  not  defined,  then
         location 4 is used instead.

       • On Unix: Location 3 if XDG_CONFIG_HOME is defined, else location 4.

       • Fallback  to  location  5  (on all OS), when the rclone directory cannot be created, but if also a home
         directory was not found then path .rclone.conf relative to current working directory will be used as  a
         final resort.

       The ~ symbol in the paths above represents the home directory of the current user on any OS, and the
       value is defined as follows:

       • On Windows: %HOME% if defined, else %USERPROFILE%, or else %HOMEDRIVE%\%HOMEPATH%.

       • On Unix: $HOME if defined, else by looking up current user in OS-specific  user  database  (e.g. passwd
         file), or else use the result from shell command cd && pwd.

       If you run rclone config file you will see where the default location is for you.

       The fact that an existing file rclone.conf in the same directory as the rclone executable is always
       preferred means that it is easy to run in "portable" mode: download the rclone executable to a writable
       directory and then create an empty file rclone.conf in the same directory.

       If the location is set to empty string "" or path to a file with name notfound, or  the  os  null  device
       represented  by value NUL on Windows and /dev/null on Unix systems, then rclone will keep the config file
       in memory only.

       The file format is basic INI: Sections of text, led by a  [section]  header  and  followed  by  key=value
       entries  on  separate  lines.  In rclone each remote is represented by its own section, where the section
       name defines the name of the remote.  Options are specified as the key=value entries, where  the  key  is
       the  option  name  without  the  --backend-  prefix,  in lowercase and with _ instead of -.  E.g.  option
       --mega-hard-delete corresponds to key hard_delete.  Only backend options can be  specified.   A  special,
       and  required,  key type identifies the storage system, where the value is the internal lowercase name as
       returned by command rclone help backends.  Comments are indicated by ; or # at the beginning of a line.

       Example:

              [megaremote]
              type = mega
              user = you@example.com
              pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH

       Note that passwords are in obscured form.  Also, many storage systems use token-based authentication
       instead of passwords, and this requires additional steps.  It is easier, and safer, to use the
       interactive command rclone config instead of manually editing the configuration file.

       The configuration file will typically contain login information, and  should  therefore  have  restricted
       permissions  so  that  only the current user can read it.  Rclone tries to ensure this when it writes the
       file.  You may also choose to encrypt the file.

       When token-based authentication is used, the configuration file must be writable, because rclone needs
       to update the tokens inside it.

   –contimeout=TIME
       Set  the connection timeout.  This should be in go time format which looks like 5s for 5 seconds, 10m for
       10 minutes, or 3h30m.

       The connection timeout is the amount of time rclone will wait for a connection to go through to a  remote
       object storage system.  It is 1m by default.

   –copy-dest=DIR
       When  using  sync,  copy  or  move  DIR  is  checked in addition to the destination for files.  If a file
       identical to the source is found that file is server-side copied from DIR to the  destination.   This  is
       useful for incremental backup.

       The  remote  in  use must support server-side copy and you must use the same remote as the destination of
       the sync.  The compare directory must not overlap the destination directory.
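
       For example, an incremental backup where unchanged files are server-side copied from the previous backup
       (the remote name and paths are illustrative):

              rclone copy /path/to/files remote:current-backup --copy-dest remote:previous-backup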

       See --compare-dest and --backup-dir.

   –dedupe-mode MODE
       Mode to run the dedupe command in.  One of interactive, skip, first, newest, oldest, rename.  The
       default is interactive.
       See the dedupe command for more information as to what these options mean.

   –disable FEATURE,FEATURE,...
       This  disables  a comma separated list of optional features.  For example to disable server-side move and
       server-side copy use:

              --disable move,copy

       The features can be put in any case.

       To see a list of which features can be disabled use:

              --disable help

       See the overview features and optional features to get an idea of which feature does what.

       This flag can be useful for debugging and in exceptional circumstances (e.g. Google  Drive  limiting  the
       total volume of Server Side Copies to 100 GiB/day).

   –disable-http2
       This stops rclone from trying to use HTTP/2 if available.  This can sometimes speed up transfers due to a
       problem in the Go standard library.

   –dscp VALUE
       Specify a DSCP value or name to use in connections.  This can help QoS systems to identify the traffic
       class.  BE, EF, DF, LE, CSx and AFxx are allowed.

       See the description of differentiated services to get an idea of this field.  Setting this to 1  (LE)  to
       identify  the  flow  to SCAVENGER class can avoid occupying too much bandwidth in a network with DiffServ
       support (RFC 8622).

       For example, if you have configured QoS on your router to handle LE properly, running:

              rclone copy --dscp LE from:/from to:/to

       would make the priority lower than usual internet flows.

       This option has no effect on Windows (see golang/go#42728).

   -n, –dry-run
       Do a trial run with no permanent changes.  Use this to see what rclone would do  without  actually  doing
       it.  Useful when setting up the sync command which deletes files in the destination.

   –expect-continue-timeout=TIME
       This  specifies  the amount of time to wait for a server’s first response headers after fully writing the
       request headers if the request has an “Expect: 100-continue” header.   Not  all  backends  support  using
       this.

       Zero  means  no  timeout  and  causes  the body to be sent immediately, without waiting for the server to
       approve.  This time does not include the time to send the request header.

       The default is 1s.  Set to 0 to disable.

   –error-on-no-transfer
       By default, rclone will exit with return code 0 if there were no errors.

       This option allows rclone to return exit code 9 if no files  were  transferred  between  the  source  and
       destination.   This  allows using rclone in scripts, and triggering follow-on actions if data was copied,
       or skipping if not.

       NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check  and
       adjust your scripts accordingly!

   –fs-cache-expire-duration=TIME
       When  using  rclone via the API rclone caches created remotes for 5 minutes by default in the “fs cache”.
       This means that if you do repeated actions on the same remote then rclone won’t have to  build  it  again
       from scratch, which makes it more efficient.

       This  flag  sets  the time that the remotes are cached for.  If you set it to 0 (or negative) then rclone
       won’t cache the remotes at all.

       Note that if you use some flags, e.g. --backup-dir, and if this is set to 0, rclone may build two
       remotes (one for the source or destination and one for the --backup-dir) where it may have only built
       one before.

   –fs-cache-expire-interval=TIME
       This  controls  how often rclone checks for cached remotes to expire.  See the --fs-cache-expire-duration
       documentation above for more info.  The default is 60s, set to 0 to disable expiry.

   –header
       Add an HTTP header for all transactions.  The flag can be repeated to add multiple headers.

       If you want to add headers only for uploads use --header-upload and if you want to add headers  only  for
       downloads use --header-download.

       This flag is supported for all HTTP based backends, even those not supported by --header-upload and
       --header-download, so it may be used as a workaround for those with care.

              rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"

   –header-download
       Add an HTTP header for all download transactions.  The flag can be repeated to add multiple headers.

              rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"

       See the relevant GitHub issue for the currently supported backends.

   –header-upload
       Add an HTTP header for all upload transactions.  The flag can be repeated to add multiple headers.

              rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"

       See the relevant GitHub issue for the currently supported backends.

   –human-readable
       Rclone commands output values for sizes (e.g. number of bytes) and counts (e.g. number of  files)  either
       as raw numbers, or in human-readable format.

       In human-readable format the values are scaled to larger units, indicated with a suffix shown after the
       value, and rounded to three decimals.  Rclone consistently uses binary units (powers of 2) for sizes and
       decimal units (powers of 10) for counts.  The unit prefix for size follows IEC standard notation,
       e.g. Ki for kibi; used with the byte unit, 1 KiB means 1024 bytes.  In list-type output, only the unit
       prefix is appended to the value (e.g. 9.762Ki), while in more textual output the full unit is shown
       (e.g. 9.762 KiB).  For counts the SI standard notation is used, e.g. the prefix k for kilo; used with
       file counts, 1k means 1000 files.

       The various list commands output raw numbers by default.  Option --human-readable will make  them  output
       values in human-readable format instead (with the short unit prefix).

       The  about command outputs human-readable by default, with a command-specific option --full to output the
       raw numbers instead.

       Command size outputs both human-readable and raw numbers in the same output.

       The tree command also considers --human-readable, but it will not use the exact same notation as the
       other commands: it rounds to one decimal and uses a single-letter suffix, e.g. K instead of Ki.  The
       reason for this is that it relies on an external library.

       The interactive command ncdu shows human-readable  by  default,  and  responds  to  key  u  for  toggling
       human-readable format.

   –ignore-case-sync
       Using  this option will cause rclone to ignore the case of the files when synchronizing so files will not
       be copied/synced when the existing filenames are the same, even if the casing is different.

   –ignore-checksum
       Normally rclone will check that the checksums of transferred files match, and give an error “corrupted on
       transfer” if they don’t.

       You can use this option to skip that check.  You should only use it if you have  had  the  “corrupted  on
       transfer” error message and you are sure you might want to transfer potentially corrupted data.

   –ignore-existing
       Using  this  option  will  make  rclone  unconditionally skip all files that exist on the destination, no
       matter the content of these files.

       While this isn’t a generally recommended option, it can be useful in cases where your files change due to
       encryption.  However, it cannot correct partial transfers in case a transfer was interrupted.

       When performing a move/moveto command, this  flag  will  leave  skipped  files  in  the  source  location
       unchanged when a file with the same name exists on the destination.

   –ignore-size
       Normally  rclone  will  look at modification time and size of files to see if they are equal.  If you set
       this flag then rclone will check only the modification time.  If --checksum is set then  it  only  checks
       the checksum.

       It will also cause rclone to skip verifying the sizes are the same after transfer.

       This  can be useful for transferring files to and from OneDrive which occasionally misreports the size of
       image files (see #399 for more info).

   -I, –ignore-times
       Using this option will cause rclone to unconditionally upload all files regardless of the state of  files
       on the destination.

       Normally  rclone would skip any files that have the same modification time and are the same size (or have
       the same checksum if using --checksum).

   –immutable
       Treat source and destination files as immutable and disallow modification.

       With this option set, files will be created and deleted as requested, but existing files  will  never  be
       updated.   If  an  existing  file does not match between the source and destination, rclone will give the
       error Source and destination exist but do not match: immutable file modified.

       Note that only commands which transfer files (e.g. sync, copy, move) are affected by this  behavior,  and
       only  modification  is  disallowed.   Files  may  still  be  deleted  explicitly  (e.g. delete, purge) or
       implicitly (e.g. sync, move).  Use copy --immutable if it  is  desired  to  avoid  deletion  as  well  as
       modification.

       This  can  be useful as an additional layer of protection for immutable or append-only data sets (notably
       backup archives), where modification implies corruption and should not be propagated.

   -i / –interactive
       This flag can be used to tell rclone that you wish a manual confirmation before destructive operations.

       It is recommended that you use this flag while learning rclone especially with rclone sync.

       For example

              $ rclone delete -i /tmp/dir
              rclone: delete "important-file.txt"?
              y) Yes, this is OK (default)
              n) No, skip this
              s) Skip all delete operations with no more questions
              !) Do all delete operations with no more questions
              q) Exit rclone now.
              y/n/s/!/q> n

       The options mean

       • y: Yes, this operation should go ahead.  You can also press Return for this to happen.  You’ll be asked
         every time unless you choose s or !.

       • n: No, do not do this operation.  You’ll be asked every time unless you choose s or !.

       • s: Skip all the following operations of this type with no more questions.  This takes effect until
         rclone exits.  If there are any different kinds of operations you'll be prompted for them.

       • !: Do all the following operations with no more questions.  Useful if you've decided that you don't
         mind rclone doing that kind of operation.  This takes effect until rclone exits.  If there are any
         different kinds of operations you'll be prompted for them.

       • q: Quit rclone now, just in case!

   –leave-root
       During rmdirs it will not remove the root directory, even if it's empty.

   –log-file=FILE
       Log all of rclone’s output to FILE.  This is not active by default.  This can be useful for tracking down
       problems with syncs in combination with the -v flag.  See the Logging section for more info.

       If FILE exists then rclone will append to it.

       Note  that  if  you  are  using  the  logrotate  program to manage rclone’s logs, then you should use the
       copytruncate option as rclone doesn’t have a signal to rotate logs.
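
       For example, to keep a verbose log of a sync (the log path, remote name and paths are illustrative):

              rclone sync -i /path/to/files remote:path --log-file /var/log/rclone.log -v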

   –log-format LIST
       Comma separated list of log format options.  Accepted options are date, time, microseconds, pid,
       longfile, shortfile, UTC.  Any other keywords will be silently ignored.  pid will tag log messages with
       the process identifier, which is useful with rclone mount --daemon.  Other accepted options are
       explained in the go documentation.  The default log format is "date,time".

   –log-level LEVEL
       This sets the log level for rclone.  The default log level is NOTICE.

       DEBUG  is  equivalent  to -vv.  It outputs lots of debug info - useful for bug reports and really finding
       out what rclone is doing.

       INFO is equivalent to -v.  It outputs information about each transfer and prints stats once a  minute  by
       default.

       NOTICE is the default log level if no logging flags are supplied.  It outputs very little when things are
       working normally.  It outputs warnings and significant events.

       ERROR is equivalent to -q.  It only outputs error messages.

   –use-json-log
       This switches the log format to JSON for rclone.  The fields of the JSON log are level, msg, source, time.

   –low-level-retries NUMBER
       This controls the number of low level retries rclone does.

       A  low  level  retry  is  used  to retry a failing operation - typically one HTTP request.  This might be
       uploading a chunk of a big file for example.  You will see low level retries in the log with the -v flag.

       This shouldn’t need to be changed from the default in normal operations.  However, if you get  a  lot  of
       low  level  retries  you  may  wish to reduce the value so rclone moves on to a high level retry (see the
       --retries flag) quicker.

       Disable low level retries with --low-level-retries 1.

   –max-backlog=N
       This is the maximum allowable  backlog  of  files  in  a  sync/copy/move  queued  for  being  checked  or
       transferred.

       This can be set arbitrarily large.  It will only use memory when the queue is in use.  Note that it will
       use on the order of N KiB of memory when the backlog is in use.

       Setting this large allows rclone to calculate how many files are pending more  accurately,  give  a  more
       accurate estimated finish time and make --order-by work more accurately.

       Setting  this  small  will  make  rclone  more  synchronous  to  the  listings of the remote which may be
       desirable.

       Setting this to a negative number will make the backlog as large as possible.

   –max-delete=N
       This tells rclone not to delete more than N files.  If that limit is exceeded then a fatal error will  be
       generated and rclone will stop the operation in progress.

   –max-depth=N
       This modifies the recursion depth for all the commands except purge.

       So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory.
       Using --max-depth 2 means you will see all the files in the first two directory levels and so on.

       For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this  with
       the command line flag.

       You can use this command to disable recursion (with --max-depth 1).

       Note  that  if you use this with sync and --delete-excluded the files not recursed through are considered
       excluded and will be deleted on the destination.  Test first with --dry-run if you are not sure what will
       happen.

   –max-duration=TIME
       Rclone will stop scheduling new transfers when it has run for the duration specified.

       Defaults to off.

       When the limit is reached any existing transfers will complete.

       Rclone won’t exit with an error if the transfer limit is reached.

   –max-transfer=SIZE
       Rclone will stop transferring when it has reached the size specified.  Defaults to off.

       When the limit is reached all transfers will stop immediately.

       Rclone will exit with exit code 8 if the transfer limit is reached.

   –metadata / -M
       Setting this flag enables rclone to copy the metadata from the source to the destination.  For local
       backends this is ownership, permissions, xattr etc.  See the Metadata support section for more info.

   –metadata-set key=value
       Add metadata key = value when uploading.  This can be repeated as many times as required.  See the
       Metadata support section for more info.

   –cutoff-mode=hard|soft|cautious
       This modifies the behavior of --max-transfer.  Defaults to --cutoff-mode=hard.

       Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

       Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

       Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit.
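
       For example, to stop scheduling new transfers once roughly 10 GiB has been transferred, while letting
       in-flight transfers finish (the paths are illustrative):

              rclone copy /path/to/files remote:path --max-transfer 10G --cutoff-mode soft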

   –modify-window=TIME
       When checking whether a file has been modified, this is the maximum allowed time difference that  a  file
       can have and still be considered equivalent.

       The  default  is  1ns  unless  this is overridden by a remote.  For example OS X only stores modification
       times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by
       default.

       This command line flag allows you to override that computed default.

   –multi-thread-cutoff=SIZE
       When downloading files to the local backend above this size, rclone will use multiple threads to download
       the file (default 250M).

       Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on
       Windows, both of which take no time) then each thread writes directly into the file at the correct place.
       This means that rclone won't create fragmented or sparse files and there won't be any assembly time at
       the end of the transfer.

       The number of threads used to download is controlled by --multi-thread-streams.

       Use -vv if you wish to see info about the threads.

       This will work with the sync/copy/move commands and friends copyto/moveto.  Multi thread  downloads  will
       be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above.

       NB that this only works for a local destination but will work with any source.

       NB that multi thread copies are disabled for local to local copies as they are faster without, unless
       --multi-thread-streams is set explicitly.

       NB on Windows using multi-thread downloads will cause the resulting files to be sparse.  Use
       --local-no-sparse to disable sparse files (which may cause long delays at the start of downloads) or
       disable multi-thread downloads with --multi-thread-streams 0.

   –multi-thread-streams=N
       When using multi thread downloads (see above --multi-thread-cutoff)  this  sets  the  maximum  number  of
       streams to use.  Set to 0 to disable multi thread downloads (Default 4).

       Exactly  how many streams rclone uses for the download depends on the size of the file.  To calculate the
       number of download streams Rclone divides the size of the file by the  --multi-thread-cutoff  and  rounds
       up, up to the maximum set with --multi-thread-streams.

       So if --multi-thread-cutoff 250M and --multi-thread-streams 4 are in effect (the defaults):

       • 0..250 MiB files will be downloaded with 1 stream

       • 250..500 MiB files will be downloaded with 2 streams

       • 500..750 MiB files will be downloaded with 3 streams

       • 750+ MiB files will be downloaded with 4 streams
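
       For example, with these defaults a 1 GiB file gives 1024 / 250 rounded up, i.e. 5, which is capped at
       the maximum of 4, so the file is downloaded with 4 streams.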

   –no-check-dest
       The  --no-check-dest  can  be used with move or copy and it causes rclone not to check the destination at
       all when copying files.

       This means that:

       • the destination is not listed minimising the API calls

       • files are always transferred

       • this can cause duplicates on remotes which allow it (e.g. Google Drive)

       • --retries 1 is recommended otherwise you’ll transfer everything again on a retry

       This flag is useful to minimise the transactions  if  you  know  that  none  of  the  files  are  on  the
       destination.

       This is a specialized flag which should be ignored by most users!

   –no-gzip-encoding
       Don’t  set  Accept-Encoding:  gzip.   This  means  that  rclone won’t ask the server for compressed files
       automatically.  Useful if you’ve set the server to return  files  with  Content-Encoding:  gzip  but  you
       uploaded compressed files.

       There  is  no  need  to  set  this  in  normal operation, and doing so will decrease the network transfer
       efficiency of rclone.

   –no-traverse
       The --no-traverse flag controls whether the destination file system is traversed when using the  copy  or
       move commands.  --no-traverse is not compatible with sync and will be ignored if you supply it with sync.

       If  you are only copying a small number of files (or are filtering most of the files) and/or have a large
       number of files on the destination then --no-traverse will stop rclone listing the destination  and  save
       time.

       However, if you are copying a large number of files, especially if you are doing a copy where lots of the
       files under consideration haven’t changed and won’t need copying then you shouldn’t use --no-traverse.

       See rclone copy for an example of how to use it.

   –no-unicode-normalization
       Don’t normalize unicode characters in filenames during the sync routine.

       Sometimes,  an  operating  system  will store filenames containing unicode parts in their decomposed form
       (particularly macOS).  Some cloud storage systems will then recompose the unicode, resulting in duplicate
       files if the data is ever copied back to a local filesystem.

       Using this flag will disable that functionality, treating each unicode character as unique.  For example,
       by default é and é will be normalized into the same  character.   With  --no-unicode-normalization  they
       will be treated as unique characters.

   –no-update-modtime
       When  using this flag, rclone won’t update modification times of remote files if they are incorrect as it
       would normally.

       This can be used if the remote is being synced with another tool also (e.g. the Google Drive client).

   –order-by string
       The --order-by flag controls the order in which files in the backlog are processed in rclone sync, rclone
       copy and rclone move.

       The order by string is constructed like this.  The first part describes what aspect is being measured:

       • size - order by the size of the files

       • name - order by the full path of the files

       • modtime - order by the modification date of the files

       This can have a modifier appended with a comma:

       • ascending or asc - order so that the smallest (or oldest) is processed first

       • descending or desc - order so that the largest (or newest) is processed first

       • mixed - order so that the smallest is processed first for some threads and the largest for others

       If  the  modifier  is  mixed  then  it  can  have  an  optional  percentage  (which  defaults   to   50),
       e.g. size,mixed,25  which  means  that 25% of the threads should be taking the smallest items and 75% the
       largest.  The threads which take the smallest first will always take the smallest first and likewise  the
       largest  first  threads.   The  mixed  mode  can  be  useful  to  minimise the transfer time when you are
       transferring a mixture of large and small files - the large  files  are  guaranteed  upload  threads  and
       bandwidth and the small files will be processed continuously.

       If no modifier is supplied then the order is ascending.

       For example

       • --order-by size,desc - send the largest files first

       • --order-by modtime,ascending - send the oldest files first

       • --order-by name - send the files sorted alphabetically by path

       If  the  --order-by flag is not supplied or it is supplied with an empty string then the default ordering
       will be used which is as scanned.  With --checkers 1  this  is  mostly  alphabetical,  however  with  the
       default --checkers 8 it is somewhat random.

   Limitations
       The  --order-by  flag  does  not  do a separate pass over the data.  This means that it may transfer some
       files out of the order specified if

       • there are no files in the backlog or the source has not been fully scanned yet

       • there are more than --max-backlog files in the backlog

       Rclone will do its best to transfer the best file it has so in practice this should not cause a  problem.
       Think of --order-by as being more of a best efforts flag rather than a perfect ordering.

       If you want perfect ordering then you will need to specify --check-first which will find all the files
       which need transferring before transferring any.

   –password-command SpaceSepList
       This flag supplies a program which should supply the config password when run.  This is an alternative to
       rclone prompting for the password or setting the RCLONE_CONFIG_PASS variable.

       The argument to this should be a command with a space separated list of arguments.  If one of the
       arguments has a space in it then enclose it in ", if you want a literal " in an argument then enclose
       the argument in " and double the ".  See CSV encoding for more info.

       Eg

              --password-command echo hello
              --password-command echo "hello with space"
              --password-command echo "hello with ""quotes"" and space"

       See the Configuration Encryption for more info.

       See a Windows PowerShell example on the Wiki:
       https://github.com/rclone/rclone/wiki/Windows-Powershell-use-rclone-password-command-for-Config-file-password

   -P, –progress
       This  flag  makes rclone update the stats in a static block in the terminal providing a realtime overview
       of the transfer.

       Any log messages will scroll above the static block.  Log messages will push the static block down to the
       bottom of the terminal where it will stay.

       Normally this is updated every 500ms but this period can be overridden with the --stats flag.

       This can be used with the --stats-one-line flag for a simpler display.

       Note: On Windows until this bug is  fixed  all  non-ASCII  characters  will  be  replaced  with  .   when
       --progress is in use.

   –progress-terminal-title
       This flag, when used with -P/--progress, will print the string ETA: %s to the terminal title.

   -q, –quiet
       This flag will limit rclone’s output to error messages only.

   –refresh-times
       The  --refresh-times flag can be used to update modification times of existing files when they are out of
       sync on backends which don’t support hashes.

       This is useful if you uploaded files with the incorrect timestamps and you now wish to correct them.

       This flag is only useful for destinations which don’t support hashes (e.g. crypt).

       This can be used with any of the sync commands sync, copy or move.

       To use this flag you will need to be doing  a  modification  time  sync  (so  not  using  --size-only  or
       --checksum).  The flag will have no effect when using --size-only or --checksum.

       If this flag is used when rclone comes to upload a file it will check to see if there is an existing file
       on  the  destination.   If  this  file matches the source with size (and checksum if available) but has a
       differing timestamp then instead of re-uploading it, rclone will update the timestamp on the  destination
       file.   If  the checksum does not match rclone will upload the new file.  If the checksum is absent (e.g.
       on a crypt backend) then rclone will update the timestamp.

       Note that some remotes can’t set the modification time without re-uploading the file so this flag is less
       useful on them.

       Normally if you are doing a  modification  time  sync  rclone  will  update  modification  times  without
       --refresh-times provided that the remote supports checksums and the checksums match on the file.  However
       if the checksums are absent then rclone will upload the file rather than setting the timestamp as this is
       the safe behaviour.

   –retries int
       Retry the entire sync if it fails this many times (default 3).

       Some  remotes  can  be  unreliable  and a few retries help pick up the files which didn’t get transferred
       because of errors.

       Disable retries with --retries 1.

   –retries-sleep=TIME
       This sets the interval between each retry specified by --retries.

       The default is 0.  Use 0 to disable.
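
       For example, to retry a flaky sync up to 5 times, waiting 30 seconds between attempts (the paths are
       illustrative):

              rclone sync -i /path/to/files remote:path --retries 5 --retries-sleep 30s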

   –server-side-across-configs
       Allow server-side operations (e.g. copy or move) to work across different configurations.

       This can be useful if you wish to do a server-side copy or move between two remotes which  use  the  same
       backend but are configured differently.

       Note  that this isn’t enabled by default because it isn’t easy for rclone to tell if it will work between
       any two configurations.

   –size-only
       Normally rclone will look at modification time and size of files to see if they are equal.   If  you  set
       this flag then rclone will check only the size.

       This can be useful when transferring files from Dropbox which have been modified by the desktop sync
       client, which doesn't set checksums or modification times in the same way as rclone.

   –stats=TIME
       Commands which transfer data (sync, copy, copyto, move, moveto) will print data transfer stats at regular
       intervals to show their progress.

       This sets the interval.

       The default is 1m.  Use 0 to disable.

       If you set the stats interval then all commands can show stats.  This can be useful  when  running  other
       commands, check or mount for example.

       Stats  are  logged at INFO level by default which means they won’t show at default log level NOTICE.  Use
       --stats-log-level NOTICE or -v to make them show.  See the Logging section for more info on log levels.

       Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to  make  the  stats
       print immediately.

   –stats-file-name-length integer
       By  default,  the  --stats  output will truncate file names and paths longer than 40 characters.  This is
       equivalent to providing --stats-file-name-length 40.   Use  --stats-file-name-length  0  to  disable  any
       truncation of file names printed by stats.

   –stats-log-level string
       Log  level  to  show --stats output at.  This can be DEBUG, INFO, NOTICE, or ERROR.  The default is INFO.
       This means at the default level of logging which is NOTICE the stats won’t show - if  you  want  them  to
       then use --stats-log-level NOTICE.  See the Logging section for more info on log levels.

   –stats-one-line
       When  this  is  specified, rclone condenses the stats into a single line showing the most important stats
       only.

   –stats-one-line-date
       When this is specified, rclone enables the single-line stats and prepends the display with a date string.
       The default is 2006/01/02 15:04:05 -

   –stats-one-line-date-format
       When this  is  specified,  rclone  enables  the  single-line  stats  and  prepends  the  display  with  a
       user-supplied  date  string.   The  date string MUST be enclosed in quotes.  Follow golang specs for date
       formatting syntax.

   –stats-unit=bits|bytes
       By default, data transfer rates will be printed in bytes per second.

       This option allows the data rate to be printed in bits per second.

       Data transfer volume will still be reported in bytes.

       The rate is reported as a binary unit, not an SI unit.  So 1 Mbit/s equals 1,048,576 bit/s and not
       1,000,000 bit/s.

       The default is bytes.

   –suffix=SUFFIX
       When using sync, copy or move, any files which would have been overwritten or deleted will have the
       suffix added to them.  If there is a file with the same path (after the suffix has been added), then it
       will be overwritten.

       The  remote  in  use  must  support  server-side  move  or  copy  and you must use the same remote as the
       destination of the sync.

       This can be used on its own, keeping the old versions of files, with the suffix added, in the
       destination directory, or together with --backup-dir.  See --backup-dir for more info.

       For example

              rclone copy -i /path/to/local/file remote:current --suffix .bak

       will copy /path/to/local/file to remote:current, and any files which would have been updated or deleted
       will have .bak added.

       If using rclone sync with --suffix and without --backup-dir then it is recommended to put a  filter  rule
       in excluding the suffix otherwise the sync will delete the backup files.

              rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"

   –suffix-keep-extension
       When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that
       it backs up rather than after.

       So  let’s  say  we  had  --suffix  -2019-01-01,  without  the  flag  file.txt  would  be  backed  up   to
       file.txt-2019-01-01  and with the flag it would be backed up to file-2019-01-01.txt.  This can be helpful
       to make sure the suffixed files can still be opened.
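
       For example (the remote name and paths are illustrative):

              rclone sync -i /path/to/local remote:current --suffix -2019-01-01 --suffix-keep-extension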

   –syslog
       On capable OSes (not Windows or Plan9) send all log output to syslog.

       This can be useful for running rclone in a script or rclone mount.

   –syslog-facility string
       If using --syslog this sets the syslog facility (e.g. KERN, USER).  See man syslog for a list of possible
       facilities.  The default facility is DAEMON.

   –temp-dir=DIR
       Specify the directory rclone will use for temporary files, to override the default.  Make sure the
       directory exists and has accessible permissions.

       By  default  the operating system’s temp directory will be used: - On Unix systems, $TMPDIR if non-empty,
       else /tmp.  - On Windows, the first non-empty value from %TMP%, %TEMP%,  %USERPROFILE%,  or  the  Windows
       directory.

       When overriding the default with this option, the specified path will be set as the value of the
       environment variable TMPDIR on Unix systems, and of TMP and TEMP on Windows.

       You can use the config paths command to see the current value.

   –tpslimit float
       Limit transactions per second to this number.  Default is 0 which is used to mean unlimited  transactions
       per second.

       A  transaction is roughly defined as an API call; its exact meaning will depend on the backend.  For HTTP
       based backends it is an HTTP PUT/GET/POST/etc and  its  response.   For  FTP/SFTP  it  is  a  round  trip
       transaction over TCP.

       For example, to limit rclone to 10 transactions per second use --tpslimit 10, or to 1 transaction every 2
       seconds use --tpslimit 0.5.

       Use  this  when  the  number  of  transactions per second from rclone is causing a problem with the cloud
       storage provider (e.g. getting you banned or rate limited).

       This can be very useful for rclone mount to control the behaviour of applications using it.

       This limit applies to all HTTP based backends and to the FTP and SFTP backends.  It does not apply to the
       local backend or the Storj backend.

       See also --tpslimit-burst.

   –tpslimit-burst int
       Max burst of transactions for --tpslimit (default 1).

       Normally --tpslimit will do exactly the number of transactions  per  second  specified.   However  if  you
       supply --tpslimit-burst then rclone can save up some transactions from when it was idle, giving a burst
       of up to the parameter supplied.

       For example if you provide --tpslimit-burst 10 then if rclone has been idle for more  than  10*--tpslimit
       then it can do 10 transactions very quickly before they are limited again.

       This  may  be used to increase performance of --tpslimit without changing the long term average number of
       transactions per second.
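
       For example, to average 10 transactions per second while allowing short bursts of up to  20  (remotes
       src: and dst: are illustrative):

              rclone sync src: dst: --tpslimit 10 --tpslimit-burst 20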

   –track-renames
       By default, rclone doesn’t keep track of renamed files, so if you rename a file locally then sync it to a
       remote, rclone will delete the old file on the remote and upload a new copy.

       An rclone sync with --track-renames runs like a normal sync, but keeps track of objects  which  exist  in
       the  destination  but not in the source (which would normally be deleted), and which objects exist in the
       source but not the destination (which would normally be transferred).  These objects are then  candidates
       for renaming.

       After   the   sync,   rclone  matches  up  the  source  only  and  destination  only  objects  using  the
       --track-renames-strategy specified and either renames the destination object or transfers the source  and
       deletes the destination object.  --track-renames is stateless like all of rclone’s syncs.

       To  use  this  flag  the destination must support server-side copy or server-side move, and to use a hash
       based --track-renames-strategy (the default) the source and the destination must have a compatible hash.

       If the destination does not support server-side copy or move,  rclone  will  fall  back  to  the  default
       behaviour and log an error level message to the console.

       Encrypted  destinations  are  not  currently  supported  by  --track-renames  if --track-renames-strategy
       includes hash.

       Note that --track-renames is incompatible with --no-traverse and that it uses extra memory to keep  track
       of all the rename candidates.

       Note  also  that  --track-renames  is  incompatible  with  --delete-before and will select --delete-after
       instead of --delete-during.
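
       For example, a typical invocation (remote name illustrative):

              rclone sync /path/to/local remote:dir --track-renames -v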

   –track-renames-strategy (hash,modtime,leaf,size)
       This option changes the file matching criteria for --track-renames.

       The matching is controlled by a comma separated selection of these tokens:

       • modtime - the modification time of the file - not supported on all backends

       • hash - the hash of the file contents - not supported on all backends

       • leaf - the name of the file not including its directory name

       • size - the size of the file (this is always enabled)

       The default option is hash.

       Using --track-renames-strategy modtime,leaf would match files based on modification time, the leaf of the
       file name and the size only.

       Using  --track-renames-strategy  modtime  or  leaf  can  enable  --track-renames  support  for  encrypted
       destinations.

       Note that the hash strategy is not supported with encrypted destinations.
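
       For example, to enable rename tracking to an encrypted  destination  (remote  name  illustrative)
       without using hashes:

              rclone sync /path/to/local encrypted:dir --track-renames --track-renames-strategy modtime,leaf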

   –delete-(before,during,after)
       This option allows you to specify when files on your destination are deleted when you sync folders.

       Specifying the value --delete-before will delete all files present on the destination, but  not  on  the
       source,  before starting the transfer of any new or updated files.  This uses two passes through the file
       systems, one for the deletions and one for the copies.

       Specifying  --delete-during  will  delete  files while checking and uploading files.  This is the fastest
       option and uses the least memory.

       Specifying --delete-after (the default value) will delay deletion of files until  all  new/updated  files
       have  been successfully transferred.  The files to be deleted are collected in the copy pass then deleted
       after the copy pass has completed successfully.  The files to be deleted are held in memory so this  mode
       may  use  more memory.  This is the safest mode as it will only delete files if there have been no errors
       subsequent to that.  If there have been errors before the deletions start then you will get  the  message
       not deleting files as there were IO errors.
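
       For example, to remove stale destination files before any transfers start (remote name illustrative):

              rclone sync /path/to/local remote:dir --delete-before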

   –fast-list
       When  doing  anything  which  involves  a  directory  listing (e.g. sync, copy, ls - in fact nearly every
       command), rclone normally lists a directory and processes it before using more directory lists to process
       any subdirectories.  This can be parallelised and works very quickly using the least amount of memory.

       However, some remotes have a way of listing all files beneath a directory in one (or a small  number)  of
       transactions.  These tend to be the bucket-based remotes (e.g. S3, B2, GCS, Swift).

       If you use the --fast-list flag then rclone will use this method for listing directories.  This will have
       the following consequences for the listing:

       • It will use fewer transactions (important if you pay for them)

       • It will use more memory.  Rclone has to load the whole listing into memory.

       • It may be faster because it uses fewer transactions

       • It may be slower because it can’t be parallelized

       rclone should always give identical results with and without --fast-list.

       If  you  pay  for  transactions  and  can  fit  your  entire sync listing into memory then --fast-list is
       recommended.  If you have a very big sync to do then don’t use --fast-list otherwise you will run out  of
       memory.

       If you use --fast-list on a remote which doesn’t support it, then rclone will just ignore it.
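
       For example, on a bucket-based remote (remote and bucket names illustrative):

              rclone sync /path/to/local s3:bucket --fast-list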

   –timeout=TIME
       This  sets  the  IO  idle  timeout.   If a transfer has started but then becomes idle for this long it is
       considered broken and disconnected.

       The default is 5m.  Set to 0 to disable.

   –transfers=N
       The number of file transfers to run in parallel.  It can sometimes be useful to set  this  to  a  smaller
       number  if  the  remote  is  giving  a lot of timeouts or bigger if you have lots of bandwidth and a fast
       remote.

       The default is to run 4 file transfers in parallel.

       Look at --multi-thread-streams if you would like to control single file transfers.
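
       For example, to run more parallel transfers with a longer idle timeout (remotes and values  are
       illustrative):

              rclone copy src: dst: --transfers 8 --timeout 10m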

   -u, –update
       This forces rclone to skip any files which exist on the destination and have  a  modified  time  that  is
       newer than the source file.

       This  can  be  useful  in avoiding needless transfers when transferring to a remote which doesn’t support
       modification times directly (or when using --use-server-modtime to avoid extra API calls) as it  is  more
       accurate  than  a  --size-only  check  and  faster than using --checksum.  On such remotes (or when using
       --use-server-modtime) the time checked will be the uploaded time.

       If an existing destination file has a modification time older than the source file’s, it will be  updated
       if  the  sizes are different.  If the sizes are the same, it will be updated if the checksum is different
       or not available.

       If an existing destination file has a modification time equal (within the computed modify window) to  the
       source  file’s,  it will be updated if the sizes are different.  The checksum will not be checked in this
       case unless the --checksum flag is provided.

       In all other cases the file will not be updated.

       Consider using the --modify-window flag to compensate for time skews between the source and the  backend,
       for backends that do not support mod times, and instead use uploaded times.  However, if the backend does
       not  support  checksums,  note  that  syncing  or copying within the time skew window may still result in
       additional transfers for safety.

   –use-mmap
       If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based  platforms  and
       VirtualAlloc  on  Windows  for its transfer buffers (size controlled by --buffer-size).  Memory allocated
       like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.

       If this flag is not set then rclone will allocate and free the buffers  using  the  Go  memory  allocator
       which may use more memory as memory pages are returned less aggressively to the OS.

       It  is  possible  this does not work well on all platforms so it is disabled by default; in the future it
       may be enabled by default.

   –use-server-modtime
       Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime).  On  these
       backends,  rclone  stores  the original modtime as additional metadata on the object.  By default it will
       make an API call to retrieve the metadata when the modtime is needed by an operation.

       Use this flag to disable the extra API call and rely instead on the server’s  modified  time.   In  cases
       such  as a local to remote sync using --update, knowing the local file is newer than the time it was last
       uploaded to the remote is sufficient.  In those cases, this flag can speed up the process and reduce  the
       number of API calls necessary.

       Using  this  flag  on  a sync operation without also using --update would cause all files modified at any
       time other than the last upload time to be uploaded again, which is probably not what you want.
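
       For example, a local to remote sync that avoids the extra metadata calls (remote  and  bucket  names
       illustrative):

              rclone sync /path/to/local s3:bucket --update --use-server-modtime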

   -v, -vv, –verbose
       With -v rclone will tell you about each file that is  transferred  and  a  small  number  of  significant
       events.

       With -vv rclone will become very verbose telling you about every file it considers and transfers.  Please
       send bug reports with a log with this setting.

       When  setting  verbosity  as an environment variable, use RCLONE_VERBOSE=1 or RCLONE_VERBOSE=2 for -v and
       -vv respectively.

   -V, –version
       Prints the version number.

   SSL/TLS options
       The outgoing SSL/TLS connections rclone makes can be controlled with these options.  For example this can
       be very useful with the HTTP or WebDAV backends.  Rclone HTTP servers have their own set of configuration
       for SSL/TLS which you can find in their documentation.

   –ca-cert string
       This loads the PEM encoded certificate authority certificate and uses it to verify  the  certificates  of
       the servers rclone connects to.

       If  you  have  generated  certificates  signed with a local CA then you will need this flag to connect to
       servers using those certificates.

   –client-cert string
       This loads the PEM encoded client side certificate.

       This is used for mutual TLS authentication.

       The --client-key flag is required too when using this.

   –client-key string
       This loads the PEM encoded client  side  private  key  used  for  mutual  TLS  authentication.   Used  in
       conjunction with --client-cert.
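
       For example, a mutual TLS connection using a locally signed certificate pair (remote name  and  file
       names illustrative):

              rclone ls webdav: --ca-cert /etc/ssl/local-ca.pem --client-cert client.pem --client-key client.key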

   –no-check-certificate=true/false
       --no-check-certificate  controls  whether a client verifies the server’s certificate chain and host name.
       If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host  name
       in that certificate.  In this mode, TLS is susceptible to man-in-the-middle attacks.

       This option defaults to false.

       This should be used only for testing.

   Configuration Encryption
       Your  configuration file contains information for logging in to your cloud services.  This means that you
       should keep your rclone.conf file in a secure location.

       If you are in an environment where that isn’t possible, you can add a  password  to  your  configuration.
       This means that you will have to supply the password every time you start rclone.

       To add a password to your rclone configuration, execute rclone config.

              >rclone config
              Current remotes:

              e) Edit existing remote
              n) New remote
              d) Delete remote
              s) Set configuration password
              q) Quit config
              e/n/d/s/q>

       Go into s, Set configuration password:

              e/n/d/s/q> s
              Your configuration is not encrypted.
              If you add a password, you will protect your login information to cloud services.
              a) Add Password
              q) Quit to main menu
              a/q> a
              Enter NEW configuration password:
              password:
              Confirm NEW password:
              password:
              Password set
              Your configuration is encrypted.
              c) Change Password
              u) Unencrypt configuration
              q) Quit to main menu
              c/u/q>

       Your  configuration  is  now  encrypted,  and  every  time  you  start rclone you will have to supply the
       password.  See below for details.  In the same menu, you can change the  password  or  completely  remove
       encryption from your configuration.

       There is no way to recover the configuration if you lose your password.

       rclone  uses  nacl secretbox which  in  turn  uses XSalsa20 and Poly1305 to encrypt and authenticate your
       configuration with secret-key cryptography.  The password is SHA-256 hashed, which produces the  key  for
       secretbox.  The hashed password is not stored.

       While this provides very good security, we do not recommend storing your  encrypted  rclone  configuration
       in public if it contains sensitive information, except perhaps if you use a very strong password.

       If it is safe in your environment, you can set the RCLONE_CONFIG_PASS  environment  variable  to  contain
       your password, in which case it will be used for decrypting the configuration.

       You  can  set  this  for  a  session  from  a  script.   For unix like systems save this to a file called
       set-rclone-password:

              #!/bin/echo Source this file don't run it

              read -s RCLONE_CONFIG_PASS
              export RCLONE_CONFIG_PASS

       Then source the file when you want to use it.  From the shell you would  do  source  set-rclone-password.
       It will then ask you for the password and set it in the environment variable.

       An  alternate means of supplying the password is to provide a script which will retrieve the password and
       print on standard output.  This script should have a fully specified  path  name  and  not  rely  on  any
       environment  variables.  The script is supplied either via --password-command="..." command line argument
       or via the RCLONE_PASSWORD_COMMAND environment variable.

       One useful example of this is using the passwordstore application to retrieve the password:

              export RCLONE_PASSWORD_COMMAND="pass rclone/config"

       If the passwordstore password manager holds the password for the rclone configuration, using  the  script
       method  means  the  password is primarily protected by the passwordstore system, and is never embedded in
       the clear in scripts, nor available for examination using standard commands.  It  is  quite
       possible with long running rclone sessions for copies of passwords to be innocently captured in log files
       or terminal scroll buffers, etc.  Using the script method of supplying the password enhances the security
       of the config password considerably.

       If  you are running rclone inside a script, unless you are using the --password-command method, you might
       want to disable password prompts.  To do that, pass the parameter --ask-password=false to  rclone.   This
       will  make  rclone  fail  instead  of asking for a password if RCLONE_CONFIG_PASS doesn’t contain a valid
       password, and --password-command has not been supplied.
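
       For example, in an unattended script (remote name illustrative):

              rclone sync /path/to/local remote:backup --ask-password=false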

       Whenever running commands that may be affected by options in a configuration file, rclone will  look  for
       an  existing file according to the rules described above, and load any it finds.  If an encrypted file is
       found, this includes decrypting it, with the possible consequence of a password prompt.  When executing a
       command line that you know is not actually using anything from such a configuration file, you can  avoid
       it  being  loaded  by  overriding  the  location,  e.g. with  one  of  the  documented special values for
       memory-only configuration.  Since only backend options can be stored  in  configuration  files,  this  is
       normally  unnecessary  for  commands  that do not operate on backends, e.g. genautocomplete.  However, it
       will be relevant for commands that do operate on backends in general, but are used without referencing  a
       stored remote, e.g.  listing local filesystem paths, or connection strings: rclone --config="" ls .

   Developer options
       These  options  are useful when developing or debugging rclone.  There are also some more remote specific
       options which aren’t documented here which are used for testing.   These  start  with  remote  name  e.g.
       --drive-test-option - see the docs for the remote in question.

   –cpuprofile=FILE
       Write CPU profile to file.  This can be analysed with go tool pprof.

   –dump flag,flag,flag
       The --dump flag takes a comma separated list of flags to dump info about.

       Note that some headers, including Accept-Encoding as shown, may not be correct in  the  request,  and  the
       response may not show Content-Encoding if the Go standard library’s automatic gzip encoding was in effect.
       In this case the body of the request will be gunzipped before showing it.

       The available flags are:

   –dump headers
       Dump HTTP headers with Authorization: lines removed.  May still contain  sensitive  info.   Can  be  very
       verbose.  Useful for debugging only.

       Use --dump auth if you do want the Authorization: headers.

   –dump bodies
       Dump  HTTP  headers  and bodies - may contain sensitive info.  Can be very verbose.  Useful for debugging
       only.

       Note that the bodies are buffered in memory so don’t use this for enormous files.

   –dump requests
       Like --dump bodies but dumps the request bodies and the response headers.  Useful for debugging  download
       problems.

   –dump responses
       Like  --dump  bodies  but dumps the response bodies and the request headers.  Useful for debugging upload
       problems.

   –dump auth
       Dump HTTP headers - will contain sensitive info such as Authorization: headers - use  --dump  headers  to
       dump without Authorization: headers.  Can be very verbose.  Useful for debugging only.

   –dump filters
       Dump the filters to the output.  Useful to see exactly what include and exclude options are filtering on.

   –dump goroutines
       This dumps a list of the running go-routines at the end of the command to standard output.

   –dump openfiles
       This  dumps  a  list of the open files at the end of the command.  It uses the lsof command to do that so
       you’ll need that installed to use it.
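
       For example, --dump flags can be combined with a comma (remote name illustrative):

              rclone copy /path/to/local remote:dir -vv --dump headers,filters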

   –memprofile=FILE
       Write memory profile to file.  This can be analysed with go tool pprof.
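
       For example (remotes and file name illustrative):

              rclone copy src: dst: --memprofile mem.prof
              go tool pprof -text mem.prof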

   Filtering
       For the filtering options

       • --delete-excluded

       • --filter

       • --filter-from

       • --exclude

       • --exclude-from

       • --exclude-if-present

       • --include

       • --include-from

       • --files-from

       • --files-from-raw

       • --min-size

       • --max-size

       • --min-age

       • --max-age

       • --dump filters

       See the filtering section.

   Remote control
       For the remote control options and for instructions on how to remote control rclone

       • --rc

       • and anything starting with --rc-

       See the remote control section.

   Logging
       rclone has 4 levels of logging, ERROR, NOTICE, INFO and DEBUG.

       By default, rclone logs to standard error.  This means you can redirect standard error and still see  the
       normal output of rclone commands (e.g.  rclone ls).

       By default, rclone will produce Error and Notice level messages.

       If you use the -q flag, rclone will only produce Error messages.

       If you use the -v flag, rclone will produce Error, Notice and Info messages.

       If you use the -vv flag, rclone will produce Error, Notice, Info and Debug messages.

       You can also control the log levels with the --log-level flag.

       If  you  use  the  --log-file=FILE option, rclone will redirect Error, Info and Debug messages along with
       standard error to FILE.

       If you use the --syslog flag then rclone will log to  syslog  and  the  --syslog-facility  control  which
       facility it uses.

       Rclone  prefixes all log messages with their level in capitals, e.g. INFO which makes it easy to grep the
       log file for different kinds of information.

   Exit Code
       If any errors occur during the command execution, rclone will exit  with  a  non-zero  exit  code.   This
       allows scripts to detect when rclone operations have failed.

       During  the  startup  phase,  rclone  will exit immediately if an error is detected in the configuration.
       There will always be a log message immediately before exiting.

       When rclone is running it will accumulate errors as it goes along, and only exit  with  a  non-zero  exit
       code  if (after retries) there were still failed transfers.  For every error counted there will be a high
       priority log message (visible with -q) showing the message and which file caused  the  problem.   A  high
       priority message is also shown when starting a retry so the user can see that any previous error messages
       may  not be valid after the retry.  If rclone has done a retry it will log a high priority message if the
       retry was successful.

   List of exit codes
       • 0 - success

       • 1 - Syntax or usage error

       • 2 - Error not otherwise categorised

       • 3 - Directory not found

       • 4 - File not found

       • 5 - Temporary error (one that more retries might fix) (Retry errors)

       • 6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)

       • 7 - Fatal error (one that more retries won’t fix, like account suspended) (Fatal errors)

       • 8 - Transfer exceeded - limit set by --max-transfer reached

       • 9 - Operation successful, but no files transferred

   Environment Variables
       Rclone can be configured entirely using environment variables.  These can be used  to  set  defaults  for
       options or config file entries.

   Options
       Every option in rclone can have its default set by environment variable.

       To  find  the  name  of the environment variable, first, take the long option name, strip the leading --,
       change - to _, make upper case and prepend RCLONE_.

       For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s.  If you set stats on
       the command line this will override the environment variable setting.

       Or to always use the trash in drive --drive-use-trash, set RCLONE_DRIVE_USE_TRASH=true.

       Verbosity  is  slightly  different,  the  environment  variable  equivalent  of  --verbose   or   -v   is
       RCLONE_VERBOSE=1, or for -vv, RCLONE_VERBOSE=2.

       The same parser is used for the options and the environment variables so they take exactly the same form.

       The options set by environment variables can be seen with the -vv flag, e.g. rclone version -vv.
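
       For example (remote name and values illustrative):

              $ export RCLONE_TRANSFERS=8
              $ rclone copy /path/to/local remote:dir                 # runs with 8 transfers
              $ rclone copy /path/to/local remote:dir --transfers 4   # the flag overrides the variable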

   Config file
       You  can  set  defaults  for  values  in the config file on an individual remote basis.  The names of the
       config items are documented in the page for each backend.

       To find the name of the environment variable you need to set, take RCLONE_CONFIG_ + name of remote + _  +
       name of config file option and make it all uppercase.

       For example, to configure an S3 remote named mys3: without a config file  (using  unix  ways  of  setting
       environment variables):

              $ export RCLONE_CONFIG_MYS3_TYPE=s3
              $ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
              $ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
              $ rclone lsd mys3:
                        -1 2016-09-21 12:54:21        -1 my-bucket
              $ rclone listremotes | grep mys3
              mys3:

       Note  that  if  you  want  to  create  a  remote using environment variables you must create the ..._TYPE
       variable as above.

       Note that the name of a remote created using an environment variable is case insensitive, in contrast  to
       regular  remotes  stored in the config file as documented above.  You must write the name in uppercase in
       the environment variable, but as seen from the example above it will be listed and can  be  accessed  in
       lowercase, while you can also refer to the same remote in uppercase:

              $ rclone lsd mys3:
                        -1 2016-09-21 12:54:21        -1 my-bucket
              $ rclone lsd MYS3:
                        -1 2016-09-21 12:54:21        -1 my-bucket

       Note that you can only set the options of the immediate backend, so RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID
       has  no  effect,  if  myS3Crypt is a crypt remote based on an S3 remote.  However RCLONE_S3_ACCESS_KEY_ID
       will set the access key of all remotes using S3, including myS3Crypt.

       Note also that now rclone has connection strings, it is probably easier to use those instead which  makes
       the above example

              rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:

   Precedence
       The  various  different  methods of backend configuration are read in this order and the first one with a
       value is used.

       • Parameters in connection strings, e.g. myRemote,skip_links:

       • Flag values as supplied on the command line, e.g. --skip-links

       • Remote specific environment vars, e.g. RCLONE_CONFIG_MYREMOTE_SKIP_LINKS (see above).

       • Backend-specific environment vars, e.g. RCLONE_LOCAL_SKIP_LINKS.

       • Backend generic environment vars, e.g. RCLONE_SKIP_LINKS.

       • Config file, e.g. skip_links = true.

       • Default values, e.g. false - these can’t be changed.

       So  if  both  --skip-links  is   supplied   on   the   command   line   and   an   environment   variable
       RCLONE_LOCAL_SKIP_LINKS is set, the command line flag will take preference.

       The  backend configurations set by environment variables can be seen with the -vv flag, e.g. rclone about
       myRemote: -vv.

       For non backend configuration the order is as follows:

       • Flag values as supplied on the command line, e.g. --stats 5s.

       • Environment vars, e.g. RCLONE_STATS=5s.

       • Default values, e.g. 1m - these can’t be changed.

   Other environment variables
       • RCLONE_CONFIG_PASS set to contain your config file password (see Configuration Encryption section)

       • HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof).

         • HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.

         • The environment values may be either a complete URL or a “host[:port]”, in which  case  the  “http”
           scheme is assumed.

       • USER  and LOGNAME values are used as fallbacks for current username.  The primary method for looking up
         username is OS-specific: Windows API on Windows, real user ID in /etc/passwd on Unix systems.   In  the
         documentation the current username is simply referred to as $USER.

       • RCLONE_CONFIG_DIR - rclone sets this variable for use in config files and sub processes to point to the
         directory holding the config file.

       The  options  set  by  environment  variables  can  be  seen  with  the  -vv and --log-level=DEBUG flags,
       e.g. rclone version -vv.

Configuring rclone on a remote / headless machine

       Some of the configurations (those involving oauth2) require an Internet connected web browser.

       If you are trying to set rclone up on a remote or headless box with no browser available  on  it  (e.g. a
       NAS  or a server in a datacenter) then you will need to use an alternative means of configuration.  There
       are two ways of doing it, described below.

   Configuring using rclone authorize
       On the headless box run rclone config but answer N to the Use auto config? question.

              ...
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes (default)
              n) No
              y/n> n
              For this to work, you will need rclone available on a machine that has
              a web browser available.

              For more help and alternate methods see: https://rclone.org/remote_setup/

              Execute the following on the machine with the web browser (same rclone
              version recommended):

                  rclone authorize "amazon cloud drive"

              Then paste the result below:
              result>

       Then on your main desktop machine

              rclone authorize "amazon cloud drive"
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              Paste the following into your remote machine --->
              SECRET_TOKEN
              <---End paste

       Then back to the headless box, paste in the code

              result> SECRET_TOKEN
              --------------------
              [acd12]
              client_id =
              client_secret =
              token = SECRET_TOKEN
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d>

   Configuring by copying the config file
       Rclone stores all of its config in a single configuration file.  This can easily be copied to configure a
       remote rclone.

       So first configure rclone on your desktop machine with

              rclone config

       to set up the config file.

       Find the config file by running rclone config file, for example

              $ rclone config file
              Configuration file is stored at:
              /home/user/.rclone.conf

       Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.)  and place it in  the  correct  place
       (use rclone config file on the remote box to find out where).

   Configuring using SSH Tunnel
       Linux and macOS users can use an SSH tunnel to redirect port 53682 on the headless  box  to  the  local
       machine using the following command:

              ssh -L localhost:53682:localhost:53682 username@remote_server

       Then on the headless box run rclone config and answer Y to the Use auto config? question.

              ...
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes (default)
              n) No
              y/n> y

       Then copy and paste the auth url http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx to the  browser  on  your
       local machine, complete the auth and it is done.

Filtering, includes and excludes

       Filter  flags  determine which files rclone sync, move, ls, lsl, md5sum, sha1sum, size, delete, check and
       similar commands apply to.

       They are specified in terms of path/file name patterns; path/file lists; file age and size,  or  presence
       of a file in a directory.  Bucket based remotes without the concept of directories apply filters  to  the
       object key, age and size in an analogous way.

       Rclone purge does not obey filters.

       To test filters without risk of damage to data, apply them to rclone ls, or with the  --dry-run  and  -vv
       flags.

       Rclone  filter  patterns  can  only be used in filter command line options, not in the specification of a
       remote.

       E.g.  rclone copy "remote:dir*.jpg" /path/to/dir does not have a filter effect.  rclone  copy  remote:dir
       /path/to/dir --include "*.jpg" does.

       Important  Avoid  mixing any two of --include..., --exclude... or --filter... flags in an rclone command.
       The results may not be what you expect.  Instead use a --filter... flag.

   Patterns for matching path/file names
   Pattern syntax
       Here is a formal definition of the pattern syntax, examples are below.

       Rclone matching rules follow a glob style:

              *         matches any sequence of non-separator (/) characters
              **        matches any sequence of characters including / separators
              ?         matches any single non-separator (/) character
              [ [ ! ] { character-range } ]
                        character class (must be non-empty)
              { pattern-list }
                        pattern alternatives
              {{ regexp }}
                        regular expression to match
              c         matches character c (c != *, **, ?, \, [, {, })
              \c        matches reserved character c (c = *, **, ?, \, [, {, }) or character class

       character-range:

              c         matches character c (c != \, -, ])
              \c        matches reserved character c (c = \, -, ])
              lo - hi   matches character c for lo <= c <= hi

       pattern-list:

              pattern { , pattern }
                        comma-separated (without spaces) patterns

       character classes (see Go regular expression reference) include:

              Named character classes (e.g. [\d], [^\d], [\D], [^\D])
              Perl character classes (e.g. \s, \S, \w, \W)
              ASCII character classes (e.g. [[:alnum:]], [[:alpha:]], [[:punct:]], [[:xdigit:]])

       regexp for advanced users to insert a regular expression - see below for more info:

              Any re2 regular expression not containing `}}`

       If the filter pattern starts with a / then it only matches at  the  top  level  of  the  directory  tree,
       relative  to the root of the remote (not necessarily the root of the drive).  If it does not start with /
       then it is matched starting at the end of the path/file name but it only matches a complete path  element
       - it must match from a / separator or the beginning of the path/file.

              file.jpg   - matches "file.jpg"
                         - matches "directory/file.jpg"
                         - doesn't match "afile.jpg"
                         - doesn't match "directory/afile.jpg"
              /file.jpg  - matches "file.jpg" in the root directory of the remote
                         - doesn't match "afile.jpg"
                         - doesn't match "directory/file.jpg"

       The top level of the remote may not be the top level of the drive.

       E.g.  for a Microsoft Windows local directory structure

              F:
              ├── bkp
              ├── data
              │   ├── excl
              │   │   ├── 123.jpg
              │   │   └── 456.jpg
              │   ├── incl
              │   │   └── document.pdf

       To copy the contents of folder data into folder bkp excluding the contents of subfolder excl, the following
       command treats F:\data and F:\bkp as top level for filtering.

              rclone copy F:\data\ F:\bkp\ --exclude=/excl/**

       Important Use / in path/file name patterns and not \ even if running on Microsoft Windows.

       Simple patterns are case sensitive unless the --ignore-case flag is used.

       Without --ignore-case (default)

              potato - matches "potato"
                     - doesn't match "POTATO"

       With --ignore-case

              potato - matches "potato"
                     - matches "POTATO"

   Using regular expressions in filter patterns
       The  syntax  of  filter  patterns  is glob style matching (like bash uses) to make things easy for users.
       However this does not provide absolute control over the matching,  so  for  advanced  users  rclone  also
       provides a regular expression syntax.

       The  regular expressions used are as defined in the Go regular expression reference.  Regular expressions
       should be enclosed in {{ }}.  They will match only the last path segment if the glob doesn’t start with /
       or the whole path name if it does.  Note that rclone does not  attempt  to  parse  the  supplied  regular
       expression,  meaning  that  using  any regular expression filter will prevent rclone from using directory
       filter rules, as it will instead check every path against the supplied regular expression(s).

       Here is how the {{regexp}} is transformed into a full regular expression to match the entire path:

              {{regexp}}  becomes (^|/)(regexp)$
              /{{regexp}} becomes ^(regexp)$

       Regexp syntax can be mixed with glob syntax, for example

              *.{{jpe?g}} to match file.jpg, file.jpeg but not file.png

       You can also use regexp flags - to set case insensitive, for example

              *.{{(?i)jpg}} to match file.jpg, file.JPG but not file.png

       Be careful with wildcards in regular expressions - you don’t want them to match path separators normally.
       To match any file name starting with start and ending with end write

              {{start[^/]*end\.jpg}}

       Not

              {{start.*end\.jpg}}

       The latter will match a directory called start with a file called end.jpg in it, as the  .*  will  match  /
       characters.

       Note that you can use -vv --dump filters to show the filter patterns in regexp format - rclone implements
       the glob patterns by transforming them into regular expressions.

   Filter pattern examples
       Description           Pattern          Matches                         Does not match
       ─────────────────────────────────────────────────────────────────────────────────────────────────
       Wildcard              *.jpg            /file.jpg                       /file.png
                                              /dir/file.jpg                   /dir/file.png
       Rooted                /*.jpg           /file.jpg                       /file.png
                                              /file2.jpg                      /dir/file.jpg
       Alternates            *.{jpg,png}      /file.jpg                       /file.gif
                                              /dir/file.png                   /dir/file.gif
       Path Wildcard         dir/**           /dir/anyfile                    file.png
                                              /subdir/dir/subsubdir/anyfile   /subdir/file.png
       Any Char              *.t?t            /file.txt                       /file.qxt
                                              /dir/file.tzt                   /dir/file.png
       Range                 *.[a-z]          /file.a                         /file.0
                                              /dir/file.b                     /dir/file.1
       Escape                *.\?\?\?         /file.???                       /file.abc
                                              /dir/file.???                   /dir/file.def
       Class                 *.\d\d\d         /file.012                       /file.abc
                                              /dir/file.345                   /dir/file.def
       Regexp                *.{{jpe?g}}      /file.jpeg                      /file.png
                                              /dir/file.jpg                   /dir/file.jpeeg
       Rooted Regexp         /{{.*\.jpe?g}}   /file.jpeg                      /file.png
                                              /file.jpg                       /dir/file.jpg

   How filter rules are applied to files
       Rclone path/file name filters are made up of one or more of the following flags:

       • --include

       • --include-from

       • --exclude

       • --exclude-from

       • --filter

       • --filter-from

       There can be more than one instance of individual flags.

       Rclone  internally  uses  a combined list of all the include and exclude rules.  The order in which rules
       are processed can influence the result of the filter.

       All flags of the same type are processed together in the  order  above,  regardless  of  what  order  the
       different types of flags are included on the command line.

       Multiple  instances  of the same flag are processed from left to right according to their position in the
       command line.

       To mix up the order of processing includes and excludes use --filter... flags.

       Within --include-from, --exclude-from and --filter-from flags rules are processed from top to  bottom  of
       the referenced file.

       If  there  is  an --include or --include-from flag specified, rclone implies a - ** rule which it adds to
       the bottom of the internal rule list.  Specifying a + rule with a --filter... flag does  not  imply  that
       rule.

       Each path/file name passed through rclone is matched against the combined filter list.  At first match to
       a  rule  the  path/file  name  is included or excluded and no further filter rules are processed for that
       path/file.

       If rclone does not find a match,  after  testing  against  all  rules  (including  the  implied  rule  if
       appropriate), the path/file name is included.

       Any path/file included at that stage is processed by the rclone command.

       --files-from and --files-from-raw flags over-ride and cannot be combined with other filter options.

       To see the internal combined rule list, in regular expression form, for a command add the --dump  filters
       flag.  Running an rclone command with --dump filters and -vv flags lists the internal filter elements and
       shows how they are applied to each source path/file.  There is not currently a means  provided  to  pass
       regular  expression  filter  options  into  rclone  directly,  though character class filter rules contain
       character classes (see the Go regular expression reference).
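
       For example, to inspect the combined rule list, including the implied - ** rule, that a command will  use
       (remote name illustrative):

              rclone ls remote: --include "*.jpg" --dump filters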

   How filter rules are applied to directories
       Rclone commands are applied to path/file names not directories.  The entire contents of a  directory  can
       be matched to a filter by the pattern directory/* or recursively by directory/**.

       Directory filter rules are defined with a closing / separator.

       E.g.  /directory/subdirectory/ is an rclone directory filter rule.

       Rclone  commands  can  use  directory filter rules to determine whether they recurse into subdirectories.
       This potentially optimises access to a remote  by  avoiding  listing  unnecessary  directories.   Whether
       optimisation is desirable depends on the specific filter rules and source remote content.

       If  any  regular  expression filters are in use, then no directory recursion optimisation is possible, as
       rclone must check every path against the supplied regular expression(s).

       Directory recursion optimisation occurs if either:

       • A source remote does not support the rclone ListR  primitive.   local,  sftp,  Microsoft  OneDrive  and
         WebDAV  do not support ListR.  Google Drive and most bucket type storage do (see the overview docs for
         the full list).

       • On  other  remotes  (those  that  support ListR), if the rclone command is not naturally recursive, and
         provided it is not run with the --fast-list flag.  ls, lsf -R and  size  are  naturally  recursive  but
         sync, copy and move are not.

       • Whenever the --disable ListR flag is applied to an rclone command.

       Rclone  commands  imply directory filter rules from path/file filter rules.  To view the directory filter
       rules rclone has implied for a command specify the --dump filters flag.

       E.g.  for an include rule

              /a/*.jpg

       Rclone implies the directory include rule

              /a/

       Directory filter rules specified in an rclone command can limit  the  scope  of  an  rclone  command  but
       path/file filters still have to be specified.

       E.g.   rclone  ls  remote:  --include  /directory/  will not match any files.  Because it is an --include
       option the --exclude ** rule is implied, and the /directory/ pattern serves only to  optimise  access  to
       the remote by ignoring everything outside of that directory.

       E.g.  rclone ls remote: --filter-from filter-list.txt with a file filter-list.txt:

              - /dir1/
              - /dir2/
              + *.pdf
              - **

       All  files  in directories dir1 or dir2 or their subdirectories are completely excluded from the listing.
       Only files of suffix pdf in the root of remote: or its subdirectories are listed.  The - ** rule prevents
       listing of any path/files not previously matched by the rules above.

       Option exclude-if-present creates a directory exclude rule based on the presence of a file in a directory
       and takes precedence over other rclone directory filter rules.

       When using pattern list syntax, if a pattern item contains either / or **, then rclone will not be  able
       to imply a directory filter rule from this pattern list.

       E.g.  for an include rule

              {dir1/**,dir2/**}

       Rclone  will  match files below directories dir1 or dir2 only, but will not be able to use this filter to
       exclude a directory dir3 from being traversed.

       Directory recursion optimisation may affect performance, but normally not the result.  One  exception  to
       this  is  sync operations with option --create-empty-src-dirs, where any traversed empty directories will
       be created.  With the pattern list example {dir1/**,dir2/**} above, this would create an empty  directory
       dir3  on  destination (when it exists on source).  Changing the filter to {dir1,dir2}/**, or splitting it
       into two include rules --include dir1/**  --include  dir2/**,  will  match  the  same  files  while  also
       filtering directories, with the result that an empty directory dir3 will no longer be created.

   --exclude - Exclude files matching pattern
       Excludes path/file names from an rclone command based on a single exclude rule.

       This flag can be repeated.  See above for the order filter flags are processed in.

       --exclude should not be used with --include, --include-from, --filter or --filter-from flags.

       --exclude has no effect when combined with --files-from or --files-from-raw flags.

       E.g.  rclone ls remote: --exclude *.bak excludes all .bak files from listing.

       E.g.   rclone  size  remote: "--exclude /dir/**" returns the total size of all files on remote: excluding
       those in root directory dir and sub directories.

       E.g.  on Microsoft Windows rclone ls remote: --exclude "*\[{JP,KR,HK}\]*" lists the files in remote: with
       [JP] or [KR] or [HK] in their name.  Quotes prevent the shell from  interpreting  the  \  characters.   \
       characters escape the [ and ] so an rclone filter treats them literally rather than as a character-range.
       The  {  and  }  define an rclone pattern list.  For other operating systems single quotes are required,
       i.e.  rclone ls remote: --exclude '*\[{JP,KR,HK}\]*'

   --exclude-from - Read exclude patterns from file
       Excludes path/file names from an rclone command based on rules in a named file.  The file contains a list
       of remarks and pattern rules.

       For an example exclude-file.txt:

              # a sample exclude rule file
              *.bak
              file2.jpg

       rclone ls remote: --exclude-from exclude-file.txt lists the files on remote: except those named file2.jpg
       or with a suffix .bak.  That is equivalent to rclone ls remote: --exclude file2.jpg --exclude "*.bak".

       This flag can be repeated.  See above for the order filter flags are processed in.

       The --exclude-from flag is useful where multiple exclude filter rules are applied to an rclone command.

       --exclude-from should not be used with --include, --include-from, --filter or --filter-from flags.

       --exclude-from has no effect when combined with --files-from or --files-from-raw flags.

       --exclude-from followed by - reads filter rules from standard input.

   --include - Include files matching pattern
       Adds a single include rule based on path/file names to an rclone command.

       This flag can be repeated.  See above for the order filter flags are processed in.

       --include has no effect when combined with --files-from or --files-from-raw flags.

       --include implies --exclude ** at the end of an rclone  internal  filter  list.   Therefore  if  you  mix
       --include  and  --include-from  flags with --exclude, --exclude-from, --filter or --filter-from, you must
       use include rules for all the files you want in the include statement.   For  more  flexibility  use  the
       --filter-from flag.

       E.g.   rclone  ls  remote:  --include "*.{png,jpg}" lists the files on remote: with suffix .png and .jpg.
       All other files are excluded.

       E.g.  multiple rclone copy commands can be combined with --include and a pattern-list.

              rclone copy /vol1/A remote:A
              rclone copy /vol1/B remote:B

       is equivalent to:

              rclone copy /vol1 remote: --include "{A,B}/**"

       E.g.  rclone ls remote:/wheat --include "??[^[:punct:]]*" lists the files in the  remote:  directory  wheat
       (and subdirectories) whose third character is not punctuation.  This example uses an ASCII character class.

   --include-from - Read include patterns from file
       Adds  path/file  names  to an rclone command based on rules in a named file.  The file contains a list of
       remarks and pattern rules.

       For an example include-file.txt:

              # a sample include rule file
              *.jpg
              file2.avi

       rclone ls remote: --include-from include-file.txt lists the files  on  remote:  with  name  file2.avi  or
       suffix .jpg.  That is equivalent to rclone ls remote: --include file2.avi --include "*.jpg".

       This flag can be repeated.  See above for the order filter flags are processed in.

       The --include-from flag is useful where multiple include filter rules are applied to an rclone command.

       --include-from  implies  --exclude ** at the end of an rclone internal filter list.  Therefore if you mix
       --include and --include-from flags with --exclude, --exclude-from, --filter or  --filter-from,  you  must
       use  include  rules  for  all  the files you want in the include statement.  For more flexibility use the
       --filter-from flag.

       --include-from has no effect when combined with --files-from or --files-from-raw flags.

       --include-from followed by - reads filter rules from standard input.

   --filter - Add a file-filtering rule
       Specifies path/file names to an rclone command, based on a single include or exclude  rule,  in  +  or  -
       format.

       This flag can be repeated.  See above for the order filter flags are processed in.

       --filter + differs from --include.  In the case of --include rclone implies an --exclude ** rule which it
       adds to the bottom of the internal rule list.  --filter...+ does not imply that rule.

       --filter has no effect when combined with --files-from or --files-from-raw flags.

       --filter should not be used with --include, --include-from, --exclude or --exclude-from flags.

       E.g.  rclone ls remote: --filter "- *.bak" excludes all .bak files from a list of remote:.

   --filter-from - Read filtering patterns from a file
       Adds path/file names to an rclone command based on rules in a named file.  The file contains  a  list  of
       remarks  and  pattern  rules.   Include  rules  start with + and exclude rules with -.  ! clears existing
       rules.  Rules are processed in the order they are defined.

       This flag can be repeated.  See above for the order filter flags are processed in.

       Arrange the order of filter rules with the most restrictive first and work down.

       E.g.  for filter-file.txt:

              # a sample filter rule file
              - secret*.jpg
              + *.jpg
              + *.png
              + file2.avi
              - /dir/Trash/**
              + /dir/**
              # exclude everything else
              - *

       rclone ls remote: --filter-from filter-file.txt lists the path/files on remote: including all jpg and png
       files, excluding any matching secret*.jpg and including file2.avi.  It also includes  everything  in  the
       directory  dir  at  the  root  of  remote, except remote:dir/Trash which it excludes.  Everything else is
       excluded.

       E.g.  for an alternative filter-file.txt:

              - secret*.jpg
              + *.jpg
              + *.png
              + file2.avi
              - *

       Files file1.jpg, file3.png and file2.avi are listed whilst secret17.jpg  and  files  without  the  suffix
       .jpg or .png are excluded.

       E.g.  for an alternative filter-file.txt:

              + *.jpg
              + *.gif
              !
              + 42.doc
              - *

       Only file 42.doc is listed.  Prior rules are cleared by the !.

   --files-from - Read list of source-file names
       Adds  path/files  to an rclone command from a list in a named file.  Rclone processes the path/file names
       in the order of the list, and no others.

       Other filter flags (--include, --include-from, --exclude, --exclude-from, --filter and --filter-from) are
       ignored when --files-from is used.

       --files-from expects a list of files as its input.  Leading or trailing whitespace is stripped  from  the
       input lines.  Lines starting with # or ; are ignored.

       Rclone commands with a --files-from flag traverse the remote, treating the names in --files-from as a set
       of filters.

       If  the  --no-traverse  and  --files-from flags are used together an rclone command does not traverse the
       remote.  Instead it addresses each path/file named in the file individually.  Each path/file name typically
       requires 1 API call.  This can be efficient for a short --files-from list and a remote  containing  many
       files.

       Rclone commands do not error if any names in the --files-from file are missing from the source remote.

       The --files-from flag can be repeated in a single rclone command to read path/file names from  more  than
       one file.  The files are read from left to right along the command line.

       Paths  within  the  --files-from  file  are interpreted as starting with the root specified in the rclone
       command.  Leading / separators are ignored.  See --files-from-raw if you need the input to be processed in
       a raw manner.

       E.g.  for a file files-from.txt:

              # comment
              file1.jpg
              subdir/file2.jpg

       rclone copy --files-from files-from.txt /home/me/pics remote:pics copies the following,  if  they  exist,
       and only those files.

              /home/me/pics/file1.jpg        → remote:pics/file1.jpg
              /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg

       E.g.  to copy the following files referenced by their absolute paths:

              /home/user1/42
              /home/user1/dir/ford
              /home/user2/prefect

       First  find a common subdirectory - in this case /home and put the remaining files in files-from.txt with
       or without leading /, e.g.

              user1/42
              user1/dir/ford
              user2/prefect

       Then copy these to a remote:

              rclone copy --files-from files-from.txt /home remote:backup

       The three files are transferred as follows:

              /home/user1/42       → remote:backup/user1/42
              /home/user1/dir/ford → remote:backup/user1/dir/ford
              /home/user2/prefect  → remote:backup/user2/prefect

       Alternatively if / is chosen as root then files-from.txt will be:

              /home/user1/42
              /home/user1/dir/ford
              /home/user2/prefect

       The copy command will be:

              rclone copy --files-from files-from.txt / remote:backup

       Then there will be an extra home directory on the remote:

              /home/user1/42       → remote:backup/home/user1/42
              /home/user1/dir/ford → remote:backup/home/user1/dir/ford
              /home/user2/prefect  → remote:backup/home/user2/prefect

   --files-from-raw - Read list of source-file names without any processing
       This flag is the same as --files-from except that input is read in a raw manner.  Lines with leading
       or trailing whitespace, and lines starting with ; or #, are read without any processing.  rclone lsf
       has a compatible format that can be used to export file lists from remotes for input to
       --files-from-raw.

   --ignore-case - make searches case insensitive
       By default, rclone filter patterns are case sensitive.  The --ignore-case flag makes all of the filter
       patterns on the command line case insensitive.

       E.g.  --include "zaphod.txt" does not match a file Zaphod.txt.  With --ignore-case a match is made.

   Quoting shell metacharacters
       Rclone commands with filter patterns containing shell metacharacters may not work as expected in your
       shell and may require quoting.

       E.g.  linux, OSX (* metacharacter)

       • --include \*.jpg

       • --include '*.jpg'

       • --include='*.jpg'

       On Microsoft Windows expansion is done by the command, not the shell, so --include *.jpg does not
       require quoting.

       If  the  rclone error Command .... needs .... arguments maximum: you provided .... non flag arguments: is
       encountered, the cause is commonly spaces within the name of a remote or flag value.  The fix then is  to
       quote values containing spaces.

   Other filters
   --min-size - Don’t transfer any file smaller than this
       Controls the minimum size of files within the scope of an rclone command.  Default units are KiB but
       abbreviations K, M, G, T or P are valid.

       E.g.  rclone ls remote: --min-size 50k lists files on remote: of 50 KiB size or larger.

       See the size option docs for more info.

   --max-size - Don’t transfer any file larger than this
       Controls the maximum size of files within the scope of an rclone command.  Default units are KiB but
       abbreviations K, M, G, T or P are valid.

       E.g.  rclone ls remote: --max-size 1G lists files on remote: of 1 GiB size or smaller.

       See the size option docs for more info.

   --max-age - Don’t transfer any file older than this
       Controls the maximum age of files within the scope of an rclone command.

       --max-age applies only to files and not to directories.

       E.g.  rclone ls remote: --max-age 2d lists files on remote: of 2 days old or less.

       See the time option docs for valid formats.

   --min-age - Don’t transfer any file younger than this
       Controls  the  minimum  age  of  files  within  the scope of an rclone command.  (see --max-age for valid
       formats)

       --min-age applies only to files and not to directories.

       E.g.  rclone ls remote: --min-age 2d lists files on remote: of 2 days old or more.

       See the time option docs for valid formats.

   Other flags
   --delete-excluded - Delete files on dest excluded from sync
       Important: this flag is dangerous to your data - use with --dry-run and -v first.

       In conjunction with rclone sync, --delete-excluded  deletes  any  files  on  the  destination  which  are
       excluded from the command.

       E.g.  the scope of rclone sync -i A: B: can be restricted:

              rclone --min-size 50k --delete-excluded sync A: B:

       All  files  on  B:  which are less than 50 KiB are deleted because they are excluded from the rclone sync
       command.

   --dump filters - dump the filters to the output
       Dumps the defined filters to standard output in regular expression format.

       Useful for debugging.
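
       E.g. to see how a rule set is compiled while listing (the remote name is illustrative):

              rclone ls remote: --include "*.jpg" --dump filters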

   Exclude directory based on a file
       The --exclude-if-present flag controls whether a directory is within the scope of an rclone command
       based on the presence of a named file within it.  The flag can be repeated to check for multiple file
       names; the presence of any of them will exclude the directory.

       This flag has priority over other filter flags.

       E.g.  for the following directory structure:

              dir1/file1
              dir1/dir2/file2
              dir1/dir2/dir3/file3
              dir1/dir2/dir3/.ignore

       The command rclone ls --exclude-if-present .ignore dir1 does not list dir3, file3 or .ignore.

   Common pitfalls
       The most frequent filter support issues on the rclone forum are:

       • Not using paths relative to the root of the remote

       • Not using / to match from the root of a remote

       • Not using ** to match the contents of a directory

GUI (Experimental)

       Rclone can serve a web based GUI (graphical user interface).  This is somewhat experimental at the moment
       so things may be subject to change.

       Run this command in a terminal and rclone will download and then display the GUI in a web browser.

              rclone rcd --rc-web-gui

       This will produce logs like this and rclone needs to continue to run to serve the GUI:

              2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
              2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path :  /home/USER/.cache/rclone/webgui/v0.0.6.zip]
              2019/08/25 11:40:16 NOTICE: Unzipping
              2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/

       This  assumes  you are running rclone locally on your machine.  It is possible to separate the rclone and
       the GUI - see below for details.

       If you wish to check for updates then you can add --rc-web-gui-update to the command line.

       If you find your GUI broken, you may force it to update by adding --rc-web-gui-force-update.

       By default, rclone will open your browser.  Add --rc-web-gui-no-open-browser to disable this feature.

   Using the GUI
       Once the GUI opens, you will be looking at the dashboard, which gives an overall view.

       On the left hand side you will see a series of view buttons you can click on:

       • Dashboard - main overview

       • Configs - examine and create new configurations

       • Explorer - view, download and upload files to the cloud storage systems

       • Backend - view or alter the backend config

       • Log out

       (More docs and walkthrough video to come!)

   How it works
       When you run rclone rcd --rc-web-gui, this is what happens:

       • Rclone starts but only runs the remote control API (“rc”).

       • The API is bound to localhost with an auto-generated username and password.

       • If the API bundle is missing then rclone will download it.

       • rclone will start serving the files from the API bundle over the same port as the API.

       • rclone will open the browser with a login_token so it can log straight in.

   Advanced use
       The rclone rcd command may use any of the flags documented on the rc page.

       The flag --rc-web-gui is shorthand for

       • Download the web GUI if necessary

       • Check we are using some authentication

       • --rc-user gui

       • --rc-pass <random password>

       • --rc-serve

       These flags can be overridden as desired.

       See also the rclone rcd documentation.

   Example: Running a public GUI
       For example, the GUI could be served on a public port over SSL, using an htpasswd file, with the
       following flags:

       • --rc-web-gui

       • --rc-addr :443

       • --rc-htpasswd /path/to/htpasswd

       • --rc-cert /path/to/ssl.crt

       • --rc-key /path/to/ssl.key
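
       Combined into a single command line this might look like:

              rclone rcd --rc-web-gui --rc-addr :443 --rc-htpasswd /path/to/htpasswd \
                  --rc-cert /path/to/ssl.crt --rc-key /path/to/ssl.key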

   Example: Running a GUI behind a proxy
       If you want to run the GUI behind a proxy at /rclone you could use these flags:

       • --rc-web-gui

       • --rc-baseurl rclone

       • --rc-htpasswd /path/to/htpasswd

       Or instead of htpasswd if you just want a single user and password:

       • --rc-user me

       • --rc-pass mypassword
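
       E.g. using the single user option, this might look like:

              rclone rcd --rc-web-gui --rc-baseurl rclone --rc-user me --rc-pass mypassword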

   Project
       The GUI is being developed in the rclone/rclone-webui-react repository.

       Bug reports and contributions are very welcome :-)

       If you have questions then please ask them on the rclone forum.

Remote controlling rclone with its API

       If  rclone  is  run  with the --rc flag then it starts an HTTP server which can be used to remote control
       rclone using its API.

       You can either use the rc command to access the API or use HTTP directly.

       If you just want to run a remote control then see the rcd command.

   Supported parameters
   --rc
       Flag to start the HTTP server listening for remote requests.

   --rc-addr=IP
       IPaddress:Port or :Port to bind server to.  (default “localhost:5572”)

   --rc-cert=KEY
       SSL PEM key (concatenation of certificate and CA certificate)

   --rc-client-ca=PATH
       Client certificate authority to verify clients with

   --rc-htpasswd=PATH
       htpasswd file - if not provided no authentication is done

   --rc-key=PATH
       SSL PEM Private key

   --rc-max-header-bytes=VALUE
       Maximum size of request header (default 4096)

   --rc-min-tls-version=VALUE
       The minimum TLS version that is acceptable.  Valid values are “tls1.0”, “tls1.1”, “tls1.2” and “tls1.3”
       (default “tls1.0”).

   --rc-user=VALUE
       User name for authentication.

   --rc-pass=VALUE
       Password for authentication.

   --rc-realm=VALUE
       Realm for authentication (default “rclone”)

   --rc-server-read-timeout=DURATION
       Timeout for server reading data (default 1h0m0s)

   --rc-server-write-timeout=DURATION
       Timeout for server writing data (default 1h0m0s)

   --rc-serve
       Enable the serving of remote objects via the HTTP interface.  This means objects will be accessible at
       http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/*
       to see a listing of the remotes.  Objects may be requested from remotes using this syntax
       http://127.0.0.1:5572/[remote:path]/path/to/object

       Default Off.

   --rc-files /path/to/directory
       Path to local files to serve on the HTTP server.

       If this is set then rclone will serve the files in that directory.  It will also open the root in the web
       browser if specified.  This is for implementing browser based GUIs for rclone functions.

       If  --rc-user  or  --rc-pass is set then the URL that is opened will have the authorization in the URL in
       the http://user:pass@localhost/ style.

       Default Off.

   --rc-enable-metrics
       Enable OpenMetrics/Prometheus compatible endpoint at /metrics.

       Default Off.

   --rc-web-gui
       Set this flag to serve the default web gui on the same port as rclone.

       Default Off.

   --rc-allow-origin
       Set the allowed Access-Control-Allow-Origin for rc requests.

       Can be used with --rc-web-gui if rclone is running on a different IP than the web GUI.

       Default is IP address on which rc is running.

   --rc-web-fetch-url
       Set the URL to fetch the rclone-web-gui files from.

       Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.

   --rc-web-gui-update
       Set this flag to check and update rclone-webui-react from the rc-web-fetch-url.

       Default Off.

   --rc-web-gui-force-update
       Set this flag to force update rclone-webui-react from the rc-web-fetch-url.

       Default Off.

   --rc-web-gui-no-open-browser
       Set this flag to disable opening browser automatically when using web-gui.

       Default Off.

   --rc-job-expire-duration=DURATION
       Expire finished async jobs older than DURATION (default 60s).

   --rc-job-expire-interval=DURATION
       Interval duration to check for expired async jobs (default 10s).

   --rc-no-auth
       By default rclone will require authorisation to have been set up on the rc interface in order to use
       any methods which access any rclone remotes.  E.g. operations/list is denied as it involves creating a
       remote, as is sync/copy.

       If this is set then no authorisation will be required on the server to use these methods.  The
       alternative is to use --rc-user and --rc-pass and use these credentials in the request.

       Default Off.

   --rc-baseurl
       Prefix for URLs.

       Default is root.

   --rc-template
       User-specified template.

   Accessing the remote control via the rclone rc command
       Rclone itself implements the remote control protocol in its rclone rc command.

       You can use it like this

              $ rclone rc rc/noop param1=one param2=two
              {
                  "param1": "one",
                  "param2": "two"
              }

       Run rclone rc on its own to see the help for the installed remote control commands.

   JSON input
       rclone rc also supports a --json flag which can be used to send more complicated input parameters.

              $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
              {
                  "p1": [
                      1,
                      "2",
                      null,
                      4
                  ],
                  "p2": {
                      "a": 1,
                      "b": 2
                  }
              }

       If the parameter being passed is an object then it can be passed as a JSON string rather than using
       the --json flag, which simplifies the command line.

              rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'

       Rather than

              rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'

   Special parameters
       The rc interface supports some special parameters which apply to all commands.  These  start  with  _  to
       show they are different.

   Running asynchronous jobs with _async = true
       Each rc call is classified as a job and is assigned its own id.  By default jobs are executed
       immediately as they are created, i.e. synchronously.

       If _async has a true value when supplied to an rc call then it will return immediately with a job id
       and the task will be run in the background.  The job/status call can be used to get information about
       the background job.  The job can be queried for up to 1 minute after it has finished.

       It  is  recommended  that  potentially  long  running   jobs,   e.g. sync/sync,   sync/copy,   sync/move,
       operations/purge  are  run with the _async flag to avoid any potential problems with the HTTP request and
       response timing out.

       Starting a job with the _async flag:

              $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
              {
                  "jobid": 2
              }

       Query the status to see if the job has finished.  For more information on the  meaning  of  these  return
       parameters see the job/status call.

              $ rclone rc --json '{ "jobid":2 }' job/status
              {
                  "duration": 0.000124163,
                  "endTime": "2018-10-27T11:38:07.911245881+01:00",
                  "error": "",
                  "finished": true,
                  "id": 2,
                  "output": {
                      "_async": true,
                      "p1": [
                          1,
                          "2",
                          null,
                          4
                      ],
                      "p2": {
                          "a": 1,
                          "b": 2
                      }
                  },
                  "startTime": "2018-10-27T11:38:07.911121728+01:00",
                  "success": true
              }

       job/list can be used to show the running or recently completed jobs

              $ rclone rc job/list
              {
                  "jobids": [
                      2
                  ]
              }

   Setting config flags with _config
       If  you  wish to set config (the equivalent of the global flags) for the duration of an rc call only then
       pass in the _config parameter.

       This should be in the same format as the config key returned by options/get.

       For example, if you wished to run a sync with the --checksum parameter, you would pass this parameter  in
       your JSON blob.

              "_config":{"CheckSum": true}

       If using rclone rc this could be passed as

              rclone rc operations/sync ... _config='{"CheckSum": true}'

       Any  config  parameters  you  don’t set will inherit the global defaults which were set with command line
       flags or environment variables.

       Note that it is possible to set some values as strings or integers - see data types for more info.   Here
       is an example setting the equivalent of --buffer-size in string or integer format.

              "_config":{"BufferSize": "42M"}
              "_config":{"BufferSize": 44040192}

       If you wish to check the _config assignment has worked properly then calling options/local will show what
       the value got set to.
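
       E.g. (a sketch; --loopback avoids needing a running rc server):

              rclone rc --loopback options/local _config='{"CheckSum": true}'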

   Setting filter flags with _filter
       If you wish to set filters for the duration of an rc call only then pass in the _filter parameter.

       This should be in the same format as the filter key returned by options/get.

       For example, if you wished to run a sync with these flags

              --max-size 1M --max-age 42s --include "a" --include "b"

       you would pass this parameter in your JSON blob.

              "_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}

       If using rclone rc this could be passed as

              rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'

       Any  filter  parameters  you  don’t set will inherit the global defaults which were set with command line
       flags or environment variables.

       Note that it is possible to set some values as strings or integers - see data types for more info.
       Here is an example setting the equivalent of --min-size in string or integer format.

              "_filter":{"MinSize": "42M"}
              "_filter":{"MinSize": 44040192}

       If you wish to check the _filter assignment has worked properly then calling options/local will show what
       the value got set to.

   Assigning operations to groups with _group = value
       Each rc call has its own stats group for tracking its metrics.  By default the group name is a
       composite of the prefix job/ and the id of the job, e.g. job/1.

       If _group has a value then stats for that request will be grouped under that value.  This allows the
       caller to group stats under their own name.

       Stats for a specific group can be accessed by passing group to core/stats:

              $ rclone rc --json '{ "group": "job/1" }' core/stats
              {
                  "speed": 12345
                  ...
              }

   Data types
       When the API returns types, these will mostly be straightforward integer, string or boolean types.

       However some of the types returned by the options/get call and taken by the options/set call, as well
       as the vfsOpt, mountOpt and _config parameters, are more complicated:

       • Duration - these are returned as an integer duration in nanoseconds.  They may be set as an integer,
         or they may be set with a time string, eg “5s”.  See the options section for more info.

       • Size - these are returned as an integer number of bytes.  They may be set as an integer or they may  be
         set with a size suffix string, eg “10M”.  See the options section for more info.

       • Enumerated type (such as CutoffMode, DumpFlags, LogLevel, VfsCacheMode) - these will be returned as
         an integer and may be set as an integer but more conveniently they can be set as a string, eg “HARD”
         for CutoffMode or “DEBUG” for LogLevel.

       • BandwidthSpec - this will be set and returned as a string, eg “1M”.

   Specifying remotes to work on
       Remotes are specified with the fs=, srcFs=, dstFs= parameters depending on the command being used.

       The  parameters  can be a string as per the rest of rclone, eg s3:bucket/path or :sftp:/my/dir.  They can
       also be specified as JSON blobs.

       If specifying a JSON blob it should be an object mapping strings to strings.  These values will be
       used to configure the remote.  There are 3 special values which may be set:

       • type - set to type to specify a remote called :type:

       • _name - set to name to specify a remote called name:

       • _root - sets the root of the remote - may be empty

       One of _name or type should normally be set.  If the local backend is desired then type should be set  to
       local.  If _root isn’t specified then it defaults to the root of the remote.

       For example this JSON is equivalent to remote:/tmp

              {
                  "_name": "remote",
                  "_path": "/tmp"
              }

       And this is equivalent to :sftp,host='example.com':/tmp

              {
                  "type": "sftp",
                  "host": "example.com",
                  "_path": "/tmp"
              }

       And this is equivalent to /tmp/dir

              {
                  "type": "local",
                  "_root": "/tmp/dir"
              }
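
       Such a blob can be supplied to any call taking an fs parameter, e.g. (a sketch using --loopback so no
       rc server is needed; the path is illustrative):

              rclone rc --loopback --json \
                  '{"fs": {"type": "local", "_root": "/tmp"}, "remote": ""}' operations/list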

   Supported commands
   backend/command: Runs a backend command.
       This takes the following parameters:

       • command - a string with the command name

       • fs - a remote name string e.g. “drive:”

       • arg - a list of arguments for the backend command

       • opt - a map of string to string of options

       Returns:

       • result - result from the backend command

       Example:

              rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2

       Returns

              {
                  "result": {
                      "arg": [
                          "path1",
                          "path2"
                      ],
                      "name": "noop",
                      "opt": {
                          "blue": "",
                          "echo": "yes"
                      }
                  }
              }

       Note that this is the direct equivalent of using this “backend” command:

              rclone backend noop . -o echo=yes -o blue path1 path2

       Note that arguments must be preceded by the “-a” flag

       See the backend command for more information.

       Authentication is required for this call.

   cache/expire: Purge a remote from cache
       Purge  a  remote from the cache backend.  Supports either a directory or a file.  Params: - remote = path
       to remote (required) - withData = true/false to delete cached data (chunks) as well (optional)

       Eg

              rclone rc cache/expire remote=path/to/sub/folder/
              rclone rc cache/expire remote=/ withData=true

   cache/fetch: Fetch file chunks
       Ensure the specified file chunks are cached on disk.

       The chunks= parameter specifies the file chunks to check.  It takes a comma separated list of array slice
       indices.  The slice indices are similar to Python slices: start[:end]

       start is the 0 based chunk number from the beginning of the file to fetch inclusive.  end is the 0
       based chunk number from the beginning of the file to fetch exclusive.  Both values can be negative,
       in which case they count from the back of the file.  The value “-5:” represents the last 5 chunks of
       a file.

       Some valid examples are:

       • “:5,-5:” -> the first and last five chunks

       • “0,-2” -> the first and the second last chunk

       • “0:10” -> the first ten chunks

       Any parameter with a key that starts with “file” can be used to specify files to fetch, e.g.

              rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye

       File names will automatically be encrypted when a crypt remote is used on top of the cache.

   cache/stats: Get cache stats
       Show statistics for the cache remote.

   config/create: Create the config for a remote.
       This takes the following parameters:

       • name - name of remote

       • parameters - a map of { “key”: “value” } pairs

       • type - type of the new remote

       • opt - a dictionary of options to control the configuration

         • obscure - declare passwords are plain and need obscuring

         • noObscure - declare passwords are already obscured and don’t need obscuring

         • nonInteractive - don’t interact with a user, return questions

         • continue - continue the config process with an answer

         • all - ask all the config questions not just the post config ones

         • state - state to restart with - used with continue

         • result - result to restart with - used with continue

       See the config create command for more information on the above.
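
       An illustrative non-interactive call, creating an alias remote (the name and parameters here are
       examples only):

              rclone rc config/create name=myalias type=alias parameters='{"remote": "/tmp"}'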

       Authentication is required for this call.

   config/delete: Delete a remote in the config file.
       Parameters:

       • name - name of remote to delete

       See the config delete command for more information on the above.

       Authentication is required for this call.

   config/dump: Dumps the config file.
       Returns a JSON object of key: value pairs, where keys are remote names and values are the config
       parameters.

       See the config dump command for more information on the above.

       Authentication is required for this call.

   config/get: Get a remote in the config file.
       Parameters:

       • name - name of remote to get

       See the config dump command for more information on the above.

       Authentication is required for this call.

   config/listremotes: Lists the remotes in the config file.
       Returns:

       • remotes - array of remote names

       See the listremotes command for more information on the above.

       Authentication is required for this call.

   config/password: Update the password in the config for a remote.
       This takes the following parameters:

       • name - name of remote

       • parameters - a map of { “key”: “value” } pairs

       See the config password command for more information on the above.

       Authentication is required for this call.

   config/providers: Shows how providers are configured in the config file.
       Returns a JSON object:

       • providers - array of objects

       See the config providers command for more information on the above.

       Authentication is required for this call.

   config/update: Update the config for a remote.
       This takes the following parameters:

       • name - name of remote

       • parameters - a map of { “key”: “value” } pairs

       • opt - a dictionary of options to control the configuration

         • obscure - declare passwords are plain and need obscuring

         • noObscure - declare passwords are already obscured and don’t need obscuring

         • nonInteractive - don’t interact with a user, return questions

         • continue - continue the config process with an answer

         • all - ask all the config questions not just the post config ones

         • state - state to restart with - used with continue

         • result - result to restart with - used with continue

       See the config update command for more information on the above.

       Authentication is required for this call.

   core/bwlimit: Set the bandwidth limit.
       This  sets the bandwidth limit to the string passed in.  This should be a single bandwidth limit entry or
       a pair of upload:download bandwidth.

       Eg

              rclone rc core/bwlimit rate=off
              {
                  "bytesPerSecond": -1,
                  "bytesPerSecondTx": -1,
                  "bytesPerSecondRx": -1,
                  "rate": "off"
              }
              rclone rc core/bwlimit rate=1M
              {
                  "bytesPerSecond": 1048576,
                  "bytesPerSecondTx": 1048576,
                  "bytesPerSecondRx": 1048576,
                  "rate": "1M"
              }
              rclone rc core/bwlimit rate=1M:100k
              {
                  "bytesPerSecond": 1048576,
                  "bytesPerSecondTx": 1048576,
                  "bytesPerSecondRx": 102400,
                  "rate": "1M:100k"
              }

       If the rate parameter is not supplied then the bandwidth is queried

              rclone rc core/bwlimit
              {
                  "bytesPerSecond": 1048576,
                  "bytesPerSecondTx": 1048576,
                  "bytesPerSecondRx": 1048576,
                  "rate": "1M"
              }

       The format of the parameter is exactly the same as passed to --bwlimit except that only one bandwidth
       may be specified.

       In  either  case  “rate”  is  returned  as a human-readable string, and “bytesPerSecond” is returned as a
       number.

   core/command: Run a rclone terminal command over rc.
       This takes the following parameters:

       • command - a string with the command name.

       • arg - a list of arguments for the backend command.

       • opt - a map of string to string of options.

       • returnType - one of (“COMBINED_OUTPUT”, “STREAM”, “STREAM_ONLY_STDOUT”, “STREAM_ONLY_STDERR”).

         • Defaults to “COMBINED_OUTPUT” if not set.

         • The STREAM returnTypes will write the output to the body of the HTTP message.

         • The COMBINED_OUTPUT will write the output to the “result” parameter.

       Returns:

       • result - result from the backend command.

         • Only set when using returnType “COMBINED_OUTPUT”.

       • error - set if rclone exits with an error code.

       • returnType - one of (“COMBINED_OUTPUT”, “STREAM”, “STREAM_ONLY_STDOUT”, “STREAM_ONLY_STDERR”).

       Example:

              rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
              rclone rc core/command -a ls -a mydrive:/ -o max-depth=1

       Returns:

              {
                  "error": false,
                  "result": "<Raw command line output>"
              }

              OR
              {
                  "error": true,
                  "result": "<Raw command line output>"
              }

       Authentication is required for this call.

   core/gc: Runs a garbage collection.
       This tells the go runtime to do a garbage collection run.  It isn’t necessary to call this normally,  but
       it can be useful for debugging memory problems.

   core/group-list: Returns list of stats.
       This returns a list of stats groups currently in memory.

       Returns the following values:

              {
                  "groups":  an array of group names:
                      [
                          "group1",
                          "group2",
                          ...
                      ]
              }

   core/memstats: Returns the memory statistics
       This returns the memory statistics of the running program.  What the values mean is explained in the
       go docs: https://golang.org/pkg/runtime/#MemStats

       The most interesting values for most people are:

       • HeapAlloc - this is the amount of memory rclone is actually using

       • HeapSys - this is the amount of memory rclone has obtained from the OS

       • Sys - this is the total amount of memory requested from the OS

         • It is virtual memory so may include unused memory

   core/obscure: Obscures a string passed in.
       Pass a clear string and rclone will obscure it for the config file:

       • clear - string

       Returns:

       • obscured - string
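
       Eg (the obscured value below is a placeholder):

              $ rclone rc core/obscure clear=mysecret
              {
                  "obscured": "<obscured string>"
              }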

   core/pid: Return PID of current process
       This returns the PID of the current process.  Useful for stopping the rclone process.

   core/quit: Terminates the app.
       (Optional) Pass an exit code to be used for terminating the app:

       • exitCode - int

   core/stats: Returns stats about current transfers.
       This returns all available stats:

              rclone rc core/stats

       If group is not provided then summed up stats for all groups will be returned.

       Parameters

       • group - name of the stats group (string)

       Returns the following values:

              {
                  "bytes": total transferred bytes since the start of the group,
                  "checks": number of files checked,
                  "deletes" : number of files deleted,
                  "elapsedTime": time in floating point seconds since rclone was started,
                  "errors": number of errors,
                  "eta": estimated time in seconds until the group completes,
                  "fatalError": boolean whether there has been at least one fatal error,
                  "lastError": last error string,
                  "renames" : number of files renamed,
                  "retryError": boolean showing whether there has been at least one non-NoRetryError,
                  "speed": average speed in bytes per second since start of the group,
                  "totalBytes": total number of bytes in the group,
                  "totalChecks": total number of checks in the group,
                  "totalTransfers": total number of transfers in the group,
                  "transferTime" : total time spent on running jobs,
                  "transfers": number of transferred files,
                  "transferring": an array of currently active file transfers:
                      [
                          {
                              "bytes": total transferred bytes for this file,
                              "eta": estimated time in seconds until file transfer completion
                              "name": name of the file,
                              "percentage": progress of the file transfer in percent,
                              "speed": average speed over the whole transfer in bytes per second,
                              "speedAvg": current speed in bytes per second as an exponentially weighted moving average,
                              "size": size of the file in bytes
                          }
                      ],
                  "checking": an array of names of currently active file checks
                      []
              }

       Values for “transferring”, “checking” and “lastError” are only assigned if data is available.  The  value
       for “eta” is null if an eta cannot be determined.

   core/stats-delete: Delete stats group.
       This deletes entire stats group.

       Parameters

       • group - name of the stats group (string)

   core/stats-reset: Reset stats.
       This  clears  counters,  errors  and finished transfers for all stats or specific stats group if group is
       provided.

       Parameters

       • group - name of the stats group (string)

   core/transferred: Returns stats about completed transfers.
       This returns stats about completed transfers:

              rclone rc core/transferred

       If group is not provided then completed transfers for all groups will be returned.

       Note only the last 100 completed transfers are returned.

       Parameters

       • group - name of the stats group (string)

       Returns the following values:

              {
                  "transferred":  an array of completed transfers (including failed ones):
                      [
                          {
                              "name": name of the file,
                              "size": size of the file in bytes,
                              "bytes": total transferred bytes for this file,
                              "checked": if the transfer is only checked (skipped, deleted),
                              "timestamp": integer representing millisecond unix epoch,
                              "error": string description of the error (empty if successful),
                              "jobid": id of the job that this transfer belongs to
                          }
                      ]
              }

   core/version: Shows the current version of rclone and the go runtime.
       This shows the current version of rclone and the go runtime:

       • version - rclone version, e.g. “v1.53.0”

       • decomposed - version number as [major, minor, patch]

       • isGit - boolean - true if this was compiled from the git version

       • isBeta - boolean - true if this is a beta version

       • os - OS in use as according to Go

       • arch - cpu architecture in use according to Go

       • goVersion - version of Go runtime in use

       • linking - type of rclone executable (static or dynamic)

       • goTags - space separated build tags or “none”

   debug/set-block-profile-rate: Set runtime.SetBlockProfileRate for blocking profiling.
       SetBlockProfileRate controls the fraction of goroutine blocking events that are reported in the  blocking
       profile.   The  profiler  aims  to  sample  an  average  of one blocking event per rate nanoseconds spent
       blocked.

       To include every blocking event in the profile, pass rate = 1.  To turn off profiling entirely, pass rate
       <= 0.

       After calling this you can use this to see the blocking profile:

              go tool pprof http://localhost:5572/debug/pprof/block

       Parameters:

       • rate - int
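
       E.g. to include every blocking event in the profile, as described above:

              rclone rc debug/set-block-profile-rate rate=1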

   debug/set-mutex-profile-fraction: Set runtime.SetMutexProfileFraction for mutex profiling.
       SetMutexProfileFraction controls the fraction of mutex contention events that are reported in  the  mutex
       profile.  On average 1/rate events are reported.  The previous rate is returned.

       To turn off profiling entirely, pass rate 0.  To just read the current rate, pass rate < 0.  (For n>1 the
       details of sampling may change.)

       Once this is set you can use this to profile the mutex contention:

              go tool pprof http://localhost:5572/debug/pprof/mutex

       Parameters:

       • rate - int

       Results:

       • previousRate - int

   fscache/clear: Clear the Fs cache.
       This  clears  the  fs cache.  This is where remotes created from backends are cached for a short while to
       make repeated rc calls more efficient.

       If you change the parameters of a backend then you may want to call this to clear an existing remote  out
       of the cache before re-creating it.

       Authentication is required for this call.

   fscache/entries: Returns the number of entries in the fs cache.
       This returns the number of entries in the fs cache.

       Returns - entries - number of items in the cache

       Authentication is required for this call.

   job/list: Lists the IDs of the running jobs
       Parameters: None.

       Results:

       • jobids - array of integer job ids.

   job/status: Reads the status of the job ID
       Parameters:

       • jobid - id of the job (integer).

       Results:

       • duration - time in seconds that the job ran for

       • endTime - time the job finished (e.g. “2018-10-26T18:50:20.528746884+01:00”)

       • error - error from the job or empty string for no error

       • finished - boolean whether the job has finished or not

       • id - as passed in above

       • startTime - time the job started (e.g. “2018-10-26T18:50:20.528336039+01:00”)

       • success - boolean - true for success false otherwise

       • output - output of the job as would have been returned if called synchronously

       • progress - output of the progress related to the underlying job

   job/stop: Stop the running job
       Parameters:

       • jobid - id of the job (integer).

   job/stopgroup: Stop all running jobs in a group
       Parameters:

       • group - name of the group (string).

   mount/listmounts: Show current mount points
       This shows currently mounted points, which can be used for performing an unmount.

       This takes no parameters and returns

       • mountPoints: list of current mount points

       Eg

              rclone rc mount/listmounts

       Authentication is required for this call.

   mount/mount: Create a new mount point
       rclone  allows Linux, FreeBSD, macOS and Windows to mount any of Rclone’s cloud storage systems as a file
       system with FUSE.

       If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2

       This takes the following parameters:

       • fs - a remote path to be mounted (required)

       • mountPoint: valid path on the local machine (required)

       • mountType: one of the values (mount, cmount, mount2) specifies the mount implementation to use

       • mountOpt: a JSON object with Mount options in.

       • vfsOpt: a JSON object with VFS options in.

       Example:

              rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
              rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
              rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'

       The vfsOpt are as described in options/get and can be seen in the “vfs” section when running the
       command below, and the mountOpt can be seen in the “mount” section:

              rclone rc options/get

       Authentication is required for this call.

   mount/types: Show all possible mount types
       This shows all possible mount types and returns them as a list.

       This takes no parameters and returns

       • mountTypes: list of mount types

       The  mount  types  are  strings  like “mount”, “mount2”, “cmount” and can be passed to mount/mount as the
       mountType parameter.

       Eg

              rclone rc mount/types

       Authentication is required for this call.

   mount/unmount: Unmount selected active mount
       rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone’s cloud storage systems as a  file
       system with FUSE.

       This takes the following parameters:

       • mountPoint: valid path on the local machine where the mount was created (required)

       Example:

              rclone rc mount/unmount mountPoint=/home/<user>/mountPoint

       Authentication is required for this call.

   mount/unmountall: Unmount all active mounts
       rclone  allows Linux, FreeBSD, macOS and Windows to mount any of Rclone’s cloud storage systems as a file
       system with FUSE.

       This takes no parameters and returns an error if unmount does not succeed.

       Eg

              rclone rc mount/unmountall

       Authentication is required for this call.

   operations/about: Return the space used on the remote
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       The result is as returned from rclone about --json

       See the about command for more information on the above.

       Authentication is required for this call.

   operations/cleanup: Remove trashed files in the remote or path
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       See the cleanup command for more information on the above.

       Authentication is required for this call.

   operations/copyfile: Copy a file from source remote to destination remote
       This takes the following parameters:

       • srcFs - a remote name string e.g. “drive:” for the source

       • srcRemote - a path within that remote e.g. “file.txt” for the source

       • dstFs - a remote name string e.g. “drive2:” for the destination

       • dstRemote - a path within that remote e.g. “file2.txt” for the destination
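
       Example (putting the parameters above together):

              rclone rc operations/copyfile srcFs=drive: srcRemote=file.txt \
                  dstFs=drive2: dstRemote=file2.txt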

       Authentication is required for this call.

   operations/copyurl: Copy the URL to the object
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       • url - string, URL to read from

       • autoFilename - boolean, set to true to retrieve destination file name from url

       See the copyurl command for more information on the above.

       Authentication is required for this call.

   operations/delete: Remove files in the path
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       See the delete command for more information on the above.

       Authentication is required for this call.

   operations/deletefile: Remove the single file pointed to
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       See the deletefile command for more information on the above.

       Authentication is required for this call.

   operations/fsinfo: Return information about the remote
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       This returns info about the remote passed in:

              {
                      // optional features and whether they are available or not
                      "Features": {
                              "About": true,
                              "BucketBased": false,
                              "BucketBasedRootOK": false,
                              "CanHaveEmptyDirectories": true,
                              "CaseInsensitive": false,
                              "ChangeNotify": false,
                              "CleanUp": false,
                              "Command": true,
                              "Copy": false,
                              "DirCacheFlush": false,
                              "DirMove": true,
                              "Disconnect": false,
                              "DuplicateFiles": false,
                              "GetTier": false,
                              "IsLocal": true,
                              "ListR": false,
                              "MergeDirs": false,
                              "MetadataInfo": true,
                              "Move": true,
                              "OpenWriterAt": true,
                              "PublicLink": false,
                              "Purge": true,
                              "PutStream": true,
                              "PutUnchecked": false,
                              "ReadMetadata": true,
                              "ReadMimeType": false,
                              "ServerSideAcrossConfigs": false,
                              "SetTier": false,
                              "SetWrapper": false,
                              "Shutdown": false,
                              "SlowHash": true,
                              "SlowModTime": false,
                              "UnWrap": false,
                              "UserInfo": false,
                              "UserMetadata": true,
                              "WrapFs": false,
                              "WriteMetadata": true,
                              "WriteMimeType": false
                      },
                      // Names of hashes available
                      "Hashes": [
                              "md5",
                              "sha1",
                              "whirlpool",
                              "crc32",
                              "sha256",
                              "dropbox",
                              "mailru",
                              "quickxor"
                      ],
                      "Name": "local",        // Name as created
                      "Precision": 1,         // Precision of timestamps in ns
                      "Root": "/",            // Path as created
                      "String": "Local file system at /", // how the remote will appear in logs
                      // Information about the system metadata for this backend
                      "MetadataInfo": {
                              "System": {
                                      "atime": {
                                              "Help": "Time of last access",
                                              "Type": "RFC 3339",
                                              "Example": "2006-01-02T15:04:05.999999999Z07:00"
                                      },
                                      "btime": {
                                              "Help": "Time of file birth (creation)",
                                              "Type": "RFC 3339",
                                              "Example": "2006-01-02T15:04:05.999999999Z07:00"
                                      },
                                      "gid": {
                                              "Help": "Group ID of owner",
                                              "Type": "decimal number",
                                              "Example": "500"
                                      },
                                      "mode": {
                                              "Help": "File type and mode",
                                              "Type": "octal, unix style",
                                              "Example": "0100664"
                                      },
                                      "mtime": {
                                              "Help": "Time of last modification",
                                              "Type": "RFC 3339",
                                              "Example": "2006-01-02T15:04:05.999999999Z07:00"
                                      },
                                      "rdev": {
                                              "Help": "Device ID (if special file)",
                                              "Type": "hexadecimal",
                                              "Example": "1abc"
                                      },
                                      "uid": {
                                              "Help": "User ID of owner",
                                              "Type": "decimal number",
                                              "Example": "500"
                                      }
                              },
                              "Help": "Textual help string\n"
                      }
              }

       This command does not have a command line equivalent so use this instead:

              rclone rc --loopback operations/fsinfo fs=remote:

   operations/list: List the given remote and path in JSON format
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       • opt - a dictionary of options to control the listing (optional)

         • recurse - If set recurse directories

         • noModTime - If set don’t read the modification time (can speed things up)

         • showEncrypted - If set show decrypted names

         • showOrigIDs - If set show the IDs for each item if known

         • showHash - If set return a dictionary of hashes

         • noMimeType - If set don’t show mime types

         • dirsOnly - If set only show directories

         • filesOnly - If set only show files

         • metadata - If set return metadata of objects also

         • hashTypes - array of strings of hash types to show if showHash set

       Returns:

       • list

         • This is an array of objects as described in the lsjson command

       See the lsjson command for more information on the above and examples.

       Authentication is required for this call.

   operations/mkdir: Make a destination directory or container
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       See the mkdir command for more information on the above.

       Authentication is required for this call.

   operations/movefile: Move a file from source remote to destination remote
       This takes the following parameters:

       • srcFs - a remote name string e.g. “drive:” for the source

       • srcRemote - a path within that remote e.g. “file.txt” for the source

       • dstFs - a remote name string e.g. “drive2:” for the destination

       • dstRemote - a path within that remote e.g. “file2.txt” for the destination

       Authentication is required for this call.

   operations/publiclink: Create or retrieve a public link to the given file or folder.
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       • unlink - boolean - if set removes the link rather than adding it (optional)

       • expire - string - the expiry time of the link e.g. “1d” (optional)

       Returns:

       • url - URL of the resource

       See the link command for more information on the above.

       Authentication is required for this call.

   operations/purge: Remove a directory or container and all of its contents
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       See the purge command for more information on the above.

       Authentication is required for this call.

   operations/rmdir: Remove an empty directory or container
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       See the rmdir command for more information on the above.

       Authentication is required for this call.

   operations/rmdirs: Remove all the empty directories in the path
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       • leaveRoot - boolean, set to true not to delete the root

       See the rmdirs command for more information on the above.

       Authentication is required for this call.

   operations/size: Count the number of bytes and files in remote
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:path/to/dir”

       Returns:

       • count - number of files

       • bytes - number of bytes in those files

       See the size command for more information on the above.

       Authentication is required for this call.

   operations/stat: Give information about the supplied file or directory
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       • opt - a dictionary of options to control the listing (optional)

         • see operations/list for the options

       The result is

       • item - an object as described in the lsjson command.  Will be null if not found.

       Note that if you are only interested in files then it is much more efficient to set the filesOnly flag in
       the options.

       See the lsjson command for more information on the above and examples.

       Authentication is required for this call.

   operations/uploadfile: Upload file using multipart/form-data
       This takes the following parameters:

       • fs - a remote name string e.g. “drive:”

       • remote - a path within that remote e.g. “dir”

       • each part in body represents a file to be uploaded

       See the uploadfile command for more information on the above.

       Authentication is required for this call.

   options/blocks: List all the option blocks
       Returns:

       • options - a list of the options block names

   options/get: Get all the global options
       Returns an object where keys are option block names and values are an object containing the current
       option values.

       Note that these are the global options which are unaffected by use of the _config and _filter parameters.
       If  you  wish  to  read  the  parameters  set  in  _config  then  use  options/config and for _filter use
       options/filter.

       This shows the internal names of the options within rclone, which should map to the external options
       fairly easily, with a few exceptions.

   options/local: Get the currently active config for this call
       Returns  an  object  with the keys “config” and “filter”.  The “config” key contains the local config and
       the “filter” key contains the local filters.

       Note that these are the local options specific to this rc call.  If _config was not  supplied  then  they
       will be the global options.  Likewise with “_filter”.

       This call is mostly useful for seeing if _config and _filter passing is working.

       This shows the internal names of the options within rclone, which should map to the external options
       fairly easily, with a few exceptions.

   options/set: Set an option
       Parameters:

       • option block name containing an object with

         • key: value

       Repeated as often as required.

       Only supply the options you wish to change.  If an option is unknown it will be  silently  ignored.   Not
       all options will have an effect when changed like this.

       For example:

       This sets DEBUG level logs (-vv) (these can be set by number or string)

              rclone rc options/set --json '{"main": {"LogLevel": "DEBUG"}}'
              rclone rc options/set --json '{"main": {"LogLevel": 8}}'

       And this sets INFO level logs (-v)

              rclone rc options/set --json '{"main": {"LogLevel": "INFO"}}'

       And this sets NOTICE level logs (normal without -v)

              rclone rc options/set --json '{"main": {"LogLevel": "NOTICE"}}'

   pluginsctl/addPlugin: Add a plugin using url
       Used for adding a plugin to the webgui.

       This takes the following parameters:

       • url - http url of the github repo where the plugin is hosted
         (http://github.com/rclone/rclone-webui-react).

       Example:

       rclone rc pluginsctl/addPlugin url=http://github.com/rclone/rclone-webui-react

       Authentication is required for this call.

   pluginsctl/getPluginsForType: Get plugins with type criteria
       This shows all possible plugins by a mime type.

       This takes the following parameters:

       • type - supported mime type by a loaded plugin e.g. (video/mp4, audio/mp3).

       • pluginType - filter plugins based on their type e.g. (DASHBOARD, FILE_HANDLER, TERMINAL).

       Returns:

       • loadedPlugins - list of current production plugins.

       • testPlugins - list of temporarily loaded development plugins, usually running on a different server.

       Example:

       rclone rc pluginsctl/getPluginsForType type=video/mp4

       Authentication is required for this call.

   pluginsctl/listPlugins: Get the list of currently loaded plugins
       This allows you to get the currently enabled plugins and their details.

       This takes no parameters and returns:

       • loadedPlugins - list of current production plugins.

       • testPlugins - list of temporarily loaded development plugins, usually running on a different server.

       E.g.

       rclone rc pluginsctl/listPlugins

       Authentication is required for this call.

   pluginsctl/listTestPlugins: Show currently loaded test plugins
       Allows listing of test plugins with the rclone.test set to true in package.json of the plugin.

       This takes no parameters and returns:

       • loadedTestPlugins - list of currently available test plugins.

       E.g.

              rclone rc pluginsctl/listTestPlugins

       Authentication is required for this call.

   pluginsctl/removePlugin: Remove a loaded plugin
       This allows you to remove a plugin using its name.

       This takes parameters:

       • name - name of the plugin in the format author/plugin_name.

       E.g.

       rclone rc pluginsctl/removePlugin name=rclone/video-plugin

       Authentication is required for this call.

   pluginsctl/removeTestPlugin: Remove a test plugin
       This allows you to remove a plugin using its name.

       This takes the following parameters:

       • name - name of the plugin in the format author/plugin_name.

       Example:

              rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react

       Authentication is required for this call.

   rc/error: This returns an error
       This returns an error with the input as part of its error string.  Useful for testing error handling.

   rc/list: List all the registered remote control commands
       This lists all the registered remote control commands as a JSON map in the commands response.

   rc/noop: Echo the input to the output parameters
       This echoes the input parameters to the output parameters for testing purposes.  It can be used to  check
       that rclone is still alive and to check that parameter passing is working properly.

   rc/noopauth: Echo the input to the output parameters requiring auth
       This  echoes the input parameters to the output parameters for testing purposes.  It can be used to check
       that rclone is still alive and to check that parameter passing is working properly.

       Authentication is required for this call.

   sync/bisync: Perform bidirectional synchronization between two paths.
       This takes the following parameters:

       • path1 - a remote directory string e.g. drive:path1

       • path2 - a remote directory string e.g. drive:path2

       • dryRun - dry-run mode

       • resync - performs the resync run

       • checkAccess - abort if RCLONE_TEST files are not found on both filesystems

       • checkFilename - file name for checkAccess (default: RCLONE_TEST)

       • maxDelete - abort sync if percentage of deleted files is above this threshold (default: 50)

       • force - bypass the maxDelete safety check and run the sync

       • checkSync - true by default, false disables comparison of final listings, “only” will skip the  sync
         and only compare listings from the last run

       • removeEmptyDirs - remove empty directories at the final cleanup step

       • filtersFile - read filtering patterns from a file

       • workdir - server directory for history files (default: ~/.cache/rclone/bisync)

       • noCleanup - retain working files

       See bisync command help and full bisync description for more information.
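
       For example, a dry run might be invoked like this (the remote paths are illustrative):

               rclone rc sync/bisync path1=drive:path1 path2=drive:path2 dryRun=true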

       Authentication is required for this call.

   sync/copy: Copy a directory from source remote to destination remote
       This takes the following parameters:

       • srcFs - a remote name string e.g. “drive:src” for the source

       • dstFs - a remote name string e.g. “drive:dst” for the destination

       • createEmptySrcDirs - create empty src directories on destination if set

       See the copy command for more information on the above.
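
       For example (the remote names are illustrative):

               rclone rc sync/copy srcFs=drive:src dstFs=drive:dst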

       Authentication is required for this call.

   sync/move: Move a directory from source remote to destination remote
       This takes the following parameters:

       • srcFs - a remote name string e.g. “drive:src” for the source

       • dstFs - a remote name string e.g. “drive:dst” for the destination

       • createEmptySrcDirs - create empty src directories on destination if set

       • deleteEmptySrcDirs - delete empty src directories if set

       See the move command for more information on the above.

       Authentication is required for this call.

   sync/sync: Sync a directory from source remote to destination remote
       This takes the following parameters:

       • srcFs - a remote name string e.g. “drive:src” for the source

       • dstFs - a remote name string e.g. “drive:dst” for the destination

       • createEmptySrcDirs - create empty src directories on destination if set

       See the sync command for more information on the above.
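
       For example, this can be combined with the general _async parameter to run the sync in the background
       and return a job id (the remote names are illustrative):

               rclone rc sync/sync srcFs=drive:src dstFs=drive:dst _async=true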

       Authentication is required for this call.

   vfs/forget: Forget files or directories in the directory cache.
       This forgets the paths in the directory cache causing them to be re-read from the remote when needed.

       If no paths are passed in then it will forget all the paths in the directory cache.

              rclone rc vfs/forget

       Otherwise  pass  files  or  dirs  in as file=path or dir=path.  Any parameter key starting with file will
       forget that file and any starting with dir will forget that dir, e.g.

              rclone rc vfs/forget file=hello file2=goodbye dir=home/junk

       This command takes an “fs” parameter.  If this parameter is not supplied and if there is only one VFS  in
       use  then  that  VFS  will be used.  If there is more than one VFS in use then the “fs” parameter must be
       supplied.

   vfs/list: List active VFSes.
       This lists the active VFSes.

       It returns a list under the key “vfses” where the values are the VFS names that could be  passed  to  the
       other VFS commands in the “fs” parameter.
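
       E.g.

               rclone rc vfs/list

       With a single VFS in use this might return something like (the VFS name is illustrative):

               {
                   "vfses": [
                       "/mnt/a"
                   ]
               }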

   vfs/poll-interval: Get the status or update the value of the poll-interval option.
       Without any parameter given this returns the current status of the poll-interval setting.

       When  the interval=duration parameter is set, the poll-interval value is updated and the polling function
       is notified.  Setting interval=0 disables poll-interval.

              rclone rc vfs/poll-interval interval=5m

       The timeout=duration parameter can be used to specify a time to wait for the current poll function to
       apply the new value.  If timeout is less than or equal to 0, which is the default, rclone waits
       indefinitely.

       The new poll-interval value will only be active when the timeout is not reached.

       If  poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling
       function, depending on the used remote.

       This command takes an “fs” parameter.  If this parameter is not supplied and if there is only one VFS  in
       use  then  that  VFS  will be used.  If there is more than one VFS in use then the “fs” parameter must be
       supplied.

   vfs/refresh: Refresh the directory cache.
       This reads the directories for the specified paths and freshens the directory cache.

       If no paths are passed in then it will refresh the root directory.

              rclone rc vfs/refresh

       Otherwise pass directories in as dir=path.  Any  parameter  key  starting  with  dir  will  refresh  that
       directory, e.g.

              rclone rc vfs/refresh dir=home/junk dir2=data/misc

       If the parameter recursive=true is given the whole directory tree will get refreshed.  This refresh
       will use --fast-list if enabled.

       This command takes an “fs” parameter.  If this parameter is not supplied and if there is only one VFS  in
       use  then  that  VFS  will be used.  If there is more than one VFS in use then the “fs” parameter must be
       supplied.

   vfs/stats: Stats for a VFS.
       This returns stats for the selected VFS.

              {
                  // Status of the disk cache - only present if --vfs-cache-mode > off
                  "diskCache": {
                      "bytesUsed": 0,
                      "erroredFiles": 0,
                      "files": 0,
                      "hashType": 1,
                      "outOfSpace": false,
                      "path": "/home/user/.cache/rclone/vfs/local/mnt/a",
                      "pathMeta": "/home/user/.cache/rclone/vfsMeta/local/mnt/a",
                      "uploadsInProgress": 0,
                      "uploadsQueued": 0
                  },
                  "fs": "/mnt/a",
                  "inUse": 1,
                  // Status of the in memory metadata cache
                  "metadataCache": {
                      "dirs": 1,
                      "files": 0
                  },
                  // Options as returned by options/get
                  "opt": {
                      "CacheMaxAge": 3600000000000,
                      // ...
                      "WriteWait": 1000000000
                  }
              }

       This command takes an “fs” parameter.  If this parameter is not supplied and if there is only one VFS  in
       use  then  that  VFS  will be used.  If there is more than one VFS in use then the “fs” parameter must be
       supplied.
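
       E.g. (the VFS name here is illustrative, taken from the example output above):

               rclone rc vfs/stats fs=/mnt/a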

   Accessing the remote control via HTTP
       Rclone implements a simple HTTP based protocol.

       Each endpoint takes a JSON object and returns a JSON object or an error.  The JSON objects are
       essentially a map of string names to values.

       All calls must be made using POST.

       The  input  objects  can be supplied using URL parameters, POST parameters or by supplying “Content-Type:
       application/json” and a JSON blob in the body.  There are examples of these below using curl.

       The response will be a JSON blob in the body of  the  response.   This  is  formatted  to  be  reasonably
       human-readable.

   Error returns
       If  an  error occurs then there will be an HTTP error status (e.g. 500) and the body of the response will
       contain a JSON encoded error object, e.g.

              {
                  "error": "Expecting string value for key \"remote\" (was float64)",
                  "input": {
                      "fs": "/tmp",
                      "remote": 3
                  },
                  "status": 400
                  "path": "operations/rmdir",
              }

       The keys in the error response are:

       • error - error string

       • input - the input parameters to the call

       • status - the HTTP status code

       • path - the path of the call

   CORS
       The server implements basic CORS support and allows all origins for that.  The response to a preflight
       OPTIONS request will echo the requested “Access-Control-Request-Headers” back.

   Using POST with URL parameters only
              curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'

       Response

              {
                  "potato": "1",
                  "sausage": "2"
              }

       Here is what an error response looks like:

              curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'

              {
                  "error": "arbitrary error on input map[potato:1 sausage:2]",
                  "input": {
                      "potato": "1",
                      "sausage": "2"
                  }
              }

       Note that curl doesn’t return errors to the shell unless you use the -f option.

              $ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
              curl: (22) The requested URL returned error: 400 Bad Request
              $ echo $?
              22

   Using POST with a form
              curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop

       Response

              {
                  "potato": "1",
                  "sausage": "2"
              }

       Note that you can combine these with URL parameters too with the POST parameters taking precedence.

              curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"

       Response

              {
                  "potato": "1",
                  "rutabaga": "3",
                  "sausage": "4"
              }

   Using POST with a JSON blob
              curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop

       Response

               {
                   "potato": 2,
                   "sausage": 1
               }

       This can be combined with URL parameters too if required.  The JSON blob takes precedence.

              curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'

              {
                  "potato": 2,
                  "rutabaga": "3",
                  "sausage": 1
              }

   Debugging rclone with pprof
       If you use the --rc flag this will also enable the use of the go profiling tools on the same port.

       To use these, first install go.

   Debugging memory use
       To profile rclone’s memory use you can run:

              go tool pprof -web http://localhost:5572/debug/pprof/heap

       This should open a page in your browser showing what is using what memory.

       You can also use the -text flag to produce a textual summary

              $ go tool pprof -text http://localhost:5572/debug/pprof/heap
              Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
                    flat  flat%   sum%        cum   cum%
               1024.03kB 66.62% 66.62%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
                   513kB 33.38%   100%      513kB 33.38%  net/http.newBufioWriterSize
                       0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/all.init
                       0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve.init
                       0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve/restic.init
                       0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
                       0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
                       0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
                       0     0%   100%  1024.03kB 66.62%  main.init
                       0     0%   100%      513kB 33.38%  net/http.(*conn).readRequest
                       0     0%   100%      513kB 33.38%  net/http.(*conn).serve
                       0     0%   100%  1024.03kB 66.62%  runtime.main

   Debugging go routine leaks
       Memory leaks are most often caused by go routine leaks  keeping  memory  alive  which  should  have  been
       garbage collected.

       See all active go routines using

              curl http://localhost:5572/debug/pprof/goroutine?debug=1

       Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your browser.

   Other profiles to look at
       You can see a summary of profiles available at http://localhost:5572/debug/pprof/

       Here is how to use some of them:

       • Memory: go tool pprof http://localhost:5572/debug/pprof/heap

       • Go routines: curl http://localhost:5572/debug/pprof/goroutine?debug=1

       • 30-second CPU profile: go tool pprof http://localhost:5572/debug/pprof/profile

       • 5-second execution trace: wget http://localhost:5572/debug/pprof/trace?seconds=5

       • Goroutine blocking profile

         • Enable first with: rclone rc debug/set-block-profile-rate rate=1 (docs)

         • go tool pprof http://localhost:5572/debug/pprof/block

       • Contended mutexes:

         • Enable first with: rclone rc debug/set-mutex-profile-fraction rate=1 (docs)

         • go tool pprof http://localhost:5572/debug/pprof/mutex

       See  the net/http/pprof docs for more info on how to use the profiling and for a general overview see the
       Go team’s blog post on profiling go programs.

       The profiling hook is zero overhead unless it is used.

Overview of cloud storage systems

       Each cloud storage system is slightly different.  Rclone attempts to provide a unified interface to them,
       but some underlying differences show through.

   Features
       Here is an overview of the major features of each cloud storage system.

       Name                     Hash      ModTime    Case           Duplicate     MIME    Metadata
                                                     Insensitive    Files         Type
       ────────────────────────────────────────────────────────────────────────────────────────────
       1Fichier              Whirlpool       -           No            Yes         R          -
       Akamai Netstorage    MD5, SHA256     R/W          No            No          R          -
       Amazon Drive             MD5          -           Yes           No          R          -
       Amazon S3  (or  S3       MD5         R/W          No            No         R/W        RWU
       compatible)
       Backblaze B2             SHA1        R/W          No            No         R/W         -
       Box                      SHA1        R/W          Yes           No          -          -
       Citrix ShareFile         MD5         R/W          Yes           No          -          -
       Dropbox                DBHASH ¹       R           Yes           No          -          -
       Enterprise    File        -          R/W          Yes           No         R/W         -
       Fabric
       FTP                       -         R/W ¹⁰        No            No          -          -
       Google       Cloud       MD5         R/W          No            No         R/W         -
       Storage
       Google Drive             MD5         R/W          No            Yes        R/W         -
       Google Photos             -           -           No            Yes         R          -
       HDFS                      -          R/W          No            No          -          -
       HiDrive               HiDrive ¹²     R/W          No            No          -          -
       HTTP                      -           R           No            No          R          -
       Internet Archive     MD5,  SHA1,    R/W ¹¹        No            No          -         RWU
                            CRC32
       Jottacloud               MD5         R/W          Yes           No          R          -
       Koofr                    MD5          -           Yes           No          -          -
       Mail.ru Cloud          Mailru ⁶      R/W          Yes           No          -          -
       Mega                      -           -           No            Yes         -          -
       Memory                   MD5         R/W          No            No          -          -
       Microsoft    Azure       MD5         R/W          No            No         R/W         -
       Blob Storage
       Microsoft OneDrive      SHA1 ⁵       R/W          Yes           No          R          -
       OpenDrive                MD5         R/W          Yes        Partial ⁸      -          -
       OpenStack Swift          MD5         R/W          No            No         R/W         -
       Oracle      Object       MD5         R/W          No            No         R/W         -
       Storage
       pCloud               MD5, SHA1 ⁷      R           No            No          W          -
       premiumize.me             -           -           Yes           No          R          -
       put.io                  CRC-32       R/W          No            Yes         R          -
       QingStor                 MD5         - ⁹          No            No         R/W         -
       Seafile                   -           -           No            No          -          -
       SFTP                 MD5, SHA1 ²     R/W        Depends         No          -          -
       Sia                       -           -           No            No          -          -
       SMB                       -           -           Yes           No          -          -
       SugarSync                 -           -           No            No          -          -
       Storj                     -           R           No            No          -          -
       Uptobox                   -           -           No            Yes         -          -
       WebDAV               MD5, SHA1 ³     R ⁴        Depends         No          -          -
       Yandex Disk              MD5         R/W          No            No          R          -
       Zoho WorkDrive            -           -           No            No          -          -
       The          local       All         R/W        Depends         No          -         RWU
       filesystem

   Notes
       ¹ Dropbox supports its own custom hash.  This is an SHA256 sum of all the 4 MiB block SHA256s.

       ² SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in
       the remote’s PATH.

       ³ WebDAV supports hashes when used with Owncloud and Nextcloud only.

       ⁴ WebDAV supports modtimes when used with Owncloud and Nextcloud only.

       ⁵ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server
       support Microsoft’s own QuickXorHash
       (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).

       ⁶ Mail.ru uses its own modified SHA1 hash.

       ⁷ pCloud only supports SHA1 (not MD5) in its EU region.

       ⁸  Opendrive does not support creation of duplicate files using their web client interface or other stock
       clients, but the underlying storage platform has been determined to allow  duplicate  files,  and  it  is
       possible to create them with rclone.  It may be that this is a mistake or an unsupported feature.

       ⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.

       ¹⁰ FTP supports modtimes for the major FTP servers, and also others if they advertise the required
       protocol extensions.  See the FTP backend documentation for more details.

       ¹¹ Internet Archive requires option wait_archive to be set to a non-zero value for full modtime support.

       ¹² HiDrive supports its own custom hash.  It combines SHA1 sums for each 4 KiB block hierarchically to  a
       single top-level sum.

   Hash
       Cloud storage systems support various hash types for objects.  The hashes are used when transferring
       data as an integrity check and can be specifically used with the --checksum flag in syncs and in the
       check command.

       To verify checksums when transferring between cloud storage systems, both systems must support a common
       hash type.
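
       For example, to compare files by hash rather than by size and modification time when syncing (the
       remote names are illustrative; both sides must share a hash type):

               rclone sync --checksum source:path dest:path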

   ModTime
       Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is
       not something that is appropriate to use for syncing.  E.g. some backends will only write a timestamp
       that represents the time of the upload.  To be relevant for syncing it should be able to store the
       modification time of the source object.  If this is not the case, rclone will only check the file size
       by default, though it can be configured to check the file hash instead (with the --checksum flag).
       Ideally it should also be possible to change the timestamp of an existing file without having to
       re-upload it.

       For storage systems with a - in the ModTime column, the modification time read on objects is not the
       modification time of the file when it was uploaded.  It is most likely the time the file was uploaded,
       or possibly something else (like the time the picture was taken in Google Photos).

       For storage systems with an R (for read-only) in the ModTime column, the system keeps modification
       times on objects, and updates them when uploading objects, but it does not support changing only the
       modification time (the SetModTime operation) without re-uploading, possibly not even without deleting
       the existing object first.  Some operations in rclone, such as the copy and sync commands, will
       automatically check for SetModTime support and re-upload if necessary to keep the modification times in
       sync.  Other commands will not work without SetModTime support, e.g. the touch command on an existing
       file will fail, and changes to the modification time only of a file in a mount will be silently
       ignored.

       Storage systems with R/W (for read/write) in the ModTime column also support modtime-only operations.

   Case Insensitive
       If a cloud storage system is case sensitive then it is possible to have two files which differ only in
       case, e.g. file.txt and FILE.txt.  If a cloud storage system is case insensitive then that isn’t
       possible.

       This can cause problems when syncing between a case insensitive system and a case sensitive system.   The
       symptom of this is that no matter how many times you run the sync it never completes fully.

       The local filesystem and SFTP may or may not be case sensitive depending on OS.

       • Windows - usually case insensitive, though case is preserved

       • OSX - usually case insensitive, though it is possible to format case sensitive

       • Linux  -  usually  case  sensitive, but there are case insensitive file systems (e.g. FAT formatted USB
         keys)

       Most of the time this doesn’t cause any problems as people tend to avoid files whose name differs only by
       case even on case sensitive systems.

   Duplicate files
       If a cloud storage system allows duplicate files then it can have two objects with the same name.

       This confuses rclone greatly when syncing - use the rclone dedupe command to rename or remove duplicates.
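
       For example, to find and interactively fix duplicates in a directory (the remote name is illustrative):

               rclone dedupe drive:dir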

   Restricted filenames
       Some cloud storage systems might have restrictions on the characters that are usable in file or directory
       names.  When rclone detects such a  name  during  a  file  upload,  it  will  transparently  replace  the
       restricted  characters  with  similar  looking  Unicode  characters.   To  handle  the  different sets of
       restricted characters for different backends, rclone uses something it calls encoding.

       This process is designed to avoid ambiguous file names as much as possible and to allow files to be
       moved between many cloud storage systems transparently.

       The  name  shown  by  rclone to the user or during log output will only contain a minimal set of replaced
       characters to ensure correct formatting and not necessarily the actual name used on the cloud storage.

       This transformation is reversed when downloading a file or parsing rclone arguments.  For example, when
       uploading a file named my file?.txt to OneDrive, it will be displayed as my file?.txt on the console,
       but stored as my file？.txt on OneDrive (the ? gets replaced by the similar looking ？ character, the
       so-called “fullwidth question mark”).  The reverse transformation allows reading a file unusual/name.txt
       from Google Drive, by passing the name unusual／name.txt on the command line (the / needs to be
       replaced by the similar looking ／ character).

   Caveats
       The filename encoding system works well in most cases, at least where file names are written in English
       or similar languages.  You might not even notice it: it just works.  In some cases it may lead to
       issues, though, e.g. when file names are written in Chinese or Japanese, where the Unicode fullwidth
       variants of the punctuation marks are always used.

       On Windows, the characters :, * and ? are examples of restricted characters.  If these are used in
       filenames on a remote that supports it, rclone will transparently convert them to their fullwidth
       Unicode variants ＊, ？ and ： when downloading to Windows, and back again when uploading.  This way
       files with names that are not allowed on Windows can still be stored.

       However, if you have files on your Windows system originally with these same Unicode characters in
       their names, they will be included in the same conversion process.  E.g. if you create a file in your
       Windows filesystem with name Test：1.jpg, where ： is the Unicode fullwidth colon symbol, and use
       rclone to upload it to Google Drive, which supports the regular (halfwidth) colon :, rclone will
       replace the fullwidth ： with the halfwidth : and store the file as Test:1.jpg on Google Drive.  Since
       both Windows and Google Drive allow the name Test：1.jpg, it would probably be better if rclone just
       kept the name as is in this case.

       In the opposite situation: if you have a file named Test:1.jpg in your Google Drive, e.g. uploaded from
       a Linux system where : is valid in file names, and you later use rclone to copy this file to your
       Windows computer, you will notice that on your local disk it gets renamed to Test：1.jpg.  The original
       filename is not legal on Windows, due to the :, and rclone therefore renames it to make the copy
       possible.  That is all good.  However, this can also lead to an issue: if you already had a different
       file named Test：1.jpg on Windows, and then use rclone to copy either way, rclone will treat the file
       originally named Test:1.jpg on Google Drive and the file originally named Test：1.jpg on Windows as the
       same file, and replace the contents from one with the other.

       It’s virtually impossible to handle all cases like these correctly in all situations, but by
       customizing the encoding option, changing the set of characters that rclone should convert, you should
       be able to create a configuration that works well for your specific situation.  See also the example
       below.

       (Windows was used as an example of a file system with many restricted characters, and Google Drive as a
       storage system with few.)

   Default restricted characters
       The table below shows the characters that are replaced by default.

       When a replacement character is found in a filename, this character will be escaped with the ‛
       character to avoid ambiguous file names.  (e.g. a file named ␀.txt would be shown as ‛␀.txt)

       Each cloud storage backend can use a different  set  of  characters,  which  will  be  specified  in  the
       documentation for each backend.

       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
       SOH         0x01         ␁
       STX         0x02         ␂
       ETX         0x03         ␃
       EOT         0x04         ␄
       ENQ         0x05         ␅
       ACK         0x06         ␆
       BEL         0x07         ␇
       BS          0x08         ␈
       HT          0x09         ␉
       LF          0x0A         ␊
       VT          0x0B         ␋
       FF          0x0C         ␌
       CR          0x0D         ␍
       SO          0x0E         ␎
       SI          0x0F         ␏
       DLE         0x10         ␐
       DC1         0x11         ␑
       DC2         0x12         ␒
       DC3         0x13         ␓
       DC4         0x14         ␔
       NAK         0x15         ␕
       SYN         0x16         ␖
       ETB         0x17         ␗
       CAN         0x18         ␘
       EM          0x19         ␙
       SUB         0x1A         ␚
       ESC         0x1B         ␛
       FS          0x1C         ␜
       GS          0x1D         ␝
       RS          0x1E         ␞
       US          0x1F         ␟
        /           0x2F        ／
       DEL         0x7F         ␡

       The  default  encoding  will also encode these file names as they are problematic with many cloud storage
       systems.

       File name   Replacement
       ────────────────────────
        .               ．
        ..              ．．

   Invalid UTF-8 bytes
       Some backends only support a sequence of well-formed UTF-8 bytes as file or directory names.

       In this case all invalid UTF-8 bytes will be replaced with a quoted representation of the byte  value  to
       allow uploading a file to such a backend.  For example, the invalid byte 0xFE will be encoded as ‛FE.

       A common source of invalid UTF-8 bytes are local filesystems that store names in a different encoding
       than UTF-8 or UTF-16, like latin1.  See the local filenames section for details.

   Encoding option
       Most backends have an encoding option, specified as a flag --backend-encoding where backend is  the  name
       of  the  backend,  or as a config parameter encoding (you’ll need to select the Advanced config in rclone
       config to see it).

       This will have a default value which encodes and decodes characters in such a way as to preserve the
       maximum number of characters (see above).

       However, this can be incorrect in some scenarios, for example if you have a Windows file system with
       Unicode fullwidth characters ＊, ？ or ： that you want to remain as those characters on the remote
       rather than being translated to regular (halfwidth) *, ? and :.

       The  --backend-encoding  flags  allow  you  to change that.  You can disable the encoding completely with
       --backend-encoding None or set encoding = None in the config file.

       Encoding takes a comma separated list of encodings.  You can see the  list  of  all  possible  values  by
       passing  an  invalid  value  to  this  flag, e.g. --local-encoding "help".  The command rclone help flags
       encoding will show you the defaults for the backends.

       Encoding                 Characters                 Encoded as
       ─────────────────────────────────────────────────────────────────────────────────────
        Asterisk                 *                          ＊
        BackQuote                `                          ｀
        BackSlash                \                          ＼
        Colon                    :                          ：
        CrLf                     CR 0x0D, LF 0x0A           ␍, ␊
        Ctl                      All  control  characters   ␀␁␂␃␄␅␆␇␈␉␊␋␌␍␎␏␐␑␒␓␔␕␖␗␘␙␚␛␜␝␞␟
                                 0x00-0x1F
        Del                      DEL 0x7F                   ␡
        Dollar                   $                          ＄
        Dot                      . or .. as entire string   ．, ．．
        DoubleQuote              "                          ＂
        Hash                     #                          ＃
        InvalidUtf8              An     invalid     UTF-8   �
                                 character (e.g. latin1)
        LeftCrLfHtVt             CR  0x0D,  LF  0x0A,  HT   ␍, ␊, ␉, ␋
                                 0x09,  VT  0x0B  on  the
                                 left of a string
        LeftPeriod               .  on  the  left  of   a   ．
                                 string
        LeftSpace                SPACE  on  the left of a   ␠
                                 string
        LeftTilde                ~  on  the  left  of   a   ～
                                 string
        LtGt                     <, >                       ＜, ＞
        None                     No     characters    are
                                 encoded
        Percent                  %                          ％
        Pipe                     |                          ｜
        Question                 ?                          ？
        RightCrLfHtVt            CR  0x0D,  LF  0x0A,  HT   ␍, ␊, ␉, ␋
                                 0x09,  VT  0x0B  on  the
                                 right of a string
        RightPeriod              .  on  the  right  of  a   ．
                                 string
        RightSpace               SPACE  on the right of a   ␠
                                 string
        Semicolon                ;                          ；
        SingleQuote              '                          ＇
        Slash                    /                          ／
        SquareBracket            [, ]                       ［, ］
   Encoding example: FTP
       To take a specific example, the FTP backend’s default encoding is

              --ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"

       However, let’s say the FTP server is running on Windows and can’t have any of the invalid Windows
       characters in file names.  You are backing up Linux servers to this FTP server, and these servers do
       have those characters in file names.  So you would add the Windows set, which is

              Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

       to the existing ones, giving:

               Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del

       This can be specified using the --ftp-encoding flag or using an encoding parameter in the config file.

   Encoding example: Windows
       As another example, take a Windows system where there is a file with name Test：1.jpg, where ： is the
       Unicode fullwidth colon symbol.  When using rclone to copy this to a remote which supports :, the
       regular (halfwidth) colon (such as Google Drive), you will notice that the file gets renamed to
       Test:1.jpg.

       To avoid this you can change the set of characters rclone should convert for the local filesystem,
       using the command-line argument --local-encoding.  Rclone’s default behavior on Windows corresponds to

              --local-encoding "Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"

       If you want to use fullwidth characters ：, ＊ and ？ in your filenames without rclone changing them
       when uploading to a remote, then set the same as the default value but without Colon,Question,Asterisk:

              --local-encoding "Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"

       Alternatively, you can disable the conversion of any characters with --local-encoding None.

       Instead of using the command-line argument --local-encoding, you may also set it as the environment
       variable RCLONE_LOCAL_ENCODING, or configure a remote of type local in your config, and set the
       encoding option there.
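
       For example, such a local remote might look like this in the config file (the remote name mylocal is
       illustrative):

               [mylocal]
               type = local
               encoding = Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot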

       The risk of doing this is that if you have a filename with the regular (halfwidth) :, * and ? in your
       cloud storage, and you try to download it to your Windows filesystem, this will fail.  These characters
       are not valid in filenames on Windows, and you have told rclone not to work around this by converting
       them to valid fullwidth variants.

   MIME Type
       MIME  types  (also  known as media types) classify types of documents using a simple text classification,
       e.g. text/html or application/pdf.

       Some cloud storage systems support reading (R) the MIME type of objects and some support writing (W)  the
       MIME type of objects.

       The MIME type can be important if you are serving files directly to HTTP from the storage system.

       If  you  are copying from a remote which supports reading (R) to a remote which supports writing (W) then
       rclone will preserve the MIME types.  Otherwise they will be guessed from the extension,  or  the  remote
       itself may assign the MIME type.

   Metadata
       Backends may or may not support reading or writing metadata.  They may support reading and writing
       system metadata (metadata intrinsic to that backend) and/or user metadata (general purpose metadata).

       The levels of metadata support are

       Key                   Explanation
       ──────────────────────────────────────────────────────────────────────────
       R                     Read only System Metadata
       RW                    Read and write System Metadata
       RWU                   Read and write System Metadata and read  and  write
                             User Metadata

       See the metadata docs for more info.

   Optional Features
       All rclone remotes support a base command set.  Other features depend upon backend-specific capabilities.

       Name                Purge   Copy   Move   DirMove   CleanUp   ListR   StreamUpload   LinkSharing   About   EmptyDir
       ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       1Fichier             No     Yes    Yes      No        No       No          No            Yes        No       Yes
       Akamai Netstorage    Yes     No     No      No        No       Yes        Yes            No         No       Yes
       Amazon Drive         Yes     No    Yes      Yes       No       No          No            No         No       Yes
       Amazon  S3 (or S3    No     Yes     No      No        Yes      Yes        Yes            Yes        No        No
       compatible)
       Backblaze B2         No     Yes     No      No        Yes      Yes        Yes            Yes        No        No
       Box                  Yes    Yes    Yes      Yes     Yes ‡‡     No         Yes            Yes        Yes      Yes
       Citrix ShareFile     Yes    Yes    Yes      Yes       No       No         Yes            No         No       Yes
       Dropbox              Yes    Yes    Yes      Yes       No       No         Yes            Yes        Yes      Yes
       Enterprise   File    Yes    Yes    Yes      Yes       Yes      No          No            No         No       Yes
       Fabric
       FTP                  No      No    Yes      Yes       No       No         Yes            No         No       Yes
       Google      Cloud    Yes    Yes     No      No        No       Yes        Yes            No         No        No
       Storage
       Google Drive         Yes    Yes    Yes      Yes       Yes      Yes        Yes            Yes        Yes      Yes
       Google Photos        No      No     No      No        No       No          No            No         No        No
       HDFS                 Yes     No    Yes      Yes       No       No         Yes            No         Yes      Yes
       HiDrive              Yes    Yes    Yes      Yes       No       No         Yes            No         No       Yes
       HTTP                 No      No     No      No        No       No          No            No         No       Yes
       Internet Archive     No     Yes     No      No        Yes      Yes         No            Yes        Yes       No
       Jottacloud           Yes    Yes    Yes      Yes       Yes      Yes         No            Yes        Yes      Yes
       Koofr                Yes    Yes    Yes      Yes       No       No         Yes            Yes        Yes      Yes
       Mail.ru Cloud        Yes    Yes    Yes      Yes       Yes      No          No            Yes        Yes      Yes
       Mega                 Yes     No    Yes      Yes       Yes      No          No            Yes        Yes      Yes
       Memory               No     Yes     No      No        No       Yes        Yes            No         No        No
       Microsoft   Azure    Yes    Yes     No      No        No       Yes        Yes            No         No        No
       Blob Storage
       Microsoft            Yes    Yes    Yes      Yes       Yes      No          No            Yes        Yes      Yes
       OneDrive
       OpenDrive            Yes    Yes    Yes      Yes       No       No          No            No         No       Yes
       OpenStack Swift     Yes †   Yes     No      No        No       Yes        Yes            No         Yes       No
       Oracle     Object    No     Yes     No      No        Yes      Yes        Yes            No         No        No
       Storage
       pCloud               Yes    Yes    Yes      Yes       Yes      No          No            Yes        Yes      Yes
       premiumize.me        Yes     No    Yes      Yes       No       No          No            Yes        Yes      Yes
       put.io               Yes     No    Yes      Yes       Yes      No         Yes            No         Yes      Yes
       QingStor             No     Yes     No      No        Yes      Yes         No            No         No        No
       Seafile              Yes    Yes    Yes      Yes       Yes      Yes        Yes            Yes        Yes      Yes
       SFTP                 No      No    Yes      Yes       No       No         Yes            No         Yes      Yes
       Sia                  No      No     No      No        No       No         Yes            No         No       Yes
       SMB                  No      No    Yes      Yes       No       No         Yes            No         No       Yes
       SugarSync            Yes    Yes    Yes      Yes       No       No         Yes            Yes        No       Yes
       Storj               Yes †    No    Yes      No        No       Yes        Yes            No         No        No
       Uptobox              No     Yes    Yes      Yes       No       No          No            No         No        No
       WebDAV               Yes    Yes    Yes      Yes       No       No        Yes ‡           No         Yes      Yes
       Yandex Disk          Yes    Yes    Yes      Yes       Yes      No         Yes            Yes        Yes      Yes
       Zoho WorkDrive       Yes    Yes    Yes      Yes       No       No          No            No         Yes      Yes
       The         local    Yes     No    Yes      Yes       No       No         Yes            No         Yes      Yes
       filesystem

   Purge
       This deletes a directory quicker than just deleting all the files in the directory.

       † Note Swift and Storj implement this in order to delete directory markers but they don’t actually have a
       quicker way of deleting files other than deleting them individually.

       ‡ StreamUpload is not supported with Nextcloud

   Copy
       Used when copying an object to and from the same remote.  This is known as a server-side copy so you
       can copy a file without downloading it and uploading it again.  It is used if you use rclone copy or
       rclone move if the remote doesn’t support Move directly.

       If  the  server  doesn’t  support  Copy  directly  then  for  copy operations the file is downloaded then
       re-uploaded.

   Move
       Used when moving/renaming an object on the same remote.  This is known as a server-side move of  a  file.
       This is used in rclone move if the server doesn’t support DirMove.

       If  the  server  isn’t  capable  of  Move  then rclone simulates it with Copy then delete.  If the server
       doesn’t support Copy then rclone will download the file and re-upload it.

   DirMove
       This is used to implement rclone move to move a directory if possible.  If it isn’t then it will use Move
       on each file (which falls back to Copy then download and upload - see Move section).

   CleanUp
       This is used for emptying the trash for a remote by rclone cleanup.

       If the server can’t do CleanUp then rclone cleanup will return an error.

       ‡‡ Note that while Box implements this it has to delete every file individually so it will be slower than
       emptying the trash via the WebUI

   ListR
       The remote supports a recursive list to list all the contents beneath a directory quickly.  This  enables
       the --fast-list flag to work.  See the rclone docs for more details.
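
       For example, --fast-list can speed up commands that read a whole tree, such as (the remote name is
       illustrative):

               rclone size --fast-list remote:path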

   StreamUpload
       Some  remotes  allow  files to be uploaded without knowing the file size in advance.  This allows certain
       operations to work without spooling the file to local disk first, e.g. rclone rcat.
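
       For example, data can be piped straight from another command into a remote (the remote name is
       illustrative):

               echo "hello world" | rclone rcat remote:path/to/file.txt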

   LinkSharing
       Sets the necessary permissions on a file or folder and prints a link that allows others to  access  them,
       even if they don’t have an account on the particular cloud provider.

   About
       Rclone  about prints quota information for a remote.  Typical output includes bytes used, free, quota and
       in trash.

       If a remote lacks the about capability, rclone about remote: returns an error.

       Backends without about capability cannot determine free space for an rclone  mount,  or  use  policy  mfs
       (most free space) as a member of an rclone union remote.

       See the rclone about command.
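
       For example (the remote name is illustrative, and the exact fields shown depend on the backend):

               $ rclone about remote:
               Total:   17 GiB
               Used:    7.444 GiB
               Free:    1.315 GiB
               Trashed: 100.000 MiB
               Other:   8.241 GiB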

   EmptyDir
       The remote supports empty directories.  See Limitations for details.  Most Object/Bucket-based remotes do
       not support this.

Global Flags

       This describes the global flags available to every rclone command, split into two groups: non-backend
       and backend flags.

   Non Backend Flags
       These flags are available for every command.

                    --ask-password                         Allow prompt for password for encrypted configuration (default true)
                    --auto-confirm                         If enabled, do not request console confirmation
                    --backup-dir string                    Make backups into hierarchy based in DIR
                    --bind string                          Local address to bind to for outgoing connections, IPv4, IPv6 or name
                    --buffer-size SizeSuffix               In memory buffer size when reading files for each --transfer (default 16Mi)
                    --bwlimit BwTimetable                  Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
                    --bwlimit-file BwTimetable             Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
                    --ca-cert string                       CA certificate used to verify servers
                    --cache-dir string                     Directory rclone will use for caching (default "$HOME/.cache/rclone")
                    --check-first                          Do all the checks before starting transfers
                    --checkers int                         Number of checkers to run in parallel (default 8)
                -c, --checksum                             Skip based on checksum (if available) & size, not mod-time & size
                    --client-cert string                   Client SSL certificate (PEM) for mutual TLS auth
                    --client-key string                    Client SSL private key (PEM) for mutual TLS auth
                    --compare-dest stringArray             Include additional comma separated server-side paths during comparison
                    --config string                        Config file (default "$HOME/.config/rclone/rclone.conf")
                    --contimeout duration                  Connect timeout (default 1m0s)
                    --copy-dest stringArray                Implies --compare-dest but also copies files from paths into destination
                    --cpuprofile string                    Write cpu profile to file
                    --cutoff-mode string                   Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
                    --delete-after                         When synchronizing, delete files on destination after transferring (default)
                    --delete-before                        When synchronizing, delete files on destination before transferring
                    --delete-during                        When synchronizing, delete files during transfer
                    --delete-excluded                      Delete files on dest excluded from sync
                    --disable string                       Disable a comma separated list of features (use --disable help to see a list)
                    --disable-http-keep-alives             Disable HTTP keep-alives and use each connection once.
                    --disable-http2                        Disable HTTP/2 in the global transport
                -n, --dry-run                              Do a trial run with no permanent changes
                    --dscp string                          Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
                    --dump DumpFlags                       List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
                    --dump-bodies                          Dump HTTP headers and bodies - may contain sensitive info
                    --dump-headers                         Dump HTTP headers - may contain sensitive info
                    --error-on-no-transfer                 Sets exit code 9 if no files are transferred, useful in scripts
                    --exclude stringArray                  Exclude files matching pattern
                    --exclude-from stringArray             Read exclude patterns from file (use - to read from stdin)
                    --exclude-if-present stringArray       Exclude directories if filename is present
                    --expect-continue-timeout duration     Timeout when using expect / 100-continue in HTTP (default 1s)
                    --fast-list                            Use recursive list if available; uses more memory but fewer transactions
                    --files-from stringArray               Read list of source-file names from file (use - to read from stdin)
                    --files-from-raw stringArray           Read list of source-file names from file without any processing of lines (use - to read from stdin)
                -f, --filter stringArray                   Add a file-filtering rule
                    --filter-from stringArray              Read filtering patterns from a file (use - to read from stdin)
                    --fs-cache-expire-duration duration    Cache remotes for this long (0 to disable caching) (default 5m0s)
                    --fs-cache-expire-interval duration    Interval to check for expired remotes (default 1m0s)
                    --header stringArray                   Set HTTP header for all transactions
                    --header-download stringArray          Set HTTP header for download transactions
                    --header-upload stringArray            Set HTTP header for upload transactions
                    --human-readable                       Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
                    --ignore-case                          Ignore case in filters (case insensitive)
                    --ignore-case-sync                     Ignore case when synchronizing
                    --ignore-checksum                      Skip post copy check of checksums
                    --ignore-errors                        Delete even if there are I/O errors
                    --ignore-existing                      Skip all files that exist on destination
                    --ignore-size                          Ignore size when skipping; use mod-time or checksum
                -I, --ignore-times                         Don't skip files that match size and time - transfer all files
                    --immutable                            Do not modify files, fail if existing files have been modified
                    --include stringArray                  Include files matching pattern
                    --include-from stringArray             Read include patterns from file (use - to read from stdin)
                -i, --interactive                          Enable interactive mode
                    --kv-lock-time duration                Maximum time to keep key-value database locked by process (default 1s)
                    --log-file string                      Log everything to this file
                    --log-format string                    Comma separated list of log format options (default "date,time")
                    --log-level string                     Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
                    --log-systemd                          Activate systemd integration for the logger
                    --low-level-retries int                Number of low level retries to do (default 10)
                    --max-age Duration                     Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
                    --max-backlog int                      Maximum number of objects in sync or check backlog (default 10000)
                    --max-delete int                       When synchronizing, limit the number of deletes (default -1)
                    --max-depth int                        If set limits the recursion depth to this (default -1)
                    --max-duration duration                Maximum duration rclone will transfer data for
                    --max-size SizeSuffix                  Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
                    --max-stats-groups int                 Maximum number of stats groups to keep in memory; when the maximum is reached the oldest is discarded (default 1000)
                    --max-transfer SizeSuffix              Maximum size of data to transfer (default off)
                    --memprofile string                    Write memory profile to file
                -M, --metadata                             If set, preserve metadata when copying objects
                    --metadata-set stringArray             Add metadata key=value when uploading
                    --min-age Duration                     Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
                    --min-size SizeSuffix                  Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
                    --modify-window duration               Max time diff to be considered the same (default 1ns)
                    --multi-thread-cutoff SizeSuffix       Use multi-thread downloads for files above this size (default 250Mi)
                    --multi-thread-streams int             Max number of streams to use for multi-thread downloads (default 4)
                    --no-check-certificate                 Do not verify the server SSL certificate (insecure)
                    --no-check-dest                        Don't check the destination, copy regardless
                    --no-console                           Hide console window (supported on Windows only)
                    --no-gzip-encoding                     Don't set Accept-Encoding: gzip
                    --no-traverse                          Don't traverse destination file system on copy
                    --no-unicode-normalization             Don't normalize unicode characters in filenames
                    --no-update-modtime                    Don't update destination mod-time if files identical
                    --order-by string                      Instructions on how to order the transfers, e.g. 'size,descending'
                    --password-command SpaceSepList        Command for supplying password for encrypted configuration
                -P, --progress                             Show progress during transfer
                    --progress-terminal-title              Show progress on the terminal title (requires -P/--progress)
                -q, --quiet                                Print as little stuff as possible
                    --rc                                   Enable the remote control server
                    --rc-addr string                       IPaddress:Port or :Port to bind server to (default "localhost:5572")
                    --rc-allow-origin string               Set the allowed origin for CORS
                    --rc-baseurl string                    Prefix for URLs - leave blank for root
                    --rc-cert string                       SSL PEM key (concatenation of certificate and CA certificate)
                    --rc-client-ca string                  Client certificate authority to verify clients with
                    --rc-enable-metrics                    Enable prometheus metrics on /metrics
                    --rc-files string                      Path to local files to serve on the HTTP server
                    --rc-htpasswd string                   htpasswd file - if not provided no authentication is done
                    --rc-job-expire-duration duration      Expire finished async jobs older than this value (default 1m0s)
                    --rc-job-expire-interval duration      Interval to check for expired async jobs (default 10s)
                    --rc-key string                        SSL PEM Private key
                    --rc-max-header-bytes int              Maximum size of request header (default 4096)
                    --rc-min-tls-version string            Minimum TLS version that is acceptable (default "tls1.0")
                    --rc-no-auth                           Don't require auth for certain methods
                    --rc-pass string                       Password for authentication
                    --rc-realm string                      Realm for authentication (default "rclone")
                    --rc-serve                             Enable the serving of remote objects
                    --rc-server-read-timeout duration      Timeout for server reading data (default 1h0m0s)
                    --rc-server-write-timeout duration     Timeout for server writing data (default 1h0m0s)
                    --rc-template string                   User-specified template
                    --rc-user string                       User name for authentication
                    --rc-web-fetch-url string              URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
                    --rc-web-gui                           Launch WebGUI on localhost
                    --rc-web-gui-force-update              Force update to latest version of web gui
                    --rc-web-gui-no-open-browser           Don't open the browser automatically
                    --rc-web-gui-update                    Check and update to latest version of web gui
                    --refresh-times                        Refresh the modtime of remote files
                    --retries int                          Retry operations this many times if they fail (default 3)
                    --retries-sleep duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
                    --server-side-across-configs           Allow server-side operations (e.g. copy) to work across different configs
                    --size-only                            Skip based on size only, not mod-time or checksum
                    --stats duration                       Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
                    --stats-file-name-length int           Max file name length in stats (0 for no limit) (default 45)
                    --stats-log-level string               Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
                    --stats-one-line                       Make the stats fit on one line
                    --stats-one-line-date                  Enable --stats-one-line and add current date/time prefix
                    --stats-one-line-date-format string    Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
                    --stats-unit string                    Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
                    --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
                    --suffix string                        Suffix to add to changed files
                    --suffix-keep-extension                Preserve the extension when using --suffix
                    --syslog                               Use Syslog for logging
                    --syslog-facility string               Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
                    --temp-dir string                      Directory rclone will use for temporary files (default "/tmp")
                    --timeout duration                     IO idle timeout (default 5m0s)
                    --tpslimit float                       Limit HTTP transactions per second to this
                    --tpslimit-burst int                   Max burst of transactions for --tpslimit (default 1)
                    --track-renames                        When synchronizing, track file renames and do a server-side move if possible
                    --track-renames-strategy string        Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
                    --transfers int                        Number of file transfers to run in parallel (default 4)
                -u, --update                               Skip files that are newer on the destination
                    --use-cookies                          Enable session cookiejar
                    --use-json-log                         Use json log format
                    --use-mmap                             Use mmap allocator (see docs)
                    --use-server-modtime                   Use server modified time instead of object metadata
                    --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.60.1")
                -v, --verbose count                        Print lots more stuff (repeat for more)
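
       For example, many of these global flags combine naturally on a single command line; the remote
       name and paths below are placeholders (remove --dry-run to perform the sync):

              rclone sync /home/user/photos remote:photos \
                  --dry-run \
                  --checksum \
                  --transfers 8 \
                  --bwlimit 1M \
                  --progress \
                  --log-level INFO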

   Backend Flags
       These flags are available for every command.  They control the backends and may  be  set  in  the  config
       file.
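
       For example, a backend option such as --drive-chunk-size (listed below) may be given on the
       command line for a single run, stored in the config file, or exported as an environment
       variable; the remote name "gdrive" is a placeholder:

              # On the command line, for this invocation only
              rclone copy /data gdrive:backup --drive-chunk-size 64M

              # In the config file ($HOME/.config/rclone/rclone.conf)
              [gdrive]
              type = drive
              chunk_size = 64M

              # As an environment variable, applying to all drive remotes
              export RCLONE_DRIVE_CHUNK_SIZE=64M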

                    --acd-auth-url string                          Auth server URL
                    --acd-client-id string                         OAuth Client Id
                    --acd-client-secret string                     OAuth Client Secret
                    --acd-encoding MultiEncoder                    The encoding for the backend (default Slash,InvalidUtf8,Dot)
                    --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink (default 9Gi)
                    --acd-token string                             OAuth Access Token as a JSON blob
                    --acd-token-url string                         Token server url
                    --acd-upload-wait-per-gb Duration              Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
                    --alias-remote string                          Remote or path to alias
                    --azureblob-access-tier string                 Access tier of blob: hot, cool or archive
                    --azureblob-account string                     Storage Account Name
                    --azureblob-archive-tier-delete                Delete archive tier blobs before overwriting
                    --azureblob-chunk-size SizeSuffix              Upload chunk size (default 4Mi)
                    --azureblob-disable-checksum                   Don't store MD5 checksum with object metadata
                    --azureblob-encoding MultiEncoder              The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
                    --azureblob-endpoint string                    Endpoint for the service
                    --azureblob-key string                         Storage Account Key
                    --azureblob-list-chunk int                     Size of blob list (default 5000)
                    --azureblob-memory-pool-flush-time Duration    How often internal memory buffer pools will be flushed (default 1m0s)
                    --azureblob-memory-pool-use-mmap               Whether to use mmap buffers in internal memory pool
                    --azureblob-msi-client-id string               Object ID of the user-assigned MSI to use, if any
                    --azureblob-msi-mi-res-id string               Azure resource ID of the user-assigned MSI to use, if any
                    --azureblob-msi-object-id string               Object ID of the user-assigned MSI to use, if any
                    --azureblob-no-head-object                     If set, do not do HEAD before GET when getting objects
                    --azureblob-public-access string               Public access level of a container: blob or container
                    --azureblob-sas-url string                     SAS URL for container level access only
                    --azureblob-service-principal-file string      Path to file containing credentials for use with a service principal
                    --azureblob-upload-concurrency int             Concurrency for multipart uploads (default 16)
                    --azureblob-upload-cutoff string               Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
                    --azureblob-use-emulator                       Use the local storage emulator if set to 'true'
                    --azureblob-use-msi                            Use a managed service identity to authenticate (only works in Azure)
                    --b2-account string                            Account ID or Application Key ID
                    --b2-chunk-size SizeSuffix                     Upload chunk size (default 96Mi)
                    --b2-copy-cutoff SizeSuffix                    Cutoff for switching to multipart copy (default 4Gi)
                    --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
                    --b2-download-auth-duration Duration           Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
                    --b2-download-url string                       Custom endpoint for downloads
                    --b2-encoding MultiEncoder                     The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
                    --b2-endpoint string                           Endpoint for the service
                    --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files
                    --b2-key string                                Application Key
                    --b2-memory-pool-flush-time Duration           How often internal memory buffer pools will be flushed (default 1m0s)
                    --b2-memory-pool-use-mmap                      Whether to use mmap buffers in internal memory pool
                    --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging
                    --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload (default 200Mi)
                    --b2-version-at Time                           Show file versions as they were at the specified time (default off)
                    --b2-versions                                  Include old versions in directory listings
                    --box-access-token string                      Box App Primary Access Token
                    --box-auth-url string                          Auth server URL
                    --box-box-config-file string                   Box App config.json location
                    --box-box-sub-type string                       (default "user")
                    --box-client-id string                         OAuth Client Id
                    --box-client-secret string                     OAuth Client Secret
                    --box-commit-retries int                       Max number of times to try committing a multipart file (default 100)
                    --box-encoding MultiEncoder                    The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
                    --box-list-chunk int                           Size of listing chunk 1-1000 (default 1000)
                    --box-owned-by string                          Only show items owned by the login (email address) passed in
                    --box-root-folder-id string                    Fill in for rclone to use a non root folder as its starting point
                    --box-token string                             OAuth Access Token as a JSON blob
                    --box-token-url string                         Token server url
                    --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50 MiB) (default 50Mi)
                    --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage (default 1m0s)
                    --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming
                    --cache-chunk-path string                      Directory to cache chunk files (default "$HOME/.cache/rclone/cache-backend")
                    --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data) (default 5Mi)
                    --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk (default 10Gi)
                    --cache-db-path string                         Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend")
                    --cache-db-purge                               Clear all the cached data for this remote on start
                    --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
                    --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s)
                    --cache-plex-insecure string                   Skip all certificate verification when connecting to the Plex server
                    --cache-plex-password string                   The password of the Plex user (obscured)
                    --cache-plex-url string                        The URL of the Plex server
                    --cache-plex-username string                   The username of the Plex user
                    --cache-read-retries int                       How many times to retry a read from a cache storage (default 10)
                    --cache-remote string                          Remote to cache
                    --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
                    --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded
                    --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
                    --cache-workers int                            How many workers should run in parallel to download chunks (default 4)
                    --cache-writes                                 Cache file data on writes through the FS
                    --chunker-chunk-size SizeSuffix                Files larger than chunk size will be split in chunks (default 2Gi)
                    --chunker-fail-hard                            Choose how chunker should handle files with missing or invalid chunks
                    --chunker-hash-type string                     Choose how chunker handles hash sums (default "md5")
                    --chunker-remote string                        Remote to chunk/unchunk
                    --combine-upstreams SpaceSepList               Upstreams for combining
                    --compress-level int                           GZIP compression level (-2 to 9) (default -1)
                    --compress-mode string                         Compression mode (default "gzip")
                    --compress-ram-cache-limit SizeSuffix          Cache compressed files of unknown size in RAM up to this limit, falling back to disk above it (default 20Mi)
                    --compress-remote string                       Remote to compress
                -L, --copy-links                                   Follow symlinks and copy the pointed to item
                    --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact (default true)
                    --crypt-filename-encoding string               How to encode the encrypted filename to text string (default "base32")
                    --crypt-filename-encryption string             How to encrypt the filenames (default "standard")
                    --crypt-no-data-encryption                     Option to either encrypt file data or leave it unencrypted
                    --crypt-password string                        Password or pass phrase for encryption (obscured)
                    --crypt-password2 string                       Password or pass phrase for salt (obscured)
                    --crypt-remote string                          Remote to encrypt/decrypt
                    --crypt-server-side-across-configs             Allow server-side operations (e.g. copy) to work across different crypt configs
                    --crypt-show-mapping                           For all files listed show how the names encrypt
                    --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded
                    --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs
                    --drive-auth-owner-only                        Only consider files owned by the authenticated user
                    --drive-auth-url string                        Auth server URL
                    --drive-chunk-size SizeSuffix                  Upload chunk size (default 8Mi)
                    --drive-client-id string                       Google Application Client Id
                    --drive-client-secret string                   OAuth Client Secret
                    --drive-copy-shortcut-content                  Server side copy contents of shortcuts instead of the shortcut
                    --drive-disable-http2                          Disable drive using http2 (default true)
                    --drive-encoding MultiEncoder                  The encoding for the backend (default InvalidUtf8)
                    --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
                    --drive-formats string                         Deprecated: See export_formats
                    --drive-impersonate string                     Impersonate this user when using a service account
                    --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs
                    --drive-keep-revision-forever                  Keep new head revision of each file forever
                    --drive-list-chunk int                         Size of listing chunk 100-1000, 0 to disable (default 1000)
                    --drive-pacer-burst int                        Number of API calls to allow without sleeping (default 100)
                    --drive-pacer-min-sleep Duration               Minimum time to sleep between API calls (default 100ms)
                    --drive-resource-key string                    Resource key for accessing a link-shared file
                    --drive-root-folder-id string                  ID of the root folder
                    --drive-scope string                           Scope that rclone should use when requesting access from drive
                    --drive-server-side-across-configs             Allow server-side operations (e.g. copy) to work across different drive configs
                    --drive-service-account-credentials string     Service Account Credentials JSON blob
                    --drive-service-account-file string            Service Account Credentials JSON file path
                    --drive-shared-with-me                         Only show files that are shared with me
                    --drive-size-as-quota                          Show sizes as storage quota usage, not actual size
                    --drive-skip-checksum-gphotos                  Skip MD5 checksum on Google photos and videos only
                    --drive-skip-dangling-shortcuts                If set skip dangling shortcut files
                    --drive-skip-gdocs                             Skip google documents in all listings
                    --drive-skip-shortcuts                         If set skip shortcut files
                    --drive-starred-only                           Only show files that are starred
                    --drive-stop-on-download-limit                 Make download limit errors be fatal
                    --drive-stop-on-upload-limit                   Make upload limit errors be fatal
                    --drive-team-drive string                      ID of the Shared Drive (Team Drive)
                    --drive-token string                           OAuth Access Token as a JSON blob
                    --drive-token-url string                       Token server url
                    --drive-trashed-only                           Only show files that are in the trash
                    --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8Mi)
                    --drive-use-created-date                       Use file created date instead of modified date
                    --drive-use-shared-date                        Use date file was shared instead of modified date
                    --drive-use-trash                              Send files to the trash instead of deleting permanently (default true)
                    --drive-v2-download-min-size SizeSuffix        If objects are greater than this, use the drive v2 API to download (default off)
                    --dropbox-auth-url string                      Auth server URL
                    --dropbox-batch-commit-timeout Duration        Max time to wait for a batch to finish committing (default 10m0s)
                    --dropbox-batch-mode string                    Upload file batching sync|async|off (default "sync")
                    --dropbox-batch-size int                       Max number of files in upload batch
                    --dropbox-batch-timeout Duration               Max time to allow an idle upload batch before uploading (default 0s)
                    --dropbox-chunk-size SizeSuffix                Upload chunk size (< 150Mi) (default 48Mi)
                    --dropbox-client-id string                     OAuth Client Id
                    --dropbox-client-secret string                 OAuth Client Secret
                    --dropbox-encoding MultiEncoder                The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
                    --dropbox-impersonate string                   Impersonate this user when using a business account
                    --dropbox-shared-files                         Instructs rclone to work on individual shared files
                    --dropbox-shared-folders                       Instructs rclone to work on shared folders
                    --dropbox-token string                         OAuth Access Token as a JSON blob
                    --dropbox-token-url string                     Token server url
                    --fichier-api-key string                       Your API Key, get it from https://1fichier.com/console/params.pl
                    --fichier-encoding MultiEncoder                The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
                    --fichier-file-password string                 If you want to download a shared file that is password protected, add this parameter (obscured)
                    --fichier-folder-password string               If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
                    --fichier-shared-folder string                 If you want to download a shared folder, add this parameter
                    --filefabric-encoding MultiEncoder             The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
                    --filefabric-permanent-token string            Permanent Authentication Token
                    --filefabric-root-folder-id string             ID of the root folder
                    --filefabric-token string                      Session Token
                    --filefabric-token-expiry string               Token expiry time
                    --filefabric-url string                        URL of the Enterprise File Fabric to connect to
                    --filefabric-version string                    Version read from the file fabric
                    --ftp-ask-password                             Allow asking for FTP password when needed
                    --ftp-close-timeout Duration                   Maximum time to wait for a response to close (default 1m0s)
                    --ftp-concurrency int                          Maximum number of FTP simultaneous connections, 0 for unlimited
                    --ftp-disable-epsv                             Disable using EPSV even if server advertises support
                    --ftp-disable-mlsd                             Disable using MLSD even if server advertises support
                    --ftp-disable-tls13                            Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
                    --ftp-disable-utf8                             Disable using UTF-8 even if server advertises support
                    --ftp-encoding MultiEncoder                    The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
                    --ftp-explicit-tls                             Use Explicit FTPS (FTP over TLS)
                    --ftp-force-list-hidden                        Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
                    --ftp-host string                              FTP host to connect to
                    --ftp-idle-timeout Duration                    Max time before closing idle connections (default 1m0s)
                    --ftp-no-check-certificate                     Do not verify the TLS certificate of the server
                    --ftp-pass string                              FTP password (obscured)
                    --ftp-port int                                 FTP port number (default 21)
                    --ftp-shut-timeout Duration                    Maximum time to wait for data connection closing status (default 1m0s)
                    --ftp-tls                                      Use Implicit FTPS (FTP over TLS)
                    --ftp-tls-cache-size int                       Size of TLS session cache for all control and data connections (default 32)
                    --ftp-user string                              FTP username (default "$USER")
                    --ftp-writing-mdtm                             Use MDTM to set modification time (VsFtpd quirk)
                    --gcs-anonymous                                Access public buckets and objects without credentials
                    --gcs-auth-url string                          Auth server URL
                    --gcs-bucket-acl string                        Access Control List for new buckets
                    --gcs-bucket-policy-only                       Access checks should use bucket-level IAM policies
                    --gcs-client-id string                         OAuth Client Id
                    --gcs-client-secret string                     OAuth Client Secret
                    --gcs-decompress                               If set, this will decompress gzip encoded objects
                    --gcs-encoding MultiEncoder                    The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
                    --gcs-endpoint string                          Endpoint for the service
                    --gcs-location string                          Location for the newly created buckets
                    --gcs-no-check-bucket                          If set, don't attempt to check the bucket exists or create it
                    --gcs-object-acl string                        Access Control List for new objects
                    --gcs-project-number string                    Project number
                    --gcs-service-account-file string              Service Account Credentials JSON file path
                    --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage
                    --gcs-token string                             OAuth Access Token as a JSON blob
                    --gcs-token-url string                         Token server url
                    --gphotos-auth-url string                      Auth server URL
                    --gphotos-client-id string                     OAuth Client Id
                    --gphotos-client-secret string                 OAuth Client Secret
                    --gphotos-encoding MultiEncoder                The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
                    --gphotos-include-archived                     Also view and download archived media
                    --gphotos-read-only                            Set to make the Google Photos backend read only
                    --gphotos-read-size                            Set to read the size of media items
                    --gphotos-start-year int                       Only download photos which were uploaded after the given year (default 2000)
                    --gphotos-token string                         OAuth Access Token as a JSON blob
                    --gphotos-token-url string                     Token server url
                    --hasher-auto-size SizeSuffix                  Auto-update checksum for files smaller than this size (disabled by default)
                    --hasher-hashes CommaSepList                   Comma separated list of supported checksum types (default md5,sha1)
                    --hasher-max-age Duration                      Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
                    --hasher-remote string                         Remote to cache checksums for (e.g. myRemote:path)
                    --hdfs-data-transfer-protection string         Kerberos data transfer protection: authentication|integrity|privacy
                    --hdfs-encoding MultiEncoder                   The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
                    --hdfs-namenode string                         Hadoop name node and port
                    --hdfs-service-principal-name string           Kerberos service principal name for the namenode
                    --hdfs-username string                         Hadoop user name
                    --hidrive-auth-url string                      Auth server URL
                    --hidrive-chunk-size SizeSuffix                Chunksize for chunked uploads (default 48Mi)
                    --hidrive-client-id string                     OAuth Client Id
                    --hidrive-client-secret string                 OAuth Client Secret
                    --hidrive-disable-fetching-member-count        Do not fetch number of objects in directories unless it is absolutely necessary
                    --hidrive-encoding MultiEncoder                The encoding for the backend (default Slash,Dot)
                    --hidrive-endpoint string                      Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
                    --hidrive-root-prefix string                   The root/parent folder for all paths (default "/")
                    --hidrive-scope-access string                  Access permissions that rclone should use when requesting access from HiDrive (default "rw")
                    --hidrive-scope-role string                    User-level that rclone should use when requesting access from HiDrive (default "user")
                    --hidrive-token string                         OAuth Access Token as a JSON blob
                    --hidrive-token-url string                     Token server url
                    --hidrive-upload-concurrency int               Concurrency for chunked uploads (default 4)
                    --hidrive-upload-cutoff SizeSuffix             Cutoff/Threshold for chunked uploads (default 96Mi)
                    --http-headers CommaSepList                    Set HTTP headers for all transactions
                    --http-no-head                                 Don't use HEAD requests
                    --http-no-slash                                Set this if the site doesn't end directories with /
                    --http-url string                              URL of HTTP host to connect to
                    --internetarchive-access-key-id string         IAS3 Access Key
                    --internetarchive-disable-checksum             Don't ask the server to test against MD5 checksum calculated by rclone (default true)
                    --internetarchive-encoding MultiEncoder        The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
                    --internetarchive-endpoint string              IAS3 Endpoint (default "https://s3.us.archive.org")
                    --internetarchive-front-endpoint string        Host of InternetArchive Frontend (default "https://archive.org")
                    --internetarchive-secret-access-key string     IAS3 Secret Key (password)
                    --internetarchive-wait-archive Duration        Timeout for waiting for the server's processing tasks (specifically archive and book_op) to finish (default 0s)
                    --jottacloud-encoding MultiEncoder             The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
                    --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash
                    --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
                    --jottacloud-no-versions                       Avoid server side versioning by deleting files and recreating files instead of overwriting them
                    --jottacloud-trashed-only                      Only show files that are in the trash
                    --jottacloud-upload-resume-limit SizeSuffix    Files bigger than this can be resumed if the upload fails (default 10Mi)
                    --koofr-encoding MultiEncoder                  The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
                    --koofr-endpoint string                        The Koofr API endpoint to use
                    --koofr-mountid string                         Mount ID of the mount to use
                    --koofr-password string                        Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
                    --koofr-provider string                        Choose your storage provider
                    --koofr-setmtime                               Whether the backend supports setting the modification time (default true)
                    --koofr-user string                            Your user name
                -l, --links                                        Translate symlinks to/from regular files with a '.rclonelink' extension
                    --local-case-insensitive                       Force the filesystem to report itself as case insensitive
                    --local-case-sensitive                         Force the filesystem to report itself as case sensitive
                    --local-encoding MultiEncoder                  The encoding for the backend (default Slash,Dot)
                    --local-no-check-updated                       Don't check to see if the files change during upload
                    --local-no-preallocate                         Disable preallocation of disk space for transferred files
                    --local-no-set-modtime                         Disable setting modtime
                    --local-no-sparse                              Disable sparse files for multi-thread downloads
                    --local-nounc                                  Disable UNC (long path names) conversion on Windows
                    --local-unicode-normalization                  Apply unicode NFC normalization to paths and filenames
                    --local-zero-size-links                        Assume the Stat size of links is zero (and read them instead) (deprecated)
                    --mailru-check-hash                            What should copy do if file checksum is mismatched or invalid (default true)
                    --mailru-encoding MultiEncoder                 The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
                    --mailru-pass string                           Password (obscured)
                    --mailru-speedup-enable                        Skip full upload if there is another file with same data hash (default true)
                    --mailru-speedup-file-patterns string          Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
                    --mailru-speedup-max-disk SizeSuffix           This option allows you to disable speedup (put by hash) for large files (default 3Gi)
                    --mailru-speedup-max-memory SizeSuffix         Files larger than this size will always be hashed on disk (default 32Mi)
                    --mailru-user string                           User name (usually email)
                    --mega-debug                                   Output more debug from Mega
                    --mega-encoding MultiEncoder                   The encoding for the backend (default Slash,InvalidUtf8,Dot)
                    --mega-hard-delete                             Delete files permanently rather than putting them into the trash
                    --mega-pass string                             Password (obscured)
                    --mega-user string                             User name
                    --netstorage-account string                    Set the NetStorage account name
                    --netstorage-host string                       Domain+path of NetStorage host to connect to
                    --netstorage-protocol string                   Select between HTTP or HTTPS protocol (default "https")
                    --netstorage-secret string                     Set the NetStorage account secret/G2O key for authentication (obscured)
                -x, --one-file-system                              Don't cross filesystem boundaries (unix/macOS only)
                    --onedrive-access-scopes SpaceSepList          Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access)
                    --onedrive-auth-url string                     Auth server URL
                    --onedrive-chunk-size SizeSuffix               Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
                    --onedrive-client-id string                    OAuth Client Id
                    --onedrive-client-secret string                OAuth Client Secret
                    --onedrive-drive-id string                     The ID of the drive to use
                    --onedrive-drive-type string                   The type of the drive (personal | business | documentLibrary)
                    --onedrive-encoding MultiEncoder               The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
                    --onedrive-expose-onenote-files                Set to make OneNote files show up in directory listings
                    --onedrive-link-password string                Set the password for links created by the link command
                    --onedrive-link-scope string                   Set the scope of the links created by the link command (default "anonymous")
                    --onedrive-link-type string                    Set the type of the links created by the link command (default "view")
                    --onedrive-list-chunk int                      Size of listing chunk (default 1000)
                    --onedrive-no-versions                         Remove all versions on modifying operations
                    --onedrive-region string                       Choose national cloud region for OneDrive (default "global")
                    --onedrive-root-folder-id string               ID of the root folder
                    --onedrive-server-side-across-configs          Allow server-side operations (e.g. copy) to work across different onedrive configs
                    --onedrive-token string                        OAuth Access Token as a JSON blob
                    --onedrive-token-url string                    Token server url
                    --oos-chunk-size SizeSuffix                    Chunk size to use for uploading (default 5Mi)
                    --oos-compartment string                       Object storage compartment OCID
                    --oos-config-file string                       Path to OCI config file (default "~/.oci/config")
                    --oos-config-profile string                    Profile name inside the oci config file (default "Default")
                    --oos-copy-cutoff SizeSuffix                   Cutoff for switching to multipart copy (default 4.656Gi)
                    --oos-copy-timeout Duration                    Timeout for copy (default 1m0s)
                    --oos-disable-checksum                         Don't store MD5 checksum with object metadata
                    --oos-encoding MultiEncoder                    The encoding for the backend (default Slash,InvalidUtf8,Dot)
                    --oos-endpoint string                          Endpoint for Object storage API
                    --oos-leave-parts-on-error                     If true avoid calling abort upload on a failure, leaving all successfully uploaded parts in place for manual recovery
                    --oos-namespace string                         Object storage namespace
                    --oos-no-check-bucket                          If set, don't attempt to check the bucket exists or create it
                    --oos-provider string                          Choose your Auth Provider (default "env_auth")
                    --oos-region string                            Object storage Region
                    --oos-upload-concurrency int                   Concurrency for multipart uploads (default 10)
                    --oos-upload-cutoff SizeSuffix                 Cutoff for switching to chunked upload (default 200Mi)
                    --opendrive-chunk-size SizeSuffix              Files will be uploaded in chunks this size (default 10Mi)
                    --opendrive-encoding MultiEncoder              The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
                    --opendrive-password string                    Password (obscured)
                    --opendrive-username string                    Username
                    --pcloud-auth-url string                       Auth server URL
                    --pcloud-client-id string                      OAuth Client Id
                    --pcloud-client-secret string                  OAuth Client Secret
                    --pcloud-encoding MultiEncoder                 The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
                    --pcloud-hostname string                       Hostname to connect to (default "api.pcloud.com")
                    --pcloud-password string                       Your pcloud password (obscured)
                    --pcloud-root-folder-id string                 Fill in for rclone to use a non root folder as its starting point (default "d0")
                    --pcloud-token string                          OAuth Access Token as a JSON blob
                    --pcloud-token-url string                      Token server url
                    --pcloud-username string                       Your pcloud username
                    --premiumizeme-encoding MultiEncoder           The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
                    --putio-encoding MultiEncoder                  The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
                    --qingstor-access-key-id string                QingStor Access Key ID
                    --qingstor-chunk-size SizeSuffix               Chunk size to use for uploading (default 4Mi)
                    --qingstor-connection-retries int              Number of connection retries (default 3)
                    --qingstor-encoding MultiEncoder               The encoding for the backend (default Slash,Ctl,InvalidUtf8)
                    --qingstor-endpoint string                     Endpoint URL for connecting to the QingStor API
                    --qingstor-env-auth                            Get QingStor credentials from runtime
                    --qingstor-secret-access-key string            QingStor Secret Access Key (password)
                    --qingstor-upload-concurrency int              Concurrency for multipart uploads (default 1)
                    --qingstor-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 200Mi)
                    --qingstor-zone string                         Zone to connect to
                    --s3-access-key-id string                      AWS Access Key ID
                    --s3-acl string                                Canned ACL used when creating buckets and storing or copying objects
                    --s3-bucket-acl string                         Canned ACL used when creating buckets
                    --s3-chunk-size SizeSuffix                     Chunk size to use for uploading (default 5Mi)
                    --s3-copy-cutoff SizeSuffix                    Cutoff for switching to multipart copy (default 4.656Gi)
                    --s3-decompress                                If set, this will decompress gzip encoded objects
                    --s3-disable-checksum                          Don't store MD5 checksum with object metadata
                    --s3-disable-http2                             Disable usage of http2 for S3 backends
                    --s3-download-url string                       Custom endpoint for downloads
                    --s3-encoding MultiEncoder                     The encoding for the backend (default Slash,InvalidUtf8,Dot)
                    --s3-endpoint string                           Endpoint for S3 API
                    --s3-env-auth                                  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
                    --s3-force-path-style                          If true use path style access; if false use virtual hosted style (default true)
                    --s3-leave-parts-on-error                      If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
                    --s3-list-chunk int                            Size of listing chunk (response list for each ListObject S3 request) (default 1000)
                    --s3-list-url-encode Tristate                  Whether to url encode listings: true/false/unset (default unset)
                    --s3-list-version int                          Version of ListObjects to use: 1,2 or 0 for auto
                    --s3-location-constraint string                Location constraint - must be set to match the Region
                    --s3-max-upload-parts int                      Maximum number of parts in a multipart upload (default 10000)
                    --s3-memory-pool-flush-time Duration           How often internal memory buffer pools will be flushed (default 1m0s)
                    --s3-memory-pool-use-mmap                      Whether to use mmap buffers in internal memory pool
                    --s3-might-gzip Tristate                       Set this if the backend might gzip objects (default unset)
                    --s3-no-check-bucket                           If set, don't attempt to check the bucket exists or create it
                    --s3-no-head                                   If set, don't HEAD uploaded objects to check integrity
                    --s3-no-head-object                            If set, do not do HEAD before GET when getting objects
                    --s3-no-system-metadata                        Suppress setting and reading of system metadata
                    --s3-profile string                            Profile to use in the shared credentials file
                    --s3-provider string                           Choose your S3 provider
                    --s3-region string                             Region to connect to
                    --s3-requester-pays                            Enables requester pays option when interacting with S3 bucket
                    --s3-secret-access-key string                  AWS Secret Access Key (password)
                    --s3-server-side-encryption string             The server-side encryption algorithm used when storing this object in S3
                    --s3-session-token string                      An AWS session token
                    --s3-shared-credentials-file string            Path to the shared credentials file
                    --s3-sse-customer-algorithm string             If using SSE-C, the server-side encryption algorithm used when storing this object in S3
                    --s3-sse-customer-key string                   To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
                    --s3-sse-customer-key-base64 string            If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
                    --s3-sse-customer-key-md5 string               If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
                    --s3-sse-kms-key-id string                     If using KMS ID you must provide the ARN of Key
                    --s3-storage-class string                      The storage class to use when storing new objects in S3
                    --s3-upload-concurrency int                    Concurrency for multipart uploads (default 4)
                    --s3-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload (default 200Mi)
                    --s3-use-accelerate-endpoint                   If true use the AWS S3 accelerated endpoint
                    --s3-use-multipart-etag Tristate               Whether to use ETag in multipart uploads for verification (default unset)
                    --s3-use-presigned-request                     Whether to use a presigned request or PutObject for single part uploads
                    --s3-v2-auth                                   If true use v2 authentication
                    --s3-version-at Time                           Show file versions as they were at the specified time (default off)
                    --s3-versions                                  Include old versions in directory listings
                    --seafile-2fa                                  Two-factor authentication ('true' if the account has 2FA enabled)
                    --seafile-create-library                       Should rclone create a library if it doesn't exist
                    --seafile-encoding MultiEncoder                The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
                    --seafile-library string                       Name of the library
                    --seafile-library-key string                   Library password (for encrypted libraries only) (obscured)
                    --seafile-pass string                          Password (obscured)
                    --seafile-url string                           URL of seafile host to connect to
                    --seafile-user string                          User name (usually email address)
                    --sftp-ask-password                            Allow asking for SFTP password when needed
                    --sftp-chunk-size SizeSuffix                   Upload and download chunk size (default 32Ki)
                    --sftp-concurrency int                         The maximum number of outstanding requests for one file (default 64)
                    --sftp-disable-concurrent-reads                If set don't use concurrent reads
                    --sftp-disable-concurrent-writes               If set don't use concurrent writes
                    --sftp-disable-hashcheck                       Disable the execution of SSH commands to determine if remote file hashing is available
                    --sftp-host string                             SSH host to connect to
                    --sftp-idle-timeout Duration                   Max time before closing idle connections (default 1m0s)
                    --sftp-key-file string                         Path to PEM-encoded private key file
                    --sftp-key-file-pass string                    The passphrase to decrypt the PEM-encoded private key file (obscured)
                    --sftp-key-pem string                          Raw PEM-encoded private key
                    --sftp-key-use-agent                           When set forces the usage of the ssh-agent
                    --sftp-known-hosts-file string                 Optional path to known_hosts file
                    --sftp-md5sum-command string                   The command used to read md5 hashes
                    --sftp-pass string                             SSH password, leave blank to use ssh-agent (obscured)
                    --sftp-path-override string                    Override path used by SSH shell commands
                    --sftp-port int                                SSH port number (default 22)
                    --sftp-pubkey-file string                      Optional path to public key file
                    --sftp-server-command string                   Specifies the path or command to run an sftp server on the remote host
                    --sftp-set-env SpaceSepList                    Environment variables to pass to sftp and commands
                    --sftp-set-modtime                             Set the modified time on the remote if set (default true)
                    --sftp-sha1sum-command string                  The command used to read sha1 hashes
                    --sftp-shell-type string                       The type of SSH shell on remote server, if any
                    --sftp-skip-links                              Set to skip any symlinks and any other non regular files
                    --sftp-subsystem string                        Specifies the SSH2 subsystem on the remote host (default "sftp")
                    --sftp-use-fstat                               If set use fstat instead of stat
                    --sftp-use-insecure-cipher                     Enable the use of insecure ciphers and key exchange methods
                    --sftp-user string                             SSH username (default "$USER")
                    --sharefile-chunk-size SizeSuffix              Upload chunk size (default 64Mi)
                    --sharefile-encoding MultiEncoder              The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
                    --sharefile-endpoint string                    Endpoint for API calls
                    --sharefile-root-folder-id string              ID of the root folder
                    --sharefile-upload-cutoff SizeSuffix           Cutoff for switching to multipart upload (default 128Mi)
                    --sia-api-password string                      Sia Daemon API Password (obscured)
                    --sia-api-url string                           Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
                    --sia-encoding MultiEncoder                    The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
                    --sia-user-agent string                        Siad User Agent (default "Sia-Agent")
                    --skip-links                                   Don't warn about skipped symlinks
                    --smb-case-insensitive                         Whether the server is configured to be case-insensitive (default true)
                    --smb-domain string                            Domain name for NTLM authentication (default "WORKGROUP")
                    --smb-encoding MultiEncoder                    The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
                    --smb-hide-special-share                       Hide special shares (e.g. print$) which users aren't supposed to access (default true)
                    --smb-host string                              SMB server hostname to connect to
                    --smb-idle-timeout Duration                    Max time before closing idle connections (default 1m0s)
                    --smb-pass string                              SMB password (obscured)
                    --smb-port int                                 SMB port number (default 445)
                    --smb-user string                              SMB username (default "$USER")
                    --storj-access-grant string                    Access grant
                    --storj-api-key string                         API key
                    --storj-passphrase string                      Encryption passphrase
                    --storj-provider string                        Choose an authentication method (default "existing")
                    --storj-satellite-address string               Satellite address (default "us-central-1.storj.io")
                    --sugarsync-access-key-id string               Sugarsync Access Key ID
                    --sugarsync-app-id string                      Sugarsync App ID
                    --sugarsync-authorization string               Sugarsync authorization
                    --sugarsync-authorization-expiry string        Sugarsync authorization expiry
                    --sugarsync-deleted-id string                  Sugarsync deleted folder id
                    --sugarsync-encoding MultiEncoder              The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
                    --sugarsync-hard-delete                        Permanently delete files if true
                    --sugarsync-private-access-key string          Sugarsync Private Access Key
                    --sugarsync-refresh-token string               Sugarsync refresh token
                    --sugarsync-root-id string                     Sugarsync root id
                    --sugarsync-user string                        Sugarsync user
                    --swift-application-credential-id string       Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
                    --swift-application-credential-name string     Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
                    --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
                    --swift-auth string                            Authentication URL for server (OS_AUTH_URL)
                    --swift-auth-token string                      Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
                    --swift-auth-version int                       AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
                    --swift-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container (default 5Gi)
                    --swift-domain string                          User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
                    --swift-encoding MultiEncoder                  The encoding for the backend (default Slash,InvalidUtf8)
                    --swift-endpoint-type string                   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
                    --swift-env-auth                               Get swift credentials from environment variables in standard OpenStack form
                    --swift-key string                             API key or password (OS_PASSWORD)
                    --swift-leave-parts-on-error                   If true avoid calling abort upload on a failure
                    --swift-no-chunk                               Don't chunk files during streaming upload
                    --swift-no-large-objects                       Disable support for static and dynamic large objects
                    --swift-region string                          Region name - optional (OS_REGION_NAME)
                    --swift-storage-policy string                  The storage policy to use when creating a new container
                    --swift-storage-url string                     Storage URL - optional (OS_STORAGE_URL)
                    --swift-tenant string                          Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
                    --swift-tenant-domain string                   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
                    --swift-tenant-id string                       Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
                    --swift-user string                            User name to log in (OS_USERNAME)
                    --swift-user-id string                         User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
                    --union-action-policy string                   Policy to choose upstream on ACTION category (default "epall")
                    --union-cache-time int                         Cache time of usage and free space (in seconds) (default 120)
                    --union-create-policy string                   Policy to choose upstream on CREATE category (default "epmfs")
                    --union-min-free-space SizeSuffix              Minimum viable free space for lfs/eplfs policies (default 1Gi)
                    --union-search-policy string                   Policy to choose upstream on SEARCH category (default "ff")
                    --union-upstreams string                       List of space separated upstreams
                    --uptobox-access-token string                  Your access token
                    --uptobox-encoding MultiEncoder                The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
                    --webdav-bearer-token string                   Bearer token instead of user/pass (e.g. a Macaroon)
                    --webdav-bearer-token-command string           Command to run to get a bearer token
                    --webdav-encoding string                       The encoding for the backend
                    --webdav-headers CommaSepList                  Set HTTP headers for all transactions
                    --webdav-pass string                           Password (obscured)
                    --webdav-url string                            URL of http host to connect to
                    --webdav-user string                           User name
                    --webdav-vendor string                         Name of the WebDAV site/service/software you are using
                    --yandex-auth-url string                       Auth server URL
                    --yandex-client-id string                      OAuth Client Id
                    --yandex-client-secret string                  OAuth Client Secret
                    --yandex-encoding MultiEncoder                 The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
                    --yandex-hard-delete                           Delete files permanently rather than putting them into the trash
                    --yandex-token string                          OAuth Access Token as a JSON blob
                    --yandex-token-url string                      Token server url
                    --zoho-auth-url string                         Auth server URL
                    --zoho-client-id string                        OAuth Client Id
                    --zoho-client-secret string                    OAuth Client Secret
                    --zoho-encoding MultiEncoder                   The encoding for the backend (default Del,Ctl,InvalidUtf8)
                    --zoho-region string                           Zoho region to connect to
                    --zoho-token string                            OAuth Access Token as a JSON blob
                    --zoho-token-url string                        Token server url

Docker Volume Plugin

   Introduction
       Docker 1.9 added support for creating named volumes via the command-line interface and mounting them in
       containers as a way to share data between them.  Since Docker 1.10 you can create named volumes with
       Docker Compose by descriptions in docker-compose.yml files
       (https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) for use
       by container groups on a single host.  As of Docker 1.12 volumes are supported by Docker Swarm included
       with Docker Engine and created from descriptions in swarm compose v3 files
       (https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) for use
       with swarm stacks across multiple cluster nodes.

       Docker Volume Plugins augment the default local volume driver included in Docker with stateful volumes
       shared across containers and hosts.  Unlike local volumes, your data will not be deleted when such a
       volume is removed.  Plugins can run managed by the docker daemon, as a native system service (under
       systemd, sysv or upstart) or as a standalone executable.  Rclone can run as a docker volume plugin in
       all these modes.  It interacts with the local docker daemon via the plugin API and handles mounting of
       remote file systems into docker containers, so it must run on the same host as the docker daemon or on
       every Swarm node.

   Getting started
       In the first example we will use the SFTP rclone  volume  with  Docker  engine  on  a  standalone  Ubuntu
       machine.

       Start by installing Docker on the host.

       The FUSE driver is a prerequisite for rclone mounting and should be installed on the host:

              sudo apt-get -y install fuse

       Create the two directories required by the rclone docker plugin:

              sudo mkdir -p /var/lib/docker-plugins/rclone/config
              sudo mkdir -p /var/lib/docker-plugins/rclone/cache

       Install the managed rclone docker plugin for your architecture (here amd64):

              docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions
              docker plugin list

       Create your SFTP volume:

              docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true

       Note  that  since  all  options  are  static,  you  don’t  even  have  to run rclone config or create the
       rclone.conf file (but the config directory should still be present).  In the simplest case  you  can  use
       localhost  as hostname and your SSH credentials as username and password.  You can also change the remote
       path to your home directory on the host, for example -o path=/home/username.
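
       For example (a sketch only - the password and paths here are placeholders), a volume exposing your own
       home directory over localhost could be created like this:

              docker volume create homevolume -d rclone \
                  -o type=sftp -o sftp-host=localhost \
                  -o sftp-user=$USER -o sftp-pass="$(rclone obscure 'mypassword')" \
                  -o path=/home/$USER -o allow-other=true

       The sftp-pass value must be obscured (see the --sftp-pass flag above), hence the rclone obscure call.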

       Time to create a test container and mount the volume into it:

              docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash

       If all goes well, you will enter the new container with its working directory set to the mounted SFTP
       remote.  You can type ls to list the mounted directory or otherwise play with it.  Type exit when you
       are done.  The container will stop but the volume will stay, ready to be reused.  When it’s not needed
       anymore, remove it:

              docker volume list
              docker volume remove firstvolume

       Now let us try something more elaborate: a Google Drive volume on a multi-node Docker Swarm.

       Start by installing Docker and FUSE, creating the plugin directories and installing the rclone plugin
       on every swarm node.  Then set up the Swarm.

       Google Drive volumes need an access token, which can be set up via a web browser and will be
       periodically renewed by rclone.  The managed plugin cannot run a browser, so we will use a technique
       similar to the rclone setup on a headless box.

       Run rclone config on another machine equipped with a web browser and a graphical user interface.
       Create the Google Drive remote.  When done, transfer the resulting rclone.conf to the Swarm cluster
       and save it as /var/lib/docker-plugins/rclone/config/rclone.conf on every node.  By default this
       location is accessible only to the root user, so you will need appropriate privileges.  The resulting
       config will look like this:

              [gdrive]
              type = drive
              scope = drive
              drive_id = 1234567...
              root_folder_id = 0Abcd...
              token = {"access_token":...}

       Now create a file named example.yml with a swarm stack description like this:

              version: '3'
              services:
                heimdall:
                  image: linuxserver/heimdall:latest
                  ports: [8080:80]
                  volumes: [configdata:/config]
              volumes:
                configdata:
                  driver: rclone
                  driver_opts:
                    remote: 'gdrive:heimdall'
                    allow_other: 'true'
                    vfs_cache_mode: full
                    poll_interval: 0

       and run the stack:

              docker stack deploy example -c ./example.yml

       After a few seconds docker will spread the parsed stack description over the cluster, create the
       example_heimdall service on port 8080, run service containers on one or more cluster nodes and request
       the example_configdata volume from rclone plugins on the node hosts.  You can use the following
       commands to confirm the results:

              docker service ls
              docker service ps example_heimdall
              docker volume ls

       Point your browser to http://cluster.host.address:8080 and play with the service.  Stop it with docker
       stack remove example when you are done.  Note that the example_configdata volume(s) created on demand
       at the cluster nodes will not be automatically removed together with the stack but will stay for
       future reuse.  You can remove them manually by invoking the docker volume remove example_configdata
       command on every node.

   Creating Volumes via CLI
       Volumes can be created with docker volume create.  Here are a few examples:

              docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
              docker volume create vol2 -d rclone -o remote=:storj,access_grant=xxx:heimdall
              docker volume create vol3 -d rclone -o type=storj -o path=heimdall -o storj-access-grant=xxx -o poll-interval=0

       Note the -d rclone flag that tells docker to request the volume from the rclone driver.  This works
       even if you installed the managed driver under its full name rclone/docker-volume-rclone, because you
       provided the --alias rclone option.

       Volumes can be inspected as follows:

              docker volume list
              docker volume inspect vol1

   Volume Configuration
       Rclone flags and volume options are set via the -o flag to the docker volume create command.  They
       include backend-specific parameters as well as mount and VFS options.  There are also a few special -o
       options: remote, fs, type, path, mount-type and persist.

       remote determines an existing remote name from the config file, with trailing colon and optionally with a
       remote path.  See the full syntax in the rclone documentation.  This option  can  be  aliased  as  fs  to
       prevent confusion with the remote parameter of such backends as crypt or alias.

       The  remote=:backend:dir/subdir  syntax can be used to create on-the-fly (config-less) remotes, while the
       type and path options provide a simpler alternative for this.  Using two split options

              -o type=backend -o path=dir/subdir

       is equivalent to the combined syntax

              -o remote=:backend:dir/subdir

       but is arguably easier to parameterize in scripts.  The path part is optional.

       Mount and VFS options as well as backend parameters are named like their twin command-line flags
       without the -- CLI prefix.  Optionally you can use underscores instead of dashes in option names.  For
       example, --vfs-cache-mode full becomes -o vfs-cache-mode=full or -o vfs_cache_mode=full.  Boolean CLI
       flags without a value will gain the true value, e.g. --allow-other becomes -o allow-other=true or -o
       allow_other=true.
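
       For example (assuming a gdrive: remote like the one configured earlier), the following two commands
       create identically configured volumes, one using dashes and one using underscores:

              docker volume create photos1 -d rclone -o remote=gdrive:photos -o vfs-cache-mode=full -o allow-other=true
              docker volume create photos2 -d rclone -o remote=gdrive:photos -o vfs_cache_mode=full -o allow_other=true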

       Please note that you can provide parameters only for the backend immediately referenced by the backend
       type of the mounted remote.  If this is a wrapping backend like alias, chunker or crypt, you cannot
       provide options for the referred-to remote or backend.  This limitation is imposed by the rclone
       connection string parser.  The only workaround is to feed the plugin an rclone.conf file or to
       configure plugin arguments (see below).
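
       As an illustrative sketch (the remote names, host and paths are hypothetical), you could describe the
       wrapped chain in /var/lib/docker-plugins/rclone/config/rclone.conf:

              [mysftp]
              type = sftp
              host = backup.example.com
              user = backup
              # authentication options (pass/key_file) omitted for brevity

              [mycrypt]
              type = crypt
              remote = mysftp:encrypted
              password = ...

       and then reference only the outermost remote when creating the volume:

              docker volume create secretvol -d rclone -o remote=mycrypt: -o allow-other=true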

   Special Volume Options
       mount-type  determines the mount method and in general can be one of: mount, cmount, or mount2.  This can
       be aliased as mount_type.  It should be noted that the managed rclone docker plugin  currently  does  not
       support  the  cmount method and mount2 is rarely needed.  This option defaults to the first found method,
       which is usually mount so you generally won’t need it.

       persist is a reserved boolean (true/false) option.  In the future it will allow persisting on-the-fly
       remotes in the plugin rclone.conf file.

   Connection Strings
       The remote value can be extended with connection strings as an alternative way to supply backend
       parameters.  This is equivalent to the -o backend options, with one syntactic difference: inside a
       connection string the backend prefix must be dropped from parameter names, but in the -o param=value
       array it must be present.  For instance, compare the following option array

              -o remote=:sftp:/home -o sftp-host=localhost

       with equivalent connection string:

              -o remote=:sftp,host=localhost:/home

       This difference exists because flag options -o key=val include not only backend parameters but also
       mount/VFS flags and possibly other settings.  It also allows discriminating the remote option from
       crypt-remote (or similarly named backend parameters) and arguably simplifies scripting due to clearer
       value substitution.

   Using with Swarm or Compose
       Both Docker Swarm and Docker Compose use YAML-formatted text files to describe groups (stacks) of
       containers, their properties, networks and volumes.  Compose uses the compose v2 format
       (https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference), Swarm
       uses the compose v3 format
       (https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference).  They
       are mostly similar; the differences are explained in the docker documentation
       (https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading).

       Volumes are described by the children of the top-level volumes: node.  Each of them should be named after
       its volume and have at least two elements, the self-explanatory driver: rclone value and the driver_opts:
       structure playing the same role as -o key=val CLI flags:

              volumes:
                volume_name_1:
                  driver: rclone
                  driver_opts:
                    remote: 'gdrive:'
                    allow_other: 'true'
                    vfs_cache_mode: full
                    token: '{"type": "borrower", "expires": "2021-12-31"}'
                    poll_interval: 0

       Notice a few important details:

       • YAML prefers _ in option names instead of -.

       • YAML treats single and double quotes interchangeably.  Simple strings and integers can be left
         unquoted.

       • Boolean values must be quoted like 'true' or "false" because these two words are reserved by YAML.

       • The filesystem string is keyed with remote (or with fs).  Normally you can omit quotes here, but if
         the string ends with a colon, you must quote it like remote: "storage_box:".

       • YAML is picky about surrounding braces in values as this is in fact another syntax for key/value
         mappings.  For example, JSON access tokens usually contain double quotes and surrounding braces, so
         you must put them in single quotes.

   Installing as Managed Plugin
       The docker daemon can install plugins from an image registry and run them in managed mode.  We
       maintain the docker-volume-rclone plugin image on Docker Hub.

       The rclone volume plugin requires Docker Engine >= 19.03.15.

       The plugin requires the presence of two directories on the host before it can be installed.  Note that
       the plugin will not create them automatically.  By default they must exist on the host at the
       following locations (though you can tweak the paths):

       • /var/lib/docker-plugins/rclone/config is reserved for the rclone.conf config file and must exist
         even if it’s empty and the config file is not present.

       • /var/lib/docker-plugins/rclone/cache holds the plugin state file as well as optional VFS caches.

       You can install managed plugin with default settings as follows:

              docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone

       The :amd64 part of the image specification after the colon is called a tag.  Usually you will want to
       install the latest plugin for your architecture.  In this case the tag will just name it, like amd64
       above.  The following plugin architectures are currently available:

       • amd64

       • arm64

       • arm-v7

       Sometimes you might want a concrete plugin version, not the latest one.  Then you should use an image
       tag in the form :ARCHITECTURE-VERSION.  For example, to install plugin version v1.56.2 on architecture
       arm64 you will use the tag arm64-1.56.2 (note the removed v), so the full image specification becomes
       rclone/docker-volume-rclone:arm64-1.56.2.
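
       For instance (reusing the version from the example above), pinning the plugin on an arm64 node might
       look like this:

              docker plugin install rclone/docker-volume-rclone:arm64-1.56.2 \
                     --alias rclone --grant-all-permissions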

       We also provide the latest plugin tag, but since docker does not support multi-architecture plugins as of
       the  time of this writing, this tag is currently an alias for amd64.  By convention the latest tag is the
       default   one   and   can   be   omitted,   thus   both   rclone/docker-volume-rclone:latest   and   just
       rclone/docker-volume-rclone will refer to the latest plugin release for the amd64 platform.

       The amd64 part can also be omitted from the versioned rclone plugin tags.  For example, the image
       reference rclone/docker-volume-rclone:amd64-1.56.2 can be abbreviated as
       rclone/docker-volume-rclone:1.56.2 for convenience.  However, for non-Intel architectures you still
       have to use the full tag, as amd64 or latest will fail to start.

       The managed plugin is in fact a special container running in a namespace separate from normal docker
       containers.  Inside it runs the rclone serve docker command.  The config and cache directories are
       bind-mounted into the container at start.  The docker daemon connects to a unix socket created by the
       command inside the container.  The command creates on-demand remote mounts right inside the container,
       then docker machinery propagates them through kernel mount namespaces and bind-mounts into the
       requesting user containers.

       You can tweak a few plugin settings after installation when it’s disabled (not in use), for instance:

              docker plugin disable rclone
              docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
              docker plugin enable rclone
              docker plugin inspect rclone

       Note that if docker refuses to disable the plugin, you should find and remove all active volumes
       connected with it, as well as containers and swarm services that use them.  This is rather tedious, so
       please plan carefully in advance.

       You  can  tweak  the  following  settings:  args,  config,  cache,  HTTP_PROXY, HTTPS_PROXY, NO_PROXY and
       RCLONE_VERBOSE.  It’s your task to keep plugin settings in sync across swarm cluster nodes.

       args sets command-line arguments for the rclone serve docker command (none by default).  Arguments
       should be separated by spaces, so you will normally want to put them in quotes on the docker plugin
       set command line.  Both serve docker flags and generic rclone flags are supported, including backend
       parameters that will be used as defaults for volume creation.  Note that the plugin will fail (due to
       a known docker bug) if the args value is empty.  Use e.g. args="-v" as a workaround.

       config=/host/dir sets an alternative host location for the config directory.  The plugin will look for
       rclone.conf here.  It’s not an error if the config file is not present, but the directory must exist.
       Please note that the plugin can periodically rewrite the config file, for example when it renews
       storage access tokens.  Keep this in mind and try to avoid races between the plugin and other
       instances of rclone on the host that might try to change the config simultaneously, resulting in a
       corrupted rclone.conf.  You can also put things like private key files for SFTP remotes in this
       directory.  Just note that it’s bind-mounted inside the plugin container at the predefined path
       /data/config.  For example, if your key file is named sftp-box1.key on the host, the corresponding
       volume config option should read -o sftp-key-file=/data/config/sftp-box1.key.
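
       Continuing that example (the host name and user are hypothetical), a volume using the key could then
       be created like this:

              docker volume create keyvol -d rclone \
                  -o type=sftp -o sftp-host=box1.example.com -o sftp-user=sync \
                  -o sftp-key-file=/data/config/sftp-box1.key -o allow-other=true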

       cache=/host/dir sets an alternative host location for the cache directory.  The plugin will keep VFS
       caches here.  It will also create and maintain the docker-plugin.state file in this directory.  When
       the plugin is restarted or reinstalled, it will look in this file to recreate any volumes that existed
       previously.  However, they will not be re-mounted into consuming containers after a restart.  Usually
       this is not a problem, as the docker daemon will normally restart affected user containers after
       failures, daemon restarts or host reboots.
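
       For example (the SSD path is hypothetical), to relocate the VFS caches to a faster disk, disable the
       plugin, change the setting and re-enable it:

              docker plugin disable rclone
              docker plugin set rclone cache=/mnt/ssd/docker-plugins/rclone/cache
              docker plugin enable rclone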

       RCLONE_VERBOSE sets plugin verbosity from 0 (errors only, the default) to 2 (debugging).  Verbosity
       can also be tweaked via args="-v [-v] ...".  Since arguments are more generic, you will rarely need
       this setting.  By default the plugin output feeds the docker daemon log on the local host.  Log
       entries are reflected as errors in the docker log but retain their actual level, assigned by rclone,
       in the encapsulated message string.

       HTTP_PROXY, HTTPS_PROXY, NO_PROXY customize the plugin proxy settings.

       You can set custom plugin options right when you install it, in one go:

              docker plugin remove rclone
              docker plugin install rclone/docker-volume-rclone:amd64 \
                     --alias rclone --grant-all-permissions \
                     args="-v --allow-other" config=/etc/rclone
              docker plugin inspect rclone

   Healthchecks
       The docker plugin volume protocol doesn’t provide a way for plugins to inform the docker daemon that a
       volume is (un-)available.  As a workaround you can set up a healthcheck to verify that the mount is
       responding, for example:

              services:
                my_service:
                  image: my_image
                  healthcheck:
                    test: ls /path/to/rclone/mount || exit 1
                    interval: 1m
                    timeout: 15s
                    retries: 3
                    start_period: 15s

   Running Plugin under Systemd
       In most cases you should prefer managed mode.  Moreover, macOS and Windows do not support native
       Docker plugins, so please use managed mode on these systems.  Proceed further only if you are on
       Linux.

       First, install rclone.  You can just run it (type rclone serve docker and hit enter) as a test.

       Install FUSE:

              sudo apt-get -y install fuse

       Download the two systemd configuration files, docker-volume-rclone.service
       (https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.service)
       and docker-volume-rclone.socket
       (https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.socket).

       Put them into the /etc/systemd/system/ directory:

              cp docker-volume-rclone.service /etc/systemd/system/
              cp docker-volume-rclone.socket  /etc/systemd/system/

       Please note that all commands in this section must be run as root, but we omit the sudo prefix for
       brevity.  Now create the directories required by the service:

              mkdir -p /var/lib/docker-volumes/rclone
              mkdir -p /var/lib/docker-plugins/rclone/config
              mkdir -p /var/lib/docker-plugins/rclone/cache

       Run the docker plugin service in the socket activated mode:

              systemctl daemon-reload
              systemctl start docker-volume-rclone.service
              systemctl enable docker-volume-rclone.socket
              systemctl start docker-volume-rclone.socket
              systemctl restart docker

       Or run the service directly (the steps are consolidated in the block below):

       • Run systemctl daemon-reload to let systemd pick up the new config.

       • Run systemctl enable docker-volume-rclone.service to make the new service start automatically when
         you power on your machine.

       • Run systemctl start docker-volume-rclone.service to start the service now.

       • Run systemctl restart docker to restart the docker daemon and let it detect the new plugin socket.
         Note that this step is not needed in managed mode, where docker knows about plugin state changes.
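
       In consolidated form, the direct-run variant is:

              systemctl daemon-reload
              systemctl enable docker-volume-rclone.service
              systemctl start docker-volume-rclone.service
              systemctl restart docker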

       The two methods are equivalent from the user perspective, but I personally prefer socket activation.

   Troubleshooting
       You can see the managed plugin settings with:

              docker plugin list
              docker plugin inspect rclone

       Note that docker (including 20.10.7, the latest at the time of writing) will not show the actual
       values of args, just the defaults.

       Use journalctl --unit docker to see managed plugin output as part of the docker daemon log.  Note that
       docker reflects plugin lines as errors, but their actual level can be seen from the encapsulated
       message string.

       You will usually install the latest version of the managed plugin for your platform.  Use the
       following commands to print the actual installed version:

              PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
              sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version

       You can even use runc to run a shell inside the plugin container:

              sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash

       You can also use curl to check the plugin socket connectivity:

              docker plugin list --no-trunc
              PLUGID=123abc...
              sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate

       though this is rarely needed.

   Caveats
       Finally I’d like to mention a caveat with updating volume settings.  Docker CLI does not have a
       dedicated command like docker volume update.  It may be tempting to invoke docker volume create with
       updated options on an existing volume, but there is a gotcha: the command will do nothing, it won’t
       even return an error.  I hope that docker maintainers will fix this some day.  In the meantime, be
       aware that you must remove your volume before recreating it with new settings:

              docker volume remove my_vol
              docker volume create my_vol -d rclone -o opt1=new_val1 ...

       and verify that the settings were updated:

              docker volume list
              docker volume inspect my_vol

       If docker refuses to remove the volume, you should find containers or swarm services that use it and stop
       them first.

Bisync

   Getting started
       • Install rclone and setup your remotes.

       • Bisync    will    create    its    working    directory   at   ~/.cache/rclone/bisync   on   Linux   or
         C:\Users\MyLogin\AppData\Local\rclone\bisync on Windows.  Make sure that this location is writable.

       • Run bisync with the --resync flag, specifying the paths to the local and remote sync directory roots.

       • For successive sync runs, leave off the --resync flag.

       • Consider using a filters file for excluding unnecessary files and directories from the sync.

       • Consider setting up the --check-access feature for safety.

       • On Linux, consider setting up a crontab entry (see the example after this list).  bisync can safely
         run in concurrent cron jobs thanks to the lock files it maintains.
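
       As a minimal illustration (the paths, remote name and schedule are hypothetical), a crontab entry for
       an unattended half-hourly run might look like this:

              # run bisync every 30 minutes, appending output to a log for later inspection
              */30 * * * * rclone bisync /home/me/sync gdrive:sync --check-access >> /home/me/bisync.log 2>&1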

       Here is a typical run log (with timestamps removed for clarity):

              rclone bisync /testdir/path1/ /testdir/path2/ --verbose
              INFO  : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
              INFO  : Path1 checking for diffs
              INFO  : - Path1    File is new                         - file11.txt
              INFO  : - Path1    File is newer                       - file2.txt
              INFO  : - Path1    File is newer                       - file5.txt
              INFO  : - Path1    File is newer                       - file7.txt
              INFO  : - Path1    File was deleted                    - file4.txt
              INFO  : - Path1    File was deleted                    - file6.txt
              INFO  : - Path1    File was deleted                    - file8.txt
              INFO  : Path1:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
              INFO  : Path2 checking for diffs
              INFO  : - Path2    File is new                         - file10.txt
              INFO  : - Path2    File is newer                       - file1.txt
              INFO  : - Path2    File is newer                       - file5.txt
              INFO  : - Path2    File is newer                       - file6.txt
              INFO  : - Path2    File was deleted                    - file3.txt
              INFO  : - Path2    File was deleted                    - file7.txt
              INFO  : - Path2    File was deleted                    - file8.txt
              INFO  : Path2:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
              INFO  : Applying changes
              INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file11.txt
              INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file2.txt
              INFO  : - Path2    Queue delete                        - /testdir/path2/file4.txt
              NOTICE: - WARNING  New or changed in both paths        - file5.txt
              NOTICE: - Path1    Renaming Path1 copy                 - /testdir/path1/file5.txt..path1
              NOTICE: - Path1    Queue copy to Path2                 - /testdir/path2/file5.txt..path1
              NOTICE: - Path2    Renaming Path2 copy                 - /testdir/path2/file5.txt..path2
              NOTICE: - Path2    Queue copy to Path1                 - /testdir/path1/file5.txt..path2
              INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file6.txt
              INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file7.txt
              INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file1.txt
              INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file10.txt
              INFO  : - Path1    Queue delete                        - /testdir/path1/file3.txt
              INFO  : - Path2    Do queued copies to                 - Path1
              INFO  : - Path1    Do queued copies to                 - Path2
              INFO  : -          Do queued deletes on                - Path1
              INFO  : -          Do queued deletes on                - Path2
              INFO  : Updating listings
              INFO  : Validating listings for Path1 "/testdir/path1/" vs Path2 "/testdir/path2/"
              INFO  : Bisync successful

   Command line syntax
              $ rclone bisync --help
              Usage:
                rclone bisync remote1:path1 remote2:path2 [flags]

              Positional arguments:
                Path1, Path2  Local path, or remote storage with ':' plus optional path.
                              Type 'rclone listremotes' for list of configured remotes.

              Optional Flags:
                    --check-access            Ensure expected `RCLONE_TEST` files are found on
                                              both Path1 and Path2 filesystems, else abort.
                    --check-filename FILENAME Filename for `--check-access` (default: `RCLONE_TEST`)
                    --check-sync CHOICE       Controls comparison of final listings:
                                              `true | false | only` (default: true)
                                              If set to `only`, bisync will only compare listings
                                              from the last run but skip actual sync.
                    --filters-file PATH       Read filtering patterns from a file
                    --max-delete PERCENT      Safety check on maximum percentage of deleted files allowed.
                                              If exceeded, the bisync run will abort. (default: 50%)
                    --force                   Bypass `--max-delete` safety check and run the sync.
                                              Consider using with `--verbose`
                    --remove-empty-dirs       Remove empty directories at the final cleanup step.
                -1, --resync                  Performs the resync run.
                                              Warning: Path1 files may overwrite Path2 versions.
                                              Consider using `--verbose` or `--dry-run` first.
                    --localtime               Use local time in listings (default: UTC)
                    --no-cleanup              Retain working files (useful for troubleshooting and testing).
                    --workdir PATH            Use custom working directory (useful for testing).
                                              (default: `~/.cache/rclone/bisync`)
                -n, --dry-run                 Go through the motions - No files are copied/deleted.
                -v, --verbose                 Increases logging verbosity.
                                              May be specified more than once for more details.
                -h, --help                    help for bisync

       Arbitrary rclone flags may be specified on the bisync command line, for example:

              rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s

       Note that the interactions of various rclone flags with the bisync process flow have not been fully
       tested yet.

   Paths
       Path1 and Path2 arguments may be references to any mix of local directory paths (absolute  or  relative),
       UNC  paths  (//server/share/path),  Windows  drive  paths  (with  a  drive  letter  and  :) or configured
       remotes with optional subdirectory paths.  Cloud references are  distinguished  by  having  a  :  in  the
       argument (see Windows support below).

       Path1 and Path2 are treated equally, in that neither has priority for file changes, and access efficiency
       does not change whether a remote is on Path1 or Path2.

       The listings in the bisync working directory (default: ~/.cache/rclone/bisync) are named based on the
       Path1 and Path2 arguments, so that separate syncs to individual directories within the tree may be set
       up, e.g.: path_to_local_tree..dropbox_subdir.lst.

       By default, any empty directories left on the Path1 and Path2 filesystems after the sync are not
       deleted.  If the --remove-empty-dirs flag is specified, then both paths will have any empty
       directories purged as the last step in the process.

   Command-line flags
   --resync
       This will effectively make both Path1 and Path2 filesystems contain a matching  superset  of  all  files.
       Path2  files that do not exist in Path1 will be copied to Path1, and the process will then sync the Path1
       tree to Path2.

       The base directories on both the Path1 and Path2 filesystems must exist or bisync will fail.  This is
       required for safety - so that bisync can verify that both paths are valid.

       When using --resync, a newer version of a file on the Path2 filesystem will be overwritten by the
       Path1 filesystem version.  Carefully evaluate deltas using --dry-run.

       For a resync run, one of the paths may be empty (no files in the  path  tree).   The  resync  run  should
       result in files on both paths, else a normal non-resync run will fail.

       For a non-resync run, either path being empty (no files in the tree) fails with Empty current PathN
       listing. Cannot sync to an empty directory: X.pathN.lst.  This is a safety check to ensure that an
       unexpected empty path does not result in deleting everything in the other path.

   --check-access
       Access check files are an additional safety measure against data loss.  bisync will ensure it can find
       matching RCLONE_TEST files in the same places in the Path1 and Path2 filesystems.  Time stamps and
       file contents are not important, just the names and locations.  Place one or more RCLONE_TEST files in
       the Path1 or Path2 filesystem and then do either a run without --check-access or a --resync to set
       matching files on both filesystems.  If you have symbolic links in your sync tree it is recommended to
       place RCLONE_TEST files in the linked-to directory tree, to protect against bisync assuming a bunch of
       files were deleted if the linked-to tree becomes inaccessible.  Also see the --check-filename flag.
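
       For example (reusing the paths from the run log above), you could seed the marker on one side,
       propagate it with a resync, and then enable the check on subsequent runs:

              touch /testdir/path1/RCLONE_TEST
              rclone bisync /testdir/path1/ /testdir/path2/ --resync
              rclone bisync /testdir/path1/ /testdir/path2/ --check-access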

   --max-delete
       As  a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or
       Path2 filesystem, then bisync will abort with a warning message, without making any changes.  The default
       --max-delete is 50%.  One way to trigger this limit is to rename a directory that contains more than half
       of your files.  This will appear to bisync as a bunch of deleted files and a bunch of  new  files.   This
       safety  check  is  intended  to  block bisync from deleting all of the files on both filesystems due to a
       temporary network access issue, or if the user had inadvertently deleted the files on  one  side  or  the
       other.  To force the sync either set a different delete percentage limit, e.g. --max-delete 75 (allows up
       to 75% deletion), or use --force to bypass the check.
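
       For example, to allow up to 75% of files to be deleted on a run (paths as in the run log above):

              rclone bisync /testdir/path1/ /testdir/path2/ --max-delete 75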

       Also see the all files changed check.

   --filters-file
       By using rclone filter features you can exclude file types or directory sub-trees from the sync.  See
       the bisync filters section and the generic --filter-from documentation
       (https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file).  An example filters
       file contains filters for non-allowed files for synching with Dropbox.
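
       As an illustrative sketch only (these patterns are examples, not the official Dropbox list), a filters
       file might look like this:

              # filters.txt - patterns excluded from the sync
              - .DS_Store
              - desktop.ini
              - ~$*
              - /Temp/**

       It would be applied with --filters-file filters.txt, and the first such run must use --resync, as
       explained below.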

       If you make changes to your filters file then bisync requires a run with --resync.  This is a safety
       feature: it prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view
       (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as
       compared to the prior run listings), after which bisync would proceed to delete them for real.

       To block this from happening bisync calculates an MD5 hash of the filters file and stores the hash  in  a
       .md5  file  in  the  same  place  as your filters file.  On the next runs with --filters-file set, bisync
       re-calculates the MD5 hash of the current filters file and compares it to the hash stored in  .md5  file.
       If  they  don’t  match  the run aborts with a critical error and thus forces you to do a --resync, likely
       avoiding a disaster.

   --check-sync
       Enabled by default, the check-sync function checks that all of the same files exist in both the Path1
       and Path2 history listings.  This integrity check is performed at the end of the sync run by default.
       Any untrapped failing copies or deletes between the two paths might result in differences between the
       two listings, and in untracked file content differences between the two paths.  A resync run would
       correct the error.

       Note that the default-enabled integrity check loads both the final Path1 and Path2 listings locally,
       and thus adds to the run time of a sync.  Using --check-sync=false will disable it and may
       significantly reduce the sync run times for very large numbers of files.

       The check may be run manually with --check-sync=only.  It runs only the integrity  check  and  terminates
       without actually synching.
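
       For example (paths as in the run log above):

              rclone bisync /testdir/path1/ /testdir/path2/ --check-sync=only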

   Operation
   Runtime flow details
       bisync  retains  the  listings of the Path1 and Path2 filesystems from the prior run.  On each successive
       run it will:

       • List files on Path1 and Path2, and check for changes on each side.  Changes include New, Newer,
         Older, and Deleted files.

       • Propagate changes on path1 to path2, and vice-versa.

   Safety measures
       • A lock file prevents multiple simultaneous runs when a run takes a while.  This can be particularly
         useful if bisync is run by a cron scheduler.

       • Handle change conflicts non-destructively by creating ..path1 and ..path2 file versions.

       • File system access health check using RCLONE_TEST files (see the --check-access flag).

       • Abort on excessive deletes - protects against a failed listing being interpreted as if all the files
         were deleted.  See the --max-delete and --force flags.

       • If  something  evil  happens,  bisync goes into a safe state to block damage by later runs.  (See Error
         Handling)

   Normal sync checks
       Type            Description                                       Result                   Implementation
       ──────────────────────────────────────────────────────────────────────────────────────────────────────────
       Path2 new       File is new on Path2, does not exist on Path1     Path2 version survives   rclone copy Path2 to Path1
       Path2 newer     File is newer on Path2, unchanged on Path1        Path2 version survives   rclone copy Path2 to Path1
       Path2 deleted   File is deleted on Path2, unchanged on Path1      File is deleted          rclone delete Path1
       Path1 new       File is new on Path1, does not exist on Path2     Path1 version survives   rclone copy Path1 to Path2
       Path1 newer     File is newer on Path1, unchanged on Path2        Path1 version survives   rclone copy Path1 to Path2
       Path1 older     File is older on Path1, unchanged on Path2        Path1 version survives   rclone copy Path1 to Path2
       Path2 older     File is older on Path2, unchanged on Path1        Path2 version survives   rclone copy Path2 to Path1
       Path1 deleted   File no longer exists on Path1                    File is deleted          rclone delete Path2

   Unusual sync checks
       Type                Description             Result                 Implementation
       ──────────────────────────────────────────────────────────────────────────────────
       Path1   new   AND   File  is new on Path1   Files  renamed   to    rclone    copy
       Path2 new           AND new on Path2        _Path1 and _Path2      _Path2 file to
                                                                          Path1,  rclone
                                                                          copy    _Path1
                                                                          file to Path2
       Path2  newer  AND   File   is   newer  on   Files   renamed  to    rclone    copy
       Path1 changed       Path2    AND     also   _Path1 and _Path2      _Path2 file to
                           changed                                        Path1,  rclone
                           (newer/older/size) on                          copy    _Path1
                           Path1                                          file to Path2
       Path2  newer  AND   File  is   newer   on   Path2       version    rclone    copy
       Path1 deleted       Path2     AND    also   survives               Path2 to Path1
                           deleted on Path1
       Path2 deleted AND   File  is  deleted  on   Path1       version    rclone    copy
       Path1 changed       Path2   AND   changed   survives               Path1 to Path2
                           (newer/older/size) on
                           Path1
       Path1 deleted AND   File  is  deleted  on   Path2       version    rclone    copy
       Path2 changed       Path1   AND   changed   survives               Path2 to Path1
                           (newer/older/size) on
                           Path2

   All files changed check
       If all prior existing files on either of the filesystems have changed (e.g. timestamps have changed due
       to changing the system’s timezone) then bisync will abort without making any changes.  Any new files
       are not considered for this check.  You could use --force to force the sync (whichever side has the
       changed timestamp files wins).  Alternatively, a --resync may be used (Path1 versions will be pushed to
       Path2).  Consider the situation carefully and perhaps use --dry-run before you commit to the changes.

   Modification time
       Bisync relies on file timestamps to identify changed files and will refuse to operate if the backend
       lacks modification time support.

       If you or your application change the content of a file without changing the modification time then
       bisync will not notice the change, and thus will not copy it to the other side.

       Note that on some cloud storage systems it is not possible to have file timestamps that  match  precisely
       between the local and other filesystems.

       Bisync’s approach to this problem is to track the changes on each side separately over time, with a
       local database of files on that side, and then apply the resulting changes on the other side.

   Error handling
       Certain bisync critical errors, such as file copy/move failing,  will  result  in  a  bisync  lockout  of
       following  runs.   The  lockout  is  asserted  because the sync status and history of the Path1 and Path2
       filesystems cannot be trusted, so it is safer to block any further changes until  someone  checks  things
       out.  The recovery is to do a --resync again.

       It is recommended to use --resync --dry-run --verbose initially and carefully review what changes will be
       made before running the --resync without --dry-run.
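
       A minimal recovery sequence might look like this (hypothetical paths):

              # Preview what --resync would do:
              rclone bisync /local/tree remote:tree --resync --dry-run --verbose

              # If the preview looks right, recover for real:
              rclone bisync /local/tree remote:tree --resync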

       Most of these events come up due to an error status from an internal call.  On such a critical error
       the {...}.path1.lst and {...}.path2.lst listing files are renamed with the extension .lst-err, which
       blocks any future bisync runs (since the normal .lst files are not found).  Bisync keeps them under the
       bisync subdirectory of the rclone cache directory, typically at ${HOME}/.cache/rclone/bisync/ on Linux.

       Some errors are considered temporary and re-running the bisync  is  not  blocked.   The  critical  return
       blocks further bisync runs.

   Lock file
       When  bisync  is  running,  a  lock  file  is  created  in  the  bisync  working  directory, typically at
       ~/.cache/rclone/bisync/PATH1..PATH2.lck on Linux.  If bisync should crash or hang,  the  lock  file  will
       remain in place and block any further runs of bisync for the same paths.  Delete the lock file as part of
       debugging  the situation.  The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when
       the prior invocation is taking a long time.  The lock file contains the PID of the blocking process,
       which may help in debugging.
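
       For example, after confirming that the PID recorded in the lock file is no longer running, a stale lock
       can be removed by hand (the file name is illustrative):

              rm ~/.cache/rclone/bisync/PATH1..PATH2.lck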

       Note that while concurrent bisync runs on different path pairs are allowed, be very cautious that there
       is no overlap in the trees being synched between concurrent runs, lest there be replicated files,
       deleted files and general mayhem.

   Return codes
       rclone bisync returns the following codes to the calling program:

       • 0 on a successful run,

       • 1 for a non-critical failing run (a rerun may be successful),

       • 2 for a critically aborted run (requires a --resync to recover).
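
       A calling script can branch on these codes; here is a minimal sketch (the rclone path and remote are
       hypothetical):

              #!/bin/sh
              /path/to/rclone bisync /local/files MyCloud: --check-access
              case $? in
                0) echo "bisync successful" ;;
                1) echo "non-critical failure; a rerun may succeed" ;;
                2) echo "critical abort; run again with --resync to recover" ;;
              esac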

   Limitations
   Supported backends
       Bisync is considered BETA and has been tested with the following backends:

       • Local filesystem

       • Google Drive

       • Dropbox

       • OneDrive

       • S3

       • SFTP

       • Yandex Disk

       It has not been fully tested with other services yet.  If it works, or sorta works, please  let  us  know
       and we’ll update the list.  Run the test suite to check for proper operation as described below.

       The first release of rclone bisync requires that the underlying backend supports the modification time
       feature and will refuse to run otherwise.  This limitation will be lifted in a future rclone bisync
       release.

   Concurrent modifications
       When using Local, FTP or SFTP remotes rclone does not create temporary  files  at  the  destination  when
       copying,  and thus if the connection is lost the created file may be corrupt, which will likely propagate
       back to the original path on the next sync, resulting in data loss.  This will be solved in a future
       release; there is no workaround at the moment.

       Files  that  change  during a bisync run may result in data loss.  This has been seen in a highly dynamic
       environment, where the filesystem is getting hammered by running processes during the sync.  The solution
       is to sync at quiet times or filter out unnecessary directories and files.

   Empty directories
       New empty directories on one path are not propagated to the other side.   This  is  because  bisync  (and
       rclone)  natively  works  on  files not directories.  The following sequence is a workaround but will not
       propagate the delete of an empty directory to the other side:

              rclone bisync PATH1 PATH2
              rclone copy PATH1 PATH2 --filter "+ */" --filter "- **" --create-empty-src-dirs
              rclone copy PATH2 PATH1 --filter "+ */" --filter "- **" --create-empty-src-dirs

   Renamed directories
       Renaming a folder on the Path1 side results in deleting all files on the Path2 side and then copying
       all files again from Path1 to Path2.  Bisync sees all files under the old directory name as deleted and
       all files under the new directory name as new.  Similarly, renaming a directory on both sides to the
       same name will result in creating ..path1 and ..path2 files on both sides.  Currently the most
       effective and efficient method of renaming a directory is to rename it on both sides, then do a
       --resync.

   Case sensitivity
       Synching with case-insensitive filesystems, such as Windows or Box, can result in  file  name  conflicts.
       This  will  be  fixed  in  a future release.  The near term workaround is to make sure that files on both
       sides don’t have spelling case differences (Smile.jpg vs. smile.jpg).

   Windows support
       Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows GitHub runners.

       Drive letters are allowed, including drive letters mapped to network drives (rclone  bisync  J:\localsync
       GDrive:).  If a drive letter is omitted, the shell’s current drive is the default.  Drive letters are a
       single character followed by :, so cloud names must be more than one character long.

       Absolute paths (with or without a drive letter), and relative paths (with or without a drive letter)  are
       supported.

       The working directory is created at C:\Users\MyLogin\AppData\Local\rclone\bisync.

       Note that bisync output may show a mix of forward / and back \ slashes.

       Be careful of case-independent directory and file naming on Windows vs. case-dependent Linux.

   Filtering
       See filtering documentation for how filter rules are written and interpreted.

       Bisync’s --filters-file flag slightly extends rclone’s --filter-from filtering mechanism
       (https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file).  For a given bisync
       run you may provide only one --filters-file.  The --include*, --exclude*, and --filter flags are also
       supported.

   How to filter directories
       Filtering portions of the directory tree is a critical feature for synching.

       Examples of directory trees (always beneath the Path1/Path2 root level) you may want to exclude from
       your sync:

       • Directory trees containing only software build intermediate files.

       • Directory trees containing application temporary files and data such as the Windows
         C:\Users\MyLogin\AppData\ tree.

       • Directory trees containing files that are large, less important, or are getting thrashed continuously
         by ongoing processes.

       On the other hand, there may be only select directories that you actually want to sync, and  exclude  all
       others.  See the Example include-style filters for Windows user directories below.

   Filters file writing guidelines
       1. Begin with excluding directory trees:

           • e.g. - /AppData/

           • **  on  the end is not necessary.  Once a given directory level is excluded then everything beneath
             it won’t be looked at by rclone.

           • Exclude such directories that are unneeded, are big, dynamically thrashed, or where  there  may  be
             access permission issues.

           • Excluding such dirs first will make rclone operations (much) faster.

           • Specific files may also be excluded, as with the Dropbox exclusions example below.

       2. Decide if it’s easier (or cleaner) to:

           • Include select directories and therefore exclude everything else – or –

           • Exclude select directories and therefore include everything else

       3. Include select directories:

           • Add lines like: + /Documents/PersonalFiles/** to select which directories to include in the sync.

           • ** on the end specifies to include the full depth of the specified tree.

           • With  Include-style  filters, files at the Path1/Path2 root are not included.  They may be included
             with + /*.

           • Place RCLONE_TEST files within these included directory trees.  They will only  be  looked  for  in
             these directory trees.

           • Finish by excluding everything else by adding - ** at the end of the filters file.

           • Disregard step 4.

       4. Exclude select directories:

            • Add more lines like those in step 1.  For example: - /Desktop/tempfiles/, or - /testdir/.
              Again, a ** on the end is not necessary.

            • Do not add a - ** in the file.  Without this line, everything will be included that has not
              been explicitly excluded.

           • Disregard step 3.

       A few rules for the syntax of a filter file expanding on filtering documentation:

       • Lines may start with spaces and tabs - rclone strips leading whitespace.

       • If the first non-whitespace character is a # then the line is a comment and will be ignored.

       • Blank lines are ignored.

       • The first non-whitespace character on a filter line must be a + or -.

       • Exactly 1 space is allowed between the +/- and the path term.

       • Only forward slashes (/) are used in path terms, even on Windows.

       • The  rest  of the line is taken as the path term.  Trailing whitespace is taken literally, and probably
         is an error.

   Example include-style filters for Windows user directories
       This Windows include-style example is based on the  sync  root  (Path1)  set  to  C:\Users\MyLogin.   The
       strategy is to select specific directories to be synched with a network drive (Path2).

       • -  /AppData/  excludes  an  entire  tree of Windows stored stuff that need not be synched.  In my case,
         AppData has >11 GB of stuff I don’t care about, and there are some subdirectories beneath AppData  that
         are not accessible to my user login, resulting in bisync critical aborts.

       • Windows creates cache files starting with both upper and lowercase NTUSER at C:\Users\MyLogin.  These
         files may be dynamic or locked, and are generally of no interest.

       • There are just a few directories with my data that I do want synched, in the form of + /<path>.  By
         selecting only the directory trees I want, I avoid the dozen-plus directories that various apps make
         at C:\Users\MyLogin\Documents.

       • Include files in the root of the sync point, C:\Users\MyLogin, by adding the + /* line.

       • This  is  an  Include-style  filters  file,  therefore  it ends with - ** which excludes everything not
         explicitly included.

         - /AppData/
         - NTUSER*
         - ntuser*
         + /Documents/Family/**
         + /Documents/Sketchup/**
         + /Documents/Microcapture_Photo/**
         + /Documents/Microcapture_Video/**
         + /Desktop/**
         + /Pictures/**
         + /*
         - **

       Note also that Windows implements several “library” links such as C:\Users\MyLogin\My Documents\My
       Music pointing to C:\Users\MyLogin\Music.  rclone sees these as links, so you must add --links to the
       bisync command line if you wish to follow these links.  I find that I get permission errors in trying
       to follow the links, so I don’t include the rclone --links flag, but then you get lots of Can't follow
       symlink... noise from rclone about not following the links.  This noise can be quashed by adding
       --quiet to the bisync command line.

   Example exclude-style filters files for use with Dropbox
       • Dropbox disallows synching the listed temporary and configuration/data files.  The - <filename>
         filters exclude these files wherever they may occur in the sync tree.  Consider adding similar
         exclusions for file types you don’t need to sync, such as core dump and software build files.

       • bisync testing creates /testdir/ at the top level of the sync tree, and usually deletes the tree  after
         the  test.   If  a  normal sync should run while the /testdir/ tree exists the --check-access phase may
         fail due to unbalanced RCLONE_TEST files.  The - /testdir/ filter blocks this tree from being  synched.
         You don’t need this exclusion if you are not doing bisync development testing.

       • Everything else beneath the Path1/Path2 root will be synched.

       • RCLONE_TEST files may be placed anywhere within the tree, including the root.

   Example filters file for Dropbox
              # Filter file for use with bisync
              # See https://rclone.org/filtering/ for filtering rules
              # NOTICE: If you make changes to this file you MUST do a --resync run.
              #         Run with --dry-run to see what changes will be made.

              # Dropbox won't sync some files so filter them away here.
              # See https://help.dropbox.com/installs-integrations/sync-uploads/files-not-syncing
              - .dropbox.attr
              - ~*.tmp
              - ~$*
              - .~*
              - desktop.ini
              - .dropbox

              # Used for bisync testing, so excluded from normal runs
              - /testdir/

              # Other example filters
              #- /TiBU/
              #- /Photos/

   How –check-access handles filters
       At  the  start  of  a  bisync  run,  listings  are  gathered  for  Path1 and Path2 while using the user’s
       --filters-file.  During the check access phase, bisync scans these listings for RCLONE_TEST  files.   Any
       RCLONE_TEST  files  hidden  by the --filters-file are not in the listings and thus not checked during the
       check access phase.
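
       As a sketch, matching RCLONE_TEST files can be seeded on both sides before enabling the check (the
       paths are hypothetical):

              # Create matching check files on both sides:
              touch /local/tree/RCLONE_TEST
              rclone copyto /local/tree/RCLONE_TEST remote:tree/RCLONE_TEST

              # Later runs verify the check files are visible in both listings:
              rclone bisync /local/tree remote:tree --check-access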

   Troubleshooting
   Reading bisync logs
       Here are two normal runs.  The first one has a newer file on  the  remote.   The  second  has  no  deltas
       between local and remote.

              2021/05/16 00:24:38 INFO  : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
              2021/05/16 00:24:38 INFO  : Path1 checking for diffs
              2021/05/16 00:24:38 INFO  : - Path1    File is new                         - file.txt
              2021/05/16 00:24:38 INFO  : Path1:    1 changes:    1 new,    0 newer,    0 older,    0 deleted
              2021/05/16 00:24:38 INFO  : Path2 checking for diffs
              2021/05/16 00:24:38 INFO  : Applying changes
              2021/05/16 00:24:38 INFO  : - Path1    Queue copy to Path2                 - dropbox:/file.txt
              2021/05/16 00:24:38 INFO  : - Path1    Do queued copies to                 - Path2
              2021/05/16 00:24:38 INFO  : Updating listings
              2021/05/16 00:24:38 INFO  : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
              2021/05/16 00:24:38 INFO  : Bisync successful

              2021/05/16 00:36:52 INFO  : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
              2021/05/16 00:36:52 INFO  : Path1 checking for diffs
              2021/05/16 00:36:52 INFO  : Path2 checking for diffs
              2021/05/16 00:36:52 INFO  : No changes found
              2021/05/16 00:36:52 INFO  : Updating listings
              2021/05/16 00:36:52 INFO  : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
              2021/05/16 00:36:52 INFO  : Bisync successful

   Dry run oddity
       The --dry-run messages may indicate that it would try to delete some files.  For example, if a file is
       new on Path2 and does not exist on Path1 then it would normally be copied to Path1, but with --dry-run
       enabled those copies don’t happen, which leads to the attempted delete on Path2, blocked again by
       --dry-run: ... Not deleting as --dry-run.

       This whole confusing situation is an artifact of the --dry-run flag.   Scrutinize  the  proposed  deletes
       carefully,  and  if the files would have been copied to Path1 then the threatened deletes on Path2 may be
       disregarded.

   Retries
       Rclone has built in retries.  If you run with --verbose you’ll see error and retry messages such as shown
       below.  This is usually not a bug.  If at the end of the run you see Bisync  successful  and  not  Bisync
       critical error or Bisync aborted then the run was successful, and you can ignore the error messages.

       The following run shows an intermittent fail.  Lines 5 and 6 are low-level messages.  Line 6 is a
       bubbled-up warning message, conveying the error.  Rclone normally retries failing commands, so there
       may be numerous such messages in the log.

       Since  there  are  no  final  error/warning messages on line 7, rclone has recovered from failure after a
       retry, and the overall sync was successful.

              1: 2021/05/14 00:44:12 INFO  : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:"
              2: 2021/05/14 00:44:12 INFO  : Path1 checking for diffs
              3: 2021/05/14 00:44:12 INFO  : Path2 checking for diffs
              4: 2021/05/14 00:44:12 INFO  : Path2:  113 changes:   22 new,    0 newer,    0 older,   91 deleted
              5: 2021/05/14 00:44:12 ERROR : /path/to/local/tree/objects/af: error listing: unexpected end of JSON input
              6: 2021/05/14 00:44:12 NOTICE: WARNING  listing try 1 failed.                 - dropbox:
              7: 2021/05/14 00:44:12 INFO  : Bisync successful

       This log shows a Critical failure which requires a --resync to  recover  from.   See  the  Runtime  Error
       Handling section.

              2021/05/12 00:49:40 INFO  : Google drive root '': Waiting for checks to finish
              2021/05/12 00:49:40 INFO  : Google drive root '': Waiting for transfers to finish
              2021/05/12 00:49:40 INFO  : Google drive root '': not deleting files as there were IO errors
              2021/05/12 00:49:40 ERROR : Attempt 3/3 failed with 3 errors and: not deleting files as there were IO errors
              2021/05/12 00:49:40 ERROR : Failed to sync: not deleting files as there were IO errors
              2021/05/12 00:49:40 NOTICE: WARNING  rclone sync try 3 failed.           - /path/to/local/tree/
              2021/05/12 00:49:40 ERROR : Bisync aborted. Must run --resync to recover.

   Denied downloads of “infected” or “abusive” files
       Google Drive has a filter for certain file types (.exe, .apk, et cetera) that by default cannot be copied
       from  Google  Drive  to  the  local  filesystem.   If  you are having problems, run with --verbose to see
       specifically which files are generating complaints.  If the error is This file has been identified as
       malware or spam and cannot be downloaded, consider using the flag --drive-acknowledge-abuse.
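
       For example (hypothetical paths; the drive backend flag is passed on the bisync command line):

              rclone bisync /local/tree GDrive: --drive-acknowledge-abuse --verbose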

   Google Doc files
       Google  docs  exist  as  virtual  files  on  Google  Drive and cannot be transferred to other filesystems
       natively.  While it is possible to export a Google doc to  a  normal  file  (with  .xlsx  extension,  for
       example), it is not possible to import a normal file back into a Google document.

       Bisync’s handling of Google Doc files is to flag them in the run log output for the user’s attention
       and ignore them for any file transfers, deletes, or syncs.  They will show up with a length of -1 in
       the listings.  This bisync run is otherwise successful:

              2021/05/11 08:23:15 INFO  : Synching Path1 "/path/to/local/tree/base/" with Path2 "GDrive:"
              2021/05/11 08:23:15 INFO  : ...path2.lst-new: Ignoring incorrect line: "- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx"
              2021/05/11 08:23:15 INFO  : Bisync successful

   Usage examples
   Cron
       Rclone does not yet have a built-in capability to monitor the local file system for changes and must be
       blindly run periodically.  On Windows this can be done using the Task Scheduler; on Linux you can use
       cron, which is described below.

       The first example runs a sync every 5 minutes between a local directory and an OwnCloud server, with
       output logged to a runlog file:

              # Minute (0-59)
              #      Hour (0-23)
              #           Day of Month (1-31)
              #                Month (1-12 or Jan-Dec)
              #                     Day of Week (0-6 or Sun-Sat)
              #                         Command
                */5  *    *    *    *   /path/to/rclone bisync /local/files MyCloud: --check-access --filters-file /path/to/bisync-filters.txt --log-file /path/to/bisync.log

       See crontab syntax (https://www.man7.org/linux/man-pages/man1/crontab.1p.html#INPUT_FILES) for the
       details of crontab time interval expressions.

       If you run rclone bisync as a cron job, redirect stdout/stderr to a file.  The second example runs a
       sync to Dropbox every hour and logs all stdout (via the >>) and stderr (via 2>&1) to a log file.

              0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1

   Sharing an encrypted folder tree between hosts
       bisync can keep a local folder in sync with a cloud service, but what if you have some  highly  sensitive
       files to be synched?

       Usage of a cloud service is for exchanging both routine and sensitive personal files between one’s home
       network, one’s personal notebook when on the road, and with one’s work computer.  The routine data is
       not sensitive.  For the sensitive data, configure an rclone crypt remote to point to a subdirectory
       within the local disk tree that is bisync’d to Dropbox, and then set up a bisync for this local crypt
       directory to a directory outside of the main sync tree.

   Linux server setup
       • /path/to/DBoxroot is the root of my local sync tree.  There are numerous subdirectories.

       • /path/to/DBoxroot/crypt is the root subdirectory for files that are encrypted.  This local directory
         target is set up as an rclone crypt remote named Dropcrypt:.  See the rclone.conf snippet below.

       • /path/to/my/unencrypted/files is the root of my sensitive files - not encrypted, not  within  the  tree
         synched to Dropbox.

       • To sync my local unencrypted files with the encrypted Dropbox versions I manually run bisync
         /path/to/my/unencrypted/files Dropcrypt:.  This step could be bundled into a script to run before and
         after the full Dropbox tree sync in the last step, thus actively keeping the sensitive files in sync.

       • bisync  /path/to/DBoxroot  Dropbox: runs periodically via cron, keeping my full local sync tree in sync
         with Dropbox.

   Windows notebook setup
       • The Dropbox client runs keeping the local tree C:\Users\MyLogin\Dropbox always in sync with Dropbox.  I
         could have used rclone bisync instead.

       • A separate directory  tree  at  C:\Users\MyLogin\Documents\DropLocal  hosts  the  tree  of  unencrypted
         files/folders.

       • To  sync  my  local  unencrypted files with the encrypted Dropbox versions I manually run the following
         command: rclone bisync C:\Users\MyLogin\Documents\DropLocal Dropcrypt:.

       • The Dropbox client then syncs the changes with Dropbox.

   rclone.conf snippet
              [Dropbox]
              type = dropbox
              ...

              [Dropcrypt]
              type = crypt
              remote = /path/to/DBoxroot/crypt          # on the Linux server
              remote = C:\Users\MyLogin\Dropbox\crypt   # on the Windows notebook
              filename_encryption = standard
              directory_name_encryption = true
              password = ...
              ...

   Testing
       You should read this section only if you are developing for rclone.  You need to have the rclone source
       code locally to work with bisync tests.

       Bisync has a dedicated test framework implemented in the bisync_test.go file located in the rclone source
       tree.   The  test  suite  is  based on the go test command.  Series of tests are stored in subdirectories
       below the cmd/bisync/testdata directory.  Individual tests can be invoked by their directory  name,  e.g.
       go test . -case basic -remote local -remote2 gdrive: -v

       Tests will make a temporary folder on the remote and purge it afterwards.  If during a test run there
       are intermittent errors and rclone retries, these errors will be captured and flagged as invalid
       MISCOMPAREs.  Rerunning the test will let it pass.  Consider such failures as noise.

   Test command syntax
              usage: go test ./cmd/bisync [options...]

              Options:
                -case NAME        Name(s) of the test case(s) to run. Multiple names should
                                  be separated by commas. You can remove the `test_` prefix
                                  and replace `_` by `-` in test name for convenience.
                                  If not `all`, the name(s) should map to a directory under
                                  `./cmd/bisync/testdata`.
                                  Use `all` to run all tests (default: all)
                -remote PATH1     `local` or name of cloud service with `:` (default: local)
                -remote2 PATH2    `local` or name of cloud service with `:` (default: local)
                -no-compare       Disable comparing test results with the golden directory
                                  (default: compare)
                -no-cleanup       Disable cleanup of Path1 and Path2 testdirs.
                                  Useful for troubleshooting. (default: cleanup)
                -golden           Store results in the golden directory (default: false)
                                  This flag can be used with multiple tests.
                -debug            Print debug messages
                -stop-at NUM      Stop test after given step number. (default: run to the end)
                                  Implies `-no-compare` and `-no-cleanup`, if the test really
                                  ends prematurely. Only meaningful for a single test case.
                -refresh-times    Force refreshing the target modtime, useful for Dropbox
                                  (default: false)
                -verbose          Run tests verbosely

       Note: unlike rclone flags, which must be prefixed by a double dash (--), the test command flags may be
       prefixed by either a single or a double dash.

   Running tests
       • go  test  . -case basic -remote local -remote2 local runs the test_basic test case using only the local
         filesystem, synching one local directory with another local directory.  Test script output  is  to  the
         console,  while  commands  within scenario.txt have their output sent to the .../workdir/test.log file,
         which is finally compared to the golden copy.

       • The first argument after go test should be a relative name of the directory  containing  bisync  source
         code.   If  you  run  tests  right  from  there, the argument will be .  (current directory) as in most
         examples below.  If you run bisync tests from the rclone source directory, the  command  should  be  go
         test ./cmd/bisync ....

       • The test engine will mangle rclone output to ensure comparability with golden listings and logs.

       • Test scenarios are located in ./cmd/bisync/testdata.  The test -case argument should match the full
         name of a subdirectory under that directory.  Every test subdirectory name on disk must start with
         test_; this prefix can be omitted on the command line for brevity.  Also, underscores in the name can
         be replaced by dashes for convenience.

       • go test . -remote local -remote2 local -case all runs all tests.

       • Path1 and Path2 may either be the keyword local or may be names of configured cloud services.  go  test
         .  -remote  gdrive: -remote2 dropbox: -case basic will run the test between these two services, without
         transferring any files to the local filesystem.

       • Test run stdout and stderr console output may be directed to a file, e.g.  go test  .  -remote  gdrive:
         -remote2 local -case all > runlog.txt 2>&1

   Test execution flow
       1. The base setup in the initial directory of the testcase is applied on the Path1 and Path2
          filesystems (via an rclone copy of the initial directory to Path1, then an rclone sync from Path1 to
          Path2).

       2. The commands in the scenario.txt file are applied, with output directed to the test.log  file  in  the
          test  working  directory.   Typically,  the  first  actual command in the scenario.txt file is to do a
          --resync, which establishes the baseline {...}.path1.lst and {...}.path2.lst files in the test working
          directory (.../workdir/ relative to the temporary  test  directory).   Various  commands  and  listing
          snapshots are done within the test.

       3. Finally,  the  contents  of  the test working directory are compared to the contents of the testcase’s
          golden directory.

   Notes about testing
       • Test cases are in individual directories beneath ./cmd/bisync/testdata.  A command line reference to  a
         test  is understood to reference a directory beneath testdata.  For example, go test ./cmd/bisync -case
         dry-run -remote gdrive: -remote2 local refers to the test case in ./cmd/bisync/testdata/test_dry_run.

       • The test working directory is located at .../workdir relative to a temporary  test  directory,  usually
         under /tmp on Linux.

       • The local test sync tree is created at a temporary directory named like bisync.XXX under the system
         temporary directory.

       • The remote test sync tree is located at a temporary directory under <remote:>/bisync.XXX/.

       • path1 and/or path2 subdirectories are created in a temporary directory under the  respective  local  or
         cloud test remote.

       • By  default,  the  Path1  and  Path2  test  dirs  and workdir will be deleted after each test run.  The
         -no-cleanup flag disables purging these directories when validating and debugging a given test.   These
         directories will be flushed before running another test, independent of the -no-cleanup usage.

       • You  will  likely  want to add - /testdir/ to your normal bisync --filters-file so that normal syncs do
         not attempt to sync the test temporary directories, which may  have  RCLONE_TEST  miscompares  in  some
         testcases  which  would  otherwise  trip  the  --check-access  system.  The --check-access mechanism is
         hard-coded to ignore RCLONE_TEST files beneath bisync/testdata, so the test cases  may  reside  on  the
         synched tree even if there are check file mismatches in the test tree.

       • Some Dropbox tests can fail, notably printing the following message: src and dst identical but can't
         set mod time without deleting and re-uploading.  This is expected and happens due to the way Dropbox
         handles modification times.  You should use the -refresh-times test flag to work around this.

       • If the Dropbox tests hit a request limit for you and print the error message too_many_requests/...:
         Too many requests or write operations, then follow the Dropbox App ID instructions.

   Updating golden results
       Sometimes even a slight change in the bisync source can cause small changes spread across many log
       files.  Updating them manually would be a nightmare.

       The  -golden  flag  will store the test.log and *.lst listings from each test case into respective golden
       directories.  Golden results will automatically contain generic strings instead of local or  cloud  paths
       which means that they should match when run with a different cloud service.

       Your normal workflow might be as follows:

       1. Git-clone the rclone sources locally.

       2. Modify the bisync source and check that it builds.

       3. Run the whole test suite: go test ./cmd/bisync -remote local

       4. If some tests show log differences, recheck them individually, e.g.: go test ./cmd/bisync -remote
          local -case basic

       5. If you are convinced with the difference, goldenize all tests at once: go test ./cmd/bisync -remote
          local -golden

       6. Use word diff: git diff --word-diff ./cmd/bisync/testdata/.  Please note that a normal line-level
          diff is generally useless here.

       7. Check the difference carefully!

       8. Commit the change (git commit) only if you are sure.  If unsure, save your code changes then wipe
          the log diffs from git: git reset [--hard].

   Structure of test scenarios
       • <testname>/initial/ contains a tree of files that will be set as the initial condition  on  both  Path1
         and Path2 testdirs.

       • <testname>/modfiles/ contains files that will be used to modify the Path1 and/or Path2 filesystems.

       • <testname>/golden/  contains  the  expected  content  of  the  test  working directory (workdir) at the
         completion of the testcase.

       • <testname>/scenario.txt contains the body of the test, in the form of various commands to modify files,
         run bisync, and snapshot listings.  Output from these commands is captured to .../workdir/test.log  for
         comparison to the golden files.

   Supported test commands
       • test  <some  message> Print the line to the console and to the test.log: test sync is working correctly
         with options x, y, z

       • copy-listings <prefix> Save a copy of all .lst listings in the test working directory with the
         specified prefix: copy-listings exclude-pass-run

       • move-listings <prefix> Similar to copy-listings but removes the source

       • purge-children <dir> This will delete all child files and purge all child subdirs under given directory
         but  keep  the  parent intact.  This behavior is important for tests with Google Drive because removing
         and re-creating the parent would change its ID.

       • delete-file <file> Delete a single file.

       • delete-glob <dir> <pattern> Delete a group of files located one level deep in the given directory
         with names matching a given glob pattern.

       • touch-glob YYYY-MM-DD <dir> <pattern> Change modification time on a group of files.

       • touch-copy  YYYY-MM-DD  <source-file>  <dest-dir>  Change  file  modification  time  then  copy  it  to
         destination.

       • copy-file <source-file> <dest-dir> Copy a single file to given directory.

       • copy-as <source-file> <dest-file> Similar to above but destination must include both directory and  the
         new file name at destination.

       • copy-dir  <src>  <dst>  and  sync-dir <src> <dst> Copy/sync a directory.  Equivalent of rclone copy and
         rclone sync.

       • list-dirs <dir> Equivalent to rclone lsf -R --dirs-only <dir>

       • bisync [options] Runs bisync against -remote and -remote2.

   Supported substitution terms
       • {testdir/} - the root dir of the testcase

       • {datadir/} - the modfiles dir under the testcase root

       • {workdir/} - the temporary test working directory

       • {path1/} - the root of the Path1 test directory tree

       • {path2/} - the root of the Path2 test directory tree

       • {session} - base name of the test listings

       • {/} - OS-specific path separator

       • {spc}, {tab}, {eol} - whitespace

       • {chr:HH} - raw byte with given hexadecimal code

       Substitution results of the terms named like {dir/} will end with / (or a backslash on Windows), so it
       is not necessary to include a slash in the usage, for example delete-file {path1/}file1.txt.
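
       A few illustrative scenario.txt lines using these terms and the commands above (the file names and
       message are hypothetical):

              test check that a changed file is propagated
              touch-glob 2001-01-02 {path1/} file*.txt
              bisync
              copy-listings changed-pass-run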

   Benchmarks
       This section is work in progress.

       Here are a few data points for scale, execution times, and memory usage.

       The  first  set  of data was taken between a local disk to Dropbox.  The speedtest.net download speed was
       ~170 Mbps, and upload speed was ~10 Mbps.  500 files (~9.5 MB each) had been already synched.   50  files
       were added in a new directory, each ~9.5 MB, ~475 MB total.

       Change                     Operations and times                  Overall run
                                                                        time
       ─────────────────────────────────────────────────────────────────────────────
       500     files    synched   1x listings for Path1 & Path2         1.5 sec
       (nothing to move)
       500 files  synched  with   1x listings for Path1 & Path2         1.5 sec
        --check-access
       50 new files on remote     Queued 50 copies down: 27 sec         29 sec
       Moved local dir            Queued  50  copies  up: 410 sec, 50   421 sec
                                  deletes up: 9 sec
       Moved remote dir           Queued 50 copies down: 31  sec,  50   33 sec
                                  deletes down: <1 sec
       Delete local dir           Queued 50 deletes up: 9 sec           13 sec

       This next data is from a user’s application.  They had ~400GB of data over 1.96 million files being
       sync’ed between a Windows local disk and some remote cloud.  The full file path length was on average
       35 characters (which factors into load time and RAM required).

       • Loading  the  prior listing into memory (1.96 million files, listing file size 140 MB) took ~30 sec and
         occupied about 1 GB of RAM.

       • Getting a fresh listing of the local file system (producing the 140 MB output file) took about XXX sec.

       • Getting a fresh listing of the remote file system (producing the 140 MB output  file)  took  about  XXX
         sec. The network download speed was measured at XXX Mb/s.

       • Once the prior and current Path1 and Path2 listings were loaded (a total of four to be loaded, two at a
         time),  determining  the  deltas  was pretty quick (a few seconds for this test case), and the transfer
         time for any files to be copied was dominated by the network bandwidth.

   References
       rclone’s bisync implementation was derived from the rclonesync-V2 project,  including  documentation  and
       test mechanisms, with @cjnaz’s full support and encouragement.

       rclone bisync is similar in nature to a range of other projects:

       • unison

       • syncthing

       • cjnaz/rclonesync

       • ConorWilliams/rsinc

       • jwink3101/syncrclone

       • DavideRossi/upback

       Bisync adopts the differential synchronization technique, which is based on keeping a history of
       changes performed by both synchronizing sides.  See the Dual Shadow Method section in Neil Fraser’s
       article.

       Also note a number of academic publications by Benjamin Pierce
       (http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization) about Unison and
       synchronization in general.

1Fichier

       This  is  a backend for the 1fichier cloud storage service.  Note that a Premium subscription is required
       to use the API.

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       The initial setup for 1Fichier involves getting the API key from the website which you need to do in your
       browser.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / 1Fichier
                 \ "fichier"
              [snip]
              Storage> fichier
              ** See help for fichier backend at: https://rclone.org/fichier/ **

              Your API Key, get it from https://1fichier.com/console/params.pl
              Enter a string value. Press Enter for the default ("").
              api_key> example_key

              Edit advanced config? (y/n)
              y) Yes
              n) No
              y/n>
              Remote config
              --------------------
              [remote]
              type = fichier
              api_key = example_key
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Once configured you can then use rclone like this,

       List directories in top level of your 1Fichier account

              rclone lsd remote:

       List all the files in your 1Fichier account

              rclone ls remote:

       To copy a local directory to a 1Fichier directory called backup

              rclone copy /home/source remote:backup

   Modified time and hashes
       1Fichier does not support modification times.  It supports the Whirlpool hash algorithm.
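
       Whirlpool hashes can be listed with the rclone hashsum command, for example:

              rclone hashsum whirlpool remote:path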

   Duplicated files
       1Fichier can have two files with exactly the same name and path (unlike a normal file system).

       Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

        Character   Value   Replacement
        ────────────────────────────────
        \           0x5C        ＼
        <           0x3C        ＜
        >           0x3E        ＞
        "           0x22        ＂
        $           0x24        ＄
        `           0x60        ｀
        '           0x27        ＇

       File names can also not start or end with the following characters.  These only get replaced if they  are
       the first or last character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to fichier (1Fichier).

   –fichier-api-key
       Your API Key, get it from https://1fichier.com/console/params.pl.

       Properties:

       • Config: api_key

       • Env Var: RCLONE_FICHIER_API_KEY

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to fichier (1Fichier).

   –fichier-shared-folder
       If you want to download a shared folder, add this parameter.

       Properties:

       • Config: shared_folder

       • Env Var: RCLONE_FICHIER_SHARED_FOLDER

       • Type: string

       • Required: false

   –fichier-file-password
       If you want to download a shared file that is password protected, add this parameter.

       NB Input to this must be obscured - see rclone obscure.
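
       For example, generate the obscured form of the password and put the output into the file_password
       config entry (the password shown is a placeholder):

              rclone obscure 'yourfilepassword'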

       Properties:

       • Config: file_password

       • Env Var: RCLONE_FICHIER_FILE_PASSWORD

       • Type: string

       • Required: false

   –fichier-folder-password
       If you want to list the files in a shared folder that is password protected, add this parameter.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: folder_password

       • Env Var: RCLONE_FICHIER_FOLDER_PASSWORD

       • Type: string

       • Required: false

   –fichier-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_FICHIER_ENCODING

       • Type: MultiEncoder

       • Default:
         Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot

   Limitations
       rclone about is not supported by the 1Fichier backend.  Backends without this capability cannot determine
       free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

       See List of backends that do not support rclone about and rclone about.

Alias

       The alias remote provides a new name for another remote.

       Paths   may   be   as   deep   as   required  or  a  local  path,  e.g. remote:directory/subdirectory  or
       /directory/subdirectory.

       During the initial setup with rclone config you will specify the target remote.  The  target  remote  can
       either be a local path or another remote.

       Subfolders can be used in the target remote.  Assume an alias remote named backup with the target
       mydrive:private/backup.  Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone
       mkdir mydrive:private/backup/desktop.

       There   will   be  no  special  handling  of  paths  containing  ..   segments.   Invoking  rclone  mkdir
       backup:../desktop is exactly the same as invoking rclone  mkdir  mydrive:private/backup/../desktop.   The
       empty path is not allowed as a remote.  To alias the current directory use . instead.

       The target remote can also be a connection string.  This can be used to modify the config of a remote for
       different  uses, e.g.  the alias myDriveTrash with the target remote myDrive,trashed_only: can be used to
       only show the trashed files in myDrive.
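
       A minimal rclone.conf sketch of such an alias, using the names from the example above:

              [myDriveTrash]
              type = alias
              remote = myDrive,trashed_only: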

   Configuration
       Here is an example of how to make an alias called remote for local folder.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Alias for an existing remote
                 \ "alias"
              [snip]
              Storage> alias
              Remote or path to alias.
              Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
              remote> /mnt/storage/backup
              Remote config
              --------------------
              [remote]
              remote = /mnt/storage/backup
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y
              Current remotes:

              Name                 Type
              ====                 ====
              remote               alias

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> q

       Once configured you can then use rclone like this,

       List directories in top level in /mnt/storage/backup

              rclone lsd remote:

       List all the files in /mnt/storage/backup

              rclone ls remote:

       Copy another local directory to the alias directory called source

              rclone copy /home/source remote:source

   Standard options
       Here are the Standard options specific to alias (Alias for an existing remote).

   –alias-remote
       Remote or path to alias.

       Can be “myremote:path/to/dir”, “myremote:bucket”, “myremote:” or “/local/path”.

       Properties:

       • Config: remote

       • Env Var: RCLONE_ALIAS_REMOTE

       • Type: string

       • Required: true

Amazon Drive

       Amazon Drive, formerly known as Amazon Cloud Drive,  is  a  cloud  storage  service  run  by  Amazon  for
       consumers.

   Status
       Important:  rclone  supports  Amazon  Drive only if you have your own set of API keys.  Unfortunately the
       Amazon Drive developer program is now closed to new entries so if you don’t already have your own set  of
       keys you will not be able to use rclone with Amazon Drive.

       For   the   history   on   why   rclone   no   longer   has   a   set   of  Amazon  Drive  API  keys  see
       https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314 the forum.

       If you happen to know anyone who works at Amazon then please ask  them  to  re-instate  rclone  into  the
       Amazon Drive developer program - thanks!

   Configuration
       The  initial  setup  for  Amazon  Drive involves getting a token from Amazon which you need to do in your
       browser.  rclone config walks you through it.

       The configuration process for Amazon Drive may involve using an oauth proxy.  This is used  to  keep  the
       Amazon credentials out of the source code.  The proxy runs in Google’s very secure App Engine environment
       and doesn’t store any credentials which pass through it.

       Since rclone doesn’t currently have its own Amazon Drive credentials, you will either need to have your
       own client_id and client_secret with Amazon Drive, or use a third-party oauth proxy, in which case you
       will need to enter client_id, client_secret, auth_url and token_url.

       Note also that if you are not using Amazon’s auth_url and token_url (i.e. you filled in something for
       those), then when setting up on a remote machine you can only use the copying-the-config method of
       configuration (https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) - rclone
       authorize will not work.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              n/r/c/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Amazon Drive
                 \ "amazon cloud drive"
              [snip]
              Storage> amazon cloud drive
              Amazon Application Client Id - required.
              client_id> your client ID goes here
              Amazon Application Client Secret - required.
              client_secret> your client secret goes here
              Auth server URL - leave blank to use Amazon's.
              auth_url> Optional auth URL
              Token server url - leave blank to use Amazon's.
              token_url> Optional token URL
              Remote config
              Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              client_id = your client ID goes here
              client_secret = your client secret goes here
              auth_url = Optional auth URL
              token_url = Optional token URL
              token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Note that rclone runs a webserver on your local machine to collect the token as  returned  from  Amazon.
       This only runs from the moment it opens your browser to the moment you get back  the  verification  code.
       This is on http://127.0.0.1:53682/ and it may require you to unblock it  temporarily  if  you  are  running  a
       host firewall.

       Once configured you can then use rclone like this,

       List directories in top level of your Amazon Drive

              rclone lsd remote:

       List all the files in your Amazon Drive

              rclone ls remote:

       To copy a local directory to an Amazon Drive directory called backup

              rclone copy /home/source remote:backup

   Modified time and MD5SUMs
       Amazon Drive doesn’t allow modification times to be changed via the API so these  won’t  be  accurate  or
       used for syncing.

       It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.
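       For example, a sync that compares sizes and MD5 checksums rather than modification times might look like  this
       (paths are illustrative):

               rclone sync --checksum /home/source remote:backup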

   Restricted filename characters
       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
       /           0x2F        ／

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Deleting files
       Any  files  you  delete with rclone will end up in the trash.  Amazon don’t provide an API to permanently
       delete files, nor to empty the trash, so you will have to do that with one of Amazon’s apps  or  via  the
       Amazon  Drive website.  As of November 17, 2016, files are automatically deleted by Amazon from the trash
       after 30 days.

   Using with non .com Amazon accounts
       Let’s say you usually use amazon.co.uk.  When you authenticate  with  rclone  it  will  take  you  to  an
       amazon.com page to log in.  Your amazon.co.uk email and password should work here just fine.

   Standard options
       Here are the Standard options specific to amazon cloud drive (Amazon Drive).

   –acd-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_ACD_CLIENT_ID

       • Type: string

       • Required: false

   –acd-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_ACD_CLIENT_SECRET

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to amazon cloud drive (Amazon Drive).

   –acd-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_ACD_TOKEN

       • Type: string

       • Required: false

   –acd-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_ACD_AUTH_URL

       • Type: string

       • Required: false

   –acd-token-url
       Token server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_ACD_TOKEN_URL

       • Type: string

       • Required: false

   –acd-checkpoint
       Checkpoint for internal polling (debug).

       Properties:

       • Config: checkpoint

       • Env Var: RCLONE_ACD_CHECKPOINT

       • Type: string

       • Required: false

   –acd-upload-wait-per-gb
       Additional time per GiB to wait after a failed complete upload to see if it appears.

       Sometimes  Amazon  Drive  gives  an error when a file has been fully uploaded but the file appears anyway
       after a little while.  This happens sometimes for files over 1 GiB in size  and  nearly  every  time  for
       files bigger than 10 GiB.  This parameter controls the time rclone waits for the file to appear.

       The default value for this parameter is 3 minutes per GiB, so by default it will wait 3 minutes for every
       GiB uploaded to see if the file appears.

       You  can  disable  this feature by setting it to 0.  This may cause conflict errors as rclone retries the
       failed upload but the file will most likely appear correctly eventually.

       These values were determined empirically by observing lots of uploads of big files for a  range  of  file
       sizes.

       Upload with the “-v” flag to see more info about what rclone is doing in this situation.

       Properties:

       • Config: upload_wait_per_gb

       • Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB

       • Type: Duration

       • Default: 3m0s

   –acd-templink-threshold
       Files >= this size will be downloaded via their tempLink.

       Files  this  size or more will be downloaded via their “tempLink”.  This is to work around a problem with
       Amazon Drive which blocks downloads of files bigger than about 10 GiB.  The default for  this  is  9  GiB
       which shouldn’t need to be changed.

       To  download  files above this threshold, rclone requests a “tempLink” which downloads the file through a
       temporary URL directly from the underlying S3 storage.

       Properties:

       • Config: templink_threshold

       • Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD

       • Type: SizeSuffix

       • Default: 9Gi

   –acd-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_ACD_ENCODING

       • Type: MultiEncoder

       • Default: Slash,InvalidUtf8,Dot

   Limitations
       Note that Amazon Drive is case insensitive so you can’t have a file called  “Hello.doc”  and  one  called
       “hello.doc”.

       Amazon  Drive  has  rate  limiting  so  you  may  notice  errors  in  the sync (429 errors).  rclone will
       automatically retry the sync up to 3 times by default (see --retries flag) which  should  hopefully  work
       around this problem.

       Amazon Drive has an internal limit on the size of files that can be uploaded to the service.  This  limit  is
       not officially published, but all files larger than this will fail.

       At the time of writing (Jan 2016) this limit is in the area of 50 GiB per file.  This  means  that  larger
       files are likely to fail.

       Unfortunately there is no way for rclone to see that this failure is because of file size,  so  it  will
       retry the operation, as it would any other failure.  To avoid this problem, use the --max-size 50000M option
       to limit the maximum size of uploaded files.  Note that --max-size does not split files into  segments,  it
       only ignores files over this size.
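       For example, a sync that skips files too large for Amazon Drive might look like this (paths are illustrative):

               rclone sync --max-size 50000M /home/source remote:backup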

       rclone about is not supported by the Amazon Drive  backend.   Backends  without  this  capability  cannot
       determine  free  space  for  an rclone mount or use policy mfs (most free space) as a member of an rclone
       union remote.

       See the list of backends that do not support rclone about, and the rclone about documentation.

Amazon S3 Storage Providers

       The S3 backend can be used with a number of different providers:

       • AWS S3

       • Alibaba Cloud (Aliyun) Object Storage System (OSS)

       • Ceph

       • China Mobile Ecloud Elastic Object Storage (EOS)

       • Cloudflare R2

       • Arvan Cloud Object Storage (AOS)

       • DigitalOcean Spaces

       • Dreamhost

       • Huawei OBS

       • IBM COS S3

       • IDrive e2

       • IONOS Cloud

       • Minio

       • Qiniu Cloud Object Storage (Kodo)

       • RackCorp Object Storage

       • Scaleway

       • Seagate Lyve Cloud

       • SeaweedFS

       • StackPath

       • Storj

       • Tencent Cloud Object Storage (COS)

       • Wasabi

       Paths are specified as remote:bucket (or remote: for the lsd command).  You  may  put  subdirectories  in
       too, e.g. remote:bucket/path/to/dir.

       Once you have made a remote (see the provider specific section above) you can use it like this:

       See all buckets

              rclone lsd remote:

       Make a new bucket

              rclone mkdir remote:bucket

       List the contents of a bucket

              rclone ls remote:bucket

       Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

              rclone sync -i /home/local/directory remote:bucket

   Configuration
       Here is an example of making an s3 configuration for the AWS S3 provider.  Most of this applies to the other
       providers as well; any differences are described below.

       First run

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Minio, and Tencent COS
                 \ "s3"
              [snip]
              Storage> s3
              Choose your S3 provider.
              Choose a number from below, or type in your own value
               1 / Amazon Web Services (AWS) S3
                 \ "AWS"
               2 / Ceph Object Storage
                 \ "Ceph"
               3 / Digital Ocean Spaces
                 \ "DigitalOcean"
               4 / Dreamhost DreamObjects
                 \ "Dreamhost"
               5 / IBM COS S3
                 \ "IBMCOS"
               6 / Minio Object Storage
                 \ "Minio"
               7 / Wasabi Object Storage
                 \ "Wasabi"
               8 / Any other S3 compatible provider
                 \ "Other"
              provider> 1
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own value
               1 / Enter AWS credentials in the next step
                 \ "false"
               2 / Get AWS credentials from the environment (env vars or IAM)
                 \ "true"
              env_auth> 1
              AWS Access Key ID - leave blank for anonymous access or runtime credentials.
              access_key_id> XXX
              AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
              secret_access_key> YYY
              Region to connect to.
              Choose a number from below, or type in your own value
                 / The default endpoint - a good choice if you are unsure.
               1 | US Region, Northern Virginia, or Pacific Northwest.
                 | Leave location constraint empty.
                 \ "us-east-1"
                 / US East (Ohio) Region
               2 | Needs location constraint us-east-2.
                 \ "us-east-2"
                 / US West (Oregon) Region
               3 | Needs location constraint us-west-2.
                 \ "us-west-2"
                 / US West (Northern California) Region
               4 | Needs location constraint us-west-1.
                 \ "us-west-1"
                 / Canada (Central) Region
               5 | Needs location constraint ca-central-1.
                 \ "ca-central-1"
                 / EU (Ireland) Region
               6 | Needs location constraint EU or eu-west-1.
                 \ "eu-west-1"
                 / EU (London) Region
               7 | Needs location constraint eu-west-2.
                 \ "eu-west-2"
                 / EU (Frankfurt) Region
               8 | Needs location constraint eu-central-1.
                 \ "eu-central-1"
                 / Asia Pacific (Singapore) Region
               9 | Needs location constraint ap-southeast-1.
                 \ "ap-southeast-1"
                 / Asia Pacific (Sydney) Region
              10 | Needs location constraint ap-southeast-2.
                 \ "ap-southeast-2"
                 / Asia Pacific (Tokyo) Region
              11 | Needs location constraint ap-northeast-1.
                 \ "ap-northeast-1"
                 / Asia Pacific (Seoul)
              12 | Needs location constraint ap-northeast-2.
                 \ "ap-northeast-2"
                 / Asia Pacific (Mumbai)
              13 | Needs location constraint ap-south-1.
                 \ "ap-south-1"
                 / Asia Pacific (Hong Kong) Region
              14 | Needs location constraint ap-east-1.
                 \ "ap-east-1"
                 / South America (Sao Paulo) Region
              15 | Needs location constraint sa-east-1.
                 \ "sa-east-1"
              region> 1
              Endpoint for S3 API.
              Leave blank if using AWS to use the default endpoint for the region.
              endpoint>
              Location constraint - must be set to match the Region. Used when creating buckets only.
              Choose a number from below, or type in your own value
               1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
                 \ ""
               2 / US East (Ohio) Region.
                 \ "us-east-2"
               3 / US West (Oregon) Region.
                 \ "us-west-2"
               4 / US West (Northern California) Region.
                 \ "us-west-1"
               5 / Canada (Central) Region.
                 \ "ca-central-1"
               6 / EU (Ireland) Region.
                 \ "eu-west-1"
               7 / EU (London) Region.
                 \ "eu-west-2"
               8 / EU Region.
                 \ "EU"
               9 / Asia Pacific (Singapore) Region.
                 \ "ap-southeast-1"
              10 / Asia Pacific (Sydney) Region.
                 \ "ap-southeast-2"
              11 / Asia Pacific (Tokyo) Region.
                 \ "ap-northeast-1"
              12 / Asia Pacific (Seoul)
                 \ "ap-northeast-2"
              13 / Asia Pacific (Mumbai)
                 \ "ap-south-1"
              14 / Asia Pacific (Hong Kong)
                 \ "ap-east-1"
              15 / South America (Sao Paulo) Region.
                 \ "sa-east-1"
              location_constraint> 1
              Canned ACL used when creating buckets and/or storing objects in S3.
              For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              Choose a number from below, or type in your own value
               1 / Owner gets FULL_CONTROL. No one else has access rights (default).
                 \ "private"
               2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
                 \ "public-read"
                 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
               3 | Granting this on a bucket is generally not recommended.
                 \ "public-read-write"
               4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
                 \ "authenticated-read"
                 / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
               5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
                 \ "bucket-owner-read"
                 / Both the object owner and the bucket owner get FULL_CONTROL over the object.
               6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
                 \ "bucket-owner-full-control"
              acl> 1
              The server-side encryption algorithm used when storing this object in S3.
              Choose a number from below, or type in your own value
               1 / None
                 \ ""
               2 / AES256
                 \ "AES256"
              server_side_encryption> 1
              The storage class to use when storing objects in S3.
              Choose a number from below, or type in your own value
               1 / Default
                 \ ""
               2 / Standard storage class
                 \ "STANDARD"
               3 / Reduced redundancy storage class
                 \ "REDUCED_REDUNDANCY"
               4 / Standard Infrequent Access storage class
                 \ "STANDARD_IA"
               5 / One Zone Infrequent Access storage class
                 \ "ONEZONE_IA"
               6 / Glacier storage class
                 \ "GLACIER"
               7 / Glacier Deep Archive storage class
                 \ "DEEP_ARCHIVE"
               8 / Intelligent-Tiering storage class
                 \ "INTELLIGENT_TIERING"
               9 / Glacier Instant Retrieval storage class
                 \ "GLACIER_IR"
              storage_class> 1
              Remote config
              --------------------
              [remote]
              type = s3
              provider = AWS
              env_auth = false
              access_key_id = XXX
              secret_access_key = YYY
              region = us-east-1
              endpoint =
              location_constraint =
              acl = private
              server_side_encryption =
              storage_class =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d>

   Modified time
       The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as  floating  point  since  the
       epoch, accurate to 1 ns.

       If the modification time needs to be updated rclone will attempt to perform a server side copy  to  update
       the modification time if the object can be copied in a single part.  If the object is larger than 5 GiB
       or is in Glacier or Glacier Deep Archive storage the object will be uploaded rather than copied.

       Note that reading this from the object takes an additional HEAD request as the metadata isn’t returned in
       object listings.

   Reducing costs
   Avoiding HEAD requests to read the modification time
       By default, rclone will use the modification time of objects stored in S3 for syncing.  This is stored in
       object metadata which unfortunately takes an extra HEAD request to read which can be expensive  (in  time
       and money).

       The  modification  time  is  used by default for all operations that require checking the time a file was
       last updated.  It allows rclone to treat the remote more like a true filesystem, but it is inefficient on
       S3 because it requires an extra API call to retrieve the metadata.

       The extra API calls can be avoided when syncing (using rclone sync or rclone copy)  in  a  few  different
       ways, each with its own tradeoffs.

       • --size-only

         • Only checks the size of files.

         • Uses no extra transactions.

         • If the file doesn’t change size then rclone won’t detect it has changed.

         • rclone sync --size-only /path/to/source s3:bucket

       • --checksum

         • Checks the size and MD5 checksum of files.

         • Uses no extra transactions.

         • The most accurate detection of changes possible.

         • Will  cause  the source to read an MD5 checksum which, if it is a local disk, will cause lots of disk
           activity.

         • If the source and destination are both S3 this is the recommended flag to use for maximum efficiency.

         • rclone sync --checksum /path/to/source s3:bucket

       • --update --use-server-modtime

         • Uses no extra transactions.

         • Modification time becomes the time the object was uploaded.

         • For many operations this is sufficient to determine if it needs uploading.

          • Using --update along with --use-server-modtime avoids the extra API call  and  uploads  files  whose
            local modification time is newer than the time it was last uploaded.

         • Files created with timestamps in the past will be missed by the sync.

         • rclone sync --update --use-server-modtime /path/to/source s3:bucket

       These flags can and should be used in combination with --fast-list - see below.

       If using rclone mount or any command that uses the VFS (e.g. rclone serve) then  you  might  want  to
       consider using the VFS flag --no-modtime, which will stop rclone reading the modification time  for  every
       object.  You could also use --use-server-modtime if you are happy with the modification  times  of  the
       objects being the time of upload.
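       For example, a mount that skips the per-object modification time lookups  might  look  like  this  (the  mount
       point is illustrative):

               rclone mount --no-modtime s3:bucket /path/to/mountpoint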

   Avoiding GET requests to read directory listings
       Rclone’s default directory traversal is to process each directory individually.  This takes one API  call
       per directory.  Using the --fast-list flag will read all info about the objects into memory first using a
       smaller number of API calls (one per 1000 objects).  See the rclone docs for more details.

              rclone sync --fast-list --checksum /path/to/source s3:bucket

       --fast-list  trades  off  API transactions for memory use.  As a rough guide rclone uses 1k of memory per
       object stored, so using --fast-list on a sync of a million objects will use roughly 1 GiB of RAM.

       If you are only copying a small number of files into a big repository then using --no-traverse is a  good
       idea.   This  finds  objects  directly instead of through directory listings.  You can do a “top-up” sync
       very cheaply by using --max-age and --no-traverse to copy only recent files, eg

              rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket

       You’d then do a full rclone sync less often.

       Note that --fast-list isn’t required in the top-up sync.

   Avoiding HEAD requests after PUT
       By default, rclone will HEAD every object it uploads.  It does this to  check  the  object  got  uploaded
       correctly.

       You can disable this with the --s3-no-head option - see there for more details.

       Setting this flag increases the chance for undetected upload failures.
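       For example, a copy that skips the post-upload check might look like this (paths are illustrative):

               rclone copy --s3-no-head /path/to/source s3:bucket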

   Hashes
       For  small objects which weren’t uploaded as multipart uploads (objects sized below --s3-upload-cutoff if
       uploaded with rclone) rclone uses the ETag: header as an MD5 checksum.

       However for objects which were uploaded as multipart uploads or with server side encryption  (SSE-AWS  or
       SSE-C)  the  ETag  header  is  no  longer  the MD5 sum of the data, so rclone adds an additional piece of
       metadata X-Amz-Meta-Md5chksum which is a base64 encoded MD5 hash (in the same format as is  required  for
       Content-MD5).

       For  large objects, calculating this hash can take some time so the addition of this hash can be disabled
       with --s3-disable-checksum.  This will mean that these objects do not have an MD5 checksum.

       Note that reading this from the object takes an additional HEAD request as the metadata isn’t returned in
       object listings.

   Versions
       When bucket versioning is enabled (this can be done  with  rclone  with  the  rclone  backend  versioning
       command), then when rclone uploads a new version of a file it creates a new version  of  it  (see
       https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).  Likewise when you delete a file,
       the old version will be marked hidden and still be available.

       Old versions of files, where available, are visible using the --s3-versions flag.

       It is also possible to view a bucket as it was at a certain point  in  time,  using  the  --s3-version-at
       flag.   This  will show the file versions as they were at that time, showing files that have been deleted
       afterwards, and hiding files that were created since.
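       For example, a listing of a bucket as it was at a given date might look like this (the date is  illustrative;
       the flag accepts rclone’s usual time formats):

               rclone -q --s3-version-at 2016-07-04 ls s3:cleanup-test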

       If you wish to remove  all  the  old  versions  then  you  can  use  the  rclone  backend  cleanup-hidden
       remote:bucket  command  which  will delete all the old hidden versions of files, leaving the current ones
       intact.  You can also supply a path and only old versions under that path will be deleted,  e.g.   rclone
       backend cleanup-hidden remote:bucket/path/to/stuff.

       When you purge a bucket, the current and the old versions will be deleted,  then  the  bucket  will  be
       deleted.

       However delete will cause the current versions of the files to become hidden old versions.

       Here is a session showing the listing and retrieval of an old version followed by a cleanup  of  the  old
       versions.

       Show the current version and all the versions with the --s3-versions flag.

              $ rclone -q ls s3:cleanup-test
                      9 one.txt

              $ rclone -q --s3-versions ls s3:cleanup-test
                      9 one.txt
                      8 one-v2016-07-04-141032-000.txt
                     16 one-v2016-07-04-141003-000.txt
                     15 one-v2016-07-02-155621-000.txt

       Retrieve an old version

              $ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

              $ ls -l /tmp/one-v2016-07-04-141003-000.txt
              -rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt

       Clean up all the old versions and show that they’ve gone.

              $ rclone -q backend cleanup-hidden s3:cleanup-test

              $ rclone -q ls s3:cleanup-test
                      9 one.txt

              $ rclone -q --s3-versions ls s3:cleanup-test
                      9 one.txt

   Cleanup
       If  you  run  rclone  cleanup  s3:bucket  then it will remove all pending multipart uploads older than 24
       hours.  You can use the -i flag to see exactly what it will do.  If you want more control over the expiry
       date then run rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads older than  one  hour.
       You can use rclone backend list-multipart-uploads s3:bucket to see the pending multipart uploads.
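       A minimal session sketch of those commands (the bucket name is illustrative):

               rclone backend list-multipart-uploads s3:bucket
               rclone backend cleanup s3:bucket -o max-age=1h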

   Restricted filename characters
       S3 allows any valid UTF-8 string as a key.

       Invalid UTF-8 bytes will be replaced, as they can’t be used in XML.

       The following characters are replaced since these are problematic when dealing with the REST API:

       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
       /           0x2F        ／

       The encoding will also encode these file names as they don’t seem to work with the SDK properly:

       File name   Replacement
       ────────────────────────
       .           ．
       ..          ．．

   Multipart uploads
       rclone supports multipart uploads with S3 which means that it can upload files bigger than 5 GiB.

       Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

       rclone   switches   from   single   part   uploads  to  multipart  uploads  at  the  point  specified  by
       --s3-upload-cutoff.  This can be a maximum of 5 GiB and a minimum of 0 (i.e. always upload  files  as
       multipart).

       The  chunk  sizes  used in the multipart upload are specified by --s3-chunk-size and the number of chunks
       uploaded concurrently is specified by --s3-upload-concurrency.

       Multipart uploads will use --transfers * --s3-upload-concurrency * --s3-chunk-size extra memory.   Single
       part uploads do not use extra memory.

       Single  part transfers can be faster than multipart transfers or slower depending on your latency from S3
       - the more latency, the more likely single part transfers will be faster.

       Increasing --s3-upload-concurrency will increase throughput (8 would be a sensible value) and  increasing
       --s3-chunk-size  also  increases throughput (16M would be sensible).  Increasing either of these will use
       more memory.  The default values are high enough to gain most of the possible performance  without  using
       too much memory.
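       As a worked example of the memory formula above: with the defaults of --transfers 4, --s3-upload-concurrency 4
       and --s3-chunk-size 5Mi, multipart uploads use roughly 4 * 4 * 5 MiB = 80 MiB of extra memory.  A  higher-
       throughput invocation along the lines suggested above might look like this (paths are illustrative):

               rclone copy --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/source s3:bucket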

   Buckets and Regions
       With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of
       a bucket from the region it was created in.  If you attempt to access a bucket from the wrong region, you
       will get an error, incorrect region, the bucket is not in 'XXX' region.

   Authentication
       There  are  a  number  of ways to supply rclone with a set of AWS credentials, with and without using the
       environment.

       The different authentication methods are tried in this order:

       • Directly in the rclone configuration file (env_auth = false in the config file):

         • access_key_id and secret_access_key are required.

         • session_token can be optionally set when using AWS STS.

       • Runtime configuration (env_auth = true in the config file):

         • Export the following environment variables before running rclone:

           • Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY

           • Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY

           • Session Token: AWS_SESSION_TOKEN (optional)

         • Or, use a https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html named profile:

           • Profile files are standard files used by AWS CLI tools

            • By default it will use the profile file in your home directory (e.g. ~/.aws/credentials on unix  based
              systems) and the “default” profile; to change this, set these environment variables:

             • AWS_SHARED_CREDENTIALS_FILE to control which file.

             • AWS_PROFILE to control which profile to use.

         • Or, run rclone in an ECS task with an IAM role (AWS only).

         • Or, run rclone on an EC2 instance with an IAM role (AWS only).

         • Or, run rclone in an EKS pod with an IAM role that is associated with a service account (AWS only).

       If none of these options actually end up providing rclone with AWS credentials then S3 interaction will be
       non-authenticated (see below).
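       For example, with env_auth = true in the config, credentials can be  supplied  from  the  environment  like
       this (the key values are placeholders):

               export AWS_ACCESS_KEY_ID=XXX
               export AWS_SECRET_ACCESS_KEY=YYY
               rclone lsd remote: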

   S3 Permissions
       When  using  the sync subcommand of rclone the following minimum permissions are required to be available
       on the bucket being written to:

       • ListBucket

       • DeleteObject

       • GetObject

       • PutObject

       • PutObjectACL

       When using the lsd subcommand, the ListAllMyBuckets permission is required.

       Example policy:

              {
                  "Version": "2012-10-17",
                  "Statement": [
                      {
                          "Effect": "Allow",
                          "Principal": {
                              "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
                          },
                          "Action": [
                              "s3:ListBucket",
                              "s3:DeleteObject",
                              "s3:GetObject",
                              "s3:PutObject",
                              "s3:PutObjectAcl"
                          ],
                          "Resource": [
                            "arn:aws:s3:::BUCKET_NAME/*",
                            "arn:aws:s3:::BUCKET_NAME"
                          ]
                      },
                      {
                          "Effect": "Allow",
                          "Action": "s3:ListAllMyBuckets",
                          "Resource": "arn:aws:s3:::*"
                      }
                  ]
              }

       Notes on above:

       1. This is a policy that can be used when creating a bucket.  It assumes that USER_NAME has been created.

       2. The Resource entry must include both resource ARNs, as one implies the bucket and  the  other  implies
          the bucket’s objects.

       For reference, here’s an Ansible script that will generate one or more buckets that will work with rclone
       sync.

   Key Management System (KMS)
       If  you  are  using  server-side  encryption  with  KMS then you must make sure rclone is configured with
       server_side_encryption = aws:kms otherwise you will find you can’t transfer small objects  -  these  will
       create checksum errors.
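       A minimal sketch of the relevant part of the config file (the remote name and  credentials  entries  are
       illustrative):

               [remote]
               type = s3
               provider = AWS
               env_auth = true
               server_side_encryption = aws:kms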

   Glacier and Glacier Deep Archive
       You  can  upload  objects  using  the  glacier  storage  class  or  transition  them  to  glacier  using  a
       lifecycle policy (http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).  The bucket
       can still be synced or copied into normally, but if rclone tries to access data from the glacier  storage
       class you will see an error like below.

              2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

       In this case you need to restore the object(s) in question before using  rclone  -  see
       http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html.

       Note that rclone only speaks the S3 API; it does not speak the  Glacier  Vault  API,  so  rclone  cannot
       directly access Glacier Vaults.

   Object-lock enabled S3 bucket
       According        to        AWS’s       https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-
       overview.html#object-lock-permission documentation on S3 Object Lock:

              If you configure a default retention period on a bucket, requests to  upload  objects  in  such  a
              bucket must include the Content-MD5 header.

       As mentioned in the Hashes section, small files that are not uploaded as multipart use a  different  tag,
       causing the upload to fail.  A simple solution is to set --s3-upload-cutoff 0 and force all the files to  be
       uploaded as multipart, as shown in the example below.
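       For example, forcing multipart uploads for all files might look like this (paths are illustrative):

               rclone copy --s3-upload-cutoff 0 /path/to/source s3:bucket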

   Standard options
       Here are the Standard options specific to s3  (Amazon  S3  Compliant  Storage  Providers  including  AWS,
       Alibaba,  Ceph,  China  Mobile,  Cloudflare,  ArvanCloud,  Digital Ocean, Dreamhost, Huawei OBS, IBM COS,
       IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease,  RackCorp,  Scaleway,  SeaweedFS,  StackPath,  Storj,
       Tencent COS, Qiniu and Wasabi).

   –s3-provider
       Choose your S3 provider.

       Properties:

       • Config: provider

       • Env Var: RCLONE_S3_PROVIDER

       • Type: string

       • Required: false

       • Examples:

         • “AWS”

           • Amazon Web Services (AWS) S3

         • “Alibaba”

           • Alibaba Cloud Object Storage System (OSS) formerly Aliyun

         • “Ceph”

           • Ceph Object Storage

         • “ChinaMobile”

           • China Mobile Ecloud Elastic Object Storage (EOS)

         • “Cloudflare”

           • Cloudflare R2 Storage

         • “ArvanCloud”

           • Arvan Cloud Object Storage (AOS)

         • “DigitalOcean”

           • Digital Ocean Spaces

         • “Dreamhost”

           • Dreamhost DreamObjects

         • “HuaweiOBS”

           • Huawei Object Storage Service

         • “IBMCOS”

           • IBM COS S3

         • “IDrive”

           • IDrive e2

         • “IONOS”

           • IONOS Cloud

         • “LyveCloud”

           • Seagate Lyve Cloud

         • “Minio”

           • Minio Object Storage

         • “Netease”

           • Netease Object Storage (NOS)

         • “RackCorp”

           • RackCorp Object Storage

         • “Scaleway”

           • Scaleway Object Storage

         • “SeaweedFS”

           • SeaweedFS S3

         • “StackPath”

           • StackPath Object Storage

         • “Storj”

           • Storj (S3 Compatible Gateway)

         • “TencentCOS”

           • Tencent Cloud Object Storage (COS)

         • “Wasabi”

           • Wasabi Object Storage

         • “Qiniu”

           • Qiniu Object Storage (Kodo)

         • “Other”

           • Any other S3 compatible provider

   –s3-env-auth
       Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).

       Only applies if access_key_id and secret_access_key are blank.

       Properties:

       • Config: env_auth

       • Env Var: RCLONE_S3_ENV_AUTH

       • Type: bool

       • Default: false

       • Examples:

         • “false”

           • Enter AWS credentials in the next step.

         • “true”

           • Get AWS credentials from the environment (env vars or IAM).

   –s3-access-key-id
       AWS Access Key ID.

       Leave blank for anonymous access or runtime credentials.

       Properties:

       • Config: access_key_id

       • Env Var: RCLONE_S3_ACCESS_KEY_ID

       • Type: string

       • Required: false

   –s3-secret-access-key
       AWS Secret Access Key (password).

       Leave blank for anonymous access or runtime credentials.

       Properties:

       • Config: secret_access_key

       • Env Var: RCLONE_S3_SECRET_ACCESS_KEY

       • Type: string

       • Required: false

   –s3-region
       Region to connect to.

       Properties:

       • Config: region

       • Env Var: RCLONE_S3_REGION

       • Provider: AWS

       • Type: string

       • Required: false

       • Examples:

         • “us-east-1”

           • The default endpoint - a good choice if you are unsure.

           • US Region, Northern Virginia, or Pacific Northwest.

           • Leave location constraint empty.

         • “us-east-2”

           • US East (Ohio) Region.

           • Needs location constraint us-east-2.

         • “us-west-1”

           • US West (Northern California) Region.

           • Needs location constraint us-west-1.

         • “us-west-2”

           • US West (Oregon) Region.

           • Needs location constraint us-west-2.

         • “ca-central-1”

           • Canada (Central) Region.

           • Needs location constraint ca-central-1.

         • “eu-west-1”

           • EU (Ireland) Region.

           • Needs location constraint EU or eu-west-1.

         • “eu-west-2”

           • EU (London) Region.

           • Needs location constraint eu-west-2.

         • “eu-west-3”

           • EU (Paris) Region.

           • Needs location constraint eu-west-3.

         • “eu-north-1”

           • EU (Stockholm) Region.

           • Needs location constraint eu-north-1.

         • “eu-south-1”

           • EU (Milan) Region.

           • Needs location constraint eu-south-1.

         • “eu-central-1”

           • EU (Frankfurt) Region.

           • Needs location constraint eu-central-1.

         • “ap-southeast-1”

           • Asia Pacific (Singapore) Region.

           • Needs location constraint ap-southeast-1.

         • “ap-southeast-2”

           • Asia Pacific (Sydney) Region.

           • Needs location constraint ap-southeast-2.

         • “ap-northeast-1”

           • Asia Pacific (Tokyo) Region.

           • Needs location constraint ap-northeast-1.

         • “ap-northeast-2”

           • Asia Pacific (Seoul).

           • Needs location constraint ap-northeast-2.

         • “ap-northeast-3”

           • Asia Pacific (Osaka-Local).

           • Needs location constraint ap-northeast-3.

         • “ap-south-1”

           • Asia Pacific (Mumbai).

           • Needs location constraint ap-south-1.

         • “ap-east-1”

           • Asia Pacific (Hong Kong) Region.

           • Needs location constraint ap-east-1.

         • “sa-east-1”

           • South America (Sao Paulo) Region.

           • Needs location constraint sa-east-1.

         • “me-south-1”

           • Middle East (Bahrain) Region.

           • Needs location constraint me-south-1.

         • “af-south-1”

           • Africa (Cape Town) Region.

           • Needs location constraint af-south-1.

         • “cn-north-1”

           • China (Beijing) Region.

           • Needs location constraint cn-north-1.

         • “cn-northwest-1”

           • China (Ningxia) Region.

           • Needs location constraint cn-northwest-1.

         • “us-gov-east-1”

           • AWS GovCloud (US-East) Region.

           • Needs location constraint us-gov-east-1.

         • “us-gov-west-1”

           • AWS GovCloud (US) Region.

           • Needs location constraint us-gov-west-1.

   –s3-region
       Region - the location where your bucket will be created and your data stored.

       Properties:

       • Config: region

       • Env Var: RCLONE_S3_REGION

       • Provider: RackCorp

       • Type: string

       • Required: false

       • Examples:

         • “global”

           • Global CDN (All locations) Region

         • “au”

           • Australia (All states)

         • “au-nsw”

           • NSW (Australia) Region

         • “au-qld”

           • QLD (Australia) Region

         • “au-vic”

           • VIC (Australia) Region

         • “au-wa”

           • Perth (Australia) Region

         • “ph”

           • Manila (Philippines) Region

         • “th”

           • Bangkok (Thailand) Region

         • “hk”

           • HK (Hong Kong) Region

         • “mn”

           • Ulaanbaatar (Mongolia) Region

         • “kg”

           • Bishkek (Kyrgyzstan) Region

         • “id”

           • Jakarta (Indonesia) Region

         • “jp”

           • Tokyo (Japan) Region

         • “sg”

           • SG (Singapore) Region

         • “de”

           • Frankfurt (Germany) Region

         • “us”

           • USA (AnyCast) Region

         • “us-east-1”

           • New York (USA) Region

         • “us-west-1”

            • Fremont (USA) Region

         • “nz”

           • Auckland (New Zealand) Region

   –s3-region
       Region to connect to.

       Properties:

       • Config: region

       • Env Var: RCLONE_S3_REGION

       • Provider: Scaleway

       • Type: string

       • Required: false

       • Examples:

         • “nl-ams”

           • Amsterdam, The Netherlands

         • “fr-par”

           • Paris, France

         • “pl-waw”

           • Warsaw, Poland

   –s3-region
       Region to connect to - the location where your bucket will be created and your data stored.  This  needs  to
       be the same as your endpoint.

       Properties:

       • Config: region

       • Env Var: RCLONE_S3_REGION

       • Provider: HuaweiOBS

       • Type: string

       • Required: false

       • Examples:

         • “af-south-1”

           • AF-Johannesburg

         • “ap-southeast-2”

           • AP-Bangkok

         • “ap-southeast-3”

           • AP-Singapore

         • “cn-east-3”

           • CN East-Shanghai1

         • “cn-east-2”

           • CN East-Shanghai2

         • “cn-north-1”

           • CN North-Beijing1

         • “cn-north-4”

           • CN North-Beijing4

         • “cn-south-1”

           • CN South-Guangzhou

         • “ap-southeast-1”

           • CN-Hong Kong

         • “sa-argentina-1”

           • LA-Buenos Aires1

         • “sa-peru-1”

           • LA-Lima1

         • “na-mexico-1”

           • LA-Mexico City1

         • “sa-chile-1”

           • LA-Santiago2

         • “sa-brazil-1”

           • LA-Sao Paulo1

         • “ru-northwest-2”

           • RU-Moscow2

   –s3-region
       Region to connect to.

       Properties:

       • Config: region

       • Env Var: RCLONE_S3_REGION

       • Provider: Cloudflare

       • Type: string

       • Required: false

       • Examples:

         • “auto”

           • R2 buckets are automatically distributed across Cloudflare’s data centers for low latency.

   –s3-region
       Region to connect to.

       Properties:

       • Config: region

       • Env Var: RCLONE_S3_REGION

       • Provider: Qiniu

       • Type: string

       • Required: false

       • Examples:

         • “cn-east-1”

           • The default endpoint - a good choice if you are unsure.

           • East China Region 1.

           • Needs location constraint cn-east-1.

         • “cn-east-2”

           • East China Region 2.

           • Needs location constraint cn-east-2.

         • “cn-north-1”

           • North China Region 1.

           • Needs location constraint cn-north-1.

         • “cn-south-1”

           • South China Region 1.

           • Needs location constraint cn-south-1.

         • “us-north-1”

           • North America Region.

           • Needs location constraint us-north-1.

         • “ap-southeast-1”

           • Southeast Asia Region 1.

           • Needs location constraint ap-southeast-1.

         • “ap-northeast-1”

           • Northeast Asia Region 1.

           • Needs location constraint ap-northeast-1.

   –s3-region
       Region where your bucket will be created and your data stored.

       Properties:

       • Config: region

       • Env Var: RCLONE_S3_REGION

       • Provider: IONOS

       • Type: string

       • Required: false

       • Examples:

         • “de”

           • Frankfurt, Germany

         • “eu-central-2”

           • Berlin, Germany

         • “eu-south-2”

           • Logrono, Spain

   –s3-region
       Region to connect to.

       Leave blank if you are using an S3 clone and you don’t have a region.

       Properties:

       • Config: region

       • Env Var: RCLONE_S3_REGION

       • Provider:
         !AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Use this if unsure.

           • Will use v4 signatures and an empty region.

         • “other-v2-signature”

           • Use this only if v4 signatures don’t work.

           • E.g.  pre Jewel/v10 CEPH.

   –s3-endpoint
       Endpoint for S3 API.

       Leave blank if using AWS to use the default endpoint for the region.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: AWS

       • Type: string

       • Required: false

   –s3-endpoint
       Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: ChinaMobile

       • Type: string

       • Required: false

       • Examples:

         • “eos-wuxi-1.cmecloud.cn”

           • The default endpoint - a good choice if you are unsure.

           • East China (Suzhou)

         • “eos-jinan-1.cmecloud.cn”

           • East China (Jinan)

         • “eos-ningbo-1.cmecloud.cn”

           • East China (Hangzhou)

         • “eos-shanghai-1.cmecloud.cn”

           • East China (Shanghai-1)

         • “eos-zhengzhou-1.cmecloud.cn”

           • Central China (Zhengzhou)

         • “eos-hunan-1.cmecloud.cn”

           • Central China (Changsha-1)

         • “eos-zhuzhou-1.cmecloud.cn”

           • Central China (Changsha-2)

         • “eos-guangzhou-1.cmecloud.cn”

           • South China (Guangzhou-2)

         • “eos-dongguan-1.cmecloud.cn”

           • South China (Guangzhou-3)

         • “eos-beijing-1.cmecloud.cn”

           • North China (Beijing-1)

         • “eos-beijing-2.cmecloud.cn”

           • North China (Beijing-2)

         • “eos-beijing-4.cmecloud.cn”

           • North China (Beijing-3)

         • “eos-huhehaote-1.cmecloud.cn”

           • North China (Huhehaote)

         • “eos-chengdu-1.cmecloud.cn”

           • Southwest China (Chengdu)

         • “eos-chongqing-1.cmecloud.cn”

           • Southwest China (Chongqing)

         • “eos-guiyang-1.cmecloud.cn”

           • Southwest China (Guiyang)

         • “eos-xian-1.cmecloud.cn”

            • Northwest China (Xian)

         • “eos-yunnan.cmecloud.cn”

           • Yunnan China (Kunming)

         • “eos-yunnan-2.cmecloud.cn”

           • Yunnan China (Kunming-2)

         • “eos-tianjin-1.cmecloud.cn”

           • Tianjin China (Tianjin)

         • “eos-jilin-1.cmecloud.cn”

           • Jilin China (Changchun)

         • “eos-hubei-1.cmecloud.cn”

            • Hubei China (Xiangyang)

         • “eos-jiangxi-1.cmecloud.cn”

           • Jiangxi China (Nanchang)

         • “eos-gansu-1.cmecloud.cn”

           • Gansu China (Lanzhou)

         • “eos-shanxi-1.cmecloud.cn”

           • Shanxi China (Taiyuan)

         • “eos-liaoning-1.cmecloud.cn”

           • Liaoning China (Shenyang)

         • “eos-hebei-1.cmecloud.cn”

           • Hebei China (Shijiazhuang)

         • “eos-fujian-1.cmecloud.cn”

           • Fujian China (Xiamen)

         • “eos-guangxi-1.cmecloud.cn”

           • Guangxi China (Nanning)

         • “eos-anhui-1.cmecloud.cn”

           • Anhui China (Huainan)

   –s3-endpoint
       Endpoint for Arvan Cloud Object Storage (AOS) API.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: ArvanCloud

       • Type: string

       • Required: false

       • Examples:

         • “s3.ir-thr-at1.arvanstorage.com”

           • The default endpoint - a good choice if you are unsure.

           • Tehran Iran (Asiatech)

         • “s3.ir-tbz-sh1.arvanstorage.com”

           • Tabriz Iran (Shahriar)

   –s3-endpoint
       Endpoint for IBM COS S3 API.

        Specify this if using an IBM COS On Premise installation.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: IBMCOS

       • Type: string

       • Required: false

       • Examples:

         • “s3.us.cloud-object-storage.appdomain.cloud”

           • US Cross Region Endpoint

         • “s3.dal.us.cloud-object-storage.appdomain.cloud”

           • US Cross Region Dallas Endpoint

         • “s3.wdc.us.cloud-object-storage.appdomain.cloud”

           • US Cross Region Washington DC Endpoint

         • “s3.sjc.us.cloud-object-storage.appdomain.cloud”

           • US Cross Region San Jose Endpoint

         • “s3.private.us.cloud-object-storage.appdomain.cloud”

           • US Cross Region Private Endpoint

         • “s3.private.dal.us.cloud-object-storage.appdomain.cloud”

           • US Cross Region Dallas Private Endpoint

         • “s3.private.wdc.us.cloud-object-storage.appdomain.cloud”

           • US Cross Region Washington DC Private Endpoint

         • “s3.private.sjc.us.cloud-object-storage.appdomain.cloud”

           • US Cross Region San Jose Private Endpoint

         • “s3.us-east.cloud-object-storage.appdomain.cloud”

           • US Region East Endpoint

         • “s3.private.us-east.cloud-object-storage.appdomain.cloud”

           • US Region East Private Endpoint

         • “s3.us-south.cloud-object-storage.appdomain.cloud”

           • US Region South Endpoint

         • “s3.private.us-south.cloud-object-storage.appdomain.cloud”

           • US Region South Private Endpoint

         • “s3.eu.cloud-object-storage.appdomain.cloud”

           • EU Cross Region Endpoint

         • “s3.fra.eu.cloud-object-storage.appdomain.cloud”

           • EU Cross Region Frankfurt Endpoint

         • “s3.mil.eu.cloud-object-storage.appdomain.cloud”

           • EU Cross Region Milan Endpoint

         • “s3.ams.eu.cloud-object-storage.appdomain.cloud”

           • EU Cross Region Amsterdam Endpoint

         • “s3.private.eu.cloud-object-storage.appdomain.cloud”

           • EU Cross Region Private Endpoint

         • “s3.private.fra.eu.cloud-object-storage.appdomain.cloud”

           • EU Cross Region Frankfurt Private Endpoint

         • “s3.private.mil.eu.cloud-object-storage.appdomain.cloud”

           • EU Cross Region Milan Private Endpoint

         • “s3.private.ams.eu.cloud-object-storage.appdomain.cloud”

           • EU Cross Region Amsterdam Private Endpoint

         • “s3.eu-gb.cloud-object-storage.appdomain.cloud”

           • Great Britain Endpoint

         • “s3.private.eu-gb.cloud-object-storage.appdomain.cloud”

           • Great Britain Private Endpoint

         • “s3.eu-de.cloud-object-storage.appdomain.cloud”

           • EU Region DE Endpoint

         • “s3.private.eu-de.cloud-object-storage.appdomain.cloud”

           • EU Region DE Private Endpoint

         • “s3.ap.cloud-object-storage.appdomain.cloud”

           • APAC Cross Regional Endpoint

         • “s3.tok.ap.cloud-object-storage.appdomain.cloud”

           • APAC Cross Regional Tokyo Endpoint

         • “s3.hkg.ap.cloud-object-storage.appdomain.cloud”

           • APAC Cross Regional HongKong Endpoint

         • “s3.seo.ap.cloud-object-storage.appdomain.cloud”

           • APAC Cross Regional Seoul Endpoint

         • “s3.private.ap.cloud-object-storage.appdomain.cloud”

           • APAC Cross Regional Private Endpoint

         • “s3.private.tok.ap.cloud-object-storage.appdomain.cloud”

           • APAC Cross Regional Tokyo Private Endpoint

         • “s3.private.hkg.ap.cloud-object-storage.appdomain.cloud”

           • APAC Cross Regional HongKong Private Endpoint

         • “s3.private.seo.ap.cloud-object-storage.appdomain.cloud”

           • APAC Cross Regional Seoul Private Endpoint

         • “s3.jp-tok.cloud-object-storage.appdomain.cloud”

           • APAC Region Japan Endpoint

         • “s3.private.jp-tok.cloud-object-storage.appdomain.cloud”

           • APAC Region Japan Private Endpoint

         • “s3.au-syd.cloud-object-storage.appdomain.cloud”

           • APAC Region Australia Endpoint

         • “s3.private.au-syd.cloud-object-storage.appdomain.cloud”

           • APAC Region Australia Private Endpoint

         • “s3.ams03.cloud-object-storage.appdomain.cloud”

           • Amsterdam Single Site Endpoint

         • “s3.private.ams03.cloud-object-storage.appdomain.cloud”

           • Amsterdam Single Site Private Endpoint

         • “s3.che01.cloud-object-storage.appdomain.cloud”

           • Chennai Single Site Endpoint

         • “s3.private.che01.cloud-object-storage.appdomain.cloud”

           • Chennai Single Site Private Endpoint

         • “s3.mel01.cloud-object-storage.appdomain.cloud”

           • Melbourne Single Site Endpoint

         • “s3.private.mel01.cloud-object-storage.appdomain.cloud”

           • Melbourne Single Site Private Endpoint

         • “s3.osl01.cloud-object-storage.appdomain.cloud”

           • Oslo Single Site Endpoint

         • “s3.private.osl01.cloud-object-storage.appdomain.cloud”

           • Oslo Single Site Private Endpoint

         • “s3.tor01.cloud-object-storage.appdomain.cloud”

           • Toronto Single Site Endpoint

         • “s3.private.tor01.cloud-object-storage.appdomain.cloud”

           • Toronto Single Site Private Endpoint

         • “s3.seo01.cloud-object-storage.appdomain.cloud”

           • Seoul Single Site Endpoint

         • “s3.private.seo01.cloud-object-storage.appdomain.cloud”

           • Seoul Single Site Private Endpoint

         • “s3.mon01.cloud-object-storage.appdomain.cloud”

           • Montreal Single Site Endpoint

         • “s3.private.mon01.cloud-object-storage.appdomain.cloud”

           • Montreal Single Site Private Endpoint

         • “s3.mex01.cloud-object-storage.appdomain.cloud”

           • Mexico Single Site Endpoint

         • “s3.private.mex01.cloud-object-storage.appdomain.cloud”

           • Mexico Single Site Private Endpoint

         • “s3.sjc04.cloud-object-storage.appdomain.cloud”

           • San Jose Single Site Endpoint

         • “s3.private.sjc04.cloud-object-storage.appdomain.cloud”

           • San Jose Single Site Private Endpoint

         • “s3.mil01.cloud-object-storage.appdomain.cloud”

           • Milan Single Site Endpoint

         • “s3.private.mil01.cloud-object-storage.appdomain.cloud”

           • Milan Single Site Private Endpoint

         • “s3.hkg02.cloud-object-storage.appdomain.cloud”

           • Hong Kong Single Site Endpoint

         • “s3.private.hkg02.cloud-object-storage.appdomain.cloud”

           • Hong Kong Single Site Private Endpoint

         • “s3.par01.cloud-object-storage.appdomain.cloud”

           • Paris Single Site Endpoint

         • “s3.private.par01.cloud-object-storage.appdomain.cloud”

           • Paris Single Site Private Endpoint

         • “s3.sng01.cloud-object-storage.appdomain.cloud”

           • Singapore Single Site Endpoint

         • “s3.private.sng01.cloud-object-storage.appdomain.cloud”

           • Singapore Single Site Private Endpoint

   –s3-endpoint
       Endpoint for IONOS S3 Object Storage.

       Specify the endpoint from the same region.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: IONOS

       • Type: string

       • Required: false

       • Examples:

         • “s3-eu-central-1.ionoscloud.com”

           • Frankfurt, Germany

         • “s3-eu-central-2.ionoscloud.com”

           • Berlin, Germany

         • “s3-eu-south-2.ionoscloud.com”

           • Logrono, Spain
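
        As with any of these options, the endpoint can be set in the config file, with the flag, or via the
        environment variable shown above.  For example (a sketch; “ionos” is a placeholder remote name):

               export RCLONE_S3_ENDPOINT=s3-eu-central-2.ionoscloud.com
               rclone lsd ionos: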

   –s3-endpoint
       Endpoint for OSS API.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: Alibaba

       • Type: string

       • Required: false

       • Examples:

         • “oss-accelerate.aliyuncs.com”

           • Global Accelerate

         • “oss-accelerate-overseas.aliyuncs.com”

           • Global Accelerate (outside mainland China)

         • “oss-cn-hangzhou.aliyuncs.com”

           • East China 1 (Hangzhou)

         • “oss-cn-shanghai.aliyuncs.com”

           • East China 2 (Shanghai)

         • “oss-cn-qingdao.aliyuncs.com”

           • North China 1 (Qingdao)

         • “oss-cn-beijing.aliyuncs.com”

           • North China 2 (Beijing)

         • “oss-cn-zhangjiakou.aliyuncs.com”

           • North China 3 (Zhangjiakou)

         • “oss-cn-huhehaote.aliyuncs.com”

           • North China 5 (Hohhot)

         • “oss-cn-wulanchabu.aliyuncs.com”

           • North China 6 (Ulanqab)

         • “oss-cn-shenzhen.aliyuncs.com”

           • South China 1 (Shenzhen)

         • “oss-cn-heyuan.aliyuncs.com”

           • South China 2 (Heyuan)

         • “oss-cn-guangzhou.aliyuncs.com”

           • South China 3 (Guangzhou)

         • “oss-cn-chengdu.aliyuncs.com”

           • West China 1 (Chengdu)

         • “oss-cn-hongkong.aliyuncs.com”

           • Hong Kong (Hong Kong)

         • “oss-us-west-1.aliyuncs.com”

           • US West 1 (Silicon Valley)

         • “oss-us-east-1.aliyuncs.com”

           • US East 1 (Virginia)

         • “oss-ap-southeast-1.aliyuncs.com”

           • Southeast Asia Southeast 1 (Singapore)

         • “oss-ap-southeast-2.aliyuncs.com”

           • Asia Pacific Southeast 2 (Sydney)

         • “oss-ap-southeast-3.aliyuncs.com”

           • Southeast Asia Southeast 3 (Kuala Lumpur)

         • “oss-ap-southeast-5.aliyuncs.com”

           • Asia Pacific Southeast 5 (Jakarta)

         • “oss-ap-northeast-1.aliyuncs.com”

           • Asia Pacific Northeast 1 (Japan)

         • “oss-ap-south-1.aliyuncs.com”

           • Asia Pacific South 1 (Mumbai)

         • “oss-eu-central-1.aliyuncs.com”

           • Central Europe 1 (Frankfurt)

         • “oss-eu-west-1.aliyuncs.com”

           • West Europe (London)

         • “oss-me-east-1.aliyuncs.com”

           • Middle East 1 (Dubai)

   –s3-endpoint
       Endpoint for OBS API.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: HuaweiOBS

       • Type: string

       • Required: false

       • Examples:

         • “obs.af-south-1.myhuaweicloud.com”

           • AF-Johannesburg

         • “obs.ap-southeast-2.myhuaweicloud.com”

           • AP-Bangkok

         • “obs.ap-southeast-3.myhuaweicloud.com”

           • AP-Singapore

         • “obs.cn-east-3.myhuaweicloud.com”

           • CN East-Shanghai1

         • “obs.cn-east-2.myhuaweicloud.com”

           • CN East-Shanghai2

         • “obs.cn-north-1.myhuaweicloud.com”

           • CN North-Beijing1

         • “obs.cn-north-4.myhuaweicloud.com”

           • CN North-Beijing4

         • “obs.cn-south-1.myhuaweicloud.com”

           • CN South-Guangzhou

         • “obs.ap-southeast-1.myhuaweicloud.com”

           • CN-Hong Kong

         • “obs.sa-argentina-1.myhuaweicloud.com”

           • LA-Buenos Aires1

         • “obs.sa-peru-1.myhuaweicloud.com”

           • LA-Lima1

         • “obs.na-mexico-1.myhuaweicloud.com”

           • LA-Mexico City1

         • “obs.sa-chile-1.myhuaweicloud.com”

           • LA-Santiago2

         • “obs.sa-brazil-1.myhuaweicloud.com”

           • LA-Sao Paulo1

         • “obs.ru-northwest-2.myhuaweicloud.com”

           • RU-Moscow2

   –s3-endpoint
       Endpoint for Scaleway Object Storage.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: Scaleway

       • Type: string

       • Required: false

       • Examples:

         • “s3.nl-ams.scw.cloud”

           • Amsterdam Endpoint

         • “s3.fr-par.scw.cloud”

           • Paris Endpoint

         • “s3.pl-waw.scw.cloud”

           • Warsaw Endpoint

   –s3-endpoint
       Endpoint for StackPath Object Storage.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: StackPath

       • Type: string

       • Required: false

       • Examples:

         • “s3.us-east-2.stackpathstorage.com”

           • US East Endpoint

         • “s3.us-west-1.stackpathstorage.com”

           • US West Endpoint

         • “s3.eu-central-1.stackpathstorage.com”

           • EU Endpoint

   –s3-endpoint
       Endpoint of the Shared Gateway.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: Storj

       • Type: string

       • Required: false

       • Examples:

         • “gateway.eu1.storjshare.io”

           • EU1 Shared Gateway

         • “gateway.us1.storjshare.io”

           • US1 Shared Gateway

         • “gateway.ap1.storjshare.io”

           • Asia-Pacific Shared Gateway

   –s3-endpoint
       Endpoint for Tencent COS API.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: TencentCOS

       • Type: string

       • Required: false

       • Examples:

         • “cos.ap-beijing.myqcloud.com”

           • Beijing Region

         • “cos.ap-nanjing.myqcloud.com”

           • Nanjing Region

         • “cos.ap-shanghai.myqcloud.com”

           • Shanghai Region

         • “cos.ap-guangzhou.myqcloud.com”

           • Guangzhou Region

         • “cos.ap-chengdu.myqcloud.com”

           • Chengdu Region

         • “cos.ap-chongqing.myqcloud.com”

           • Chongqing Region

         • “cos.ap-hongkong.myqcloud.com”

           • Hong Kong (China) Region

         • “cos.ap-singapore.myqcloud.com”

           • Singapore Region

         • “cos.ap-mumbai.myqcloud.com”

           • Mumbai Region

         • “cos.ap-seoul.myqcloud.com”

           • Seoul Region

         • “cos.ap-bangkok.myqcloud.com”

           • Bangkok Region

         • “cos.ap-tokyo.myqcloud.com”

           • Tokyo Region

         • “cos.na-siliconvalley.myqcloud.com”

           • Silicon Valley Region

         • “cos.na-ashburn.myqcloud.com”

           • Virginia Region

         • “cos.na-toronto.myqcloud.com”

           • Toronto Region

         • “cos.eu-frankfurt.myqcloud.com”

           • Frankfurt Region

         • “cos.eu-moscow.myqcloud.com”

           • Moscow Region

         • “cos.accelerate.myqcloud.com”

           • Use Tencent COS Accelerate Endpoint

   –s3-endpoint
       Endpoint for RackCorp Object Storage.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: RackCorp

       • Type: string

       • Required: false

       • Examples:

         • “s3.rackcorp.com”

           • Global (AnyCast) Endpoint

         • “au.s3.rackcorp.com”

           • Australia (Anycast) Endpoint

         • “au-nsw.s3.rackcorp.com”

           • Sydney (Australia) Endpoint

         • “au-qld.s3.rackcorp.com”

           • Brisbane (Australia) Endpoint

         • “au-vic.s3.rackcorp.com”

           • Melbourne (Australia) Endpoint

         • “au-wa.s3.rackcorp.com”

           • Perth (Australia) Endpoint

         • “ph.s3.rackcorp.com”

           • Manila (Philippines) Endpoint

         • “th.s3.rackcorp.com”

           • Bangkok (Thailand) Endpoint

         • “hk.s3.rackcorp.com”

           • HK (Hong Kong) Endpoint

         • “mn.s3.rackcorp.com”

           • Ulaanbaatar (Mongolia) Endpoint

         • “kg.s3.rackcorp.com”

           • Bishkek (Kyrgyzstan) Endpoint

         • “id.s3.rackcorp.com”

           • Jakarta (Indonesia) Endpoint

         • “jp.s3.rackcorp.com”

           • Tokyo (Japan) Endpoint

         • “sg.s3.rackcorp.com”

           • SG (Singapore) Endpoint

         • “de.s3.rackcorp.com”

           • Frankfurt (Germany) Endpoint

         • “us.s3.rackcorp.com”

           • USA (AnyCast) Endpoint

         • “us-east-1.s3.rackcorp.com”

           • New York (USA) Endpoint

         • “us-west-1.s3.rackcorp.com”

           • Freemont (USA) Endpoint

         • “nz.s3.rackcorp.com”

           • Auckland (New Zealand) Endpoint

   –s3-endpoint
       Endpoint for Qiniu Object Storage.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider: Qiniu

       • Type: string

       • Required: false

       • Examples:

         • “s3-cn-east-1.qiniucs.com”

           • East China Endpoint 1

         • “s3-cn-east-2.qiniucs.com”

           • East China Endpoint 2

         • “s3-cn-north-1.qiniucs.com”

           • North China Endpoint 1

         • “s3-cn-south-1.qiniucs.com”

           • South China Endpoint 1

         • “s3-us-north-1.qiniucs.com”

           • North America Endpoint 1

         • “s3-ap-southeast-1.qiniucs.com”

           • Southeast Asia Endpoint 1

         • “s3-ap-northeast-1.qiniucs.com”

           • Northeast Asia Endpoint 1

   –s3-endpoint
       Endpoint for S3 API.

       Required when using an S3 clone.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_S3_ENDPOINT

       • Provider:
         !AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu

       • Type: string

       • Required: false

       • Examples:

         • “objects-us-east-1.dream.io”

           • Dream Objects endpoint

         • “nyc3.digitaloceanspaces.com”

           • Digital Ocean Spaces New York 3

         • “ams3.digitaloceanspaces.com”

           • Digital Ocean Spaces Amsterdam 3

         • “sgp1.digitaloceanspaces.com”

           • Digital Ocean Spaces Singapore 1

         • “localhost:8333”

           • SeaweedFS S3 localhost

         • “s3.us-east-1.lyvecloud.seagate.com”

           • Seagate Lyve Cloud US East 1 (Virginia)

         • “s3.us-west-1.lyvecloud.seagate.com”

           • Seagate Lyve Cloud US West 1 (California)

         • “s3.ap-southeast-1.lyvecloud.seagate.com”

           • Seagate Lyve Cloud AP Southeast 1 (Singapore)

         • “s3.wasabisys.com”

           • Wasabi US East 1 (N.  Virginia)

         • “s3.us-east-2.wasabisys.com”

           • Wasabi US East 2 (N.  Virginia)

         • “s3.us-central-1.wasabisys.com”

           • Wasabi US Central 1 (Texas)

         • “s3.us-west-1.wasabisys.com”

           • Wasabi US West 1 (Oregon)

         • “s3.ca-central-1.wasabisys.com”

           • Wasabi CA Central 1 (Toronto)

         • “s3.eu-central-1.wasabisys.com”

           • Wasabi EU Central 1 (Amsterdam)

         • “s3.eu-central-2.wasabisys.com”

           • Wasabi EU Central 2 (Frankfurt)

         • “s3.eu-west-1.wasabisys.com”

           • Wasabi EU West 1 (London)

         • “s3.eu-west-2.wasabisys.com”

           • Wasabi EU West 2 (Paris)

         • “s3.ap-northeast-1.wasabisys.com”

           • Wasabi AP Northeast 1 (Tokyo) endpoint

         • “s3.ap-northeast-2.wasabisys.com”

           • Wasabi AP Northeast 2 (Osaka) endpoint

         • “s3.ap-southeast-1.wasabisys.com”

           • Wasabi AP Southeast 1 (Singapore)

         • “s3.ap-southeast-2.wasabisys.com”

           • Wasabi AP Southeast 2 (Sydney)

         • “s3.ir-thr-at1.arvanstorage.com”

           • ArvanCloud Tehran Iran (Asiatech) endpoint

   –s3-location-constraint
       Location constraint - must be set to match the Region.

       Used when creating buckets only.

       Properties:

       • Config: location_constraint

       • Env Var: RCLONE_S3_LOCATION_CONSTRAINT

       • Provider: AWS

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Empty for US Region, Northern Virginia, or Pacific Northwest

         • “us-east-2”

           • US East (Ohio) Region

         • “us-west-1”

           • US West (Northern California) Region

         • “us-west-2”

           • US West (Oregon) Region

         • “ca-central-1”

           • Canada (Central) Region

         • “eu-west-1”

           • EU (Ireland) Region

         • “eu-west-2”

           • EU (London) Region

         • “eu-west-3”

           • EU (Paris) Region

         • “eu-north-1”

           • EU (Stockholm) Region

         • “eu-south-1”

           • EU (Milan) Region

         • “EU”

           • EU Region

         • “ap-southeast-1”

           • Asia Pacific (Singapore) Region

         • “ap-southeast-2”

           • Asia Pacific (Sydney) Region

         • “ap-northeast-1”

           • Asia Pacific (Tokyo) Region

         • “ap-northeast-2”

           • Asia Pacific (Seoul) Region

         • “ap-northeast-3”

           • Asia Pacific (Osaka-Local) Region

         • “ap-south-1”

           • Asia Pacific (Mumbai) Region

         • “ap-east-1”

           • Asia Pacific (Hong Kong) Region

         • “sa-east-1”

           • South America (Sao Paulo) Region

         • “me-south-1”

           • Middle East (Bahrain) Region

         • “af-south-1”

           • Africa (Cape Town) Region

         • “cn-north-1”

           • China (Beijing) Region

         • “cn-northwest-1”

           • China (Ningxia) Region

         • “us-gov-east-1”

           • AWS GovCloud (US-East) Region

         • “us-gov-west-1”

           • AWS GovCloud (US) Region
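
        For example, to create a bucket in London (a sketch; the remote name is a placeholder, and the region
        and location constraint should agree):

               rclone mkdir --s3-region eu-west-2 --s3-location-constraint eu-west-2 remote:my-new-bucket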

   –s3-location-constraint
       Location constraint - must match endpoint.

       Used when creating buckets only.

       Properties:

       • Config: location_constraint

       • Env Var: RCLONE_S3_LOCATION_CONSTRAINT

       • Provider: ChinaMobile

       • Type: string

       • Required: false

       • Examples:

         • “wuxi1”

           • East China (Suzhou)

         • “jinan1”

           • East China (Jinan)

         • “ningbo1”

           • East China (Hangzhou)

         • “shanghai1”

           • East China (Shanghai-1)

         • “zhengzhou1”

           • Central China (Zhengzhou)

         • “hunan1”

           • Central China (Changsha-1)

         • “zhuzhou1”

           • Central China (Changsha-2)

         • “guangzhou1”

           • South China (Guangzhou-2)

         • “dongguan1”

           • South China (Guangzhou-3)

         • “beijing1”

           • North China (Beijing-1)

         • “beijing2”

           • North China (Beijing-2)

         • “beijing4”

           • North China (Beijing-3)

         • “huhehaote1”

           • North China (Huhehaote)

         • “chengdu1”

           • Southwest China (Chengdu)

         • “chongqing1”

           • Southwest China (Chongqing)

         • “guiyang1”

           • Southwest China (Guiyang)

         • “xian1”

            • Northwest China (Xian)

         • “yunnan”

           • Yunnan China (Kunming)

         • “yunnan2”

           • Yunnan China (Kunming-2)

         • “tianjin1”

           • Tianjin China (Tianjin)

         • “jilin1”

           • Jilin China (Changchun)

         • “hubei1”

            • Hubei China (Xiangyang)

         • “jiangxi1”

           • Jiangxi China (Nanchang)

         • “gansu1”

           • Gansu China (Lanzhou)

         • “shanxi1”

           • Shanxi China (Taiyuan)

         • “liaoning1”

           • Liaoning China (Shenyang)

         • “hebei1”

           • Hebei China (Shijiazhuang)

         • “fujian1”

           • Fujian China (Xiamen)

         • “guangxi1”

           • Guangxi China (Nanning)

         • “anhui1”

           • Anhui China (Huainan)

   –s3-location-constraint
       Location constraint - must match endpoint.

       Used when creating buckets only.

       Properties:

       • Config: location_constraint

       • Env Var: RCLONE_S3_LOCATION_CONSTRAINT

       • Provider: ArvanCloud

       • Type: string

       • Required: false

       • Examples:

         • “ir-thr-at1”

           • Tehran Iran (Asiatech)

         • “ir-tbz-sh1”

           • Tabriz Iran (Shahriar)

   –s3-location-constraint
       Location constraint - must match endpoint when using IBM Cloud Public.

        For on-prem COS, do not make a selection from this list; just hit enter.

       Properties:

       • Config: location_constraint

       • Env Var: RCLONE_S3_LOCATION_CONSTRAINT

       • Provider: IBMCOS

       • Type: string

       • Required: false

       • Examples:

         • “us-standard”

           • US Cross Region Standard

         • “us-vault”

           • US Cross Region Vault

         • “us-cold”

           • US Cross Region Cold

         • “us-flex”

           • US Cross Region Flex

         • “us-east-standard”

           • US East Region Standard

         • “us-east-vault”

           • US East Region Vault

         • “us-east-cold”

           • US East Region Cold

         • “us-east-flex”

           • US East Region Flex

         • “us-south-standard”

           • US South Region Standard

         • “us-south-vault”

           • US South Region Vault

         • “us-south-cold”

           • US South Region Cold

         • “us-south-flex”

           • US South Region Flex

         • “eu-standard”

           • EU Cross Region Standard

         • “eu-vault”

           • EU Cross Region Vault

         • “eu-cold”

           • EU Cross Region Cold

         • “eu-flex”

           • EU Cross Region Flex

         • “eu-gb-standard”

           • Great Britain Standard

         • “eu-gb-vault”

           • Great Britain Vault

         • “eu-gb-cold”

           • Great Britain Cold

         • “eu-gb-flex”

           • Great Britain Flex

         • “ap-standard”

           • APAC Standard

         • “ap-vault”

           • APAC Vault

         • “ap-cold”

           • APAC Cold

         • “ap-flex”

           • APAC Flex

         • “mel01-standard”

           • Melbourne Standard

         • “mel01-vault”

           • Melbourne Vault

         • “mel01-cold”

           • Melbourne Cold

         • “mel01-flex”

           • Melbourne Flex

         • “tor01-standard”

           • Toronto Standard

         • “tor01-vault”

           • Toronto Vault

         • “tor01-cold”

           • Toronto Cold

         • “tor01-flex”

           • Toronto Flex

   –s3-location-constraint
        Location constraint - the location where your bucket will be created and your data stored.

       Properties:

       • Config: location_constraint

       • Env Var: RCLONE_S3_LOCATION_CONSTRAINT

       • Provider: RackCorp

       • Type: string

       • Required: false

       • Examples:

         • “global”

           • Global CDN Region

         • “au”

           • Australia (All locations)

         • “au-nsw”

           • NSW (Australia) Region

         • “au-qld”

           • QLD (Australia) Region

         • “au-vic”

           • VIC (Australia) Region

         • “au-wa”

           • Perth (Australia) Region

         • “ph”

           • Manila (Philippines) Region

         • “th”

           • Bangkok (Thailand) Region

         • “hk”

           • HK (Hong Kong) Region

         • “mn”

           • Ulaanbaatar (Mongolia) Region

         • “kg”

           • Bishkek (Kyrgyzstan) Region

         • “id”

           • Jakarta (Indonesia) Region

         • “jp”

           • Tokyo (Japan) Region

         • “sg”

           • SG (Singapore) Region

         • “de”

           • Frankfurt (Germany) Region

         • “us”

           • USA (AnyCast) Region

         • “us-east-1”

           • New York (USA) Region

         • “us-west-1”

           • Freemont (USA) Region

         • “nz”

           • Auckland (New Zealand) Region

   –s3-location-constraint
       Location constraint - must be set to match the Region.

       Used when creating buckets only.

       Properties:

       • Config: location_constraint

       • Env Var: RCLONE_S3_LOCATION_CONSTRAINT

       • Provider: Qiniu

       • Type: string

       • Required: false

       • Examples:

         • “cn-east-1”

           • East China Region 1

         • “cn-east-2”

           • East China Region 2

         • “cn-north-1”

           • North China Region 1

         • “cn-south-1”

           • South China Region 1

         • “us-north-1”

           • North America Region 1

         • “ap-southeast-1”

           • Southeast Asia Region 1

         • “ap-northeast-1”

           • Northeast Asia Region 1

   –s3-location-constraint
       Location constraint - must be set to match the Region.

       Leave blank if not sure.  Used when creating buckets only.

       Properties:

       • Config: location_constraint

       • Env Var: RCLONE_S3_LOCATION_CONSTRAINT

       • Provider:
         !AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS

       • Type: string

       • Required: false

   –s3-acl
       Canned ACL used when creating buckets and storing or copying objects.

       This ACL is used for creating objects and if bucket_acl isn’t set, for creating buckets too.

       For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

       Note that this ACL is applied when server-side copying objects as S3 doesn’t copy the ACL from the source
       but rather writes a fresh one.
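
        For example, when writing into a bucket owned by another AWS account it is common to grant the bucket
        owner full control over the uploaded objects (a sketch; paths and remote name are placeholders):

               rclone copy /path/to/dir remote:bucket/dir --s3-acl bucket-owner-full-control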

       Properties:

       • Config: acl

       • Env Var: RCLONE_S3_ACL

       • Provider: !Storj,Cloudflare

       • Type: string

       • Required: false

       • Examples:

         • “default”

            • Owner gets FULL_CONTROL.

           • No one else has access rights (default).

         • “private”

           • Owner gets FULL_CONTROL.

           • No one else has access rights (default).

         • “public-read”

           • Owner gets FULL_CONTROL.

           • The AllUsers group gets READ access.

         • “public-read-write”

           • Owner gets FULL_CONTROL.

           • The AllUsers group gets READ and WRITE access.

           • Granting this on a bucket is generally not recommended.

         • “authenticated-read”

           • Owner gets FULL_CONTROL.

           • The AuthenticatedUsers group gets READ access.

         • “bucket-owner-read”

           • Object owner gets FULL_CONTROL.

           • Bucket owner gets READ access.

           • If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.

         • “bucket-owner-full-control”

           • Both the object owner and the bucket owner get FULL_CONTROL over the object.

           • If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.

         • “private”

           • Owner gets FULL_CONTROL.

           • No one else has access rights (default).

           • This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.

         • “public-read”

           • Owner gets FULL_CONTROL.

           • The AllUsers group gets READ access.

           • This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS.

         • “public-read-write”

           • Owner gets FULL_CONTROL.

           • The AllUsers group gets READ and WRITE access.

           • This acl is available on IBM Cloud (Infra), On-Premise IBM COS.

         • “authenticated-read”

           • Owner gets FULL_CONTROL.

           • The AuthenticatedUsers group gets READ access.

           • Not supported on Buckets.

           • This acl is available on IBM Cloud (Infra) and On-Premise IBM COS.

   –s3-server-side-encryption
       The server-side encryption algorithm used when storing this object in S3.

       Properties:

       • Config: server_side_encryption

       • Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION

       • Provider: AWS,Ceph,ChinaMobile,Minio

       • Type: string

       • Required: false

       • Examples:

         • “”

           • None

         • “AES256”

           • AES256

         • “aws:kms”

           • aws:kms
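
        For example, to upload objects encrypted with SSE-S3 (AES256) (a sketch; paths and remote name are
        placeholders):

               rclone copy /path/to/dir remote:bucket/dir --s3-server-side-encryption AES256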

   –s3-sse-kms-key-id
        If using KMS ID you must provide the ARN of the key.

       Properties:

       • Config: sse_kms_key_id

       • Env Var: RCLONE_S3_SSE_KMS_KEY_ID

       • Provider: AWS,Ceph,Minio

       • Type: string

       • Required: false

       • Examples:

         • “”

           • None

         • “arn:aws:kms:us-east-1:*”

           • arn:aws:kms:*
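
        A sketch combining the two options for SSE-KMS; the key ARN below is a placeholder:

               rclone copy /path/to/dir remote:bucket/dir \
                 --s3-server-side-encryption aws:kms \
                 --s3-sse-kms-key-id arn:aws:kms:us-east-1:123456789012:key/your-key-id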

   –s3-storage-class
       The storage class to use when storing new objects in S3.

       Properties:

       • Config: storage_class

       • Env Var: RCLONE_S3_STORAGE_CLASS

       • Provider: AWS

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Default

         • “STANDARD”

           • Standard storage class

         • “REDUCED_REDUNDANCY”

           • Reduced redundancy storage class

         • “STANDARD_IA”

           • Standard Infrequent Access storage class

         • “ONEZONE_IA”

           • One Zone Infrequent Access storage class

         • “GLACIER”

           • Glacier storage class

         • “DEEP_ARCHIVE”

           • Glacier Deep Archive storage class

         • “INTELLIGENT_TIERING”

           • Intelligent-Tiering storage class

         • “GLACIER_IR”

           • Glacier Instant Retrieval storage class
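
        For example, to send rarely accessed archives straight to Glacier Deep Archive (a sketch; paths and
        remote name are placeholders):

               rclone copy /path/to/archive remote:bucket/archive --s3-storage-class DEEP_ARCHIVE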

   –s3-storage-class
       The storage class to use when storing new objects in OSS.

       Properties:

       • Config: storage_class

       • Env Var: RCLONE_S3_STORAGE_CLASS

       • Provider: Alibaba

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Default

         • “STANDARD”

           • Standard storage class

         • “GLACIER”

           • Archive storage mode

         • “STANDARD_IA”

           • Infrequent access storage mode

   –s3-storage-class
       The storage class to use when storing new objects in ChinaMobile.

       Properties:

       • Config: storage_class

       • Env Var: RCLONE_S3_STORAGE_CLASS

       • Provider: ChinaMobile

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Default

         • “STANDARD”

           • Standard storage class

         • “GLACIER”

           • Archive storage mode

         • “STANDARD_IA”

           • Infrequent access storage mode

   –s3-storage-class
       The storage class to use when storing new objects in ArvanCloud.

       Properties:

       • Config: storage_class

       • Env Var: RCLONE_S3_STORAGE_CLASS

       • Provider: ArvanCloud

       • Type: string

       • Required: false

       • Examples:

         • “STANDARD”

           • Standard storage class

   –s3-storage-class
       The storage class to use when storing new objects in Tencent COS.

       Properties:

       • Config: storage_class

       • Env Var: RCLONE_S3_STORAGE_CLASS

       • Provider: TencentCOS

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Default

         • “STANDARD”

           • Standard storage class

         • “ARCHIVE”

           • Archive storage mode

         • “STANDARD_IA”

           • Infrequent access storage mode

   –s3-storage-class
       The storage class to use when storing new objects in S3.

       Properties:

       • Config: storage_class

       • Env Var: RCLONE_S3_STORAGE_CLASS

       • Provider: Scaleway

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Default.

         • “STANDARD”

           • The Standard class for any upload.

           • Suitable for on-demand content like streaming or CDN.

         • “GLACIER”

           • Archived storage.

           • Prices are lower, but it needs to be restored first to be accessed.

   –s3-storage-class
       The storage class to use when storing new objects in Qiniu.

       Properties:

       • Config: storage_class

       • Env Var: RCLONE_S3_STORAGE_CLASS

       • Provider: Qiniu

       • Type: string

       • Required: false

       • Examples:

         • “STANDARD”

           • Standard storage class

         • “LINE”

           • Infrequent access storage mode

         • “GLACIER”

           • Archive storage mode

         • “DEEP_ARCHIVE”

           • Deep archive storage mode

   Advanced options
       Here  are  the  Advanced  options  specific  to  s3 (Amazon S3 Compliant Storage Providers including AWS,
       Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital  Ocean,  Dreamhost,  Huawei  OBS,  IBM  COS,
       IDrive  e2,  IONOS  Cloud,  Lyve  Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj,
       Tencent COS, Qiniu and Wasabi).

   –s3-bucket-acl
       Canned ACL used when creating buckets.

       For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

        Note that this ACL is applied only when creating buckets.  If it isn’t set then “acl” is used instead.

       Properties:

       • Config: bucket_acl

       • Env Var: RCLONE_S3_BUCKET_ACL

       • Type: string

       • Required: false

       • Examples:

         • “private”

           • Owner gets FULL_CONTROL.

           • No one else has access rights (default).

         • “public-read”

           • Owner gets FULL_CONTROL.

           • The AllUsers group gets READ access.

         • “public-read-write”

           • Owner gets FULL_CONTROL.

           • The AllUsers group gets READ and WRITE access.

           • Granting this on a bucket is generally not recommended.

         • “authenticated-read”

           • Owner gets FULL_CONTROL.

           • The AuthenticatedUsers group gets READ access.

   –s3-requester-pays
       Enables requester pays option when interacting with S3 bucket.

       Properties:

       • Config: requester_pays

       • Env Var: RCLONE_S3_REQUESTER_PAYS

       • Provider: AWS

       • Type: bool

       • Default: false

   –s3-sse-customer-algorithm
       If using SSE-C, the server-side encryption algorithm used when storing this object in S3.

       Properties:

       • Config: sse_customer_algorithm

       • Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM

       • Provider: AWS,Ceph,ChinaMobile,Minio

       • Type: string

       • Required: false

       • Examples:

         • “”

           • None

         • “AES256”

           • AES256

   –s3-sse-customer-key
       To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.

        Alternatively you can provide –s3-sse-customer-key-base64.

       Properties:

       • Config: sse_customer_key

       • Env Var: RCLONE_S3_SSE_CUSTOMER_KEY

       • Provider: AWS,Ceph,ChinaMobile,Minio

       • Type: string

       • Required: false

       • Examples:

         • “”

           • None

   –s3-sse-customer-key-base64
       If  using  SSE-C  you  must provide the secret encryption key encoded in base64 format to encrypt/decrypt
       your data.

        Alternatively you can provide –s3-sse-customer-key.

       Properties:

       • Config: sse_customer_key_base64

       • Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64

       • Provider: AWS,Ceph,ChinaMobile,Minio

       • Type: string

       • Required: false

       • Examples:

         • “”

           • None

   –s3-sse-customer-key-md5
       If using SSE-C you may provide the secret encryption key MD5 checksum (optional).

       If you leave it blank, this is calculated automatically from the sse_customer_key provided.

       Properties:

       • Config: sse_customer_key_md5

       • Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5

       • Provider: AWS,Ceph,ChinaMobile,Minio

       • Type: string

       • Required: false

       • Examples:

         • “”

           • None
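
        A sketch of an SSE-C upload combining the options above.  The key shown is a placeholder (AES256 keys
        are 32 bytes); keep the real key safe, as the data cannot be read back without it:

               rclone copy /path/to/dir remote:bucket/dir \
                 --s3-sse-customer-algorithm AES256 \
                 --s3-sse-customer-key MY_32_BYTE_SECRET_KEY_PLACEHOLDER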

   –s3-upload-cutoff
       Cutoff for switching to chunked upload.

       Any files larger than this will be uploaded in chunks of chunk_size.  The minimum is 0 and the maximum is
       5 GiB.

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_S3_UPLOAD_CUTOFF

       • Type: SizeSuffix

       • Default: 200Mi

   –s3-chunk-size
       Chunk size to use for uploading.

        When uploading files larger than upload_cutoff or files with unknown size (e.g. from “rclone rcat” or
        uploaded with “rclone mount”, Google Photos or Google Docs) they will be uploaded as multipart uploads
        using this chunk size.

       Note that “–s3-upload-concurrency” chunks of this size are buffered in memory per transfer.

       If you are transferring large files over high-speed links and you have  enough  memory,  then  increasing
       this will speed up the transfers.

       Rclone will automatically increase the chunk size when uploading a large file of known size to stay below
       the 10,000 chunks limit.

       Files of unknown size are uploaded with the configured chunk_size.  Since the default chunk size is 5 MiB
       and  there  can  be  at most 10,000 chunks, this means that by default the maximum size of a file you can
       stream upload is 48 GiB.  If you wish to stream upload larger  files  then  you  will  need  to  increase
       chunk_size.

        Increasing the chunk size decreases the accuracy of the progress statistics displayed with the “-P”
        flag.  Rclone treats a chunk as sent when it has been buffered by the AWS SDK, when in fact it may still
        be uploading.  A bigger chunk size means a bigger AWS SDK buffer and progress reporting that deviates
        further from the truth.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_S3_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 5Mi
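
        For example, to stream upload files of unknown size up to about 1 TiB, the chunk size must be at least
        1 TiB / 10,000 ≈ 105 MiB, so a value such as 128Mi would do (a sketch; the remote name is a
        placeholder):

               rclone rcat --s3-chunk-size 128Mi remote:bucket/big.bin < big.bin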

   –s3-max-upload-parts
       Maximum number of parts in a multipart upload.

       This option defines the maximum number of multipart chunks to use when doing a multipart upload.

       This can be useful if a service does not support the AWS S3 specification of 10,000 chunks.

        Rclone will automatically increase the chunk size when uploading a large file of a known size to stay
        below this limit on the number of chunks.

       Properties:

       • Config: max_upload_parts

       • Env Var: RCLONE_S3_MAX_UPLOAD_PARTS

       • Type: int

       • Default: 10000

   –s3-copy-cutoff
       Cutoff for switching to multipart copy.

       Any files larger than this that need to be server-side copied will be copied in chunks of this size.

       The minimum is 0 and the maximum is 5 GiB.

       Properties:

       • Config: copy_cutoff

       • Env Var: RCLONE_S3_COPY_CUTOFF

       • Type: SizeSuffix

       • Default: 4.656Gi

   –s3-disable-checksum
       Don’t store MD5 checksum with object metadata.

       Normally  rclone  will  calculate  the  MD5 checksum of the input before uploading it so it can add it to
       metadata on the object.  This is great for data integrity checking but can cause long  delays  for  large
       files to start uploading.

       Properties:

       • Config: disable_checksum

       • Env Var: RCLONE_S3_DISABLE_CHECKSUM

       • Type: bool

       • Default: false

   –s3-shared-credentials-file
       Path to the shared credentials file.

       If env_auth = true then rclone can use a shared credentials file.

       If  this  variable  is empty rclone will look for the “AWS_SHARED_CREDENTIALS_FILE” env variable.  If the
       env value is empty it will default to the current user’s home directory.

              Linux/OSX: "$HOME/.aws/credentials"
              Windows:   "%USERPROFILE%\.aws\credentials"

       Properties:

       • Config: shared_credentials_file

       • Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE

       • Type: string

       • Required: false
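
        A minimal shared credentials file uses the standard AWS INI format (the values are placeholders):

               [default]
               aws_access_key_id = XXX
               aws_secret_access_key = YYY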

   –s3-profile
       Profile to use in the shared credentials file.

       If env_auth = true then rclone can use a shared credentials file.  This variable controls  which  profile
       is used in that file.

       If  empty  it  will  default  to  the environment variable “AWS_PROFILE” or “default” if that environment
       variable is also not set.

       Properties:

       • Config: profile

       • Env Var: RCLONE_S3_PROFILE

       • Type: string

       • Required: false
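
        For example, assuming the remote is configured with env_auth = true and the shared credentials file
        defines a “backup” profile (a sketch; the remote name is a placeholder):

               rclone lsd --s3-profile backup remote: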

   –s3-session-token
       An AWS session token.

       Properties:

       • Config: session_token

       • Env Var: RCLONE_S3_SESSION_TOKEN

       • Type: string

       • Required: false

   –s3-upload-concurrency
       Concurrency for multipart uploads.

       This is the number of chunks of the same file that are uploaded concurrently.

       If you are uploading small numbers of large files over high-speed links and these uploads  do  not  fully
       utilize your bandwidth, then increasing this may help to speed up the transfers.

       Properties:

       • Config: upload_concurrency

       • Env Var: RCLONE_S3_UPLOAD_CONCURRENCY

       • Type: int

       • Default: 4
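
        For example, on a fast link with memory to spare (recall that roughly upload_concurrency × chunk_size
        is buffered per transfer; a sketch, with placeholder paths):

               rclone copy /path/to/bigfiles remote:bucket --s3-upload-concurrency 8 --s3-chunk-size 64Mi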

   –s3-force-path-style
        If true use path style access, if false use virtual hosted style.

        If this is true (the default) then rclone will use path style access; if false then rclone will use
        virtual hosted style.  See the AWS S3 docs for more info:
        https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro

       Some  providers  (e.g. AWS,  Aliyun  OSS, Netease COS, or Tencent COS) require this set to false - rclone
       will do this automatically based on the provider setting.
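
        For illustration, with a bucket named “mybucket” on a hypothetical endpoint s3.example.com the two
        request styles look like this:

               # path style (force_path_style = true, the default)
               https://s3.example.com/mybucket/path/to/object
               # virtual hosted style (force_path_style = false)
               https://mybucket.s3.example.com/path/to/object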

       Properties:

       • Config: force_path_style

       • Env Var: RCLONE_S3_FORCE_PATH_STYLE

       • Type: bool

       • Default: true

   –s3-v2-auth
       If true use v2 authentication.

       If this is false (the default) then rclone will use v4 authentication.  If it is set then rclone will use
       v2 authentication.

       Use this only if v4 signatures don’t work, e.g. pre Jewel/v10 CEPH.

       Properties:

       • Config: v2_auth

       • Env Var: RCLONE_S3_V2_AUTH

       • Type: bool

       • Default: false

   –s3-use-accelerate-endpoint
       If true use the AWS S3 accelerated endpoint.

        See: https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html (AWS S3
        Transfer acceleration)

       Properties:

       • Config: use_accelerate_endpoint

       • Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT

       • Provider: AWS

       • Type: bool

       • Default: false

   –s3-leave-parts-on-error
       If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual
       recovery.

       It should be set to true for resuming uploads across different sessions.

       WARNING:  Storing  parts  of an incomplete multipart upload counts towards space usage on S3 and will add
       additional costs if not cleaned up.

       Properties:

       • Config: leave_parts_on_error

       • Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR

       • Provider: AWS

       • Type: bool

       • Default: false

   –s3-list-chunk
       Size of listing chunk (response list for each ListObject S3 request).

        This option is also known as “MaxKeys”, “max-items”, or “page-size” from the AWS S3 specification.  Most
        services truncate the response list to 1000 objects even if more than that are requested.  In AWS S3
        this is a global maximum and cannot be changed, see AWS S3.  In Ceph, this can be increased with the
        “rgw list buckets max chunk” option.

       Properties:

       • Config: list_chunk

       • Env Var: RCLONE_S3_LIST_CHUNK

       • Type: int

       • Default: 1000

   –s3-list-version
       Version of ListObjects to use: 1,2 or 0 for auto.

       When S3 originally launched it only provided the ListObjects call to enumerate objects in a bucket.

       However in May 2016 the ListObjectsV2 call was introduced.  This is much higher performance and should be
       used if at all possible.

        If set to the default, 0, rclone will guess which list objects method to call according to the provider
        set.  If it guesses wrong, then it may be set manually here.

       Properties:

       • Config: list_version

       • Env Var: RCLONE_S3_LIST_VERSION

       • Type: int

       • Default: 0

   –s3-list-url-encode
        Whether to URL encode listings: true/false/unset.

       Some providers support URL encoding listings and where this is available this is more reliable when using
       control characters in file names.  If this is  set  to  unset  (the  default)  then  rclone  will  choose
       according to the provider setting what to apply, but you can override rclone’s choice here.

       Properties:

       • Config: list_url_encode

       • Env Var: RCLONE_S3_LIST_URL_ENCODE

       • Type: Tristate

       • Default: unset

   –s3-no-check-bucket
       If set, don’t attempt to check the bucket exists or create it.

       This  can be useful when trying to minimise the number of transactions rclone does if you know the bucket
       exists already.

       It can also be needed if the user you are using  does  not  have  bucket  creation  permissions.   Before
       v1.52.0 this would have passed silently due to a bug.

       Properties:

       • Config: no_check_bucket

       • Env Var: RCLONE_S3_NO_CHECK_BUCKET

       • Type: bool

       • Default: false

   –s3-no-head
       If set, don’t HEAD uploaded objects to check integrity.

       This can be useful when trying to minimise the number of transactions rclone does.

       Setting it means that if rclone receives a 200 OK message after uploading an object with PUT then it will
       assume that it got uploaded properly.

       In particular it will assume:

       • the metadata, including modtime, storage class and content type was as uploaded

       • the size was as uploaded

       It reads the following items from the response for a single part PUT:

       • the MD5SUM

       • The uploaded date

       For multipart uploads these items aren’t read.

        If a source object of unknown length is uploaded then rclone will do a HEAD request.

       Setting  this  flag increases the chance for undetected upload failures, in particular an incorrect size,
       so it isn’t recommended for normal operation.  In practice the chance of an undetected upload failure  is
       very small even with this flag.

       Properties:

       • Config: no_head

       • Env Var: RCLONE_S3_NO_HEAD

       • Type: bool

       • Default: false

   –s3-no-head-object
       If set, do not do HEAD before GET when getting objects.

       Properties:

       • Config: no_head_object

       • Env Var: RCLONE_S3_NO_HEAD_OBJECT

       • Type: bool

       • Default: false

   –s3-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_S3_ENCODING

       • Type: MultiEncoder

       • Default: Slash,InvalidUtf8,Dot

   –s3-memory-pool-flush-time
       How often internal memory buffer pools will be flushed.

        Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
        This option controls how often unused buffers will be removed from the pool.

       Properties:

       • Config: memory_pool_flush_time

       • Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME

       • Type: Duration

       • Default: 1m0s

   –s3-memory-pool-use-mmap
       Whether to use mmap buffers in internal memory pool.

       Properties:

       • Config: memory_pool_use_mmap

       • Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP

       • Type: bool

       • Default: false

   –s3-disable-http2
       Disable usage of http2 for S3 backends.

       There is currently an unsolved issue with the s3 (specifically minio)  backend  and  HTTP/2.   HTTP/2  is
       enabled  by default for the s3 backend but can be disabled here.  When the issue is solved this flag will
       be removed.

       See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631

       Properties:

       • Config: disable_http2

       • Env Var: RCLONE_S3_DISABLE_HTTP2

       • Type: bool

       • Default: false

   –s3-download-url
       Custom endpoint for downloads.  This is usually set to a CloudFront CDN URL  as  AWS  S3  offers  cheaper
       egress for data downloaded through the CloudFront network.

       Properties:

       • Config: download_url

       • Env Var: RCLONE_S3_DOWNLOAD_URL

       • Type: string

       • Required: false
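
        For example, to route downloads through a CloudFront distribution that fronts the bucket (a sketch; the
        domain is a placeholder for your distribution):

               [remote]
               type = s3
               provider = AWS
               ...
               download_url = https://d111111abcdef8.cloudfront.net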

   –s3-use-multipart-etag
        Whether to use ETag in multipart uploads for verification.

       This should be true, false or left unset to use the default for the provider.

       Properties:

       • Config: use_multipart_etag

       • Env Var: RCLONE_S3_USE_MULTIPART_ETAG

       • Type: Tristate

       • Default: unset

   –s3-use-presigned-request
        Whether to use a presigned request or PutObject for single part uploads.

       If this is false rclone will use PutObject from the AWS SDK to upload an object.

       Versions  of rclone < 1.59 use presigned requests to upload a single part object and setting this flag to
       true will re-enable that functionality.  This shouldn’t be necessary except in exceptional  circumstances
       or for testing.

       Properties:

       • Config: use_presigned_request

       • Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST

       • Type: bool

       • Default: false

   –s3-versions
       Include old versions in directory listings.

       Properties:

       • Config: versions

       • Env Var: RCLONE_S3_VERSIONS

       • Type: bool

       • Default: false

   –s3-version-at
       Show file versions as they were at the specified time.

       The  parameter should be a date, “2006-01-02”, datetime “2006-01-02 15:04:05” or a duration for that long
       ago, eg “100d” or “1h”.

       Note that when using this no file write operations are permitted, so you can’t  upload  files  or  delete
       them.

       See the time option docs for valid formats.

       Properties:

       • Config: version_at

       • Env Var: RCLONE_S3_VERSION_AT

       • Type: Time

       • Default: off
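
        For example, to list a bucket as it was one day ago (a sketch; the remote name is a placeholder):

               rclone ls --s3-version-at "1d" remote:bucket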

   –s3-decompress
       If set this will decompress gzip encoded objects.

       It  is possible to upload objects to S3 with “Content-Encoding: gzip” set.  Normally rclone will download
       these files as compressed objects.

       If this flag is set then rclone will decompress these files with “Content-Encoding:  gzip”  as  they  are
       received.   This  means  that  rclone  can’t  check  the  size  and  hash  but  the file contents will be
       decompressed.

       Properties:

       • Config: decompress

       • Env Var: RCLONE_S3_DECOMPRESS

       • Type: bool

       • Default: false

   –s3-might-gzip
       Set this if the backend might gzip objects.

       Normally providers will not alter objects when they are downloaded.  If an object was not  uploaded  with
       Content-Encoding: gzip then it won’t be set on download.

       However  some  providers  may  gzip objects even if they weren’t uploaded with Content-Encoding: gzip (eg
       Cloudflare).

       A symptom of this would be receiving errors like

              ERROR corrupted on transfer: sizes differ NNN vs MMM

       If you set this flag and rclone downloads an object with Content-Encoding: gzip set and chunked  transfer
       encoding, then rclone will decompress the object on the fly.

       If  this  is set to unset (the default) then rclone will choose according to the provider setting what to
       apply, but you can override rclone’s choice here.

       Properties:

       • Config: might_gzip

       • Env Var: RCLONE_S3_MIGHT_GZIP

       • Type: Tristate

       • Default: unset

   –s3-no-system-metadata
        Suppress setting and reading of system metadata.

       Properties:

       • Config: no_system_metadata

       • Env Var: RCLONE_S3_NO_SYSTEM_METADATA

       • Type: bool

       • Default: false

   Metadata
       User metadata is stored as x-amz-meta- keys.  S3 metadata  keys  are  case  insensitive  and  are  always
       returned in lower case.
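
        For example, metadata can be set at upload time and read back with the global metadata flags (a sketch;
        paths and remote name are placeholders):

               # upload preserving metadata and setting a custom header
               rclone copyto -M --metadata-set "cache-control=no-cache" /tmp/file.txt remote:bucket/file.txt
               # inspect the resulting metadata
               rclone lsjson -M --stat remote:bucket/file.txt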

       Here are the possible system metadata items for the s3 backend.

        Name                  Help                                  Type       Example                               Read Only
        ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
        btime                 Time of file birth (creation) read    RFC 3339   2006-01-02T15:04:05.999999999Z07:00   Y
                              from Last-Modified header
        cache-control         Cache-Control header                  string     no-cache                              N
        content-disposition   Content-Disposition header            string     inline                                N
        content-encoding      Content-Encoding header               string     gzip                                  N
        content-language      Content-Language header               string     en-US                                 N
        content-type          Content-Type header                   string     text/plain                            N
        mtime                 Time of last modification, read       RFC 3339   2006-01-02T15:04:05.999999999Z07:00   N
                              from rclone metadata
        tier                  Tier of the object                    string     GLACIER                               Y

       See the metadata docs for more info.

   Backend commands
       Here are the commands specific to the s3 backend.

       Run them with

              rclone backend COMMAND remote:

       The help below will explain what arguments each command takes.

       See the backend command for more info on how to pass options and arguments.

       These can be run on a running backend using the rc command backend/command.

   restore
       Restore objects from GLACIER to normal storage

              rclone backend restore remote: [options] [<arguments>+]

       This command can be used to restore one or more objects from GLACIER to normal storage.

       Usage Examples:

              rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS]
              rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
              rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]

        This flag also obeys the filters.  Test first with the -i/–interactive or –dry-run flags.

              rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard

       All the objects shown will be marked for restore, then

              rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard

       It  returns  a  list of status dictionaries with Remote and Status keys.  The Status will be OK if it was
       successful or an error message if not.

              [
                  {
                      "Status": "OK",
                      "Path": "test.txt"
                  },
                  {
                      "Status": "OK",
                      "Path": "test/file4.txt"
                  }
              ]

       Options:

       • “description”: The optional description for the job.

       • “lifetime”: Lifetime of the active copy in days

       • “priority”: Priority of restore: Standard|Expedited|Bulk

   list-multipart-uploads
       List the unfinished multipart uploads

              rclone backend list-multipart-uploads remote: [options] [<arguments>+]

       This command lists the unfinished multipart uploads in JSON format.

              rclone backend list-multipart s3:bucket/path/to/object

       It returns a dictionary of buckets with values as lists of unfinished multipart uploads.

        You can call it with no bucket in which case it lists all buckets, with a bucket, or with a bucket and
        path.

              {
                "rclone": [
                  {
                    "Initiated": "2020-06-26T14:20:36Z",
                    "Initiator": {
                      "DisplayName": "XXX",
                      "ID": "arn:aws:iam::XXX:user/XXX"
                    },
                    "Key": "KEY",
                    "Owner": {
                      "DisplayName": null,
                      "ID": "XXX"
                    },
                    "StorageClass": "STANDARD",
                    "UploadId": "XXX"
                  }
                ],
                "rclone-1000files": [],
                "rclone-dst": []
              }

   cleanup
       Remove unfinished multipart uploads.

              rclone backend cleanup remote: [options] [<arguments>+]

       This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.

       Note that you can use -i/–dry-run with this command to see what it would do.

              rclone backend cleanup s3:bucket/path/to/object
              rclone backend cleanup -o max-age=7w s3:bucket/path/to/object

       Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

       Options:

       • “max-age”: Max age of upload to delete

   cleanup-hidden
       Remove old versions of files.

              rclone backend cleanup-hidden remote: [options] [<arguments>+]

       This command removes any old hidden versions of files on a versions enabled bucket.

       Note that you can use -i/–dry-run with this command to see what it would do.

              rclone backend cleanup-hidden s3:bucket/path/to/dir

   versioning
       Set/get versioning support for a bucket.

              rclone backend versioning remote: [options] [<arguments>+]

       This  command  sets  versioning  support if a parameter is passed and then returns the current versioning
       status for the bucket supplied.

              rclone backend versioning s3:bucket # read status only
              rclone backend versioning s3:bucket Enabled
              rclone backend versioning s3:bucket Suspended

       It may return “Enabled”, “Suspended” or “Unversioned”.  Note that once versioning has  been  enabled  the
       status can’t be set back to “Unversioned”.

   Anonymous access to public buckets
       If  you  want  to  use  rclone  to  access  a  public  bucket,  configure  with a blank access_key_id and
       secret_access_key.  Your config should end up looking like this:

              [anons3]
              type = s3
              provider = AWS
              env_auth = false
              access_key_id =
              secret_access_key =
              region = us-east-1
              endpoint =
              location_constraint =
              acl = private
              server_side_encryption =
              storage_class =

       Then use it as normal with the name of the public bucket, e.g.

              rclone lsd anons3:1000genomes

       You will be able to list and copy data but not upload it.

   Providers
   AWS S3
        This is the provider used as the main example and described in the configuration section above.

   AWS Snowball Edge
       AWS Snowball is a hardware appliance used for transferring bulk data back  to  AWS.   Its  main  software
       interface is S3 object storage.

        To use rclone with AWS Snowball Edge devices, configure as standard for an ‘S3 Compatible Service’.

       If using rclone before v1.59, be sure to set upload_cutoff = 0, otherwise you will run into
       authentication header issues, as the Snowball device does not support query parameter based
       authentication.

       With rclone v1.59 or later, setting upload_cutoff should not be necessary.

       For example:

              [snowball]
              type = s3
              provider = Other
              access_key_id = YOUR_ACCESS_KEY
              secret_access_key = YOUR_SECRET_KEY
              endpoint = http://[IP of Snowball]:8080
              upload_cutoff = 0
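
       Once configured, the device can be used like any other S3 remote, e.g. to copy files into a
       bucket on the Snowball (bucket name illustrative):

              rclone copy /path/to/files snowball:bucket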

   Ceph
       Ceph is  an  open-source,  unified,  distributed  storage  system  designed  for  excellent  performance,
       reliability and scalability.  It has an S3 compatible object storage interface.

       To  use rclone with Ceph, configure as above but leave the region blank and set the endpoint.  You should
       end up with something like this in your config:

              [ceph]
              type = s3
              provider = Ceph
              env_auth = false
              access_key_id = XXX
              secret_access_key = YYY
              region =
              endpoint = https://ceph.endpoint.example.com
              location_constraint =
              acl =
              server_side_encryption =
              storage_class =

       If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a version of rclone before
       v1.59, then you may need to supply the parameter --s3-upload-cutoff 0 or put upload_cutoff = 0 in
       the config file to work around a bug which causes uploading of small files to fail.

       Note  also that Ceph sometimes puts / in the passwords it gives users.  If you read the secret access key
       using the command line tools you will get a JSON blob with the / escaped as \/.  Make sure you only write
       / in the secret access key.

       E.g. the dump from Ceph looks something like this (irrelevant keys removed):

              {
                  "user_id": "xxx",
                  "display_name": "xxxx",
                  "keys": [
                      {
                          "user": "xxx",
                          "access_key": "xxxxxx",
                          "secret_key": "xxxxxx\/xxxx"
                      }
                   ]
              }

       Because this is a JSON dump, it encodes the / as \/, so if you use the secret key as xxxxxx/xxxx
       it will work fine.
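
       If you would rather extract the key with the escaping already decoded, any JSON-aware tool will do
       it for you.  A sketch, assuming radosgw-admin and jq are installed (jq’s -r flag prints the raw,
       unescaped string):

              # print the first secret key for user xxx, with \/ decoded back to /
              radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'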

   Cloudflare R2
       Cloudflare R2 Storage  allows  developers  to store large amounts of unstructured data without the costly
       egress bandwidth fees associated with typical cloud storage services.

       Here is an example of making a Cloudflare R2 configuration.  First run:

              rclone config

       This will guide you through an interactive setup process.

       Note that all buckets are private, and all are stored in the same “auto” region.  It is necessary to  use
       Cloudflare workers to share the content of a bucket publicly.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> r2
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              ...
              XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
                 \ (s3)
              ...
              Storage> s3
              Option provider.
              Choose your S3 provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
              ...
              XX / Cloudflare R2 Storage
                 \ (Cloudflare)
              ...
              provider> Cloudflare
              Option env_auth.
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
              Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own boolean value (true or false).
              Press Enter for the default (false).
               1 / Enter AWS credentials in the next step.
                 \ (false)
               2 / Get AWS credentials from the environment (env vars or IAM).
                 \ (true)
              env_auth> 1
              Option access_key_id.
              AWS Access Key ID.
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              access_key_id> ACCESS_KEY
              Option secret_access_key.
              AWS Secret Access Key (password).
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              secret_access_key> SECRET_ACCESS_KEY
              Option region.
              Region to connect to.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
                 \ (auto)
              region> 1
              Option endpoint.
              Endpoint for S3 API.
              Required when using an S3 clone.
              Enter a value. Press Enter to leave empty.
              endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       This will leave your config looking something like:

              [r2]
              type = s3
              provider = Cloudflare
              access_key_id = ACCESS_KEY
              secret_access_key = SECRET_ACCESS_KEY
              region = auto
              endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
              acl = private

       Now run rclone lsf r2: to see your buckets and rclone lsf r2:bucket to look within a bucket.
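
       Copying files in and out then works as with any other remote, e.g. (bucket name illustrative):

              rclone mkdir r2:my-bucket
              rclone copy /path/to/files r2:my-bucket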

   Dreamhost
       Dreamhost DreamObjects is an object storage system based on CEPH.

       To  use  rclone  with Dreamhost, configure as above but leave the region blank and set the endpoint.  You
       should end up with something like this in your config:

              [dreamobjects]
              type = s3
              provider = DreamHost
              env_auth = false
              access_key_id = your_access_key
              secret_access_key = your_secret_key
              region =
              endpoint = objects-us-west-1.dream.io
              location_constraint =
              acl = private
              server_side_encryption =
              storage_class =

   DigitalOcean Spaces
       Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.

       To connect to DigitalOcean Spaces you will need an access key and secret key.  These can be retrieved  on
       the  “Applications & API”  page  of the DigitalOcean control panel.  They will be needed when prompted by
       rclone config for your access_key_id and secret_access_key.

       When prompted for a region or location_constraint, press enter to use the default value.  The region must
       be included in the endpoint setting (e.g. nyc3.digitaloceanspaces.com).  The default values can  be  used
       for other settings.

       Going  through the whole process of creating a new remote by running rclone config, each prompt should be
       answered as shown below:

              Storage> s3
              env_auth> 1
              access_key_id> YOUR_ACCESS_KEY
              secret_access_key> YOUR_SECRET_KEY
              region>
              endpoint> nyc3.digitaloceanspaces.com
              location_constraint>
              acl>
              storage_class>

       The resulting configuration file should look like:

              [spaces]
              type = s3
              provider = DigitalOcean
              env_auth = false
              access_key_id = YOUR_ACCESS_KEY
              secret_access_key = YOUR_SECRET_KEY
              region =
              endpoint = nyc3.digitaloceanspaces.com
              location_constraint =
              acl =
              server_side_encryption =
              storage_class =

       Once configured, you can create a new Space and begin copying files.  For example:

              rclone mkdir spaces:my-new-space
              rclone copy /path/to/files spaces:my-new-space

   Huawei OBS
       Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that  lets
       you store virtually any volume of unstructured data in any format and access it from anywhere.

       OBS provides an S3 interface; you can copy and modify the following configuration and add it to
       your rclone configuration file.

              [obs]
              type = s3
              provider = HuaweiOBS
              access_key_id = your-access-key-id
              secret_access_key = your-secret-access-key
              region = af-south-1
              endpoint = obs.af-south-1.myhuaweicloud.com
              acl = private

       Alternatively, you can configure it via the interactive command line:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> obs
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
               5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
                 \ (s3)
              [snip]
              Storage> 5
              Option provider.
              Choose your S3 provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
              [snip]
               9 / Huawei Object Storage Service
                 \ (HuaweiOBS)
              [snip]
              provider> 9
              Option env_auth.
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
              Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own boolean value (true or false).
              Press Enter for the default (false).
               1 / Enter AWS credentials in the next step.
                 \ (false)
               2 / Get AWS credentials from the environment (env vars or IAM).
                 \ (true)
              env_auth> 1
              Option access_key_id.
              AWS Access Key ID.
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              access_key_id> your-access-key-id
              Option secret_access_key.
              AWS Secret Access Key (password).
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              secret_access_key> your-secret-access-key
              Option region.
              Region to connect to.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / AF-Johannesburg
                 \ (af-south-1)
               2 / AP-Bangkok
                 \ (ap-southeast-2)
              [snip]
              region> 1
              Option endpoint.
              Endpoint for OBS API.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / AF-Johannesburg
                 \ (obs.af-south-1.myhuaweicloud.com)
               2 / AP-Bangkok
                 \ (obs.ap-southeast-2.myhuaweicloud.com)
              [snip]
              endpoint> 1
              Option acl.
              Canned ACL used when creating buckets and storing or copying objects.
              This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
              For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              Note that this ACL is applied when server-side copying objects as S3
              doesn't copy the ACL from the source but rather writes a fresh one.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
                 / Owner gets FULL_CONTROL.
               1 | No one else has access rights (default).
                 \ (private)
              [snip]
              acl> 1
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n>
              --------------------
              [obs]
              type = s3
              provider = HuaweiOBS
              access_key_id = your-access-key-id
              secret_access_key = your-secret-access-key
              region = af-south-1
              endpoint = obs.af-south-1.myhuaweicloud.com
              acl = private
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y
              Current remotes:

              Name                 Type
              ====                 ====
              obs                  s3

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> q
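
       Once the remote is saved, it can be used like any other S3 remote, e.g. (bucket name illustrative):

              rclone mkdir obs:my-bucket
              rclone copy /path/to/files obs:my-bucket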

   IBM COS (S3)
       Information stored with IBM Cloud Object Storage is encrypted and dispersed  across  multiple  geographic
       locations,  and  accessed  through  an  implementation  of  the  S3  API.   This service makes use of the
       distributed storage technologies provided by IBM’s Cloud Object  Storage  System  (formerly  Cleversafe).
       For more information, visit http://www.ibm.com/cloud/object-storage.

       To configure access to IBM COS S3, follow the steps below:

       1. Run rclone config and select n for a new remote.

              2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n

        2. Enter a name for the configuration.

              name> <YOUR NAME>

       3. Select “s3” storage.

          Choose a number from below, or type in your own value
              1 / Alias for an existing remote
              \ "alias"
              2 / Amazon Drive
              \ "amazon cloud drive"
               3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, IBM COS)
              \ "s3"
              4 / Backblaze B2
              \ "b2"
          [snip]
              23 / HTTP
              \ "http"
          Storage> 3

       4. Select IBM COS as the S3 Storage Provider.

          Choose the S3 provider.
          Choose a number from below, or type in your own value
                1 / Choose this option to configure Storage to AWS S3
                  \ "AWS"
                2 / Choose this option to configure Storage to Ceph Systems
                  \ "Ceph"
                3 / Choose this option to configure Storage to Dreamhost
                  \ "Dreamhost"
                4 / Choose this option to configure Storage to IBM COS S3
                  \ "IBMCOS"
                5 / Choose this option to configure Storage to Minio
                  \ "Minio"
                Provider>4

       5. Enter the Access Key and Secret.

              AWS Access Key ID - leave blank for anonymous access or runtime credentials.
              access_key_id> <>
              AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
              secret_access_key> <>

        6. Specify the endpoint for IBM COS.  For Public IBM COS, choose from the options below.  For On
           Premise IBM COS, enter an endpoint address.

              Endpoint for IBM COS S3 API.
              Specify if using an IBM COS On Premise.
              Choose a number from below, or type in your own value
               1 / US Cross Region Endpoint
                 \ "s3-api.us-geo.objectstorage.softlayer.net"
               2 / US Cross Region Dallas Endpoint
                 \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
               3 / US Cross Region Washington DC Endpoint
                 \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
               4 / US Cross Region San Jose Endpoint
                 \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
               5 / US Cross Region Private Endpoint
                 \ "s3-api.us-geo.objectstorage.service.networklayer.com"
               6 / US Cross Region Dallas Private Endpoint
                 \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
               7 / US Cross Region Washington DC Private Endpoint
                 \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
               8 / US Cross Region San Jose Private Endpoint
                 \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
               9 / US Region East Endpoint
                 \ "s3.us-east.objectstorage.softlayer.net"
              10 / US Region East Private Endpoint
                 \ "s3.us-east.objectstorage.service.networklayer.com"
              11 / US Region South Endpoint
          [snip]
              34 / Toronto Single Site Private Endpoint
                 \ "s3.tor01.objectstorage.service.networklayer.com"
              endpoint>1

        7. Specify an IBM COS Location Constraint.  The location constraint must match the endpoint when
           using IBM Cloud Public.  For on-prem COS, do not make a selection from this list, just press
           Enter.

               1 / US Cross Region Standard
                 \ "us-standard"
               2 / US Cross Region Vault
                 \ "us-vault"
               3 / US Cross Region Cold
                 \ "us-cold"
               4 / US Cross Region Flex
                 \ "us-flex"
               5 / US East Region Standard
                 \ "us-east-standard"
               6 / US East Region Vault
                 \ "us-east-vault"
               7 / US East Region Cold
                 \ "us-east-cold"
               8 / US East Region Flex
                 \ "us-east-flex"
               9 / US South Region Standard
                 \ "us-south-standard"
              10 / US South Region Vault
                 \ "us-south-vault"
          [snip]
              32 / Toronto Flex
                 \ "tor01-flex"
          location_constraint>1

        8. Specify a canned ACL.  IBM Cloud (Storage) supports “public-read” and “private”.  IBM Cloud
           (Infra) supports all the canned ACLs.  On-Premise COS supports all the canned ACLs.

          Canned ACL used when creating buckets and/or storing objects in S3.
          For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
          Choose a number from below, or type in your own value
                1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
                \ "private"
                2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
                \ "public-read"
                3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
                \ "public-read-write"
                4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
                \ "authenticated-read"
          acl> 1

        9. Review the displayed configuration and accept to save the “remote”, then quit.  The config
           file should look like this:

               [xxx]
               type = s3
               Provider = IBMCOS
               access_key_id = xxx
               secret_access_key = yyy
               endpoint = s3-api.us-geo.objectstorage.softlayer.net
               location_constraint = us-standard
               acl = private

        10. Execute rclone commands.

               1)  Create a bucket.
                   rclone mkdir IBM-COS-XREGION:newbucket
               2)  List available buckets.
                   rclone lsd IBM-COS-XREGION:
                   -1 2017-11-08 21:16:22        -1 test
                   -1 2018-02-14 20:16:39        -1 newbucket
               3)  List contents of a bucket.
                   rclone ls IBM-COS-XREGION:newbucket
                   18685952 test.exe
               4)  Copy a file from local to remote.
                   rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
               5)  Copy a file from remote to local.
                   rclone copy IBM-COS-XREGION:newbucket/file.txt .
               6)  Delete a file on remote.
                   rclone delete IBM-COS-XREGION:newbucket/file.txt

   IDrive e2
       Here is an example of making an IDrive e2 configuration.  First run:

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n

              Enter name for new remote.
              name> e2

              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
              XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
                 \ (s3)
              [snip]
              Storage> s3

              Option provider.
              Choose your S3 provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
              [snip]
              XX / IDrive e2
                 \ (IDrive)
              [snip]
              provider> IDrive

              Option env_auth.
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
              Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own boolean value (true or false).
              Press Enter for the default (false).
               1 / Enter AWS credentials in the next step.
                 \ (false)
               2 / Get AWS credentials from the environment (env vars or IAM).
                 \ (true)
              env_auth>

              Option access_key_id.
              AWS Access Key ID.
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              access_key_id> YOUR_ACCESS_KEY

              Option secret_access_key.
              AWS Secret Access Key (password).
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              secret_access_key> YOUR_SECRET_KEY

              Option acl.
              Canned ACL used when creating buckets and storing or copying objects.
              This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
              For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              Note that this ACL is applied when server-side copying objects as S3
              doesn't copy the ACL from the source but rather writes a fresh one.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
                 / Owner gets FULL_CONTROL.
               1 | No one else has access rights (default).
                 \ (private)
                 / Owner gets FULL_CONTROL.
               2 | The AllUsers group gets READ access.
                 \ (public-read)
                 / Owner gets FULL_CONTROL.
               3 | The AllUsers group gets READ and WRITE access.
                 | Granting this on a bucket is generally not recommended.
                 \ (public-read-write)
                 / Owner gets FULL_CONTROL.
               4 | The AuthenticatedUsers group gets READ access.
                 \ (authenticated-read)
                 / Object owner gets FULL_CONTROL.
               5 | Bucket owner gets READ access.
                 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
                 \ (bucket-owner-read)
                 / Both the object owner and the bucket owner get FULL_CONTROL over the object.
               6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
                 \ (bucket-owner-full-control)
              acl>

              Edit advanced config?
              y) Yes
              n) No (default)
              y/n>

              Configuration complete.
              Options:
              - type: s3
              - provider: IDrive
              - access_key_id: YOUR_ACCESS_KEY
              - secret_access_key: YOUR_SECRET_KEY
              - endpoint: q9d9.la12.idrivee2-5.com
              Keep this "e2" remote?
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y
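
       You can then use the remote in the usual way, e.g. (bucket name illustrative):

              rclone lsd e2:
              rclone copy /path/to/files e2:my-bucket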

   IONOS Cloud
       IONOS S3 Object Storage is a service offered by IONOS for storing and accessing  unstructured  data.   To
       connect  to  the  service,  you will need an access key and a secret key.  These can be found in the Data
       Center Designer, by selecting Manager resources > Object Storage Key Manager.

       Here is an example of a configuration.  First,  run  rclone  config.   This  will  walk  you  through  an
       interactive setup process.  Type n to add the new remote, and then enter a name:

              Enter name for new remote.
              name> ionos-fra

       Type s3 to choose the connection type:

              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
              XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
                 \ (s3)
              [snip]
              Storage> s3

       Type IONOS:

              Option provider.
              Choose your S3 provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
              [snip]
              XX / IONOS Cloud
                 \ (IONOS)
              [snip]
              provider> IONOS

       Press Enter to choose the default option Enter AWS credentials in the next step:

              Option env_auth.
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
              Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own boolean value (true or false).
              Press Enter for the default (false).
               1 / Enter AWS credentials in the next step.
                 \ (false)
               2 / Get AWS credentials from the environment (env vars or IAM).
                 \ (true)
              env_auth>

       Enter your Access Key and Secret Key.  These can be retrieved in the Data Center Designer by
       clicking on the menu “Manager resources” / “Object Storage Key Manager”.

              Option access_key_id.
              AWS Access Key ID.
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              access_key_id> YOUR_ACCESS_KEY

              Option secret_access_key.
              AWS Secret Access Key (password).
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              secret_access_key> YOUR_SECRET_KEY

       Choose the region where your bucket is located:

              Option region.
              Region where your bucket will be created and your data stored.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / Frankfurt, Germany
                 \ (de)
               2 / Berlin, Germany
                 \ (eu-central-2)
               3 / Logrono, Spain
                 \ (eu-south-2)
               region> 1

       Choose the endpoint from the same region:

              Option endpoint.
              Endpoint for IONOS S3 Object Storage.
              Specify the endpoint from the same region.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / Frankfurt, Germany
                 \ (s3-eu-central-1.ionoscloud.com)
               2 / Berlin, Germany
                 \ (s3-eu-central-2.ionoscloud.com)
               3 / Logrono, Spain
                 \ (s3-eu-south-2.ionoscloud.com)
              endpoint> 1

       Press Enter to choose the default option or choose the desired ACL setting:

              Option acl.
              Canned ACL used when creating buckets and storing or copying objects.
              This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
              For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              Note that this ACL is applied when server-side copying objects as S3
              doesn't copy the ACL from the source but rather writes a fresh one.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
                 / Owner gets FULL_CONTROL.
               1 | No one else has access rights (default).
                 \ (private)
                 / Owner gets FULL_CONTROL.
              [snip]
              acl>

       Press Enter to skip the advanced config:

              Edit advanced config?
              y) Yes
              n) No (default)
              y/n>

       Press Enter to save the configuration, and then q to quit the configuration process:

              Configuration complete.
              Options:
              - type: s3
              - provider: IONOS
              - access_key_id: YOUR_ACCESS_KEY
              - secret_access_key: YOUR_SECRET_KEY
              - endpoint: s3-eu-central-1.ionoscloud.com
              Keep this "ionos-fra" remote?
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Done!  Now you can try some commands (for macOS, use ./rclone instead of rclone).

       1) Create a bucket (the name must be unique within the whole IONOS S3)

          rclone mkdir ionos-fra:my-bucket

       2) List available buckets

          rclone lsd ionos-fra:

        3) List contents of a bucket

           rclone ls ionos-fra:my-bucket

        4) Copy a file from local to remote

           rclone copy /Users/file.txt ionos-fra:my-bucket

        5) Copy a file from remote to local

           rclone copy ionos-fra:my-bucket/file.txt .

   Minio
       Minio is an object storage server built for cloud application developers and devops.

       It is very easy to install and provides an S3 compatible server which can be used by rclone.

       To use it, install Minio following the instructions here.
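
       For example, one common way to run a local Minio server for testing is via Docker (a sketch; the
       image name and the MINIO_ROOT_USER/MINIO_ROOT_PASSWORD variables are as documented by Minio and
       may differ between releases):

              # run a local Minio server on port 9000 with explicit credentials
              docker run -p 9000:9000 \
                -e MINIO_ROOT_USER=USWUXHGYZQYFYFFIT3RE \
                -e MINIO_ROOT_PASSWORD=MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 \
                minio/minio server /data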

       When it configures itself, Minio will print something like this:

              Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
              AccessKey: USWUXHGYZQYFYFFIT3RE
              SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
              Region:    us-east-1
              SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

              Browser Access:
                 http://192.168.1.106:9000  http://172.23.0.1:9000

              Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
                 $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

              Object API (Amazon S3 compatible):
                 Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
                 Java:       https://docs.minio.io/docs/java-client-quickstart-guide
                 Python:     https://docs.minio.io/docs/python-client-quickstart-guide
                 JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
                 .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

              Drive Capacity: 26 GiB Free, 165 GiB Total

       These details need to go into rclone config like this.  Note that it is important to put the region in as
       stated above.

              env_auth> 1
              access_key_id> USWUXHGYZQYFYFFIT3RE
              secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
              region> us-east-1
              endpoint> http://192.168.1.106:9000
              location_constraint>
              server_side_encryption>

       Which makes the config file look like this

              [minio]
              type = s3
              provider = Minio
              env_auth = false
              access_key_id = USWUXHGYZQYFYFFIT3RE
              secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
              region = us-east-1
              endpoint = http://192.168.1.106:9000
              location_constraint =
              server_side_encryption =

       So once set up, for example, to copy files into a bucket

              rclone copy /path/to/files minio:bucket

   Qiniu Cloud Object Storage (Kodo)
       Qiniu Cloud Object Storage (Kodo) is built on Qiniu’s independently developed core technology
       and, proven by extensive customer use, holds a leading position in its market.  Kodo can be
       widely applied to mass data management.

       To configure access to Qiniu Kodo, follow the steps below:

       1. Run rclone config and select n for a new remote.

          rclone config
          No remotes found, make a new one?
          n) New remote
          s) Set configuration password
          q) Quit config
          n/s/q> n

        2. Give the name of the configuration.  For example, name it “qiniu”.

          name> qiniu

       3. Select s3 storage.

          Choose a number from below, or type in your own value
           1 / 1Fichier
             \ (fichier)
           2 / Akamai NetStorage
             \ (netstorage)
           3 / Alias for an existing remote
             \ (alias)
           4 / Amazon Drive
             \ (amazon cloud drive)
           5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
             \ (s3)
          [snip]
          Storage> s3

       4. Select Qiniu provider.

          Choose a number from below, or type in your own value
          1 / Amazon Web Services (AWS) S3
             \ "AWS"
          [snip]
          22 / Qiniu Object Storage (Kodo)
             \ (Qiniu)
          [snip]
          provider> Qiniu

       5. Enter your SecretId and SecretKey of Qiniu Kodo.

          Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
          Only applies if access_key_id and secret_access_key is blank.
          Enter a boolean value (true or false). Press Enter for the default ("false").
          Choose a number from below, or type in your own value
           1 / Enter AWS credentials in the next step
             \ "false"
           2 / Get AWS credentials from the environment (env vars or IAM)
             \ "true"
          env_auth> 1
          AWS Access Key ID.
          Leave blank for anonymous access or runtime credentials.
          Enter a string value. Press Enter for the default ("").
          access_key_id> AKIDxxxxxxxxxx
          AWS Secret Access Key (password)
          Leave blank for anonymous access or runtime credentials.
          Enter a string value. Press Enter for the default ("").
          secret_access_key> xxxxxxxxxxx

        6. Select the region, endpoint and location constraint for Qiniu Kodo.  The first prompt selects
           the standard region; the endpoint and location constraint must then be chosen to match it.

             / The default endpoint - a good choice if you are unsure.
           1 | East China Region 1.
             | Needs location constraint cn-east-1.
             \ (cn-east-1)
             / East China Region 2.
           2 | Needs location constraint cn-east-2.
             \ (cn-east-2)
             / North China Region 1.
           3 | Needs location constraint cn-north-1.
             \ (cn-north-1)
             / South China Region 1.
           4 | Needs location constraint cn-south-1.
             \ (cn-south-1)
             / North America Region.
           5 | Needs location constraint us-north-1.
             \ (us-north-1)
             / Southeast Asia Region 1.
           6 | Needs location constraint ap-southeast-1.
             \ (ap-southeast-1)
             / Northeast Asia Region 1.
           7 | Needs location constraint ap-northeast-1.
             \ (ap-northeast-1)
          [snip]
           region> 1

          Option endpoint.
          Endpoint for Qiniu Object Storage.
          Choose a number from below, or type in your own value.
          Press Enter to leave empty.
           1 / East China Endpoint 1
             \ (s3-cn-east-1.qiniucs.com)
           2 / East China Endpoint 2
             \ (s3-cn-east-2.qiniucs.com)
           3 / North China Endpoint 1
             \ (s3-cn-north-1.qiniucs.com)
           4 / South China Endpoint 1
             \ (s3-cn-south-1.qiniucs.com)
           5 / North America Endpoint 1
             \ (s3-us-north-1.qiniucs.com)
           6 / Southeast Asia Endpoint 1
             \ (s3-ap-southeast-1.qiniucs.com)
           7 / Northeast Asia Endpoint 1
             \ (s3-ap-northeast-1.qiniucs.com)
          endpoint> 1

          Option location_constraint.
          Location constraint - must be set to match the Region.
          Used when creating buckets only.
          Choose a number from below, or type in your own value.
          Press Enter to leave empty.
           1 / East China Region 1
             \ (cn-east-1)
           2 / East China Region 2
             \ (cn-east-2)
           3 / North China Region 1
             \ (cn-north-1)
           4 / South China Region 1
             \ (cn-south-1)
           5 / North America Region 1
             \ (us-north-1)
           6 / Southeast Asia Region 1
             \ (ap-southeast-1)
           7 / Northeast Asia Region 1
             \ (ap-northeast-1)
          location_constraint> 1

       7. Choose acl and storage class.

          Note that this ACL is applied when server-side copying objects as S3
          doesn't copy the ACL from the source but rather writes a fresh one.
          Enter a string value. Press Enter for the default ("").
          Choose a number from below, or type in your own value
             / Owner gets FULL_CONTROL.
           1 | No one else has access rights (default).
             \ (private)
             / Owner gets FULL_CONTROL.
           2 | The AllUsers group gets READ access.
             \ (public-read)
          [snip]
          acl> 2
           The storage class to use when storing new objects in Qiniu Kodo.
          Enter a string value. Press Enter for the default ("").
          Choose a number from below, or type in your own value
           1 / Standard storage class
             \ (STANDARD)
           2 / Infrequent access storage mode
             \ (LINE)
           3 / Archive storage mode
             \ (GLACIER)
           4 / Deep archive storage mode
             \ (DEEP_ARCHIVE)
          [snip]
          storage_class> 1
          Edit advanced config? (y/n)
          y) Yes
          n) No (default)
          y/n> n
          Remote config
          --------------------
          [qiniu]
          - type: s3
          - provider: Qiniu
          - access_key_id: xxx
          - secret_access_key: xxx
          - region: cn-east-1
          - endpoint: s3-cn-east-1.qiniucs.com
          - location_constraint: cn-east-1
          - acl: public-read
          - storage_class: STANDARD
          --------------------
          y) Yes this is OK (default)
          e) Edit this remote
          d) Delete this remote
          y/e/d> y
          Current remotes:

          Name                 Type
          ====                 ====
          qiniu                s3
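
       With the remote saved, it can be used like any other S3 remote, e.g. (bucket name illustrative):

              rclone mkdir qiniu:my-bucket
              rclone copy /path/to/files qiniu:my-bucket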

   RackCorp
       RackCorp Object Storage is an S3 compatible object storage platform from the cloud provider
       RackCorp.  The service is fast, reliable, well priced and located in many strategic locations
       unserviced by others, to help you maintain data sovereignty.

       Before you can use RackCorp Object Storage, you’ll need to sign up for an account on the RackCorp
       portal.  Next you can create an access key, a secret key and buckets, in your location of choice.
       These details are required later in the configuration process, when rclone config asks for your
       access_key_id and secret_access_key.

       Your config should end up looking a bit like this:

              [RCS3-demo-config]
              type = s3
              provider = RackCorp
              env_auth = true
              access_key_id = YOURACCESSKEY
              secret_access_key = YOURSECRETACCESSKEY
              region = au-nsw
              endpoint = s3.rackcorp.com
              location_constraint = au-nsw
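
       You can then use the remote as usual, e.g. (bucket name illustrative):

              rclone lsd RCS3-demo-config:
              rclone copy /path/to/files RCS3-demo-config:my-bucket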

   Scaleway
       Scaleway’s Object Storage platform allows you to store anything from backups, logs and web assets
       to documents and photos.  Files can be dropped from the Scaleway console or transferred through
       the Scaleway API and CLI, or using any S3-compatible tool.

       Scaleway provides an S3 interface which can be configured for use with rclone like this:

              [scaleway]
              type = s3
              provider = Scaleway
              env_auth = false
              endpoint = s3.nl-ams.scw.cloud
              access_key_id = SCWXXXXXXXXXXXXXX
              secret_access_key = 1111111-2222-3333-44444-55555555555555
              region = nl-ams
              location_constraint =
              acl = private
              server_side_encryption =
              storage_class =

       C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway, and it works the same way
       as Glacier on S3 by accepting the “GLACIER” storage_class.  You can therefore configure your
       remote with storage_class = GLACIER to upload directly to C14.  Don’t forget that files in this
       state can’t be read back; you will need to restore them to the “STANDARD” storage_class before
       being able to read them (see the “restore” section above).
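
       A sketch of the round trip, assuming the [scaleway] remote above and an illustrative bucket name
       (the restore options are those of the s3 backend restore command):

              # upload straight to C14 by forcing the GLACIER storage class
              rclone copy --s3-storage-class GLACIER /path/to/files scaleway:my-bucket
              # later, bring the files back to STANDARD so they can be read again
              rclone backend restore scaleway:my-bucket -o priority=Standard -o lifetime=1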

   Seagate Lyve Cloud
       Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use.

       Here is a config run-through for a remote called remote - you may of course choose a different
       name.  Note that to create an access key and secret key you will need to create a service account
       first.

              $ rclone config
              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote

       Choose s3 backend

              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
              XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
                 \ (s3)
              [snip]
              Storage> s3

       Choose LyveCloud as S3 provider

              Choose your S3 provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
              [snip]
              XX / Seagate Lyve Cloud
                 \ (LyveCloud)
              [snip]
              provider> LyveCloud

       Take the default (just press enter) to enter access key and secret in the config file.

              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
              Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own boolean value (true or false).
              Press Enter for the default (false).
               1 / Enter AWS credentials in the next step.
                 \ (false)
               2 / Get AWS credentials from the environment (env vars or IAM).
                 \ (true)
              env_auth>

              AWS Access Key ID.
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              access_key_id> XXX

              AWS Secret Access Key (password).
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              secret_access_key> YYY

       Leave region blank

              Region to connect to.
              Leave blank if you are using an S3 clone and you don't have a region.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
                 / Use this if unsure.
               1 | Will use v4 signatures and an empty region.
                 \ ()
                 / Use this only if v4 signatures don't work.
               2 | E.g. pre Jewel/v10 CEPH.
                 \ (other-v2-signature)
              region>

       Choose an endpoint from the list

              Endpoint for S3 API.
              Required when using an S3 clone.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / Seagate Lyve Cloud US East 1 (Virginia)
                 \ (s3.us-east-1.lyvecloud.seagate.com)
               2 / Seagate Lyve Cloud US West 1 (California)
                 \ (s3.us-west-1.lyvecloud.seagate.com)
               3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
                 \ (s3.ap-southeast-1.lyvecloud.seagate.com)
              endpoint> 1

       Leave location constraint blank

              Location constraint - must be set to match the Region.
              Leave blank if not sure. Used when creating buckets only.
              Enter a value. Press Enter to leave empty.
              location_constraint>

       Choose default ACL (private).

              Canned ACL used when creating buckets and storing or copying objects.
              This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
              For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              Note that this ACL is applied when server-side copying objects as S3
              doesn't copy the ACL from the source but rather writes a fresh one.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
                 / Owner gets FULL_CONTROL.
               1 | No one else has access rights (default).
                 \ (private)
              [snip]
              acl>

       And the config file should end up looking like this:

              [remote]
              type = s3
              provider = LyveCloud
              access_key_id = XXX
              secret_access_key = YYY
              endpoint = s3.us-east-1.lyvecloud.seagate.com

   SeaweedFS
       SeaweedFS is a distributed storage system for blobs, objects, files and data lakes, with O(1)
       disk seeks and a scalable file metadata store.  It has an S3 compatible object storage interface.
       SeaweedFS can also act as a gateway to a remote S3-compatible object store
       (https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage) to cache data and
       metadata with asynchronous write back, for fast local speed and to minimize access cost.

       Assuming SeaweedFS is configured with weed shell as follows:

              > s3.bucket.create -name foo
              > s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
              {
                "identities": [
                  {
                    "name": "me",
                    "credentials": [
                      {
                        "accessKey": "any",
                        "secretKey": "any"
                      }
                    ],
                    "actions": [
                      "Read:foo",
                      "Write:foo",
                      "List:foo",
                      "Tagging:foo",
                      "Admin:foo"
                    ]
                  }
                ]
              }

       To use rclone with SeaweedFS, the above setup should end up with something like this in your config:

              [seaweedfs_s3]
              type = s3
              provider = SeaweedFS
              access_key_id = any
              secret_access_key = any
              endpoint = localhost:8333

       So once set up, for example, to copy files into a bucket:

              rclone copy /path/to/files seaweedfs_s3:foo

   Wasabi
       Wasabi is a cloud-based object storage service for a broad range of applications and use  cases.   Wasabi
       is  designed for individuals and organizations that require a high-performance, reliable, and secure data
       storage infrastructure at minimal cost.

       Wasabi provides an S3 interface which can be configured for use with rclone like this.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              n/s> n
              name> wasabi
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
                 \ "s3"
              [snip]
              Storage> s3
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own value
               1 / Enter AWS credentials in the next step
                 \ "false"
               2 / Get AWS credentials from the environment (env vars or IAM)
                 \ "true"
              env_auth> 1
              AWS Access Key ID - leave blank for anonymous access or runtime credentials.
              access_key_id> YOURACCESSKEY
              AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
              secret_access_key> YOURSECRETACCESSKEY
              Region to connect to.
              Choose a number from below, or type in your own value
                 / The default endpoint - a good choice if you are unsure.
               1 | US Region, Northern Virginia, or Pacific Northwest.
                 | Leave location constraint empty.
                 \ "us-east-1"
              [snip]
              region> us-east-1
              Endpoint for S3 API.
              Leave blank if using AWS to use the default endpoint for the region.
              Specify if using an S3 clone such as Ceph.
              endpoint> s3.wasabisys.com
              Location constraint - must be set to match the Region. Used when creating buckets only.
              Choose a number from below, or type in your own value
               1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
                 \ ""
              [snip]
              location_constraint>
              Canned ACL used when creating buckets and/or storing objects in S3.
              For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              Choose a number from below, or type in your own value
               1 / Owner gets FULL_CONTROL. No one else has access rights (default).
                 \ "private"
              [snip]
              acl>
              The server-side encryption algorithm used when storing this object in S3.
              Choose a number from below, or type in your own value
               1 / None
                 \ ""
               2 / AES256
                 \ "AES256"
              server_side_encryption>
              The storage class to use when storing objects in S3.
              Choose a number from below, or type in your own value
               1 / Default
                 \ ""
               2 / Standard storage class
                 \ "STANDARD"
               3 / Reduced redundancy storage class
                 \ "REDUCED_REDUNDANCY"
               4 / Standard Infrequent Access storage class
                 \ "STANDARD_IA"
              storage_class>
              Remote config
              --------------------
              [wasabi]
              env_auth = false
              access_key_id = YOURACCESSKEY
              secret_access_key = YOURSECRETACCESSKEY
              region = us-east-1
              endpoint = s3.wasabisys.com
              location_constraint =
              acl =
              server_side_encryption =
              storage_class =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       This will leave the config file looking like this.

              [wasabi]
              type = s3
              provider = Wasabi
              env_auth = false
              access_key_id = YOURACCESSKEY
              secret_access_key = YOURSECRETACCESSKEY
               region = us-east-1
              endpoint = s3.wasabisys.com
              location_constraint =
              acl =
              server_side_encryption =
              storage_class =
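
       You can then use the remote as usual, e.g. (bucket name illustrative):

              rclone mkdir wasabi:my-bucket
              rclone copy /path/to/files wasabi:my-bucket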

   Alibaba OSS
       Here is an example of making an Alibaba Cloud (Aliyun) OSS configuration.  First run:

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> oss
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
               4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
                 \ "s3"
              [snip]
              Storage> s3
              Choose your S3 provider.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Amazon Web Services (AWS) S3
                 \ "AWS"
               2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
                 \ "Alibaba"
               3 / Ceph Object Storage
                 \ "Ceph"
              [snip]
              provider> Alibaba
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
              Only applies if access_key_id and secret_access_key is blank.
              Enter a boolean value (true or false). Press Enter for the default ("false").
              Choose a number from below, or type in your own value
               1 / Enter AWS credentials in the next step
                 \ "false"
               2 / Get AWS credentials from the environment (env vars or IAM)
                 \ "true"
              env_auth> 1
              AWS Access Key ID.
              Leave blank for anonymous access or runtime credentials.
              Enter a string value. Press Enter for the default ("").
              access_key_id> accesskeyid
              AWS Secret Access Key (password)
              Leave blank for anonymous access or runtime credentials.
              Enter a string value. Press Enter for the default ("").
              secret_access_key> secretaccesskey
              Endpoint for OSS API.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / East China 1 (Hangzhou)
                 \ "oss-cn-hangzhou.aliyuncs.com"
               2 / East China 2 (Shanghai)
                 \ "oss-cn-shanghai.aliyuncs.com"
               3 / North China 1 (Qingdao)
                 \ "oss-cn-qingdao.aliyuncs.com"
              [snip]
              endpoint> 1
              Canned ACL used when creating buckets and storing or copying objects.

              Note that this ACL is applied when server-side copying objects as S3
              doesn't copy the ACL from the source but rather writes a fresh one.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Owner gets FULL_CONTROL. No one else has access rights (default).
                 \ "private"
               2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
                 \ "public-read"
                 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
              [snip]
              acl> 1
              The storage class to use when storing new objects in OSS.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Default
                 \ ""
               2 / Standard storage class
                 \ "STANDARD"
               3 / Archive storage mode.
                 \ "GLACIER"
               4 / Infrequent access storage mode.
                 \ "STANDARD_IA"
              storage_class> 1
              Edit advanced config? (y/n)
              y) Yes
              n) No
              y/n> n
              Remote config
              --------------------
              [oss]
              type = s3
              provider = Alibaba
              env_auth = false
              access_key_id = accesskeyid
              secret_access_key = secretaccesskey
              endpoint = oss-cn-hangzhou.aliyuncs.com
              acl = private
              storage_class = Standard
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

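       If you would rather not keep keys in the config file, credentials can instead be supplied at runtime.
       A minimal sketch, assuming env_auth = true is set on the remote and using the standard AWS environment
       variable names:

              export AWS_ACCESS_KEY_ID=accesskeyid
              export AWS_SECRET_ACCESS_KEY=secretaccesskey
              rclone lsd oss:
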
   China Mobile Ecloud Elastic Object Storage (EOS)
       Here is an example of making a China Mobile Ecloud Elastic Object Storage (EOS) configuration.  First
       run:

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> ChinaMobile
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
               ...
               5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
                 \ (s3)
               ...
              Storage> s3
              Option provider.
              Choose your S3 provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               ...
               4 / China Mobile Ecloud Elastic Object Storage (EOS)
                 \ (ChinaMobile)
               ...
              provider> ChinaMobile
              Option env_auth.
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
              Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own boolean value (true or false).
              Press Enter for the default (false).
               1 / Enter AWS credentials in the next step.
                 \ (false)
               2 / Get AWS credentials from the environment (env vars or IAM).
                 \ (true)
              env_auth>
              Option access_key_id.
              AWS Access Key ID.
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              access_key_id> accesskeyid
              Option secret_access_key.
              AWS Secret Access Key (password).
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              secret_access_key> secretaccesskey
              Option endpoint.
              Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
                 / The default endpoint - a good choice if you are unsure.
               1 | East China (Suzhou)
                 \ (eos-wuxi-1.cmecloud.cn)
               2 / East China (Jinan)
                 \ (eos-jinan-1.cmecloud.cn)
               3 / East China (Hangzhou)
                 \ (eos-ningbo-1.cmecloud.cn)
               4 / East China (Shanghai-1)
                 \ (eos-shanghai-1.cmecloud.cn)
               5 / Central China (Zhengzhou)
                 \ (eos-zhengzhou-1.cmecloud.cn)
               6 / Central China (Changsha-1)
                 \ (eos-hunan-1.cmecloud.cn)
               7 / Central China (Changsha-2)
                 \ (eos-zhuzhou-1.cmecloud.cn)
               8 / South China (Guangzhou-2)
                 \ (eos-guangzhou-1.cmecloud.cn)
               9 / South China (Guangzhou-3)
                 \ (eos-dongguan-1.cmecloud.cn)
              10 / North China (Beijing-1)
                 \ (eos-beijing-1.cmecloud.cn)
              11 / North China (Beijing-2)
                 \ (eos-beijing-2.cmecloud.cn)
              12 / North China (Beijing-3)
                 \ (eos-beijing-4.cmecloud.cn)
              13 / North China (Huhehaote)
                 \ (eos-huhehaote-1.cmecloud.cn)
              14 / Southwest China (Chengdu)
                 \ (eos-chengdu-1.cmecloud.cn)
              15 / Southwest China (Chongqing)
                 \ (eos-chongqing-1.cmecloud.cn)
              16 / Southwest China (Guiyang)
                 \ (eos-guiyang-1.cmecloud.cn)
              17 / Northwest China (Xian)
                 \ (eos-xian-1.cmecloud.cn)
              18 / Yunnan China (Kunming)
                 \ (eos-yunnan.cmecloud.cn)
              19 / Yunnan China (Kunming-2)
                 \ (eos-yunnan-2.cmecloud.cn)
              20 / Tianjin China (Tianjin)
                 \ (eos-tianjin-1.cmecloud.cn)
              21 / Jilin China (Changchun)
                 \ (eos-jilin-1.cmecloud.cn)
              22 / Hubei China (Xiangyan)
                 \ (eos-hubei-1.cmecloud.cn)
              23 / Jiangxi China (Nanchang)
                 \ (eos-jiangxi-1.cmecloud.cn)
              24 / Gansu China (Lanzhou)
                 \ (eos-gansu-1.cmecloud.cn)
              25 / Shanxi China (Taiyuan)
                 \ (eos-shanxi-1.cmecloud.cn)
              26 / Liaoning China (Shenyang)
                 \ (eos-liaoning-1.cmecloud.cn)
              27 / Hebei China (Shijiazhuang)
                 \ (eos-hebei-1.cmecloud.cn)
              28 / Fujian China (Xiamen)
                 \ (eos-fujian-1.cmecloud.cn)
              29 / Guangxi China (Nanning)
                 \ (eos-guangxi-1.cmecloud.cn)
              30 / Anhui China (Huainan)
                 \ (eos-anhui-1.cmecloud.cn)
              endpoint> 1
              Option location_constraint.
              Location constraint - must match endpoint.
              Used when creating buckets only.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / East China (Suzhou)
                 \ (wuxi1)
               2 / East China (Jinan)
                 \ (jinan1)
               3 / East China (Hangzhou)
                 \ (ningbo1)
               4 / East China (Shanghai-1)
                 \ (shanghai1)
               5 / Central China (Zhengzhou)
                 \ (zhengzhou1)
               6 / Central China (Changsha-1)
                 \ (hunan1)
               7 / Central China (Changsha-2)
                 \ (zhuzhou1)
               8 / South China (Guangzhou-2)
                 \ (guangzhou1)
               9 / South China (Guangzhou-3)
                 \ (dongguan1)
              10 / North China (Beijing-1)
                 \ (beijing1)
              11 / North China (Beijing-2)
                 \ (beijing2)
              12 / North China (Beijing-3)
                 \ (beijing4)
              13 / North China (Huhehaote)
                 \ (huhehaote1)
              14 / Southwest China (Chengdu)
                 \ (chengdu1)
              15 / Southwest China (Chongqing)
                 \ (chongqing1)
              16 / Southwest China (Guiyang)
                 \ (guiyang1)
              17 / Northwest China (Xian)
                 \ (xian1)
              18 / Yunnan China (Kunming)
                 \ (yunnan)
              19 / Yunnan China (Kunming-2)
                 \ (yunnan2)
              20 / Tianjin China (Tianjin)
                 \ (tianjin1)
              21 / Jilin China (Changchun)
                 \ (jilin1)
              22 / Hubei China (Xiangyan)
                 \ (hubei1)
              23 / Jiangxi China (Nanchang)
                 \ (jiangxi1)
              24 / Gansu China (Lanzhou)
                 \ (gansu1)
              25 / Shanxi China (Taiyuan)
                 \ (shanxi1)
              26 / Liaoning China (Shenyang)
                 \ (liaoning1)
              27 / Hebei China (Shijiazhuang)
                 \ (hebei1)
              28 / Fujian China (Xiamen)
                 \ (fujian1)
              29 / Guangxi China (Nanning)
                 \ (guangxi1)
              30 / Anhui China (Huainan)
                 \ (anhui1)
              location_constraint> 1
              Option acl.
              Canned ACL used when creating buckets and storing or copying objects.
              This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
              For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              Note that this ACL is applied when server-side copying objects as S3
              doesn't copy the ACL from the source but rather writes a fresh one.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
                 / Owner gets FULL_CONTROL.
               1 | No one else has access rights (default).
                 \ (private)
                 / Owner gets FULL_CONTROL.
               2 | The AllUsers group gets READ access.
                 \ (public-read)
                 / Owner gets FULL_CONTROL.
               3 | The AllUsers group gets READ and WRITE access.
                 | Granting this on a bucket is generally not recommended.
                 \ (public-read-write)
                 / Owner gets FULL_CONTROL.
               4 | The AuthenticatedUsers group gets READ access.
                 \ (authenticated-read)
                 / Object owner gets FULL_CONTROL.
              acl> private
              Option server_side_encryption.
              The server-side encryption algorithm used when storing this object in S3.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / None
                 \ ()
               2 / AES256
                 \ (AES256)
              server_side_encryption>
              Option storage_class.
              The storage class to use when storing new objects in ChinaMobile.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / Default
                 \ ()
               2 / Standard storage class
                 \ (STANDARD)
               3 / Archive storage mode
                 \ (GLACIER)
               4 / Infrequent access storage mode
                 \ (STANDARD_IA)
              storage_class>
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n
              --------------------
              [ChinaMobile]
              type = s3
              provider = ChinaMobile
              access_key_id = accesskeyid
              secret_access_key = secretaccesskey
              endpoint = eos-wuxi-1.cmecloud.cn
              location_constraint = wuxi1
              acl = private
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

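       The remote can then be used in the usual way; for example, to upload a directory and verify the
       transfer (bucket name illustrative):

              rclone copy /path/to/files ChinaMobile:my-bucket
              rclone check /path/to/files ChinaMobile:my-bucket
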
   ArvanCloud
       ArvanCloud Object Storage goes beyond the limits of traditional file storage.  It gives you access to
       backup and archived files and allows sharing.  Files like a profile image in an app, images sent by
       users or scanned documents can be stored securely and easily in the Object Storage service.

       ArvanCloud provides an S3 interface which can be configured for use with rclone like this.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              n/s> n
              name> ArvanCloud
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
                 \ "s3"
              [snip]
              Storage> s3
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own value
               1 / Enter AWS credentials in the next step
                 \ "false"
               2 / Get AWS credentials from the environment (env vars or IAM)
                 \ "true"
              env_auth> 1
              AWS Access Key ID - leave blank for anonymous access or runtime credentials.
              access_key_id> YOURACCESSKEY
              AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
              secret_access_key> YOURSECRETACCESSKEY
              Region to connect to.
              Choose a number from below, or type in your own value
                 / The default endpoint - a good choice if you are unsure.
               1 | US Region, Northern Virginia, or Pacific Northwest.
                 | Leave location constraint empty.
                 \ "us-east-1"
              [snip]
              region>
              Endpoint for S3 API.
              Leave blank if using ArvanCloud to use the default endpoint for the region.
              Specify if using an S3 clone such as Ceph.
              endpoint> s3.arvanstorage.com
              Location constraint - must be set to match the Region. Used when creating buckets only.
              Choose a number from below, or type in your own value
               1 / Empty for Iran-Tehran Region.
                 \ ""
              [snip]
              location_constraint>
              Canned ACL used when creating buckets and/or storing objects in S3.
              For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              Choose a number from below, or type in your own value
               1 / Owner gets FULL_CONTROL. No one else has access rights (default).
                 \ "private"
              [snip]
              acl>
              The server-side encryption algorithm used when storing this object in S3.
              Choose a number from below, or type in your own value
               1 / None
                 \ ""
               2 / AES256
                 \ "AES256"
              server_side_encryption>
              The storage class to use when storing objects in S3.
              Choose a number from below, or type in your own value
               1 / Default
                 \ ""
               2 / Standard storage class
                 \ "STANDARD"
              storage_class>
              Remote config
              --------------------
              [ArvanCloud]
              env_auth = false
              access_key_id = YOURACCESSKEY
              secret_access_key = YOURSECRETACCESSKEY
              region = ir-thr-at1
              endpoint = s3.arvanstorage.com
              location_constraint =
              acl =
              server_side_encryption =
              storage_class =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       This will leave the config file looking like this.

              [ArvanCloud]
              type = s3
              provider = ArvanCloud
              env_auth = false
              access_key_id = YOURACCESSKEY
              secret_access_key = YOURSECRETACCESSKEY
              region =
              endpoint = s3.arvanstorage.com
              location_constraint =
              acl =
              server_side_encryption =
              storage_class =

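       For example, to mirror a local directory to an ArvanCloud bucket (bucket name illustrative),
       previewing with --dry-run first:

              rclone sync --dry-run /home/local/directory ArvanCloud:my-bucket
              rclone sync /home/local/directory ArvanCloud:my-bucket
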
   Tencent COS
       Tencent Cloud Object Storage (COS) is a distributed storage service offered by Tencent Cloud for
       unstructured data.  It is a secure, stable, convenient, low-latency and low-cost service designed for
       massive amounts of data.

       To configure access to Tencent COS, follow the steps below:

       1. Run rclone config and select n for a new remote.

          rclone config
          No remotes found, make a new one?
          n) New remote
          s) Set configuration password
          q) Quit config
          n/s/q> n

       2. Give the name of the configuration.  For example, name it `cos'.

          name> cos

       3. Select s3 storage.

          Choose a number from below, or type in your own value
            1 / 1Fichier
             \ "fichier"
           2 / Alias for an existing remote
             \ "alias"
           3 / Amazon Drive
             \ "amazon cloud drive"
           4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
             \ "s3"
          [snip]
          Storage> s3

       4. Select TencentCOS provider.

          Choose a number from below, or type in your own value
          1 / Amazon Web Services (AWS) S3
             \ "AWS"
          [snip]
          11 / Tencent Cloud Object Storage (COS)
             \ "TencentCOS"
          [snip]
          provider> TencentCOS

       5. Enter your SecretId and SecretKey of Tencent Cloud.

          Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
          Only applies if access_key_id and secret_access_key is blank.
          Enter a boolean value (true or false). Press Enter for the default ("false").
          Choose a number from below, or type in your own value
           1 / Enter AWS credentials in the next step
             \ "false"
           2 / Get AWS credentials from the environment (env vars or IAM)
             \ "true"
          env_auth> 1
          AWS Access Key ID.
          Leave blank for anonymous access or runtime credentials.
          Enter a string value. Press Enter for the default ("").
          access_key_id> AKIDxxxxxxxxxx
          AWS Secret Access Key (password)
          Leave blank for anonymous access or runtime credentials.
          Enter a string value. Press Enter for the default ("").
          secret_access_key> xxxxxxxxxxx

       6. Select the endpoint for Tencent COS.  This is the standard endpoint for each region.

           1 / Beijing Region.
             \ "cos.ap-beijing.myqcloud.com"
           2 / Nanjing Region.
             \ "cos.ap-nanjing.myqcloud.com"
           3 / Shanghai Region.
             \ "cos.ap-shanghai.myqcloud.com"
           4 / Guangzhou Region.
             \ "cos.ap-guangzhou.myqcloud.com"
          [snip]
          endpoint> 4

       7. Choose acl and storage class.

          Note that this ACL is applied when server-side copying objects as S3
          doesn't copy the ACL from the source but rather writes a fresh one.
          Enter a string value. Press Enter for the default ("").
          Choose a number from below, or type in your own value
            1 / Owner gets FULL_CONTROL. No one else has access rights (default).
             \ "default"
          [snip]
          acl> 1
          The storage class to use when storing new objects in Tencent COS.
          Enter a string value. Press Enter for the default ("").
          Choose a number from below, or type in your own value
           1 / Default
             \ ""
          [snip]
          storage_class> 1
          Edit advanced config? (y/n)
          y) Yes
          n) No (default)
          y/n> n
          Remote config
          --------------------
          [cos]
          type = s3
          provider = TencentCOS
          env_auth = false
          access_key_id = xxx
          secret_access_key = xxx
          endpoint = cos.ap-guangzhou.myqcloud.com
          acl = default
          --------------------
          y) Yes this is OK (default)
          e) Edit this remote
          d) Delete this remote
          y/e/d> y
          Current remotes:

          Name                 Type
          ====                 ====
          cos                  s3

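       The cos remote can now be used, for example to list buckets and report how much data a bucket holds
       (bucket name illustrative):

              rclone lsd cos:
              rclone size cos:my-bucket
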
   Netease NOS
       For Netease NOS, configure as per the configurator (rclone config), setting the provider to Netease.
       This will automatically set force_path_style = false, which is necessary for it to run properly.

   Storj
       Storj is a decentralized cloud storage service which can be used through its native protocol or an S3
       compatible gateway.

       The S3 compatible gateway is configured using rclone config with a type of s3 and with a provider name of
       Storj.  Here is an example run of the configurator.

              Type of storage to configure.
              Storage> s3
              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
              Only applies if access_key_id and secret_access_key is blank.
              Choose a number from below, or type in your own boolean value (true or false).
              Press Enter for the default (false).
               1 / Enter AWS credentials in the next step.
                 \ (false)
               2 / Get AWS credentials from the environment (env vars or IAM).
                 \ (true)
              env_auth> 1
              Option access_key_id.
              AWS Access Key ID.
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              access_key_id> XXXX (as shown when creating the access grant)
              Option secret_access_key.
              AWS Secret Access Key (password).
              Leave blank for anonymous access or runtime credentials.
              Enter a value. Press Enter to leave empty.
              secret_access_key> XXXX (as shown when creating the access grant)
              Option endpoint.
              Endpoint of the Shared Gateway.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / EU1 Shared Gateway
                 \ (gateway.eu1.storjshare.io)
               2 / US1 Shared Gateway
                 \ (gateway.us1.storjshare.io)
               3 / Asia-Pacific Shared Gateway
                 \ (gateway.ap1.storjshare.io)
              endpoint> 1 (as shown when creating the access grant)
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n

       Note that S3 credentials are generated when you create an access grant.

   Backend quirks
       • --chunk-size is forced to be 64 MiB or greater.  This will use more memory than the default of 5 MiB.

       • Server side copy is disabled as it isn’t currently supported in the gateway.

       • GetTier and SetTier are not supported.

   Backend bugs
       Due  to  issue #39 uploading  multipart files via the S3 gateway causes them to lose their metadata.  For
       rclone’s purpose this means that the modification time is not stored,  nor  is  any  MD5SUM  (if  one  is
       available from the source).

       This has the following consequences:

       • Using rclone rcat will fail as the metadata doesn’t match after upload

       • Uploading files with rclone mount will fail for the same reason

         • This can be worked around by using --vfs-cache-mode writes or --vfs-cache-mode full or setting
           --s3-upload-cutoff large

       • Files uploaded via a multipart upload won’t have their modtimes stored

         • This  will  mean  that  rclone  sync  will  likely  keep  trying  to   upload   files   bigger   than
           --s3-upload-cutoff

         • This can be worked around with --checksum or --size-only or setting --s3-upload-cutoff large

         • The maximum value for --s3-upload-cutoff is 5GiB though

       One general purpose workaround is to set --s3-upload-cutoff 5G.  This means that rclone will upload files
       smaller  than 5GiB as single parts.  Note that this can be set in the config file with upload_cutoff = 5G
       or configured in the advanced settings.  If you regularly  transfer  files  larger  than  5G  then  using
       --checksum or --size-only in rclone sync is the recommended workaround.

   Comparison with the native protocol
       Use the native protocol to take advantage of client-side encryption as well as to achieve the best
       possible download performance.  Uploads will be erasure-coded locally, thus a 1GB upload will result
       in 2.68GB of data being uploaded to storage nodes across the network.

       Use this backend and the S3 compatible Hosted Gateway to increase upload performance and reduce the
       load on your systems and network.  Uploads will be encrypted and erasure-coded server-side, thus a 1GB
       upload will result in only 1GB of data being uploaded to storage nodes across the network.

       For a more detailed comparison please check the documentation of the storj backend.

   Limitations
       rclone  about is not supported by the S3 backend.  Backends without this capability cannot determine free
       space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

       See List of backends that do not support rclone about and rclone about

Backblaze B2

       B2 is Backblaze’s cloud storage system.

       Paths are specified as remote:bucket (or remote: for the lsd command).  You may put subdirectories in
       too, e.g. remote:bucket/path/to/dir.

   Configuration
       Here is an example of making a b2 configuration.  First run

              rclone config

       This  will  guide  you  through  an interactive setup process.  To authenticate you will either need your
       Account ID (a short hex number) and Master Application Key (a long hex number)  OR  an  Application  Key,
       which  is  the  recommended method.  See below for further details on generating and using an Application
       Key.

              No remotes found, make a new one?
              n) New remote
              q) Quit config
              n/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Backblaze B2
                 \ "b2"
              [snip]
              Storage> b2
              Account ID or Application Key ID
              account> 123456789abc
              Application Key
              key> 0123456789abcdef0123456789abcdef0123456789
              Endpoint for the service - leave blank normally.
              endpoint>
              Remote config
              --------------------
              [remote]
              account = 123456789abc
              key = 0123456789abcdef0123456789abcdef0123456789
              endpoint =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       This remote is called remote and can now be used like this

       See all buckets

              rclone lsd remote:

       Create a new bucket

              rclone mkdir remote:bucket

       List the contents of a bucket

              rclone ls remote:bucket

       Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

              rclone sync -i /home/local/directory remote:bucket

   Application Keys
       B2 supports multiple Application Keys for different access permissions to B2 Buckets.

       You can use these with rclone too; you will need to use rclone version 1.43 or later.

       Follow Backblaze’s docs  to  create  an  Application  Key  with  the  required  permission  and  add  the
       applicationKeyId as the account and the Application Key itself as the key.

       Note that you must put the applicationKeyId as the account – you can’t use the master Account ID.  If you
       try then B2 will return 401 errors.

   --fast-list
       This  remote supports --fast-list which allows you to use fewer transactions in exchange for more memory.
       See the rclone docs for more details.

   Modified time
       The modified  time  is  stored  as  metadata  on  the  object  as  X-Bz-Info-src_last_modified_millis  as
       milliseconds  since  1970-01-01  in  the Backblaze standard.  Other tools should be able to use this as a
       modified time.

       Modified times are used in syncing and are fully supported.  Note that if a modification time needs to be
       updated on an object then it will create a new version of the object.

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       \           0x5C        ＼

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

       Note that in 2020-05 Backblaze started allowing \ characters in file names.  Rclone hasn’t changed its
       encoding as this could cause syncs to re-transfer files.  If you want rclone not to replace \ then see
       the --b2-encoding flag below and remove the BackSlash from the string.  This can be set in the config.
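
       A sketch of such a config entry, with BackSlash removed from the default encoding string (the remote
       name is illustrative):

              [b2]
              type = b2
              encoding = Slash,Del,Ctl,InvalidUtf8,Dot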

   SHA1 checksums
       The  SHA1  checksums  of  the  files  are  checked on upload and download and will be used in the syncing
       process.

       Large files (bigger than the limit in --b2-upload-cutoff) which are uploaded in chunks will  store  their
       SHA1 on the object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.

       For  a  large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums.  The
       local disk supports SHA1 checksums so large file transfers from local disk will have an  SHA1.   See  the
       overview for exactly which remotes support SHA1.

       Sources  which  don’t  support  SHA1, in particular crypt will upload large files without SHA1 checksums.
       This may be fixed in the future (see #1767).

       Files below --b2-upload-cutoff in size will always have an SHA1 regardless of the source.

   Transfers
       Backblaze recommends that you do lots of transfers simultaneously for maximum speed.  In tests from my
       SSD-equipped laptop the optimum setting is about --transfers 32 though higher numbers may be used for
       a slight speed improvement.  The optimum number for you may vary depending on your hardware, how big
       the files are, how much you want to load your computer, etc.  The default of --transfers 4 is
       definitely too low for Backblaze B2 though.

       Note that uploading big files (bigger than 200 MiB by default) will use a 96 MiB RAM buffer  by  default.
       There  can  be  at  most  --transfers  of these in use at any moment, so this sets the upper limit on the
       memory used.

   Versions
       When rclone uploads a new version of a file it creates a new version of it rather than overwriting the
       old one.  Likewise when you delete a file, the old version will be marked hidden and still be
       available.  Conversely, you may opt in to a “hard delete” of files with the --b2-hard-delete flag
       which would permanently remove the file instead of hiding it.

       Old versions of files, where available, are visible using the --b2-versions flag.

       It  is  also  possible  to  view a bucket as it was at a certain point in time, using the --b2-version-at
       flag.  This will show the file versions as they were at that time, showing files that have  been  deleted
       afterwards, and hiding files that were created since.
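
       For example, to list a bucket as it was on a given date (the date shown is illustrative; a full
       RFC3339 timestamp should also be accepted here):

              rclone --b2-version-at 2016-07-03 ls b2:cleanup-test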

       If  you  wish  to  remove  all the old versions then you can use the rclone cleanup remote:bucket command
       which will delete all the old versions of files, leaving the current ones intact.  You can also supply  a
       path    and    only   old   versions   under   that   path   will   be   deleted,   e.g. rclone   cleanup
       remote:bucket/path/to/stuff.

       Note that cleanup will remove partially uploaded files from the bucket if they are more than a day old.

       When you purge a bucket, the current and the old versions  will  be  deleted  then  the  bucket  will  be
       deleted.

       However delete will cause the current versions of the files to become hidden old versions.

       Here  is  a  session showing the listing and retrieval of an old version followed by a cleanup of the old
       versions.

       Show current version and all the versions with --b2-versions flag.

              $ rclone -q ls b2:cleanup-test
                      9 one.txt

              $ rclone -q --b2-versions ls b2:cleanup-test
                      9 one.txt
                      8 one-v2016-07-04-141032-000.txt
                     16 one-v2016-07-04-141003-000.txt
                     15 one-v2016-07-02-155621-000.txt

       Retrieve an old version

              $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

              $ ls -l /tmp/one-v2016-07-04-141003-000.txt
              -rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt

       Clean up all the old versions and show that they’ve gone.

              $ rclone -q cleanup b2:cleanup-test

              $ rclone -q ls b2:cleanup-test
                      9 one.txt

              $ rclone -q --b2-versions ls b2:cleanup-test
                      9 one.txt

   Data usage
       It is useful to know how many requests are sent to the server in different scenarios.

       All copy commands send the following 4 requests:

              /b2api/v1/b2_authorize_account
              /b2api/v1/b2_create_bucket
              /b2api/v1/b2_list_buckets
              /b2api/v1/b2_list_file_names

       The b2_list_file_names request will be sent once for every 1k files in the  remote  path,  providing  the
       checksum  and modification time of the listed files.  As of version 1.33 issue #818 causes extra requests
       to be sent when using B2 with Crypt.  When a copy operation does not require any files to be uploaded, no
       more requests will be sent.

       Uploading files that do not require chunking will send 2 requests per file upload:

              /b2api/v1/b2_get_upload_url
              /b2api/v1/b2_upload_file/

       Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and
       another 2 requests for each chunk:

              /b2api/v1/b2_start_large_file
              /b2api/v1/b2_get_upload_part_url
              /b2api/v1/b2_upload_part/
              /b2api/v1/b2_finish_large_file

   Versions
       Versions  can  be  viewed  with the --b2-versions flag.  When it is set rclone will show and act on older
       versions of files.  For example

       Listing without --b2-versions

              $ rclone -q ls b2:cleanup-test
                      9 one.txt

       And with

              $ rclone -q --b2-versions ls b2:cleanup-test
                      9 one.txt
                      8 one-v2016-07-04-141032-000.txt
                     16 one-v2016-07-04-141003-000.txt
                     15 one-v2016-07-02-155621-000.txt

       This shows that the current version is unchanged but older versions can be seen.  These have the UTC
       date that they were uploaded to the server, to the nearest millisecond, appended to them.

       Note  that  when using --b2-versions no file write operations are permitted, so you can’t upload files or
       delete them.

   B2 and rclone link
       Rclone supports generating file share links for private B2 buckets.  They can either be for a file,
       for example:

              ./rclone link B2:bucket/path/to/file.txt
              https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx

       or if run on a directory you will get:

              ./rclone link B2:bucket/path
              https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx

       You can then use the authorization token (the part of the URL from the ?Authorization= on) on any file
       path under that directory.  For example:

              https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
              https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
              https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx

   Standard options
       Here are the Standard options specific to b2 (Backblaze B2).

   --b2-account
       Account ID or Application Key ID.

       Properties:

       • Config: account

       • Env Var: RCLONE_B2_ACCOUNT

       • Type: string

       • Required: true

   --b2-key
       Application Key.

       Properties:

       • Config: key

       • Env Var: RCLONE_B2_KEY

       • Type: string

       • Required: true

   --b2-hard-delete
       Permanently delete files on remote removal, otherwise hide files.

       Properties:

       • Config: hard_delete

       • Env Var: RCLONE_B2_HARD_DELETE

       • Type: bool

       • Default: false
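
       Like other backend options it can also be set via its environment variable; a sketch (the path is
       illustrative):

              RCLONE_B2_HARD_DELETE=true rclone delete b2:my-bucket/path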

   Advanced options
       Here are the Advanced options specific to b2 (Backblaze B2).

   --b2-endpoint
       Endpoint for the service.

       Leave blank normally.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_B2_ENDPOINT

       • Type: string

       • Required: false

   --b2-test-mode
       A flag string for X-Bz-Test-Mode header for debugging.

       This is for debugging purposes only.  Setting it to one of the strings below will cause B2 to return
       specific errors:

       • “fail_some_uploads”

       • “expire_some_account_authorization_tokens”

       • “force_cap_exceeded”

       These will be set in the “X-Bz-Test-Mode” header which is documented in the b2 integrations checklist.

       Properties:

       • Config: test_mode

       • Env Var: RCLONE_B2_TEST_MODE

       • Type: string

       • Required: false

   --b2-versions
       Include old versions in directory listings.

       Note  that  when  using  this no file write operations are permitted, so you can’t upload files or delete
       them.

       Properties:

       • Config: versions

       • Env Var: RCLONE_B2_VERSIONS

       • Type: bool

       • Default: false

   --b2-version-at
       Show file versions as they were at the specified time.

       Note that when using this no file write operations are permitted, so you can’t  upload  files  or  delete
       them.

       Properties:

       • Config: version_at

       • Env Var: RCLONE_B2_VERSION_AT

       • Type: Time

       • Default: off

   --b2-upload-cutoff
       Cutoff for switching to chunked upload.

       Files above this size will be uploaded in chunks of "--b2-chunk-size".

       This value should be set no larger than 4.657 GiB (== 5 GB).

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_B2_UPLOAD_CUTOFF

       • Type: SizeSuffix

       • Default: 200Mi

   --b2-copy-cutoff
       Cutoff for switching to multipart copy.

       Any files larger than this that need to be server-side copied will be copied in chunks of this size.

       The minimum is 0 and the maximum is 4.6 GiB.

       Properties:

       • Config: copy_cutoff

       • Env Var: RCLONE_B2_COPY_CUTOFF

       • Type: SizeSuffix

       • Default: 4Gi

   --b2-chunk-size
       Upload chunk size.

       When uploading large files, chunk the file into this size.

       Must fit in memory.  These chunks are buffered in memory and there might be a maximum of "--transfers"
       chunks in progress at once.

       5,000,000 Bytes is the minimum size.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_B2_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 96Mi

   --b2-disable-checksum
       Disable checksums for large (> upload cutoff) files.

       Normally  rclone  will  calculate  the SHA1 checksum of the input before uploading it so it can add it to
       metadata on the object.  This is great for data integrity checking but can cause long  delays  for  large
       files to start uploading.

       Properties:

       • Config: disable_checksum

       • Env Var: RCLONE_B2_DISABLE_CHECKSUM

       • Type: bool

       • Default: false

   --b2-download-url
       Custom endpoint for downloads.

       This  is  usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through
       the Cloudflare network.  Rclone works with private buckets by sending an “Authorization” header.  If  the
       custom  endpoint rewrites the requests for authentication, e.g., in Cloudflare Workers, this header needs
       to be handled properly.  Leave blank if you want to use the endpoint provided by Backblaze.

       The URL provided here SHOULD have the protocol and SHOULD NOT  have  a  trailing  slash  or  specify  the
       /file/bucket subpath as rclone will request files with “{download_url}/file/{bucket_name}/{path}”.

       Example: https://mysubdomain.mydomain.tld (No trailing “/”, “file” or “bucket”)

       Properties:

       • Config: download_url

       • Env Var: RCLONE_B2_DOWNLOAD_URL

       • Type: string

       • Required: false

   --b2-download-auth-duration
       Time before the authorization token will expire in s or suffix ms|s|m|h|d.

       The  duration  before  the download authorization token will expire.  The minimum value is 1 second.  The
       maximum value is one week.

       Properties:

       • Config: download_auth_duration

       • Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION

       • Type: Duration

       • Default: 1w

   --b2-memory-pool-flush-time
       How often internal memory buffer pools will be flushed.  Uploads which require additional buffers
       (e.g. multipart) will use the memory pool for allocations.  This option controls how often unused
       buffers will be removed from the pool.

       Properties:

       • Config: memory_pool_flush_time

       • Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME

       • Type: Duration

       • Default: 1m0s

   --b2-memory-pool-use-mmap
       Whether to use mmap buffers in internal memory pool.

       Properties:

       • Config: memory_pool_use_mmap

       • Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP

       • Type: bool

       • Default: false

   --b2-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_B2_ENCODING

       • Type: MultiEncoder

       • Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

   Limitations
       rclone about is not supported by the B2 backend.  Backends without this capability cannot determine  free
       space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

       See List of backends that do not support rclone about and rclone about

Box

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

       The  initial  setup for Box involves getting a token from Box which you can do either in your browser, or
       with a config.json downloaded from Box to use JWT authentication.  rclone config walks you through it.

   Configuration
       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Box
                 \ "box"
              [snip]
              Storage> box
              Box App Client Id - leave blank normally.
              client_id>
              Box App Client Secret - leave blank normally.
              client_secret>
              Box App config.json location
              Leave blank normally.
              Enter a string value. Press Enter for the default ("").
              box_config_file>
              Box App Primary Access Token
              Leave blank normally.
              Enter a string value. Press Enter for the default ("").
              access_token>

              Enter a string value. Press Enter for the default ("user").
              Choose a number from below, or type in your own value
               1 / Rclone should act on behalf of a user
                 \ "user"
               2 / Rclone should act on behalf of a service account
                 \ "enterprise"
              box_sub_type>
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              client_id =
              client_secret =
              token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Note that rclone runs a webserver on your local machine to collect the token as returned from Box.
       This only runs from the moment it opens your browser to the moment you get back the verification code.
       This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running
       a host firewall.

       Once configured you can then use rclone like this,

       List directories in top level of your Box

              rclone lsd remote:

       List all the files in your Box

              rclone ls remote:

       To copy a local directory to a Box directory called backup

              rclone copy /home/source remote:backup

   Using rclone with an Enterprise account with SSO
       If you have an “Enterprise” account type with Box with single sign on (SSO), you need to create a
       password to use Box with rclone.  This can be done in your Enterprise Box account by going to
       Settings, the “Account” tab, and then setting the password in the “Authentication” field.

       Once you have done this, you can set up your Enterprise Box account using the same procedure detailed
       above, using the password you have just set.

   Invalid refresh token
       According to the box docs
       (https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):

              Each refresh_token is valid for one use in 60 days.

       This means that if you

       • Don’t use the box remote for 60 days

       • Copy the config file with a box refresh token in and use it in two places

       • Get an error on a token refresh

       then rclone will return an error which includes the text Invalid refresh token.

       To fix this you will need to use oauth2 again to update the refresh token.  You can use the methods in
       the remote setup docs, bearing in mind that if you use the “copy the config file” method, you should
       not use that remote on the computer you did the authentication on.

       Here is how to do it.

              $ rclone config
              Current remotes:

              Name                 Type
              ====                 ====
              remote               box

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> e
              Choose a number from below, or type in an existing value
               1 > remote
              remote> remote
              --------------------
              [remote]
              type = box
              token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
              --------------------
              Edit remote
              Value "client_id" = ""
              Edit? (y/n)>
              y) Yes
              n) No
              y/n> n
              Value "client_secret" = ""
              Edit? (y/n)>
              y) Yes
              n) No
              y/n> n
              Remote config
              Already have a token - refresh?
              y) Yes
              n) No
              y/n> y
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              type = box
              token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Modified time and hashes
       Box  allows  modification  times to be set on objects accurate to 1 second.  These will be used to detect
       whether objects need syncing or not.

       Box supports SHA1 type hashes, so you can use the --checksum flag.

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       \           0x5C        ＼

       File names can also not end with the following characters.  These only get replaced if they are the  last
       character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Transfers
       For  files  above 50 MiB rclone will use a chunked transfer.  Rclone will upload up to --transfers chunks
       at the same time (shared among all the multipart  uploads).   Chunks  are  buffered  in  memory  and  are
       normally 8 MiB so increasing --transfers will increase memory use.

   Deleting files
       Depending  on the enterprise settings for your user, the item will either be actually deleted from Box or
       moved to the trash.

       Emptying the trash is supported via the rclone cleanup command, however this deletes every trashed
       file and folder individually so it may take a very long time.  Emptying the trash via the WebUI does
       not have this limitation so it is advised to empty the trash via the WebUI.

   Root folder ID
       You  can  set  the  root_folder_id  for rclone.  This is the directory (identified by its Folder ID) that
       rclone considers to be the root of your Box drive.

       Normally you will leave this blank and rclone will determine the correct root to use itself.

       However you can set this to restrict rclone to a specific folder hierarchy.

       In order to do this you will have to find the Folder ID of the directory  you  wish  rclone  to  display.
       This will be the last segment of the URL when you open the relevant folder in the Box web interface.

       So    if    the    folder    you    want    rclone    to    use    has    a    URL   which   looks   like
       https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use 11xxxxxxxxx8 as  the  root_folder_id
       in the config.

   Standard options
       Here are the Standard options specific to box (Box).

   --box-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_BOX_CLIENT_ID

       • Type: string

       • Required: false

   --box-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_BOX_CLIENT_SECRET

       • Type: string

       • Required: false

   --box-box-config-file
       Box App config.json location

       Leave blank normally.

       Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

       Properties:

       • Config: box_config_file

       • Env Var: RCLONE_BOX_BOX_CONFIG_FILE

       • Type: string

       • Required: false

   --box-access-token
       Box App Primary Access Token

       Leave blank normally.

       Properties:

       • Config: access_token

       • Env Var: RCLONE_BOX_ACCESS_TOKEN

       • Type: string

       • Required: false

   –box-box-sub-type
       Properties:

       • Config: box_sub_type

       • Env Var: RCLONE_BOX_BOX_SUB_TYPE

       • Type: string

       • Default: “user”

       • Examples:

         • “user”

           • Rclone should act on behalf of a user.

         • “enterprise”

           • Rclone should act on behalf of a service account.

   Advanced options
       Here are the Advanced options specific to box (Box).

   –box-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_BOX_TOKEN

       • Type: string

       • Required: false

   –box-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_BOX_AUTH_URL

       • Type: string

       • Required: false

   –box-token-url
       Token server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_BOX_TOKEN_URL

       • Type: string

       • Required: false

   –box-root-folder-id
       Fill in for rclone to use a non-root folder as its starting point.

       Properties:

       • Config: root_folder_id

       • Env Var: RCLONE_BOX_ROOT_FOLDER_ID

       • Type: string

       • Default: “0”

   –box-upload-cutoff
       Cutoff for switching to multipart upload (>= 50 MiB).

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_BOX_UPLOAD_CUTOFF

       • Type: SizeSuffix

       • Default: 50Mi

   –box-commit-retries
       Max number of times to try committing a multipart file.

       Properties:

       • Config: commit_retries

       • Env Var: RCLONE_BOX_COMMIT_RETRIES

       • Type: int

       • Default: 100

   –box-list-chunk
       Size of listing chunk 1-1000.

       Properties:

       • Config: list_chunk

       • Env Var: RCLONE_BOX_LIST_CHUNK

       • Type: int

       • Default: 1000

   –box-owned-by
       Only show items owned by the login (email address) passed in.

       Properties:

       • Config: owned_by

       • Env Var: RCLONE_BOX_OWNED_BY

       • Type: string

       • Required: false

   –box-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_BOX_ENCODING

       • Type: MultiEncoder

       • Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot

   Limitations
       Note that Box is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.

       Box file names can’t have the \ character in them.  rclone maps this to and from an identical
       looking unicode equivalent ＼ (U+FF3C FULLWIDTH REVERSE SOLIDUS).

       Box only supports filenames up to 255 characters in length.

       rclone about is not supported by the Box backend.  Backends without this capability cannot determine free
       space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

       See List of backends that do not support rclone about and rclone about

Cache (DEPRECATED)

       The cache remote wraps another existing remote and stores file structure and its data  for  long  running
       tasks like rclone mount.

   Status
       The cache backend code is working but it currently doesn’t have a maintainer, so there are
       outstanding bugs which aren’t getting fixed (see
       https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3A%22Remote%3A+Cache%22).

       The cache backend is due to be phased out in favour of the VFS caching layer  eventually  which  is  more
       tightly integrated into rclone.

       Until this happens we recommend only using the cache backend if you find you can’t work without it.
       There are many docs online describing the use of the cache backend to minimize API hits; by and
       large these are out of date and the cache backend isn’t needed in those scenarios any more.

   Configuration
       To get started you just need to have an existing remote which can be configured with cache.

       Here is an example of how to make a remote called test-cache.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              n/r/c/s/q> n
              name> test-cache
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Cache a remote
                 \ "cache"
              [snip]
              Storage> cache
              Remote to cache.
              Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
              "myremote:bucket" or maybe "myremote:" (not recommended).
              remote> local:/test
              Optional: The URL of the Plex server
              plex_url> http://127.0.0.1:32400
              Optional: The username of the Plex user
              plex_username> dummyusername
              Optional: The password of the Plex user
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank
              y/g/n> y
              Enter the password:
              password:
              Confirm the password:
              password:
              The size of a chunk. Lower value good for slow connections but can affect seamless reading.
              Default: 5M
              Choose a number from below, or type in your own value
               1 / 1 MiB
                 \ "1M"
               2 / 5 MiB
                 \ "5M"
               3 / 10 MiB
                 \ "10M"
              chunk_size> 2
              How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
              Accepted units are: "s", "m", "h".
              Default: 5m
              Choose a number from below, or type in your own value
               1 / 1 hour
                 \ "1h"
               2 / 24 hours
                 \ "24h"
                3 / 48 hours
                 \ "48h"
               info_age> 3
              The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
              Default: 10G
              Choose a number from below, or type in your own value
               1 / 500 MiB
                 \ "500M"
               2 / 1 GiB
                 \ "1G"
               3 / 10 GiB
                 \ "10G"
              chunk_total_size> 3
              Remote config
              --------------------
              [test-cache]
              remote = local:/test
              plex_url = http://127.0.0.1:32400
              plex_username = dummyusername
              plex_password = *** ENCRYPTED ***
              chunk_size = 5M
              info_age = 48h
              chunk_total_size = 10G

       You can then use it like this,

       List directories in top level of your drive

              rclone lsd test-cache:

       List all the files in your drive

              rclone ls test-cache:

       To start a cached mount

              rclone mount --allow-other test-cache: /var/tmp/test-cache

   Write Features
   Offline uploading
       In an effort to make writing through cache more reliable, the backend now supports this feature which can
       be activated by specifying a cache-tmp-upload-path.

       A file goes through these states when using this feature:

       1. An upload is started (usually by copying a file on the cache remote)

       2. When  the  copy  to the temporary location is complete the file is part of the cached remote and looks
          and behaves like any other file (reading included)

       3. After cache-tmp-wait-time passes and the file is next in line, rclone move is used to move the file to
          the cloud provider

       4. Reading the file still works during the upload but most modifications on it will be prohibited

       5. Once the move is complete the file is unlocked for modifications as it becomes like any other
          regular file

       6. If the file is being read through cache when it’s actually deleted from the temporary path then
          cache will simply swap the source to the cloud provider without interrupting the reading (a
          small blip can happen though)

       Files are uploaded in sequence and only one file is uploaded at a time.  Uploads will be stored in
       a queue and processed based on the order they were added.  The queue and the temporary storage are
       persistent across restarts but can be cleared on startup with the --cache-db-purge flag.
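
       A minimal sketch of enabling offline uploading on a mount, reusing the test-cache remote from the
       example above (the paths and wait time are illustrative):

              rclone mount --allow-other \
                  --cache-tmp-upload-path /var/tmp/cache-upload \
                  --cache-tmp-wait-time 1m \
                  test-cache: /var/tmp/test-cache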

   Write Support
       Writes  are supported through cache.  One caveat is that a mounted cache remote does not add any retry or
       fallback mechanism to the upload operation.  This will  depend  on  the  implementation  of  the  wrapped
       remote.  Consider using Offline uploading for reliable writes.

       One  special  case  is  covered  with cache-writes which will cache the file data at the same time as the
       upload when it is enabled making it available from  the  cache  store  immediately  once  the  upload  is
       finished.

   Read Features
   Multiple connections
       To counter the high latency between a local PC where rclone is running and cloud providers, the
       cache remote can split a single read into multiple requests for smaller file chunks and combine
       them locally, so that data is usually available before the reader needs it.

       This is similar to buffering when media files are played online.  Rclone will stay around the
       current read marker but always try its best to stay ahead and prepare the data before it is needed.

   Plex Integration
       There is a direct integration with Plex which allows cache to detect during reading whether the
       file is in playback or not.  This helps cache adapt how it queries the cloud provider depending on
       what the data is needed for.

       Scans will use a minimum number of workers (1), while during a confirmed playback cache will deploy
       the configured number of workers.

       This integration opens the doorway to additional performance improvements which will be explored  in  the
       near future.

       Note:  If  Plex  options  are  not  configured,  cache  will function with its configured options without
       adapting any of its settings.

       To enable it, run rclone config and add all the Plex options (endpoint, username and password) to
       your remote; the integration will then be enabled automatically.

       Affected settings:

       • cache-workers: Configured value during confirmed playback, or 1 all the other times

   Certificate Validation
       When  the Plex server is configured to only accept secure connections, it is possible to use .plex.direct
       URLs to ensure certificate validation succeeds.  These URLs are used by Plex internally to connect to the
       Plex server securely.

       The format for these URLs is the following:

       https://ip-with-dots-replaced.server-hash.plex.direct:32400/

       The ip-with-dots-replaced part can be any IPv4 address, where the dots have been  replaced  with  dashes,
       e.g. 127.0.0.1 becomes 127-0-0-1.

       To get the server-hash part, the easiest way is to visit

       https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token

       This  page  will list all the available Plex servers for your account with at least one .plex.direct link
       for each.  Copy one URL and replace the IP address with the desired address.  This can  be  used  as  the
       plex_url value.
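
       Putting the two parts together, the resulting value might look like this (the server hash
       abcdef0123456789 is a made-up placeholder):

              plex_url = https://127-0-0-1.abcdef0123456789.plex.direct:32400/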

   Known issues
   Mount and –dir-cache-time
       –dir-cache-time  controls  the first layer of directory caching which works at the mount layer.  Being an
       independent caching mechanism from the cache backend, it  will  manage  its  own  entries  based  on  the
       configured time.

       To  avoid  getting  in a scenario where dir cache has obsolete data and cache would have the correct one,
       try to set --dir-cache-time to a lower time than --cache-info-age.  Default values are already configured
       in this way.
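
       For example, the following mount keeps the directory cache shorter-lived than the cache entry
       information, per the rule above (the values are illustrative):

              rclone mount --dir-cache-time 30m --cache-info-age 6h test-cache: /var/tmp/test-cache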

   Windows support - Experimental
       There are a couple of issues with Windows mount functionality that still require some investigations.  It
       should be considered as experimental thus far as fixes come in for this OS.

       Most of the issues seem to be related to the differences between filesystems on Linux flavors and
       Windows, as cache is heavily dependent on them.

       Any reports or feedback on how cache behaves on this OS are greatly appreciated.

       • https://github.com/rclone/rclone/issues/1935

       • https://github.com/rclone/rclone/issues/1907

       • https://github.com/rclone/rclone/issues/1834

   Risk of throttling
       Future iterations of the cache backend will make use of the polling functionality of the cloud
       provider to synchronize and at the same time make writing through it more tolerant to failures.

       There are a couple of enhancements being tracked to add these, but in the meantime there is a valid
       concern that the expiring cache listings can lead to cloud provider throttles or bans due to
       repeated queries on it for very large mounts.

       Some recommendations:

       • don’t use a very small interval for entry information (--cache-info-age)

       • while writes aren’t yet optimised, you can still write through cache which gives you the
         advantage of adding the file in the cache at the same time if configured to do so

       Future enhancements:

       • https://github.com/rclone/rclone/issues/1937

       • https://github.com/rclone/rclone/issues/1936

   cache and crypt
       One common scenario is to keep your data encrypted in the cloud provider using the crypt  remote.   crypt
       uses  a  similar  technique  to wrap around an existing remote and handles this translation in a seamless
       way.

       There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache

       During testing, I experienced a lot of bans with the remotes in  this  order.   I  suspect  it  might  be
       related  to  how  crypt opens files on the cloud provider which makes it think we’re downloading the full
       file instead of small chunks.  Organizing the remotes in this order yields better results:  cloud  remote
       -> cache -> crypt

   absolute remote paths
       cache  can  not differentiate between relative and absolute paths for the wrapped remote.  Any path given
       in the remote config setting and on the command line will be passed to the wrapped remote as is, but  for
       storing the chunks on disk the path will be made relative by removing any leading / character.

       This  behavior is irrelevant for most backend types, but there are backends where a leading / changes the
       effective directory, e.g. in the sftp backend paths starting with a / are relative to the root of the SSH
       server and paths without are relative to the user home directory.  As a  result  sftp:bin  and  sftp:/bin
       will share the same cache folder, even if they represent a different directory on the SSH server.

   Cache and Remote Control (–rc)
       Cache supports the new --rc mode in rclone and can be remote controlled through the following end
       points.  By default the listener is disabled; it only runs if you add the --rc flag.

   rc cache/expire
       Purge  a  remote  from  the  cache  backend.   Supports  either  a directory or a file.  It supports both
       encrypted and unencrypted file names if cache is wrapped by crypt.

       Params:

       • remote = path to remote (required)

       • withData = true/false to delete cached data (chunks) as well (optional, false by default)
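
       For example, assuming rclone was started with --rc, the following purges a directory and its cached
       chunks (pictures/2020 is an illustrative path):

              rclone rc cache/expire remote=pictures/2020 withData=true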

   Standard options
       Here are the Standard options specific to cache (Cache a remote).

   –cache-remote
       Remote to cache.

       Normally  should  contain  a  `:'  and  a  path,  e.g. “myremote:path/to/dir”, “myremote:bucket” or maybe
       “myremote:” (not recommended).

       Properties:

       • Config: remote

       • Env Var: RCLONE_CACHE_REMOTE

       • Type: string

       • Required: true

   –cache-plex-url
       The URL of the Plex server.

       Properties:

       • Config: plex_url

       • Env Var: RCLONE_CACHE_PLEX_URL

       • Type: string

       • Required: false

   –cache-plex-username
       The username of the Plex user.

       Properties:

       • Config: plex_username

       • Env Var: RCLONE_CACHE_PLEX_USERNAME

       • Type: string

       • Required: false

   –cache-plex-password
       The password of the Plex user.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: plex_password

       • Env Var: RCLONE_CACHE_PLEX_PASSWORD

       • Type: string

       • Required: false

   –cache-chunk-size
       The size of a chunk (partial file data).

       Use lower numbers for slower connections.  If the chunk size is changed, any downloaded  chunks  will  be
       invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_CACHE_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 5Mi

       • Examples:

         • “1M”

           • 1 MiB

         • “5M”

           • 5 MiB

         • “10M”

           • 10 MiB

   –cache-info-age
       How  long to cache file structure information (directory listings, file size, times, etc.).  If all write
       operations are done through the cache then you can safely make this value very large as the  cache  store
       will also be updated in real time.

       Properties:

       • Config: info_age

       • Env Var: RCLONE_CACHE_INFO_AGE

       • Type: Duration

       • Default: 6h0m0s

       • Examples:

         • “1h”

           • 1 hour

         • “24h”

           • 24 hours

         • “48h”

           • 48 hours

   –cache-chunk-total-size
       The total size that the chunks can take up on the local disk.

       If  the  cache exceeds this value then it will start to delete the oldest chunks until it goes under this
       value.

       Properties:

       • Config: chunk_total_size

       • Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE

       • Type: SizeSuffix

       • Default: 10Gi

       • Examples:

         • “500M”

           • 500 MiB

         • “1G”

           • 1 GiB

         • “10G”

           • 10 GiB

   Advanced options
       Here are the Advanced options specific to cache (Cache a remote).

   –cache-plex-token
       The plex token for authentication - auto set normally.

       Properties:

       • Config: plex_token

       • Env Var: RCLONE_CACHE_PLEX_TOKEN

       • Type: string

       • Required: false

   –cache-plex-insecure
       Skip all certificate verification when connecting to the Plex server.

       Properties:

       • Config: plex_insecure

       • Env Var: RCLONE_CACHE_PLEX_INSECURE

       • Type: string

       • Required: false

   –cache-db-path
       Directory to store file structure metadata DB.

       The remote name is used as the DB file name.

       Properties:

       • Config: db_path

       • Env Var: RCLONE_CACHE_DB_PATH

       • Type: string

       • Default: “$HOME/.cache/rclone/cache-backend”

   –cache-chunk-path
       Directory to cache chunk files.

       Path to where partial file data (chunks) are stored locally.  The remote name is appended  to  the  final
       path.

       This  config  follows  the  “–cache-db-path”.   If you specify a custom location for “–cache-db-path” and
       don’t  specify  one  for  “–cache-chunk-path”  then  “–cache-chunk-path”  will  use  the  same  path   as
       “–cache-db-path”.

       Properties:

       • Config: chunk_path

       • Env Var: RCLONE_CACHE_CHUNK_PATH

       • Type: string

       • Default: “$HOME/.cache/rclone/cache-backend”

   –cache-db-purge
       Clear all the cached data for this remote on start.

       Properties:

       • Config: db_purge

       • Env Var: RCLONE_CACHE_DB_PURGE

       • Type: bool

       • Default: false

   –cache-chunk-clean-interval
       How often should the cache perform cleanups of the chunk storage.

       The   default   value   should   be  ok  for  most  people.   If  you  find  that  the  cache  goes  over
       “cache-chunk-total-size” too often then try to lower this value to force  it  to  perform  cleanups  more
       often.

       Properties:

       • Config: chunk_clean_interval

       • Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL

       • Type: Duration

       • Default: 1m0s

   –cache-read-retries
       How many times to retry a read from a cache storage.

       Since reading from a cache stream is independent from downloading file data, readers can get to a
       point where there’s no more data in the cache.  Most of the time this indicates a connectivity
       issue if cache isn’t able to provide file data anymore.

       For really slow connections, increase this to a point where the stream is able to provide data, but
       your experience will be very stuttery.

       Properties:

       • Config: read_retries

       • Env Var: RCLONE_CACHE_READ_RETRIES

       • Type: int

       • Default: 10

   –cache-workers
       How many workers should run in parallel to download chunks.

       Higher values will mean more parallel processing (more CPU needed) and more concurrent requests on
       the cloud provider.  This impacts several aspects like the cloud provider API limits and the stress
       on the hardware that rclone runs on, but it also means that streams will be more fluid and data
       will be available much faster to readers.

       Note: If the optional Plex integration is enabled then this setting will adapt to  the  type  of  reading
       performed and the value specified here will be used as a maximum number of workers to use.

       Properties:

       • Config: workers

       • Env Var: RCLONE_CACHE_WORKERS

       • Type: int

       • Default: 4

   –cache-chunk-no-memory
       Disable the in-memory cache for storing chunks during streaming.

       By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as
       possible.

       This  transient  data is evicted as soon as it is read and the number of chunks stored doesn’t exceed the
       number of workers.  However, depending on other settings like “cache-chunk-size” and “cache-workers” this
       footprint can increase if there are parallel streams too (multiple files being read at the same time).

       If the hardware permits it, keep the in-memory cache enabled for overall better performance during
       streaming; set this flag to disable it if RAM is limited on the local machine.

       Properties:

       • Config: chunk_no_memory

       • Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY

       • Type: bool

       • Default: false

   –cache-rps
       Limits the number of requests per second to the source FS (-1 to disable).

       This setting places a hard limit on the number of requests per second that cache will make to the
       cloud provider remote, and tries to respect that value by inserting waits between reads.

       If you find that you’re getting banned or limited on the cloud provider through cache  and  know  that  a
       smaller  number  of  requests per second will allow you to work with it then you can use this setting for
       that.

       A good balance of all the other settings should make this setting unnecessary, but it is available
       for more special cases.

       NOTE:  This  will  limit  the number of requests during streams but other API calls to the cloud provider
       like directory listings will still pass.

       Properties:

       • Config: rps

       • Env Var: RCLONE_CACHE_RPS

       • Type: int

       • Default: -1

   –cache-writes
       Cache file data on writes through the FS.

       If you need to read files immediately after you upload them through cache you can  enable  this  flag  to
       have their data stored in the cache store at the same time during upload.

       Properties:

       • Config: writes

       • Env Var: RCLONE_CACHE_WRITES

       • Type: bool

       • Default: false

   –cache-tmp-upload-path
       Directory to keep temporary files until they are uploaded.

       This is the path that cache will use as temporary storage for new files that need to be uploaded to
       the cloud provider.

       Specifying a value will enable this feature.  Without it, the feature is completely disabled and
       files will be uploaded directly to the cloud provider.

       Properties:

       • Config: tmp_upload_path

       • Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH

       • Type: string

       • Required: false

   –cache-tmp-wait-time
       How long should files be stored in local cache before being uploaded.

       This  is  the duration that a file must wait in the temporary location cache-tmp-upload-path before it is
       selected for upload.

       Note that only one file is uploaded at a time and it can take longer to start the upload if a queue
       has formed for this purpose.

       Properties:

       • Config: tmp_wait_time

       • Env Var: RCLONE_CACHE_TMP_WAIT_TIME

       • Type: Duration

       • Default: 15s

   –cache-db-wait-time
       How long to wait for the DB to be available - 0 is unlimited.

       Only  one  process  can have the DB open at any one time, so rclone waits for this duration for the DB to
       become available before it gives an error.

       If you set it to 0 then it will wait forever.

       Properties:

       • Config: db_wait_time

       • Env Var: RCLONE_CACHE_DB_WAIT_TIME

       • Type: Duration

       • Default: 1s

   Backend commands
       Here are the commands specific to the cache backend.

       Run them with

              rclone backend COMMAND remote:

       The help below will explain what arguments each command takes.

       See the backend command for more info on how to pass options and arguments.

       These can be run on a running backend using the rc command backend/command.

   stats
       Print stats on the cache backend in JSON format.

              rclone backend stats remote: [options] [<arguments>+]

Chunker (BETA)

       The chunker overlay transparently splits large files into smaller chunks during upload to a wrapped
       remote and transparently assembles them back when the file is downloaded.  This allows you to
       effectively overcome size limits imposed by storage providers.

   Configuration
       To use it, first set up the underlying remote following the configuration instructions for  that  remote.
       You can also use a local pathname instead of a remote.

       First check your chosen remote is working - we’ll call it remote:path here.  Note that anything
       inside remote:path will be chunked and anything outside won’t.  This means that if you are using a
       bucket-based remote (e.g. S3, B2, swift) then you should probably put the bucket in the remote,
       e.g. s3:bucket.

       Now  configure chunker using rclone config.  We will call this one overlay to separate it from the remote
       itself.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> overlay
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Transparently chunk/split large files
                 \ "chunker"
              [snip]
              Storage> chunker
              Remote to chunk/unchunk.
              Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
              "myremote:bucket" or maybe "myremote:" (not recommended).
              Enter a string value. Press Enter for the default ("").
              remote> remote:path
              Files larger than chunk size will be split in chunks.
              Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
              chunk_size> 100M
              Choose how chunker handles hash sums. All modes but "none" require metadata.
              Enter a string value. Press Enter for the default ("md5").
              Choose a number from below, or type in your own value
               1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise
                 \ "none"
               2 / MD5 for composite files
                 \ "md5"
               3 / SHA1 for composite files
                 \ "sha1"
               4 / MD5 for all files
                 \ "md5all"
               5 / SHA1 for all files
                 \ "sha1all"
               6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported
                 \ "md5quick"
               7 / Similar to "md5quick" but prefers SHA1 over MD5
                 \ "sha1quick"
              hash_type> md5
              Edit advanced config? (y/n)
              y) Yes
              n) No
              y/n> n
              Remote config
              --------------------
              [overlay]
              type = chunker
               remote = remote:path
              chunk_size = 100M
              hash_type = md5
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Specifying the remote
       In normal use, make sure the remote has a : in.  If you specify the remote without a : then  rclone  will
       use  a  local  directory  of that name.  So if you use a remote of /path/to/secret/files then rclone will
       chunk stuff in that directory.  If you use a remote of name then rclone will put  files  in  a  directory
       called name in the current directory.

   Chunking
       When  rclone  starts  a  file  upload, chunker checks the file size.  If it doesn’t exceed the configured
       chunk size, chunker will just pass the file to the wrapped remote.  If a  file  is  large,  chunker  will
       transparently  cut data in pieces with temporary names and stream them one by one, on the fly.  Each data
       chunk will contain the specified number of bytes, except for the last one which may have less  data.   If
       file  size  is  unknown  in advance (this is called a streaming upload), chunker will internally create a
       temporary copy, record its size and repeat the above process.
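
       As a worked example, with chunk_size = 100M a 250 MiB upload is cut into three pieces of 100 MiB,
       100 MiB and 50 MiB, while a 90 MiB file is passed to the wrapped remote unchanged.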

       When upload completes, temporary chunk files are finally renamed.  This scheme guarantees that operations
       can be run in parallel and look from outside as atomic.  A similar method with hidden temporary chunks is
       used for other operations (copy/move/rename, etc.).  If an operation fails, hidden  chunks  are  normally
       destroyed, and the target composite file stays intact.

       When  a  composite  file  download is requested, chunker transparently assembles it by concatenating data
       chunks in order.  As the split is trivial one could even manually concatenate  data  chunks  together  to
       obtain the original content.

       When the rclone list command scans a directory on the wrapped remote, the potential chunk files are
       accounted for, grouped and assembled into composite directory entries.  Any temporary chunks are
       hidden.

       List and other commands can sometimes come across composite files with missing or invalid chunks,
       e.g. shadowed by a like-named directory or another file.  This usually means that the wrapped file
       system has been directly tampered with or damaged.  If chunker detects a missing chunk it will by
       default print a warning and skip the whole incomplete group of chunks, but proceed with the current
       command.  You can set the --chunker-fail-hard flag to have commands abort with an error message in
       such cases.

   Chunk names
       The   default   chunk   name   format   is   *.rclone_chunk.###,   hence   by  default  chunk  names  are
       BIG_FILE_NAME.rclone_chunk.001, BIG_FILE_NAME.rclone_chunk.002  etc.   You  can  configure  another  name
       format  using the name_format configuration file option.  The format uses asterisk * as a placeholder for
       the base file name and one or more consecutive hash characters # as a placeholder  for  sequential  chunk
       number.   There must be one and only one asterisk.  The number of consecutive hash characters defines the
       minimum length of a string representing a chunk number.  If the decimal chunk number has fewer
       digits than the number of hashes, it is left-padded by zeros.  If the decimal string is longer, it
       is left intact.  By default numbering starts from 1, but there is another option that allows the
       user to start from 0, e.g. for compatibility with legacy software.

       For example, if name format is big_*-##.part and original file name is data.txt and numbering starts from
       0, then the first chunk will be named big_data.txt-00.part, the 99th chunk will  be  big_data.txt-98.part
       and the 302nd chunk will become big_data.txt-301.part.

       Note  that  list  assembles composite directory entries only when chunk names match the configured format
       and treats non-conforming file names as normal non-chunked files.

       When using norename transactions, chunk names will additionally have a unique file version  suffix.   For
       example, BIG_FILE_NAME.rclone_chunk.001_bp562k.

   Metadata
       Besides data chunks, chunker will by default create a metadata object for a composite file.  The
       object is named after the original file.  Chunker allows the user to disable metadata completely
       (the none format).  Note that metadata is normally not created for files smaller than the
       configured chunk size.  This may change in future rclone releases.

   Simple JSON metadata format
       This is the default format.  It supports hash sums  and  chunk  validation  for  composite  files.   Meta
       objects carry the following fields:

       • ver - version of format, currently 1

       • size - total size of composite file

       • nchunks - number of data chunks in file

       • md5 - MD5 hashsum of composite file (if present)

       • sha1 - SHA1 hashsum (if present)

       • txn - identifies current version of the file

       There  is no field for composite file name as it’s simply equal to the name of meta object on the wrapped
       remote.  Please refer to respective sections for details on hashsums and modified time handling.

   No metadata
       You can disable meta objects by setting the meta format option to none.  In this mode chunker will
       scan the directory for all files that follow the configured chunk name format, group them by
       detecting chunks with the same base name and show group names as virtual composite files.  This
       method is more prone to missing chunk errors (especially a missing last chunk) than the format with
       metadata enabled.

   Hashsums
       Chunker supports hashsums only when compatible metadata is present.  Hence, if you choose a
       metadata format of none, chunker will report the hashsum as UNSUPPORTED.

       Please note that by default metadata is stored only for composite files.  If a file is smaller than
       the configured chunk size, chunker will transparently redirect hash requests to the wrapped remote,
       so support depends on that.  You will see the empty string as a hashsum of the requested type for
       small files if the wrapped remote doesn’t support it.

       Many storage backends support MD5 and SHA1 hash types, and so does chunker.  With chunker you can
       choose one or the other but not both.  MD5 is set by default as the most widely supported type.
       Since chunker keeps hashes for composite files and falls back to the wrapped remote hash for
       non-chunked ones, we advise you to choose the same hash type as supported by the wrapped remote so
       that your file listings look coherent.

       If your storage backend does not support MD5 or SHA1 but you need consistent file hashing,
       configure chunker with md5all or sha1all.  These two modes guarantee the given hash for all files.
       If the wrapped remote doesn’t support it, chunker will then add metadata to all files, even small
       ones.  However, this can double the number of small files in storage and incur additional service
       charges.  You can even use chunker to force md5/sha1 support in any other remote, at the expense of
       sidecar meta objects, by setting e.g. hash_type=sha1all to force hashsums and chunk_size=1P to
       effectively disable chunking.
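
       A minimal sketch of such a hash-forcing overlay in the config file, assuming an underlying remote
       named remote: (the section name hasher is hypothetical):

              [hasher]
              type = chunker
              remote = remote:path
              hash_type = sha1all
              chunk_size = 1P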

       Normally, when a file is copied to a chunker controlled remote, chunker will ask the file source
       for a compatible file hash and revert to on-the-fly calculation if none is found.  This involves
       some CPU overhead but provides a guarantee that the given hashsum is available.  Also, chunker will
       reject a server-side copy or move operation if source and destination hashsum types are different,
       resulting in extra network bandwidth use, too.  In some rare cases this may be undesired, so
       chunker provides two optional choices: sha1quick and md5quick.  If the source does not support the
       primary hash type and the quick mode is enabled, chunker will try to fall back to the secondary
       type.  This will save CPU and bandwidth but can result in empty hashsums at the destination.
       Beware of the consequences: the sync command will revert (sometimes silently) to time/size
       comparison if compatible hashsums between source and target are not found.

   Modified time
       Chunker  stores  modification  times  using  the  wrapped remote so support depends on that.  For a small
       non-chunked file the chunker overlay simply manipulates modification time of  the  wrapped  remote  file.
       For  a  composite file with metadata chunker will get and set modification time of the metadata object on
       the wrapped remote.  If file is chunked but metadata format is none then chunker  will  use  modification
       time of the first data chunk.

   Migrations
       The  idiomatic  way  to  migrate  to a different chunk size, hash type, transaction style or chunk naming
       scheme is to:

       • Collect all your chunked files under a directory and have your chunker remote point to it.

       • Create another directory (most probably on the same cloud storage) and  configure  a  new  remote  with
         desired metadata format, hash type, chunk naming etc.

       • Now  run  rclone  sync  -i  oldchunks:  newchunks: and all your data will be transparently converted in
         transfer.  This may take some time, yet chunker will try server-side copy if possible.

       • After checking data integrity you may remove the configuration section of the old remote (see
         the sketch below).
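
       A sketch of the full sequence, assuming remotes named oldchunks: and newchunks: as in the steps
       above:

              rclone sync -i oldchunks: newchunks:
              rclone check oldchunks: newchunks:
              rclone config delete oldchunks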

       If rclone gets killed during a long operation on a big composite file, hidden temporary chunks may
       stay in the directory.  They will not be shown by the list command but will eat up your account
       quota.  Please note that the deletefile command deletes only active chunks of a file.  As a
       workaround, you can use a remote pointing at the wrapped file system to see them.  An easy way to
       get rid of hidden garbage is to copy the littered directory somewhere using the chunker remote and
       purge the original directory.  The copy command will copy only active chunks while the purge will
       remove everything including the garbage.

   Caveats and Limitations
       Chunker  requires  wrapped remote to support server-side move (or copy + delete) operations, otherwise it
       will explicitly refuse to start.  This is because it internally renames temporary chunk  files  to  their
       final names when an operation completes successfully.

       Chunker encodes the chunk number in the file name, so with the default name_format setting it adds
       17 characters.  Chunker also adds a 7 character temporary suffix during operations.  Many file
       systems limit the base file name (without path) to 255 characters.  Using rclone’s crypt remote as
       a base file system limits file names to 143 characters.  Thus, the maximum name length is 231 for
       most files and 119 for chunker-over-crypt.  A user in need can change the name format to
       e.g. *.rcc## and save 10 characters (provided at most 99 chunks per file).

       Note that a move implemented using the copy-and-delete method may incur double charging with  some  cloud
       storage providers.

       Chunker will not automatically rename existing chunks when you run rclone config on a live remote
       and change the chunk name format.  Beware that, as a result, some files which were treated as
       chunks before the change can pop up in directory listings as normal files, and vice versa.  The
       same warning holds for the chunk size.  If you desperately need to change critical chunking
       settings, you should run the data migration described above.

       If wrapped remote is case insensitive, the chunker overlay will inherit that property (so you can’t  have
       a file called “Hello.doc” and “hello.doc” in the same directory).

       Chunker included in rclone releases up to v1.54 can sometimes fail to detect metadata produced by
       recent versions of rclone.  We recommend that users keep rclone up-to-date to avoid data
       corruption.

       Changing transactions is dangerous and requires explicit migration.

   Standard options
       Here are the Standard options specific to chunker (Transparently chunk/split large files).

   –chunker-remote
       Remote to chunk/unchunk.

       Normally should contain a  `:'  and  a  path,  e.g. “myremote:path/to/dir”,  “myremote:bucket”  or  maybe
       “myremote:” (not recommended).

       Properties:

       • Config: remote

       • Env Var: RCLONE_CHUNKER_REMOTE

       • Type: string

       • Required: true

   –chunker-chunk-size
       Files larger than chunk size will be split in chunks.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_CHUNKER_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 2Gi

   –chunker-hash-type
       Choose how chunker handles hash sums.

       All modes but “none” require metadata.

       Properties:

       • Config: hash_type

       • Env Var: RCLONE_CHUNKER_HASH_TYPE

       • Type: string

       • Default: “md5”

       • Examples:

         • “none”

           • Pass any hash supported by wrapped remote for non-chunked files.

           • Return nothing otherwise.

         • “md5”

           • MD5 for composite files.

         • “sha1”

           • SHA1 for composite files.

         • “md5all”

           • MD5 for all files.

         • “sha1all”

           • SHA1 for all files.

         • “md5quick”

           • Copying a file to chunker will request MD5 from the source.

           • Falling back to SHA1 if unsupported.

         • “sha1quick”

           • Similar to “md5quick” but prefers SHA1 over MD5.

   Advanced options
       Here are the Advanced options specific to chunker (Transparently chunk/split large files).

   –chunker-name-format
       String format of chunk file names.

       The two placeholders are: base file name (*) and chunk number (#...).  There must be one and only
       one asterisk and one or more consecutive hash characters.  If the chunk number has fewer digits
       than the number of hashes, it is left-padded by zeros.  If there are more digits in the number,
       they are left as is.  Possible chunk files are ignored if their name does not match the given
       format.

       Properties:

       • Config: name_format

       • Env Var: RCLONE_CHUNKER_NAME_FORMAT

       • Type: string

       • Default: “*.rclone_chunk.###”

   –chunker-start-from
       Minimum valid chunk number.  Usually 0 or 1.

       By default chunk numbers start from 1.

       Properties:

       • Config: start_from

       • Env Var: RCLONE_CHUNKER_START_FROM

       • Type: int

       • Default: 1

   –chunker-meta-format
       Format of the metadata object or “none”.

       By default “simplejson”.  Metadata is a small JSON file named after the composite file.

       Properties:

       • Config: meta_format

       • Env Var: RCLONE_CHUNKER_META_FORMAT

       • Type: string

       • Default: “simplejson”

       • Examples:

         • “none”

           • Do not use metadata files at all.

           • Requires hash type “none”.

         • “simplejson”

           • Simple JSON supports hash sums and chunk validation.

           • It has the following fields: ver, size, nchunks, md5, sha1.

   –chunker-fail-hard
       Choose how chunker should handle files with missing or invalid chunks.

       Properties:

       • Config: fail_hard

       • Env Var: RCLONE_CHUNKER_FAIL_HARD

       • Type: bool

       • Default: false

       • Examples:

         • “true”

           • Report errors and abort current command.

         • “false”

           • Warn user, skip incomplete file and proceed.

   –chunker-transactions
       Choose how chunker should handle temporary files during transactions.

       Properties:

       • Config: transactions

       • Env Var: RCLONE_CHUNKER_TRANSACTIONS

       • Type: string

       • Default: “rename”

       • Examples:

         • “rename”

           • Rename temporary files after a successful transaction.

         • “norename”

           • Leave temporary file names and write transaction ID to metadata file.

           • Metadata is required for no rename transactions (meta format cannot be “none”).

            • If you are using norename transactions you should be careful not to downgrade Rclone, as
              older versions of Rclone don’t support this transaction style and will misinterpret files
              manipulated by norename transactions.

           • This method is EXPERIMENTAL, don’t use on production systems.

         • “auto”

           • Rename or norename will be used depending on capabilities of the backend.

           • If meta format is set to “none”, rename transactions will always be used.

           • This method is EXPERIMENTAL, don’t use on production systems.

Citrix ShareFile

       Citrix ShareFile is a secure file sharing and transfer service aimed at business.

   Configuration
       The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile, which you
       can do in your browser.  rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              XX / Citrix Sharefile
                 \ "sharefile"
              Storage> sharefile
              ** See help for sharefile backend at: https://rclone.org/sharefile/ **

              ID of the root folder

              Leave blank to access "Personal Folders".  You can use one of the
              standard values here or any folder ID (long hex number ID).
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Access the Personal Folders. (Default)
                 \ ""
               2 / Access the Favorites folder.
                 \ "favorites"
               3 / Access all the shared folders.
                 \ "allshared"
               4 / Access all the individual connectors.
                 \ "connectors"
               5 / Access the home, favorites, and shared folders as well as the connectors.
                 \ "top"
              root_folder_id>
              Edit advanced config? (y/n)
              y) Yes
              n) No
              y/n> n
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              type = sharefile
              endpoint = https://XXX.sharefile.com
              token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Note that rclone runs a webserver on your local machine to collect the token as returned from
       Citrix ShareFile.  This only runs from the moment it opens your browser to the moment you get back
       the verification code.  This is on http://127.0.0.1:53682/ and it may require you to unblock it
       temporarily if you are running a host firewall.

       Once configured you can then use rclone like this,

       List directories in top level of your ShareFile

              rclone lsd remote:

       List all the files in your ShareFile

              rclone ls remote:

       To copy a local directory to a ShareFile directory called backup

              rclone copy /home/source remote:backup

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Modified time and hashes
       ShareFile  allows  modification  times  to be set on objects accurate to 1 second.  These will be used to
       detect whether objects need syncing or not.

       ShareFile supports MD5 type hashes, so you can use the --checksum flag.

   Transfers
       For files above 128 MiB rclone will use a chunked transfer.  Rclone will upload up to --transfers  chunks
       at  the  same  time  (shared  among  all  the  multipart uploads).  Chunks are buffered in memory and are
       normally 64 MiB so increasing --transfers will increase memory use.

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
        \           0x5C        ＼
        *           0x2A        ＊
        <           0x3C        ＜
        >           0x3E        ＞
        ?           0x3F        ？
        :           0x3A        ：
        |           0x7C        ｜
        "           0x22        ＂

       File names can also not start or end with the following characters.  These only get replaced if they  are
       the first or last character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠
        .           0x2E        ．

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to sharefile (Citrix Sharefile).

   –sharefile-root-folder-id
       ID of the root folder.

       Leave  blank  to access “Personal Folders”.  You can use one of the standard values here or any folder ID
       (long hex number ID).

       Properties:

       • Config: root_folder_id

       • Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Access the Personal Folders (default).

         • “favorites”

           • Access the Favorites folder.

         • “allshared”

           • Access all the shared folders.

         • “connectors”

           • Access all the individual connectors.

         • “top”

           • Access the home, favorites, and shared folders as well as the connectors.

   Advanced options
       Here are the Advanced options specific to sharefile (Citrix Sharefile).

   –sharefile-upload-cutoff
       Cutoff for switching to multipart upload.

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF

       • Type: SizeSuffix

       • Default: 128Mi

   –sharefile-chunk-size
       Upload chunk size.

       Must be a power of 2 >= 256k.

       Making this larger will improve performance, but note that each chunk is buffered in memory, one
       per transfer.

       Reducing this will reduce memory usage but decrease performance.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_SHAREFILE_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 64Mi

   –sharefile-endpoint
       Endpoint for API calls.

       This is usually auto-discovered as part of the OAuth process, but can be set manually to something
       like: https://XXX.sharefile.com

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_SHAREFILE_ENDPOINT

       • Type: string

       • Required: false

   –sharefile-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_SHAREFILE_ENCODING

       • Type: MultiEncoder

       • Default:
         Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot

   Limitations
       Note that ShareFile is case insensitive so you can’t have  a  file  called  “Hello.doc”  and  one  called
       “hello.doc”.

       ShareFile only supports filenames up to 256 characters in length.

       rclone  about  is not supported by the Citrix ShareFile backend.  Backends without this capability cannot
       determine free space for an rclone mount or use policy mfs (most free space) as a  member  of  an  rclone
       union remote.

       See List of backends that do not support rclone about and rclone about

Crypt

       Rclone crypt remotes encrypt and decrypt other remotes.

       A remote of type crypt does not access a storage system directly, but instead wraps another remote, which
       in turn accesses the storage system.  This is similar to how alias, union, chunker and a few others work.
       It makes the usage very flexible, as you can add a layer, in this case an encryption layer, on top of any
       other backend, even in multiple layers.  Rclone’s functionality can be used as with any other remote, for
       example you can mount a crypt remote.

       Accessing a storage system through a crypt remote realizes client-side encryption, which makes it
       safe to keep your data in a location you do not trust to remain uncompromised.  When working
       against the crypt remote, rclone will automatically encrypt (before uploading) and decrypt (after
       downloading) on your local system as needed on the fly, leaving the data encrypted at rest in the
       wrapped remote.  If you access the storage system using an application other than rclone, or
       access the wrapped remote directly using rclone, there will not be any encryption/decryption:
       downloading existing content will just give you the encrypted (scrambled) format, and anything you
       upload will not become encrypted.

       The encryption is a secret-key encryption (also called symmetric key encryption) algorithm, where a
       password (or pass phrase) is used to generate the real encryption key.  The password can be supplied by
       the user, or you may choose to let rclone generate one.  It will be stored in the configuration file, in
       a lightly obscured form.  If you are in an environment where you are not able to keep your configuration
       secured, you should add configuration encryption as protection.  As long as you have this configuration
       file, you will be able to decrypt your data.  Without the configuration file, as long as you remember
       the password (or keep it in a safe place), you can re-create the configuration and regain access to the
       existing data.  You may also configure a corresponding remote in a different installation to access the
       same data.  See below for guidance on changing the password.

       Encryption uses a cryptographic salt to permute the encryption key so that the same string may be
       encrypted in different ways.  When configuring the crypt remote it is optional to enter a salt, or to
       let rclone generate a unique salt.  If omitted, rclone uses a built-in unique string.  Normally in
       cryptography, the salt is stored together with the encrypted content, and does not have to be memorized
       by the user.  This is not the case in rclone, because rclone does not store any additional information
       on the remotes.  Use of a custom salt is effectively a second password that must be memorized.

       File content encryption is performed using NaCl SecretBox, based on the XSalsa20 cipher and Poly1305 for
       integrity.  Names (file and directory names) are also encrypted by default, but this has some
       implications and can therefore be turned off.

   Configuration
       Here is an example of how to make a remote called secret.

       To use crypt, first set up the underlying remote.  Follow the rclone config instructions for the specific
       backend.

       Before configuring the crypt remote, check that the underlying remote is working.  In this example the
       underlying remote is called remote.  We will configure a path named path within this remote to contain
       the encrypted content.  Anything inside remote:path will be encrypted and anything outside will not.

       Configure crypt using rclone config.  In this example the crypt remote is called secret, to differentiate
       it from the underlying remote.

       When  you  are  done  you  can use the crypt remote named secret just as you would with any other remote,
       e.g. rclone copy D:\docs secret:\docs, and rclone will encrypt and decrypt as needed on the fly.  If  you
       access the wrapped remote remote:path directly you will bypass the encryption, and anything you read will
       be  in  encrypted  form,  and  anything  you  write  will  be unencrypted.  To avoid issues it is best to
       configure a dedicated path for encrypted content, and access it exclusively through a crypt remote.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> secret
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Encrypt/Decrypt a remote
                 \ "crypt"
              [snip]
              Storage> crypt
              ** See help for crypt backend at: https://rclone.org/crypt/ **

              Remote to encrypt/decrypt.
              Normally should contain a ':' and a path, eg "myremote:path/to/dir",
              "myremote:bucket" or maybe "myremote:" (not recommended).
              Enter a string value. Press Enter for the default ("").
              remote> remote:path
              How to encrypt the filenames.
              Enter a string value. Press Enter for the default ("standard").
              Choose a number from below, or type in your own value.
                 / Encrypt the filenames.
               1 | See the docs for the details.
                 \ "standard"
               2 / Very simple filename obfuscation.
                 \ "obfuscate"
                 / Don't encrypt the file names.
               3 | Adds a ".bin" extension only.
                 \ "off"
              filename_encryption>
              Option to either encrypt directory names or leave them intact.

              NB If filename_encryption is "off" then this option will do nothing.
              Enter a boolean value (true or false). Press Enter for the default ("true").
              Choose a number from below, or type in your own value
               1 / Encrypt directory names.
                 \ "true"
               2 / Don't encrypt directory names, leave them intact.
                 \ "false"
              directory_name_encryption>
              Password or pass phrase for encryption.
              y) Yes type in my own password
              g) Generate random password
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Password or pass phrase for salt. Optional but recommended.
              Should be different to the previous password.
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank (default)
              y/g/n> g
              Password strength in bits.
              64 is just about memorable
              128 is secure
              1024 is the maximum
              Bits> 128
              Your password is: JAsJvRcgR-_veXNfy_sGmQ
              Use this password? Please note that an obscured version of this
              password (and not the password itself) will be stored under your
              configuration file, so keep this generated password in a safe place.
              y) Yes (default)
              n) No
              y/n>
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n>
              Remote config
              --------------------
              [secret]
              type = crypt
              remote = remote:path
              password = *** ENCRYPTED ***
              password2 = *** ENCRYPTED ***
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d>

       Important The crypt password stored in rclone.conf is lightly  obscured.   That  only  protects  it  from
       cursory inspection.  It is not secure unless configuration encryption of rclone.conf is specified.

       A long passphrase is recommended, or rclone config can generate a random one.

       The  obscured  password  is  created using AES-CTR with a static key.  The salt is stored verbatim at the
       beginning of the obscured password.  This static key is shared between all versions of rclone.

       If you reconfigure rclone with the same passwords/passphrases elsewhere it will be  compatible,  but  the
       obscured version will be different due to the different salt.
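
       The same obscuring can be reproduced with the rclone obscure command, a minimal sketch with a made-up
       password:

              $ rclone obscure mysecretpassword
              # prints the obscured string; because a fresh random salt is used
              # each run, repeated runs give different output for the same input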

       Rclone does not encrypt

       • file length - this can be calculated within 16 bytes

       • modification time - used for syncing

   Specifying the remote
       When  configuring  the  remote  to  encrypt/decrypt,  you may specify any string that rclone accepts as a
       source/destination of other commands.

       The primary use case is to specify the path into an already configured remote (e.g. remote:path/to/dir or
       remote:bucket), such that data in a remote untrusted location can be stored encrypted.

       You may also specify a local filesystem path, such as /path/to/dir on Linux, C:\path\to\dir  on  Windows.
       By  creating a crypt remote pointing to such a local filesystem path, you can use rclone as a utility for
       pure local file encryption, for example to keep encrypted files on a removable USB drive.

       Note: A string which does not contain a : will be treated by rclone as a relative path in the local
       filesystem.  For example, if you enter the name remote without the trailing :, it will be treated as a
       subdirectory of the current directory with the name “remote”.
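
       As a minimal sketch, such a crypt remote wrapping a local directory could also be created
       non-interactively with rclone config create; the remote name usbsecret, the path and the passwords are
       all hypothetical, and --obscure tells rclone to obscure the plain-text passwords for you:

              rclone config create usbsecret crypt remote /mnt/usb/encrypted password MYPASS password2 MYSALT --obscure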

       If a path remote:path/to/dir is specified, rclone stores encrypted files in path/to/dir on the remote.
       With file name encryption, files saved to secret:subdir/subfile are stored in the unencrypted path
       path/to/dir but the subdir/subfile element is encrypted.

       The path you specify does not have to exist, rclone will create it when needed.

       If you intend to use the wrapped remote both directly for keeping unencrypted content, as well as through
       a crypt remote for encrypted content, it is recommended to point the crypt remote to a separate directory
       within the wrapped remote.  If you use a bucket-based storage system (e.g. Swift, S3, Google Cloud
       Storage, B2) it is generally advisable to wrap the crypt remote around a specific bucket (s3:bucket).  If
       you wrap the entire root of the storage (s3:) and use the optional file name encryption, rclone will
       encrypt the bucket name too.

   Changing password
       Should the password, or the configuration file containing a lightly obscured form  of  the  password,  be
       compromised,  you  need  to  re-encrypt  your  data  with  a  new password.  Since rclone uses secret-key
       encryption, where the encryption key is generated directly from the password kept on the  client,  it  is
       not  possible  to  change  the  password/key  of  already  encrypted content.  Just changing the password
       configured for an existing crypt remote means you will no longer able to decrypt any  of  the  previously
       encrypted  content.   The  only possibility is to re-upload everything via a crypt remote configured with
       your new password.

       Depending on the size of your data, your bandwidth, storage quota etc., there are different approaches
       you can take:

       • If you have everything in a different location, for example on your local system, you could remove all
         of the prior encrypted files, change the password for your configured crypt remote (or delete and
         re-create the crypt configuration), and then re-upload everything from the alternative location.

       • If you have enough space on the storage system you can create a new crypt remote pointing to a
         separate directory on the same backend, and then use rclone to copy everything from the original crypt
         remote to the new one, effectively decrypting everything on the fly using the old password and
         re-encrypting it using the new password.  When done, delete the original crypt remote directory and
         finally the rclone crypt configuration with the old password.  All data will be streamed from the
         storage system and back, so you will get half the bandwidth and be charged twice if you have upload
         and download quota on the storage system.
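
       A minimal sketch of the second approach, assuming hypothetical remotes secret: (old password, wrapping
       remote:path) and secret2: (new password, wrapping remote:path2):

              rclone copy --progress secret: secret2:
              rclone purge remote:path        # remove the old encrypted directory
              rclone config delete secret     # remove the old crypt configuration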

       Note:  A  security  problem  related  to the random password generator was fixed in rclone version 1.53.3
       (released 2020-11-19).  Passwords generated by rclone config in version 1.49.0 (released  2019-08-26)  to
       1.53.2  (released  2020-10-26)  are not considered secure and should be changed.  If you made up your own
       password, or used rclone version older than 1.49.0 or newer than 1.53.2  to  generate  it,  you  are  not
       affected  by  this  issue.   See issue #4783 for more details, and a tool you can use to check if you are
       affected.

   Example
       Create the following file structure using “standard” file name encryption.

              plaintext/
              ├── file0.txt
              ├── file1.txt
              └── subdir
                  ├── file2.txt
                  ├── file3.txt
                  └── subsubdir
                      └── file4.txt

       Copy these to the remote, and list them

              $ rclone -q copy plaintext secret:
              $ rclone -q ls secret:
                      7 file1.txt
                      6 file0.txt
                      8 subdir/file2.txt
                     10 subdir/subsubdir/file4.txt
                      9 subdir/file3.txt

       The crypt remote looks like

              $ rclone -q ls remote:path
                     55 hagjclgavj2mbiqm6u6cnjjqcg
                     54 v05749mltvv1tf4onltun46gls
                     57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
                     58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
                     56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps

       The directory structure is preserved

              $ rclone -q ls secret:subdir
                      8 file2.txt
                      9 file3.txt
                     10 subsubdir/file4.txt

       Without file name encryption .bin extensions are added to the underlying names.  This prevents the cloud
       provider from attempting to interpret file content.

              $ rclone -q ls remote:path
                     54 file0.txt.bin
                     57 subdir/file3.txt.bin
                     56 subdir/file2.txt.bin
                     58 subdir/subsubdir/file4.txt.bin
                     55 file1.txt.bin

   File name encryption modes
       Off

       • doesn’t hide file names or directory structure

       • allows for longer file names (~246 characters)

       • can use sub paths and copy single files

       Standard

       • file names encrypted

       • file names can’t be as long (~143 characters)

       • can use sub paths and copy single files

       • directory structure visible

       • identical file names will have identical uploaded names

       • can use shortcuts to shorten the directory recursion

       Obfuscation

       This  is  a  simple “rotate” of the filename, with each file having a rot distance based on the filename.
       Rclone stores the distance at  the  beginning  of  the  filename.   A  file  called  “hello”  may  become
       “53.jgnnq”.

       Obfuscation is not a strong encryption of filenames, but it hinders automated scanning tools from
       picking up on filename patterns.  It is an intermediate between “off” and “standard” which allows for
       longer path segment names.

       There is a possibility with some Unicode-based filenames that the obfuscation is weak and may map lower
       case characters to their upper case equivalents.

       Obfuscation cannot be relied upon for strong protection.

       • file names very lightly obfuscated

       • file names can be longer than standard encryption

       • can use sub paths and copy single files

       • directory structure visible

       • identical file names will have identical uploaded names

       Cloud storage systems have limits on file name length and total path length which rclone is  more  likely
       to breach using “Standard” file name encryption.  Where file names are less than 156 characters in length
       issues should not be encountered, irrespective of cloud storage provider.

       An  experimental  advanced  option filename_encoding is now provided to address this problem to a certain
       degree.  For cloud storage systems with case sensitive file names (e.g. Google Drive), base64 can be used
       to reduce file name length.  For cloud storage systems  using  UTF-16  to  store  file  names  internally
       (e.g. OneDrive), base32768 can be used to drastically reduce file name length.
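
       For example, the encoding of a crypt remote could be switched with rclone config update (secret being a
       hypothetical remote name).  Note that names already written with the old encoding would no longer
       decode, so this should only be done on a remote without existing encrypted data:

              rclone config update secret filename_encoding base32768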

       An alternative, future rclone file name encryption mode may tolerate backend provider path length limits.

   Directory name encryption
       Crypt offers the option of encrypting dir names or leaving them intact.  There are two options:

       True

       Encrypts the whole file path including directory names.  Example: 1/12/123.txt is encrypted to
       p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0

       False

       Only encrypts file names, skips directory names.  Example: 1/12/123.txt is encrypted to
       1/12/qgm4avr35m5loi1th53ato71v0

   Modified time and hashes
       Crypt stores modification times using the underlying remote so support depends on that.

       Hashes  are  not stored for crypt.  However the data integrity is protected by an extremely strong crypto
       authenticator.

       Use the rclone cryptcheck command to check the integrity of a crypted  remote  instead  of  rclone  check
       which can’t check the checksums properly.
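
       For example, to verify a local plaintext tree against its encrypted copy on the crypt remote (paths and
       remote name hypothetical):

              rclone cryptcheck /path/to/plaintext secret: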

   Standard options
       Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).

   –crypt-remote
       Remote to encrypt/decrypt.

       Normally  should  contain  a  `:'  and  a  path,  e.g. “myremote:path/to/dir”, “myremote:bucket” or maybe
       “myremote:” (not recommended).

       Properties:

       • Config: remote

       • Env Var: RCLONE_CRYPT_REMOTE

       • Type: string

       • Required: true

   –crypt-filename-encryption
       How to encrypt the filenames.

       Properties:

       • Config: filename_encryption

       • Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION

       • Type: string

       • Default: “standard”

       • Examples:

         • “standard”

           • Encrypt the filenames.

           • See the docs for the details.

         • “obfuscate”

           • Very simple filename obfuscation.

         • “off”

           • Don’t encrypt the file names.

           • Adds a “.bin” extension only.

   –crypt-directory-name-encryption
       Option to either encrypt directory names or leave them intact.

       NB If filename_encryption is “off” then this option will do nothing.

       Properties:

       • Config: directory_name_encryption

       • Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION

       • Type: bool

       • Default: true

       • Examples:

         • “true”

           • Encrypt directory names.

         • “false”

           • Don’t encrypt directory names, leave them intact.

   –crypt-password
       Password or pass phrase for encryption.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: password

       • Env Var: RCLONE_CRYPT_PASSWORD

       • Type: string

       • Required: true

   –crypt-password2
       Password or pass phrase for salt.

       Optional but recommended.  Should be different to the previous password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: password2

       • Env Var: RCLONE_CRYPT_PASSWORD2

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote).

   –crypt-server-side-across-configs
       Allow server-side operations (e.g. copy) to work across different crypt configs.

       Normally this option is not what you want, but if you have two crypts pointing to the  same  backend  you
       can use it.

       This  can  be  used,  for example, to change file name encryption type without re-uploading all the data.
       Just make two crypt backends pointing to two different directories with the single changed parameter  and
       use rclone move to move the files between the crypt remotes.
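
       A minimal sketch, assuming two hypothetical crypt remotes secret-std: and secret-obf: that share the
       same passwords and backend directory layout but differ only in filename_encryption:

              rclone move --crypt-server-side-across-configs secret-std: secret-obf: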

       Properties:

       • Config: server_side_across_configs

       • Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS

       • Type: bool

       • Default: false

   –crypt-show-mapping
       For all files listed show how the names encrypt.

       If  this  flag  is set then for each file that the remote is asked to list, it will log (at level INFO) a
       line stating the decrypted file name and the encrypted file name.

       This is so you can work out which encrypted names are which decrypted names just in case you need  to  do
       something with the encrypted file names, or for debugging purposes.
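
       For example (secret: being a hypothetical crypt remote; -v is needed so the INFO level log lines are
       shown):

              rclone ls secret: --crypt-show-mapping -v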

       Properties:

       • Config: show_mapping

       • Env Var: RCLONE_CRYPT_SHOW_MAPPING

       • Type: bool

       • Default: false

   –crypt-no-data-encryption
       Option to either encrypt file data or leave it unencrypted.

       Properties:

       • Config: no_data_encryption

       • Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION

       • Type: bool

       • Default: false

       • Examples:

         • “true”

           • Don’t encrypt file data, leave it unencrypted.

         • “false”

           • Encrypt file data.

   –crypt-filename-encoding
       How to encode the encrypted filename to text string.

       This option can help with shortening the encrypted filename.  The most suitable option depends on the
       way your remote counts filename length and whether it is case sensitive.

       Properties:

       • Config: filename_encoding

       • Env Var: RCLONE_CRYPT_FILENAME_ENCODING

       • Type: string

       • Default: “base32”

       • Examples:

         • “base32”

            • Encode using base32.  Suitable for all remotes.

         • “base64”

            • Encode using base64.  Suitable for case sensitive remotes.

         • “base32768”

            • Encode using base32768.  Suitable if your remote counts UTF-16 or Unicode codepoints instead of
              UTF-8 byte length.  (E.g. OneDrive)

   Metadata
       Any metadata supported by the underlying remote is read and written.

       See the metadata docs for more info.

   Backend commands
       Here are the commands specific to the crypt backend.

       Run them with

              rclone backend COMMAND remote:

       The help below will explain what arguments each command takes.

       See the backend command for more info on how to pass options and arguments.

       These can be run on a running backend using the rc command backend/command.

   encode
       Encode the given filename(s)

              rclone backend encode remote: [options] [<arguments>+]

       This encodes the filenames given as arguments returning a list of strings of the encoded results.

       Usage Example:

              rclone backend encode crypt: file1 [file2...]
              rclone rc backend/command command=encode fs=crypt: file1 [file2...]

   decode
       Decode the given filename(s)

              rclone backend decode remote: [options] [<arguments>+]

       This decodes the filenames given as arguments returning a list of strings of  the  decoded  results.   It
       will return an error if any of the inputs are invalid.

       Usage Example:

              rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
              rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...]

   Backing up a crypted remote
       If you wish to back up a crypted remote, it is recommended that you use rclone sync on the encrypted
       files, and make sure the passwords are the same in the new encrypted remote.

       This will have the following advantages

       • rclone sync will check the checksums while copying

       • you can use rclone check between the encrypted remotes

       • you don’t decrypt and encrypt unnecessarily

       For example, let’s say you have your original remote at remote: with the encrypted  version  at  eremote:
       with  path  remote:crypt.   You  would then set up the new remote remote2: and then the encrypted version
       eremote2: with path remote2:crypt using the same passwords as eremote:.

       To sync the two remotes you would do

              rclone sync -i remote:crypt remote2:crypt

       And to check the integrity you would do

              rclone check remote:crypt remote2:crypt

   File formats
   File encryption
       Files are encrypted 1:1 source file to destination object.  The file has a header  and  is  divided  into
       chunks.

   Header
       • 8 bytes magic string RCLONE\x00\x00

       • 24 bytes Nonce (IV)

       The initial nonce is generated from the operating system's cryptographically strong random number
       generator.  The nonce is incremented for each chunk read, making sure each nonce is unique for each
       block written.  The chance of a nonce being re-used is minuscule.  If you wrote an exabyte of data
       (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce.

   Chunk
       Each chunk will contain 64 KiB of data, except for the last one which may have less data.  The data chunk
       is in standard NaCl SecretBox format.  SecretBox uses XSalsa20 and Poly1305 to encrypt  and  authenticate
       messages.

       Each chunk contains:

       • 16 Bytes of Poly1305 authenticator

       • 1 - 65536 bytes XSalsa20 encrypted data

       A 64 KiB chunk size was chosen as the best performing chunk size (the authenticator takes too much time
       below this, and performance drops off due to cache effects above this).  Note that these chunks are
       buffered in memory so they can't be too big.

       This uses a 32 byte (256 bit) key derived from the user password.

   Examples
       1 byte file will encrypt to

       • 32 bytes header

       • 17 bytes data chunk

       49 bytes total

       1 MiB (1048576 bytes) file will encrypt to

       • 32 bytes header

       • 16 chunks of 65552 bytes (16 bytes of authenticator plus 65536 bytes of data each)

       1048864 bytes total (a 0.03% overhead).  This is the overhead for big files.
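
       Restating the format above as a formula, the encrypted size of a file of n bytes is:

              encrypted_size = 32 + 16 * ceil(n / 65536) + n

       For the 1 byte file this gives 32 + 16 + 1 = 49 bytes, and for the 1 MiB file it gives
       32 + 16*16 + 1048576 = 1048864 bytes.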

   Name encryption
       File names are encrypted segment by segment - the path is broken up into / separated  strings  and  these
       are encrypted individually.

       File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.

       They are then encrypted with EME using AES with a 256 bit key.  EME (ECB-Mix-ECB) is a wide-block
       encryption mode presented in the 2003 paper “A Parallelizable Enciphering Mode” by Halevi and Rogaway.

       This makes for deterministic encryption which is what we want - the same filename  must  encrypt  to  the
       same thing otherwise we can’t find it on the cloud storage system.

       This means that

       • filenames with the same name will encrypt the same

       • filenames which start the same won’t have a common prefix

       This  uses  a  32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user
       password.

       After encryption they are written out using a modified version of standard base32 encoding  as  described
       in RFC4648.  The standard encoding is modified in two ways:

       • it becomes lower case (no-one likes upper case filenames!)

       • we strip the padding character =

       base32  is  used  rather than the more efficient base64 so rclone can be used on case insensitive remotes
       (e.g. Windows, Amazon Drive).

   Key derivation
       Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt  (password2)  to
       derive  the  32+32+16 = 80 bytes of key material required.  If the user doesn’t supply a salt then rclone
       uses an internal one.

       scrypt makes it impractical to mount a dictionary attack on rclone encrypted data.  For  full  protection
       against this you should always use a salt.

   SEE ALSO
       • rclone cryptdecode - Show forward/reverse mapping of encrypted filenames

Compress (Experimental)

   Warning
       This remote is currently experimental.  Things may break and data may be lost.  Anything you do with this
       remote  is  at  your  own  risk.  Please understand the risks associated with using experimental code and
       don’t use this remote in critical applications.

       The Compress remote adds compression to another remote.  It is best used  with  remotes  containing  many
       large compressible files.

   Configuration
       To use this remote, all you need to do is specify another remote and a compression mode to use:

               $ rclone config
               Current remotes:

               Name                 Type
               ====                 ====
               remote_to_press      sometype

               e) Edit existing remote
               n) New remote
               d) Delete remote
               r) Rename remote
               c) Copy remote
               s) Set configuration password
               q) Quit config
               e/n/d/r/c/s/q> n
              name> compress
              ...
               8 / Compress a remote
                 \ "compress"
              ...
              Storage> compress
              ** See help for compress backend at: https://rclone.org/compress/ **

              Remote to compress.
              Enter a string value. Press Enter for the default ("").
              remote> remote_to_press:subdir
              Compression mode.
              Enter a string value. Press Enter for the default ("gzip").
              Choose a number from below, or type in your own value
               1 / Gzip compression balanced for speed and compression strength.
                 \ "gzip"
              compression_mode> gzip
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              --------------------
              [compress]
              type = compress
              remote = remote_to_press:subdir
              compression_mode = gzip
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Compression Modes
       Currently only gzip compression is supported.  It provides a decent balance between speed and size and is
       well  supported  by  other  applications.  Compression strength can further be configured via an advanced
       setting where 0 is no compression and 9 is strongest compression.

   File types
       If you open a remote wrapped by compress, you will see that there are many files with an extension
       corresponding to the compression algorithm you chose.  These files are standard files that can be opened
       by various archive programs, but they have some hidden metadata that allows them to be used by rclone.
       While you may download and decompress these files at will, do not manually delete or rename them.  Files
       without correct metadata will not be recognized by rclone.

   File names
       The compressed files will be named *.###########.gz where * is the base file name and the # part is the
       base64 encoded size of the uncompressed file.  The file names should not be changed by anything other
       than the rclone compression backend.

   Standard options
       Here are the Standard options specific to compress (Compress a remote).

   –compress-remote
       Remote to compress.

       Properties:

       • Config: remote

       • Env Var: RCLONE_COMPRESS_REMOTE

       • Type: string

       • Required: true

   –compress-mode
       Compression mode.

       Properties:

       • Config: mode

       • Env Var: RCLONE_COMPRESS_MODE

       • Type: string

       • Default: “gzip”

       • Examples:

         • “gzip”

           • Standard gzip compression with fastest parameters.

   Advanced options
       Here are the Advanced options specific to compress (Compress a remote).

   –compress-level
       GZIP compression level (-2 to 9).

       Generally  -1  (default, equivalent to 5) is recommended.  Levels 1 to 9 increase compression at the cost
       of speed.  Going past 6 generally offers very little return.

       Level -2 uses Huffman encoding only.  Only use if you know  what  you  are  doing.   Level  0  turns  off
       compression.
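
       For example, to trade speed for maximum compression on a transfer (compress: being a hypothetical
       compress remote):

              rclone copy /data compress:archive --compress-level 9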

       Properties:

       • Config: level

       • Env Var: RCLONE_COMPRESS_LEVEL

       • Type: int

       • Default: -1

   –compress-ram-cache-limit
       Some remotes don't allow the upload of files with unknown size.  In this case the compressed file will
       need to be cached to determine its size.

       Files smaller than this limit will be cached in RAM, files larger than this limit will be cached on disk.

       Properties:

       • Config: ram_cache_limit

       • Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT

       • Type: SizeSuffix

       • Default: 20Mi

   Metadata
       Any metadata supported by the underlying remote is read and written.

       See the metadata docs for more info.

Combine

       The combine backend joins remotes together into a single directory tree.

       For example you might have a remote for images on one provider:

              $ rclone tree s3:imagesbucket
              /
              ├── image1.jpg
              └── image2.jpg

       And a remote for files on another:

              $ rclone tree drive:important/files
              /
              ├── file1.txt
              └── file2.txt

       The combine backend can join these together into a synthetic directory structure like this:

              $ rclone tree combined:
              /
              ├── files
              │   ├── file1.txt
              │   └── file2.txt
              └── images
                  ├── image1.jpg
                  └── image2.jpg

       You’d do this by specifying an upstreams parameter in the config like this

              upstreams = images=s3:imagesbucket files=drive:important/files

       During the initial setup with rclone config you will specify the upstream remotes as a space separated
       list.  The upstream remotes can be either local paths or other remotes.

   Configuration
       Here is an example of how to make a combine backend called remote for the example above.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              ...
              XX / Combine several remotes into one
                 \ (combine)
              ...
              Storage> combine
              Option upstreams.
              Upstreams for combining
              These should be in the form
                  dir=remote:path dir2=remote2:path
              Where before the = is specified the root directory and after is the remote to
              put there.
              Embedded spaces can be added using quotes
                  "dir=remote:path with space" "dir2=remote2:path with space"
              Enter a fs.SpaceSepList value.
              upstreams> images=s3:imagesbucket files=drive:important/files
              --------------------
              [remote]
              type = combine
              upstreams = images=s3:imagesbucket files=drive:important/files
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Configuring for Google Drive Shared Drives
       Rclone  has  a convenience feature for making a combine backend for all the shared drives you have access
       to.

       Assuming your main (non shared drive) Google Drive remote is called drive: you would run

              rclone backend -o config drives drive:

       This would produce something like this:

              [My Drive]
              type = alias
              remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:

              [Test Drive]
              type = alias
              remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:

              [AllDrives]
              type = combine
              upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

       If you then add that config to your config file (find it with rclone config file) then you can access all
       the shared drives in one place with the AllDrives: remote.

       See the Google Drive docs for full info.

   Standard options
       Here are the Standard options specific to combine (Combine several remotes into one).

   –combine-upstreams
       Upstreams for combining

       These should be in the form

              dir=remote:path dir2=remote2:path

       The part before the = specifies the root directory and the part after is the remote to put there.

       Embedded spaces can be added using quotes

              "dir=remote:path with space" "dir2=remote2:path with space"

       Properties:

       • Config: upstreams

       • Env Var: RCLONE_COMBINE_UPSTREAMS

       • Type: SpaceSepList

       • Default:

   Metadata
       Any metadata supported by the underlying remote is read and written.

       See the metadata docs for more info.

Dropbox

       Paths are specified as remote:path

       Dropbox paths may be as deep as required, e.g.  remote:directory/subdirectory.

   Configuration
       The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser.
       rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              n) New remote
              d) Delete remote
              q) Quit config
              e/n/d/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Dropbox
                 \ "dropbox"
              [snip]
              Storage> dropbox
              Dropbox App Key - leave blank normally.
              app_key>
              Dropbox App Secret - leave blank normally.
              app_secret>
              Remote config
              Please visit:
              https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
              Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
              --------------------
              [remote]
              app_key =
              app_secret =
              token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       You can then use it like this,

       List directories in top level of your dropbox

              rclone lsd remote:

       List all the files in your dropbox

              rclone ls remote:

       To copy a local directory to a dropbox directory called backup

              rclone copy /home/source remote:backup

   Dropbox for business
       Rclone supports Dropbox for business and Team Folders.

       When using Dropbox for business remote: and remote:path/to/file will refer to your personal folder.

       If you wish to see Team Folders you must use a leading / in the path, so rclone lsd remote:/  will  refer
       to the root and show you all Team Folders and your User Folder.

       You can then use team folders like this remote:/TeamFolder and remote:/TeamFolder/path/to/file.

       A leading / for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so
       it should be avoided.

   Modified time and Hashes
       Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.

       This means that if you uploaded your data with an older version of rclone which didn't support the v2
       API and modified times, rclone will decide to upload all your old data to fix the modification times.
       If you don't want this to happen use the --size-only or --checksum flag to stop it.

       Dropbox supports its own hash type which is checked for all transfers.

   Restricted filename characters
        Character   Value   Replacement
        ────────────────────────────────
        NUL         0x00         ␀
        /           0x2F         ／
        DEL         0x7F         ␡
        \           0x5C         ＼

       File  names can also not end with the following characters.  These only get replaced if they are the last
       character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Batch mode uploads
       Using batch mode uploads is very important for performance when using the Dropbox API.   See  the dropbox
       performance guide for more info.

       There are 3 modes rclone can use for uploads.

   –dropbox-batch-mode off
       In  this mode rclone will not use upload batching.  This was the default before rclone v1.55.  It has the
       disadvantage that it is very likely to encounter too_many_requests errors like this

              NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.

       When rclone receives these it has to wait for 15s or sometimes 300s before continuing which really  slows
       down transfers.

       This  will  happen  especially  if  --transfers  is  large,  so  this  mode  isn’t recommended except for
       compatibility or investigating problems.

   –dropbox-batch-mode sync
       In this mode rclone will batch up uploads to the size specified by --dropbox-batch-size and  commit  them
       together.

       Using  this  mode  means  you  can  use a much higher --transfers parameter (32 or 64 works fine) without
       receiving too_many_requests errors.

       This mode ensures full data integrity.

       Note that there may be a pause when quitting rclone while rclone finishes up the last  batch  using  this
       mode.

   –dropbox-batch-mode async
       In  this  mode rclone will batch up uploads to the size specified by --dropbox-batch-size and commit them
       together.

       However it will not wait for the status of the batch to be returned to the caller.  This means rclone can
       use a much bigger batch size (much bigger than --transfers), at the cost of not being able to  check  the
       status of the upload.

       This provides the maximum possible upload speed especially with lots of small files, however rclone can’t
       check the file got uploaded properly using this mode.

       If you are using this mode then using “rclone check” after the transfer completes is recommended.  Or you
       could   do   an  initial  transfer  with  --dropbox-batch-mode  async  then  do  a  final  transfer  with
       --dropbox-batch-mode sync (the default).

       Note that there may be a pause when quitting rclone while rclone finishes up the last  batch  using  this
       mode.
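
       For example (dropbox: being a hypothetical remote), many small files could be uploaded with a high
       transfer count and the default sync batching, or with async batching followed by a verification pass:

              rclone copy /path/to/small/files dropbox:backup --transfers 32 --dropbox-batch-mode sync
              rclone copy /path/to/small/files dropbox:backup --dropbox-batch-mode async
              rclone check /path/to/small/files dropbox:backup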

   Standard options
       Here are the Standard options specific to dropbox (Dropbox).

   –dropbox-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_DROPBOX_CLIENT_ID

       • Type: string

       • Required: false

   –dropbox-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_DROPBOX_CLIENT_SECRET

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to dropbox (Dropbox).

   –dropbox-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_DROPBOX_TOKEN

       • Type: string

       • Required: false

   –dropbox-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_DROPBOX_AUTH_URL

       • Type: string

       • Required: false

   –dropbox-token-url
       Token server url.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_DROPBOX_TOKEN_URL

       • Type: string

       • Required: false

   –dropbox-chunk-size
       Upload chunk size (< 150Mi).

       Any files larger than this will be uploaded in chunks of this size.

       Note  that  chunks  are buffered in memory (one at a time) so rclone can deal with retries.  Setting this
       larger will increase the speed slightly (at most 10% for 128 MiB in tests) at  the  cost  of  using  more
       memory.  It can be set smaller if you are tight on memory.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_DROPBOX_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 48Mi

   –dropbox-impersonate
       Impersonate this user when using a business account.

       Note that if you want to use impersonate, you should make sure this flag is set when running “rclone
       config” as this will cause rclone to request the “members.read” scope which it won't normally request.
       This is needed to look up a member's email address and convert it into the internal ID that Dropbox uses
       in the API.

       Using the “members.read” scope will require a Dropbox Team Admin to approve during the OAuth flow.

       You  will  have  to use your own App (setting your own client_id and client_secret) to use this option as
       currently rclone’s default set of permissions doesn’t include “members.read”.  This  can  be  added  once
       v1.55 or later is in use everywhere.
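
       For example, a business remote could then be used on behalf of another team member (remote name and
       email address are hypothetical):

              rclone lsd business: --dropbox-impersonate user@example.com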

       Properties:

       • Config: impersonate

       • Env Var: RCLONE_DROPBOX_IMPERSONATE

       • Type: string

       • Required: false

   –dropbox-shared-files
       Instructs rclone to work on individual shared files.

       In this mode rclone's features are extremely limited - only list (ls, lsl, etc.) and read (e.g.
       download) operations are supported.  All other operations will be disabled.

       Properties:

       • Config: shared_files

       • Env Var: RCLONE_DROPBOX_SHARED_FILES

       • Type: bool

       • Default: false

   –dropbox-shared-folders
       Instructs rclone to work on shared folders.

       When this flag is used with no path only the List operation is supported and all available shared
       folders will be listed.  If you specify a path, the first part will be interpreted as the name of a
       shared folder.  Rclone will then try to mount this shared folder to the root namespace.  On success
       rclone proceeds normally.  The shared folder is now pretty much a normal folder and all normal
       operations are supported.

       Note that we don't unmount the shared folder afterwards, so the --dropbox-shared-folders flag can be
       omitted after the first use of a particular shared folder.
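
       For example (remote: being a hypothetical Dropbox remote and SharedWork a hypothetical shared folder):

              rclone lsd remote: --dropbox-shared-folders
              rclone copy remote:SharedWork/docs /local/docs --dropbox-shared-folders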

       Properties:

       • Config: shared_folders

       • Env Var: RCLONE_DROPBOX_SHARED_FOLDERS

       • Type: bool

       • Default: false

   –dropbox-batch-mode
       Upload file batching sync|async|off.

       This sets the batch mode used by rclone.

       For full info see the main docs

       This has 3 possible values

       • off - no batching

       • sync - batch uploads and check completion (default)

       • async - batch upload and don’t check completion

       Rclone will close any outstanding batches when it exits which may make a delay on quit.

       Properties:

       • Config: batch_mode

       • Env Var: RCLONE_DROPBOX_BATCH_MODE

       • Type: string

       • Default: “sync”

   –dropbox-batch-size
       Max number of files in upload batch.

       This sets the batch size of files to upload.  It has to be less than 1000.

       By default this is 0, which means rclone will calculate the batch size depending on the setting of
       batch_mode.

       • batch_mode: async - default batch_size is 100

       • batch_mode: sync - default batch_size is the same as --transfers

       • batch_mode: off - not in use

       Rclone will close any outstanding batches when it exits which may make a delay on quit.

       Setting this is a great idea if you are uploading lots of small files as it will make them a lot quicker.
       You can use --transfers 32 to maximise throughput.

       Properties:

       • Config: batch_size

       • Env Var: RCLONE_DROPBOX_BATCH_SIZE

       • Type: int

       • Default: 0

   –dropbox-batch-timeout
       Max time to allow an idle upload batch before uploading.

       If an upload batch is idle for more than this long then it will be uploaded.

       The  default  for  this is 0 which means rclone will choose a sensible default based on the batch_mode in
       use.

       • batch_mode: async - default batch_timeout is 500ms

       • batch_mode: sync - default batch_timeout is 10s

       • batch_mode: off - not in use

       Properties:

       • Config: batch_timeout

       • Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT

       • Type: Duration

       • Default: 0s

   –dropbox-batch-commit-timeout
       Max time to wait for a batch to finish committing.

       Properties:

       • Config: batch_commit_timeout

       • Env Var: RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT

       • Type: Duration

       • Default: 10m0s

   –dropbox-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_DROPBOX_ENCODING

       • Type: MultiEncoder

       • Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot

   Limitations
       Note that Dropbox is case insensitive so you  can’t  have  a  file  called  “Hello.doc”  and  one  called
       “hello.doc”.

       There  are  some file names such as thumbs.db which Dropbox can’t store.  There is a full list of them in
       the “Ignored Files” section of this document.  Rclone will issue an error message File name disallowed  -
       not uploading if it attempts to upload one of those file names, but the sync won’t fail.

       Some errors may occur if you try to sync copyright-protected files because Dropbox has its own copyright
       detector (https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/)
       that prevents this sort of file being downloaded.  This will return the error ERROR :
       /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.

       If  you  have  more  than 10,000 files in a directory then rclone purge dropbox:dir will return the error
       Failed to purge: There are too many files involved in this operation.  As  a  work-around  do  an  rclone
       delete dropbox:dir followed by an rclone rmdir dropbox:dir.
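
       That is, assuming the directory is dropbox:dir:

              rclone delete dropbox:dir
              rclone rmdir dropbox:dir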

       When  using  rclone  link  you’ll  need  to  set  --expire  if using a non-personal account otherwise the
       visibility may not be correct.  (Note that --expire isn’t supported on personal accounts).  See the forum
       discussion and the dropbox SDK issue.

   Get your own Dropbox App ID
       When you use rclone with Dropbox in its default configuration you are using rclone’s  App  ID.   This  is
       shared between all the rclone users.

       Here is how to create your own Dropbox App ID for rclone:

       1. Log into the Dropbox App console with your Dropbox Account (it need not be the same account as the
          Dropbox you want to access)

       2. Choose an API => Usually this should be Dropbox API

       3. Choose the type of access you want to use => Full Dropbox or App Folder

       4. Name your App.  The app name is global, so you can't use the name rclone, for example

       5. Click the button Create App

       6. Switch to the  Permissions  tab.   Enable  at  least  the  following  permissions:  account_info.read,
          files.metadata.write, files.content.write, files.content.read, sharing.write.  The files.metadata.read
          and sharing.read checkboxes will be marked too.  Click Submit

       7. Switch to the Settings tab.  Fill in OAuth2 - Redirect URIs as http://localhost:53682/

       8. Find the App key and App secret values on the Settings tab.  Use these values in rclone config to add
          a new remote or edit an existing remote.  The App key setting corresponds to client_id in rclone
          config, and the App secret corresponds to client_secret.
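
       A minimal sketch of plugging these into an existing remote non-interactively; remote is hypothetical and
       YOUR_APP_KEY/YOUR_APP_SECRET stand for the values from the App console:

              rclone config update remote client_id YOUR_APP_KEY client_secret YOUR_APP_SECRET
              rclone config reconnect remote: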

Enterprise File Fabric

       This  backend  supports Storage Made Easy’s Enterprise File Fabric™ which provides a software solution to
       integrate and unify File and Object Storage accessible through a global file system.

   Configuration
       The initial setup for the Enterprise File Fabric backend involves getting a  token  from  the  Enterprise
       File Fabric which you need to do in your browser.  rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Enterprise File Fabric
                 \ "filefabric"
              [snip]
              Storage> filefabric
              ** See help for filefabric backend at: https://rclone.org/filefabric/ **

              URL of the Enterprise File Fabric to connect to
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Storage Made Easy US
                 \ "https://storagemadeeasy.com"
               2 / Storage Made Easy EU
                 \ "https://eu.storagemadeeasy.com"
               3 / Connect to your Enterprise File Fabric
                 \ "https://yourfabric.smestorage.com"
              url> https://yourfabric.smestorage.com/
              ID of the root folder
              Leave blank normally.

              Fill in to make rclone start with directory of a given ID.

              Enter a string value. Press Enter for the default ("").
              root_folder_id>
              Permanent Authentication Token

              A Permanent Authentication Token can be created in the Enterprise File
              Fabric, on the users Dashboard under Security, there is an entry
              you'll see called "My Authentication Tokens". Click the Manage button
              to create one.

              These tokens are normally valid for several years.

              For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens

              Enter a string value. Press Enter for the default ("").
              permanent_token> xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              --------------------
              [remote]
              type = filefabric
              url = https://yourfabric.smestorage.com/
              permanent_token = xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Once configured you can then use rclone like this,

       List directories in top level of your Enterprise File Fabric

              rclone lsd remote:

       List all the files in your Enterprise File Fabric

              rclone ls remote:

       To copy a local directory to an Enterprise File Fabric directory called backup

              rclone copy /home/source remote:backup

   Modified time and hashes
       The Enterprise File Fabric allows modification times to be set on files accurate to 1 second.  These will
       be used to detect whether objects need syncing or not.

       The Enterprise File Fabric does not support any data hashes at this time.

   Restricted filename characters
       The default restricted characters set will be replaced.

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Empty files
       Empty  files  aren’t supported by the Enterprise File Fabric.  Rclone will therefore upload an empty file
       as a single space with a mime type of application/vnd.rclone.empty.file and files with that mime type are
       treated as empty.

   Root folder ID
       You can set the root_folder_id for rclone.  This is the directory (identified  by  its  Folder  ID)  that
       rclone considers to be the root of your Enterprise File Fabric.

       Normally you will leave this blank and rclone will determine the correct root to use itself.

       However you can set this to restrict rclone to a specific folder hierarchy.

       In  order  to  do  this  you will have to find the Folder ID of the directory you wish rclone to display.
       These aren’t displayed in the web interface, but you can use rclone lsf to find them, for example

              $ rclone lsf --dirs-only -Fip --csv filefabric:
              120673758,Burnt PDFs/
              120673759,My Quick Uploads/
              120673755,My Syncs/
              120673756,My backups/
              120673757,My contacts/
              120673761,S3 Storage/

       The ID for “S3 Storage” would be 120673761.
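       For example (a sketch; the remote name filefabric and the ID are taken from the listing
       above), you could restrict rclone to that folder either per-invocation or by updating the
       config:

              rclone lsf filefabric: --filefabric-root-folder-id 120673761

              # or persist the setting in the config file
              rclone config update filefabric root_folder_id 120673761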

   Standard options
       Here are the Standard options specific to filefabric (Enterprise File Fabric).

   –filefabric-url
       URL of the Enterprise File Fabric to connect to.

       Properties:

       • Config: url

       • Env Var: RCLONE_FILEFABRIC_URL

       • Type: string

       • Required: true

       • Examples:

         • “https://storagemadeeasy.com”

           • Storage Made Easy US

         • “https://eu.storagemadeeasy.com”

           • Storage Made Easy EU

         • “https://yourfabric.smestorage.com”

           • Connect to your Enterprise File Fabric

   –filefabric-root-folder-id
       ID of the root folder.

       Leave blank normally.

       Fill in to make rclone start with directory of a given ID.

       Properties:

       • Config: root_folder_id

       • Env Var: RCLONE_FILEFABRIC_ROOT_FOLDER_ID

       • Type: string

       • Required: false

   –filefabric-permanent-token
       Permanent Authentication Token.

       A Permanent Authentication Token can be created in the Enterprise File Fabric: on the user’s
       Dashboard under Security there is an entry called “My Authentication Tokens”.  Click the
       Manage button to create one.

       These tokens are normally valid for several years.

       For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens

       Properties:

       • Config: permanent_token

       • Env Var: RCLONE_FILEFABRIC_PERMANENT_TOKEN

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to filefabric (Enterprise File Fabric).

   –filefabric-token
       Session Token.

       This is a session token which rclone caches in the config file.  It is usually valid for 1 hour.

       Don’t set this value - rclone will set it automatically.

       Properties:

       • Config: token

       • Env Var: RCLONE_FILEFABRIC_TOKEN

       • Type: string

       • Required: false

   –filefabric-token-expiry
       Token expiry time.

       Don’t set this value - rclone will set it automatically.

       Properties:

       • Config: token_expiry

       • Env Var: RCLONE_FILEFABRIC_TOKEN_EXPIRY

       • Type: string

       • Required: false

   –filefabric-version
       Version read from the file fabric.

       Don’t set this value - rclone will set it automatically.

       Properties:

       • Config: version

       • Env Var: RCLONE_FILEFABRIC_VERSION

       • Type: string

       • Required: false

   –filefabric-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_FILEFABRIC_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Del,Ctl,InvalidUtf8,Dot

FTP

       FTP   is   the   File   Transfer   Protocol.    Rclone   FTP    support    is    provided    using    the
       github.com/jlaffaye/ftp package.

       For restrictions of Rclone’s FTP backend, see the Limitations section below.

       Paths  are  specified  as  remote:path.   If  the path does not begin with a / it is relative to the home
       directory of the user.  An empty path remote: refers to the user’s home directory.

   Configuration
       To create an FTP configuration named remote, run

              rclone config

       Rclone config guides you through an interactive setup process.  A minimal rclone  FTP  remote  definition
       only requires host, username and password.  For an anonymous FTP server, see below.

              No remotes found, make a new one?
              n) New remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              n/r/c/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / FTP
                 \ "ftp"
              [snip]
              Storage> ftp
              ** See help for ftp backend at: https://rclone.org/ftp/ **

              FTP host to connect to
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Connect to ftp.example.com
                 \ "ftp.example.com"
              host> ftp.example.com
              FTP username
              Enter a string value. Press Enter for the default ("$USER").
              user>
              FTP port number
              Enter a signed integer. Press Enter for the default (21).
              port>
              FTP password
              y) Yes type in my own password
              g) Generate random password
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Use FTP over TLS (Implicit)
              Enter a boolean value (true or false). Press Enter for the default ("false").
              tls>
              Use FTP over TLS (Explicit)
              Enter a boolean value (true or false). Press Enter for the default ("false").
              explicit_tls>
              Remote config
              --------------------
              [remote]
              type = ftp
              host = ftp.example.com
              pass = *** ENCRYPTED ***
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       To see all directories in the home directory of remote

              rclone lsd remote:

       Make a new directory

              rclone mkdir remote:path/to/directory

       List the contents of a directory

              rclone ls remote:path/to/directory

       Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

              rclone sync -i /home/local/directory remote:directory

   Anonymous FTP
       When connecting to an FTP server that allows anonymous login, you can use the special
       “anonymous” username.  Traditionally, this user account accepts any string as a password,
       although it is common to use either the password “anonymous” or “guest”.  Some servers
       require the use of a valid e-mail address as password.

       Using on-the-fly or connection string remotes makes it easy to access such servers, without requiring any
       configuration in advance.  The following are examples of that:

              rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
              rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):

       The above examples work in Linux shells and in PowerShell, but not Windows Command Prompt.
       They execute the rclone obscure command to create a password string in the format required
       by the pass option.  The following examples are exactly the same, except they use an already
       obscured string representation of the same password “dummy”, and therefore work even in
       Windows Command Prompt:

              rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
              rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:

   Implicit TLS
       Rclone FTP supports implicit FTP over TLS servers (FTPS).  This has to be enabled in the FTP
       backend config for the remote, or with --ftp-tls.  The default FTPS port is 990, not 21, and
       can be set with --ftp-port.
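       A minimal sketch (the host and user are assumptions; the port is given explicitly for
       clarity):

              rclone lsd remote: --ftp-tls
              rclone lsd :ftp,host=ftps.example.com,user=alice,pass=$(rclone obscure secret),tls=true,port=990: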

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       File names cannot end with the following characters.  Replacement is limited to the last character  in  a
       file name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠

       Not all FTP servers can have all characters in file names, for example:

       FTP Server   Forbidden characters
       ──────────────────────────────────
       proftpd               *
       pureftpd            \ [ ]

       This  backend’s  interactive  configuration wizard provides a selection of sensible encoding settings for
       major FTP servers: ProFTPd, PureFTPd, VsFTPd.  Just hit a selection number when prompted.

   Standard options
       Here are the Standard options specific to ftp (FTP).

   –ftp-host
       FTP host to connect to.

       E.g.  “ftp.example.com”.

       Properties:

       • Config: host

       • Env Var: RCLONE_FTP_HOST

       • Type: string

       • Required: true

   –ftp-user
       FTP username.

       Properties:

       • Config: user

       • Env Var: RCLONE_FTP_USER

       • Type: string

       • Default: “$USER”

   –ftp-port
       FTP port number.

       Properties:

       • Config: port

       • Env Var: RCLONE_FTP_PORT

       • Type: int

       • Default: 21

   –ftp-pass
       FTP password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: pass

       • Env Var: RCLONE_FTP_PASS

       • Type: string

       • Required: false

   –ftp-tls
       Use Implicit FTPS (FTP over TLS).

       When using implicit FTP over TLS the client  connects  using  TLS  right  from  the  start  which  breaks
       compatibility  with  non-TLS-aware  servers.   This  is usually served over port 990 rather than port 21.
       Cannot be used in combination with explicit FTP.

       Properties:

       • Config: tls

       • Env Var: RCLONE_FTP_TLS

       • Type: bool

       • Default: false

   –ftp-explicit-tls
       Use Explicit FTPS (FTP over TLS).

       When using explicit FTP over TLS the client explicitly requests security from  the  server  in  order  to
       upgrade a plain text connection to an encrypted one.  Cannot be used in combination with implicit FTP.

       Properties:

       • Config: explicit_tls

       • Env Var: RCLONE_FTP_EXPLICIT_TLS

       • Type: bool

       • Default: false

   Advanced options
       Here are the Advanced options specific to ftp (FTP).

   –ftp-concurrency
       Maximum number of FTP simultaneous connections, 0 for unlimited.

       Note that setting this is very likely to cause deadlocks so it should be used with care.

       If  you  are  doing a sync or copy then make sure concurrency is one more than the sum of --transfers and
       --checkers.

       If you use --check-first then it  just  needs  to  be  one  more  than  the  maximum  of  --checkers  and
       --transfers.

       So for concurrency 3 you’d use --checkers 2 --transfers 2 --check-first or --checkers 1 --transfers 1.
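       For example (a sketch following the rule above), if the server only permits 3 simultaneous
       connections:

              rclone copy /home/source remote:backup --ftp-concurrency 3 --checkers 2 --transfers 2 --check-first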

       Properties:

       • Config: concurrency

       • Env Var: RCLONE_FTP_CONCURRENCY

       • Type: int

       • Default: 0

   –ftp-no-check-certificate
       Do not verify the TLS certificate of the server.

       Properties:

       • Config: no_check_certificate

       • Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE

       • Type: bool

       • Default: false

   –ftp-disable-epsv
       Disable using EPSV even if server advertises support.

       Properties:

       • Config: disable_epsv

       • Env Var: RCLONE_FTP_DISABLE_EPSV

       • Type: bool

       • Default: false

   –ftp-disable-mlsd
       Disable using MLSD even if server advertises support.

       Properties:

       • Config: disable_mlsd

       • Env Var: RCLONE_FTP_DISABLE_MLSD

       • Type: bool

       • Default: false

   –ftp-disable-utf8
       Disable using UTF-8 even if server advertises support.

       Properties:

       • Config: disable_utf8

       • Env Var: RCLONE_FTP_DISABLE_UTF8

       • Type: bool

       • Default: false

   –ftp-writing-mdtm
       Use MDTM to set modification time (VsFtpd quirk)

       Properties:

       • Config: writing_mdtm

       • Env Var: RCLONE_FTP_WRITING_MDTM

       • Type: bool

       • Default: false

   –ftp-force-list-hidden
       Use LIST -a to force listing of hidden files and folders.  This will disable the use of MLSD.

       Properties:

       • Config: force_list_hidden

       • Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN

       • Type: bool

       • Default: false

   –ftp-idle-timeout
       Max time before closing idle connections.

       If  no  connections  have  been  returned to the connection pool in the time given, rclone will empty the
       connection pool.

       Set to 0 to keep connections indefinitely.

       Properties:

       • Config: idle_timeout

       • Env Var: RCLONE_FTP_IDLE_TIMEOUT

       • Type: Duration

       • Default: 1m0s

   –ftp-close-timeout
       Maximum time to wait for a response to close.

       Properties:

       • Config: close_timeout

       • Env Var: RCLONE_FTP_CLOSE_TIMEOUT

       • Type: Duration

       • Default: 1m0s

   –ftp-tls-cache-size
       Size of TLS session cache for all control and data connections.

       The TLS cache allows rclone to resume TLS sessions and reuse PSK between connections.
       Increase the size if the default is not enough and you are seeing TLS resumption errors.
       Enabled by default.  Use 0 to disable.

       Properties:

       • Config: tls_cache_size

       • Env Var: RCLONE_FTP_TLS_CACHE_SIZE

       • Type: int

       • Default: 32

   –ftp-disable-tls13
       Disable TLS 1.3 (workaround for FTP servers with buggy TLS)

       Properties:

       • Config: disable_tls13

       • Env Var: RCLONE_FTP_DISABLE_TLS13

       • Type: bool

       • Default: false

   –ftp-shut-timeout
       Maximum time to wait for data connection closing status.

       Properties:

       • Config: shut_timeout

       • Env Var: RCLONE_FTP_SHUT_TIMEOUT

       • Type: Duration

       • Default: 1m0s

   –ftp-ask-password
       Allow asking for FTP password when needed.

       If this is set and no password is supplied then rclone will ask for a password.

       Properties:

       • Config: ask_password

       • Env Var: RCLONE_FTP_ASK_PASSWORD

       • Type: bool

       • Default: false

   –ftp-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_FTP_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Del,Ctl,RightSpace,Dot

       • Examples:

         • “Asterisk,Ctl,Dot,Slash”

           • ProFTPd can’t handle ’*’ in file names

         • “BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket”

           • PureFTPd can’t handle `[]' or ’*’ in file names

         • “Ctl,LeftPeriod,Slash”

           • VsFTPd can’t handle file names starting with dot

   Limitations
       FTP servers acting as rclone remotes must support passive mode.  The transfer mode cannot be
       configured: passive is the only one supported.  Rclone’s FTP implementation is not
       compatible with active mode as the library it uses doesn’t support it.  Active mode will
       likely never be supported due to security concerns.

       Rclone’s FTP backend does not support any checksums but can compare file sizes.

       rclone about is not supported by the FTP backend.  Backends without this capability cannot determine free
       space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

       See List of backends that do not support rclone about and rclone about

       The implementation of --dump headers, --dump bodies and --dump auth for debugging isn’t the
       same as for rclone’s HTTP-based backends - it has less fine-grained control.

       --timeout isn’t supported (but --contimeout is).

       --bind isn’t supported.

       Rclone’s FTP backend could support server-side move but does not at present.

       The ftp_proxy environment variable is not currently supported.

   Modified time
       File modification time (timestamps) is supported to 1 second resolution for major FTP
       servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.  The VsFTPd server has a
       non-standard implementation of time-related protocol commands and needs a special
       configuration setting: writing_mdtm = true.

       Support for precise file time with other FTP servers varies depending on what protocol
       extensions they advertise.  If all the MLSD, MDTM and MFMT extensions are present, rclone
       will use them together to provide precise time.  Otherwise the times you see on the FTP
       server through rclone are those of the last file upload.

       You can use the following command to check whether rclone can use precise time with your FTP
       server: rclone backend features your_ftp_remote: (the trailing colon is important).  Look
       for the number in the line tagged Precision, which gives the remote time precision expressed
       in nanoseconds.  A value of 1000000000 means that file time precision of 1 second is
       available.  A value of 3153600000000000000 (or another large number) means “unsupported”.
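       For example (a sketch; the remote name is the one used above), you could filter the JSON
       output for the precision line:

              rclone backend features your_ftp_remote: | grep Precision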

Google Cloud Storage

       Paths are specified as remote:bucket (or remote: for the lsd command).  You may put
       subdirectories in too, e.g. remote:bucket/path/to/dir.

   Configuration
       The  initial  setup for google cloud storage involves getting a token from Google Cloud Storage which you
       need to do in your browser.  rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              n) New remote
              d) Delete remote
              q) Quit config
              e/n/d/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Google Cloud Storage (this is not Google Drive)
                 \ "google cloud storage"
              [snip]
              Storage> google cloud storage
              Google Application Client Id - leave blank normally.
              client_id>
              Google Application Client Secret - leave blank normally.
              client_secret>
              Project number optional - needed only for list/create/delete buckets - see your developer console.
              project_number> 12345678
              Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
              service_account_file>
              Access Control List for new objects.
              Choose a number from below, or type in your own value
               1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
                 \ "authenticatedRead"
               2 / Object owner gets OWNER access, and project team owners get OWNER access.
                 \ "bucketOwnerFullControl"
               3 / Object owner gets OWNER access, and project team owners get READER access.
                 \ "bucketOwnerRead"
               4 / Object owner gets OWNER access [default if left blank].
                 \ "private"
               5 / Object owner gets OWNER access, and project team members get access according to their roles.
                 \ "projectPrivate"
               6 / Object owner gets OWNER access, and all Users get READER access.
                 \ "publicRead"
              object_acl> 4
              Access Control List for new buckets.
              Choose a number from below, or type in your own value
               1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
                 \ "authenticatedRead"
               2 / Project team owners get OWNER access [default if left blank].
                 \ "private"
               3 / Project team members get access according to their roles.
                 \ "projectPrivate"
               4 / Project team owners get OWNER access, and all Users get READER access.
                 \ "publicRead"
               5 / Project team owners get OWNER access, and all Users get WRITER access.
                 \ "publicReadWrite"
              bucket_acl> 2
              Location for the newly created buckets.
              Choose a number from below, or type in your own value
               1 / Empty for default location (US).
                 \ ""
               2 / Multi-regional location for Asia.
                 \ "asia"
               3 / Multi-regional location for Europe.
                 \ "eu"
               4 / Multi-regional location for United States.
                 \ "us"
               5 / Taiwan.
                 \ "asia-east1"
               6 / Tokyo.
                 \ "asia-northeast1"
               7 / Singapore.
                 \ "asia-southeast1"
               8 / Sydney.
                 \ "australia-southeast1"
               9 / Belgium.
                 \ "europe-west1"
              10 / London.
                 \ "europe-west2"
              11 / Iowa.
                 \ "us-central1"
              12 / South Carolina.
                 \ "us-east1"
              13 / Northern Virginia.
                 \ "us-east4"
              14 / Oregon.
                 \ "us-west1"
              location> 12
              The storage class to use when storing objects in Google Cloud Storage.
              Choose a number from below, or type in your own value
               1 / Default
                 \ ""
               2 / Multi-regional storage class
                 \ "MULTI_REGIONAL"
               3 / Regional storage class
                 \ "REGIONAL"
               4 / Nearline storage class
                 \ "NEARLINE"
               5 / Coldline storage class
                 \ "COLDLINE"
               6 / Durable reduced availability storage class
                 \ "DURABLE_REDUCED_AVAILABILITY"
              storage_class> 5
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine or Y didn't work
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              type = google cloud storage
              client_id =
              client_secret =
              token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
              project_number = 12345678
              object_acl = private
              bucket_acl = private
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Note that rclone runs a webserver on your local machine to collect the token as returned
       from Google if you use auto config mode.  This only runs from the moment it opens your
       browser to the moment you get back the verification code.  This is on
       http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running
       a host firewall, or use manual mode.

       This remote is called remote and can now be used like this

       See all the buckets in your project

              rclone lsd remote:

       Make a new bucket

              rclone mkdir remote:bucket

       List the contents of a bucket

              rclone ls remote:bucket

       Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

              rclone sync -i /home/local/directory remote:bucket

   Service Account support
       You can set up rclone with Google Cloud Storage in an  unattended  mode,  i.e. not  tied  to  a  specific
       end-user Google account.  This is useful when you want to synchronise files onto machines that don’t have
       actively logged-in users, for example build machines.

       To  get  credentials  for  Google  Cloud  Platform  IAM Service Accounts,  please  head  to  the  Service
       Account section of the  Google  Developer  Console.   Service  Accounts  behave  just  like  normal  User
       permissions  in  Google Cloud Storage ACLs,  so  you  can  limit their access (e.g. make them read only).
       After creating an account, a JSON file containing the Service Account’s credentials  will  be  downloaded
       onto your machines.  These credentials are what rclone will use for authentication.

       To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials
       at  the service_account_file prompt and rclone won’t use the browser based authentication flow.  If you’d
       rather  stuff  the  contents  of  the  credentials  file  into  the  rclone  config  file,  you  can  set
       service_account_credentials  with  the  actual  contents  of  the  file  instead,  or  set the equivalent
       environment variable.
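       A minimal sketch (the remote name and file path are assumptions):

              # Point an existing remote at a Service Account credentials file
              rclone config update remote service_account_file /path/to/credentials.json

              # Or supply it for a single invocation via the environment
              RCLONE_GCS_SERVICE_ACCOUNT_FILE=/path/to/credentials.json rclone lsd remote: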

   Anonymous Access
       For downloads of objects that permit public access you can configure rclone to use anonymous
       access by setting anonymous to true.  With unauthenticated access you can’t write or create
       files, but you can read and list those buckets and objects that have public read access.

   Application Default Credentials
       If no other source of credentials is provided, rclone will fall back to Application Default
       Credentials (https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials).
       This is useful both when you have already configured authentication for your developer
       account, and in production when running on a google compute host.  Note that if running in
       docker, you may need to run additional commands on your google compute machine - see
       https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper.

       Note that in the case application default credentials are used, there is no need to explicitly  configure
       a project number.

   –fast-list
       This  remote supports --fast-list which allows you to use fewer transactions in exchange for more memory.
       See the rclone docs for more details.

   Custom upload headers
       You can set custom upload headers with the --header-upload flag.  Google Cloud Storage
       supports the headers below, as described in the working with metadata documentation
       (https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata):

       • Cache-Control

       • Content-Disposition

       • Content-Encoding

       • Content-Language

       • Content-Type

       • X-Goog-Storage-Class


       • X-Goog-Meta-

       Eg --header-upload "Content-Type: text/potato"

       Note that the last of these is for setting custom metadata in the form --header-upload
       "x-goog-meta-key: value".
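       For example (a sketch; the bucket and values are assumptions), several headers can be
       combined on one upload:

              rclone copy /home/source remote:bucket --header-upload "Cache-Control: no-cache" --header-upload "x-goog-meta-project: demo"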

   Modification time
       Google Cloud Storage  stores  md5sum  natively.   Google’s  gsutil tool  stores  modification  time  with
       one-second precision as goog-reserved-file-mtime in file metadata.

       To  ensure  compatibility  with  gsutil,  rclone stores modification time in 2 separate metadata entries.
       mtime uses RFC3339 format with one-nanosecond precision.  goog-reserved-file-mtime  uses  Unix  timestamp
       format  with  one-second  precision.   To  get  modification  time from object metadata, rclone reads the
       metadata in the following order: mtime, goog-reserved-file-mtime, object updated time.

       Note that rclone’s default modify window is 1ns.  Files uploaded by gsutil only contain  timestamps  with
       one-second precision.  If you use rclone to sync files previously uploaded by gsutil, rclone will attempt
       to  update  modification  time  for  all  these  files.  To avoid these possibly unnecessary updates, use
       --modify-window 1s.
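       For example (a sketch), when syncing a bucket that also receives uploads from gsutil:

              rclone sync -i /home/local/directory remote:bucket --modify-window 1s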

   Restricted filename characters
       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
       LF          0x0A         ␊
       CR          0x0D         ␍
       /           0x2F        ／

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not  Google
       Drive)).

   –gcs-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_GCS_CLIENT_ID

       • Type: string

       • Required: false

   –gcs-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_GCS_CLIENT_SECRET

       • Type: string

       • Required: false

   –gcs-project-number
       Project number.

       Optional - needed only for list/create/delete buckets - see your developer console.

       Properties:

       • Config: project_number

       • Env Var: RCLONE_GCS_PROJECT_NUMBER

       • Type: string

       • Required: false

   –gcs-service-account-file
       Service Account Credentials JSON file path.

       Leave blank normally.  Needed only if you want to use SA instead of interactive login.

       Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

       Properties:

       • Config: service_account_file

       • Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE

       • Type: string

       • Required: false

   –gcs-service-account-credentials
       Service Account Credentials JSON blob.

       Leave blank normally.  Needed only if you want to use SA instead of interactive login.

       Properties:

       • Config: service_account_credentials

       • Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS

       • Type: string

       • Required: false

   –gcs-anonymous
       Access public buckets and objects without credentials.

       Set to `true' if you just want to download files and don’t configure credentials.

       Properties:

       • Config: anonymous

       • Env Var: RCLONE_GCS_ANONYMOUS

       • Type: bool

       • Default: false

   –gcs-object-acl
       Access Control List for new objects.

       Properties:

       • Config: object_acl

       • Env Var: RCLONE_GCS_OBJECT_ACL

       • Type: string

       • Required: false

       • Examples:

         • “authenticatedRead”

           • Object owner gets OWNER access.

           • All Authenticated Users get READER access.

         • “bucketOwnerFullControl”

           • Object owner gets OWNER access.

           • Project team owners get OWNER access.

         • “bucketOwnerRead”

           • Object owner gets OWNER access.

           • Project team owners get READER access.

         • “private”

           • Object owner gets OWNER access.

           • Default if left blank.

         • “projectPrivate”

           • Object owner gets OWNER access.

           • Project team members get access according to their roles.

         • “publicRead”

           • Object owner gets OWNER access.

           • All Users get READER access.

   –gcs-bucket-acl
       Access Control List for new buckets.

       Properties:

       • Config: bucket_acl

       • Env Var: RCLONE_GCS_BUCKET_ACL

       • Type: string

       • Required: false

       • Examples:

         • “authenticatedRead”

           • Project team owners get OWNER access.

           • All Authenticated Users get READER access.

         • “private”

           • Project team owners get OWNER access.

           • Default if left blank.

         • “projectPrivate”

           • Project team members get access according to their roles.

         • “publicRead”

           • Project team owners get OWNER access.

           • All Users get READER access.

         • “publicReadWrite”

           • Project team owners get OWNER access.

           • All Users get WRITER access.

   –gcs-bucket-policy-only
       Access checks should use bucket-level IAM policies.

       If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this.

       When it is set, rclone:

       • ignores ACLs set on buckets

       • ignores ACLs set on objects

       • creates buckets with Bucket Policy Only set

       Docs: https://cloud.google.com/storage/docs/bucket-policy-only
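       A minimal sketch (the bucket name is an assumption), uploading to a bucket with Bucket
       Policy Only set:

              rclone copy /home/source remote:bucket --gcs-bucket-policy-only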

       Properties:

       • Config: bucket_policy_only

       • Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY

       • Type: bool

       • Default: false

   –gcs-location
       Location for the newly created buckets.

       Properties:

       • Config: location

       • Env Var: RCLONE_GCS_LOCATION

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Empty for default location (US)

         • “asia”

           • Multi-regional location for Asia

         • “eu”

           • Multi-regional location for Europe

         • “us”

           • Multi-regional location for United States

         • “asia-east1”

           • Taiwan

         • “asia-east2”

           • Hong Kong

         • “asia-northeast1”

           • Tokyo

         • “asia-northeast2”

           • Osaka

         • “asia-northeast3”

           • Seoul

         • “asia-south1”

           • Mumbai

         • “asia-south2”

           • Delhi

         • “asia-southeast1”

           • Singapore

         • “asia-southeast2”

           • Jakarta

         • “australia-southeast1”

           • Sydney

         • “australia-southeast2”

           • Melbourne

         • “europe-north1”

           • Finland

         • “europe-west1”

           • Belgium

         • “europe-west2”

           • London

         • “europe-west3”

           • Frankfurt

         • “europe-west4”

           • Netherlands

         • “europe-west6”

           • Zürich

         • “europe-central2”

           • Warsaw

         • “us-central1”

           • Iowa

         • “us-east1”

           • South Carolina

         • “us-east4”

           • Northern Virginia

         • “us-west1”

           • Oregon

         • “us-west2”

           • California

         • “us-west3”

           • Salt Lake City

         • “us-west4”

           • Las Vegas

         • “northamerica-northeast1”

           • Montréal

         • “northamerica-northeast2”

           • Toronto

         • “southamerica-east1”

           • São Paulo

         • “southamerica-west1”

           • Santiago

         • “asia1”

           • Dual region: asia-northeast1 and asia-northeast2.

         • “eur4”

           • Dual region: europe-north1 and europe-west4.

         • “nam4”

           • Dual region: us-central1 and us-east1.

   –gcs-storage-class
       The storage class to use when storing objects in Google Cloud Storage.

       Properties:

       • Config: storage_class

       • Env Var: RCLONE_GCS_STORAGE_CLASS

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Default

         • “MULTI_REGIONAL”

           • Multi-regional storage class

         • “REGIONAL”

           • Regional storage class

         • “NEARLINE”

           • Nearline storage class

         • “COLDLINE”

           • Coldline storage class

         • “ARCHIVE”

           • Archive storage class

         • “DURABLE_REDUCED_AVAILABILITY”

           • Durable reduced availability storage class

   Advanced options
       Here  are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google
       Drive)).

   –gcs-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_GCS_TOKEN

       • Type: string

       • Required: false

   –gcs-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_GCS_AUTH_URL

       • Type: string

       • Required: false

   –gcs-token-url
       Token server url.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_GCS_TOKEN_URL

       • Type: string

       • Required: false

   –gcs-no-check-bucket
       If set, don’t attempt to check the bucket exists or create it.

       This can be useful when trying to minimise the number of transactions rclone does if you know the  bucket
       exists already.

       Properties:

       • Config: no_check_bucket

       • Env Var: RCLONE_GCS_NO_CHECK_BUCKET

       • Type: bool

       • Default: false

   –gcs-decompress
       If set this will decompress gzip encoded objects.

       It is possible to upload objects to GCS with “Content-Encoding: gzip” set.  Normally rclone will download
       these files as compressed objects.

       If  this  flag  is  set then rclone will decompress these files with “Content-Encoding: gzip” as they are
       received.  This means that rclone  can’t  check  the  size  and  hash  but  the  file  contents  will  be
       decompressed.

       Properties:

       • Config: decompress

       • Env Var: RCLONE_GCS_DECOMPRESS

       • Type: bool

       • Default: false

   –gcs-endpoint
       Endpoint for the service.

       Leave blank normally.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_GCS_ENDPOINT

       • Type: string

       • Required: false

   –gcs-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_GCS_ENCODING

       • Type: MultiEncoder

       • Default: Slash,CrLf,InvalidUtf8,Dot

   Limitations
       rclone  about  is  not  supported  by the Google Cloud Storage backend.  Backends without this capability
       cannot determine free space for an rclone mount or use policy mfs (most free space) as  a  member  of  an
       rclone union remote.

       See List of backends that do not support rclone about and rclone about

Google Drive

       Paths are specified as drive:path

       Drive paths may be as deep as required, e.g. drive:directory/subdirectory.

   Configuration
       The  initial  setup  for  drive  involves  getting a token from Google drive which you need to do in your
       browser.  rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              n/r/c/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Google Drive
                 \ "drive"
              [snip]
              Storage> drive
              Google Application Client Id - leave blank normally.
              client_id>
              Google Application Client Secret - leave blank normally.
              client_secret>
              Scope that rclone should use when requesting access from drive.
              Choose a number from below, or type in your own value
               1 / Full access all files, excluding Application Data Folder.
                 \ "drive"
               2 / Read-only access to file metadata and file contents.
                 \ "drive.readonly"
                 / Access to files created by rclone only.
               3 | These are visible in the drive website.
                 | File authorization is revoked when the user deauthorizes the app.
                 \ "drive.file"
                 / Allows read and write access to the Application Data folder.
               4 | This is not visible in the drive website.
                 \ "drive.appfolder"
                 / Allows read-only access to file metadata but
               5 | does not allow any access to read or download file content.
                 \ "drive.metadata.readonly"
              scope> 1
              Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
              service_account_file>
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine or Y didn't work
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              Configure this as a Shared Drive (Team Drive)?
              y) Yes
              n) No
              y/n> n
              --------------------
              [remote]
              client_id =
              client_secret =
              scope = drive
              root_folder_id =
              service_account_file =
              token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Note that rclone runs a webserver on your local machine to collect the token as returned from  Google  if
       you  use  auto  config  mode.  This only runs from the moment it opens your browser to the moment you get
       back the verification code.  This is on http://127.0.0.1:53682/ and it may  require  you  to  unblock  it
       temporarily if you are running a host firewall, or use manual mode.

       You can then use it like this,

       List directories in top level of your drive

              rclone lsd remote:

       List all the files in your drive

              rclone ls remote:

       To copy a local directory to a drive directory called backup

              rclone copy /home/source remote:backup

   Scopes
       Rclone  allows  you  to  select  which scope you would like for rclone to use.  This changes what type of
       token is granted to rclone.  The scopes are defined here.

       The scopes are:

   drive
       This is the default scope and allows full access to all files, except for  the  Application  Data  Folder
       (see below).

       Choose this one if you aren’t sure.

   drive.readonly
       This  allows read only access to all files.  Files may be listed and downloaded but not uploaded, renamed
       or deleted.

   drive.file
       With this scope rclone can read/view/modify only those files and folders it creates.

       So if you uploaded files to drive via the web interface (or any other means) they will not be visible  to
       rclone.

       This  can  be  useful if you are using rclone to backup data and you want to be sure confidential data on
       your drive is not visible to rclone.

       Files created with this scope are visible in the web interface.

   drive.appfolder
       This gives rclone its own private area to store files.  Rclone will not be able to see any other files on
       your drive and you won’t be able to see rclone’s files from the web interface either.

   drive.metadata.readonly
       This allows read only access to file names only.  It does not allow rclone to download or upload data, or
       rename or delete files or directories.

   Root folder ID
       This option has been moved to the advanced section.  You can set the root_folder_id for rclone.  This  is
       the directory (identified by its Folder ID) that rclone considers to be the root of your drive.

       Normally you will leave this blank and rclone will determine the correct root to use itself.

       However  you  can set this to restrict rclone to a specific folder hierarchy or to access data within the
       “Computers” tab on the drive web interface (where files from Google’s Backup  and  Sync  desktop  program
       go).

       In  order  to  do  this  you will have to find the Folder ID of the directory you wish rclone to display.
       This will be the last segment of the URL when you open the relevant folder in the drive web interface.

       So   if   the   folder    you    want    rclone    to    use    has    a    URL    which    looks    like
       https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh  in  the  browser,  then you use
       1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the root_folder_id in the config.
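       For example (a sketch reusing the ID above), you could set the option on an existing remote
       with:

              rclone config update remote root_folder_id 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh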

       NB folders under the “Computers” tab seem to be read only (drive gives a 500 error) when using rclone.

       There doesn’t appear to be an API to discover the folder IDs of the “Computers” tab - please  contact  us
       if you know otherwise!

       Note  also  that  rclone  can’t access any data under the “Backups” tab on the google drive web interface
       yet.

   Service Account support
       You can set up rclone with Google Drive in an unattended mode,  i.e. not  tied  to  a  specific  end-user
       Google account.  This is useful when you want to synchronise files onto machines that don’t have actively
       logged-in users, for example build machines.

       To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials
       at  the  service_account_file  prompt  during  rclone  config  and  rclone  won’t  use  the browser based
       authentication flow.  If you’d rather stuff the contents of the credentials file into the  rclone  config
       file,  you  can  set service_account_credentials with the actual contents of the file instead, or set the
       equivalent environment variable.

   Use case - Google Apps/G-suite account and individual Drive
       Let’s say that you are the administrator of a Google Apps (old) or G-suite account.  The
       goal is to store data on the Drive account of an individual who IS a member of the domain.
       We’ll call the domain example.com, and the user foo@example.com.

       There are a few steps we need to go through to accomplish this:

   1. Create a service account for example.com
       • To create a service account and obtain its credentials, go to the Google Developer Console.

       • You must have a project - create one if you don’t.

       • Then go to “IAM & admin” -> “Service Accounts”.

       • Use the “Create Credentials” button.  Fill in “Service account name”  with  something  that  identifies
         your client.  “Role” can be empty.

       • Tick “Furnish a new private key” - select “Key type JSON”.

        • Tick “Enable G Suite Domain-wide Delegation”.  This option makes “impersonation”
          possible, as documented here:
          https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority
          (Delegating domain-wide authority to the service account).

       • These credentials are what rclone will use for authentication.  If you  ever  need  to  remove  access,
         press the “Delete service account key” button.

   2. Allowing API access to example.com Google Drive
       • Go to example.com’s admin console

       • Go into “Security” (or use the search bar)

       • Select “Show more” and then “Advanced settings”

       • Select “Manage API client access” in the “Authentication” section

       • In the “Client Name” field enter the service account’s “Client ID” - this can be found in the Developer
         Console  under “IAM & Admin” -> “Service Accounts”, then “View Client ID” for the newly created service
         account.  It is a ~21 character numerical string.

       • In the next field, “One or More  API  Scopes”,  enter  https://www.googleapis.com/auth/drive  to  grant
         access to Google Drive specifically.

   3. Configure rclone, assuming a new install
              rclone config

              n/s/q> n         # New
              name>gdrive      # Gdrive is an example name
              Storage>         # Select the number shown for Google Drive
              client_id>       # Can be left blank
              client_secret>   # Can be left blank
              scope>           # Select your scope, 1 for example
              root_folder_id>  # Can be left blank
              service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
              y/n>             # Auto config, n

   4. Verify that it’s working
       • rclone -v --drive-impersonate foo@example.com lsf gdrive:backup

       • The arguments do:

         • -v - verbose logging

         • --drive-impersonate foo@example.com - this is what does the magic, pretending to be user foo.

         • lsf - list files in a parsing friendly way

         • gdrive:backup - use the remote called gdrive, work in the folder named backup.

       Note: in case you configured a specific root folder on gdrive and rclone is unable to access
       the contents of that folder when using --drive-impersonate, do this instead:

        • in the gdrive web interface, share your root folder with the user/email of the new
          Service Account you created/selected at step 1

        • use rclone without specifying the --drive-impersonate option, like this:
          rclone -v lsf gdrive:backup

   Shared drives (team drives)
       If you want to configure the remote to point to a Google Shared Drive (previously known as  Team  Drives)
       then answer y to the question Configure this as a Shared Drive (Team Drive)?.

       This  will  fetch  the list of Shared Drives from google and allow you to configure which one you want to
       use.  You can also type in a Shared Drive ID if you prefer.

       For example:

              Configure this as a Shared Drive (Team Drive)?
              y) Yes
              n) No
              y/n> y
              Fetching Shared Drive list...
              Choose a number from below, or type in your own value
               1 / Rclone Test
                 \ "xxxxxxxxxxxxxxxxxxxx"
               2 / Rclone Test 2
                 \ "yyyyyyyyyyyyyyyyyyyy"
               3 / Rclone Test 3
                 \ "zzzzzzzzzzzzzzzzzzzz"
              Enter a Shared Drive ID> 1
              --------------------
              [remote]
              client_id =
              client_secret =
              token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
              team_drive = xxxxxxxxxxxxxxxxxxxx
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   –fast-list
       This remote supports --fast-list which allows you to use fewer transactions in exchange for more  memory.
       See the rclone docs for more details.

       It does this by combining multiple list calls into a single API request.

       This works by combining many '%s' in parents filters into one expression.  To list the
       contents of directories a, b and c, the following requests will be sent by the regular List
       function:

              trashed=false and 'a' in parents
              trashed=false and 'b' in parents
              trashed=false and 'c' in parents

       These can now be combined into a single request:

              trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)

       The implementation of ListR will put up to 50  parents  filters  into  one  request.   It  will  use  the
       --checkers value to specify the number of requests to run in parallel.

       In  tests,  these  batch  requests  were up to 20x faster than the regular method.  Running the following
       command against different sized folders gives:

              rclone lsjson -vv -R --checkers=6 gdrive:folder

       small folder (220 directories, 700 files):

       • without --fast-list: 38s

       • with --fast-list: 10s

       large folder (10600 directories, 39000 files):

       • without --fast-list: 22:05 min

       • with --fast-list: 58s

   Modified time
       Google drive stores modification times accurate to 1 ms.

   Restricted filename characters
       Only Invalid UTF-8 bytes will be replaced, as they can’t be used in JSON strings.

       In contrast to other backends, / can also be used in names and . or .. are valid names.

   Revisions
       Google drive stores revisions of files.  When you upload a change to an existing  file  to  google  drive
       using rclone it will create a new revision of that file.

       Revisions follow the standard google policy which at the time of writing was:

       • They are deleted after 30 days or 100 revisions (whichever comes first).

       • They do not count towards a user storage quota.

   Deleting files
       By  default rclone will send all files to the trash when deleting files.  If deleting them permanently is
       required then use the --drive-use-trash=false flag, or set the equivalent environment variable.
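       For example (a sketch), to permanently delete the files under a path rather than sending
       them to the trash:

              rclone delete remote:path --drive-use-trash=false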

   Shortcuts
       In March 2020 Google introduced a new feature in Google Drive called drive shortcuts (API).
       These will (by September 2020) replace the ability for files or folders to be in multiple
       folders at once (see https://cloud.google.com/blog/products/g-suite/simplifying-google-drives-folder-structure-and-sharing-models).

       Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they
       point to the underlying file data (e.g. the inode in unix terms) so they don’t break  if  the  source  is
       renamed or moved about.

       By default rclone treats these as follows.

       For shortcuts pointing to files:

        • When listing, a file shortcut appears as the destination file.

        • When downloading, the contents of the destination file are downloaded.

        • When updating a shortcut file with a non-shortcut file, the shortcut is removed and a new
          file is uploaded in place of the shortcut.

        • When server-side moving (renaming), the shortcut is renamed, not the destination file.

        • When server-side copying, the shortcut is copied, not the contents of the shortcut
          (unless --drive-copy-shortcut-content is in use, in which case the contents of the
          shortcut are copied).

        • When deleting, the shortcut is deleted, not the linked file.

        • When setting the modification time, the modification time of the linked file will be set.

       For shortcuts pointing to folders:

       • When listing, the shortcut appears as a folder and that folder will contain the contents of the linked
         folder (including any sub folders).

       • When downloading, the contents of the linked folder and its sub contents are downloaded.

       • When uploading to a shortcut folder, the file will be placed in the linked folder.

       • When server-side moving (renaming), the shortcut is renamed, not the destination folder.

       • When server-side copying, the contents of the linked folder are copied, not the shortcut.

       • When deleting with rclone rmdir or rclone purge, the shortcut is deleted, not the linked folder.

       • NB When deleting with rclone remove or rclone mount, the contents of the linked folder will be deleted.

       The rclone backend command can be used to create shortcuts.

       Shortcuts  can  be  completely  ignored  with  the  --drive-skip-shortcuts  flag  or  the   corresponding
       skip_shortcuts configuration setting.
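
       For example, a minimal sketch of listing a drive while ignoring shortcuts entirely:

              rclone lsf --drive-skip-shortcuts remote: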

   Emptying trash
       If  you  wish  to  empty your trash you can use the rclone cleanup remote: command which will permanently
       delete all your trashed files.  This command does not take any path arguments.
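
       For example, to permanently empty the trash on this remote:

              rclone cleanup remote: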

       Note that Google Drive takes some time (minutes to days) to empty  the  trash  even  though  the  command
       returns  within  a  few  seconds.  No output is echoed, so there will be no confirmation even using -v or
       -vv.

   Quota information
       To view your current quota you can use the rclone about remote: command which  will  display  your  usage
       limit  (quota), the usage in Google Drive, the size of all files in the Trash and the space used by other
       Google services such as Gmail.  This command does not take any path arguments.
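
       For example, to see the quota for this remote:

              rclone about remote: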

   Import/Export of google documents
       Google documents can be exported from and uploaded to Google Drive.

       When rclone downloads a Google doc it chooses a format to download depending upon the
       --drive-export-formats setting.  By default the export formats are docx,xlsx,pptx,svg, which is a
       sensible default for an editable document.

       When choosing a format, rclone runs down the list provided in order and chooses the first file format the
       doc can be exported as from the list.  If the file can’t be exported to a format  on  the  formats  list,
       then rclone will choose a format from the default list.

       If  you  prefer  an  archive  copy  then  you  might  use  --drive-export-formats  pdf,  or if you prefer
       openoffice/libreoffice formats you might use --drive-export-formats ods,odt,odp.

       Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet on google  docs,
       it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
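
       For example, a sketch of downloading all Google docs under a folder as PDFs (the paths are placeholders):

              rclone copy --drive-export-formats pdf remote:docs /path/to/archive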

       When importing files into Google Drive, rclone will convert all files with an extension in
       --drive-import-formats to their associated document type.  rclone will not convert any files by default,
       since the conversion is a lossy process.

       The  conversion  must  result in a file with the same extension when the --drive-export-formats rules are
       applied to the uploaded document.

       Here are some examples for allowed and prohibited conversions.

       export-formats      import-formats      Upload Ext      Document Ext     Allowed
       ───────────────────────────────────────────────────────────────────────────────────
       odt                 odt                 odt             odt              Yes
       odt                 docx,odt            odt             odt              Yes
                           docx                docx            docx             Yes
                           odt                 odt             docx             No
       odt,docx            docx,odt            docx            odt              No
       docx,odt            docx,odt            docx            docx             Yes
       docx,odt            docx,odt            odt             docx             No

       This limitation can be disabled by specifying --drive-allow-import-name-change.  When using this flag,
       rclone can convert multiple file types resulting in the same document type at once, e.g. with
       --drive-import-formats docx,odt,txt, all files having these extensions would result in a document
       represented as a docx file.  This brings the additional risk of overwriting a document if multiple files
       have the same stem.  Many rclone operations will not handle this name change in any way.  They assume an
       equal name when copying files and might copy the file again or delete it when the name changes.
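
       For example, a sketch of importing Word files as Google Docs (the paths are placeholders); with the
       default export formats this round-trips to the same docx extension, so no name change is involved:

              rclone copy /path/to/notes.docx remote:docs --drive-import-formats docx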

       Here are the possible export extensions with their corresponding mime types.  Most of these can also be
       used for importing, but there are more that are not listed here.  Some of these additional ones might only
       be available when the operating system provides the correct MIME type entries.

       This  list  can  be  changed  by Google Drive at any time and might not represent the currently available
       conversions.

       Extension              Mime Type                                                                   Description
       ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       bmp                    image/bmp                                                                   Windows Bitmap format
       csv                    text/csv                                                                    Standard  CSV  format  for
                                                                                                          Spreadsheets
       doc                    application/msword                                                          Classic Word file
       docx                   application/vnd.openxmlformats-officedocument.wordprocessingml.document     Microsoft Office Document
       epub                   application/epub+zip                                                        E-book format
       html                   text/html                                                                   An HTML Document
       jpg                    image/jpeg                                                                  A JPEG Image File
       json                   application/vnd.google-apps.script+json                                     JSON   Text   Format   for
                                                                                                          Google Apps scripts
       odp                    application/vnd.oasis.opendocument.presentation                             Openoffice Presentation
       ods                    application/vnd.oasis.opendocument.spreadsheet                              Openoffice Spreadsheet
       ods                    application/x-vnd.oasis.opendocument.spreadsheet                            Openoffice Spreadsheet
       odt                    application/vnd.oasis.opendocument.text                                     Openoffice Document
       pdf                    application/pdf                                                             Adobe PDF Format
       pjpeg                  image/pjpeg                                                                 Progressive JPEG Image
       png                    image/png                                                                   PNG Image Format
       pptx                   application/vnd.openxmlformats-officedocument.presentationml.presentation   Microsoft           Office
                                                                                                          Powerpoint
       rtf                    application/rtf                                                             Rich Text Format
       svg                    image/svg+xml                                                               Scalable  Vector  Graphics
                                                                                                          Format
       tsv                    text/tab-separated-values                                                   Standard  TSV  format  for
                                                                                                          spreadsheets
       txt                    text/plain                                                                  Plain Text
       wmf                    application/x-msmetafile                                                    Windows Meta File
       xls                    application/vnd.ms-excel                                                    Classic Excel file
       xlsx                   application/vnd.openxmlformats-officedocument.spreadsheetml.sheet           Microsoft           Office
                                                                                                          Spreadsheet
        zip                    application/zip                                                             A ZIP file of HTML, Images
                                                                                                           and CSS

       Google documents can also be exported as link files.  These files will open  a  browser  window  for  the
       Google  Docs  website  of  that  document  when opened.  The link file extension has to be specified as a
       --drive-export-formats parameter.  They will match all available Google Documents.

       Extension   Description                    OS Support
       ──────────────────────────────────────────────────────────
       desktop     freedesktop.org    specified   Linux
                   desktop entry
       link.html   An   HTML  Document  with  a   All
                   redirect
       url         INI style link file            macOS, Windows
       webloc      macOS specific XML format      macOS
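
       For example, a sketch that exports Google docs as browser redirect files (the paths are placeholders):

              rclone copy --drive-export-formats link.html remote:docs /path/to/links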

   Standard options
       Here are the Standard options specific to drive (Google Drive).

   –drive-client-id
       Google Application Client Id.  Setting your own is recommended.  See
       https://rclone.org/drive/#making-your-own-client-id for how to create your own.  If you leave this blank,
       it will use an internal key which is low performance.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_DRIVE_CLIENT_ID

       • Type: string

       • Required: false

   –drive-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_DRIVE_CLIENT_SECRET

       • Type: string

       • Required: false

   –drive-scope
       Scope that rclone should use when requesting access from drive.

       Properties:

       • Config: scope

       • Env Var: RCLONE_DRIVE_SCOPE

       • Type: string

       • Required: false

       • Examples:

         • “drive”

           • Full access all files, excluding Application Data Folder.

         • “drive.readonly”

           • Read-only access to file metadata and file contents.

         • “drive.file”

           • Access to files created by rclone only.

           • These are visible in the drive website.

           • File authorization is revoked when the user deauthorizes the app.

         • “drive.appfolder”

           • Allows read and write access to the Application Data folder.

           • This is not visible in the drive website.

         • “drive.metadata.readonly”

           • Allows read-only access to file metadata but

           • does not allow any access to read or download file content.

   –drive-service-account-file
       Service Account Credentials JSON file path.

       Leave blank normally.  Needed only if you want to use SA instead of interactive login.

       Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

       Properties:

       • Config: service_account_file

       • Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE

       • Type: string

       • Required: false

   –drive-alternate-export
       Deprecated: No longer needed.

       Properties:

       • Config: alternate_export

       • Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT

       • Type: bool

       • Default: false

   Advanced options
       Here are the Advanced options specific to drive (Google Drive).

   –drive-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_DRIVE_TOKEN

       • Type: string

       • Required: false

   –drive-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_DRIVE_AUTH_URL

       • Type: string

       • Required: false

   –drive-token-url
       Token server url.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_DRIVE_TOKEN_URL

       • Type: string

       • Required: false

   –drive-root-folder-id
       ID of the root folder.  Leave blank normally.

       Fill  in to access “Computers” folders (see docs), or for rclone to use a non root folder as its starting
       point.

       Properties:

       • Config: root_folder_id

       • Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID

       • Type: string

       • Required: false

   –drive-service-account-credentials
       Service Account Credentials JSON blob.

       Leave blank normally.  Needed only if you want to use SA instead of interactive login.

       Properties:

       • Config: service_account_credentials

       • Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS

       • Type: string

       • Required: false

   –drive-team-drive
       ID of the Shared Drive (Team Drive).

       Properties:

       • Config: team_drive

       • Env Var: RCLONE_DRIVE_TEAM_DRIVE

       • Type: string

       • Required: false

   –drive-auth-owner-only
       Only consider files owned by the authenticated user.

       Properties:

       • Config: auth_owner_only

       • Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY

       • Type: bool

       • Default: false

   –drive-use-trash
       Send files to the trash instead of deleting permanently.

       Defaults to true, namely sending files  to  the  trash.   Use  --drive-use-trash=false  to  delete  files
       permanently instead.

       Properties:

       • Config: use_trash

       • Env Var: RCLONE_DRIVE_USE_TRASH

       • Type: bool

       • Default: true

   –drive-copy-shortcut-content
       Server side copy contents of shortcuts instead of the shortcut.

       When doing server side copies, normally rclone will copy shortcuts as shortcuts.

       If  this  flag  is  used then rclone will copy the contents of shortcuts rather than shortcuts themselves
       when doing server side copies.

       Properties:

       • Config: copy_shortcut_content

       • Env Var: RCLONE_DRIVE_COPY_SHORTCUT_CONTENT

       • Type: bool

       • Default: false

   –drive-skip-gdocs
       Skip google documents in all listings.

       If given, gdocs practically become invisible to rclone.

       Properties:

       • Config: skip_gdocs

       • Env Var: RCLONE_DRIVE_SKIP_GDOCS

       • Type: bool

       • Default: false

   –drive-skip-checksum-gphotos
       Skip MD5 checksum on Google photos and videos only.

       Use this if you get checksum errors when transferring Google photos or videos.

       Setting this flag will cause Google photos and videos to return a blank MD5 checksum.

       Google photos are identified by being in the “photos” space.

       Corrupted checksums are caused by Google modifying the image/video but not updating the checksum.

       Properties:

       • Config: skip_checksum_gphotos

       • Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS

       • Type: bool

       • Default: false

   –drive-shared-with-me
       Only show files that are shared with me.

       Instructs rclone to operate on your “Shared with me” folder (where Google Drive lets you access the files
       and folders others have shared with you).

       This works both with the “list” (lsd, lsl, etc.)  and the “copy” commands (copy, sync,  etc.),  and  with
       all other commands too.

       Properties:

       • Config: shared_with_me

       • Env Var: RCLONE_DRIVE_SHARED_WITH_ME

       • Type: bool

       • Default: false

   –drive-trashed-only
       Only show files that are in the trash.

       This will show trashed files in their original directory structure.

       Properties:

       • Config: trashed_only

       • Env Var: RCLONE_DRIVE_TRASHED_ONLY

       • Type: bool

       • Default: false

   –drive-starred-only
       Only show files that are starred.

       Properties:

       • Config: starred_only

       • Env Var: RCLONE_DRIVE_STARRED_ONLY

       • Type: bool

       • Default: false

   –drive-formats
       Deprecated: See export_formats.

       Properties:

       • Config: formats

       • Env Var: RCLONE_DRIVE_FORMATS

       • Type: string

       • Required: false

   –drive-export-formats
       Comma separated list of preferred formats for downloading Google docs.

       Properties:

       • Config: export_formats

       • Env Var: RCLONE_DRIVE_EXPORT_FORMATS

       • Type: string

       • Default: “docx,xlsx,pptx,svg”

   –drive-import-formats
       Comma separated list of preferred formats for uploading Google docs.

       Properties:

       • Config: import_formats

       • Env Var: RCLONE_DRIVE_IMPORT_FORMATS

       • Type: string

       • Required: false

   –drive-allow-import-name-change
       Allow the filetype to change when uploading Google docs.

       E.g.  file.doc to file.docx.  This will confuse sync and reupload every time.

       Properties:

       • Config: allow_import_name_change

       • Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE

       • Type: bool

       • Default: false

   –drive-use-created-date
       Use file created date instead of modified date.

       Useful when downloading data and you want the creation date used in place of the last modified date.

       WARNING: This flag may have some unexpected consequences.

       When uploading to your drive all files will be overwritten unless they haven’t been modified since their
       creation.  And the inverse will occur while downloading.  This side effect can be avoided by using the
       --checksum flag.

       This feature was implemented to retain the photo capture date as recorded by google photos.  You will
       first need to check the “Create a Google Photos folder” option in your google drive settings.  You can
       then copy or move the photos locally and use the date the image was taken (created) as the modification
       date.

       Properties:

       • Config: use_created_date

       • Env Var: RCLONE_DRIVE_USE_CREATED_DATE

       • Type: bool

       • Default: false

   –drive-use-shared-date
       Use date file was shared instead of modified date.

       Note that, as with --drive-use-created-date, this flag may have unexpected consequences when
       uploading/downloading files.

       If both this flag and --drive-use-created-date are set, the created date is used.

       Properties:

       • Config: use_shared_date

       • Env Var: RCLONE_DRIVE_USE_SHARED_DATE

       • Type: bool

       • Default: false

   –drive-list-chunk
       Size of listing chunk 100-1000, 0 to disable.

       Properties:

       • Config: list_chunk

       • Env Var: RCLONE_DRIVE_LIST_CHUNK

       • Type: int

       • Default: 1000

   –drive-impersonate
       Impersonate this user when using a service account.

       Properties:

       • Config: impersonate

       • Env Var: RCLONE_DRIVE_IMPERSONATE

       • Type: string

       • Required: false

   –drive-upload-cutoff
       Cutoff for switching to chunked upload.

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF

       • Type: SizeSuffix

       • Default: 8Mi

   –drive-chunk-size
       Upload chunk size.

       Must be a power of 2 >= 256k.

       Making this larger will improve performance, but note that each chunk is buffered in memory, one per
       transfer.

       Reducing this will reduce memory usage but decrease performance.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_DRIVE_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 8Mi

   –drive-acknowledge-abuse
       Set to allow files which return cannotDownloadAbusiveFile to be downloaded.

       If downloading a file returns the error “This file has been identified as malware or spam and  cannot  be
       downloaded”  with  the error code “cannotDownloadAbusiveFile” then supply this flag to rclone to indicate
       you acknowledge the risks of downloading the file and rclone will download it anyway.

       Properties:

       • Config: acknowledge_abuse

       • Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE

       • Type: bool

       • Default: false

   –drive-keep-revision-forever
       Keep new head revision of each file forever.

       Properties:

       • Config: keep_revision_forever

       • Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER

       • Type: bool

       • Default: false

   –drive-size-as-quota
       Show sizes as storage quota usage, not actual size.

       Show the size of a file as the storage quota used.  This is the current version plus any  older  versions
       that have been set to keep forever.

       WARNING: This flag may have some unexpected consequences.

       It is not recommended to set this flag in your config - the recommended usage is using the flag form
       --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only.

       If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also.

       Properties:

       • Config: size_as_quota

       • Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA

       • Type: bool

       • Default: false

   –drive-v2-download-min-size
       If objects are greater than this, use the drive v2 API to download.

       Properties:

       • Config: v2_download_min_size

       • Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE

       • Type: SizeSuffix

       • Default: off

   –drive-pacer-min-sleep
       Minimum time to sleep between API calls.

       Properties:

       • Config: pacer_min_sleep

       • Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP

       • Type: Duration

       • Default: 100ms

   –drive-pacer-burst
       Number of API calls to allow without sleeping.

       Properties:

       • Config: pacer_burst

       • Env Var: RCLONE_DRIVE_PACER_BURST

       • Type: int

       • Default: 100

   –drive-server-side-across-configs
       Allow server-side operations (e.g. copy) to work across different drive configs.

       This can be useful if you wish to do a server-side copy between two different Google drives.   Note  that
       this  isn’t  enabled  by  default  because  it  isn’t  easy  to  tell  if  it  will  work between any two
       configurations.
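
       For example, a sketch of a server-side copy between two drive remotes (the remote names are placeholders):

              rclone copy --drive-server-side-across-configs gdrive1:src gdrive2:dst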

       Properties:

       • Config: server_side_across_configs

       • Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS

       • Type: bool

       • Default: false

   –drive-disable-http2
       Disable drive using http2.

       There is currently an unsolved issue with the google drive  backend  and  HTTP/2.   HTTP/2  is  therefore
       disabled by default for the drive backend but can be re-enabled here.  When the issue is solved this flag
       will be removed.

       See: https://github.com/rclone/rclone/issues/3631

       Properties:

       • Config: disable_http2

       • Env Var: RCLONE_DRIVE_DISABLE_HTTP2

       • Type: bool

       • Default: true

   –drive-stop-on-upload-limit
       Make upload limit errors be fatal.

       At  the  time  of writing it is only possible to upload 750 GiB of data to Google Drive a day (this is an
       undocumented limit).  When this limit is  reached  Google  Drive  produces  a  slightly  different  error
       message.   When  this  flag  is  set it causes these errors to be fatal.  These will stop the in-progress
       sync.

       Note that this detection is relying on error message strings which Google don’t document so it may  break
       in the future.

       See: https://github.com/rclone/rclone/issues/3857

       Properties:

       • Config: stop_on_upload_limit

       • Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT

       • Type: bool

       • Default: false

   –drive-stop-on-download-limit
       Make download limit errors be fatal.

       At the time of writing it is only possible to download 10 TiB of data from Google Drive a day (this is an
       undocumented  limit).   When  this  limit  is  reached  Google  Drive produces a slightly different error
       message.  When this flag is set it causes these errors to be fatal.   These  will  stop  the  in-progress
       sync.

       Note  that this detection is relying on error message strings which Google don’t document so it may break
       in the future.

       Properties:

       • Config: stop_on_download_limit

       • Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT

       • Type: bool

       • Default: false

   –drive-skip-shortcuts
       If set, skip shortcut files.

       Normally rclone dereferences shortcut files making them appear as if they are the original file (see  the
       shortcuts section).  If this flag is set then rclone will ignore shortcut files completely.

       Properties:

       • Config: skip_shortcuts

       • Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS

       • Type: bool

       • Default: false

   –drive-skip-dangling-shortcuts
       If set, skip dangling shortcut files.

       If this is set then rclone will not show any dangling shortcuts in listings.

       Properties:

       • Config: skip_dangling_shortcuts

       • Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS

       • Type: bool

       • Default: false

   –drive-resource-key
       Resource key for accessing a link-shared file.

       If you need to access files shared with a link like this

              https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing

       Then  you  will need to use the first part “XXX” as the “root_folder_id” and the second part “YYY” as the
       “resource_key” otherwise you will get 404 not found errors when trying to access the directory.
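
       A sketch of the corresponding config section, using the placeholder values from the URL above:

              [shared]
              type = drive
              root_folder_id = XXX
              resource_key = YYY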

       See: https://developers.google.com/drive/api/guides/resource-keys

       This resource key requirement only applies to a subset of old files.

       Note also that opening the folder once in the web interface (with the user you’ve authenticated rclone
       with) seems to be enough so that the resource key is not needed.

       Properties:

       • Config: resource_key

       • Env Var: RCLONE_DRIVE_RESOURCE_KEY

       • Type: string

       • Required: false

   –drive-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_DRIVE_ENCODING

       • Type: MultiEncoder

       • Default: InvalidUtf8

   Backend commands
       Here are the commands specific to the drive backend.

       Run them with

              rclone backend COMMAND remote:

       The help below will explain what arguments each command takes.

       See the backend command for more info on how to pass options and arguments.

       These can be run on a running backend using the rc command backend/command.

   get
       Get command for fetching the drive config parameters

              rclone backend get remote: [options] [<arguments>+]

       This is a get command which will be used to fetch the various drive config parameters

       Usage Examples:

              rclone backend get drive: [-o service_account_file] [-o chunk_size]
              rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]

       Options:

       • “chunk_size”: show the current upload chunk size

       • “service_account_file”: show the current service account file

   set
       Set command for updating the drive config parameters

              rclone backend set remote: [options] [<arguments>+]

       This is a set command which will be used to update the various drive config parameters

       Usage Examples:

              rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
              rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]

       Options:

       • “chunk_size”: update the current upload chunk size

       • “service_account_file”: update the current service account file

   shortcut
       Create shortcuts from files or directories

              rclone backend shortcut remote: [options] [<arguments>+]

       This command creates shortcuts from files or directories.

       Usage:

              rclone backend shortcut drive: source_item destination_shortcut
              rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut

       In the first example this creates a shortcut from the “source_item” which can be a file or a directory to
       the  “destination_shortcut”.   The  “source_item” and the “destination_shortcut” should be relative paths
       from “drive:”

       In the second example this creates a  shortcut  from  the  “source_item”  relative  to  “drive:”  to  the
       “destination_shortcut”  relative  to  “drive2:”.   This  may  fail  with  a  permission error if the user
       authenticated with “drive2:” can’t read files from “drive:”.

       Options:

       • “target”: optional target remote for the shortcut destination

   drives
       List the Shared Drives available to this account

              rclone backend drives remote: [options] [<arguments>+]

       This command lists the Shared Drives (Team Drives) available to this account.

       Usage:

              rclone backend [-o config] drives drive:

       This will return a JSON list of objects like this

              [
                  {
                      "id": "0ABCDEF-01234567890",
                      "kind": "drive#teamDrive",
                      "name": "My Drive"
                  },
                  {
                      "id": "0ABCDEFabcdefghijkl",
                      "kind": "drive#teamDrive",
                      "name": "Test Drive"
                  }
              ]

       With the -o config parameter it will output the list in a format suitable for adding to a config file  to
       make aliases for all the drives found and a combined drive.

              [My Drive]
              type = alias
              remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:

              [Test Drive]
              type = alias
              remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:

              [AllDrives]
              type = combine
              upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

       Adding  this  to  the  rclone  config file will cause those team drives to be accessible with the aliases
       shown.  Any illegal characters will be substituted  with  “_”  and  duplicate  names  will  have  numbers
       suffixed.  It will also add a remote called AllDrives which shows all the shared drives combined into one
       directory tree.

   untrash
       Untrash files and directories

              rclone backend untrash remote: [options] [<arguments>+]

       This command untrashes all the files and directories in the directory passed in recursively.

       Usage:

       This takes an optional directory to untrash, which makes this easier to use via the API.

              rclone backend untrash drive:directory
              rclone backend -i untrash drive:directory subdir

       Use the -i flag to see what would be restored before restoring it.

       Result:

              {
                  "Untrashed": 17,
                  "Errors": 0
              }

   copyid
       Copy files by ID

              rclone backend copyid remote: [options] [<arguments>+]

       This command copies files by ID

       Usage:

              rclone backend copyid drive: ID path
              rclone backend copyid drive: ID1 path1 ID2 path2

       It  copies  the  drive  file with ID given to the path (an rclone path which will be passed internally to
       rclone copyto).  The ID and path pairs can be repeated.

       The path should end with a / to indicate copy the file as named to this directory.   If  it  doesn’t  end
       with a / then the last path component will be used as the file name.

       If the destination is a drive backend then server-side copying will be attempted if possible.

       Use the -i flag to see what would be copied before copying.

   exportformats
       Dump the export formats for debug purposes

              rclone backend exportformats remote: [options] [<arguments>+]

   importformats
       Dump the import formats for debug purposes

              rclone backend importformats remote: [options] [<arguments>+]

   Limitations
       Drive  has  quite a lot of rate limiting.  This causes rclone to be limited to transferring about 2 files
       per second only.  Individual files may be transferred much faster at 100s of  MiB/s  but  lots  of  small
       files can take a long time.

       Server  side  copies  are  also  subject  to  a separate rate limit.  If you see User rate limit exceeded
       errors, wait at least 24 hours and retry.  You can disable server-side  copies  with  --disable  copy  to
       download and upload the files if you prefer.

   Limitations of Google Docs
       Google  docs  will  appear as size -1 in rclone ls, rclone ncdu etc, and as size 0 in anything which uses
       the VFS layer, e.g. rclone mount and rclone serve.  When calculating  directory  totals,  e.g. in  rclone
       size and rclone ncdu, they will be counted in as empty files.

       This is because rclone can’t find out the size of the Google docs without downloading them.

       Google  docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size
       when doing the transfer.

       However an unfortunate consequence of this is that you may not be able  to  download  Google  docs  using
       rclone  mount.   If  it  doesn’t work you will get a 0 sized file.  If you try again the doc may gain its
       correct size and be downloadable.  Whether it will work or not depends on the application accessing the
       mount and the OS you are running - experiment to find out if it does work for you!

   Duplicated files
       Sometimes,  for  no reason I’ve been able to track down, drive will duplicate a file that rclone uploads.
       Drive unlike all the other remotes can have duplicated files.

       Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

       Use rclone dedupe to fix duplicated files.

       Note that this isn’t just a problem with rclone, even Google Photos on Android duplicates files on  drive
       sometimes.

   Rclone appears to be re-copying files it shouldn’t
       The  most likely cause of this is the duplicated file issue above - run rclone dedupe and check your logs
       for duplicate object or directory messages.

       This can also be caused by a delay/caching on google drive’s end when comparing directory listings.
       Specifically with team drives used in combination with --fast-list.  Files that were uploaded recently may
       not appear on the directory list sent to rclone when using --fast-list.

       Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not
       using --fast-list both seem to be effective in preventing the problem.

   Making your own client_id
       When you use rclone with Google drive in its default configuration  you  are  using  rclone’s  client_id.
       This  is  shared between all the rclone users.  There is a global rate limit on the number of queries per
       second that each client_id can do set by Google.  rclone already has a high quota and I will continue  to
       make sure it is high enough by contacting Google.

       It is strongly recommended to use your own client ID as the default rclone ID is heavily used.  If you
       have multiple services running, it is recommended to use an API key for each service.  The default Google
       quota is 10 transactions per second, so it is recommended to stay under that number; exceeding it will
       cause rclone to be rate limited, making things slower.

       Here is how to create your own Google Drive client ID for rclone:

        1. Log  into the Google API Console with your Google account.  It doesn’t matter what Google account you
           use.  (It need not be the same account as the Google Drive you want to access)

        2. Select a project or create a new project.

        3. Under “ENABLE APIS AND SERVICES” search for “Drive”, and enable the “Google Drive API”.

        4. Click “Credentials” in the left-side panel (not “Create credentials”, which opens the  wizard),  then
           “Create credentials”

        5. If  you  already  configured  an “Oauth Consent Screen”, then skip to the next step; if not, click on
           “CONFIGURE CONSENT SCREEN” button (near the top  right  corner  of  the  right  panel),  then  select
           “External”  and  click on “CREATE”; on the next screen, enter an “Application name” (“rclone” is OK);
           enter “User Support Email” (your own email is OK); enter “Developer Contact Email” (your own email is
           OK); then click on “Save” (all other data is optional).  Click again on  “Credentials”  on  the  left
           panel to go back to the “Credentials” screen.

           (PS: if you are a GSuite user, you could also select “Internal” instead of “External” above, but this
           will restrict API use to Google Workspace users in your organisation).

        6. Click on the “+ CREATE CREDENTIALS” button at the top of the screen, then select “OAuth client ID”.

        7. Choose an application type of “Desktop app” and click “Create”.  (the default name is fine)

        8. It will show you a client ID and client secret.  Make a note of these.

           (If  you selected “External” at Step 5 continue to “Publish App” in the Steps 9 and 10.  If you chose
           “Internal” you don’t need to publish and can skip straight to Step 11.)

        9. Go to “Oauth consent screen” and press “Publish App”

       10. Click “OAuth consent screen”, then click “PUBLISH APP” button and confirm, or add your account  under
           “Test users”.

       11. Provide the noted client ID and client secret to rclone.
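
       As an illustration (the values are placeholders), the resulting config section might look like:

              [gdrive]
              type = drive
              client_id = 1234567890-abcdefg.apps.googleusercontent.com
              client_secret = your-client-secret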

       Be  aware  that,  due  to  the  “enhanced  security” recently introduced by Google, you are theoretically
       expected to “submit your app for verification” and then wait a  few  weeks(!)   for  their  response;  in
       practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will
       be  a very scary confirmation screen shown when you connect via your browser for rclone to be able to get
       its token-id (but as this only happens during the remote configuration, it’s not such a big deal).

       (Thanks to @balazer on github for these instructions.)

       Sometimes, creation of an OAuth consent in Google API Console fails due to an error message “The  request
       failed  because  changes  to  one  of  the  field  of  the  resource  is not supported”.  As a convenient
       workaround, the necessary Google Drive API key can be created on the Python Quickstart page.   Just  push
       the  Enable  the  Drive  API button to receive the Client ID and Secret.  Note that it will automatically
       create a new project in the API Console.

Google Photos

       The rclone backend for Google Photos is a specialized backend for transferring photos and videos  to  and
       from Google Photos.

       NB  The  Google  Photos API which rclone uses has quite a few limitations, so please read the limitations
       section carefully to make sure it is suitable for your use.

   Configuration
       The initial setup for google photos involves getting a token from Google Photos which you need to do in
       your browser.  rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Google Photos
                 \ "google photos"
              [snip]
              Storage> google photos
              ** See help for google photos backend at: https://rclone.org/googlephotos/ **

              Google Application Client Id
              Leave blank normally.
              Enter a string value. Press Enter for the default ("").
              client_id>
              Google Application Client Secret
              Leave blank normally.
              Enter a string value. Press Enter for the default ("").
              client_secret>
              Set to make the Google Photos backend read only.

              If you choose read only then rclone will only request read only access
              to your photos, otherwise rclone will request full access.
              Enter a boolean value (true or false). Press Enter for the default ("false").
              read_only>
              Edit advanced config? (y/n)
              y) Yes
              n) No
              y/n> n
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code

              *** IMPORTANT: All media items uploaded to Google Photos with rclone
              *** are stored in full resolution at original quality.  These uploads
              *** will count towards storage in your Google Account.

              --------------------
              [remote]
              type = google photos
              token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Note  that  rclone runs a webserver on your local machine to collect the token as returned from Google if
       you use auto config mode.  This only runs from the moment it opens your browser to  the  moment  you  get
       back  the  verification  code.  This is on http://127.0.0.1:53682/ and this may require you to unblock it
       temporarily if you are running a host firewall, or use manual mode.

       This remote is called remote and can now be used like this

       See all the albums in your photos

              rclone lsd remote:album

       Make a new album

              rclone mkdir remote:album/newAlbum

       List the contents of an album

              rclone ls remote:album/newAlbum

       Sync /home/local/images to Google Photos, removing any excess files in the album.

              rclone sync -i /home/local/images remote:album/newAlbum

   Layout
       As Google Photos is not a general purpose cloud storage system the  backend  is  laid  out  to  help  you
       navigate it.

       The  directories  under  media  show  different  ways  of  categorizing the media.  Each file will appear
       multiple times.  So if you want to make a backup of  your  google  photos  you  might  choose  to  backup
       remote:media/by-month.  (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)
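
       For example, a sketch of such a backup (the local path is a placeholder):

              rclone copy remote:media/by-month /path/to/photos-backup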

       Note  that  all  your  photos and videos will appear somewhere under media, but they may not appear under
       album unless you’ve put them into albums.

              /
              - upload
                  - file1.jpg
                  - file2.jpg
                  - ...
              - media
                  - all
                      - file1.jpg
                      - file2.jpg
                      - ...
                  - by-year
                      - 2000
                          - file1.jpg
                          - ...
                      - 2001
                          - file2.jpg
                          - ...
                      - ...
                  - by-month
                      - 2000
                          - 2000-01
                              - file1.jpg
                              - ...
                          - 2000-02
                              - file2.jpg
                              - ...
                      - ...
                  - by-day
                      - 2000
                          - 2000-01-01
                              - file1.jpg
                              - ...
                          - 2000-01-02
                              - file2.jpg
                              - ...
                      - ...
              - album
                  - album name
                  - album name/sub
              - shared-album
                  - album name
                  - album name/sub
              - feature
                  - favorites
                      - file1.jpg
                      - file2.jpg

       There are two writable parts of the  tree,  the  upload  directory  and  sub  directories  of  the  album
       directory.

       The upload directory is for uploading files you don’t want to put into albums.  This will be empty to
       start with and will contain the files you’ve uploaded for one rclone session only, becoming empty again
       when you restart rclone.  The use case for this would be if you have a load of files you just want to dump
       into Google Photos in one go.  For repeated syncing, uploading to album will work better.

       Directories within the album directory are also writeable and you may  create  new  directories  (albums)
       under  album.   If you copy files with a directory hierarchy in there then rclone will create albums with
       the / character in them.  For example if you do

              rclone copy /path/to/images remote:album/images

       and the images directory contains

              images
                  - file1.jpg
                  dir
                      file2.jpg
                  dir2
                      dir3
                          file3.jpg

       Then rclone will create the following albums with the following files in

       • images

         • file1.jpg

       • images/dir

         • file2.jpg

       • images/dir2/dir3

         • file3.jpg

       This means that you can use the album path pretty much like a normal filesystem and it is a  good  target
       for repeated syncing.

       The shared-album directory shows albums shared with you or by you.  This is similar to the Sharing tab in
       the Google Photos web interface.

   Standard options
       Here are the Standard options specific to google photos (Google Photos).

   –gphotos-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_GPHOTOS_CLIENT_ID

       • Type: string

       • Required: false

   –gphotos-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_GPHOTOS_CLIENT_SECRET

       • Type: string

       • Required: false

   –gphotos-read-only
       Set to make the Google Photos backend read only.

       If  you  choose read only then rclone will only request read only access to your photos, otherwise rclone
       will request full access.

       Properties:

       • Config: read_only

       • Env Var: RCLONE_GPHOTOS_READ_ONLY

       • Type: bool

       • Default: false

   Advanced options
       Here are the Advanced options specific to google photos (Google Photos).

   –gphotos-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_GPHOTOS_TOKEN

       • Type: string

       • Required: false

   –gphotos-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_GPHOTOS_AUTH_URL

       • Type: string

       • Required: false

   –gphotos-token-url
       Token server url.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_GPHOTOS_TOKEN_URL

       • Type: string

       • Required: false

   –gphotos-read-size
       Set to read the size of media items.

       Normally rclone does not read the size of media items since this takes another transaction.   This  isn’t
       necessary  for syncing.  However rclone mount needs to know the size of files in advance of reading them,
       so setting this flag when using rclone mount is recommended if you want to read the media.

       Properties:

       • Config: read_size

       • Env Var: RCLONE_GPHOTOS_READ_SIZE

       • Type: bool

       • Default: false

   –gphotos-start-year
       Limits the photos to be downloaded to those uploaded after the given year.

       Properties:

       • Config: start_year

       • Env Var: RCLONE_GPHOTOS_START_YEAR

       • Type: int

       • Default: 2000

   –gphotos-include-archived
       Also view and download archived media.

       By default, rclone does not request archived media.  Thus, when syncing, archived media is not visible in
       directory listings or transferred.

       Note that media in albums is always visible and synced, no matter their archive status.

       With this flag, archived media are always visible in directory listings and transferred.

       Without this flag, archived media will not be visible in directory listings and won’t be transferred.

       Properties:

       • Config: include_archived

       • Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED

       • Type: bool

       • Default: false

   –gphotos-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_GPHOTOS_ENCODING

       • Type: MultiEncoder

       • Default: Slash,CrLf,InvalidUtf8,Dot

   Limitations
       Only images and videos can be uploaded.  If you attempt to upload non videos or images or formats that
       Google Photos doesn’t understand, rclone will upload the file, then Google Photos will give an error when
       it is turned into a media item.

       Note that all media items uploaded to Google Photos through the API are stored in full resolution at
       “original quality” and will count towards your storage quota in your Google Account.  The API does not
       offer a way to upload in “high quality” mode.

       rclone  about  is  not  supported  by the Google Photos backend.  Backends without this capability cannot
       determine free space for an rclone mount or use policy mfs (most free space) as a  member  of  an  rclone
       union remote.

       See the list of backends that do not support rclone about and the rclone about documentation.

   Downloading Images
       When images are downloaded, the EXIF location data is stripped (according to the docs and my tests).
       This is a limitation of the Google Photos API and is covered by bug #112096115.

       The current google API does not allow photos to be downloaded at original resolution.  This is very
       important if you are, for example, relying on “Google Photos” as a backup of your photos.  You will not
       be able to use rclone to redownload original images.  You could use Google Takeout to recover the
       original photos as a last resort.

   Downloading Videos
       When videos are downloaded they are downloaded as a heavily compressed version compared to the one
       available via the Google Photos web interface.  This is covered by bug #113672044.

   Duplicates
       If a file name is duplicated in a directory then rclone will add the file ID into its name.  So two files
       called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs  are  a  lot
       longer alas!).

       If  you  upload  the same image (with the same binary data) twice then Google Photos will deduplicate it.
       However it will retain the filename from the first upload which may confuse rclone.  For example  if  you
       uploaded  an  image to upload then uploaded the same image to album/my_album the filename of the image in
       album/my_album will be what it was uploaded with initially, not what you uploaded it with to  album.   In
       practice this shouldn’t cause too many problems.

   Modified time
       The  date  shown of media in Google Photos is the creation date as determined by the EXIF information, or
       the upload date if that is not known.

       This is not changeable by rclone and is not the modification date of the media on local disk.  This means
       that rclone cannot use the dates from Google Photos for syncing purposes.

   Size
       The Google Photos API does not return the size of media.  This means that when syncing to Google  Photos,
       rclone can only do a file existence check.

       It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so
       is  very slow and uses up a lot of transactions.  This can be enabled with the --gphotos-read-size option
       or the read_size = true config parameter.

       If you want to use the backend with rclone mount you may need to enable this flag (depending on  your  OS
       and application using the photos) otherwise you may not be able to read media off the mount.  You’ll need
       to experiment to see if it works for you without the flag.
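
       For example, a sketch of a mount with media sizes enabled (the mount point is a placeholder):

              rclone mount --gphotos-read-size remote: /path/to/mountpoint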

   Albums
       Rclone can only upload files to albums it created.  This is a limitation of the Google Photos API.

       Rclone can remove files it uploaded from albums it created only.

   Deleting files
       Rclone  can remove files from albums it created, but note that the Google Photos API does not allow media
       to be deleted permanently so this media will still remain.  See bug #109759781.

       Rclone cannot delete files anywhere except under album.

   Deleting albums
       The Google Photos API does not support deleting albums - see bug #135714733.

Hasher (EXPERIMENTAL)

       Hasher is a special overlay backend to create remotes which handle checksums for other remotes.  Its
       main functions include:

       • Emulate hash types unimplemented by backends

       • Cache checksums to help with slow hashing of large local or (S)FTP files

       • Warm up checksum cache from external SUM files

   Getting started
       To use Hasher, first set up the underlying remote  following  the  configuration  instructions  for  that
       remote.  You can also use a local pathname instead of a remote.  Check that your base remote is working.

       Let’s call the base remote myRemote:path here.  Note that anything inside myRemote:path will be handled
       by hasher and anything outside won’t.  This means that if you are using a bucket-based remote (S3, B2,
       Swift) then you should include the bucket in the remote, e.g. s3:bucket.

       Now proceed to interactive or manual configuration.

   Interactive configuration
       Run rclone config:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> Hasher1
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Handle checksums for other remotes
                 \ "hasher"
              [snip]
              Storage> hasher
              Remote to cache checksums for, like myremote:mypath.
              Enter a string value. Press Enter for the default ("").
              remote> myRemote:path
              Comma separated list of supported checksum types.
              Enter a string value. Press Enter for the default ("md5,sha1").
              hashes> md5
              Maximum time to keep checksums in cache. 0 = no cache, off = cache forever.
              max_age> off
              Edit advanced config? (y/n)
              y) Yes
              n) No
              y/n> n
              Remote config
              --------------------
              [Hasher1]
              type = hasher
              remote = myRemote:path
              hashes = md5
              max_age = off
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Manual configuration
       Run rclone config file to see the path of the current active config file, usually
       YOURHOME/.config/rclone/rclone.conf.  Open it in your favorite text editor, find the section for the
       base remote and create a new section for hasher like in the following examples:

              [Hasher1]
              type = hasher
              remote = myRemote:path
              hashes = md5
              max_age = off

              [Hasher2]
              type = hasher
              remote = /local/path
              hashes = dropbox,sha1
              max_age = 24h

       Hasher takes the following parameters:

       • remote - required.

       • hashes - a comma separated list of supported checksums (by default md5,sha1).

       • max_age - maximum time to keep a checksum value in the cache; 0 will disable caching completely, off
         will cache “forever” (that is, until the files get changed).

       Make sure the remote has a : (colon) in it.  If you specify the remote without a colon then rclone will
       use a local directory of that name.  So if you use a remote of /local/path then rclone will handle
       hashes for that directory.  If you use remote = name literally then rclone will put files in a
       directory called name located under the current directory.
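       For a one-off run you can also override a parameter with a connection string (using rclone’s general
       connection string syntax), e.g. to bypass the cache entirely:

              rclone md5sum "Hasher1,max_age=0:path"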

   Usage
   Basic operations
       Now you can use it as Hasher2:subdir/file instead of the base remote.  Hasher will transparently update
       the cache with new checksums when a file is fully read or overwritten, like:

              rclone copy External:path/file Hasher:dest/path

              rclone cat Hasher:path/to/file > /dev/null

       The way to refresh all cached checksums (even those unsupported by the base backend) for a subtree is
       to re-download all files in the subtree.  For example, use hashsum --download with any supported hash
       on the command line (the point is just to re-read the data):

              rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null

              rclone backend dump Hasher:path/to/subtree

       You can print or drop hashsum cache using custom backend commands:

              rclone backend dump Hasher:dir/subdir

              rclone backend drop Hasher:

   Pre-Seed from a SUM File
       Hasher  supports  two  backend  commands:  generic  SUM  file  import  and  faster  but  less  consistent
       stickyimport.

              rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]

       Instead of SHA1 it can be any hash supported by the remote.  The last argument can point to either a
       local or an other-remote:path text file in SUM format.  The command will parse the SUM file, then walk
       down the path given by the first argument, snapshot current fingerprints and fill in the cache entries
       correspondingly.

       • Paths in the SUM file are treated as relative to hasher:dir/subdir.

       • The command will not check that supplied values are correct.  You must know what you are doing.

       • This is a one-time action.  The SUM file will not get “attached” to the remote.  Cache entries can
         still be overwritten later, should the object’s fingerprint change.

       • The tree walk can take a long time depending on the tree size.  You can increase --checkers to make
         it faster.  Or use stickyimport if you don’t care about fingerprints and consistency.

              rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1

       stickyimport is similar to import but works much faster because it does not need to stat existing
       files and skips the initial tree walk.  Instead of binding cache entries to file fingerprints it
       creates sticky entries bound to the file name alone, ignoring size, modification time etc.  Such hash
       entries can be replaced only by purge, delete, backend drop or by a full re-read/re-write of the files.

   Configuration reference
   Standard options
       Here are the Standard options specific to hasher (Better checksums for other remotes).

   –hasher-remote
       Remote to cache checksums for (e.g. myRemote:path).

       Properties:

       • Config: remote

       • Env Var: RCLONE_HASHER_REMOTE

       • Type: string

       • Required: true

   –hasher-hashes
       Comma separated list of supported checksum types.

       Properties:

       • Config: hashes

       • Env Var: RCLONE_HASHER_HASHES

       • Type: CommaSepList

       • Default: md5,sha1

   –hasher-max-age
       Maximum time to keep checksums in cache (0 = no cache, off = cache forever).

       Properties:

       • Config: max_age

       • Env Var: RCLONE_HASHER_MAX_AGE

       • Type: Duration

       • Default: off

   Advanced options
       Here are the Advanced options specific to hasher (Better checksums for other remotes).

   –hasher-auto-size
       Auto-update checksum for files smaller than this size (disabled by default).

       Properties:

       • Config: auto_size

       • Env Var: RCLONE_HASHER_AUTO_SIZE

       • Type: SizeSuffix

       • Default: 0
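       For example, a sketch using the corresponding flag to let small files be re-read and cached
       automatically (the 64M threshold is illustrative):

              rclone hashsum SHA1 --hasher-auto-size 64M Hasher2: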

   Metadata
       Any metadata supported by the underlying remote is read and written.

       See the metadata docs for more info.

   Backend commands
       Here are the commands specific to the hasher backend.

       Run them with

              rclone backend COMMAND remote:

       The help below will explain what arguments each command takes.

       See the backend command for more info on how to pass options and arguments.

       These can be run on a running backend using the rc command backend/command.

   drop
       Drop cache

              rclone backend drop remote: [options] [<arguments>+]

       Completely drop checksum cache.  Usage Example: rclone backend drop hasher:

   dump
       Dump the database

              rclone backend dump remote: [options] [<arguments>+]

       Dump cache records covered by the current remote

   fulldump
       Full dump of the database

              rclone backend fulldump remote: [options] [<arguments>+]

       Dump all cache records in the database

   import
       Import a SUM file

              rclone backend import remote: [options] [<arguments>+]

       Amend hash cache from a SUM file and bind checksums to files by size/time.  Usage Example: rclone backend
       import hasher:subdir md5 /path/to/sum.md5

   stickyimport
       Perform fast import of a SUM file

              rclone backend stickyimport remote: [options] [<arguments>+]

       Fill  hash  cache  from  a  SUM  file without verifying file fingerprints.  Usage Example: rclone backend
       stickyimport hasher:subdir md5 remote:path/to/sum.md5

   Implementation details (advanced)
       This section explains how various rclone operations work on a hasher remote.

       Disclaimer: This section describes the current implementation, which can change in future rclone
       versions!

   Hashsum command
       The rclone hashsum (or md5sum or sha1sum) command will:

       1. if the requested hash is supported by the lower level, just pass it through.

       2. if the object size is below auto_size then download the object and calculate the requested hashes
          on the fly.

       3. if unsupported and the size is big enough, build the object fingerprint (including size, modtime if
          supported, and the first-found other hash if any).

       4. if a strict match is found in the cache for the requested remote, return the stored hash.

       5. if the remote is found but the fingerprint mismatches, then purge the entry and proceed to step 6.

       6. if the remote is not found, or had no hash of the requested type, or after step 5: download the
          object, calculate all supported hashes on the fly and store them in the cache; return the requested
          hash.

   Other operations
       • whenever a file is uploaded or downloaded in full, capture the stream to calculate all supported hashes
         on the fly and update database

       • server-side move will update keys of existing cache entries

       • deletefile will remove a single cache entry

       • purge will remove all cache entries under the purged path

       Note that setting max_age = 0 will disable checksum caching completely.

       If you set max_age = off, checksums in cache will never age, unless you fully rewrite or delete the file.

   Cache storage
       Cached checksums are stored as bolt database files under the rclone cache directory, usually
       ~/.cache/rclone/kv/.  Databases are maintained one per base backend, named like BaseRemote~hasher.bolt.
       Checksums for multiple aliases into a single base backend will be stored in a single database.  All
       local paths are treated as aliases into the local backend (unless crypted or chunked) and stored in
       ~/.cache/rclone/kv/local~hasher.bolt.  Databases can be shared between multiple rclone processes.
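       For example, assuming the default cache directory, you can list the databases with:

              ls ~/.cache/rclone/kv/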

HDFS

       HDFS (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
       distributed file-system, part of the Apache Hadoop framework.

       Paths are specified as remote: or remote:path/to/dir.

   Configuration
       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [skip]
              XX / Hadoop distributed file system
                 \ "hdfs"
              [skip]
              Storage> hdfs
              ** See help for hdfs backend at: https://rclone.org/hdfs/ **

              hadoop name node and port
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Connect to host namenode at port 8020
                 \ "namenode:8020"
              namenode> namenode.hadoop:8020
              hadoop user name
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Connect to hdfs as root
                 \ "root"
              username> root
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              --------------------
              [remote]
              type = hdfs
              namenode = namenode.hadoop:8020
              username = root
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y
              Current remotes:

              Name                 Type
              ====                 ====
              remote               hdfs

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> q

       This remote is called remote and can now be used like this

       See all the top level directories

              rclone lsd remote:

       List the contents of a directory

              rclone ls remote:directory

       Sync the remote directory to /home/local/directory, deleting any excess files.

              rclone sync -i remote:directory /home/local/directory

   Setting up your own HDFS instance for testing
       You may start with a manual setup (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-
       common/SingleCluster.html) or use the docker image from the tests:

       If you want to build the docker image

              git clone https://github.com/rclone/rclone.git
              cd rclone/fstest/testserver/images/test-hdfs
              docker build --rm -t rclone/test-hdfs .

       Or you can just use the latest one pushed

              docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs

       NB it needs a few seconds to start up.

       For this docker image the remote needs to be configured like this:

              [remote]
              type = hdfs
              namenode = 127.0.0.1:8020
              username = root

       You can stop this image with docker kill rclone-hdfs (NB it does not use volumes, so  all  data  uploaded
       will be lost.)

   Modified time
       Time accurate to 1 second is stored.

   Checksum
       No checksums are implemented.

   Usage information
       You can use the rclone about remote: command which will display filesystem size and current usage.

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

        Character   Value   Replacement
        ────────────────────────────────
        :           0x3A        ：

       Invalid UTF-8 bytes will also be replaced.

   Standard options
       Here are the Standard options specific to hdfs (Hadoop distributed file system).

   –hdfs-namenode
       Hadoop name node and port.

       E.g.  “namenode:8020” to connect to host namenode at port 8020.

       Properties:

       • Config: namenode

       • Env Var: RCLONE_HDFS_NAMENODE

       • Type: string

       • Required: true

   –hdfs-username
       Hadoop user name.

       Properties:

       • Config: username

       • Env Var: RCLONE_HDFS_USERNAME

       • Type: string

       • Required: false

       • Examples:

         • “root”

           • Connect to hdfs as root.

   Advanced options
       Here are the Advanced options specific to hdfs (Hadoop distributed file system).

   –hdfs-service-principal-name
       Kerberos service principal name for the namenode.

       Enables  KERBEROS  authentication.  Specifies the Service Principal Name (SERVICE/FQDN) for the namenode.
       E.g.    "hdfs/namenode.hadoop.docker"   for   namenode   running   as   service    `hdfs'    with    FQDN
       `namenode.hadoop.docker'.

       Properties:

       • Config: service_principal_name

       • Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME

       • Type: string

       • Required: false

   –hdfs-data-transfer-protection
       Kerberos data transfer protection: authentication|integrity|privacy.

       Specifies whether or not authentication, data signature integrity checks, and wire encryption are
       required when communicating with the datanodes.  Possible values are `authentication', `integrity' and
       `privacy'.  Used only with KERBEROS enabled.

       Properties:

       • Config: data_transfer_protection

       • Env Var: RCLONE_HDFS_DATA_TRANSFER_PROTECTION

       • Type: string

       • Required: false

       • Examples:

         • “privacy”

           • Ensure authentication, integrity and encryption enabled.
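       As a sketch, a config section combining the two Kerberos options above might look like this (the
       hostnames and the principal are illustrative):

              [remote]
              type = hdfs
              namenode = namenode.hadoop:8020
              username = root
              service_principal_name = hdfs/namenode.hadoop.docker
              data_transfer_protection = privacy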

   –hdfs-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_HDFS_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot

   Limitations
       • No server-side Move or DirMove.

       • Checksums not implemented.

HiDrive

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

       The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser.
       rclone config walks you through it.

   Configuration
       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found - make a new one
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / HiDrive
                 \ "hidrive"
              [snip]
              Storage> hidrive
              OAuth Client Id - Leave blank normally.
              client_id>
              OAuth Client Secret - Leave blank normally.
              client_secret>
              Access permissions that rclone should use when requesting access from HiDrive.
              Leave blank normally.
              scope_access>
              Edit advanced config?
              y/n> n
              Use auto config?
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              type = hidrive
              token = {"access_token":"xxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxx","expiry":"xxxxxxxxxxxxxxxxxxxxxxx"}
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       You should be aware that OAuth tokens can be used to access your account and hence should not be
       shared with other persons.  See the below section for more information.

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Note  that  rclone  runs a webserver on your local machine to collect the token as returned from HiDrive.
       This only runs from the moment it opens your browser to the moment you get back  the  verification  code.
       The  webserver  runs  on http://127.0.0.1:53682/.  If local port 53682 is protected by a firewall you may
       need to temporarily unblock the firewall to complete authorization.

       Once configured you can then use rclone like this,

       List directories in top level of your HiDrive root folder

              rclone lsd remote:

       List all the files in your HiDrive filesystem

              rclone ls remote:

       To copy a local directory to a HiDrive directory called backup

              rclone copy /home/source remote:backup

   Keeping your tokens safe
       Any OAuth-tokens will be stored by rclone in the remote’s configuration file as unencrypted text.  Anyone
       can use a valid refresh-token to access your HiDrive filesystem without knowing your password.  Therefore
       you should make sure no one else can access your configuration.

       It is possible to encrypt rclone’s configuration  file.   You  can  find  information  on  securing  your
       configuration file by viewing the configuration encryption docs.

   Invalid refresh token
       As can be verified here, each refresh_token (for Native Applications) is valid for 60 days.  If used
       to access HiDrive, its validity will be automatically extended.

       This means that if you don’t use the HiDrive remote for 60 days, rclone will return an error which
       includes a text that implies the refresh token is invalid or expired.

       To fix this you will need to authorize rclone to access your HiDrive account again.

       Using

              rclone config reconnect remote:

       the process is very similar to the process of initial setup exemplified before.

   Modified time and hashes
       HiDrive allows modification times to be set on objects accurate to 1 second.

       HiDrive  supports  its own hash type which  is  used  to  verify  the  integrity  of  file contents after
       successful transfers.

   Restricted filename characters
       HiDrive cannot store files or folders that include / (0x2F) or null-bytes  (0x00)  in  their  name.   Any
       other  characters can be used in the names of files or folders.  Additionally, files or folders cannot be
       named either of the following: . or ..

       Therefore rclone will automatically replace these characters, if files or folders are stored or  accessed
       with such names.

       You can read about how this filename encoding works in general here.

       Keep in mind that HiDrive only supports file or folder names with a length of 255 characters or less.

   Transfers
       HiDrive limits file sizes per single request to a maximum of 2 GiB.  To allow storage of larger files and
       allow  for  better  upload  performance, the hidrive backend will use a chunked transfer for files larger
       than 96 MiB.  Rclone will upload multiple parts/chunks of the file at  the  same  time.   Chunks  in  the
       process  of  being uploaded are buffered in memory, so you may want to restrict this behaviour on systems
       with limited resources.

       You can customize this behaviour using the following options:

       • chunk_size: size of file parts

       • upload_cutoff: files larger or equal to this in size will use a chunked transfer

       • upload_concurrency: number of file-parts to upload at the same time

       See the below section about configuration options for more details.
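       For example, a sketch tuning these options on the command line (the values are illustrative, not
       recommendations):

              rclone copy /home/source remote:backup --hidrive-upload-cutoff 128M --hidrive-chunk-size 128M --hidrive-upload-concurrency 2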

   Root folder
       You can set the root folder for rclone.  This is the directory that rclone considers to be  the  root  of
       your HiDrive.

       Usually, you will leave this blank, and rclone will use the root of the account.

       However, you can set this to restrict rclone to a specific folder hierarchy.

       This  works  by  prepending  the contents of the root_prefix option to any paths accessed by rclone.  For
       example, the following two ways to access the home directory are equivalent:

              rclone lsd --hidrive-root-prefix="/users/test/" remote:path

              rclone lsd remote:/users/test/path

       See the below section about configuration options for more details.

   Directory member count
       By default, rclone will know the number of directory members contained  in  a  directory.   For  example,
       rclone lsd uses this information.

       The acquisition of this information will result in additional time costs for HiDrive’s API.  When dealing
       with  large  directory structures, it may be desirable to circumvent this time cost, especially when this
       information is not explicitly needed.  For this, the disable_fetching_member_count option can be used.

       See the below section about configuration options for more details.
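       For example, a sketch of a listing that skips fetching member counts:

              rclone lsd --hidrive-disable-fetching-member-count remote: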

   Standard options
       Here are the Standard options specific to hidrive (HiDrive).

   –hidrive-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_HIDRIVE_CLIENT_ID

       • Type: string

       • Required: false

   –hidrive-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_HIDRIVE_CLIENT_SECRET

       • Type: string

       • Required: false

   –hidrive-scope-access
       Access permissions that rclone should use when requesting access from HiDrive.

       Properties:

       • Config: scope_access

       • Env Var: RCLONE_HIDRIVE_SCOPE_ACCESS

       • Type: string

       • Default: “rw”

       • Examples:

         • “rw”

           • Read and write access to resources.

         • “ro”

           • Read-only access to resources.

   Advanced options
       Here are the Advanced options specific to hidrive (HiDrive).

   –hidrive-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_HIDRIVE_TOKEN

       • Type: string

       • Required: false

   –hidrive-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_HIDRIVE_AUTH_URL

       • Type: string

       • Required: false

   –hidrive-token-url
       Token server url.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_HIDRIVE_TOKEN_URL

       • Type: string

       • Required: false

   –hidrive-scope-role
       User-level that rclone should use when requesting access from HiDrive.

       Properties:

       • Config: scope_role

       • Env Var: RCLONE_HIDRIVE_SCOPE_ROLE

       • Type: string

       • Default: “user”

       • Examples:

         • “user”

           • User-level access to management permissions.

           • This will be sufficient in most cases.

         • “admin”

           • Extensive access to management permissions.

         • “owner”

           • Full access to management permissions.

   –hidrive-root-prefix
       The root/parent folder for all paths.

       Fill in to use the specified folder as the parent for all paths given to the remote.  This way rclone can
       use any folder as its starting point.

       Properties:

       • Config: root_prefix

       • Env Var: RCLONE_HIDRIVE_ROOT_PREFIX

       • Type: string

       • Default: “/”

       • Examples:

         • “/”

           • The topmost directory accessible by rclone.

            • This will be equivalent to “root” if rclone uses a regular HiDrive user account.

         • “root”

           • The topmost directory of the HiDrive user account

         • “”

           • This specifies that there is no root-prefix for your paths.

           • When using this you will always  need  to  specify  paths  to  this  remote  with  a  valid  parent
             e.g. “remote:/path/to/dir” or “remote:root/path/to/dir”.

   –hidrive-endpoint
       Endpoint for the service.

       This is the URL that API-calls will be made to.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_HIDRIVE_ENDPOINT

       • Type: string

       • Default: “https://api.hidrive.strato.com/2.1”

   –hidrive-disable-fetching-member-count
       Do not fetch number of objects in directories unless it is absolutely necessary.

       Requests may be faster if the number of objects in subdirectories is not fetched.

       Properties:

       • Config: disable_fetching_member_count

       • Env Var: RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT

       • Type: bool

       • Default: false

   –hidrive-chunk-size
       Chunksize for chunked uploads.

       Any files larger than the configured cutoff (or files of unknown size) will be uploaded in chunks of this
       size.

       The  upper  limit  for  this  is 2147483647 bytes (about 2.000Gi).  That is the maximum amount of bytes a
       single upload-operation will support.  Setting this above the upper limit or to  a  negative  value  will
       cause uploads to fail.

       Setting this to larger values may increase the upload speed at the cost of using more memory.  It can
       be set to smaller values to save on memory.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_HIDRIVE_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 48Mi

   –hidrive-upload-cutoff
       Cutoff/Threshold for chunked uploads.

       Any files larger than this will be uploaded in chunks of the configured chunksize.

       The upper limit for this is 2147483647 bytes (about 2.000Gi).  That is the  maximum  amount  of  bytes  a
       single upload-operation will support.  Setting this above the upper limit will cause uploads to fail.

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_HIDRIVE_UPLOAD_CUTOFF

       • Type: SizeSuffix

       • Default: 96Mi

   –hidrive-upload-concurrency
       Concurrency for chunked uploads.

       This is the upper limit for how many transfers for the same file are running concurrently.  Setting
       this to a value smaller than 1 will cause uploads to deadlock.

       If you are uploading small numbers of large files over high-speed links and these uploads  do  not  fully
       utilize your bandwidth, then increasing this may help to speed up the transfers.

       Properties:

       • Config: upload_concurrency

       • Env Var: RCLONE_HIDRIVE_UPLOAD_CONCURRENCY

       • Type: int

       • Default: 4

   –hidrive-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_HIDRIVE_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Dot

   Limitations
   Symbolic links
       HiDrive  is  able  to  store  symbolic  links (symlinks) by design, for example, when unpacked from a zip
       archive.

       There exists no direct mechanism to manage native symlinks in remotes.  As such this implementation
       has chosen to ignore any native symlinks present in the remote.  rclone will not be able to access or
       show any symlinks stored in the HiDrive remote.  This means symlinks cannot be individually removed,
       copied, or moved, except when removing, copying, or moving the parent folder.

       This does not affect the .rclonelink-files that rclone uses to encode and store symbolic links.

   Sparse files
       It is possible to store sparse files in HiDrive.

       Note  that  copying  a  sparse  file  will  expand the holes into null-byte (0x00) regions that will then
       consume disk space.  Likewise, when downloading a sparse file, the resulting  file  will  have  null-byte
       regions in the place of file holes.

HTTP

       The HTTP remote is a read-only remote for reading files off a webserver.  The webserver should provide
       file listings which rclone will read and turn into a remote.  This has been tested with common
       webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers.
       (If it doesn’t then please file an issue, or send a pull request!)

       Paths are specified as remote: or remote:path.

       The remote: represents the configured url, and any path following it will be resolved  relative  to  this
       url,  according  to the URL standard.  This means with remote url https://beta.rclone.org/branch and path
       fix, the resolved URL will be https://beta.rclone.org/branch/fix, while with path /fix the  resolved  URL
       will be https://beta.rclone.org/fix as the absolute path is resolved from the root of the domain.

       If the path following the remote: ends with / it will be assumed to point to a directory.  If the path
       does not end with /, then a HEAD request is sent and the response used to decide if it is treated as a
       file or a directory (run with -vv to see details).  When --http-no-head is specified, a path without a
       trailing / is always assumed to be a file.  If rclone incorrectly assumes the path is a file, the
       solution is to specify the path with a trailing /.  When you know the path is a directory, ending it
       with / is always better as it avoids the initial HEAD request.

       To just download a single file it is easier to use copyurl.

   Configuration
       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / HTTP
                 \ "http"
              [snip]
              Storage> http
              URL of http host to connect to
              Choose a number from below, or type in your own value
               1 / Connect to example.com
                 \ "https://example.com"
              url> https://beta.rclone.org
              Remote config
              --------------------
              [remote]
              url = https://beta.rclone.org
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y
              Current remotes:

              Name                 Type
              ====                 ====
              remote               http

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> q

       This remote is called remote and can now be used like this

       See all the top level directories

              rclone lsd remote:

       List the contents of a directory

              rclone ls remote:directory

       Sync the remote directory to /home/local/directory, deleting any excess files.

              rclone sync -i remote:directory /home/local/directory

   Read only
       This remote is read only - you can’t upload files to an HTTP server.

   Modified time
       Most HTTP servers store time accurate to 1 second.

   Checksum
       No checksums are stored.

   Usage without a config file
       Since the http remote only has one config parameter it is easy to use without a config file:

              rclone lsd --http-url https://beta.rclone.org :http:

       or:

              rclone lsd :http,url='https://beta.rclone.org':

   Standard options
       Here are the Standard options specific to http (HTTP).

   –http-url
       URL of HTTP host to connect to.

       E.g.  “https://example.com”, or “https://user:pass@example.com” to use a username and password.

       Properties:

       • Config: url

       • Env Var: RCLONE_HTTP_URL

       • Type: string

       • Required: true

   Advanced options
       Here are the Advanced options specific to http (HTTP).

   –http-headers
       Set HTTP headers for all transactions.

       Use this to set additional HTTP headers for all transactions.

       The input format is comma separated list of key,value pairs.  Standard CSV encoding may be used.

       For example, to set a Cookie use `Cookie,name=value', or `“Cookie”,“name=value”'.

       You can set multiple headers, e.g. `“Cookie”,“name=value”,“Authorization”,“xxx”'.
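       For example, a sketch passing a cookie for a listing (the header name and value are illustrative):

              rclone lsd --http-headers "Cookie,name=value" --http-url https://example.com :http: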

       Properties:

       • Config: headers

       • Env Var: RCLONE_HTTP_HEADERS

       • Type: CommaSepList

       • Default:

   –http-no-slash
       Set this if the site doesn’t end directories with /.

       Use this if your target website does not use / on the end of directories.

       A  /  on the end of a path is how rclone normally tells the difference between files and directories.  If
       this flag is set, then rclone will treat all files with Content-Type: text/html as directories  and  read
       URLs from them rather than downloading them.

       Note that this may cause rclone to confuse genuine HTML files with directories.

       Properties:

       • Config: no_slash

       • Env Var: RCLONE_HTTP_NO_SLASH

       • Type: bool

       • Default: false

   –http-no-head
       Don’t use HEAD requests.

       HEAD requests are mainly used to find file sizes in dir listing.  If your site is being very slow to load
       then you can try this option.  Normally rclone does a HEAD request for each potential file in a directory
       listing to:

       • find its size

       • check it really exists

       • check to see if it is a directory

       If  you set this option, rclone will not do the HEAD request.  This will mean that directory listings are
       much quicker, but rclone won’t have the times or sizes of any files, and some files that don’t exist  may
       be in the listing.
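       For example, a sketch of a faster listing with HEAD requests disabled:

              rclone lsd --http-no-head --http-url https://example.com :http: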

       Properties:

       • Config: no_head

       • Env Var: RCLONE_HTTP_NO_HEAD

       • Type: bool

       • Default: false

   Limitations
       rclone  about  is  not  supported by the HTTP backend.  Backends without this capability cannot determine
       free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

       See List of backends that do not support rclone about and rclone about

Internet Archive

       The Internet Archive backend utilizes Items on archive.org.

       Refer to IAS3 API documentation for the API this backend uses.

       Paths are specified as remote:bucket (or remote: for the lsd command.)  You  may  put  subdirectories  in
       too, e.g. remote:item/path/to/dir.

       Unlike S3, listing all the items you have uploaded is not supported.

       Once you have made a remote, you can use it like this:

       Make a new item

              rclone mkdir remote:item

       List the contents of an item

              rclone ls remote:item

       Sync /home/local/directory to the remote item, deleting any excess files in the item.

              rclone sync -i /home/local/directory remote:item

   Notes
       Because of Internet Archive’s architecture, it enqueues write operations (and extra post-processing)
       in a per-item queue.  You can check an item’s queue at https://catalogd.archive.org/history/item-name-here .
       Because of that, uploads and deletes will not show up immediately and take some time to become
       available.  The per-item queue is in turn enqueued to another queue, the Item Deriver Queue.  You can
       check the status of the Item Deriver Queue here.  This queue has a limit, and it may block you from
       uploading, or even deleting.  You should avoid uploading a lot of small files for better behavior.

       You can optionally wait for the server’s processing to finish, by setting a non-zero value for the
       wait_archive key.  By making it wait, rclone can do normal file comparison.  Make sure to set a large
       enough value (e.g. 30m0s for smaller files) as it can take a long time depending on the server’s
       queue.
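       For example, a sketch that waits up to 30 minutes for server-side processing to finish after a sync:

              rclone sync -i /home/local/directory remote:item --internetarchive-wait-archive 30m0s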

   About metadata
       This  backend  supports setting, updating and reading metadata of each file.  The metadata will appear as
       file metadata on Internet Archive.  However, some fields  are  reserved  by  both  Internet  Archive  and
       rclone.

       The following are reserved by Internet Archive: name, source, size, md5, crc32, sha1, format,
       old_version, viruscheck, summation.

       Trying to set values for these keys is ignored with a warning.  The only exception is mtime: setting
       it behaves identically to setting ModTime.

       rclone reserves all the keys starting with rclone-.  Setting a value for these keys will give you
       warnings, but the values are set as requested.

       If there are multiple values for a key, only the first one is returned.  This is a limitation of
       rclone, which supports one value per key.  It can be triggered when you do a server-side copy.

       Reading metadata will also return custom keys (those that are neither standard nor reserved).
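       As a sketch, assuming rclone’s global --metadata and --metadata-set flags, a custom key could be
       attached on upload like this (the key my-key is illustrative):

              rclone copy file.txt remote:item --metadata --metadata-set "my-key=value"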

   Configuration
       Here is an example of making an internetarchive configuration.

       First run

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              XX / InternetArchive Items
                 \ (internetarchive)
              Storage> internetarchive
              Option access_key_id.
              IAS3 Access Key.
              Leave blank for anonymous access.
              You can find one here: https://archive.org/account/s3.php
              Enter a value. Press Enter to leave empty.
              access_key_id> XXXX
              Option secret_access_key.
              IAS3 Secret Key (password).
              Leave blank for anonymous access.
              Enter a value. Press Enter to leave empty.
              secret_access_key> XXXX
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> y
              Option endpoint.
              IAS3 Endpoint.
              Leave blank for default value.
              Enter a string value. Press Enter for the default (https://s3.us.archive.org).
              endpoint>
              Option front_endpoint.
              Host of InternetArchive Frontend.
              Leave blank for default value.
              Enter a string value. Press Enter for the default (https://archive.org).
              front_endpoint>
              Option disable_checksum.
              Don't store MD5 checksum with object metadata.
              Normally rclone will calculate the MD5 checksum of the input before
              uploading it so it can ask the server to check the object against checksum.
              This is great for data integrity checking but can cause long delays for
              large files to start uploading.
              Enter a boolean value (true or false). Press Enter for the default (true).
              disable_checksum> true
              Option encoding.
              The encoding for the backend.
              See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
              Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot).
              encoding>
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n
              --------------------
              [remote]
              type = internetarchive
              access_key_id = XXXX
              secret_access_key = XXXX
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Standard options
       Here are the Standard options specific to internetarchive (Internet Archive).

   –internetarchive-access-key-id
       IAS3 Access Key.

       Leave blank for anonymous access.  You can find one here: https://archive.org/account/s3.php

       Properties:

       • Config: access_key_id

       • Env Var: RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID

       • Type: string

       • Required: false

   –internetarchive-secret-access-key
       IAS3 Secret Key (password).

       Leave blank for anonymous access.

       Properties:

       • Config: secret_access_key

       • Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to internetarchive (Internet Archive).

   –internetarchive-endpoint
       IAS3 Endpoint.

       Leave blank for default value.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_INTERNETARCHIVE_ENDPOINT

       • Type: string

       • Default: “https://s3.us.archive.org”

   –internetarchive-front-endpoint
       Host of InternetArchive Frontend.

       Leave blank for default value.

       Properties:

       • Config: front_endpoint

       • Env Var: RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT

       • Type: string

       • Default: “https://archive.org”

   –internetarchive-disable-checksum
       Don’t ask the server to test against the MD5 checksum calculated by rclone.  Normally rclone will
       calculate the MD5 checksum of the input before uploading it so it can ask the server to check the
       object against that checksum.  This is great for data integrity checking but can cause long delays for
       large files to start uploading.

       Properties:

       • Config: disable_checksum

       • Env Var: RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM

       • Type: bool

       • Default: true

   –internetarchive-wait-archive
       Timeout for waiting for the server’s processing tasks (specifically archive and book_op) to finish.
       Only enable this if you need writes to be reflected by the time the command returns.  Set to 0 to
       disable waiting.  No error is thrown in case of timeout.

       Properties:

       • Config: wait_archive

       • Env Var: RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE

       • Type: Duration

       • Default: 0s

   –internetarchive-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_INTERNETARCHIVE_ENCODING

       • Type: MultiEncoder

       • Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot

   Metadata
       Metadata fields provided by Internet Archive.  If there are multiple values for a key, only the first
       one is returned.  This is a limitation of rclone, which supports one value per key.

       The item owner is able to add custom keys.  The metadata feature grabs all the keys, including custom
       ones.

       Here are the possible system metadata items for the internetarchive backend.

       Name                  Help                               Type          Example                                      Read Only
       ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       crc32                 CRC32                              string        01234567                                     Y
                             calculated
                             by Internet
                             Archive
       format                Name     of                        string        Comma-Separated                              Y
                             format                                           Values
                             identified
                             by Internet
                             Archive
       md5                   MD5    hash                        string        01234567012345670123456701234567             Y
                             calculated
                             by Internet
                             Archive
       mtime                 Time     of                        RFC 3339      2006-01-02T15:04:05.999999999Z               Y
                             last
                             modification,
                             managed  by
                             Rclone
       name                  Full     file                      filename      backend/internetarchive/internetarchive.go   Y
                             path, without
                             the    bucket
                             part
       old_version           Whether   the                      boolean       true                                         Y
                             file      was
                             replaced  and
                             moved      by
                             keep-old-version
                             flag
       rclone-ia-mtime       Time   of   last                   RFC 3339      2006-01-02T15:04:05.999999999Z               N
                             modification,
                             managed       by
                             Internet Archive
       rclone-mtime          Time   of   last                   RFC 3339      2006-01-02T15:04:05.999999999Z               N
                             modification,
                             managed       by
                             Rclone
       rclone-update-track   Random     value                   string        aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa             N
                             used  by  Rclone
                             for     tracking
                             changes   inside
                             Internet Archive
       sha1                  SHA1        hash                   string        0123456701234567012345670123456701234567     Y
                             calculated    by
                             Internet Archive
       size                  File   size   in                   decimal       123456                                       Y
                             bytes                              number
       source                The  source   of                   string        original                                     Y
                             the file
       summation             Check                              string        md5                                          Y
                             https://forum.rclone.org/t/31922
                             for  how  it  is
                             used
       viruscheck            The last time viruscheck process   unixtime      1654191352                                   Y
                             was run for the file (?)

       See the metadata docs for more info.

Jottacloud

       Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in
       Norway.  In addition to the official service at jottacloud.com, it also provides white-label solutions
       to different companies, such as:

       • Telia

         • Telia Cloud (cloud.telia.se)

         • Telia Sky (sky.telia.no)

       • Tele2

         • Tele2 Cloud (mittcloud.tele2.se)

       • Elkjøp (with subsidiaries):

         • Elkjøp Cloud (cloud.elkjop.no)

         • Elgiganten Sweden (cloud.elgiganten.se)

         • Elgiganten Denmark (cloud.elgiganten.dk)

         • Giganti Cloud (cloud.gigantti.fi)

         • ELKO Cloud (cloud.elko.is)

       Most of the white-label versions are supported by this backend, although they may require a different
       authentication setup - described below.

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Authentication types
       Some of the whitelabel versions use a different authentication method than the official service, and
       you have to choose the correct one when setting up the remote.

   Standard authentication
       The standard authentication method used by the official service (jottacloud.com), as well as some of  the
       whitelabel services, requires you to generate a single-use personal login token from the account security
       settings  in  the service’s web interface.  Log in to your account, go to “Settings” and then “Security”,
       or   use   the   direct   link   presented   to   you   by   rclone   when   configuring   the    remote:
       https://www.jottacloud.com/web/secure.   Scroll down to the section “Personal login token”, and click the
       “Generate” button.  Note that if you are using a whitelabel service you probably  can’t  use  the  direct
       link,  you need to find the same page in their dedicated web interface, and also it may be in a different
       location than described above.

       To access your account from multiple instances of rclone, you need to configure each of them with a
       separate personal login token.  E.g. you create a Jottacloud remote with rclone in one location, and
       copy the configuration file to a second location where you also want to run rclone and access the same
       remote.  Then you need to replace the token for one of them, using the config reconnect command, which
       requires you to generate a new personal login token and supply it as input.  If you do not do this,
       the token may easily end up being invalidated, resulting in both instances failing with an error
       message something along the lines of:

              oauth2: cannot fetch token: 400 Bad Request
              Response: {"error":"invalid_grant","error_description":"Stale token"}

       When this happens, you need to replace the token as described above to be able to use your remote again.

       All personal login tokens you have taken into use will be listed in the web interface under “My logged in
       devices”, and from the right side of that list you can click the “X” button to revoke individual tokens.

   Legacy authentication
       If you are using one of the whitelabel versions (e.g. from  Elkjøp)  you  may  not  have  the  option  to
       generate  a CLI token.  In this case you’ll have to use the legacy authentication.  To do this select yes
       when the setup asks for legacy authentication and enter your username and  password.   The  rest  of  the
       setup is identical to the default setup.

   Telia Cloud authentication
       Similar  to  other  whitelabel versions Telia Cloud doesn’t offer the option of creating a CLI token, and
       additionally uses a separate authentication flow where the username is generated  internally.   To  setup
       rclone  to  use  Telia  Cloud,  choose Telia Cloud authentication in the setup.  The rest of the setup is
       identical to the default setup.

   Tele2 Cloud authentication
       Since the Tele2-Com Hem merger was completed, this authentication method can be used by former Com Hem
       Cloud and Tele2 Cloud customers, as no support for creating a CLI token exists.  It additionally uses
       a separate authentication flow where the username is generated internally.  To set up rclone to use
       Tele2 Cloud, choose Tele2 Cloud authentication in the setup.  The rest of the setup is identical to
       the default setup.

   Configuration
       Here is an example of how to make a remote called remote with the default setup.  First run:

              rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
              XX / Jottacloud
                 \ (jottacloud)
              [snip]
              Storage> jottacloud
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n
              Option config_type.
              Select authentication type.
              Choose a number from below, or type in an existing string value.
              Press Enter for the default (standard).
                 / Standard authentication.
               1 | Use this if you're a normal Jottacloud user.
                 \ (standard)
                 / Legacy authentication.
               2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.
                 \ (legacy)
                 / Telia Cloud authentication.
               3 | Use this if you are using Telia Cloud.
                 \ (telia)
                 / Tele2 Cloud authentication.
               4 | Use this if you are using Tele2 Cloud.
                 \ (tele2)
              config_type> 1
              Personal login token.
              Generate here: https://www.jottacloud.com/web/secure
              Login Token> <your token here>
              Use a non-standard device/mountpoint?
              Choosing no, the default, will let you access the storage used for the archive
              section of the official Jottacloud client. If you instead want to access the
              sync or the backup section, for example, you must choose yes.
              y) Yes
              n) No (default)
              y/n> y
              Option config_device.
              The device to use. In standard setup the built-in Jotta device is used,
              which contains predefined mountpoints for archive, sync etc. All other devices
              are treated as backup devices by the official Jottacloud client. You may create
              a new one by entering a unique name.
              Choose a number from below, or type in your own string value.
              Press Enter for the default (DESKTOP-3H31129).
               1 > DESKTOP-3H31129
               2 > Jotta
              config_device> 2
              Option config_mountpoint.
              The mountpoint to use for the built-in device Jotta.
              The standard setup is to use the Archive mountpoint. Most other mountpoints
              have very limited support in rclone and should generally be avoided.
              Choose a number from below, or type in an existing string value.
              Press Enter for the default (Archive).
               1 > Archive
               2 > Shared
               3 > Sync
              config_mountpoint> 1
              --------------------
              [remote]
              type = jottacloud
              configVersion = 1
              client_id = jottacli
              client_secret =
              tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token
              token = {........}
              username = 2940e57271a93d987d6f8a21
              device = Jotta
              mountpoint = Archive
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Once configured you can then use rclone like this,

       List directories in top level of your Jottacloud

              rclone lsd remote:

       List all the files in your Jottacloud

              rclone ls remote:

       To copy a local directory to a Jottacloud directory called backup

              rclone copy /home/source remote:backup

   Devices and Mountpoints
       The  official Jottacloud client registers a device for each computer you install it on, and shows them in
       the backup section of the user interface.  For each folder  you  select  for  backup  it  will  create  a
       mountpoint  within  this  device.   A  built-in  device called Jotta is special, and contains mountpoints
       Archive, Sync and some others, used for corresponding features in official clients.

       With rclone you’ll want to use the standard Jotta/Archive device/mountpoint in most cases.  However,  you
       may  for  example  want  to  access  files from the sync or backup functionality provided by the official
       clients, and rclone therefore provides the option to select other devices and mountpoints during config.

       You are allowed to create new devices and mountpoints.  All devices except the built-in Jotta device  are
       treated  as  backup  devices  by  official Jottacloud clients, and the mountpoints on them are individual
       backup sets.

       With the built-in Jotta device, only existing, built-in, mountpoints can be selected.  In addition to the
       mentioned Archive and Sync, it may contain several other mountpoints such as: Latest, Links,  Shared  and
       Trash.   All of these are special mountpoints with a different internal representation than the “regular”
       mountpoints.  Rclone supports them only to a very limited degree.  Generally you should avoid these,
       unless you know what you are doing.

    --fast-list
       This  remote supports --fast-list which allows you to use fewer transactions in exchange for more memory.
       See the rclone docs for more details.

       Note that the implementation in Jottacloud always uses only a single API request to get the entire list,
       so for large folders this could lead to a long wait before the first results are shown.

       Note also that with rclone version 1.58 and newer, information about MIME types is not available when
       using --fast-list.
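
       For example, to total up the size and number of objects in the remote using a recursive listing:

              rclone size --fast-list remote: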

   Modified time and hashes
       Jottacloud allows modification times to be set on objects accurate to 1 second.  These will  be  used  to
       detect whether objects need syncing or not.

       Jottacloud supports MD5 type hashes, so you can use the --checksum flag.
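
       For example, to sync using MD5 checksums rather than modification times to detect changes (the paths
       here are illustrative):

              rclone sync --checksum /home/source remote:backup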

       Note that Jottacloud requires the MD5 hash before upload, so if the source does not have an MD5 checksum
       then the file will be cached temporarily on disk (in the location given by --temp-dir) before it is
       uploaded.  Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag.  When
       uploading from local disk the source checksum is always available, so this does not apply.  Starting
       with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would
       not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described
       above).

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       "           0x22        ＂
       *           0x2A        ＊
       :           0x3A        ：
       <           0x3C        ＜
       >           0x3E        ＞
       ?           0x3F        ？
       |           0x7C        ｜

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in XML strings.

   Deleting files
       By default, rclone will send all files to the trash when deleting files.  They will be permanently
       deleted automatically after 30 days.  You may bypass the trash and permanently delete files immediately
       by using the --jottacloud-hard-delete flag, or set the equivalent environment variable.  Emptying the
       trash is supported by the cleanup command.
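
       For example, the first command below permanently deletes a path, bypassing the trash, while the second
       empties the trash (the path is illustrative):

              rclone delete --jottacloud-hard-delete remote:backup/old
              rclone cleanup remote: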

   Versions
       Jottacloud supports file versioning.  When rclone uploads a new version of a file it creates a new
       version of it.  Currently rclone only supports retrieving the current version, but older versions can
       be accessed via the Jottacloud website.

       Versioning can be disabled with the --jottacloud-no-versions option.  This is achieved by deleting the
       remote file prior to uploading a new version.  If the upload fails, no version of the file will be
       available in the remote.

   Quota information
       To  view  your  current  quota you can use the rclone about remote: command which will display your usage
       limit (unless it is unlimited) and the current usage.
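
       For example, to show the quota in machine-readable form:

              rclone about remote: --json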

   Advanced options
       Here are the Advanced options specific to jottacloud (Jottacloud).

   –jottacloud-md5-memory-limit
       Files bigger than this will be cached on disk to calculate the MD5 if required.

       Properties:

       • Config: md5_memory_limit

       • Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT

       • Type: SizeSuffix

       • Default: 10Mi
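
       As an illustrative sketch, either of the following raises the limit to 50Mi, allowing files up to that
       size to be hashed in memory (the value and paths are illustrative):

              rclone copy --jottacloud-md5-memory-limit 50Mi /home/source remote:backup
              RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT=50Mi rclone copy /home/source remote:backup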

   –jottacloud-trashed-only
       Only show files that are in the trash.

       This will show trashed files in their original directory structure.

       Properties:

       • Config: trashed_only

       • Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY

       • Type: bool

       • Default: false

   –jottacloud-hard-delete
       Delete files permanently rather than putting them into the trash.

       Properties:

       • Config: hard_delete

       • Env Var: RCLONE_JOTTACLOUD_HARD_DELETE

       • Type: bool

       • Default: false

   –jottacloud-upload-resume-limit
       Files bigger than this can be resumed if the upload fails.

       Properties:

       • Config: upload_resume_limit

       • Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT

       • Type: SizeSuffix

       • Default: 10Mi

   –jottacloud-no-versions
       Avoid server side versioning by deleting files and recreating files instead of overwriting them.

       Properties:

       • Config: no_versions

       • Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS

       • Type: bool

       • Default: false

   –jottacloud-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_JOTTACLOUD_ENCODING

       • Type: MultiEncoder

       • Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot

   Limitations
       Note that Jottacloud is case insensitive so you can’t have a  file  called  “Hello.doc”  and  one  called
       “hello.doc”.

       There are quite a few characters that can’t be in Jottacloud file names.  Rclone will map these names
       to and from an identical looking unicode equivalent.  For example, if a file has a ?  in it, it will be
       mapped to ？ instead.

       Jottacloud only supports filenames up to 255 characters in length.

   Troubleshooting
       Jottacloud exhibits some inconsistent behaviours regarding deleted files  and  folders  which  may  cause
       Copy, Move and DirMove operations to previously deleted paths to fail.  Emptying the trash should help in
       such cases.

Koofr

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       The  initial  setup  for  Koofr involves creating an application password for rclone.  You can do that by
       opening the Koofr web application, giving the password a nice name like rclone and clicking on generate.

       Here is an example of how to make a remote called koofr.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> koofr
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
              22 / Koofr, Digi Storage and other Koofr-compatible storage providers
                 \ (koofr)
              [snip]
              Storage> koofr
              Option provider.
              Choose your storage provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / Koofr, https://app.koofr.net/
                 \ (koofr)
               2 / Digi Storage, https://storage.rcs-rds.ro/
                 \ (digistorage)
               3 / Any other Koofr API compatible storage service
                 \ (other)
              provider> 1
              Option user.
              Your user name.
              Enter a value.
              user> USERNAME
              Option password.
              Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
              Choose an alternative below.
              y) Yes, type in my own password
              g) Generate random password
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              --------------------
              [koofr]
              type = koofr
              provider = koofr
              user = USERNAME
              password = *** ENCRYPTED ***
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or
       white label Koofr instance, or choose an alternative mount instead of your primary storage.

       Once configured you can then use rclone like this,

       List directories in top level of your Koofr

              rclone lsd koofr:

       List all the files in your Koofr

              rclone ls koofr:

       To copy a local directory to a Koofr directory called backup

              rclone copy /home/source koofr:backup

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       \           0x5C        ＼

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in XML strings.

   Standard options
       Here are the Standard options specific to koofr (Koofr, Digi Storage and other  Koofr-compatible  storage
       providers).

   –koofr-provider
       Choose your storage provider.

       Properties:

       • Config: provider

       • Env Var: RCLONE_KOOFR_PROVIDER

       • Type: string

       • Required: false

       • Examples:

         • “koofr”

           • Koofr, https://app.koofr.net/

         • “digistorage”

           • Digi Storage, https://storage.rcs-rds.ro/

         • “other”

           • Any other Koofr API compatible storage service

   –koofr-endpoint
       The Koofr API endpoint to use.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_KOOFR_ENDPOINT

       • Provider: other

       • Type: string

       • Required: true

   –koofr-user
       Your user name.

       Properties:

       • Config: user

       • Env Var: RCLONE_KOOFR_USER

       • Type: string

       • Required: true

   –koofr-password
       Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: password

       • Env Var: RCLONE_KOOFR_PASSWORD

       • Provider: koofr

       • Type: string

       • Required: true
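
       As a sketch, the obscured form can be produced with rclone obscure and supplied via the environment
       variable (the password shown is a placeholder):

              RCLONE_KOOFR_PASSWORD=$(rclone obscure 'your-app-password') rclone lsd koofr: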

   –koofr-password
       Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: password

       • Env Var: RCLONE_KOOFR_PASSWORD

       • Provider: digistorage

       • Type: string

       • Required: true

   –koofr-password
       Your password for rclone (generate one at your service’s settings page).

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: password

       • Env Var: RCLONE_KOOFR_PASSWORD

       • Provider: other

       • Type: string

       • Required: true

   Advanced options
       Here  are  the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage
       providers).

   –koofr-mountid
       Mount ID of the mount to use.

       If omitted, the primary mount is used.

       Properties:

       • Config: mountid

       • Env Var: RCLONE_KOOFR_MOUNTID

       • Type: string

       • Required: false
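
       For example, to list a non-primary mount (the mount ID shown is a placeholder):

              rclone lsd --koofr-mountid 59fabc39-d02e-4d6a-8f5c-000000000000 koofr: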

   –koofr-setmtime
       Does the backend support setting modification time.

       Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.

       Properties:

       • Config: setmtime

       • Env Var: RCLONE_KOOFR_SETMTIME

       • Type: bool

       • Default: true

   –koofr-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_KOOFR_ENCODING

       • Type: MultiEncoder

       • Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

   Limitations
       Note that Koofr is case insensitive  so  you  can’t  have  a  file  called  “Hello.doc”  and  one  called
       “hello.doc”.

   Providers
   Koofr
       This is the original Koofr storage provider, used as the main example and described in the configuration
       section above.

   Digi Storage
       Digi Storage is a cloud storage service run by Digi.ro that provides a Koofr API.

       Here is an example of how to make a remote called ds.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> ds
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
              22 / Koofr, Digi Storage and other Koofr-compatible storage providers
                 \ (koofr)
              [snip]
              Storage> koofr
              Option provider.
              Choose your storage provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / Koofr, https://app.koofr.net/
                 \ (koofr)
               2 / Digi Storage, https://storage.rcs-rds.ro/
                 \ (digistorage)
               3 / Any other Koofr API compatible storage service
                 \ (other)
              provider> 2
              Option user.
              Your user name.
              Enter a value.
              user> USERNAME
              Option password.
              Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
              Choose an alternative below.
              y) Yes, type in my own password
              g) Generate random password
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n
              --------------------
              [ds]
              type = koofr
              provider = digistorage
              user = USERNAME
              password = *** ENCRYPTED ***
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Other
       You may also want to use another public or private storage provider that runs a Koofr API compatible
       service, by simply providing the base URL to connect to.

       Here is an example of how to make a remote called other.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> other
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
              22 / Koofr, Digi Storage and other Koofr-compatible storage providers
                 \ (koofr)
              [snip]
              Storage> koofr
              Option provider.
              Choose your storage provider.
              Choose a number from below, or type in your own value.
              Press Enter to leave empty.
               1 / Koofr, https://app.koofr.net/
                 \ (koofr)
               2 / Digi Storage, https://storage.rcs-rds.ro/
                 \ (digistorage)
               3 / Any other Koofr API compatible storage service
                 \ (other)
              provider> 3
              Option endpoint.
              The Koofr API endpoint to use.
              Enter a value.
              endpoint> https://koofr.other.org
              Option user.
              Your user name.
              Enter a value.
              user> USERNAME
              Option password.
              Your password for rclone (generate one at your service's settings page).
              Choose an alternative below.
              y) Yes, type in my own password
              g) Generate random password
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n
              --------------------
              [other]
              type = koofr
              provider = other
              endpoint = https://koofr.other.org
              user = USERNAME
              password = *** ENCRYPTED ***
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

Mail.ru Cloud

       Mail.ru Cloud is a cloud storage service provided by the Russian internet company Mail.Ru Group.  The
       official desktop client is Disk-O:, available on Windows and Mac OS.

       Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until support for it
       is eventually implemented.

   Features highlights
       • Paths may be as deep as required, e.g. remote:directory/subdirectory

       • Files have a last modified time property, directories don’t

       • Deleted files are by default moved to the trash

       • Files and directories can be shared via public links

       • Partial uploads or streaming are not supported, file size must be known before upload

       • Maximum file size is limited to 2G for a free account, unlimited for paid accounts

       • Storage keeps a hash for all files and performs transparent deduplication; the hash algorithm is a
         modified SHA1

       • If a particular file is already present in storage, one can quickly submit the file hash instead of a
         full file upload (this optimization is supported by rclone)

   Configuration
       Here is an example of making a mailru configuration.

       First create a Mail.ru Cloud account and choose a tariff.

       You  will  need  to  log in and create an app password for rclone.  Rclone will not work with your normal
       username and password - it will give an error like oauth2: server response missing access_token.

       • Click on your user icon in the top right

       • Go to Security / “Пароль и безопасность”

       • Click password for apps / “Пароли для внешних приложений”

       • Add the password - give it a name - eg “rclone”

       • Copy the password and use this password below - your normal login password won’t work.

       Now run

              rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Mail.ru Cloud
                 \ "mailru"
              [snip]
              Storage> mailru
              User name (usually email)
              Enter a string value. Press Enter for the default ("").
              user> username@mail.ru
              Password

              This must be an app password - rclone will not work with your normal
              password. See the Configuration section in the docs for how to make an
              app password.
              y) Yes type in my own password
              g) Generate random password
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Skip full upload if there is another file with same data hash.
              This feature is called "speedup" or "put by hash". It is especially efficient
              in case of generally available files like popular books, video or audio clips
              [snip]
              Enter a boolean value (true or false). Press Enter for the default ("true").
              Choose a number from below, or type in your own value
               1 / Enable
                 \ "true"
               2 / Disable
                 \ "false"
              speedup_enable> 1
              Edit advanced config? (y/n)
              y) Yes
              n) No
              y/n> n
              Remote config
              --------------------
              [remote]
              type = mailru
              user = username@mail.ru
              pass = *** ENCRYPTED ***
              speedup_enable = true
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Configuration of this backend does not require a local web browser.  You can use the  configured  backend
       as shown below:

       See top level directories

              rclone lsd remote:

       Make a new directory

              rclone mkdir remote:directory

       List the contents of a directory

              rclone ls remote:directory

       Sync /home/local/directory to the remote path, deleting any excess files in the path.

              rclone sync -i /home/local/directory remote:directory

   Modified time
       Files  support  a  modification  time attribute with up to 1 second precision.  Directories do not have a
       modification time, which is shown as “Jan 1 1970”.

   Hash checksums
       Hash sums use a custom Mail.ru algorithm based on SHA1.  If the file size is less than or equal to the
       SHA1 digest size (20 bytes), its hash is simply its data right-padded with zero bytes.  The hash sum of
       a larger file is computed as a SHA1 sum of the file data bytes concatenated with a decimal
       representation of the data length.
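
       As an illustration of the scheme described above, the hash of a file larger than 20 bytes could be
       approximated in a shell with GNU coreutils (a sketch of the documented scheme, not a reference
       implementation):

              { cat bigfile; printf '%s' "$(stat -c%s bigfile)"; } | sha1sum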

   Emptying Trash
       Removing a file or directory actually moves it to the trash, which is not visible to rclone but can be
       seen in a web browser.  The trashed file still occupies part of the total quota.  If you wish to empty
       your trash and free some quota, you can use the rclone cleanup remote: command, which will permanently
       delete all your trashed files.  This command does not take any path arguments.

   Quota information
       To view your current quota you can use the rclone about remote: command which  will  display  your  usage
       limit (quota) and the current usage.

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       "           0x22        ＂
       *           0x2A        ＊
       :           0x3A        ：
       <           0x3C        ＜
       >           0x3E        ＞
       ?           0x3F        ？
       \           0x5C        ＼
       |           0x7C        ｜

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to mailru (Mail.ru Cloud).

   –mailru-user
       User name (usually email).

       Properties:

       • Config: user

       • Env Var: RCLONE_MAILRU_USER

       • Type: string

       • Required: true

   –mailru-pass
       Password.

       This  must  be  an  app password - rclone will not work with your normal password.  See the Configuration
       section in the docs for how to make an app password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: pass

       • Env Var: RCLONE_MAILRU_PASS

       • Type: string

       • Required: true
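
       As a sketch, an app password can be stored (and obscured automatically) with the config password
       command, assuming the remote is called remote:

              rclone config password remote pass 'your-app-password'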

   –mailru-speedup-enable
       Skip full upload if there is another file with same data hash.

       This feature is called “speedup” or “put by hash”.  It is especially efficient in the case of generally
       available files like popular books, video or audio clips, because files are searched by hash in all
       accounts of all mailru users.  It is meaningless and ineffective if the source file is unique or
       encrypted.  Please note that rclone may need local memory and disk space to calculate the content hash
       in advance and decide whether a full upload is required.  Also, if rclone does not know the file size
       in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.

       Properties:

       • Config: speedup_enable

       • Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE

       • Type: bool

       • Default: true

       • Examples:

         • “true”

           • Enable

         • “false”

           • Disable

   Advanced options
       Here are the Advanced options specific to mailru (Mail.ru Cloud).

   –mailru-speedup-file-patterns
       Comma separated list of file name patterns eligible for speedup (put by hash).

       Patterns are case insensitive and can contain '*' or '?' meta characters.

       Properties:

       • Config: speedup_file_patterns

       • Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS

       • Type: string

       • Default: “*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf”

       • Examples:

         • “”

           • Empty list completely disables speedup (put by hash).

         • “*”

           • All files will be attempted for speedup.

         • “*.mkv,*.avi,*.mp4,*.mp3”

           • Only common audio/video files will be tried for put by hash.

         • “*.zip,*.gz,*.rar,*.pdf”

           • Only common archives or PDF books will be tried for speedup.

   –mailru-speedup-max-disk
       This option allows you to disable speedup (put by hash) for large files.

       The reason is that preliminary hashing can exhaust your RAM or disk space.

       Properties:

       • Config: speedup_max_disk

       • Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK

       • Type: SizeSuffix

       • Default: 3Gi

       • Examples:

         • “0”

           • Completely disable speedup (put by hash).

         • “1G”

           • Files larger than 1Gb will be uploaded directly.

         • “3G”

           • Choose this option if you have less than 3Gb free on local disk.

   –mailru-speedup-max-memory
       Files larger than the size given below will always be hashed on disk.

       Properties:

       • Config: speedup_max_memory

       • Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY

       • Type: SizeSuffix

       • Default: 32Mi

       • Examples:

         • “0”

           • Preliminary hashing will always be done in a temporary disk location.

         • “32M”

           • Do not dedicate more than 32Mb RAM for preliminary hashing.

         • “256M”

           • You have at most 256Mb RAM free for hash calculations.

   –mailru-check-hash
       What should copy do if file checksum is mismatched or invalid.

       Properties:

       • Config: check_hash

       • Env Var: RCLONE_MAILRU_CHECK_HASH

       • Type: bool

       • Default: true

       • Examples:

         • “true”

           • Fail with error.

         • “false”

           • Ignore and continue.

   –mailru-user-agent
       HTTP user agent used internally by client.

       Defaults to “rclone/VERSION” or the value of --user-agent if provided on the command line.

       Properties:

       • Config: user_agent

       • Env Var: RCLONE_MAILRU_USER_AGENT

       • Type: string

       • Required: false

   –mailru-quirks
       Comma separated list of internal maintenance flags.

       This option must not be used by an ordinary user.  It is intended only to facilitate remote
       troubleshooting of backend issues.  The exact meaning of the flags is not documented and is not
       guaranteed to persist between releases.  Quirks will be removed when the backend grows stable.
       Supported quirks: atomicmkdir binlist unknowndirs

       Properties:

       • Config: quirks

       • Env Var: RCLONE_MAILRU_QUIRKS

       • Type: string

       • Required: false

   –mailru-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_MAILRU_ENCODING

       • Type: MultiEncoder

       • Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot

   Limitations
       File size limits depend on your account.  A single file size is limited to 2G for a free account and
       unlimited for paid tariffs.  Please refer to the Mail.ru site for the total uploaded size limits.

       Note that Mailru is case insensitive so  you  can’t  have  a  file  called  “Hello.doc”  and  one  called
       “hello.doc”.

Mega

       Mega is  a  cloud  storage  and  file  hosting service known for its security feature where all files are
       encrypted locally before they are uploaded.  This prevents anyone  (including  employees  of  Mega)  from
       accessing the files without knowledge of the key used for encryption.

       This  is  an  rclone  backend  for  Mega which supports the file transfer features of Mega using the same
       client side encryption.

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Mega
                 \ "mega"
              [snip]
              Storage> mega
              User name
              user> you@example.com
              Password.
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank
              y/g/n> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Remote config
              --------------------
              [remote]
              type = mega
              user = you@example.com
              pass = *** ENCRYPTED ***
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       NOTE: The encryption keys need to have been already generated after a  regular  login  via  the  browser,
       otherwise attempting to use the credentials in rclone will fail.

       Once configured you can then use rclone like this,

       List directories in top level of your Mega

              rclone lsd remote:

       List all the files in your Mega

              rclone ls remote:

       To copy a local directory to a Mega directory called backup

              rclone copy /home/source remote:backup

   Modified time and hashes
       Mega does not support modification times or hashes yet.

   Restricted filename characters
       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
       /           0x2F        ／

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Duplicated files
       Mega can have two files with exactly the same name and path (unlike a normal file system).

       Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

       Use rclone dedupe to fix duplicated files.

    Failure to log in
   Object not found
       If you are connecting to your Mega remote for the first time, to test access and synchronization, you may
       receive an error such as

              Failed to create file system for "my-mega-remote:":
              couldn't login: Object (typically, node or user) not found

       The diagnostic steps often recommended in the rclone forum start with the MEGAcmd utility.  Note that
       this refers to the official C++ command from https://github.com/meganz/MEGAcmd and not the Go-based
       command from t3rm1n4l/megacmd, which is no longer maintained.

       Follow the instructions for installing MEGAcmd and try accessing your remote as they recommend.  This
       will establish whether or not you can log in using MEGAcmd, and provide diagnostic information to help
       you search for, or work with, others in the forum.

              MEGA CMD> login me@example.com
              Password:
              Fetching nodes ...
              Loading transfers from local cache
              Login complete as me@example.com
              me@example.com:/$

       Note that some have found issues with passwords containing special characters.  If you cannot log on
       with rclone, but MEGAcmd logs on just fine, then consider changing your password temporarily to pure
       alphanumeric characters, in case that helps.

    Repeated commands block access
       Mega remotes seem to get blocked (reject logins) under “heavy use”.  We haven’t worked out the exact
       blocking rules, but it seems to be related to fast-paced, successive rclone commands.

       For example, executing the command rclone link remote:file 90 times in a row will cause the remote to
       become “blocked”.  This is not an abnormal situation, for example if you wish to get the public links
       of a directory with hundreds of files...  After more or less a week, the remote will accept rclone
       logins normally again.

       You can mitigate this issue by mounting the remote with rclone mount.  This will log in when mounting
       and log out when unmounting only.  You can also run rclone rcd and then use rclone rc to run the
       commands over the API to avoid logging in each time.
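
       A minimal sketch of the latter (--rc-no-auth disables API authentication, so only use it on a trusted
       machine):

              rclone rcd --rc-no-auth &
              rclone rc operations/list fs=remote: remote=""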

       Rclone does not currently close mega sessions (you can see them in the web interface); however, closing
       the sessions does not solve the issue.

       If  you  space rclone commands by 3 seconds it will avoid blocking the remote.  We haven’t identified the
       exact blocking rules, so perhaps one could execute  the  command  80  times  without  waiting  and  avoid
       blocking by waiting 3 seconds, then continuing...

       Note that this has been observed by trial and error and might not be set in stone.

       Other  tools  seem  not  to  produce  this  blocking  effect,  as  they  use a different working approach
       (state-based, using sessionIDs instead of log-in) which  isn’t  compatible  with  the  current  stateless
       rclone approach.

       Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following
       megacmd login times have been observed in succession for a blocked remote: 7 minutes, 20 minutes, 30
       minutes, 30 minutes, 30 minutes.  Web access looks unaffected though.

       Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits
       - if you discover something relevant, please post on the forum.

       So, if rclone was working nicely and suddenly you are unable to log in, and you are sure the username
       and password are correct, it is likely that the remote has been blocked for a while.

   Standard options
       Here are the Standard options specific to mega (Mega).

   –mega-user
       User name.

       Properties:

       • Config: user

       • Env Var: RCLONE_MEGA_USER

       • Type: string

       • Required: true

   –mega-pass
       Password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: pass

       • Env Var: RCLONE_MEGA_PASS

       • Type: string

       • Required: true

   Advanced options
       Here are the Advanced options specific to mega (Mega).

   –mega-debug
       Output more debug from Mega.

       If this flag is set (along with -vv) it will print further debugging information from the mega backend.

       Properties:

       • Config: debug

       • Env Var: RCLONE_MEGA_DEBUG

       • Type: bool

       • Default: false

   –mega-hard-delete
       Delete files permanently rather than putting them into the trash.

       Normally  the  mega  backend will put all deletions into the trash rather than permanently deleting them.
       If you specify this then rclone will permanently delete objects instead.

       Properties:

       • Config: hard_delete

       • Env Var: RCLONE_MEGA_HARD_DELETE

       • Type: bool

       • Default: false

   –mega-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_MEGA_ENCODING

       • Type: MultiEncoder

       • Default: Slash,InvalidUtf8,Dot

   Limitations
       This backend uses the go-mega library, an open-source Go library implementing the Mega API.  There
       doesn’t appear to be any documentation for the Mega protocol beyond the Mega C++ SDK source code, so
       there are likely quite a few errors still remaining in this library.

       Mega allows duplicate files which may confuse rclone.

Memory

       The memory backend is an in-RAM backend.  It does not persist its data - use the local backend for that.

       The  memory  backend behaves like a bucket-based remote (e.g. like s3).  Because it has no parameters you
       can just use it with the :memory: remote name.

   Configuration
       You can configure it as a remote like this with rclone config too if you want to:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Memory
                 \ "memory"
              [snip]
              Storage> memory
              ** See help for memory backend at: https://rclone.org/memory/ **

              Remote config

              --------------------
              [remote]
              type = memory
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Because the memory backend isn’t persistent it is most useful for testing or with  an  rclone  server  or
       rclone mount, e.g.

              rclone mount :memory: /mnt/tmp
              rclone serve webdav :memory:
              rclone serve sftp :memory:

   Modified time and hashes
       The memory backend supports MD5 hashes and modification times accurate to 1 ns.

   Restricted filename characters
       The memory backend replaces the default restricted characters set.

Akamai NetStorage

       Paths are specified as remote:.  You may put subdirectories in too, e.g. remote:/path/to/dir.  If you
       have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal
       directories within cpcode>.

       For example, this is commonly configured with or without a CP code:

       • With a CP code: [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/

       • Without a CP code: [your-domain-prefix]-nsu.akamaihd.net

       See all buckets

              rclone lsd remote:

       The initial setup for NetStorage involves getting an account and secret.  Use rclone config to walk you
       through the setup process.

   Configuration
       Here’s an example of how to make a remote called ns1.

       1. To begin the interactive configuration process, enter this command:

          rclone config

       2. Type n to create a new remote.

          n) New remote
          d) Delete remote
          q) Quit config
          e/n/d/q> n

       3. For this example, enter ns1 when you reach the name> prompt.

          name> ns1

       4. Enter netstorage as the type of storage to configure.

          Type of storage to configure.
          Enter a string value. Press Enter for the default ("").
          Choose a number from below, or type in your own value
          XX / NetStorage
             \ "netstorage"
          Storage> netstorage

       5. Select  between  the  HTTP  or  HTTPS protocol.  Most users should choose HTTPS, which is the default.
          HTTP is provided primarily for debugging purposes.

          Enter a string value. Press Enter for the default ("").
          Choose a number from below, or type in your own value
           1 / HTTP protocol
             \ "http"
           2 / HTTPS protocol
             \ "https"
          protocol> 1

       6. Specify  your  NetStorage  host,  CP  code,  and  any  necessary  content  paths  using  this  format:
          <domain>/<cpcode>/<content>/

          Enter a string value. Press Enter for the default ("").
          host> baseball-nsu.akamaihd.net/123456/content/

       7. Set the netstorage account name

          Enter a string value. Press Enter for the default ("").
          account> username

       8. Set  the Netstorage account secret/G2O key which will be used for authentication purposes.  Select the
          y option to set your own password then  enter  your  secret.   Note:  The  secret  is  stored  in  the
          rclone.conf file with hex-encoded encryption.

          y) Yes type in my own password
          g) Generate random password
          y/g> y
          Enter the password:
          password:
          Confirm the password:
          password:

       9. View the summary and confirm your remote configuration.

          [ns1]
          type = netstorage
          protocol = http
          host = baseball-nsu.akamaihd.net/123456/content/
          account = username
          secret = *** ENCRYPTED ***
          --------------------
          y) Yes this is OK (default)
          e) Edit this remote
          d) Delete this remote
          y/e/d> y

       This remote is called ns1 and can now be used.

   Example operations
       Get  started  with  rclone  and  NetStorage  with  these examples.  For additional rclone commands, visit
       https://rclone.org/commands/.

   See contents of a directory in your project
              rclone lsd ns1:/974012/testing/

   Sync the contents local with remote
              rclone sync . ns1:/974012/testing/

   Upload local content to remote
              rclone copy notes.txt ns1:/974012/testing/

   Delete content on remote
              rclone delete ns1:/974012/testing/notes.txt

   Move or copy content between CP codes.
       Your credentials must have access to two CP codes on the  same  remote.   You  can’t  perform  operations
       between different remotes.

              rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/

   Features
   Symlink Support
       The NetStorage backend changes the rclone --links, -l behavior.  When uploading, instead of creating
       the .rclonelink file, the backend uses the “symlink” API to create the corresponding symlink on the
       remote.  The .rclonelink file will not be created; the upload will be intercepted and only the symlink
       file that matches the source file name with no suffix will be created on the remote.

       This will effectively allow commands like copy/copyto, move/moveto and  sync  to  upload  from  local  to
       remote  and download from remote to local directories with symlinks.  Due to internal rclone limitations,
       it is not possible to upload an individual symlink file to any remote backend.  You can  always  use  the
       “backend symlink” command to create a symlink on the NetStorage server, refer to “symlink” section below.

       Individual symlink files on the remote can be used with commands like “cat” to print the destination
       name, “delete” to delete the symlink, or copy/copyto and move/moveto to download from the remote to
       local.  Note: individual symlink files on the remote should be specified including the suffix
       .rclonelink.

       Note: No file with the suffix .rclonelink should ever exist on the server since it  is  not  possible  to
       actually  upload/create  a  file with .rclonelink suffix with rclone, it can only exist if it is manually
       created through a non-rclone method on the remote.

   Implicit vs. Explicit Directories
       With NetStorage, directories can exist in one of two forms:

       1. Explicit Directory.  This is an actual, physical directory that you have created in a storage group.

       2. Implicit Directory.  This refers to a directory within a path that has not  been  physically  created.
          For  example, during upload of a file, nonexistent subdirectories can be specified in the target path.
          NetStorage creates these as “implicit.” While the directories aren’t physically  created,  they  exist
          implicitly and the noted path is connected with the uploaded file.

       Rclone  will  intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly
       issue  the  mkdir  command  for  each  directory  in  the  uploading  path.   This  will  help  with  the
       interoperability  with the other Akamai services such as SFTP and the Content Management Shell (CMShell).
       Rclone will not guarantee correctness of operations with  implicit  directories  which  might  have  been
       created as a result of using an upload API directly.

   --fast-list / ListR support
       NetStorage  remote  supports  the  ListR  feature  by  using the “list” NetStorage API action to return a
       lexicographical list of all objects within the  specified  CP  code,  recursing  into  subdirectories  as
       they’re encountered.

       • Rclone  will use the ListR method for some commands by default.  Commands such as lsf -R will use ListR
         by default.  To disable this, include the --disable listR option to use  the  non-recursive  method  of
         listing objects.

       • Rclone  will  not  use  the  ListR  method for some commands.  Commands such as sync don’t use ListR by
         default.  To force using the ListR method, include the --fast-list option.

       There are pros and cons of using the ListR method, refer to the rclone documentation.  In general, the
       sync command over an existing deep tree on the remote will run faster with the --fast-list flag, but
       with extra memory usage as a side effect.  It might also result in higher CPU utilization but the whole
       task can be completed faster.

       Note: There is a known limitation that lsf -R will display the number of files in the directory and the
       directory size as -1 when the ListR method is used.  The workaround is to pass the --disable listR flag
       if these numbers are important in the output.
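
       For example, using the CP code from the examples above:

              rclone lsf -R --disable listR ns1:/974012/testing/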

   Purge
       NetStorage remote supports the purge feature by using the  “quick-delete”  NetStorage  API  action.   The
       quick-delete  action  is  disabled  by  default  for  security reasons and can be enabled for the account
       through the Akamai portal.  Rclone will first try to use quick-delete action for the purge command and if
       this functionality is disabled then will fall back to a standard delete method.

       Note: Read the NetStorage Usage API documentation
       (https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html)
       for considerations when using “quick-delete”.  In general, using the quick-delete method will not
       delete the tree immediately and objects targeted for quick-delete may still be accessible.

   Standard options
       Here are the Standard options specific to netstorage (Akamai NetStorage).

   –netstorage-host
       Domain+path of NetStorage host to connect to.

       Format should be <domain>/<internal folders>

       Properties:

       • Config: host

       • Env Var: RCLONE_NETSTORAGE_HOST

       • Type: string

       • Required: true

   –netstorage-account
       Set the NetStorage account name

       Properties:

       • Config: account

       • Env Var: RCLONE_NETSTORAGE_ACCOUNT

       • Type: string

       • Required: true

   –netstorage-secret
       Set the NetStorage account secret/G2O key for authentication.

       Please choose the y option to set your own password then enter your secret.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: secret

       • Env Var: RCLONE_NETSTORAGE_SECRET

       • Type: string

       • Required: true

   Advanced options
       Here are the Advanced options specific to netstorage (Akamai NetStorage).

   –netstorage-protocol
       Select between HTTP or HTTPS protocol.

       Most users should choose HTTPS, which is the default.  HTTP is provided primarily for debugging purposes.

       Properties:

       • Config: protocol

       • Env Var: RCLONE_NETSTORAGE_PROTOCOL

       • Type: string

       • Default: “https”

       • Examples:

         • “http”

           • HTTP protocol

         • “https”

           • HTTPS protocol

   Backend commands
       Here are the commands specific to the netstorage backend.

       Run them with

              rclone backend COMMAND remote:

       The help below will explain what arguments each command takes.

       See the backend command for more info on how to pass options and arguments.

       These can be run on a running backend using the rc command backend/command.

   du
       Return disk usage information for a specified directory

              rclone backend du remote: [options] [<arguments>+]

       The usage information returned includes the targeted directory as well as all files stored in any
       sub-directories that may exist.
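
       For example, using the CP code from the examples above:

              rclone backend du ns1:/974012/testing/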

   symlink
       You can create a symbolic link in ObjectStore with the symlink action.

              rclone backend symlink remote: [options] [<arguments>+]

       Provide the desired path location (including applicable sub-directories) ending in the object that will
       be the target of the symlink (for example, /links/mylink).  Include the file extension for the object,
       if applicable.

              rclone backend symlink <src> <path>

Microsoft Azure Blob Storage

       Paths are specified as remote:container (or remote: for the lsd command.)  You may put subdirectories  in
       too, e.g.  remote:container/path/to/dir.

   Configuration
       Here  is  an example of making a Microsoft Azure Blob Storage configuration.  For a remote called remote.
       First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Microsoft Azure Blob Storage
                 \ "azureblob"
              [snip]
              Storage> azureblob
              Storage Account Name
              account> account_name
              Storage Account Key
              key> base64encodedkey==
              Endpoint for the service - leave blank normally.
              endpoint>
              Remote config
              --------------------
              [remote]
              account = account_name
              key = base64encodedkey==
              endpoint =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       See all containers

              rclone lsd remote:

       Make a new container

              rclone mkdir remote:container

       List the contents of a container

              rclone ls remote:container

       Sync /home/local/directory to the remote container, deleting any excess files in the container.

              rclone sync -i /home/local/directory remote:container

   –fast-list
       This remote supports --fast-list which allows you to use fewer transactions in exchange for more  memory.
       See the rclone docs for more details.
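
       For example:

               rclone sync -i --fast-list /home/local/directory remote:container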

   Modified time
       The  modified  time  is  stored as metadata on the object with the mtime key.  It is stored using RFC3339
       Format time with nanosecond precision.  The metadata is supplied during directory listings so there is no
       overhead to using it.

   Performance
       When uploading  large  files,  increasing  the  value  of  --azureblob-upload-concurrency  will  increase
       performance at the cost of using more memory.  The default of 16 is set quite conservatively to use
       less memory.  It may be necessary to raise it to 64 or higher to fully utilize a 1 Gbit/s link with a
       single file transfer.
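
       For example, to upload a single large file with higher concurrency (paths are placeholders):

               rclone copy --azureblob-upload-concurrency 64 /path/to/bigfile remote:container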

   Restricted filename characters
       In addition to the default restricted characters set, the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       /           0x2F        ／
       \           0x5C        ＼

       File  names can also not end with the following characters.  These only get replaced if they are the last
       character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       .           0x2E        ．

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Hashes
       MD5 hashes are stored with blobs.  However, blobs that were uploaded in chunks only have an MD5 if the
       source remote was capable of MD5 hashes, e.g. the local disk.

   Authenticating with Azure Blob Storage
       Rclone has 3 ways of authenticating with Azure Blob Storage:

   Account and Key
       This is the most straightforward and least flexible way.  Just fill in the account and key lines and
       leave the rest blank.

   SAS URL
       This can be an account level SAS URL or container level SAS URL.

       To use it leave account, key blank and fill in sas_url.

       An account level SAS URL or container level SAS URL can be obtained from the Azure portal  or  the  Azure
       Storage Explorer.  To get a container level SAS URL right click on a container in the Azure Blob explorer
       in the Azure portal.

       If  you  use  a  container level SAS URL, rclone operations are permitted only on a particular container,
       e.g.

              rclone ls azureblob:container

       You can also list the single container from the root.  This will only show the container specified by the
       SAS URL.

              $ rclone lsd azureblob:
              container/

       Note that you can’t see or access any other containers - this will fail

              rclone ls azureblob:othercontainer

       Container level SAS URLs are useful for temporarily allowing third parties access to a  single  container
       or putting credentials into an untrusted environment such as a CI build server.
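
       A non-interactive configuration sketch using rclone config create (the SAS URL shown is only
       illustrative of the general shape):

               rclone config create mysas azureblob sas_url "https://<account>.blob.core.windows.net/<container>?<sas-token>"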

   Standard options
       Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).

   –azureblob-account
       Storage Account Name.

       Leave blank to use SAS URL or Emulator.

       Properties:

       • Config: account

       • Env Var: RCLONE_AZUREBLOB_ACCOUNT

       • Type: string

       • Required: false

   –azureblob-service-principal-file
       Path to file containing credentials for use with a service principal.

       Leave blank normally.  Needed only if you want to use a service principal instead of interactive login.

              $ az ad sp create-for-rbac --name "<name>" \
                --role "Storage Blob Data Owner" \
                --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
                > azure-principal.json

       See https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli “Create an
       Azure service principal” and
       https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli “Assign an Azure role
       for access to blob data” pages for more details.
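
       The resulting credentials file can then be referenced when running rclone, for example:

               rclone ls remote:container --azureblob-service-principal-file azure-principal.json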

       Properties:

       • Config: service_principal_file

       • Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE

       • Type: string

       • Required: false

   –azureblob-key
       Storage Account Key.

       Leave blank to use SAS URL or Emulator.

       Properties:

       • Config: key

       • Env Var: RCLONE_AZUREBLOB_KEY

       • Type: string

       • Required: false

   –azureblob-sas-url
       SAS URL for container level access only.

       Leave blank if using account/key or Emulator.

       Properties:

       • Config: sas_url

       • Env Var: RCLONE_AZUREBLOB_SAS_URL

       • Type: string

       • Required: false

   –azureblob-use-msi
       Use a managed service identity to authenticate (only works in Azure).

       When true, use a
       https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/ managed
       service identity to authenticate to Azure Storage instead of a SAS token or account key.

       If  the  VM(SS)  on  which  this  program  is  running has a system-assigned identity, it will be used by
       default.   If  the  resource  has  no  system-assigned  but  exactly  one  user-assigned  identity,   the
       user-assigned  identity  will be used by default.  If the resource has multiple user-assigned identities,
       the identity to use must be explicitly specified using exactly one of the  msi_object_id,  msi_client_id,
       or msi_mi_res_id parameters.
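
       For example, to list containers using a specific user-assigned identity (the client ID is a
       placeholder):

               rclone lsd remote: --azureblob-use-msi --azureblob-msi-client-id <client-id>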

       Properties:

       • Config: use_msi

       • Env Var: RCLONE_AZUREBLOB_USE_MSI

       • Type: bool

       • Default: false

   –azureblob-use-emulator
       Uses a local storage emulator if set to true.

       Leave blank if using real azure storage endpoint.

       Properties:

       • Config: use_emulator

       • Env Var: RCLONE_AZUREBLOB_USE_EMULATOR

       • Type: bool

       • Default: false

   Advanced options
       Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).

   –azureblob-msi-object-id
       Object ID of the user-assigned MSI to use, if any.

       Leave blank if msi_client_id or msi_mi_res_id specified.

       Properties:

       • Config: msi_object_id

       • Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID

       • Type: string

       • Required: false

   –azureblob-msi-client-id
       Client ID of the user-assigned MSI to use, if any.

       Leave blank if msi_object_id or msi_mi_res_id specified.

       Properties:

       • Config: msi_client_id

       • Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID

       • Type: string

       • Required: false

   –azureblob-msi-mi-res-id
       Azure resource ID of the user-assigned MSI to use, if any.

       Leave blank if msi_client_id or msi_object_id specified.

       Properties:

       • Config: msi_mi_res_id

       • Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID

       • Type: string

       • Required: false

   –azureblob-endpoint
       Endpoint for the service.

       Leave blank normally.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_AZUREBLOB_ENDPOINT

       • Type: string

       • Required: false

   –azureblob-upload-cutoff
       Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF

       • Type: string

       • Required: false

   –azureblob-chunk-size
       Upload chunk size.

       Note  that this is stored in memory and there may be up to “–transfers” * “–azureblob-upload-concurrency”
       chunks stored at once in memory.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 4Mi

   –azureblob-upload-concurrency
       Concurrency for multipart uploads.

       This is the number of chunks of the same file that are uploaded concurrently.

       If you are uploading small numbers of large files over high-speed links and these uploads  do  not  fully
       utilize your bandwidth, then increasing this may help to speed up the transfers.

       In  tests, upload speed increases almost linearly with upload concurrency.  For example to fill a gigabit
       pipe it may be necessary to raise this to 64.  Note that this will use more memory.

       Note   that   chunks   are   stored   in   memory   and   there   may   be   up   to    “–transfers”    *
       “–azureblob-upload-concurrency” chunks stored at once in memory.

       Properties:

       • Config: upload_concurrency

       • Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY

       • Type: int

       • Default: 16

   –azureblob-list-chunk
       Size of blob list.

       This sets the number of blobs requested in each listing chunk.  Default is the maximum, 5000.  “List
       blobs” requests are permitted 2 minutes per megabyte to complete.  If an operation is taking longer than
       2 minutes per megabyte on average, it will time out
       (https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval
       source).  This can be used to limit the number of blob items returned, to avoid the time out.

       Properties:

       • Config: list_chunk

       • Env Var: RCLONE_AZUREBLOB_LIST_CHUNK

       • Type: int

       • Default: 5000

   –azureblob-access-tier
       Access tier of blob: hot, cool or archive.

       Archived blobs can be restored by setting the access tier to hot or cool.  Leave blank if you intend to
       use the default access tier, which is set at the account level.

       If no “access tier” is specified, rclone doesn’t apply any tier.  rclone performs a “Set Tier” operation
       on blobs while uploading.  If objects are not modified, specifying a new “access tier” will have no
       effect.  If blobs are in the “archive tier” at the remote, data transfer operations from the remote will
       not be allowed; the user should first restore them by tiering the blobs to “Hot” or “Cool”.
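
       For example, to upload blobs with the cool tier set (paths are placeholders):

               rclone copy /path/to/dir remote:container --azureblob-access-tier cool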

       Properties:

       • Config: access_tier

       • Env Var: RCLONE_AZUREBLOB_ACCESS_TIER

       • Type: string

       • Required: false

   –azureblob-archive-tier-delete
       Delete archive tier blobs before overwriting.

       Archive tier blobs cannot be updated.  So without this flag, if you attempt to  update  an  archive  tier
       blob, then rclone will produce the error:

              can't update archive tier blob without --azureblob-archive-tier-delete

       With this flag set, before rclone attempts to overwrite an archive tier blob, it will delete the
       existing blob before uploading its replacement.  This has the potential for data loss if the upload
       fails (unlike updating a normal blob) and also may cost more since deleting archive tier blobs early may
       be chargeable.

       Properties:

       • Config: archive_tier_delete

       • Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE

       • Type: bool

       • Default: false

   –azureblob-disable-checksum
       Don’t store MD5 checksum with object metadata.

       Normally  rclone  will  calculate  the  MD5 checksum of the input before uploading it so it can add it to
       metadata on the object.  This is great for data integrity checking but can cause long  delays  for  large
       files to start uploading.

       Properties:

       • Config: disable_checksum

       • Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM

       • Type: bool

       • Default: false

   –azureblob-memory-pool-flush-time
       How often internal memory buffer pools will be flushed.

       Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
       This option controls how often unused buffers will be removed from the pool.

       Properties:

       • Config: memory_pool_flush_time

       • Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME

       • Type: Duration

       • Default: 1m0s

   –azureblob-memory-pool-use-mmap
       Whether to use mmap buffers in internal memory pool.

       Properties:

       • Config: memory_pool_use_mmap

       • Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP

       • Type: bool

       • Default: false

   –azureblob-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_AZUREBLOB_ENCODING

       • Type: MultiEncoder

       • Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8

   –azureblob-public-access
       Public access level of a container: blob or container.

       Properties:

       • Config: public_access

       • Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS

       • Type: string

       • Required: false

       • Examples:

         • “”

           • The container and its blobs can be accessed only with an authorized request.

           • It’s a default value.

         • “blob”

           • Blob data within this container can be read via anonymous request.

         • “container”

           • Allow full public read access for container and blob data.

   –azureblob-no-head-object
       If set, do not do HEAD before GET when getting objects.

       Properties:

       • Config: no_head_object

       • Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT

       • Type: bool

       • Default: false

   Limitations
       MD5 sums are only uploaded with chunked files if the source has an MD5 sum.  This will always be the case
       for a local to Azure copy.

       rclone about is not supported by the  Microsoft  Azure  Blob  storage  backend.   Backends  without  this
       capability  cannot  determine  free  space  for  an rclone mount or use policy mfs (most free space) as a
       member of an rclone union remote.

       See List of backends that do not support rclone about and rclone about

   Azure Storage Emulator Support
       You can run rclone with a storage emulator (usually Azurite).

       To do this, just set up a new remote with rclone config following the instructions described in the
       introduction and set the use_emulator config option to true.  You do not need to provide a default
       account name or an account key.

       Also, if you want to access a storage emulator instance running on a different machine, you can override
       the Endpoint parameter in the advanced settings, setting it to http(s)://<host>:<port>/devstoreaccount1
       (e.g. http://10.254.2.5:10000/devstoreaccount1).
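
       A non-interactive configuration sketch using rclone config create (the endpoint value is taken from the
       example above):

               rclone config create azemu azureblob use_emulator true endpoint http://10.254.2.5:10000/devstoreaccount1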

Microsoft OneDrive

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       The initial setup for OneDrive involves getting a token from Microsoft which  you  need  to  do  in  your
       browser.  rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Microsoft OneDrive
                 \ "onedrive"
              [snip]
              Storage> onedrive
              Microsoft App Client Id
              Leave blank normally.
              Enter a string value. Press Enter for the default ("").
              client_id>
              Microsoft App Client Secret
              Leave blank normally.
              Enter a string value. Press Enter for the default ("").
              client_secret>
              Edit advanced config? (y/n)
              y) Yes
              n) No
              y/n> n
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              Choose a number from below, or type in an existing value
               1 / OneDrive Personal or Business
                 \ "onedrive"
               2 / Sharepoint site
                 \ "sharepoint"
               3 / Type in driveID
                 \ "driveid"
               4 / Type in SiteID
                 \ "siteid"
               5 / Search a Sharepoint site
                 \ "search"
              Your choice> 1
              Found 1 drives, please select the one you want to use:
              0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
              Chose drive to use:> 0
              Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
              Is that okay?
              y) Yes
              n) No
              y/n> y
              --------------------
              [remote]
              type = onedrive
              token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
              drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
              drive_type = business
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft.
       This only runs from the moment it opens your browser to the moment you get back the verification code.
       This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a
       host firewall.

       Once configured you can then use rclone like this,

       List directories in top level of your OneDrive

              rclone lsd remote:

       List all the files in your OneDrive

              rclone ls remote:

       To copy a local directory to a OneDrive directory called backup

              rclone copy /home/source remote:backup

   Getting your own Client ID and Key
       rclone uses a default Client ID when talking to OneDrive, unless a custom client_id is specified  in  the
       config.  The default Client ID and Key are shared by all rclone users when performing requests.

       You  may choose to create and use your own Client ID, in case the default one does not work well for you.
       For example, you might see throttling.

   Creating Client ID for OneDrive Personal
       To create your own Client ID, please follow these steps:

       1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click
          New registration.

       2. Enter a name for your app, choose account type Accounts in any organizational directory (Any Azure  AD
          directory  -  Multitenant)  and personal Microsoft accounts (e.g. Skype, Xbox), select Web in Redirect
          URI, then type (do not copy and paste) http://localhost:53682/ and click Register.  Copy and keep  the
          Application (client) ID under the app name for later use.

       3. Under  manage  select  Certificates  &  secrets, click New client secret.  Enter a description (can be
          anything) and set Expires to 24 months.  Copy and keep that secret Value for later use (you  won’t  be
          able to see this value afterwards).

       4. Under  manage  select  API  permissions, click Add a permission and select Microsoft Graph then select
          delegated permissions.

       5. Search  and  select  the   following   permissions:   Files.Read,   Files.ReadWrite,   Files.Read.All,
          Files.ReadWrite.All,  offline_access,  User.Read  and  Sites.Read.All  (if  custom  access  scopes are
          configured, select the permissions accordingly).  Once selected click Add permissions at the bottom.

       Now the application is complete.  Run rclone config to create or edit a OneDrive remote.  Supply the  app
       ID and password as Client ID and Secret, respectively.  rclone will walk you through the remaining steps.

       The   access_scopes   option   allows  you  to  configure  the  permissions  requested  by  rclone.   See
       https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions Microsoft Docs for    more
       information about the different scopes.

       The  Sites.Read.All  permission  is  required if you need to search SharePoint sites when configuring the
       remote.  However, if that permission is not assigned, you need to exclude Sites.Read.All from your access
       scopes or set disable_site_permission option to true in the advanced options.

   Creating Client ID for OneDrive Business
       The steps for OneDrive Personal may or may not work for OneDrive  Business,  depending  on  the  security
       settings of the organization.  A common error is that the publisher of the App is not verified.

       You may try to
       https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview verify
       your account, or try to limit the App to your organization only, as shown below.

       1. Make sure to create the App with your business account.

       2. Follow  the steps above to create an App.  However, we need a different account type here: Accounts in
          this organizational directory only (*** - Single tenant).  Note that you can also change  the  account
          type after creating the App.

       3. Find the
          https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant
          tenant ID of your organization.

       4. In            the            rclone            config,            set            auth_url           to
          https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize.

       5. In           the           rclone            config,            set            token_url            to
          https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token.

       Note: If you have a special region, you may need a different host in steps 4 and 5.  Here are
       https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86
       some hints.
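
       Steps 4 and 5 correspond to config entries like the following sketch (YOUR_TENANT_ID is a placeholder):

               [remote]
               type = onedrive
               auth_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize
               token_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token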

   Modification time and hashes
       OneDrive allows modification times to be set on objects accurate to 1 second.   These  will  be  used  to
       detect whether objects need syncing or not.

       OneDrive  personal  supports  SHA1  type  hashes.   OneDrive  for  business and Sharepoint Server support
       https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash QuickXorHash.

       For all types of OneDrive you can use the --checksum flag.
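
       For example, to sync comparing checksums instead of sizes and modification times:

               rclone sync --checksum /home/source remote:backup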

   Restricted filename characters
       In addition to the default restricted characters set, the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       "           0x22        ＂
       *           0x2A        ＊
       :           0x3A        ：
       <           0x3C        ＜
       >           0x3E        ＞
       ?           0x3F        ？
       \           0x5C        ＼
       |           0x7C        ｜

       File names can also not end with the following characters.  These only get replaced if they are the  last
       character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠
       .           0x2E        ．

       File  names  can  also  not begin with the following characters.  These only get replaced if they are the
       first character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠
       ~           0x7E        ～

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Deleting files
       Any files you delete with rclone will end  up  in  the  trash.   Microsoft  doesn’t  provide  an  API  to
       permanently  delete  files,  nor  to empty the trash, so you will have to do that with one of Microsoft’s
       apps or via the OneDrive website.

   Standard options
       Here are the Standard options specific to onedrive (Microsoft OneDrive).

   –onedrive-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_ONEDRIVE_CLIENT_ID

       • Type: string

       • Required: false

   –onedrive-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET

       • Type: string

       • Required: false

   –onedrive-region
       Choose national cloud region for OneDrive.

       Properties:

       • Config: region

       • Env Var: RCLONE_ONEDRIVE_REGION

       • Type: string

       • Default: “global”

       • Examples:

         • “global”

           • Microsoft Cloud Global

         • “us”

           • Microsoft Cloud for US Government

         • “de”

           • Microsoft Cloud Germany

         • “cn”

           • Azure and Office 365 operated by Vnet Group in China

   Advanced options
       Here are the Advanced options specific to onedrive (Microsoft OneDrive).

   –onedrive-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_ONEDRIVE_TOKEN

       • Type: string

       • Required: false

   –onedrive-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_ONEDRIVE_AUTH_URL

       • Type: string

       • Required: false

   –onedrive-token-url
       Token server url.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_ONEDRIVE_TOKEN_URL

       • Type: string

       • Required: false

   –onedrive-chunk-size
       Chunk size to upload files with - must be multiple of 320k (327,680 bytes).

       Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and  should  not  exceed
       250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException:
       The request message is too big." Note that the chunks will be buffered into memory.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 10Mi

   –onedrive-drive-id
       The ID of the drive to use.

       Properties:

       • Config: drive_id

       • Env Var: RCLONE_ONEDRIVE_DRIVE_ID

       • Type: string

       • Required: false

   –onedrive-drive-type
       The type of the drive (personal | business | documentLibrary).

       Properties:

       • Config: drive_type

       • Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE

       • Type: string

       • Required: false

   –onedrive-root-folder-id
       ID of the root folder.

       This  isn’t  normally  needed, but in special circumstances you might know the folder ID that you wish to
       access but not be able to get there through a path traversal.

       Properties:

       • Config: root_folder_id

       • Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID

       • Type: string

       • Required: false

   –onedrive-access-scopes
       Set scopes to be requested by rclone.

       Choose or manually enter a custom space-separated list of all the scopes that rclone should request.

       Properties:

       • Config: access_scopes

       • Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES

       • Type: SpaceSepList

       • Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access

       • Examples:

         • “Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access”

           • Read and write access to all resources

         • “Files.Read Files.Read.All Sites.Read.All offline_access”

           • Read only access to all resources

         • “Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access”

           • Read and write access to all resources, without the ability to browse SharePoint sites.

           • Same as if disable_site_permission was set to true

   –onedrive-disable-site-permission
       Disable the request for Sites.Read.All permission.

       If set to true, you will no longer be able to search for a SharePoint site when configuring drive ID,
       because rclone will not request Sites.Read.All permission.  Set it to true if your organization didn’t
       assign Sites.Read.All permission to the application, and your organization disallows users from
       consenting to app permission requests on their own.

       Properties:

       • Config: disable_site_permission

       • Env Var: RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION

       • Type: bool

       • Default: false

   –onedrive-expose-onenote-files
       Set to make OneNote files show up in directory listings.

       By default, rclone will hide OneNote files in directory listings because operations like “Open” and
       “Update” won’t work on them.  But this behaviour may also prevent you from deleting them.  If you want
       to delete OneNote files or otherwise want them to show up in directory listings, set this option.

       Properties:

       • Config: expose_onenote_files

       • Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES

       • Type: bool

       • Default: false

   –onedrive-server-side-across-configs
       Allow server-side operations (e.g. copy) to work across different onedrive configs.

       This  will  only  work  if you are copying between two OneDrive Personal drives AND the files to copy are
       already shared between them.  In other cases, rclone will  fall  back  to  normal  copy  (which  will  be
       slightly slower).

       Properties:

       • Config: server_side_across_configs

       • Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS

       • Type: bool

       • Default: false

   –onedrive-list-chunk
       Size of listing chunk.

       Properties:

       • Config: list_chunk

       • Env Var: RCLONE_ONEDRIVE_LIST_CHUNK

       • Type: int

       • Default: 1000

   –onedrive-no-versions
       Remove all versions on modifying operations.

       OneDrive for Business creates versions when rclone uploads a new file overwriting an existing one, and
       when it sets the modification time.

       These versions take up space out of the quota.

       This  flag  checks  for  versions after file upload and setting modification time and removes all but the
       last version.

       NB Onedrive personal can’t currently delete versions so don’t use this flag there.

       Properties:

       • Config: no_versions

       • Env Var: RCLONE_ONEDRIVE_NO_VERSIONS

       • Type: bool

       • Default: false

   –onedrive-link-scope
       Set the scope of the links created by the link command.

       Properties:

       • Config: link_scope

       • Env Var: RCLONE_ONEDRIVE_LINK_SCOPE

       • Type: string

       • Default: “anonymous”

       • Examples:

         • “anonymous”

           • Anyone with the link has access, without needing to sign in.

           • This may include people outside of your organization.

           • Anonymous link support may be disabled by an administrator.

         • “organization”

           • Anyone signed into your organization (tenant) can use the link to get access.

           • Only available in OneDrive for Business and SharePoint.

   –onedrive-link-type
       Set the type of the links created by the link command.

       Properties:

       • Config: link_type

       • Env Var: RCLONE_ONEDRIVE_LINK_TYPE

       • Type: string

       • Default: “view”

       • Examples:

         • “view”

           • Creates a read-only link to the item.

         • “edit”

           • Creates a read-write link to the item.

         • “embed”

           • Creates an embeddable link to the item.

   –onedrive-link-password
       Set the password for links created by the link command.

       At the time of writing this only works with OneDrive personal paid accounts.

       Properties:

       • Config: link_password

       • Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD

       • Type: string

       • Required: false

   –onedrive-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_ONEDRIVE_ENCODING

       • Type: MultiEncoder

       • Default:
         Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot

   Limitations
       If you don’t use rclone for 90 days the refresh token will expire.  This  will  result  in  authorization
       problems.   This is easy to fix by running the rclone config reconnect remote: command to get a new token
       and refresh token.

   Naming
       Note that OneDrive is case insensitive so you can’t have a file called “Hello.doc” and one called
       “hello.doc”.

       There are quite a few characters that can’t be in OneDrive file names.  These can’t occur on Windows
       platforms, but on non-Windows platforms they are common.  Rclone will map these names to and from an
       identical looking unicode equivalent.  For example, if a file has a ? in it, it will be mapped to ？
       instead.

   File sizes
       The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business
       https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize
       (Updated 13 Jan 2021).

   Path length
       The  entire  path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive
       for Business and SharePoint Online.  If you are encrypting file and folder names  with  rclone,  you  may
       want  to  pay  attention  to  this  limitation  because the encrypted names are typically longer than the
       original ones.

   Number of files
       OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000  rclone  will  get  errors
       listing the directory like couldn’t list files: UnknownError:.  See #2707 for more info.

       An official document about the limitations for different types of OneDrive can be found
       https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa
       here.

   Versions
       Every change in a file on OneDrive causes the service to create a new version of the file.  This counts
       against a user’s quota.  For example, changing the modification time of a file creates a second version,
       so the file apparently uses twice the space.

       For example, the copy command is affected by this as rclone copies the file and then afterwards sets the
       modification time to match the source file, which creates another version.

       You can use the rclone cleanup command (see below) to remove all old versions.

       Or  you  can set the no_versions parameter to true and rclone will remove versions after operations which
       create new versions.  This takes extra transactions so only enable it if you need it.
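
       For example, to remove versions created by a sync as it runs (paths are placeholders):

               rclone sync --onedrive-no-versions /home/source remote:backup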

       Note At the time of writing Onedrive Personal creates versions (but  not  for  setting  the  modification
       time) but the API for removing them returns “API not found” so cleanup and no_versions should not be used
       on Onedrive Personal.

   Disabling versioning
       Starting October 2018, users will no longer be able to disable versioning by default.  This is because
       Microsoft has brought an
       https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390
       update to the mechanism.  To change this new default setting, a PowerShell command is required to be run
       by a SharePoint admin.  If you are an admin, you can run these commands in PowerShell to change that
       setting:

       1. Install-Module  -Name  Microsoft.Online.SharePoint.PowerShell  (in  case  you  haven’t  installed this
          already)

       2. Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking

       3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM  (replacing
          YOURSITE, YOU, YOURSITE.COM with the actual values; this will prompt for your credentials)

       4. Set-SPOTenant -EnableMinimumVersionRequirement $False

       5. Disconnect-SPOService (to disconnect from the server)
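
       Collected together, the commands above look like this (replace YOURSITE, YOU and YOURSITE.COM with your
       actual values; the Connect-SPOService step will prompt for credentials):

               Install-Module -Name Microsoft.Online.SharePoint.PowerShell
               Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
               Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM
               Set-SPOTenant -EnableMinimumVersionRequirement $False
               Disconnect-SPOService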

       Below are the steps for normal users to disable versioning.  If you don’t see the “No Versioning” option,
       make sure the above requirements are met.

       User Weropol has found a method to disable versioning on OneDrive:

       1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.

       2. Click Site settings.

       3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists.

       4. Click Customize “Documents”.

       5. Click General Settings > Versioning Settings.

       6. Under  Document Version History select the option No versioning.  Note: This will disable the creation
          of new file versions, but will not remove any previous versions.  Your documents are safe.

       7. Apply the changes by clicking OK.

       8. Use rclone to upload or modify files.  (I also use the –no-update-modtime flag)

       9. Restore the versioning settings after using rclone.  (Optional)

   Cleanup
       OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied
       and delete all versions but the current version.  Because this involves traversing all the files, then
       querying each file for versions, it can be quite slow.  Rclone does --checkers tests in parallel.  The
       command also supports -i which is a great way to see what it would do.

               rclone cleanup -i remote:path/subdir # interactively remove all old versions for path/subdir
               rclone cleanup remote:path/subdir    # unconditionally remove all old versions for path/subdir

       NB Onedrive personal can’t currently delete versions

   Troubleshooting
   Excessive throttling or blocked on SharePoint
       If you experience excessive throttling or are being blocked on SharePoint then it may help to set the
       user agent explicitly with a flag like this: --user-agent "ISV|rclone.org|rclone/v1.55.1"

       The specific details can be found in the Microsoft document:
       https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling
       Avoid getting throttled or blocked in SharePoint Online

   Unexpected file size/hash differences on Sharepoint
       It  is a https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631 known issue that
       Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office  files
       (.docx,  .xlsx,  etc.),  causing file size and hash checks to fail.  There are also other situations that
       will cause OneDrive to report inconsistent file sizes.   To  use  rclone  with  such  affected  files  on
       Sharepoint, you may disable these checks with the following command line arguments:

              --ignore-checksum --ignore-size

       Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for
       certain  files, by attempting the steps below.  Open the web interface for OneDrive and find the affected
       files (which will be in the error messages/log for rclone).  Simply click on each of these files, causing
       OneDrive to open them on the web.  This will cause each file to be converted in place to a format that is
       functionally equivalent but which will no longer trigger the  size  discrepancy.   Once  all  problematic
       files are converted you will no longer need the ignore options above.

   Replacing/deleting existing files on Sharepoint gets “item not found”
       It  is  a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return “item not found”
       errors when users try to replace or delete uploaded files; this  seems  to  mainly  affect  Office  files
       (.docx,  .xlsx, etc.)  and web files (.html, .aspx, etc.).  As a workaround, you may use the --backup-dir
       <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a  given  backup
       directory  (instead  of  directly  replacing/deleting them).  For example, to instruct rclone to move the
       files into the directory rclone-backup-dir on backend mysharepoint, you may use:

              --backup-dir mysharepoint:rclone-backup-dir

   access_denied (AADSTS65005)
              Error: access_denied
              Code: AADSTS65005
              Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.

       This means that rclone can’t use the OneDrive for Business API with your  account.   You  can’t  do  much
       about it, maybe write an email to your admins.

       However, there are other ways to interact with your OneDrive account.  Have a look at the WebDAV backend:
       https://rclone.org/webdav/#sharepoint

   invalid_grant (AADSTS50076)
              Error: invalid_grant
              Code: AADSTS50076
              Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.

       If you see the error above after enabling multi-factor authentication for your account, you can fix it by
       refreshing  your  OAuth  refresh  token.  To do that, run rclone config, and choose to edit your OneDrive
       backend.  Then, you don’t need to actually make any changes until you reach this question: Already have a
       token - refresh?.  For this question, answer y and go through the process to  refresh  your  token,  just
       like the first time the backend is configured.  After this, rclone should work again for this backend.

   Invalid request when making public links
       On  Sharepoint  and OneDrive for Business, rclone link may return an “Invalid request” error.  A possible
       cause is that the organisation admin didn’t allow public links to be made for the organisation/sharepoint
       library.  To fix the permissions as an admin, take a look at the docs:
       https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off 1,
       https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3 2.

OpenDrive

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              n) New remote
              d) Delete remote
              q) Quit config
              e/n/d/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / OpenDrive
                 \ "opendrive"
              [snip]
              Storage> opendrive
              Username
              username>
              Password
              y) Yes type in my own password
              g) Generate random password
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              --------------------
              [remote]
              username =
              password = *** ENCRYPTED ***
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       List directories in top level of your OpenDrive

              rclone lsd remote:

       List all the files in your OpenDrive

              rclone ls remote:

       To copy a local directory to an OpenDrive directory called backup

              rclone copy /home/source remote:backup

   Modified time and MD5SUMs
       OpenDrive allows modification times to be set on objects accurate to 1 second.  These  will  be  used  to
       detect whether objects need syncing or not.

   Restricted filename characters
       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
       /           0x2F        ／
       "           0x22        ＂
       *           0x2A        ＊
       :           0x3A        ：
       <           0x3C        ＜
       >           0x3E        ＞
       ?           0x3F        ？
       \           0x5C        ＼
       |           0x7C        ｜

       File  names can also not begin or end with the following characters.  These only get replaced if they are
       the first or last character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠
       HT          0x09         ␉
       LF          0x0A         ␊
       VT          0x0B         ␋
       CR          0x0D         ␍

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to opendrive (OpenDrive).

   –opendrive-username
       Username.

       Properties:

       • Config: username

       • Env Var: RCLONE_OPENDRIVE_USERNAME

       • Type: string

       • Required: true

   –opendrive-password
       Password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: password

       • Env Var: RCLONE_OPENDRIVE_PASSWORD

       • Type: string

       • Required: true

   Advanced options
       Here are the Advanced options specific to opendrive (OpenDrive).

   –opendrive-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_OPENDRIVE_ENCODING

       • Type: MultiEncoder

       • Default:
         Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot

   –opendrive-chunk-size
       Files will be uploaded in chunks this size.

       Note that these chunks are buffered in memory so increasing them will increase memory use.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 10Mi

   Limitations
       Note that OpenDrive is case insensitive so you can’t have a file called “Hello.doc” and one called
       “hello.doc”.

       There are quite a few characters that can’t be in OpenDrive file names.  These can’t occur on Windows
       platforms, but on non-Windows platforms they are common.  Rclone will map these names to and from an
       identical looking unicode equivalent.  For example, if a file has a ? in it, it will be mapped to ？
       instead.

       rclone  about  is  not  supported  by  the  OpenDrive  backend.   Backends without this capability cannot
       determine free space for an rclone mount or use policy mfs (most free space) as a  member  of  an  rclone
       union remote.

       See List of backends that do not support rclone about and rclone about

Oracle Object Storage

       https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm Oracle Object
       Storage Overview

       Oracle Object Storage FAQ

       Paths are specified as remote:bucket (or remote: for the lsd command.)  You  may  put  subdirectories  in
       too, e.g. remote:bucket/path/to/dir.

   Configuration
       Here is an example of making an oracle object storage configuration for a remote called remote.  rclone
       config walks you through it.  First run:

               rclone config

       This will guide you through an interactive setup process:

              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> n

              Enter name for new remote.
              name> remote

              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              [snip]
              XX / Oracle Cloud Infrastructure Object Storage
                 \ (oracleobjectstorage)
              Storage> oracleobjectstorage

              Option provider.
              Choose your Auth Provider
              Choose a number from below, or type in your own string value.
              Press Enter for the default (env_auth).
               1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
                 \ (env_auth)
                 / use an OCI user and an API key for authentication.
               2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
                 | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
                 \ (user_principal_auth)
                 / use instance principals to authorize an instance to make API calls.
               3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
                 | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
                 \ (instance_principal_auth)
               4 / use resource principals to make API calls
                 \ (resource_principal_auth)
               5 / no credentials needed, this is typically for reading public buckets
                 \ (no_auth)
              provider> 2

              Option namespace.
              Object storage namespace
              Enter a value.
              namespace> idbamagbg734

              Option compartment.
              Object storage compartment OCID
              Enter a value.
              compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba

              Option region.
              Object storage Region
              Enter a value.
              region> us-ashburn-1

              Option endpoint.
              Endpoint for Object storage API.
              Leave blank to use the default endpoint for the region.
              Enter a value. Press Enter to leave empty.
              endpoint>

              Option config_file.
              Path to OCI config file
              Choose a number from below, or type in your own string value.
              Press Enter for the default (~/.oci/config).
               1 / oci configuration file location
                 \ (~/.oci/config)
              config_file> /etc/oci/dev.conf

              Option config_profile.
              Profile name inside OCI config file
              Choose a number from below, or type in your own string value.
              Press Enter for the default (Default).
               1 / Use the default profile
                 \ (Default)
              config_profile> Test

              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n

              Configuration complete.
              Options:
              - type: oracleobjectstorage
              - namespace: idbamagbg734
              - compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
              - region: us-ashburn-1
              - provider: user_principal_auth
              - config_file: /etc/oci/dev.conf
              - config_profile: Test
              Keep this "remote" remote?
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       See all buckets

              rclone lsd remote:

       Create a new bucket

              rclone mkdir remote:bucket

       List the contents of a bucket

              rclone ls remote:bucket
              rclone ls remote:bucket --max-depth 1

   Modified time
       The modified time is stored as metadata on the object as opc-meta-mtime as floating point seconds since
       the epoch, accurate to 1 ns.

       If the modification time needs to be updated rclone will attempt to perform a server side copy to update
       the modification time if the object can be copied in a single part.  If the object is larger than 5 GiB,
       the object will be uploaded rather than copied.

       Note that reading this from the object takes an additional HEAD request as the metadata isn’t returned in
       object listings.

   Multipart uploads
       rclone supports multipart uploads with OOS which means that it can upload files bigger than 5 GiB.

       Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

       rclone  switches  from  single  part  uploads  to  multipart  uploads   at   the   point   specified   by
       --oos-upload-cutoff.   This  can  be  a  maximum  of 5 GiB and a minimum of 0 (ie always upload multipart
       files).

       The chunk sizes used in the multipart upload are specified by --oos-chunk-size and the number  of  chunks
       uploaded concurrently is specified by --oos-upload-concurrency.

       Multipart uploads will use --transfers * --oos-upload-concurrency * --oos-chunk-size extra memory.
       Single part uploads do not use extra memory.

       Single part transfers can be faster or slower than multipart transfers depending on your latency to OOS
       - the higher the latency, the more likely single part transfers will be faster.

       Increasing --oos-upload-concurrency will increase throughput (8 would be a sensible value) and increasing
       --oos-chunk-size also increases throughput (16M would be sensible).  Increasing either of these will  use
       more  memory.   The default values are high enough to gain most of the possible performance without using
       too much memory.
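
       For example, a copy tuned for a high-bandwidth link (a sketch using the flags and values suggested
       above; the source path is a placeholder) might be:

              rclone copy --oos-upload-concurrency 8 --oos-chunk-size 16M /path/to/files remote:bucket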

   Standard options
       Here are the Standard  options  specific  to  oracleobjectstorage  (Oracle  Cloud  Infrastructure  Object
       Storage).

   –oos-provider
       Choose your Auth Provider

       Properties:

       • Config: provider

       • Env Var: RCLONE_OOS_PROVIDER

       • Type: string

       • Default: “env_auth”

       • Examples:

         • “env_auth”

           • automatically pick up the credentials from the runtime (env); the first one to provide auth wins

         • “user_principal_auth”

           • use an OCI user and an API key for authentication.

           • you’ll need to put your tenancy OCID, user OCID, region, the path to your API key, and the API
             key fingerprint in a config file.

           • https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm

         • “instance_principal_auth”

           • use instance principals to authorize an instance to make API calls.

           • each instance has its own identity, and authenticates using the certificates  that  are  read  from
             instance metadata.

           • https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm

         • “resource_principal_auth”

           • use resource principals to make API calls

         • “no_auth”

           • no credentials needed, this is typically for reading public buckets
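
       As an illustration, the provider can also be set from the environment and combined with an on-the-fly
       remote (a sketch; the namespace and compartment values below are placeholders):

              export RCLONE_OOS_PROVIDER=env_auth
              export RCLONE_OOS_NAMESPACE=examplenamespace                          # placeholder
              export RCLONE_OOS_COMPARTMENT=ocid1.compartment.oc1..exampleuniqueID  # placeholder
              export RCLONE_OOS_REGION=us-ashburn-1
              rclone lsd :oracleobjectstorage: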

   –oos-namespace
       Object storage namespace

       Properties:

       • Config: namespace

       • Env Var: RCLONE_OOS_NAMESPACE

       • Type: string

       • Required: true

   –oos-compartment
       Object storage compartment OCID

       Properties:

       • Config: compartment

       • Env Var: RCLONE_OOS_COMPARTMENT

       • Provider: !no_auth

       • Type: string

       • Required: true

   –oos-region
       Object storage Region

       Properties:

       • Config: region

       • Env Var: RCLONE_OOS_REGION

       • Type: string

       • Required: true

   –oos-endpoint
       Endpoint for Object storage API.

       Leave blank to use the default endpoint for the region.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_OOS_ENDPOINT

       • Type: string

       • Required: false

   –oos-config-file
       Path to OCI config file

       Properties:

       • Config: config_file

       • Env Var: RCLONE_OOS_CONFIG_FILE

       • Provider: user_principal_auth

       • Type: string

       • Default: “~/.oci/config”

       • Examples:

         • “~/.oci/config”

           • oci configuration file location
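
       For reference, the config file for user_principal_auth follows the standard OCI SDK format; a minimal
       sketch with placeholder values might look like this:

              # placeholder values throughout
              [Default]
              user=ocid1.user.oc1..exampleuniqueID
              fingerprint=11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00
              key_file=~/.oci/api_key.pem
              tenancy=ocid1.tenancy.oc1..exampleuniqueID
              region=us-ashburn-1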

   –oos-config-profile
       Profile name inside the oci config file

       Properties:

       • Config: config_profile

       • Env Var: RCLONE_OOS_CONFIG_PROFILE

       • Provider: user_principal_auth

       • Type: string

       • Default: “Default”

       • Examples:

         • “Default”

           • Use the default profile

   Advanced options
       Here  are  the  Advanced  options  specific  to  oracleobjectstorage  (Oracle Cloud Infrastructure Object
       Storage).

   –oos-upload-cutoff
       Cutoff for switching to chunked upload.

       Any files larger than this will be uploaded in chunks of chunk_size.  The minimum is 0 and the maximum is
       5 GiB.

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_OOS_UPLOAD_CUTOFF

       • Type: SizeSuffix

       • Default: 200Mi

   –oos-chunk-size
       Chunk size to use for uploading.

       When uploading files larger than upload_cutoff or files with unknown size  (e.g. from  “rclone  rcat”  or
       uploaded  with “rclone mount” or google photos or google docs) they will be uploaded as multipart uploads
       using this chunk size.

       Note that “upload_concurrency” chunks of this size are buffered in memory per transfer.

       If you are transferring large files over high-speed links and you have  enough  memory,  then  increasing
       this will speed up the transfers.

       Rclone will automatically increase the chunk size when uploading a large file of known size to stay below
       the 10,000 chunks limit.

       Files of unknown size are uploaded with the configured chunk_size.  Since the default chunk size is 5 MiB
       and  there  can  be  at most 10,000 chunks, this means that by default the maximum size of a file you can
       stream upload is 48 GiB.  If you wish to stream upload larger  files  then  you  will  need  to  increase
       chunk_size.

       Increasing the chunk size decreases the accuracy of the progress statistics displayed with “-P” flag.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_OOS_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 5Mi

   –oos-upload-concurrency
       Concurrency for multipart uploads.

       This is the number of chunks of the same file that are uploaded concurrently.

       If  you  are  uploading small numbers of large files over high-speed links and these uploads do not fully
       utilize your bandwidth, then increasing this may help to speed up the transfers.

       Properties:

       • Config: upload_concurrency

       • Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY

       • Type: int

       • Default: 10

   –oos-copy-cutoff
       Cutoff for switching to multipart copy.

       Any files larger than this that need to be server-side copied will be copied in chunks of this size.

       The minimum is 0 and the maximum is 5 GiB.

       Properties:

       • Config: copy_cutoff

       • Env Var: RCLONE_OOS_COPY_CUTOFF

       • Type: SizeSuffix

       • Default: 4.656Gi

   –oos-copy-timeout
       Timeout for copy.

       Copy is an asynchronous operation, specify timeout to wait for copy to succeed

       Properties:

       • Config: copy_timeout

       • Env Var: RCLONE_OOS_COPY_TIMEOUT

       • Type: Duration

       • Default: 1m0s

   –oos-disable-checksum
       Don’t store MD5 checksum with object metadata.

       Normally rclone will calculate the MD5 checksum of the input before uploading it so  it  can  add  it  to
       metadata  on  the  object.  This is great for data integrity checking but can cause long delays for large
       files to start uploading.

       Properties:

       • Config: disable_checksum

       • Env Var: RCLONE_OOS_DISABLE_CHECKSUM

       • Type: bool

       • Default: false

   –oos-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_OOS_ENCODING

       • Type: MultiEncoder

       • Default: Slash,InvalidUtf8,Dot

   –oos-leave-parts-on-error
       If true avoid calling abort upload on a failure, leaving all successfully uploaded parts in object
       storage for manual recovery.

       It should be set to true for resuming uploads across different sessions.

       WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and
       will add additional costs if not cleaned up.

       Properties:

       • Config: leave_parts_on_error

       • Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR

       • Type: bool

       • Default: false

   –oos-no-check-bucket
       If set, don’t attempt to check the bucket exists or create it.

       This can be useful when trying to minimise the number of transactions rclone does if you know the  bucket
       exists already.

       It can also be needed if the user you are using does not have bucket creation permissions.

       Properties:

       • Config: no_check_bucket

       • Env Var: RCLONE_OOS_NO_CHECK_BUCKET

       • Type: bool

       • Default: false

   Backend commands
       Here are the commands specific to the oracleobjectstorage backend.

       Run them with

              rclone backend COMMAND remote:

       The help below will explain what arguments each command takes.

       See the backend command for more info on how to pass options and arguments.

       These can be run on a running backend using the rc command backend/command.
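
       For example, the cleanup command described below could be invoked on a running rclone instance like this
       (a sketch, assuming the standard command, fs and -o parameters of backend/command):

              rclone rc backend/command command=cleanup fs=oos:bucket -o max-age=24h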

   rename
       change the name of an object

              rclone backend rename remote: [options] [<arguments>+]

       This command can be used to rename an object.

       Usage Examples:

              rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name

   list-multipart-uploads
       List the unfinished multipart uploads

              rclone backend list-multipart-uploads remote: [options] [<arguments>+]

       This command lists the unfinished multipart uploads in JSON format.

              rclone backend list-multipart-uploads oos:bucket/path/to/object

       It returns a dictionary of buckets with values as lists of unfinished multipart uploads.

       You can call it with no bucket, in which case it lists all buckets, with a bucket, or with a bucket and
       path.

              {
                "test-bucket": [
                          {
                                  "namespace": "test-namespace",
                                  "bucket": "test-bucket",
                                  "object": "600m.bin",
                                  "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
                                  "timeCreated": "2022-07-29T06:21:16.595Z",
                                  "storageTier": "Standard"
                          }
                  ]
              }

   cleanup
       Remove unfinished multipart uploads.

              rclone backend cleanup remote: [options] [<arguments>+]

       This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.

       Note that you can use -i/–dry-run with this command to see what it would do.

              rclone backend cleanup oos:bucket/path/to/object
              rclone backend cleanup -o max-age=7w oos:bucket/path/to/object

       Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

       Options:

       • “max-age”: Max age of upload to delete

QingStor

       Paths are specified as remote:bucket (or remote: for the lsd command.)  You  may  put  subdirectories  in
       too, e.g. remote:bucket/path/to/dir.

   Configuration
       Here is an example of making a QingStor configuration.  First run

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              n/r/c/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / QingStor Object Storage
                 \ "qingstor"
              [snip]
              Storage> qingstor
              Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
              Choose a number from below, or type in your own value
               1 / Enter QingStor credentials in the next step
                 \ "false"
               2 / Get QingStor credentials from the environment (env vars or IAM)
                 \ "true"
              env_auth> 1
              QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
              access_key_id> access_key
              QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
              secret_access_key> secret_key
              Enter an endpoint URL to connect to the QingStor API.
              Leaving it blank will use the default value "https://qingstor.com:443"
              endpoint>
              Zone to connect to. Default is "pek3a".
              Choose a number from below, or type in your own value
                 / The Beijing (China) Three Zone
               1 | Needs location constraint pek3a.
                 \ "pek3a"
                 / The Shanghai (China) First Zone
               2 | Needs location constraint sh1a.
                 \ "sh1a"
              zone> 1
              Number of connection retries.
              Leaving it blank will use the default value "3".
              connection_retries>
              Remote config
              --------------------
              [remote]
              env_auth = false
              access_key_id = access_key
              secret_access_key = secret_key
              endpoint =
              zone = pek3a
              connection_retries =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       This remote is called remote and can now be used like this

       See all buckets

              rclone lsd remote:

       Make a new bucket

              rclone mkdir remote:bucket

       List the contents of a bucket

              rclone ls remote:bucket

       Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

              rclone sync -i /home/local/directory remote:bucket

   –fast-list
       This  remote supports --fast-list which allows you to use fewer transactions in exchange for more memory.
       See the rclone docs for more details.
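
       For example, to recursively list a bucket using fewer transactions:

              rclone ls --fast-list remote:bucket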

   Multipart uploads
       rclone supports multipart uploads with QingStor which means that it can upload files bigger than  5  GiB.
       Note that files uploaded with multipart upload don’t have an MD5SUM.

       Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup
       remote:bucket for just one bucket, or rclone cleanup remote: for all buckets.  QingStor never removes
       incomplete multipart uploads, so it may be necessary to run this from time to time.

   Buckets and Zone
       With  QingStor you can list buckets (rclone lsd) using any zone, but you can only access the content of a
       bucket from the zone it was created in.  If you attempt to access a bucket from the wrong zone, you  will
       get an error, incorrect zone, the bucket is not in 'XXX' zone.

   Authentication
       There are two ways to supply rclone with a set of QingStor credentials.  In order of precedence:

       • Directly in the rclone configuration file (as configured by rclone config)

         • set access_key_id and secret_access_key

       • Runtime configuration:

         • set env_auth to true in the config file

         • Exporting the following environment variables before running rclone

           • Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY

           • Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
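
       For example, a minimal sketch with placeholder credentials (assuming env_auth is set to true in the
       config file):

              export QS_ACCESS_KEY_ID=exampleaccesskey        # placeholder
              export QS_SECRET_ACCESS_KEY=examplesecretkey    # placeholder
              rclone lsd remote: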

   Restricted filename characters
       The  control  characters  0x00-0x1F and / are replaced as in the default restricted characters set.  Note
       that 0x7F is not replaced.

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to qingstor (QingCloud Object Storage).

   –qingstor-env-auth
       Get QingStor credentials from runtime.

       Only applies if access_key_id and secret_access_key are blank.

       Properties:

       • Config: env_auth

       • Env Var: RCLONE_QINGSTOR_ENV_AUTH

       • Type: bool

       • Default: false

       • Examples:

         • “false”

           • Enter QingStor credentials in the next step.

         • “true”

           • Get QingStor credentials from the environment (env vars or IAM).

   –qingstor-access-key-id
       QingStor Access Key ID.

       Leave blank for anonymous access or runtime credentials.

       Properties:

       • Config: access_key_id

       • Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID

       • Type: string

       • Required: false

   –qingstor-secret-access-key
       QingStor Secret Access Key (password).

       Leave blank for anonymous access or runtime credentials.

       Properties:

       • Config: secret_access_key

       • Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY

       • Type: string

       • Required: false

   –qingstor-endpoint
       Enter an endpoint URL to connect to the QingStor API.

       Leaving it blank will use the default value “https://qingstor.com:443”.

       Properties:

       • Config: endpoint

       • Env Var: RCLONE_QINGSTOR_ENDPOINT

       • Type: string

       • Required: false

   –qingstor-zone
       Zone to connect to.

       Default is “pek3a”.

       Properties:

       • Config: zone

       • Env Var: RCLONE_QINGSTOR_ZONE

       • Type: string

       • Required: false

       • Examples:

         • “pek3a”

           • The Beijing (China) Three Zone.

           • Needs location constraint pek3a.

         • “sh1a”

           • The Shanghai (China) First Zone.

           • Needs location constraint sh1a.

         • “gd2a”

           • The Guangdong (China) Second Zone.

           • Needs location constraint gd2a.

   Advanced options
       Here are the Advanced options specific to qingstor (QingCloud Object Storage).

   –qingstor-connection-retries
       Number of connection retries.

       Properties:

       • Config: connection_retries

       • Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES

       • Type: int

       • Default: 3

   –qingstor-upload-cutoff
       Cutoff for switching to chunked upload.

       Any files larger than this will be uploaded in chunks of chunk_size.  The minimum is 0 and the maximum is
       5 GiB.

       Properties:

       • Config: upload_cutoff

       • Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF

       • Type: SizeSuffix

       • Default: 200Mi

   –qingstor-chunk-size
       Chunk size to use for uploading.

       When uploading files larger than upload_cutoff they will be uploaded  as  multipart  uploads  using  this
       chunk size.

       Note that “–qingstor-upload-concurrency” chunks of this size are buffered in memory per transfer.

       If  you  are  transferring  large files over high-speed links and you have enough memory, then increasing
       this will speed up the transfers.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_QINGSTOR_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 4Mi

   –qingstor-upload-concurrency
       Concurrency for multipart uploads.

       This is the number of chunks of the same file that are uploaded concurrently.

       NB if you set this to > 1  then  the  checksums  of  multipart  uploads  become  corrupted  (the  uploads
       themselves are not corrupted though).

       If  you  are  uploading small numbers of large files over high-speed links and these uploads do not fully
       utilize your bandwidth, then increasing this may help to speed up the transfers.

       Properties:

       • Config: upload_concurrency

       • Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY

       • Type: int

       • Default: 1

   –qingstor-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_QINGSTOR_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Ctl,InvalidUtf8

   Limitations
       rclone about is not supported by the qingstor backend.  Backends without this capability cannot determine
       free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

       See List of backends that do not support rclone about and rclone about

Sia

       Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology.  With rclone
       you can use it like any other remote filesystem or mount Sia folders locally.  The technology behind it
       involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and
       Hosting, and so on.  If you are new to it, it is best to first familiarize yourself with their excellent
       support documentation.

   Introduction
       Before you can use rclone with Sia, you will need to have a running copy of Sia-UI or siad (the Sia
       daemon) locally on your computer or on your local network (e.g. a NAS).  Please follow the Get started
       guide and install one.

       rclone interacts with the Sia network by talking to the Sia daemon via its HTTP API, which is usually
       available on port 9980.  By default you will run the daemon locally on the same computer, so it’s safe
       to leave the API password blank (the API URL will be http://127.0.0.1:9980, making external access
       impossible).

       However, if you want to access a Sia daemon running on another node, for example due to memory
       constraints or because you want to share a single daemon between several rclone and Sia-UI instances,
       you’ll need to make a few more provisions (see the sketch after this list):

       • Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not
         support this mode natively.

       • Run it on an externally accessible port, for example by providing --api-addr :9980 and
         --disable-api-security arguments on the daemon command line.

       • Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text
         file named apipassword in the daemon directory.

       • Set the rclone backend option api_password, taking it from the above locations.
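
       As a minimal sketch of the above (the password value is a placeholder), the daemon might be launched
       like this:

              # examplepassword is a placeholder
              SIA_API_PASSWORD=examplepassword siad --api-addr :9980 --disable-api-security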

       Notes:

       1. If your wallet is locked, rclone cannot unlock it automatically.  You should either unlock it in
          advance by using Sia-UI or via the command line siac wallet unlock.  Alternatively you can make siad
          unlock your wallet automatically upon startup by running it with the environment variable
          SIA_WALLET_PASSWORD.

       2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory,
          it will generate a random password and store it in the text file named apipassword under the
          YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows.
          Remember this when you configure the password in rclone.

       3. The only way to use siad without an API password is to run it on localhost with the command line
          argument --authorize-api=false, but this is insecure and strongly discouraged.

   Configuration
       Here is an example of how to make a sia remote called mySia.  First, run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> mySia
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              ...
              29 / Sia Decentralized Cloud
                 \ "sia"
              ...
              Storage> sia
              Sia daemon API URL, like http://sia.daemon.host:9980.
              Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
              Keep default if Sia daemon runs on localhost.
              Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
              api_url> http://127.0.0.1:9980
              Sia Daemon API Password.
              Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank (default)
              y/g/n> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n
              --------------------
              [mySia]
              type = sia
              api_url = http://127.0.0.1:9980
              api_password = *** ENCRYPTED ***
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Once configured, you can then use rclone like this:

       • List directories in top level of your Sia storage

         rclone lsd mySia:

       • List all the files in your Sia storage

         rclone ls mySia:

       • Upload a local directory to the Sia directory called backup

         rclone copy /home/source mySia:backup

   Standard options
       Here are the Standard options specific to sia (Sia Decentralized Cloud).

   –sia-api-url
       Sia daemon API URL, like http://sia.daemon.host:9980.

       Note that siad must run with –disable-api-security to open API port for other  hosts  (not  recommended).
       Keep default if Sia daemon runs on localhost.

       Properties:

       • Config: api_url

       • Env Var: RCLONE_SIA_API_URL

       • Type: string

       • Default: “http://127.0.0.1:9980”

   –sia-api-password
       Sia Daemon API Password.

       Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: api_password

       • Env Var: RCLONE_SIA_API_PASSWORD

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to sia (Sia Decentralized Cloud).

   –sia-user-agent
       Siad User Agent

       Sia daemon requires the `Sia-Agent' user agent by default for security

       Properties:

       • Config: user_agent

       • Env Var: RCLONE_SIA_USER_AGENT

       • Type: string

       • Default: “Sia-Agent”

   –sia-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_SIA_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot

   Limitations
       • Modification times not supported

       • Checksums not supported

       • rclone about not supported

       • rclone can work only with Siad or Sia-UI at the moment; the SkyNet daemon is not supported yet.

       • Sia does not allow control characters or symbols like question and pound signs in file names.  rclone
         will transparently encode them for you, but you should be aware of this.

Swift

       Swift refers to OpenStack Object Storage.  Commercial implementations include:

       • Rackspace Cloud Files

       • Memset Memstore

       • OVH Object Storage

       • Oracle Cloud Storage (https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html)

       • IBM Bluemix Cloud ObjectStorage Swift
         (https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)

       Paths  are specified as remote:container (or remote: for the lsd command.)  You may put subdirectories in
       too, e.g. remote:container/path/to/dir.

   Configuration
       Here is an example of making a swift configuration.  First run

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
                 \ "swift"
              [snip]
              Storage> swift
              Get swift credentials from environment variables in standard OpenStack form.
              Choose a number from below, or type in your own value
               1 / Enter swift credentials in the next step
                 \ "false"
               2 / Get swift credentials from environment vars. Leave other fields blank if using this.
                 \ "true"
              env_auth> true
              User name to log in (OS_USERNAME).
              user>
              API key or password (OS_PASSWORD).
              key>
              Authentication URL for server (OS_AUTH_URL).
              Choose a number from below, or type in your own value
               1 / Rackspace US
                 \ "https://auth.api.rackspacecloud.com/v1.0"
               2 / Rackspace UK
                 \ "https://lon.auth.api.rackspacecloud.com/v1.0"
               3 / Rackspace v2
                 \ "https://identity.api.rackspacecloud.com/v2.0"
               4 / Memset Memstore UK
                 \ "https://auth.storage.memset.com/v1.0"
               5 / Memset Memstore UK v2
                 \ "https://auth.storage.memset.com/v2.0"
               6 / OVH
                 \ "https://auth.cloud.ovh.net/v3"
              auth>
              User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
              user_id>
              User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
              domain>
              Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
              tenant>
              Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
              tenant_id>
              Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
              tenant_domain>
              Region name - optional (OS_REGION_NAME)
              region>
              Storage URL - optional (OS_STORAGE_URL)
              storage_url>
              Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
              auth_token>
              AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
              auth_version>
              Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
              Choose a number from below, or type in your own value
               1 / Public (default, choose this if not sure)
                 \ "public"
               2 / Internal (use internal service net)
                 \ "internal"
               3 / Admin
                 \ "admin"
              endpoint_type>
              Remote config
              --------------------
              [test]
              env_auth = true
              user =
              key =
              auth =
              user_id =
              domain =
              tenant =
              tenant_id =
              tenant_domain =
              region =
              storage_url =
              auth_token =
              auth_version =
              endpoint_type =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       This remote is called remote and can now be used like this

       See all containers

              rclone lsd remote:

       Make a new container

              rclone mkdir remote:container

       List the contents of a container

              rclone ls remote:container

       Sync /home/local/directory to the remote container, deleting any excess files in the container.

              rclone sync -i /home/local/directory remote:container

   Configuration from an OpenStack credentials file
       An OpenStack credentials file typically looks something like this (without the comments)

              export OS_AUTH_URL=https://a.provider.net/v2.0
              export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
              export OS_TENANT_NAME="1234567890123456"
              export OS_USERNAME="123abc567xy"
              echo "Please enter your OpenStack Password: "
              read -sr OS_PASSWORD_INPUT
              export OS_PASSWORD=$OS_PASSWORD_INPUT
              export OS_REGION_NAME="SBG1"
              if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

       The config file needs to look something  like  this  where  $OS_USERNAME  represents  the  value  of  the
       OS_USERNAME variable - 123abc567xy in the example above.

              [remote]
              type = swift
              user = $OS_USERNAME
              key = $OS_PASSWORD
              auth = $OS_AUTH_URL
              tenant = $OS_TENANT_NAME

       Note that you may (or may not) need to set region too - try without first.

   Configuration from the environment
       If  you  prefer  you  can  configure  rclone  to  use swift using a standard set of OpenStack environment
       variables.

       When you run through the config, make sure you choose true for env_auth and leave everything else blank.

       rclone will then  set  any  empty  config  parameters  from  the  environment  using  standard  OpenStack
       environment variables.  There is a list of the variables in the docs for the swift library.

   Using an alternate authentication method
       If  your OpenStack installation uses a non-standard authentication method that might not be yet supported
       by rclone or the underlying swift library, you can authenticate externally (e.g. by manually calling the
       openstack commands to get a token).  Then you just need to pass the two configuration variables
       auth_token and storage_url.  If they are both provided, the other variables are ignored.  rclone will not
       try to authenticate but instead assume it is already authenticated and use these two variables to  access
       the OpenStack installation.
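
       For example, a minimal sketch using environment variables (the token and storage URL values are
       placeholders standing in for values obtained externally):

              export RCLONE_CONFIG_MYSWIFT_TYPE=swift
              export RCLONE_SWIFT_AUTH_TOKEN=exampletoken                              # placeholder
              export RCLONE_SWIFT_STORAGE_URL=https://storage.example.com/v1/AUTH_example  # placeholder
              rclone lsd myswift: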

   Using rclone without a config file
       You can use rclone with swift without a config file, if desired, like this:

              source openstack-credentials-file
              export RCLONE_CONFIG_MYREMOTE_TYPE=swift
              export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
              rclone lsd myremote:

   –fast-list
       This  remote supports --fast-list which allows you to use fewer transactions in exchange for more memory.
       See the rclone docs for more details.

   –update and –use-server-modtime
       As noted below, the modified time is stored as metadata on the object.  It is used by default for all
       operations  that require checking the time a file was last updated.  It allows rclone to treat the remote
       more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve  the
       metadata.

       For many operations, the time the object was last uploaded to the remote is sufficient to determine if it
       is  “dirty”.   By  using  --update  along with --use-server-modtime, you can avoid the extra API call and
       simply upload files whose local modtime is newer than the time it was last uploaded.
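
       For example, using the flags described above:

              rclone copy --update --use-server-modtime /home/local/directory remote:container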

   Modified time
       The modified time is stored as metadata on the object as X-Object-Meta-Mtime, a floating point number of
       seconds since the epoch, accurate to 1 ns.

       This  is  a  de  facto  standard (used in the official python-swiftclient amongst others) for storing the
       modification time for an object.

   Restricted filename characters
       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
       /           0x2F        /

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore,
       OVH)).

   –swift-env-auth
       Get swift credentials from environment variables in standard OpenStack form.

       Properties:

       • Config: env_auth

       • Env Var: RCLONE_SWIFT_ENV_AUTH

       • Type: bool

       • Default: false

       • Examples:

         • “false”

           • Enter swift credentials in the next step.

         • “true”

           • Get swift credentials from environment vars.

           • Leave other fields blank if using this.

   –swift-user
       User name to log in (OS_USERNAME).

       Properties:

       • Config: user

       • Env Var: RCLONE_SWIFT_USER

       • Type: string

       • Required: false

   –swift-key
       API key or password (OS_PASSWORD).

       Properties:

       • Config: key

       • Env Var: RCLONE_SWIFT_KEY

       • Type: string

       • Required: false

   –swift-auth
       Authentication URL for server (OS_AUTH_URL).

       Properties:

       • Config: auth

       • Env Var: RCLONE_SWIFT_AUTH

       • Type: string

       • Required: false

       • Examples:

         • “https://auth.api.rackspacecloud.com/v1.0”

           • Rackspace US

         • “https://lon.auth.api.rackspacecloud.com/v1.0”

           • Rackspace UK

         • “https://identity.api.rackspacecloud.com/v2.0”

           • Rackspace v2

         • “https://auth.storage.memset.com/v1.0”

           • Memset Memstore UK

         • “https://auth.storage.memset.com/v2.0”

           • Memset Memstore UK v2

         • “https://auth.cloud.ovh.net/v3”

           • OVH

   –swift-user-id
       User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).

       Properties:

       • Config: user_id

       • Env Var: RCLONE_SWIFT_USER_ID

       • Type: string

       • Required: false

   –swift-domain
       User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)

       Properties:

       • Config: domain

       • Env Var: RCLONE_SWIFT_DOMAIN

       • Type: string

       • Required: false

   –swift-tenant
       Tenant  name  -  optional  for  v1  auth,  this  or  tenant_id  required  otherwise  (OS_TENANT_NAME   or
       OS_PROJECT_NAME).

       Properties:

       • Config: tenant

       • Env Var: RCLONE_SWIFT_TENANT

       • Type: string

       • Required: false

   –swift-tenant-id
       Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).

       Properties:

       • Config: tenant_id

       • Env Var: RCLONE_SWIFT_TENANT_ID

       • Type: string

       • Required: false

   –swift-tenant-domain
       Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).

       Properties:

       • Config: tenant_domain

       • Env Var: RCLONE_SWIFT_TENANT_DOMAIN

       • Type: string

       • Required: false

   –swift-region
       Region name - optional (OS_REGION_NAME).

       Properties:

       • Config: region

       • Env Var: RCLONE_SWIFT_REGION

       • Type: string

       • Required: false

   –swift-storage-url
       Storage URL - optional (OS_STORAGE_URL).

       Properties:

       • Config: storage_url

       • Env Var: RCLONE_SWIFT_STORAGE_URL

       • Type: string

       • Required: false

   –swift-auth-token
       Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).

       Properties:

       • Config: auth_token

       • Env Var: RCLONE_SWIFT_AUTH_TOKEN

       • Type: string

       • Required: false

   –swift-application-credential-id
       Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).

       Properties:

       • Config: application_credential_id

       • Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID

       • Type: string

       • Required: false

   –swift-application-credential-name
       Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).

       Properties:

       • Config: application_credential_name

       • Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME

       • Type: string

       • Required: false

   –swift-application-credential-secret
       Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).

       Properties:

       • Config: application_credential_secret

       • Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET

       • Type: string

       • Required: false

   –swift-auth-version
       AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).

       Properties:

       • Config: auth_version

       • Env Var: RCLONE_SWIFT_AUTH_VERSION

       • Type: int

       • Default: 0

   –swift-endpoint-type
       Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).

       Properties:

       • Config: endpoint_type

       • Env Var: RCLONE_SWIFT_ENDPOINT_TYPE

       • Type: string

       • Default: “public”

       • Examples:

         • “public”

           • Public (default, choose this if not sure)

         • “internal”

           • Internal (use internal service net)

         • “admin”

           • Admin

   –swift-storage-policy
       The storage policy to use when creating a new container.

       This  applies  the  specified storage policy when creating a new container.  The policy cannot be changed
       afterwards.  The allowed configuration values and their meaning depend on your Swift storage provider.

       Properties:

       • Config: storage_policy

       • Env Var: RCLONE_SWIFT_STORAGE_POLICY

       • Type: string

       • Required: false

       • Examples:

         • “”

           • Default

         • “pcs”

           • OVH Public Cloud Storage

         • “pca”

           • OVH Public Cloud Archive

   Advanced options
       Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore,
       OVH)).

   –swift-leave-parts-on-error
       If true avoid calling abort upload on a failure.

       It should be set to true for resuming uploads across different sessions.

       Properties:

       • Config: leave_parts_on_error

       • Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR

       • Type: bool

       • Default: false

   –swift-chunk-size
       Above this size files will be chunked into a _segments container.

       The default for this is 5 GiB, which is its maximum value.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_SWIFT_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 5Gi

   –swift-no-chunk
       Don’t chunk files during streaming upload.

       When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to
       not upload chunked files.

       This will limit the maximum upload size to 5 GiB.  However non chunked files are easier to deal with  and
       have an MD5SUM.

       Rclone will still chunk files bigger than chunk_size when doing normal copy operations.

       Properties:

       • Config: no_chunk

       • Env Var: RCLONE_SWIFT_NO_CHUNK

       • Type: bool

       • Default: false

   –swift-no-large-objects
       Disable support for static and dynamic large objects

       Swift  cannot  transparently store files bigger than 5 GiB.  There are two schemes for doing that, static
       or dynamic large objects, and the API does not allow rclone to determine whether a file is  a  static  or
       dynamic  large  object  without  doing a HEAD on the object.  Since these need to be treated differently,
       this means rclone has to issue HEAD requests for objects for example when reading checksums.

       When no_large_objects is set, rclone will assume that there  are  no  static  or  dynamic  large  objects
       stored.   This  means  it can stop doing the extra HEAD calls which in turn increases performance greatly
       especially when doing a swift to swift transfer with --checksum set.

       Setting this option implies no_chunk and also that no files will be uploaded in chunks, so  files  bigger
       than 5 GiB will just fail on upload.

       If  you  set  this  option  and  there are static or dynamic large objects, then this will give incorrect
       hashes for them.  Downloads will succeed, but other operations such as Remove and Copy will fail.

       Properties:

       • Config: no_large_objects

       • Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS

       • Type: bool

       • Default: false

   –swift-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_SWIFT_ENCODING

       • Type: MultiEncoder

       • Default: Slash,InvalidUtf8

   Limitations
       The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static  Large  Objects)  so
       rclone won’t check or use the MD5SUM for these.

   Troubleshooting
   Rclone gives Failed to create file system for “remote:”: Bad Request
       Due  to  an  oddity  of  the  underlying swift library, it gives a “Bad Request” error rather than a more
       sensible error when the authentication fails for Swift.

       So this most likely means your username / password is  wrong.   You  can  investigate  further  with  the
       --dump-bodies flag.

       This may also be caused by specifying the region when you shouldn’t have (e.g. OVH).

   Rclone gives Failed to create file system: Response didn’t have storage url and auth token
       This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

   OVH Cloud Archive
       To  use  rclone  with  OVH  cloud  archive,  first  use rclone config to set up a swift backend with OVH,
       choosing pca as the storage_policy.

   Uploading Objects
       Uploading objects to OVH cloud archive is no different from uploading to object storage: simply run the
       command you like (move, copy or sync) to upload the objects.  Once uploaded the objects will show in a
       “Frozen” state within the OVH control panel.

   Retrieving Objects
       To retrieve objects use rclone copy as normal.  If the objects are in a frozen state then rclone will ask
       for them all to be unfrozen and it will wait at the end of the output with a message like the following:

       2019/03/23     13:06:33     NOTICE:     Received     retry     after     error     -    sleeping    until
       2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)

       Rclone will wait for the time specified then retry the copy.

pCloud

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       The initial setup for pCloud involves getting a token from pCloud which you need to do in  your  browser.
       rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Pcloud
                 \ "pcloud"
              [snip]
              Storage> pcloud
              Pcloud App Client Id - leave blank normally.
              client_id>
              Pcloud App Client Secret - leave blank normally.
              client_secret>
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              client_id =
              client_secret =
              token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Note  that  rclone  runs  a webserver on your local machine to collect the token as returned from pCloud.
       This only runs from the moment it opens your browser to the moment you get back  the  verification  code.
       This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a
       host firewall.

       Once configured you can then use rclone like this,

       List directories in top level of your pCloud

              rclone lsd remote:

       List all the files in your pCloud

              rclone ls remote:

       To copy a local directory to a pCloud directory called backup

              rclone copy /home/source remote:backup

   Modified time and hashes
       pCloud allows modification times to be set on objects accurate to 1 second.  These will be used to detect
       whether objects need syncing or not.  In order to set a Modification time pCloud requires the  object  be
       re-uploaded.

       pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you
       can use the --checksum flag.

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       \           0x5C        \

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Deleting files
       Deleted  files will be moved to the trash.  Your subscription level will determine how long items stay in
       the trash.  rclone cleanup can be used to empty the trash.

   Emptying the trash
       Due to an API limitation, the rclone cleanup command will only work if you set your username and password
       in the advanced options for this backend.  Since we generally want to avoid storing user passwords in the
       rclone config file, we advise you to only set this up if you need the rclone cleanup command to work.
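
       For example, once username and password are set in the advanced options:

              rclone cleanup remote: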

   Root folder ID
       You can set the root_folder_id for rclone.  This is the directory (identified  by  its  Folder  ID)  that
       rclone considers to be the root of your pCloud drive.

       Normally you will leave this blank and rclone will determine the correct root to use itself.

       However you can set this to restrict rclone to a specific folder hierarchy.

       In  order  to  do  this  you will have to find the Folder ID of the directory you wish rclone to display.
       This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.

       So   if   the   folder    you    want    rclone    to    use    has    a    URL    which    looks    like
       https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid  in  the  browser,  then you use
       5xxxxxxxx8 as the root_folder_id in the config.
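
       For example, reusing the placeholder folder ID above, you could set this on the command line with the
       flag form of the option:

              rclone lsd remote: --pcloud-root-folder-id 5xxxxxxxx8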

   Standard options
       Here are the Standard options specific to pcloud (Pcloud).

   –pcloud-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_PCLOUD_CLIENT_ID

       • Type: string

       • Required: false

   –pcloud-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_PCLOUD_CLIENT_SECRET

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to pcloud (Pcloud).

   –pcloud-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_PCLOUD_TOKEN

       • Type: string

       • Required: false

   –pcloud-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_PCLOUD_AUTH_URL

       • Type: string

       • Required: false

   –pcloud-token-url
       Token server url.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_PCLOUD_TOKEN_URL

       • Type: string

       • Required: false

   –pcloud-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_PCLOUD_ENCODING

       • Type: MultiEncoder

       • Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

   –pcloud-root-folder-id
       Fill in for rclone to use a non root folder as its starting point.

       Properties:

       • Config: root_folder_id

       • Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID

       • Type: string

       • Default: “d0”

   –pcloud-hostname
       Hostname to connect to.

       This is normally set when rclone initially does the oauth connection; however, you will need to set it
       by hand if you are using remote config with rclone authorize.

       Properties:

       • Config: hostname

       • Env Var: RCLONE_PCLOUD_HOSTNAME

       • Type: string

       • Default: “api.pcloud.com”

       • Examples:

         • “api.pcloud.com”

           • Original/US region

         • “eapi.pcloud.com”

           • EU region

   –pcloud-username
       Your pcloud username.

       This is only required when you want to use the cleanup command.  Due to a bug in the pcloud API, the
       required API call does not support OAuth authentication, so we have to rely on user and password
       authentication for it.

       Properties:

       • Config: username

       • Env Var: RCLONE_PCLOUD_USERNAME

       • Type: string

       • Required: false

   –pcloud-password
       Your pcloud password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: password

       • Env Var: RCLONE_PCLOUD_PASSWORD

       • Type: string

       • Required: false

premiumize.me

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       The initial setup for premiumize.me involves getting a token from premiumize.me which you need to  do  in
       your browser.  rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / premiumize.me
                 \ "premiumizeme"
              [snip]
              Storage> premiumizeme
              ** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **

              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              type = premiumizeme
              token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d>

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Note that rclone runs a webserver on your local machine to collect the token as returned from
       premiumize.me.  This only runs from the moment it opens your browser to the moment you get back the
       verification code.  This is on http://127.0.0.1:53682/ and you may need to unblock it temporarily if you
       are running a host firewall.

       Once configured you can then use rclone like this,

       List directories in top level of your premiumize.me

              rclone lsd remote:

       List all the files in your premiumize.me

              rclone ls remote:

       To copy a local directory to a premiumize.me directory called backup

              rclone copy /home/source remote:backup

   Modified time and hashes
       premiumize.me does  not  support  modification  times  or  hashes,  therefore  syncing  will  default  to
       --size-only checking.  Note that using --update will work.
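
       For example, both of the following work; the first compares sizes only, while the second also skips files
       that are newer on the destination (the paths are illustrative):

               rclone sync /home/source remote:backup
               rclone sync --update /home/source remote:backup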

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       \           0x5C        ＼
       "           0x22        ＂

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to premiumizeme (premiumize.me).

   –premiumizeme-api-key
       API Key.

       This is not normally used - use oauth instead.

       Properties:

       • Config: api_key

       • Env Var: RCLONE_PREMIUMIZEME_API_KEY

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to premiumizeme (premiumize.me).

   –premiumizeme-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_PREMIUMIZEME_ENCODING

       • Type: MultiEncoder

       • Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot

   Limitations
       Note  that  premiumize.me  is case insensitive so you can’t have a file called “Hello.doc” and one called
       “hello.doc”.

       premiumize.me file names can’t have the \ or " characters in.  rclone maps these to and from identical
       looking unicode equivalents ＼ and ＂.

       premiumize.me only supports filenames up to 255 characters in length.

put.io

       Paths are specified as remote:path

       put.io paths may be as deep as required, e.g.  remote:directory/subdirectory.

   Configuration
       The initial setup for put.io involves getting a token from put.io which you need to do in  your  browser.
       rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> putio
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Put.io
                 \ "putio"
              [snip]
              Storage> putio
              ** See help for putio backend at: https://rclone.org/putio/ **

              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [putio]
              type = putio
              token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y
              Current remotes:

              Name                 Type
              ====                 ====
              putio                putio

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> q

       Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if
       you use auto config mode.  This only runs from the moment it opens your browser to the moment you get
       back the verification code.  This is on http://127.0.0.1:53682/ and you may need to unblock it temporarily
       if you are running a host firewall, or use manual mode.

       You can then use it like this,

       List directories in top level of your put.io

              rclone lsd remote:

       List all the files in your put.io

              rclone ls remote:

       To copy a local directory to a put.io directory called backup

              rclone copy /home/source remote:backup

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       \           0x5C        ＼

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Advanced options
       Here are the Advanced options specific to putio (Put.io).

   –putio-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_PUTIO_ENCODING

       • Type: MultiEncoder

       • Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

   Limitations
       put.io has rate limiting.  When you hit a limit, rclone automatically retries after waiting the amount of
       time requested by the server.

       If  you want to avoid ever hitting these limits, you may use the --tpslimit flag with a low number.  Note
       that the imposed limits may be different for different operations, and may change over time.
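
       For example, to stay well under the rate limits you might cap rclone at 2 transactions per second (the
       value is illustrative, not a documented put.io limit):

               rclone copy --tpslimit 2 /home/source remote:backup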

Seafile

       This is a backend for the Seafile storage service:

       • It works with both the free community edition and the professional edition.

       • Seafile versions 6.x and 7.x are both supported.

       • Encrypted libraries are also supported.

       • It supports 2FA enabled users.

   Configuration
       There are two distinct modes in which you can set up your remote:

       • You point your remote to the root of the server, meaning you don’t specify a library during the
         configuration.  Paths are specified as remote:library.  You may put subdirectories in too, e.g.
         remote:library/path/to/dir.

       • You point your remote to a specific library during the configuration.  Paths are then specified as
         remote:path/to/dir.  This is the recommended mode when using encrypted libraries.  (This mode is
         possibly slightly faster than the root mode.)

   Configuration in root mode
       Here is an example of making a seafile configuration for a user with no two-factor authentication.  First
       run

              rclone config

       This will guide you through an interactive setup process.  To authenticate you will need the URL of  your
       server, your email (or username) and your password.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> seafile
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Seafile
                 \ "seafile"
              [snip]
              Storage> seafile
              ** See help for seafile backend at: https://rclone.org/seafile/ **

              URL of seafile host to connect to
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Connect to cloud.seafile.com
                 \ "https://cloud.seafile.com/"
              url> http://my.seafile.server/
              User name (usually email address)
              Enter a string value. Press Enter for the default ("").
              user> me@example.com
              Password
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank (default)
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Two-factor authentication ('true' if the account has 2FA enabled)
              Enter a boolean value (true or false). Press Enter for the default ("false").
              2fa> false
              Name of the library. Leave blank to access all non-encrypted libraries.
              Enter a string value. Press Enter for the default ("").
              library>
              Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank (default)
              y/g/n> n
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              Two-factor authentication is not enabled on this account.
              --------------------
              [seafile]
              type = seafile
              url = http://my.seafile.server/
              user = me@example.com
              pass = *** ENCRYPTED ***
              2fa = false
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       This remote is called seafile.  It’s pointing to the root of your seafile server and can now be used like
       this:

       See all libraries

              rclone lsd seafile:

       Create a new library

              rclone mkdir seafile:library

       List the contents of a library

              rclone ls seafile:library

       Sync /home/local/directory to the remote library, deleting any excess files in the library.

              rclone sync -i /home/local/directory seafile:library

   Configuration in library mode
       Here’s an example of a configuration in library mode with a user that has two-factor authentication
       enabled.  You will be asked for your 2FA code at the end of the configuration, and rclone will attempt to
       authenticate you with it:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> seafile
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Seafile
                 \ "seafile"
              [snip]
              Storage> seafile
              ** See help for seafile backend at: https://rclone.org/seafile/ **

              URL of seafile host to connect to
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
               1 / Connect to cloud.seafile.com
                 \ "https://cloud.seafile.com/"
              url> http://my.seafile.server/
              User name (usually email address)
              Enter a string value. Press Enter for the default ("").
              user> me@example.com
              Password
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank (default)
              y/g> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Two-factor authentication ('true' if the account has 2FA enabled)
              Enter a boolean value (true or false). Press Enter for the default ("false").
              2fa> true
              Name of the library. Leave blank to access all non-encrypted libraries.
              Enter a string value. Press Enter for the default ("").
              library> My Library
              Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank (default)
              y/g/n> n
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              Two-factor authentication: please enter your 2FA code
              2fa code> 123456
              Authenticating...
              Success!
              --------------------
              [seafile]
              type = seafile
              url = http://my.seafile.server/
              user = me@example.com
              pass =
              2fa = true
              library = My Library
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       You’ll notice your password is blank in the configuration.  This is because the password is only needed
       to authenticate you once.

       You specified My Library during the configuration.  The root of the remote is pointing at the root of the
       library My Library:

       See all files in the library:

              rclone lsd seafile:

       Create a new directory inside the library

              rclone mkdir seafile:directory

       List the contents of a directory

              rclone ls seafile:directory

       Sync /home/local/directory to the remote library, deleting any excess files in the library.

              rclone sync -i /home/local/directory seafile:

   –fast-list
       Seafile version 7+ supports --fast-list, which allows you to use fewer transactions in exchange for more
       memory.  See the rclone docs for more details.  Please note this is not supported on seafile server
       version 6.x.
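
       For example, on a Seafile 7+ server you could speed up a recursive size query like this (the library
       name is illustrative):

               rclone size --fast-list seafile:library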

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
       /           0x2F        ／
       "           0x22        ＂
       \           0x5C        ＼

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Seafile and rclone link
       Rclone supports generating share links for non-encrypted libraries only.  They can either be for  a  file
       or a directory:

              rclone link seafile:seafile-tutorial.doc
              http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/

       or if run on a directory you will get:

              rclone link seafile:dir
              http://my.seafile.server/d/9ea2455f6f55478bbb0d/

       Please  note  a share link is unique for each file or directory.  If you run a link command on a file/dir
       that has already been shared, you will get the exact same link.

   Compatibility
       It has been actively tested using the seafile docker image of these versions:

       • 6.3.4 community edition

       • 7.0.5 community edition

       • 7.1.3 community edition

       Versions below 6.0 are not supported.  Versions between 6.0 and 6.3 haven’t been  tested  and  might  not
       work properly.

   Standard options
       Here are the Standard options specific to seafile (seafile).

   –seafile-url
       URL of seafile host to connect to.

       Properties:

       • Config: url

       • Env Var: RCLONE_SEAFILE_URL

       • Type: string

       • Required: true

       • Examples:

         • “https://cloud.seafile.com/”

           • Connect to cloud.seafile.com.

   –seafile-user
       User name (usually email address).

       Properties:

       • Config: user

       • Env Var: RCLONE_SEAFILE_USER

       • Type: string

       • Required: true

   –seafile-pass
       Password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: pass

       • Env Var: RCLONE_SEAFILE_PASS

       • Type: string

       • Required: false

   –seafile-2fa
       Two-factor authentication ('true' if the account has 2FA enabled).

       Properties:

       • Config: 2fa

       • Env Var: RCLONE_SEAFILE_2FA

       • Type: bool

       • Default: false

   –seafile-library
       Name of the library.

       Leave blank to access all non-encrypted libraries.

       Properties:

       • Config: library

       • Env Var: RCLONE_SEAFILE_LIBRARY

       • Type: string

       • Required: false

   –seafile-library-key
       Library password (for encrypted libraries only).

       Leave blank if you pass it through the command line.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: library_key

       • Env Var: RCLONE_SEAFILE_LIBRARY_KEY

       • Type: string

       • Required: false

   –seafile-auth-token
       Authentication token.

       Properties:

       • Config: auth_token

       • Env Var: RCLONE_SEAFILE_AUTH_TOKEN

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to seafile (seafile).

   –seafile-create-library
       Should rclone create a library if it doesn’t exist.

       Properties:

       • Config: create_library

       • Env Var: RCLONE_SEAFILE_CREATE_LIBRARY

       • Type: bool

       • Default: false

   –seafile-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_SEAFILE_ENCODING

       • Type: MultiEncoder

       • Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8

SFTP

       SFTP is the Secure (or SSH) File Transfer Protocol.

       The SFTP backend can be used with a number of different providers:

       • Hetzner Storage Box

       • rsync.net

       SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

       Paths are specified as remote:path.  If the path does not begin with a / it is relative to the home
       directory of the user.  An empty path remote: refers to the user’s home directory.  For example, rclone
       lsd remote: would list the home directory of the user configured in the rclone remote config (i.e.
       /home/sftpuser), whereas rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).

       Note that some SFTP servers will need the leading / - Synology is a good example of this.  rsync.net and
       Hetzner, on the other hand, require users to OMIT the leading /.

       Note  that  by  default  rclone  will  try  to  execute  shell  commands  on the server, see shell access
       considerations.

   Configuration
       Here is an example of making an SFTP configuration.  First run

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / SSH/SFTP
                 \ "sftp"
              [snip]
              Storage> sftp
              SSH host to connect to
              Choose a number from below, or type in your own value
               1 / Connect to example.com
                 \ "example.com"
              host> example.com
              SSH username
              Enter a string value. Press Enter for the default ("$USER").
              user> sftpuser
              SSH port number
              Enter a signed integer. Press Enter for the default (22).
              port>
              SSH password, leave blank to use ssh-agent.
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank
              y/g/n> n
              Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
              key_file>
              Remote config
              --------------------
              [remote]
              host = example.com
              user = sftpuser
              port =
              pass =
              key_file =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       This remote is called remote and can now be used like this:

       See all directories in the home directory

              rclone lsd remote:

       See all directories in the root directory

              rclone lsd remote:/

       Make a new directory

              rclone mkdir remote:path/to/directory

       List the contents of a directory

              rclone ls remote:path/to/directory

       Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

              rclone sync -i /home/local/directory remote:directory

       Mount the remote path /srv/www-data/ to the local path /mnt/www-data

              rclone mount remote:/srv/www-data/ /mnt/www-data

   SSH Authentication
       The SFTP remote supports three authentication methods:

       • Password

       • Key file, including certificate signed keys

       • ssh-agent

       Key files  should  be  PEM-encoded  private  key  files.   For  instance  /home/$USER/.ssh/id_rsa.   Only
       unencrypted OpenSSH or PEM encrypted files are supported.

       The key file can be specified in either an external file (key_file) or contained within the rclone config
       file (key_pem).  If using key_pem in the config file, the entry should be on a single line with newline
       escapes ('\n' or '\r\n') separating lines, i.e.

              key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----

       This will generate it correctly for key_pem for use in the config:

              awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa

       If you don’t specify pass, key_file, key_pem or ask_password then rclone will attempt to contact an
       ssh-agent.  You can also specify key_use_agent to force the usage of an ssh-agent.  In this case key_file
       or key_pem can also be specified to force the usage of a specific key in the ssh-agent.

       Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

       If  you  set  the  ask_password option, rclone will prompt for a password when needed and no password has
       been configured.

   Certificate-signed keys
       With traditional key-based authentication, you configure your private key only, and the public key  built
       into it will be used during the authentication process.

       If  you  have  a  certificate  you  may  use  it  to  sign  your public key, creating a separate SSH user
       certificate that should be used instead of the plain public key extracted from the private key.  Then you
       must provide the path to the user certificate public key file in pubkey_file.

       Note: This is not  the  traditional  public  key  paired  with  your  private  key,  typically  saved  as
       /home/$USER/.ssh/id_rsa.pub.  Setting this path in pubkey_file will not work.

       Example:

              [remote]
              type = sftp
              host = example.com
              user = sftpuser
              key_file = ~/id_rsa
              pubkey_file = ~/id_rsa-cert.pub

       If you concatenate a cert with a private key then you can specify the merged file in both places.

       Note: the cert must come first in the file.  e.g.

              cat id_rsa-cert.pub id_rsa > merged_key

   Host key validation
       By  default  rclone  will  not check the server’s host key for validation.  This can allow an attacker to
       replace a server with their own and if you use  password  authentication  then  this  can  lead  to  that
       password being exposed.

       Host  key  matching,  using  standard known_hosts files can be turned on by enabling the known_hosts_file
       option.  This can point to the file maintained by OpenSSH or can point to a unique file.

       e.g. using the OpenSSH known_hosts file:

              [remote]
              type = sftp
              host = example.com
              user = sftpuser
              pass =
              known_hosts_file = ~/.ssh/known_hosts

       Alternatively you can create your own known hosts file like this:

              ssh-keyscan -t dsa,rsa,ecdsa,ed25519 example.com >> known_hosts

       There are some limitations:

       • rclone will not manage this file for you.  If the key is missing or wrong then the connection  will  be
         refused.

       • If  the  server is set up for a certificate host key then the entry in the known_hosts file must be the
         @cert-authority entry for the CA

       If the host key provided by the server does not match the one in  the  file  (or  is  missing)  then  the
       connection will be aborted and an error returned such as

              NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key mismatch

       or

              NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key is unknown

       If you see an error such as

              NewFs: couldn't connect SSH: ssh: handshake failed: ssh: no authorities for hostname: example.com:22

       then  it  is  likely  the  server has presented a CA signed host certificate and you will need to add the
       appropriate @cert-authority entry.
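
       As a sketch, such an entry in the known_hosts file looks like this (the host pattern, key type and key
       are placeholders that depend on your CA):

               @cert-authority *.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...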

       The known_hosts_file setting can be set during rclone config as an advanced option.

   ssh-agent on macOS
       Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the
       OS.  The most effective work-around seems to be to start an ssh-agent in each session, e.g.

              eval `ssh-agent -s` && ssh-add -A

       And then at the end of the session

              eval `ssh-agent -k`

       These commands can be used in scripts of course.

   Shell access
       Some functionality of the SFTP backend relies on remote shell access, and the possibility to execute
       commands.  This includes checksum, and in some cases also about.  The shell commands that must be executed
       may differ between different types of shells, and quoting/escaping of file path arguments containing
       special characters may also differ.  Rclone therefore needs to know what type of shell it is, and whether
       shell access is available at all.

       Most servers run on some version of Unix, and then a basic Unix shell can be assumed, without further
       distinction.  Windows 10, Server 2019, and later can also run an SSH server, which is a port of OpenSSH
       (see the official installation guide:
       https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse).  On a
       Windows server the shell handling is different: although it can also be set up to use a Unix type shell,
       e.g. Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and PowerShell is a recommended
       alternative.  All of these behave differently, which rclone must handle.

       Rclone tries to auto-detect what type of shell is used on the server the first time you access the SFTP
       remote.  If a remote shell session is successfully created, it will look for indications that it is CMD
       or PowerShell, falling back to Unix if nothing else is detected.  If unable to even create a remote
       shell session, then shell command execution will be disabled entirely.  The result is stored in the SFTP
       remote configuration, in option shell_type, so that the auto-detection only has to be performed once.
       If you manually set a value for this option before first use, the auto-detection will be skipped, and if
       you set a different value later this will override the existing one.  Value none can be set to avoid any
       attempts at executing shell commands, e.g. if this is not allowed on the server.

       When the server is rclone serve sftp, the rclone SFTP remote will detect this as a Unix type shell - even
       if it is running on Windows.  This server does not actually have a shell, but it accepts input commands
       matching the specific ones that the SFTP backend relies on for Unix shells, e.g. md5sum and df.  It also
       handles the string escape rules used by Unix shells.  Treating it as a Unix type shell from an SFTP
       remote will therefore always be correct, and support all features.

   Shell access considerations
       The shell type auto-detection logic, described above, means that by default rclone will try to run a
       shell command the first time a new sftp remote is accessed.  If you configure an sftp remote without a
       config file, e.g. an on the fly remote, rclone will have nowhere to store the result, and it will re-run
       the command on every access.  To avoid this you should explicitly set the shell_type option to the
       correct value, or to none if you want to prevent rclone from executing any remote shell commands.

       It is also important to note that, since the shell type decides how quoting and escaping  of  file  paths
       used  as  command-line arguments are performed, configuring the wrong shell type may leave you exposed to
       command injection exploits.  Make sure to confirm the auto-detected shell type,  or  explicitly  set  the
       shell type you know is correct, or disable shell access until you know.
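
       For example, with an on the fly remote you could disable shell access entirely using the backend flags
       documented below (host and user are placeholders):

               rclone lsd :sftp: --sftp-host example.com --sftp-user sftpuser --sftp-shell-type none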

   Checksum
       SFTP does not natively support checksums (file hash), but rclone is able to use checksumming if the same
       login has shell access, and can execute remote commands.  If there is a command that can calculate
       compatible checksums on the remote system, rclone can then be configured to execute this whenever a
       checksum is needed, and read back the results.  Currently MD5 and SHA-1 are supported.

       Normally this requires an external utility being available on the server.  By  default  rclone  will  try
       commands  md5sum, md5 and rclone md5sum for MD5 checksums, and the first one found usable will be picked.
       Same with sha1sum, sha1 and rclone sha1sum commands for SHA-1 checksums.  These utilities  normally  need
       to be in the remote’s PATH to be found.

       In  some  cases the shell itself is capable of calculating checksums.  PowerShell is an example of such a
       shell.  If rclone detects that the remote shell is PowerShell, which means it most probably is a  Windows
       OpenSSH  server,  rclone  will  use  a  predefined script block to produce the checksums when no external
       checksum commands are found (see shell access).  This assumes PowerShell version 4.0 or newer.

       The options md5sum_command and sha1sum_command can be used to customize the command to be executed for
       calculation of checksums.  You can for example set a specific path to where md5sum and sha1sum
       executables are located, or use them to specify some other tools that print checksums in compatible
       format.  The value can include command-line arguments, or even shell script blocks as with PowerShell.
       Rclone has subcommands md5sum and sha1sum that use compatible format, which means if you have an rclone
       executable on the server it can be used.  As mentioned above, they will be automatically picked up if
       found in PATH, but if not you can set something like /path/to/rclone md5sum as the value of option
       md5sum_command to make sure a specific executable is used.
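
       As a sketch, the resulting configuration entries might look like this (the host and paths are
       placeholders):

               [remote]
               type = sftp
               host = example.com
               md5sum_command = /path/to/rclone md5sum
               sha1sum_command = /path/to/rclone sha1sum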

       Remote checksumming is recommended and enabled by default.  The first time rclone uses an SFTP remote, if
       options md5sum_command or sha1sum_command are not set, it will check if any of the default commands for
       each of them, as described above, can be used.  The result will be saved in the remote configuration, so
       next time it will use the same.  Value none will be set if none of the default commands could be used for
       a specific algorithm, and that algorithm will not be supported by the remote.

       Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your
       control, and to which the execution of remote shell commands is prohibited.  Set the configuration option
       disable_hashcheck to true to disable checksumming entirely, or set shell_type  to  none  to  disable  all
       functionality based on remote shell command execution.

   Modified time
       Modified times are stored on the server to 1 second precision.

       Modified times are used in syncing and are fully supported.

       Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain
       configurations of ProFTPd with mod_sftp).  If you are using one of these servers, you can set the option
       set_modtime = false in your rclone backend configuration to disable this behaviour.

   About command
       The about command returns the total space, free space, and used space on the remote for the disk  of  the
       specified path on the remote or, if not set, the disk of the root on the remote.

       SFTP  usually  supports  the  about command,  but it depends on the server.  If the server implements the
       vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it  will  be
       used.   If not, but the same login has access to a Unix shell, where the df command is available (e.g. in
       the remote’s PATH), then this will be used instead.  If the server shell is PowerShell, probably  with  a
       Windows  OpenSSH  server,  rclone  will  use a built-in shell command (see shell access).  If none of the
       above is applicable, about will fail.
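
       For example, to query the space statistics for the root of the configured remote:

               rclone about remote: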

   Standard options
       Here are the Standard options specific to sftp (SSH/SFTP).

   –sftp-host
       SSH host to connect to.

       E.g.  “example.com”.

       Properties:

       • Config: host

       • Env Var: RCLONE_SFTP_HOST

       • Type: string

       • Required: true

   –sftp-user
       SSH username.

       Properties:

       • Config: user

       • Env Var: RCLONE_SFTP_USER

       • Type: string

       • Default: “$USER”

   –sftp-port
       SSH port number.

       Properties:

       • Config: port

       • Env Var: RCLONE_SFTP_PORT

       • Type: int

       • Default: 22

   –sftp-pass
       SSH password, leave blank to use ssh-agent.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: pass

       • Env Var: RCLONE_SFTP_PASS

       • Type: string

       • Required: false

   –sftp-key-pem
       Raw PEM-encoded private key.

       If specified, will override key_file parameter.

       Properties:

       • Config: key_pem

       • Env Var: RCLONE_SFTP_KEY_PEM

       • Type: string

       • Required: false

   –sftp-key-file
       Path to PEM-encoded private key file.

       Leave blank or set key-use-agent to use ssh-agent.

       Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

       Properties:

       • Config: key_file

       • Env Var: RCLONE_SFTP_KEY_FILE

       • Type: string

       • Required: false

   –sftp-key-file-pass
       The passphrase to decrypt the PEM-encoded private key file.

       Only PEM encrypted key files (old OpenSSH format) are supported.   Encrypted  keys  in  the  new  OpenSSH
       format can’t be used.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: key_file_pass

       • Env Var: RCLONE_SFTP_KEY_FILE_PASS

       • Type: string

       • Required: false

   –sftp-pubkey-file
       Optional path to public key file.

       Set this if you have a signed certificate you want to use for authentication.

       Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

       Properties:

       • Config: pubkey_file

       • Env Var: RCLONE_SFTP_PUBKEY_FILE

       • Type: string

       • Required: false

   –sftp-key-use-agent
       When set forces the usage of the ssh-agent.

       When key-file is also set, the “.pub” file of the specified key-file is read and only the associated key
       is requested from the ssh-agent.  This allows you to avoid “Too many authentication failures for
       username” errors when the ssh-agent contains many keys.

       Properties:

       • Config: key_use_agent

       • Env Var: RCLONE_SFTP_KEY_USE_AGENT

       • Type: bool

       • Default: false

   –sftp-use-insecure-cipher
       Enable the use of insecure ciphers and key exchange methods.

       This enables the use of the following insecure ciphers and key exchange methods:

       • aes128-cbc

       • aes192-cbc

       • aes256-cbc

       • 3des-cbc

       • diffie-hellman-group-exchange-sha256

       • diffie-hellman-group-exchange-sha1

       Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.

       Properties:

       • Config: use_insecure_cipher

       • Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER

       • Type: bool

       • Default: false

       • Examples:

         • “false”

           • Use default Cipher list.

         • “true”

           • Enables    the   use   of   the   aes128-cbc   cipher   and   diffie-hellman-group-exchange-sha256,
             diffie-hellman-group-exchange-sha1 key exchange.

   –sftp-disable-hashcheck
       Disable the execution of SSH commands to determine if remote file hashing is available.

       Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.

       Properties:

       • Config: disable_hashcheck

       • Env Var: RCLONE_SFTP_DISABLE_HASHCHECK

       • Type: bool

       • Default: false

   Advanced options
       Here are the Advanced options specific to sftp (SSH/SFTP).

   –sftp-known-hosts-file
       Optional path to known_hosts file.

       Set this value to enable server host key validation.

       Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

       Properties:

       • Config: known_hosts_file

       • Env Var: RCLONE_SFTP_KNOWN_HOSTS_FILE

       • Type: string

       • Required: false

       • Examples:

         • “~/.ssh/known_hosts”

           • Use OpenSSH’s known_hosts file.

   –sftp-ask-password
       Allow asking for SFTP password when needed.

       If this is set and no password is supplied then rclone will:

       • ask for a password

       • not contact the ssh agent

       Properties:

       • Config: ask_password

       • Env Var: RCLONE_SFTP_ASK_PASSWORD

       • Type: bool

       • Default: false

   –sftp-path-override
       Override path used by SSH shell commands.

       This allows checksum calculation when SFTP and SSH paths are different.  This issue affects, among
       others, Synology NAS boxes.

       E.g.  if shared folders can be found in directories representing volumes:

              rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory

       E.g.  if home directory can be found in a shared folder called “home”:

              rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory

       Properties:

       • Config: path_override

       • Env Var: RCLONE_SFTP_PATH_OVERRIDE

       • Type: string

       • Required: false

   –sftp-set-modtime
       Set the modified time on the remote if set.

       Properties:

       • Config: set_modtime

       • Env Var: RCLONE_SFTP_SET_MODTIME

       • Type: bool

       • Default: true

   –sftp-shell-type
       The type of SSH shell on remote server, if any.

       Leave blank for autodetect.

       Properties:

       • Config: shell_type

       • Env Var: RCLONE_SFTP_SHELL_TYPE

       • Type: string

       • Required: false

       • Examples:

         • “none”

           • No shell access

         • “unix”

           • Unix shell

         • “powershell”

           • PowerShell

         • “cmd”

           • Windows Command Prompt

   –sftp-md5sum-command
       The command used to read md5 hashes.

       Leave blank for autodetect.

       Properties:

       • Config: md5sum_command

       • Env Var: RCLONE_SFTP_MD5SUM_COMMAND

       • Type: string

       • Required: false

   –sftp-sha1sum-command
       The command used to read sha1 hashes.

       Leave blank for autodetect.

       Properties:

       • Config: sha1sum_command

       • Env Var: RCLONE_SFTP_SHA1SUM_COMMAND

       • Type: string

       • Required: false

   –sftp-skip-links
       Set to skip any symlinks and any other non-regular files.

       Properties:

       • Config: skip_links

       • Env Var: RCLONE_SFTP_SKIP_LINKS

       • Type: bool

       • Default: false

   –sftp-subsystem
       Specifies the SSH2 subsystem on the remote host.

       Properties:

       • Config: subsystem

       • Env Var: RCLONE_SFTP_SUBSYSTEM

       • Type: string

       • Default: “sftp”

   –sftp-server-command
       Specifies the path or command to run an sftp server on the remote host.

       The subsystem option is ignored when server_command is defined.

       Properties:

       • Config: server_command

       • Env Var: RCLONE_SFTP_SERVER_COMMAND

       • Type: string

       • Required: false

   –sftp-use-fstat
       If set use fstat instead of stat.

       Some servers limit the number of open files, and calling Stat after opening the file will throw an error
       from the server.  Setting this flag will call Fstat instead of Stat, which operates on an already open
       file handle.

       It has been found that this helps with IBM Sterling SFTP servers which have “extractability” level set to
       1, which means only 1 file can be opened at any given time.

       Properties:

       • Config: use_fstat

       • Env Var: RCLONE_SFTP_USE_FSTAT

       • Type: bool

       • Default: false

   –sftp-disable-concurrent-reads
       If set don’t use concurrent reads.

       Normally concurrent reads are safe to use and not using them will degrade performance, so this option  is
       disabled by default.

       Some servers limit the number of times a file can be downloaded.  Using concurrent reads can trigger
       this limit, so if you have a server which returns

              Failed to copy: file does not exist

       Then you may need to enable this flag.

       If concurrent reads are disabled, the use_fstat option is ignored.

       Properties:

       • Config: disable_concurrent_reads

       • Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_READS

       • Type: bool

       • Default: false

   –sftp-disable-concurrent-writes
       If set don’t use concurrent writes.

       Normally rclone uses  concurrent  writes  to  upload  files.   This  improves  the  performance  greatly,
       especially for distant servers.

       This option disables concurrent writes should that be necessary.

       Properties:

       • Config: disable_concurrent_writes

       • Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_WRITES

       • Type: bool

       • Default: false

   –sftp-idle-timeout
       Max time before closing idle connections.

       If  no  connections  have  been  returned to the connection pool in the time given, rclone will empty the
       connection pool.

       Set to 0 to keep connections indefinitely.

       Properties:

       • Config: idle_timeout

       • Env Var: RCLONE_SFTP_IDLE_TIMEOUT

       • Type: Duration

       • Default: 1m0s

   –sftp-chunk-size
       Upload and download chunk size.

       This controls the maximum size of payload in SFTP protocol packets.  The RFC limits this to 32768 bytes
       (32k), which is the default.  However, a lot of servers support larger sizes, typically limited to a
       maximum total packet size of 256k, and setting it larger will increase transfer speed dramatically on
       high latency links.  This includes OpenSSH, and, for example, using the value of 255k works well, leaving
       plenty of room for overhead while still being within a total packet size of 256k.

       Make  sure to test thoroughly before using a value higher than 32k, and only use it if you always connect
       to the same server or after sufficiently broad testing.  If you get errors such as “failed to send packet
       payload: EOF”, lots of “connection lost”, or “corrupted on transfer”, when copying  a  larger  file,  try
       lowering  the value.  The server run by rclone serve sftp sends packets with standard 32k maximum payload
       so you must not set a different chunk_size when downloading files, but it accepts packets up to the  256k
       total size, so for uploads the chunk_size can be set as for the OpenSSH example above.

       Properties:

       • Config: chunk_size

       • Env Var: RCLONE_SFTP_CHUNK_SIZE

       • Type: SizeSuffix

       • Default: 32Ki
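
       For example, against an OpenSSH server you might raise the chunk size for a transfer like this (only
       after testing, as cautioned above; the paths are illustrative):

               rclone copy --sftp-chunk-size 255k /home/source remote:backup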

   –sftp-concurrency
       The maximum number of outstanding requests for one file.

       This  controls  the  maximum  number  of  outstanding requests for one file.  Increasing it will increase
       throughput on high latency links at the cost of using more memory.

       Properties:

       • Config: concurrency

       • Env Var: RCLONE_SFTP_CONCURRENCY

       • Type: int

       • Default: 64

   –sftp-set-env
       Environment variables to pass to sftp and commands.

       Set environment variables in the form:

              VAR=value

       to be passed to the sftp client and to any commands run (eg md5sum).

       Pass multiple variables space separated, e.g.

               VAR1=value VAR2=value

       and pass variables with spaces in them in quotes, e.g.

              "VAR3=value with space" "VAR4=value with space" VAR5=nospacehere

       Properties:

       • Config: set_env

       • Env Var: RCLONE_SFTP_SET_ENV

       • Type: SpaceSepList

       • Default:

   Limitations
       On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP  so  the  hashes  can’t  be
       calculated properly.  For them using disable_hashcheck is a good idea.
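
       A minimal sketch of such a configuration (the remote name and hostname are placeholders):

               [synology]
               type = sftp
               host = nas.example.com
               user = sftpuser
               disable_hashcheck = true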

       The only ssh agent supported under Windows is PuTTY’s Pageant.

       The  Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns.  This
       can be  re-enabled  on  a  per-connection  basis  by  setting  the  use_insecure_cipher  setting  in  the
       configuration file to true.  Further details on the insecurity of this cipher can be found in this paper.

       SFTP isn’t supported under plan9 until this issue is fixed.

       Note  that  since  SFTP  isn’t  HTTP  based  the  following  flags  don’t  work  with it: --dump-headers,
       --dump-bodies, --dump-auth.

       Note that --timeout and --contimeout are both supported.

   rsync.net
       rsync.net is supported through the SFTP backend.

       See rsync.net’s documentation of rclone examples.

   Hetzner Storage Box
       Hetzner Storage Boxes are supported through the SFTP backend on port 23.

       See Hetzner’s documentation for details:
       https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg#rclone

SMB

       SMB is a communication protocol for sharing files over a network.

       This relies on the go-smb2 library for communication with the SMB protocol.

       Paths are specified as remote:sharename (or remote: for the lsd command).  You may put subdirectories in
       too, e.g. remote:item/path/to/dir.

   Notes
       The first path segment must be the name of the share, which you entered when you started to share on
       Windows.  On smbd, it’s the section title in the smb.conf file (usually in /etc/samba/).  You can find
       shares by querying the root if you’re unsure (e.g. rclone lsd remote:).

       You can’t access shared printers from rclone, obviously.

       You can’t use Anonymous access for logging in; you have to use the guest user with an empty password
       instead.  The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and
       periods.  Alternatively, the local backend on Windows can access SMB servers using UNC paths, e.g.
       \\server\share.  This doesn’t apply to non-Windows OSes, such as Linux and macOS.
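
       For example, on Windows you could list such a share directly with the local backend (server and share
       names are placeholders):

               rclone lsd \\server\share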

   Configuration
       Here is an example of making a SMB configuration.

       First run

              rclone config

       This will guide you through an interactive setup process.

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Option Storage.
              Type of storage to configure.
              Choose a number from below, or type in your own value.
              XX / SMB / CIFS
                 \ (smb)
              Storage> smb

              Option host.
              Samba hostname to connect to.
              E.g. "example.com".
              Enter a value.
              host> localhost

              Option user.
              Samba username.
              Enter a string value. Press Enter for the default (lesmi).
              user> guest

              Option port.
              Samba port number.
              Enter a signed integer. Press Enter for the default (445).
              port>

              Option pass.
              Samba password.
              Choose an alternative below. Press Enter for the default (n).
              y) Yes, type in my own password
              g) Generate random password
              n) No, leave this optional password blank (default)
              y/g/n> g
              Password strength in bits.
              64 is just about memorable
              128 is secure
              1024 is the maximum
              Bits> 64
              Your password is: XXXX
              Use this password? Please note that an obscured version of this
              password (and not the password itself) will be stored under your
              configuration file, so keep this generated password in a safe place.
              y) Yes (default)
              n) No
              y/n> y

              Option domain.
              Domain name for NTLM authentication.
              Enter a string value. Press Enter for the default (WORKGROUP).
              domain>

              Edit advanced config?
              y) Yes
              n) No (default)
              y/n> n

              Configuration complete.
              Options:
               - type: smb
              - host: localhost
              - user: guest
              - pass: *** ENCRYPTED ***
              Keep this "remote" remote?
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
               y/e/d> y

   Standard options
       Here are the Standard options specific to smb (SMB / CIFS).

   –smb-host
       SMB server hostname to connect to.

       E.g.  “example.com”.

       Properties:

       • Config: host

       • Env Var: RCLONE_SMB_HOST

       • Type: string

       • Required: true

   –smb-user
       SMB username.

       Properties:

       • Config: user

       • Env Var: RCLONE_SMB_USER

       • Type: string

       • Default: “$USER”

   –smb-port
       SMB port number.

       Properties:

       • Config: port

       • Env Var: RCLONE_SMB_PORT

       • Type: int

       • Default: 445

   –smb-pass
       SMB password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: pass

       • Env Var: RCLONE_SMB_PASS

       • Type: string

       • Required: false

   –smb-domain
       Domain name for NTLM authentication.

       Properties:

       • Config: domain

       • Env Var: RCLONE_SMB_DOMAIN

       • Type: string

       • Default: “WORKGROUP”

   Advanced options
       Here are the Advanced options specific to smb (SMB / CIFS).

   –smb-idle-timeout
       Max time before closing idle connections.

       If no connections have been returned to the connection pool in the time  given,  rclone  will  empty  the
       connection pool.

       Set to 0 to keep connections indefinitely.

       Properties:

       • Config: idle_timeout

       • Env Var: RCLONE_SMB_IDLE_TIMEOUT

       • Type: Duration

       • Default: 1m0s

   –smb-hide-special-share
       Hide special shares (e.g. print$) which users aren’t supposed to access.

       Properties:

       • Config: hide_special_share

       • Env Var: RCLONE_SMB_HIDE_SPECIAL_SHARE

       • Type: bool

       • Default: true

   –smb-case-insensitive
       Whether the server is configured to be case-insensitive.

       Always true on Windows shares.

       Properties:

       • Config: case_insensitive

       • Env Var: RCLONE_SMB_CASE_INSENSITIVE

       • Type: bool

       • Default: true

   –smb-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_SMB_ENCODING

       • Type: MultiEncoder

       • Default:
         Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

Storj

       Storj is  an encrypted, secure, and cost-effective object storage service that enables you to store, back
       up, and archive large amounts of data in a decentralized manner.

   Backend options
       Storj can be used both with this native backend  and  with  the  s3 backend using the Storj S3 compatible
       gateway (shared or private).

       Use this backend to take advantage of client-side encryption as well as to achieve the best possible
       download performance.  Uploads will be erasure-coded locally, thus a 1GB upload will result in 2.68GB of
       data being uploaded to storage nodes across the network.

       Use the s3 backend and one of the S3 compatible Hosted Gateways to increase upload performance and reduce
       the load on your systems and network.  Uploads will be encrypted and erasure-coded server-side, thus a
       1GB upload will result in only 1GB of data being uploaded to storage nodes across the network.

       Side by side comparison with more details:

       • Characteristics:

          • Storj backend: Uses the native RPC protocol, connecting directly to the storage nodes which host the
            data.  Requires more CPU resources for encoding/decoding, has network amplification (especially
            during the upload), and uses lots of TCP connections.

         • S3 backend: Uses the S3 compatible HTTP REST API via  the  shared  gateways.   There  is  no  network
           amplification, but performance depends on the shared gateways and the secret encryption key is shared
           with the gateway.

       • Typical usage:

         • Storj  backend:  Server  environments  and  desktops  with  enough  resources,  internet  speed   and
           connectivity - and applications where Storj's client-side encryption is required.

         • S3 backend: Desktops and similar with limited resources, internet speed or connectivity.

       • Security:

         • Storj backend: strong.  Private encryption key doesn’t need to leave the local computer.

         • S3 backend: weaker.  The private encryption key is shared with the  authentication  service  of  the
           hosted gateway, where it's stored encrypted (see
           https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#security-and-encryption).   It  can  be
           made stronger by combining it with the rclone crypt backend.

       • Bandwidth usage (upload):

         • Storj backend: higher.  As data is erasure-coded on the client side, both the original data  and  the
           parities  must  be  uploaded,  so  about  2.7 times more data is transferred.  The client may start
           uploading to an even higher number of nodes (~3.7 times more) and abandon/stop the slow uploads.

         • S3 backend: normal.  Only the raw data is uploaded, erasure coding happens on the gateway.

       • Bandwidth usage (download)

         • Storj backend: almost normal.  Only the minimal amount of data is required, but to avoid  very  slow
           data providers a few more sources are used and the slowest are ignored (max 1.2x overhead).

         • S3 backend: normal.  Only the raw data is downloaded, erasure coding happens on the shared gateway.

       • CPU usage:

         • Storj backend: higher, but more predictable.  Erasure coding and encryption/decryption happen locally,
           which requires significant CPU usage.

         • S3 backend: lower.  Erasure coding and encryption/decryption happen on the shared s3 gateways (so  it
           depends on the current load on the gateways).

       • TCP connection usage:

         • Storj  backend:  high.   A  direct connection is required to each of the Storj nodes resulting in 110
           connections on upload and 35 on download per 64 MB segment.  Not all  the  connections  are  actively
           used  (slow  ones  are  pruned),  but  they are all opened.  Adjusting the max open file limit may be
           required.

         • S3 backend: normal.  Only one connection  per  download/upload  thread  is  required  to  the  shared
           gateway.

       • Overall performance:

         • Storj backend: with enough resources (CPU and bandwidth) the storj backend can provide as much as  2x
           better  performance.  Data is downloaded to and uploaded from the client directly, instead of going
           via the gateway.

         • S3 backend: can be faster on edge devices where CPU and network bandwidth are limited, as the  shared
           S3  compatible  gateways  take  care  of  the  encryption/decryption  and  erasure  coding, and no
           download/upload amplification occurs.

       • Decentralization:

         • Storj backend: high.  Data is downloaded directly from the distributed cloud of storage providers.

         • S3 backend: low.  Requires a running S3 gateway (either self-hosted or Storj-hosted).

       • Limitations:

         • Storj  backend:  rclone  checksum  is  not  possible  without  download,  as checksum metadata is not
           calculated during upload

         • S3 backend: secret encryption key is shared with the gateway
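
       As a sketch of the two approaches, the following illustrative config stanzas point at the same project,
       once through the native backend and once through the hosted S3 gateway.  The remote names, credentials
       and endpoint are placeholders - see the s3 backend documentation for the full set of options:

              [storj-native]
              type = storj
              access_grant = your-access-grant

              [storj-gateway]
              type = s3
              provider = Storj
              access_key_id = your-gateway-access-key
              secret_access_key = your-gateway-secret-key
              endpoint = gateway.storjshare.io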

   Configuration
       To make a new Storj configuration you need one of the following:

       • An Access Grant that someone else shared with you.

       • An API Key
         (https://documentation.storj.io/getting-started/uploading-your-first-object/create-an-api-key)  of  a
         Storj project you are a member of.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

   Setup with access grant
              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Storj Decentralized Cloud Storage
                 \ "storj"
              [snip]
              Storage> storj
              ** See help for storj backend at: https://rclone.org/storj/ **

              Choose an authentication method.
              Enter a string value. Press Enter for the default ("existing").
              Choose a number from below, or type in your own value
               1 / Use an existing access grant.
                 \ "existing"
               2 / Create a new access grant from satellite address, API key, and passphrase.
                 \ "new"
              provider> existing
              Access Grant.
              Enter a string value. Press Enter for the default ("").
              access_grant> your-access-grant-received-by-someone-else
              Remote config
              --------------------
              [remote]
              type = storj
              access_grant = your-access-grant-received-by-someone-else
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Setup with API key and passphrase
              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Storj Decentralized Cloud Storage
                 \ "storj"
              [snip]
              Storage> storj
              ** See help for storj backend at: https://rclone.org/storj/ **

              Choose an authentication method.
              Enter a string value. Press Enter for the default ("existing").
              Choose a number from below, or type in your own value
               1 / Use an existing access grant.
                 \ "existing"
               2 / Create a new access grant from satellite address, API key, and passphrase.
                 \ "new"
              provider> new
              Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
              Enter a string value. Press Enter for the default ("us-central-1.storj.io").
              Choose a number from below, or type in your own value
               1 / US Central 1
                 \ "us-central-1.storj.io"
               2 / Europe West 1
                 \ "europe-west-1.storj.io"
               3 / Asia East 1
                 \ "asia-east-1.storj.io"
              satellite_address> 1
              API Key.
              Enter a string value. Press Enter for the default ("").
              api_key> your-api-key-for-your-storj-project
              Encryption Passphrase. To access existing objects enter passphrase used for uploading.
              Enter a string value. Press Enter for the default ("").
              passphrase> your-human-readable-encryption-passphrase
              Remote config
              --------------------
              [remote]
              type = storj
              satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777
              api_key = your-api-key-for-your-storj-project
              passphrase = your-human-readable-encryption-passphrase
              access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

   Standard options
       Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).

   –storj-provider
       Choose an authentication method.

       Properties:

       • Config: provider

       • Env Var: RCLONE_STORJ_PROVIDER

       • Type: string

       • Default: “existing”

       • Examples:

         • “existing”

           • Use an existing access grant.

         • “new”

           • Create a new access grant from satellite address, API key, and passphrase.

   –storj-access-grant
       Access grant.

       Properties:

       • Config: access_grant

       • Env Var: RCLONE_STORJ_ACCESS_GRANT

       • Provider: existing

       • Type: string

       • Required: false

   –storj-satellite-address
       Satellite address.

       Custom satellite address should match the format: <nodeid>@<address>:<port>.

       Properties:

       • Config: satellite_address

       • Env Var: RCLONE_STORJ_SATELLITE_ADDRESS

       • Provider: new

       • Type: string

       • Default: “us-central-1.storj.io”

       • Examples:

         • “us-central-1.storj.io”

           • US Central 1

         • “europe-west-1.storj.io”

           • Europe West 1

         • “asia-east-1.storj.io”

           • Asia East 1

   –storj-api-key
       API key.

       Properties:

       • Config: api_key

       • Env Var: RCLONE_STORJ_API_KEY

       • Provider: new

       • Type: string

       • Required: false

   –storj-passphrase
       Encryption passphrase.

       To access existing objects enter passphrase used for uploading.

       Properties:

       • Config: passphrase

       • Env Var: RCLONE_STORJ_PASSPHRASE

       • Provider: new

       • Type: string

       • Required: false

   Usage
       Paths are specified as remote:bucket (or remote: for the lsf command).  You may put  subdirectories  in
       too, e.g. remote:bucket/path/to/dir.

       Once configured you can then use rclone like this.

   Create a new bucket
       Use the mkdir command to create a new bucket, e.g. bucket.

              rclone mkdir remote:bucket

   List all buckets
       Use the lsf command to list all buckets.

              rclone lsf remote:

       Note the colon (:) character at the end of the command line.

   Delete a bucket
       Use the rmdir command to delete an empty bucket.

              rclone rmdir remote:bucket

       Use the purge command to delete a non-empty bucket with all its content.

              rclone purge remote:bucket

   Upload objects
       Use the copy command to upload an object.

              rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/

       The --progress flag  is  for  displaying  progress  information.   Remove  it  if  you  don’t  need  this
       information.

       Use a folder in the local path to upload all its objects.

              rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/

       Only modified files will be copied.

   List objects
       Use the ls command to list recursively all objects in a bucket.

              rclone ls remote:bucket

       Add the folder to the remote path to list recursively all objects in this folder.

              rclone ls remote:bucket/path/to/dir/

       Use the lsf command to list non-recursively all objects in a bucket or a folder.

              rclone lsf remote:bucket/path/to/dir/

   Download objects
       Use the copy command to download an object.

              rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/

       The  --progress  flag  is  for  displaying  progress  information.   Remove  it  if  you  don’t need this
       information.

       Use a folder in the remote path to download all its objects.

              rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/

   Delete objects
       Use the deletefile command to delete a single object.

              rclone deletefile remote:bucket/path/to/dir/file.ext

       Use the delete command to delete all objects in a folder.

              rclone delete remote:bucket/path/to/dir/

   Print the total size of objects
       Use the size command to print the total size of objects in a bucket or a folder.

              rclone size remote:bucket/path/to/dir/

   Sync two Locations
       Use the sync command to sync the source to the destination, changing the destination only,  deleting  any
       excess files.

              rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/

       The  --progress  flag  is  for  displaying  progress  information.   Remove  it  if  you  don’t need this
       information.

       Since this can cause data loss, test first with the --dry-run flag to see exactly what  would  be  copied
       and deleted.

       The sync can also be done from Storj to the local file system.

              rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/

       Or between two Storj buckets.

              rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/

       Or even between another cloud storage and Storj.

              rclone sync -i --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/

   Limitations
       rclone  about  is  not  supported  by  the rclone Storj backend.  Backends without this capability cannot
       determine free space for an rclone mount or use policy mfs (most free space) as a  member  of  an  rclone
       union remote.

       See List of backends that do not support rclone about and rclone about

   Known issues
       If  you  get  errors like too many open files this usually happens when the default ulimit for system max
       open files is exceeded.  Native Storj protocol opens a large number of TCP connections (each of which  is
       counted  as  an  open file).  For a single upload stream you can expect 110 TCP connections to be opened.
       For a single download stream you can expect 35.  This batch of connections will be opened  for  every  64
       MiB  segment  and  you  should  also  expect  TCP connections to be reused.  If you do many transfers you
       eventually open a connection to most storage nodes (thousands of nodes).

       To fix these, please raise your system limits.  You can do this by issuing ulimit -n 65536  just  before
       you run rclone.  To change the limits more permanently you can add this to your shell startup script,
       e.g. $HOME/.bashrc,   or   change   the   system-wide   configuration,  usually  /etc/sysctl.conf  and/or
       /etc/security/limits.conf, but please refer to your operating system manual.
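
       For example, in a shell session or script (paths illustrative):

              ulimit -n 65536
              rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/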

SugarSync

       SugarSync is a cloud service that enables active synchronization of  files  across  computers  and  other
       devices for file backup, access, syncing, and sharing.

   Configuration
       The  initial  setup  for  SugarSync involves getting a token from SugarSync which you can do with rclone.
       rclone config walks you through it.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Sugarsync
                 \ "sugarsync"
              [snip]
              Storage> sugarsync
              ** See help for sugarsync backend at: https://rclone.org/sugarsync/ **

              Sugarsync App ID.
              Leave blank to use rclone's.
              Enter a string value. Press Enter for the default ("").
              app_id>
              Sugarsync Access Key ID.
              Leave blank to use rclone's.
              Enter a string value. Press Enter for the default ("").
              access_key_id>
              Sugarsync Private Access Key
              Leave blank to use rclone's.
              Enter a string value. Press Enter for the default ("").
              private_access_key>
              Permanently delete files if true
              otherwise put them in the deleted files.
              Enter a boolean value (true or false). Press Enter for the default ("false").
              hard_delete>
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              Username (email address)> nick@craig-wood.com
              Your Sugarsync password is only required during setup and will not be stored.
              password:
              --------------------
              [remote]
              type = sugarsync
              refresh_token = https://api.sugarsync.com/app-authorization/XXXXXXXXXXXXXXXXXX
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Note that the config asks for your email and password but doesn't store them; it only uses them  to  get
       the initial token.

       Once configured you can then use rclone like this,

       List directories (sync folders) in top level of your SugarSync

              rclone lsd remote:

       List all the files in your SugarSync folder “Test”

              rclone ls remote:Test

       To copy a local directory to a SugarSync folder called backup

              rclone copy /home/source remote:backup

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

       NB you can't create files in the top level folder; you have to create a folder, which rclone will  create
       as a “Sync Folder” with SugarSync.

   Modified time and hashes
       SugarSync does not support modification times or hashes, therefore syncing will  default  to  --size-only
       checking.  Note that using --update will work as rclone can read the time files were uploaded.
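
       For example, to sync using upload times rather than sizes alone (paths illustrative):

              rclone sync --update /home/source remote:backup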

   Restricted filename characters
       SugarSync replaces the default restricted characters set except for DEL.

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in XML strings.

   Deleting files
       Deleted files will be moved to the “Deleted items” folder by default.

       However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if
       you would like files to be deleted straight away.
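
       For example (path illustrative):

              rclone delete --sugarsync-hard-delete remote:Test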

   Standard options
       Here are the Standard options specific to sugarsync (Sugarsync).

   –sugarsync-app-id
       Sugarsync App ID.

       Leave blank to use rclone’s.

       Properties:

       • Config: app_id

       • Env Var: RCLONE_SUGARSYNC_APP_ID

       • Type: string

       • Required: false

   –sugarsync-access-key-id
       Sugarsync Access Key ID.

       Leave blank to use rclone’s.

       Properties:

       • Config: access_key_id

       • Env Var: RCLONE_SUGARSYNC_ACCESS_KEY_ID

       • Type: string

       • Required: false

   –sugarsync-private-access-key
       Sugarsync Private Access Key.

       Leave blank to use rclone’s.

       Properties:

       • Config: private_access_key

       • Env Var: RCLONE_SUGARSYNC_PRIVATE_ACCESS_KEY

       • Type: string

       • Required: false

   –sugarsync-hard-delete
       Permanently delete files if true, otherwise put them in the deleted files.

       Properties:

       • Config: hard_delete

       • Env Var: RCLONE_SUGARSYNC_HARD_DELETE

       • Type: bool

       • Default: false

   Advanced options
       Here are the Advanced options specific to sugarsync (Sugarsync).

   –sugarsync-refresh-token
       Sugarsync refresh token.

       Leave blank normally, will be auto configured by rclone.

       Properties:

       • Config: refresh_token

       • Env Var: RCLONE_SUGARSYNC_REFRESH_TOKEN

       • Type: string

       • Required: false

   –sugarsync-authorization
       Sugarsync authorization.

       Leave blank normally, will be auto configured by rclone.

       Properties:

       • Config: authorization

       • Env Var: RCLONE_SUGARSYNC_AUTHORIZATION

       • Type: string

       • Required: false

   –sugarsync-authorization-expiry
       Sugarsync authorization expiry.

       Leave blank normally, will be auto configured by rclone.

       Properties:

       • Config: authorization_expiry

       • Env Var: RCLONE_SUGARSYNC_AUTHORIZATION_EXPIRY

       • Type: string

       • Required: false

   –sugarsync-user
       Sugarsync user.

       Leave blank normally, will be auto configured by rclone.

       Properties:

       • Config: user

       • Env Var: RCLONE_SUGARSYNC_USER

       • Type: string

       • Required: false

   –sugarsync-root-id
       Sugarsync root id.

       Leave blank normally, will be auto configured by rclone.

       Properties:

       • Config: root_id

       • Env Var: RCLONE_SUGARSYNC_ROOT_ID

       • Type: string

       • Required: false

   –sugarsync-deleted-id
       Sugarsync deleted folder id.

       Leave blank normally, will be auto configured by rclone.

       Properties:

       • Config: deleted_id

       • Env Var: RCLONE_SUGARSYNC_DELETED_ID

       • Type: string

       • Required: false

   –sugarsync-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_SUGARSYNC_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Ctl,InvalidUtf8,Dot

   Limitations
       rclone  about  is  not  supported  by  the  SugarSync  backend.   Backends without this capability cannot
       determine free space for an rclone mount or use policy mfs (most free space) as a  member  of  an  rclone
       union remote.

       See List of backends that do not support rclone about and rclone about

Tardigrade

       The  Tardigrade  backend has been renamed to be the Storj backend.  Old configuration files will continue
       to work.

Uptobox

       This is a Backend for Uptobox file storage service.  Uptobox is closer  to  a  one-click  hoster  than  a
       traditional cloud storage provider and therefore not suitable for long term storage.

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       To  configure  an  Uptobox  backend  you’ll need your personal api token.  You’ll find it in your account
       settings

       Here is an example of how to make a remote called remote with the default setup.  First run:

              rclone config

       This will guide you through an interactive setup process:

              Current remotes:

              Name                 Type
              ====                 ====
              TestUptobox          uptobox

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> n
              name> uptobox
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [...]
              37 / Uptobox
                 \ "uptobox"
              [...]
              Storage> uptobox
              ** See help for uptobox backend at: https://rclone.org/uptobox/ **

              Your API Key, get it from https://uptobox.com/my_account
              Enter a string value. Press Enter for the default ("").
              api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              --------------------
              [uptobox]
              type = uptobox
              api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d>

       Once configured you can then use rclone like this,

       List directories in top level of your Uptobox

              rclone lsd remote:

       List all the files in your Uptobox

              rclone ls remote:

       To copy a local directory to an Uptobox directory called backup

              rclone copy /home/source remote:backup

   Modified time and hashes
       Uptobox supports neither modified times nor checksums.

   Restricted filename characters
       In addition to the default restricted characters set the following characters are also replaced:

       Character   Value   Replacement
       ────────────────────────────────
        "           0x22        ＂
        `           0x60        ｀

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in XML strings.

   Standard options
       Here are the Standard options specific to uptobox (Uptobox).

   –uptobox-access-token
       Your access token.

       Get it from https://uptobox.com/my_account.

       Properties:

       • Config: access_token

       • Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to uptobox (Uptobox).

   –uptobox-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_UPTOBOX_ENCODING

       • Type: MultiEncoder

       • Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot

   Limitations
       Uptobox will delete inactive files that have not been accessed in 60 days.

       rclone about is not supported by this backend; an overview of used space can, however, be  seen  in  the
       Uptobox web interface.

Union

       The union remote provides a unification similar to UnionFS using other remotes.

       Paths   may   be   as   deep   as   required  or  a  local  path,  e.g. remote:directory/subdirectory  or
       /directory/subdirectory.

       During the initial setup with rclone config you will specify the upstream remotes as a  space  separated
       list.  The upstream remotes can be either local paths or other remotes.

       The attributes :ro and :nc can be attached to the end of a path to tag the remote as read only or no
       create, e.g. remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.
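
       A minimal sketch of a union with a read-only upstream (remote names and paths are placeholders):

              [backup]
              type = union
              upstreams = mydrive:documents archive:old-documents:ro

       Files are read from both upstreams, but create operations will only consider mydrive:documents.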

       Subfolders can be used in upstream remotes.  Assume a union  remote  named  backup  with  the  upstream
       mydrive:private/backup.  Invoking rclone mkdir backup:desktop is exactly the same  as  invoking  rclone
       mkdir mydrive:private/backup/desktop.

       There  will  be  no  special  handling  of  paths  containing  ..   segments.   Invoking   rclone   mkdir
       backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.

   Configuration
       Here is an example of how to make a union called remote for local folders.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Union merges the contents of several remotes
                 \ "union"
              [snip]
              Storage> union
              List of space separated upstreams.
              Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.
              Enter a string value. Press Enter for the default ("").
              upstreams> remote1:dir1 remote2:dir2 remote3:dir3
              Policy to choose upstream on ACTION class.
              Enter a string value. Press Enter for the default ("epall").
              action_policy>
              Policy to choose upstream on CREATE class.
              Enter a string value. Press Enter for the default ("epmfs").
              create_policy>
              Policy to choose upstream on SEARCH class.
              Enter a string value. Press Enter for the default ("ff").
              search_policy>
              Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
              Enter a signed integer. Press Enter for the default ("120").
              cache_time>
              Remote config
              --------------------
              [remote]
              type = union
              upstreams = remote1:dir1 remote2:dir2 remote3:dir3
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y
              Current remotes:

              Name                 Type
              ====                 ====
              remote               union

              e) Edit existing remote
              n) New remote
              d) Delete remote
              r) Rename remote
              c) Copy remote
              s) Set configuration password
              q) Quit config
              e/n/d/r/c/s/q> q

       Once configured you can then use rclone like this,

       List directories in top level in remote1:dir1, remote2:dir2 and remote3:dir3

              rclone lsd remote:

       List all the files in remote1:dir1, remote2:dir2 and remote3:dir3

              rclone ls remote:

       Copy another local directory to the union directory called source, which will be placed into remote3:dir3

              rclone copy C:\source remote:source

   Behavior / Policies
       The behavior of the union backend is inspired by trapexit/mergerfs.  All functions  are  grouped  into  3
       categories: action, create and search.  These functions and categories can be  assigned  a  policy  which
       dictates what file or directory is chosen when performing that behavior.  Any policy can be assigned to a
       function or category, though some may not be very useful in practice.  For instance: rand (random) may be
       useful for file creation (create) but could lead to very odd behavior if used for delete  when  there  is
       more than one copy of the file.

   Function / Category classifications
        Category   Description              Functions
        ───────────────────────────────────────────────────────────────────────
        action     Writing to an            move, rmdir, rmdirs, delete, purge
                   existing file            and copy, sync (as destination
                                            when the file exists)
        create     Creating a               copy, sync (as destination when
                   non-existing file        the file does not exist)
        search     Reading and              ls, lsd, lsl, cat, md5sum, sha1sum
                   listing a file           and copy, sync (as source)
        N/A                                 size, about

   Path Preservation
       Policies, as described below, are of two basic types: path preserving and non-path preserving.

       All policies which start with ep (epff, eplfs, eplus, epmfs, eprand) are path preserving.  ep stands  for
       existing path.

       A  path  preserving  policy  will  only consider upstreams where the relative path being accessed already
       exists.

       When using non-path preserving policies paths will be created in target upstreams as necessary.

   Quota Relevant Policies
       Some policies rely on quota information.  These policies should be used only if  your  upstreams  support
       the respective quota fields.

       Policy       Required Field
       ────────────────────────────
       lfs, eplfs   Free
       mfs, epmfs   Free
       lus, eplus   Used
       lno, eplno   Objects

       To  check  if  your upstream supports the field, run rclone about remote: [flags] and see if the required
       field exists.
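
       For example (output illustrative; the exact fields reported depend on the upstream):

              $ rclone about remote:
              Total:   17 GiB
              Used:    7.444 GiB
              Free:    1.315 GiB
              Objects: 105

       Here Free and Used are present, so the lfs/eplfs, mfs/epmfs and lus/eplus policies can be used; lno/eplno
       additionally requires the Objects field.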

   Filters
       Policies basically search upstream remotes and create a list of files / paths for functions to  work  on.
       The  policy  is responsible for filtering and sorting.  The policy type defines the sorting but filtering
       is mostly uniform as described below.

       • No search policies filter.

       • All action policies will filter out remotes which are tagged as read-only.

       • All create policies will filter out remotes which are tagged read-only or no-create.

       If all remotes are filtered an error will be returned.

   Policy descriptions
       The policy definitions are inspired by trapexit/mergerfs but are not  exactly  the  same.   Some  policy
       definitions could differ due to the much larger latency of remote file systems.

       Policy             Description
       ──────────────────────────────────────────────────────────────────────────
       all                Search category: same as epall.  Action category: same
                          as epall.  Create category: act on all upstreams.
       epall  (existing   Search category: Given this order configured,  act  on
       path, all)         the  first  one  found where the relative path exists.
                          Action category: apply to all found.  Create category:
                          act on all upstreams where the relative path exists.
       epff   (existing   Act on the first one  found,  by  the  time  upstreams
       path,      first   reply, where the relative path exists.
       found)
       eplfs  (existing   Of all the upstreams on which the relative path exists
       path, least free   choose the one with the least free space.
       space)
       eplus  (existing   Of all the upstreams on which the relative path exists
       path, least used   choose the one with the least used space.
       space)
       eplno  (existing   Of all the upstreams on which the relative path exists
       path,      least   choose the one with the least number of objects.
       number        of
       objects)
       epmfs  (existing   Of all the upstreams on which the relative path exists
       path,  most free   choose the one with the most free space.
       space)
       eprand (existing   Calls epall and then  randomizes.   Returns  only  one
       path, random)      upstream.
       ff (first found)   Search  category: same as epff.  Action category: same
                          as epff.  Create category: Act on the first one  found
                          by the time upstreams reply.
       lfs  (least free   Search category: same as eplfs.  Action category: same
       space)             as eplfs.  Create category: Pick the upstream with the
                          least available free space.
       lus (least  used   Search category: same as eplus.  Action category: same
       space)             as eplus.  Create category: Pick the upstream with the
                          least used space.
       lno       (least   Search category: same as eplno.  Action category: same
       number        of   as eplno.  Create category: Pick the upstream with the
       objects)           least number of objects.
       mfs  (most  free   Search category: same as epmfs.  Action category: same
       space)             as epmfs.  Create category: Pick the upstream with the
                          most available free space.
       newest             Pick the file / directory with the largest mtime.
       rand (random)      Calls all  and  then  randomizes.   Returns  only  one
                          upstream.

   Standard options
       Here are the Standard options specific to union (Union merges the contents of several upstream fs).

   –union-upstreams
       List of space separated upstreams.

       Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.

       Properties:

       • Config: upstreams

       • Env Var: RCLONE_UNION_UPSTREAMS

       • Type: string

       • Required: true

   –union-action-policy
       Policy to choose upstream on ACTION category.

       Properties:

       • Config: action_policy

       • Env Var: RCLONE_UNION_ACTION_POLICY

       • Type: string

       • Default: “epall”

   –union-create-policy
       Policy to choose upstream on CREATE category.

       Properties:

       • Config: create_policy

       • Env Var: RCLONE_UNION_CREATE_POLICY

       • Type: string

       • Default: “epmfs”

   –union-search-policy
       Policy to choose upstream on SEARCH category.

       Properties:

       • Config: search_policy

       • Env Var: RCLONE_UNION_SEARCH_POLICY

       • Type: string

       • Default: “ff”

   –union-cache-time
       Cache time of usage and free space (in seconds).

       This option is only useful when a path preserving policy is used.

       Properties:

       • Config: cache_time

       • Env Var: RCLONE_UNION_CACHE_TIME

       • Type: int

       • Default: 120

   Advanced options
       Here are the Advanced options specific to union (Union merges the contents of several upstream fs).

   –union-min-free-space
       Minimum viable free space for lfs/eplfs policies.

       If  a  remote  has  less  than  this  much free space then it won’t be considered for use in lfs or eplfs
       policies.

       Properties:

       • Config: min_free_space

       • Env Var: RCLONE_UNION_MIN_FREE_SPACE

       • Type: SizeSuffix

       • Default: 1Gi

   Metadata
       Any metadata supported by the underlying remote is read and written.

       See the metadata docs for more info.

WebDAV

       Paths are specified as remote:path

       Paths may be as deep as required, e.g. remote:directory/subdirectory.

   Configuration
       To configure the WebDAV remote you will need to have a URL for it, and a username and password.   If  you
       know what kind of system you are connecting to then rclone can enable extra features.

       Here is an example of how to make a remote called remote.  First run:

               rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              q) Quit config
              n/s/q> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / WebDAV
                 \ "webdav"
              [snip]
              Storage> webdav
              URL of http host to connect to
              Choose a number from below, or type in your own value
               1 / Connect to example.com
                 \ "https://example.com"
              url> https://example.com/remote.php/webdav/
              Name of the WebDAV site/service/software you are using
              Choose a number from below, or type in your own value
               1 / Nextcloud
                 \ "nextcloud"
               2 / Owncloud
                 \ "owncloud"
               3 / Sharepoint Online, authenticated by Microsoft account.
                 \ "sharepoint"
               4 / Sharepoint with NTLM authentication. Usually self-hosted or on-premises.
                 \ "sharepoint-ntlm"
               5 / Other site/service or software
                 \ "other"
              vendor> 1
              User name
              user> user
              Password.
              y) Yes type in my own password
              g) Generate random password
              n) No leave this optional password blank
              y/g/n> y
              Enter the password:
              password:
              Confirm the password:
              password:
              Bearer token instead of user/pass (e.g. a Macaroon)
              bearer_token>
              Remote config
              --------------------
              [remote]
              type = webdav
              url = https://example.com/remote.php/webdav/
              vendor = nextcloud
              user = user
              pass = *** ENCRYPTED ***
              bearer_token =
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       Once configured you can then use rclone like this,

       List directories in top level of your WebDAV

              rclone lsd remote:

       List all the files in your WebDAV

              rclone ls remote:

       To copy a local directory to a WebDAV directory called backup

              rclone copy /home/source remote:backup

   Modified time and hashes
       Plain  WebDAV  does not support modified times.  However when used with Owncloud or Nextcloud rclone will
       support modified times.

       Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud  rclone  will
       support  SHA1  and MD5 hashes.  Depending on the exact version of Owncloud or Nextcloud hashes may appear
       on all objects, or only on objects which had a hash uploaded with them.

   Standard options
       Here are the Standard options specific to webdav (WebDAV).

   –webdav-url
       URL of http host to connect to.

       E.g.  https://example.com.

       Properties:

       • Config: url

       • Env Var: RCLONE_WEBDAV_URL

       • Type: string

       • Required: true

   –webdav-vendor
       Name of the WebDAV site/service/software you are using.

       Properties:

       • Config: vendor

       • Env Var: RCLONE_WEBDAV_VENDOR

       • Type: string

       • Required: false

       • Examples:

         • “nextcloud”

           • Nextcloud

         • “owncloud”

           • Owncloud

         • “sharepoint”

           • Sharepoint Online, authenticated by Microsoft account

         • “sharepoint-ntlm”

           • Sharepoint with NTLM authentication, usually self-hosted or on-premises

         • “other”

           • Other site/service or software

   –webdav-user
       User name.

       In case NTLM authentication is used, the username should be in the format Domain\username.

       Properties:

       • Config: user

       • Env Var: RCLONE_WEBDAV_USER

       • Type: string

       • Required: false

   –webdav-pass
       Password.

       NB Input to this must be obscured - see rclone obscure.

       Properties:

       • Config: pass

       • Env Var: RCLONE_WEBDAV_PASS

       • Type: string

       • Required: false

   –webdav-bearer-token
       Bearer token instead of user/pass (e.g. a Macaroon).

       Properties:

       • Config: bearer_token

       • Env Var: RCLONE_WEBDAV_BEARER_TOKEN

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to webdav (WebDAV).

   –webdav-bearer-token-command
       Command to run to get a bearer token.

       Properties:

       • Config: bearer_token_command

       • Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND

       • Type: string

       • Required: false

   –webdav-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       The default encoding is
       Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8
       for sharepoint-ntlm, or identity otherwise.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_WEBDAV_ENCODING

       • Type: string

       • Required: false

   –webdav-headers
       Set HTTP headers for all transactions.

       Use this to set additional HTTP headers for all transactions.

       The input format is a comma separated list of key,value pairs.  Standard CSV encoding may be used.

       For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.

       You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.

       Properties:

       • Config: headers

       • Env Var: RCLONE_WEBDAV_HEADERS

       • Type: CommaSepList

       • Default:
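
       For example, on the command line (header value illustrative):

              rclone lsd remote: --webdav-headers "Cookie,name=value"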

   Provider notes
       See below for notes on specific providers.

   Owncloud
       Click  on  the settings cog in the bottom right of the page and this will show the WebDAV URL that rclone
       needs in the config step.  It will look something like https://example.com/remote.php/webdav/.

       Owncloud supports modified times using the X-OC-Mtime header.

   Nextcloud
       This is configured in an identical way to Owncloud.   Note  that  Nextcloud  initially  did  not  support
       streaming  of files (rcat) whereas Owncloud did, but this seems to be fixed as of 2020-11-27 (tested with
       rclone v1.53.1 and Nextcloud Server v19).

   Sharepoint Online
       Rclone can be used with Sharepoint provided by OneDrive for Business  or  Office365  Education  accounts.
       This feature is only needed for a few of these accounts, mostly Office365 Education ones.  These accounts
       are sometimes not verified by the domain owner (see GitHub issue #1975).

       This means that these accounts can't be added using the official API (other accounts should work with the
       “onedrive” option).  However, it is possible to access them using webdav.

       To use a sharepoint remote with rclone, you first need to get your remote's URL:

       • Open your OneDrive in a web browser, signing in if necessary

       • Now    take    a    look    at    your    address    bar;    the    URL    should   look   like   this:
         https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx

       You’ll only need this URL up to  the  email  address.   After  that,  you’ll  most  likely  want  to  add
       “/Documents”.  That subdirectory contains the actual data stored on your OneDrive.

       Then     add     the     remote     to     rclone:      configure      the      url     as
       https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents and use your normal account email
       and password for user and pass.  If you have 2FA enabled, you have to generate an app password.  Set  the
       vendor to sharepoint.

       Your config file should look like this:

              [sharepoint]
              type = webdav
              url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
              vendor = sharepoint
              user = YourEmailAddress
              pass = encryptedpassword

   Sharepoint with NTLM Authentication
       Use  this  option  in  case  your  (hosted)  Sharepoint  is  not  tied to OneDrive accounts and uses NTLM
       authentication.

       To get the url configuration, similarly to the above, first navigate to the  desired  directory  in  your
       browser to get the URL, then strip everything after the name of the opened directory.

       Example: If the URL is: https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx

       The configuration to use would be: https://example.sharepoint.com/sites/12345/Documents

       Set the vendor to sharepoint-ntlm.

       NTLM uses domain and user name combination for authentication, set user to DOMAIN\username.

       Your config file should look like this:

              [sharepoint]
              type = webdav
              url = https://[YOUR-DOMAIN]/some-path-to/Documents
              vendor = sharepoint-ntlm
              user = DOMAIN\user
              pass = encryptedpassword

   Required Flags for SharePoint
       As SharePoint does some special things with uploaded documents, you won't be able to  use  the  document
       size or the document hash to check whether a file has changed since the upload, or which file is newer.

       For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.)   from/to  SharePoint
       (like copy, sync, etc.), you should append these flags to ensure Rclone uses the “Last Modified” datetime
       property to compare your documents:

              --ignore-size --ignore-checksum --update
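
       For example, using the sharepoint remote configured above (local path illustrative):

              rclone sync --ignore-size --ignore-checksum --update /home/source sharepoint:backup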

   dCache
       dCache is a storage system that supports many protocols and authentication/authorisation  schemes.   For
       WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos,  and
       various bearer tokens, including Macaroons
       (https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) and
       OpenID-Connect access tokens.

       Configure as normal using the other vendor type.  Don't enter a username or password;  instead  enter  your
       Macaroon as the bearer_token.

       The config will end up looking something like this.

              [dcache]
              type = webdav
              url = https://dcache...
              vendor = other
              user =
              pass =
              bearer_token = your-macaroon

       There  is  a  script that  obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config
       file.

       Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache.

   OpenID-Connect
       dCache also supports authenticating with OpenID-Connect access  tokens.   OpenID-Connect  is  a  protocol
       (based  on  OAuth  2.0)  that  allows services to identify users who have authenticated with some central
       service.

       Support for OpenID-Connect in  rclone  is  currently  achieved  using  another  software  package  called
       oidc-agent.   This is a command-line tool that facilitates obtaining an access token.  Once installed and
       configured, an access token is obtained by running the oidc-token command.  The following example shows a
       (shortened) access token obtained from the XDC OIDC Provider.

              paul@celebrimbor:~$ oidc-token XDC
              eyJraWQ[...]QFXDt0
              paul@celebrimbor:~$

       Note Before the oidc-token command will work, the refresh token must be loaded into the oidc agent.  This
       is done with the oidc-add command (e.g., oidc-add XDC).  This is typically done once per  login  session.
       Full  details  on  this  and  how  to  register  oidc-agent  with  your OIDC Provider are provided in the
       oidc-agent documentation.

       The rclone bearer_token_command configuration option is used to fetch the access token from oidc-agent.

       Configure as a normal WebDAV endpoint, using the 'other' vendor and leaving the username  and  password
       empty.  When prompted, choose to edit the advanced config and enter the command to get  a  bearer  token
       (e.g., oidc-token XDC).

       The  following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from
       the XDC OIDC Provider.

              [dcache]
              type = webdav
              url = https://dcache.example.org/
              vendor = other
              bearer_token_command = oidc-token XDC

Yandex Disk

       Yandex Disk is a cloud storage solution created by Yandex.

   Configuration
       Here is an example of making a yandex configuration.  First run

              rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              n/s> n
              name> remote
              Type of storage to configure.
              Choose a number from below, or type in your own value
              [snip]
              XX / Yandex Disk
                 \ "yandex"
              [snip]
              Storage> yandex
              Yandex Client Id - leave blank normally.
              client_id>
              Yandex Client Secret - leave blank normally.
              client_secret>
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes
              n) No
              y/n> y
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              --------------------
              [remote]
              client_id =
              client_secret =
              token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"OAuth","expiry":"2016-12-29T12:27:11.362788025Z"}
              --------------------
              y) Yes this is OK
              e) Edit this remote
              d) Delete this remote
              y/e/d> y

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Note that rclone runs a webserver on your local machine to collect the  token  as  returned  from  Yandex
       Disk.   This  only runs from the moment it opens your browser to the moment you get back the verification
       code.  This is on http://127.0.0.1:53682/ and it may require you to unblock  it  temporarily  if  you
       are running a host firewall.

       Once configured you can then use rclone like this,

       See top level directories

              rclone lsd remote:

       Make a new directory

              rclone mkdir remote:directory

       List the contents of a directory

              rclone ls remote:directory

       Sync /home/local/directory to the remote path, deleting any excess files in the path.

              rclone sync -i /home/local/directory remote:directory

       Yandex paths may be as deep as required, e.g. remote:directory/subdirectory.

   Modified time
       Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in
       RFC3339 with nanoseconds format.

   MD5 checksums
       MD5 checksums are natively supported by Yandex Disk.

   Emptying Trash
       If  you  wish  to  empty your trash you can use the rclone cleanup remote: command which will permanently
       delete all your trashed files.  This command does not take any path arguments.

   Quota information
       To view your current quota you can use the rclone about remote: command which  will  display  your  usage
       limit (quota) and the current usage.

   Restricted filename characters
       The default restricted characters set is replaced.

       Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.

   Standard options
       Here are the Standard options specific to yandex (Yandex Disk).

   –yandex-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_YANDEX_CLIENT_ID

       • Type: string

       • Required: false

   –yandex-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_YANDEX_CLIENT_SECRET

       • Type: string

       • Required: false

   Advanced options
       Here are the Advanced options specific to yandex (Yandex Disk).

   –yandex-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_YANDEX_TOKEN

       • Type: string

       • Required: false

   –yandex-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_YANDEX_AUTH_URL

       • Type: string

       • Required: false

   --yandex-token-url
       Token server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_YANDEX_TOKEN_URL

       • Type: string

       • Required: false

   --yandex-hard-delete
       Delete files permanently rather than putting them into the trash.

       Properties:

       • Config: hard_delete

       • Env Var: RCLONE_YANDEX_HARD_DELETE

       • Type: bool

       • Default: false

   --yandex-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_YANDEX_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Del,Ctl,InvalidUtf8,Dot

   Limitations
       When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout
       parameter.  This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before
       returning confirmation that the file has been uploaded.  The default handling of timeouts in rclone is
       to assume a 5 minute pause is an error and close the connection - you’ll see net/http: timeout awaiting
       response headers errors in the logs if this is happening.  Setting the timeout to twice the maximum
       file size in GiB, in minutes, should be enough, so to upload a 30 GiB file set a timeout of
       2 * 30 = 60 minutes, that is --timeout 60m.
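
       For example, a hypothetical upload of a 30 GiB disk image (file and remote names illustrative):

              rclone copy --timeout 60m big-backup.iso remote:backups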

       Having a Yandex Mail account is mandatory to use the Yandex.Disk  subscription.   Token  generation  will
       work without a mail account, but Rclone won’t be able to complete any actions.

              [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.

Zoho Workdrive

       Zoho WorkDrive is a cloud storage solution created by Zoho.

   Configuration
       Here is an example of making a zoho configuration.  First run

              rclone config

       This will guide you through an interactive setup process:

              No remotes found, make a new one?
              n) New remote
              s) Set configuration password
              n/s> n
              name> remote
              Type of storage to configure.
              Enter a string value. Press Enter for the default ("").
              Choose a number from below, or type in your own value
              [snip]
              XX / Zoho
                 \ "zoho"
              [snip]
              Storage> zoho
              ** See help for zoho backend at: https://rclone.org/zoho/ **

              OAuth Client Id
              Leave blank normally.
              Enter a string value. Press Enter for the default ("").
              client_id>
              OAuth Client Secret
              Leave blank normally.
              Enter a string value. Press Enter for the default ("").
              client_secret>
              Edit advanced config? (y/n)
              y) Yes
              n) No (default)
              y/n> n
              Remote config
              Use auto config?
               * Say Y if not sure
               * Say N if you are working on a remote or headless machine
              y) Yes (default)
              n) No
              y/n>
              If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=LVn0IHzxej1ZkmQw31d0wQ
              Log in and authorize rclone for access
              Waiting for code...
              Got code
              Choose a number from below, or type in your own value
               1 / MyTeam
                 \ "4u28602177065ff22426787a6745dba8954eb"
              Enter a Team ID> 1
              Choose a number from below, or type in your own value
               1 / General
                 \ "4u2869d2aa6fca04f4f2f896b6539243b85b1"
              Enter a Workspace ID> 1
              --------------------
              [remote]
              type = zoho
              token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"Zoho-oauthtoken","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","expiry":"2020-10-12T00:54:52.370275223+02:00"}
              root_folder_id = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
              --------------------
              y) Yes this is OK (default)
              e) Edit this remote
              d) Delete this remote
              y/e/d>

       See the remote setup docs for how to set it up on a machine with no Internet browser available.

       Rclone runs a webserver on your local computer to collect the authorization token from Zoho Workdrive.
       It only runs from the moment your browser is opened until the token is returned.  The webserver runs on
       http://127.0.0.1:53682/.  If local port 53682 is protected by a firewall you may need to temporarily
       unblock the firewall to complete authorization.

       Once configured you can then use rclone like this,

       See top level directories

              rclone lsd remote:

       Make a new directory

              rclone mkdir remote:directory

       List the contents of a directory

              rclone ls remote:directory

       Sync /home/local/directory to the remote path, deleting any excess files in the path.

              rclone sync -i /home/local/directory remote:directory

       Zoho paths may be as deep as required, e.g. remote:directory/subdirectory.

   Modified time
       Modified times are currently not supported for Zoho Workdrive.

   Checksums
       No checksums are supported.

   Usage information
       To view your current quota you can use the rclone about remote: command which will display  your  current
       usage.
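
       For example:

              rclone about remote: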

   Restricted filename characters
       Only  control  characters and invalid UTF-8 are replaced.  In addition most Unicode full-width characters
       are not supported at all and will be removed from filenames during upload.

   Standard options
       Here are the Standard options specific to zoho (Zoho).

   --zoho-client-id
       OAuth Client Id.

       Leave blank normally.

       Properties:

       • Config: client_id

       • Env Var: RCLONE_ZOHO_CLIENT_ID

       • Type: string

       • Required: false

   --zoho-client-secret
       OAuth Client Secret.

       Leave blank normally.

       Properties:

       • Config: client_secret

       • Env Var: RCLONE_ZOHO_CLIENT_SECRET

       • Type: string

       • Required: false

   --zoho-region
       Zoho region to connect to.

       You’ll have to use the region your organization is registered in.  If not sure use  the  same  top  level
       domain as you connect to in your browser.

       Properties:

       • Config: region

       • Env Var: RCLONE_ZOHO_REGION

       • Type: string

       • Required: false

       • Examples:

         • “com”

            • United States / Global

         • “eu”

           • Europe

         • “in”

           • India

         • “jp”

           • Japan

         • “com.cn”

           • China

         • “com.au”

           • Australia
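
       For example, to connect to the Europe region you could set region = eu in the config, or set the
       environment variable for a single invocation (hypothetical example):

              RCLONE_ZOHO_REGION=eu rclone lsd remote: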

   Advanced options
       Here are the Advanced options specific to zoho (Zoho).

   --zoho-token
       OAuth Access Token as a JSON blob.

       Properties:

       • Config: token

       • Env Var: RCLONE_ZOHO_TOKEN

       • Type: string

       • Required: false

   --zoho-auth-url
       Auth server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: auth_url

       • Env Var: RCLONE_ZOHO_AUTH_URL

       • Type: string

       • Required: false

   --zoho-token-url
       Token server URL.

       Leave blank to use the provider defaults.

       Properties:

       • Config: token_url

       • Env Var: RCLONE_ZOHO_TOKEN_URL

       • Type: string

       • Required: false

   --zoho-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_ZOHO_ENCODING

       • Type: MultiEncoder

       • Default: Del,Ctl,InvalidUtf8

   Setting up your own client_id
       For Zoho we advise you to set up your own client_id.  To do so you have to complete the following steps.

       1. Log in to the Zoho API Console

       2. Create  a  new  client of type “Server-based Application”.  The name and website don’t matter, but you
          must add the redirect URL http://localhost:53682/.

       3. Once the client is created, you can go to the settings tab and enable it in other regions.

       The client id and client secret can now be used with rclone.
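
       For example, they can be entered when prompted by rclone config, or supplied on the command line
       (placeholder values shown):

              rclone lsd remote: --zoho-client-id YOUR_CLIENT_ID --zoho-client-secret YOUR_CLIENT_SECRET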

Local Filesystem

       Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so

              rclone sync -i /home/source /tmp/destination

       Will sync /home/source to /tmp/destination.

   Configuration
       For consistency’s sake one can also configure a remote of type local in the config file, and access the
       local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not
       to.
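
       If you do want one, a minimal config entry might look like this (remote name illustrative):

              [remote]
              type = local

       after which rclone ls remote:path/to/wherever lists the local directory path/to/wherever.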

   Modified time
       Rclone reads and writes the modified time using an accuracy determined by the OS.  Typically this is
       1 ns on Linux, 10 ns on Windows and 1 second on OS X.

   Filenames
       Filenames should be encoded in UTF-8 on disk.  This is the normal case for Windows and OS X.

       There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file
       names.  If you are using an old Linux filesystem with non-UTF-8 file names (e.g. latin1) then you can
       use the convmv tool to convert the filesystem to UTF-8.  This tool is available in most distributions’
       package managers.
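
       For example, a typical convmv run might first preview the renames and then apply them (path
       illustrative):

              convmv -f latin1 -t utf-8 -r /path/to/files           # preview only
              convmv -f latin1 -t utf-8 -r --notest /path/to/files  # actually rename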

       If  an  invalid  (non-UTF8)  filename  is  read,  the  invalid  characters will be replaced with a quoted
       representation of the invalid bytes.  The name gro\xdf will be transferred as gro‛DF.  rclone will emit a
       debug message in this case (use -v to see), e.g.

              Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"

   Restricted characters
       With the local backend, restrictions on the characters that are usable in file or directory names  depend
       on  the  operating  system.  To check what rclone will replace by default on your system, run rclone help
       flags local-encoding.
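
       For example:

              rclone help flags local-encoding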

       On non-Windows platforms the following characters are replaced when handling file names.

       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
        /           0x2F        ／

       When running on Windows the following characters are replaced.  This list is based on the Windows file
       naming conventions (see
       https://docs.microsoft.com/de-de/windows/desktop/FileIO/naming-a-file#naming-conventions).

       Character   Value   Replacement
       ────────────────────────────────
       NUL         0x00         ␀
       SOH         0x01         ␁
       STX         0x02         ␂
       ETX         0x03         ␃
       EOT         0x04         ␄
       ENQ         0x05         ␅
       ACK         0x06         ␆
       BEL         0x07         ␇
       BS          0x08         ␈
       HT          0x09         ␉
       LF          0x0A         ␊
       VT          0x0B         ␋
       FF          0x0C         ␌
       CR          0x0D         ␍
       SO          0x0E         ␎
       SI          0x0F         ␏
       DLE         0x10         ␐
       DC1         0x11         ␑
       DC2         0x12         ␒
       DC3         0x13         ␓
       DC4         0x14         ␔
       NAK         0x15         ␕
       SYN         0x16         ␖
       ETB         0x17         ␗
       CAN         0x18         ␘
       EM          0x19         ␙
       SUB         0x1A         ␚
       ESC         0x1B         ␛
       FS          0x1C         ␜
       GS          0x1D         ␝
       RS          0x1E         ␞
       US          0x1F         ␟
        /           0x2F        ／
        "           0x22        ＂
        *           0x2A        ＊
        :           0x3A        ：
        <           0x3C        ＜
        >           0x3E        ＞
        ?           0x3F        ？
        \           0x5C        ＼
        |           0x7C        ｜

       File names on Windows also cannot end with the following characters.  These only get replaced if they
       are the last character in the name:

       Character   Value   Replacement
       ────────────────────────────────
       SP          0x20         ␠
        .           0x2E        ．

       Invalid UTF-8 bytes will also be replaced, as they can’t be converted to UTF-16.

   Paths on Windows
       On  Windows  there  are  many  ways  of  specifying a path to a file system resource.  Local paths can be
       absolute, like C:\path\to\wherever,  or  relative,  like  ..\wherever.   Network  paths  in  UNC  format,
       \\server\share,  are also supported.  Path separator can be either \ (as in C:\path\to\wherever) or / (as
       in C:/path/to/wherever).  The length of these paths is limited to 259 characters for files and 247
       characters for directories, but there is an alternative extended-length path format increasing the
       limit to (approximately) 32,767 characters.  This format requires absolute paths and the \\?\ prefix,
       e.g. \\?\D:\some\very\long\path.   For  convenience  rclone will automatically convert regular paths into
       the corresponding extended-length paths, so in most cases you do not have to worry about this (read  more
       below).

       Note  that  Windows  supports  using  the same prefix \\?\ to specify path to volumes identified by their
       GUID, e.g.  \\?\Volume{b75e2c83-0000-0000-0000-602f00000000}\some\path.  This is not supported in rclone,
       due to an issue in go.

   Long paths
       Rclone handles long paths automatically, by converting all paths to the extended-length path format
       (see https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation), which allows
       paths up to 32,767 characters.

       This conversion will ensure paths are absolute and prefix them with \\?\.  This is why a path such as
       .\files is shown as \\?\C:\files in the output, and \\server\share as \\?\UNC\server\share.

       However, in rare cases this may cause problems with buggy file system drivers like EncFS.  To disable UNC
       conversion globally, add this to your .rclone.conf file:

              [local]
              nounc = true

       If you want to selectively disable UNC, you can add it to a separate entry like this:

              [nounc]
              type = local
              nounc = true

       And use rclone like this:

              rclone copy c:\src nounc:z:\dst

       This will use UNC paths on c:\src but not on z:\dst.  Of course this will cause problems if the  absolute
       path length of a file exceeds 259 characters on z, so only use this option if you have to.

   Symlinks / Junction points
       Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

       If  you  supply  --copy-links  or  -L then rclone will follow the symlink and copy the pointed to file or
       directory.  Note that this flag is incompatible with --links / -l.

       This flag applies to all commands.

       For example, supposing you have a directory structure like this

              $ tree /tmp/a
              /tmp/a
              ├── b -> ../b
              ├── expected -> ../expected
              ├── one
              └── two
                  └── three

       Then you can see the difference with and without the flag like this

              $ rclone ls /tmp/a
                      6 one
                      6 two/three

       and

              $ rclone -L ls /tmp/a
                   4174 expected
                      6 one
                      6 two/three
                      6 b/two
                      6 b/one

   --links, -l
       Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

       If you supply this flag then rclone will copy symbolic links from the local storage, and  store  them  as
       text files, with a `.rclonelink' suffix in the remote storage.

       The text file will contain the target of the symbolic link (see example).

       This flag applies to all commands.

       For example, supposing you have a directory structure like this

              $ tree /tmp/a
              /tmp/a
              ├── file1 -> ./file4
              └── file2 -> /home/user/file3

       Copying the entire directory with `-l'

              $ rclone copyto -l /tmp/a/ remote:/tmp/a/

       The remote files are created with a `.rclonelink' suffix

              $ rclone ls remote:/tmp/a
                     5 file1.rclonelink
                    14 file2.rclonelink

       The remote files will contain the target of the symbolic links

              $ rclone cat remote:/tmp/a/file1.rclonelink
              ./file4

              $ rclone cat remote:/tmp/a/file2.rclonelink
              /home/user/file3

       Copying them back with `-l'

              $ rclone copyto -l remote:/tmp/a/ /tmp/b/

              $ tree /tmp/b
              /tmp/b
              ├── file1 -> ./file4
              └── file2 -> /home/user/file3

       However, if copied back without `-l'

              $ rclone copyto remote:/tmp/a/ /tmp/b/

              $ tree /tmp/b
              /tmp/b
              ├── file1.rclonelink
              └── file2.rclonelink

       Note that this flag is incompatible with --copy-links / -L.

   Restricting filesystems with --one-file-system
       Normally rclone will recurse through filesystems as mounted.

       However  if  you set --one-file-system or -x this tells rclone to stay in the filesystem specified by the
       root and not to recurse into different file systems.

       For example if you have a directory hierarchy like this

              root
              ├── disk1     - disk1 mounted on the root
              │   └── file3 - stored on disk1
              ├── disk2     - disk2 mounted on the root
              │   └── file4 - stored on disk2
              ├── file1     - stored on the root disk
              └── file2     - stored on the root disk

       Using rclone --one-file-system copy root remote: will only copy file1 and file2, e.g.

              $ rclone -q --one-file-system ls root
                      0 file1
                      0 file2

              $ rclone -q ls root
                      0 disk1/file3
                      0 disk2/file4
                      0 file1
                      0 file2

       NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount  to  the  same  device  as
       being on the same filesystem.

       NB This flag is only available on Unix-based systems.  On systems where it isn’t supported
       (e.g. Windows) it will be ignored.

   Advanced options
       Here are the Advanced options specific to local (Local Disk).

   --local-nounc
       Disable UNC (long path names) conversion on Windows.

       Properties:

       • Config: nounc

       • Env Var: RCLONE_LOCAL_NOUNC

       • Type: bool

       • Default: false

       • Examples:

         • “true”

           • Disables long file names.

   --copy-links / -L
       Follow symlinks and copy the pointed to item.

       Properties:

       • Config: copy_links

       • Env Var: RCLONE_LOCAL_COPY_LINKS

       • Type: bool

       • Default: false

   --links / -l
       Translate symlinks to/from regular files with a `.rclonelink' extension.

       Properties:

       • Config: links

       • Env Var: RCLONE_LOCAL_LINKS

       • Type: bool

       • Default: false

   --skip-links
       Don’t warn about skipped symlinks.

       This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge
       that they should be skipped.
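
       For example, to sync a tree that contains symlinks you deliberately want skipped, without the warnings
       (paths illustrative):

              rclone sync --skip-links /path/to/src remote:dst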

       Properties:

       • Config: skip_links

       • Env Var: RCLONE_LOCAL_SKIP_LINKS

       • Type: bool

       • Default: false

   --local-zero-size-links
       Assume the Stat size of links is zero (and read them instead) (deprecated).

       Rclone used to use the Stat size of links as the link size, but this fails in quite a few places:

       • Windows

        • On some virtual filesystems (such as LucidLink)

       • Android

       So rclone now always reads the link.

       Properties:

       • Config: zero_size_links

       • Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS

       • Type: bool

       • Default: false

   --local-unicode-normalization
       Apply unicode NFC normalization to paths and filenames.

       This flag can be used to normalize file names read from the local filesystem into Unicode NFC form.

       Rclone does not normally touch the encoding of file names it reads from the file system.

       This can be useful when using macOS as it normally provides decomposed (NFD) Unicode which in some
       languages (e.g. Korean) doesn’t display properly on some OSes.

       Note that rclone compares filenames with unicode normalization in the sync routine so this flag shouldn’t
       normally be used.
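
       If you do need it, a hypothetical copy from a Mac with names normalized to NFC might look like this
       (paths illustrative):

              rclone copy --local-unicode-normalization /Users/me/docs remote:docs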

       Properties:

       • Config: unicode_normalization

       • Env Var: RCLONE_LOCAL_UNICODE_NORMALIZATION

       • Type: bool

       • Default: false

   --local-no-check-updated
       Don’t check to see if the files change during upload.

       Normally rclone checks the size and modification time of files as they are being uploaded and aborts with
       a message which starts “can’t copy - source file is being updated” if the file changes during upload.

       However  on some file systems this modification time check may fail (e.g.  Glusterfs #2206) so this check
       can be disabled with this flag.

       If this flag is set, rclone will use its best efforts to transfer a file which is being updated.  If  the
       file  is  only  having things appended to it (e.g. a log) then rclone will transfer the log file with the
       size it had the first time rclone saw it.

       If the file is being modified throughout (not just appended to) then the transfer may fail  with  a  hash
       check failure.

       In detail, once the file has had stat() called on it for the first time we:

       • Only transfer the size that stat gave

       • Only checksum the size that stat gave

       • Don’t update the stat info for the file
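
       For example, a hypothetical copy of a live, append-only log file:

              rclone copy --local-no-check-updated /var/log/app.log remote:logs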

       Properties:

       • Config: no_check_updated

       • Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED

       • Type: bool

       • Default: false

   --one-file-system / -x
       Don’t cross filesystem boundaries (unix/macOS only).

       Properties:

       • Config: one_file_system

       • Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM

       • Type: bool

       • Default: false

   --local-case-sensitive
       Force the filesystem to report itself as case sensitive.

       Normally  the  local  backend declares itself as case insensitive on Windows/macOS and case sensitive for
       everything else.  Use this flag to override the default choice.

       Properties:

       • Config: case_sensitive

       • Env Var: RCLONE_LOCAL_CASE_SENSITIVE

       • Type: bool

       • Default: false

   --local-case-insensitive
       Force the filesystem to report itself as case insensitive.

       Normally the local backend declares itself as case insensitive on Windows/macOS and  case  sensitive  for
       everything else.  Use this flag to override the default choice.

       Properties:

       • Config: case_insensitive

       • Env Var: RCLONE_LOCAL_CASE_INSENSITIVE

       • Type: bool

       • Default: false

   --local-no-preallocate
       Disable preallocation of disk space for transferred files.

       Preallocation  of  disk  space  helps prevent filesystem fragmentation.  However, some virtual filesystem
       layers (such as Google Drive File Stream)  may  incorrectly  set  the  actual  file  size  equal  to  the
       preallocated  space,  causing  checksum  and  file  size  checks  to  fail.   Use  this  flag  to disable
       preallocation.

       Properties:

       • Config: no_preallocate

       • Env Var: RCLONE_LOCAL_NO_PREALLOCATE

       • Type: bool

       • Default: false

   --local-no-sparse
       Disable sparse files for multi-thread downloads.

       On Windows platforms rclone will make sparse files when doing multi-thread downloads.  This  avoids  long
       pauses on large files where the OS zeros the file.  However sparse files may be undesirable as they cause
       disk fragmentation and can be slow to work with.

       Properties:

       • Config: no_sparse

       • Env Var: RCLONE_LOCAL_NO_SPARSE

       • Type: bool

       • Default: false

   --local-no-set-modtime
       Disable setting modtime.

       Normally rclone updates the modification time of files after they are done uploading.  This can cause
       permissions issues on Linux platforms when the user rclone is running as does not own the uploaded
       file, such as when copying to a CIFS mount owned by another user.  If this option is enabled, rclone
       will no longer update the modtime after copying a file.
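
       For example, when copying to a CIFS mount owned by another user (paths illustrative):

              rclone copy --local-no-set-modtime /home/me/src /mnt/cifs-share/dst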

       Properties:

       • Config: no_set_modtime

       • Env Var: RCLONE_LOCAL_NO_SET_MODTIME

       • Type: bool

       • Default: false

   --local-encoding
       The encoding for the backend.

       See the encoding section in the overview for more info.

       Properties:

       • Config: encoding

       • Env Var: RCLONE_LOCAL_ENCODING

       • Type: MultiEncoder

       • Default: Slash,Dot

   Metadata
       Depending on which OS is in use the local backend may return only some of the system  metadata.   Setting
       system  metadata  is supported on all OSes but setting user metadata is only supported on linux, freebsd,
       netbsd, macOS and Solaris.  It is not supported on Windows yet (see pkg/attrs#47).

       User metadata is stored as extended attributes (which may not be supported by all file systems) under the
       “user.*” prefix.

       Here are the possible system metadata items for the local backend.

        Name    Help                            Type                Example                               Read Only
        ──────────────────────────────────────────────────────────────────────────────────────────────────────
        atime   Time of last access             RFC 3339            2006-01-02T15:04:05.999999999Z07:00   N
        btime   Time of file birth (creation)   RFC 3339            2006-01-02T15:04:05.999999999Z07:00   N
        gid     Group ID of owner               decimal number      500                                   N
        mode    File type and mode              octal, unix style   0100664                               N
        mtime   Time of last modification       RFC 3339            2006-01-02T15:04:05.999999999Z07:00   N
        rdev    Device ID (if special file)     hexadecimal         1abc                                  N
        uid     User ID of owner                decimal number      500                                   N

       See the metadata docs for more info.

   Backend commands
       Here are the commands specific to the local backend.

       Run them with

              rclone backend COMMAND remote:

       The help below will explain what arguments each command takes.

       See the backend command for more info on how to pass options and arguments.

       These can be run on a running backend using the rc command backend/command.

   noop
       A null operation for testing backend commands.

              rclone backend noop remote: [options] [<arguments>+]

       This is a test command which has some options you can try to change the output.

       Options:

       • “echo”: echo the input arguments

       • “error”: return an error based on option value
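
       For example, a hypothetical invocation that echoes its arguments back:

              rclone backend noop remote: -o echo=yes arg1 arg2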

Changelog

   v1.60.1 - 2022-11-17
       See commits

       • Bug Fixes

         • lib/cache: Fix alias backend shutting down too soon (Nick Craig-Wood)

         • wasm: Fix walltime link error by adding up-to-date wasm_exec.js (João Henrique Franco)

         • docs

           • Update faq.md with bisync (Samuel Johnson)

           • Corrected download links in windows install docs (coultonluke)

           • Add direct download link for windows arm64 (albertony)

           • Remove link to rclone slack as it is no longer supported (Nick Craig-Wood)

           • Faq: how to use a proxy server that requires a username and password (asdffdsazqqq)

           • Oracle-object-storage: doc fix (Manoj Ghosh)

           • Fix typo remove in rclone_serve_restic command (Joda Stößer)

           • Fix character that was incorrectly interpreted as markdown (Clément Notin)

       • VFS

         • Fix deadlock caused by cache cleaner and upload finishing (Nick Craig-Wood)

       • Local

         • Clean absolute paths (albertony)

          • Fix -L/--copy-links with filters missing directories (Nick Craig-Wood)

       • Mailru

         • Note that an app password is now needed (Nick Craig-Wood)

         • Allow timestamps to be before the epoch 1970-01-01 (Nick Craig-Wood)

       • S3

         • Add provider quirk --s3-might-gzip to fix corrupted on transfer: sizes differ (Nick Craig-Wood)

         • Allow Storj to server side copy since it seems to work now (Nick Craig-Wood)

         • Fix for unchecked err value in s3 listv2 (Aaron Gokaslan)

         • Add additional Wasabi locations (techknowlogick)

       • Smb

         • Fix Failed to sync: context canceled at the end of syncs (Nick Craig-Wood)

       • WebDAV

          • Fix Move/Copy/DirMove when using --server-side-across-configs (Nick Craig-Wood)

   v1.60.0 - 2022-10-21
       See commits

       • New backends

         • Oracle object storage (Manoj Ghosh)

         • SMB / CIFS (Windows file sharing) (Lesmiscore)

         • New S3 providers

           • IONOS Cloud Storage (Dmitry Deniskin)

           • Qiniu KODO (Bachue Zhou)

       • New Features

         • build

           • Update to go1.19 and make go1.17 the minimum required version (Nick Craig-Wood)

           • Install.sh: fix arm-v7 download (Ole Frost)

         • fs: Warn the user when using an existing remote name without a colon (Nick Craig-Wood)

         • httplib: Add --xxx-min-tls-version option to select minimum TLS  version  for  HTTP  servers  (Robert
           Newson)

         • librclone: Add PHP bindings and test program (Jordi Gonzalez Muñoz)

         • operations

           • Add --server-side-across-configs global flag for any backend (Nick Craig-Wood)

           • Optimise --copy-dest and --compare-dest (Nick Craig-Wood)

         • rc: add job/stopgroup to stop group (Evan Spensley)

         • serve dlna

           • Add --announce-interval to control SSDP Announce Interval (YanceyChiew)

            • Add --interface to specify SSDP interface name(s) (Simon Bos)

           • Add support for more external subtitles (YanceyChiew)

           • Add verification of addresses (YanceyChiew)

         • sync: Optimise --copy-dest and --compare-dest (Nick Craig-Wood)

         • doc  updates  (albertony, Alexander Knorr, anonion, João Henrique Franco, Josh Soref, Lorenzo Milesi,
           Marco Molteni, Mark Trolley, Ole Frost, partev, Ryan Morey, Tom Mombourquette, YFdyh000)

       • Bug Fixes

         • filter

           • Fix incorrect filtering with UseFilter context flag and wrapping backends (Nick Craig-Wood)

           • Make sure we check --files-from when looking for a single file (Nick Craig-Wood)

         • rc

           • Fix mount/listmounts not returning the full Fs entered in mount/mount (Tom Mombourquette)

           • Handle external unmount when mounting (Isaac Aymerich)

           • Validate Daemon option is not set when mounting a volume via RC (Isaac Aymerich)

         • sync: Update docs and error messages to reflect fixes to overlap checks (Nick Naumann)

       • VFS

         • Reduce memory use by embedding sync.Cond (Nick Craig-Wood)

         • Reduce memory usage by re-ordering commonly used structures (Nick Craig-Wood)

         • Fix excess CPU used by VFS cache cleaner looping (Nick Craig-Wood)

       • Local

         • Obey file filters in listing to fix errors on excluded files (Nick Craig-Wood)

         • Fix “Failed to read metadata: function not implemented” on old Linux kernels (Nick Craig-Wood)

       • Compress

         • Fix crash due to nil metadata (Nick Craig-Wood)

         • Fix error handling to not use or return nil objects (Nick Craig-Wood)

       • Drive

         • Make --drive-stop-on-upload-limit obey quota exceeded error (Steve Kowalik)

       • FTP

         • Add --ftp-force-list-hidden option to show hidden items (Øyvind Heddeland Instefjord)

         • Fix hang when using ExplicitTLS to certain servers.  (Nick Craig-Wood)

       • Google Cloud Storage

         • Add --gcs-endpoint flag and config parameter (Nick Craig-Wood)

       • Hubic

         • Remove backend as service has now shut down (Nick Craig-Wood)

       • Onedrive

         • Rename Onedrive(cn) 21Vianet to Vnet Group (Yen Hu)

         • Disable change notify in China region since it is not supported (Nick Craig-Wood)

       • S3

         • Implement --s3-versions flag to show old versions of objects if enabled (Nick Craig-Wood)

         • Implement --s3-version-at flag to show versions of objects at a particular time (Nick Craig-Wood)

         • Implement backend versioning command to get/set bucket versioning (Nick Craig-Wood)

         • Implement Purge to purge versions and backend cleanup-hidden (Nick Craig-Wood)

         • Add --s3-decompress flag to decompress gzip-encoded files (Nick Craig-Wood)

         • Add --s3-sse-customer-key-base64 to supply keys with binary data (Richard Bateman)

          • Try to keep the maximum precision in ModTime with --use-server-modtime (Nick Craig-Wood)

         • Drop binary metadata with an ERROR message as it can’t be stored (Nick Craig-Wood)

         • Add --s3-no-system-metadata to suppress read and write of system metadata (Nick Craig-Wood)

       • SFTP

         • Fix directory creation races (Lesmiscore)

       • Swift

         • Add --swift-no-large-objects to reduce HEAD requests (Nick Craig-Wood)

       • Union

         • Propagate SlowHash feature to fix hasher interaction (Lesmiscore)

   v1.59.2 - 2022-09-15
       See commits

       • Bug Fixes

         • config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)

       • Local

         • Disable xattr support if the filesystems indicates it is not supported (Nick Craig-Wood)

       • Azure Blob

         • Fix chunksize calculations producing too many parts (Nick Craig-Wood)

       • B2

         • Fix chunksize calculations producing too many parts (Nick Craig-Wood)

       • S3

         • Fix chunksize calculations producing too many parts (Nick Craig-Wood)

   v1.59.1 - 2022-08-08
       See commits

       • Bug Fixes

         • accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)

         • build: Fix android build after GitHub actions change (Nick Craig-Wood)

         • dlna: Fix SOAP action header parsing (Joram Schrijver)

         • docs: Fix links to mount command from install docs (albertony)

         • dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)

         • fs: Fix parsing of times and durations of the form “YYYY-MM-DD HH:MM:SS” (Nick Craig-Wood)

         • serve sftp: Fix checksum detection (Nick Craig-Wood)

          • sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)

       • Combine

         • Fix docs showing remote= instead of upstreams= (Nick Craig-Wood)

         • Throw error if duplicate directory name is specified (Nick Craig-Wood)

         • Fix errors with backends shutting down while in use (Nick Craig-Wood)

       • Dropbox

          • Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)

         • Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)

       • Internetarchive

         • Ignore checksums for files using the different method (Lesmiscore)

         • Handle hash symbol in the middle of filename (Lesmiscore)

       • Jottacloud

         • Fix working with whitelabel Elgiganten Cloud

         • Do not store username in config when using standard auth (albertony)

       • Mega

         • Fix nil pointer exception when bad node received (Nick Craig-Wood)

       • S3

          • Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)

       • SFTP

         • Fix issue with WS_FTP by working around failing RealPath (albertony)

       • Union

         • Fix duplicated files when using directories with leading / (Nick Craig-Wood)

         • Fix multiple files being uploaded when roots don’t exist (Nick Craig-Wood)

         • Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)

   v1.59.0 - 2022-07-09
       See commits

       • New backends

         • Combine multiple remotes in one directory tree (Nick Craig-Wood)

         • Hidrive (Ovidiu Victor Tatar)

         • Internet Archive (Lesmiscore (Naoya Ozaki))

         • New S3 providers

           • ArvanCloud AOS (ehsantdy)

           • Cloudflare R2 (Nick Craig-Wood)

           • Huawei OBS (m00594701)

           • IDrive e2 (vyloy)

       • New commands

         • test makefile: Create a single file for testing (Nick Craig-Wood)

       • New Features

         • Metadata framework to read and write system and user metadata on backends (Nick Craig-Wood)

           • Implemented initially for local, s3 and internetarchive backends

           • --metadata/-M flag to control whether metadata is copied

           • --metadata-set flag to specify metadata for uploads

           • Thanks to Manz Solutions for sponsoring this work.

         • build

           • Update to go1.18 and make go1.16 the minimum required version (Nick Craig-Wood)

           • Update android go build to 1.18.x and NDK to 23.1.7779620 (Nick Craig-Wood)

           • All windows binaries now no longer CGO (Nick Craig-Wood)

           • Add linux/arm/v6 to docker images (Nick Craig-Wood)

           • A huge number of fixes found with staticcheck (albertony)

           • Configurable version suffix independent of version number (albertony)

         • check: Implement --no-traverse and --no-unicode-normalization (Nick Craig-Wood)

         • config: Readability improvements (albertony)

         • copyurl: Add --header-filename to honor the HTTP header filename directive (J-P Treen)

         • filter: Allow multiple --exclude-if-present flags (albertony)

         • fshttp: Add --disable-http-keep-alives to disable HTTP Keep Alives (Nick Craig-Wood)

         • install.sh

           • Set the modes on the files and/or directories on macOS (Michael C Tiernan - MIT-Research  Computing
             Project)

            • Pre-verify sudo authorization (sudo -v) before calling curl (Michael C Tiernan - MIT-Research
              Computing Project)

         • lib/encoder: Add Semicolon encoding (Nick Craig-Wood)

         • lsf: Add metadata support with M flag (Nick Craig-Wood)

         • lsjson: Add --metadata/-M flag (Nick Craig-Wood)

         • ncdu

           • Implement multi selection (CrossR)

           • Replace termbox with tcell’s termbox wrapper (eNV25)

           • Display correct path in delete confirmation dialog (Roberto Ricci)

         • operations

           • Speed up hash checking by aborting the other hash if first returns nothing (Nick Craig-Wood)

           • Use correct src/dst in some log messages (zzr93)

         • rcat: Check checksums by default like copy does (Nick Craig-Wood)

         • selfupdate: Replace deprecated x/crypto/openpgp package with ProtonMail/go-crypto (albertony)

         • serve ftp: Check --passive-port arguments are correct (Nick Craig-Wood)

          • size: Warn about inaccurate results when objects have unknown size (albertony)

         • sync: Overlap check is now filter-sensitive so --backup-dir  can  be  in  the  root  provided  it  is
           filtered (Nick)

         • test info: Check file name lengths using 1,2,3,4 byte unicode characters (Nick Craig-Wood)

         • test  makefile(s):  --sparse,  --zero,  --pattern,  --ascii, --chargen flags to control file contents
           (Nick Craig-Wood)

         • Make sure we call the Shutdown method on backends (Martin Czygan)

       • Bug Fixes

         • accounting: Fix unknown length file transfers counting 3 transfers each (buda)

         • ncdu: Fix issue where dir size is summed when file sizes are -1 (albertony)

         • sync/copy/move

           • Fix --fast-list --create-empty-src-dirs and --exclude (Nick Craig-Wood)

           • Fix --max-duration and --cutoff-mode soft (Nick Craig-Wood)

         • Fix fs cache unpin (Martin Czygan)

         • Set proper exit code for  errors  that  are  not  low-level  retried  (e.g. size/timestamp  changing)
           (albertony)

       • Mount

         • Support windows/arm64 (may still be problems - see #5828) (Nick Craig-Wood)

         • Log IO errors at ERROR level (Nick Craig-Wood)

         • Ignore _netdev mount argument (Hugal31)

       • VFS

         • Add --vfs-fast-fingerprint for less accurate but faster fingerprints (Nick Craig-Wood)

         • Add --vfs-disk-space-total-size option to manually set the total disk space (Claudio Maradonna)

         • vfscache: Fix fatal error: sync: unlock of unlocked mutex error (Nick Craig-Wood)

       • Local

         • Fix parsing of --local-nounc flag (Nick Craig-Wood)

         • Add Metadata support (Nick Craig-Wood)

       • Crypt

         • Support metadata (Nick Craig-Wood)

       • Azure Blob

         • Calculate Chunksize/blocksize to stay below maxUploadParts (Leroy van Logchem)

         • Use chunksize lib to determine chunksize dynamically (Derek Battams)

         • Case insensitive access tier (Rob Pickerill)

         • Allow remote emulator (azurite) (Lorenzo Maiorfi)

       • B2

         • Add --b2-version-at flag to show file versions at time specified (SwazRGB)

         • Use chunksize lib to determine chunksize dynamically (Derek Battams)

       • Chunker

         • Mark as not supporting metadata (Nick Craig-Wood)

       • Compress

         • Support metadata (Nick Craig-Wood)

       • Drive

         • Make backend config -o config add a combined AllDrives: remote (Nick Craig-Wood)

         • Make --drive-shared-with-me work with shared drives (Nick Craig-Wood)

         • Add --drive-resource-key for accessing link-shared files (Nick Craig-Wood)

         • Add backend commands exportformats and importformats for debugging (Nick Craig-Wood)

         • Fix 404 errors on copy/server side copy objects from public folder (Nick Craig-Wood)

         • Update Internal OAuth consent screen docs (Phil Shackleton)

         • Moved root_folder_id to advanced section (Abhiraj)

       • Dropbox

         • Migrate from deprecated api (m8rge)

         • Add logs to show when poll interval limits are exceeded (Nick Craig-Wood)

         • Fix nil pointer exception on dropbox impersonate user not found (Nick Craig-Wood)

       • Fichier

          • Parse api error codes and handle them accordingly (buengese)

       • FTP

         • Add support for disable_utf8 option (Jason Zheng)

         • Revert to upstream github.com/jlaffaye/ftp from our fork (Nick Craig-Wood)

       • Google Cloud Storage

         • Add --gcs-no-check-bucket to minimise transactions and perms (Nick Gooding)

         • Add --gcs-decompress flag to decompress gzip-encoded files (Nick Craig-Wood)

           • by default these will be downloaded compressed (which previously failed)

       • Hasher

         • Support metadata (Nick Craig-Wood)

       • HTTP

         • Fix missing response when using custom auth handler (albertony)

       • Jottacloud

         • Add support for upload to custom device and mountpoint (albertony)

         • Always store username in config and use it to avoid initial API request (albertony)

         • Fix issue with server-side copy when destination is in trash (albertony)

         • Fix listing output of remote with special characters (albertony)

       • Mailru

         • Fix timeout by using int instead of time.Duration for keeping number of seconds (albertony)

       • Mega

         • Document using MEGAcmd to help with login failures (Art M. Gallagher)

       • Onedrive

         • Implement --poll-interval for onedrive (Hugo Laloge)

         • Add access scopes option (Sven Gerber)

       • Opendrive

         • Resolve lag and truncate bugs (Scott Grimes)

       • Pcloud

         • Fix about with no free space left (buengese)

         • Fix cleanup (buengese)

       • S3

         • Use PUT Object instead of presigned URLs to upload single part objects (Nick Craig-Wood)

         • Backend restore command to skip non-GLACIER objects (Vincent Murphy)

         • Use chunksize lib to determine chunksize dynamically (Derek Battams)

         • Retry RequestTimeout errors (Nick Craig-Wood)

         • Implement reading and writing of metadata (Nick Craig-Wood)

       • SFTP

         • Add support for about and hashsum on windows server (albertony)

         • Use vendor-specific VFS statistics extension for about if available (albertony)

         • Add --sftp-chunk-size to control packets sizes for high latency links (Nick Craig-Wood)

         • Add --sftp-concurrency to improve high latency transfers (Nick Craig-Wood)

         • Add --sftp-set-env option to set environment variables (Nick Craig-Wood)

         • Add Hetzner Storage Boxes to supported sftp backends (Anthrazz)

       • Storj

          • Fix put which led to the file being unreadable when using mount (Erik van Velzen)

       • Union

         • Add min_free_space option for lfs/eplfs policies (Nick Craig-Wood)

         • Fix uploading files to union of all bucket based remotes (Nick Craig-Wood)

         • Fix get free space for remotes which don’t support it (Nick Craig-Wood)

         • Fix eplus policy to select correct entry for existing files (Nick Craig-Wood)

         • Support metadata (Nick Craig-Wood)

       • Uptobox

         • Fix root path handling (buengese)

       • WebDAV

         • Add SharePoint in other specific regions support (Noah Hsu)

       • Yandex

         • Handle api error on server-side move (albertony)

       • Zoho

         • Add Japan and China regions (buengese)

   v1.58.1 - 2022-04-29
       See commits

       • Bug Fixes

         • build: Update github.com/billziss-gh to github.com/winfsp (Nick Craig-Wood)

          • filter: Fix timezone of --min-age/--max-age from UTC to local as documented (Nick Craig-Wood)

         • rc/js: Correct RC method names (Sơn Trần-Nguyễn)

         • docs

           • Fix some links to command pages (albertony)

           • Add --multi-thread-streams note to --transfers.  (Zsolt Ero)

       • Mount

         • Fix --devname and fusermount: unknown option `fsname' when mounting via rc (Nick Craig-Wood)

       • VFS

         • Remove wording which suggests VFS is only for mounting (Nick Craig-Wood)

       • Dropbox

         • Fix retries of multipart uploads with incorrect_offset error (Nick Craig-Wood)

       • Google Cloud Storage

         • Use the s3 pacer to speed up transactions (Nick Craig-Wood)

         • pacer: Default the Google pacer to a burst of 100 to fix gcs pacing (Nick Craig-Wood)

       • Jottacloud

         • Fix scope in token request (albertony)

       • Netstorage

         • Fix unescaped HTML in documentation (Nick Craig-Wood)

         • Make levels of headings consistent (Nick Craig-Wood)

         • Add support contacts to netstorage doc (Nil Alexandrov)

       • Onedrive

         • Note that sharepoint also changes web files (.html, .aspx) (GH)

       • Putio

         • Handle rate limit errors (Berkan Teber)

         • Fix multithread download and other ranged requests (rafma0)

       • S3

         • Add ChinaMobile EOS to provider list (GuoXingbin)

         • Sync providers in config description with providers (Nick Craig-Wood)

       • SFTP

         • Fix OpenSSH 8.8+ RSA keys incompatibility (KARBOWSKI Piotr)

         • Note that Scaleway C14 is deprecating SFTP in favor of S3 (Adrien Rey-Jarthon)

       • Storj

         • Fix bucket creation on Move (Nick Craig-Wood)

       • WebDAV

         • Don’t override Referer if user sets it (Nick Craig-Wood)

   v1.58.0 - 2022-03-18
       See commits

       • New backends

         • Akamai Netstorage (Nil Alexandrov)

         • Seagate Lyve, SeaweedFS, Storj, RackCorp via s3 backend

         • Storj (renamed from Tardigrade - your old config files will continue working)

       • New commands

         • bisync - experimental bidirectional cloud sync (Ivan Andreev, Chris Nelson)

       • New Features

         • build

           • Add windows/arm64 build (rclone mount not supported yet) (Nick Craig-Wood)

           • Raise minimum go version to go1.15 (Nick Craig-Wood)

         • config: Allow dot in remote names and improve config editing (albertony)

         • dedupe: Add quit as a choice in interactive mode (albertony)

         • dlna: Change icons to the newest ones.  (Alain Nussbaumer)

         • filter: Add {{ regexp }} syntax to pattern matches (Nick Craig-Wood)

         • fshttp: Add prometheus metrics for HTTP status code (Michał Matczuk)

         • hashsum: Support creating hash from data received on stdin (albertony)

         • librclone

           • Allow empty string or null input instead of empty json object (albertony)

           • Add support for mount commands (albertony)

         • operations: Add server-side moves to stats (Ole Frost)

         • rc: Allow user to disable authentication for web gui (negative0)

         • tree: Remove obsolete --human replaced by global --human-readable (albertony)

         • version: Report correct friendly-name for newer Windows 10/11 versions (albertony)

       • Bug Fixes

         • build

           • Fix ARM architecture version in .deb packages after nfpm change (Nick Craig-Wood)

           • Hard fork github.com/jlaffaye/ftp to fix go get github.com/rclone/rclone (Nick Craig-Wood)

         • oauthutil: Fix crash when webbrowser requests /robots.txt (Nick Craig-Wood)

         • operations: Fix goroutine leak in case of copy retry (Ankur Gupta)

         • rc:

           • Fix operations/publiclink default for expires parameter (Nick Craig-Wood)

           • Fix missing computation of transferQueueSize when summing up statistics group (Carlo Mion)

           • Fix missing StatsInfo fields in the computation of the group sum (Carlo Mion)

         • sync: Fix --max-duration so it doesn’t retry when the duration is exceeded (Nick Craig-Wood)

         • touch: Fix issue where a directory is created instead of a file (albertony)

       • Mount

         • Add --devname to set the device name sent to FUSE for mount display (Nick Craig-Wood)

       • VFS

         • Add vfs/stats remote control to show statistics (Nick Craig-Wood)

         • Fix failed to _ensure cache internal error: downloaders is nil error (Nick Craig-Wood)

         • Fix handling of special characters in file names (Bumsu Hyeon)

       • Local

         • Fix hash invalidation which caused errors with local crypt mount (Nick Craig-Wood)

       • Crypt

         • Add base64 and base32768 filename encoding options (Max Sum, Sinan Tan)

       • Azure Blob

         • Implement --azureblob-upload-concurrency parameter to speed uploads (Nick Craig-Wood)

         • Remove 100MB upper limit on chunk_size as it is no longer needed (Nick Craig-Wood)

         • Raise --azureblob-upload-concurrency to 16 by default (Nick Craig-Wood)

         • Fix crash with SAS URL and no container (Nick Craig-Wood)

       • Compress

         • Fix crash if metadata upload failed (Nick Craig-Wood)

         • Fix memory leak (Nick Craig-Wood)

       • Drive

         • Added --drive-copy-shortcut-content (Abhiraj)

         • Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)

            • See the deprecation note at
              https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob.

         • Add --drive-skip-dangling-shortcuts flag (Nick Craig-Wood)

         • When using a link type --drive-export-formats shows all doc types (Nick Craig-Wood)

       • Dropbox

         • Speed up directory listings by specifying 1000 items in a chunk (Nick Craig-Wood)

         • Save an API request when at the root (Nick Craig-Wood)

       • Fichier

         • Implemented About functionality (Gourav T)

       • FTP

         • Add --ftp-ask-password to prompt for password when needed (Borna Butkovic)

       • Google Cloud Storage

         • Add missing regions (Nick Craig-Wood)

         • Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)

            • See the deprecation note at
              https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob.

       • Googlephotos

         • Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)

            • See the deprecation note at
              https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob.

       • Hasher

         • Fix crash on object not found (Nick Craig-Wood)

       • Hdfs

         • Add file (Move) and directory move (DirMove) support (Andy Jackson)

       • HTTP

         • Improved recognition of URL pointing to a single file (albertony)

       • Jottacloud

         • Change API used by recursive list (ListR) (Kim)

         • Add support for Tele2 Cloud (Fredric Arklid)

       • Koofr

         • Add Digistorage service as a Koofr provider.  (jaKa)

       • Mailru

         • Fix int32 overflow on arm32 (Ivan Andreev)

       • Onedrive

         • Add config option for oauth scope Sites.Read.All (Charlie Jiang)

         • Minor optimization of quickxorhash (Isaac Levy)

         • Add --onedrive-root-folder-id flag (Nick Craig-Wood)

         • Do not retry on 400 pathIsTooLong error (ctrl-q)

       • Pcloud

         • Add support for recursive list (ListR) (Niels van de Weem)

         • Fix pre-1970 time stamps (Nick Craig-Wood)

       • S3

         • Use ListObjectsV2 for faster listings (Felix Bünemann)

           • Fallback to ListObject v1 on unsupported providers (Nick Craig-Wood)

         • Use the ETag on multipart transfers to verify the transfer was OK (Nick Craig-Wood)

           • Add  --s3-use-multipart-etag  provider  quirk  to  disable  this  on  unsupported  providers  (Nick
             Craig-Wood)

         • New Providers

           • RackCorp object storage (bbabich)

           • Seagate Lyve Cloud storage (Nick Craig-Wood)

           • SeaweedFS (Chris Lu)

           • Storj Shared gateways (Márton Elek, Nick Craig-Wood)

         • Add Wasabi AP Northeast 2 endpoint info (lindwurm)

         • Add GLACIER_IR storage class (Yunhai Luo)

         • Document Content-MD5 workaround for object-lock enabled buckets (Paulo Martins)

         • Fix multipart upload with --no-head flag (Nick Craig-Wood)

         • Simplify content length processing in s3 with download url (Logeshwaran Murugesan)

       • SFTP

         • Add rclone to list of supported md5sum/sha1sum commands to look for (albertony)

         • Refactor so we only have one way of running remote commands (Nick Craig-Wood)

         • Fix timeout on hashing large files by sending keepalives (Nick Craig-Wood)

         • Fix unnecessary seeking when uploading and downloading files (Nick Craig-Wood)

         • Update docs on how to create known_hosts file (Nick Craig-Wood)

       • Storj

         • Rename tardigrade backend to storj backend (Nick Craig-Wood)

         • Implement server side Move for files (Nick Craig-Wood)

         • Update docs to explain differences between s3 and this backend (Elek, Márton)

       • Swift

         • Fix About so it shows info about the current container only (Nick Craig-Wood)

       • Union

         • Fix treatment of remotes with // in (Nick Craig-Wood)

         • Fix deadlock when one part of a multi-upload fails (Nick Craig-Wood)

         • Fix eplus policy returned nil (Vitor Arruda)

       • Yandex

         • Add permanent deletion support (deinferno)

   v1.57.0 - 2021-11-01
       See commits

       • New backends

         • Sia: for Sia decentralized cloud (Ian Levesque, Matthew Sevey, Ivan Andreev)

         • Hasher: caches hashes and enable hashes for backends that don’t support them (Ivan Andreev)

       • New commands

          • lsjson --stat: to get info about a single file/dir and operations/stat API (Nick Craig-Wood)
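
            • For example (an illustrative sketch; remote:path/file.txt is a placeholder):

                 rclone lsjson --stat remote:path/file.txt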

         • config paths: show configured paths (albertony)

       • New Features

         • about: Make human-readable output more consistent with other commands (albertony)

         • build

           • Use go1.17 for building and make go1.14 the minimum supported (Nick Craig-Wood)

           • Update Go to 1.16 and NDK to 22b for Android builds (x0b)

         • config

           • Support hyphen in remote name from environment variable (albertony)

           • Make temporary directory user-configurable (albertony)

           • Convert --cache-dir value to an absolute path (albertony)

           • Do not override MIME types from OS defaults (albertony)

         • docs

           • Toc styling and header levels cleanup (albertony)

           • Extend documentation on valid remote names (albertony)

           • Mention make for building and cmount tag for macos (Alex Chen)

            • ...and many more contributions too numerous to mention!

         • fs: Move with --ignore-existing will not delete skipped files (Nathan Collins)

         • hashsum

           • Treat hash values in sum file as case insensitive (Ivan Andreev)

           • Don’t put ERROR or UNSUPPORTED in output (Ivan Andreev)

         • lib/encoder: Add encoding of square brackets (Ivan Andreev)

         • lib/file: Improve error message when attempting  to  create  dir  on  nonexistent  drive  on  windows
           (albertony)

         • lib/http: Factor password hash salt into options with default (Nolan Woods)

         • lib/kv: Add key-value database api (Ivan Andreev)

         • librclone

           • Add RcloneFreeString function (albertony)

           • Free strings in python example (albertony)

         • log: Optionally print pid in logs (Ivan Andreev)

         • ls: Introduce --human-readable global option to print human-readable sizes (albertony)

         • ncdu: Introduce key u to toggle human-readable (albertony)

         • operations: Add rmdirs -v output (Justin Winokur)

         • serve sftp

           • Generate an ECDSA server key as well as RSA (Nick Craig-Wood)

           • Generate an Ed25519 server key as well as ECDSA and RSA (albertony)

         • serve docker

           • Allow to customize proxy settings of docker plugin (Ivan Andreev)

           • Build docker plugin for multiple platforms (Thomas Stachl)

         • size: Include human-readable count (albertony)

         • touch: Add support for touching files in directory, with recursive option, filtering and --dry-run/-i
           (albertony)
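
            • A hedged sketch, assuming the -R/--recursive spelling used by other rclone commands
              (remote:dir is a placeholder):

                 rclone touch -R --dry-run remote:dir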

         • tree: Option to print human-readable sizes removed in favor of global option (albertony)

       • Bug Fixes

         • lib/http

           • Fix bad username check in single auth secret provider (Nolan Woods)

           • Fix handling of SSL credentials (Nolan Woods)

         • serve ftp: Ensure modtime is passed as UTC always to fix timezone oddities (Nick Craig-Wood)

         • serve sftp: Fix generation of server keys on windows (albertony)

         • serve docker: Fix octal umask (Ivan Andreev)

       • Mount

         • Enable rclone to be run as mount helper direct from the fstab (Ivan Andreev)

         • Use procfs to validate mount on linux (Ivan Andreev)

         • Correctly daemonize for compatibility with automount (Ivan Andreev)

       • VFS

         • Ensure names used in cache path are legal on current OS (albertony)

         • Ignore  ECLOSED  when  truncating  file  handles  to fix intermittent bad file descriptor error (Nick
           Craig-Wood)

       • Local

         • Refactor default OS encoding out from local backend into shared encoder lib (albertony)

       • Crypt

         • Return wrapped object even with --crypt-no-data-encryption (Ivan Andreev)

         • Fix uploads with --crypt-no-data-encryption (Nick Craig-Wood)

       • Azure Blob

         • Add --azureblob-no-head-object (Tatsuya Noyori)

       • Box

         • Make listings of heavily used directories more reliable (Nick Craig-Wood)

         • When doing cleanup delete as much as possible (Nick Craig-Wood)

         • Add --box-list-chunk to control listing chunk size (Nick Craig-Wood)

         • Delete items in parallel in cleanup using --checkers threads (Nick Craig-Wood)

         • Add --box-owned-by to only show items owned by the login passed (Nick Craig-Wood)

         • Retry operation_blocked_temporary errors (Nick Craig-Wood)

       • Chunker

         • Md5all must create metadata if base hash is slow (Ivan Andreev)

       • Drive

         • Speed up directory listings by constraining the API listing using the current filters (fotile96, Ivan
           Andreev)

         • Fix buffering for single request upload for files smaller than --drive-upload-cutoff (YenForYang)

         • Add -o config option to backend drives to make config for all shared drives (Nick Craig-Wood)

       • Dropbox

         • Add --dropbox-batch-commit-timeout to control batch timeout (Nick Craig-Wood)

       • Filefabric

         • Make backoff exponential for error_background to fix errors (Nick Craig-Wood)

         • Fix directory move after API change (Nick Craig-Wood)

       • FTP

         • Enable tls session cache by default (Ivan Andreev)

         • Add option to disable tls13 (Ivan Andreev)

         • Fix timeout after long uploads (Ivan Andreev)

         • Add support for precise time (Ivan Andreev)

         • Enable CI for ProFtpd, PureFtpd, VsFtpd (Ivan Andreev)

       • Googlephotos

         • Use encoder for album names to fix albums with control characters (Parth Shukla)

       • Jottacloud

         • Implement SetModTime to support modtime-only changes (albertony)

         • Improved error handling with SetModTime and corrupt files in general (albertony)

         • Add support for UserInfo (rclone config userinfo) feature (albertony)

         • Return direct download link from rclone link command (albertony)

       • Koofr

         • Create direct share link (Dmitry Bogatov)

       • Pcloud

         • Add sha256 support (Ken Enrique Morel)

       • Premiumizeme

         • Fix directory listing after API changes (Nick Craig-Wood)

         • Fix server side move after API change (Nick Craig-Wood)

         • Fix server side directory move after API changes (Nick Craig-Wood)

       • S3

         • Add support to use CDN URL to download the file (Logeshwaran)

         • Add AWS Snowball Edge to providers examples (r0kk3rz)

         • Use a combination of SDK retries and rclone retries (Nick Craig-Wood)

         • Fix IAM Role for Service Account not working and other auth problems (Nick Craig-Wood)

         • Fix shared_credentials_file auth after reverting incorrect fix (Nick Craig-Wood)

         • Fix corrupted on transfer: sizes differ 0 vs xxxx with Ceph (Nick Craig-Wood)

       • Seafile

         • Fix error when not configured for 2fa (Fred)

       • SFTP

         • Fix timeout when doing MD5SUM of large file (Nick Craig-Wood)

       • Swift

         • Update OCI URL (David Liu)

         • Document OVH Cloud Archive (HNGamingUK)

       • Union

         • Fix rename not working with union of local disk and bucket based remote (Nick Craig-Wood)

   v1.56.2 - 2021-10-01
       See commits

       • Bug Fixes

         • serve http: Re-add missing auth to http service (Nolan Woods)

         • build: Update golang.org/x/sys to fix crash on macOS when compiled with go1.17 (Herby Gillot)

       • FTP

         • Fix deadlock after failed update when concurrency=1 (Ivan Andreev)

   v1.56.1 - 2021-09-19
       See commits

       • Bug Fixes

          • accounting: Fix maximum bwlimit by scaling the max token bucket size (Nick Craig-Wood)

         • rc: Fix speed does not update in core/stats (negative0)

         • selfupdate: Fix --quiet option, not quite quiet (yedamo)

         • serve http: Fix serve http exiting directly after starting (Cnly)

         • build

           • Apply gofmt from golang 1.17 (Ivan Andreev)

           • Update Go to 1.16 and NDK to 22b for android/any (x0b)

       • Mount

         • Fix --daemon mode (Ivan Andreev)

       • VFS

         • Fix duplicates on rename (Nick Craig-Wood)

         • Fix crash when truncating a just uploaded object (Nick Craig-Wood)

         • Fix issue where empty dirs would build up in cache meta dir (albertony)

       • Drive

         • Fix instructions for auto config (Greg Sadetsky)

         • Fix lsf example without drive-impersonate (Greg Sadetsky)

       • Onedrive

         • Handle HTTP 400 better in PublicLink (Alex Chen)

         • Clarification of the process for creating custom client_id (Mariano Absatz)

       • Pcloud

         • Return an early error when Put is called with an unknown size (Nick Craig-Wood)

         • Try harder to delete a failed upload (Nick Craig-Wood)

       • S3

         • Add Wasabi’s AP-Northeast endpoint info (hota)

         • Fix typo in s3 documentation (Greg Sadetsky)

       • Seafile

         • Fix 2fa config state machine (Fred)

       • SFTP

         • Remove spurious error message on --sftp-disable-concurrent-reads (Nick Craig-Wood)

       • Sugarsync

         • Fix initial connection after config re-arrangement (Nick Craig-Wood)

   v1.56.0 - 2021-07-20
       See commits

       • New backends

         • Uptobox (buengese)

       • New commands

         • serve docker (Antoine GIRARD) (Ivan Andreev)

           • and accompanying docker volume plugin

         • checksum to check files against a file of checksums (Ivan Andreev)

           • this is also available as rclone md5sum -C etc
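
            • A hedged sketch of both forms (sums.md5 is a placeholder checksum file; see the checksum docs
              for the exact argument order):

                 rclone checksum md5 sums.md5 remote:path
                 rclone md5sum -C sums.md5 remote:path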

         • config touch: ensure config exists at configured location (albertony)

         • test changenotify: command to help debugging changenotify (Nick Craig-Wood)

       • Deprecations

         • dbhashsum: Remove command deprecated a year ago (Ivan Andreev)

         • cache: Deprecate cache backend (Ivan Andreev)

       • New Features

          • Rework config system so it can be used non-interactively via the CLI and the rc API.

           • See docs in config create

           • This is a very big change to all the backends so may cause breakages - please file bugs!

         • librclone - export the rclone RC as a C library (lewisxy) (Nick Craig-Wood)

           • Link a C-API rclone shared object into your project

            • Use the RC as an in-memory interface

           • Python example supplied

           • Also supports Android and gomobile

         • fs

           • Add --disable-http2 for global http2 disable (Nick Craig-Wood)

           • Make --dump imply -vv (Alex Chen)

           • Use binary prefixes for size and rate units (albertony)

           • Use decimal prefixes for counts (albertony)

           • Add google search widget to rclone.org (Ivan Andreev)

         • accounting: Calculate rolling average speed (Haochen Tong)

         • atexit: Terminate with non-zero status after receiving signal (Michael Hanselmann)

         • build

           • Only run event-based workflow scripts under rclone repo with manual override (Mathieu Carbou)

           • Add Android build with gomobile (x0b)

         • check: Log the hash in use like cryptcheck does (Nick Craig-Wood)

         • version: Print os/version, kernel and bitness (Ivan Andreev)

         • config

           • Prevent use of Windows reserved names in config file name (albertony)

           • Create config file in windows appdata directory by default (albertony)

           • Treat any config file paths with filename notfound as memory-only config (albertony)

           • Delay load config file (albertony)

           • Replace defaultConfig with a thread-safe in-memory implementation (Chris Macklin)

           • Allow config create and friends to take key=value parameters (Nick Craig-Wood)
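
              • A minimal sketch of the key=value form (mydrive and the scope value are hypothetical):

                   rclone config create mydrive drive scope=drive.readonly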

           • Fixed issues with flags/options set by environment vars.  (Ole Frost)

         • fshttp: Implement graceful DSCP error handling (Tyson Moore)

         • lib/http - provides an abstraction for a central http server that services can bind routes to  (Nolan
           Woods)

           • Add --template config and flags to serve/data (Nolan Woods)

           • Add default 404 handler (Nolan Woods)

         • link: Use “off” value for unset expiry (Nick Craig-Wood)

         • oauthutil: Raise fatal error if token expired without refresh token (Alex Chen)

         • rcat: Add --size flag for more efficient uploads of known size (Nazar Mishturak)

         • serve sftp: Add --stdio flag to serve via stdio (Tom)
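
            • A hedged sketch of invoking this via an authorized_keys forced command (key and path are
              placeholders):

                 restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa AAAA...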

         • sync: Don’t warn about --no-traverse when --files-from is set (Nick Gaya)

         • test makefiles

           • Add --seed flag and make data generated repeatable (Nick Craig-Wood)

           • Add log levels and speed summary (Nick Craig-Wood)

       • Bug Fixes

         • accounting: Fix startTime of statsGroups.sum (Haochen Tong)

         • cmd/ncdu: Fix out of range panic in delete (buengese)

         • config

           • Fix issues with memory-only config file paths (albertony)

           • Fix in memory config not saving on the fly backend config (Nick Craig-Wood)

         • fshttp: Fix address parsing for DSCP (Tyson Moore)

         • ncdu: Update termbox-go library to fix crash (Nick Craig-Wood)

         • oauthutil: Fix old authorize result not recognised (Cnly)

         • operations: Don’t update timestamps of files in --compare-dest (Nick Gaya)

         • selfupdate: fix archive name on macos (Ivan Andreev)

       • Mount

         • Refactor before adding serve docker (Antoine GIRARD)

       • VFS

         • Add cache reset for --vfs-cache-max-size handling at cache poll interval (Leo Luan)

         • Fix modtime changing when reading file into cache (Nick Craig-Wood)

         • Avoid unnecessary subdir in cache path (albertony)

         • Fix that umask option cannot be set as environment variable (albertony)

         • Do not print notice about missing poll-interval support when set to 0 (albertony)

       • Local

         • Always use readlink to read symlink size for better compatibility (Nick Craig-Wood)

         • Add --local-unicode-normalization (and remove --local-no-unicode-normalization) (Nick Craig-Wood)

         • Skip entries removed concurrently with List() (Ivan Andreev)

       • Crypt

         • Support timestamped filenames from --b2-versions (Dominik Mydlil)

       • B2

         • Don’t include the bucket name in public link file prefixes (Jeffrey Tolar)

         • Fix versions and .files with no extension (Nick Craig-Wood)

         • Factor version handling into lib/version (Dominik Mydlil)

       • Box

         • Use upload preflight check to avoid listings in file uploads (Nick Craig-Wood)

         • Return errors instead of calling log.Fatal with them (Nick Craig-Wood)

       • Drive

         • Switch to the Drives API for looking up shared drives (Nick Craig-Wood)

         • Fix some google docs being treated as files (Nick Craig-Wood)

       • Dropbox

         • Add --dropbox-batch-mode flag to speed up uploading (Nick Craig-Wood)

           • Read the batch mode docs for more info

         • Set visibility in link sharing when --expire is set (Nick Craig-Wood)

         • Simplify chunked uploads (Alexey Ivanov)

         • Improve “own App IP” instructions (Ivan Andreev)

       • Fichier

         • Check if more than one upload link is returned (Nick Craig-Wood)

         • Support downloading password protected files and folders (Florian Penzkofer)

         • Make error messages report text from the API (Nick Craig-Wood)

         • Fix move of files in the same directory (Nick Craig-Wood)

         • Check that we actually got a download token and retry if we didn’t (buengese)

       • Filefabric

         • Fix listing after change of from field from “int” to int.  (Nick Craig-Wood)

       • FTP

         • Make upload error 250 indicate success (Nick Craig-Wood)

       • GCS

         • Make compatible with gsutil’s mtime metadata (database64128)

         • Clean up time format constants (database64128)

       • Google Photos

         • Fix read only scope not being used properly (Nick Craig-Wood)

       • HTTP

         • Replace httplib with lib/http (Nolan Woods)

         • Clean up Bind to better use middleware (Nolan Woods)

       • Jottacloud

         • Fix legacy auth with state based config system (buengese)

         • Fix invalid url in output from link command (albertony)

         • Add no versions option (buengese)

       • Onedrive

         • Add list_chunk option (Nick Gaya)

         • Also report root error if unable to cancel multipart upload (Cnly)

         • Fix failed to configure: empty token found error (Nick Craig-Wood)

         • Make link return direct download link (Xuanchen Wu)

       • S3

         • Add --s3-no-head-object (Tatsuya Noyori)

         • Remove WebIdentityRoleProvider to fix crash on auth (Nick Craig-Wood)

         • Don’t check to see if remote is object if it ends with / (Nick Craig-Wood)

         • Add SeaweedFS (Chris Lu)

         • Update Alibaba OSS endpoints (Chuan Zh)

       • SFTP

         • Fix performance regression by re-enabling concurrent writes (Nick Craig-Wood)

         • Expand tilde and environment variables in configured known_hosts_file (albertony)

       • Tardigrade

         • Upgrade to uplink v1.4.6 (Caleb Case)

         • Use negative offset (Caleb Case)

         • Add warning about too many open files (acsfer)

       • WebDAV

         • Fix sharepoint auth over http (Nick Craig-Wood)

         • Add headers option (Antoon Prins)

   v1.55.1 - 2021-04-26
       See commits

       • Bug Fixes

         • selfupdate

            • Don’t detect FUSE if build is static (Ivan Andreev)

           • Add build tag noselfupdate (Ivan Andreev)

         • sync: Fix incorrect error reported by graceful cutoff (Nick Craig-Wood)

         • install.sh: fix macOS arm64 download (Nick Craig-Wood)

         • build: Fix version numbers in android branch builds (Nick Craig-Wood)

         • docs

           • Contributing.md: update setup instructions for go1.16 (Nick Gaya)

           • WinFsp 2021 is out of beta (albertony)

           • Minor cleanup of space around code section (albertony)

           • Fixed some typos (albertony)

       • VFS

         • Fix a code path which allows dirty data to be removed causing data loss (Nick Craig-Wood)

       • Compress

         • Fix compressed name regexp (buengese)

       • Drive

         • Fix backend copyid of google doc to directory (Nick Craig-Wood)

         • Don’t open browser when service account...  (Ansh Mittal)

       • Dropbox

          • Add missing team_data.member scope for use with --impersonate (Nick Craig-Wood)

         • Fix About after scopes changes - rclone config reconnect needed (Nick Craig-Wood)

         • Fix Unable to decrypt returned paths from changeNotify (Nick Craig-Wood)

       • FTP

         • Fix implicit TLS (Ivan Andreev)

       • Onedrive

         • Work around for random “Unable to initialize RPS” errors (OleFrost)

       • SFTP

         • Revert sftp library to v1.12.0 from v1.13.0 to fix performance regression (Nick Craig-Wood)

         • Fix Update ReadFrom failed: failed to send packet: EOF errors (Nick Craig-Wood)

       • Zoho

         • Fix error when region isn’t set (buengese)

         • Do not ask for mountpoint twice when using headless setup (buengese)

   v1.55.0 - 2021-03-31
       See commits

       • New commands

         • selfupdate (Ivan Andreev)

           • Allows rclone to update itself in-place or via a package (using --package flag)

           • Reads cryptographically signed signatures for non beta releases

           • Works on all OSes.
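
            • For example (illustrative; --package values follow the flag described above):

                 rclone selfupdate
                 rclone selfupdate --package deb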

         • test - these are test commands - use with care!

           • histogram - Makes a histogram of file name characters.

           • info - Discovers file name or other limitations for paths.

           • makefiles - Make a random file hierarchy for testing.

           • memory - Load all the objects at remote:path into memory and report memory stats.

       • New Features

         • Connection strings

           • Config parameters can now be passed as part of the remote name as a connection string.

           • For example, to do the equivalent of --drive-shared-with-me use drive,shared_with_me:

           • Make sure we don’t save on the fly remote config to the config file (Nick Craig-Wood)

           • Make sure backends with additional config have a different name for caching (Nick Craig-Wood)

           • This work was sponsored by CERN, through the CS3MESH4EOSC Project.

             • CS3MESH4EOSC has received funding from the European Union’s Horizon 2020

             • research and innovation programme under Grant Agreement no.  863353.
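
            • As an illustration (assuming a remote named drive is configured), the connection string above
              can be used directly in a command:

                 rclone lsf "drive,shared_with_me:"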

         • build

           • Update go build version to go1.16 and raise minimum go version to go1.13 (Nick Craig-Wood)

           • Make a macOS ARM64 build to support Apple Silicon (Nick Craig-Wood)

           • Install macfuse 4.x instead of osxfuse 3.x (Nick Craig-Wood)

           • Use GO386=softfloat instead of deprecated GO386=387 for 386 builds (Nick Craig-Wood)

           • Disable IOS builds for the time being (Nick Craig-Wood)

            • Android builds made with an up-to-date NDK (x0b)

           • Add an rclone user to the Docker image but don’t use it by default (cynthia kwok)

         • dedupe: Make largest directory primary to minimize data moved (Saksham Khanna)

         • config

           • Wrap config library in an interface (Fionera)

           • Make config file system pluggable (Nick Craig-Wood)

           • --config "" or "/notfound" for in memory config only (Nick Craig-Wood)

           • Clear fs cache of stale entries when altering config (Nick Craig-Wood)

         • copyurl: Add option to print resulting auto-filename (albertony)

         • delete: Make --rmdirs obey the filters (Nick Craig-Wood)

         • docs - many fixes and reworks from edwardxml, albertony, pvalls, Ivan Andreev, Evan Harris, buengese,
           Alexey Tabakman

         • encoder/filename - add SCSU as tables (Klaus Post)

         • Add multiple paths support to --compare-dest and --copy-dest flag (K265)

         • filter: Make --exclude "dir/" equivalent to --exclude "dir/**" (Nick Craig-Wood)

         • fshttp: Add DSCP support with --dscp for QoS with differentiated services (Max Sum)

         • lib/cache: Add Delete and DeletePrefix methods (Nick Craig-Wood)

         • lib/file

           • Make pre-allocate detect disk full errors and return them (Nick Craig-Wood)

           • Don’t run preallocate concurrently (Nick Craig-Wood)

           • Retry preallocate on EINTR (Nick Craig-Wood)

         • operations: Made copy and sync operations obey a RetryAfterError (Ankur Gupta)

         • rc

           • Add string alternatives for setting options over the rc (Nick Craig-Wood)

           • Add options/local to see the options configured in the context (Nick Craig-Wood)

           • Add _config parameter to set global config for just this rc call (Nick Craig-Wood)

           • Implement passing filter config with _filter parameter (Nick Craig-Wood)
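
              • A hedged sketch combining both parameters (the JSON keys follow option names and are
                illustrative, as are src: and dst:):

                   rclone rc sync/copy srcFs=src: dstFs=dst: \
                       _config='{"CheckSum": true}' _filter='{"IncludeRule": ["*.jpg"]}'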

           • Add fscache/clear and fscache/entries to control the fs cache (Nick Craig-Wood)

           • Avoid +Inf value for speed in core/stats (albertony)

           • Add a full set of stats to core/stats (Nick Craig-Wood)

           • Allow fs= params to be a JSON blob (Nick Craig-Wood)

         • rcd: Added systemd notification during the rclone rcd command.  (Naveen Honest Raj)

         • rmdirs: Make --rmdirs obey the filters (Nick Craig-Wood)

         • version: Show build tags and type of executable (Ivan Andreev)

       • Bug Fixes

         • install.sh: make it fail on download errors (Ivan Andreev)

         • Fix excessive retries missing --max-duration timeout (Nick Craig-Wood)

         • Fix crash when --low-level-retries=0 (Nick Craig-Wood)

         • Fix failed token refresh on mounts created via the rc (Nick Craig-Wood)

         • fshttp: Fix bandwidth limiting after bad merge (Nick Craig-Wood)

         • lib/atexit

           • Unregister interrupt handler once it has fired so users can interrupt again (Nick Craig-Wood)

           • Fix occasional failure to unmount with CTRL-C (Nick Craig-Wood)

           • Fix deadlock calling Finalise while Run is running (Nick Craig-Wood)

         • lib/rest: Fix multipart uploads not stopping on context cancel (Nick Craig-Wood)

       • Mount

         • Allow mounting to root directory on windows (albertony)

         • Improved handling of relative paths on windows (albertony)

         • Fix unicode issues with accented characters on macOS (Nick Craig-Wood)

         • Docs: document the new FileSecurity option in WinFsp 2021 (albertony)

         • Docs: add note about volume path syntax on windows (albertony)

         • Fix caching of old directories after renaming them (Nick Craig-Wood)

         • Update cgofuse to the latest version to bring in macfuse 4 fix (Nick Craig-Wood)

       • VFS

         • --vfs-used-is-size to report used space using recursive scan (tYYGH)

         • Don’t set modification time if it was already correct (Nick Craig-Wood)

         • Fix Create causing windows explorer to truncate files on CTRL-C CTRL-V (Nick Craig-Wood)

         • Fix modtimes not updating when writing via cache (Nick Craig-Wood)

         • Fix modtimes changing by fractional seconds after upload (Nick Craig-Wood)

         • Fix modtime set if --vfs-cache-mode writes/full and no write (Nick Craig-Wood)

         • Rename files in cache and cancel uploads on directory rename (Nick Craig-Wood)

         • Fix directory renaming by renaming dirs cached in memory (Nick Craig-Wood)

       • Local

         • Add flag --local-no-preallocate (David Sze)

         • Make nounc an advanced option except on Windows (albertony)

         • Don’t ignore preallocate disk full errors (Nick Craig-Wood)

       • Cache

         • Add --fs-cache-expire-duration to control the fs cache (Nick Craig-Wood)

       • Crypt

         • Add option to not encrypt data (Vesnyx)

         • Log hash ok on upload (albertony)

       • Azure Blob

         • Add container public access level support.  (Manish Kumar)

       • B2

         • Fix HTML files downloaded via cloudflare (Nick Craig-Wood)

       • Box

         • Fix transfers getting stuck on token expiry after API change (Nick Craig-Wood)

       • Chunker

         • Partially implement no-rename transactions (Maxwell Calman)

       • Drive

         • Don’t stop server side copy if couldn’t read description (Nick Craig-Wood)

         • Pass context on to drive SDK - to help with cancellation (Nick Craig-Wood)

       • Dropbox

         • Add polling for changes support (Robert Thomas)

         • Make --timeout 0 work properly (Nick Craig-Wood)

         • Raise priority of rate limited message to INFO to make it more noticeable (Nick Craig-Wood)

       • Fichier

         • Implement copy & move (buengese)

         • Implement public link (buengese)

       • FTP

         • Implement Shutdown method (Nick Craig-Wood)

         • Close idle connections after --ftp-idle-timeout (1m by default) (Nick Craig-Wood)

         • Make --timeout 0 work properly (Nick Craig-Wood)

         • Add --ftp-close-timeout flag for use with awkward ftp servers (Nick Craig-Wood)

         • Retry connections and logins on 421 errors (Nick Craig-Wood)

       • Hdfs

         • Fix permissions for when directory is created (Lucas Messenger)

       • Onedrive

         • Make --timeout 0 work properly (Nick Craig-Wood)

       • S3

         • Fix --s3-profile which wasn’t working (Nick Craig-Wood)

       • SFTP

         • Close idle connections after --sftp-idle-timeout (1m by default) (Nick Craig-Wood)

         • Fix “file not found” errors for read once servers (Nick Craig-Wood)

         • Fix SetModTime stat failed: object not found with --sftp-set-modtime=false (Nick Craig-Wood)

       • Swift

         • Update github.com/ncw/swift to v2.0.0 (Nick Craig-Wood)

         • Implement copying large objects (nguyenhuuluan434)

       • Union

         • Fix crash when using epff policy (Nick Craig-Wood)

         • Fix union attempting to update files on a read only file system (Nick Craig-Wood)

         • Refactor to use fspath.SplitFs instead of fs.ParseRemote (Nick Craig-Wood)

         • Fix initialisation broken in refactor (Nick Craig-Wood)

       • WebDAV

         • Add support for sharepoint with NTLM authentication (Rauno Ots)

         • Make sharepoint-ntlm docs more consistent (Alex Chen)

         • Improve terminology in sharepoint-ntlm docs (Ivan Andreev)

         • Disable HTTP/2 for NTLM authentication (georne)

         • Fix sharepoint-ntlm error 401 for parallel actions (Ivan Andreev)

         • Check that purged directory really exists (Ivan Andreev)

       • Yandex

         • Make --timeout 0 work properly (Nick Craig-Wood)

       • Zoho

         • Replace client id - you will need to rclone config reconnect after this (buengese)

         • Add forgotten setupRegion() to NewFs - this finally fixes regions other than EU (buengese)

   v1.54.1 - 2021-03-08
       See commits

       • Bug Fixes

          • accounting: Fix --bwlimit when up or down is off (Nick Craig-Wood)

         • docs

           • Fix nesting of brackets and backticks in ftp docs (edwardxml)

           • Fix broken link in sftp page (edwardxml)

           • Fix typo in crypt.md (Romeo Kienzler)

           • Changelog: Correct link to digitalis.io (Alex JOST)

           • Replace #file-caching with #vfs-file-caching (Miron Veryanskiy)

           • Convert bogus example link to code (edwardxml)

           • Remove dead link from rc.md (edwardxml)

         • rc: Sync,copy,move: document createEmptySrcDirs parameter (Nick Craig-Wood)

         • lsjson: Fix unterminated JSON in the presence of errors (Nick Craig-Wood)

       • Mount

          • Fix mount dropping on macOS by setting --daemon-timeout 10m (Nick Craig-Wood)

       • VFS

          • Document that simultaneous use of the same cache directory isn’t supported (Nick Craig-Wood)

       • B2

         • Automatically raise upload cutoff to avoid spurious error (Nick Craig-Wood)

         • Fix failed to create file system with application key limited to a prefix (Nick Craig-Wood)

       • Drive

         • Refer to Shared Drives instead of Team Drives (Nick Craig-Wood)

       • Dropbox

         • Add scopes to oauth request and optionally “members.read” (Nick Craig-Wood)

       • S3

         • Fix failed to create file system with folder level permissions policy (Nick Craig-Wood)

         • Fix Wasabi HEAD requests returning stale data by using only 1 transport (Nick Craig-Wood)

         • Fix shared_credentials_file auth (Dmitry Chepurovskiy)

          • Add --s3-no-head to reducing costs docs (Nick Craig-Wood)

       • Union

         • Fix mkdir at root with remote:/ (Nick Craig-Wood)

       • Zoho

          • Fix custom client IDs (buengese)

   v1.54.0 - 2021-02-02
       See commits

       • New backends

         • Compression remote (experimental) (buengese)

         • Enterprise File Fabric (Nick Craig-Wood)

           • This work was sponsored by Storage Made Easy

         • HDFS (Hadoop Distributed File System) (Yury Stankevich)

         • Zoho workdrive (buengese)

       • New Features

         • Deglobalise the config (Nick Craig-Wood)

           • Global config now read from the context

           • This will enable passing of global config via the rc

           • This work was sponsored by Digitalis

         • Add --bwlimit for upload and download (Nick Craig-Wood)

           • Obey bwlimit in http Transport for better limiting
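
            • A hedged sketch of the upload:download pair syntax (values are illustrative):

                 rclone copy src: dst: --bwlimit 10M:100k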

         • Enhance systemd integration (Hekmon)

           • log level identification, manual activation with flag, automatic systemd launch detection

           • Don’t compile systemd log integration for non unix systems (Benjamin Gustin)

         • Add  a  --download  flag to md5sum/sha1sum/hashsum to force rclone to download and hash files locally
           (lostheli)
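
            • For example (illustrative; remote:path is a placeholder):

                 rclone md5sum --download remote:path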

         • Add --progress-terminal-title to print ETA to terminal title (LaSombra)

         • Make backend env vars show in help as the defaults for backend flags (Nick Craig-Wood)

         • build

           • Raise minimum go version to go1.12 (Nick Craig-Wood)

         • dedupe

           • Add --by-hash to dedupe on content hash not file name (Nick Craig-Wood)

           • Add --dedupe-mode list to just list dupes, changing nothing (Nick Craig-Wood)

           • Add warning if used on a remote which can’t have duplicate names (Nick Craig-Wood)
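
            • For example, listing duplicates by content hash without changing anything (remote: is a
              placeholder):

                 rclone dedupe --by-hash --dedupe-mode list remote: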

         • fs

           • Add Shutdown optional method for backends (Nick Craig-Wood)

           • When using --files-from check files concurrently (zhucan)

           • Accumulate stats when using --dry-run (Ingo Weiss)

           • Always show stats when using --dry-run or --interactive (Nick Craig-Wood)

           • Add support for flag --no-console on windows to hide the console window (albertony)

         • genautocomplete: Add support to output to stdout (Ingo)

         • ncdu

           • Highlight read errors instead of aborting (Claudio Bantaloukas)

           • Add sort by average size in directory (Adam Plánský)

            • Add toggle option for average size in directory - key `a' (Adam Plánský)

           • Add empty folder flag into ncdu browser (Adam Plánský)

           • Add ! (error) and . (unreadable) file flags to go with e (empty) (Nick Craig-Wood)

         • obscure: Make rclone obscure - ignore newline at end of line (Nick Craig-Wood)

         • operations

           • Add logs when need to upload files to set mod times (Nick Craig-Wood)

           • Move and copy log name of the destination object in verbose (Adam Plánský)

           • Add size if known to skipped items and JSON log (Nick Craig-Wood)

         • rc

           • Prefer actual listener address if using “:port” or “addr:0” only (Nick Craig-Wood)

           • Add listener for finished jobs (Aleksandar Jankovic)

         • serve ftp: Add options to enable TLS (Deepak Sah)

         • serve http/webdav: Redirect requests to the base url without the / (Nick Craig-Wood)

         • serve restic: Implement object cache (Nick Craig-Wood)

         • stats: Add counter for deleted directories (Nick Craig-Wood)

         • sync: Only print “There was nothing to transfer” if no errors (Nick Craig-Wood)

         • webui

           • Prompt user for updating webui if an update is available (Chaitanya Bankanhal)

           • Fix plugins initialization (negative0)

       • Bug Fixes

         • fs

           • Fix nil pointer on copy & move operations directly to remote (Anagh Kumar Baranwal)

           • Fix parsing of ..  when joining remotes (Nick Craig-Wood)

         • log: Fix enabling systemd logging when using --log-file (Nick Craig-Wood)

         • check

           • Make the error count match up in the log message (Nick Craig-Wood)

         • move: Fix data loss when source and destination are the same object (Nick Craig-Wood)

         • operations

           • Fix --cutoff-mode hard not cutting off immediately (Nick Craig-Wood)

           • Fix --immutable error message (Nick Craig-Wood)

         • sync

           • Fix --cutoff-mode soft & cautious so it doesn’t end the transfer early (Nick Craig-Wood)

           • Fix --immutable errors retrying many times (Nick Craig-Wood)

       • Docs

         • Many fixes and a rewrite of the filtering docs (edwardxml)

         • Many spelling and grammar fixes (Josh Soref)

         • Doc fixes for commands delete, purge, rmdir, rmdirs and mount (albertony)

         • And thanks to these people for many doc fixes too numerous to list

           • Ameer Dawood, Antoine GIRARD, Bob Bagwill, Christopher Stewart

           • CokeMine, David, Dov Murik, Durval Menezes, Evan Harris, gtorelly

           • Ilyess Bachiri, Janne Johansson, Kerry Su, Marcin Zelent,

           • Martin Michlmayr, Milly, Sơn Trần-Nguyễn

       • Mount

         • Update systemd status with cache stats (Hekmon)

         • Disable bazil/fuse based mount on macOS (Nick Craig-Wood)

           • Make rclone mount actually run rclone cmount under macOS (Nick Craig-Wood)

         • Implement mknod to make NFS file creation work (Nick Craig-Wood)

         • Make sure we don’t call umount more than once (Nick Craig-Wood)

         • More user friendly mounting as network drive on windows (albertony)

         • Detect if uid or gid are set in same option string: -o uid=123,gid=456 (albertony)

         • Don’t attempt to unmount if fs has been destroyed already (Nick Craig-Wood)

       • VFS

         • Fix virtual entries causing deleted files to still appear (Nick Craig-Wood)

         • Fix “file already exists” error for stale cache files (Nick Craig-Wood)

         • Fix file leaks with --vfs-cache-mode full and --buffer-size 0 (Nick Craig-Wood)

         • Fix invalid cache path on windows when using :backend: as remote (albertony)

       • Local

         • Continue listing files/folders when a circular symlink is detected (Manish Gupta)

         • New flag --local-zero-size-links to fix sync on some virtual filesystems (Riccardo Iaconelli)

       • Azure Blob

         • Add support for service principals (James Lim)

         • Add support for managed identities (Brad Ackerman)

         • Add examples for access tier (Bob Pusateri)

         • Utilize the streaming capabilities from the SDK for multipart uploads (Denis Neuling)

         • Fix setting of mime types (Nick Craig-Wood)

         • Fix crash when listing outside a SAS URL’s root (Nick Craig-Wood)

         • Delete archive tier blobs before update if --azureblob-archive-tier-delete (Nick Craig-Wood)

         • Fix crash on startup (Nick Craig-Wood)

         • Fix memory usage by upgrading the SDK to v0.13.0 and implementing a TransferManager (Nick Craig-Wood)

         • Require go1.14+ to compile due to SDK changes (Nick Craig-Wood)

       • B2

         • Make NewObject use less expensive API calls (Nick Craig-Wood)

           • This will improve --files-from and restic serve in particular

         • Fixed crash on an empty file name (lluuaapp)

       • Box

         • Fix NewObject for files that differ in case (Nick Craig-Wood)

         • Fix finding directories in a case insensitive way (Nick Craig-Wood)

       • Chunker

         • Skip long local hashing, hash in-transit (fixes) (Ivan Andreev)

         • Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)

         • Fix case-insensitive NewObject, test metadata detection (Ivan Andreev)

       • Drive

         • Implement rclone backend copyid command for copying files by ID (Nick Craig-Wood)

         • Added flag --drive-stop-on-download-limit to stop transfers  when  the  download  limit  is  exceeded
           (Anagh Kumar Baranwal)

         • Implement CleanUp workaround for team drives (buengese)

         • Allow shortcut resolution and creation to be retried (Nick Craig-Wood)

         • Log that emptying the trash can take some time (Nick Craig-Wood)

         • Add xdg office icons to xdg desktop files (Pau Rodriguez-Estivill)

       • Dropbox

         • Add support for viewing shared files and folders (buengese)

         • Enable short lived access tokens (Nick Craig-Wood)

         • Implement IDer on Objects so rclone lsf etc can read the IDs (buengese)

         • Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)

         • Make malformed_path errors from too long files not retriable (Nick Craig-Wood)

         • Test file name length before upload to fix upload loop (Nick Craig-Wood)

       • Fichier

         • Set Features ReadMimeType to true as Object.MimeType is supported (Nick Craig-Wood)

       • FTP

         • Add --ftp-disable-msld option to ignore MLSD for really old servers (Nick Craig-Wood)

         • Make --tpslimit apply (Nick Craig-Wood)

       • Google Cloud Storage

         • Storage class object header support (Laurens Janssen)

         • Fix anonymous client to use rclone’s HTTP client (Nick Craig-Wood)

         • Fix Entry doesn't belong in directory "" (same as directory) - ignoring (Nick Craig-Wood)

       • Googlephotos

         • New flag --gphotos-include-archived to show archived photos as well (Nicolas Rueff)

       • Jottacloud

         • Don’t erroneously report support for writing mime types (buengese)

         • Add support for Telia Cloud (Patrik Nordlén)

       • Mailru

         • Accept special folders eg camera-upload (Ivan Andreev)

         • Avoid prehashing of large local files (Ivan Andreev)

         • Fix uploads after recent changes on server (Ivan Andreev)

         • Fix range requests after June 2020 changes on server (Ivan Andreev)

         • Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev)

         • Remove deprecated protocol quirks (Ivan Andreev)

       • Memory

         • Fix setting of mime types (Nick Craig-Wood)

       • Onedrive

         • Add support for China region operated by 21vianet and other regional suppliers (NyaMisty)

         • Warn on gateway timeout errors (Nick Craig-Wood)

         • Fall back to normal copy if server-side copy unavailable (Alex Chen)

         • Fix server-side copy completely disabled on OneDrive for Business (Cnly)

         • (business only) workaround to replace existing file on server-side copy (Alex Chen)

         • Enhance link creation with expiry, scope, type and password (Nick Craig-Wood)

         • Remove % and # from the set of encoded characters (Alex Chen)

         • Support addressing site by server-relative URL (kice)

       • Opendrive

         • Fix finding directories in a case insensitive way (Nick Craig-Wood)

       • Pcloud

         • Fix setting of mime types (Nick Craig-Wood)

       • Premiumizeme

         • Fix finding directories in a case insensitive way (Nick Craig-Wood)

       • Qingstor

         • Fix error propagation in CleanUp (Nick Craig-Wood)

         • Fix rclone cleanup (Nick Craig-Wood)

       • S3

         • Added --s3-disable-http2 to disable http/2 (Anagh Kumar Baranwal)

         • Complete SSE-C implementation (Nick Craig-Wood)

           • Fix hashes on small files with AWS:KMS and SSE-C (Nick Craig-Wood)

           • Add MD5 metadata to objects uploaded with SSE-AWS/SSE-C (Nick Craig-Wood)

         • Add --s3-no-head parameter to minimise transactions on upload (Nick Craig-Wood)

         • Update docs with a Reducing Costs section (Nick Craig-Wood)

         • Added error handling for error code 429 indicating too many requests (Anagh Kumar Baranwal)

         • Add requester pays option (kelv)

         • Fix copy multipart with v2 auth failing with `SignatureDoesNotMatch' (Louis Koo)

       • SFTP

         • Allow cert based auth via optional pubkey (Stephen Harris)

         • Allow user to optionally check server hosts key to add security (Stephen Harris)

         • Defer asking for user passwords until the SSH connection succeeds (Stephen Harris)

         • Remember entered password in AskPass mode (Stephen Harris)

         • Implement Shutdown method (Nick Craig-Wood)

         • Implement keyboard interactive authentication (Nick Craig-Wood)

         • Make --tpslimit apply (Nick Craig-Wood)

         • Implement --sftp-use-fstat for unusual SFTP servers (Nick Craig-Wood)

       • Sugarsync

         • Fix NewObject for files that differ in case (Nick Craig-Wood)

         • Fix finding directories in a case insensitive way (Nick Craig-Wood)

       • Swift

         • Fix deletion of parts of Static Large Object (SLO) (Nguyễn Hữu Luân)

         • Ensure  partially  uploaded  large files are uploaded unless --swift-leave-parts-on-error (Nguyễn Hữu
           Luân)

       • Tardigrade

         • Upgrade to uplink v1.4.1 (Caleb Case)

       • WebDAV

         • Updated docs to show streaming to nextcloud is working (Durval Menezes)

       • Yandex

         • Set Features WriteMimeType to false as Yandex ignores mime types (Nick Craig-Wood)

   v1.53.4 - 2021-01-20
       See commits

       • Bug Fixes

         • accounting: Fix data race in Transferred() (Maciej Zimnoch)

         • build

           • Stop tagged releases making a current beta (Nick Craig-Wood)

           • Upgrade docker buildx action (Matteo Pietro Dazzi)

           • Add -buildmode to cross-compile.go (Nick Craig-Wood)

           • Fix docker build by upgrading ilteoood/docker_buildx (Nick Craig-Wood)

           • Revert GitHub actions brew fix since this is now fixed (Nick Craig-Wood)

            • Fix brew install --cask syntax for macOS build (Nick Craig-Wood)

           • Update nfpm syntax to fix build of .deb/.rpm packages (Nick Craig-Wood)

           • Fix for Windows build errors (Ivan Andreev)

         • fs: Parseduration: fixed tests to use UTC time (Ankur Gupta)

         • fshttp: Prevent overlap of HTTP headers in logs (Nathan Collins)

         • rc

           • Fix core/command giving 500 internal error (Nick Craig-Wood)

           • Add Copy method to rc.Params (Nick Craig-Wood)

           • Fix 500 error when marshalling errors from core/command (Nick Craig-Wood)

           • plugins: Create plugins files only if webui is enabled.  (negative0)

         • serve http: Fix serving files of unknown length (Nick Craig-Wood)

         • serve sftp: Fix authentication on one connection blocking others (Nick Craig-Wood)

       • Mount

         • Add optional brew tag to throw an error when using mount  in  the  binaries  installed  via  Homebrew
           (Anagh Kumar Baranwal)

         • Add “.” and “..” to directories to match cmount and expectations (Nick Craig-Wood)

       • VFS

         • Make cache dir absolute before using it to fix path too long errors (Nick Craig-Wood)

       • Chunker

         • Improve detection of incompatible metadata (Ivan Andreev)

       • Google Cloud Storage

         • Fix server side copy of large objects (Nick Craig-Wood)

       • Jottacloud

         • Fix token renewer to fix long uploads (Nick Craig-Wood)

         • Fix token refresh failed: is not a regular file error (Nick Craig-Wood)

       • Pcloud

         • Only use SHA1 hashes in EU region (Nick Craig-Wood)

       • Sharefile

         • Undo Fix backend due to API swapping integers for strings (Nick Craig-Wood)

       • WebDAV

         • Fix Open Range requests to fix 4shared mount (Nick Craig-Wood)

         • Add “Depth: 0” to GET requests to fix bitrix (Nick Craig-Wood)

   v1.53.3 - 2020-11-19
       See commits

       • Bug Fixes

         • random: Fix incorrect use of math/rand instead of crypto/rand CVE-2020-28924 (Nick Craig-Wood)

           • Passwords you have generated with rclone config may be insecure

           • See issue #4783 for more details and a checking tool

         • random: Seed math/rand in one place with crypto strong seed (Nick Craig-Wood)

       • VFS

         • Fix vfs/refresh calls with fs= parameter (Nick Craig-Wood)

       • Sharefile

         • Fix backend due to API swapping integers for strings (Nick Craig-Wood)

   v1.53.2 - 2020-10-26
       See commits

       • Bug Fixes

         • accounting

           • Fix incorrect speed and transferTime in core/stats (Nick Craig-Wood)

           • Stabilize display order of transfers on Windows (Nick Craig-Wood)

         • operations

            • Fix use of --suffix without --backup-dir (Nick Craig-Wood)

            • Fix spurious “--checksum is in use but the source and destination have no hashes in common”
              (Nick Craig-Wood)

         • build

           • Work around GitHub actions brew problem (Nick Craig-Wood)

           • Stop using set-env and set-path in the GitHub actions (Nick Craig-Wood)

       • Mount

         • mount2: Fix the swapped UID / GID values (Russell Cattelan)

       • VFS

         • Detect and recover from a file being removed externally from the cache (Nick Craig-Wood)

         • Fix a deadlock vulnerability in downloaders.Close (Leo Luan)

         • Fix a race condition in retryFailedResets (Leo Luan)

         • Fix missed concurrency control between some item operations and reset (Leo Luan)

         • Add exponential backoff during ENOSPC retries (Leo Luan)

         • Add a missed update of used cache space (Leo Luan)

          • Fix --no-modtime to not attempt to set modtimes (as documented) (Nick Craig-Wood)

       • Local

          • Fix sizes and syncing with --links option on Windows (Nick Craig-Wood)

       • Chunker

         • Disable ListR to fix missing files on GDrive (workaround) (Ivan Andreev)

         • Fix upload over crypt (Ivan Andreev)

       • Fichier

         • Increase maximum file size from 100GB to 300GB (gyutw)

       • Jottacloud

         • Remove clientSecret from config when upgrading to token based authentication (buengese)

         • Avoid double url escaping of device/mountpoint (albertony)

          • Remove DirMove workaround as it’s not required anymore (buengese)

       • Mailru

         • Fix uploads after recent changes on server (Ivan Andreev)

          • Fix range requests after June changes on server (Ivan Andreev)

         • Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev)

       • Onedrive

         • Fix disk usage for sharepoint (Nick Craig-Wood)

       • S3

         • Add missing regions for AWS (Anagh Kumar Baranwal)

       • Seafile

         • Fix accessing libraries > 2GB on 32 bit systems (Muffin King)

       • SFTP

         • Always convert the checksum to lower case (buengese)

       • Union

         • Create root directories if none exist (Nick Craig-Wood)

   v1.53.1 - 2020-09-13
       See commits

       • Bug Fixes

          • accounting: Remove new line from end of --stats-one-line display (Nick Craig-Wood)

         • check

            • Add back missing --download flag (Nick Craig-Wood)

           • Fix docs (Nick Craig-Wood)

         • docs

            • Note --log-file does append (Nick Craig-Wood)

            • Add full stops for consistency in rclone --help (edwardxml)

           • Add Tencent COS to s3 provider list (wjielai)

           • Updated mount command to reflect that it requires Go 1.13 or newer (Evan Harris)

           • jottacloud: Mention that uploads from local disk will not need to  cache  files  to  disk  for  md5
             calculation (albertony)

           • Fix formatting of rc docs page (Nick Craig-Wood)

         • build

           • Include vendor tar ball in release and fix startdev (Nick Craig-Wood)

           • Fix “Illegal instruction” error for ARMv6 builds (Nick Craig-Wood)

           • Fix architecture name in ARMv7 build (Nick Craig-Wood)

       • VFS

         • Fix spurious error “vfs cache: failed to _ensure cache EOF” (Nick Craig-Wood)

         • Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood)

       • Local

         • Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood)

       • Drive

         • Re-adds special oauth help text (Tim Gallant)

       • Opendrive

         • Do not retry 400 errors (Evan Harris)

   v1.53.0 - 2020-09-02
       See commits

       • New Features

         • The VFS layer was heavily reworked for this release - see below for more details

          • Interactive mode -i/--interactive for destructive operations (fishbullet)

          • Add --bwlimit-file flag to limit speeds of individual file transfers (Nick Craig-Wood)

         • Transfers are sorted by start time in the stats and progress output (Max Sum)

         • Make sure backends expand ~ and environment vars in file names they use (Nick Craig-Wood)

          • Add --refresh-times flag to set modtimes on hashless backends (Nick Craig-Wood)

         • build

           • Remove vendor directory in favour of Go modules (Nick Craig-Wood)

           • Build with go1.15.x by default (Nick Craig-Wood)

           • Drop macOS 386 build as it is no longer supported by go1.15 (Nick Craig-Wood)

           • Add ARMv7 to the supported builds (Nick Craig-Wood)

           • Enable rclone cmount on macOS (Nick Craig-Wood)

           • Make rclone build with gccgo (Nick Craig-Wood)

           • Make rclone build with wasm (Nick Craig-Wood)

           • Change beta numbering to be semver compatible (Nick Craig-Wood)

           • Add file properties and icon to Windows executable (albertony)

           • Add experimental interface for integrating rclone into browsers (Nick Craig-Wood)

         • lib: Add file name compression (Klaus Post)

         • rc

           • Allow installation and use of plugins and test plugins with rclone-webui (Chaitanya Bankanhal)

           • Add reverse proxy pluginsHandler for serving plugins (Chaitanya Bankanhal)

           • Add mount/listmounts option for listing current mounts (Chaitanya Bankanhal)

           • Add operations/uploadfile to upload a file through rc using encoding multipart/form-data (Chaitanya
             Bankanhal)

           • Add core/command to execute rclone terminal commands.  (Chaitanya Bankanhal)

         • rclone check

           • Add reporting of filenames for same/missing/changed (Nick Craig-Wood)

           • Make check command obey --dry-run/-i/--interactive (Nick Craig-Wood)

           • Make check do --checkers files concurrently (Nick Craig-Wood)

           • Retry downloads if they fail when using the --download flag (Nick Craig-Wood)

           • Make it show stats by default (Nick Craig-Wood)

         • rclone obscure: Allow obscure command to accept password on STDIN (David Ibarra)

         • rclone config

           • Set RCLONE_CONFIG_DIR for use in config files and subprocesses (Nick Craig-Wood)

           • Reject remote names starting with a dash.  (jtagcat)

         • rclone cryptcheck: Add reporting of filenames for same/missing/changed (Nick Craig-Wood)

         • rclone dedupe: Make it obey the --size-only flag for duplicate detection (Nick Craig-Wood)

         • rclone link: Add --expire and --unlink flags (Roman Kredentser)
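
            • For example (illustrative paths; --expire takes a duration such as 1d):

                 rclone link --expire 1d remote:path/file
                 rclone link --unlink remote:path/file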

         • rclone mkdir: Warn when using mkdir on remotes which can’t have empty directories (Nick Craig-Wood)

         • rclone rc: Allow JSON parameters to simplify command line usage (Nick Craig-Wood)

         • rclone serve ftp

           • Don’t compile on < go1.13 after dependency update (Nick Craig-Wood)

           • Add error message if auth proxy fails (Nick Craig-Wood)

           • Use refactored goftp.io/server library for binary shrink (Nick Craig-Wood)

         • rclone  serve  restic:  Expose  interfaces so that rclone can be used as a library from within restic
           (Jack)

         • rclone sync: Add --track-renames-strategy leaf (Nick Craig-Wood)

         • rclone touch: Add ability to set nanosecond resolution times (Nick Craig-Wood)

         • rclone tree: Remove  -i  shorthand  for  --noindent  as  it  conflicts  with  -i/--interactive  (Nick
           Craig-Wood)

       • Bug Fixes

         • accounting

           • Fix documentation for speed/speedAvg (Nick Craig-Wood)

           • Fix elapsed time not show actual time since beginning (Chaitanya Bankanhal)

           • Fix deadlock in stats printing (Nick Craig-Wood)

         • build

           • Fix file handle leak in GitHub release tool (Garrett Squire)

         • rclone check: Fix successful retries with --download counting errors (Nick Craig-Wood)

         • rclone dedupe: Fix logging to be easier to understand (Nick Craig-Wood)

       • Mount

         • Warn macOS users that mount implementation is changing (Nick Craig-Wood)

           • to test the new implementation use rclone cmount instead of rclone mount

           • this is because the library rclone uses has dropped macOS support

         • rc interface

           • Add call for unmount all (Chaitanya Bankanhal)

           • Make mount/mount remote control take vfsOpt option (Nick Craig-Wood)

           • Add mountOpt to mount/mount (Nick Craig-Wood)

           • Add VFS and Mount options to mount/listmounts (Nick Craig-Wood)

         • Catch panics in cgofuse initialization and turn into error messages (Nick Craig-Wood)

         • Always supply stat information in Readdir (Nick Craig-Wood)

         • Add support for reading unknown length files using direct IO (Windows) (Nick Craig-Wood)

         • Fix Windows mounts to not add -o uid/gid=-1 if the user supplies -o uid/gid (Nick Craig-Wood)

         • Fix macOS losing directory contents in cmount (Nick Craig-Wood)

         • Fix volume name broken in recent refactor (Nick Craig-Wood)

       • VFS

         • Implement partial reads for --vfs-cache-mode full (Nick Craig-Wood)

         • Add --vfs-writeback option to delay writes back to cloud storage (Nick Craig-Wood)

         • Add --vfs-read-ahead parameter for use with --vfs-cache-mode full (Nick Craig-Wood)

         • Restart pending uploads on restart of the cache (Nick Craig-Wood)

         • Support synchronous cache space recovery upon ENOSPC (Leo Luan)

         • Allow ReadAt and WriteAt to run concurrently with themselves (Nick Craig-Wood)

         • Change modtime of file before upload to current (Rob Calistri)

         • Recommend --vfs-cache-mode writes on backends which can’t stream (Nick Craig-Wood)

         • Add an optional fs parameter to vfs rc methods (Nick Craig-Wood)

         • Fix errors when using > 260 char files in the cache in Windows (Nick Craig-Wood)

         • Fix renaming of items while they are being uploaded (Nick Craig-Wood)

         • Fix very high load caused by slow directory listings (Nick Craig-Wood)

         • Fix renamed files not being uploaded with --vfs-cache-mode minimal (Nick Craig-Wood)

         • Fix directory locking caused by slow directory listings (Nick Craig-Wood)

         • Fix saving from chrome without --vfs-cache-mode writes (Nick Craig-Wood)
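
         As a hedged illustration of the new VFS flags above (--vfs-cache-mode full together with
         --vfs-read-ahead and --vfs-writeback), a mount might look like this; the size and duration values are
         illustrative only:

             rclone mount remote: /mnt/remote --vfs-cache-mode full --vfs-read-ahead 256M --vfs-writeback 10s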

       • Local

         • Add --local-no-updated to provide a consistent view of changing objects (Nick Craig-Wood)

         • Add --local-no-set-modtime option to prevent modtime changes (tyhuber1)

         • Fix race conditions updating and reading Object metadata (Nick Craig-Wood)

       • Cache

         • Make any created backends be cached to fix rc problems (Nick Craig-Wood)

         • Fix dedupe on caches wrapping drives (Nick Craig-Wood)

       • Crypt

         • Add --crypt-server-side-across-configs flag (Nick Craig-Wood)

         • Make any created backends be cached to fix rc problems (Nick Craig-Wood)

       • Alias

         • Make any created backends be cached to fix rc problems (Nick Craig-Wood)

       • Azure Blob

         • Don’t compile on < go1.13 after dependency update (Nick Craig-Wood)

       • B2

         • Implement server-side copy for files > 5GB (Nick Craig-Wood)

         • Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood)

         • Note that b2’s encoding now allows \ but rclone’s hasn’t changed (Nick Craig-Wood)

         • Fix transfers when using download_url (Nick Craig-Wood)

       • Box

         • Implement rclone cleanup (buengese)

         • Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood)

         • Allow authentication with access token (David)

       • Chunker

         • Make any created backends be cached to fix rc problems (Nick Craig-Wood)

       • Drive

         • Add rclone backend drives to list shared drives (teamdrives) (Nick Craig-Wood)

         • Implement rclone backend untrash (Nick Craig-Wood)

         • Work around drive bug which didn’t set modtime of copied docs (Nick Craig-Wood)

         • Added --drive-starred-only to only show starred files (Jay McEntire)

         • Deprecate --drive-alternate-export as it is no longer needed (themylogin)

         • Fix duplication of Google docs on server-side copy (Nick Craig-Wood)

         • Fix “panic: send on closed channel” when recycling dir entries (Nick Craig-Wood)
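
         The new Drive backend commands above can be run via rclone backend (a sketch; drive: and the path are
         placeholders):

             rclone backend drives drive:
             rclone backend untrash drive:directory/subdirectory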

       • Dropbox

         • Add copyright detector info in limitations section in the docs (Alex Guerrero)

         • Fix rclone link by removing expires parameter (Nick Craig-Wood)

       • Fichier

         • Detect “Flood detected: IP Locked” error and sleep for 30s (Nick Craig-Wood)

       • FTP

         • Add explicit TLS support (Heiko Bornholdt)

         • Add support for --dump bodies and --dump auth for debugging (Nick Craig-Wood)

         • Fix interoperation with pure-ftpd (Nick Craig-Wood)

       • Google Cloud Storage

         • Add support for anonymous access (Kai Lüke)

       • Jottacloud

         • Bring back legacy authentication for use with whitelabel versions (buengese)

         • Switch to new api root - also implement a very ugly workaround for the DirMove failures (buengese)

       • Onedrive

         • Rework cancel of multipart uploads on rclone exit (Nick Craig-Wood)

         • Implement rclone cleanup (Nick Craig-Wood)

         • Add --onedrive-no-versions flag to remove old versions (Nick Craig-Wood)

       • Pcloud

         • Implement rclone link for public link creation (buengese)

       • Qingstor

         • Cancel in progress multipart uploads on rclone exit (Nick Craig-Wood)

       • S3

         • Preserve metadata when doing multipart copy (Nick Craig-Wood)

         • Cancel in progress multipart uploads and copies on rclone exit (Nick Craig-Wood)

         • Add rclone link for public link sharing (Roman Kredentser)

         • Add rclone backend restore command to restore objects from GLACIER (Nick Craig-Wood)

         • Add rclone cleanup and rclone backend cleanup to clean unfinished multipart uploads (Nick Craig-Wood)

         • Add rclone backend list-multipart-uploads to list unfinished multipart uploads (Nick Craig-Wood)

         • Add --s3-max-upload-parts support (Kamil Trzciński)

         • Add --s3-no-check-bucket for minimising rclone transactions and perms (Nick Craig-Wood)

         • Add --s3-profile and --s3-shared-credentials-file options (Nick Craig-Wood)

         • Use regional s3 us-east-1 endpoint (David)

         • Add Scaleway provider (Vincent Feltz)

         • Update IBM COS endpoints (Egor Margineanu)

         • Reduce the default --s3-copy-cutoff to < 5GB for Backblaze S3 compatibility (Nick Craig-Wood)

         • Fix detection of bucket existing (Nick Craig-Wood)
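
         A hedged sketch of the new S3 backend commands above (s3:bucket and the -o option values are
         illustrative; see the S3 backend docs for the exact option names):

             rclone backend restore s3:bucket/path -o priority=Standard -o lifetime=1
             rclone backend list-multipart-uploads s3:bucket
             rclone backend cleanup -o max-age=7w s3:bucket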

       • SFTP

         • Use  the  absolute  path  instead  of  the relative path for listing for improved compatibility (Nick
           Craig-Wood)

         • Add --sftp-subsystem and --sftp-server-command options (aus)

       • Swift

         • Fix dangling large objects breaking the listing (Nick Craig-Wood)

         • Fix purge not deleting directory markers (Nick Craig-Wood)

         • Fix update multipart object removing all of its own parts (Nick Craig-Wood)

         • Fix missing hash from object returned from upload (Nick Craig-Wood)

       • Tardigrade

         • Upgrade to uplink v1.2.0 (Kaloyan Raev)

       • Union

         • Fix writing with the all policy (Nick Craig-Wood)

       • WebDAV

         • Fix directory creation with 4shared (Nick Craig-Wood)

   v1.52.3 - 2020-08-07
       See commits

       • Bug Fixes

         • docs

           • Disable smart typography (e.g. en-dash) in MANUAL.* and man page (Nick Craig-Wood)

           • Update install.md to reflect minimum Go version (Evan Harris)

           • Update install from source instructions (Nick Craig-Wood)

           • make_manual: Support SOURCE_DATE_EPOCH (Morten Linderud)

         • log: Fix --use-json-log going to stderr not --log-file on Windows (Nick Craig-Wood)

         • serve dlna: Fix file list on Samsung Series 6+ TVs (Matteo Pietro Dazzi)

         • sync: Fix deadlock with --track-renames-strategy modtime (Nick Craig-Wood)

       • Cache

         • Fix moveto/copyto remote:file remote:file2 (Nick Craig-Wood)

       • Drive

         • Stop using root_folder_id as a cache (Nick Craig-Wood)

         • Make dangling shortcuts appear in listings (Nick Craig-Wood)

         • Drop “Disabling ListR” messages down to debug (Nick Craig-Wood)

         • Workaround and policy for Google Drive API (Dmitry Ustalov)

       • FTP

         • Add note to docs about home vs root directory selection (Nick Craig-Wood)

       • Onedrive

         • Fix reverting to Copy when Move would have worked (Nick Craig-Wood)

         • Avoid comma rendered in URL in onedrive.md (Kevin)

       • Pcloud

         • Fix oauth on European region “eapi.pcloud.com” (Nick Craig-Wood)

       • S3

         • Fix bucket Region auto detection when Region unset in config (Nick Craig-Wood)

   v1.52.2 - 2020-06-24
       See commits

       • Bug Fixes

         • build

           • Fix docker release build action (Nick Craig-Wood)

           • Fix custom timezone in Docker image (NoLooseEnds)

         • check: Fix misleading message which printed errors instead of differences (Nick Craig-Wood)

         • errors: Add WSAECONNREFUSED and more to the list of retriable Windows errors (Nick Craig-Wood)

         • rcd: Fix incorrect prometheus metrics (Gary Kim)

         • serve restic: Fix flags so they use environment variables (Nick Craig-Wood)

         • serve webdav: Fix flags so they use environment variables (Nick Craig-Wood)

         • sync: Fix --track-renames-strategy modtime (Nick Craig-Wood)

       • Drive

         • Fix not being able to delete a directory with a trashed shortcut (Nick Craig-Wood)

         • Fix creating a directory inside a shortcut (Nick Craig-Wood)

         • Fix --drive-impersonate with cached root_folder_id (Nick Craig-Wood)

       • SFTP

         • Fix SSH key PEM loading (Zac Rubin)

       • Swift

         • Speed up deletes by not retrying segment container deletes (Nick Craig-Wood)

       • Tardigrade

         • Upgrade to uplink v1.1.1 (Caleb Case)

       • WebDAV

         • Fix free/used display for rclone about/df for certain backends (Nick Craig-Wood)

   v1.52.1 - 2020-06-10
       See commits

       • Bug Fixes

         • lib/file: Fix SetSparse on Windows 7 which fixes downloads of files > 250MB (Nick Craig-Wood)

         • build

           • Update go.mod to go1.14 to enable -mod=vendor build (Nick Craig-Wood)

           • Remove quicktest from Dockerfile (Nick Craig-Wood)

           • Build Docker images with GitHub actions (Matteo Pietro Dazzi)

           • Update Docker build workflows (Nick Craig-Wood)

           • Set user_allow_other in /etc/fuse.conf in the Docker image (Nick Craig-Wood)

           • Fix xgo build after go1.14 go.mod update (Nick Craig-Wood)

         • docs

           • Add link to source and modified time to footer of every page (Nick Craig-Wood)

           • Remove manually set dates and use git dates instead (Nick Craig-Wood)

           • Minor tense, punctuation, brevity and positivity changes for the home page (edwardxml)

           • Remove leading slash in page reference in footer when present (Nick Craig-Wood)

           • Note commands which need obscured input in the docs (Nick Craig-Wood)

           • obscure: Write more help as we are referencing it elsewhere (Nick Craig-Wood)

       • VFS

         • Fix OS vs Unix path confusion - fixes ChangeNotify on Windows (Nick Craig-Wood)

       • Drive

         • Fix missing items when listing using --fast-list / ListR (Nick Craig-Wood)

       • Putio

         • Fix panic on Object.Open (Cenk Alti)

       • S3

         • Fix upload of single files into buckets without create permission (Nick Craig-Wood)

         • Fix --header-upload (Nick Craig-Wood)

       • Tardigrade

         • Fix listing bug by upgrading to uplink v1.0.7

         • Set UserAgent to rclone (Caleb Case)

   v1.52.0 - 2020-05-27
       Special thanks to Martin Michlmayr for proof reading and correcting all the docs and  Edward  Barker  for
       helping re-write the front page.

       See commits

       • New backends

         • Tardigrade backend for use with storj.io (Caleb Case)

         • Union re-write to have multiple writable remotes (Max Sum)

         • Seafile for Seafile server (Fred @creativeprojects)

       • New commands

         • backend: command for backend-specific commands (see backends) (Nick Craig-Wood)

         • cachestats: Deprecate in favour of rclone backend stats cache: (Nick Craig-Wood)

         • dbhashsum: Deprecate in favour of rclone hashsum DropboxHash (Nick Craig-Wood)

       • New Features

         • Add  --header-download  and --header-upload flags for setting HTTP headers when uploading/downloading
           (Tim Gallant)

         • Add --header flag to add HTTP headers to every HTTP transaction (Nick Craig-Wood)
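
           For example (a sketch; the header values are placeholders):

               rclone copy /local/dir remote:bucket --header-upload "Content-Disposition: attachment"
               rclone ls remote:bucket --header "X-Custom-Header: demo"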

         • Add --check-first to do all checking before starting transfers (Nick Craig-Wood)

         • Add  --track-renames-strategy  for  configurable  matching  criteria   for   --track-renames   (Bernd
           Schoolmann)

         • Add --cutoff-mode hard,soft,cautious (Shing Kit Chan & Franklyn Tackitt)

         • Filter flags (e.g. --files-from -) can read from stdin (fishbullet)

         • Add --error-on-no-transfer option (Jon Fautley)

         • Implement --order-by xxx,mixed for copying some small and some big files (Nick Craig-Wood)

         • Allow --max-backlog to be negative meaning as large as possible (Nick Craig-Wood)

         • Added --no-unicode-normalization flag to allow Unicode filenames to remain unique (Ben Zenker)

         • Allow --min-age/--max-age to take a date as well as a duration (Nick Craig-Wood)

         • Add rename statistics for file and directory renames (Nick Craig-Wood)

         • Add statistics output to JSON log (reddi)

         • Make stats be printed on non-zero exit code (Nick Craig-Wood)

         • When running --password-command allow use of stdin (Sébastien Gross)

         • Stop empty strings being a valid remote path (Nick Craig-Wood)

         • accounting: support WriterTo for less memory copying (Nick Craig-Wood)

         • build

           • Update to use go1.14 for the build (Nick Craig-Wood)

           • Add -trimpath to release build for reproducible builds (Nick Craig-Wood)

           • Remove GOOS and GOARCH from Dockerfile (Brandon Philips)

         • config

           • Fsync the config file after writing to save more reliably (Nick Craig-Wood)

           • Add --obscure and --no-obscure flags to config create/update (Nick Craig-Wood)

           • Make config show take remote: as well as remote (Nick Craig-Wood)

         • copyurl: Add --no-clobber flag (Denis)

         • delete: Added --rmdirs flag to delete directories as well (Kush)

         • filter: Added --files-from-raw flag (Ankur Gupta)

         • genautocomplete: Add support for fish shell (Matan Rosenberg)

         • log: Add support for syslog LOCAL facilities (Patryk Jakuszew)

         • lsjson: Add --hash-type parameter and use it in lsf to speed up hashing (Nick Craig-Wood)

         • rc

           • Add -o/--opt and -a/--arg for more structured input (Nick Craig-Wood)

           • Implement backend/command for running backend-specific commands remotely (Nick Craig-Wood)

           • Add mount/mount command for starting rclone mount via the API (Chaitanya)

         • rcd: Add Prometheus metrics support (Gary Kim)

         • serve http

           • Added a --template flag for user defined markup (calistri)

           • Add Last-Modified headers to files and directories (Nick Craig-Wood)

         • serve sftp: Add support for multiple host keys by repeating --key flag (Maxime Suret)
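
           A sketch of serving with multiple host keys (the key paths are placeholders):

               rclone serve sftp remote:path --key /etc/rclone/ssh_host_rsa_key --key /etc/rclone/ssh_host_ecdsa_key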

         • touch: Add --localtime flag to make --timestamp localtime not UTC (Nick Craig-Wood)

       • Bug Fixes

         • accounting

           • Restore “Max number of stats groups reached” log line (Michał Matczuk)

           • Correct exit code when the --max-transfer limit is exceeded (Anuar Serdaliyev)

           • Reset bytes read during copy retry (Ankur Gupta)

           • Fix race clearing stats (Nick Craig-Wood)

         • copy: Only create empty directories when they don’t exist on the remote (Ishuah Kariuki)

         • dedupe: Stop dedupe deleting files with identical IDs (Nick Craig-Wood)

         • oauth

           • Use custom http client so that --no-check-certificate is honored by oauth token fetch (Mark Spieth)

           • Replace deprecated oauth2.NoContext (Lars Lehtonen)

         • operations

           • Fix setting the timestamp on Windows for multithread copy (Nick Craig-Wood)

           • Make rcat obey --ignore-checksum (Nick Craig-Wood)

           • Make --max-transfer more accurate (Nick Craig-Wood)

         • rc

           • Fix dropped error (Lars Lehtonen)

           • Fix misplaced http server config (Xiaoxing Ye)

           • Disable duplicate log (ElonH)

         • serve dlna

           • Cds: don’t specify childCount at all when unknown (Dan Walters)

           • Cds: use modification time as date in dlna metadata (Dan Walters)

         • serve restic: Fix tests after restic project removed vendoring (Nick Craig-Wood)

         • sync

           • Fix incorrect “nothing to transfer” message using --delete-before (Nick Craig-Wood)

           • Only create empty directories when they don’t exist on the remote (Ishuah Kariuki)

       • Mount

         • Add --async-read flag to disable asynchronous reads (Nick Craig-Wood)

         • Ignore --allow-root flag with a warning as it has been removed upstream (Nick Craig-Wood)

         • Warn if --allow-non-empty used on Windows and clarify docs (Nick Craig-Wood)

         • Constrain to go1.13 or above otherwise bazil.org/fuse fails to compile (Nick Craig-Wood)

         • Fix fail because of too long volume name (evileye)

         • Report 1PB free for unknown disk sizes (Nick Craig-Wood)

         • Map more rclone errors into file systems errors (Nick Craig-Wood)

         • Fix disappearing cwd problem (Nick Craig-Wood)

         • Use ReaddirPlus on Windows to improve directory listing performance (Nick Craig-Wood)

         • Send a hint as to whether the filesystem is case insensitive or not (Nick Craig-Wood)

         • Add rc command mount/types (Nick Craig-Wood)

         • Change maximum leaf name length to 1024 bytes (Nick Craig-Wood)

       • VFS

         • Add  --vfs-read-wait  and  --vfs-write-wait flags to control time waiting for a sequential read/write
           (Nick Craig-Wood)

         • Change default --vfs-read-wait to 20ms (it was 5ms and not configurable) (Nick Craig-Wood)

         • Make df output more consistent on a rclone mount.  (Yves G)

         • Report 1PB free for unknown disk sizes (Nick Craig-Wood)

         • Fix race condition caused by unlocked reading of Dir.path (Nick Craig-Wood)

         • Make File lock and Dir lock not overlap to avoid deadlock (Nick Craig-Wood)

         • Implement lock ordering between File and Dir to eliminate deadlocks (Nick Craig-Wood)

         • Factor the vfs cache into its own package (Nick Craig-Wood)

         • Pin the Fs in use in the Fs cache (Nick Craig-Wood)

         • Add SetSys() methods to Node to allow caching stuff on a node (Nick Craig-Wood)

         • Ignore file not found errors from Hash in Read.Release (Nick Craig-Wood)

         • Fix hang in read wait code (Nick Craig-Wood)

       • Local

         • Speed up multi thread downloads by using sparse files on Windows (Nick Craig-Wood)

         • Implement --local-no-sparse flag for disabling sparse files (Nick Craig-Wood)

         • Implement rclone backend noop for testing purposes (Nick Craig-Wood)

         • Fix “file not found” errors on post transfer Hash calculation (Nick Craig-Wood)

       • Cache

         • Implement rclone backend stats command (Nick Craig-Wood)

         • Fix Server Side Copy with Temp Upload (Brandon McNama)

         • Remove Unused Functions (Lars Lehtonen)

         • Disable race tests until bbolt is fixed (Nick Craig-Wood)

         • Move methods used for testing into test file (greatroar)

         • Add Pin and Unpin and canonicalised lookup (Nick Craig-Wood)

         • Use proper import path go.etcd.io/bbolt (Robert-André Mauchin)

       • Crypt

         • Calculate hashes for uploads from local disk (Nick Craig-Wood)

           • This allows crypted Jottacloud uploads without using local disk

           • This means crypted s3/b2 uploads will now have hashes

         • Added rclone backend decode/encode commands to replicate functionality of  cryptdecode  (Anagh  Kumar
           Baranwal)

         • Get rid of the unused Cipher interface as it obfuscated the code (Nick Craig-Wood)

       • Azure Blob

         • Implement streaming of unknown sized files so rcat is now supported (Nick Craig-Wood)

         • Implement memory pooling to control memory use (Nick Craig-Wood)

         • Add --azureblob-disable-checksum flag (Nick Craig-Wood)

         • Retry InvalidBlobOrBlock error as it may indicate block concurrency problems (Nick Craig-Wood)

         • Remove unused Object.parseTimeString() (Lars Lehtonen)

         • Fix permission error on SAS URL limited to container (Nick Craig-Wood)

       • B2

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Ignore directory markers at the root also (Nick Craig-Wood)

         • Force the case of the SHA1 to lowercase (Nick Craig-Wood)

         • Remove unused largeUpload.clearUploadURL() (Lars Lehtonen)

       • Box

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Implement About to read size used (Nick Craig-Wood)

         • Add token renew function for jwt auth (David Bramwell)

         • Added support for interchangeable root folder for Box backend (Sunil Patra)

         • Remove unnecessary iat from jws claims (David)

       • Drive

         • Follow shortcuts by default, skip with --drive-skip-shortcuts (Nick Craig-Wood)

         • Implement rclone backend shortcut command for creating shortcuts (Nick Craig-Wood)

         • Added rclone backend command to change service_account_file and chunk_size (Anagh Kumar Baranwal)

         • Fix missing files when using --fast-list and --drive-shared-with-me (Nick Craig-Wood)

         • Fix duplicate items when using --drive-shared-with-me (Nick Craig-Wood)

         • Extend --drive-stop-on-upload-limit to respond to teamDriveFileLimitExceeded.  (harry)

         • Don’t delete files with multiple parents to avoid data loss (Nick Craig-Wood)

         • Server side copy docs use default description if empty (Nick Craig-Wood)
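
         The new rclone backend shortcut command above can be used like this (a sketch; the item names are
         placeholders):

             rclone backend shortcut drive: source_item destination_shortcut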

       • Dropbox

         • Make the insufficient space error fatal (harry)

         • Add info about required redirect url (Elan Ruusamäe)

       • Fichier

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Implement custom pacer to deal with the new rate limiting (buengese)

       • FTP

         • Fix lockup when using concurrency limit on failed connections (Nick Craig-Wood)

         • Fix lockup on failed upload when using concurrency limit (Nick Craig-Wood)

         • Fix lockup on Close failures when using concurrency limit (Nick Craig-Wood)

         • Work around pureftp sending spurious 150 messages (Nick Craig-Wood)

       • Google Cloud Storage

         • Add support for --header-upload and --header-download (Nick Craig-Wood)

         • Add ARCHIVE storage class to help (Adam Stroud)

         • Ignore directory markers at the root (Nick Craig-Wood)

       • Googlephotos

         • Make the start year configurable (Daven)

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Create feature/favorites directory (Brandon Philips)

         • Fix “concurrent map write” error (Nick Craig-Wood)

         • Don’t put an image in error message (Nick Craig-Wood)

       • HTTP

         • Improved directory listing with new template from Caddy project (calisro)

       • Jottacloud

         • Implement --jottacloud-trashed-only (buengese)

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Use RawURLEncoding when decoding base64 encoded login token (buengese)

         • Implement cleanup (buengese)

         • Update  docs  regarding  cleanup,  removed  remains  from  old  auth, and added warning about special
           mountpoints.  (albertony)

       • Mailru

         • Describe 2FA requirements (valery1707)

       • Onedrive

         • Implement --onedrive-server-side-across-configs (Nick Craig-Wood)

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Fix occasional 416 errors on multipart uploads (Nick Craig-Wood)

         • Added maximum chunk size limit warning in the docs (Harry)

         • Fix missing drive on config (Nick Craig-Wood)

         • Make the quotaLimitReached error fatal (harry)

       • Opendrive

         • Add support for --header-upload and --header-download (Tim Gallant)

       • Pcloud

         • Added support for interchangeable root folder for pCloud backend (Sunil Patra)

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Fix initial config “Auth state doesn’t match” message (Nick Craig-Wood)

       • Premiumizeme

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Prune unused functions (Lars Lehtonen)

       • Putio

         • Add support for --header-upload and --header-download (Nick Craig-Wood)

         • Make downloading files use the rclone http Client (Nick Craig-Wood)

         • Fix parsing of remotes with leading and trailing / (Nick Craig-Wood)

       • Qingstor

         • Make rclone cleanup remove pending multipart uploads older than 24h (Nick Craig-Wood)

         • Try harder to cancel failed multipart uploads (Nick Craig-Wood)

         • Prune multiUploader.list() (Lars Lehtonen)

         • Lint fix (Lars Lehtonen)

       • S3

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Use memory pool for buffer allocations (Maciej Zimnoch)

         • Add SSE-C support for AWS, Ceph, and MinIO (Jack Anderson)

         • Fail fast multipart upload (Michał Matczuk)

         • Report errors on bucket creation (mkdir) correctly (Nick Craig-Wood)

         • Specify that Minio supports URL encoding in listings (Nick Craig-Wood)

         • Added 500 as retryErrorCode (Michał Matczuk)

         • Use --low-level-retries as the number of SDK retries (Aleksandar Janković)

         • Fix multipart abort context (Aleksandar Jankovic)

         • Replace deprecated session.New() with session.NewSession() (Lars Lehtonen)

         • Use the provided size parameter when allocating a new memory pool (Joachim Brandon LeBlanc)

         • Use rclone’s low level retries instead of AWS SDK to fix listing retries (Nick Craig-Wood)

         • Ignore directory markers at the root also (Nick Craig-Wood)

         • Use single memory pool (Michał Matczuk)

         • Do not resize buf on put to memBuf (Michał Matczuk)

         • Improve docs for --s3-disable-checksum (Nick Craig-Wood)

         • Don’t leak memory or tokens in edge cases for multipart upload (Nick Craig-Wood)

       • Seafile

         • Implement 2FA (Fred)

       • SFTP

         • Added --sftp-pem-key to support inline key files (calisro)

         • Fix post transfer copies failing with 0 size when using set_modtime=false (Nick Craig-Wood)

       • Sharefile

         • Add support for --header-upload and --header-download (Tim Gallant)

       • Sugarsync

         • Add support for --header-upload and --header-download (Tim Gallant)

       • Swift

         • Add support for --header-upload and --header-download (Nick Craig-Wood)

         • Fix cosmetic issue in error message (Martin Michlmayr)

       • Union

         • Implement multiple writable remotes (Max Sum)

         • Fix server-side copy (Max Sum)

         • Implement ListR (Max Sum)

         • Enable ListR when upstreams contain local (Max Sum)

       • WebDAV

         • Add support for --header-upload and --header-download (Tim Gallant)

         • Fix X-OC-Mtime header for Transip compatibility (Nick Craig-Wood)

         • Report full and consistent usage with about (Yves G)

       • Yandex

         • Add support for --header-upload and --header-download (Tim Gallant)

   v1.51.0 - 2020-02-01
       • New backends

         • Memory (Nick Craig-Wood)

         • Sugarsync (Nick Craig-Wood)

       • New Features

         • Adjust all backends to have --backend-encoding parameter (Nick Craig-Wood)

           • this enables the encoding for special characters to be adjusted or disabled

         • Add --max-duration flag to control the maximum duration of a transfer session (boosh)

         • Add --expect-continue-timeout flag, default 1s (Nick Craig-Wood)

         • Add --no-check-dest flag for copying without testing the destination (Nick Craig-Wood)

         • Implement --order-by flag to order transfers (Nick Craig-Wood)
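
         As a hedged sketch combining --order-by with the new --max-duration flag above (the remote names are
         placeholders):

             rclone sync source:path dest:path --order-by size,descending --max-duration 2h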

         • accounting

           • Don’t show entries in both transferring and checking (Nick Craig-Wood)

           • Add option to delete stats (Aleksandar Jankovic)

         • build

           • Compress the test builds with gzip (Nick Craig-Wood)

           • Implement a framework for starting test servers during tests (Nick Craig-Wood)

         • cmd: Always print elapsed time to tenth place seconds in progress (Gary Kim)

         • config

           • Add --password-command to allow dynamic config password (Damon Permezel)

           • Give config questions default values (Nick Craig-Wood)

           • Check a remote exists when creating a new one (Nick Craig-Wood)

         • copyurl: Add --stdout flag to write to stdout (Nick Craig-Wood)

         • dedupe: Implement keep smallest too (Nick Craig-Wood)

         • hashsum: Add --base64 flag (landall)

         • lsf: Speed up on s3/swift/etc by not reading mimetype by default (Nick Craig-Wood)

         • lsjson: Add --no-mimetype flag (Nick Craig-Wood)

         • rc: Add methods to turn on blocking and mutex profiling (Nick Craig-Wood)

         • rcd

           • Adding group parameter to stats (Chaitanya)

           • Move webgui apart; option to disable browser (Xiaoxing Ye)

         • serve sftp: Add support for public key with auth proxy (Paul Tinsley)

         • stats: Show deletes in stats and hide zero stats (anuar45)

       • Bug Fixes

         • accounting

           • Fix error counter counting multiple times (Ankur Gupta)

           • Fix error count shown as checks (Cnly)

           • Clear finished transfer in stats-reset (Maciej Zimnoch)

           • Added StatsInfo locking in statsGroups sum function (Michał Matczuk)

         • asyncreader: Fix EOF error (buengese)

         • check: Fix --one-way recursing more directories than it needs to (Nick Craig-Wood)

         • chunkedreader: Disable hash calculation for first segment (Nick Craig-Wood)

         • config

           • Do not open browser on headless on drive/gcs/google photos (Xiaoxing Ye)

           • SetValueAndSave ignore error if config section does not exist yet (buengese)

         • cmd: Fix completion with an encrypted config (Danil Semelenov)

         • dbhashsum: Stop it returning UNSUPPORTED on dropbox (Nick Craig-Wood)

         • dedupe: Add missing modes to help string (Nick Craig-Wood)

         • operations

           • Fix dedupe continuing on errors like insufficientFilePersimmon (SezalAgrawal)

           • Clear accounting before low level retry (Maciej Zimnoch)

           • Write debug message when hashes could not be checked (Ole Schütt)

           • Move interface assertion to tests to remove pflag dependency (Nick Craig-Wood)

           • Make NewOverrideObjectInfo public and factor uses (Nick Craig-Wood)

         • proxy: Replace use of bcrypt with sha256 (Nick Craig-Wood)

         • vendor

           • Update bazil.org/fuse to fix FreeBSD 12.1 (Nick Craig-Wood)

           • Update github.com/t3rm1n4l/go-mega to fix mega  “illegal  base64  data  at  input  byte  22”  (Nick
             Craig-Wood)

           • Update termbox-go to fix ncdu command on FreeBSD (Kuang-che Wu)

           • Update  t3rm1n4l/go-mega  -  fixes  mega:  couldn’t  login:  crypto/aes:  invalid  key size 0 (Nick
             Craig-Wood)

       • Mount

         • Enable async reads for a 20% speedup (Nick Craig-Wood)

         • Replace use of WriteAt with Write for cache mode >= writes and O_APPEND (Brett Dutro)

         • Make sure we call unmount when exiting (Nick Craig-Wood)

         • Don’t build on go1.10 as bazil/fuse no longer supports it (Nick Craig-Wood)

         • When setting dates discard out of range dates (Nick Craig-Wood)

       • VFS

         • Add a newly created file straight into the directory (Nick Craig-Wood)

         • Only calculate one hash for reads for a speedup (Nick Craig-Wood)

         • Make ReadAt for non cached files work better with non-sequential reads (Nick Craig-Wood)

         • Fix edge cases when reading ModTime from file (Nick Craig-Wood)

         • Make sure existing files opened for write show correct size (Nick Craig-Wood)

         • Don’t cache the path in RW file objects to fix renaming (Nick Craig-Wood)

         • Fix rename of open files when using the VFS cache (Nick Craig-Wood)

         • When renaming files in the cache, rename the cache item in memory too (Nick Craig-Wood)

         • Fix open file renaming on drive when using --vfs-cache-mode writes (Nick Craig-Wood)

         • Fix incorrect modtime for mv into mount with --vfs-cache-mode writes (Nick Craig-Wood)

         • On rename, rename in cache too if the file exists (Anagh Kumar Baranwal)

       • Local

         • Make source file being updated errors be NoLowLevelRetry errors (Nick Craig-Wood)

         • Fix update of hidden files on Windows (Nick Craig-Wood)

       • Cache

         • Follow move of upstream library github.com/coreos/bbolt to github.com/etcd-io/bbolt (Nick Craig-Wood)

         • Fix fatal error: concurrent map writes (Nick Craig-Wood)

       • Crypt

         • Reorder the filename encryption options (Thomas Eales)

         • Correctly handle trailing dot (buengese)

       • Chunker

         • Reduce length of temporary suffix (Ivan Andreev)

       • Drive

         • Add --drive-stop-on-upload-limit flag to stop syncs when upload limit reached (Nick Craig-Wood)

         • Add --drive-use-shared-date to use date file was shared instead of modified date (Garry McNulty)

         • Make sure invalid auth for teamdrives always reports an error (Nick Craig-Wood)

         • Fix --fast-list when using appDataFolder (Nick Craig-Wood)

         • Use multipart resumable uploads for streaming and uploads in mount (Nick Craig-Wood)

         • Log an ERROR if an incomplete search is returned (Nick Craig-Wood)

         • Hide dangerous config from the configurator (Nick Craig-Wood)

       • Dropbox

         • Treat insufficient_space errors as non retriable errors (Nick Craig-Wood)

       • Jottacloud

         • Use new auth method used by official client (buengese)

         • Add URL to generate Login Token to config wizard (Nick Craig-Wood)

         • Add support for whitelabel versions (buengese)

       • Koofr

         • Use rclone HTTP client.  (jaKa)

       • Onedrive

         • Add Sites.Read.All permission (Benjamin Richter)

         • Add support for the “Retry-After” header (Motonori IWAMURO)

       • Opendrive

         • Implement --opendrive-chunk-size (Nick Craig-Wood)

       • S3

         • Re-implement multipart upload to fix memory issues (Nick Craig-Wood)

         • Add --s3-copy-cutoff for size to switch to multipart copy (Nick Craig-Wood)

         • Add new region Asia Pacific (Hong Kong) (Outvi V)

         • Reduce memory usage streaming files by reducing max stream upload size (Nick Craig-Wood)

         • Add --s3-list-chunk option for bucket listing (Thomas Kriechbaumer)

         • Force path style bucket access to off for AWS deprecation (Nick Craig-Wood)

         • Use AWS web identity role provider if available (Tennix)

         • Add StackPath Object Storage Support (Dave Koston)

         • Fix ExpiryWindow value (Aleksandar Jankovic)

         • Fix DisableChecksum condition (Aleksandar Janković)

         • Fix URL decoding of NextMarker (Nick Craig-Wood)

       • SFTP

         • Add --sftp-skip-links to skip symlinks and non regular files (Nick Craig-Wood)

         • Retry Creation of Connection (Sebastian Brandt)

         • Fix “failed to parse private key file: ssh: not an encrypted key” error (Nick Craig-Wood)

         • Open files for update write only to fix AWS SFTP interop (Nick Craig-Wood)

       • Swift

         • Reserve segments of dynamic large objects when deleting objects in a container that  has  versioning
           enabled (Nguyễn Hữu Luân)

         • Fix parsing of X-Object-Manifest (Nick Craig-Wood)

         • Update OVH API endpoint (unbelauscht)

       • WebDAV

         • Make nextcloud only upload SHA1 checksums (Nick Craig-Wood)

         • Fix case of “Bearer” in Authorization: header to agree with RFC (Nick Craig-Wood)

         • Add Referer header to fix problems with WAFs (Nick Craig-Wood)

   v1.50.2 - 2019-11-19
       • Bug Fixes

         • accounting: Fix memory leak when retrying operations (Nick Craig-Wood)

       • Drive

         • Fix listing of the root directory with drive.files scope (Nick Craig-Wood)

         • Fix --drive-root-folder-id with team/shared drives (Nick Craig-Wood)

   v1.50.1 - 2019-11-02
       • Bug Fixes

         • hash: Fix accidentally changed hash names for DropboxHash and CRC-32 (Nick Craig-Wood)

         • fshttp: Fix error reporting on tpslimit token bucket errors (Nick Craig-Wood)

         • fshttp: Don’t print token bucket errors on context cancelled (Nick Craig-Wood)

       • Local

         • Fix listings of .  on Windows (Nick Craig-Wood)

       • Onedrive

         • Fix DirMove/Move after Onedrive change (Xiaoxing Ye)

   v1.50.0 - 2019-10-26
       • New backends

         • Citrix Sharefile (Nick Craig-Wood)

         • Chunker - an overlay backend to split files into smaller parts (Ivan Andreev)

         • Mail.ru Cloud (Ivan Andreev)

       • New Features

         • encodings (Fabian Möller & Nick Craig-Wood)

           • All backends now use file name encoding to ensure any file name can be written to any backend.

           • See the restricted file name docs for more info and the local backend docs.

           • Some file names may look different in rclone if you are using any control characters  in  names  or
             unicode FULLWIDTH symbols (https://en.wikipedia.org/wiki/Halfwidth_and_Fullwidth_Forms_(Unicode_block)).

         • build

           • Update to use go1.13 for the build (Nick Craig-Wood)

           • Drop support for go1.9 (Nick Craig-Wood)

           • Build rclone with GitHub actions (Nick Craig-Wood)

           • Convert python scripts to python3 (Nick Craig-Wood)

           • Swap Azure/go-ansiterm for mattn/go-colorable (Nick Craig-Wood)

           • Dockerfile fixes (Matei David)

           • Add plugin support for backends and commands (Richard Patel); see
             https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#writing-a-plugin

         • config

           • Use alternating Red/Green in config to make it more obvious (Nick Craig-Wood)

         • contrib

           • Add sample DLNA server Docker Compose manifest.  (pataquets)

           • Add sample WebDAV server Docker Compose manifest.  (pataquets)

         • copyurl

           • Add --auto-filename flag for using file name from URL in destination path (Denis)
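
             For example (a sketch; the URL and remote are placeholders):

                 rclone copyurl --auto-filename https://example.com/files/report.pdf remote:backups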

         • serve dlna:

           • Many compatibility improvements (Dan Walters)

           • Support for external srt subtitles (Dan Walters)

         • rc

           • Added command core/quit (Saksham Khanna)

       • Bug Fixes

         • sync

           • Make --update/-u not transfer files that haven’t changed (Nick Craig-Wood)

           • Free objects after they come out of the transfer pipe to save memory (Nick Craig-Wood)

           • Fix --files-from without --no-traverse doing a recursive scan (Nick Craig-Wood)

         • operations

           • Fix accounting for server-side copies (Nick Craig-Wood)

           • Display “All duplicates removed” only if dedupe successful (Sezal Agrawal)

           • Display “Deleted X extra copies” only if dedupe successful (Sezal Agrawal)

         • accounting

           • Only allow up to 100 completed transfers in the accounting list to save memory (Nick Craig-Wood)

           • Cull the old time ranges when possible to save memory (Nick Craig-Wood)

           • Fix panic due to server-side copy fallback (Ivan Andreev)

           • Fix memory leak noticeable for transfers of large numbers of objects (Nick Craig-Wood)

           • Fix total duration calculation (Nick Craig-Wood)

         • cmd

           • Fix environment variables not setting command line flags (Nick Craig-Wood)

           • Make autocomplete compatible with bash’s posix mode for macOS (Danil Semelenov)

           • Make --progress work in git bash on Windows (Nick Craig-Wood)

           • Fix “compopt: command not found” on autocomplete on macOS (Danil Semelenov)

         • config

           • Fix setting of non top level flags from environment variables (Nick Craig-Wood)

           • Check config names more carefully and report errors (Nick Craig-Wood)

           • Remove error: can’t use --size-only and --ignore-size together.  (Nick Craig-Wood)

         • filter: Prevent mixing options when --files-from is in use (Michele Caci)

         • serve sftp: Fix crash on unsupported operations (e.g. Readlink) (Nick Craig-Wood)

       • Mount

         • Allow files of unknown size to be read properly (Nick Craig-Wood)

         • Skip tests on <= 2 CPUs to avoid lockup (Nick Craig-Wood)

         • Fix panic on File.Open (Nick Craig-Wood)

         • Fix “mount_fusefs: -o timeout=: option not supported” on FreeBSD (Nick Craig-Wood)

         • Don’t pass huge filenames (>4k) to FUSE as it can’t cope (Nick Craig-Wood)

       • VFS

         • Add flag --vfs-case-insensitive for windows/macOS mounts (Ivan Andreev)

         • Make objects of unknown size readable through the VFS (Nick Craig-Wood)

         • Move writeback of dirty data out of close() method into  its  own  method  (FlushWrites)  and  remove
           close() call from Flush() (Brett Dutro)

         • Stop empty dirs disappearing when renamed on bucket-based remotes (Nick Craig-Wood)

         • Stop change notify polling clearing so much of the directory cache (Nick Craig-Wood)

       • Azure Blob

         • Disable logging to the Windows event log (Nick Craig-Wood)

       • B2

         • Remove unverified: prefix on sha1 to improve interop (e.g. with CyberDuck) (Nick Craig-Wood)

       • Box

         • Add options to get access token via JWT auth (David)

       • Drive

         • Disable HTTP/2 by default to work around INTERNAL_ERROR problems (Nick Craig-Wood)

         • Make sure that drive root ID is always canonical (Nick Craig-Wood)

         • Fix --drive-shared-with-me from the root with ls and --fast-list (Nick Craig-Wood)

         • Fix ChangeNotify polling for shared drives (Nick Craig-Wood)

         • Fix change notify polling when using appDataFolder (Nick Craig-Wood)

       • Dropbox

         • Make disallowed filenames errors not retry (Nick Craig-Wood)

         • Fix nil pointer exception on restricted files (Nick Craig-Wood)

       • Fichier

         • Fix accessing files > 2GB on 32 bit systems (Nick Craig-Wood)

       • FTP

         • Allow disabling EPSV mode (Jon Fautley)

       • HTTP

         • HEAD directory entries in parallel for a speedup (Nick Craig-Wood)

         • Add --http-no-head to stop rclone doing HEAD in listings (Nick Craig-Wood)

       • Putio

         • Add ability to resume uploads (Cenk Alti)

       • S3

         • Fix signature v2_auth headers (Anthony Rusdi)

         • Fix encoding for control characters (Nick Craig-Wood)

         • Only ask for URL encoded directory listings if we need them on Ceph (Nick Craig-Wood)

         • Add option for multipart failure behaviour (Aleksandar Jankovic)

         • Support for multipart copy (庄天翼)

         • Fix nil pointer reference if no metadata returned for object (Nick Craig-Wood)

       • SFTP

         • Fix --sftp-ask-password trying to contact the ssh agent (Nick Craig-Wood)

         • Fix hashes of files with backslashes (Nick Craig-Wood)

         • Include more ciphers with --sftp-use-insecure-cipher (Carlos Ferreyra)

       • WebDAV

         • Parse and return Sharepoint error response (Henning Surmeier)

   v1.49.5 - 2019-10-05
       • Bug Fixes

         • Revert to go1.12.x for the v1.49.x builds as go1.13.x was causing issues (Nick Craig-Wood)

         • Fix rpm packages by using master builds of nfpm (Nick Craig-Wood)

         • Fix macOS build after brew changes (Nick Craig-Wood)

   v1.49.4 - 2019-09-29
       • Bug Fixes

         • cmd/rcd: Address ZipSlip vulnerability (Richard Patel)

         • accounting: Fix file handle leak on errors (Nick Craig-Wood)

         • oauthutil: Fix security problem when running with two users on the same machine (Nick Craig-Wood)

       • FTP

         • Fix listing of an empty root returning: error dir not found (Nick Craig-Wood)

       • S3

         • Fix SetModTime on GLACIER/ARCHIVE objects and implement set/get tier (Nick Craig-Wood)

   v1.49.3 - 2019-09-15
       • Bug Fixes

         • accounting

           • Fix total duration calculation (Aleksandar Jankovic)

           • Fix “file already closed” on transfer retries (Nick Craig-Wood)

   v1.49.2 - 2019-09-08
       • New Features

         • build: Add Docker workflow support (Alfonso Montero)

       • Bug Fixes

         • accounting: Fix locking in Transfer to avoid deadlock with --progress (Nick Craig-Wood)

         • docs: Fix template argument for mktemp in install.sh (Cnly)

         • operations: Fix -u/--update with google photos / files of unknown size (Nick Craig-Wood)

         • rc: Fix docs for config/create /update /password (Nick Craig-Wood)

       • Google Cloud Storage

         • Fix need for elevated permissions on SetModTime (Nick Craig-Wood)

   v1.49.1 - 2019-08-28
       • Bug Fixes

         • config: Fix generated passwords being stored as empty password (Nick Craig-Wood)

         • rcd: Added missing parameter for web-gui info logs.  (Chaitanya)

       • Googlephotos

         • Fix crash on error response (Nick Craig-Wood)

       • Onedrive

         • Fix crash on error response (Nick Craig-Wood)

   v1.49.0 - 2019-08-26
       • New backends

         • 1fichier (Laura Hausmann)

         • Google Photos (Nick Craig-Wood)

         • Putio (Cenk Alti)

         • premiumize.me (Nick Craig-Wood)

       • New Features

         • Experimental web GUI (Chaitanya Bankanhal)

         • Implement --compare-dest & --copy-dest (yparitcher)

         • Implement --suffix without --backup-dir for backup to current dir (yparitcher)

         • config reconnect to re-login (re-run the oauth login) for the backend.  (Nick Craig-Wood)

         • config userinfo to discover which user you are logged in as.  (Nick Craig-Wood)

         • config disconnect to disconnect you (log out) from the backend.  (Nick Craig-Wood)

         • Add --use-json-log for JSON logging (justinalin)

         • Add context propagation to rclone (Aleksandar Jankovic)

         • Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic)

         • Add Higher units for ETA (AbelThar)

         • Update rclone logos to new design (Andreas Chlupka)

         • hash: Add CRC-32 support (Cenk Alti)

         • help showbackend: Fixed advanced option category when there are no standard options (buengese)

         • ncdu: Display/Copy to Clipboard Current Path (Gary Kim)

         • operations:

           • Run hashing operations in parallel (Nick Craig-Wood)

           • Don’t calculate checksums when using --ignore-checksum (Nick Craig-Wood)

           • Check transfer hashes when using --size-only mode (Nick Craig-Wood)

           • Disable multi thread copy for local to local copies (Nick Craig-Wood)

           • Debug successful hashes as well as failures (Nick Craig-Wood)

         • rc

           • Add ability to stop async jobs (Aleksandar Jankovic)

           • Return current settings if core/bwlimit called without parameters (Nick Craig-Wood)

           • Rclone-WebUI integration with rclone (Chaitanya Bankanhal)

           • Added  command  line  parameter  to  control  the  cross origin resource sharing (CORS) in the rcd.
             (Security Improvement) (Chaitanya Bankanhal)

           • Add anchor tags to the docs so links are consistent (Nick Craig-Wood)

           • Remove _async key from input parameters after  parsing  so  later  operations  won’t  get  confused
             (buengese)

           • Add call to clear stats (Aleksandar Jankovic)

         • rcd

           • Auto-login for web-gui (Chaitanya Bankanhal)

           • Implement --baseurl for rcd and web-gui (Chaitanya Bankanhal)

         • serve dlna

           • Only select interfaces which can multicast for SSDP (Nick Craig-Wood)

           • Add more builtin mime types to cover standard audio/video (Nick Craig-Wood)

           • Fix missing mime types on Android causing missing videos (Nick Craig-Wood)

         • serve ftp

           • Refactor to bring into line with other serve commands (Nick Craig-Wood)

           • Implement --auth-proxy (Nick Craig-Wood)

         • serve http: Implement --baseurl (Nick Craig-Wood)

         • serve restic: Implement --baseurl (Nick Craig-Wood)

         • serve sftp

           • Implement auth proxy (Nick Craig-Wood)

           • Fix detection of whether server is authorized (Nick Craig-Wood)

         • serve webdav

           • Implement --baseurl (Nick Craig-Wood)

           • Support --auth-proxy (Nick Craig-Wood)

       • Bug Fixes

         • Make “bad record MAC” a retriable error (Nick Craig-Wood)

         • copyurl: Fix copying files that return HTTP errors (Nick Craig-Wood)

         • march: Fix checking sub-directories when using --no-traverse (buengese)

         • rc

           • Fix unmarshalable http.AuthFn in options and put in test for marshalability (Nick Craig-Wood)

           • Move job expire flags to rc to fix initialization problem (Nick Craig-Wood)

           • Fix --loopback with rc/list and others (Nick Craig-Wood)

         • rcat: Fix slowdown on systems with multiple hashes (Nick Craig-Wood)

         • rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood)

       • Mount

         • Default --daemon-timeout to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)

         • Update docs to show mounting from root OK for bucket-based (Nick Craig-Wood)

         • Remove nonseekable flag from write files (Nick Craig-Wood)

       • VFS

         • Make write without cache more efficient (Nick Craig-Wood)

         • Fix --vfs-cache-mode minimal and writes ignoring cached files (Nick Craig-Wood)

       • Local

         • Add --local-case-sensitive and --local-case-insensitive (Nick Craig-Wood)

         • Avoid polluting page cache when uploading local files to remote backends (Michał Matczuk)

         • Don’t calculate any hashes by default (Nick Craig-Wood)

         • Fadvise run syscall on a dedicated go routine (Michał Matczuk)

       • Azure Blob

         • Azure Storage Emulator support (Sandeep)

         • Updated config help details to remove connection string references (Sandeep)

         • Make all operations work from the root (Nick Craig-Wood)

       • B2

         • Implement link sharing (yparitcher)

         • Enable server-side copy to copy between buckets (Nick Craig-Wood)

         • Make all operations work from the root (Nick Craig-Wood)

       • Drive

         • Fix server-side copy of big files (Nick Craig-Wood)

         • Update API for teamdrive use (Nick Craig-Wood)

         • Add error for purge with --drive-trashed-only (ginvine)

       • Fichier

         • Make FolderID int and adjust related code (buengese)

       • Google Cloud Storage

         • Reduce oauth scope requested as suggested by Google (Nick Craig-Wood)

         • Make all operations work from the root (Nick Craig-Wood)

       • HTTP

         • Add --http-headers flag for setting arbitrary headers (Nick Craig-Wood)

       • Jottacloud

         • Use new api for retrieving internal username (buengese)

         • Refactor configuration and minor cleanup (buengese)

       • Koofr

         • Support setting modification times on Koofr backend.  (jaKa)

       • Opendrive

         • Refactor to use existing lib/rest facilities for uploads (Nick Craig-Wood)

       • Qingstor

         • Upgrade to v3 SDK and fix listing loop (Nick Craig-Wood)

         • Make all operations work from the root (Nick Craig-Wood)

       • S3

         • Add INTELLIGENT_TIERING storage class (Matti Niemenmaa)

         • Make all operations work from the root (Nick Craig-Wood)

       • SFTP

         • Add missing interface check and fix About (Nick Craig-Wood)

         • Completely ignore all modtime checks if SetModTime=false (Jon Fautley)

         • Support md5/sha1 with rsync.net (Nick Craig-Wood)

         • Save the md5/sha1 command in use to the config file for efficiency (Nick Craig-Wood)

         • Opt-in support for diffie-hellman-group-exchange-sha256 diffie-hellman-group-exchange-sha1 (Yi FU)

       • Swift

         • Use FixRangeOption to fix 0 length files via the VFS (Nick Craig-Wood)

         • Fix upload when using no_chunk to return the correct size (Nick Craig-Wood)

         • Make all operations work from the root (Nick Craig-Wood)

         • Fix segments leak during failed large file uploads.  (nguyenhuuluan434)

       • WebDAV

         • Add --webdav-bearer-token-command (Nick Craig-Wood)

         • Refresh token when it expires with --webdav-bearer-token-command (Nick Craig-Wood)

         • Add docs for using bearer_token_command with oidc-agent (Paul Millar)

   v1.48.0 - 2019-06-15
       • New commands

         • serve sftp: Serve an rclone remote over SFTP (Nick Craig-Wood)

       • New Features

         • Multi threaded downloads to local storage (Nick Craig-Wood)

           • controlled with --multi-thread-cutoff and --multi-thread-streams
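
             A hedged sketch (the values shown are illustrative; the defaults may differ):

                 rclone copy remote:large-file.iso /local/dir --multi-thread-cutoff 250M --multi-thread-streams 4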

         • Use rclone.conf from rclone executable directory to enable portable use (albertony)

         • Allow sync of a file and a directory with the same name (forgems)

           • this is common on bucket-based remotes, e.g. s3, gcs

         • Add --ignore-case-sync for forced case insensitivity (garry415)

         • Implement --stats-one-line-date and --stats-one-line-date-format (Peter Berbec)

         • Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood)

         • Use go-homedir to read the home directory more reliably (Nick Craig-Wood)

         • Enable creating encrypted config through external script invocation (Wojciech Smigielski)

         • build: Drop support for go1.8 (Nick Craig-Wood)

         • config: Make config create/update encrypt passwords where necessary (Nick Craig-Wood)

         • copyurl: Honor --no-check-certificate (Stefan Breunig)

         • install: Linux skip man pages if no mandb (didil)

         • lsf: Support showing the Tier of the object (Nick Craig-Wood)

         • lsjson

           • Added EncryptedPath to output (calisro)

           • Support showing the Tier of the object (Nick Craig-Wood)

           • Add IsBucket field for bucket-based remote listing of the root (Nick Craig-Wood)

         • rc

           • Add --loopback flag to run commands directly without a server (Nick Craig-Wood)

           • Add operations/fsinfo: Return information about the remote (Nick Craig-Wood)

           • Skip auth for OPTIONS request (Nick Craig-Wood)

           • cmd/providers: Add DefaultStr, ValueStr and Type fields (Nick Craig-Wood)

           • jobs: Make job expiry timeouts configurable (Aleksandar Jankovic)
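
              For example, the new --loopback flag can exercise operations/fsinfo without a running
              rc server (the remote name is a placeholder):

                  rclone rc --loopback operations/fsinfo fs=remote: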

         • serve dlna reworked and improved (Dan Walters)

         • serve ftp: add --ftp-public-ip flag to specify public IP (calistri)

         • serve restic: Add support for --private-repos in serve restic (Florian Apolloner)

         • serve webdav: Combine serve webdav and serve http (Gary Kim)

         • size: Ignore negative sizes when calculating total (Garry McNulty)

       • Bug Fixes

         • Make move and copy individual files obey --backup-dir (Nick Craig-Wood)

         • If --ignore-checksum is in effect, don’t calculate checksum (Nick Craig-Wood)

         • moveto: Fix case-insensitive same remote move (Gary Kim)

         • rc: Fix serving bucket-based objects with --rc-serve (Nick Craig-Wood)

         • serve webdav: Fix serveDir not being updated with changes from webdav (Gary Kim)

       • Mount

         • Fix poll interval documentation (Animosity022)

       • VFS

         • Make WriteAt for non cached files work with non-sequential writes (Nick Craig-Wood)

       • Local

         • Only calculate the required hashes for big speedup (Nick Craig-Wood)

         • Log errors when listing instead of returning an error (Nick Craig-Wood)

         • Fix preallocate warning on Linux with ZFS (Nick Craig-Wood)

       • Crypt

         • Make rclone dedupe work through crypt (Nick Craig-Wood)

         • Fix wrapping of ChangeNotify to decrypt directories properly (Nick Craig-Wood)

         • Support PublicLink (rclone link) of underlying backend (Nick Craig-Wood)

         • Implement Optional methods SetTier, GetTier (Nick Craig-Wood)

       • B2

         • Implement server-side copy (Nick Craig-Wood)

         • Implement SetModTime (Nick Craig-Wood)

       • Drive

         • Fix move and copy from TeamDrive to GDrive (Fionera)

         • Add notes that cleanup works in the background on drive (Nick Craig-Wood)

         • Add --drive-server-side-across-configs to restore the old server-side copy semantics (Nick
           Craig-Wood)

         • Add --drive-size-as-quota to show storage quota usage for file size (Garry McNulty)

       • FTP

         • Add FTP List timeout (Jeff Quinn)

         • Add FTP over TLS support (Gary Kim)

         • Add --ftp-no-check-certificate option for FTPS (Gary Kim)

       • Google Cloud Storage

         • Fix upload errors when uploading pre-1970 files (Nick Craig-Wood)

       • Jottacloud

         • Add support for selecting device and mountpoint.  (buengese)

       • Mega

         • Add cleanup support (Gary Kim)

       • Onedrive

         • More accurately check if root is found (Cnly)

       • S3

         • Support S3 Accelerated endpoints with --s3-use-accelerate-endpoint (Nick Craig-Wood)

         • Add config info for Wasabi’s EU Central endpoint (Robert Marko)

         • Make SetModTime work for GLACIER while syncing (Philip Harvey)

       • SFTP

         • Add About support (Gary Kim)

         • Fix about parsing of df results so it can cope with negative results (Nick Craig-Wood)

         • Send custom client version and debug server version (Nick Craig-Wood)

       • WebDAV

         • Retry on 423 Locked errors (Nick Craig-Wood)

   v1.47.0 - 2019-04-13
       • New backends

         • Backend for Koofr cloud storage service.  (jaKa)

       • New Features

         • Resume downloads if the reader fails in copy (Nick Craig-Wood)

           • this means rclone will restart transfers if the source has an error

           • this is most useful for downloads or cloud to cloud copies

         • Use --fast-list for listing operations where it won’t use more memory (Nick Craig-Wood)

           • this should speed up the following operations on remotes which support ListR

            • dedupe, serve restic, lsf, ls, lsl, lsjson, lsd, md5sum, sha1sum, hashsum, size, delete,
              cat, settier

           • use --disable ListR to get old behaviour if required
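
              For example, to force the old non-recursive listing behaviour for a single run:

                  rclone size remote:path --disable ListR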

         • Make --files-from traverse the destination unless --no-traverse is set (Nick Craig-Wood)

           • this fixes --files-from with Google drive and excessive API use in general.
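
              For example, to copy only the files named in a list without scanning the destination
              (files.txt is a placeholder):

                  rclone copy --files-from files.txt --no-traverse /local/path remote:backup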

         • Make server-side copy account bytes and obey --max-transfer (Nick Craig-Wood)

         • Add --create-empty-src-dirs flag and default to not creating empty dirs (ishuah)

         • Add client side TLS/SSL flags --ca-cert/--client-cert/--client-key (Nick Craig-Wood)

         • Implement --suffix-keep-extension for use with --suffix (Nick Craig-Wood)

         • build:

           • Switch to semver compliant version tags to be go modules compliant (Nick Craig-Wood)

           • Update to use go1.12.x for the build (Nick Craig-Wood)

         • serve dlna: Add connection manager service description to improve compatibility (Dan Walters)

         • lsf: Add `e' format to show encrypted names and `o' for original IDs (Nick Craig-Wood)

         • lsjson: Added --files-only and --dirs-only flags (calistri)

         • rc: Implement operations/publiclink the equivalent of rclone link (Nick Craig-Wood)

       • Bug Fixes

         • accounting: Fix total ETA when --stats-unit bits is in effect (Nick Craig-Wood)

         • Bash TAB completion

           • Use private custom func to fix clash between rclone and kubectl (Nick Craig-Wood)

           • Fix for remotes with underscores in their names (Six)

           • Fix completion of remotes (Florian Gamböck)

           • Fix autocompletion of remote paths with spaces (Danil Semelenov)

         • serve dlna: Fix root XML service descriptor (Dan Walters)

         • ncdu: Fix display corruption with Chinese characters (Nick Craig-Wood)

         • Add SIGTERM to signals which run the exit handlers on unix (Nick Craig-Wood)

         • rc: Reload filter when the options are set via the rc (Nick Craig-Wood)

       • VFS / Mount

         • Fix FreeBSD: Ignore Truncate if called with no readers and already the correct size (Nick Craig-Wood)

         • Read directory and check for a file before mkdir (Nick Craig-Wood)

         • Shorten the locking window for vfs/refresh (Nick Craig-Wood)

       • Azure Blob

         • Enable MD5 checksums when uploading files bigger than the “Cutoff” (Dr.Rx)

         • Fix SAS URL support (Nick Craig-Wood)

       • B2

         • Allow manual configuration of backblaze downloadUrl (Vince)

         • Ignore already_hidden error on remove (Nick Craig-Wood)

         • Ignore malformed src_last_modified_millis (Nick Craig-Wood)

       • Drive

         • Add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos (Nick Craig-Wood)

         • Allow server-side move/copy between different remotes.  (Fionera)

         • Add docs on team drives and --fast-list eventual consistency (Nestar47)

         • Fix imports of text files (Nick Craig-Wood)

         • Fix range requests on 0 length files (Nick Craig-Wood)

         • Fix creation of duplicates with server-side copy (Nick Craig-Wood)

       • Dropbox

         • Retry blank errors to fix long listings (Nick Craig-Wood)

       • FTP

         • Add --ftp-concurrency to limit maximum number of connections (Nick Craig-Wood)

       • Google Cloud Storage

         • Fall back to default application credentials (marcintustin)

         • Allow bucket policy only buckets (Nick Craig-Wood)

       • HTTP

         • Add --http-no-slash for websites with directories with no slashes (Nick Craig-Wood)

         • Remove duplicates from listings (Nick Craig-Wood)

         • Fix socket leak on 404 errors (Nick Craig-Wood)

       • Jottacloud

         • Fix token refresh (Sebastian Bünger)

         • Add device registration (Oliver Heyme)

       • Onedrive

         • Implement graceful cancel of multipart uploads if rclone is interrupted (Cnly)

         • Always add trailing colon to path when addressing items (Cnly)

         • Return errors instead of panic for invalid uploads (Fabian Möller)

       • S3

         • Add support for “Glacier Deep Archive” storage class (Manu)

         • Update Dreamhost endpoint (Nick Craig-Wood)

         • Note incompatibility with CEPH Jewel (Nick Craig-Wood)

       • SFTP

         • Allow custom ssh client config (Alexandru Bumbacea)

       • Swift

         • Obey Retry-After to enable OVH restore from cold storage (Nick Craig-Wood)

         • Work around token expiry on CEPH (Nick Craig-Wood)

       • WebDAV

         • Allow IsCollection property to be integer or boolean (Nick Craig-Wood)

         • Fix race when creating directories (Nick Craig-Wood)

         • Fix About/df when reading the available/total returns 0 (Nick Craig-Wood)

   v1.46 - 2019-02-09
       • New backends

         • Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood)

       • New commands

         • serve dlna: serves a remote via DLNA for the local network (nicolov)

       • New Features

         • copy, move: Restore deprecated --no-traverse flag (Nick Craig-Wood)

           • This is useful for when transferring a small number of files into a large destination

         • genautocomplete: Add remote path completion for bash completion (Christopher Peterson &
           Danil Semelenov)

         • Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood)

           • Buffer recycling library to replace sync.Pool

           • Optionally use memory mapped memory for better memory shrinking

           • Enable with --use-mmap if having memory problems - not default yet
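
              For example, to opt in to memory mapped buffers on a memory constrained machine:

                  rclone sync src:path dst:path --use-mmap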

         • Parallelise reading of files specified by --files-from (Nick Craig-Wood)

         • check: Add stats showing total files matched.  (Dario Guzik)

         • Allow rename/delete open files under Windows (Nick Craig-Wood)

         • lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood)

         • Add cookie support with cmdline switch --use-cookies for all HTTP based remotes (qip)

         • Warn if --checksum is set but there are no hashes available (Nick Craig-Wood)

         • Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood)

         • Improve error reporting for too many/few arguments in commands (Nick Craig-Wood)

         • listremotes: Remove -l short flag as it conflicts with the new global flag (weetmuts)

         • Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood)

       • Bug Fixes

         • Fix layout of stats (Nick Craig-Wood)

         • Fix --progress crash under Windows Jenkins (Nick Craig-Wood)

         • Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly)

         • copyurl: Fix checking of --dry-run (Denis Skovpen)

       • Mount

         • Check that mountpoint and local directory to mount don’t overlap (Nick Craig-Wood)

         • Fix mount size under 32 bit Windows (Nick Craig-Wood)

       • VFS

         • Implement renaming of directories for backends without DirMove (Nick Craig-Wood)

           • now all backends except b2 support renaming directories

         • Implement --vfs-cache-max-size to limit the total size of the cache (Nick Craig-Wood)

         • Add --dir-perms and --file-perms flags to set default permissions (Nick Craig-Wood)

         • Fix deadlock on concurrent operations on a directory (Nick Craig-Wood)

         • Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood)

         • Fix renaming/deleting open files with cache mode “writes” under Windows (Nick Craig-Wood)

         • Fix panic on rename with --dry-run set (Nick Craig-Wood)

         • Fix vfs/refresh with recurse=true needing the --fast-list flag

       • Local

         • Add support for -l/--links (symbolic link translation) (yair@unicorn)

           • this works by showing links as link.rclonelink - see local backend docs for more info

           • this errors if used with -L/--copy-links
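
              For example, to back up a directory while storing symlinks as link.rclonelink files
              (paths are placeholders):

                  rclone copy -l /home/user/dir remote:backup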

         • Fix renaming/deleting open files on Windows (Nick Craig-Wood)

       • Crypt

         • Check for maximum length before decrypting filename to fix panic (Garry McNulty)

       • Azure Blob

         • Allow building azureblob backend on *BSD (themylogin)

         • Use the rclone HTTP client to support --dump headers, --tpslimit, etc.  (Nick Craig-Wood)

         • Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood)

         • Ignore directory markers (Nick Craig-Wood)

         • Stop Mkdir attempting to create existing containers (Nick Craig-Wood)

       • B2

         • cleanup: will remove unfinished large files >24hrs old (Garry McNulty)

         • For a bucket limited application key check the bucket name (Nick Craig-Wood)

           • before this, rclone would use the authorised bucket regardless of what you put on the command line

         • Added --b2-disable-checksum flag (Wojciech Smigielski)

           • this enables large files to be uploaded without a SHA-1 hash for speed reasons

       • Drive

         • Set default pacer to 100ms for 10 tps (Nick Craig-Wood)

           • This fits the Google defaults much better and reduces the 403 errors massively

           • Add --drive-pacer-min-sleep and --drive-pacer-burst to control the pacer
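
              For example, the pacer might be slowed down further for a heavily rate-limited
              project (the values shown are illustrative):

                  rclone copy src:path drive:path --drive-pacer-min-sleep 200ms --drive-pacer-burst 50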

         • Improve ChangeNotify support for items with multiple parents (Fabian Möller)

         • Fix ListR for items with multiple parents - this fixes oddities with vfs/refresh (Fabian Möller)

         • Fix using --drive-impersonate and appfolders (Nick Craig-Wood)

         • Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood)

       • Dropbox

         • Retry-After support for Dropbox backend (Mathieu Carbou)

       • FTP

         • Wait for 60 seconds for a connection to Close then declare it dead (Nick Craig-Wood)

           • helps with indefinite hangs on some FTP servers

       • Google Cloud Storage

         • Update google cloud storage endpoints (weetmuts)

       • HTTP

         • Add an example with username and password which is supported but wasn’t documented (Nick Craig-Wood)

         • Fix backend with --files-from and nonexistent files (Nick Craig-Wood)

       • Hubic

         • Make error message more informative if authentication fails (Nick Craig-Wood)

       • Jottacloud

         • Resume and deduplication support (Oliver Heyme)

         • Use token auth for all API requests; don’t store password anymore (Sebastian Bünger)

         • Add support for 2-factor authentication (Sebastian Bünger)

       • Mega

         • Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood)

         • Return an error when attempting to upload a file of unknown length (Nick Craig-Wood)

         • Add new error codes for better error reporting (Nick Craig-Wood)

       • Onedrive

         • Fix broken support for “shared with me” folders (Alex Chen)

         • Fix root ID not normalised (Cnly)

         • Return err instead of panic on unknown-sized uploads (Cnly)

       • Qingstor

         • Fix go routine leak on multipart upload errors (Nick Craig-Wood)

         • Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood)

         • Default --qingstor-upload-concurrency to 1 to work around bug (Nick Craig-Wood)

       • S3

         • Implement --s3-upload-cutoff for single part uploads below this (Nick Craig-Wood)

         • Change --s3-upload-concurrency default to 4 to increase performance (Nick Craig-Wood)

         • Add --s3-bucket-acl to control bucket ACL (Nick Craig-Wood)

         • Auto detect region for buckets on operation failure (Nick Craig-Wood)

         • Add GLACIER storage class (William Cocker)

         • Add Scaleway to s3 documentation (Rémy Léone)

         • Add AWS endpoint eu-north-1 (weetmuts)

       • SFTP

         • Add support for PEM encrypted private keys (Fabian Möller)

         • Add option to force the usage of an ssh-agent (Fabian Möller)

         • Perform environment variable expansion on key-file (Fabian Möller)

         • Fix rmdir on Windows based servers (e.g. CrushFTP) (Nick Craig-Wood)

         • Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood)

         • Fix error on dangling symlinks (Nick Craig-Wood)

       • Swift

         • Add --swift-no-chunk to disable segmented uploads in rcat/mount (Nick Craig-Wood)

         • Introduce application credential auth support (kayrus)

         • Fix memory usage by slimming Object (Nick Craig-Wood)

         • Fix extra requests on upload (Nick Craig-Wood)

         • Fix reauth on big files (Nick Craig-Wood)

       • Union

         • Fix poll-interval not working (Nick Craig-Wood)

       • WebDAV

         • Support About which means rclone mount will show the correct disk size (Nick Craig-Wood)

         • Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood)

         • Fail soft on time parsing errors (Nick Craig-Wood)

         • Fix infinite loop on failed directory creation (Nick Craig-Wood)

         • Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood)

         • Fix upload of 0 length files on some servers (Nick Craig-Wood)

         • Fix if MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood)

   v1.45 - 2018-11-24
       • New backends

         • The Yandex backend was re-written - see below for details (Sebastian Bünger)

       • New commands

         • rcd: New command just to serve the remote control API (Nick Craig-Wood)

       • New Features

         • The remote control API (rc) was greatly expanded to allow full control over rclone (Nick Craig-Wood)

           • sensitive operations require authorization or the --rc-no-auth flag

           • config/* operations to configure rclone

           • options/* for reading/setting command line flags

           • operations/* for all low level operations, e.g. copy file, list directory

           • sync/* for sync, copy and move

           • --rc-files flag to serve files on the rc http server

             • this is for building web native GUIs for rclone

           • Optionally serving objects on the rc http server

           • Ensure rclone fails to start up if the --rc port is in use already

           • See the rc docs for more info
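
              For example, an rc server with authentication serving a web GUI might be started like
              this (user, password and path are placeholders):

                  rclone rcd --rc-user me --rc-pass secret --rc-files /path/to/gui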

         • sync/copy/move

           • Make --files-from only read the objects specified and don’t scan directories (Nick Craig-Wood)

             • This is a huge speed improvement for destinations with lots of files

         • filter: Add --ignore-case flag (Nick Craig-Wood)

         • ncdu: Add remove function (`d' key) (Henning Surmeier)

         • rc command

           • Add --json flag for structured JSON input (Nick Craig-Wood)

           • Add --user and --pass flags and interpret --rc-user, --rc-pass, --rc-addr (Nick Craig-Wood)
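
              For example, to change the bandwidth limit of a running rclone via the rc (the
              credentials and rate are illustrative):

                  rclone rc --user me --pass secret --json '{"rate": "1M"}' core/bwlimit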

         • build

           • Require go1.8 or later for compilation (Nick Craig-Wood)

           • Enable softfloat on MIPS arch (Scott Edlund)

           • Integration test framework revamped with a better report and better retries (Nick Craig-Wood)

       • Bug Fixes

         • cmd: Make --progress update the stats correctly at the end (Nick Craig-Wood)

         • config: Create config directory on save if it is missing (Nick Craig-Wood)

         • dedupe: Check for existing filename before renaming a dupe file (ssaqua)

         • move: Don’t create directories with --dry-run (Nick Craig-Wood)

         • operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood)

         • serve http/webdav/restic: Ensure rclone exits if the port is in use (Nick Craig-Wood)

       • Mount

         • Make --volname work for Windows and macOS (Nick Craig-Wood)

       • Azure Blob

         • Avoid context deadline exceeded error by setting a large TryTimeout value (brused27)

         • Fix erroneous Rmdir error “directory not empty” (Nick Craig-Wood)

         • Wait for up to 60s to create a just deleted container (Nick Craig-Wood)

       • Dropbox

         • Add dropbox impersonate support (Jake Coggiano)

       • Jottacloud

         • Fix bug in --fast-list handing of empty folders (albertony)

       • Opendrive

         • Fix transfer of files with + and & in (Nick Craig-Wood)

         • Fix retries of upload chunks (Nick Craig-Wood)

       • S3

         • Set ACL for server-side copies to that provided by the user (Nick Craig-Wood)

         • Fix role_arn, credential_source, ...  (Erik Swanson)

         • Add config info for Wasabi’s US-West endpoint (Henry Ptasinski)

       • SFTP

         • Ensure file hash checking is really disabled (Jon Fautley)

       • Swift

         • Add pacer for retries to make swift more reliable (Nick Craig-Wood)

       • WebDAV

         • Add Content-Type to PUT requests (Nick Craig-Wood)

         • Fix config parsing so --webdav-user and --webdav-pass flags work (Nick Craig-Wood)

         • Add RFC3339 date format (Ralf Hemberger)

       • Yandex

         • The yandex backend was re-written (Sebastian Bünger)

           • This implements low level retries (Sebastian Bünger)

           • Copy, Move, DirMove, PublicLink and About optional interfaces (Sebastian Bünger)

           • Improved general error handling (Sebastian Bünger)

           • Removed ListR for now due to inconsistent behaviour (Sebastian Bünger)

   v1.44 - 2018-10-15
       • New commands

         • serve ftp: Add ftp server (Antoine GIRARD)

         • settier: perform storage tier changes on supported remotes (sandeepkru)

       • New Features

         • Reworked command line help

           • Make default help less verbose (Nick Craig-Wood)

           • Split flags up into global and backend flags (Nick Craig-Wood)

           • Implement specialised help for flags and backends (Nick Craig-Wood)

           • Show URL of backend help page when starting config (Nick Craig-Wood)

         • stats: Long names now split in center (Joanna Marek)

         • Add --log-format flag for more control over log output (dcpu)

         • rc: Add support for OPTIONS and basic CORS (frenos)

         • stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)

       • Bug Fixes

         • Fix -P not ending with a new line (Nick Craig-Wood)

         • config: don’t create default config dir when user supplies --config (albertony)

         • Don’t print non-ASCII characters with --progress on Windows (Nick Craig-Wood)

         • Correct logs for excluded items (ssaqua)

       • Mount

         • Remove EXPERIMENTAL tags (Nick Craig-Wood)

       • VFS

         • Fix race condition detected by serve ftp tests (Nick Craig-Wood)

         • Add vfs/poll-interval rc command (Fabian Möller)

         • Enable rename for nearly all remotes using server-side Move or Copy (Nick Craig-Wood)

         • Reduce directory cache cleared by poll-interval (Fabian Möller)

         • Remove EXPERIMENTAL tags (Nick Craig-Wood)

       • Local

         • Skip bad symlinks in dir listing with -L enabled (Cédric Connes)

         • Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood)

         • Preallocate files on linux with fallocate(2) (Nick Craig-Wood)

       • Cache

         • Add cache/fetch rc function (Fabian Möller)

         • Fix worker scale down (Fabian Möller)

         • Improve performance by not sending info requests for cached chunks (dcpu)

         • Fix error return value of cache/fetch rc method (Fabian Möller)

         • Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal)

         • Preserve leading / in wrapped remote path (Fabian Möller)

         • Add plex_insecure option to skip certificate validation (Fabian Möller)

         • Remove entries that no longer exist in the source (dcpu)

       • Crypt

         • Preserve leading / in wrapped remote path (Fabian Möller)

       • Alias

         • Fix handling of Windows network paths (Nick Craig-Wood)

       • Azure Blob

         • Add --azureblob-list-chunk parameter (Santiago Rodríguez)

         • Implemented settier command support on azureblob remote.  (sandeepkru)

         • Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood)

       • Box

         • Implement link sharing.  (Sebastian Bünger)

       • Drive

         • Add --drive-import-formats - google docs can now be imported (Fabian Möller)

           • Rewrite mime type and extension handling (Fabian Möller)

           • Add document links (Fabian Möller)

           • Add support for multipart document extensions (Fabian Möller)

           • Add support for apps-script to json export (Fabian Möller)

           • Fix escaped chars in documents during list (Fabian Möller)

         • Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)

         • Improve directory notifications in ChangeNotify (Fabian Möller)

         • When listing team drives in config, continue on failure (Nick Craig-Wood)

       • FTP

         • Add a small pause after failed upload before deleting file (Nick Craig-Wood)

       • Google Cloud Storage

         • Fix service_account_file being ignored (Fabian Möller)

       • Jottacloud

         • Minor improvement in quota info (omit if unlimited) (albertony)

         • Add --fast-list support (albertony)

         • Add permanent delete support: --jottacloud-hard-delete (albertony)

         • Add link sharing support (albertony)

         • Fix handling of reserved characters.  (Sebastian Bünger)

         • Fix socket leak on Object.Remove (Nick Craig-Wood)

       • Onedrive

         • Rework to support Microsoft Graph (Cnly)

           • NB this will require re-authenticating the remote

         • Removed upload cutoff and always do session uploads (Oliver Heyme)

         • Use single-part upload for empty files (Cnly)

         • Fix new fields not saved when editing old config (Alex Chen)

         • Fix sometimes special chars in filenames not replaced (Alex Chen)

         • Ignore OneNote files by default (Alex Chen)

         • Add link sharing support (jackyzy823)

       • S3

         • Use custom pacer, to retry operations when reasonable (Craig Miskell)

         • Use configured server-side-encryption and storage class options when calling CopyObject()
           (Paul Kohout)

         • Add --s3-v2-auth flag (Nick Craig-Wood)

         • Fix v2 auth on files with spaces (Nick Craig-Wood)

       • Union

         • Implement union backend which reads from multiple backends (Felix Brucker)

         • Implement optional interfaces (Move, DirMove, Copy, etc.)  (Nick Craig-Wood)

         • Fix ChangeNotify to support multiple remotes (Fabian Möller)

         • Fix --backup-dir on union backend (Nick Craig-Wood)

       • WebDAV

         • Add another time format (Nick Craig-Wood)

         • Add a small pause after failed upload before deleting file (Nick Craig-Wood)

         • Add workaround for missing mtime (buergi)

         • Sharepoint: Renew cookies after 12hrs (Henning Surmeier)

       • Yandex

         • Remove redundant nil checks (teresy)

   v1.43.1 - 2018-09-07
       Point release to fix hubic and azureblob backends.

       • Bug Fixes

         • ncdu: Return error instead of log.Fatal in Show (Fabian Möller)

         • cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)

         • docs: Tidy website display (Anagh Kumar Baranwal)

       • Azure Blob

         • Fix multi-part uploads.  (sandeepkru)

       • Hubic

         • Fix uploads (Nick Craig-Wood)

         • Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood)

   v1.43 - 2018-09-01
       • New backends

         • Jottacloud (Sebastian Bünger)

       • New commands

         • copyurl: copies a URL to a remote (Denis)

       • New Features

         • Reworked config for backends (Nick Craig-Wood)

           • All backend config can now be supplied by command line, env var or config file

           • Advanced section in the config wizard for the optional items

           • A large step towards rclone backends being usable in other go software

           • Allow on the fly remotes with :backend: syntax

         • Stats revamp

           • Add --progress/-P flag to show interactive progress (Nick Craig-Wood)

           • Show the total progress of the sync in the stats (Nick Craig-Wood)

           • Add --stats-one-line flag for single line stats (Nick Craig-Wood)

         • Added weekday schedule into --bwlimit (Mateusz)
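
            For example, a weekly schedule limiting bandwidth during working hours (days, times and
            rates are illustrative):

                rclone sync src:path dst:path --bwlimit "Mon-09:00,512k Mon-18:00,10M Sat-00:00,off"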

         • lsjson: Add option to show the original object IDs (Fabian Möller)

         • serve webdav: Set Content-Type without reading the file and add --etag-hash (Nick Craig-Wood)

         • build

           • Build macOS with native compiler (Nick Craig-Wood)

           • Update to use go1.11 for the build (Nick Craig-Wood)

         • rc

           • Added core/stats to return the stats (reddi1)

         • version --check: Prints the current release and beta versions (Nick Craig-Wood)

       • Bug Fixes

         • accounting

           • Fix time to completion estimates (Nick Craig-Wood)

           • Fix moving average speed for file stats (Nick Craig-Wood)

         • config: Fix error reading password from piped input (Nick Craig-Wood)

         • move: Fix --delete-empty-src-dirs flag to delete all empty dirs on move (ishuah)

       • Mount

         • Implement --daemon-timeout flag for OSXFUSE (Nick Craig-Wood)

         • Fix mount --daemon not working with encrypted config (Alex Chen)

         • Clip the number of blocks to 2^32-1 on macOS - fixes borg backup (Nick Craig-Wood)

       • VFS

         • Enable vfs-read-chunk-size by default (Fabian Möller)

         • Add the vfs/refresh rc command (Fabian Möller)

         • Add non recursive mode to vfs/refresh rc command (Fabian Möller)

         • Try to seek buffer on read only files (Fabian Möller)

       • Local

         • Fix crash when deprecated --local-no-unicode-normalization is supplied (Nick Craig-Wood)

         • Fix mkdir error when trying to copy files to the root of a drive on windows (Nick Craig-Wood)

       • Cache

         • Fix nil pointer deref when using lsjson on cached directory (Nick Craig-Wood)

         • Fix nil pointer deref for occasional crash on playback (Nick Craig-Wood)

       • Crypt

         • Fix accounting when checking hashes on upload (Nick Craig-Wood)

       • Amazon Cloud Drive

         • Make very clear in the docs that rclone has no ACD keys (Nick Craig-Wood)

       • Azure Blob

         • Add connection string and SAS URL auth (Nick Craig-Wood)

         • List the container to see if it exists (Nick Craig-Wood)

         • Port new Azure Blob Storage SDK (sandeepkru)

         • Add blob tier support: set tier to Hot, Cool or Archive (sandeepkru)

         • Remove leading / from paths (Nick Craig-Wood)

       • B2

         • Support Application Keys (Nick Craig-Wood)

         • Remove leading / from paths (Nick Craig-Wood)

       • Box

         • Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood)

         • Add --box-commit-retries flag, defaulting to 100, to fix large uploads (Nick Craig-Wood)

       • Drive

         • Add --drive-keep-revision-forever flag (lewapm)

         • Handle gdocs when filtering file names in list (Fabian Möller)

         • Support using --fast-list for large speedups (Fabian Möller)

       • FTP

         • Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood)

       • Google Cloud Storage

         • Fix index out of range error with --fast-list (Nick Craig-Wood)

       • Jottacloud

         • Fix MD5 error check (Oliver Heyme)

         • Handle empty time values (Martin Polden)

         • Calculate missing MD5s (Oliver Heyme)

         • Docs, fixes and tests for MD5 calculation (Nick Craig-Wood)

         • Add optional MimeTyper interface.  (Sebastian Bünger)

         • Implement optional About interface (for df support).  (Sebastian Bünger)

       • Mega

         • Wait for events instead of arbitrary sleeping (Nick Craig-Wood)

         • Add --mega-hard-delete flag (Nick Craig-Wood)

         • Fix failed logins with upper case chars in email (Nick Craig-Wood)

       • Onedrive

         • Shared folder support (Yoni Jah)

         • Implement DirMove (Cnly)

         • Fix rmdir sometimes deleting directories with contents (Nick Craig-Wood)

       • Pcloud

         • Delete half uploaded files on upload error (Nick Craig-Wood)

       • Qingstor

         • Remove leading / from paths (Nick Craig-Wood)

       • S3

         • Fix index out of range error with --fast-list (Nick Craig-Wood)

         • Add --s3-force-path-style (Nick Craig-Wood)

         • Add support for KMS Key ID (bsteiss)

         • Remove leading / from paths (Nick Craig-Wood)

       • Swift

         • Add storage_policy (Ruben Vandamme)

         • Make it so just storage_url or auth_token can be overridden (Nick Craig-Wood)

         • Fix server-side copy bug for unusual file names (Nick Craig-Wood)

         • Remove leading / from paths (Nick Craig-Wood)

       • WebDAV

         • Ensure we call MKCOL with a URL with a trailing / for QNAP interop (Nick Craig-Wood)

         • If root ends with / then don’t check if it is a file (Nick Craig-Wood)

         • Don’t accept redirects when reading metadata (Nick Craig-Wood)

         • Add bearer token (Macaroon) support for dCache (Nick Craig-Wood)

         • Document dCache and Macaroons (Onno Zweers)

         • Sharepoint recursion with different depth (Henning)

         • Attempt to remove failed uploads (Nick Craig-Wood)

       • Yandex

         • Fix listing/deleting files in the root (Nick Craig-Wood)

   v1.42 - 2018-06-16
       • New backends

         • OpenDrive (Oliver Heyme, Jakub Karlicek, ncw)

       • New commands

         • deletefile command (Filip Bartodziej)

       • New Features

         • copy, move: Copy single files directly, don’t use --files-from work-around

           • this makes them much more efficient

         • Implement --max-transfer flag to quit transferring at a limit

           • make exit code 8 for --max-transfer exceeded

         • copy: copy empty source directories to destination (Ishuah Kariuki)

         • check: Add --one-way flag (Kasper Byrdal Nielsen)

         • Add siginfo handler for macOS for ctrl-T stats (kubatasiemski)

         • rc

           • add core/gc to run a garbage collection on demand

           • enable go profiling by default on the --rc port

           • return error from remote on failure

         • lsf

           • Add --absolute flag to add a leading / onto path names

           • Add --csv flag for compliant CSV output

           • Add `m' format specifier to show the MimeType

           • Implement `i' format for showing object ID
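
              For example, to produce a CSV listing showing object ID, path and MimeType:

                  rclone lsf --csv --format "ipm" remote:path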

         • lsjson

           • Add MimeType to the output

           • Add ID field to output to show Object ID

         • Add --retries-sleep flag (Benjamin Joseph Dag)

         • Oauth tidy up web page and error handling (Henning Surmeier)

       • Bug Fixes

         • Password prompt output with --log-file fixed for unix (Filip Bartodziej)

         • Calculate ModifyWindow each time on the fly to fix various problems (Stefan Breunig)

       • Mount

         • Only print “File.rename error” if there actually is an error (Stefan Breunig)

         • Delay rename if file has open writers instead of failing outright (Stefan Breunig)

         • Ensure atexit gets run on interrupt

         • macOS enhancements

           • Add --noappledouble and --noapplexattr flags

           • Add --volname flag and remove special chars from it

           • Make Get/List/Set/Remove xattr return ENOSYS for efficiency

           • Make --daemon work for macOS without CGO

       • VFS

         • Add --vfs-read-chunk-size and --vfs-read-chunk-size-limit (Fabian Möller)

         • Fix ChangeNotify for new or changed folders (Fabian Möller)

       • Local

         • Fix symlink/junction point directory handling under Windows

           • NB you will need to add -L to your command line to copy files with reparse points

       • Cache

         • Add non cached dirs on notifications (Remus Bunduc)

         • Allow root to be expired from rc (Remus Bunduc)

         • Clean remaining empty folders from temp upload path (Remus Bunduc)

         • Cache lists using batch writes (Remus Bunduc)

         • Use secure websockets for HTTPS Plex addresses (John Clayton)

         • Reconnect plex websocket on failures (Remus Bunduc)

         • Fix panic when running without plex configs (Remus Bunduc)

         • Fix root folder caching (Remus Bunduc)

       • Crypt

         • Check the crypted hash of files when uploading for extra data security

       • Dropbox

         • Make Dropbox for business folders accessible using an initial / in the path

       • Google Cloud Storage

         • Low level retry all operations if necessary

       • Google Drive

         • Add --drive-acknowledge-abuse to download flagged files

         • Add --drive-alternate-export to fix large doc export

         • Don’t attempt to choose Team Drives when using rclone config create

         • Fix change list polling with team drives

         • Fix ChangeNotify for folders (Fabian Möller)

         • Fix about (and df on a mount) for team drives

       • Onedrive

         • Errorhandler for onedrive for business requests (Henning Surmeier)

       • S3

         • Adjust upload concurrency with --s3-upload-concurrency (themylogin)

         • Fix --s3-chunk-size which was always using the minimum

       • SFTP

         • Add --ssh-path-override flag (Piotr Oleszczyk)

         • Fix slow downloads for long latency connections

       • Webdav

         • Add workarounds for biz.mail.ru

         • Ignore Reason-Phrase in status line to fix 4shared (Rodrigo)

         • Better error message generation

   v1.41 - 2018-04-28
       • New backends

         • Mega support added

         • Webdav now supports SharePoint cookie authentication (hensur)

       • New commands

         • link: create public link to files and folders (Stefan Breunig)

         • about: gets quota info from a remote (a-roussos, ncw)

         • hashsum: a generic tool for any hash to produce md5sum like output

       • New Features

         • lsd: Add -R flag and fix and update docs for all ls commands

         • ncdu: added a “refresh” key - CTRL-L (Keith Goldfarb)

         • serve restic: Add append-only mode (Steve Kriss)

         • serve restic: Disallow overwriting files in append-only mode (Alexander Neumann)

         • serve restic: Print actual listener address (Matt Holt)

         • size: Add --json flag (Matthew Holt)

         • sync: implement --ignore-errors (Mateusz Pabian)

         • dedupe: Add dedupe largest functionality (Richard Yang)

         • fs: Extend SizeSuffix to include TB and PB for rclone about

         • fs: add --dump goroutines and --dump openfiles for debugging

         • rc: implement core/memstats to print internal memory usage info

         • rc: new call rc/pid (Michael P. Dubner)

       • Compile

         • Drop support for go1.6

       • Release

         • Fix make tarball (Chih-Hsuan Yen)

       • Bug Fixes

         • filter: fix --min-age and --max-age together check

         • fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport

         • lsd, lsf: make sure all times we output are in local time

         • rc: fix setting bwlimit to unlimited

         • rc: take note of the --rc-addr flag too as per the docs

       • Mount

         • Use About to return the correct disk total/used/free (e.g. in df)

         • Set --attr-timeout default to 1s - fixes:

           • rclone using too much memory

           • rclone not serving files to samba

           • excessive time listing directories

         • Fix df -i (upstream fix)

       • VFS

         • Filter files . and .. from directory listing

         • Only make the VFS cache if --vfs-cache-mode > Off

       • Local

         • Add --local-no-check-updated to disable updated file checks

         • Retry remove on Windows sharing violation error

       • Cache

         • Flush the memory cache after close

         • Purge file data on notification

         • Always forget parent dir for notifications

         • Integrate with Plex websocket

         • Add rc cache/stats (seuffert)

         • Add info log on notification

       • Box

         • Fix failure reading large directories - parse file/directory size as float

       • Dropbox

         • Fix crypt+obfuscate on dropbox

         • Fix repeatedly uploading the same files

       • FTP

         • Work around strange response from box FTP server

         • More workarounds for FTP servers to fix mkParentDir error

         • Fix no error on listing nonexistent directory

       • Google Cloud Storage

         • Add service_account_credentials (Matt Holt)

         • Detect bucket presence by listing it - minimises permissions needed

         • Ignore zero length directory markers

       • Google Drive

         • Add service_account_credentials (Matt Holt)

         • Fix directory move leaving a hardlinked directory behind

         • Return proper google errors when Opening files

         • When initialized with a filepath, optional features used incorrect root path (Stefan Breunig)

       • HTTP

         • Fix sync for servers which don’t return Content-Length in HEAD

       • Onedrive

         • Add QuickXorHash support for OneDrive for business

         • Fix socket leak in multipart session upload

       • S3

         • Look in S3 named profile files for credentials

         • Add --s3-disable-checksum to disable checksum uploading (Chris Redekop)

         • Hierarchical configuration support (Giri Badanahatti)

         • Add in config for all the supported S3 providers

         • Add One Zone Infrequent Access storage class (Craig Rachel)

         • Add --use-server-modtime support (Peter Baumgartner)

         • Add --s3-chunk-size option to control multipart uploads

         • Ignore zero length directory markers

       • SFTP

         • Update docs to match code, fix typos and clarify disable_hashcheck prompt (Michael G. Noll)

         • Update docs with Synology quirks

         • Fail soft with a debug message on hash failure

       • Swift

         • Add --use-server-modtime support (Peter Baumgartner)

       • Webdav

         • Support SharePoint cookie authentication (hensur)

         • Strip leading and trailing / off root

   v1.40 - 2018-03-19
       • New backends

         • Alias backend to create aliases for existing remote names (Fabian Möller)

       • New commands

         • lsf: list for parsing purposes (Jakub Tasiemski)

           • by default this is a simple non recursive list of files and directories

           • it can be configured to add more info in an easy to parse way

         • serve restic: for serving a remote as a Restic REST endpoint

           • This enables restic to use any backends that rclone can access

           • Thanks Alexander Neumann for help, patches and review

         • rc: enable the remote control of a running rclone

           • The running rclone must be started with --rc and related flags.

           • Currently there is support for bwlimit, and flushing for mount and cache.

       • New Features

         • --max-delete flag to add a delete threshold (Bjørn Erik Pedersen)

         • All backends now support RangeOption for ranged Open

           • cat: Use RangeOption for limited fetches to make more efficient

           • cryptcheck: make reading of nonce more efficient with RangeOption

         • serve http/webdav/restic

           • support SSL/TLS

           • add --user --pass and --htpasswd for authentication
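
              For example, a WebDAV server with TLS and basic authentication might be started like
              this (certificate paths and credentials are placeholders):

                  rclone serve webdav remote:path --cert cert.pem --key key.pem --user me --pass secret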

         • copy/move: detect file size change during copy/move and abort transfer (ishuah)

         • cryptdecode: added option to return encrypted file names.  (ishuah)

         • lsjson: add --encrypted to show encrypted name (Jakub Tasiemski)

         • Add --stats-file-name-length to specify the printed file name length for stats (Will Gunn)

       • Compile

         • Code base was shuffled and factored

           • backends moved into a backend directory

           • large packages split up

           • See the CONTRIBUTING.md doc for info as to what lives where now

         • Update to using go1.10 as the default go version

         • Implement daily full integration tests

       • Release

         • Include a source tarball and sign it and the binaries

         • Sign the git tags as part of the release process

         • Add .deb and .rpm packages as part of the build

         • Make a beta release for all branches on the main repo (but not pull requests)

       • Bug Fixes

         • config: fixes errors on nonexistent config by loading config file only on first access

         • config: retry saving the config after failure (Mateusz)

         • sync: when using --backup-dir don’t delete files if we can’t set their modtime

           • this fixes odd behaviour with Dropbox and --backup-dir

         • fshttp: fix idle timeouts for HTTP connections

         • serve http: fix serving files with : in their names

         • Fix --exclude-if-present to ignore directories which it doesn’t have permission for (Iakov Davydov)

         • Make accounting work properly with crypt and b2

         • remove --no-traverse flag because it is obsolete

       • Mount

         • Add --attr-timeout flag to control attribute caching in kernel

           • this now defaults to 0 which is correct but less efficient

           • see the mount docs for more info

         • Add --daemon flag to allow mount to run in the background (ishuah)

         • Fix: Return ENOSYS rather than EIO on attempted link

           • This fixes FileZilla accessing an rclone mount served over sftp.

         • Fix setting modtime twice

         • Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows

         • Many bugs fixed in the VFS layer - see below

       • VFS

         • Many fixes for --vfs-cache-mode writes and above

           • Update cached copy if we know it has changed (fixes stale data)

           • Clean path names before using them in the cache

           • Disable cache cleaner if --vfs-cache-poll-interval=0

           • Fill and clean the cache immediately on startup

         • Fix Windows opening every file when it stats the file

         • Fix applying modtime for an open Write Handle

         • Fix creation of files when truncating

         • Write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE

         • Downgrade “poll-interval is not supported” message to Info

         • Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC

       • Local

         • Downgrade “invalid cross-device link: trying copy” to debug

         • Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for cross device

         • Fix race conditions updating the hashes

       • Cache

         • Add support for polling - cache will update when remote changes on supported backends

         • Reduce log level for Plex api

         • Fix dir cache issue

         • Implement --cache-db-wait-time flag

         • Improve efficiency with RangeOption and RangeSeek

         • Fix dirmove with temp fs enabled

         • Notify vfs when using temp fs

         • Offline uploading

         • Remote control support for path flushing

       • Amazon cloud drive

         • Rclone no longer has any working keys - disable integration tests

         • Implement DirChangeNotify to notify cache/vfs/mount of changes

       • Azureblob

         • Don’t check for bucket/container presence if listing was OK

           • this makes rclone do one less request per invocation

         • Improve accounting for chunked uploads

       • Backblaze B2

         • Don’t check for bucket/container presence if listing was OK

           • this makes rclone do one less request per invocation

       • Box

         • Improve accounting for chunked uploads

       • Dropbox

         • Fix custom oauth client parameters

       • Google Cloud Storage

         • Don’t check for bucket/container presence if listing was OK

           • this makes rclone do one less request per invocation

       • Google Drive

         • Migrate to api v3 (Fabian Möller)

         • Add scope configuration and root folder selection

         • Add --drive-impersonate for service accounts

           • thanks to everyone who tested, explored and contributed docs

         • Add --drive-use-created-date to use created date as modified date (nbuchanan)

         • Request the export formats only when required

           • This makes rclone quicker when there are no google docs

         • Fix finding paths with latin1 chars (a workaround for a drive bug)

         • Fix copying of a single Google doc file

         • Fix --drive-auth-owner-only to look in all directories

       • HTTP

         • Fix handling of directories with & in

       • Onedrive

         • Removed upload cutoff and always do session uploads

           • this stops the creation of multiple versions on business onedrive

         • Overwrite object size value with real size when reading file.  (Victor)

           • this fixes oddities when onedrive misreports the size of images

       • Pcloud

         • Remove unused chunked upload flag and code

       • Qingstor

         • Don’t check for bucket/container presence if listing was OK

           • this makes rclone do one less request per invocation

       • S3

         • Support hashes for multipart files (Chris Redekop)

         • Initial support for IBM COS (S3) (Giri Badanahatti)

         • Update docs to discourage use of v2 auth with CEPH and others

         • Don’t check for bucket/container presence if listing was OK

           • this makes rclone do one less request per invocation

         • Fix server-side copy and set modtime on files with + in

       • SFTP

         • Add option to disable remote hash check command execution (Jon Fautley)

         • Add --sftp-ask-password flag to prompt for password when needed (Leo R. Lundgren)

         • Add set_modtime configuration option

         • Fix following of symlinks

         • Fix reading config file outside of Fs setup

         • Fix reading $USER in username fallback not $HOME

         • Fix running under crontab - Use correct OS way of reading username

       • Swift

         • Fix refresh of authentication token

           • in v1.39 a bug was introduced which ignored new tokens - this fixes it

         • Fix extra HEAD transaction when uploading a new file

         • Don’t check for bucket/container presence if listing was OK

           • this makes rclone do one less request per invocation

       • Webdav

         • Add new time formats to support mydrive.ch and others

   v1.39 - 2017-12-23
       • New backends

         • WebDAV

           • tested with nextcloud, owncloud, put.io and others!

         • Pcloud

         • cache - wraps a cache around other backends (Remus Bunduc)

           • useful in combination with mount

           • NB this feature is in beta so use with care

       • New commands

         • serve command with subcommands:

           • serve webdav: this implements a webdav server for any rclone remote.

           • serve http: command to serve a remote over HTTP

         • config: add sub commands for full config file management

           • create/delete/dump/edit/file/password/providers/show/update

         • touch: to create or update the timestamp of a file (Jakub Tasiemski)

       • New Features

         • curl install for rclone (Filip Bartodziej)

         • --stats now shows percentage, size, rate and ETA in condensed form (Ishuah Kariuki)

         • --exclude-if-present to exclude a directory if a file is present (Iakov Davydov)

         • rmdirs: add --leave-root flag (lewapm)

         • move: add --delete-empty-src-dirs flag to remove dirs after move (Ishuah Kariuki)

         • Add --dump flag, introduce --dump requests, responses and remove --dump-auth, --dump-filters

           • Obscure X-Auth-Token: from headers when dumping too

         • Document and implement exit codes for different failure modes (Ishuah Kariuki)

       • Bug Fixes

         • Retry lots more different types of errors to make multipart transfers more reliable

         • Save the config before asking for a token, fixes disappearing oauth config

         • Warn the user if --include and --exclude are used together (Ernest Borowski)

         • Fix duplicate files (e.g. on Google drive) causing spurious copies

         • Allow trailing and leading whitespace for passwords (Jason Rose)

         • ncdu: fix crashes on empty directories

         • rcat: fix goroutine leak

         • moveto/copyto: Fix to allow copying to the same name

       • Mount

         • --vfs-cache mode to make writes into mounts more reliable.

           • this requires caching files on the disk (see --cache-dir)

           • As this is a new feature, use with care

         • Use sdnotify to signal systemd the mount is ready (Fabian Möller)

         • Check if directory is not empty before mounting (Ernest Borowski)

       • Local

         • Add error message for cross file system moves

         • Fix equality check for times

       • Dropbox

         • Rework multipart upload

           • buffer the chunks when uploading large files so they can be retried

           • change default chunk size to 48MB now we are buffering them in memory

           • retry every error after the first chunk is done successfully

         • Fix error when renaming directories

       • Swift

         • Fix crash on bad authentication

       • Google Drive

         • Add service account support (Tim Cooijmans)

       • S3

         • Make it work properly with Digital Ocean Spaces (Andrew Starr-Bochicchio)

         • Fix crash if a bad listing is received

         • Add support for ECS task IAM roles (David Minor)

       • Backblaze B2

         • Fix multipart upload retries

         • Fix --hard-delete to make it work 100% of the time

       • Swift

         • Allow authentication with storage URL and auth key (Giovanni Pizzi)

         • Add new fields for swift configuration to support IBM Bluemix Swift (Pierre Carlson)

         • Add OS_TENANT_ID and OS_USER_ID to config

         • Allow configs with user id instead of user name

         • Check if swift segments container exists before creating (John Leach)

         • Fix memory leak in swift transfers (upstream fix)

       • SFTP

         • Add option to enable the use of aes128-cbc cipher (Jon Fautley)

       • Amazon cloud drive

         • Fix download of large files failing with “Only one auth mechanism allowed”

       • crypt

         • Option to encrypt directory names or leave them intact

         • Implement DirChangeNotify (Fabian Möller)

       • onedrive

         • Add option to choose resourceURL during setup of OneDrive Business account if more than one
           is available for user

   v1.38 - 2017-09-30
       • New backends

         • Azure Blob Storage (thanks Andrei Dragomir)

         • Box

         • Onedrive for Business (thanks Oliver Heyme)

         • QingStor from QingCloud (thanks wuyu)

       • New commands

         • rcat - read from standard input and stream upload

         • tree - shows a nicely formatted recursive listing

         • cryptdecode - decode crypted file names (thanks ishuah)

         • config show - print the config file

         • config file - print the config file location

       • New Features

         • Empty directories are deleted on sync

         • dedupe - implement merging of duplicate directories

         • check and cryptcheck made more consistent and use less memory

         • cleanup for remaining remotes (thanks ishuah)

         • --immutable for ensuring that files don’t change (thanks Jacob McNamee)

         • --user-agent option (thanks Alex McGrath Kraak)

         • --disable flag to disable optional features

         • --bind flag for choosing the local addr on outgoing connections

         • Support for zsh auto-completion (thanks bpicode)

         • Stop normalizing file names but do a normalized compare in sync

       • Compile

         • Update to using go1.9 as the default go version

         • Remove snapd build due to maintenance problems

       • Bug Fixes

         • Improve retriable error detection which makes multipart uploads better

         • Make check obey --ignore-size

         • Fix bwlimit toggle in conjunction with schedules (thanks cbruegg)

         • config ensures newly written config is on the same mount

       • Local

         • Revert to copy when moving file across file system boundaries

         • --skip-links to suppress symlink warnings (thanks Zhiming Wang)

       • Mount

         • Re-use rcat internals to support uploads from all remotes

       • Dropbox

         • Fix “entry doesn’t belong in directory” error

         • Stop using deprecated API methods

       • Swift

         • Fix server-side copy to empty container with --fast-list

       • Google Drive

         • Change the default for --drive-use-trash to true

       • S3

         • Set session token when using STS (thanks Girish Ramakrishnan)

         • Glacier docs and error messages (thanks Jan Varho)

         • Read 1000 (not 1024) items in dir listings to fix Wasabi

       • Backblaze B2

         • Fix SHA1 mismatch when downloading files with no SHA1

         • Calculate missing hashes on the fly instead of spooling

         • --b2-hard-delete to permanently delete (not hide) files (thanks John Papandriopoulos)

       • Hubic

         • Fix creating containers - no longer have to use the default container

       • Swift

         • Optionally configure from a standard set of OpenStack environment vars

         • Add endpoint_type config

       • Google Cloud Storage

         • Fix bucket creation to work with limited permission users

       • SFTP

         • Implement connection pooling for multiple ssh connections

         • Limit new connections per second

         • Add support for MD5 and SHA1 hashes where available (thanks Christian Brüggemann)

       • HTTP

         • Fix URL encoding issues

         • Fix directories with : in their names

         • Fix panic with URL encoded content

   v1.37 - 2017-07-22
       • New backends

         • FTP - thanks to Antonio Messina

         • HTTP - thanks to Vasiliy Tolstov

       • New commands

         • rclone ncdu - for exploring a remote with a text based user interface.

         • rclone lsjson - for listing with a machine-readable output

         • rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox)

       • New Features

         • Implement --fast-list flag

           • This allows remotes to list recursively if they can

           • This uses fewer transactions (important if you pay for them)

           • This may or may not be quicker

           • This will use more memory as it has to hold the listing in memory

           • --old-sync-method deprecated - the remaining uses are covered by --fast-list

           • This involved a major re-write of all the listing code

         • Add --tpslimit and --tpslimit-burst to limit transactions per second

           • this is useful in conjunction with rclone mount to limit external apps

         • Add --stats-log-level so you can see --stats without -v

         • Print password prompts to stderr - Hraban Luyat

         • Warn about duplicate files when syncing

         • Oauth improvements

           • allow auth_url and token_url to be set in the config file

           • Print redirection URI if using own credentials.

         • Don’t Mkdir at the start of sync to save transactions

       • Compile

         • Update build to go1.8.3

         • Require go1.6 for building rclone

         • Compile 386 builds with “GO386=387” for maximum compatibility

       • Bug Fixes

         • Fix menu selection when no remotes

         • Config saving reworked to not kill the file if disk gets full

         • Don’t delete remote if name does not change while renaming

         • moveto, copyto: report transfers and checks as per move and copy

       • Local

         • Add --local-no-unicode-normalization flag - Bob Potter

       • Mount

         • Now supported on Windows using cgofuse and WinFsp - thanks to Bill Zissimopoulos for much help

         • Compare checksums on upload/download via FUSE

         • Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM - Jérôme Vizcaino

         • On read only open of file, make open pending until first read

         • Make --read-only reject modify operations

         • Implement ModTime via FUSE for remotes that support it

         • Allow modTime to be changed even before all writers are closed

         • Fix panic on renames

         • Fix hang on errored upload

       • Crypt

         • Report the name:root as specified by the user

         • Add an “obfuscate” option for filename encryption - Stephen Harris

       • Amazon Drive

         • Fix initialization order for token renewer

         • Remove revoked credentials, allow oauth proxy config and update docs

       • B2

         • Reduce minimum chunk size to 5MB

       • Drive

         • Add team drive support

         • Reduce bandwidth by adding fields for partial responses - Martin Kristensen

         • Implement --drive-shared-with-me flag to view shared with me files - Danny Tsai

         • Add --drive-trashed-only to read only the files in the trash

         • Remove obsolete --drive-full-list

         • Add missing seek to start on retries of chunked uploads

         • Fix stats accounting for upload

         • Convert / in names to a unicode equivalent (／)

         • Poll for Google Drive changes when mounted

       • OneDrive

         • Fix the uploading of files with spaces

         • Fix initialization order for token renewer

         • Display speeds accurately when uploading - Yoni Jah

         • Swap to using http://localhost:53682/ as redirect URL - Michael Ledin

         • Retry on token expired error, reset upload body on retry - Yoni Jah

       • Google Cloud Storage

         • Add ability to specify location and storage class via config and command line - thanks gdm85

         • Create container if necessary on server-side copy

         • Increase directory listing chunk to 1000 to increase performance

         • Obtain a refresh token for GCS - Steven Lu

       • Yandex

         • Fix the name reported in log messages (was empty)

         • Correct error return for listing empty directory

       • Dropbox

         • Rewritten to use the v2 API

           • Now supports ModTime

             • Can only set by uploading the file again

             • If you uploaded with an old rclone, rclone may upload everything again

             • Use --size-only or --checksum to avoid this

           • Now supports the Dropbox content hashing scheme

           • Now supports low level retries

       • S3

         • Work around eventual consistency in bucket creation

         • Create container if necessary on server-side copy

         • Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar Ahmed

       • Swift, Hubic

         • Fix zero length directory markers showing in the subdirectory listing

           • this caused lots of duplicate transfers

         • Fix paged directory listings

           • this caused duplicate directory errors

         • Create container if necessary on server-side copy

         • Increase directory listing chunk to 1000 to increase performance

         • Make sensible error if the user forgets the container

       • SFTP

         • Add support for using ssh key files

         • Fix under Windows

         • Fix ssh agent on Windows

         • Adapt to latest version of library - Igor Kharin

   v1.36 - 2017-03-18
       • New Features

         • SFTP remote (Jack Schmidt)

         • Re-implement sync routine to work a directory at a time reducing memory usage

         • Logging revamped to be more in line with rsync - now much quieter

           • -v only shows transfers

           • -vv is for full debug

           • --syslog to log to syslog on capable platforms

         • Implement --backup-dir and --suffix

         • Implement --track-renames (initial implementation by Bjørn Erik Pedersen)

         • Add time-based bandwidth limits (Lukas Loesche)

         • rclone cryptcheck: checks integrity of crypt remotes

         • Allow all config file variables and options to be set from environment variables

         • Add --buffer-size parameter to control buffer size for copy

         • Make --delete-after the default

         • Add --ignore-checksum flag (fixed by Hisham Zarka)

         • rclone check: Add --download flag to check all the data, not just hashes

         • rclone cat: add --head, --tail, --offset, --count and --discard

         • rclone config: when choosing from a list, allow the value to be entered too

         • rclone config: allow rename and copy of remotes

         • rclone obscure: for generating encrypted passwords for rclone’s config (T.C.  Ferguson)

         • Comply with XDG Base Directory specification (Dario Giovannetti)

           • this moves the default location of the config file in a backwards compatible way

         • Release changes

           • Ubuntu snap support (Dedsec1)

           • Compile with go 1.8

           • MIPS/Linux big and little endian support

       • Bug Fixes

         • Fix copyto copying things to the wrong place if the destination dir didn’t exist

         • Fix parsing of remotes in moveto and copyto

         • Fix --delete-before deleting files on copy

         • Fix --files-from with an empty file copying everything

         • Fix sync: don’t update mod times if --dry-run set

         • Fix MimeType propagation

         • Fix filters to add ** rules to directory rules

       • Local

         • Implement -L, --copy-links flag to allow rclone to follow symlinks

         • Open files in write only mode so rclone can write to an rclone mount

         • Fix unnormalised unicode causing problems reading directories

         • Fix interaction between -x flag and --max-depth

       • Mount

         • Implement proper directory handling (mkdir, rmdir, renaming)

         • Make include and exclude filters apply to mount

         • Implement read and write async buffers - control with --buffer-size

         • Fix fsync for directories

         • Fix retry on network failure when reading off crypt

       • Crypt

         • Add --crypt-show-mapping to show encrypted file mapping

         • Fix crypt writer getting stuck in a loop

           • IMPORTANT this bug had the potential to cause data corruption when

             • reading data from a network based remote and

             • writing to a crypt on Google Drive

           • Use the cryptcheck command to validate your data if you are concerned

           • If syncing two crypt remotes, sync the unencrypted remote

       • Amazon Drive

         • Fix panics on Move (rename)

         • Fix panic on token expiry

       • B2

         • Fix inconsistent listings and rclone check

         • Fix uploading empty files with go1.8

         • Constrain memory usage when doing multipart uploads

         • Fix upload url not being refreshed properly

       • Drive

         • Fix Rmdir on directories with trashed files

         • Fix “Ignoring unknown object” when downloading

         • Add --drive-list-chunk

         • Add --drive-skip-gdocs (Károly Oláh)

       • OneDrive

         • Implement Move

         • Fix Copy

           • Fix overwrite detection in Copy

           • Fix waitForJob to parse errors correctly

         • Use token renewer to stop auth errors on long uploads

         • Fix uploading empty files with go1.8

       • Google Cloud Storage

         • Fix depth 1 directory listings

       • Yandex

         • Fix single level directory listing

       • Dropbox

         • Normalise the case for single level directory listings

         • Fix depth 1 listing

       • S3

         • Added ca-central-1 region (Jon Yergatian)

   v1.35 - 2017-01-02
       • New Features

         • moveto and copyto commands for choosing a destination name on copy/move

         • rmdirs command to recursively delete empty directories

         • Allow repeated --include/--exclude/--filter options

         • Only show transfer stats on commands which transfer stuff

           • show stats on any command using the --stats flag

         • Allow overlapping directories in move when server-side dir move is supported

         • Add --stats-unit option - thanks Scott McGillivray

       • Bug Fixes

         • Fix the config file being overwritten when two rclone instances are running

         • Make rclone lsd obey the filters properly

         • Fix compilation on mips

         • Fix not transferring files that don’t differ in size

         • Fix panic on nil retry/fatal error

       • Mount

         • Retry reads on error - should help with reliability a lot

         • Report the modification times for directories from the remote

         • Add bandwidth accounting and limiting (fixes --bwlimit)

         • If --stats provided, will show stats and which files are transferring

         • Support R/W files if truncate is set.

         • Implement statfs interface so df works

         • Note that write is now supported on Amazon Drive

         • Report number of blocks in a file - thanks Stefan Breunig

       • Crypt

         • Prevent the user pointing crypt at itself

         • Fix failed to authenticate decrypted block errors

           • these will now return the underlying unexpected EOF instead

       • Amazon Drive

         • Add support for server-side move and directory move - thanks Stefan Breunig

         • Fix nil pointer deref on size attribute

       • B2

         • Use new prefix and delimiter parameters in directory listings

           • This makes --max-depth 1 dir listings as used in mount much faster

         • Reauth the account while doing uploads too - should help with token expiry

       • Drive

         • Make DirMove more efficient and complain about moving the root

         • Create destination directory on Move()

   v1.34 - 2016-11-06
       • New Features

         • Stop single file and --files-from operations iterating through the source bucket.

         • Stop removing failed upload to cloud storage remotes

         • Make ContentType be preserved for cloud to cloud copies

         • Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco Paganini

         • rclone check shows count of hashes that couldn’t be checked

         • rclone listremotes command

         • Support linux/arm64 build - thanks Fredrik Fornwall

         • Remove Authorization: lines from --dump-headers output

       • Bug Fixes

         • Ignore files with control characters in the names

         • Fix rclone move command

           • Delete src files which already existed in dst

           • Fix deletion of src file when dst file older

         • Fix rclone check on crypted file systems

         • Make failed uploads not count as “Transferred”

         • Make sure high level retries show with -q

         • Use a vendor directory with godep for repeatable builds

       • rclone mount - FUSE

         • Implement FUSE mount options

           • --no-modtime, --debug-fuse, --read-only, --allow-non-empty, --allow-root, --allow-other

           • --default-permissions, --write-back-cache, --max-read-ahead, --umask, --uid, --gid

         • Add --dir-cache-time to control caching of directory entries

         • Implement seek for files opened for read (useful for video players)

           • with --no-seek flag to disable

         • Fix crash on 32 bit ARM (alignment of 64 bit counter)

         • ...and many more internal fixes and improvements!

       • Crypt

         • Don’t show encrypted password in configurator to stop confusion

       • Amazon Drive

         • New wait for upload option --acd-upload-wait-per-gb

           • upload timeouts scale by file size and can be disabled

         • Add 502 Bad Gateway to list of errors we retry

         • Fix overwriting a file with a zero length file

         • Fix ACD file size warning limit - thanks Felix Bünemann

       • Local

         • Unix: implement -x/--one-file-system to stay on a single file system

           • thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana

         • Windows: ignore the symlink bit on files

         • Windows: Ignore directory-based junction points

       • B2

         • Make sure each upload has at least one upload slot - fixes strange upload stats

         • Fix uploads when using crypt

         • Fix download of large files (sha1 mismatch)

         • Return error when we try to create a bucket which someone else owns

         • Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur

       • S3

         • Command line and config file support for

           • Setting/overriding ACL - thanks Radek Šenfeld

           • Setting storage class - thanks Asko Tamm

       • Drive

         • Make exponential backoff work exactly as per Google specification

         • add .epub, .odp and .tsv as export formats.

       • Swift

         • Don’t read metadata for directory marker objects

   v1.33 - 2016-08-24
       • New Features

         • Implement encryption

           • data encrypted in NACL secretbox format

           • with optional file name encryption

         • New commands

           • rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)

             • works on Linux, FreeBSD and OS X (need testers for the last 2!)

           • rclone cat - outputs remote file or files to the terminal

           • rclone genautocomplete - command to make a bash completion script for rclone

         • Editing a remote using rclone config now goes through the wizard

         • Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors

         • Use cobra for sub commands and docs generation

       • drive

         • Document how to make your own client_id

       • s3

         • User-configurable Amazon S3 ACL (thanks Radek Šenfeld)

       • b2

         • Fix stats accounting for upload - no more jumping to 100% done

         • On cleanup delete hide marker if it is the current file

         • New B2 API endpoint (thanks Per Cederberg)

         • Set maximum backoff to 5 Minutes

       • onedrive

         • Fix URL escaping in file names - e.g. uploading files with + in them.

       • amazon cloud drive

         • Fix token expiry during large uploads

         • Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors

       • local

         • Fix filenames with invalid UTF-8 not being uploaded

         • Fix problem with some UTF-8 characters on OS X

   v1.32 - 2016-07-13
       • Backblaze B2

         • Fix upload of large files not in root

   v1.31 - 2016-07-13
       • New Features

         • Reduce memory on sync by about 50%

         • Implement --no-traverse flag to stop copy traversing the destination remote.

           • This can be used to reduce memory usage down to the smallest possible.

           • Useful to copy a small number of files into a large destination folder.

         • Implement cleanup command for emptying trash / removing old versions of files

           • Currently B2 only

         • Single file handling improved

           • Now copied with --files-from

           • Automatically sets --no-traverse when copying a single file

         • Info on installing with ansible - thanks Stefan Weichinger

         • Implement --no-update-modtime flag to stop rclone fixing the remote modified times.

       • Bug Fixes

         • Fix move command - stop it running for overlapping Fses - this was causing data loss.

       • Local

         • Fix incomplete hashes - this was causing problems for B2.

       • Amazon Drive

         • Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.

       • Swift

         • Add support for non-default project domain - thanks Antonio Messina.

       • S3

         • Add instructions on how to use rclone with minio.

         • Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.

         • Skip setting the modified time for objects > 5GB as it isn’t possible.

       • Backblaze B2

         • Add --b2-versions flag so old versions can be listed and retrieved.

         • Treat 403 errors (e.g. cap exceeded) as fatal.

         • Implement cleanup command for deleting old file versions.

         • Make error handling compliant with B2 integrations notes.

         • Fix handling of token expiry.

         • Implement --b2-test-mode to set X-Bz-Test-Mode header.

         • Set cutoff for chunked upload to 200MB as per B2 guidelines.

         • Make upload multi-threaded.

       • Dropbox

         • Don’t retry 461 errors.

   v1.30 - 2016-06-18
       • New Features

         • Directory  listing  code  reworked for more features and better error reporting (thanks to Klaus Post
           for help).  This enables

           • Directory include filtering for efficiency

           • --max-depth parameter

           • Better error reporting

           • More to come

         • Retry more errors

         • Add --ignore-size flag - for uploading images to onedrive

         • Log -v output to stdout by default

         • Display the transfer stats in more human-readable form

         • Make 0 size files specifiable with --max-size 0b

         • Add b suffix so we can specify bytes in --bwlimit, --min-size, etc.

         • Use “password:” instead of “password>” prompt - thanks Klaus Post and Leigh Klotz

       • Bug Fixes

         • Fix retry doing one too many retries

       • Local

         • Fix problems with OS X and UTF-8 characters

       • Amazon Drive

         • Check a file exists before uploading to help with 408 Conflict errors

         • Reauth on 401 errors - this has been causing a lot of problems

         • Work around spurious 403 errors

         • Restart directory listings on error

       • Google Drive

         • Check a file exists before uploading to help with duplicates

         • Fix retry of multipart uploads

       • Backblaze B2

         • Implement large file uploading

       • S3

         • Add AES256 server-side encryption - thanks Justin R. Wilson

       • Google Cloud Storage

         • Make sure we don’t use conflicting content types on upload

         • Add service account support - thanks Michal Witkowski

       • Swift

         • Add auth version parameter

         • Add domain option for openstack (v3 auth) - thanks Fabian Ruff

   v1.29 - 2016-04-18
       • New Features

         • Implement -I, --ignore-times for unconditional upload

         • Improve dedupe command

           • Now removes identical copies without asking

           • Now obeys --dry-run

           • Implement --dedupe-mode for non interactive running

             • --dedupe-mode interactive - interactive, the default.

             • --dedupe-mode skip - removes identical files then skips anything left.

             • --dedupe-mode first - removes identical files then keeps the first one.

             • --dedupe-mode newest - removes identical files then keeps the newest one.

             • --dedupe-mode oldest - removes identical files then keeps the oldest one.

             • --dedupe-mode rename - removes identical files then renames the rest to be different.

       • Bug fixes

         • Make rclone check obey the --size-only flag.

         • Use “application/octet-stream” if discovered mime type is invalid.

         • Fix missing “quit” option when there are no remotes.

       • Google Drive

         • Increase default chunk size to 8 MB - increases upload speed of big files

         • Speed up directory listings and make more reliable

         • Add missing retries for Move and DirMove - increases reliability

         • Preserve mime type on file update

       • Backblaze B2

         • Enable mod time syncing

           • This means that B2 will now check modification times

           • It will upload new files to update the modification times

           • (there isn’t an API to just set the mod time.)

           • If you want the old behaviour use --size-only.

         • Update API to new version

         • Fix parsing of mod time when not in metadata

       • Swift/Hubic

         • Don’t return an MD5SUM for static large objects

       • S3

         • Fix uploading files bigger than 50GB

   v1.28 - 2016-03-01
       • New Features

         • Configuration file encryption - thanks Klaus Post

         • Improve rclone config adding more help and making it easier to understand

         • Implement -u/--update so creation times can be used on all remotes

         • Implement --low-level-retries flag

         • Optionally disable gzip compression on downloads with --no-gzip-encoding

       • Bug fixes

         • Don’t make directories if --dry-run set

         • Fix and document the move command

         • Fix redirecting stderr on unix-like OSes when using --log-file

         • Fix delete command to wait until all finished - fixes missing deletes.

       • Backblaze B2

         • Use one upload URL per goroutine - fixes “more than one upload using auth token” errors

         • Add pacing, retries and reauthentication - fixes token expiry problems

         • Upload without using a temporary file from local (and remotes which support SHA1)

         • Fix reading metadata for all files when it shouldn’t have been

       • Drive

         • Fix listing drive documents at root

         • Disable copy and move for Google docs

       • Swift

         • Fix uploading of chunked files with non ASCII characters

         • Allow setting of storage_url in the config - thanks Xavier Lucas

       • S3

         • Allow IAM role and credentials from environment variables - thanks Brian Stengaard

         • Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon

       • Amazon Drive

         • Retry on more things to make directory listings more reliable

   v1.27 - 2016-01-31
       • New Features

         • Easier headless configuration with rclone authorize

         • Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.

         • delete command which does obey the filters (unlike purge)

         • dedupe command to deduplicate a remote.  Useful with Google Drive.

         • Add --ignore-existing flag to skip all files that exist on destination.

         • Add --delete-before, --delete-during, --delete-after flags.

         • Add --memprofile flag to debug memory use.

         • Warn the user about files with same name but different case

         • Make --include rules add their implicit exclude * at the end of the filter list

         • Deprecate compiling with go1.3

       • Amazon Drive

         • Fix download of files > 10 GB

         • Fix directory traversal (“Next token is expired”) for large directory listings

         • Remove 409 conflict from error codes we will retry - stops very long pauses

       • Backblaze B2

         • SHA1 hashes now checked by rclone core

       • Drive

         • Add --drive-auth-owner-only to only consider files owned by the user - thanks Björn Harrtell

         • Export Google documents

       • Dropbox

         • Make file exclusion error controllable with -q

       • Swift

         • Fix upload from unprivileged user.

       • S3

         • Fix updating of mod times of files with + in.

       • Local

         • Add local file system option to disable UNC on Windows.

   v1.26 - 2016-01-02
       • New Features

         • Yandex storage backend - thank you Dmitry Burdeev (“dibu”)

         • Implement Backblaze B2 storage backend

         • Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles

         • Make ls/lsl/md5sum/size/check obey includes and excludes

       • Fixes

         • Fix crash in http logging

         • Upload releases to github too

       • Swift

         • Fix sync for chunked files

       • OneDrive

         • Re-enable server-side copy

         • Don’t mask HTTP error codes with JSON decode error

       • S3

         • Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)

   v1.25 - 2015-11-14
       • New features

         • Implement Hubic storage system

       • Fixes

         • Fix deletion of some excluded files without --delete-excluded

           • This could have deleted files unexpectedly on sync

           • Always check first with --dry-run!

       • Swift

         • Stop SetModTime losing metadata (e.g. X-Object-Manifest)

           • This could have caused data loss for files > 5GB in size

         • Use ContentType from Object to avoid lookups in listings

       • OneDrive

         • disable server-side copy as it seems to be broken at Microsoft

   v1.24 - 2015-11-07
       • New features

         • Add support for Microsoft OneDrive

         • Add --no-check-certificate option to disable server certificate verification

         • Add async readahead buffer for faster transfer of big files

       • Fixes

         • Allow spaces in remotes and check remote names for validity at creation time

         • Allow “&” and disallow “:” in Windows filenames.

       • Swift

         • Ignore directory marker objects where appropriate - allows working with Hubic

         • Don’t delete the container if fs wasn’t at root

       • S3

         • Don’t delete the bucket if fs wasn’t at root

       • Google Cloud Storage

         • Don’t delete the bucket if fs wasn’t at root

   v1.23 - 2015-10-03
       • New features

         • Implement rclone size for measuring remotes

       • Fixes

         • Fix headless config for drive and gcs

         • Tell the user they should try again if the webserver method failed

         • Improve output of --dump-headers

       • S3

         • Allow anonymous access to public buckets

       • Swift

         • Stop chunked operations logging “Failed to read info: Object Not Found”

         • Use Content-Length on uploads for extra reliability

   v1.22 - 2015-09-28
       • Implement rsync like include and exclude flags

       • swift

         • Support files > 5GB - thanks Sergey Tolmachev

   v1.21 - 2015-09-22
       • New features

         • Display individual transfer progress

         • Make lsl output times in localtime

       • Fixes

         • Fix allowing user to override credentials again in Drive, GCS and ACD

       • Amazon Drive

         • Implement compliant pacing scheme

       • Google Drive

         • Make directory reads concurrent for increased speed.

   v1.20 - 2015-09-15
       • New features

         • Amazon Drive support

         • Oauth support redone - fix many bugs and improve usability

           • Use “golang.org/x/oauth2” as oauth library of choice

           • Improve oauth usability for smoother initial signup

           • drive, googlecloudstorage: optionally use auto config for the oauth token

         • Implement --dump-headers and --dump-bodies debug flags

         • Show multiple matched commands if abbreviation too short

         • Implement server-side move where possible

       • local

         • Always use UNC paths internally on Windows - fixes a lot of bugs

       • dropbox

         • force use of our custom transport which makes timeouts work

       • Thanks to Klaus Post for lots of help with this release

   v1.19 - 2015-08-28
       • New features

         • Server side copies for s3/swift/drive/dropbox/gcs

         • Move command - uses server-side copies if it can

         • Implement --retries flag - tries 3 times by default

         • Build for plan9/amd64 and solaris/amd64 too

       • Fixes

         • Make a current version download with a fixed URL for scripting

         • Ignore rmdir in limited fs rather than throwing error

       • dropbox

         • Increase chunk size to improve upload speeds massively

         • Issue an error message when trying to upload bad file name

   v1.18 - 2015-08-17
       • drive

         • Add --drive-use-trash flag so rclone trashes instead of deletes

         • Add “Forbidden to download” message for files with no downloadURL

       • dropbox

         • Remove datastore

           • This was deprecated and it caused a lot of problems

           • Modification times and MD5SUMs no longer stored

         • Fix uploading files > 2GB

       • s3

         • use official AWS SDK from github.com/aws/aws-sdk-go

         • NB will most likely require you to delete and recreate remote

         • enable multipart upload which enables files > 5GB

         • tested with Ceph / RadosGW / S3 emulation

         • many thanks to Sam Liston and Brian Haymore at the  Utah Center for High Performance Computing for  a
           Ceph test account

       • misc

         • Show errors when reading the config file

         • Do not print stats in quiet mode - thanks Leonid Shalupov

         • Add FAQ

         • Fix created directories not obeying umask

         • Linux installation instructions - thanks Shimon Doodkin

   v1.17 - 2015-06-14
       • dropbox: fix case insensitivity issues - thanks Leonid Shalupov

   v1.16 - 2015-06-09
       • Fix uploading big files which was causing timeouts or panics

       • Don’t check md5sum after download with --size-only

    v1.15 - 2015-06-06
       • Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper

       • Implement --size-only flag to sync on size not checksum & modtime

       • Expand docs and remove duplicated information

       • Document rclone’s limitations with directories

       • dropbox: update docs about case insensitivity

   v1.14 - 2015-05-21
       • local: fix encoding of non utf-8 file names - fixes a duplicate file problem

       • drive: docs about rate limiting

       • google cloud storage: Fix compile after API change in “google.golang.org/api/storage/v1”

   v1.13 - 2015-05-10
       • Revise documentation (especially sync)

       • Implement --timeout and --conntimeout

       • s3: ignore etags from multipart uploads which aren’t md5sums

   v1.12 - 2015-03-15
       • drive: Use chunked upload for files above a certain size

       • drive: add --drive-chunk-size and --drive-upload-cutoff parameters

       • drive: switch to insert from update when a failed copy deletes the upload

       • core: Log duplicate files if they are detected

   v1.11 - 2015-03-04
       • swift: add region parameter

       • drive: fix crash on failed to update remote mtime

       • In remote paths, change native directory separators to /

       • Add synchronization to ls/lsl/lsd output to stop corruptions

       • Ensure all stats/log messages go to stderr

       • Add --log-file flag to log everything (including panics) to file

       • Make it possible to disable stats printing with --stats=0

       • Implement --bwlimit to limit data transfer bandwidth

   v1.10 - 2015-02-12
       • s3: list an unlimited number of items

       • Fix getting stuck in the configurator

   v1.09 - 2015-02-07
       • windows: Stop drive letters (e.g. C:) getting mixed up with remotes (e.g. drive:)

       • local: Fix directory separators on Windows

       • drive: fix rate limit exceeded errors

   v1.08 - 2015-02-04
       • drive: fix subdirectory listing to not list entire drive

       • drive: Fix SetModTime

       • dropbox: adapt code to recent library changes

   v1.07 - 2014-12-23
       • google cloud storage: fix memory leak

   v1.06 - 2014-12-12
       • Fix “Couldn’t find home directory” on OSX

       • swift: Add tenant parameter

       • Use new location of Google API packages

   v1.05 - 2014-08-09
       • Improved tests and consequently lots of minor fixes

       • core: Fix race detected by go race detector

       • core: Fixes after running errcheck

       • drive: reset root directory on Rmdir and Purge

       • fs: Document that Purger returns error on empty directory, test and fix

       • google cloud storage: fix ListDir on subdirectory

       • google cloud storage: re-read metadata in SetModTime

       • s3: make reading metadata more reliable to work around eventual consistency problems

       • s3: strip trailing / from ListDir()

       • swift: return directories without / in ListDir

   v1.04 - 2014-07-21
       • google cloud storage: Fix crash on Update

   v1.03 - 2014-07-20
       • swift, s3, dropbox: fix updated files being marked as corrupted

       • Make compile with go 1.1 again

   v1.02 - 2014-07-19
       • Implement Dropbox remote

       • Implement Google Cloud Storage remote

       • Verify Md5sums and Sizes after copies

       • Remove times from “ls” command - lists sizes only

       • Add “lsl” - lists times and sizes

       • Add “md5sum” command

   v1.01 - 2014-07-04
       • drive: fix transfer of big files using up lots of memory

   v1.00 - 2014-07-03
       • drive: fix whole second dates

   v0.99 - 2014-06-26
       • Fix --dry-run not working

       • Make compatible with go 1.1

   v0.98 - 2014-05-30
       • s3: Treat missing Content-Length as 0 for some ceph installations

       • rclonetest: add file with a space in

   v0.97 - 2014-05-05
       • Implement copying of single files

       • s3 & swift: support paths inside containers/buckets

   v0.96 - 2014-04-24
       • drive: Fix multiple files of same name being created

       • drive: Use o.Update and fs.Put to optimise transfers

       • Add version number, -V and --version

   v0.95 - 2014-03-28
       • rclone.org: website, docs and graphics

       • drive: fix path parsing

   v0.94 - 2014-03-27
       • Change remote format one last time

       • GNU style flags

   v0.93 - 2014-03-16
       • drive: store token in config file

       • cross compile other versions

       • set strict permissions on config file

   v0.92 - 2014-03-15
       • Config fixes and --config option

   v0.91 - 2014-03-15
       • Make config file

   v0.90 - 2013-06-27
       • Project named rclone

   v0.00 - 2012-11-18
       • Project started

Bugs and Limitations

   Limitations
   Directory timestamps aren’t preserved
       Rclone  doesn’t  currently  preserve  the  timestamps of directories.  This is because rclone only really
       considers objects when syncing.

   Rclone struggles with millions of files in a directory/bucket
       Currently rclone loads each directory/bucket entirely into memory before using  it.   Since  each  rclone
       object takes 0.5k-1k of memory this can take a very long time and use a large amount of memory.

       Millions  of  files  in  a directory tends to occur on bucket-based remotes (e.g. S3 buckets) since those
       remotes do not segregate subdirectories within the bucket.

   Bucket-based remotes and folders
       Bucket-based remotes (e.g. S3/GCS/Swift/B2) do not have  a  concept  of  directories.   Rclone  therefore
       cannot  create  directories in them which means that empty directories on a bucket-based remote will tend
       to disappear.

       Some software creates empty keys ending in /  as  directory  markers.   Rclone  doesn’t  do  this  as  it
       potentially creates more objects and costs more.  This ability may be added in the future (probably via a
       flag/option).

   Bugs
       Bugs are stored in rclone’s GitHub project:

       • https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug Reported bugs

       • https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Known+Problem%22    Known
         issues

Frequently Asked Questions

   Do all cloud storage systems support all rclone commands
       Yes they do.  All the rclone commands (e.g. sync, copy, etc.)   will  work  on  all  the  remote  storage
       systems.

   Can I copy the config from one machine to another
       Sure!   Rclone  stores  all  of  its  config in a single file.  If you want to find this file, run rclone
       config file which will tell you where it is.
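
        For example (the reported path will vary by OS and rclone version):

               $ rclone config file
               Configuration file is stored at:
               /home/user/.config/rclone/rclone.conf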

       See the remote setup docs for more info.

   How do I configure rclone on a remote / headless box with no browser?
       This has now been documented in its own remote setup page.

   Can rclone sync directly from drive to s3
       Rclone can sync between two remote cloud storage systems just fine.

       Note that it effectively downloads the file and uploads it again, so the node running rclone  would  need
       to have lots of bandwidth.

       The syncs would be incremental (on a file by file basis).

       e.g.

              rclone sync -i drive:Folder s3:bucket

   Using rclone from multiple locations at the same time
        You can use rclone from multiple places at the same time if you choose a different subdirectory
        for the output, e.g.

              Server A> rclone sync -i /tmp/whatever remote:ServerA
              Server B> rclone sync -i /tmp/whatever remote:ServerB

       If you sync to the same directory then you should use rclone copy otherwise the two instances  of  rclone
       may delete each other’s files, e.g.

              Server A> rclone copy /tmp/whatever remote:Backup
              Server B> rclone copy /tmp/whatever remote:Backup

       The  file  names  you  upload from Server A and Server B should be different in this case, otherwise some
       file systems (e.g. Drive) may make duplicates.

   Why doesn’t rclone support partial transfers / binary diffs like rsync?
       Rclone stores each file you transfer as a native object on the remote cloud storage system.   This  means
       that you can see the files you upload as expected using alternative access methods (e.g. using the Google
       Drive  web interface).  There is a 1:1 mapping between files on your hard disk and objects created in the
       cloud storage system.

        No cloud storage system I’ve come across yet supports partially uploading an object.  You can’t
        take an existing object and change some bytes in the middle of it.

       It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone
       does,  but  that  would  break  the 1:1 mapping of files on your hard disk to objects in the remote cloud
       storage system.

       All the cloud storage systems support partial downloads of content, so  it  would  be  possible  to  make
       partial  downloads  work.  However to make this work efficiently this would require storing a significant
       amount of metadata, which breaks the desired 1:1 mapping of files to objects.

   Can rclone do bi-directional sync?
       Yes, since rclone v1.58.0, bidirectional cloud sync is available.
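
        For example, a minimal bidirectional sync between two already-configured remotes (names and paths
        are illustrative):

               rclone bisync remote1:path/to/dir remote2:path/to/dir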

   Can I use rclone with an HTTP proxy?
       Yes.  rclone will follow the standard environment variables  for  proxies,  similar  to  cURL  and  other
       programs.

       In  general  the  variables  are  called http_proxy (for services reached over http) and https_proxy (for
       services reached over https).  Most public services will be using https, but you may wish to set both.

       The content of the variable is protocol://server:port.  The protocol value is the one used to talk to the
       proxy server, itself, and is commonly either http or socks5.

        Slightly annoyingly, there is no standard for the name; some applications may use http_proxy while
        others use HTTP_PROXY.  The Go libraries used by rclone will try both variations, but you may wish
        to set all possibilities.  So, on Linux, you may end up with code similar to

              export http_proxy=http://proxyserver:12345
              export https_proxy=$http_proxy
              export HTTP_PROXY=$http_proxy
              export HTTPS_PROXY=$http_proxy

       Note: If the proxy server requires a username and password, then use

              export http_proxy=http://username:password@proxyserver:12345
              export https_proxy=$http_proxy
              export HTTP_PROXY=$http_proxy
              export HTTPS_PROXY=$http_proxy

        The NO_PROXY variable allows you to disable the proxy for specific hosts.  Hosts must be comma
        separated, and can contain domains or parts.  For instance “foo.com” also matches “bar.foo.com”.

       e.g.

              export no_proxy=localhost,127.0.0.0/8,my.host.name
              export NO_PROXY=$no_proxy

       Note that the FTP backend does not support ftp_proxy yet.

   Rclone gives x509: failed to load system roots and no roots provided error
       This means that rclone can’t find the SSL root certificates.  Likely you are running rclone on a NAS with
       a cut-down Linux OS, or possibly on Solaris.

       Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.

              "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
              "/etc/pki/tls/certs/ca-bundle.crt",   // Fedora/RHEL
              "/etc/ssl/ca-bundle.pem",             // OpenSUSE
              "/etc/pki/tls/cacert.pem",            // OpenELEC

       So doing something like this should fix the problem.  It also sets the time which is important for SSL to
       work properly.

              mkdir -p /etc/ssl/certs/
              curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
              ntpclient -s -h pool.ntp.org

       The  two  environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned in the x509 package, provide an
       additional way to provide the SSL root certificates.
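
        For example, a minimal sketch of pointing rclone at a CA bundle with SSL_CERT_FILE (the path is
        illustrative):

               export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
               rclone lsd remote: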

       Note that you may need to add the --insecure option to the curl command line if it doesn’t work without.

              curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

   Rclone gives Failed to load config file: function not implemented error
        Likely this means that you are running rclone on a Linux kernel version not supported by the Go
        runtime, i.e. earlier than version 2.6.23.

       See the system requirements section in the go install docs for full details.

   All my uploaded docx/xlsx/pptx files appear as archive/zip
        This is caused by uploading these files from a Windows computer which hasn’t got the Microsoft
        Office suite installed.  The easiest way to fix it is to install the Word viewer and the Microsoft
        Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions’ file formats.

   tcp lookup some.domain.com no such host
       This happens when rclone cannot resolve a domain.  Please check that your DNS setup is generally working,
       e.g.

              # both should print a long list of possible IP addresses
              dig www.googleapis.com          # resolve using your default DNS
              dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server

       If  you  are  using  systemd-resolved  (default  on  Arch  Linux), ensure it is at version 233 or higher.
       Previous releases contain a bug which causes not all domains to be resolved properly.

       Additionally with the GODEBUG=netdns= environment variable the Go resolver decision  can  be  influenced.
       This  also  allows to resolve certain issues with DNS resolution.  See the name resolution section in the
       go docs.
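
        For example, to try each resolver in turn (the remote name is illustrative):

               # force the pure Go resolver
               GODEBUG=netdns=go rclone lsd remote:
               # force the system (cgo) resolver
               GODEBUG=netdns=cgo rclone lsd remote: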

   The total size reported in the stats for a sync is wrong and keeps changing
        It is likely you have more than 10,000 files that need to be synced.  By default, rclone only gets
        10,000 files ahead in a sync so as not to use up too much memory.  You can change this default with
        the --max-backlog flag.
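
        For example, to raise the backlog for a sync with many files (the value is illustrative; a larger
        backlog uses more memory):

               rclone sync --max-backlog 200000 source:path dest:path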

   Rclone is using too much memory or appears to have a memory leak
       Rclone  is  written in Go which uses a garbage collector.  The default settings for the garbage collector
       mean that it runs when the heap size has doubled.

       However it is possible to tune the garbage collector to use less memory by setting GOGC to a lower value,
       say export GOGC=20.  This will make the garbage collector  work  harder,  reducing  memory  size  at  the
       expense of CPU usage.
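
        For example, a sketch of running a sync with a more aggressive garbage collector:

               GOGC=20 rclone sync source:path dest:path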

       The  most common cause of rclone using lots of memory is a single directory with thousands or millions of
       files in.  Rclone has to load this entirely into memory as rclone  objects.   Each  rclone  object  takes
       0.5k-1k of memory.

   Rclone changes fullwidth Unicode punctuation marks in file names
        For example: On a Windows system, you have a file with name Test：1.jpg, where ： is the Unicode
        fullwidth colon symbol.  When using rclone to copy this to your Google Drive, you will notice that
        the file gets renamed to Test:1.jpg, where : is the regular (halfwidth) colon.

        The reason for such renames is the way rclone handles different restricted filenames on different
        cloud storage systems.  It tries to avoid ambiguous file names as much as possible and to allow
        moving files between many cloud storage systems transparently, by replacing invalid characters with
        similar looking Unicode characters when transferring to one storage system, and replacing back again
        when transferring to a different storage system where the original characters are supported.  When
        the same Unicode characters are intentionally used in file names, this replacement strategy leads to
        unwanted renames.  Read more in the rclone documentation on restricted filenames.

License

       This is free software under the terms of the MIT license (check the COPYING file included with the source
       code).

              Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/

              Permission is hereby granted, free of charge, to any person obtaining a copy
              of this software and associated documentation files (the "Software"), to deal
              in the Software without restriction, including without limitation the rights
              to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
              copies of the Software, and to permit persons to whom the Software is
              furnished to do so, subject to the following conditions:

              The above copyright notice and this permission notice shall be included in
              all copies or substantial portions of the Software.

              THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
              IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
              FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
              AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
              LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
              OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
              THE SOFTWARE.

Authors and contributors

   Authors
       • Nick Craig-Wood nick@craig-wood.com

   Contributors
       • Alex Couper amcouper@gmail.com

       • Leonid Shalupov leonid@shalupov.com shalupov@diverse.org.ru

       • Shimon Doodkin helpmepro1@gmail.com

       • Colin Nicholson colin@colinn.com

       • Klaus Post klauspost@gmail.com

       • Sergey Tolmachev tolsi.ru@gmail.com

       • Adriano Aurélio Meirelles adriano@atinge.com

       • C. Bess cbess@users.noreply.github.com

       • Dmitry Burdeev dibu28@gmail.com

       • Joseph Spurrier github@josephspurrier.com

       • Björn Harrtell bjorn@wololo.org

       • Xavier Lucas xavier.lucas@corp.ovh.com

       • Werner Beroux werner@beroux.com

       • Brian Stengaard brian@stengaard.eu

       • Jakub Gedeon jgedeon@sofi.com

       • Jim Tittsler jwt@onjapan.net

       • Michal Witkowski michal@improbable.io

       • Fabian Ruff fabian.ruff@sap.com

       • Leigh Klotz klotz@quixey.com

       • Romain Lapray lapray.romain@gmail.com

       • Justin R. Wilson jrw972@gmail.com

       • Antonio Messina antonio.s.messina@gmail.com

       • Stefan G. Weichinger office@oops.co.at

       • Per Cederberg cederberg@gmail.com

       • Radek Šenfeld rush@logic.cz

       • Fredrik Fornwall fredrik@fornwall.net

       • Asko Tamm asko@deekit.net

       • xor-zz xor@gstocco.com

       • Tomasz Mazur tmazur90@gmail.com

       • Marco Paganini paganini@paganini.net

       • Felix Bünemann buenemann@louis.info

       • Durval Menezes jmrclone@durval.com

       • Luiz Carlos Rumbelsperger Viana maxd13_luiz_carlos@hotmail.com

       • Stefan Breunig stefan-github@yrden.de

       • Alishan Ladhani ali-l@users.noreply.github.com

       • 0xJAKE 0xJAKE@users.noreply.github.com

       • Thibault Molleman thibaultmol@users.noreply.github.com

       • Scott McGillivray scott.mcgillivray@gmail.com

       • Bjørn Erik Pedersen bjorn.erik.pedersen@gmail.com

       • Lukas Loesche lukas@mesosphere.io

       • emyarod allllaboutyou@gmail.com

       • T.C.  Ferguson tcf909@gmail.com

       • Brandur brandur@mutelight.org

       • Dario Giovannetti dev@dariogiovannetti.net

       • Károly Oláh okaresz@aol.com

       • Jon Yergatian jon@macfanatic.ca

       • Jack Schmidt github@mowsey.org

       • Dedsec1 Dedsec1@users.noreply.github.com

       • Hisham Zarka hzarka@gmail.com

       • Jérôme Vizcaino jerome.vizcaino@gmail.com

       • Mike Tesch mjt6129@rit.edu

       • Marvin Watson marvwatson@users.noreply.github.com

       • Danny Tsai danny8376@gmail.com

       • Yoni Jah yonjah+git@gmail.com yonjah+github@gmail.com

       • Stephen Harris github@spuddy.org sweharris@users.noreply.github.com

       • Ihor Dvoretskyi ihor.dvoretskyi@gmail.com

       • Jon Craton jncraton@gmail.com

       • Hraban Luyat hraban@0brg.net

       • Michael Ledin mledin89@gmail.com

       • Martin Kristensen me@azgul.com

       • Too Much IO toomuchio@users.noreply.github.com

       • Anisse Astier anisse@astier.eu

       • Zahiar Ahmed zahiar@live.com

       • Igor Kharin igorkharin@gmail.com

       • Bill Zissimopoulos billziss@navimatics.com

       • Bob Potter bobby.potter@gmail.com

       • Steven Lu tacticalazn@gmail.com

       • Sjur Fredriksen sjurtf@ifi.uio.no

       • Ruwbin hubus12345@gmail.com

       • Fabian Möller fabianm88@gmail.com f.moeller@nynex.de

       • Edward Q. Bridges github@eqbridges.com

       • Vasiliy Tolstov v.tolstov@selfip.ru

       • Harshavardhana harsha@minio.io

       • sainaen sainaen@gmail.com

       • gdm85 gdm85@users.noreply.github.com

       • Yaroslav Halchenko debian@onerussian.com

       • John Papandriopoulos jpap@users.noreply.github.com

       • Zhiming Wang zmwangx@gmail.com

       • Andy Pilate cubox@cubox.me

       • Oliver Heyme olihey@googlemail.com olihey@users.noreply.github.com de8olihe@lego.com

       • wuyu wuyu@yunify.com

       • Andrei Dragomir adragomi@adobe.com

       • Christian Brüggemann mail@cbruegg.com

       • Alex McGrath Kraak amkdude@gmail.com

       • bpicode bjoern.pirnay@googlemail.com

       • Daniel Jagszent daniel@jagszent.de

       • Josiah White thegenius2009@gmail.com

       • Ishuah Kariuki kariuki@ishuah.com ishuah91@gmail.com

       • Jan Varho jan@varho.org

       • Girish Ramakrishnan girish@cloudron.io

       • LingMan LingMan@users.noreply.github.com

       • Jacob McNamee jacobmcnamee@gmail.com

       • jersou jertux@gmail.com

       • thierry thierry@substantiel.fr

       • Simon Leinen simon.leinen@gmail.com ubuntu@s3-test.novalocal

       • Dan Dascalescu ddascalescu+github@gmail.com

       • Jason Rose jason@jro.io

       • Andrew Starr-Bochicchio a.starr.b@gmail.com

       • John Leach john@johnleach.co.uk

       • Corban Raun craun@instructure.com

       • Pierre Carlson mpcarl@us.ibm.com

       • Ernest Borowski er.borowski@gmail.com

       • Remus Bunduc remus.bunduc@gmail.com

       • Iakov Davydov iakov.davydov@unil.ch dav05.gith@myths.ru

       • Jakub Tasiemski tasiemski@gmail.com

       • David Minor dminor@saymedia.com

       • Tim Cooijmans cooijmans.tim@gmail.com

       • Laurence liuxy6@gmail.com

       • Giovanni Pizzi gio.piz@gmail.com

       • Filip Bartodziej filipbartodziej@gmail.com

       • Jon Fautley jon@dead.li

       • lewapm 32110057+lewapm@users.noreply.github.com

       • Yassine Imounachen yassine256@gmail.com

       • Chris Redekop chris-redekop@users.noreply.github.com chris.redekop@gmail.com

       • Jon Fautley jon@adenoid.appstal.co.uk

       • Will Gunn WillGunn@users.noreply.github.com

       • Lucas Bremgartner lucas@bremis.ch

       • Jody Frankowski jody.frankowski@gmail.com

       • Andreas Roussos arouss1980@gmail.com

       • nbuchanan nbuchanan@utah.gov

       • Durval Menezes rclone@durval.com

       • Victor vb-github@viblo.se

       • Mateusz pabian.mateusz@gmail.com

       • Daniel Loader spicypixel@gmail.com

       • David0rk davidork@gmail.com

       • Alexander Neumann alexander@bumpern.de

       • Giri Badanahatti gbadanahatti@us.ibm.com@Giris-MacBook-Pro.local

       • Leo R. Lundgren leo@finalresort.org

       • wolfv wolfv6@users.noreply.github.com

       • Dave Pedu dave@davepedu.com

       • Stefan Lindblom lindblom@spotify.com

       • seuffert oliver@seuffert.biz

       • gbadanahatti 37121690+gbadanahatti@users.noreply.github.com

       • Keith Goldfarb barkofdelight@gmail.com

       • Steve Kriss steve@heptio.com

       • Chih-Hsuan Yen yan12125@gmail.com

       • Alexander Neumann fd0@users.noreply.github.com

       • Matt Holt mholt@users.noreply.github.com

       • Eri Bastos bastos.eri@gmail.com

       • Michael P. Dubner pywebmail@list.ru

       • Antoine GIRARD sapk@users.noreply.github.com

       • Mateusz Piotrowski mpp302@gmail.com

       • Animosity022 animosity22@users.noreply.github.com earl.texter@gmail.com

       • Peter Baumgartner pete@lincolnloop.com

       • Craig Rachel craig@craigrachel.com

       • Michael G. Noll miguno@users.noreply.github.com

       • hensur me@hensur.de

       • Oliver Heyme de8olihe@lego.com

       • Richard Yang richard@yenforyang.com

       • Piotr Oleszczyk piotr.oleszczyk@gmail.com

       • Rodrigo rodarima@gmail.com

       • NoLooseEnds NoLooseEnds@users.noreply.github.com

       • Jakub Karlicek jakub@karlicek.me

       • John Clayton john@codemonkeylabs.com

       • Kasper Byrdal Nielsen byrdal76@gmail.com

       • Benjamin Joseph Dag bjdag1234@users.noreply.github.com

       • themylogin themylogin@gmail.com

       • Onno Zweers onno.zweers@surfsara.nl

       • Jasper Lievisse Adriaanse jasper@humppa.nl

       • sandeepkru sandeep.ummadi@gmail.com sandeepkru@users.noreply.github.com

       • HerrH atomtigerzoo@users.noreply.github.com

       • Andrew 4030760+sparkyman215@users.noreply.github.com

       • dan smith XX1011@gmail.com

       • Oleg Kovalov iamolegkovalov@gmail.com

       • Ruben Vandamme github-com-00ff86@vandamme.email

       • Cnly minecnly@gmail.com

       • Andres Alvarez 1671935+kir4h@users.noreply.github.com

       • reddi1 xreddi@gmail.com

       • Matt Tucker matthewtckr@gmail.com

       • Sebastian Bünger buengese@gmail.com buengese@protonmail.com

       • Martin Polden mpolden@mpolden.no

       • Alex Chen Cnly@users.noreply.github.com

       • Denis deniskovpen@gmail.com

       • bsteiss 35940619+bsteiss@users.noreply.github.com

       • Cédric Connes cedric.connes@gmail.com

       • Dr. Tobias Quathamer toddy15@users.noreply.github.com

       • dcpu 42736967+dcpu@users.noreply.github.com

       • Sheldon Rupp me@shel.io

       • albertony 12441419+albertony@users.noreply.github.com

       • cron410 cron410@gmail.com

       • Anagh Kumar Baranwal 6824881+darthShadow@users.noreply.github.com

       • Felix Brucker felix@felixbrucker.com

       • Santiago Rodríguez scollazo@users.noreply.github.com

       • Craig Miskell craig.miskell@fluxfederation.com

       • Antoine GIRARD sapk@sapk.fr

       • Joanna Marek joanna.marek@u2i.com

       • frenos frenos@users.noreply.github.com

       • ssaqua ssaqua@users.noreply.github.com

       • xnaas me@xnaas.info

       • Frantisek Fuka fuka@fuxoft.cz

       • Paul Kohout pauljkohout@yahoo.com

       • dcpu 43330287+dcpu@users.noreply.github.com

       • jackyzy823 jackyzy823@gmail.com

       • David Haguenauer ml@kurokatta.org

       • teresy hi.teresy@gmail.com

       • buergi patbuergi@gmx.de

       • Florian Gamboeck mail@floga.de

       • Ralf Hemberger 10364191+rhemberger@users.noreply.github.com

       • Scott Edlund sedlund@users.noreply.github.com

       • Erik Swanson erik@retailnext.net

       • Jake Coggiano jake@stripe.com

       • brused27 brused27@noemailaddress

       • Peter Kaminski kaminski@istori.com

       • Henry Ptasinski henry@logout.com

       • Alexander kharkovalexander@gmail.com

       • Garry McNulty garrmcnu@gmail.com

       • Mathieu Carbou mathieu.carbou@gmail.com

       • Mark Otway mark@otway.com

       • William Cocker 37018962+WilliamCocker@users.noreply.github.com

       • François Leurent 131.js@cloudyks.org

       • Arkadius Stefanski arkste@gmail.com

       • Jay dev@jaygoel.com

       • andrea rota a@xelera.eu

       • nicolov nicolov@users.noreply.github.com

       • Dario Guzik dario@guzik.com.ar

       • qip qip@users.noreply.github.com

       • yair@unicorn yair@unicorn

       • Matt Robinson brimstone@the.narro.ws

       • kayrus kay.diam@gmail.com

       • Rémy Léone remy.leone@gmail.com

       • Wojciech Smigielski wojciech.hieronim.smigielski@gmail.com

       • weetmuts oehrstroem@gmail.com

       • Jonathan vanillajonathan@users.noreply.github.com

       • James Carpenter orbsmiv@users.noreply.github.com

       • Vince vince0villamora@gmail.com

       • Nestar47 47841759+Nestar47@users.noreply.github.com

       • Six brbsix@gmail.com

       • Alexandru Bumbacea alexandru.bumbacea@booking.com

       • calisro robert.calistri@gmail.com

       • Dr.Rx david.rey@nventive.com

       • marcintustin marcintustin@users.noreply.github.com

       • jaKa Močnik jaka@koofr.net

       • Fionera fionera@fionera.de

       • Dan Walters dan@walters.io

       • Danil Semelenov sgtpep@users.noreply.github.com

       • xopez 28950736+xopez@users.noreply.github.com

       • Ben Boeckel mathstuf@gmail.com

       • Manu manu@snapdragon.cc

       • Kyle E. Mitchell kyle@kemitchell.com

       • Gary Kim gary@garykim.dev

       • Jon jonathn@github.com

       • Jeff Quinn jeffrey.quinn@bluevoyant.com

       • Peter Berbec peter@berbec.com

       • didil 1284255+didil@users.noreply.github.com

       • id01 gaviniboom@gmail.com

       • Robert Marko robimarko@gmail.com

       • Philip Harvey 32467456+pharveybattelle@users.noreply.github.com

       • JorisE JorisE@users.noreply.github.com

       • garry415 garry.415@gmail.com

       • forgems forgems@gmail.com

       • Florian Apolloner florian@apolloner.eu

       • Aleksandar Janković office@ajankovic.com ajankovic@users.noreply.github.com

       • Maran maran@protonmail.com

       • nguyenhuuluan434 nguyenhuuluan434@gmail.com

       • Laura Hausmann zotan@zotan.pw laura@hausmann.dev

       • yparitcher y@paritcher.com

       • AbelThar abela.tharen@gmail.com

       • Matti Niemenmaa matti.niemenmaa+git@iki.fi

       • Russell Davis russelldavis@users.noreply.github.com

       • Yi FU yi.fu@tink.se

       • Paul Millar paul.millar@desy.de

       • justinalin justinalin@qnap.com

       • EliEron subanimehd@gmail.com

       • justina777 chiahuei.lin@gmail.com

       • Chaitanya Bankanhal bchaitanya15@gmail.com

       • Michał Matczuk michal@scylladb.com

       • Macavirus macavirus@zoho.com

       • Abhinav Sharma abhi18av@outlook.com

       • ginvine 34869051+ginvine@users.noreply.github.com

       • Patrick Wang mail6543210@yahoo.com.tw

       • Cenk Alti cenkalti@gmail.com

       • Andreas Chlupka andy@chlupka.com

       • Alfonso Montero amontero@tinet.org

       • Ivan Andreev ivandeex@gmail.com

       • David Baumgold david@davidbaumgold.com

       • Lars Lehtonen lars.lehtonen@gmail.com

       • Matei David matei.david@gmail.com

       • David david.bramwell@endemolshine.com

       • Anthony Rusdi 33247310+antrusd@users.noreply.github.com

       • Richard Patel me@terorie.dev

       • 庄天翼 zty0826@gmail.com

       • SwitchJS dev@switchjs.com

       • Raphael PowershellNinja@users.noreply.github.com

       • Sezal Agrawal sezalagrawal@gmail.com

       • Tyler TylerNakamura@users.noreply.github.com

       • Brett Dutro brett.dutro@gmail.com

       • Vighnesh SK booterror99@gmail.com

       • Arijit Biswas dibbyo456@gmail.com

       • Michele Caci michele.caci@gmail.com

       • AlexandrBoltris ua2fgb@gmail.com

       • Bryce Larson blarson@saltstack.com

       • Carlos Ferreyra crypticmind@gmail.com

       • Saksham Khanna sakshamkhanna@outlook.com

       • dausruddin 5763466+dausruddin@users.noreply.github.com

       • zero-24 zero-24@users.noreply.github.com

       • Xiaoxing Ye ye@xiaoxing.us

       • Barry Muldrey barry@muldrey.net

       • Sebastian Brandt sebastian.brandt@friday.de

       • Marco Molteni marco.molteni@mailbox.org

       • Ankur Gupta 7876747+ankur0493@users.noreply.github.com

       • Maciej Zimnoch maciej@scylladb.com

       • anuar45 serdaliyev.anuar@gmail.com

       • Fernando ferferga@users.noreply.github.com

       • David Cole david.cole@sohonet.com

       • Wei He git@weispot.com

       • Outvi V 19144373+outloudvi@users.noreply.github.com

       • Thomas Kriechbaumer thomas@kriechbaumer.name

       • Tennix tennix@users.noreply.github.com

       • Ole Schütt ole@schuett.name

       • Kuang-che Wu kcwu@csie.org

       • Thomas Eales wingsuit@users.noreply.github.com

       • Paul Tinsley paul.tinsley@vitalsource.com

       • Felix Hungenberg git@shiftgeist.com

       • Benjamin Richter github@dev.telepath.de

       • landall cst_zf@qq.com

       • thestigma thestigma@gmail.com

       • jtagcat 38327267+jtagcat@users.noreply.github.com

       • Damon Permezel permezel@me.com

       • boosh boosh@users.noreply.github.com

       • unbelauscht 58393353+unbelauscht@users.noreply.github.com

       • Motonori IWAMURO vmi@nifty.com

       • Benjapol Worakan benwrk@live.com

       • Dave Koston dave.koston@stackpath.com

       • Durval Menezes DurvalMenezes@users.noreply.github.com

       • Tim Gallant me@timgallant.us

       • Frederick Zhang frederick888@tsundere.moe

       • valery1707 valery1707@gmail.com

       • Yves G theYinYeti@yalis.fr

       • Shing Kit Chan chanshingkit@gmail.com

       • Franklyn Tackitt franklyn@tackitt.net

       • Robert-André Mauchin zebob.m@gmail.com

       • evileye 48332831+ibiruai@users.noreply.github.com

       • Joachim Brandon LeBlanc brandon@leblanc.codes

       • Patryk Jakuszew patryk.jakuszew@gmail.com

       • fishbullet shindu666@gmail.com

       • greatroar <@>

       • Bernd Schoolmann mail@quexten.com

       • Elan Ruusamäe glen@pld-linux.org

       • Max Sum max@lolyculture.com

       • Mark Spieth mspieth@users.noreply.github.com

       • harry me@harry.plus

       • Samantha McVey samantham@posteo.net

       • Jack Anderson jack.anderson@metaswitch.com

       • Michael G draget@speciesm.net

       • Brandon Philips brandon@ifup.org

       • Daven dooven@users.noreply.github.com

       • Martin Stone martin@d7415.co.uk

       • David Bramwell 13053834+dbramwell@users.noreply.github.com

       • Sunil Patra snl_su@live.com

       • Adam Stroud adam.stroud@gmail.com

       • Kush kushsharma@users.noreply.github.com

       • Matan Rosenberg matan129@gmail.com

       • gitch1 63495046+gitch1@users.noreply.github.com

       • ElonH elonhhuang@gmail.com

       • Fred fred@creativeprojects.tech

       • Sébastien Gross renard@users.noreply.github.com

       • Maxime Suret 11944422+msuret@users.noreply.github.com

       • Caleb Case caleb@storj.io calebcase@gmail.com

       • Ben Zenker imbenzenker@gmail.com

       • Martin Michlmayr tbm@cyrius.com

       • Brandon McNama bmcnama@pagerduty.com

       • Daniel Slyman github@skylayer.eu

       • Alex Guerrero guerrero@users.noreply.github.com

       • Matteo Pietro Dazzi matteopietro.dazzi@gft.com

       • edwardxml 56691903+edwardxml@users.noreply.github.com

       • Roman Kredentser shareed2k@gmail.com

       • Kamil Trzciński ayufan@ayufan.eu

       • Zac Rubin z-0@users.noreply.github.com

       • Vincent Feltz psycho@feltzv.fr

       • Heiko Bornholdt bornholdt@informatik.uni-hamburg.de

       • Matteo Pietro Dazzi matteopietro.dazzi@gmail.com

       • jtagcat gitlab@c7.ee

       • Petri Salminen petri@salminen.dev

       • Tim Burke tim.burke@gmail.com

       • Kai Lüke kai@kinvolk.io

       • Garrett Squire github@garrettsquire.com

       • Evan Harris eharris@puremagic.com

       • Kevin keyam@microsoft.com

       • Morten Linderud morten@linderud.pw

       • Dmitry Ustalov dmitry.ustalov@gmail.com

       • Jack 196648+jdeng@users.noreply.github.com

       • kcris cristian.tarsoaga@gmail.com

       • tyhuber1 68970760+tyhuber1@users.noreply.github.com

       • David Ibarra david.ibarra@realty.com

       • Tim Gallant tim@lilt.com

       • Kaloyan Raev kaloyan@storj.io

       • Jay McEntire jay.mcentire@gmail.com

       • Leo Luan leoluan@us.ibm.com

       • aus 549081+aus@users.noreply.github.com

       • Aaron Gokaslan agokaslan@fb.com

       • Egor Margineanu egmar@users.noreply.github.com

       • Lucas Kanashiro lucas.kanashiro@canonical.com

       • WarpedPixel WarpedPixel@users.noreply.github.com

       • Sam Edwards sam@samedwards.ca

       • wjielai gouki0123@gmail.com

       • Muffin King jinxz_k@live.com

       • Christopher Stewart 6573710+1f47a@users.noreply.github.com

       • Russell Cattelan cattelan@digitalelves.com

       • gyutw 30371241+gyutw@users.noreply.github.com

       • Hekmon edouardhur@gmail.com

       • LaSombra lasombra@users.noreply.github.com

       • Dov Murik dov.murik@gmail.com

       • Ameer Dawood ameer1234567890@gmail.com

       • Dan Hipschman dan.hipschman@opendoor.com

       • Josh Soref jsoref@users.noreply.github.com

       • David david@staron.nl

       • Ingo ingo@hoffmann.cx

       • Adam Plánský adamplansky@users.noreply.github.com adamplansky@gmail.com

       • Manish Gupta manishgupta.ait@gmail.com

       • Deepak Sah sah.sslpu@gmail.com

       • Marcin Zelent marcin@zelent.net

       • zhucan zhucan.k8s@gmail.com

       • James Lim james.lim@samsara.com

       • Laurens Janssen BD69BM@insim.biz

       • Bob Bagwill bobbagwill@gmail.com

       • Nathan Collins colli372@msu.edu

       • lostheli

       • kelv kelvin@acks.org

       • Milly milly.ca@gmail.com

       • gtorelly gtorelly@gmail.com

       • Brad Ackerman brad@facefault.org

       • Mitsuo Heijo mitsuo.heijo@gmail.com

       • Claudio Bantaloukas rockdreamer@gmail.com

       • Benjamin Gustin gustin.ben@gmail.com

       • Ingo Weiss ingo@redhat.com

       • Kerry Su me@sshockwave.net

       • Ilyess Bachiri ilyess.bachiri@sonder.com

       • Yury Stankevich urykhy@gmail.com

       • kice wslikerqs@gmail.com

       • Denis Neuling denisneuling@gmail.com

       • Janne Johansson icepic.dz@gmail.com

       • Patrik Nordlén patriki@gmail.com

       • CokeMine aptx4561@gmail.com

       • Sơn Trần-Nguyễn github@sntran.com

       • lluuaapp 266615+lluuaapp@users.noreply.github.com

       • Zach Kipp kipp.zach@gmail.com

       • Riccardo Iaconelli riccardo@kde.org

       • Sakuragawa Misty gyc990326@gmail.com

       • Nicolas Rueff nicolas@rueff.fr

       • Pau Rodriguez-Estivill prodrigestivill@gmail.com

       • Bob Pusateri BobPusateri@users.noreply.github.com

       • Alex JOST 25005220+dimejo@users.noreply.github.com

       • Alexey Tabakman samosad.ru@gmail.com

       • David Sze sze.david@gmail.com

       • cynthia kwok cynthia.m.kwok@gmail.com

       • Miron Veryanskiy MironVeryanskiy@gmail.com

       • K265 k.265@qq.com

       • Vesnyx Vesnyx@users.noreply.github.com

       • Dmitry Chepurovskiy me@dm3ch.net

       • Rauno Ots rauno.ots@cgi.com

       • Georg Neugschwandtner georg.neugschwandtner@gmx.net

       • pvalls polvallsrue@gmail.com

       • Robert Thomas 31854736+wolveix@users.noreply.github.com

       • Romeo Kienzler romeo.kienzler@gmail.com

       • tYYGH tYYGH@users.noreply.github.com

       • georne 77802995+georne@users.noreply.github.com

       • Maxwell Calman mcalman@MacBook-Pro.local

       • Naveen Honest Raj naveendurai19@gmail.com

       • Lucas Messenger lmesseng@cisco.com

       • Manish Kumar krmanish260@gmail.com

       • x0b x0bdev@gmail.com

       • CERN through the CS3MESH4EOSC Project

       • Nick Gaya nicholasgaya+github@gmail.com

       • Ashok Gelal 401055+ashokgelal@users.noreply.github.com

       • Dominik Mydlil dominik.mydlil@outlook.com

       • Nazar Mishturak nazarmx@gmail.com

       • Ansh Mittal iamAnshMittal@gmail.com

       • noabody noabody@yahoo.com

       • OleFrost 82263101+olefrost@users.noreply.github.com

       • Kenny Parsons kennyparsons93@gmail.com

       • Jeffrey Tolar tolar.jeffrey@gmail.com

       • jtagcat git-514635f7@jtag.cat

       • Tatsuya Noyori 63089076+public-tatsuya-noyori@users.noreply.github.com

       • lewisxy lewisxy@users.noreply.github.com

       • Nolan Woods nolan_w@sfu.ca

       • Gautam Kumar 25435568+gautamajay52@users.noreply.github.com

       • Chris Macklin chris.macklin@10xgenomics.com

       • Antoon Prins antoon.prins@surfsara.nl

       • Alexey Ivanov rbtz@dropbox.com

       • Serge Pouliquen sp31415@free.fr

       • acsfer carlos@reendex.com

       • Tom tom@tom-fitzhenry.me.uk

       • Tyson Moore tyson@tyson.me

       • database64128 free122448@hotmail.com

       • Chris Lu chrislusf@users.noreply.github.com

       • Reid Buzby reid@rethink.software

       • darrenrhs darrenrhs@gmail.com

       • Florian Penzkofer fp@nullptr.de

       • Xuanchen Wu 117010292@link.cuhk.edu.cn

       • partev petrosyan@gmail.com

       • Dmitry Sitnikov fo2@inbox.ru

       • Haochen Tong i@hexchain.org

       • Michael Hanselmann public@hansmi.ch

       • Chuan Zh zhchuan7@gmail.com

       • Antoine GIRARD antoine.girard@sapk.fr

       • Justin Winokur (Jwink3101) Jwink3101@users.noreply.github.com

       • Mariano Absatz (git) scm@baby.com.ar

       • Greg Sadetsky lepetitg@gmail.com

       • yedamo logindaveye@gmail.com

       • hota lindwurm.q@gmail.com

       • vinibali vinibali1@gmail.com

       • Ken Enrique Morel ken.morel.santana@gmail.com

       • Justin Hellings justin.hellings@gmail.com

       • Parth Shukla pparth@pparth.net

       • wzl wangzl31@outlook.com

       • HNGamingUK connor@earnshawhome.co.uk

       • Jonta 359397+Jonta@users.noreply.github.com

       • YenForYang YenForYang@users.noreply.github.com

       • Joda Stößer stoesser@yay-digital.de services+github@simjo.st

       • Logeshwaran waranlogesh@gmail.com

       • Rajat Goel rajat@dropbox.com

       • r0kk3rz r0kk3rz@gmail.com

       • Matthew Sevey mjsevey@gmail.com

       • Filip Rysavy fil@siasky.net

       • Ian Levesque ian@ianlevesque.org

       • Thomas Stachl thomas@stachl.me

       • Dmitry Bogatov git#v1@kaction.cc

       • thomae 4493560+thomae@users.noreply.github.com

       • trevyn trevyn-git@protonmail.com

       • David Liu david.yx.liu@oracle.com

       • Chris Nelson stuff@cjnaz.com

       • Felix Bünemann felix.buenemann@gmail.com

       • Atílio Antônio atiliodadalto@hotmail.com

       • Roberto Ricci ricci@disroot.org

       • Carlo Mion mion00@gmail.com

       • Chris Lu chris.lu@gmail.com

       • Vitor Arruda vitor.pimenta.arruda@gmail.com

       • bbabich bbabich@datamossa.com

       • David dp.davide.palma@gmail.com

       • Borna Butkovic borna@favicode.net

       • Fredric Arklid fredric.arklid@consid.se

       • Andy Jackson Andrew.Jackson@bl.uk

       • Sinan Tan i@tinytangent.com

       • deinferno 14363193+deinferno@users.noreply.github.com

       • rsapkf rsapkfff@pm.me

       • Will Holtz wholtz@gmail.com

       • GGG KILLER gggkiller2@gmail.com

       • Logeshwaran Murugesan logeshwaran@testpress.in

       • Lu Wang coolwanglu@gmail.com

       • Bumsu Hyeon ksitht@gmail.com

       • Shmz Ozggrn 98463324+ShmzOzggrn@users.noreply.github.com

       • Kim kim@jotta.no

       • Niels van de Weem n.van.de.weem@smile.nl

       • Koopa codingkoopa@gmail.com

       • Yunhai Luo yunhai-luo@hotmail.com

       • Charlie Jiang w@chariri.moe

       • Alain Nussbaumer alain.nussbaumer@alleluia.ch

       • Vanessasaurus 814322+vsoch@users.noreply.github.com

       • Isaac Levy isaac.r.levy@gmail.com

       • Gourav T workflowautomation@protonmail.com

       • Paulo Martins paulo.pontes.m@gmail.com

       • viveknathani viveknathani2402@gmail.com

       • Eng Zer Jun engzerjun@gmail.com

       • Abhiraj abhiraj.official15@gmail.com

       • Márton Elek elek@apache.org elek@users.noreply.github.com

       • Vincent Murphy vdm@vdm.ie

       • ctrl-q 34975747+ctrl-q@users.noreply.github.com

       • Nil Alexandrov nalexand@akamai.com

       • GuoXingbin 101376330+guoxingbin@users.noreply.github.com

       • Berkan Teber berkan@berkanteber.com

       • Tobias Klauser tklauser@distanz.ch

       • KARBOWSKI Piotr piotr.karbowski@gmail.com

       • GH geeklihui@foxmail.com

       • rafma0 int.main@gmail.com

       • Adrien Rey-Jarthon jobs@adrienjarthon.com

       • Nick Gooding 73336146+nickgooding@users.noreply.github.com

       • Leroy van Logchem lr.vanlogchem@gmail.com

       • Zsolt Ero zsolt.ero@gmail.com

       • Lesmiscore nao20010128@gmail.com

       • ehsantdy ehsan.tadayon@arvancloud.com

       • SwazRGB 65694696+swazrgb@users.noreply.github.com

       • Mateusz Puczyński mati6095@gmail.com

       • Michael C Tiernan - MIT-Research Computing Project mtiernan@mit.edu

       • Kaspian 34658474+KaspianDev@users.noreply.github.com

       • Werner EvilOlaf@users.noreply.github.com

       • Hugal31 hugo.laloge@gmail.com

       • Christian Galo 36752715+cgalo5758@users.noreply.github.com

       • Erik van Velzen erik@evanv.nl

       • Derek Battams derek@battams.ca

       • SimonLiu simonliu009@users.noreply.github.com

       • Hugo Laloge hla@lescompanions.com

       • Mr-Kanister 68117355+Mr-Kanister@users.noreply.github.com

       • Rob Pickerill r.pickerill@gmail.com

       • Andrey to.merge@gmail.com

       • Eric Wolf 19wolf@gmail.com

       • Nick nick.naumann@mailbox.tu-dresden.de

       • Jason Zheng jszheng17@gmail.com

       • Matthew Vernon mvernon@wikimedia.org

       • Noah Hsu i@nn.ci

       • m00594701 mengpengbo@huawei.com

       • Art M. Gallagher artmg50@gmail.com

       • Sven Gerber 49589423+svengerber@users.noreply.github.com

       • CrossR r.cross@lancaster.ac.uk

       • Maciej Radzikowski maciej@radzikowski.com.pl

       • Scott Grimes scott.grimes@spaciq.com

       • Phil Shackleton 71221528+philshacks@users.noreply.github.com

       • eNV25 env252525@gmail.com

       • Caleb inventor96@users.noreply.github.com

       • J-P Treen jp@wraptious.com

       • Martin Czygan 53705+miku@users.noreply.github.com

       • buda sandrojijavadze@protonmail.com

       • mirekphd 36706320+mirekphd@users.noreply.github.com

       • vyloy vyloy@qq.com

       • Anthrazz 25553648+Anthrazz@users.noreply.github.com

       • zzr93 34027824+zzr93@users.noreply.github.com

       • Paul Norman penorman@mac.com

       • Lorenzo Maiorfi maiorfi@gmail.com

       • Claudio Maradonna penguyman@stronzi.org

       • Ovidiu Victor Tatar ovi.tatar@googlemail.com

       • Evan Spensley epspensley@gmail.com

       • Yen Hu 61753151+0x59656e@users.noreply.github.com

       • Steve Kowalik steven@wedontsleep.org

       • Jordi Gonzalez Muñoz jordigonzm@gmail.com

       • Joram Schrijver i@joram.io

       • Mark Trolley marktrolley@gmail.com

       • João Henrique Franco joaohenrique.franco@gmail.com

       • anonion aman207@users.noreply.github.com

       • Ryan Morey 4590343+rmorey@users.noreply.github.com

       • Simon Bos simonbos9@gmail.com

       • YFdyh000 yfdyh000@gmail.com

       • Josh Soref 2119212+jsoref@users.noreply.github.com

       • Øyvind Heddeland Instefjord instefjord@outlook.com

       • Dmitry Deniskin 110819396+ddeniskin@users.noreply.github.com

       • Alexander Knorr 106825+opexxx@users.noreply.github.com

       • Richard Bateman richard@batemansr.us

       • Dimitri Papadopoulos Orfanos 3234522+DimitriPapadopoulos@users.noreply.github.com

       • Lorenzo Milesi lorenzo.milesi@yetopen.com

       • Isaac Aymerich isaac.aymerich@gmail.com

       • YanceyChiew 35898533+YanceyChiew@users.noreply.github.com

       • Manoj Ghosh msays2000@gmail.com

       • Bachue Zhou bachue.shu@gmail.com

       • Manoj Ghosh manoj.ghosh@oracle.com

       • Tom Mombourquette tom@devnode.com

       • Robert Newson rnewson@apache.org

Contact the rclone project

   Forum
       Forum for questions and general discussion:

       • https://forum.rclone.org

   GitHub repository
       The project’s repository is located at:

       • https://github.com/rclone/rclone

       There you can file bug reports or contribute with pull requests.

   Twitter
       You can also follow me on Twitter for rclone announcements:

       • @njcw

   Email
       If all else fails, or you want to ask something private or confidential, email Nick Craig-Wood.  Please
       don’t email me requests for help; those are better directed to the forum.  Thanks!

AUTHORS

       Nick Craig-Wood.

User Manual                                       Oct 04, 2024                                         rclone(1)