Getting Started with Rclone

Rclone is a command-line tool for mounting and synchronising files and directories between a Linux workstation and various cloud storage providers such as Amazon S3, Dropbox, Google Cloud Storage, and many more.

Installation

You can install rclone using any of the methods documented on the rclone website. Here I use the script installation method:

# curl https://rclone.org/install.sh | sudo bash

The output should indicate success:

rclone v1.47.0 has successfully installed.
Now run "rclone config" for setup. Check https://rclone.org/docs/ for more details.

You can also confirm the installation by checking the current version number:

# rclone version
rclone v1.47.0
- os/arch: linux/amd64
- go version: go1.12.4

The goal here is to mount and synchronise an AWS S3 bucket to a directory on a CentOS 7 server. You will need your AWS security key pair, which should look similar to the following:

Key ID:		AKIAPJIFUCTJWDAYF2LZ
Secret Key:	AcbQW4syws9bL5mVCWqAwys1TpC3BIlZeEQXi+7j

Armed with this, configure rclone as shown below.

# rclone config
2019/05/28 11:03:08 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> aws-s3-bucket

Here we type n to create a new remote connection and name it aws-s3-bucket. Next, select the type of storage to configure. In my case, I select number 4 (Amazon S3 compatible storage) and then 1 for AWS S3:

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / A stackable unification remote, which can appear to merge the contents of several remotes
   \ "union"
 2 / Alias for a existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
   \ "s3"
 5 / Backblaze B2
   \ "b2"
 6 / Box
   \ "box"
 7 / Cache a remote
   \ "cache"
 8 / Dropbox
   \ "dropbox"
 9 / Encrypt/Decrypt a remote
   \ "crypt"
10 / FTP Connection
   \ "ftp"
11 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
12 / Google Drive
   \ "drive"
13 / Hubic
   \ "hubic"
14 / JottaCloud
   \ "jottacloud"
15 / Koofr
   \ "koofr"
16 / Local Disk
   \ "local"
17 / Mega
   \ "mega"
18 / Microsoft Azure Blob Storage
   \ "azureblob"
19 / Microsoft OneDrive
   \ "onedrive"
20 / OpenDrive
   \ "opendrive"
21 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
22 / Pcloud
   \ "pcloud"
23 / QingCloud Object Storage
   \ "qingstor"
24 / SSH/SFTP Connection
   \ "sftp"
25 / Webdav
   \ "webdav"
26 / Yandex Disk
   \ "yandex"
27 / http Connection
   \ "http"
Storage> 4
** See help for s3 backend at: https://rclone.org/s3/ **

Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
 4 / Digital Ocean Spaces
   \ "DigitalOcean"
 5 / Dreamhost DreamObjects
   \ "Dreamhost"
 6 / IBM COS S3
   \ "IBMCOS"
 7 / Minio Object Storage
   \ "Minio"
 8 / Netease Object Storage (NOS)
   \ "Netease"
 9 / Wasabi Object Storage
   \ "Wasabi"
10 / Any other S3 compatible provider
   \ "Other"
provider> 1

Here we choose to enter our credentials manually by selecting number 1 from the options.

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIAPJIFUCTJWDAYF2LZ
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> AcbQW4syws9bL5mVCWqAwys1TpC3BIlZeEQXi+7j

Next, select the region to connect to. I select number 7, London (eu-west-2), here.

Region to connect to.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Stockholm) Region
 8 | Needs location constraint eu-north-1.
   \ "eu-north-1"
   / EU (Frankfurt) Region
 9 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
10 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
11 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
12 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
13 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
14 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 7

Leave the endpoint for the S3 API blank to use the default endpoint for the region.

Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Enter a string value. Press Enter for the default ("").
endpoint>

Set the location constraint to match the region.

Location constraint - must be set to match the Region.
Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU (Stockholm) Region.
   \ "eu-north-1"
 9 / EU Region.
   \ "EU"
10 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
11 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
12 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
13 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
14 / Asia Pacific (Mumbai)
   \ "ap-south-1"
15 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 7

Select number 1 (private), where the owner gets full control, from the list of canned ACL options.

Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1

Select 1 (none) for the server-side encryption algorithm.

The server-side encryption algorithm used when storing this object in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
 3 / aws:kms
   \ "aws:kms"
server_side_encryption> 1

And select 1 (none) for the KMS key ID question:

If using KMS ID you must provide the ARN of Key.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / arn:aws:kms:*
   \ "arn:aws:kms:us-east-1:*"
sse_kms_key_id> 1

Lastly, select number 1, the default option, for the storage class:

The storage class to use when storing new objects in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
storage_class> 1

If you don't need to make any changes, select n when asked to edit the advanced config.

Edit advanced config? (y/n)
y) Yes
n) No
y/n> n

You should then see your configuration printed to screen.

Remote config
--------------------
[aws-s3-bucket]
type = s3
provider = AWS
env_auth = false
access_key_id = AKIAPJIFUCTJWDAYF2LZ
secret_access_key = AcbQW4syws9bL5mVCWqAwys1TpC3BIlZeEQXi+7j
region = eu-west-2
location_constraint = eu-west-2
acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Confirm this looks okay and then quit.

Current remotes:

Name                 Type
====                 ====
aws-s3-bucket        s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
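If you prefer to skip the interactive wizard, rclone also supports creating a remote non-interactively with rclone config create. A sketch using the same values as the walkthrough above (substitute your own credentials):

```shell
# Create the same s3 remote in one command, without the interactive wizard.
# The key values here are the example credentials from the walkthrough above.
rclone config create aws-s3-bucket s3 \
    provider AWS \
    env_auth false \
    access_key_id AKIAPJIFUCTJWDAYF2LZ \
    secret_access_key AcbQW4syws9bL5mVCWqAwys1TpC3BIlZeEQXi+7j \
    region eu-west-2 \
    location_constraint eu-west-2 \
    acl private
```

This writes the same section to ~/.config/rclone/rclone.conf, which is handy for scripting server provisioning.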

Usage

If you created a connection called aws-s3-bucket, you can list the available buckets with the below command:

View Buckets

# rclone lsd aws-s3-bucket:
          -1 2018-05-19 03:35:33        -1 gitlab-application-backup
          -1 2019-04-01 10:26:24        -1 opnit-uploads

List all Objects

You can then use the ls command to list all objects within a bucket:

# rclone ls aws-s3-bucket:gitlab-application-backup
   133120 1526701826_2018_05_19_10.7.3_gitlab_backup.tar
   133120 1526702266_2018_05_19_10.7.3_gitlab_backup.tar
   133120 1526788065_2018_05_20_10.7.3_gitlab_backup.tar
   133120 1526837713_2018_05_20_10.7.3_gitlab_backup.tar
   133120 1526874335_2018_05_21_10.7.3_gitlab_backup.tar
   133120 1526960757_2018_05_22_10.7.3_gitlab_backup.tar
   133120 1527047142_2018_05_23_10.7.3_gitlab_backup.tar

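You can also ask rclone for a summary of a bucket, or narrow a listing with a filter. A short sketch (bucket name from the listing above):

```shell
# Report the number of objects and their total size in the bucket
rclone size aws-s3-bucket:gitlab-application-backup

# List only objects matching a pattern
rclone ls aws-s3-bucket:gitlab-application-backup --include "*.tar"
```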
Create a new Bucket

To create a new bucket called pikedom-pics on the aws-s3-bucket remote connection, run the below command:

# rclone mkdir aws-s3-bucket:pikedom-pics
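The inverse operations exist too: rclone rmdir removes an empty bucket, while rclone purge deletes a bucket together with all of its contents, so use it with care. A sketch:

```shell
# Remove an empty bucket
rclone rmdir aws-s3-bucket:pikedom-pics

# Remove a bucket and everything in it (destructive!)
rclone purge aws-s3-bucket:pikedom-pics
```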

Copy Files

To copy files from a local directory to a remote bucket, use the copy command:

# du -sh Pictures/
50M	Pictures/
# rclone copy Pictures/ aws-s3-bucket:pikedom-pics
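For larger transfers it helps to watch progress; copy only uploads files that are missing or changed at the destination, so re-running it is cheap. A sketch:

```shell
# Copy with live transfer statistics; unchanged files are skipped
rclone copy -P Pictures/ aws-s3-bucket:pikedom-pics
```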

List all objects again to confirm that worked:

# rclone ls aws-s3-bucket:pikedom-pics
   828928 2018-11-30-105431_5760x1080_scrot.png
   718638 2018-12-03-110057_5760x1080_scrot.png
   589146 2018-12-04-101917_5760x1080_scrot.png
   772318 2018-12-05-141304_5760x1080_scrot.png
   104242 2018-12-06-092823_939x1030_scrot.png
   134906 2018-12-06-092830_939x1030_scrot.png
   117729 2018-12-06-092836_939x1030_scrot.png
   132996 2018-12-06-092845_939x1030_scrot.png
   114258 2018-12-06-092851_939x1030_scrot.png
...

Synchronise Files

Now let's add a new file to the Pictures directory and synchronise the directory to the AWS S3 bucket.

# touch Pictures/newfile.txt

The above creates a new empty file called newfile.txt in the Pictures directory. If you want to see which files would be copied over without actually copying anything, issue the command with --dry-run appended:

# rclone sync Pictures/ aws-s3-bucket:pikedom-pics --dry-run
2019/05/28 13:52:20 NOTICE: newfile.txt: Not copying as --dry-run

Here we can see there is just one file that needs to be copied over. Remove the --dry-run option to actually copy the files.

# rclone sync Pictures/ aws-s3-bucket:pikedom-pics
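Be aware that sync makes the destination match the source, so any objects in the bucket that are not present in Pictures/ will be deleted. After a sync, rclone check can verify that source and destination really do match:

```shell
# Compare sizes and hashes between the local directory and the bucket;
# reports any files that differ or are missing on either side
rclone check Pictures/ aws-s3-bucket:pikedom-pics
```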

Local Mounts

To mount an S3 bucket to a local directory, issue the below command. Note that rclone mount runs in the foreground until the mount is released, so append an ampersand (&) to run it in the background:

# mkdir -v local-mount
mkdir: created directory ‘local-mount’
# rclone mount aws-s3-bucket:gitlab-application-backup local-mount/ &
[2] 6084
# ls -la local-mount/
total 3640
-rw-r--r--. 1 root root 133120 May 19  2018 1526701826_2018_05_19_10.7.3_gitlab_backup.tar
-rw-r--r--. 1 root root 133120 May 19  2018 1526702266_2018_05_19_10.7.3_gitlab_backup.tar
-rw-r--r--. 1 root root 133120 May 20  2018 1526788065_2018_05_20_10.7.3_gitlab_backup.tar
-rw-r--r--. 1 root root 133120 May 20  2018 1526837713_2018_05_20_10.7.3_gitlab_backup.tar
-rw-r--r--. 1 root root 133120 May 21  2018 1526874335_2018_05_21_10.7.3_gitlab_backup.tar
-rw-r--r--. 1 root root 133120 May 22  2018 1526960757_2018_05_22_10.7.3_gitlab_backup.tar
-rw-r--r--. 1 root root 133120 May 23  2018 1527047142_2018_05_23_10.7.3_gitlab_backup.tar
-rw-r--r--. 1 root root 133120 May 24  2018 1527133543_2018_05_24_10.7.3_gitlab_backup.tar
-rw-r--r--. 1 root root 133120 May 25  2018 1527219943_2018_05_25_10.7.3_gitlab_backup.tar
...
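Alternatively, rclone provides a --daemon flag that backgrounds the mount itself, avoiding the shell's job control entirely. A sketch:

```shell
# Mount in the background without relying on the shell's &
rclone mount aws-s3-bucket:gitlab-application-backup local-mount/ --daemon
```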

A normal umount should unmount the bucket:

# umount -v local-mount/
umount: /root/local-mount (aws-s3-bucket:gitlab-application-backup) unmounted
[2]+  Done                    rclone mount aws-s3-bucket:gitlab-application-backup local-mount/
#

If the above fails, try the following:

# fusermount -u mount/
# ls -la mount/
total 8
drwxr-xr-x. 2 root root 4096 May 28 14:41 .
dr-xr-x---. 9 root root 4096 May 28 14:41 ..
[1]+  Done                    rclone mount aws-s3-bucket:gitlab-application-backup mount/

Permanently Mount

Making the mount persist across reboots is not quite as straightforward. The systemd service approach below was adapted from jamescoyle.net.

First create a new systemd service file.

# vim /etc/systemd/system/rclone.service

Populate with the following:

# /etc/systemd/system/rclone.service
[Unit]
Description=AWS S3 bucket mount (rclone)
AssertPathIsDirectory=/root/local-mount
After=network-online.target
 
[Service]
Type=simple
ExecStart=/usr/bin/rclone mount \
        --config=/root/.config/rclone/rclone.conf \
        --allow-other \
        --cache-tmp-upload-path=/tmp/rclone/upload \
        --cache-chunk-path=/tmp/rclone/chunks \
        --cache-workers=8 \
        --cache-writes \
        --cache-dir=/tmp/rclone/vfs \
        --cache-db-path=/tmp/rclone/db \
        --no-modtime \
        --stats=0 \
        --checkers=16 \
        --bwlimit=40M \
        --dir-cache-time=60m \
        --cache-info-age=60m aws-s3-bucket:gitlab-application-backup /root/local-mount
ExecStop=/bin/fusermount -u /root/local-mount
Restart=always
RestartSec=10
 
[Install]
WantedBy=default.target

Replace all instances of /root/local-mount with the location you want to mount to. Also replace aws-s3-bucket:gitlab-application-backup with the remote connection name you created earlier and the bucket name you wish to mount.

If you now start the service, you should see the mount directory populated. If not, amend the unit file and reload systemd with:

# systemctl daemon-reload
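Then start the service and check that it came up and that the mount point shows the bucket contents (paths and service name as created above):

```shell
# Start the mount and verify it is active
systemctl start rclone.service
systemctl status rclone.service --no-pager

# The mount point should now list the bucket's objects
ls -la /root/local-mount/
```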

Once the service starts correctly, enable it so that it starts again after a reboot:

# systemctl enable rclone.service
