- <div class="title"><a href="http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html">S3QL, a locally mounted cloud file system - nice free software</a></div>
- <div class="date"> 9th April 2014</div>
- <div class="body"><p>For a while now, I have been looking for a sensible offsite backup
-solution for use at home. My requirements are simple, it must be
-cheap and locally encrypted (in other words, I keep the encryption
-keys, the storage provider do not have access to my private files).
-One idea me and my friends had many years ago, before the cloud
-storage providers showed up, was to use Google mail as storage,
-writing a Linux block device storing blocks as emails in the mail
-service provided by Google, and thus get heaps of free space. On top
-of this one can add encryption, RAID and volume management to have
-lots of (fairly slow, I admit that) cheap and encrypted storage. But
-I never found time to implement such system. But the last few weeks I
-have looked at a system called
-<a href="https://bitbucket.org/nikratio/s3ql/">S3QL</a>, a locally
-mounted network backed file system with the features I need.</p>
-
<p>S3QL is a FUSE file system with a local cache and cloud storage,
supporting several different storage providers: any with an Amazon S3,
Google Storage or OpenStack API. There are heaps of such storage
providers. S3QL can also use a local directory as storage, which
combined with sshfs allows for file storage on any ssh server. S3QL
includes support for encryption, compression, de-duplication, snapshots
and immutable file systems, allowing me to mount the remote storage as
a local mount point and look at and use the files as if they were local,
while the content is stored in the cloud as well. This allows me to
have a backup that should survive a fire. The file system can not be
shared between several machines at the same time, as only one can
mount it at a time, but any machine with the encryption key and
access to the storage service can mount it if it is unmounted.</p>

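<p>To give an idea of how this is used: S3QL understands local://
storage URLs for the local directory backend, and the snapshot and
immutability features are provided by the s3qlcp and s3qllock tools
that come with S3QL. A rough sketch, where the server name, paths
and bucket name are made up for illustration:</p>

<p><blockquote><pre>
# Any ssh server can be used as storage by mounting it with sshfs
# and pointing the local S3QL backend at a directory inside it.
sshfs backup@server.example.com:/srv/s3ql /mnt/sshfs
mkfs.s3ql local:///mnt/sshfs/bucket-name
mount.s3ql local:///mnt/sshfs/bucket-name /s3ql

# Snapshots are cheap tree copies made with s3qlcp, and a snapshot
# can be made read-only (immutable) with s3qllock.
s3qlcp /s3ql/backup /s3ql/backup-2014-04-09
s3qllock /s3ql/backup-2014-04-09
</pre></blockquote></p>
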
<p>It is simple to use. I'm using it on Debian Wheezy, where the
package is already included. So to get started, run <tt>apt-get
install s3ql</tt>. Next, pick a storage provider. I ended up picking
Greenqloud, after reading their nice recipe on
<a href="https://greenqloud.zendesk.com/entries/44611757-How-To-Use-S3QL-to-mount-a-StorageQloud-bucket-on-Debian-Wheezy">how
to use S3QL with their Amazon S3 service</a>, because I trust the laws
in Iceland more than those in the USA when it comes to keeping my
personal data safe and private, and thus would rather spend money on a
company in Iceland. Another nice recipe is available in the article
<a href="http://www.admin-magazine.com/HPC/Articles/HPC-Cloud-Storage">S3QL
Filesystem for HPC Storage</a> by Jeff Layton in the HPC section of
Admin magazine. When the provider is picked, figure out how to get
the API key needed to connect to the storage API. With Greenqloud,
the key did not show up until I had added payment details to my
account.</p>

<p>Armed with the API access details, it is time to create the file
system. First, create a new bucket in the cloud. This bucket is the
file system storage area. I picked a bucket name reflecting the
machine that was going to store data there, but any name will do.
I'll refer to it as <tt>bucket-name</tt> below. In addition, one needs
the API login and password, and a locally created password. Store it
all in ~root/.s3ql/authinfo2 like this:</p>

<p><blockquote><pre>
[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
</pre></blockquote></p>

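<p>This file contains both the backend credentials and the file
system passphrase in the clear, so it must only be readable by root.
As far as I can tell S3QL refuses to use an authinfo2 file with lax
permissions, and in any case it does not hurt to tighten them
explicitly:</p>

<p><blockquote><pre>
# chmod 600 /root/.s3ql/authinfo2
</pre></blockquote></p>
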
<p>I create my local passphrase using <tt>pwget 50</tt> or similar,
but any sensible way to create a fairly random password should do.
Armed with these details, it is now time to run mkfs, entering the API
details and password to create the file system:</p>

<p><blockquote><pre>
# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login:
Enter backend password:
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password:
Confirm encryption password:
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
# </pre></blockquote></p>

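<p>If the local passphrase ever needs to change, the file system does
not have to be recreated. S3QL comes with an s3qladm tool which,
among other things, can replace the passphrase of an unmounted file
system. A sketch, which should prompt for the old and the new
passphrase:</p>

<p><blockquote><pre>
# s3qladm --ssl passphrase s3c://s.greenqloud.com:443/bucket-name
</pre></blockquote></p>
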
<p>The next step is mounting the file system to make the storage
available.</p>

<p><blockquote><pre>
# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#
</pre></blockquote></p>

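<p>Because of de-duplication and compression, the numbers df reports
say little about what is actually stored with the provider. The
s3qlstat tool that comes with S3QL reports, as far as I can tell, the
number of directory entries and inodes as well as the data size
before and after de-duplication and compression:</p>

<p><blockquote><pre>
# s3qlstat /s3ql
</pre></blockquote></p>
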
<p>The file system is now ready for use. I use rsync to store my
backups in it, and as the metadata used by rsync is downloaded at
mount time, no network traffic (and storage cost) is triggered by
running rsync. To unmount, one should not use the normal umount
command, as this will not flush the cache to the cloud storage.
Instead, run the umount.s3ql command like this:</p>

<p><blockquote><pre>
# umount.s3ql /s3ql
#
</pre></blockquote></p>

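<p>Putting the pieces together, a complete backup run can be wrapped
in a small script along these lines. The rsync options and the source
and target directories are examples, not my exact setup:</p>

<p><blockquote><pre>
#!/bin/sh
# Sketch of a backup cycle: mount, rsync, flush and unmount.
set -e
mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
rsync -aH --delete /home/ /s3ql/home-backup/
umount.s3ql /s3ql
</pre></blockquote></p>
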
<p>There is a fsck command available to check the file system and
correct any problems detected. This can be used if the local server
crashes while the file system is mounted, to reset the "already
mounted" flag. This is what it looks like when processing a working
file system:</p>

<p><blockquote><pre>
# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
#
</pre></blockquote></p>

<p>Thanks to the cache, working on files that fit in the cache is very
quick, about the same speed as local file access. Uploading large
amounts of data is for me limited by the bandwidth out of and into my
house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s,
which is very close to my upload speed, and downloading the same
Debian installation ISO gave me 610 kiB/s, close to my download speed.
Both were measured using <tt>dd</tt>. So for me, the bottleneck is my
network, not the file system code. I do not know what a good cache
size would be, but suspect that the cache should be larger than your
working set.</p>

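<p>The measurements were done along these lines; the ISO file name is
just an example. If the default cache size is not suitable, it can as
far as I remember be set with the --cachesize option to mount.s3ql,
given in KiB:</p>

<p><blockquote><pre>
# Upload speed: write a local ISO into the S3QL mount.
dd if=debian-7.4.0-amd64-netinst.iso of=/s3ql/test.iso bs=1M
# Download speed: read it back on a fresh mount, so the data
# actually comes over the network and not from the local cache.
dd if=/s3ql/test.iso of=/dev/null bs=1M
</pre></blockquote></p>
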
<p>I mentioned that only one machine can mount the file system at a
time. If another machine tries, it is told that the file system is
busy:</p>