Petter Reinholdtsen

S3QL, a locally mounted cloud file system - nice free software
9th April 2014

For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage, writing a Linux block device that stored blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to get lots of (fairly slow, I admit) cheap and encrypted storage. But I never found time to implement such a system. The last few weeks I have instead looked at a system called S3QL, a locally mounted network backed file system with the features I need.

S3QL is a FUSE file system with a local cache and cloud storage, handling several different storage providers, any with an Amazon S3, Google Storage or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any SSH server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point and look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a house fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.

It is simple to use. I am using it on Debian Wheezy, where the package is already included. So to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use S3QL with their Amazon S3 compatible service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.

Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created passphrase. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
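
Since authinfo2 contains both the API credentials and the file system passphrase, it should not be readable by other users. A small sketch of locking it down and generating a random passphrase with openssl (the path is the one used above; openssl rand is just one of many possible randomness sources, and the exact permission check S3QL performs may vary between versions):

```shell
# Restrict the credentials file to root only; S3QL is reported to
# refuse credential files readable by other users.
chmod 600 /root/.s3ql/authinfo2

# Generate a 50 character random passphrase for fs-passphrase
openssl rand -base64 64 | tr -dc 'a-zA-Z0-9' | head -c 50; echo
```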

I created my local passphrase using pwgen 50 or similar, but any sensible way to create a fairly random password should do. Armed with these details, it is now time to run mkfs, entering the API details and passphrase to create it:

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login: 
Enter backend password: 
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password: 
Confirm encryption password: 
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
# 

The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#

The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and thus no storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as it will not flush the cache to the cloud storage; instead run the umount.s3ql command like this:

# umount.s3ql /s3ql
# 

There is a fsck command available to check the file system and correct any problems detected. It can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
# 

Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is for me limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 KiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 KiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network connection, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.
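
The measurements were of the dd-into-the-mount kind; a sketch of how such numbers can be obtained (the file name and sizes are examples, and note that conv=fsync only guarantees the data reached the local cache layer, not necessarily the cloud):

```shell
# Measure write speed into the S3QL mount (example size: 100 MiB)
dd if=/dev/zero of=/s3ql/speedtest bs=1M count=100 conv=fsync

# Measure read speed back out of the mount
dd if=/s3ql/speedtest of=/dev/null bs=1M

rm /s3ql/speedtest
```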

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#

The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
# 
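
On a machine where the file system stays mounted, this flushing can be scheduled. A hypothetical /etc/cron.d fragment (the nightly 03:00 schedule, tool path and mount point are just examples):

```
# /etc/cron.d/s3ql-flush (example): push metadata and dirty blocks nightly
0 3 * * * root /usr/bin/s3qlctrl upload-meta /s3ql && /usr/bin/s3qlctrl flushcache /s3ql
```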

If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#

I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Storage, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the pricing models are quite different and you will have to figure out what suits you best.

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which tells me that the file system is getting critical review from the research community, and this increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack's SwiftObject Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provides POSIX semantics when it comes to locking, umask handling and so on). Running my test code to check file system semantics, I was happy to discover that no errors were found. So the file system can be used for home directories, if one chooses to do so.

If you do not want a locally mounted file system, and want something that works without the Linux FUSE file system support, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, personvern, sikkerhet.
The EU Court of Justice confirmed today that the Data Retention Directive is illegal
8th April 2014

Today the ruling from the EU Court of Justice on the Data Retention Directive finally arrived, and unsurprisingly the directive was found to be illegal and in violation of the fundamental rights of the citizens. If you wonder what the Data Retention Directive is, there is a great documentary available from NRK that I have previously recommended everyone to watch.

Here is a small collection of news reports on the case, and I expect more will show up during the day. More can be found via mylder.

I think it is very good that yet another voice establishes that totalitarian surveillance of the population is unacceptable, but it is still just as important as before to protect the private sphere, as the technological capabilities still exist and are being exploited, and I believe efforts in projects like Freedombox and Dugnadsnett are more important than ever.

Update 2014-04-08 12:10: The fund-raising to stop the Data Retention Directive in Norway is handled by the association Digitalt Personvern, which has collected 843 215 NOK so far but will probably need much more if Høyre and Arbeiderpartiet do not change their position on the issue. Only the parties Høyre and Arbeiderpartiet voted in favour of the Data Retention Directive, and one of them must change its mind for there to be a majority against it in the Norwegian parliament. See more about the case at Holder de ord.

Tags: dld, norsk, personvern, sikkerhet, surveillance.
ReactOS Windows clone - nice free software
1st April 2014

Microsoft has announced that Windows XP reaches its end of life 2014-04-08, in 7 days. But there are heaps of machines still running Windows XP, and depending on Windows XP to run their applications, and upgrading will be expensive, both when it comes to money and when it comes to the amount of effort needed to migrate from Windows XP to a new operating system. Some obvious options (buy a new Windows machine, buy a MacOSX machine, install Linux on the existing machine) are already well known and covered elsewhere. Most of them involve leaving the user applications installed on Windows XP behind and trying out replacements or updated versions. In this blog post I want to mention one strange bird that allows people to keep the hardware and the existing Windows XP applications, and run them on a free software operating system that is Windows XP compatible.

ReactOS is a free software operating system (GNU GPL licensed) working on providing an operating system that is binary compatible with Windows, able to run Windows programs directly and to use Windows drivers for hardware directly. The project goal is for Windows users to keep their existing machines, drivers and software, and gain the advantages of using an operating system without usage limitations caused by non-free licensing. It is a Windows clone running directly on the hardware, so quite different from the approach taken by the Wine project, which makes it possible to run Windows binaries on Linux.

The ReactOS project shares code with the Wine project, so most shared libraries available on Windows are already implemented. There is also a software manager like the ones we are used to on Linux, allowing the user to install free software applications with a simple click directly from the Internet. Check out the screen shots on the project web site for an idea what it looks like (it looks just like Windows before Metro).

I do not use ReactOS myself, preferring Linux and Unix-like operating systems. I have tested it, and it works fine in a virt-manager virtual machine. The browser, minesweeper, notepad and so on work fine as far as I can tell. Unfortunately, my main test application is the software included on a CD with the Lego Mindstorms NXT, which seems to install just fine from CD but fails to leave any binaries on the disk after the installation. So no luck with that test software. No idea why, but I hope someone else will figure out and fix the problem. I've tried the ReactOS Live ISO on a physical machine, and it seemed to work just fine. If you like Windows and want to keep running your old Windows binaries, check it out by downloading the installation CD, the live CD or the preinstalled virtual machine image.

Tags: english, reactos.
Debian Edu interview: Roger Marsal
30th March 2014

Debian Edu / Skolelinux keeps gaining new users. Some weeks ago, a person showed up on IRC, #debian-edu, with a wish to contribute, and I managed to get an interview with this great contributor, Roger Marsal, to learn more about his background.

Who are you, and how do you spend your days?

My name is Roger Marsal, I'm 27 years old (1986 generation) and I live in Barcelona, Spain. I've got a strong business background and I work as a patrimony manager and as a real estate agent. Additionally, I've co-founded a British based tech company that is nowadays on the last development phase of a new social networking concept.

I'm a Linux enthusiast who started his journey with Ubuntu four years ago and has recently switched to Debian, seeking rock solid stability and as a necessary step to gain expertise.

In a nutshell, I spend my days working and learning as much as I can to handle both my job and my entrepreneurial project, and to feed my Linux hunger.

How did you get in contact with the Skolelinux / Debian Edu project?

I discovered the LTSP advantages with the "Ubuntu 12.04 alternate install", and after a year of use I started looking for an alternative. Even though I highly value and respect the Ubuntu project, I thought it was necessary for me to change to a more robust and stable alternative. Since I was already using Debian on my personal laptop, I thought it would be fine to install Debian and configure an LTSP server myself. To my surprise, I discovered that the Debian project also supported a kind of Edubuntu equivalent, and after some pain I got a Debian Edu network up and running. I just loved it.

What do you see as the advantages of Skolelinux / Debian Edu?

I found a main advantage in that, once you know "the tips and tricks", a new installation just works out of the box. It's the most complete alternative I've found to create an LTSP network. All the other distributions seem to be made of plastic; Debian Edu seems to be made of steel.

What do you see as the disadvantages of Skolelinux / Debian Edu?

I found two main disadvantages.

I'm not an expert, but I've got some notions, and I still had to spend a considerable amount of time trying to bring up a standard network topology. I'm quite stubborn and I just worked until I did, but I'm sure many people with few resources (not big schools, but small academies for example) would have switched or given up.

It's amazing how such a complex system as Debian Edu has achieved this out-of-the-box state. On the other hand, tweaking without breaking gets more difficult, as more factors have to be considered. This can discourage many people too.

Which free software do you use daily?

I use Debian, Firefox, Okular, Inkscape, LibreOffice and Virtualbox.

Which strategy do you believe is the right one to use to get schools to use free software?

I don't think there is a need for a particular strategy. The free attribute, in both the "freedom" and "no price" meanings, is what will really bring free software to schools. From my own experience I can point to the "R" statistical language; a few years ago it was an extremely nerdy tool for university people. Today it is increasingly used to teach statistics at many different levels of study. I believe free and open software will keep gaining popularity, and I'm sure schools will be among the first places where this happens.

Tags: debian edu, english, intervju.
The documentary about the Data Retention Directive is finally being shown on NRK
26th March 2014

The NUUG association reported last night that NRK has now decided when the Norwegian documentary about the Data Retention Directive will be broadcast (see IMDB for details about the film). The first showing will be on NRK2 Monday 2014-03-31 at 19:50, followed by showings Wednesday 2014-04-02 at 12:30, Friday 2014-04-04 at 19:40 and Sunday 2014-04-06 at 15:10. I have seen the documentary, and I recommend everyone to watch it themselves. As a warm-up while we wait, I recommend Bjørn Stærk's op-ed in Aftenposten from yesterday, Autoritær gjøkunge, where he gives a good sketch of how bad things are for the right to privacy and the protection of democracy in Norway and the rest of the world, and quite correctly states that it is we in the IT industry who hold the key to doing something about it. I have involved myself in the projects dugnadsnett.no and FreedomBox to try to do a little myself to improve the situation, but a lot of hard work remains, from many more people than me, before we can be said to have restored the balance.

I expect the online version will show up on NRK's page about the film about the Data Retention Directive in five days. Keep an eye on the page, and tip friends and family that they should watch it too.

Tags: dld, freedombox, mesh network, norsk, personvern, sikkerhet, surveillance.
Public Trusted Timestamping services for everyone
25th March 2014

Did you ever need to store logs or other files in a way that would allow them to be used as evidence in court, and need a way to demonstrate beyond reasonable doubt that the files had not been changed since they were created? Or did you ever need to document that a given document was received at some point in time, like an archived document or the answer to an exam, and not changed after it was received? The problem in these settings is to remove the need to trust yourself and your computers, while still being able to prove that a file is the same as it was at some given time in the past.

A solution to these problems is to have a trusted third party "stamp" the document and verify that at some given time the document looked a given way. Such notary services have been around for thousands of years, and the digital equivalent is called a trusted timestamping service. The Internet Engineering Task Force standardised how such a service could work a few years ago as RFC 3161. The mechanism is simple: create a hash of the file in question, send it to a trusted third party, which adds a time stamp to the hash, signs the result with its private key, and sends back the signed hash + timestamp. Email, FTP and HTTP can all be used to request such a signature, depending on what is provided by the service used. Anyone with the document and the signature can then verify that the document matches the signature by creating their own hash and checking the signature using the trusted third party's public key. There are several commercial services providing such timestamping. A quick search for "rfc 3161 service" pointed me to at least DigiStamp, Quo Vadis, Global Sign and Global Trust Finder. The system works as long as the private key of the trusted third party is not compromised.

But as far as I can tell, there are very few public trusted timestamp services available for everyone. I've been looking for one for a while now. But yesterday I found one over at Deutsches Forschungsnetz, mentioned in a blog post by David Müller. I then found a good recipe on how to use the service over at the University of Greifswald.

The OpenSSL library contains both a server and tools to use and set up your own signing service. See the ts(1SSL) and tsget(1SSL) manual pages for more details. The following shell script demonstrates how to get a signed timestamp for any file on the disk in a Debian environment:

#!/bin/sh
set -e
url="http://zeitstempel.dfn.de"
caurl="https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt"
reqfile=$(mktemp -t tmp.XXXXXXXXXX.tsq)
resfile=$(mktemp -t tmp.XXXXXXXXXX.tsr)
cafile=chain.txt
if [ ! -f "$cafile" ] ; then
    wget -O "$cafile" "$caurl"
fi
openssl ts -query -data "$1" -cert | tee "$reqfile" \
    | /usr/lib/ssl/misc/tsget -h "$url" -o "$resfile"
openssl ts -reply -in "$resfile" -text 1>&2
openssl ts -verify -data "$1" -in "$resfile" -CAfile "$cafile" 1>&2
base64 < "$resfile"
rm "$reqfile" "$resfile"

The argument to the script is the file to timestamp, and the output is a base64 encoded version of the signature on STDOUT and details about the signature on STDERR. Note that due to a bug in the tsget script, you might need to modify the included script and remove the last line. Or just write your own HTTP uploader using curl. :) Now you too can prove and verify that files have not been changed.

But the Internet needs more public trusted timestamp services. Perhaps something for Uninett or my workplace, the University of Oslo, to set up?

Tags: english, sikkerhet.
Video DVD reader library / python-dvdvideo - nice free software
21st March 2014

Keeping your DVD collection safe from scratches and curious children's fingers, while still having it available when you want to see a movie, is not straightforward. My preferred method at the moment is to store a full copy of the ISO on a hard drive, and use VLC, Popcorn Hour or other useful players to view the resulting file. This way the subtitles and bonus material are still available, and using the ISO is just like inserting the original DVD in the DVD player.

Earlier I used dd to make backup copies, but it does not handle DVDs with read errors (and quite a few of them have read errors). I've also tried using dvdbackup and genisoimage, but these days I use the marvellous Python library and program python-dvdvideo written by Bastian Blank. It is in Debian already, and the binary package name is python3-dvdvideo. Instead of trying to read every block from the DVD, it parses the file structure, figures out which blocks on the DVD are actually in use, and only reads those blocks from the DVD. This works surprisingly well, and I have been able to back up almost my entire DVD collection using this method.

So far, python-dvdvideo has failed on between 10 and 20 DVDs, which is a small fraction of my collection. The most common problem is DVDs using UTF-16 instead of UTF-8 characters, which according to Bastian is against the DVD specification (and seems to cause some players to fail too). A rarer problem is what seems to be inconsistent DVD structures, where the Python library claims there is an overlap between objects. An equally rare problem is a claim that some value is out of range. No idea what is going on there. I wish I knew enough about the DVD format to fix these, to ensure my movie collection will stay with me in the future.

So, if you need to keep your DVDs safe, back them up using python-dvdvideo. :)

Tags: english, multimedia, opphavsrett, video.
Norsk utgave av Alaveteli / WhatDoTheyKnow på trappene
16th March 2014

Det offentlige Norge har mye kunnskap og informasjon. Men hvordan kan en få tilgang til den på en enkel måte? Takket være et lite knippe lover og tilhørende forskrifter, blant annet offentlighetsloven, miljøinformasjonsloven og forvaltningsloven har en rett til å spørre det offentlige og få svar. Men det finnes intet offentlig arkiv over hva andre har spurt om, og dermed risikerer en å måtte forstyrre myndighetene gang på gang for å få tak i samme informasjonen på nytt. Britiske mySociety har laget tjenesten WhatDoTheyKnow som gjør noe med dette. I Storbritannia blir WhatdoTheyKnow brukt i ca 15% av alle innsynsforespørsler mot sentraladministrasjonen. Prosjektet heter Alaveteli, og er takk i bruk en rekke steder etter at løsningen ble generalisert og gjort mulig å oversette. Den hjelper borgerne med å be om innsyn, rådgir ved purringer og klager og lar alle se hvilke henvendelser som er sendt til det offentlige og hvilke svar som er kommet inn, i et søkpart arkiv. Her i Norge holder vi i foreningen NUUG på å få opp en norsk utgave av Alaveteli, og her trenger vi din hjelp med oversettelsen.

Så langt er 76 % av Alaveteli oversatt til norsk bokmål, men vi skulle gjerne vært oppe i 100 % før lansering. Oversettelsen gjøres på Transifex, der enhver som registrerer seg og ber om tilgang til bokmålsoversettelsen får bidra. Vi har satt opp en test av tjenesten (som ikke sender epost til det offentlige, kun til oss som holder på å sette opp tjenesten) på maskinen alaveteli-dev.nuug.no, der en kan se hvordan de oversatte meldingen blir seende ut på nettsiden. Når tjenesten lanseres vil den hete Mimes brønn, etter visdomskilden som Odin måtte gi øyet sitt for å få drikke i. Den nettsiden er er ennå ikke klar til bruk.

Hvis noen vil oversette til nynorsk også, så skal vi finne ut hvordan vi lager en flerspråklig tjeneste. Men i første omgang er fokus på bokmålsoversettelsen, der vi selv har nok peiling til å ha fått oversatt 76%, men trenger hjelp for å komme helt i mål. :)

Tags: norsk, nuug, offentlig innsyn.
Freedombox on Dreamplug, Raspberry Pi and virtual x86 machine
14th March 2014

The Freedombox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to be able to communicate with their friends and family encrypted and away from prying eyes. It has been going on for a while, and is slowly progressing towards a new test release (0.2).

And what day could be better than Pi day to announce that the new version will provide "hard drive" / SD card / USB stick images for Dreamplug, Raspberry Pi and VirtualBox (or any other virtualization system), and can also be installed using a Debian installer preseed file. The Debian based Freedombox is now based on Debian Jessie, where most of the needed packages are already present. Only one, the freedombox-setup package, is missing. To try to build your own boot image to test the current status, fetch the freedom-maker scripts and build using vmdebootstrap, with a user with sudo access to become root:

git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image

Root access is needed to run debootstrap and mount loopback devices. See the README for more details on the build. If you do not want all three images, trim the make line. But note that thanks to a race condition in vmdebootstrap, the build might fail without the patch to the kpartx call.

If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:

url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat

But note that due to a recently introduced bug in apt in Jessie, the installer will currently hang while setting up APT sources. Killing the 'apt-cdrom ident' process when it hangs, a few times during the installation, will get the installation going. This affects all installations of Jessie, and I expect it will be fixed soon.

Give it a go, let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

Tags: debian, english, freedombox, sikkerhet, surveillance, web.
How to add extra storage servers in Debian Edu / Skolelinux
12th March 2014

On larger sites, it is useful to use a dedicated storage server for storing user home directories and data. The design for handling this in Debian Edu / Skolelinux is to update the automount rules in LDAP and let the automount daemon on the clients take care of the rest. I was reminded about the need to document this better when one of the customers of Skolelinux Drift AS, where I am on the board of directors, asked how to do this. The steps to get this working are the following:

  1. Add the new storage server in DNS. I use nas-server.intern as the example host here.
  2. Add automount information about this server in LDAP, to allow all clients to automatically mount it on request.
  3. Add the relevant entries in tjener.intern:/etc/fstab, because tjener.intern does not use automount, to avoid mounting loops.

DNS entries are added in GOsa², and not described here. Follow the instructions in the manual (Machine Management with GOsa² in section Getting started).

Ensure that the NFS export points on the server are exported to the relevant subnets or machines:

root@tjener:~# showmount -e nas-server
Export list for nas-server:
/storage         10.0.0.0/8
root@tjener:~#

Here everything on the backbone network is granted access to the /storage export. With NFSv3 it is slightly better to limit access to netgroup members or single IP addresses, to put some limits on the NFS access.
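
On the storage server side, the export above would correspond to an /etc/exports entry along these lines. This is only a sketch: the options shown and the netgroup name are examples, not taken from an actual Debian Edu setup.

```
# /etc/exports on nas-server.intern (example options)
/storage 10.0.0.0/8(rw,sync,no_subtree_check)

# Tighter alternative, limiting access to a netgroup:
# /storage @nfs-clients(rw,sync,no_subtree_check)
```

Run exportfs -r after editing the file to activate the changed exports.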

The next step is to update LDAP. This can not be done using GOsa², because it lacks a module for automount. Instead, use ldapvi and add the required LDAP objects using an editor.

ldapvi --ldap-conf -ZD '(cn=admin)' -b ou=automount,dc=skole,dc=skolelinux,dc=no

When the editor shows up, add the following LDAP objects at the bottom of the document. The "/&" part in the last LDAP object is a wild card matching everything the nas-server exports, removing the need to list individual mount points in LDAP.

add cn=nas-server,ou=auto.skole,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: automount
cn: nas-server
automountInformation: -fstype=autofs --timeout=60 ldap:ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no

add ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: top
objectClass: automountMap
ou: auto.nas-server

add cn=/,ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: automount
cn: /
automountInformation: -fstype=nfs,tcp,rsize=32768,wsize=32768,rw,intr,hard,nodev,nosuid,noatime nas-server.intern:/&

The last step to remember is to mount the relevant mount points on tjener.intern by adding them to /etc/fstab, creating the mount directories using mkdir, and running "mount -a" to mount them.
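
As a sketch, the fstab entry on tjener.intern could look like this; the mount options mirror the automount entry above, and the mount directory is an example:

```
# /etc/fstab entry on tjener.intern (example)
nas-server.intern:/storage  /tjener/nas-server/storage  nfs  tcp,rsize=32768,wsize=32768,rw,intr,hard,nodev,nosuid,noatime  0  0
```

Create the mount directory with mkdir -p /tjener/nas-server/storage before running "mount -a".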

When this is done, your users should be able to access the files on the storage server directly by just visiting the /tjener/nas-server/storage/ directory using any application on any workstation, LTSP client or LTSP server.

Tags: debian edu, english, ldap.

Created by Chronicle v4.6