The FreedomBox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to be able to communicate with their friends and family encrypted and away from prying eyes. It is still going strong, and today a major milestone was reached.
Today, the last of the packages currently used by the project to create the system images was accepted into Debian Unstable. It was the freedombox-setup package, which is used to configure the images during build and on the first boot. Now all one needs to get going is the build code from the freedom-maker git repository and packages from Debian. And once the freedombox-setup package enters testing, we can build everything directly from Debian. :)
Some key packages used by FreedomBox are freedombox-setup, plinth, pagekite, tor, privoxy, owncloud and dnsmasq. There are plans to integrate more packages into the setup. User documentation is maintained on the Debian wiki. Please check out the manual and help us improve it.
To test for yourself and create boot images with the FreedomBox setup, run this on a Debian machine using a user with sudo rights to become root:
sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image
Root access is needed to run debootstrap and mount loopback devices. See the README in the freedom-maker git repo for more details on the build. If you do not want all three images, trim the make line. Note that the virtualbox-image target is not really virtualbox specific. It creates an x86 image usable in kvm, qemu, vmware and any other x86 virtual machine environment. You might need the version of vmdebootstrap in Jessie to get the build working, as it includes fixes for a race condition with kpartx.
If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:
url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat
I have not tested it myself the last few weeks, so I do not know if it still works.
If you wonder how to help, one task you could look at is using systemd as the boot system. It will become the default for Linux in Jessie, so we need to make sure it is usable on the FreedomBox. I did a simple test a few weeks ago, and noticed dnsmasq failed to start during boot when using systemd. I suspect there are other problems too. :) To detect problems, there is a test suite included, which can be run from the plinth web interface.
Give it a go and let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.
I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc, in XML, Perl or human readable format. But lsdvd has not seen any new development since 2006 and had a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.
I've been in touch with a Gentoo developer and the Debian maintainer interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)
Twelve years ago, I wrote a short note about the use of language codes in Norway. I was just reminded of it when I got a question about whether the note was still relevant, and figured it was worth repeating what still applies. What I wrote back then is just as relevant today.
When choosing a language in programs on Unix, one chooses among many language codes. For languages in Norway, the following language codes are recommended (recommended locale in parentheses):
  nb (nb_NO) - Bokmål in Norway
  nn (nn_NO) - Nynorsk in Norway
  se (se_NO) - Northern Sami in Norway
All programs using other codes should be changed.
The language code should be used when .po files are named and installed. This is not the same as the locale code. For Norwegian Bokmål, the files should be named nb.po, while the locale (LANG) should be nb_NO.
If we do not standardise these codes in all the programs with Norwegian translations, it is impossible to give the LANG variable a value that works for all programs.
The language codes are the official codes from ISO 639, and their use in connection with POSIX locales is standardised in RFC 3066 and ISO 15897. This recommendation is in line with those standards.
The following codes are or have been in use as locale values for "Norwegian" languages. They should be avoided, and replaced when discovered:
  norwegian -> nb_NO
  bokmål    -> nb_NO
  bokmal    -> nb_NO
  nynorsk   -> nn_NO
  no        -> nb_NO
  no_NO     -> nb_NO
  no_NY     -> nn_NO
  sme_NO    -> se_NO
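The replacement table above can be expressed as a small normalisation helper. This is only an illustrative sketch; the function name is made up and not part of any existing tool:

```shell
#!/bin/sh
# Map deprecated "Norwegian" locale codes to the recommended ones.
# Unknown codes are passed through unchanged.
normalize_no_locale() {
    case "$1" in
        norwegian|bokmål|bokmal|no|no_NO) echo nb_NO ;;
        nynorsk|no_NY)                    echo nn_NO ;;
        sme_NO)                           echo se_NO ;;
        *)                                echo "$1" ;;
    esac
}
```

For example, `normalize_no_locale no_NY` prints `nn_NO`, while an already-correct code like `nb_NO` is left alone.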
Note, regarding the Sami languages, that se_NO in practice refers to Northern Sami in Norway, while e.g. smj_NO refers to Lule Sami. This note is not meant to give advice on Sami language codes, though; the Divvun project does a better job of that.
References:
  - RFC 3066 - Tags for the Identification of Languages (replaces RFC 1766)
  - ISO 639 - Codes for the Representation of Names of Languages
  - ISO DTR 14652 - locale-standard Specification method for cultural conventions
  - ISO 15897: Registration procedures for cultural elements (cultural registry) (new draft)
  - ISO/IEC JTC1/SC22/WG20 - the group for i18n standardisation in ISO
Around Oslo and the Østlandet area there are boxes hanging over the roads that I have wondered about. Judging from their placement and angle, they look like boxes sniffing something or other from the passing traffic, but it has been unclear to me what it is they read. The other day I took a picture of one such box, hanging under a ski bridge at Sollihøgda:
The box is clearly marked «Kapsch >>>», the logo of the Austrian company Kapsch, which among other things makes sensor systems for road traffic. But they make many different things, and I did not recognise the box from its looks after a quick glance at the company's product list.
Given that the box hangs over the road E16, a national road maintained by Statens Vegvesen (the Norwegian Public Roads Administration), I assumed it should be possible to use the REST API giving access to the road administration's database of roads, signs and other road related data to find out what on earth this could be. They have both a data catalogue and a search, where one can search for different types of entries within a given geographic area. I wrote a simple shell script to fetch the count of a given type within the area the ski bridge covers, and list the names of the types found. I did not bother to look up how to do URL encoding of the relevant strings more generically, and used an ugly sed line instead.
The relevant ID range 1-874 was correct in the data catalogue when I wrote the script. It will change over time. The script then listed the relevant types in and around the ski bridge:

#!/bin/sh
urlmap() {
    sed \
        -e 's/ / /g' -e 's/{/%7B/g' \
        -e 's/}/%7D/g' -e 's/\[/%5B/g' \
        -e 's/\]/%5D/g' -e 's/ /%20/g' \
        -e 's/,/%2C/g' -e 's/\"/%22/g' \
        -e 's/:/%3A/g'
}

lookup() {
    url="$1"
    curl -s -H 'Accept: application/vnd.vegvesen.nvdb-v1+xml' \
        "https://www.vegvesen.no/nvdb/api$url" | xmllint --format -
}

for id in $(seq 1 874) ; do
    search="{
    lokasjon: {
        bbox: \"10.34425,59.96386,10.34458,59.96409\",
        srid: \"WGS84\"
    },
    objektTyper: [{
        id: $id, antall: 10
    }]
}"

    query=/sok?kriterie=$(echo $search | urlmap)
    if lookup "$query" |
        grep -q '<totaltAntallReturnert>0<'
    then
        :
    else
        echo $id
        lookup "/datakatalog/objekttyper/$id" | grep '^  <navn>'
    fi
done

exit 0
5
  <navn>Rekkverk</navn>
14
  <navn>Rekkverksende</navn>
47
  <navn>Trafikklomme</navn>
49
  <navn>Trafikkøy</navn>
60
  <navn>Bru</navn>
79
  <navn>Stikkrenne/Kulvert</navn>
80
  <navn>Grøft, åpen</navn>
86
  <navn>Belysningsstrekning</navn>
95
  <navn>Skiltpunkt</navn>
96
  <navn>Skiltplate</navn>
98
  <navn>Referansestolpe</navn>
99
  <navn>Vegoppmerking, langsgående</navn>
105
  <navn>Fartsgrense</navn>
106
  <navn>Vinterdriftsstrategi</navn>
172
  <navn>Trafikkdeler</navn>
241
  <navn>Vegdekke</navn>
293
  <navn>Breddemåling</navn>
301
  <navn>Kantklippareal</navn>
318
  <navn>Snø-/isrydding</navn>
445
  <navn>Skred</navn>
446
  <navn>Dokumentasjon</navn>
452
  <navn>Undergang</navn>
528
  <navn>Tverrprofil</navn>
532
  <navn>Vegreferanse</navn>
534
  <navn>Region</navn>
535
  <navn>Fylke</navn>
536
  <navn>Kommune</navn>
538
  <navn>Gate</navn>
539
  <navn>Transportlenke</navn>
540
  <navn>Trafikkmengde</navn>
570
  <navn>Trafikkulykke</navn>
571
  <navn>Ulykkesinvolvert enhet</navn>
572
  <navn>Ulykkesinvolvert person</navn>
579
  <navn>Politidistrikt</navn>
583
  <navn>Vegbredde</navn>
591
  <navn>Høydebegrensning</navn>
592
  <navn>Nedbøyningsmåling</navn>
597
  <navn>Støy-luft, Strekningsdata</navn>
601
  <navn>Oppgravingsdata</navn>
602
  <navn>Oppgravingslag</navn>
603
  <navn>PMS-parsell</navn>
604
  <navn>Vegnormalstrekning</navn>
605
  <navn>Værrelatert strekning</navn>
616
  <navn>Feltstrekning</navn>
617
  <navn>Adressepunkt</navn>
626
  <navn>Friksjonsmåleserie</navn>
629
  <navn>Vegdekke, flatelapping</navn>
639
  <navn>Kurvatur, horisontalelement</navn>
640
  <navn>Kurvatur, vertikalelement</navn>
642
  <navn>Kurvatur, vertikalpunkt</navn>
643
  <navn>Statistikk, trafikkmengde</navn>
647
  <navn>Statistikk, vegbredde</navn>
774
  <navn>Nedbøyningsmåleserie</navn>
775
  <navn>ATK, influensstrekning</navn>
794
  <navn>Systemobjekt</navn>
810
  <navn>Vinterdriftsklasse</navn>
821
  <navn>Funksjonell vegklasse</navn>
825
  <navn>Kurvatur, stigning</navn>
838
  <navn>Vegbredde, beregnet</navn>
862
  <navn>Reisetidsregistreringspunkt</navn>
871
  <navn>Bruksklasse</navn>
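As noted, the sed-based urlmap only handles the characters present in this particular query. A generic percent-encoder could replace it; here is a sketch that delegates to python3's urllib (assuming python3 is available on the system):

```shell
#!/bin/sh
# Generic percent-encoding of a single argument, instead of
# enumerating characters one by one in sed. Hypothetical
# replacement for the urlmap function used above.
urlencode() {
    python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}
```

With this, `urlencode "$search"` would encode any character the API requires, not just the handful the sed line happens to cover.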
Of these, IDs 775 and 862 look the most relevant. I assume ID 775 refers to the speed camera standing right next to the bridge, while «Reisetidsregistreringspunkt» (travel time registration point) could be the box hanging there. So how do I find out what this might be? A look at the data catalogue page for ID 862/Reisetidsregistreringspunkt shows that there are 53 such meters in Norway, and where they are placed, but otherwise gives few details. 40 are placed in the Østlandet area and 13 in the Trondheim region. But the page mentions «AutoPASS», and if one looks up the entry at Sollihøgda it lists «Ciber AS» as the ID of the external system. (Could this be Ciber Norge AS, a company owned by Ciber Europe Bv?) A net search for «Ciber AS autopass» leads me to an article from NRK Trøndelag in 2013 titled «Sjekk dette hvis du vil unngå kø» (Check this if you want to avoid queues). The article points to the road administration's web site reisetider.no, which has a map page for Østlandet showing that travel time is measured between Sandvika and Sollihøgda. It would thus appear that I have found out what the boxes do.
If this is correct, these are boxes reading the AutoPASS ID of every passing car with an AutoPASS tag, making it possible for those controlling the boxes to keep track of where a given car was when it passed such a measuring point. The NRK article explains that today this information is only used to link two AutoPASS tag passings together to calculate the travel time, and that the use is approved by Datatilsynet (the Norwegian Data Protection Authority). Unfortunately, it is not possible for a driver passing under such a box to verify that the AutoPASS ID is only used for this, today and in the future.
In addition to this kind of AutoPASS sniffer, I know there are also many automatic stations charging per passing (aka toll stations), where information about time, place and licence plate is stored for 10 years. Are there other such sniffers placed along the roads?
Personally, I have chosen not to use an AutoPASS tag, to make it harder and more costly for those who want to invade my privacy and keep track of where my car moves at any given time. I hope more people will do the same, even though it means somewhat higher private expenses (more expensive toll passings). Protecting one's privacy costs money these days.
Thanks to Jan Kristian Jensen at Statens Vegvesen for the tip about the documentation for the road administration's REST API.
For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage, writing a Linux block device storing blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to have lots of (fairly slow, I admit that) cheap and encrypted storage. But I never found time to implement such a system. The last few weeks I have looked at a system called S3QL, a locally mounted network backed file system with the features I need.
S3QL is a fuse file system with a local cache and cloud storage, handling several different storage providers, any with an Amazon S3, Google Drive or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point, and look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.
It is simple to use. I'm using it on Debian Wheezy, where the package is already included. So to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use S3QL with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.
Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:
[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
The Debian installer could be a lot quicker. When we install more than 2000 packages in Skolelinux / Debian Edu using tasksel in the installer, unpacking the binary packages takes forever. A part of the slow I/O issue was discussed in bug #613428 about too much file system sync-ing done by dpkg, which is the package responsible for unpacking the binary packages. Other parts (like code executed by postinst scripts) might also sync to disk during installation. All this sync-ing to disk does not really make sense to me. If the machine crashes half-way through, I start over; I do not try to salvage the half installed system. So the failure that sync-ing is supposed to protect against, a hardware or system crash, is not really relevant while the installer is running.
A few days ago, I thought of a way to get rid of all the file system sync()-ing in a fairly non-intrusive way, without the need to change the code in several packages. The idea is not new, but I have not heard anyone propose the approach using dpkg-divert before. It depends on the small and clever package eatmydata, which uses LD_PRELOAD to replace the system functions for syncing data to disk with functions doing nothing, thus allowing programs to live dangerously while speeding up disk I/O significantly. Instead of modifying the implementation of dpkg, apt and tasksel (which are the packages responsible for selecting, fetching and installing packages), it occurred to me that we could just divert the programs away and replace them with a simple shell wrapper calling "eatmydata $program $@", to get the same effect. Two days ago I decided to test the idea, and wrapped up a simple implementation for the Debian Edu udeb.
The effect was stunning. In my first test it reduced the running time of the pkgsel step (installing tasks) from 64 to less than 44 minutes (20 minutes shaved off the installation) on an old Dell Latitude D505 machine. I am not quite sure what the optimised time would have been, as I messed up the testing a bit, causing the debconf priority to get low enough for two questions to pop up during installation. As soon as I saw the questions I moved the installation along, but do not know how long the questions were holding up the installation. I did some more measurements using Debian Edu Jessie, and got these results. The time measured is the time stamp in /var/log/syslog between the "pkgsel: starting tasksel" and the "pkgsel: finishing up" lines, if you want to do the same measurement yourself. In Debian Edu, the tasksel dialog does not show up, and the timing thus does not depend on how quickly the user handles the tasksel dialog.
I create my local passphrase using pwget 50 or similar, but any sensible way to create a fairly random password should do. Armed with these details, it is now time to run mkfs, entering the API details and password to create the file system:
# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login:
Enter backend password:
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password:
Confirm encryption password:
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
#
The next step is mounting the file system to make the storage available.
# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#
Machine/setup                | Original tasksel     | Optimised tasksel     | Reduction
-----------------------------|----------------------|-----------------------|------------
Latitude D505 Main+LTSP LXDE | 64 min (07:46-08:50) | <44 min (11:27-12:11) | >20 min 31%
Latitude D505 Roaming LXDE   | 57 min (08:48-09:45) | 34 min (07:43-08:17)  | 23 min 40%
Latitude D505 Minimal        | 22 min (10:37-10:59) | 11 min (11:16-11:27)  | 11 min 50%
Thinkpad X200 Minimal        | 6 min (08:19-08:25)  | 4 min (08:04-08:08)   | 2 min 33%
Thinkpad X200 Roaming KDE    | 19 min (09:21-09:40) | 15 min (10:25-10:40)  | 4 min 21%
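The reduction column combines the minutes saved with that saving as a share of the original time; as a quick sketch of the arithmetic (the helper name is made up):

```shell
#!/bin/sh
# Print "<minutes saved> min <percent>%" given the original and the
# optimised installation time in minutes.
reduction() {
    awk -v o="$1" -v n="$2" 'BEGIN { printf "%d min %d%%\n", o - n, (o - n) * 100 / o }'
}
```

For example, `reduction 22 11` prints `11 min 50%`, matching the Latitude D505 Minimal row.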
There is a fsck command available to check the file system and correct any problems detected. This can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:
The test was done using a netinst ISO on a USB stick, so some of the time was spent downloading packages. The connection to the Internet was 100Mbit/s during testing, so downloading should not be a significant factor in the measurement. Downloads typically took a few seconds to a few minutes, depending on the number of packages being installed.
The speedup is implemented by using two hooks in Debian Installer: the pre-pkgsel.d hook to set up the diverts, and the finish-install.d hook to remove the diverts at the end of the installation. I picked the pre-pkgsel.d hook instead of the post-base-installer.d hook because I test using an ISO without the eatmydata package included, and the post-base-installer.d hook in Debian Edu can only operate on packages included in the ISO. The negative effect of this is that I am unable to activate this optimisation for the kernel installation step in d-i. If the code is moved to the post-base-installer.d hook, the speedup would be larger for the entire installation.
I've implemented this in the debian-edu-install git repository, and plan to provide the optimisation as part of the Debian Edu installation. If you want to test this yourself, you can create two files in the installer (or in an udeb). One shell script needs to go into /usr/lib/pre-pkgsel.d/, with content like this:
# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
#

#!/bin/sh
set -e
. /usr/share/debconf/confmodule
info() {
    logger -t my-pkgsel "info: $*"
}
error() {
    logger -t my-pkgsel "error: $*"
}
override_install() {
    apt-install eatmydata || true
    if [ -x /target/usr/bin/eatmydata ] ; then
        for bin in dpkg apt-get aptitude tasksel ; do
            file=/usr/bin/$bin
            # Test that the file exists and has not been diverted already.
            if [ -f /target$file ] ; then
                info "diverting $file using eatmydata"
                printf "#!/bin/sh\neatmydata $bin.distrib \"\$@\"\n" \
                    > /target$file.edu
                chmod 755 /target$file.edu
                in-target dpkg-divert --package debian-edu-config \
                    --rename --quiet --add $file
                ln -sf ./$bin.edu /target$file
            else
                error "unable to divert $file, as it is missing."
            fi
        done
    else
        error "unable to find /usr/bin/eatmydata after installing the eatmydata package"
    fi
}

override_install
Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is for me limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.
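For reference, a throughput measurement like the one described could be sketched as follows; the helper name, the test path and the file size are my own examples, not the exact commands used above:

```shell
#!/bin/sh
# Rough write-throughput measurement of a directory using dd.
# Writes <mb> MiB of zeroes, fsync-ing before dd reports its rate,
# then removes the temporary file and prints dd's summary line.
measure_write() {
    dir="$1" mb="$2"
    dd if=/dev/zero of="$dir/ddtest.$$" bs=1M count="$mb" conv=fsync 2>&1 | tail -n 1
    rm -f "$dir/ddtest.$$"
}
```

Running e.g. `measure_write /s3ql 100` against the mounted file system prints a line such as "104857600 bytes ... copied", including the effective transfer rate.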
I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:
To clean up, another shell script should go into /usr/lib/finish-install.d/ with code like this:
# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#

#! /bin/sh -e
. /usr/share/debconf/confmodule
error() {
    logger -t my-finish-install "error: $@"
}
remove_install_override() {
    for bin in dpkg apt-get aptitude tasksel ; do
        file=/usr/bin/$bin
        if [ -x /target$file.edu ] ; then
            rm /target$file
            in-target dpkg-divert --package debian-edu-config \
                --rename --quiet --remove $file
            rm /target$file.edu
        else
            error "Missing divert for $file."
        fi
    done
    sync # Flush file buffers before continuing
}

remove_install_override
The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either umount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:
In Debian Edu, I placed both code fragments in a separate script edu-eatmydata-install and call it from the pre-pkgsel.d and finish-install.d scripts.
By now you might ask if this change should get into the normal Debian installer too? I suspect it should, but am not sure the current debian-installer coordinators find it useful enough. It also depends on the side effects of the change. I'm not aware of any, but I guess we will see if the change is safe after some more testing. Perhaps there is some package in Debian depending on sync() and fsync() having effect? Perhaps it should go into its own udeb, to allow those of us wanting to enable it to do so without affecting everyone.
Update 2014-09-24: Since a few days ago, enabling this optimisation will break installation of all programs using GnuTLS because of bug #702711. An updated eatmydata package in Debian will solve it.
Yesterday, I had the pleasure of attending a talk with the Norwegian Unix User Group about the OpenPGP keyserver pool sks-keyservers.net, and was very happy to learn that there is a large set of publicly available key servers to use when looking for people's public keys. So far I have used subkeys.pgp.net, and sometimes wwwkeys.nl.pgp.net when the former was misbehaving, but those days are over. The servers I have used up until yesterday have been slow and sometimes unavailable. I hope those problems are gone now.
Behind the round robin DNS entry of the sks-keyservers.net service there is a pool of more than 100 keyservers which are checked every day to ensure they are well connected and up to date. It must be better than what I have used so far. :)
Yesterday's speaker told me that the service is the default keyserver provided by the default configuration in GnuPG, but this does not seem to be used in Debian. Perhaps it should?
Anyway, I've updated my ~/.gnupg/options file to now include this line:
# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
#

keyserver pool.sks-keyservers.net
If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:
With GnuPG version 2 one can also locate the keyserver using SRV entries in DNS. Just for fun, I did just that at work, so now every user of GnuPG at the University of Oslo should find an OpenPGP keyserver automatically should they need it:
# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#

% host -t srv _pgpkey-http._tcp.uio.no
_pgpkey-http._tcp.uio.no has SRV record 0 100 11371 pool.sks-keyservers.net.
%
I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the price models are quite different and you will have to figure out what suits you best.
While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which told me that the file system is getting critical review by the science community, and that increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack's Swift Object Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.
Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provides POSIX semantics when it comes to locking and umask handling etc). Running my test code to check file system semantics, I was happy to discover that no error was found. So the file system can be used for home directories, if one chooses to do so.
If you do not want a local file system, and want something that works without the Linux fuse file system, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Now if only the HKP lookup protocol supported finding signature paths, I would be very happy. It can look up a given key or search for a user ID, but I normally do not want that; I want to find a trust path from my key to another key. Given a user ID or key ID, I would like to find (and download) the keys representing a signature path from my key to the key in question, to be able to get a trust path between the two keys. This is as far as I can tell not possible today. Perhaps something for a future version of the protocol?