X-Git-Url: https://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/28f4bb43d9d2970847f90ddcccdb23315b80b4ba..ae5db6d19f3d85fdd5e7bd4c12be28fa3f15fc43:/blog/archive/2014/04/04.rss

diff --git a/blog/archive/2014/04/04.rss b/blog/archive/2014/04/04.rss
index c9bda1bfc0..71971f80f2 100644
--- a/blog/archive/2014/04/04.rss
+++ b/blog/archive/2014/04/04.rss
@@ -6,6 +6,592 @@ http://people.skolelinux.org/pere/blog/
+
+	Half the Coverity issues in Gnash fixed in the next release
+	http://people.skolelinux.org/pere/blog/Half_the_Coverity_issues_in_Gnash_fixed_in_the_next_release.html
+	http://people.skolelinux.org/pere/blog/Half_the_Coverity_issues_in_Gnash_fixed_in_the_next_release.html
+	Tue, 29 Apr 2014 14:20:00 +0200
+	<p>I've been following <a href="http://www.getgnash.org/">the Gnash
+project</a> for quite a while now. It is a free software
+implementation of Adobe Flash, both a standalone player and a browser
+plugin. Gnash implements support for the AVM1 format (and not the
+newer AVM2 format - see
+<a href="http://lightspark.github.io/">Lightspark</a> for that one),
+allowing several Flash-based sites to work. Thanks to the friendly
+developers at Youtube, it also works with Youtube videos, because the
+Javascript code at Youtube detects Gnash and serves an AVM1 player to
+those users. :) It would be great if someone found time to implement
+AVM2 support, but it has not happened yet. If you install both
+Lightspark and Gnash, Lightspark will invoke Gnash if it finds an AVM1
+flash file, so you can get both handled as free software.
Unfortunately,
+Lightspark so far only implements a small subset of AVM2, and many
+sites do not work yet.</p>

+<p>A few months ago, I started looking at
+<a href="http://scan.coverity.com/">Coverity</a>, the static source
+checker used to find heaps and heaps of bugs in free software (thanks
+to the donation of a scanning service to free software projects by the
+company developing this non-free code checker), and Gnash was one of
+the projects I decided to check out. Coverity is able to find lock
+errors, memory errors, dead code and more. A few days ago they even
+extended it to also be able to find the Heartbleed bug in OpenSSL.
+There are heaps of checks being done on the instrumented code, and the
+amount of bogus warnings is quite low compared to the other static
+code checkers I have tested over the years.</p>

+<p>Since a few weeks ago, I've been working with the other Gnash
+developers squashing bugs discovered by Coverity. I was quite happy
+today when I checked the current status and saw that of the 777 issues
+detected so far, 374 are marked as fixed. This makes me confident that
+the next Gnash release will be more stable and more dependable than
+the previous one.
Most of the reported issues were and are in the
+test suite, but it also found a few in the rest of the code.</p>

+<p>If you want to help out, you can find us on
+<a href="https://lists.gnu.org/mailman/listinfo/gnash-dev">the
+gnash-dev mailing list</a> and on
+<a href="irc://irc.freenode.net/#gnash">the #gnash channel on the
+irc.freenode.net IRC server</a>.</p>
+
+
+
+	Install hardware dependent packages using tasksel (Isenkram 0.7)
+	http://people.skolelinux.org/pere/blog/Install_hardware_dependent_packages_using_tasksel__Isenkram_0_7_.html
+	http://people.skolelinux.org/pere/blog/Install_hardware_dependent_packages_using_tasksel__Isenkram_0_7_.html
+	Wed, 23 Apr 2014 14:50:00 +0200
+	<p>It would be nice if it were easier in Debian to get all the
+hardware related packages relevant for the computer installed
+automatically. So I implemented a way to do so, using
+<a href="http://packages.qa.debian.org/isenkram">my Isenkram
+package</a>. To use it, install the tasksel and isenkram packages and
+run tasksel as the root user. You should be presented with a new option,
+"Hardware specific packages (autodetected by isenkram)". When you
+select it, tasksel will install the packages isenkram claims fit
+the current hardware, hot pluggable or not.</p>

+<p>The implementation is in two files, one is the tasksel menu entry
+description, and the other is the script used to extract the list of
+packages to install. The first part is in
+<tt>/usr/share/tasksel/descs/isenkram.desc</tt> and looks like
+this:</p>

+<p><blockquote><pre>
+Task: isenkram
+Section: hardware
+Description: Hardware specific packages (autodetected by isenkram)
+ Based on the detected hardware various hardware specific packages are
+ proposed.
+Test-new-install: mark show
+Relevance: 8
+Packages: for-current-hardware
+</pre></blockquote></p>

+<p>The second part is in
+<tt>/usr/lib/tasksel/packages/for-current-hardware</tt> and looks like
+this:</p>

+<p><blockquote><pre>
+#!/bin/sh
+#
+(
+    isenkram-lookup
+    isenkram-autoinstall-firmware -l
+) | sort -u
+</pre></blockquote></p>

+<p>All in all, a very short and simple implementation making it
+trivial to install the hardware dependent packages we all may want to
+have installed on our machines. I've not been able to find a way to
+get tasksel to tell you exactly which packages it plans to install
+before doing the installation. So if you are curious or careful,
+check the output from the isenkram-* command line tools first.</p>

+<p>The information about which packages are handling which hardware is
+fetched either from the isenkram package itself in
+/usr/share/isenkram/, from git.debian.org or from the APT package
+database (using the Modaliases header). The APT package database
+parsing has caused a nasty resource leak in the isenkram daemon (bugs
+<a href="http://bugs.debian.org/719837">#719837</a> and
+<a href="http://bugs.debian.org/730704">#730704</a>). The cause is in
+the python-apt code (bug
+<a href="http://bugs.debian.org/745487">#745487</a>), but using a
+workaround I was able to get rid of the file descriptor leak and
+reduce the memory leak from ~30 MiB per hardware detection down to
+around 2 MiB per hardware detection. It should make the desktop
+daemon a lot more useful. The fix is in version 0.7 uploaded to
+unstable today.</p>

+<p>I believe the current way of mapping hardware to packages in
+Isenkram is a good draft, but in the future I expect isenkram to
+use the AppStream data source for this.
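</p>

<p>The <tt>for-current-hardware</tt> script above is nothing more than a
merge-and-deduplicate pipeline. As a sketch of what it does, the two
<tt>printf</tt> commands below stand in for the two isenkram tools (the
package names are made up for the example), and <tt>sort -u</tt> merges
their output and drops duplicates:</p>

```shell
#!/bin/sh
# Stand-in for the for-current-hardware script: each printf simulates one
# of the isenkram tools, which print one package name per line and may
# overlap; sort -u merges the two lists and removes duplicates.
(
    printf 'firmware-iwlwifi\npymissile\n'       # simulated isenkram-lookup
    printf 'firmware-iwlwifi\nfirmware-linux\n'  # simulated isenkram-autoinstall-firmware -l
) | sort -u
```

<p>Running it prints each package name exactly once, sorted, which is the
shape of list tasksel expects from a package list script.</p>

<p>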
A proposal for getting proper
+AppStream support into Debian is floating around as
+<a href="https://wiki.debian.org/DEP-11">DEP-11</a>, and a
+<a href="https://wiki.debian.org/SummerOfCode2014/Projects#SummerOfCode2014.2FProjects.2FAppStreamDEP11Implementation.AppStream.2FDEP-11_for_the_Debian_Archive">GSoC
+project</a> will take place this summer to improve the situation. I
+look forward to seeing the result, and welcome patches for isenkram to
+start using the information when it is ready.</p>

+<p>If you want your package to map to some specific hardware, either
+add a "Xb-Modaliases" header to your control file like I did in
+<a href="http://packages.qa.debian.org/pymissile">the pymissile
+package</a> or submit a bug report with the details to the isenkram
+package. See also
+<a href="http://people.skolelinux.org/pere/blog/tags/isenkram/">all my
+blog posts tagged isenkram</a> for details on the notation. I expect
+the information will be migrated to AppStream eventually, but for the
+moment I have no better place to store it.</p>
+
+
+
+	FreedomBox milestone - all packages now in Debian Sid
+	http://people.skolelinux.org/pere/blog/FreedomBox_milestone___all_packages_now_in_Debian_Sid.html
+	http://people.skolelinux.org/pere/blog/FreedomBox_milestone___all_packages_now_in_Debian_Sid.html
+	Tue, 15 Apr 2014 22:10:00 +0200
+	<p>The <a href="https://wiki.debian.org/FreedomBox">Freedombox
+project</a> is working on providing the software and hardware to make
+it easy for non-technical people to host their data and communication
+at home, and to be able to communicate with their friends and family
+encrypted and away from prying eyes. It is still going strong, and
+today a major milestone was reached.</p>

+<p>Today, the last of the packages currently used by the project to
+create the system images was accepted into Debian Unstable. It was
+the freedombox-setup package, which is used to configure the images
+during build and on the first boot.
Now all one needs to get going is
+the build code from the freedom-maker git repository and packages from
+Debian. And once the freedombox-setup package enters testing, we can
+build everything directly from Debian. :)</p>

+<p>Some key packages used by Freedombox are
+<a href="http://packages.qa.debian.org/freedombox-setup">freedombox-setup</a>,
+<a href="http://packages.qa.debian.org/plinth">plinth</a>,
+<a href="http://packages.qa.debian.org/pagekite">pagekite</a>,
+<a href="http://packages.qa.debian.org/tor">tor</a>,
+<a href="http://packages.qa.debian.org/privoxy">privoxy</a>,
+<a href="http://packages.qa.debian.org/owncloud">owncloud</a> and
+<a href="http://packages.qa.debian.org/dnsmasq">dnsmasq</a>. There
+are plans to integrate more packages into the setup. User
+documentation is maintained on the Debian wiki. Please
+<a href="https://wiki.debian.org/FreedomBox/Manual/Jessie">check out
+the manual</a> and help us improve it.</p>

+<p>To test for yourself and create boot images with the FreedomBox
+setup, run this on a Debian machine using a user with sudo rights to
+become root:</p>

+<p><pre>
+sudo apt-get install git vmdebootstrap mercurial python-docutils \
+  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
+  u-boot-tools
+git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
+  freedom-maker
+make -C freedom-maker dreamplug-image raspberry-image virtualbox-image
+</pre></p>

+<p>Root access is needed to run debootstrap and mount loopback
+devices. See the README in the freedom-maker git repo for more
+details on the build. If you do not want all three images, trim the
+make line. Note that the virtualbox-image target is not really
+virtualbox specific. It creates an x86 image usable in kvm, qemu,
+vmware and any other x86 virtual machine environment.
You might need
+the version of vmdebootstrap in Jessie to get the build working, as it
+includes fixes for a race condition with kpartx.</p>

+<p>If you instead want to install using a Debian CD and the preseed
+method, boot a Debian Wheezy ISO and use this boot argument to load
+the preseed values:</p>

+<p><pre>
+url=<a href="http://www.reinholdtsen.name/freedombox/preseed-jessie.dat">http://www.reinholdtsen.name/freedombox/preseed-jessie.dat</a>
+</pre></p>

+<p>I have not tested it myself the last few weeks, so I do not know if
+it still works.</p>

+<p>If you wonder how to help, one task you could look at is using
+systemd as the boot system. It will become the default for Linux in
+Jessie, so we need to make sure it is usable on the Freedombox. I did
+a simple test a few weeks ago, and noticed dnsmasq failed to start
+during boot when using systemd. I suspect there are other problems
+too. :) To detect problems, there is a test suite included, which can
+be run from the plinth web interface.</p>

+<p>Give it a go and let us know how it goes on the mailing list, and help
+us get the new release published. :) Please join us on
+<a href="irc://irc.debian.org:6667/%23freedombox">IRC (#freedombox on
+irc.debian.org)</a> and
+<a href="http://lists.alioth.debian.org/mailman/listinfo/freedombox-discuss">the
+mailing list</a> if you want to help make this vision come true.</p>
+
+
+
+	Language codes for POSIX locales in Norway
+	http://people.skolelinux.org/pere/blog/Spr_kkoder_for_POSIX_locale_i_Norge.html
+	http://people.skolelinux.org/pere/blog/Spr_kkoder_for_POSIX_locale_i_Norge.html
+	Fri, 11 Apr 2014 21:30:00 +0200
+	<p>Twelve years ago, I wrote a small note about
+<a href="http://i18n.skolelinux.no/localekoder.txt">the use of
+language codes in Norway</a>. I was recently reminded of this when I
+got a question about whether the note was still relevant, and thought
+it would be good to repeat what still applies.
What I wrote back then is still just as valid.</p>

+<p>When choosing a language in programs on Unix, one chooses among
+many language codes. For languages in Norway, the following language
+codes are recommended (recommended locale in parentheses):</p>

+<p><dl>
+<dt>nb (nb_NO)</dt><dd>Norwegian Bokmål in Norway</dd>
+<dt>nn (nn_NO)</dt><dd>Norwegian Nynorsk in Norway</dd>
+<dt>se (se_NO)</dt><dd>Northern Sami in Norway</dd>
+</dl></p>

+<p>All programs using other codes should be changed.</p>

+<p>The language code should be used when .po files are named and
+installed. This is not the same as the locale code. For Norwegian
+Bokmål, the files should be named nb.po, while the locale (LANG)
+should be nb_NO.</p>

+<p>If we do not get these codes standardised in all the programs with
+Norwegian translations, it is impossible to give the LANG variable a
+value that works for all programs.</p>

+<p>The language codes are the official codes from ISO 639, and their
+use with POSIX locales is standardised in RFC 3066 and ISO 15897.
+This recommendation is in line with those standards.</p>

+<p>The following codes are or have been in use as locale values for
+"Norwegian" languages. They should be avoided, and replaced when
+discovered:</p>

+<p><table>
+<tr><td>norwegian</td><td>-> nb_NO</td></tr>
+<tr><td>bokmål   </td><td>-> nb_NO</td></tr>
+<tr><td>bokmal   </td><td>-> nb_NO</td></tr>
+<tr><td>nynorsk  </td><td>-> nn_NO</td></tr>
+<tr><td>no       </td><td>-> nb_NO</td></tr>
+<tr><td>no_NO    </td><td>-> nb_NO</td></tr>
+<tr><td>no_NY    </td><td>-> nn_NO</td></tr>
+<tr><td>sme_NO   </td><td>-> se_NO</td></tr>
+</table></p>

+<p>Note that as far as the Sami languages are concerned, se_NO in
+practice refers to Northern Sami in Norway, while e.g. smj_NO refers
+to Lule Sami.
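</p>

<p>The replacement table can be read as a simple mapping. As an
illustration (the <tt>map_locale</tt> helper below is my own invention
for this example, not a tool that ships anywhere), it can be written as
a small shell function:</p>

```shell
#!/bin/sh
# Map deprecated "Norwegian" locale codes to the recommended ones;
# codes that are already correct are passed through unchanged.
map_locale() {
    case "$1" in
        norwegian|bokmål|bokmal|no|no_NO) echo nb_NO ;;
        nynorsk|no_NY)                    echo nn_NO ;;
        sme_NO)                           echo se_NO ;;
        *)                                echo "$1" ;;
    esac
}

map_locale no_NO   # prints nb_NO
map_locale no_NY   # prints nn_NO
```

<p>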
This note is not, however, meant to give advice on Sami
+language codes; there
+<a href="http://www.divvun.no/">the Divvun project</a> does a better
+job.</p>

+<p><strong>References:</strong></p>

+<ul>

+  <li><a href="http://www.rfc-base.org/rfc-3066.html">RFC 3066 - Tags
+  for the Identification of Languages</a> (replaces RFC 1766)</li>

+  <li><a href="http://www.loc.gov/standards/iso639-2/langcodes.html">ISO
+  639</a> - Codes for the Representation of Names of Languages</li>

+  <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n897-14652w25.pdf">ISO
+  DTR 14652</a> - locale-standard Specification method for cultural
+  conventions</li>

+  <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n610.pdf">ISO
+  15897: Registration procedures for cultural elements (cultural
+  registry)</a>,
+  <a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n849-15897wd6.pdf">(new
+  draft)</a></li>

+  <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/">ISO/IEC
+  JTC1/SC22/WG20</a> - The i18n standardisation group in ISO</li>

+</ul>
+
+
+
+	S3QL, a locally mounted cloud file system - nice free software
+	http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html
+	http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html
+	Wed, 9 Apr 2014 11:30:00 +0200
+	<p>For a while now, I have been looking for a sensible offsite backup
+solution for use at home. My requirements are simple: it must be
+cheap and locally encrypted (in other words, I keep the encryption
+keys, and the storage provider does not have access to my private
+files). One idea my friends and I had many years ago, before the
+cloud storage providers showed up, was to use Google mail as storage,
+writing a Linux block device storing blocks as emails in the mail
+service provided by Google, and thus get heaps of free space.
On top
+of this one could add encryption, RAID and volume management to get
+lots of (fairly slow, I admit that) cheap and encrypted storage. But
+I never found time to implement such a system. In the last few weeks,
+however, I have looked at a system called
+<a href="https://bitbucket.org/nikratio/s3ql/">S3QL</a>, a locally
+mounted network backed file system with the features I need.</p>

+<p>S3QL is a fuse file system with a local cache and cloud storage,
+handling several different storage providers, any with an Amazon S3,
+Google Drive or OpenStack API. There are heaps of such storage
+providers. S3QL can also use a local directory as storage, which
+combined with sshfs allows for file storage on any ssh server. S3QL
+includes support for encryption, compression, de-duplication,
+snapshots and immutable file systems, allowing me to mount the remote
+storage as a local mount point, look at and use the files as if they
+were local, while the content is stored in the cloud as well. This
+allows me to have a backup that should survive a fire. The file
+system can not be shared between several machines at the same time, as
+only one can mount it at a time, but any machine with the encryption
+key and access to the storage service can mount it if it is
+unmounted.</p>

+<p>It is simple to use. I'm using it on Debian Wheezy, where the
+package is already included. So to get started, run <tt>apt-get
+install s3ql</tt>. Next, pick a storage provider. I ended up picking
+Greenqloud, after reading their nice recipe on
+<a href="https://greenqloud.zendesk.com/entries/44611757-How-To-Use-S3QL-to-mount-a-StorageQloud-bucket-on-Debian-Wheezy">how
+to use S3QL with their Amazon S3 service</a>, because I trust the laws
+in Iceland more than those in the USA when it comes to keeping my
+personal data safe and private, and thus would rather spend money on a
+company in Iceland.
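</p>

<p>To get a feel for what the de-duplication does, one can mimic it with
plain coreutils: split a file into fixed-size blocks and count how many
distinct blocks remain after hashing. This is only an illustration of
the idea (the block size and file contents are invented for the
example); S3QL does the real thing internally and transparently:</p>

```shell
#!/bin/sh
# Build a 3 KiB file where the third 1 KiB block repeats the first,
# split it into 1 KiB blocks, and count total vs. content-unique blocks.
tmp=$(mktemp -d)
printf 'A%.0s' $(seq 1 1024) >  "$tmp/data"
printf 'B%.0s' $(seq 1 1024) >> "$tmp/data"
printf 'A%.0s' $(seq 1 1024) >> "$tmp/data"
split -b 1024 "$tmp/data" "$tmp/blk."
total=$(ls "$tmp"/blk.* | wc -l)
unique=$(sha256sum "$tmp"/blk.* | awk '{print $1}' | sort -u | wc -l)
echo "total=$total unique=$unique"   # prints total=3 unique=2
rm -r "$tmp"
```

<p>A de-duplicating store only needs to keep the two unique blocks,
which is why a s3qlstat report shows "After de-duplication" smaller than
the total data size.</p>

<p>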
Another nice recipe is available from the article
+<a href="http://www.admin-magazine.com/HPC/Articles/HPC-Cloud-Storage">S3QL
+Filesystem for HPC Storage</a> by Jeff Layton in the HPC section of
+Admin magazine. When the provider is picked, figure out how to get
+the API key needed to connect to the storage API. With Greenqloud,
+the key did not show up until I had added payment details to my
+account.</p>

+<p>Armed with the API access details, it is time to create the file
+system. First, create a new bucket in the cloud. This bucket is the
+file system storage area. I picked a bucket name reflecting the
+machine that was going to store data there, but any name will do.
+I'll refer to it as <tt>bucket-name</tt> below. In addition, one
+needs the API login and password, and a locally created password.
+Store it all in ~root/.s3ql/authinfo2 like this:</p>

+<p><blockquote><pre>
+[s3c]
+storage-url: s3c://s.greenqloud.com:443/bucket-name
+backend-login: API-login
+backend-password: API-password
+fs-passphrase: local-password
+</pre></blockquote></p>

+<p>I create my local passphrase using <tt>pwget 50</tt> or similar,
+but any sensible way to create a fairly random password should do it.
+Armed with these details, it is now time to run mkfs, entering the API
+details and password to create it:</p>

+<p><blockquote><pre>
+# mkdir -m 700 /var/lib/s3ql-cache
+# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
+  --ssl s3c://s.greenqloud.com:443/bucket-name
+Enter backend login:
+Enter backend password:
+Before using S3QL, make sure to read the user's guide, especially
+the 'Important Rules to Avoid Loosing Data' section.
+Enter encryption password:
+Confirm encryption password:
+Generating random encryption key...
+Creating metadata tables...
+Dumping metadata...
+..objects..
+..blocks..
+..inodes..
+..inode_blocks..
+..symlink_targets..
+..names..
+..contents..
+..ext_attributes..
+Compressing and uploading metadata...
+Wrote 0.00 MB of compressed metadata.
+# </pre></blockquote></p>

+<p>The next step is mounting the file system to make the storage
+available.</p>

+<p><blockquote><pre>
+# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
+  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
+Using 4 upload threads.
+Downloading and decompressing metadata...
+Reading metadata...
+..objects..
+..blocks..
+..inodes..
+..inode_blocks..
+..symlink_targets..
+..names..
+..contents..
+..ext_attributes..
+Mounting filesystem...
+# df -h /s3ql
+Filesystem                              Size  Used Avail Use% Mounted on
+s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
+#
+</pre></blockquote></p>

+<p>The file system is now ready for use. I use rsync to store my
+backups in it, and as the metadata used by rsync is downloaded at
+mount time, no network traffic (and storage cost) is triggered by
+running rsync. To unmount, one should not use the normal umount
+command, as this will not flush the cache to the cloud storage, but
+should instead run the umount.s3ql command like this:</p>

+<p><blockquote><pre>
+# umount.s3ql /s3ql
+#
+</pre></blockquote></p>

+<p>There is a fsck command available to check the file system and
+correct any problems detected. This can be used if the local server
+crashes while the file system is mounted, to reset the "already
+mounted" flag. This is what it looks like when processing a working
+file system:</p>

+<p><blockquote><pre>
+# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
+Using cached metadata.
+File system seems clean, checking anyway.
+Checking DB integrity...
+Creating temporary extra indices...
+Checking lost+found...
+Checking cached objects...
+Checking names (refcounts)...
+Checking contents (names)...
+Checking contents (inodes)...
+Checking contents (parent inodes)...
+Checking objects (reference counts)...
+Checking objects (backend)...
+..processed 5000 objects so far..
+..processed 10000 objects so far..
+..processed 15000 objects so far..
+Checking objects (sizes)...
+Checking blocks (referenced objects)...
+Checking blocks (refcounts)...
+Checking inode-block mapping (blocks)...
+Checking inode-block mapping (inodes)...
+Checking inodes (refcounts)...
+Checking inodes (sizes)...
+Checking extended attributes (names)...
+Checking extended attributes (inodes)...
+Checking symlinks (inodes)...
+Checking directory reachability...
+Checking unix conventions...
+Checking referential integrity...
+Dropping temporary indices...
+Backing up old metadata...
+Dumping metadata...
+..objects..
+..blocks..
+..inodes..
+..inode_blocks..
+..symlink_targets..
+..names..
+..contents..
+..ext_attributes..
+Compressing and uploading metadata...
+Wrote 0.89 MB of compressed metadata.
+#
+</pre></blockquote></p>

+<p>Thanks to the cache, working on files that fit in the cache is very
+quick, about the same speed as local file access. Uploading large
+amounts of data is for me limited by the bandwidth out of and into my
+house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s,
+which is very close to my upload speed, and downloading the same
+Debian installation ISO gave me 610 kiB/s, close to my download speed.
+Both were measured using <tt>dd</tt>. So for me, the bottleneck is my
+network, not the file system code. I do not know what a good cache
+size would be, but suspect that the cache should be larger than your
+working set.</p>

+<p>I mentioned that only one machine can mount the file system at a
+time. If another machine tries, it is told that the file system is
+busy:</p>

+<p><blockquote><pre>
+# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
+  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
+Using 8 upload threads.
+Backend reports that fs is still mounted elsewhere, aborting.
+# +</pre></blockquote></p> + +<p>The file content is uploaded when the cache is full, while the +metadata is uploaded once every 24 hour by default. To ensure the +file system content is flushed to the cloud, one can either umount the +file system, or ask S3QL to flush the cache and metadata using +s3qlctrl: + +<p><blockquote><pre> +# s3qlctrl upload-meta /s3ql +# s3qlctrl flushcache /s3ql +# +</pre></blockquote></p> + +<p>If you are curious about how much space your data uses in the +cloud, and how much compression and deduplication cut down on the +storage usage, you can use s3qlstat on the mounted file system to get +a report:</p> + +<p><blockquote><pre> +# s3qlstat /s3ql +Directory entries: 9141 +Inodes: 9143 +Data blocks: 8851 +Total data size: 22049.38 MB +After de-duplication: 21955.46 MB (99.57% of total) +After compression: 21877.28 MB (99.22% of total, 99.64% of de-duplicated) +Database size: 2.39 MB (uncompressed) +(some values do not take into account not-yet-uploaded dirty blocks in cache) +# +</pre></blockquote></p> + +<p>I mentioned earlier that there are several possible suppliers of +storage. I did not try to locate them all, but am aware of at least +<a href="https://www.greenqloud.com/">Greenqloud</a>, +<a href="http://drive.google.com/">Google Drive</a>, +<a href="http://aws.amazon.com/s3/">Amazon S3 web serivces</a>, +<a href="http://www.rackspace.com/">Rackspace</a> and +<a href="http://crowncloud.net/">Crowncloud</A>. The latter even +accept payment in Bitcoin. Pick one that suit your need. Some of +them provide several GiB of free storage, but the prize models are +quite different and you will have to figure out what suits you +best.</p> + +<p>While researching this blog post, I had a look at research papers +and posters discussing the S3QL file system. There are several, which +told me that the file system is getting a critical check by the +science community and increased my confidence in using it. 
One nice
+poster is titled
+"<a href="http://www.lanl.gov/orgs/adtsc/publications/science_highlights_2013/docs/pg68_69.pdf">An
+Innovative Parallel Cloud Storage System using OpenStack’s SwiftObject
+Store and Transformative Parallel I/O Approach</a>" by Hsing-Bung
+Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields
+and Pamela Smith. Please have a look.</p>

+<p>Given my problems with different file systems earlier, I decided to
+check out the mounted S3QL file system to see if it would be usable as
+a home directory (in other words, that it provided POSIX semantics
+when it comes to locking and umask handling etc). Running
+<a href="http://people.skolelinux.org/pere/blog/Testing_if_a_file_system_can_be_used_for_home_directories___.html">my
+test code to check file system semantics</a>, I was happy to discover
+that no error was found. So the file system can be used for home
+directories, if one chooses to do so.</p>

+<p>If you do not want a locally mounted file system, and want
+something that works without the Linux fuse file system, I would like
+to mention the <a href="http://www.tarsnap.com/">Tarsnap service</a>,
+which also provides locally encrypted backup using a command line
+client.
It has a nicer
+access control system, where one can split out read and write access,
+allowing some systems to write to the backup and others to only read
+from it.</p>

+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+
 
 	The EU Court of Justice confirmed today that the Data Retention Directive is invalid
 	http://people.skolelinux.org/pere/blog/EU_domstolen_bekreftet_i_dag_at_datalagringsdirektivet_er_ulovlig.html
@@ -71,6 +657,19 @@ efforts in projects like
 <a href="https://wiki.debian.org/FreedomBox">Freedombox</a> and
 <a href="http://www.dugnadsnett.no/">Dugnadsnett</a> are more
 important than ever.</p>
+
+<p><strong>Update 2014-04-08 12:10</strong>: The fund-raising to stop
+the Data Retention Directive in Norway is organised by the association
+<a href="http://www.digitaltpersonvern.no/">Digitalt Personvern</a>,
+which has collected 843 215 NOK so far, but will need a lot more
+unless Høyre and Arbeiderpartiet change their position on the matter.
+<a href="http://www.holderdeord.no/parliament-issues/48650">Only the
+parties Høyre and Arbeiderpartiet</a> voted for the Data Retention
+Directive, and one of them must change its mind for there to be a
+majority against it in the Storting. See more about the case at
+<a href="http://www.holderdeord.no/issues/69-innfore-datalagringsdirektivet">Holder
+de ord</a>.</p>