X-Git-Url: http://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/f8a36d04e3add7c8b31712f1a5fa449f0092adf2..fa4e14fe2faecdba7833edabf9f9fe20fb3be1c7:/blog/index.rss diff --git a/blog/index.rss b/blog/index.rss index 0b8c21d6fa..4fcf4416e6 100644 --- a/blog/index.rss +++ b/blog/index.rss @@ -6,6 +6,804 @@ http://people.skolelinux.org/pere/blog/ + + Hva henger under skibrua over E16 på Sollihøgda? + http://people.skolelinux.org/pere/blog/Hva_henger_under_skibrua_over_E16_p__Sollih_gda_.html + http://people.skolelinux.org/pere/blog/Hva_henger_under_skibrua_over_E16_p__Sollih_gda_.html + Sun, 21 Sep 2014 09:50:00 +0200 + <p>Rundt omkring i Oslo og Østlandsområdet henger det bokser over +veiene som jeg har lurt på hva gjør. De har ut fra plassering og +vinkling sett ut som bokser som sniffer ut et eller annet fra +forbipasserende trafikk, men det har vært uklart for meg hva det er de +leser av. Her om dagen tok jeg bilde av en slik boks som henger under +<a href="http://www.openstreetmap.no/?zoom=19&mlat=59.96396&mlon=10.34443&layers=B00000">ei +skibru på Sollihøgda</a>:</p> + +<p align="center"><img width="60%" src="http://people.skolelinux.org/pere/blog/images/2014-09-13-kapsch-sollihogda-crop.jpeg"></p> + +<p>Boksen er tydelig merket «Kapsch >>>», logoen til +<a href="http://www.kapsch.net/">det sveitsiske selskapet Kapsch</a> som +blant annet lager sensorsystemer for veitrafikk. Men de lager mye +forskjellig, og jeg kjente ikke igjen boksen på utseendet etter en +kjapp titt på produktlista til selskapet.</p> + +<p>I og med at boksen henger over veien E16, en riksvei vedlikeholdt +av Statens Vegvesen, så antok jeg at det burde være mulig å bruke +REST-API-et som gir tilgang til vegvesenets database over veier, +skilter og annet veirelatert til å finne ut hva i alle dager dette +kunne være. De har både +<a href="https://www.vegvesen.no/nvdb/api/dokumentasjon/datakatalog">en +datakatalog</a> og +<a href="https://www.vegvesen.no/nvdb/api/dokumentasjon/sok">et +søk</a>, der en kan søke etter ulike typer oppføringer innen for et +gitt geografisk område. Jeg laget et enkelt shell-script for å hente +ut antall av en gitt type innenfor området skibrua dekker, og listet +opp navnet på typene som ble funnet. Orket ikke slå opp hvordan +URL-koding av aktuelle strenger kunne gjøres mer generisk, og brukte +en stygg sed-linje i stedet.</p> + +<blockquote><pre> +#!/bin/sh +urlmap() { + sed \ + -e 's/ / /g' -e 's/{/%7B/g' \ + -e 's/}/%7D/g' -e 's/\[/%5B/g' \ + -e 's/\]/%5D/g' -e 's/ /%20/g' \ + -e 's/,/%2C/g' -e 's/\"/%22/g' \ + -e 's/:/%3A/g' +} + +lookup() { + url="$1" + curl -s -H 'Accept: application/vnd.vegvesen.nvdb-v1+xml' \ + "https://www.vegvesen.no/nvdb/api$url" | xmllint --format - +} + +for id in $(seq 1 874) ; do + search="{ + lokasjon: { + bbox: \"10.34425,59.96386,10.34458,59.96409\", + srid: \"WGS84\" + }, + objektTyper: [{ + id: $id, antall: 10 + }] +}" + + query=/sok?kriterie=$(echo $search | urlmap) + if lookup "$query" | + grep -q '&lt;totaltAntallReturnert>0&lt;' + then + : + else + echo $id + lookup "/datakatalog/objekttyper/$id" |grep '^ &lt;navn>' + fi +done + +exit 0 +</pre></blockquote> + +Aktuelt ID-område 1-874 var riktig i datakatalogen da jeg laget +scriptet. Det vil endre seg over tid. 
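+<p>URL-kodingen kan gjøres mer generisk uten den stygge sed-linja,
+for eksempel ved å la python3 sin urllib ta seg av prosentkodingen.
+Dette er kun en skisse, som forutsetter at python3 er tilgjengelig på
+maskinen som kjører scriptet:</p>
+
+<blockquote><pre>
+urlmap() {
+    # Prosentkod alt som kommer inn på standard inn, tilsvarende
+    # sed-reglene i scriptet over.
+    python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read().strip()))'
+}
+</pre></blockquote>
+
+<p>Resten av scriptet kan brukes uendret, siden funksjonen leser fra
+standard inn og skriver til standard ut på samme vis.</p>
+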
Skriptet listet så opp +aktuelle typer i og rundt skibrua: + +<blockquote><pre> +5 + &lt;navn>Rekkverk&lt;/navn> +14 + &lt;navn>Rekkverksende&lt;/navn> +47 + &lt;navn>Trafikklomme&lt;/navn> +49 + &lt;navn>Trafikkøy&lt;/navn> +60 + &lt;navn>Bru&lt;/navn> +79 + &lt;navn>Stikkrenne/Kulvert&lt;/navn> +80 + &lt;navn>Grøft, åpen&lt;/navn> +86 + &lt;navn>Belysningsstrekning&lt;/navn> +95 + &lt;navn>Skiltpunkt&lt;/navn> +96 + &lt;navn>Skiltplate&lt;/navn> +98 + &lt;navn>Referansestolpe&lt;/navn> +99 + &lt;navn>Vegoppmerking, langsgående&lt;/navn> +105 + &lt;navn>Fartsgrense&lt;/navn> +106 + &lt;navn>Vinterdriftsstrategi&lt;/navn> +172 + &lt;navn>Trafikkdeler&lt;/navn> +241 + &lt;navn>Vegdekke&lt;/navn> +293 + &lt;navn>Breddemåling&lt;/navn> +301 + &lt;navn>Kantklippareal&lt;/navn> +318 + &lt;navn>Snø-/isrydding&lt;/navn> +445 + &lt;navn>Skred&lt;/navn> +446 + &lt;navn>Dokumentasjon&lt;/navn> +452 + &lt;navn>Undergang&lt;/navn> +528 + &lt;navn>Tverrprofil&lt;/navn> +532 + &lt;navn>Vegreferanse&lt;/navn> +534 + &lt;navn>Region&lt;/navn> +535 + &lt;navn>Fylke&lt;/navn> +536 + &lt;navn>Kommune&lt;/navn> +538 + &lt;navn>Gate&lt;/navn> +539 + &lt;navn>Transportlenke&lt;/navn> +540 + &lt;navn>Trafikkmengde&lt;/navn> +570 + &lt;navn>Trafikkulykke&lt;/navn> +571 + &lt;navn>Ulykkesinvolvert enhet&lt;/navn> +572 + &lt;navn>Ulykkesinvolvert person&lt;/navn> +579 + &lt;navn>Politidistrikt&lt;/navn> +583 + &lt;navn>Vegbredde&lt;/navn> +591 + &lt;navn>Høydebegrensning&lt;/navn> +592 + &lt;navn>Nedbøyningsmåling&lt;/navn> +597 + &lt;navn>Støy-luft, Strekningsdata&lt;/navn> +601 + &lt;navn>Oppgravingsdata&lt;/navn> +602 + &lt;navn>Oppgravingslag&lt;/navn> +603 + &lt;navn>PMS-parsell&lt;/navn> +604 + &lt;navn>Vegnormalstrekning&lt;/navn> +605 + &lt;navn>Værrelatert strekning&lt;/navn> +616 + &lt;navn>Feltstrekning&lt;/navn> +617 + &lt;navn>Adressepunkt&lt;/navn> +626 + &lt;navn>Friksjonsmåleserie&lt;/navn> +629 + &lt;navn>Vegdekke, flatelapping&lt;/navn> +639 + &lt;navn>Kurvatur, horisontalelement&lt;/navn> +640 + &lt;navn>Kurvatur, vertikalelement&lt;/navn> +642 + &lt;navn>Kurvatur, vertikalpunkt&lt;/navn> +643 + &lt;navn>Statistikk, trafikkmengde&lt;/navn> +647 + &lt;navn>Statistikk, vegbredde&lt;/navn> +774 + &lt;navn>Nedbøyningsmåleserie&lt;/navn> +775 + &lt;navn>ATK, influensstrekning&lt;/navn> +794 + &lt;navn>Systemobjekt&lt;/navn> +810 + &lt;navn>Vinterdriftsklasse&lt;/navn> +821 + &lt;navn>Funksjonell vegklasse&lt;/navn> +825 + &lt;navn>Kurvatur, stigning&lt;/navn> +838 + &lt;navn>Vegbredde, beregnet&lt;/navn> +862 + &lt;navn>Reisetidsregistreringspunkt&lt;/navn> +871 + &lt;navn>Bruksklasse&lt;/navn> +</pre></blockquote> + +<p>Av disse ser ID 775 og 862 mest relevant ut. ID 775 antar jeg +refererer til fotoboksen som står like ved brua, mens +«Reisetidsregistreringspunkt» kanskje kan være boksen som henger der. +Hvordan finner jeg så ut hva dette kan være for noe. En titt på +<a href="http://labs.vegdata.no/nvdb-datakatalog/862-Reisetidsregistreringspunkt/">datakatalogsiden +for ID 862/Reisetidsregistreringspunkt</a> viser at det er finnes 53 +slike målere i Norge, og hvor de er plassert, men gir ellers få +detaljer. Det er plassert 40 på østlandet og 13 i Trondheimsregionen. +Men siden nevner «AutoPASS», og hvis en slår opp oppføringen på +Sollihøgda nevner den «Ciber AS» som ID for eksternt system. (Kan det +være snakk om +<a href="http://www.proff.no/selskap/ciber-norge-as/oslo/internettdesign-og-programmering/Z0I3KMF4/">Ciber +Norge AS</a>, et selskap eid av Ciber Europe Bv?) 
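+<p>Tilsvarende kan selve oppslaget gjøres fra kommandolinja med de
+samme funksjonene som i scriptet over. Her er en liten skisse som
+henter ut reisetidsregistreringspunktene (ID 862) innenfor samme
+område:</p>
+
+<blockquote><pre>
+search='{ lokasjon: { bbox: "10.34425,59.96386,10.34458,59.96409",
+    srid: "WGS84" }, objektTyper: [{ id: 862, antall: 10 }] }'
+lookup "/sok?kriterie=$(echo $search | urlmap)"
+</pre></blockquote>
+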
Et nettsøk på + «Ciber AS autopass» fører meg til en artikkel fra NRK Trøndelag i + 2013 med tittel +«<a href="http://www.nrk.no/trondelag/sjekk-dette-hvis-du-vil-unnga-ko-1.11327947">Sjekk +dette hvis du vil unngå kø</a>». Artikkelen henviser til vegvesenets +nettside +<a href="http://www.reisetider.no/reisetid/forside.html">reisetider.no</a> +som har en +<a href="http://www.reisetider.no/reisetid/omrade.html?omrade=5">kartside +for Østlandet</a> som viser at det måles mellom Sandvika og Sollihøgda. +Det kan dermed se ut til at jeg har funnet ut hva boksene gjør.</p> + +<p>Hvis det stemmer, så er dette bokser som leser av AutoPASS-ID-en +til alle passerende biler med AutoPASS-brikke, og dermed gjør det mulig +for de som kontrollerer boksene å holde rede på hvor en gitt bil er +når den passerte et slikt målepunkt. NRK-artikkelen forteller at +denne informasjonen i dag kun brukes til å koble to +AutoPASS-brikkepasseringer passeringer sammen for å beregne +reisetiden, og at bruken er godkjent av Datatilsynet. Det er desverre +ikke mulig for en sjåfør som passerer under en slik boks å kontrollere +at AutoPASS-ID-en kun brukes til dette i dag og i fremtiden.</p> + +<p>I tillegg til denne type AutoPASS-sniffere vet jeg at det også +finnes mange automatiske stasjoner som tar betalt pr. passering (aka +bomstasjoner), og der lagres informasjon om tid, sted og bilnummer i +10 år. Finnes det andre slike sniffere plassert ut på veiene?</p> + +<p>Personlig har jeg valgt å ikke bruke AutoPASS-brikke, for å gjøre +det vanskeligere og mer kostbart for de som vil invadere privatsfæren +og holde rede på hvor bilen min beveger seg til enhver tid. Jeg håper +flere vil gjøre det samme, selv om det gir litt høyere private +utgifter (dyrere bompassering). Vern om privatsfæren koster i disse +dager.</p> + +<p>Takk til Jan Kristian Jensen i Statens Vegvesen for tips om +dokumentasjon på vegvesenets REST-API.</p> + + + + + Speeding up the Debian installer using eatmydata and dpkg-divert + http://people.skolelinux.org/pere/blog/Speeding_up_the_Debian_installer_using_eatmydata_and_dpkg_divert.html + http://people.skolelinux.org/pere/blog/Speeding_up_the_Debian_installer_using_eatmydata_and_dpkg_divert.html + Tue, 16 Sep 2014 14:00:00 +0200 + <p>The <a href="https://www.debian.org/">Debian</a> installer could be +a lot quicker. When we install more than 2000 packages in +<a href="http://www.skolelinux.org/">Skolelinux / Debian Edu</a> using +tasksel in the installer, unpacking the binary packages take forever. +A part of the slow I/O issue was discussed in +<a href="https://bugs.debian.org/613428">bug #613428</a> about too +much file system sync-ing done by dpkg, which is the package +responsible for unpacking the binary packages. Other parts (like code +executed by postinst scripts) might also sync to disk during +installation. All this sync-ing to disk do not really make sense to +me. If the machine crash half-way through, I start over, I do not try +to salvage the half installed system. So the failure sync-ing is +supposed to protect against, hardware or system crash, is not really +relevant while the installer is running.</p> + +<p>A few days ago, I thought of a way to get rid of all the file +system sync()-ing in a fairly non-intrusive way, without the need to +change the code in several packages. The idea is not new, but I have +not heard anyone propose the approach using dpkg-divert before. 
It +depend on the small and clever package +<a href="https://packages.qa.debian.org/eatmydata">eatmydata</a>, which +uses LD_PRELOAD to replace the system functions for syncing data to +disk with functions doing nothing, thus allowing programs to live +dangerous while speeding up disk I/O significantly. Instead of +modifying the implementation of dpkg, apt and tasksel (which are the +packages responsible for selecting, fetching and installing packages), +it occurred to me that we could just divert the programs away, replace +them with a simple shell wrapper calling +"eatmydata&nbsp;$program&nbsp;$@", to get the same effect. +Two days ago I decided to test the idea, and wrapped up a simple +implementation for the Debian Edu udeb.</p> + +<p>The effect was stunning. In my first test it reduced the running +time of the pkgsel step (installing tasks) from 64 to less than 44 +minutes (20 minutes shaved off the installation) on an old Dell +Latitude D505 machine. I am not quite sure what the optimised time +would have been, as I messed up the testing a bit, causing the debconf +priority to get low enough for two questions to pop up during +installation. As soon as I saw the questions I moved the installation +along, but do not know how long the question were holding up the +installation. I did some more measurements using Debian Edu Jessie, +and got these results. The time measured is the time stamp in +/var/log/syslog between the "pkgsel: starting tasksel" and the +"pkgsel: finishing up" lines, if you want to do the same measurement +yourself. In Debian Edu, the tasksel dialog do not show up, and the +timing thus do not depend on how quickly the user handle the tasksel +dialog.</p> + +<p><table> + +<tr> +<th>Machine/setup</th> +<th>Original tasksel</th> +<th>Optimised tasksel</th> +<th>Reduction</th> +</tr> + +<tr> +<td>Latitude D505 Main+LTSP LXDE</td> +<td>64 min (07:46-08:50)</td> +<td><44 min (11:27-12:11)</td> +<td>>20 min 18%</td> +</tr> + +<tr> +<td>Latitude D505 Roaming LXDE</td> +<td>57 min (08:48-09:45)</td> +<td>34 min (07:43-08:17)</td> +<td>23 min 40%</td> +</tr> + +<tr> +<td>Latitude D505 Minimal</td> +<td>22 min (10:37-10:59)</td> +<td>11 min (11:16-11:27)</td> +<td>11 min 50%</td> +</tr> + +<tr> +<td>Thinkpad X200 Minimal</td> +<td>6 min (08:19-08:25)</td> +<td>4 min (08:04-08:08)</td> +<td>2 min 33%</td> +</tr> + +<tr> +<td>Thinkpad X200 Roaming KDE</td> +<td>19 min (09:21-09:40)</td> +<td>15 min (10:25-10:40)</td> +<td>4 min 21%</td> +</tr> + +</table></p> + +<p>The test is done using a netinst ISO on a USB stick, so some of the +time is spent downloading packages. The connection to the Internet +was 100Mbit/s during testing, so downloading should not be a +significant factor in the measurement. Download typically took a few +seconds to a few minutes, depending on the amount of packages being +installed.</p> + +<p>The speedup is implemented by using two hooks in +<a href="https://www.debian.org/devel/debian-installer/">Debian +Installer</a>, the pre-pkgsel.d hook to set up the diverts, and the +finish-install.d hook to remove the divert at the end of the +installation. I picked the pre-pkgsel.d hook instead of the +post-base-installer.d hook because I test using an ISO without the +eatmydata package included, and the post-base-installer.d hook in +Debian Edu can only operate on packages included in the ISO. The +negative effect of this is that I am unable to activate this +optimization for the kernel installation step in d-i. 
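+<p>The wrapper idea itself is not tied to the installer. On an
+already installed system the same divert trick can be tried by hand
+to see the effect on dpkg alone. This is only a sketch of the
+approach, must be run as root, and assumes the eatmydata package is
+already installed:</p>
+
+<p><blockquote><pre>
+#!/bin/sh
+# Divert dpkg away (it ends up as /usr/bin/dpkg.distrib) and put an
+# eatmydata wrapper in its place.
+dpkg-divert --rename --add /usr/bin/dpkg
+printf '#!/bin/sh\nexec eatmydata /usr/bin/dpkg.distrib "$@"\n' \
+    > /usr/bin/dpkg
+chmod 755 /usr/bin/dpkg
+
+# To undo the change again:
+#   rm /usr/bin/dpkg
+#   dpkg-divert --rename --remove /usr/bin/dpkg
+</pre></blockquote></p>
+
+<p>In the installer the same wrapping is done from the hooks
+mentioned above.</p>
+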
If the code is +moved to the post-base-installer.d hook, the speedup would be larger +for the entire installation.</p> + +<p>I've implemented this in the +<a href="https://packages.qa.debian.org/debian-edu-install">debian-edu-install</a> +git repository, and plan to provide the optimization as part of the +Debian Edu installation. If you want to test this yourself, you can +create two files in the installer (or in an udeb). One shell script +need do go into /usr/lib/pre-pkgsel.d/, with content like this:</p> + +<p><blockquote><pre> +#!/bin/sh +set -e +. /usr/share/debconf/confmodule +info() { + logger -t my-pkgsel "info: $*" +} +error() { + logger -t my-pkgsel "error: $*" +} +override_install() { + apt-install eatmydata || true + if [ -x /target/usr/bin/eatmydata ] ; then + for bin in dpkg apt-get aptitude tasksel ; do + file=/usr/bin/$bin + # Test that the file exist and have not been diverted already. + if [ -f /target$file ] ; then + info "diverting $file using eatmydata" + printf "#!/bin/sh\neatmydata $bin.distrib \"\$@\"\n" \ + > /target$file.edu + chmod 755 /target$file.edu + in-target dpkg-divert --package debian-edu-config \ + --rename --quiet --add $file + ln -sf ./$bin.edu /target$file + else + error "unable to divert $file, as it is missing." + fi + done + else + error "unable to find /usr/bin/eatmydata after installing the eatmydata pacage" + fi +} + +override_install +</pre></blockquote></p> + +<p>To clean up, another shell script should go into +/usr/lib/finish-install.d/ with code like this: + +<p><blockquote><pre> +#! /bin/sh -e +. /usr/share/debconf/confmodule +error() { + logger -t my-finish-install "error: $@" +} +remove_install_override() { + for bin in dpkg apt-get aptitude tasksel ; do + file=/usr/bin/$bin + if [ -x /target$file.edu ] ; then + rm /target$file + in-target dpkg-divert --package debian-edu-config \ + --rename --quiet --remove $file + rm /target$file.edu + else + error "Missing divert for $file." + fi + done + sync # Flush file buffers before continuing +} + +remove_install_override +</pre></blockquote></p> + +<p>In Debian Edu, I placed both code fragments in a separate script +edu-eatmydata-install and call it from the pre-pkgsel.d and +finish-install.d scripts.</p> + +<p>By now you might ask if this change should get into the normal +Debian installer too? I suspect it should, but am not sure the +current debian-installer coordinators find it useful enough. It also +depend on the side effects of the change. I'm not aware of any, but I +guess we will see if the change is safe after some more testing. +Perhaps there is some package in Debian depending on sync() and +fsync() having effect? Perhaps it should go into its own udeb, to +allow those of us wanting to enable it to do so without affecting +everyone.</p> + +<p>Update 2014-09-24: Since a few days ago, enabling this optimization +will break installation of all programs using gnutls because of +<ahref="https://bugs.debian.org/702711">bug #702711. 
An updated +eatmydata package in Debian will solve it.</p> + + + + + Good bye subkeys.pgp.net, welcome pool.sks-keyservers.net + http://people.skolelinux.org/pere/blog/Good_bye_subkeys_pgp_net__welcome_pool_sks_keyservers_net.html + http://people.skolelinux.org/pere/blog/Good_bye_subkeys_pgp_net__welcome_pool_sks_keyservers_net.html + Wed, 10 Sep 2014 13:10:00 +0200 + <p>Yesterday, I had the pleasure of attending a talk with the +<a href="http://www.nuug.no/">Norwegian Unix User Group</a> about +<a href="http://www.nuug.no/aktiviteter/20140909-sks-keyservers/">the +OpenPGP keyserver pool sks-keyservers.net</a>, and was very happy to +learn that there is a large set of publicly available key servers to +use when looking for peoples public key. So far I have used +subkeys.pgp.net, and some times wwwkeys.nl.pgp.net when the former +were misbehaving, but those days are ended. The servers I have used +up until yesterday have been slow and some times unavailable. I hope +those problems are gone now.</p> + +<p>Behind the round robin DNS entry of the +<a href="https://sks-keyservers.net/">sks-keyservers.net</a> service +there is a pool of more than 100 keyservers which are checked every +day to ensure they are well connected and up to date. It must be +better than what I have used so far. :)</p> + +<p>Yesterdays speaker told me that the service is the default +keyserver provided by the default configuration in GnuPG, but this do +not seem to be used in Debian. Perhaps it should?</p> + +<p>Anyway, I've updated my ~/.gnupg/options file to now include this +line:</p> + +<p><blockquote><pre> +keyserver pool.sks-keyservers.net +</pre></blockquote></p> + +<p>With GnuPG version 2 one can also locate the keyserver using SRV +entries in DNS. Just for fun, I did just that at work, so now every +user of GnuPG at the University of Oslo should find a OpenGPG +keyserver automatically should their need it:</p> + +<p><blockquote><pre> +% host -t srv _pgpkey-http._tcp.uio.no +_pgpkey-http._tcp.uio.no has SRV record 0 100 11371 pool.sks-keyservers.net. +% +</pre></blockquote></p> + +<p>Now if only +<a href="http://ietfreport.isoc.org/idref/draft-shaw-openpgp-hkp/">the +HKP lookup protocol</a> supported finding signature paths, I would be +very happy. It can look up a given key or search for a user ID, but I +normally do not want that, but to find a trust path from my key to +another key. Given a user ID or key ID, I would like to find (and +download) the keys representing a signature path from my key to the +key in question, to be able to get a trust path between the two keys. +This is as far as I can tell not possible today. Perhaps something +for a future version of the protocol?</p> + + + + + Do you need an agreement with MPEG-LA to publish and broadcast H.264 video in Norway? + http://people.skolelinux.org/pere/blog/Do_you_need_an_agreement_with_MPEG_LA_to_publish_and_broadcast_H_264_video_in_Norway_.html + http://people.skolelinux.org/pere/blog/Do_you_need_an_agreement_with_MPEG_LA_to_publish_and_broadcast_H_264_video_in_Norway_.html + Mon, 25 Aug 2014 22:10:00 +0200 + <p>Two years later, I am still not sure if it is legal here in Norway +to use or publish a video in H.264 or MPEG4 format edited by the +commercially licensed video editors, without limiting the use to +create "personal" or "non-commercial" videos or get a license +agreement with <a href="http://www.mpegla.com">MPEG LA</a>. 
If one +want to publish and broadcast video in a non-personal or commercial +setting, it might be that those tools can not be used, or that video +format can not be used, without breaking their copyright license. I +am not sure. +<a href="http://people.skolelinux.org/pere/blog/Trenger_en_avtale_med_MPEG_LA_for___publisere_og_kringkaste_H_264_video_.html">Back +then</a>, I found that the copyright license terms for Adobe Premiere +and Apple Final Cut Pro both specified that one could not use the +program to produce anything else without a patent license from MPEG +LA. The issue is not limited to those two products, though. Other +much used products like those from Avid and Sorenson Media have terms +of use are similar to those from Adobe and Apple. The complicating +factor making me unsure if those terms have effect in Norway or not is +that the patents in question are not valid in Norway, but copyright +licenses are.</p> + +<p>These are the terms for Avid Artist Suite, according to their +<a href="http://www.avid.com/US/about-avid/legal-notices/legal-enduserlicense2">published +end user</a> +<a href="http://www.avid.com/static/resources/common/documents/corporate/LICENSE.pdf">license +text</a> (converted to lower case text for easier reading):</p> + +<p><blockquote> +<p>18.2. MPEG-4. MPEG-4 technology may be included with the +software. MPEG LA, L.L.C. requires this notice: </p> + +<p>This product is licensed under the MPEG-4 visual patent portfolio +license for the personal and non-commercial use of a consumer for (i) +encoding video in compliance with the MPEG-4 visual standard (“MPEG-4 +video”) and/or (ii) decoding MPEG-4 video that was encoded by a +consumer engaged in a personal and non-commercial activity and/or was +obtained from a video provider licensed by MPEG LA to provide MPEG-4 +video. No license is granted or shall be implied for any other +use. Additional information including that relating to promotional, +internal and commercial uses and licensing may be obtained from MPEG +LA, LLC. See http://www.mpegla.com. This product is licensed under +the MPEG-4 systems patent portfolio license for encoding in compliance +with the MPEG-4 systems standard, except that an additional license +and payment of royalties are necessary for encoding in connection with +(i) data stored or replicated in physical media which is paid for on a +title by title basis and/or (ii) data which is paid for on a title by +title basis and is transmitted to an end user for permanent storage +and/or use, such additional license may be obtained from MPEG LA, +LLC. See http://www.mpegla.com for additional details.</p> + +<p>18.3. H.264/AVC. H.264/AVC technology may be included with the +software. MPEG LA, L.L.C. requires this notice:</p> + +<p>This product is licensed under the AVC patent portfolio license for +the personal use of a consumer or other uses in which it does not +receive remuneration to (i) encode video in compliance with the AVC +standard (“AVC video”) and/or (ii) decode AVC video that was encoded +by a consumer engaged in a personal activity and/or was obtained from +a video provider licensed to provide AVC video. No license is granted +or shall be implied for any other use. Additional information may be +obtained from MPEG LA, L.L.C. 
See http://www.mpegla.com.</p> +</blockquote></p> + +<p>Note the requirement that the videos created can only be used for +personal or non-commercial purposes.</p> + +<p>The Sorenson Media software have +<a href="http://www.sorensonmedia.com/terms/">similar terms</a>:</p> + +<p><blockquote> + +<p>With respect to a license from Sorenson pertaining to MPEG-4 Video +Decoders and/or Encoders: Any such product is licensed under the +MPEG-4 visual patent portfolio license for the personal and +non-commercial use of a consumer for (i) encoding video in compliance +with the MPEG-4 visual standard (“MPEG-4 video”) and/or (ii) decoding +MPEG-4 video that was encoded by a consumer engaged in a personal and +non-commercial activity and/or was obtained from a video provider +licensed by MPEG LA to provide MPEG-4 video. No license is granted or +shall be implied for any other use. Additional information including +that relating to promotional, internal and commercial uses and +licensing may be obtained from MPEG LA, LLC. See +http://www.mpegla.com.</p> + +<p>With respect to a license from Sorenson pertaining to MPEG-4 +Consumer Recorded Data Encoder, MPEG-4 Systems Internet Data Encoder, +MPEG-4 Mobile Data Encoder, and/or MPEG-4 Unique Use Encoder: Any such +product is licensed under the MPEG-4 systems patent portfolio license +for encoding in compliance with the MPEG-4 systems standard, except +that an additional license and payment of royalties are necessary for +encoding in connection with (i) data stored or replicated in physical +media which is paid for on a title by title basis and/or (ii) data +which is paid for on a title by title basis and is transmitted to an +end user for permanent storage and/or use. Such additional license may +be obtained from MPEG LA, LLC. See http://www.mpegla.com for +additional details.</p> + +</blockquote></p> + +<p>Some free software like +<a href="https://handbrake.fr/">Handbrake</A> and +<a href="http://ffmpeg.org/">FFMPEG</a> uses GPL/LGPL licenses and do +not have any such terms included, so for those, there is no +requirement to limit the use to personal and non-commercial.</p> + + + + + Lenker for 2014-08-03 + http://people.skolelinux.org/pere/blog/Lenker_for_2014_08_03.html + http://people.skolelinux.org/pere/blog/Lenker_for_2014_08_03.html + Sun, 3 Aug 2014 23:00:00 +0200 + <p>Lenge siden jeg har hatt tid til å publisere lenker til skriverier +jeg har hatt glede og nytte av av å lese. Her er en liten norsk +lenkesamling.</p> + +<p><ul> + +<li><a href="http://www.nrk.no/ytring/sjoslag-om-fiskemilliardene-1.11576109">Sjøslag +om fiskemilliardene</a> (NRK Ytring 2014-03-03) - litt om hvordan de +norske felles matressurser røves fra felleskapet.</li> + +<li><a href="http://www.aftenposten.no/nyheter/Matkrisen-kan-komme-til-Norge-7522341.html">Matkrisen +kan komme til Norge</a> (Aftenposten 2014-4-01) - hvordan miljøendringene vil gjøre matproduksjonen i Norge mer sårbar.</li> + +<li><a href="http://www.nrk.no/ytring/norge-trenger-kornlager-1.11726744">Norge +trenger kornlager</a> (NRK Ytring 2014-06-07) Chr. Anton Smedshaug +forteller litt om Norges sårbare matsituasjon etter at Staten solgte +Norges kornlager.</li> + +<li><a href="http://www.nrk.no/norge/pst-vil-overvake-datatastaturer-1.11583286">PST +vil overvåke datatastaturer</a> (NRK 2014-03-04) - PST ønsker retten +til å bryte seg inn på private PC-er og legge inn spionprogrammer. 
+Hvilket nok vil gjøre Linux mer populært, men gjør at en i enda mindre +grad enn i dag kan stole på datamaskiner - neppe en god ide for +samfunnet totalt sett.</li> + +<li><a href="http://www.osloby.no/nyheter/Ruter-fremstar-som-et-pobelvelde-7490624.html">«Ruter +fremstår som et pøbelvelde»</a> (OsloBy 2014-03-05) - et eksempel på +hvordan kollektivtransportselskapet i Oslo håndterer sine kunder.</li> + +<li><a href="http://www.dagbladet.no/2014/03/05/nyheter/dbtv/reklame/clear_channel/32123808/">Clear +Channel nektet å vise Greenpeace-reklame i Oslo</a> (Dagbladet +2014-03-05) - forteller litt om hvordan hvilke budskap som når ut i +det offentlige rom kontrolleres i Norge.</li> + +<li><a href="http://www.dagbladet.no/2014/03/06/kultur/meninger/debattinnlegg/kronikk/22_juli/32175854/">Svarte +ikke på kritikken</a> (Dagbladet 2014-03-06) - innlegg fra Norsk +presseforbund der de nok en gang tar opp det forkastelige i at +politiet nå har full tilgang til å bedrive telefonkontroll av +advokater.</li> + +<li><a href="http://www.aftenposten.no/nyheter/uriks/Putin-spiller-poker_-ikke-sjakk-I-sjakk-har-man-regler-7495368.html">«Putin +spiller poker, ikke sjakk. I sjakk har man regler.»</a> (Aftenposten +2014-03-08) - sjakklegenden Kasparov forklarer litt om hvordan han ser +at Russlands politikk fungerer, blant annet i lys av started av +Ukraina-krisen.</li> + +<li><a href="http://www.aftenposten.no/meninger/kronikker/I-seng-med-fienden-7492605.html">I +seng med fienden</a> (Aftenposten 2014-03-10) - kronikk fra Eirik +H. Vinje om hvordan menn og kvinner settes opp mot hverandre i det +offentlige ordskiftet, kanskje på sviktende grunnlag.</li> + +<li><a href="http://www.aftenposten.no/amagasinet/Hvor-er-elevene-7501690.html">Fritt +frem for skulk</a> (Aftenposten 2014-03-14) - skildring av hvordan +norske elever i dag ikke lenger har rimelig krav om oppmøte på +skolen.</li> + +<li><a href="http://www.aftenposten.no/digital/Datalagringsdirektiv-avslorte-abort_-sykdom-og-vapenkjop--7503014.html">«Datalagringsdirektiv» +avslørte abort, sykdom og våpenkjøp</a> (Aftenposten 2014-03-14) - om +hvordan forskere har dokumentert hvordan innsamling av metadata om +telefoni og Internett-bruk kan være svært avslørende.</li> + +<li><a href="http://www.dagbladet.no/2014/03/14/kultur/meninger/ideer/lordagskommentaren/agnes_ravatn/32302856/">Konsentrasjonssvikt +på pensum</a> (Dagbladet 2014-03-14) - Kommentar om hvordan (feil) +bruk IKT i skolen kan ødelegge mer enn det bidrar til læring.</li> + +<li><a href="http://doremusnor.wordpress.com/2014/02/09/reservasjonsrettsstaten/">Reservasjonsrettsstaten</a> +(blogg fra Doremus 2014-02-09) - morsom beskrivelse om hvordan +regjeringens forslag til reservasjonsrett for leger kan utvides til å +gjelde alles samvittighet.</li> + +<li><a href="http://www.aftenposten.no/meninger/kronikker/Autoritar-gjokunge-7514915.html">Autoritær +gjøkunge</a> (Aftenposten 2014-03-25) - Kronikk av Bjørn Stærk om +snurpenots-overvåkningen som varsleren Snowden dokumenterte.</li> + +<li><a href="http://blogg.friprog.no/2014/03/leveransekrise-i-offentlig-sektor-mener-mike-bracken-executive-director-of-digital-in-the-cabinet-office/">Leveransekrise +i Offentlig sektor – mener Mike Bracken, Executive Director of Digital +in the Cabinet Office</a> (blogg fra Friprog-senteret 2014-03-26).</li> + +<li><a href="http://www.dagbladet.no/2014/03/26/kultur/meninger/kronikk/etiopia/avlytting/32499687/">Norge +må stanse avlyttingen</a> (Dagbladet 2014-03-26) - leserinnlegg fra +Felix Horne der han 
ber om at Norge gjør en innsats for å få slutt på +overvåkning av innbyggerne som gjøres i Norge av Etiopiske +myndigheter.</li> + +<li><a href="http://www.aftenposten.no/meninger/kronikker/Demokrati-er-ingen-naturlig-styreform-7521957.html">Demokrati +er ingen naturlig styreform</a> (Aftenposten 2014-04-01) - kronikk av +Stein Ringen om hvordan demokrati som styreform går tapt når +innbyggerne tar det for gitt.</li> + +<li><a href="http://www.nrk.no/ytring/ytringsansvar-ere-enhver-tilladte_-1.11618934">Ytringsansvar +ere Enhver tilladte!</a> (NRK Ytring 2014-04-01) - innspill fra Trygve +Svensson og Helge Svare om at hver enkelt av oss har et ansvar for å +ytre oss i den offentlige debatten.</li> + +<li><a href="http://www.aftenposten.no/meninger/Jeg-er-ingen-god-samfunnsborger-7527128.html">Jeg +er ingen god samfunnsborger</a> (Aftenposten 2014-04-16), kronikk av +Simen Tveitereid om alternative måter å motiveres i samfunnet, uten å +hige etter mer penger og flere ting.</li> + +<li><a href="http://www.aftenposten.no/meninger/debatt/Avgjorelsen-far-umiddelbar-virkning-7531811.html">DLD-dommen: +Avgjørelsen får umiddelbar virkning</a> (Aftenposten 2014-04-10) - +kronikk av Høyres Michael Tetzschner, en partiutbryter i DLD-saken som +stemte nei til DLD i Stortinget i 2011.</li> + +<li><a href="http://www.uhuru.biz/?p=1466">Datalagringsdirektivets +endelikt</a> (blogg fra John Wessel-Aas 2014-04-11) - oppsummering +av hvordan direktivet ble funnet ugyldig i EU-domstolen.</li> + +<li><a href="http://www.vg.no/nyheter/meninger/kronikk-kapitulasjonspresidenten/a/10147713/">Kronikk: +Kapitulasjonspresidenten</a> (VG 2014-04-22) - kronikk av Einar +Kr. Steffenak om hvordan Stortingspresidenten og regjeringen viser sin +prinsippløshet i møte med Kina.</li> + +<li><a href="http://www.aftenposten.no/meninger/kronikker/Innerst-inne-er-alle-nordmenn-7542617.html">Innerst +inne er alle nordmenn</a> (Aftenposten 2014-04-27) - kronikk fra Bjørn +Stærk om hvordan vi i Vesten i stor grad baserer oss på en fantasi om +at alle i verden bærer på en drøm om å bli som oss.</li> + +<li><a href="http://www.aftenposten.no/viten/uviten/Det-italienske-senatet-gav-seg-selv-134-milliarder-euro-i-sluttpakke--7575312.html">Det +italienske senatet gav seg selv 134 milliarder euro i sluttpakke</a> +(Aftenposten 2014-06-19) - forsker Simen Gaure forteller hvordan +løgner og fantasi fra nettkilder i stor grad blir akseptert som +sannhet - antagelig også av deg og meg.</li> + +<li><a href="http://www.dagbladet.no/2014/05/30/kultur/meninger/kronikk/skole/33576392/">Et +forsvar for bråkmakerne</a> (Dagbladet 2014-05-30) - kronikk av Dag +Øystein Nome som beskriver hvordan dagens skole ikke fungerer så godt +for mange elever.</li> + +<li><a href="http://www.osloby.no/nyheter/Betalte-med-slitt-seddel---havnet-i-arresten-7617208.html">Betalte +med slitt seddel - havnet i arresten</a> (Osloby 2014-06-25)) - +dokumentasjon av Oslopolitiets angrep på vår alles rett til å ferdes +uten elektronisk sporing. Jeg bruker kontanter i så stor grad som +mulig da banken ikke har noe med hvor jeg er og hva jeg kjøper. 
Vi +som gjør dette risikerer som beskrevet overgrep som frihetsberøvelse +og registrering og lagring av fingeravtrykk og bilde i politiets +database over mistenkte.</li> + +<li><a href="http://www.aftenposten.no/meninger/leder/Fredsprisen-til-Snowden-7620422.html">Fredsprisen +til Snowden</a> (Aftenposten 2014-06-28) - leder som forklarer hvorfor +varsleren Snowden bør få fredsprisen.</li> + +<li><a href="http://www.dagbladet.no/2014/08/01/kultur/meninger/dbmener/leder1/34598010/">Strategi +for politistaten</a> (Dagbladet 2014-08-01) - leder som advarer om +sterke krefter som bruker terrortrusselen til å lirke Norge nærmere å +bli en politistat.</li> + +<li><a href="http://www.nrk.no/ytring/vi-ma-tenke-nytt-om-narkotika-1.11859322">Vi +må tenke nytt om narkotika</a> (NRK Ytring 2014-08-03) - Mark Lewis +forklarer hvorfor legalisering og offentlig kontroll av +narkotikamarkedet er mye bedre enn å overlate det til kriminelle.</li> + + +</ul></p> + + + Debian Edu interview: Bernd Zeitzen http://people.skolelinux.org/pere/blog/Debian_Edu_interview__Bernd_Zeitzen.html @@ -341,591 +1139,5 @@ some or all of these features, please let me know.</p> - - Half the Coverity issues in Gnash fixed in the next release - http://people.skolelinux.org/pere/blog/Half_the_Coverity_issues_in_Gnash_fixed_in_the_next_release.html - http://people.skolelinux.org/pere/blog/Half_the_Coverity_issues_in_Gnash_fixed_in_the_next_release.html - Tue, 29 Apr 2014 14:20:00 +0200 - <p>I've been following <a href="http://www.getgnash.org/">the Gnash -project</a> for quite a while now. It is a free software -implementation of Adobe Flash, both a standalone player and a browser -plugin. Gnash implement support for the AVM1 format (and not the -newer AVM2 format - see -<a href="http://lightspark.github.io/">Lightspark</a> for that one), -allowing several flash based sites to work. Thanks to the friendly -developers at Youtube, it also work with Youtube videos, because the -Javascript code at Youtube detect Gnash and serve a AVM1 player to -those users. :) Would be great if someone found time to implement AVM2 -support, but it has not happened yet. If you install both Lightspark -and Gnash, Lightspark will invoke Gnash if it find a AVM1 flash file, -so you can get both handled as free software. Unfortunately, -Lightspark so far only implement a small subset of AVM2, and many -sites do not work yet.</p> - -<p>A few months ago, I started looking at -<a href="http://scan.coverity.com/">Coverity</a>, the static source -checker used to find heaps and heaps of bugs in free software (thanks -to the donation of a scanning service to free software projects by the -company developing this non-free code checker), and Gnash was one of -the projects I decided to check out. Coverity is able to find lock -errors, memory errors, dead code and more. A few days ago they even -extended it to also be able to find the heartbleed bug in OpenSSL. -There are heaps of checks being done on the instrumented code, and the -amount of bogus warnings is quite low compared to the other static -code checkers I have tested over the years.</p> - -<p>Since a few weeks ago, I've been working with the other Gnash -developers squashing bugs discovered by Coverity. I was quite happy -today when I checked the current status and saw that of the 777 issues -detected so far, 374 are marked as fixed. This make me confident that -the next Gnash release will be more stable and more dependable than -the previous one. 
Most of the reported issues were and are in the -test suite, but it also found a few in the rest of the code.</p> - -<p>If you want to help out, you find us on -<a href="https://lists.gnu.org/mailman/listinfo/gnash-dev">the -gnash-dev mailing list</a> and on -<a href="irc://irc.freenode.net/#gnash">the #gnash channel on -irc.freenode.net IRC server</a>.</p> - - - - - Install hardware dependent packages using tasksel (Isenkram 0.7) - http://people.skolelinux.org/pere/blog/Install_hardware_dependent_packages_using_tasksel__Isenkram_0_7_.html - http://people.skolelinux.org/pere/blog/Install_hardware_dependent_packages_using_tasksel__Isenkram_0_7_.html - Wed, 23 Apr 2014 14:50:00 +0200 - <p>It would be nice if it was easier in Debian to get all the hardware -related packages relevant for the computer installed automatically. -So I implemented one, using -<a href="http://packages.qa.debian.org/isenkram">my Isenkram -package</a>. To use it, install the tasksel and isenkram packages and -run tasksel as user root. You should be presented with a new option, -"Hardware specific packages (autodetected by isenkram)". When you -select it, tasksel will install the packages isenkram claim is fit for -the current hardware, hot pluggable or not.<p> - -<p>The implementation is in two files, one is the tasksel menu entry -description, and the other is the script used to extract the list of -packages to install. The first part is in -<tt>/usr/share/tasksel/descs/isenkram.desc</tt> and look like -this:</p> - -<p><blockquote><pre> -Task: isenkram -Section: hardware -Description: Hardware specific packages (autodetected by isenkram) - Based on the detected hardware various hardware specific packages are - proposed. -Test-new-install: mark show -Relevance: 8 -Packages: for-current-hardware -</pre></blockquote></p> - -<p>The second part is in -<tt>/usr/lib/tasksel/packages/for-current-hardware</tt> and look like -this:</p> - -<p><blockquote><pre> -#!/bin/sh -# -( - isenkram-lookup - isenkram-autoinstall-firmware -l -) | sort -u -</pre></blockquote></p> - -<p>All in all, a very short and simple implementation making it -trivial to install the hardware dependent package we all may want to -have installed on our machines. I've not been able to find a way to -get tasksel to tell you exactly which packages it plan to install -before doing the installation. So if you are curious or careful, -check the output from the isenkram-* command line tools first.</p> - -<p>The information about which packages are handling which hardware is -fetched either from the isenkram package itself in -/usr/share/isenkram/, from git.debian.org or from the APT package -database (using the Modaliases header). The APT package database -parsing have caused a nasty resource leak in the isenkram daemon (bugs -<a href="http://bugs.debian.org/719837">#719837</a> and -<a href="http://bugs.debian.org/730704">#730704</a>). The cause is in -the python-apt code (bug -<a href="http://bugs.debian.org/745487">#745487</a>), but using a -workaround I was able to get rid of the file descriptor leak and -reduce the memory leak from ~30 MiB per hardware detection down to -around 2 MiB per hardware detection. It should make the desktop -daemon a lot more useful. The fix is in version 0.7 uploaded to -unstable today.</p> - -<p>I believe the current way of mapping hardware to packages in -Isenkram is is a good draft, but in the future I expect isenkram to -use the AppStream data source for this. 
A proposal for getting proper -AppStream support into Debian is floating around as -<a href="https://wiki.debian.org/DEP-11">DEP-11</a>, and -<a href="https://wiki.debian.org/SummerOfCode2014/Projects#SummerOfCode2014.2FProjects.2FAppStreamDEP11Implementation.AppStream.2FDEP-11_for_the_Debian_Archive">GSoC -project</a> will take place this summer to improve the situation. I -look forward to seeing the result, and welcome patches for isenkram to -start using the information when it is ready.</p> - -<p>If you want your package to map to some specific hardware, either -add a "Xb-Modaliases" header to your control file like I did in -<a href="http://packages.qa.debian.org/pymissile">the pymissile -package</a> or submit a bug report with the details to the isenkram -package. See also -<a href="http://people.skolelinux.org/pere/blog/tags/isenkram/">all my -blog posts tagged isenkram</a> for details on the notation. I expect -the information will be migrated to AppStream eventually, but for the -moment I got no better place to store it.</p> - - - - - FreedomBox milestone - all packages now in Debian Sid - http://people.skolelinux.org/pere/blog/FreedomBox_milestone___all_packages_now_in_Debian_Sid.html - http://people.skolelinux.org/pere/blog/FreedomBox_milestone___all_packages_now_in_Debian_Sid.html - Tue, 15 Apr 2014 22:10:00 +0200 - <p>The <a href="https://wiki.debian.org/FreedomBox">Freedombox -project</a> is working on providing the software and hardware to make -it easy for non-technical people to host their data and communication -at home, and being able to communicate with their friends and family -encrypted and away from prying eyes. It is still going strong, and -today a major mile stone was reached.</p> - -<p>Today, the last of the packages currently used by the project to -created the system images were accepted into Debian Unstable. It was -the freedombox-setup package, which is used to configure the images -during build and on the first boot. Now all one need to get going is -the build code from the freedom-maker git repository and packages from -Debian. And once the freedombox-setup package enter testing, we can -build everything directly from Debian. :)</p> - -<p>Some key packages used by Freedombox are -<a href="http://packages.qa.debian.org/freedombox-setup">freedombox-setup</a>, -<a href="http://packages.qa.debian.org/plinth">plinth</a>, -<a href="http://packages.qa.debian.org/pagekite">pagekite</a>, -<a href="http://packages.qa.debian.org/tor">tor</a>, -<a href="http://packages.qa.debian.org/privoxy">privoxy</a>, -<a href="http://packages.qa.debian.org/owncloud">owncloud</a> and -<a href="http://packages.qa.debian.org/dnsmasq">dnsmasq</a>. There -are plans to integrate more packages into the setup. User -documentation is maintained on the Debian wiki. Please -<a href="https://wiki.debian.org/FreedomBox/Manual/Jessie">check out -the manual</a> and help us improve it.</p> - -<p>To test for yourself and create boot images with the FreedomBox -setup, run this on a Debian machine using a user with sudo rights to -become root:</p> - -<p><pre> -sudo apt-get install git vmdebootstrap mercurial python-docutils \ - mktorrent extlinux virtualbox qemu-user-static binfmt-support \ - u-boot-tools -git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \ - freedom-maker -make -C freedom-maker dreamplug-image raspberry-image virtualbox-image -</pre></p> - -<p>Root access is needed to run debootstrap and mount loopback -devices. 
See the README in the freedom-maker git repo for more -details on the build. If you do not want all three images, trim the -make line. Note that the virtualbox-image target is not really -virtualbox specific. It create a x86 image usable in kvm, qemu, -vmware and any other x86 virtual machine environment. You might need -the version of vmdebootstrap in Jessie to get the build working, as it -include fixes for a race condition with kpartx.</p> - -<p>If you instead want to install using a Debian CD and the preseed -method, boot a Debian Wheezy ISO and use this boot argument to load -the preseed values:</p> - -<p><pre> -url=<a href="http://www.reinholdtsen.name/freedombox/preseed-jessie.dat">http://www.reinholdtsen.name/freedombox/preseed-jessie.dat</a> -</pre></p> - -<p>I have not tested it myself the last few weeks, so I do not know if -it still work.</p> - -<p>If you wonder how to help, one task you could look at is using -systemd as the boot system. It will become the default for Linux in -Jessie, so we need to make sure it is usable on the Freedombox. I did -a simple test a few weeks ago, and noticed dnsmasq failed to start -during boot when using systemd. I suspect there are other problems -too. :) To detect problems, there is a test suite included, which can -be run from the plinth web interface.</p> - -<p>Give it a go and let us know how it goes on the mailing list, and help -us get the new release published. :) Please join us on -<a href="irc://irc.debian.org:6667/%23freedombox">IRC (#freedombox on -irc.debian.org)</a> and -<a href="http://lists.alioth.debian.org/mailman/listinfo/freedombox-discuss">the -mailing list</a> if you want to help make this vision come true.</p> - - - - - Språkkoder for POSIX locale i Norge - http://people.skolelinux.org/pere/blog/Spr_kkoder_for_POSIX_locale_i_Norge.html - http://people.skolelinux.org/pere/blog/Spr_kkoder_for_POSIX_locale_i_Norge.html - Fri, 11 Apr 2014 21:30:00 +0200 - <p>For 12 år siden, skrev jeg et lite notat om -<a href="http://i18n.skolelinux.no/localekoder.txt">bruk av språkkoder -i Norge</a>. Jeg ble nettopp minnet på dette da jeg fikk spørsmål om -notatet fortsatt var aktuelt, og tenkte det var greit å repetere hva -som fortsatt gjelder. Det jeg skrev da er fortsatt like aktuelt.</p> - -<p>Når en velger språk i programmer på unix, så velger en blant mange -språkkoder. For språk i Norge anbefales følgende språkkoder (anbefalt -locale i parantes):</p> - -<p><dl> -<dt>nb (nb_NO)</dt><dd>Bokmål i Norge</dd> -<dt>nn (nn_NO)</dt><dd>Nynorsk i Norge</dd> -<dt>se (se_NO)</dt><dd>Nordsamisk i Norge</dd> -</dl></p> - -<p>Alle programmer som bruker andre koder bør endres.</p> - -<p>Språkkoden bør brukes når .po-filer navngis og installeres. Dette -er ikke det samme som locale-koden. For Norsk Bokmål, så bør filene -være navngitt nb.po, mens locale (LANG) bør være nb_NO.</p> - -<p>Hvis vi ikke får standardisert de kodene i alle programmene med -norske oversettelser, så er det umulig å gi LANG-variablen ett innhold -som fungerer for alle programmer.</p> - -<p>Språkkodene er de offisielle kodene fra ISO 639, og bruken av dem i -forbindelse med POSIX localer er standardisert i RFC 3066 og ISO -15897. Denne anbefalingen er i tråd med de angitte standardene.</p> - -<p>Følgende koder er eller har vært i bruk som locale-verdier for -"norske" språk. 
Disse bør unngås, og erstattes når de oppdages:</p> - -<p><table> -<tr><td>norwegian</td><td>-> nb_NO</td></tr> -<tr><td>bokmål </td><td>-> nb_NO</td></tr> -<tr><td>bokmal </td><td>-> nb_NO</td></tr> -<tr><td>nynorsk </td><td>-> nn_NO</td></tr> -<tr><td>no </td><td>-> nb_NO</td></tr> -<tr><td>no_NO </td><td>-> nb_NO</td></tr> -<tr><td>no_NY </td><td>-> nn_NO</td></tr> -<tr><td>sme_NO </td><td>-> se_NO</td></tr> -</table></p> - -<p>Merk at når det gjelder de samiske språkene, at se_NO i praksis -henviser til nordsamisk i Norge, mens f.eks. smj_NO henviser til -lulesamisk. Dette notatet er dog ikke ment å gi råd rundt samiske -språkkoder, der gjør -<a href="http://www.divvun.no/">Divvun-prosjektet</a> en bedre -jobb.</p> - -<p><strong>Referanser:</strong></p> - -<ul> - - <li><a href="http://www.rfc-base.org/rfc-3066.html">RFC 3066 - Tags - for the Identification of Languages</a> (Erstatter RFC 1766)</li> - - <li><a href="http://www.loc.gov/standards/iso639-2/langcodes.html">ISO - 639</a> - Codes for the Representation of Names of Languages</li> - - <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n897-14652w25.pdf">ISO - DTR 14652</a> - locale-standard Specification method for cultural - conventions</li> - - <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n610.pdf">ISO - 15897: Registration procedures for cultural elements (cultural - registry)</a>, - <a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n849-15897wd6.pdf">(nytt - draft)</a></li> - - <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/">ISO/IEC - JTC1/SC22/WG20</a> - Gruppen for i18n-standardisering i ISO</li> - -<ul> - - - - - S3QL, a locally mounted cloud file system - nice free software - http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html - http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html - Wed, 9 Apr 2014 11:30:00 +0200 - <p>For a while now, I have been looking for a sensible offsite backup -solution for use at home. My requirements are simple, it must be -cheap and locally encrypted (in other words, I keep the encryption -keys, the storage provider do not have access to my private files). -One idea me and my friends had many years ago, before the cloud -storage providers showed up, was to use Google mail as storage, -writing a Linux block device storing blocks as emails in the mail -service provided by Google, and thus get heaps of free space. On top -of this one can add encryption, RAID and volume management to have -lots of (fairly slow, I admit that) cheap and encrypted storage. But -I never found time to implement such system. But the last few weeks I -have looked at a system called -<a href="https://bitbucket.org/nikratio/s3ql/">S3QL</a>, a locally -mounted network backed file system with the features I need.</p> - -<p>S3QL is a fuse file system with a local cache and cloud storage, -handling several different storage providers, any with Amazon S3, -Google Drive or OpenStack API. There are heaps of such storage -providers. S3QL can also use a local directory as storage, which -combined with sshfs allow for file storage on any ssh server. S3QL -include support for encryption, compression, de-duplication, snapshots -and immutable file systems, allowing me to mount the remote storage as -a local mount point, look at and use the files as if they were local, -while the content is stored in the cloud as well. This allow me to -have a backup that should survive fire. 
The file system can not be -shared between several machines at the same time, as only one can -mount it at the time, but any machine with the encryption key and -access to the storage service can mount it if it is unmounted.</p> - -<p>It is simple to use. I'm using it on Debian Wheezy, where the -package is included already. So to get started, run <tt>apt-get -install s3ql</tt>. Next, pick a storage provider. I ended up picking -Greenqloud, after reading their nice recipe on -<a href="https://greenqloud.zendesk.com/entries/44611757-How-To-Use-S3QL-to-mount-a-StorageQloud-bucket-on-Debian-Wheezy">how -to use S3QL with their Amazon S3 service</a>, because I trust the laws -in Iceland more than those in USA when it come to keeping my personal -data safe and private, and thus would rather spend money on a company -in Iceland. Another nice recipe is available from the article -<a href="http://www.admin-magazine.com/HPC/Articles/HPC-Cloud-Storage">S3QL -Filesystem for HPC Storage</a> by Jeff Layton in the HPC section of -Admin magazine. When the provider is picked, figure out how to get -the API key needed to connect to the storage API. With Greencloud, -the key did not show up until I had added payment details to my -account.</p> - -<p>Armed with the API access details, it is time to create the file -system. First, create a new bucket in the cloud. This bucket is the -file system storage area. I picked a bucket name reflecting the -machine that was going to store data there, but any name will do. -I'll refer to it as <tt>bucket-name</tt> below. In addition, one need -the API login and password, and a locally created password. Store it -all in ~root/.s3ql/authinfo2 like this: - -<p><blockquote><pre> -[s3c] -storage-url: s3c://s.greenqloud.com:443/bucket-name -backend-login: API-login -backend-password: API-password -fs-passphrase: local-password -</pre></blockquote></p> - -<p>I create my local passphrase using <tt>pwget 50</tt> or similar, -but any sensible way to create a fairly random password should do it. -Armed with these details, it is now time to run mkfs, entering the API -details and password to create it:</p> - -<p><blockquote><pre> -# mkdir -m 700 /var/lib/s3ql-cache -# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \ - --ssl s3c://s.greenqloud.com:443/bucket-name -Enter backend login: -Enter backend password: -Before using S3QL, make sure to read the user's guide, especially -the 'Important Rules to Avoid Loosing Data' section. -Enter encryption password: -Confirm encryption password: -Generating random encryption key... -Creating metadata tables... -Dumping metadata... -..objects.. -..blocks.. -..inodes.. -..inode_blocks.. -..symlink_targets.. -..names.. -..contents.. -..ext_attributes.. -Compressing and uploading metadata... -Wrote 0.00 MB of compressed metadata. -# </pre></blockquote></p> - -<p>The next step is mounting the file system to make the storage available. - -<p><blockquote><pre> -# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \ - --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql -Using 4 upload threads. -Downloading and decompressing metadata... -Reading metadata... -..objects.. -..blocks.. -..inodes.. -..inode_blocks.. -..symlink_targets.. -..names.. -..contents.. -..ext_attributes.. -Mounting filesystem... -# df -h /s3ql -Filesystem Size Used Avail Use% Mounted on -s3c://s.greenqloud.com:443/bucket-name 1.0T 0 1.0T 0% /s3ql -# -</pre></blockquote></p> - -<p>The file system is now ready for use. 
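-<p>A simple backup run into the mounted file system could then look
-like this sketch, where the source directories and the target path
-under /s3ql are only examples to adjust to your own setup:</p>
-
-<p><blockquote><pre>
-#!/bin/sh
-# Sketch: copy a few local directories into the S3QL mount point.
-rsync -aHx --delete /home /etc /s3ql/backups/
-</pre></blockquote></p>
-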
I use rsync to store my -backups in it, and as the metadata used by rsync is downloaded at -mount time, no network traffic (and storage cost) is triggered by -running rsync. To unmount, one should not use the normal umount -command, as this will not flush the cache to the cloud storage, but -instead running the umount.s3ql command like this: - -<p><blockquote><pre> -# umount.s3ql /s3ql -# -</pre></blockquote></p> - -<p>There is a fsck command available to check the file system and -correct any problems detected. This can be used if the local server -crashes while the file system is mounted, to reset the "already -mounted" flag. This is what it look like when processing a working -file system:</p> - -<p><blockquote><pre> -# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name -Using cached metadata. -File system seems clean, checking anyway. -Checking DB integrity... -Creating temporary extra indices... -Checking lost+found... -Checking cached objects... -Checking names (refcounts)... -Checking contents (names)... -Checking contents (inodes)... -Checking contents (parent inodes)... -Checking objects (reference counts)... -Checking objects (backend)... -..processed 5000 objects so far.. -..processed 10000 objects so far.. -..processed 15000 objects so far.. -Checking objects (sizes)... -Checking blocks (referenced objects)... -Checking blocks (refcounts)... -Checking inode-block mapping (blocks)... -Checking inode-block mapping (inodes)... -Checking inodes (refcounts)... -Checking inodes (sizes)... -Checking extended attributes (names)... -Checking extended attributes (inodes)... -Checking symlinks (inodes)... -Checking directory reachability... -Checking unix conventions... -Checking referential integrity... -Dropping temporary indices... -Backing up old metadata... -Dumping metadata... -..objects.. -..blocks.. -..inodes.. -..inode_blocks.. -..symlink_targets.. -..names.. -..contents.. -..ext_attributes.. -Compressing and uploading metadata... -Wrote 0.89 MB of compressed metadata. -# -</pre></blockquote></p> - -<p>Thanks to the cache, working on files that fit in the cache is very -quick, about the same speed as local file access. Uploading large -amount of data is to me limited by the bandwidth out of and into my -house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, -which is very close to my upload speed, and downloading the same -Debian installation ISO gave me 610 kiB/s, close to my download speed. -Both were measured using <tt>dd</tt>. So for me, the bottleneck is my -network, not the file system code. I do not know what a good cache -size would be, but suspect that the cache should e larger than your -working set.</p> - -<p>I mentioned that only one machine can mount the file system at the -time. If another machine try, it is told that the file system is -busy:</p> - -<p><blockquote><pre> -# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \ - --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql -Using 8 upload threads. -Backend reports that fs is still mounted elsewhere, aborting. -# -</pre></blockquote></p> - -<p>The file content is uploaded when the cache is full, while the -metadata is uploaded once every 24 hour by default. 
To ensure the -file system content is flushed to the cloud, one can either umount the -file system, or ask S3QL to flush the cache and metadata using -s3qlctrl: - -<p><blockquote><pre> -# s3qlctrl upload-meta /s3ql -# s3qlctrl flushcache /s3ql -# -</pre></blockquote></p> - -<p>If you are curious about how much space your data uses in the -cloud, and how much compression and deduplication cut down on the -storage usage, you can use s3qlstat on the mounted file system to get -a report:</p> - -<p><blockquote><pre> -# s3qlstat /s3ql -Directory entries: 9141 -Inodes: 9143 -Data blocks: 8851 -Total data size: 22049.38 MB -After de-duplication: 21955.46 MB (99.57% of total) -After compression: 21877.28 MB (99.22% of total, 99.64% of de-duplicated) -Database size: 2.39 MB (uncompressed) -(some values do not take into account not-yet-uploaded dirty blocks in cache) -# -</pre></blockquote></p> - -<p>I mentioned earlier that there are several possible suppliers of -storage. I did not try to locate them all, but am aware of at least -<a href="https://www.greenqloud.com/">Greenqloud</a>, -<a href="http://drive.google.com/">Google Drive</a>, -<a href="http://aws.amazon.com/s3/">Amazon S3 web serivces</a>, -<a href="http://www.rackspace.com/">Rackspace</a> and -<a href="http://crowncloud.net/">Crowncloud</A>. The latter even -accept payment in Bitcoin. Pick one that suit your need. Some of -them provide several GiB of free storage, but the prize models are -quite different and you will have to figure out what suits you -best.</p> - -<p>While researching this blog post, I had a look at research papers -and posters discussing the S3QL file system. There are several, which -told me that the file system is getting a critical check by the -science community and increased my confidence in using it. One nice -poster is titled -"<a href="http://www.lanl.gov/orgs/adtsc/publications/science_highlights_2013/docs/pg68_69.pdf">An -Innovative Parallel Cloud Storage System using OpenStack’s SwiftObject -Store and Transformative Parallel I/O Approach</a>" by Hsing-Bung -Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields -and Pamela Smith. Please have a look.</p> - -<p>Given my problems with different file systems earlier, I decided to -check out the mounted S3QL file system to see if it would be usable as -a home directory (in other word, that it provided POSIX semantics when -it come to locking and umask handling etc). Running -<a href="http://people.skolelinux.org/pere/blog/Testing_if_a_file_system_can_be_used_for_home_directories___.html">my -test code to check file system semantics</a>, I was happy to discover that -no error was found. So the file system can be used for home -directories, if one chooses to do so.</p> - -<p>If you do not want a locally file system, and want something that -work without the Linux fuse file system, I would like to mention the -<a href="http://www.tarsnap.com/">Tarsnap service</a>, which also -provide locally encrypted backup using a command line client. It have -a nicer access control system, where one can split out read and write -access, allowing some systems to write to the backup and others to -only read from it.</p> - -<p>As usual, if you use Bitcoin and want to show your support of my -activities, please send Bitcoin donations to my address -<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p> - - -