<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
<item>
- <title>Debian Edu interview: Bernd Zeitzen</title>
- <link>http://people.skolelinux.org/pere/blog/Debian_Edu_interview__Bernd_Zeitzen.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Debian_Edu_interview__Bernd_Zeitzen.html</guid>
- <pubDate>Thu, 31 Jul 2014 08:30:00 +0200</pubDate>
- <description><p>The complete and free “out of the box” software solution for
-schools, <a href="http://www.skolelinux.org/">Debian Edu /
-Skolelinux</a>, is used quite a lot in Germany, and one of the people
-involved is Bernd Zeitzen, who shows up on the project mailing lists
-from time to time with interesting questions and tips on how to adjust
-the setup. I managed to interview him this summer.</p>
-
-<p><strong>Who are you, and how do you spend your days?</strong></p>
-
-<p>My name is Bernd Zeitzen and I'm married to Hedda, a
-self-employed physiotherapist. My former profession is tool maker,
-but I haven't worked in this job for 30 years. 30 years ago I started
-to support my wife and became her office worker and, a few years
-later, the administrator of a small computer network, today based on
-Ubuntu Server (Samba, OpenVPN). For her daily work she has to use
-Windows desktops because the software she needs to organize her
-business only works with Windows. :-(</p>
-
-<p>In 1988 we started with one PC and DOS, then I learned to use
-Windows 98, 2000, XP, …, 8, Ubuntu, MacOSX. Today we are running a
-Linux server with 6 Windows clients, and 10 people (teachers of
-children with special needs, speech therapists, occupational
-therapists, a psychologist and office workers) use our Samba shares
-via OpenVPN to work with the documentation of our patients.</p>
-
-<p><strong>How did you get in contact with the Skolelinux / Debian Edu
-project?</strong></p>
-
-<p>Two years ago a friend of mine asked me if I wanted a job at his
-school (<a href="http://www.gymnasium-harsewinkel.de/">Gymnasium
-Harsewinkel</a>). They had started with Skolelinux / Debian Edu and
-were looking for people to support the teachers using the software
-and the network, and to teach the pupils to improve their computer
-skills in optional lessons. I'm spending 4-6 hours a week on this
-job.</p>
-
-<p><strong>What do you see as the advantages of Skolelinux / Debian
-Edu?</strong></p>
-
-<p>The independence.</p>
-
-<p>First: Every person is allowed to use, share and develop the
-software. Even if you are poor, you are allowed to use the software
-included in Skolelinux/Debian Edu and all the other Free Software.</p>
-
-<p>Second: The software runs on old machines, and this gives us the
-possibility to recycle computers weeded out of offices. The servers
-and desktops have been running for more than two years and they work
-reliably.</p>
-
-<p>We have two servers (one tjener and one terminal server), 45
-workstations in three classrooms and seven laptops as a mobile
-solution for all classrooms. These machines all boot from the
-terminal server. At the moment we are installing 30 laptops as mobile
-workstations, so the pupils will be able to work with these machines
-in their classrooms. Internet access is provided by a WLAN router
-connected to the school's network. This is all done without a
-dedicated system administrator or a computer science teacher.</p>
-
-<p><strong>What do you see as the disadvantages of Skolelinux / Debian
-Edu?</strong></p>
-
-<p>Teachers and pupils are Windows users. &lt;Irony on&gt; And Linux
-isn't cool. It's software for freaks using the command line. &lt;Irony
-off&gt; They don't appreciate the stability of the system.</p>
-
-<p><strong>Which free software do you use daily?</strong></p>
-
-<p>Firefox, Thunderbird, LibreOffice, Ubuntu Server 12.04 (Samba,
-Apache, MySQL, Joomla!, … and Skolelinux / Debian Edu)</p>
-
-<p><strong>Which strategy do you believe is the right one to use to
-get schools to use free software?</strong></p>
-
-<p>In Germany the situation is that every school is free to decide
-which software it wants to use. This decision is influenced by
-teachers who learned to use Windows and MS Office. They buy a PC with
-Windows preinstalled and an additional trial version of MS Office.
-They don't know about the possibility of using Free Software
-instead. Another problem is the school book publishers: the software
-they bundle with their books is developed for Windows.</p>
-</description>
- </item>
-
- <item>
- <title>98.6 percent done with the Norwegian draft translation of Free Culture</title>
- <link>http://people.skolelinux.org/pere/blog/98_6_percent_done_with_the_Norwegian_draft_translation_of_Free_Culture.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/98_6_percent_done_with_the_Norwegian_draft_translation_of_Free_Culture.html</guid>
- <pubDate>Wed, 23 Jul 2014 22:40:00 +0200</pubDate>
- <description><p>This summer I finally had time to continue working on the Norwegian
-<a href="http://www.docbook.org/">docbook</a> version of the 2004 book
-<a href="http://free-culture.cc/">Free Culture</a> by Lawrence Lessig,
-to get a Norwegian text explaining the problems with today's
-copyright law. Yesterday, I finally completed translating the book
-text. There are still some foot/end notes left to translate, the
-colophon page needs to be rewritten, and a few words and phrases still
-need to be translated, but the Norwegian text is ready for the first
-proof reading. :) More spell checking is needed, and several
-illustrations need to be cleaned up. The work stalled because I had
-to give priority to other projects during the last year, and the
-progress graph of the translation shows this very well:</p>
-
-<p><img width="80%" align="center" src="https://github.com/petterreinholdtsen/free-culture-lessig/raw/master/progress.png"></p>
-
-<p>If you want to read the result, check out the
-<a href="https://github.com/petterreinholdtsen/free-culture-lessig">github</a>
-project pages and the
-<a href="https://github.com/petterreinholdtsen/free-culture-lessig/blob/master/archive/freeculture.nb.pdf?raw=true">PDF</a>,
-<a href="https://github.com/petterreinholdtsen/free-culture-lessig/blob/master/archive/freeculture.nb.epub?raw=true">EPUB</a>
-and HTML version available in the
-<a href="https://github.com/petterreinholdtsen/free-culture-lessig/tree/master/archive">archive
-directory</a>.</p>
-
-<p>Please report typos, bugs and improvements to the github project if
-you find any.</p>
-</description>
- </item>
-
- <item>
- <title>From English wiki to translated PDF and epub via Docbook</title>
- <link>http://people.skolelinux.org/pere/blog/From_English_wiki_to_translated_PDF_and_epub_via_Docbook.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/From_English_wiki_to_translated_PDF_and_epub_via_Docbook.html</guid>
- <pubDate>Tue, 17 Jun 2014 11:30:00 +0200</pubDate>
- <description><p>The <a href="http://www.skolelinux.org/">Debian Edu / Skolelinux
-project</a> provides an instruction manual for teachers, system
-administrators and other users that contains useful tips for setting
-up and maintaining a Debian Edu installation. This text is about how
-the text processing of this manual is handled in the project.</p>
-
-<p>One goal of the project is to provide information in the native
-language of its users, and for this we need to handle translations.
-But we also want to make sure each language contains the same
-information, so we need a good way to keep the translations in sync.
-And we want it to be easy for our users to improve the documentation,
-avoiding the need to learn special formats or tools to contribute,
-and the obvious way to do this is to make it possible to edit the
-documentation using a web browser. We also want it to be easy for
-translators to keep the translation up to date, and to help them
-figure out what needs to be translated. Here is the list of tools
-and the process we have found while trying to reach all these
-goals.</p>
-
-<p>We maintain the authoritative source of our manual in the
-<a href="https://wiki.debian.org/DebianEdu/Documentation/Wheezy/">Debian
-wiki</a>, as several wiki pages written in English. It consists of
-one front page with references to the different chapters, several
-pages for each chapter, and finally one "collection page" gluing all
-the chapters together into one large web page (aka
-<a href="https://wiki.debian.org/DebianEdu/Documentation/Wheezy/AllInOne">the
-AllInOne page</a>). The AllInOne page is the one used for further
-processing and translations. Thanks to the fact that the
-<a href="http://moinmo.in/">MoinMoin</a> installation on
-wiki.debian.org supports exporting pages in
-<a href="http://www.docbook.org/">the Docbook format</a>, we can fetch
-the list of pages to export using the raw version of the AllInOne
-page, and loop over each of them to generate a Docbook XML version of
-the manual. This process also downloads images and transforms image
-references to use the locally downloaded images. The generated
-Docbook XML files are slightly broken, so some post-processing is done
-using the <tt>documentation/scripts/get_manual</tt> program, and the
-result is a nice Docbook XML file (debian-edu-wheezy-manual.xml) and
-a handful of images. The XML file can now be used to generate PDF,
-HTML and epub versions of the English manual. This is the basic step
-of our process, making PDF (using dblatex), HTML (using xsltproc) and
-epub (using dbtoepub) versions from Docbook XML, and the resulting
-files are placed in the debian-edu-doc-en binary package.</p>
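This basic step can be sketched as a short shell script. It is a sketch only: the exact flags and the XSL stylesheet path are assumptions, and each converter is only invoked when it is installed.

```shell
# Derive output file names from the Docbook XML file named in the text,
# then run each converter if it is available on this system.
xmlfile=debian-edu-wheezy-manual.xml
base=${xmlfile%.xml}

if command -v dblatex >/dev/null 2>&1; then
    dblatex -o "$base.pdf" "$xmlfile"        # Docbook XML -> PDF
fi
if command -v xsltproc >/dev/null 2>&1; then
    # The stylesheet path is an assumption; it varies between systems.
    xsltproc -o "$base.html" \
        /usr/share/xml/docbook/stylesheet/docbook-xsl/xhtml/docbook.xsl \
        "$xmlfile"                           # Docbook XML -> HTML
fi
if command -v dbtoepub >/dev/null 2>&1; then
    dbtoepub -o "$base.epub" "$xmlfile"      # Docbook XML -> epub
fi

echo "$base.pdf $base.html $base.epub"
```

The real build, of course, runs inside the debian-edu-doc package machinery rather than as a loose script.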
-
-<p>But English documentation is not enough for us. We want translated
-documentation too, and we want to make it easy for translators to
-track the English original. For this we use the
-<a href="http://packages.qa.debian.org/p/poxml.html">poxml</a> package,
-which allows us to transform the English Docbook XML file into a
-translation template (a .pot file), usable with the normal gettext
-based translation tools used by those translating free software. The
-pot file is used to create and maintain translation files (several
-.po files), which the translators update with the native language
-translations of all titles, paragraphs and blocks of text in the
-original. The next step is combining the original English Docbook XML
-and the translation file (say debian-edu-wheezy-manual.nb.po), to
-create a translated Docbook XML file (in this case
-debian-edu-wheezy-manual.nb.xml). This translated (or partly
-translated, if the translation is not complete) Docbook XML file can
-then be used like the original to create a PDF, HTML and epub version
-of the documentation.</p>
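The round trip described above can be sketched in shell. The xml2pot and po2xml commands come from the poxml package; the exact invocations are assumptions based on their manual pages, and each command only runs when installed.

```shell
# English Docbook XML -> .pot template -> per-language .po -> translated XML.
manual=debian-edu-wheezy-manual
lang=nb

if command -v xml2pot >/dev/null 2>&1; then
    xml2pot "$manual.xml" > "$manual.pot"    # extract translatable strings
fi
# Translators maintain $manual.$lang.po against the template, then:
if command -v po2xml >/dev/null 2>&1; then
    po2xml "$manual.xml" "$manual.$lang.po" > "$manual.$lang.xml"
fi
echo "$manual.$lang.xml"
```

The resulting translated XML file is what the PDF/HTML/epub step consumes for each language.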
-
-<p>The translators use different tools to edit the .po files. We
-recommend using
-<a href="http://www.kde.org/applications/development/lokalize/">lokalize</a>,
-while some use Emacs or Vi, and others use web-based editors like
-<a href="http://pootle.translatehouse.org/">Pootle</a> or
-<a href="https://www.transifex.com/">Transifex</a>. All we care about
-is where the .po file ends up: in our git repository. Updated
-translations can either be committed directly to git, or submitted as
-<a href="https://bugs.debian.org/src:debian-edu-doc">bug reports
-against the debian-edu-doc package</a>.</p>
-
-<p>One challenge is images, which both might need to be translated (if
-they show translated user applications), and are needed in different
-formats when creating PDF and HTML versions (epub is an HTML version in
-this regard). For this we transform the original PNG images to the
-needed density and format during build, and have a way to provide
-translated images by storing translated versions in
-images/$LANGUAGECODE/. I am a bit unsure about the details here. The
-package maintainers know more.</p>
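As a sketch of the translated-image lookup just described (the image name here is hypothetical, and as noted the exact details live with the package maintainers):

```shell
# Prefer a translated image from images/$LANGUAGECODE/ when one exists,
# falling back to the English original.
LANGUAGECODE=nb
name=network-arch.png            # hypothetical image name
if [ -f "images/$LANGUAGECODE/$name" ]; then
    img="images/$LANGUAGECODE/$name"
else
    img="images/$name"
fi
# PDF output needs a different density/format; ImageMagick can do the
# transformation during build (the flags are an assumption):
if command -v convert >/dev/null 2>&1 && [ -f "$img" ]; then
    convert -density 300 "$img" "${img%.png}.eps"
fi
echo "$img"
```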
-
-<p>If you wonder what the result looks like, we provide
-<a href="http://maintainer.skolelinux.org/debian-edu-doc/">the content
-of the documentation packages on the web</a>. See for example the
-<a href="http://maintainer.skolelinux.org/debian-edu-doc/it/debian-edu-wheezy-manual.pdf">Italian
-PDF version</a> or the
-<a href="http://maintainer.skolelinux.org/debian-edu-doc/de/debian-edu-wheezy-manual.html">German
-HTML version</a>. We do not yet build the epub version by default,
-but perhaps it will be done in the future.</p>
-
-<p>To learn more, check out
-<a href="http://packages.qa.debian.org/d/debian-edu-doc.html">the
-debian-edu-doc package</a>,
-<a href="https://wiki.debian.org/DebianEdu/Documentation/Wheezy/">the
-manual on the wiki</a> and
-<a href="https://wiki.debian.org/DebianEdu/Documentation/Wheezy/Translations">the
-translation instructions</a> in the manual.</p>
-</description>
- </item>
-
- <item>
- <title>How to easily download movies from NRK with the "new" solution</title>
- <link>http://people.skolelinux.org/pere/blog/Hvordan_enkelt_laste_ned_filmer_fra_NRK_med_den__nye__l_sningen.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Hvordan_enkelt_laste_ned_filmer_fra_NRK_med_den__nye__l_sningen.html</guid>
- <pubDate>Mon, 16 Jun 2014 19:20:00 +0200</pubDate>
- <description><p>I still need to download clips from NRK's web site
-now and then, to watch later when I am offline, but
-<a href="http://people.skolelinux.org/pere/blog/Hvordan_enkelt_laste_ned_filmer_fra_NRK.html">my
-recipe from 2011</a> stopped working when NRK changed its playback
-method. Today I finally got around to looking for an updated
-solution, and I am very happy to report that the easiest way to
-download a clip is to use the latest version, 2014.06.07, of
-<a href="http://rg3.github.io/youtube-dl/">youtube-dl</a>. The support
-in youtube-dl <a href="https://github.com/rg3/youtube-dl/issues/2980">was
-added 23 days ago</a>, and
-<a href="http://packages.qa.debian.org/y/youtube-dl.html">the version
-in Debian</a> also works fine as a backport to Debian Wheezy. There
-is one small problem: it only handles URLs with lower case letters,
-but if you have a URL with upper case letters you can simply convert
-them all to lower case to get youtube-dl to download it. I just
-reported
-<a href="https://github.com/rg3/youtube-dl/issues/2980">the problem to
-the developers</a>, and expect they will fix it soon.</p>
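The workaround just mentioned (lowercasing the URL before handing it to youtube-dl) is easy to script. A sketch, with a hypothetical mixed-case URL, where the actual download only runs when explicitly enabled:

```shell
# Convert an NRK URL to all lower case to work around the case
# sensitivity bug in youtube-dl's NRK support.
url='http://tv.nrk.no/program/KOID23005014/USAs-Hemmelige-Avlytting'  # hypothetical mixed-case URL
lcurl=$(printf '%s' "$url" | tr '[:upper:]' '[:lower:]')

# Only attempt the download when requested and youtube-dl is installed.
if [ "${DO_DOWNLOAD:-no}" = yes ] && command -v youtube-dl >/dev/null 2>&1; then
    youtube-dl "$lcurl"
fi
echo "$lcurl"
```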
-
-<p>Everything is thus ready for downloading the documentaries about
-<a href="http://tv.nrk.no/program/KOID23005014/usas-hemmelige-avlytting">the
-USA's secret surveillance</a> and
-<a href="http://tv.nrk.no/program/KOID23005114/selskapene-bak-usas-avlytting">the
-companies behind the USA's surveillance</a>, as well as
-<a href="http://tv.nrk.no/program/KOID20005814/et-moete-med-edward-snowden">the
-interview with Edward Snowden done by the German TV channel ARD</a>.
-I recommend everyone to watch these, together with
-<a href="http://media.ccc.de/browse/congress/2013/30C3_-_5713_-_en_-_saal_2_-_201312301130_-_to_protect_and_infect_part_2_-_jacob.html">Jacob
-Appelbaum's talk at the latest CCC conference</a>, to understand more
-about how the surveillance of citizens is spreading.</p>
-
-<p>Thanks to good friends on the NUUG association's IRC channel
-<a href="irc://irc.freenode.net/%23nuug">#nuug on irc.freenode.net</a>
-for the tips that got me there.</p>
-
-<p><strong>Update 2014-06-17</strong>: After I published this, I was
-told about the blog post
-"<a href="http://ingvar.blog.redpill-linpro.com/2012/05/31/downloading-hd-content-from-tv-nrk-no/">Downloading
-HD content from tv.nrk.no</a>" by Ingvar Hagelund, which has an
-alternative implementation and tips for creating an mkv file with the
-subtitles included. Perhaps it suits you better? In addition, the
-bug in youtube-dl was fixed later the same day, and youtube-dl gained
-support for downloading subtitles. Thanks to Anders Einar Hilden for
-his good work and to the youtube-dl developers for their quick
-response.</p>
-</description>
- </item>
-
- <item>
- <title>Free software car computer solution?</title>
- <link>http://people.skolelinux.org/pere/blog/Free_software_car_computer_solution_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Free_software_car_computer_solution_.html</guid>
- <pubDate>Thu, 29 May 2014 18:45:00 +0200</pubDate>
- <description><p>Dear lazyweb. I'm planning to set up a small Raspberry Pi computer
-in my car, connected to
-<a href="http://www.dx.com/p/400a-4-0-tft-lcd-digital-monitor-for-vehicle-parking-reverse-camera-1440x272-12v-dc-57776">a
-small screen</a> next to the rear mirror. I plan to hook it up with a
-GPS and a USB wifi card too. The idea is to get my own
-"<a href="http://en.wikipedia.org/wiki/Carputer">Carputer</a>". But I
-wonder if someone has already created a good free software solution
-for such a car computer.</p>
-
-<p>This is my current wish list for such a system:</p>
+ <title>Idea for storing trusted timestamps in a Noark 5 archive</title>
+ <link>http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html</guid>
+ <pubDate>Wed, 7 Jun 2017 21:40:00 +0200</pubDate>
+ <description><p><em>This is a copy of
+<a href="https://lists.nuug.no/pipermail/nikita-noark/2017-June/000297.html">an
+email I posted to the nikita-noark mailing list</a>. Please follow up
+there if you would like to discuss this topic. The background is that
+we are making a free software archive system based on the Norwegian
+<a href="https://www.arkivverket.no/forvaltning-og-utvikling/regelverk-og-standarder/noark-standarden">Noark
+5 standard</a> for government archives.</em></p>
+
+<p>I've been wondering a bit lately how trusted timestamps could be
+stored in Noark 5.
+<a href="https://en.wikipedia.org/wiki/Trusted_timestamping">Trusted
+timestamps</a> can be used to verify that some information
+(document/file/checksum/metadata) has not been changed since a
+specific time in the past. This is useful to verify the integrity of
+the documents in the archive.</p>
+
+<p>Then it occurred to me: perhaps the trusted timestamps could be
+stored as document variants (i.e. a dokumentobjekt referred to from a
+dokumentbeskrivelse) with the filename set to the hash it is
+stamping?</p>
+
+<p>Given a "dokumentbeskrivelse" with an associated "dokumentobjekt",
+a new dokumentobjekt is associated with "dokumentbeskrivelse" with the
+same attributes as the stamped dokumentobjekt except these
+attributes:</p>
<ul>
- <li>Work on Raspberry Pi.</li>
+<li>format -> "RFC3161"</li>
+<li>mimeType -> "application/timestamp-reply"</li>
+<li>formatDetaljer -> "&lt;source URL for timestamp service&gt;"</li>
+<li>filenavn -> "&lt;sjekksum&gt;.tsr"</li>
- <li>Show current speed limit based on location, and warn if going too
- fast (for example using color codes yellow and red on the screen,
- or make a sound). This could be done using either data from
- <a href="http://www.openstreetmap.org/">Openstreetmap</a> or OCR
- info gathered from a dashboard camera.</li>
+</ul>
- <li>Track automatic toll road passes and their cost, show total spent
- and make it possible to calculate toll costs for a planned
- route.</li>
+<p>This assumes a service following
+<a href="https://tools.ietf.org/html/rfc3161">IETF RFC 3161</a> is
+used, which specifies the given MIME type for replies and the .tsr
+file extension for the content of such a trusted timestamp. As far as I can
+tell from the Noark 5 specifications, it is OK to have several
+variants/renderings of a dokument attached to a given
+dokumentbeskrivelse objekt. It might be stretching it a bit to make
+some of these variants represent crypto-signatures useful for
+verifying the document integrity instead of representing the dokument
+itself.</p>
+
+<p>Using the source of the service in formatDetaljer allows several
+timestamping services to be used. This is useful to spread the risk
+of key compromise over several organisations. It would only be a
+problem to trust the timestamps if all of the organisations were
+compromised.</p>
+
+<p>The following one-liner on Linux can be used to generate the .tsr
+file. $inputfile is the path to the file to checksum, and $sha256 is
+the SHA-256 checksum of the file (i.e. the "&lt;sjekksum&gt;.tsr"
+value mentioned above).</p>
- <li>Collect GPX tracks for use with OpenStreetMap.</li>
+<p><blockquote><pre>
+openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
+ | curl -s -H "Content-Type: application/timestamp-query" \
+ --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr
+</pre></blockquote></p>
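The $sha256 value used above can be computed with sha256sum from GNU coreutils. A sketch, with a made-up file name and content:

```shell
# Compute the SHA-256 checksum that names the timestamp reply file.
inputfile=dokument.pdf                     # hypothetical archive file
printf 'example payload' > "$inputfile"    # stand-in content for the sketch
sha256=$(sha256sum "$inputfile" | cut -d' ' -f1)
echo "$sha256.tsr"
```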
- <li>Automatically detect and use any wireless connection to connect
- to home server. Try IP over DNS
- (<a href="http://dev.kryo.se/iodine/">iodine</a>) or ICMP
- (<a href="http://code.gerade.org/hans/">Hans</a>) if a direct
- connection does not work.</li>
+<p>To verify the timestamp, you first need to download the public key
+of the trusted timestamp service, for example using this command:</p>
- <li>Set up mesh network to talk to other cars with the same system,
- or some standard car mesh protocol.</li>
+<p><blockquote><pre>
+wget -O ca-cert.txt \
+ https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
+</pre></blockquote></p>
- <li>Warn when approaching speed cameras and speed camera ranges
- (speed calculated between two cameras).</li>
+<p>Note, the public key should be stored alongside the timestamps in
+the archive to make sure it is also available 100 years from now. It
+is probably a good idea to standardise how and where to store such
+public keys, to make it easier to find for those trying to verify
+documents 100 or 1000 years from now. :)</p>
- <li>Support a dashboard/front facing camera to discover speed limits
- and run OCR to track the registration numbers of passing cars.</li>
+<p>The verification itself is a simple openssl command:</p>
-</ul>
+<p><blockquote><pre>
+openssl ts -verify -data $inputfile -in $sha256.tsr \
+ -CAfile ca-cert.txt -text
+</pre></blockquote></p>
-<p>If you know of any free software car computer system supporting
-some or all of these features, please let me know.</p>
+<p>Is there any reason this approach would not work? Is it somehow against
+the Noark 5 specification?</p>
</description>
</item>
<item>
- <title>Half the Coverity issues in Gnash fixed in the next release</title>
- <link>http://people.skolelinux.org/pere/blog/Half_the_Coverity_issues_in_Gnash_fixed_in_the_next_release.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Half_the_Coverity_issues_in_Gnash_fixed_in_the_next_release.html</guid>
- <pubDate>Tue, 29 Apr 2014 14:20:00 +0200</pubDate>
- <description><p>I've been following <a href="http://www.getgnash.org/">the Gnash
-project</a> for quite a while now. It is a free software
-implementation of Adobe Flash, both a standalone player and a browser
-plugin. Gnash implements support for the AVM1 format (and not the
-newer AVM2 format - see
-<a href="http://lightspark.github.io/">Lightspark</a> for that one),
-allowing several flash based sites to work. Thanks to the friendly
-developers at Youtube, it also works with Youtube videos, because the
-Javascript code at Youtube detects Gnash and serves an AVM1 player to
-those users. :) It would be great if someone found time to implement
-AVM2 support, but it has not happened yet. If you install both
-Lightspark and Gnash, Lightspark will invoke Gnash if it finds an AVM1
-flash file, so you can get both handled as free software.
-Unfortunately, Lightspark so far only implements a small subset of
-AVM2, and many sites do not work yet.</p>
-
-<p>A few months ago, I started looking at
-<a href="http://scan.coverity.com/">Coverity</a>, the static source
-checker used to find heaps and heaps of bugs in free software (thanks
-to the donation of a scanning service to free software projects by the
-company developing this non-free code checker), and Gnash was one of
-the projects I decided to check out. Coverity is able to find lock
-errors, memory errors, dead code and more. A few days ago they even
-extended it to also be able to find the heartbleed bug in OpenSSL.
-There are heaps of checks being done on the instrumented code, and the
-number of bogus warnings is quite low compared to the other static
-code checkers I have tested over the years.</p>
-
-<p>Since a few weeks ago, I've been working with the other Gnash
-developers squashing bugs discovered by Coverity. I was quite happy
-today when I checked the current status and saw that of the 777 issues
-detected so far, 374 are marked as fixed. This makes me confident that
-the next Gnash release will be more stable and more dependable than
-the previous one. Most of the reported issues were and are in the
-test suite, but it also found a few in the rest of the code.</p>
-
-<p>If you want to help out, you find us on
-<a href="https://lists.gnu.org/mailman/listinfo/gnash-dev">the
-gnash-dev mailing list</a> and on
-<a href="irc://irc.freenode.net/#gnash">the #gnash channel on
-irc.freenode.net IRC server</a>.</p>
+ <title>When the Nynorsk translation fails at the exam...</title>
+ <link>http://people.skolelinux.org/pere/blog/N_r_nynorskoversettelsen_svikter_til_eksamen___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/N_r_nynorskoversettelsen_svikter_til_eksamen___.html</guid>
+ <pubDate>Sat, 3 Jun 2017 08:20:00 +0200</pubDate>
+ <description><p><a href="http://www.aftenposten.no/norge/Krever-at-elever-ma-fa-annullert-eksamen-etter-rot-med-oppgavetekster-622459b.html">Aftenposten
+reports today</a> about errors in the exam questions for the exam in
+politics and human rights, where the texts in the Bokmål and Nynorsk
+versions were not identical. The exam text is quoted in the article,
+and I got curious whether the free translation solution
+<a href="https://www.apertium.org/">Apertium</a> would have done a
+better job than the Directorate for Education. It looks like it
+might.</p>
+
+<p>Here is the Bokmål question from the exam, in the original Norwegian:</p>
+
+<blockquote>
+<p>Drøft utfordringene knyttet til nasjonalstatenes og andre aktørers
+rolle og muligheter til å håndtere internasjonale utfordringer, som
+for eksempel flykningekrisen.</p>
+
+<p>Vedlegge er eksempler på tekster som kan gi relevante perspektiver
+på temaet:</p>
+<ol>
+<li>Flykningeregnskapet 2016, UNHCR og IDMC
+<li>«Grenseløst Europa for fall» A-Magasinet, 26. november 2015
+</ol>
+
+</blockquote>
+
+<p>Apertium translates this into Nynorsk as follows:</p>
+
+<blockquote>
+<p>Drøft utfordringane knytte til nasjonalstatane sine og rolla til
+andre aktørar og høve til å handtera internasjonale utfordringar, som
+til dømes *flykningekrisen.</p>
+
+<p>Vedleggja er døme på tekster som kan gje relevante perspektiv på
+temaet:</p>
+
+<ol>
+<li>*Flykningeregnskapet 2016, *UNHCR og *IDMC</li>
+<li>«*Grenseløst Europa for fall» A-Magasinet, 26. november 2015</li>
+</ol>
+
+</blockquote>
+
+<p>Words that were not understood are marked with an asterisk (*) and
+need an extra language check. But no words disappeared, as happened
+in the question the pupils were presented with at the exam. I do
+suspect that "andre aktørers rolle og muligheter til ..." should have
+been translated to "rolla til andre aktørar og deira høve til ..." or
+something like that, but that is perhaps nitpicking. It just
+underlines that proofreading is always needed after automatic
+translation.</p>
</description>
</item>
<item>
- <title>Install hardware dependent packages using tasksel (Isenkram 0.7)</title>
- <link>http://people.skolelinux.org/pere/blog/Install_hardware_dependent_packages_using_tasksel__Isenkram_0_7_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Install_hardware_dependent_packages_using_tasksel__Isenkram_0_7_.html</guid>
- <pubDate>Wed, 23 Apr 2014 14:50:00 +0200</pubDate>
- <description><p>It would be nice if it were easier in Debian to get
-all the hardware related packages relevant for the computer installed
-automatically. So I implemented a way to do this, using
-<a href="http://packages.qa.debian.org/isenkram">my Isenkram
-package</a>. To use it, install the tasksel and isenkram packages and
-run tasksel as the user root. You should be presented with a new
-option, "Hardware specific packages (autodetected by isenkram)".
-When you select it, tasksel will install the packages isenkram claims
-fit the current hardware, hot pluggable or not.</p>
-
-<p>The implementation is in two files: one is the tasksel menu entry
-description, and the other is the script used to extract the list of
-packages to install. The first part is in
-<tt>/usr/share/tasksel/descs/isenkram.desc</tt> and looks like
-this:</p>
+ <title>Email as an archival format in the National Archivist's regulations?</title>
+ <link>http://people.skolelinux.org/pere/blog/Epost_inn_som_arkivformat_i_Riksarkivarens_forskrift_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Epost_inn_som_arkivformat_i_Riksarkivarens_forskrift_.html</guid>
+ <pubDate>Thu, 27 Apr 2017 11:30:00 +0200</pubDate>
+ <description><p>These days, with a deadline of May 1st, the Norwegian
+National Archivist (Riksarkivaren) has a consultation open on its
+regulations. As one can see, there is not much time left before the
+deadline, which expires on Sunday. These regulations list the formats
+that are acceptable for archiving in
+<a href="http://www.arkivverket.no/arkivverket/Offentleg-forvalting/Noark/Noark-5">Noark
+5 solutions</a> in Norway.</p>
+
+<p>I found the consultation documents at
+<a href="https://www.arkivrad.no/aktuelt/riksarkivarens-forskrift-pa-horing">Norsk
+Arkivråd</a> after being tipped off on the mailing list of
+<a href="https://github.com/hiOA-ABI/nikita-noark5-core">the free
+software project Nikita Noark5-Core</a>, which is creating a Noark 5
+Tjenestegrensesnitt (service interface). I am involved in the Nikita
+project, and thanks to my interest in the service interface project I
+have read quite a few Noark 5 related documents, and to my surprise
+discovered that standard email is not on the list of approved formats
+that can be archived. The consultation, with its deadline on Sunday,
+is an excellent opportunity to try to do something about it. I am
+working on
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/docs/hoering-arkivforskrift.tex">my
+own consultation response</a>, and wonder if others are interested in
+supporting the proposal to allow archiving email as email in the
+archive.</p>
+
+<p>Are you already writing your own consultation response? If so,
+you could consider including a wording about email storage. I don't
+think much is needed. Here is a short proposed text:</p>
+
+<p><blockquote>
+
+  <p>We refer to the consultation sent out 2017-02-17 (the National
+  Archivist's reference 2016/9840 HELHJO), and take the liberty of
+  submitting some input on the revision of the Regulation on
+  supplementary technical and archival provisions on the processing
+  of public archives (the National Archivist's regulations).</p>
+
+  <p>A very large part of our communication today takes place by
+  e-mail. We therefore propose that Internet e-mail, as described in
+  IETF RFC 5322,
+  <a href="https://tools.ietf.org/html/rfc5322">https://tools.ietf.org/html/rfc5322</a>,
+  should be added as an approved document format. We propose that the
+  regulation's list of approved document formats on submission in
+  § 5-16 be changed to include Internet e-mail.</p>
+
+</blockquote></p>
+
+<p>As part of the work on the service interface we have tested how
+email can be stored in a Noark 5 structure, and we are writing a
+proposal for how this can be done, which will be sent to the National
+Archives (Arkivverket) as soon as it is finished. Those interested
+can
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/docs/epostlagring.md">follow
+the progress on the web</a>.</p>
+
+<p>Update 2017-04-28: Today the consultation response I wrote was
+ <a href="https://www.nuug.no/news/NUUGs_h_ringuttalelse_til_Riksarkivarens_forskrift.shtml">submitted
+ by the association NUUG</a>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Public electronic records service blocks access for selected web clients</title>
+ <link>http://people.skolelinux.org/pere/blog/Offentlig_elektronisk_postjournal_blokkerer_tilgang_for_utvalgte_webklienter.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Offentlig_elektronisk_postjournal_blokkerer_tilgang_for_utvalgte_webklienter.html</guid>
+ <pubDate>Thu, 20 Apr 2017 13:00:00 +0200</pubDate>
+ <description><p>I discovered today that <a href="https://www.oep.no/">the web site
+publishing public mail journals from government agencies</a>, OEP, has
+started blocking certain types of web clients from getting access. I
+do not know how many are affected, but it at least applies to
+libwww-perl and curl. To test for yourself, run the following:</p>
+
+<blockquote><pre>
+% curl -v -s https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
+< HTTP/1.1 404 Not Found
+% curl -v -s --header 'User-Agent:Opera/12.0' https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
+< HTTP/1.1 200 OK
+%
+</pre></blockquote>
+
+<p>Here one can see that the service returns «404 Not Found» for curl
+with its default settings, while it returns «200 OK» if curl claims to
+be Opera version 12.0. Offentlig elektronisk postjournal started the
+blocking 2017-03-02.</p>
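+<p>The same check can be done without curl. Here is a minimal sketch
+using Python's standard library, where only the User-Agent header
+differs between the two requests; no request is actually sent until
+urlopen() is called:</p>
+
```python
# Sketch: build requests against the OEP report page with and without
# a spoofed User-Agent, mirroring the curl test above.
from urllib.request import Request, urlopen

URL = "https://www.oep.no/pub/report.xhtml?reportId=3"

def build_request(user_agent=None):
    """Return a Request, optionally claiming to be another browser."""
    headers = {"User-Agent": user_agent} if user_agent else {}
    return Request(URL, headers=headers)

for agent in (None, "Opera/12.0"):
    request = build_request(agent)
    # Uncomment to query oep.no; expect 404 without the spoofed agent:
    # print(agent, urlopen(request).status)
```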
+
+<p>The blocking will make it a bit harder to fetch information from
+oep.no automatically. Could the blocking have been put in place to
+stop automated collection of information from OEP, like the one the
+Norwegian Press Association's openness committee (Pressens
+Offentlighetsutvalg) did to document how the ministries obstruct
+access to information in
+<a href="http://presse.no/dette-mener-np/undergraver-offentlighetsloven/">the
+report «Slik hindrer departementer innsyn» («How ministries obstruct
+access»), published in January 2017</a>? That seems unlikely, as it
+is trivial to change the User-Agent to something new.</p>
+
+<p>Is there any legal basis for the government to discriminate
+between web clients the way it is done here, where access is granted
+or denied depending on what the client claims its name is? As OEP is
+owned by DIFI and operated by Basefarm, perhaps there are documents
+exchanged between those two parties one could request access to in
+order to understand what has happened. But
+<a href="https://www.oep.no/search/result.html?period=dateRange&fromDate=01.01.2016&toDate=01.04.2017&dateType=documentDate&caseDescription=&descType=both&caseNumber=&documentNumber=&sender=basefarm&senderType=both&documentType=all&legalAuthority=&archiveCode=&list2=196&searchType=advanced&Search=Search+in+records">DIFI's
+mail journal shows only two documents</a> between DIFI and Basefarm
+during the last year.
+<a href="https://www.mimesbronn.no/request/blokkering_av_tilgang_til_oep_fo">Mimes
+brønn next</a>, I think.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Free software archive system Nikita now able to store documents</title>
+ <link>http://people.skolelinux.org/pere/blog/Free_software_archive_system_Nikita_now_able_to_store_documents.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Free_software_archive_system_Nikita_now_able_to_store_documents.html</guid>
+ <pubDate>Sun, 19 Mar 2017 08:00:00 +0100</pubDate>
+ <description><p>The <a href="https://github.com/hiOA-ABI/nikita-noark5-core">Nikita
+Noark 5 core project</a> is implementing the Norwegian standard for
+keeping an electronic archive of government documents.
+<a href="http://www.arkivverket.no/arkivverket/Offentlig-forvaltning/Noark/Noark-5/English-version">The
+Noark 5 standard</a> documents the requirements for data systems used
+by the archives in the Norwegian government, and the Noark 5 web
+interface specification documents a REST web service for storing,
+searching and retrieving documents and metadata in such an archive.
+I've been involved in the project since a few weeks before Christmas,
+when the Norwegian Unix User Group
+<a href="https://www.nuug.no/news/NOARK5_kjerne_som_fri_programvare_f_r_epostliste_hos_NUUG.shtml">announced
+it supported the project</a>. I believe this is an important project,
+and hope it can make it possible for the government archives in the
+future to use free software to keep the archives we citizens depend
+on. But as I do not hold such an archive myself, my first use case is
+to store and analyse public mail journal metadata published by the
+government. I find it useful to have a clear use case in mind when
+developing, to make sure the system scratches one of my itches.</p>
+
+<p>If you would like to help make sure there is a free software
+alternative for the archives, please join our IRC channel
+(<a href="irc://irc.freenode.net/%23nikita">#nikita on
+irc.freenode.net</a>) and
+<a href="https://lists.nuug.no/mailman/listinfo/nikita-noark">the
+project mailing list</a>.</p>
+
+<p>When I got involved, the web service could store metadata about
+documents. But a few weeks ago, a new milestone was reached when it
+became possible to store full text documents too. Yesterday, I
+completed an implementation of a command line tool
+<tt>archive-pdf</tt> to upload a PDF file to the archive using this
+API. The tool is very simple at the moment: it finds existing
+<a href="https://en.wikipedia.org/wiki/Fonds">fonds</a>, series and
+files, asking the user to select which one to use if more than one
+exists. Once a file is identified, the PDF is associated with the
+file and uploaded, using the title extracted from the PDF itself. The
+process is fairly similar to visiting the archive, opening a cabinet,
+locating a file and storing a piece of paper in the archive. Here is
+a test run directly after populating the database with test data using
+our API tester:</p>
<p><blockquote><pre>
-Task: isenkram
-Section: hardware
-Description: Hardware specific packages (autodetected by isenkram)
- Based on the detected hardware various hardware specific packages are
- proposed.
-Test-new-install: mark show
-Relevance: 8
-Packages: for-current-hardware
+~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
+using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
+using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446
+
+ 0 - Title of the test case file created 2017-03-18T23:49:32.103446
+ 1 - Title of the test file created 2017-03-18T23:49:32.103446
+Select which mappe you want (or search term): 0
+Uploading mangelmelding/mangler.pdf
+ PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
+ File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
+~/src//noark5-tester$
</pre></blockquote></p>
-<p>The second part is in
-<tt>/usr/lib/tasksel/packages/for-current-hardware</tt> and look like
-this:</p>
+<p>You can see here how the fonds (arkiv) and series (arkivdel) only
+had one option, while the user needs to choose which file (mappe) to
+use among the two created by the API tester. The <tt>archive-pdf</tt>
+tool can be found in the git repository for the API tester.</p>
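+<p>The selection step of the tool can be sketched roughly like this.
+It is a simplification of the real archive-pdf code, with a
+hypothetical helper name; when only one fonds, series or file exists
+it is picked automatically, otherwise the user chooses:</p>
+
```python
# Sketch of the archive-pdf selection step: auto-pick a single
# candidate, otherwise list the options and ask the user. The function
# name is hypothetical; the real code is in the noark5-tester repo.
def select_entity(kind, titles, ask=input):
    """Return the chosen title among the candidates of the given kind."""
    if len(titles) == 1:
        print("using %s: %s" % (kind, titles[0]))
        return titles[0]
    for index, title in enumerate(titles):
        print(" %d - %s" % (index, title))
    choice = ask("Select which %s you want (or search term): " % kind)
    if choice.isdigit():
        return titles[int(choice)]
    return next(title for title in titles if choice in title)
```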
+
+<p>In the project, I have been mostly working on
+<a href="https://github.com/petterreinholdtsen/noark5-tester">the API
+tester</a> so far, while getting to know the code base. The API
+tester currently uses
+<a href="https://en.wikipedia.org/wiki/HATEOAS">the HATEOAS links</a>
+to traverse the entire exposed service API and verify that the exposed
+operations and objects match the specification, as well as trying to
+create objects holding metadata and uploading a simple XML file to
+store. The tester has proved very useful for finding flaws in our
+implementation, as well as flaws in the reference site and the
+specification.</p>
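+<p>The traversal itself amounts to a breadth-first walk of the _links
+relations. A simplified sketch, assuming the service returns JSON
+documents with a "_links" list (the real tester does more, such as
+checking each object against the specification):</p>
+
```python
# Sketch: breadth-first traversal of HATEOAS links, visiting each href
# once. fetch(href) stands in for an HTTP GET returning decoded JSON.
def traverse(fetch, start):
    """Return the hrefs visited when following every _links relation."""
    visited = []
    queue = [start]
    while queue:
        href = queue.pop(0)
        if href in visited:
            continue
        visited.append(href)
        document = fetch(href)
        for link in document.get("_links", []):
            queue.append(link["href"])
    return visited

# A tiny fake service standing in for the Noark 5 web service:
service = {
    "/": {"_links": [{"href": "/fonds"}]},
    "/fonds": {"_links": [{"href": "/"}, {"href": "/series"}]},
    "/series": {},
}
print(traverse(service.__getitem__, "/"))
```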
+
+<p>The test document I uploaded is a summary of all the specification
+defects we have collected so far while implementing the web service.
+There are several unclear and conflicting parts of the specification,
+and we have
+<a href="https://github.com/petterreinholdtsen/noark5-tester/tree/master/mangelmelding">started
+writing down</a> the questions we get from implementing it. We use a
+format inspired by how <a href="http://www.opengroup.org/austin/">The
+Austin Group</a> collects defect reports for the POSIX standard with
+<a href="http://www.opengroup.org/austin/mantis.html">their
+instructions for the MANTIS defect tracker system</a>, for lack of an
+official way to structure defect reports for Noark 5 (our first
+submitted defect report was a
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/mangelmelding/sendt/2017-03-15-mangel-prosess.md">request
+for a procedure for submitting defect reports</a> :).</p>
+
+<p>The Nikita project is implemented using Java and Spring, and is
+fairly easy to get up and running using Docker containers for those
+that want to test the current code base. The API tester is
+implemented in Python.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Detecting NFS hangs on Linux without hanging yourself...</title>
+ <link>http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</guid>
+ <pubDate>Thu, 9 Mar 2017 15:20:00 +0100</pubDate>
+ <description><p>Over the years, administrating thousands of
+NFS-mounting Linux computers at a time, I often needed a way to
+detect if a machine was experiencing an NFS hang. If you try to use
+<tt>df</tt> or look at a file or directory affected by the hang, the
+process (and possibly the shell) will hang too. So you want to be
+able to detect this without risking the detection process getting
+stuck too. It has not been obvious how to do this. When the hang
+has lasted a while, it is possible to find messages like these in
+dmesg:</p>
+
+<p><blockquote>
+nfs: server nfsserver not responding, still trying
+<br>nfs: server nfsserver OK
+</blockquote></p>
+
+<p>It is hard to know if the hang is still going on, and it is hard to
+be sure looking in dmesg is going to work. If there are lots of other
+messages in dmesg, the lines might have rotated out of sight before
+they are noticed.</p>
+
+<p>While reading through the NFS client implementation in the Linux
+kernel code, I came across some statistics that seem to give a way to
+detect it. The om_timeouts sunrpc value in the kernel will increase
+every time the above log entry is inserted into dmesg. And after
+digging a bit further, I discovered that this value shows up in
+/proc/self/mountstats on Linux.</p>
+
+<p>The mountstats content seems to be shared between files using the
+same file system context, so it is enough to check one of the
+mountstats files to get the state of the mount point for the machine.
+I assume this will not show lazily umounted NFS points, nor NFS mount
+points in a different process context (i.e. with a different file
+system view), but that does not worry me.</p>
+
+<p>The content for an NFS mount point looks similar to this:</p>
<p><blockquote><pre>
-#!/bin/sh
-#
-(
- isenkram-lookup
- isenkram-autoinstall-firmware -l
-) | sort -u
+[...]
+device /dev/mapper/Debian-var mounted on /var with fstype ext3
+device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
+ opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
+ age: 7863311
+ caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
+ sec: flavor=1,pseudoflavor=1
+ events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
+ bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
+ RPC iostats version: 1.0 p/v: 100003/3 (nfs)
+ xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
+ per-op statistics
+ NULL: 0 0 0 0 0 0 0 0
+ GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
+ SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
+ LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
+ ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
+ READLINK: 125 125 0 20472 18620 0 1112 1118
+ READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
+ WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
+ CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
+ MKDIR: 3680 3680 0 773980 993920 26 23990 24245
+ SYMLINK: 903 903 0 233428 245488 6 5865 5917
+ MKNOD: 80 80 0 20148 21760 0 299 304
+ REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
+ RMDIR: 3367 3367 0 645112 484848 22 5782 6002
+ RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
+ LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
+ READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
+ READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
+ FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
+ FSINFO: 2 2 0 232 328 0 1 1
+ PATHCONF: 1 1 0 116 140 0 0 0
+ COMMIT: 0 0 0 0 0 0 0 0
+
+device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
+[...]
</pre></blockquote></p>
-<p>All in all, a very short and simple implementation making it
-trivial to install the hardware dependent package we all may want to
-have installed on our machines. I've not been able to find a way to
-get tasksel to tell you exactly which packages it plan to install
-before doing the installation. So if you are curious or careful,
-check the output from the isenkram-* command line tools first.</p>
-
-<p>The information about which packages are handling which hardware is
-fetched either from the isenkram package itself in
-/usr/share/isenkram/, from git.debian.org or from the APT package
-database (using the Modaliases header). The APT package database
-parsing have caused a nasty resource leak in the isenkram daemon (bugs
-<a href="http://bugs.debian.org/719837">#719837</a> and
-<a href="http://bugs.debian.org/730704">#730704</a>). The cause is in
-the python-apt code (bug
-<a href="http://bugs.debian.org/745487">#745487</a>), but using a
-workaround I was able to get rid of the file descriptor leak and
-reduce the memory leak from ~30 MiB per hardware detection down to
-around 2 MiB per hardware detection. It should make the desktop
-daemon a lot more useful. The fix is in version 0.7 uploaded to
-unstable today.</p>
-
-<p>I believe the current way of mapping hardware to packages in
-Isenkram is is a good draft, but in the future I expect isenkram to
-use the AppStream data source for this. A proposal for getting proper
-AppStream support into Debian is floating around as
-<a href="https://wiki.debian.org/DEP-11">DEP-11</a>, and
-<a href="https://wiki.debian.org/SummerOfCode2014/Projects#SummerOfCode2014.2FProjects.2FAppStreamDEP11Implementation.AppStream.2FDEP-11_for_the_Debian_Archive">GSoC
-project</a> will take place this summer to improve the situation. I
-look forward to seeing the result, and welcome patches for isenkram to
-start using the information when it is ready.</p>
-
-<p>If you want your package to map to some specific hardware, either
-add a "Xb-Modaliases" header to your control file like I did in
-<a href="http://packages.qa.debian.org/pymissile">the pymissile
-package</a> or submit a bug report with the details to the isenkram
-package. See also
-<a href="http://people.skolelinux.org/pere/blog/tags/isenkram/">all my
-blog posts tagged isenkram</a> for details on the notation. I expect
-the information will be migrated to AppStream eventually, but for the
-moment I got no better place to store it.</p>
+<p>The key number to look at is the third number in the per-op list.
+It is the number of NFS timeouts experienced per file system
+operation, here 22 write timeouts and 5 access timeouts. If these
+numbers are increasing, I believe the machine is experiencing an NFS
+hang. Unfortunately the timeout value does not start to increase
+right away. The NFS operations need to time out first, and this can
+take a while. The exact timeout value depends on the setup. For
+example the defaults for TCP and UDP mount points are quite different,
+and the timeout value is affected by the soft, hard, timeo and retrans
+NFS mount options.</p>
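+<p>Extracting the counters is straightforward. Here is a sketch that
+assumes the per-op block of a single NFS mount point, keyed by
+operation name:</p>
+
```python
# Sketch: pull the third number (the timeout count) from each line of
# the per-op statistics block in /proc/self/mountstats. Assumes the
# text covers a single NFS mount point.
def per_op_timeouts(mountstats_text):
    """Map each NFS operation name to its timeout count."""
    timeouts = {}
    in_per_op = False
    for line in mountstats_text.splitlines():
        line = line.strip()
        if line == "per-op statistics":
            in_per_op = True
        elif not line:
            in_per_op = False  # a blank line ends the per-op block
        elif in_per_op and ":" in line:
            operation, numbers = line.split(":", 1)
            fields = numbers.split()
            if len(fields) >= 3:
                timeouts[operation] = int(fields[2])
    return timeouts

sample = """ per-op statistics
 WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
 ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
"""
print(per_op_timeouts(sample))  # → {'WRITE': 22, 'ACCESS': 5}
```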
+
+<p>The only way I have been able to get the timeout count on Debian
+and Red Hat Enterprise Linux is to peek in /proc/. But according to
+<a href="http://docs.oracle.com/cd/E19253-01/816-4555/netmonitor-12/index.html">Solaris
+10 System Administration Guide: Network Services</a>, the 'nfsstat -c'
+command can be used to get these timeout values. But this does not
+work on Linux, as far as I can tell. I
+<a href="http://bugs.debian.org/857043">asked Debian about this</a>,
+but have not seen any replies yet.</p>
+
+<p>Is there a better way to figure out if a Linux NFS client is
+experiencing NFS hangs? Is there a way to detect which processes are
+affected? Is there a way to get the NFS mount going quickly once the
+network problem causing the NFS hang has been cleared? I would very
+much welcome some clues, as we regularly run into NFS hangs.</p>
</description>
</item>
<item>
- <title>FreedomBox milestone - all packages now in Debian Sid</title>
- <link>http://people.skolelinux.org/pere/blog/FreedomBox_milestone___all_packages_now_in_Debian_Sid.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/FreedomBox_milestone___all_packages_now_in_Debian_Sid.html</guid>
- <pubDate>Tue, 15 Apr 2014 22:10:00 +0200</pubDate>
- <description><p>The <a href="https://wiki.debian.org/FreedomBox">Freedombox
-project</a> is working on providing the software and hardware to make
-it easy for non-technical people to host their data and communication
-at home, and being able to communicate with their friends and family
-encrypted and away from prying eyes. It is still going strong, and
-today a major mile stone was reached.</p>
-
-<p>Today, the last of the packages currently used by the project to
-created the system images were accepted into Debian Unstable. It was
-the freedombox-setup package, which is used to configure the images
-during build and on the first boot. Now all one need to get going is
-the build code from the freedom-maker git repository and packages from
-Debian. And once the freedombox-setup package enter testing, we can
-build everything directly from Debian. :)</p>
-
-<p>Some key packages used by Freedombox are
-<a href="http://packages.qa.debian.org/freedombox-setup">freedombox-setup</a>,
-<a href="http://packages.qa.debian.org/plinth">plinth</a>,
-<a href="http://packages.qa.debian.org/pagekite">pagekite</a>,
-<a href="http://packages.qa.debian.org/tor">tor</a>,
-<a href="http://packages.qa.debian.org/privoxy">privoxy</a>,
-<a href="http://packages.qa.debian.org/owncloud">owncloud</a> and
-<a href="http://packages.qa.debian.org/dnsmasq">dnsmasq</a>. There
-are plans to integrate more packages into the setup. User
-documentation is maintained on the Debian wiki. Please
-<a href="https://wiki.debian.org/FreedomBox/Manual/Jessie">check out
-the manual</a> and help us improve it.</p>
-
-<p>To test for yourself and create boot images with the FreedomBox
-setup, run this on a Debian machine using a user with sudo rights to
-become root:</p>
-
-<p><pre>
-sudo apt-get install git vmdebootstrap mercurial python-docutils \
- mktorrent extlinux virtualbox qemu-user-static binfmt-support \
- u-boot-tools
-git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
- freedom-maker
-make -C freedom-maker dreamplug-image raspberry-image virtualbox-image
-</pre></p>
-
-<p>Root access is needed to run debootstrap and mount loopback
-devices. See the README in the freedom-maker git repo for more
-details on the build. If you do not want all three images, trim the
-make line. Note that the virtualbox-image target is not really
-virtualbox specific. It create a x86 image usable in kvm, qemu,
-vmware and any other x86 virtual machine environment. You might need
-the version of vmdebootstrap in Jessie to get the build working, as it
-include fixes for a race condition with kpartx.</p>
-
-<p>If you instead want to install using a Debian CD and the preseed
-method, boot a Debian Wheezy ISO and use this boot argument to load
-the preseed values:</p>
-
-<p><pre>
-url=<a href="http://www.reinholdtsen.name/freedombox/preseed-jessie.dat">http://www.reinholdtsen.name/freedombox/preseed-jessie.dat</a>
-</pre></p>
-
-<p>I have not tested it myself the last few weeks, so I do not know if
-it still work.</p>
-
-<p>If you wonder how to help, one task you could look at is using
-systemd as the boot system. It will become the default for Linux in
-Jessie, so we need to make sure it is usable on the Freedombox. I did
-a simple test a few weeks ago, and noticed dnsmasq failed to start
-during boot when using systemd. I suspect there are other problems
-too. :) To detect problems, there is a test suite included, which can
-be run from the plinth web interface.</p>
-
-<p>Give it a go and let us know how it goes on the mailing list, and help
-us get the new release published. :) Please join us on
-<a href="irc://irc.debian.org:6667/%23freedombox">IRC (#freedombox on
-irc.debian.org)</a> and
-<a href="http://lists.alioth.debian.org/mailman/listinfo/freedombox-discuss">the
-mailing list</a> if you want to help make this vision come true.</p>
+ <title>How does it feel to be wiretapped, when you should be doing the wiretapping...</title>
+ <link>http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</guid>
+ <pubDate>Wed, 8 Mar 2017 11:50:00 +0100</pubDate>
+ <description><p>So the new president of the United States of America
+claims to be surprised to discover that he was wiretapped during the
+election, before he was elected president. He even claims this must
+be illegal. Well, doh, if there is one thing the revelations from
+Snowden documented, it is that the entire population of the USA is
+wiretapped, one way or another. Of course the presidential candidates
+were wiretapped, alongside the senators, judges and the rest of the
+people in the USA.</p>
+
+<p>Next, the Federal Bureau of Investigation asked the Department of
+Justice to go public rejecting the claims that Donald Trump was
+wiretapped illegally. I fail to see the relevance, given that I am
+sure the surveillance industry in the USA believes it has all the
+legal backing it needs to conduct mass surveillance on the entire
+world.</p>
+
+<p>There is even the director of the FBI stating that he never saw an
+order requesting wiretapping of Donald Trump. That is not very
+surprising, given how the FISA court works, with all its activity
+being secret. Perhaps he only heard about it?</p>
+
+<p>What I find most sad in this story is how Norwegian journalists
+present it. In a news report on the radio the other day from the
+Norwegian Broadcasting Corporation (NRK), I heard the journalist claim
+that 'the FBI denies any wiretapping', while the reality is that 'the
+FBI denies any illegal wiretapping'. There is a fundamental and
+important difference, and it makes me sad that the journalists are
+unable to grasp it.</p>
+
+<p><strong>Update 2017-03-13:</strong> Looks like
+<a href="https://theintercept.com/2017/03/13/rand-paul-is-right-nsa-routinely-monitors-americans-communications-without-warrants/">The
+Intercept reports that US Senator Rand Paul confirms what I state
+above</a>.</p>
</description>
</item>
<item>
- <title>Språkkoder for POSIX locale i Norge</title>
- <link>http://people.skolelinux.org/pere/blog/Spr_kkoder_for_POSIX_locale_i_Norge.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Spr_kkoder_for_POSIX_locale_i_Norge.html</guid>
- <pubDate>Fri, 11 Apr 2014 21:30:00 +0200</pubDate>
- <description><p>For 12 år siden, skrev jeg et lite notat om
-<a href="http://i18n.skolelinux.no/localekoder.txt">bruk av språkkoder
-i Norge</a>. Jeg ble nettopp minnet på dette da jeg fikk spørsmål om
-notatet fortsatt var aktuelt, og tenkte det var greit å repetere hva
-som fortsatt gjelder. Det jeg skrev da er fortsatt like aktuelt.</p>
-
-<p>Når en velger språk i programmer på unix, så velger en blant mange
-språkkoder. For språk i Norge anbefales følgende språkkoder (anbefalt
-locale i parantes):</p>
-
-<p><dl>
-<dt>nb (nb_NO)</dt><dd>Bokmål i Norge</dd>
-<dt>nn (nn_NO)</dt><dd>Nynorsk i Norge</dd>
-<dt>se (se_NO)</dt><dd>Nordsamisk i Norge</dd>
-</dl></p>
-
-<p>Alle programmer som bruker andre koder bør endres.</p>
-
-<p>Språkkoden bør brukes når .po-filer navngis og installeres. Dette
-er ikke det samme som locale-koden. For Norsk Bokmål, så bør filene
-være navngitt nb.po, mens locale (LANG) bør være nb_NO.</p>
-
-<p>Hvis vi ikke får standardisert de kodene i alle programmene med
-norske oversettelser, så er det umulig å gi LANG-variablen ett innhold
-som fungerer for alle programmer.</p>
-
-<p>Språkkodene er de offisielle kodene fra ISO 639, og bruken av dem i
-forbindelse med POSIX localer er standardisert i RFC 3066 og ISO
-15897. Denne anbefalingen er i tråd med de angitte standardene.</p>
-
-<p>Følgende koder er eller har vært i bruk som locale-verdier for
-"norske" språk. Disse bør unngås, og erstattes når de oppdages:</p>
-
-<p><table>
-<tr><td>norwegian</td><td>-> nb_NO</td></tr>
-<tr><td>bokmål </td><td>-> nb_NO</td></tr>
-<tr><td>bokmal </td><td>-> nb_NO</td></tr>
-<tr><td>nynorsk </td><td>-> nn_NO</td></tr>
-<tr><td>no </td><td>-> nb_NO</td></tr>
-<tr><td>no_NO </td><td>-> nb_NO</td></tr>
-<tr><td>no_NY </td><td>-> nn_NO</td></tr>
-<tr><td>sme_NO </td><td>-> se_NO</td></tr>
-</table></p>
-
-<p>Merk at når det gjelder de samiske språkene, at se_NO i praksis
-henviser til nordsamisk i Norge, mens f.eks. smj_NO henviser til
-lulesamisk. Dette notatet er dog ikke ment å gi råd rundt samiske
-språkkoder, der gjør
-<a href="http://www.divvun.no/">Divvun-prosjektet</a> en bedre
-jobb.</p>
-
-<p><strong>Referanser:</strong></p>
-
-<ul>
-
- <li><a href="http://www.rfc-base.org/rfc-3066.html">RFC 3066 - Tags
- for the Identification of Languages</a> (Erstatter RFC 1766)</li>
-
- <li><a href="http://www.loc.gov/standards/iso639-2/langcodes.html">ISO
- 639</a> - Codes for the Representation of Names of Languages</li>
-
- <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n897-14652w25.pdf">ISO
- DTR 14652</a> - locale-standard Specification method for cultural
- conventions</li>
-
- <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n610.pdf">ISO
- 15897: Registration procedures for cultural elements (cultural
- registry)</a>,
- <a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n849-15897wd6.pdf">(nytt
- draft)</a></li>
-
- <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/">ISO/IEC
- JTC1/SC22/WG20</a> - Gruppen for i18n-standardisering i ISO</li>
-
-<ul>
+ <title>Norwegian Bokmål translation of The Debian Administrator's Handbook complete, proofreading in progress</title>
+ <link>http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_translation_of_The_Debian_Administrator_s_Handbook_complete__proofreading_in_progress.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_translation_of_The_Debian_Administrator_s_Handbook_complete__proofreading_in_progress.html</guid>
+ <pubDate>Fri, 3 Mar 2017 14:50:00 +0100</pubDate>
+ <description><p>For almost a year now, we have been working on making a Norwegian
+Bokmål edition of <a href="https://debian-handbook.info/">The Debian
+Administrator's Handbook</a>. Now, thanks to the tireless effort of
+Ole-Erik, Ingrid and Andreas, the initial translation is complete, and
+we are working on the proof reading to ensure consistent language and
+use of correct computer science terms. The plan is to make the book
+available on paper, as well as in electronic form. For that to
+happen, the proof reading must be completed and all the figures need
+to be translated. If you want to help out, get in touch.</p>
+
+<p><a href="http://people.skolelinux.org/pere/debian-handbook/debian-handbook-nb-NO.pdf">A
+fresh PDF edition</a> in A4 format (the final book will have smaller
+pages) of the book created every morning is available for
+proofreading. If you find any errors, please
+<a href="https://hosted.weblate.org/projects/debian-handbook/">visit
+Weblate and correct the error</a>. The
+<a href="http://l.github.io/debian-handbook/stat/nb-NO/index.html">state
+of the translation including figures</a> is a useful source for those
+providing Norwegian Bokmål screen shots and figures.</p>
</description>
</item>
<item>
- <title>S3QL, a locally mounted cloud file system - nice free software</title>
- <link>http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html</guid>
- <pubDate>Wed, 9 Apr 2014 11:30:00 +0200</pubDate>
- <description><p>For a while now, I have been looking for a sensible offsite backup
-solution for use at home. My requirements are simple, it must be
-cheap and locally encrypted (in other words, I keep the encryption
-keys, the storage provider do not have access to my private files).
-One idea me and my friends had many years ago, before the cloud
-storage providers showed up, was to use Google mail as storage,
-writing a Linux block device storing blocks as emails in the mail
-service provided by Google, and thus get heaps of free space. On top
-of this one can add encryption, RAID and volume management to have
-lots of (fairly slow, I admit that) cheap and encrypted storage. But
-I never found time to implement such system. But the last few weeks I
-have looked at a system called
-<a href="https://bitbucket.org/nikratio/s3ql/">S3QL</a>, a locally
-mounted network backed file system with the features I need.</p>
-
-<p>S3QL is a fuse file system with a local cache and cloud storage,
-handling several different storage providers, any with Amazon S3,
-Google Drive or OpenStack API. There are heaps of such storage
-providers. S3QL can also use a local directory as storage, which
-combined with sshfs allow for file storage on any ssh server. S3QL
-include support for encryption, compression, de-duplication, snapshots
-and immutable file systems, allowing me to mount the remote storage as
-a local mount point, look at and use the files as if they were local,
-while the content is stored in the cloud as well. This allow me to
-have a backup that should survive fire. The file system can not be
-shared between several machines at the same time, as only one can
-mount it at the time, but any machine with the encryption key and
-access to the storage service can mount it if it is unmounted.</p>
-
-<p>It is simple to use. I'm using it on Debian Wheezy, where the
-package is already included. So to get started, run <tt>apt-get
-install s3ql</tt>. Next, pick a storage provider. I ended up picking
-Greenqloud, after reading their nice recipe on
-<a href="https://greenqloud.zendesk.com/entries/44611757-How-To-Use-S3QL-to-mount-a-StorageQloud-bucket-on-Debian-Wheezy">how
-to use S3QL with their Amazon S3 service</a>, because I trust the laws
-in Iceland more than those in the USA when it comes to keeping my
-personal data safe and private, and would thus rather spend money on a
-company in Iceland. Another nice recipe is available in the article
-<a href="http://www.admin-magazine.com/HPC/Articles/HPC-Cloud-Storage">S3QL
-Filesystem for HPC Storage</a> by Jeff Layton in the HPC section of
-Admin magazine. Once a provider is picked, figure out how to get the
-API key needed to connect to its storage API. With Greenqloud, the
-key did not show up until I had added payment details to my
-account.</p>
-
-<p>Armed with the API access details, it is time to create the file
-system. First, create a new bucket in the cloud. This bucket is the
-file system storage area. I picked a bucket name reflecting the
-machine that was going to store data there, but any name will do.
-I'll refer to it as <tt>bucket-name</tt> below. In addition, one needs
-the API login and password, and a locally created password. Store it
-all in ~root/.s3ql/authinfo2 like this:</p>
-
-<p><blockquote><pre>
-[s3c]
-storage-url: s3c://s.greenqloud.com:443/bucket-name
-backend-login: API-login
-backend-password: API-password
-fs-passphrase: local-password
-</pre></blockquote></p>
-
-<p>I create my local passphrase using <tt>pwget 50</tt> or similar,
-but any sensible way to create a fairly random password will do.
-Armed with these details, it is now time to run mkfs, entering the API
-details and password to create the file system:</p>
-
-<p><blockquote><pre>
-# mkdir -m 700 /var/lib/s3ql-cache
-# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
- --ssl s3c://s.greenqloud.com:443/bucket-name
-Enter backend login:
-Enter backend password:
-Before using S3QL, make sure to read the user's guide, especially
-the 'Important Rules to Avoid Loosing Data' section.
-Enter encryption password:
-Confirm encryption password:
-Generating random encryption key...
-Creating metadata tables...
-Dumping metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Compressing and uploading metadata...
-Wrote 0.00 MB of compressed metadata.
-# </pre></blockquote></p>
-
-<p>The next step is mounting the file system to make the storage
-available.</p>
-
-<p><blockquote><pre>
-# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
- --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
-Using 4 upload threads.
-Downloading and decompressing metadata...
-Reading metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Mounting filesystem...
-# df -h /s3ql
-Filesystem Size Used Avail Use% Mounted on
-s3c://s.greenqloud.com:443/bucket-name 1.0T 0 1.0T 0% /s3ql
-#
-</pre></blockquote></p>
-
-<p>The file system is now ready for use. I use rsync to store my
-backups in it, and as the metadata used by rsync is downloaded at
-mount time, no network traffic (and storage cost) is triggered by
-running rsync. To unmount, one should not use the normal umount
-command, as it will not flush the cache to the cloud storage; instead,
-run the umount.s3ql command like this:</p>
-
-<p><blockquote><pre>
-# umount.s3ql /s3ql
-#
-</pre></blockquote></p>
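<p>The rsync step mentioned above can be sketched like this. It is a
minimal sketch, not my exact invocation; the source and destination
paths are hypothetical placeholders, with the destination being a
directory under the S3QL mount point.</p>

```shell
# Minimal sketch of the rsync backup step; not my exact invocation.
# The destination would be a directory under the /s3ql mount point;
# here it is a parameter so the sketch can run anywhere.
backup_to_s3ql() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    # -a preserves permissions and timestamps; --delete removes files
    # from the copy that have disappeared from the source.
    rsync -a --delete "$src/" "$dest/"
}
# Example (hypothetical paths): backup_to_s3ql /etc /s3ql/backup/etc
```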
-
-<p>There is a fsck command available to check the file system and
-correct any problems detected. It can be used if the local server
-crashes while the file system is mounted, to reset the "already
-mounted" flag. This is what it looks like when processing a working
-file system:</p>
-
-<p><blockquote><pre>
-# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
-Using cached metadata.
-File system seems clean, checking anyway.
-Checking DB integrity...
-Creating temporary extra indices...
-Checking lost+found...
-Checking cached objects...
-Checking names (refcounts)...
-Checking contents (names)...
-Checking contents (inodes)...
-Checking contents (parent inodes)...
-Checking objects (reference counts)...
-Checking objects (backend)...
-..processed 5000 objects so far..
-..processed 10000 objects so far..
-..processed 15000 objects so far..
-Checking objects (sizes)...
-Checking blocks (referenced objects)...
-Checking blocks (refcounts)...
-Checking inode-block mapping (blocks)...
-Checking inode-block mapping (inodes)...
-Checking inodes (refcounts)...
-Checking inodes (sizes)...
-Checking extended attributes (names)...
-Checking extended attributes (inodes)...
-Checking symlinks (inodes)...
-Checking directory reachability...
-Checking unix conventions...
-Checking referential integrity...
-Dropping temporary indices...
-Backing up old metadata...
-Dumping metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Compressing and uploading metadata...
-Wrote 0.89 MB of compressed metadata.
-#
-</pre></blockquote></p>
-
-<p>Thanks to the cache, working on files that fit in the cache is very
-quick, about the same speed as local file access. Uploading large
-amounts of data is for me limited by the bandwidth out of and into my
-house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s,
-which is very close to my upload speed, and downloading the same
-Debian installation ISO gave me 610 kiB/s, close to my download speed.
-Both were measured using <tt>dd</tt>. So for me, the bottleneck is my
-network, not the file system code. I do not know what a good cache
-size would be, but suspect that the cache should be larger than your
-working set.</p>
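<p>For the record, the dd measurements were of roughly this shape; the
ISO file name below is a hypothetical example. Writing a file into the
mount measures upload speed once the cache fills, and reading back a
file that is not in the cache measures download speed.</p>

```shell
# Sketch of the dd based throughput measurement; the file name is a
# hypothetical example.  dd reports elapsed time and transfer rate on
# stderr when it finishes.
measure_copy() {
    # Copy a file into a directory using a 1 MiB block size.
    dd bs=1M if="$1" of="$2/$(basename "$1")"
}
# Example (hypothetical): measure_copy debian-netinst.iso /s3ql
```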
-
-<p>I mentioned that only one machine can mount the file system at a
-time. If another machine tries, it is told that the file system is
-busy:</p>
-
-<p><blockquote><pre>
-# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
- --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
-Using 8 upload threads.
-Backend reports that fs is still mounted elsewhere, aborting.
-#
-</pre></blockquote></p>
-
-<p>The file content is uploaded when the cache is full, while the
-metadata is uploaded once every 24 hours by default. To ensure the
-file system content is flushed to the cloud, one can either unmount
-the file system or ask S3QL to flush the cache and metadata using
-s3qlctrl:</p>
-
-<p><blockquote><pre>
-# s3qlctrl upload-meta /s3ql
-# s3qlctrl flushcache /s3ql
-#
-</pre></blockquote></p>
-
-<p>If you are curious about how much space your data uses in the
-cloud, and how much compression and de-duplication cut down on the
-storage usage, you can run s3qlstat on the mounted file system to get
-a report:</p>
-
-<p><blockquote><pre>
-# s3qlstat /s3ql
-Directory entries: 9141
-Inodes: 9143
-Data blocks: 8851
-Total data size: 22049.38 MB
-After de-duplication: 21955.46 MB (99.57% of total)
-After compression: 21877.28 MB (99.22% of total, 99.64% of de-duplicated)
-Database size: 2.39 MB (uncompressed)
-(some values do not take into account not-yet-uploaded dirty blocks in cache)
-#
-</pre></blockquote></p>
-
-<p>I mentioned earlier that there are several possible suppliers of
-storage. I did not try to locate them all, but am aware of at least
-<a href="https://www.greenqloud.com/">Greenqloud</a>,
-<a href="http://drive.google.com/">Google Drive</a>,
-<a href="http://aws.amazon.com/s3/">Amazon S3 web services</a>,
-<a href="http://www.rackspace.com/">Rackspace</a> and
-<a href="http://crowncloud.net/">Crowncloud</a>. The latter even
-accepts payment in Bitcoin. Pick one that suits your needs. Some of
-them provide several GiB of free storage, but the pricing models are
-quite different and you will have to figure out what suits you
-best.</p>
-
-<p>While researching this blog post, I had a look at research papers
-and posters discussing the S3QL file system. There are several, which
-tells me that the file system is getting critical scrutiny from the
-science community, and this increased my confidence in using it. One
-nice poster is titled
-"<a href="http://www.lanl.gov/orgs/adtsc/publications/science_highlights_2013/docs/pg68_69.pdf">An
-Innovative Parallel Cloud Storage System using OpenStack’s SwiftObject
-Store and Transformative Parallel I/O Approach</a>" by Hsing-Bung
-Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields
-and Pamela Smith. Please have a look.</p>
-
-<p>Given my problems with different file systems earlier, I decided to
-check out the mounted S3QL file system to see if it would be usable as
-a home directory (in other words, whether it provides POSIX semantics
-when it comes to locking, umask handling and so on). Running
-<a href="http://people.skolelinux.org/pere/blog/Testing_if_a_file_system_can_be_used_for_home_directories___.html">my
-test code to check file system semantics</a>, I was happy to discover
-that no errors were found. So the file system can be used for home
-directories, if one chooses to do so.</p>
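<p>As a far smaller spot check than a full test suite, one can verify
a single semantic, advisory file locking, straight from the shell.
This sketch only exercises flock(1), proves nothing about umask
handling or the rest, and assumes util-linux is installed.</p>

```shell
# Tiny spot check of advisory locking in a directory, one of the POSIX
# semantics a home directory needs.  Much weaker than a real test suite.
check_locking() {
    dir=${1:-.}
    lockfile="$dir/.locktest.$$"
    touch "$lockfile"
    # flock -n fails immediately if the lock cannot be taken.
    if flock -n "$lockfile" true; then
        echo "locking works in $dir"
    else
        echo "locking FAILED in $dir"
    fi
    rm -f "$lockfile"
}
```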
-
-<p>If you do not want a local file system, and want something that
-works without the Linux fuse file system, I would like to mention the
-<a href="http://www.tarsnap.com/">Tarsnap service</a>, which also
-provides locally encrypted backups using a command line client. It
-has a nicer access control system, where one can split out read and
-write access, allowing some systems to write to the backup and others
-only to read from it.</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+ <title>Unlimited randomness with the ChaosKey?</title>
+ <link>http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</guid>
+ <pubDate>Wed, 1 Mar 2017 20:50:00 +0100</pubDate>
+ <description><p>A few days ago I ordered a small batch of
+<a href="http://altusmetrum.org/ChaosKey/">the ChaosKey</a>, a small
+USB dongle for generating entropy, created by Bdale Garbee and Keith
+Packard. Yesterday it arrived, and I am very happy to report that it
+works great! According to its designers, to get it to work out of the
+box you need Linux kernel version 4.1 or later. I tested it on a
+Debian Stretch machine (kernel version 4.9), and there it worked just
+fine, increasing the available entropy very quickly. I wrote a small
+one-liner to test it. It first prints the current entropy level,
+drains /dev/random, and then prints the entropy level once a second
+for five seconds. Here is the situation without the ChaosKey
+inserted:</p>
+
+<blockquote><pre>
+% cat /proc/sys/kernel/random/entropy_avail; \
+ dd bs=1M if=/dev/random of=/dev/null count=1; \
+ for n in $(seq 1 5); do \
+ cat /proc/sys/kernel/random/entropy_avail; \
+ sleep 1; \
+ done
+300
+0+1 records in
+0+1 records out
+28 bytes copied, 0.000264565 s, 106 kB/s
+4
+8
+12
+17
+21
+%
+</pre></blockquote>
+
+<p>The entropy level increases by 3-4 every second. At that rate, any
+application requiring random bits (like an HTTPS enabled web server)
+will halt and wait for more entropy. And here is the situation with
+the ChaosKey inserted:</p>
+
+<blockquote><pre>
+% cat /proc/sys/kernel/random/entropy_avail; \
+ dd bs=1M if=/dev/random of=/dev/null count=1; \
+ for n in $(seq 1 5); do \
+ cat /proc/sys/kernel/random/entropy_avail; \
+ sleep 1; \
+ done
+1079
+0+1 records in
+0+1 records out
+104 bytes copied, 0.000487647 s, 213 kB/s
+433
+1028
+1031
+1035
+1038
+%
+</pre></blockquote>
+
+<p>Quite the difference. :) I bought a few more than I need, in case
+someone wants to buy one here in Norway. :)</p>
+
+<p>Update: The dongle was presented at DebConf last year. You might
+find <a href="https://debconf16.debconf.org/talks/94/">the talk
+recording illuminating</a>. It explains exactly what the source of
+randomness is, if you are unable to spot it from the schematic
+available from the ChaosKey web site linked at the start of this blog
+post.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Detect OOXML files with undefined behaviour?</title>
+ <link>http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</guid>
+ <pubDate>Tue, 21 Feb 2017 00:20:00 +0100</pubDate>
+ <description><p>I just noticed that
+<a href="http://www.arkivrad.no/aktuelt/riksarkivarens-forskrift-pa-horing">the
+new Norwegian proposal for archiving rules in the government</a> lists
+<a href="http://www.ecma-international.org/publications/standards/Ecma-376.htm">ECMA-376</a>
+/ ISO/IEC 29500 (aka OOXML) as a valid format to put in long term
+storage. Luckily such files will only be accepted based on
+pre-approval from the National Archive. Allowing OOXML files to be
+used for long term storage might seem like a good idea, as long as we
+forget that there are plenty of ways for a "valid" OOXML document to
+have content with no defined interpretation in the standard, which
+leads to a question and an idea.</p>
+
+<p>Is there any tool to detect whether an OOXML document depends on
+such undefined behaviour? It would be useful for the National Archive
+(and anyone else interested in verifying that a document is well
+defined) to have such a tool available when considering whether to
+approve the use of OOXML. I'm aware of the
+<a href="https://github.com/arlm/officeotron/">officeotron OOXML
+validator</a>, but do not know how complete it is, nor whether it will
+report use of undefined behaviour. Are there other similar tools
+available? Please send me an email if you know of any such tool.</p>
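<p>While I know of no such detector, the very first building block is
simple: an OOXML document is a ZIP container of XML parts, so any
checker has to start by enumerating those parts. Here is a sketch,
using python3 so it does not depend on unzip being installed; to be
clear, this inspects structure only and detects no undefined
behaviour.</p>

```shell
# Enumerate the XML parts of an OOXML container.  Structure only; a
# real checker would then have to examine each part against the spec.
list_ooxml_parts() {
    # python3 -m zipfile -l prints one line per archive member.
    python3 -m zipfile -l "$1"
}
# Example (hypothetical file name): list_ooxml_parts report.docx
```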
</description>
</item>