<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
<item>
- <title>Isenkram, Appstream and udev make life as a LEGO builder easier</title>
- <link>http://people.skolelinux.org/pere/blog/Isenkram__Appstream_and_udev_make_life_as_a_LEGO_builder_easier.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Isenkram__Appstream_and_udev_make_life_as_a_LEGO_builder_easier.html</guid>
- <pubDate>Fri, 7 Oct 2016 09:50:00 +0200</pubDate>
- <description><p><a href="http://packages.qa.debian.org/isenkram">The Isenkram
-system</a> provides a practical and easy way to figure out which
-packages support the hardware in a given machine. The command line
-tool <tt>isenkram-lookup</tt> and the tasksel options provide a
-convenient way to list and install packages relevant for the current
-hardware during system installation, both user space packages and
-firmware packages. The GUI background daemon, on the other hand,
-proposes in a pop-up to install packages when a new dongle is inserted
-while using the computer. For example, if you plug in a smart card
-reader, the system will ask if you want to install <tt>pcscd</tt> if
-that package isn't already installed, and if you plug in a USB video
-camera it will ask if you want to install <tt>cheese</tt> if
-cheese is currently missing. This already works just fine.</p>
-
-<p>But Isenkram depends on a database mapping hardware IDs to
-package names. When I started, no such database existed in Debian, so
-I made my own data set, included it with the isenkram package, and
-made isenkram fetch the latest version of the database from git over
-HTTP. This way isenkram users get updated package proposals as soon
-as I learn more about hardware related packages.</p>
-
-<p>The hardware is identified using modalias strings. The modalias
-design comes from the Linux kernel, where most hardware descriptors
-are made available as strings that can be matched using filename
-style globbing. It handles USB, PCI, DMI and a lot of other hardware
-related identifiers.</p>
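<p>As an illustration (a sketch; the modalias string below is a
hypothetical entry using the USB vendor/product IDs of the LEGO RCX
tower), such strings can be matched with ordinary shell globbing:</p>

```shell
# Hypothetical modalias string for a USB device (vendor 0694, product 0001)
modalias="usb:v0694p0001d0100dc00dsc00dp00ic01isc01ip00in00"
# A filename-style glob of the kind used in hardware-to-package mappings
pattern="usb:v0694p0001*"
case "$modalias" in
  $pattern) echo "match" ;;    # prints "match"
  *) echo "no match" ;;
esac
```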
-
-<p>The downside of the Isenkram specific database is that it carries
-no information about the relevant distribution or Debian version, so
-isenkram proposes obsolete packages too. But along came AppStream, a
-cross distribution mechanism to store and collect metadata about
-software packages. When I heard about the proposal, I contacted the
-people involved and suggested adding a hardware matching rule using
-modalias strings to the specification, to be able to use AppStream for
-mapping hardware to packages. This idea was accepted, and AppStream is
-now a great way for a package to announce the hardware it supports in
-a distribution neutral way. I wrote
-<a href="http://people.skolelinux.org/pere/blog/Using_appstream_with_isenkram_to_install_hardware_related_packages_in_Debian.html">a
-recipe on how to add such meta-information</a> in a blog post last
-December. If you have a hardware related package in Debian, please
-announce the relevant hardware IDs using AppStream.</p>
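<p>A minimal sketch of such an announcement (the component id and
modalias glob are illustrative; see the AppStream specification for
the exact metainfo file layout):</p>

```xml
<!-- Sketch of an AppStream metainfo fragment announcing hardware
     support via a modalias glob (illustrative values) -->
<component>
  <id>nqc</id>
  <provides>
    <modalias>usb:v0694p0001d*</modalias>
  </provides>
</component>
```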
-
-<p>In Debian, almost all packages that can talk to a LEGO Mindstorms
-RCX or NXT unit announce this support using AppStream. The effect is
-that when you insert such a LEGO robot controller into your Debian
-machine, Isenkram will propose to install the packages needed to get
-it working. The intention is that this should allow the local user to
-start programming the robot controller right away, without having to
-guess which packages to use or which permissions to fix.</p>
-
-<p>But when I sat down with my son the other day to program our NXT
-unit using his Debian Stretch computer, I discovered something
-annoying. The local console user (ie my son) did not get access to
-the USB device for programming the unit. This used to work, but no
-longer does in Jessie and Stretch. After some investigation and
-asking around on #debian-devel, I discovered that this was because
-udev had changed the mechanism used to grant access to local devices.
-The ConsoleKit mechanism from
-<tt>/lib/udev/rules.d/70-udev-acl.rules</tt> no longer applied,
-because LDAP users were no longer added to the plugdev group during
-login. Michael Biebl told me that this method was obsolete and that
-the new method uses ACLs instead. This was good news, as the plugdev
-mechanism is a mess when using a remote user directory like LDAP.
-Using ACLs makes sure a user loses device access on logout, even if
-the user leaves behind a background process, which would have
-retained the plugdev membership with the ConsoleKit setup. Armed with
-this knowledge I moved on to fix the access problem for the LEGO
-Mindstorms related packages.</p>
-
-<p>The new system uses a udev tag, 'uaccess'. It can either be
-applied directly to a device, or be applied in
-/lib/udev/rules.d/70-uaccess.rules for classes of devices. As the
-LEGO Mindstorms udev rules did not have a class, I decided to add the
-tag directly in the udev rules files included in the packages. Here
-is one example. For the nqc C compiler for the RCX, the
-<tt>/lib/udev/rules.d/60-nqc.rules</tt> file now looks like this:
-
-<p><pre>
-SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0694", ATTR{idProduct}=="0001", \
- SYMLINK+="rcx-%k", TAG+="uaccess"
-</pre></p>
-
-<p>I suspect all packages using plugdev in their /lib/udev/rules.d/
-files should be changed to use this tag (either directly or indirectly
-via <tt>70-uaccess.rules</tt>). Perhaps a lintian check should be
-created to detect this?</p>
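<p>A quick way to hunt for candidates is to grep the rules files for
the plugdev group (a sketch; the throw-away rules file below only
makes the example self-contained, on a real system one would point
grep at /lib/udev/rules.d):</p>

```shell
# Create a throw-away rules file still granting access via plugdev
tmpdir=$(mktemp -d)
cat > "$tmpdir/60-example.rules" <<'EOF'
SUBSYSTEM=="usb", ATTR{idVendor}=="0694", MODE="0660", GROUP="plugdev"
EOF
# List rules files that use plugdev and probably need the uaccess tag
found=$(grep -l 'GROUP="plugdev"' "$tmpdir"/*.rules)
echo "$found"
rm -r "$tmpdir"
```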
-
-<p>I've been unable to find good documentation on the uaccess
-feature. It is unclear to me whether the uaccess tag is an internal
-implementation detail like the udev-acl tag used by
-<tt>/lib/udev/rules.d/70-udev-acl.rules</tt>. If it is, I guess the
-indirect method is the preferred way. Michael
-<a href="https://github.com/systemd/systemd/issues/4288">asked for more
-documentation from the systemd project</a> and I hope it will make
-this clearer. For now I use the generic classes when they exist and
-are already handled by <tt>70-uaccess.rules</tt>, and add the tag
-directly if no such class exists.</p>
-
-<p>To learn more about the isenkram system, please check out
-<a href="http://people.skolelinux.org/pere/blog/tags/isenkram/">my
-blog posts tagged isenkram</a>.</p>
-
-<p>To help make life easier for LEGO builders in Debian, please join
-us on our IRC channel
-<a href="irc://irc.debian.org/%23debian-lego">#debian-lego</a> and join
-the <a href="https://alioth.debian.org/projects/debian-lego/">Debian
-LEGO team</a> in the Alioth project we created yesterday. A mailing
-list is not yet created, but we are working on it. :)</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+ <title>Idea for storing trusted timestamps in a Noark 5 archive</title>
+ <link>http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html</guid>
+ <pubDate>Wed, 7 Jun 2017 21:40:00 +0200</pubDate>
+ <description><p><em>This is a copy of
+<a href="https://lists.nuug.no/pipermail/nikita-noark/2017-June/000297.html">an
+email I posted to the nikita-noark mailing list</a>. Please follow up
+there if you would like to discuss this topic. The background is that
+we are making a free software archive system based on the Norwegian
+<a href="https://www.arkivverket.no/forvaltning-og-utvikling/regelverk-og-standarder/noark-standarden">Noark
+5 standard</a> for government archives.</em></p>
+
+<p>I've been wondering a bit lately how trusted timestamps could be
+stored in Noark 5.
+<a href="https://en.wikipedia.org/wiki/Trusted_timestamping">Trusted
+timestamps</a> can be used to verify that some information
+(document/file/checksum/metadata) has not been changed since a
+specific time in the past. This is useful for verifying the
+integrity of the documents in the archive.</p>
+
+<p>Then it occurred to me: perhaps the trusted timestamps could be
+stored as dokument variants (ie dokumentobjekt referred to from
+dokumentbeskrivelse) with the filename set to the hash being
+stamped?</p>
+
+<p>Given a "dokumentbeskrivelse" with an associated "dokumentobjekt",
+a new dokumentobjekt is associated with the "dokumentbeskrivelse",
+with the same attributes as the stamped dokumentobjekt except for
+these:</p>
+
+<ul>
+
+<li>format -> "RFC3161"</li>
+<li>mimeType -> "application/timestamp-reply"</li>
+<li>formatDetaljer -> "&lt;source URL for timestamp service&gt;"</li>
+<li>filenavn -> "&lt;sjekksum&gt;.tsr"</li>
+
+</ul>
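<p>Expressed as a sketch (a hypothetical serialisation of the new
dokumentobjekt; the attribute names are taken from the list above,
the values are examples):</p>

```json
{
  "format": "RFC3161",
  "mimeType": "application/timestamp-reply",
  "formatDetaljer": "https://zeitstempel.dfn.de",
  "filenavn": "&lt;sjekksum&gt;.tsr"
}
```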
+
+<p>This assumes a service following
+<a href="https://tools.ietf.org/html/rfc3161">IETF RFC 3161</a> is
+used, which specifies the given MIME type for replies and the .tsr
+file extension for the content of such trusted timestamps. As far as
+I can tell from the Noark 5 specifications, it is OK to have several
+variants/renderings of a dokument attached to a given
+dokumentbeskrivelse objekt. It might be stretching it a bit to make
+some of these variants represent crypto-signatures useful for
+verifying the document integrity instead of representing the dokument
+itself.</p>
+
+<p>Using the source of the service in formatDetaljer allows several
+timestamping services to be used. This is useful to spread the risk
+of key compromise over several organisations. The timestamps would
+only become untrustworthy if all of the organisations were
+compromised.</p>
+
+<p>The following oneliner on Linux can be used to generate the tsr
+file. $inputfile is the path to the file to checksum, and $sha256 is
+the SHA-256 checksum of the file (ie the &lt;sjekksum&gt; part of the
+"&lt;sjekksum&gt;.tsr" filename mentioned above).</p>
+
+<p><blockquote><pre>
+openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
+ | curl -s -H "Content-Type: application/timestamp-query" \
+ --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr
+</pre></blockquote></p>
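<p>The $sha256 value itself can be computed like this (a sketch;
/tmp/demo.txt stands in for the document being timestamped):</p>

```shell
# Create a stand-in input file and compute its SHA-256 checksum,
# which becomes the checksum part of the .tsr filename
printf 'hello' > /tmp/demo.txt
inputfile=/tmp/demo.txt
sha256=$(sha256sum "$inputfile" | awk '{print $1}')
echo "$sha256.tsr"
```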
+
+<p>To verify the timestamp, you first need to download the public key
+of the trusted timestamp service, for example using this command:</p>
+
+<p><blockquote><pre>
+wget -O ca-cert.txt \
+ https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
+</pre></blockquote></p>
+
+<p>Note, the public key should be stored alongside the timestamps in
+the archive to make sure it is also available 100 years from now. It
+is probably a good idea to standardise how and where to store such
+public keys, to make them easier to find for those trying to verify
+documents 100 or 1000 years from now. :)</p>
+
+<p>The verification itself is a simple openssl command:</p>
+
+<p><blockquote><pre>
+openssl ts -verify -data $inputfile -in $sha256.tsr \
+ -CAfile ca-cert.txt -text
+</pre></blockquote></p>
+
+<p>Is there any reason this approach would not work? Is it somehow against
+the Noark 5 specification?</p>
</description>
</item>
<item>
- <title>The Aftenposten editor, cap in hand</title>
- <link>http://people.skolelinux.org/pere/blog/Aftenposten_redakt_ren_med_lua_i_h_nda.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Aftenposten_redakt_ren_med_lua_i_h_nda.html</guid>
- <pubDate>Fri, 9 Sep 2016 11:30:00 +0200</pubDate>
- <description><p>One of today's news stories is that Aftenposten's editor Espen
-Egil Hansen uses
-<a href="https://www.nrk.no/kultur/aftenposten-brukar-heile-forsida-pa-facebook-kritikk-1.13126918">the
-front page of the printed paper for an open letter to Facebook boss
-Mark Zuckerberg about Facebook's removal of pictures, texts and pages
-it does not like</a>. It must be unfamiliar for the editor of
-Aftenposten to stand cap in hand, hoping to be heard. Especially
-since Aftenposten has helped hand Facebook the power it now
-demonstrates. By joining the Facebook community, the paper accepted
-the terms of use and entered into a presumably binding agreement.
-Perhaps it should have read and considered the terms more closely
-before saying yes, instead of complaining that the rules it chose to
-accept are being followed? Personally I find the terms unacceptable,
-and it would never occur to me to enter into an agreement on such
-terms. Beyond the unacceptable terms, there are many other reasons
-to avoid Facebook. A solid review of several such arguments can be
-found on <a href="https://stallman.org/facebook.html">Richard
-Stallman's page about Facebook</a>.</p>
-
-<p>I hope more Norwegian editors will likewise have to stand cap in
-hand until they understand that they are helping lead society astray
-by embracing Facebook the way they do when they cover and promote
-stories from Facebook and use Facebook as a distribution channel for
-their news. They feed the surveillance society and erase their
-readers' privacy when they link to Facebook on their pages, and they
-lock themselves into an environment where Facebook, not the editor,
-holds the power.</p>
-
-<p>But that will probably take time, in a Norway where most web
-editors
-<a href="http://people.skolelinux.org/pere/blog/Snurpenot_overv_kning_av_sensitiv_personinformasjon.html">share
-their readers' personal data with foreign intelligence services</a>.</p>
-
-<p>Incidentally, the whistleblower Edward Snowden should be granted
-political asylum in Norway.</p>
+ <title>When the Nynorsk translation fails at the exam...</title>
+ <link>http://people.skolelinux.org/pere/blog/N_r_nynorskoversettelsen_svikter_til_eksamen___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/N_r_nynorskoversettelsen_svikter_til_eksamen___.html</guid>
+ <pubDate>Sat, 3 Jun 2017 08:20:00 +0200</pubDate>
+ <description><p><a href="http://www.aftenposten.no/norge/Krever-at-elever-ma-fa-annullert-eksamen-etter-rot-med-oppgavetekster-622459b.html">Aftenposten
+reports today</a> on errors in the exam questions for the exam in
+politics and human rights, where the texts in the Bokmål and Nynorsk
+versions were not identical. The exam text is quoted in the article,
+and I got curious whether the free translation system
+<a href="https://www.apertium.org/">Apertium</a> would have done a
+better job than the Directorate for Education
+(Utdanningsdirektoratet). It would appear so.</p>
+
+<p>Here is the Bokmål question from the exam:</p>
+
+<blockquote>
+<p>Drøft utfordringene knyttet til nasjonalstatenes og andre aktørers
+rolle og muligheter til å håndtere internasjonale utfordringer, som
+for eksempel flykningekrisen.</p>
+
+<p>Vedlegge er eksempler på tekster som kan gi relevante perspektiver
+på temaet:</p>
+<ol>
+<li>Flykningeregnskapet 2016, UNHCR og IDMC
+<li>«Grenseløst Europa for fall» A-Magasinet, 26. november 2015
+</ol>
+
+</blockquote>
+
+<p>Apertium translates this as follows:</p>
+
+<blockquote>
+<p>Drøft utfordringane knytte til nasjonalstatane sine og rolla til
+andre aktørar og høve til å handtera internasjonale utfordringar, som
+til dømes *flykningekrisen.</p>
+
+<p>Vedleggja er døme på tekster som kan gje relevante perspektiv på
+temaet:</p>
+
+<ol>
+<li>*Flykningeregnskapet 2016, *UNHCR og *IDMC</li>
+<li>«*Grenseløst Europa for fall» A-Magasinet, 26. november 2015</li>
+</ol>
+
+</blockquote>
+
+<p>Words that were not understood are marked with an asterisk (*) and
+need an extra language check. But no words went missing, as they did
+in the exam text the pupils were given. I do suspect that "andre
+aktørers rolle og muligheter til ..." should have been translated as
+"rolla til andre aktørar og deira høve til ..." or something similar,
+but that is perhaps nitpicking. It merely underlines that automatic
+translation always needs proofreading afterwards.</p>
</description>
</item>
<item>
- <title>The intelligence service asks to read the email of the parties in the Storting</title>
- <link>http://people.skolelinux.org/pere/blog/E_tjenesten_ber_om_innsyn_i_eposten_til_partiene_p__Stortinget.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/E_tjenesten_ber_om_innsyn_i_eposten_til_partiene_p__Stortinget.html</guid>
- <pubDate>Tue, 6 Sep 2016 23:00:00 +0200</pubDate>
- <description><p>This weekend brought a hair-raising proposal from the Lysne II
-commission appointed by the Norwegian Ministry of Defence. The
-commission was asked to evaluate the wish list of the Norwegian
-military intelligence service (e-tjenesten), and has
-<a href="http://www.aftenposten.no/norge/Utvalg-sier-ja-til-at-E-tjenesten-far-overvake-innholdet-i-all-internett--og-telefontrafikk-som-krysser-riksgrensen-603232b.html">proposed
-that the service be allowed to wiretap all Internet traffic</a>
-crossing Norway's borders. Few realise that this would give the
-service access to email sent to most of the political parties in the
-Storting. The governing party Høyre (@hoyre.no), the supporting
-parties Venstre (@venstre.no) and Kristelig Folkeparti (@krf.no), as
-well as Sosialistisk Venstreparti (@sv.no) and Miljøpartiet De Grønne
-(@mdg.no) have all chosen to receive their email via foreign
-services. This means that if anyone sends email to such an address
-and the proposal is adopted, the content will be made available to
-the intelligence service. Venstre, Sosialistisk Venstreparti and
-Miljøpartiet De Grønne receive their email at Google, Kristelig
-Folkeparti at Microsoft, and Høyre at Comendo with reception in
-Denmark and Ireland. Only Arbeiderpartiet and Fremskrittspartiet
-have chosen to receive their email in Norway, at Intility AS and
-Telecomputing AS respectively.</p>
-
-<p>The consequence is that email in and out of the political
-organisations, to and from party members and elected representatives,
-will be made available to the intelligence service for analysis and
-sorting. I suspect the knowledge made available this way would be
-useful to anyone wanting to know which arguments resonate with the
-public when trying to influence the members of the Storting.</p>
-
-<p>Using DNS MX lookups for the email domains, whois lookups of the
-IP addresses involved, and traceroute to see whether the traffic
-passes through other countries, anyone can confirm that email sent to
-the parties mentioned will be made available to the military
-intelligence service if the proposal is adopted. The handy web
-service <a href="http://ipinfo.io/">ipinfo.io</a> also gives an idea
-of where in the world an IP address belongs.</p>
-
-<p>On the positive side, the proposal will motivate even more people
-to take steps to use <a href="https://www.torproject.org/">Tor</a>
-and encrypted communication tools when talking to their loved ones,
-to make sure their privacy is protected. I use, among other things,
-<a href="https://www.freedomboxfoundation.org/">FreedomBox</a> and
-<a href="https://whispersystems.org/">Signal</a> for this myself.
-Neither is perfect, but they already work quite well and raise the
-cost for those who want to invade my privacy.</p>
-
-<p>Incidentally, the whistleblower Edward Snowden should be granted
-political asylum in Norway.</p>
-
-<!--
-
-venstre.no
- venstre.no mail is handled by 10 aspmx.l.google.com.
- venstre.no mail is handled by 20 alt1.aspmx.l.google.com.
- venstre.no mail is handled by 20 alt2.aspmx.l.google.com.
- venstre.no mail is handled by 30 aspmx2.googlemail.com.
- venstre.no mail is handled by 30 aspmx3.googlemail.com.
-
-traceroute to aspmx.l.google.com (173.194.222.27), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.411 ms 0.438 ms 0.536 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.375 ms 0.452 ms 0.548 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 1.940 ms 1.950 ms 1.942 ms
- 4 se-tug.nordu.net (109.105.102.108) 6.910 ms 6.949 ms 7.283 ms
- 5 google-gw.nordu.net (109.105.98.6) 6.975 ms 6.967 ms 6.958 ms
- 6 209.85.250.192 (209.85.250.192) 7.337 ms 7.286 ms 10.890 ms
- 7 209.85.254.13 (209.85.254.13) 7.394 ms 209.85.254.31 (209.85.254.31) 7.586 ms 209.85.254.33 (209.85.254.33) 7.570 ms
- 8 209.85.251.255 (209.85.251.255) 15.686 ms 209.85.249.229 (209.85.249.229) 16.118 ms 209.85.251.255 (209.85.251.255) 16.073 ms
- 9 74.125.37.255 (74.125.37.255) 16.794 ms 216.239.40.248 (216.239.40.248) 16.113 ms 74.125.37.44 (74.125.37.44) 16.764 ms
-10 * * *
-
-mdg.no
- mdg.no mail is handled by 1 aspmx.l.google.com.
- mdg.no mail is handled by 5 alt2.aspmx.l.google.com.
- mdg.no mail is handled by 5 alt1.aspmx.l.google.com.
- mdg.no mail is handled by 10 aspmx2.googlemail.com.
- mdg.no mail is handled by 10 aspmx3.googlemail.com.
-sv.no
- sv.no mail is handled by 1 aspmx.l.google.com.
- sv.no mail is handled by 5 alt1.aspmx.l.google.com.
- sv.no mail is handled by 5 alt2.aspmx.l.google.com.
- sv.no mail is handled by 10 aspmx3.googlemail.com.
- sv.no mail is handled by 10 aspmx2.googlemail.com.
-hoyre.no
- hoyre.no mail is handled by 10 hoyre-no.mx1.comendosystems.com.
- hoyre.no mail is handled by 20 hoyre-no.mx2.comendosystems.net.
-
-traceroute to hoyre-no.mx1.comendosystems.com (89.104.206.4), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.450 ms 0.510 ms 0.591 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.383 ms 0.508 ms 0.596 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.311 ms 0.315 ms 0.300 ms
- 4 se-tug.nordu.net (109.105.102.108) 6.837 ms 6.842 ms 6.834 ms
- 5 dk-uni.nordu.net (109.105.97.10) 26.073 ms 26.085 ms 26.076 ms
- 6 dix.1000m.soeborg.ip.comendo.dk (192.38.7.22) 15.372 ms 15.046 ms 15.123 ms
- 7 89.104.192.65 (89.104.192.65) 15.875 ms 15.990 ms 16.239 ms
- 8 89.104.192.179 (89.104.192.179) 15.676 ms 15.674 ms 15.664 ms
- 9 03dm-com.mx1.staysecuregroup.com (89.104.206.4) 15.637 ms * *
-
-krf.no
- krf.no mail is handled by 10 krf-no.mail.protection.outlook.com.
-
-traceroute to krf-no.mail.protection.outlook.com (213.199.154.42), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.401 ms 0.438 ms 0.536 ms
- 2 uio-gw8.uio.no (129.240.24.229) 11.076 ms 11.120 ms 11.204 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.232 ms 0.234 ms 0.271 ms
- 4 se-tug.nordu.net (109.105.102.108) 6.811 ms 6.820 ms 6.815 ms
- 5 netnod-ix-ge-a-sth-4470.microsoft.com (195.245.240.181) 7.074 ms 7.013 ms 7.061 ms
- 6 ae1-0.sto-96cbe-1b.ntwk.msn.net (104.44.225.161) 7.227 ms 7.362 ms 7.293 ms
- 7 be-8-0.ibr01.ams.ntwk.msn.net (104.44.5.7) 41.993 ms 43.334 ms 41.939 ms
- 8 be-1-0.ibr02.ams.ntwk.msn.net (104.44.4.214) 43.153 ms 43.507 ms 43.404 ms
- 9 ae3-0.fra-96cbe-1b.ntwk.msn.net (104.44.5.17) 29.897 ms 29.831 ms 29.794 ms
-10 ae10-0.vie-96cbe-1a.ntwk.msn.net (198.206.164.1) 42.309 ms 42.130 ms 41.808 ms
-11 * ae8-0.vie-96cbe-1b.ntwk.msn.net (104.44.227.29) 41.425 ms *
-12 * * *
-
-arbeiderpartiet.no
- arbeiderpartiet.no mail is handled by 10 mail.intility.com.
- arbeiderpartiet.no mail is handled by 20 mail2.intility.com.
-
-traceroute to mail.intility.com (188.95.245.87), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.486 ms 0.508 ms 0.649 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.416 ms 0.508 ms 0.620 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.276 ms 0.278 ms 0.275 ms
- 4 te3-1-2.br1.fn3.as2116.net (193.156.90.3) 0.374 ms 0.371 ms 0.416 ms
- 5 he16-1-1.cr1.san110.as2116.net (195.0.244.234) 3.132 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48) 10.079 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234) 3.353 ms
- 6 te1-2-0.ar2.ulv89.as2116.net (195.0.243.194) 0.569 ms te5-0-0.ar2.ulv89.as2116.net (195.0.243.192) 0.661 ms 0.653 ms
- 7 cD2EC45C1.static.as2116.net (193.69.236.210) 0.654 ms 0.615 ms 0.590 ms
- 8 185.7.132.38 (185.7.132.38) 1.661 ms 1.808 ms 1.695 ms
- 9 185.7.132.100 (185.7.132.100) 1.793 ms 1.943 ms 1.546 ms
-10 * * *
-
-frp.no
- frp.no mail is handled by 10 mx03.telecomputing.no.
- frp.no mail is handled by 20 mx01.telecomputing.no.
-
-traceroute to mx03.telecomputing.no (95.128.105.102), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.378 ms 0.402 ms 0.479 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.361 ms 0.458 ms 0.548 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.361 ms 0.352 ms 0.336 ms
- 4 xe-2-2-0-0.san-peer2.osl.no.ip.tdc.net (193.156.90.16) 0.375 ms 0.366 ms 0.346 ms
- 5 xe-2-0-2-0.ost-pe1.osl.no.ip.tdc.net (85.19.121.97) 0.780 ms xe-2-0-0-0.ost-pe1.osl.no.ip.tdc.net (85.19.121.101) 0.713 ms xe-2-0-2-0.ost-pe1.osl.no.ip.tdc.net (85.19.121.97) 0.759 ms
- 6 cpe.xe-0-2-0-100.ost-pe1.osl.no.customer.tdc.net (85.19.26.46) 0.837 ms 0.755 ms 0.759 ms
- 7 95.128.105.3 (95.128.105.3) 1.050 ms 1.288 ms 1.182 ms
- 8 mx03.telecomputing.no (95.128.105.102) 0.717 ms 0.703 ms 0.692 ms
-
--->
+ <title>Email as an approved archival format in the National Archivist's regulations?</title>
+ <link>http://people.skolelinux.org/pere/blog/Epost_inn_som_arkivformat_i_Riksarkivarens_forskrift_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Epost_inn_som_arkivformat_i_Riksarkivarens_forskrift_.html</guid>
+ <pubDate>Thu, 27 Apr 2017 11:30:00 +0200</pubDate>
+ <description><p>These days, with a deadline of May 1st, the National Archivist
+of Norway (Riksarkivaren) has his regulations out for public
+consultation. As you can see, there is not much time left before the
+deadline, which expires on Sunday. These regulations list the formats
+in which it is acceptable to archive documents in
+<a href="http://www.arkivverket.no/arkivverket/Offentleg-forvalting/Noark/Noark-5">Noark
+5 solutions</a> in Norway.</p>
+
+<p>I found the consultation documents at
+<a href="https://www.arkivrad.no/aktuelt/riksarkivarens-forskrift-pa-horing">Norsk
+Arkivråd</a> after being tipped off on the mailing list of
+<a href="https://github.com/hiOA-ABI/nikita-noark5-core">the free
+software project Nikita Noark5-Core</a>, which implements a Noark 5
+Tjenestegrensesnitt (service interface). I am involved in the Nikita
+project, and thanks to my interest in the service interface project I
+have read quite a few Noark 5 related documents, and discovered to my
+surprise that standard email is not on the list of approved formats
+that may be archived. The consultation, with its deadline on Sunday,
+is an excellent opportunity to try to do something about that. I am
+working on
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/docs/hoering-arkivforskrift.tex">my
+own consultation response</a>, and wonder whether others are
+interested in supporting the proposal to allow archiving email as
+email in the archive.</p>
+
+<p>Are you already writing a consultation response of your own? If
+so, consider including a statement on email storage. I don't think
+much is needed. Here is a short suggested text:</p>
+
+<p><blockquote>
+
+  <p>We refer to the consultation sent out 2017-02-17 (the National
+  Archivist's reference 2016/9840 HELHJO), and take the liberty of
+  submitting some input on the revision of the Regulations on
+  supplementary technical and archival provisions on the processing
+  of public archives (the National Archivist's regulations).</p>
+
+  <p>Very much of our communication today takes place by email. We
+  therefore propose that Internet email, as described in IETF RFC
+  5322,
+  <a href="https://tools.ietf.org/html/rfc5322">https://tools.ietf.org/html/rfc5322</a>,
+  should be added as an approved document format. We propose that the
+  regulations' list of document formats approved for deposit in
+  § 5-16 be amended to include Internet email.</p>
+
+</blockquote></p>
+
+<p>As part of the work on the service interface we have tested how
+email can be stored in a Noark 5 structure, and we are writing a
+proposal for how this can be done, to be sent to the National
+Archives (Arkivverket) as soon as it is finished. Those interested
+can
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/docs/epostlagring.md">follow
+the progress on the web</a>.</p>
+
+<p>Update 2017-04-28: Today the consultation response I wrote was
+ <a href="https://www.nuug.no/news/NUUGs_h_ringuttalelse_til_Riksarkivarens_forskrift.shtml">submitted
+ by the association NUUG</a>.</p>
</description>
</item>
<item>
- <title>First draft Norwegian Bokmål edition of The Debian Administrator's Handbook now public</title>
- <link>http://people.skolelinux.org/pere/blog/First_draft_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook_now_public.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/First_draft_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook_now_public.html</guid>
- <pubDate>Tue, 30 Aug 2016 10:10:00 +0200</pubDate>
- <description><p>In April we
-<a href="http://people.skolelinux.org/pere/blog/Lets_make_a_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook.html">started
-to work</a> on a Norwegian Bokmål edition of the "open access" book on
-how to set up and administrate a Debian system. Today I am happy to
-report that the first draft is now publicly available. You can find
-it on the <a href="https://debian-handbook.info/get/">Get the Debian
-Administrator's Handbook</a> page (under Other languages). The first
-eight chapters have a first draft translation, and we are working on
-proofreading the content. If you want to help out, please start
-contributing using
-<a href="https://hosted.weblate.org/projects/debian-handbook/">the
-hosted weblate project page</a>, and get in touch using
-<a href="http://lists.alioth.debian.org/mailman/listinfo/debian-handbook-translators">the
-translators mailing list</a>. Please also check out
-<a href="https://debian-handbook.info/contribute/">the instructions for
-contributors</a>. A good way to contribute is to proofread the text
-and update weblate if you find errors.</p>
-
-<p>Our goal is still to make the Norwegian book available in
-electronic form as well as on paper.</p>
+ <title>Public electronic records service blocks access for selected web clients</title>
+ <link>http://people.skolelinux.org/pere/blog/Offentlig_elektronisk_postjournal_blokkerer_tilgang_for_utvalgte_webklienter.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Offentlig_elektronisk_postjournal_blokkerer_tilgang_for_utvalgte_webklienter.html</guid>
+ <pubDate>Thu, 20 Apr 2017 13:00:00 +0200</pubDate>
+ <description><p>I discovered today that <a href="https://www.oep.no/">the web
+site publishing public records (postjournaler) from Norwegian state
+agencies</a>, OEP, has started blocking certain types of web clients.
+I don't know how many are affected, but it applies at least to
+libwww-perl and curl. To test it yourself, run the following:</p>
+
+<blockquote><pre>
+% curl -v -s https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
+< HTTP/1.1 404 Not Found
+% curl -v -s --header 'User-Agent:Opera/12.0' https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
+< HTTP/1.1 200 OK
+%
+</pre></blockquote>
+
+<p>Here one can see that the service returns «404 Not Found» for curl
+with its default settings, while it returns «200 OK» if curl claims
+to be Opera version 12.0. OEP started the blocking on 2017-03-02.</p>
+
+<p>The blocking makes it a bit harder to fetch information from
+oep.no automatically. Could it have been put in place to hinder
+automated collection of information from OEP, like the one Pressens
+Offentlighetsutvalg (the openness committee of the Norwegian press)
+carried out to document how the ministries hinder access in
+<a href="http://presse.no/dette-mener-np/undergraver-offentlighetsloven/">the
+report «Slik hindrer departementer innsyn» published in January
+2017</a>? It seems unlikely, as it is trivial to change the
+User-Agent to something new.</p>
+
+<p>Is there any legal basis for a public body to discriminate between
+web clients the way it is done here, where access is granted or
+denied depending on what the client claims to be called? As OEP is
+owned by DIFI and operated by Basefarm, perhaps there are documents
+exchanged between the two one could request access to in order to
+understand what has happened. But
+<a href="https://www.oep.no/search/result.html?period=dateRange&fromDate=01.01.2016&toDate=01.04.2017&dateType=documentDate&caseDescription=&descType=both&caseNumber=&documentNumber=&sender=basefarm&senderType=both&documentType=all&legalAuthority=&archiveCode=&list2=196&searchType=advanced&Search=Search+in+records">DIFI's
+own records show only two documents</a> between DIFI and Basefarm in
+the last year.
+<a href="https://www.mimesbronn.no/request/blokkering_av_tilgang_til_oep_fo">Mimes
+brønn next</a>, I think.</p>
</description>
</item>
<item>
- <title>Coz can help you find bottlenecks in multi-threaded software - nice free software</title>
- <link>http://people.skolelinux.org/pere/blog/Coz_can_help_you_find_bottlenecks_in_multi_threaded_software___nice_free_software.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Coz_can_help_you_find_bottlenecks_in_multi_threaded_software___nice_free_software.html</guid>
- <pubDate>Thu, 11 Aug 2016 12:00:00 +0200</pubDate>
- <description><p>This summer, I read a great article
-"<a href="https://www.usenix.org/publications/login/summer2016/curtsinger">coz:
-This Is the Profiler You're Looking For</a>" in USENIX ;login: about
-how to profile multi-threaded programs. It presented a system for
-profiling software by running experiences in the running program,
-testing how run time performance is affected by "speeding up" parts of
-the code to various degrees compared to a normal run. It does this by
-slowing down parallel threads while the "faster up" code is running
-and measure how this affect processing time. The processing time is
-measured using probes inserted into the code, either using progress
-counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It
-can also measure unmodified code by measuring complete the program
-runtime and running the program several times instead.</p>
-
-<p>The project and presentation was so inspiring that I would like to
-get the system into Debian. I
-<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=830708">created
-a WNPP request for it</a> and contacted upstream to try to make the
-system ready for Debian by sending patches. The build process need to
-be changed a bit to avoid running 'git clone' to get dependencies, and
-to include the JavaScript web page used to visualize the collected
-profiling information included in the source package.
-But I expect that should work out fairly soon.</p>
-
-<p>The way the system work is fairly simple. To run an coz experiment
-on a binary with debug symbols available, start the program like this:
+ <title>Free software archive system Nikita now able to store documents</title>
+ <link>http://people.skolelinux.org/pere/blog/Free_software_archive_system_Nikita_now_able_to_store_documents.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Free_software_archive_system_Nikita_now_able_to_store_documents.html</guid>
+ <pubDate>Sun, 19 Mar 2017 08:00:00 +0100</pubDate>
+ <description><p>The <a href="https://github.com/hiOA-ABI/nikita-noark5-core">Nikita
+Noark 5 core project</a> is implementing the Norwegian standard for
+keeping an electronic archive of government documents.
+<a href="http://www.arkivverket.no/arkivverket/Offentlig-forvaltning/Noark/Noark-5/English-version">The
+Noark 5 standard</a> documents the requirements for data systems used
+by the archives in the Norwegian government, and the Noark 5 web
+interface specification documents a REST web service for storing,
+searching and retrieving documents and metadata in such an archive.
+I've been involved in the project since a few weeks before Christmas,
+when the Norwegian Unix User Group
+<a href="https://www.nuug.no/news/NOARK5_kjerne_som_fri_programvare_f_r_epostliste_hos_NUUG.shtml">announced
+it supported the project</a>. I believe this is an important project,
+and hope it can make it possible for the government archives in the
+future to use free software to keep the archives we citizens depend
+on. But as I do not keep such an archive myself, my first use case is
+to store and analyse public mail journal metadata published by the
+government. I find it useful to have a clear use case in mind when
+developing, to make sure the system scratches one of my itches.</p>
+
+<p>If you would like to help make sure there is a free software
+alternative for the archives, please join our IRC channel
+(<a href="irc://irc.freenode.net/%23nikita">#nikita on
+irc.freenode.net</a>) and
+<a href="https://lists.nuug.no/mailman/listinfo/nikita-noark">the
+project mailing list</a>.</p>
+
+<p>When I got involved, the web service could store metadata about
+documents. But a few weeks ago, a new milestone was reached when it
+became possible to store full text documents too. Yesterday, I
+completed an implementation of a command line tool
+<tt>archive-pdf</tt> to upload a PDF file to the archive using this
+API. The tool is very simple at the moment. It finds existing
+<a href="https://en.wikipedia.org/wiki/Fonds">fonds</a>, series and
+files, asking the user to select which one to use if more than one
+exists. Once a file is identified, the PDF is associated with the
+file and uploaded, using the title extracted from the PDF itself. The
+process is fairly similar to visiting the archive, opening a cabinet,
+locating a file and storing a piece of paper in it. Here is a test
+run directly after populating the database with test data using our
+API tester:</p>
<p><blockquote><pre>
-coz run --- program-to-run
+~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
+using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
+using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446
+
+ 0 - Title of the test case file created 2017-03-18T23:49:32.103446
+ 1 - Title of the test file created 2017-03-18T23:49:32.103446
+Select which mappe you want (or search term): 0
+Uploading mangelmelding/mangler.pdf
+ PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
+ File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
+~/src//noark5-tester$
</pre></blockquote></p>
-<p>This will create a text file profile.coz with the instrumentation
-information. To show what part of the code affect the performance
-most, use a web browser and either point it to
-<a href="http://plasma-umass.github.io/coz/">http://plasma-umass.github.io/coz/</a>
-or use the copy from git (in the gh-pages branch). Check out this web
-site to have a look at several example profiling runs and get an idea what the end result from the profile runs look like. To make the
-profiling more useful you include &lt;coz.h&gt; and insert the
-COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the
-code, rebuild and run the profiler. This allow coz to do more
-targeted experiments.</p>
-
-<p>A video published by ACM
-<a href="https://www.youtube.com/watch?v=jE0V-p1odPg">presenting the
-Coz profiler</a> is available from Youtube. There is also a paper
-from the 25th Symposium on Operating Systems Principles available
-titled
-<a href="https://www.usenix.org/conference/atc16/technical-sessions/presentation/curtsinger">Coz:
-finding code that counts with causal profiling</a>.</p>
-
-<p><a href="https://github.com/plasma-umass/coz">The source code</a>
-for Coz is available from github. It will only build with clang
-because it uses a
-<a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55606">C++
-feature missing in GCC</a>, but I've submitted
-<a href="https://github.com/plasma-umass/coz/pull/67">a patch to solve
-it</a> and hope it will be included in the upstream source soon.</p>
-
-<p>Please get in touch if you, like me, would like to see this piece
-of software in Debian. I would very much like some help with the
-packaging effort, as I lack the in depth knowledge on how to package
-C++ libraries.</p>
+<p>You can see here how the fonds (arkiv) and series (arkivdel) only
+had one option each, while the user needs to choose which file (mappe)
+to use among the two created by the API tester. The
+<tt>archive-pdf</tt> tool can be found in the git repository for the
+API tester.</p>
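<p>The selection step the tool performs can be sketched in a few lines
of Python. This is a simplified illustration, not the actual
archive-pdf code, which also accepts a search term and lives in the
noark5-tester repository:</p>

```python
# Pick among candidate titles the way archive-pdf does: use the only
# candidate automatically, otherwise list them and ask the user.
def pick(kind, titles, ask=input):
    """Return the index of the chosen title, or None if none exist."""
    if not titles:
        return None
    if len(titles) == 1:
        print("using %s: %s" % (kind, titles[0]))
        return 0
    for index, title in enumerate(titles):
        print(" %d - %s" % (index, title))
    return int(ask("Select which %s you want: " % kind))
```

<p>With the two test files created by the API tester,
<tt>pick("mappe", titles)</tt> prints the numbered list and waits for
a choice, matching the transcript above.</p>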
+
+<p>In the project, I have mostly been working on
+<a href="https://github.com/petterreinholdtsen/noark5-tester">the API
+tester</a> so far, while getting to know the code base. The API
+tester currently uses
+<a href="https://en.wikipedia.org/wiki/HATEOAS">the HATEOAS links</a>
+to traverse the entire exposed service API and verify that the exposed
+operations and objects match the specification, as well as trying to
+create objects holding metadata and to upload a simple XML file to
+store. The tester has proved very useful for finding flaws in our
+implementation, as well as flaws in the reference site and the
+specification.</p>
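<p>The core idea of the HATEOAS traversal can be sketched like this in
Python. The JSON layout assumed here, a "_links" list of rel/href
pairs, is a simplification for illustration; the real Noark 5
interface and the API tester handle considerably more detail:</p>

```python
# Follow every link a JSON service exposes, starting from the root,
# and collect the set of reachable URLs. fetch(url) must return the
# parsed JSON document for the given URL.
import json
from urllib.request import urlopen

def extract_links(document):
    """Map link relation -> href for one JSON document."""
    return {link["rel"]: link["href"] for link in document.get("_links", [])}

def crawl(url, fetch, seen=None):
    """Depth-first walk of all hrefs reachable from url."""
    seen = set() if seen is None else seen
    if url in seen:
        return seen
    seen.add(url)
    for href in extract_links(fetch(url)).values():
        crawl(href, fetch, seen)
    return seen

def http_fetch(url):
    """Fetch and parse JSON over HTTP; used as the fetch argument."""
    with urlopen(url) as response:
        return json.load(response)
```

<p>Calling <tt>crawl(root_url, http_fetch)</tt> visits every exposed
resource once; the <tt>seen</tt> set doubles as cycle protection,
since HATEOAS services routinely link back to themselves.</p>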
+
+<p>The test document I uploaded is a summary of all the specification
+defects we have collected so far while implementing the web service.
+There are several unclear and conflicting parts of the specification,
+and we have
+<a href="https://github.com/petterreinholdtsen/noark5-tester/tree/master/mangelmelding">started
+writing down</a> the questions we get from implementing it. We use a
+format inspired by how <a href="http://www.opengroup.org/austin/">The
+Austin Group</a> collects defect reports for the POSIX standard with
+<a href="http://www.opengroup.org/austin/mantis.html">their
+instructions for the MANTIS defect tracker system</a>, in the absence
+of an official way to structure defect reports for Noark 5 (our first
+submitted defect report was a
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/mangelmelding/sendt/2017-03-15-mangel-prosess.md">request
+for a procedure for submitting defect reports</a> :).</p>
+
+<p>The Nikita project is implemented using Java and Spring, and is
+fairly easy to get up and running using Docker containers for those
+who want to test the current code base. The API tester is
+implemented in Python.</p>
</description>
</item>
<item>
- <title>Sales number for the Free Culture translation, first half of 2016</title>
- <link>http://people.skolelinux.org/pere/blog/Sales_number_for_the_Free_Culture_translation__first_half_of_2016.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Sales_number_for_the_Free_Culture_translation__first_half_of_2016.html</guid>
- <pubDate>Fri, 5 Aug 2016 22:45:00 +0200</pubDate>
- <description><p>As my regular readers probably remember, the last year I published
-a French and Norwegian translation of the classic
-<a href="http://www.free-culture.cc/">Free Culture book</a> by the
-founder of the Creative Commons movement, Lawrence Lessig. A bit less
-known is the fact that due to the way I created the translations,
-using docbook and po4a, I also recreated the English original. And
-because I already had created a new the PDF edition, I published it
-too. The revenue from the books are sent to the Creative Commons
-Corporation. In other words, I do not earn any money from this
-project, I just earn the warm fuzzy feeling that the text is available
-for a wider audience and more people can learn why the Creative
-Commons is needed.</p>
-
-<p>Today, just for fun, I had a look at the sales number over at
-Lulu.com, which take care of payment, printing and shipping. Much to
-my surprise, the English edition is selling better than both the
-French and Norwegian edition, despite the fact that it has been
-available in English since it was first published. In total, 24 paper
-books was sold for USD $19.99 between 2016-01-01 and 2016-07-31:</p>
-
-<table border="0">
-<tr><th>Title / language</th><th>Quantity</th></tr>
-<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/culture-libre/paperback/product-22645082.html">Culture Libre / French</a></td><td align="right">3</td></tr>
-<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Fri kultur / Norwegian</a></td><td align="right">7</td></tr>
-<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/free-culture/paperback/product-22440520.html">Free Culture / English</a></td><td align="right">14</td></tr>
-</table>
-
-<p>The books are available both from Lulu.com and from large book
-stores like Amazon and Barnes&Noble. Most revenue, around $10 per
-book, is sent to the Creative Commons project when the book is sold
-directly by Lulu.com. The other channels give less revenue. The
-summary from Lulu tell me 10 books was sold via the Amazon channel, 10
-via Ingram (what is this?) and 4 directly by Lulu. And Lulu.com tells
-me that the revenue sent so far this year is USD $101.42. No idea
-what kind of sales numbers to expect, so I do not know if that is a
-good amount of sales for a 10 year old book or not. But it make me
-happy that the buyers find the book, and I hope they enjoy reading it
-as much as I did.</p>
-
-<p>The ebook edition is available for free from
-<a href="https://github.com/petterreinholdtsen/free-culture-lessig">Github</a>.</p>
-
-<p>If you would like to translate and publish the book in your native
-language, I would be happy to help make it happen. Please get in
-touch.</p>
+ <title>Detecting NFS hangs on Linux without hanging yourself...</title>
+ <link>http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</guid>
+ <pubDate>Thu, 9 Mar 2017 15:20:00 +0100</pubDate>
+ <description><p>Over the years, administrating thousands of
+NFS-mounting Linux computers at a time, I often needed a way to detect
+if a machine was experiencing an NFS hang. If you try to use
+<tt>df</tt> or look at a file or directory affected by the hang, the
+process (and possibly the shell) will hang too. So you want to be
+able to detect this without risking the detection process getting
+stuck too. It has not been obvious how to do this. When the hang has
+lasted a while, it is possible to find messages like these in
+dmesg:</p>
+
+<p><blockquote>
+nfs: server nfsserver not responding, still trying
+<br>nfs: server nfsserver OK
+</blockquote></p>
+
+<p>It is hard to know if the hang is still going on, and it is hard to
+be sure looking in dmesg is going to work. If there are lots of other
+messages in dmesg, the lines might have rotated out of sight before
+they are noticed.</p>
+
+<p>While reading through the NFS client implementation in the Linux
+kernel code, I came across some statistics that seem to give a way to
+detect it. The om_timeouts sunrpc value in the kernel will increase
+every time the above log entry is inserted into dmesg. And after
+digging a bit further, I discovered that this value shows up in
+/proc/self/mountstats on Linux.</p>
+
+<p>The mountstats content seems to be shared between files using the
+same file system context, so it is enough to check one of the
+mountstats files to get the state of the mount points for the machine.
+I assume this will not show lazily umounted NFS points, nor NFS mount
+points in a different process context (i.e. with a different
+filesystem view), but that does not worry me.</p>
+
+<p>The content for an NFS mount point looks similar to this:</p>
+
+<p><blockquote><pre>
+[...]
+device /dev/mapper/Debian-var mounted on /var with fstype ext3
+device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
+ opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
+ age: 7863311
+ caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
+ sec: flavor=1,pseudoflavor=1
+ events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
+ bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
+ RPC iostats version: 1.0 p/v: 100003/3 (nfs)
+ xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
+ per-op statistics
+ NULL: 0 0 0 0 0 0 0 0
+ GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
+ SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
+ LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
+ ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
+ READLINK: 125 125 0 20472 18620 0 1112 1118
+ READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
+ WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
+ CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
+ MKDIR: 3680 3680 0 773980 993920 26 23990 24245
+ SYMLINK: 903 903 0 233428 245488 6 5865 5917
+ MKNOD: 80 80 0 20148 21760 0 299 304
+ REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
+ RMDIR: 3367 3367 0 645112 484848 22 5782 6002
+ RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
+ LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
+ READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
+ READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
+ FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
+ FSINFO: 2 2 0 232 328 0 1 1
+ PATHCONF: 1 1 0 116 140 0 0 0
+ COMMIT: 0 0 0 0 0 0 0 0
+
+device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
+[...]
+</pre></blockquote></p>
+
+<p>The key number to look at is the third number in the per-op list.
+It is the number of NFS timeouts experienced per file system
+operation, here 22 write timeouts and 5 access timeouts. If these
+numbers are increasing, I believe the machine is experiencing an NFS
+hang. Unfortunately the timeout value does not start to increase
+right away. The NFS operations need to time out first, and this can
+take a while. The exact timeout value depends on the setup. For
+example the defaults for TCP and UDP mount points are quite different,
+and the timeout value is affected by the soft, hard, timeo and retrans
+NFS mount options.</p>
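<p>Extracting the third column from the per-op list is easy to
automate. Here is a small Python sketch that parses the mountstats
format shown above and reports the timeout count per operation;
comparing two such samples taken some time apart should reveal whether
the counters are increasing:</p>

```python
# Parse the per-op statistics section of /proc/self/mountstats and
# return the NFS timeout count (the third number) per operation.
# Note: with several NFS mounts, the counters from the last mount win;
# a real monitor would also key on the "device ..." lines.
import os

def per_op_timeouts(mountstats_text):
    timeouts = {}
    in_per_op = False
    for line in mountstats_text.splitlines():
        line = line.strip()
        if line == "per-op statistics":
            in_per_op = True
            continue
        if not in_per_op:
            continue
        if ":" not in line:
            in_per_op = False  # a blank line ends the per-op section
            continue
        op, _, numbers = line.partition(":")
        fields = numbers.split()
        if len(fields) >= 3 and fields[2].isdigit():
            timeouts[op] = int(fields[2])
    return timeouts

if __name__ == "__main__" and os.path.exists("/proc/self/mountstats"):
    with open("/proc/self/mountstats") as statsfile:
        for op, count in sorted(per_op_timeouts(statsfile.read()).items()):
            if count:
                print(op, count)
```

<p>Run on the sample above, this reports WRITE 22 and ACCESS 5.</p>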
+
+<p>The only way I have found to get the timeout count on Debian and
+RedHat Enterprise Linux is to peek in /proc/. But according to
+<a href="http://docs.oracle.com/cd/E19253-01/816-4555/netmonitor-12/index.html">Solaris
+10 System Administration Guide: Network Services</a>, the 'nfsstat -c'
+command can be used to get these timeout values. But this does not
+work on Linux, as far as I can tell. I
+<a href="http://bugs.debian.org/857043">asked Debian about this</a>,
+but have not seen any replies yet.</p>
+
+<p>Is there a better way to figure out if a Linux NFS client is
+experiencing NFS hangs? Is there a way to detect which processes are
+affected? Is there a way to get the NFS mount going quickly once the
+network problem causing the NFS hang has been cleared? I would very
+much welcome some clues, as we regularly run into NFS hangs.</p>
</description>
</item>
<item>
- <title>Vitenskapen tar som vanlig feil igjen - relativt feil</title>
- <link>http://people.skolelinux.org/pere/blog/Vitenskapen_tar_som_vanlig_feil_igjen___relativt_feil.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Vitenskapen_tar_som_vanlig_feil_igjen___relativt_feil.html</guid>
- <pubDate>Mon, 1 Aug 2016 16:00:00 +0200</pubDate>
- <description><p>For mange år siden leste jeg en klassisk tekst som gjorde såpass
-inntrykk på meg at jeg husker den fortsatt, flere år senere, og bruker
-argumentene fra den stadig vekk. Teksten var «The Relativity of
-Wrong» som Isaac Asimov publiserte i Skeptical Inquirer i 1989. Den
-gir litt perspektiv rundt formidlingen av vitenskapelige resultater.
-Jeg har hatt lyst til å kunne dele den også med folk som ikke
-behersker engelsk så godt, som barn og noen av mine eldre slektninger,
-og har savnet å ha den tilgjengelig på norsk. For to uker siden tok
-jeg meg sammen og kontaktet Asbjørn Dyrendal i foreningen Skepsis om
-de var interessert i å publisere en norsk utgave på bloggen sin, og da
-han var positiv tok jeg kontakt med Skeptical Inquirer og spurte om
-det var greit for dem. I løpet av noen dager fikk vi tilbakemelding
-fra Barry Karr hos The Skeptical Inquirer som hadde sjekket og fått OK
-fra Robyn Asimov som representerte arvingene i Asmiov-familien og gikk
-igang med oversettingen.</p>
-
-<p>Resultatet, <a href="http://www.skepsis.no/?p=1617">«Relativt
-feil»</a>, ble publisert på skepsis-bloggen for noen minutter siden.
-Jeg anbefaler deg på det varmeste å lese denne teksten og dele den med
-dine venner.</p>
-
-<p>For å håndtere oversettelsen og sikre at original og oversettelse
-var i sync brukte vi git, po4a, GNU make og Transifex. Det hele
-fungerte utmerket og gjorde det enkelt å dele tekstene og jobbe sammen
-om finpuss på formuleringene. Hadde hosted.weblate.org latt meg
-opprette nye prosjekter selv i stedet for å måtte kontakte
-administratoren der, så hadde jeg brukt weblate i stedet.</p>
+ <title>How does it feel to be wiretapped, when you should be doing the wiretapping...</title>
+ <link>http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</guid>
+ <pubDate>Wed, 8 Mar 2017 11:50:00 +0100</pubDate>
+ <description><p>So the new president in the United States of America
+claims to be surprised to discover that he was wiretapped during the
+election, before he was elected president. He even claims this must
+be illegal. Well, doh, if there is one thing the documents from
+Snowden confirmed, it is that the entire population of the USA is
+wiretapped, one way or another. Of course the presidential candidates
+were wiretapped, alongside the senators, judges and the rest of the
+people in the USA.</p>
+
+<p>Next, the Federal Bureau of Investigation asked the Department of
+Justice to publicly reject the claims that Donald Trump was wiretapped
+illegally. I fail to see the relevance, given that I am sure the
+surveillance industry in the USA believes it has all the legal backing
+it needs to conduct mass surveillance on the entire world.</p>
+
+<p>There is even the director of the FBI stating that he never saw an
+order requesting wiretapping of Donald Trump. That is not very
+surprising, given how the FISA court works, with all its activity
+being secret. Perhaps he only heard about it?</p>
+
+<p>What I find most sad in this story is how Norwegian journalists
+present it. In a news report on the radio the other day from the
+Norwegian Broadcasting Corporation (NRK), I heard the journalist claim
+that 'the FBI denies any wiretapping', while the reality is that 'the
+FBI denies any illegal wiretapping'. There is a fundamental and
+important difference, and it makes me sad that the journalists are
+unable to grasp it.</p>
+
+<p><strong>Update 2017-03-13:</strong> It looks like
+<a href="https://theintercept.com/2017/03/13/rand-paul-is-right-nsa-routinely-monitors-americans-communications-without-warrants/">The
+Intercept reports that US Senator Rand Paul confirms what I state
+above</a>.</p>
</description>
</item>
<item>
- <title>Techno TV broadcasting live across Norway and the Internet (#debconf16, #nuug) on @frikanalen</title>
- <link>http://people.skolelinux.org/pere/blog/Techno_TV_broadcasting_live_across_Norway_and_the_Internet___debconf16___nuug__on__frikanalen.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Techno_TV_broadcasting_live_across_Norway_and_the_Internet___debconf16___nuug__on__frikanalen.html</guid>
- <pubDate>Mon, 1 Aug 2016 10:30:00 +0200</pubDate>
- <description><p>Did you know there is a TV channel broadcasting talks from DebConf
-16 across an entire country? Or that there is a TV channel
-broadcasting talks by or about
-<a href="http://beta.frikanalen.no/video/625529/">Linus Torvalds</a>,
-<a href="http://beta.frikanalen.no/video/625599/">Tor</a>,
-<a href="http://beta.frikanalen.no/video/624019/">OpenID</A>,
-<a href="http://beta.frikanalen.no/video/625624/">Common Lisp</a>,
-<a href="http://beta.frikanalen.no/video/625446/">Civic Tech</a>,
-<a href="http://beta.frikanalen.no/video/625090/">EFF founder John Barlow</a>,
-<a href="http://beta.frikanalen.no/video/625432/">how to make 3D
-printer electronics</a> and many more fascinating topics? It works
-using only free software (all of it
-<a href="http://github.com/Frikanalen">available from Github</a>), and
-is administrated using a web browser and a web API.</p>
-
-<p>The TV channel is the Norwegian open channel
-<a href="http://www.frikanalen.no/">Frikanalen</a>, and I am involved
-via <a href="https://www.nuug.no/">the NUUG member association</a> in
-running and developing the software for the channel. The channel is
-organised as a member organisation where its members can upload and
-broadcast what they want (think of it as Youtube for national
-broadcasting television). Individuals can broadcast too. The time
-slots are handled on a first come, first serve basis. Because the
-channel have almost no viewers and very few active members, we can
-experiment with TV technology without too much flack when we make
-mistakes. And thanks to the few active members, most of the slots on
-the schedule are free. I see this as an opportunity to spread
-knowledge about technology and free software, and have a script I run
-regularly to fill up all the open slots the next few days with
-technology related video. The end result is a channel I like to
-describe as Techno TV - filled with interesting talks and
-presentations.</p>
-
-<p>It is available on channel 50 on the Norwegian national digital TV
-network (RiksTV). It is also available as a multicast stream on
-Uninett. And finally, it is available as
-<a href="http://beta.frikanalen.no/">a WebM unicast stream</a> from
-Frikanalen and NUUG. Check it out. :)</p>
+ <title>Norwegian Bokmål translation of The Debian Administrator's Handbook complete, proofreading in progress</title>
+ <link>http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_translation_of_The_Debian_Administrator_s_Handbook_complete__proofreading_in_progress.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_translation_of_The_Debian_Administrator_s_Handbook_complete__proofreading_in_progress.html</guid>
+ <pubDate>Fri, 3 Mar 2017 14:50:00 +0100</pubDate>
+ <description><p>For almost a year now, we have been working on making a Norwegian
+Bokmål edition of <a href="https://debian-handbook.info/">The Debian
+Administrator's Handbook</a>. Now, thanks to the tireless effort of
+Ole-Erik, Ingrid and Andreas, the initial translation is complete, and
+we are working on the proof reading to ensure consistent language and
+use of correct computer science terms. The plan is to make the book
+available on paper, as well as in electronic form. For that to
+happen, the proof reading must be completed and all the figures need
+to be translated. If you want to help out, get in touch.</p>
+
+<p><a href="http://people.skolelinux.org/pere/debian-handbook/debian-handbook-nb-NO.pdf">A
+fresh PDF edition</a> of the book in A4 format (the final book will
+have smaller pages) is created every morning and is available for
+proofreading. If you find any errors, please
+<a href="https://hosted.weblate.org/projects/debian-handbook/">visit
+Weblate and correct the error</a>. The
+<a href="http://l.github.io/debian-handbook/stat/nb-NO/index.html">state
+of the translation including figures</a> is a useful source for those
+providing Norwegian Bokmål screen shots and figures.</p>
</description>
</item>
<item>
- <title>Unlocking HTC Desire HD on Linux using unruu and fastboot</title>
- <link>http://people.skolelinux.org/pere/blog/Unlocking_HTC_Desire_HD_on_Linux_using_unruu_and_fastboot.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Unlocking_HTC_Desire_HD_on_Linux_using_unruu_and_fastboot.html</guid>
- <pubDate>Thu, 7 Jul 2016 11:30:00 +0200</pubDate>
- <description><p>Yesterday, I tried to unlock a HTC Desire HD phone, and it proved
-to be a slight challenge. Here is the recipe if I ever need to do it
-again. It all started by me wanting to try the recipe to set up
-<a href="https://blog.torproject.org/blog/mission-impossible-hardening-android-security-and-privacy">an
-hardened Android installation</a> from the Tor project blog on a
-device I had access to. It is a old mobile phone with a broken
-microphone The initial idea had been to just
-<a href="http://wiki.cyanogenmod.org/w/Install_CM_for_ace">install
-CyanogenMod on it</a>, but did not quite find time to start on it
-until a few days ago.</p>
-
-<p>The unlock process is supposed to be simple: (1) Boot into the boot
-loader (press volume down and power at the same time), (2) select
-'fastboot' before (3) connecting the device via USB to a Linux
-machine, (4) request the device identifier token by running 'fastboot
-oem get_identifier_token', (5) request the device unlocking key using
-the <a href="http://www.htcdev.com/bootloader/">HTC developer web
-site</a> and unlock the phone using the key file emailed to you.</p>
-
-<p>Unfortunately, this only work fi you have hboot version 2.00.0029
-or newer, and the device I was working on had 2.00.0027. This
-apparently can be easily fixed by downloading a Windows program and
-running it on your Windows machine, if you accept the terms Microsoft
-require you to accept to use Windows - which I do not. So I had to
-come up with a different approach. I got a lot of help from AndyCap
-on #nuug, and would not have been able to get this working without
-him.</p>
-
-<p>First I needed to extract the hboot firmware from
-<a href="http://www.htcdev.com/ruu/PD9810000_Ace_Sense30_S_hboot_2.00.0029.exe">the
-windows binary for HTC Desire HD</a> downloaded as 'the RUU' from HTC.
-For this there is is <a href="https://github.com/kmdm/unruu/">a github
-project named unruu</a> using libunshield. The unshield tool did not
-recognise the file format, but unruu worked and extracted rom.zip,
-containing the new hboot firmware and a text file describing which
-devices it would work for.</p>
-
-<p>Next, I needed to get the new firmware into the device. For this I
-followed some instructions
-<a href="http://www.htc1guru.com/2013/09/new-ruu-zips-posted/">available
-from HTC1Guru.com</a>, and ran these commands as root on a Linux
-machine with Debian testing:</p>
-
-<p><pre>
-adb reboot-bootloader
-fastboot oem rebootRUU
-fastboot flash zip rom.zip
-fastboot flash zip rom.zip
-fastboot reboot
-</pre></p>
-
-<p>The flash command apparently need to be done twice to take effect,
-as the first is just preparations and the second one do the flashing.
-The adb command is just to get to the boot loader menu, so turning the
-device on while holding volume down and the power button should work
-too.</p>
-
-<p>With the new hboot version in place I could start following the
-instructions on the HTC developer web site. I got the device token
-like this:</p>
-
-<p><pre>
-fastboot oem get_identifier_token 2>&1 | sed 's/(bootloader) //'
-</pre>
-
-<p>And once I got the unlock code via email, I could use it like
-this:</p>
-
-<p><pre>
-fastboot flash unlocktoken Unlock_code.bin
-</pre></p>
-
-<p>And with that final step in place, the phone was unlocked and I
-could start stuffing the software of my own choosing into the device.
-So far I only inserted a replacement recovery image to wipe the phone
-before I start. We will see what happen next. Perhaps I should
-install <a href="https://www.debian.org/">Debian</a> on it. :)</p>
+ <title>Unlimited randomness with the ChaosKey?</title>
+ <link>http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</guid>
+ <pubDate>Wed, 1 Mar 2017 20:50:00 +0100</pubDate>
+ <description><p>A few days ago I ordered a small batch of
+<a href="http://altusmetrum.org/ChaosKey/">the ChaosKey</a>, a small
+USB dongle for generating entropy created by Bdale Garbee and Keith
+Packard. Yesterday it arrived, and I am very happy to report that it
+works great! According to its designers, you need Linux kernel
+version 4.1 or later for it to work out of the box. I tested it on a
+Debian Stretch machine (kernel version 4.9), and there it worked just
+fine, increasing the available entropy very quickly. I wrote a small
+one-liner to test it. It first prints the current entropy level,
+drains /dev/random, and then prints the entropy level once per second
+for five seconds. Here is the situation without the ChaosKey
+inserted:</p>
+
+<blockquote><pre>
+% cat /proc/sys/kernel/random/entropy_avail; \
+ dd bs=1M if=/dev/random of=/dev/null count=1; \
+ for n in $(seq 1 5); do \
+ cat /proc/sys/kernel/random/entropy_avail; \
+ sleep 1; \
+ done
+300
+0+1 records in
+0+1 records out
+28 bytes copied, 0.000264565 s, 106 kB/s
+4
+8
+12
+17
+21
+%
+</pre></blockquote>
+
+<p>The entropy level increases by 3-4 every second. In such a case,
+any application requiring random bits (like an HTTPS enabled web
+server) will block and wait for more entropy. And here is the
+situation with the ChaosKey inserted:</p>
+
+<blockquote><pre>
+% cat /proc/sys/kernel/random/entropy_avail; \
+ dd bs=1M if=/dev/random of=/dev/null count=1; \
+ for n in $(seq 1 5); do \
+ cat /proc/sys/kernel/random/entropy_avail; \
+ sleep 1; \
+ done
+1079
+0+1 records in
+0+1 records out
+104 bytes copied, 0.000487647 s, 213 kB/s
+433
+1028
+1031
+1035
+1038
+%
+</pre></blockquote>
+
+<p>Quite the difference. :) I bought a few more than I need, in case
+someone wants to buy one here in Norway. :)</p>
+
+<p>Update: The dongle was presented at DebConf last year. You might
+find <a href="https://debconf16.debconf.org/talks/94/">the talk
+recording illuminating</a>. It explains exactly what the source of
+randomness is, in case you are unable to spot it from the schematic
+drawing available from the ChaosKey web site linked at the start of
+this blog post.</p>
</description>
</item>
<item>
- <title>How to use the Signal app if you only have a land line (ie no mobile phone)</title>
- <link>http://people.skolelinux.org/pere/blog/How_to_use_the_Signal_app_if_you_only_have_a_land_line__ie_no_mobile_phone_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_to_use_the_Signal_app_if_you_only_have_a_land_line__ie_no_mobile_phone_.html</guid>
- <pubDate>Sun, 3 Jul 2016 14:20:00 +0200</pubDate>
- <description><p>For a while now, I have wanted to test
-<a href="https://whispersystems.org/">the Signal app</a>, as it is
-said to provide end to end encrypted communication and several of my
-friends and family are already using it. As I by choice do not own a
-mobile phone, this proved to be harder than expected. And I wanted to
-have the source of the client and know that it was the code used on my
-machine. But yesterday I managed to get it working. I used the
-Github source, compared it to the source in
-<a href="https://chrome.google.com/webstore/detail/signal-private-messenger/bikioccmkafdpakkkcpdbppfkghcmihk?hl=en-US">the
-Signal Chrome app</a> available from the Chrome web store, applied
-patches to use the production Signal servers, started the app and
-asked for the hidden "register without a smart phone" form. Here is
-the recipe for how I did it.</p>
-
-<p>First, I fetched the Signal desktop source from Github, using:</p>
-
-<pre>
-git clone https://github.com/WhisperSystems/Signal-Desktop.git
-</pre>
-
-<p>Next, I patched the source to use the production servers, to be
-able to talk to other Signal users:</p>
-
-<pre>
-cat &lt;&lt;EOF | patch -p0
-diff -ur ./js/background.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js
---- ./js/background.js 2016-06-29 13:43:15.630344628 +0200
-+++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js 2016-06-29 14:06:29.530300934 +0200
-@@ -47,8 +47,8 @@
- });
- });
-
-- var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
-- var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
-+ var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org:4433';
-+ var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
- var messageReceiver;
- window.getSocketStatus = function() {
- if (messageReceiver) {
-diff -ur ./js/expire.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js
---- ./js/expire.js 2016-06-29 13:43:15.630344628 +0200
-+++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js 2016-06-29 14:06:29.530300934 +0200
-@@ -1,6 +1,6 @@
- ;(function() {
- 'use strict';
-- var BUILD_EXPIRATION = 0;
-+ var BUILD_EXPIRATION = 1474492690000;
-
- window.extension = window.extension || {};
-
-EOF
-</pre>
-
-<p>The first part changes the servers, and the second updates an
-expiration timestamp. This timestamp needs to be updated regularly.
-It is set 90 days in the future by the build process (Gruntfile.js).
-The value is seconds since 1970 times 1000, ie milliseconds since the
-epoch, as far as I can tell.</p>
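To illustrate the timestamp format, here is a small shell sketch of my own (not taken from the Signal build system) that computes a value 90 days in the future the way I believe Gruntfile.js does:

```shell
# Sketch (my own, not from the Signal source): compute a
# BUILD_EXPIRATION value 90 days ahead, as seconds since 1970
# multiplied by 1000 (ie milliseconds since the epoch).
now=$(date +%s)                              # current time in seconds
expire=$(( (now + 90 * 24 * 60 * 60) * 1000 ))
echo $expire
```

The result can be pasted into expire.js in place of the value shown in the patch above.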
-
-<p>Based on a tip and good help from the #nuug IRC channel, I wrote a
-script to launch Signal in Chromium.</p>
-
-<pre>
-#!/bin/sh
-cd $(dirname $0)
-mkdir -p userdata
-exec chromium \
- --proxy-server="socks://localhost:9050" \
- --user-data-dir=`pwd`/userdata --load-and-launch-app=`pwd`
-</pre>
-
-<p>The script starts the app and configures Chromium to use the Tor
-SOCKS5 proxy, to make sure those controlling the Signal servers (today
-Amazon and Whisper Systems), as well as those listening on the lines,
-will have a harder time locating my laptop based on the source IP
-address of the Signal connections.</p>
-
-<p>When the script starts, one needs to follow the instructions under
-"Standalone Registration" in the CONTRIBUTING.md file in the git
-repository. I right clicked on the Signal window to bring up the
-Chromium debugging tool, visited the 'Console' tab and wrote
-'extension.install("standalone")' on the console prompt to get the
-registration form. Then I entered my land line phone number and
-pressed 'Call'. 5 seconds later the phone rang and a robot voice
-repeated the verification code three times. After entering the number
-into the verification code field in the form, I could start using
-Signal from my laptop.</p>
-
-<p>As far as I can tell, the Signal app will leak who is talking to
-whom, and thus who knows whom, to those controlling the central
-server, but such leakage is hard to avoid with a centrally controlled
-server setup. It is something to keep in mind when using Signal - the
-content of your chats is harder to intercept, but the metadata
-exposing your contact network is available to people you do not know.
-So better than many options, but not great. And sadly the usage is
-connected to my land line, thus allowing those controlling the server
-to associate it with my home and person. I would prefer it if only
-those I knew could tell who I was on Signal. There are options
-avoiding such information leakage, but most of my friends are not
-using them, so I am stuck with Signal for now.</p>
+ <title>Detect OOXML files with undefined behaviour?</title>
+ <link>http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</guid>
+ <pubDate>Tue, 21 Feb 2017 00:20:00 +0100</pubDate>
+ <description><p>I just noticed that
+<a href="http://www.arkivrad.no/aktuelt/riksarkivarens-forskrift-pa-horing">the
+new Norwegian proposal for government archiving rules</a> lists
+<a href="http://www.ecma-international.org/publications/standards/Ecma-376.htm">ECMA-376</a>
+/ ISO/IEC 29500 (aka OOXML) as a valid format for long term storage.
+Luckily such files will only be accepted based on pre-approval from
+the National Archive. Allowing OOXML files to be used for long term
+storage might seem like a good idea, as long as we forget that there
+are plenty of ways for a "valid" OOXML document to have content with
+no defined interpretation in the standard, which leads to a question
+and an idea.</p>
+
+<p>Is there any tool to detect if an OOXML document depends on such
+undefined behaviour? It would be useful for the National Archive (and
+anyone else interested in verifying that a document is well defined)
+to have such a tool available when considering whether to approve the
+use of OOXML. I'm aware of the
+<a href="https://github.com/arlm/officeotron/">officeotron OOXML
+validator</a>, but do not know how complete it is, nor whether it
+will report use of undefined behaviour. Are there other similar tools
+available? Please send me an email if you know of any such tool.</p>
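Such a tool does not exist yet, as far as I know, but as a starting point one can at least check the container format from the command line. Here is a minimal sketch of my own (not a real validator): it verifies that a file begins with the ZIP magic bytes and mentions the mandatory [Content_Types].xml part, which every OOXML document must carry. It says nothing about reliance on undefined behaviour in the standard, and $1 is the file to check.

```shell
# Sketch only: first-pass sanity check of an OOXML container.
# OOXML documents are ZIP archives (magic bytes "PK") that must
# contain a [Content_Types].xml part; the part name is stored
# uncompressed in the archive directory, so grep can find it.
f="$1"
if [ "$(head -c 2 "$f")" = "PK" ] &&
   grep -q '\[Content_Types\].xml' "$f"; then
    echo "looks like an OOXML container"
else
    echo "not an OOXML container"
fi
```

A file passing this check can still be full of extensions and constructs the standard leaves undefined, so it is only a prerequisite for the kind of validation asked for above.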
</description>
</item>