<link>http://people.skolelinux.org/pere/blog/</link>
<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
+ <item>
+ <title>Idea for storing trusted timestamps in a Noark 5 archive</title>
+ <link>http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html</guid>
+ <pubDate>Wed, 7 Jun 2017 21:40:00 +0200</pubDate>
+ <description><p><em>This is a copy of
+<a href="https://lists.nuug.no/pipermail/nikita-noark/2017-June/000297.html">an
+email I posted to the nikita-noark mailing list</a>. Please follow up
+there if you would like to discuss this topic. The background is that
+we are making a free software archive system based on the Norwegian
+<a href="https://www.arkivverket.no/forvaltning-og-utvikling/regelverk-og-standarder/noark-standarden">Noark
+5 standard</a> for government archives.</em></p>
+
+<p>I've been wondering a bit lately how trusted timestamps could be
+stored in Noark 5.
+<a href="https://en.wikipedia.org/wiki/Trusted_timestamping">Trusted
+timestamps</a> can be used to verify that some information
+(document/file/checksum/metadata) has not been changed since a
+specific time in the past. This is useful to verify the integrity of
+the documents in the archive.</p>
+
+<p>Then it occurred to me: perhaps the trusted timestamps could be
+stored as dokument variants (i.e. a dokumentobjekt referred to from a
+dokumentbeskrivelse) with the filename set to the hash it is
+stamping?</p>
+
+<p>Given a "dokumentbeskrivelse" with an associated "dokumentobjekt",
+a new dokumentobjekt is associated with "dokumentbeskrivelse" with the
+same attributes as the stamped dokumentobjekt except these
+attributes:</p>
+
+<ul>
+
+<li>format -> "RFC3161"</li>
+<li>mimeType -> "application/timestamp-reply"</li>
+<li>formatDetaljer -> "&lt;source URL for timestamp service&gt;"</li>
+<li>filenavn -> "&lt;sjekksum&gt;.tsr"</li>
+
+</ul>
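+
+<p>Expressed as JSON, the resulting dokumentobjekt could look
+something like this minimal sketch. The four attribute values are the
+ones listed above, with the timestamp service used later in this post
+as the example source URL; how a Noark 5 web service would serialise
+any surrounding fields is not specified here:</p>
+
+<p><blockquote><pre>
+{
+  "format": "RFC3161",
+  "mimeType": "application/timestamp-reply",
+  "formatDetaljer": "https://zeitstempel.dfn.de/",
+  "filenavn": "&lt;sjekksum&gt;.tsr"
+}
+</pre></blockquote></p>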
+
+<p>This assumes a service following
+<a href="https://tools.ietf.org/html/rfc3161">IETF RFC 3161</a> is
+used, which specifies the given MIME type for replies and the .tsr
+file extension for the content of such a trusted timestamp. As far as
+I can tell from the Noark 5 specifications, it is OK to have several
+variants/renderings of a dokument attached to a given
+dokumentbeskrivelse objekt. It might be stretching it a bit to make
+some of these variants represent crypto-signatures useful for
+verifying the document integrity instead of representing the dokument
+itself.</p>
+
+<p>Using the source of the service in formatDetaljer allows several
+timestamping services to be used. This is useful to spread the risk
+of key compromise over several organisations. The timestamps would
+only become untrustworthy if all of the organisations were
+compromised.</p>
+
+<p>The following one-liner on Linux can be used to generate the tsr
+file. $inputfile is the path to the file to checksum, and $sha256 is
+the SHA-256 checksum of the file (i.e. the "&lt;sjekksum&gt;" part of
+the filename mentioned above).</p>
+
+<p><blockquote><pre>
+openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
+ | curl -s -H "Content-Type: application/timestamp-query" \
+ --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr
+</pre></blockquote></p>
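+
+<p>The $sha256 value can be computed from the input file itself. A
+small helper along these lines could derive the file name to store
+the reply in (a sketch, assuming the GNU coreutils sha256sum tool is
+available):</p>
+
+<p><blockquote><pre>
+# Print the file name to store the RFC 3161 timestamp reply in,
+# i.e. the SHA-256 checksum of the input file with a .tsr suffix.
+tsr_name() {
+    echo "$(sha256sum "$1" | cut -d' ' -f1).tsr"
+}
+</pre></blockquote></p>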
+
+<p>To verify the timestamp, you first need to download the public key
+of the trusted timestamp service, for example using this command:</p>
+
+<p><blockquote><pre>
+wget -O ca-cert.txt \
+ https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
+</pre></blockquote></p>
+
+<p>Note, the public key should be stored alongside the timestamps in
+the archive to make sure it is also available 100 years from now. It
+is probably a good idea to standardise how and where to store such
+public keys, to make it easier to find for those trying to verify
+documents 100 or 1000 years from now. :)</p>
+
+<p>The verification itself is a simple openssl command:</p>
+
+<p><blockquote><pre>
+openssl ts -verify -data $inputfile -in $sha256.tsr \
+ -CAfile ca-cert.txt -text
+</pre></blockquote></p>
+
+<p>Is there any reason this approach would not work? Is it somehow against
+the Noark 5 specification?</p>
+</description>
+ </item>
+
+ <item>
+ <title>When the Nynorsk translation fails at the exam...</title>
+ <link>http://people.skolelinux.org/pere/blog/N_r_nynorskoversettelsen_svikter_til_eksamen___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/N_r_nynorskoversettelsen_svikter_til_eksamen___.html</guid>
+ <pubDate>Sat, 3 Jun 2017 08:20:00 +0200</pubDate>
+ <description><p><a href="http://www.aftenposten.no/norge/Krever-at-elever-ma-fa-annullert-eksamen-etter-rot-med-oppgavetekster-622459b.html">Aftenposten
+reports today</a> that there were errors in the exam texts for the
+exam in politics and human rights, where the Bokmål and Nynorsk
+versions of the text were not identical. The exam text is quoted in
+the article, and I got curious whether the free software translation
+system <a href="https://www.apertium.org/">Apertium</a> would have
+done a better job than the Norwegian Directorate for Education and
+Training (Utdanningsdirektoratet). It looks like it would.</p>
+
+<p>Here is the Bokmål exam text:</p>
+
+<blockquote>
+<p>Drøft utfordringene knyttet til nasjonalstatenes og andre aktørers
+rolle og muligheter til å håndtere internasjonale utfordringer, som
+for eksempel flykningekrisen.</p>
+
+<p>Vedlegge er eksempler på tekster som kan gi relevante perspektiver
+på temaet:</p>
+<ol>
+<li>Flykningeregnskapet 2016, UNHCR og IDMC</li>
+<li>«Grenseløst Europa for fall» A-Magasinet, 26. november 2015</li>
+</ol>
+
+</blockquote>
+
+<p>Apertium translates this as follows:</p>
+
+<blockquote>
+<p>Drøft utfordringane knytte til nasjonalstatane sine og rolla til
+andre aktørar og høve til å handtera internasjonale utfordringar, som
+til dømes *flykningekrisen.</p>
+
+<p>Vedleggja er døme på tekster som kan gje relevante perspektiv på
+temaet:</p>
+
+<ol>
+<li>*Flykningeregnskapet 2016, *UNHCR og *IDMC</li>
+<li>«*Grenseløst Europa for fall» A-Magasinet, 26. november 2015</li>
+</ol>
+
+</blockquote>
+
+<p>Words that were not understood are marked with an asterisk (*) and
+need an extra language check. But no words disappeared, as happened
+in the text the pupils were presented with at the exam. I do suspect
+that "andre aktørers rolle og muligheter til ..." should have been
+translated to "rolla til andre aktørar og deira høve til ..." or
+something like that, but that is perhaps nitpicking. It only
+underlines that proofreading is always needed after machine
+translation.</p>
+</description>
+ </item>
+
+ <item>
+ <title>E-mail as an archive format in the National Archivist's regulations?</title>
+ <link>http://people.skolelinux.org/pere/blog/Epost_inn_som_arkivformat_i_Riksarkivarens_forskrift_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Epost_inn_som_arkivformat_i_Riksarkivarens_forskrift_.html</guid>
+ <pubDate>Thu, 27 Apr 2017 11:30:00 +0200</pubDate>
+ <description><p>These days, with a deadline of May 1st, the National
+Archivist of Norway (Riksarkivaren) has a regulation out for public
+consultation. As you can see, there is not much time left before the
+deadline, which expires on Sunday. This regulation is what lists the
+formats that are acceptable for archiving in
+<a href="http://www.arkivverket.no/arkivverket/Offentleg-forvalting/Noark/Noark-5">Noark
+5 solutions</a> in Norway.</p>
+
+<p>I found the consultation documents at
+<a href="https://www.arkivrad.no/aktuelt/riksarkivarens-forskrift-pa-horing">Norsk
+Arkivråd</a> after being tipped off on the mailing list of
+<a href="https://github.com/hiOA-ABI/nikita-noark5-core">the free
+software project Nikita Noark5-Core</a>, which is creating a Noark 5
+web service interface (tjenestegrensesnitt). I am involved in the
+Nikita project, and thanks to my interest in the web service interface
+project I have read quite a few Noark 5 related documents, and to my
+surprise discovered that standard e-mail is not on the list of
+approved formats that can be archived. The consultation with its
+Sunday deadline is an excellent opportunity to try to do something
+about it. I am working on
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/docs/hoering-arkivforskrift.tex">my
+own consultation response</a>, and wonder if others are interested in
+supporting the proposal to allow archiving e-mail as e-mail in the
+archive.</p>
+
+<p>Are you already writing your own consultation response? If so, you
+could consider including a wording about e-mail storage. I do not
+think much is needed. Here is a short text proposal:</p>
+
+<p><blockquote>
+
+  <p>We refer to the consultation sent out 2017-02-17 (the National
+  Archivist's reference 2016/9840 HELHJO), and take the liberty of
+  submitting some input on the revision of the Regulations on
+  supplementary technical and archival provisions on the handling of
+  public archives (the National Archivist's regulations).</p>
+
+  <p>A very large part of our communication today takes place by
+  e-mail. We therefore propose that Internet e-mail, as described in
+  IETF RFC 5322,
+  <a href="https://tools.ietf.org/html/rfc5322">https://tools.ietf.org/html/rfc5322</a>,
+  should be included as an approved document format. We propose that
+  the regulation's list of approved document formats on submission in
+  § 5-16 is amended to include Internet e-mail.</p>
+
+</blockquote></p>
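+
+<p>For reference, a minimal Internet e-mail message following the RFC
+5322 format looks like this (an illustrative sketch with made-up
+addresses):</p>
+
+<p><blockquote><pre>
+From: Sender &lt;sender@example.org&gt;
+To: Receiver &lt;receiver@example.com&gt;
+Subject: Example message
+Date: Thu, 27 Apr 2017 11:30:00 +0200
+Message-ID: &lt;20170427113000.GA1234@example.org&gt;
+
+The message body goes here.
+</pre></blockquote></p>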
+
+<p>As part of the work on the web service interface, we have tested
+how e-mail can be stored in a Noark 5 structure, and are writing a
+proposal for how this can be done, which will be sent to the National
+Archives of Norway (Arkivverket) as soon as it is finished. Those
+interested can
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/docs/epostlagring.md">follow
+the progress on the web</a>.</p>
+
+<p>Update 2017-04-28: Today the consultation response I wrote was
+ <a href="https://www.nuug.no/news/NUUGs_h_ringuttalelse_til_Riksarkivarens_forskrift.shtml">submitted
+ by the association NUUG</a>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>The public electronic mail journal blocks access for selected web clients</title>
+ <link>http://people.skolelinux.org/pere/blog/Offentlig_elektronisk_postjournal_blokkerer_tilgang_for_utvalgte_webklienter.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Offentlig_elektronisk_postjournal_blokkerer_tilgang_for_utvalgte_webklienter.html</guid>
+ <pubDate>Thu, 20 Apr 2017 13:00:00 +0200</pubDate>
+ <description><p>I discovered today that <a href="https://www.oep.no/">the
+web site publishing public mail journals from Norwegian government
+agencies</a>, OEP, has started blocking some types of web clients from
+getting access. I do not know how many are affected, but at least
+libwww-perl and curl are blocked. To test it yourself, run the
+following:</p>
+
+<blockquote><pre>
+% curl -v -s https://www.oep.no/pub/report.xhtml?reportId=3 2>&amp;1 |grep '&lt; HTTP'
+&lt; HTTP/1.1 404 Not Found
+% curl -v -s --header 'User-Agent:Opera/12.0' https://www.oep.no/pub/report.xhtml?reportId=3 2>&amp;1 |grep '&lt; HTTP'
+&lt; HTTP/1.1 200 OK
+%
+</pre></blockquote>
+
+<p>Here you can see that the service returns «404 Not Found» for curl
+with its default settings, while it returns «200 OK» if curl claims
+to be Opera version 12.0. The public electronic mail journal started
+the blocking 2017-03-02.</p>
+
+<p>The blocking will make it a bit harder to fetch information from
+oep.no automatically. Could the blocking have been done to hinder
+automated collection of information from OEP, the way Pressens
+Offentlighetsutvalg (the openness committee of the Norwegian press)
+did to document how the ministries hinder access in
+<a href="http://presse.no/dette-mener-np/undergraver-offentlighetsloven/">the
+report «Slik hindrer departementer innsyn» (How ministries hinder
+access) published in January 2017</a>? It seems unlikely, as it is
+trivial to change the User-Agent to something new.</p>
+
+<p>Is there any legal basis for the government to discriminate
+between web clients the way it is done here, where access is granted
+or denied depending on what the client claims its own name is? As OEP
+is owned by DIFI and operated by Basefarm, perhaps there are documents
+exchanged between these two parties one could request access to, to
+understand what has happened. But
+<a href="https://www.oep.no/search/result.html?period=dateRange&amp;fromDate=01.01.2016&amp;toDate=01.04.2017&amp;dateType=documentDate&amp;caseDescription=&amp;descType=both&amp;caseNumber=&amp;documentNumber=&amp;sender=basefarm&amp;senderType=both&amp;documentType=all&amp;legalAuthority=&amp;archiveCode=&amp;list2=196&amp;searchType=advanced&amp;Search=Search+in+records">the
+mail journal of DIFI only shows two documents</a> between DIFI and
+Basefarm during the last year.
+<a href="https://www.mimesbronn.no/request/blokkering_av_tilgang_til_oep_fo">Mimes
+brønn next</a>, I guess.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Free software archive system Nikita now able to store documents</title>
+ <link>http://people.skolelinux.org/pere/blog/Free_software_archive_system_Nikita_now_able_to_store_documents.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Free_software_archive_system_Nikita_now_able_to_store_documents.html</guid>
+ <pubDate>Sun, 19 Mar 2017 08:00:00 +0100</pubDate>
+ <description><p>The <a href="https://github.com/hiOA-ABI/nikita-noark5-core">Nikita
+Noark 5 core project</a> is implementing the Norwegian standard for
+keeping an electronic archive of government documents.
+<a href="http://www.arkivverket.no/arkivverket/Offentlig-forvaltning/Noark/Noark-5/English-version">The
+Noark 5 standard</a> documents the requirements for data systems used
+by the archives in the Norwegian government, and the Noark 5 web
+interface specification documents a REST web service for storing,
+searching and retrieving documents and metadata in such an archive.
+I've been involved
+in the project since a few weeks before Christmas, when the Norwegian
+Unix User Group
+<a href="https://www.nuug.no/news/NOARK5_kjerne_som_fri_programvare_f_r_epostliste_hos_NUUG.shtml">announced
+it supported the project</a>. I believe this is an important project,
+and hope it can make it possible for the government archives in the
+future to use free software to keep the archives we citizens depend
+on. But as I do not hold such an archive myself, my first use
+case is to store and analyse public mail journal metadata published
+from the government. I find it useful to have a clear use case in
+mind when developing, to make sure the system scratches one of my
+itches.</p>
+
+<p>If you would like to help make sure there is a free software
+alternative for the archives, please join our IRC channel
+(<a href="irc://irc.freenode.net/%23nikita">#nikita on
+irc.freenode.net</a>) and
+<a href="https://lists.nuug.no/mailman/listinfo/nikita-noark">the
+project mailing list</a>.</p>
+
+<p>When I got involved, the web service could store metadata about
+documents. But a few weeks ago, a new milestone was reached when it
+became possible to store full text documents too. Yesterday, I
+completed an implementation of a command line tool
+<tt>archive-pdf</tt> to upload a PDF file to the archive using this
+API. The tool is very simple at the moment, and finds existing
+<a href="https://en.wikipedia.org/wiki/Fonds">fonds</a>, series and
+files while asking the user to select which one to use if more than
+one exists. Once a file is identified, the PDF is associated with the
+file and uploaded, using the title extracted from the PDF itself. The
+process is fairly similar to visiting the archive, opening a cabinet,
+locating a file and storing a piece of paper in the archive. Here is
+a test run directly after populating the database with test data using
+our API tester:</p>
+
+<p><blockquote><pre>
+~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
+using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
+using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446
+
+ 0 - Title of the test case file created 2017-03-18T23:49:32.103446
+ 1 - Title of the test file created 2017-03-18T23:49:32.103446
+Select which mappe you want (or search term): 0
+Uploading mangelmelding/mangler.pdf
+ PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
+ File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
+~/src//noark5-tester$
+</pre></blockquote></p>
+
+<p>You can see here how the fonds (arkiv) and series (arkivdel) only had
+one option, while the user needs to choose which file (mappe) to use
+among the two created by the API tester. The <tt>archive-pdf</tt>
+tool can be found in the git repository for the API tester.</p>
+
+<p>In the project, I have been mostly working on
+<a href="https://github.com/petterreinholdtsen/noark5-tester">the API
+tester</a> so far, while getting to know the code base. The API
+tester currently uses
+<a href="https://en.wikipedia.org/wiki/HATEOAS">the HATEOAS links</a>
+to traverse the entire exposed service API and verify that the exposed
+operations and objects match the specification, as well as trying to
+create objects holding metadata and uploading a simple XML file to
+store. The tester has proved very useful for finding flaws in our
+implementation, as well as flaws in the reference site and the
+specification.</p>
+
+<p>The test document I uploaded is a summary of all the specification
+defects we have collected so far while implementing the web service.
+There are several unclear and conflicting parts of the specification,
+and we have
+<a href="https://github.com/petterreinholdtsen/noark5-tester/tree/master/mangelmelding">started
+writing down</a> the questions we get from implementing it. We use a
+format inspired by how <a href="http://www.opengroup.org/austin/">The
+Austin Group</a> collect defect reports for the POSIX standard with
+<a href="http://www.opengroup.org/austin/mantis.html">their
+instructions for the MANTIS defect tracker system</a>, for lack of an
+official way to structure defect reports for Noark 5 (our first
+submitted defect report was a
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/mangelmelding/sendt/2017-03-15-mangel-prosess.md">request
+for a procedure for submitting defect reports</a> :).</p>
+
+<p>The Nikita project is implemented using Java and Spring, and is
+fairly easy to get up and running using Docker containers for those
+that want to test the current code base. The API tester is
+implemented in Python.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Detecting NFS hangs on Linux without hanging yourself...</title>
+ <link>http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</guid>
+ <pubDate>Thu, 9 Mar 2017 15:20:00 +0100</pubDate>
+ <description><p>Over the years, administrating thousands of NFS-mounting
+Linux computers at a time, I often needed a way to detect if a machine
+was experiencing an NFS hang. If you try to use <tt>df</tt> or look at a
+file or directory affected by the hang, the process (and possibly the
+shell) will hang too. So you want to be able to detect this without
+risking the detection process getting stuck too. It has not been
+obvious how to do this. When the hang has lasted a while, it is
+possible to find messages like these in dmesg:</p>
+
+<p><blockquote>
+nfs: server nfsserver not responding, still trying
+<br>nfs: server nfsserver OK
+</blockquote></p>
+
+<p>It is hard to know if the hang is still going on, and it is hard to
+be sure looking in dmesg is going to work. If there are lots of other
+messages in dmesg, the lines might have rotated out of sight before they
+are noticed.</p>
+
+<p>While reading through the NFS client implementation in the Linux kernel
+code, I came across some statistics that seem to give a way to detect
+it. The om_timeouts sunrpc value in the kernel will increase every
+time the above log entry is inserted into dmesg. And after digging a
+bit further, I discovered that this value shows up in
+/proc/self/mountstats on Linux.</p>
+
+<p>The mountstats content seems to be shared between files using the
+same file system context, so it is enough to check one of the
+mountstats files to get the state of the mount point for the machine.
+I assume this will not show lazily unmounted NFS mounts, nor NFS mount
+points in a different process context (i.e. with a different filesystem
+view), but that does not worry me.</p>
+
+<p>The content for an NFS mount point looks similar to this:</p>
+
+<p><blockquote><pre>
+[...]
+device /dev/mapper/Debian-var mounted on /var with fstype ext3
+device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
+ opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
+ age: 7863311
+ caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
+ sec: flavor=1,pseudoflavor=1
+ events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
+ bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
+ RPC iostats version: 1.0 p/v: 100003/3 (nfs)
+ xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
+ per-op statistics
+ NULL: 0 0 0 0 0 0 0 0
+ GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
+ SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
+ LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
+ ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
+ READLINK: 125 125 0 20472 18620 0 1112 1118
+ READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
+ WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
+ CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
+ MKDIR: 3680 3680 0 773980 993920 26 23990 24245
+ SYMLINK: 903 903 0 233428 245488 6 5865 5917
+ MKNOD: 80 80 0 20148 21760 0 299 304
+ REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
+ RMDIR: 3367 3367 0 645112 484848 22 5782 6002
+ RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
+ LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
+ READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
+ READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
+ FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
+ FSINFO: 2 2 0 232 328 0 1 1
+ PATHCONF: 1 1 0 116 140 0 0 0
+ COMMIT: 0 0 0 0 0 0 0 0
+
+device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
+[...]
+</pre></blockquote></p>
+
+<p>The key number to look at is the third number in the per-op list.
+It is the number of NFS timeouts experienced per file system
+operation, here 22 write timeouts and 5 access timeouts. If these
+numbers are increasing, I believe the machine is experiencing an NFS
+hang. Unfortunately the timeout value does not start to increase
+right away. The NFS operations need to time out first, and this can
+take a while. The exact timeout value depends on the setup. For
+example the defaults for TCP and UDP mount points are quite different,
+and the timeout value is affected by the soft, hard, timeo and retrans
+NFS mount options.</p>
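+
+<p>A small sketch of how such a check could be automated: the helper
+below lists every NFS operation with a non-zero timeout count (the
+third number after the operation name) from a mountstats file,
+without touching any file on the NFS mount point itself:</p>
+
+<p><blockquote><pre>
+# List NFS operations with a non-zero timeout count from a
+# mountstats file (/proc/self/mountstats by default).
+nfs_timeouts() {
+    awk '/per-op statistics/ { inops = 1; next }
+         /^device /          { inops = 0 }
+         inops && NF >= 4 && $4 > 0 { print $1, $4 }' \
+        "${1:-/proc/self/mountstats}"
+}
+</pre></blockquote></p>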
+
+<p>The only way I have found to get the timeout count on Debian and
+Red Hat Enterprise Linux is to peek in /proc/. But according to
+<a href="http://docs.oracle.com/cd/E19253-01/816-4555/netmonitor-12/index.html">Solaris
+10 System Administration Guide: Network Services</a>, the 'nfsstat -c'
+command can be used to get these timeout values. But this does not
+work on Linux, as far as I can tell. I
+<a href="http://bugs.debian.org/857043">asked Debian about this</a>,
+but have not seen any replies yet.</p>
+
+<p>Is there a better way to figure out if a Linux NFS client is
+experiencing NFS hangs? Is there a way to detect which processes are
+affected? Is there a way to get the NFS mount going quickly once the
+network problem causing the NFS hang has been cleared? I would very
+much welcome some clues, as we regularly run into NFS hangs.</p>
+</description>
+ </item>
+
<item>
<title>How does it feel to be wiretapped, when you should be doing the wiretapping...</title>
<link>http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</link>
<p>Next, the Federal Bureau of Investigation ask the Department of
Justice to go public rejecting the claims that Donald Trump was
wiretapped illegally. I fail to see the relevance, given that I am
-sure the surveillance industry in USA according to themselves believe
-they have all the legal backing they need to conduct mass surveillance
-on the entire world.</p>
+sure the surveillance industry in USA believe they have all the legal
+backing they need to conduct mass surveillance on the entire
+world.</p>
<p>There is even the director of the FBI stating that he never saw an
order requesting wiretapping of Donald Trump. That is not very
'the FBI denies any illegal wiretapping'. There is a fundamental and
important difference, and it make me sad that the journalists are
unable to grasp it.</p>
+
+<p><strong>Update 2017-03-13:</strong> Look like
+<a href="https://theintercept.com/2017/03/13/rand-paul-is-right-nsa-routinely-monitors-americans-communications-without-warrants/">The
+Intercept report that US Senator Rand Paul confirm what I state above</a>.</p>
</description>
</item>
</description>
</item>
- <item>
- <title>Ruling ignored our objections to the seizure of popcorn-time.no (#domstolkontroll)</title>
- <link>http://people.skolelinux.org/pere/blog/Ruling_ignored_our_objections_to_the_seizure_of_popcorn_time_no___domstolkontroll_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Ruling_ignored_our_objections_to_the_seizure_of_popcorn_time_no___domstolkontroll_.html</guid>
- <pubDate>Mon, 13 Feb 2017 21:30:00 +0100</pubDate>
- <description><p>A few days ago, we received the ruling from
-<a href="http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html">my
-day in court</a>. The case in question is a challenge of the seizure
-of the DNS domain popcorn-time.no. The ruling simply did not mention
-most of our arguments, and seemed to take everything ØKOKRIM said at
-face value, ignoring our demonstration and explanations. But it is
-hard to tell for sure, as we still have not seen most of the documents
-in the case and thus were unprepared and unable to contradict several
-of the claims made in court by the opposition. We are considering an
-appeal, but it is partly a question of funding, as it is costing us
-quite a bit to pay for our lawyer. If you want to help, please
-<a href="http://www.nuug.no/dns-beslag-donasjon.shtml">donate to the
-NUUG defense fund</a>.</p>
-
-<p>The details of the case, as far as we know it, is available in
-Norwegian from
-<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the NUUG
-blog</a>. This also include
-<a href="https://www.nuug.no/news/Avslag_etter_rettslig_h_ring_om_DNS_beslaget___vurderer_veien_videre.shtml">the
-ruling itself</a>.</p>
-</description>
- </item>
-
- <item>
- <title>A day in court challenging seizure of popcorn-time.no for #domstolkontroll</title>
- <link>http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html</guid>
- <pubDate>Fri, 3 Feb 2017 11:10:00 +0100</pubDate>
- <description><p align="center"><img width="70%" src="http://people.skolelinux.org/pere/blog/images/2017-02-01-popcorn-time-in-court.jpeg"></p>
-
-<p>On Wednesday, I spent the entire day in court in Follo Tingrett
-representing <a href="https://www.nuug.no/">the member association
-NUUG</a>, alongside <a href="https://www.efn.no/">the member
-association EFN</a> and <a href="http://www.imc.no">the DNS registrar
-IMC</a>, challenging the seizure of the DNS name popcorn-time.no. It
-was interesting to sit in a court of law for the first time in my
-life. Our team can be seen in the picture above: attorney Ola
-Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil
-Eriksen and NUUG board member Petter Reinholdtsen.</p>
-
-<p><a href="http://www.domstol.no/no/Enkelt-domstol/follo-tingrett/Nar-gar-rettssaken/Beramming/?cid=AAAA1701301512081262234UJFBVEZZZZZEJBAvtale">The
-case at hand</a> is that the Norwegian National Authority for
-Investigation and Prosecution of Economic and Environmental Crime (aka
-Økokrim) decided on their own, to seize a DNS domain early last
-year, without following
-<a href="https://www.norid.no/no/regelverk/navnepolitikk/#link12">the
-official policy of the Norwegian DNS authority</a> which require a
-court decision. The web site in question was a site covering Popcorn
-Time. And Popcorn Time is the name of a technology with both legal
-and illegal applications. Popcorn Time is a client combining
-searching a Bittorrent directory available on the Internet with
-downloading/distribute content via Bittorrent and playing the
-downloaded content on screen. It can be used illegally if it is used
-to distribute content against the will of the right holder, but it can
-also be used legally to play a lot of content, for example the
-millions of movies
-<a href="https://archive.org/details/movies">available from the
-Internet Archive</a> or the collection
-<a href="http://vodo.net/films/">available from Vodo</a>. We created
-<a href="magnet:?xt=urn:btih:86c1802af5a667ca56d3918aecb7d3c0f7173084&dn=PresentasjonFolloTingrett.mov&tr=udp%3A%2F%2Fpublic.popcorn-tracker.org%3A6969%2Fannounce">a
-video demonstrating legally use of Popcorn Time</a> and played it in
-Court. It can of course be downloaded using Bittorrent.</p>
-
-<p>I did not quite know what to expect from a day in court. The
-government held on to their version of the story and we held on to
-ours, and I hope the judge is able to make sense of it all. We will
-know in two weeks time. Unfortunately I do not have high hopes, as
-the Government have the upper hand here with more knowledge about the
-case, better training in handling criminal law and in general higher
-standing in the courts than fairly unknown DNS registrar and member
-associations. It is expensive to be right also in Norway. So far the
-case have cost more than NOK 70 000,-. To help fund the case, NUUG
-and EFN have asked for donations, and managed to collect around NOK 25
-000,- so far. Given the presentation from the Government, I expect
-the government to appeal if the case go our way. And if the case do
-not go our way, I hope we have enough funding to appeal.</p>
-
-<p>From the other side came two people from Økokrim. On the benches,
-appearing to be part of the group from the government were two people
-from the Simonsen Vogt Wiik lawyer office, and three others I am not
-quite sure who was. Økokrim had proposed to present two witnesses
-from The Motion Picture Association, but this was rejected because
-they did not speak Norwegian and it was a bit late to bring in a
-translator, but perhaps the two from MPA were present anyway. All
-seven appeared to know each other. Good to see the case is take
-seriously.</p>
-
-<p>If you, like me, believe the courts should be involved before a DNS
-domain is hijacked by the government, or you believe the Popcorn Time
-technology has a lot of useful and legal applications, I suggest you
-too <a href="http://www.nuug.no/dns-beslag-donasjon.shtml">donate to
-the NUUG defense fund</a>. Both Bitcoin and bank transfer are
-available. If NUUG gets more than we need for the legal action (very
-unlikely), the rest will be spent promoting free software, open
-standards and unix-like operating systems in Norway, so no matter what
-happens the money will be put to good use.</p>
-
-<p>If you want to learn more about the case, I recommend you check out
-<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the blog
-posts from NUUG covering the case</a>. They cover the legal arguments
-on both sides.</p>
-</description>
- </item>
-
- <item>
 <title>The National Library of Norway ends its illegal use of Google Forms</title>
- <link>http://people.skolelinux.org/pere/blog/Nasjonalbiblioteket_avslutter_sin_ulovlige_bruk_av_Google_Skjemaer.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Nasjonalbiblioteket_avslutter_sin_ulovlige_bruk_av_Google_Skjemaer.html</guid>
- <pubDate>Thu, 12 Jan 2017 09:40:00 +0100</pubDate>
-<description><p>Today I received some really good news. The background
-is that before Christmas, the National Library of Norway arranged
-<a href="http://www.nb.no/Bibliotekutvikling/Kunnskapsorganisering/Nasjonalt-verksregister/Seminar-om-verksregister">a
-seminar about its excellent «verksregister» initiative</a>. The only
-way to sign up for this seminar was to send personal information to
-Google via Google Forms. I found this a questionable practice, as it
-should be possible to attend seminars arranged by the public sector
-without having to share one's interests, position and other personal
-information with Google. I therefore used
-<a href="https://www.mimesbronn.no/">Mimes brønn</a> to request access
-to <a href="https://www.mimesbronn.no/request/personopplysninger_til_google_sk">the
-agreements and assessments the National Library had made around
-this</a>. The Personal Data Act sets clear limits on what must be in
-place before one can ask third parties, especially abroad, to process
-personal information on one's behalf, so thorough documentation should
-exist before something like this can be legal. Two lawyers at the
-National Library initially believed this was perfectly fine, and that
-Google's standard agreement could be used as a data processing
-agreement. I found that strange, but did not have the capacity to
-follow up on the case until two days ago.</p>
-
-<p>Today's good news, which came after I informed the National Library
-that the Norwegian Data Protection Authority rejected Google's
-standard agreements as data processing agreements in 2011, is that the
-National Library has decided to stop using Google Forms/Apps and to
-enter a dialogue with DIFI to find better ways to handle seminar
-registrations in line with the Personal Data Act. It is wonderful to
-see that it sometimes helps to ask what on earth the public sector is
-up to.</p>
-</description>
- </item>
-
- <item>
 <title>Is NAV violating its own privacy policy?</title>
- <link>http://people.skolelinux.org/pere/blog/Bryter_NAV_sin_egen_personvernerkl_ring_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Bryter_NAV_sin_egen_personvernerkl_ring_.html</guid>
- <pubDate>Wed, 11 Jan 2017 06:50:00 +0100</pubDate>
-<description><p>I read with interest a news story at
-<a href="http://www.digi.no/artikler/nav-avslorer-trygdemisbruk-ved-a-spore-ip-adresser/367394">digi.no</a>
-and
-<a href="https://www.nrk.no/buskerud/trygdesvindlere-avslores-av-utenlandske-ip-adresser-1.13313461">NRK</a>
-about how it is not just me: NAV also geolocates IP addresses,
-analysing the IP addresses of those submitting status reports
-(«meldekort») to check whether the reports are sent from foreign IP
-addresses. Police prosecutor Hans Lyder Haare in Drammen is quoted by
-NRK as saying «The two were exposed by, among other things, IP
-addresses. One could see that the status report came from
-abroad.»</p>
-
-<p>I think it is good that it becomes better known that IP addresses
-are linked to individuals, and that collected information is used to
-determine people's location also by actors here in Norway. I see it
-as yet another argument for using
-<a href="https://www.torproject.org/">Tor</a> as much as possible to
-make IP geolocation harder, so one can protect one's privacy and avoid
-sharing one's physical location with unauthorized parties.</p>
-
-<p>But there is one thing about this news that worries me. I was
-tipped off (thanks #nuug) about
-<a href="https://www.nav.no/no/NAV+og+samfunn/Kontakt+NAV/Teknisk+brukerstotte/Snarveier/personvernerkl%C3%A6ring-for-arbeids-og-velferdsetaten">NAV's
-privacy policy</a>, which under the heading «Privacy and statistics»
-reads:</p>
-
-<p><blockquote>
-
-<p>«When you visit nav.no, you leave electronic traces behind. The
-traces are created because your browser automatically sends a range of
-information to NAV's server every time you request a page. This
-includes, for example, information about which browser and version you
-use, and your Internet address (IP address). For each page displayed,
-the following information is stored:</p>
-
-<ul>
-<li>which page you are viewing</li>
-<li>date and time</li>
-<li>which browser you use</li>
-<li>your IP address</li>
-</ul>
-
-<p>None of this information will be used to identify individuals. NAV
-uses the information to generate aggregate statistics showing, among
-other things, which pages are most popular. The statistics are a tool
-for improving our services.»</p>
-
-</blockquote></p>
-
-<p>I fail to see how analysing visitors' IP addresses to find out who
-submits status reports via the web from a foreign IP address can be
-done without conflicting with the claim that «none of this information
-will be used to identify individuals». It thus seems to me that NAV
-is violating its own privacy policy, which
-<a href="http://people.skolelinux.org/pere/blog/Er_lover_brutt_n_r_personvernpolicy_ikke_stemmer_med_praksis_.html">the
-Data Protection Authority told me at the beginning of December is
-probably a violation of the Personal Data Act</a>.</p>
-
-<p>In addition, the privacy policy is quite misleading, since NAV's
-web pages not only supply NAV with personal information, but also ask
-the user's browser to contact five other web servers
-(script.hotjar.com, static.hotjar.com, vars.hotjar.com,
-www.google-analytics.com and www.googletagmanager.com), making
-personal information available to the companies Hotjar and Google, and
-to anyone able to listen in on the traffic along the way (such as FRA,
-GCHQ and NSA). I also fail to see how such spreading of personal
-information can comply with the requirements of the Personal Data Act,
-or with NAV's own privacy policy.</p>
-
-<p>Perhaps NAV should take a close look at its privacy policy? Or
-perhaps the Data Protection Authority should?</p>
-</description>
- </item>
-
- <item>
- <title>Where did that package go? &mdash; geolocated IP traceroute</title>
- <link>http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</guid>
- <pubDate>Mon, 9 Jan 2017 12:20:00 +0100</pubDate>
-<description><p>Did you ever wonder where the web traffic really flows
-to reach the web servers, and who owns the network equipment it is
-flowing through? It is possible to get a glimpse of this using
-traceroute, but it is hard to find all the details. Many years ago, I
-wrote a system to map the Norwegian Internet (trying to figure out if
-our plans for a network game service would get low enough latency, and
-who we needed to talk to about setting up game servers close to the
-users). Back then I used traceroute output from many locations (I
-asked my friends to run a script and send me their traceroute output)
-to create the graph and the map. The output from traceroute typically
-looks like this:
-
-<p><pre>
-traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.202.1) 0.447 ms 0.486 ms 0.621 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.467 ms 0.578 ms 0.675 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.385 ms 0.373 ms 0.358 ms
- 4 te3-1-2.br1.fn3.as2116.net (193.156.90.3) 1.174 ms 1.172 ms 1.153 ms
- 5 he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48) 3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.857 ms
- 6 ae1.ar8.oslosda310.as2116.net (195.0.242.39) 0.662 ms 0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23) 0.622 ms
- 7 89.191.10.146 (89.191.10.146) 0.931 ms 0.917 ms 0.955 ms
- 8 * * *
- 9 * * *
-[...]
-</pre></p>
-
-<p>This shows the DNS names and IP addresses of (at least some of the)
-network equipment involved in getting the data traffic from me to the
-www.stortinget.no server, and how long it took in milliseconds for a
-packet to reach the equipment and return to me. Three packets are
-sent, and sometimes the packets do not follow the same path. This
-is shown for hop 5, where three different IP addresses replied to the
-traceroute request.</p>
-
-<p>There are many ways to measure trace routes. Other good traceroute
-implementations I use are traceroute (using ICMP packets), mtr (which
-can do ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP
-and TCP traceroute and a lot of other capabilities). All of them are
-easily available in <a href="https://www.debian.org/">Debian</a>.</p>
-
-<p>This time around, I wanted to know the geographic location of
-different route points, to visualize how visiting a web page spreads
-information about the visit to a lot of servers around the globe. The
-background is that a web site today will often ask the browser to fetch
-from many servers the parts (for example HTML, JSON, fonts,
-JavaScript, CSS, video) required to display the content. This will
-leak information about the visit to those controlling these servers
-and anyone able to peek at the data traffic passing by (like your ISP,
-the ISP's backbone provider, FRA, GCHQ, NSA and others).</p>
-
-<p>Let's pick an example, the Norwegian parliament web site
-www.stortinget.no. It is read daily by all members of parliament and
-their staff, as well as political journalists, activists and many
-other citizens of Norway. A visit to the www.stortinget.no web site
-will ask your browser to contact 8 other servers: ajax.googleapis.com,
-insights.hotjar.com, script.hotjar.com, static.hotjar.com,
-stats.g.doubleclick.net, www.google-analytics.com,
-www.googletagmanager.com and www.netigate.se. I extracted this by
-asking <a href="http://phantomjs.org/">PhantomJS</a> to visit the
-Stortinget web page and tell me all the URLs PhantomJS downloaded to
-render the page (in HAR format using
-<a href="https://github.com/ariya/phantomjs/blob/master/examples/netsniff.js">their
-netsniff example</a>. I am very grateful to Gorm for showing me how
-to do this). My goal is to visualize network traces to all IP
-addresses behind these DNS names, to show where visitors' personal
-information is spread when visiting the page.</p>
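
<p>A HAR capture is just JSON, so pulling out the set of hosts a page
contacted takes only a few lines of Python using the standard library.
A minimal sketch (the HAR snippet below is hand-made for illustration,
not real output from the netsniff example):

```python
# Extract the set of hosts contacted while rendering a page, from a
# HAR (HTTP Archive) document.  HAR is plain JSON, so the standard
# library is enough.  The embedded HAR snippet is a made-up example.
import json
from urllib.parse import urlparse

har = json.loads("""{
  "log": {"entries": [
    {"request": {"url": "https://www.stortinget.no/"}},
    {"request": {"url": "https://ajax.googleapis.com/ajax/libs/jquery.js"}},
    {"request": {"url": "https://www.google-analytics.com/analytics.js"}}
  ]}
}""")

# Collect the unique host names from every request URL.
hosts = sorted({urlparse(entry["request"]["url"]).netloc
                for entry in har["log"]["entries"]})
print(hosts)
```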
-
-<p align="center"><a href="www.stortinget.no-geoip.kml"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geoip-small.png" alt="map of combined traces for URLs used by www.stortinget.no using GeoIP"/></a></p>
-
-<p>When I had a look around for options, I could not find any good
-free software tools to do this, and decided I needed my own traceroute
-wrapper outputting KML based on locations looked up using GeoIP. KML
-is easy to work with and easy to generate, and understood by several
-of the GIS tools I have available. I got good help from my NUUG
-colleague Anders Einar with this, and the result can be seen in
-<a href="https://github.com/petterreinholdtsen/kmltraceroute">my
-kmltraceroute git repository</a>. Unfortunately, the quality of the
-free GeoIP databases I could find (and the for-pay databases my
-friends had access to) is not up to the task. The IP addresses of
-central Internet infrastructure would typically be placed near the
-controlling company's main office, and not where the router is really
-located, as you can see from <a href="www.stortinget.no-geoip.kml">the
-KML file I created</a> using the GeoLite City dataset from MaxMind.</p>
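
<p>The core of such a wrapper is small. Here is a self-contained
sketch of the KML generation step, with hop coordinates hard-coded
instead of looked up in a GeoIP database (the function and sample
coordinates are my own illustration, not code from the kmltraceroute
repository):

```python
# Turn a list of traceroute hops with known coordinates into a KML
# document: one Placemark per hop, plus a LineString tracing the path.
# A real tool would fill in the coordinates from a GeoIP lookup.

def hops_to_kml(trace_name, hops):
    """hops is a list of (hostname, latitude, longitude) tuples."""
    # KML uses "longitude,latitude" pairs separated by whitespace.
    coords = " ".join("%f,%f" % (lon, lat) for _, lat, lon in hops)
    placemarks = "\n".join(
        '<Placemark><name>%s</name>'
        '<Point><coordinates>%f,%f</coordinates></Point></Placemark>'
        % (name, lon, lat)
        for name, lat, lon in hops)
    return """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"><Document>
<name>%s</name>
%s
<Placemark><name>path</name><LineString><coordinates>%s</coordinates></LineString></Placemark>
</Document></kml>""" % (trace_name, placemarks, coords)

# Example coordinates, made up for illustration.
hops = [("uio-gw10.uio.no", 59.94, 10.72),
        ("oslo-gw1.uninett.no", 59.91, 10.75)]
kml = hops_to_kml("trace to www.stortinget.no", hops)
print(kml)
```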
-
-<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy-small.png" alt="scapy traceroute graph for URLs used by www.stortinget.no"/></a></p>
-
-<p>I also had a look at the visual traceroute graph created by
-<a href="http://www.secdev.org/projects/scapy/">the scapy project</a>,
-showing IP network ownership (aka AS owner) for the IP addresses in
-question.
-<a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg">The
-graph displays a lot of useful information about the traceroute in SVG
-format</a>, and gives a good indication of who controls the network
-equipment involved, but it does not include geolocation. The graph
-makes it possible to see that the information is made available at
-least to UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon,
-Telia, Level 3 Communications and NetDNA.</p>
-
-<p align="center"><a href="https://geotraceroute.com/index.php?node=4&host=www.stortinget.no"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-small.png" alt="example geotraceroute view for www.stortinget.no"/></a></p>
-
-<p>In the process, I came across the
-<a href="https://geotraceroute.com/">web service GeoTraceroute</a> by
-Salim Gasmi. Its methodology of combining guesses based on DNS names
-and various location databases, and finally using latency times to
-rule out candidate locations, seemed to do a very good job of guessing
-correct geolocations. But it could only do one trace at a time, did
-not have a sensor in Norway and did not make the geolocations easily
-available for postprocessing. So I contacted the developer and asked
-if he would be willing to share the code (he declined until he has had
-time to clean it up), but he was interested in providing the
-geolocations in a machine-readable format, and willing to set up a
-sensor in Norway. So since yesterday, it is possible to run traces
-from Norway in this service thanks to a sensor node set up by
-<a href="https://www.nuug.no/">the NUUG association</a>, and to get
-the trace in KML format for further processing.</p>
-
-<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.kml"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.png" alt="map of combined traces for URLs used by www.stortinget.no using geotraceroute"/></a></p>
-
-<p>Here we can see that a lot of traffic passes through Sweden on its
-way to Denmark, Germany, Holland and Ireland. Plenty of places where
-the Snowden revelations confirmed the traffic is read by various
-actors without your best interest as their top priority.</p>
-
-<p>Combining KML files is trivial using a text editor, so I could loop
-over all the hosts behind the URLs imported by www.stortinget.no, ask
-for the KML file from GeoTraceroute, and create a combined KML file
-with all the traces (unfortunately, only one of the IP addresses
-behind each DNS name is traced this time. To get them all, one would
-have to request traces using IP numbers instead of DNS names from
-GeoTraceroute). That might be the next step in this project.</p>
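
<p>The same merge can of course be scripted instead of done in a text
editor. A sketch using only the Python standard library, copying every
Placemark from a set of KML documents into one combined Document (the
sample inputs are hand-made, not real GeoTraceroute output):

```python
# Merge several KML documents into one by copying every <Placemark>
# into a single <Document>, using xml.etree from the standard library.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)  # serialize without a prefix

def merge_kml(kml_strings):
    merged = ET.Element("{%s}kml" % KML_NS)
    doc = ET.SubElement(merged, "{%s}Document" % KML_NS)
    for text in kml_strings:
        root = ET.fromstring(text)
        # Copy every Placemark, wherever it sits in the source tree.
        for placemark in root.iter("{%s}Placemark" % KML_NS):
            doc.append(placemark)
    return ET.tostring(merged, encoding="unicode")

a = ('<kml xmlns="%s"><Document>'
     '<Placemark><name>A</name></Placemark></Document></kml>' % KML_NS)
b = ('<kml xmlns="%s"><Document>'
     '<Placemark><name>B</name></Placemark></Document></kml>' % KML_NS)
combined = merge_kml([a, b])
print(combined)
```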
-
-<p>Armed with these tools, I find it a lot easier to figure out where
-the IP traffic moves and who controls the boxes involved in moving it.
-And every time the link crosses, for example, the Swedish border, we
-can be sure the Swedish signals intelligence agency (FRA) is
-listening, as GCHQ does in Britain and the NSA in the USA and on
-cables around the globe. (Hm, what should we tell them? :) Keep that
-in mind if you ever send anything unencrypted over the Internet.</p>
-
-<p>PS: The KML files are drawn using
-<a href="http://ivanrublev.me/kml/">the KML viewer from Ivan
-Rublev</a>, as it was less cluttered than the local Linux application
-Marble. There are heaps of other options too.</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-</description>
- </item>
-
- <item>
- <title>Introducing ical-archiver to split out old iCalendar entries</title>
- <link>http://people.skolelinux.org/pere/blog/Introducing_ical_archiver_to_split_out_old_iCalendar_entries.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Introducing_ical_archiver_to_split_out_old_iCalendar_entries.html</guid>
- <pubDate>Wed, 4 Jan 2017 12:20:00 +0100</pubDate>
-<description><p>Do you have a large <a href="https://icalendar.org/">iCalendar</a>
-file with lots of old entries, and would like to archive them to save
-space and resources? At least those of us using KOrganizer know that
-turning an event set on and off becomes slower and slower the more
-entries are in the set. While working on migrating our calendars to a
-<a href="http://radicale.org/">Radicale CalDAV server</a> on our
-<a href="https://freedomboxfoundation.org/">Freedombox server</a>, my
-loved one wondered if I could find a way to split up the calendar file
-she had in KOrganizer, and I set out to write a tool. I spent a few
-days writing and polishing the system, and it is now ready for general
-consumption. The
-<a href="https://github.com/petterreinholdtsen/ical-archiver">code for
-ical-archiver</a> is publicly available from a git repository on
-GitHub. The system is written in Python and depends on
-<a href="http://eventable.github.io/vobject/">the vobject Python
-module</a>.</p>
-
-<p>To use it, locate the iCalendar file you want to operate on and
-give it as an argument to the ical-archiver script. This will
-generate a set of new files, one file per component type per year for
-all components that expired more than two years ago. The vevent,
-vtodo and vjournal entries are handled by the script. The remaining
-entries are stored in a 'remaining' file.</p>
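
<p>The idea behind the splitting can be illustrated with a short
standalone sketch. Note that this is my simplified illustration using
only the standard library, not the actual ical-archiver code, which
uses the vobject module and handles more component types:

```python
# Group raw VEVENT blocks in an iCalendar file by the year found in
# their DTSTART property.  Simplified sketch: real iCalendar parsing
# (line folding, time zones, VTODO/VJOURNAL) needs a proper library.
import re
from collections import defaultdict

def split_vevents_by_year(ics_text):
    """Return a dict mapping year -> list of raw VEVENT blocks."""
    by_year = defaultdict(list)
    for block in re.findall(r"BEGIN:VEVENT.*?END:VEVENT", ics_text, re.S):
        match = re.search(r"DTSTART[^:]*:(\d{4})", block)
        if match:
            by_year[match.group(1)].append(block)
    return by_year

sample = """BEGIN:VCALENDAR
BEGIN:VEVENT
DTSTART:20040101T100000Z
SUMMARY:Old meeting
END:VEVENT
BEGIN:VEVENT
DTSTART:20160301T100000Z
SUMMARY:Newer meeting
END:VEVENT
END:VCALENDAR"""

split = split_vevents_by_year(sample)
print(sorted(split))  # → ['2004', '2016']
```

Each year's list could then be written to its own `-subset-vevent-YEAR.ics` file, which is roughly what the output below shows.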
-
-<p>This is what a test run can look like:
-
-<p><pre>
-% ical-archiver t/2004-2016.ics
-Found 3612 vevents
-Found 6 vtodos
-Found 2 vjournals
-Writing t/2004-2016.ics-subset-vevent-2004.ics
-Writing t/2004-2016.ics-subset-vevent-2005.ics
-Writing t/2004-2016.ics-subset-vevent-2006.ics
-Writing t/2004-2016.ics-subset-vevent-2007.ics
-Writing t/2004-2016.ics-subset-vevent-2008.ics
-Writing t/2004-2016.ics-subset-vevent-2009.ics
-Writing t/2004-2016.ics-subset-vevent-2010.ics
-Writing t/2004-2016.ics-subset-vevent-2011.ics
-Writing t/2004-2016.ics-subset-vevent-2012.ics
-Writing t/2004-2016.ics-subset-vevent-2013.ics
-Writing t/2004-2016.ics-subset-vevent-2014.ics
-Writing t/2004-2016.ics-subset-vjournal-2007.ics
-Writing t/2004-2016.ics-subset-vjournal-2011.ics
-Writing t/2004-2016.ics-subset-vtodo-2012.ics
-Writing t/2004-2016.ics-remaining.ics
-%
-</pre></p>
-
-<p>As you can see, the original file is untouched and new files are
-written with names derived from the original file. If you are happy
-with their content, the *-remaining.ics file can replace the original,
-and the others can be archived or imported as historical calendar
-collections.</p>
-
-<p>The script should probably be improved a bit. The error handling
-when discovering broken entries is not good, and I am not sure yet if
-it makes sense to split different entry types into separate files or
-not. The program is thus likely to change. If you find it
-interesting, please get in touch. :)</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-</description>
- </item>
-
</channel>
</rss>