<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
<item>
- <title>Hvordan enkelt laste ned filmer fra NRK med den "nye" løsningen</title>
- <link>http://people.skolelinux.org/pere/blog/Hvordan_enkelt_laste_ned_filmer_fra_NRK_med_den__nye__l_sningen.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Hvordan_enkelt_laste_ned_filmer_fra_NRK_med_den__nye__l_sningen.html</guid>
- <pubDate>Mon, 16 Jun 2014 19:20:00 +0200</pubDate>
- <description><p>Jeg har fortsatt behov for å kunne laste ned innslag fra NRKs
-nettsted av og til for å se senere når jeg ikke er på nett, men
-<a href="http://people.skolelinux.org/pere/blog/Hvordan_enkelt_laste_ned_filmer_fra_NRK.html">min
-oppskrift fra 2011</a> sluttet å fungere da NRK byttet
-avspillermetode. I dag fikk jeg endelig lett etter oppdatert løsning,
-og jeg er veldig glad for å fortelle at den enkleste måten å laste ned
-innslag er å bruke siste versjon 2014.06.07 av youtube-dl. Støtten i
-youtube-dl <a href="https://github.com/rg3/youtube-dl/issues/2980">kom
-inn for 23 dager siden</a> og
-<a href="http://packages.qa.debian.org/y/youtube-dl.html">versjonen i
-Debian</a> fungerer fint også som backport til Debian Wheezy. Det er
-et lite problem, det håndterer kun URLer med små bokstaver, men hvis
-en har en URL med store bokstaver kan en bare gjøre alle store om til
-små bokstaver for å få youtube-dl til å laste ned. Rapporterte
-problemet nettopp til utviklerne, og antar de får fikset det
-snart.</p>
-
-<p>Dermed er alt klart til å laste ned dokumentarene om
-<a href="http://tv.nrk.no/program/KOID23005014/usas-hemmelige-avlytting">USAs
-hemmelige avlytting</a> og
-<a href="http://tv.nrk.no/program/KOID23005114/selskapene-bak-usas-avlytting">Selskapene
-bak USAs avlytting</a>, i tillegg til
-<a href="http://tv.nrk.no/program/KOID20005814/et-moete-med-edward-snowden">intervjuet
-med Edward Snowden gjort av den tyske tv-kanalen ARD</a>. Anbefaler
-alle å se disse, sammen med
-<a href="http://media.ccc.de/browse/congress/2013/30C3_-_5713_-_en_-_saal_2_-_201312301130_-_to_protect_and_infect_part_2_-_jacob.html">foredraget
-til Jacob Appelbaum på siste CCC-konferanse</a>, for å forstå mer om
-hvordan overvåkningen av borgerne brer om seg.</p>
-
-<p>Takk til gode venner på foreningen NUUGs IRC-kanal
-<a href="irc://irc.freenode.net/%23nuug">#nuug på irc.freenode.net</a>
-for tipsene som fikk meg i mål</a>.</p>
+ <title>Detecting NFS hangs on Linux without hanging yourself...</title>
+ <link>http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</guid>
+ <pubDate>Thu, 9 Mar 2017 15:20:00 +0100</pubDate>
+ <description><p>Over the years, administering thousands of NFS-mounting
+Linux computers at a time, I often needed a way to detect if a machine
+was experiencing an NFS hang. If you try to use <tt>df</tt> or look at a
+file or directory affected by the hang, the process (and possibly the
+shell) will hang too. So you want to be able to detect this without
+risking that the detection process gets stuck too. It has not been
+obvious how to do this. When the hang has lasted a while, it is
+possible to find messages like these in dmesg:</p>
+
+<p><blockquote>
+nfs: server nfsserver not responding, still trying
+<br>nfs: server nfsserver OK
+</blockquote></p>
+
+<p>It is hard to know if the hang is still going on, and it is hard to
+be sure that looking in dmesg is going to work. If there are lots of
+other messages in dmesg, the lines might have rotated out of sight
+before they are noticed.</p>
+
+<p>While reading through the NFS client implementation in the Linux
+kernel code, I came across some statistics that seem to give a way to
+detect it. The om_timeouts sunrpc value in the kernel will increase
+every time the above log entry is inserted into dmesg. And after
+digging a bit further, I discovered that this value shows up in
+/proc/self/mountstats on Linux.</p>
+
+<p>The mountstats content seems to be shared between files using the
+same file system context, so it is enough to check one of the
+mountstats files to get the state of the mount point for the machine.
+I assume this will not show lazily umounted NFS points, nor NFS mount
+points in a different process context (i.e. with a different filesystem
+view), but that does not worry me.</p>
+
+<p>The content for an NFS mount point looks similar to this:</p>
+
+<p><blockquote><pre>
+[...]
+device /dev/mapper/Debian-var mounted on /var with fstype ext3
+device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
+ opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
+ age: 7863311
+ caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
+ sec: flavor=1,pseudoflavor=1
+ events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
+ bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
+ RPC iostats version: 1.0 p/v: 100003/3 (nfs)
+ xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
+ per-op statistics
+ NULL: 0 0 0 0 0 0 0 0
+ GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
+ SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
+ LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
+ ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
+ READLINK: 125 125 0 20472 18620 0 1112 1118
+ READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
+ WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
+ CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
+ MKDIR: 3680 3680 0 773980 993920 26 23990 24245
+ SYMLINK: 903 903 0 233428 245488 6 5865 5917
+ MKNOD: 80 80 0 20148 21760 0 299 304
+ REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
+ RMDIR: 3367 3367 0 645112 484848 22 5782 6002
+ RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
+ LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
+ READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
+ READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
+ FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
+ FSINFO: 2 2 0 232 328 0 1 1
+ PATHCONF: 1 1 0 116 140 0 0 0
+ COMMIT: 0 0 0 0 0 0 0 0
+
+device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
+[...]
+</pre></blockquote></p>
+
+<p>The key number to look at is the third number in the per-op list.
+It is the number of NFS timeouts experienced per file system
+operation, here 22 write timeouts and 5 access timeouts. If these
+numbers are increasing, I believe the machine is experiencing an NFS
+hang. Unfortunately the timeout value does not start to increase right
+away. The NFS operations need to time out first, and this can take a
+while. The exact timeout value depends on the setup. For example, the
+defaults for TCP and UDP mount points are quite different, and the
+timeout value is affected by the soft, hard, timeo and retrans NFS
+mount options.</p>
+
+<p>The only way I have found to get the timeout count on Debian and
+RedHat Enterprise Linux is to peek in /proc/. But according to the
+<a href="http://docs.oracle.com/cd/E19253-01/816-4555/netmonitor-12/index.html">Solaris
+10 System Administration Guide: Network Services</a>, the 'nfsstat -c'
+command can be used to get these timeout values. But this does not
+work on Linux, as far as I can tell. I
+<a href="http://bugs.debian.org/857043">asked Debian about this</a>,
+but have not seen any replies yet.</p>
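To save others the digging, here is a minimal sketch of how such a check could look. The `nfs_timeouts` function name is mine, and the awk field positions are an assumption based on the sample mountstats output shown above (the operation name is field 1, so the third number is field 4):

```shell
#!/bin/sh
# Sketch: print each NFS per-op operation with a non-zero timeout count,
# reading /proc/self/mountstats (or a file given as the first argument).
# Assumes the per-op layout shown above, where the third number after
# the operation name (awk field $4) is the timeout count.
nfs_timeouts() {
    awk '$1 == "per-op" { inop = 1; next }    # enter the per-op section
         $1 == "device" { inop = 0 }          # a new device ends the section
         inop && $4 + 0 > 0 { print $1, $4 }  # operation name, timeout count
        ' "${1:-/proc/self/mountstats}"
}

# Only read the live file when it exists (i.e. on Linux).
if [ -r /proc/self/mountstats ]; then
    nfs_timeouts
fi
```

Run periodically and compared against the previous counts, increasing numbers would indicate an ongoing hang. Since it only reads from /proc/, the check itself should not get stuck on the hung mount.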
+
+<p>Is there a better way to figure out if a Linux NFS client is
+experiencing NFS hangs? Is there a way to detect which processes are
+affected? Is there a way to get the NFS mount going quickly once the
+network problem causing the NFS hang has been cleared? I would very
+much welcome some clues, as we regularly run into NFS hangs.</p>
</description>
</item>
<item>
- <title>Free software car computer solution?</title>
- <link>http://people.skolelinux.org/pere/blog/Free_software_car_computer_solution_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Free_software_car_computer_solution_.html</guid>
- <pubDate>Thu, 29 May 2014 18:45:00 +0200</pubDate>
- <description><p>Dear lazyweb. I'm planning to set up a small Raspberry Pi computer
-in my car, connected to
-<a href="http://www.dx.com/p/400a-4-0-tft-lcd-digital-monitor-for-vehicle-parking-reverse-camera-1440x272-12v-dc-57776">a
-small screen</a> next to the rear mirror. I plan to hook it up with a
-GPS and a USB wifi card too. The idea is to get my own
-"<a href="http://en.wikipedia.org/wiki/Carputer">Carputer</a>". But I
-wonder if someone already created a good free software solution for
-such car computer.</p>
-
-<p>This is my current wish list for such system:</p>
-
-<ul>
-
- <li>Work on Raspberry Pi.</li>
-
- <li>Show current speed limit based on location, and warn if going too
- fast (for example using color codes yellow and red on the screen,
- or make a sound). This could be done either using either data from
- <a href="http://www.openstreetmap.org/">Openstreetmap</a> or OCR
- info gathered from a dashboard camera.</li>
-
- <li>Track automatic toll road passes and their cost, show total spent
- and make it possible to calculate toll costs for planned
- route.</li>
-
- <li>Collect GPX tracks for use with OpenStreetMap.</li>
-
- <li>Automatically detect and use any wireless connection to connect
- to home server. Try IP over DNS
- (<a href="http://dev.kryo.se/iodine/">iodine</a>) or ICMP
- (<a href="http://code.gerade.org/hans/">Hans</a>) if direct
- connection do not work.</li>
-
- <li>Set up mesh network to talk to other cars with the same system,
- or some standard car mesh protocol.</li>
-
- <li>Warn when approaching speed cameras and speed camera ranges
- (speed calculated between two cameras).</li>
-
- <li>Suport dashboard/front facing camera to discover speed limits and
- run OCR to track registration number of passing cars.</li>
-
-</ul>
-
-<p>If you know of any free software car computer system supporting
-some or all of these features, please let me know.</p>
+ <title>How does it feel to be wiretapped, when you should be doing the wiretapping...</title>
+ <link>http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</guid>
+ <pubDate>Wed, 8 Mar 2017 11:50:00 +0100</pubDate>
+ <description><p>So the new president in the United States of America claims to be
+surprised to discover that he was wiretapped during the election,
+before he was elected president. He even claims this must be illegal.
+Well, doh, if there is one thing the Snowden revelations
+documented, it is that the entire population of the USA is wiretapped,
+one way or another. Of course the presidential candidates were
+wiretapped, alongside the senators, judges and the rest of the people
+in the USA.</p>
+
+<p>Next, the Federal Bureau of Investigation asked the Department of
+Justice to publicly reject the claims that Donald Trump was
+wiretapped illegally. I fail to see the relevance, given that I am
+sure the surveillance industry in the USA believes it has all the
+legal backing it needs to conduct mass surveillance on the entire
+world.</p>
+
+<p>There is even the director of the FBI stating that he never saw an
+order requesting wiretapping of Donald Trump. That is not very
+surprising, given how the FISA court works, with all its activity
+being secret. Perhaps he only heard about it?</p>
+
+<p>What I find saddest in this story is how Norwegian journalists
+present it. In a news report on the radio the other day from the
+Norwegian Broadcasting Corporation (NRK), I heard the journalist
+claim that 'the FBI denies any wiretapping', while the reality is that
+'the FBI denies any illegal wiretapping'. There is a fundamental and
+important difference, and it makes me sad that the journalists are
+unable to grasp it.</p>
+
+<p><strong>Update 2017-03-13:</strong> It looks like
+<a href="https://theintercept.com/2017/03/13/rand-paul-is-right-nsa-routinely-monitors-americans-communications-without-warrants/">The
+Intercept reports that US Senator Rand Paul confirms what I stated above</a>.</p>
</description>
</item>
<item>
- <title>Half the Coverity issues in Gnash fixed in the next release</title>
- <link>http://people.skolelinux.org/pere/blog/Half_the_Coverity_issues_in_Gnash_fixed_in_the_next_release.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Half_the_Coverity_issues_in_Gnash_fixed_in_the_next_release.html</guid>
- <pubDate>Tue, 29 Apr 2014 14:20:00 +0200</pubDate>
- <description><p>I've been following <a href="http://www.getgnash.org/">the Gnash
-project</a> for quite a while now. It is a free software
-implementation of Adobe Flash, both a standalone player and a browser
-plugin. Gnash implement support for the AVM1 format (and not the
-newer AVM2 format - see
-<a href="http://lightspark.github.io/">Lightspark</a> for that one),
-allowing several flash based sites to work. Thanks to the friendly
-developers at Youtube, it also work with Youtube videos, because the
-Javascript code at Youtube detect Gnash and serve a AVM1 player to
-those users. :) Would be great if someone found time to implement AVM2
-support, but it has not happened yet. If you install both Lightspark
-and Gnash, Lightspark will invoke Gnash if it find a AVM1 flash file,
-so you can get both handled as free software. Unfortunately,
-Lightspark so far only implement a small subset of AVM2, and many
-sites do not work yet.</p>
-
-<p>A few months ago, I started looking at
-<a href="http://scan.coverity.com/">Coverity</a>, the static source
-checker used to find heaps and heaps of bugs in free software (thanks
-to the donation of a scanning service to free software projects by the
-company developing this non-free code checker), and Gnash was one of
-the projects I decided to check out. Coverity is able to find lock
-errors, memory errors, dead code and more. A few days ago they even
-extended it to also be able to find the heartbleed bug in OpenSSL.
-There are heaps of checks being done on the instrumented code, and the
-amount of bogus warnings is quite low compared to the other static
-code checkers I have tested over the years.</p>
-
-<p>Since a few weeks ago, I've been working with the other Gnash
-developers squashing bugs discovered by Coverity. I was quite happy
-today when I checked the current status and saw that of the 777 issues
-detected so far, 374 are marked as fixed. This make me confident that
-the next Gnash release will be more stable and more dependable than
-the previous one. Most of the reported issues were and are in the
-test suite, but it also found a few in the rest of the code.</p>
-
-<p>If you want to help out, you find us on
-<a href="https://lists.gnu.org/mailman/listinfo/gnash-dev">the
-gnash-dev mailing list</a> and on
-<a href="irc://irc.freenode.net/#gnash">the #gnash channel on
-irc.freenode.net IRC server</a>.</p>
+ <title>Norwegian Bokmål translation of The Debian Administrator's Handbook complete, proofreading in progress</title>
+ <link>http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_translation_of_The_Debian_Administrator_s_Handbook_complete__proofreading_in_progress.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_translation_of_The_Debian_Administrator_s_Handbook_complete__proofreading_in_progress.html</guid>
+ <pubDate>Fri, 3 Mar 2017 14:50:00 +0100</pubDate>
+ <description><p>For almost a year now, we have been working on making a Norwegian
+Bokmål edition of <a href="https://debian-handbook.info/">The Debian
+Administrator's Handbook</a>. Now, thanks to the tireless effort of
+Ole-Erik, Ingrid and Andreas, the initial translation is complete, and
+we are working on the proofreading to ensure consistent language and
+use of correct computer science terms. The plan is to make the book
+available on paper, as well as in electronic form. For that to
+happen, the proofreading must be completed and all the figures need
+to be translated. If you want to help out, get in touch.</p>
+
+<p><a href="http://people.skolelinux.org/pere/debian-handbook/debian-handbook-nb-NO.pdf">A
+fresh PDF edition</a> of the book in A4 format (the final book will
+have smaller pages), created every morning, is available for
+proofreading. If you find any errors, please
+<a href="https://hosted.weblate.org/projects/debian-handbook/">visit
+Weblate and correct the error</a>. The
+<a href="http://l.github.io/debian-handbook/stat/nb-NO/index.html">state
+of the translation including figures</a> is a useful source for those
+providing Norwegian Bokmål screen shots and figures.</p>
</description>
</item>
<item>
- <title>Install hardware dependent packages using tasksel (Isenkram 0.7)</title>
- <link>http://people.skolelinux.org/pere/blog/Install_hardware_dependent_packages_using_tasksel__Isenkram_0_7_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Install_hardware_dependent_packages_using_tasksel__Isenkram_0_7_.html</guid>
- <pubDate>Wed, 23 Apr 2014 14:50:00 +0200</pubDate>
- <description><p>It would be nice if it was easier in Debian to get all the hardware
-related packages relevant for the computer installed automatically.
-So I implemented one, using
-<a href="http://packages.qa.debian.org/isenkram">my Isenkram
-package</a>. To use it, install the tasksel and isenkram packages and
-run tasksel as user root. You should be presented with a new option,
-"Hardware specific packages (autodetected by isenkram)". When you
-select it, tasksel will install the packages isenkram claim is fit for
-the current hardware, hot pluggable or not.<p>
-
-<p>The implementation is in two files, one is the tasksel menu entry
-description, and the other is the script used to extract the list of
-packages to install. The first part is in
-<tt>/usr/share/tasksel/descs/isenkram.desc</tt> and look like
-this:</p>
-
-<p><blockquote><pre>
-Task: isenkram
-Section: hardware
-Description: Hardware specific packages (autodetected by isenkram)
- Based on the detected hardware various hardware specific packages are
- proposed.
-Test-new-install: mark show
-Relevance: 8
-Packages: for-current-hardware
-</pre></blockquote></p>
-
-<p>The second part is in
-<tt>/usr/lib/tasksel/packages/for-current-hardware</tt> and look like
-this:</p>
-
-<p><blockquote><pre>
-#!/bin/sh
-#
-(
- isenkram-lookup
- isenkram-autoinstall-firmware -l
-) | sort -u
-</pre></blockquote></p>
-
-<p>All in all, a very short and simple implementation making it
-trivial to install the hardware dependent package we all may want to
-have installed on our machines. I've not been able to find a way to
-get tasksel to tell you exactly which packages it plan to install
-before doing the installation. So if you are curious or careful,
-check the output from the isenkram-* command line tools first.</p>
-
-<p>The information about which packages are handling which hardware is
-fetched either from the isenkram package itself in
-/usr/share/isenkram/, from git.debian.org or from the APT package
-database (using the Modaliases header). The APT package database
-parsing have caused a nasty resource leak in the isenkram daemon (bugs
-<a href="http://bugs.debian.org/719837">#719837</a> and
-<a href="http://bugs.debian.org/730704">#730704</a>). The cause is in
-the python-apt code (bug
-<a href="http://bugs.debian.org/745487">#745487</a>), but using a
-workaround I was able to get rid of the file descriptor leak and
-reduce the memory leak from ~30 MiB per hardware detection down to
-around 2 MiB per hardware detection. It should make the desktop
-daemon a lot more useful. The fix is in version 0.7 uploaded to
-unstable today.</p>
-
-<p>I believe the current way of mapping hardware to packages in
-Isenkram is is a good draft, but in the future I expect isenkram to
-use the AppStream data source for this. A proposal for getting proper
-AppStream support into Debian is floating around as
-<a href="https://wiki.debian.org/DEP-11">DEP-11</a>, and
-<a href="https://wiki.debian.org/SummerOfCode2014/Projects#SummerOfCode2014.2FProjects.2FAppStreamDEP11Implementation.AppStream.2FDEP-11_for_the_Debian_Archive">GSoC
-project</a> will take place this summer to improve the situation. I
-look forward to seeing the result, and welcome patches for isenkram to
-start using the information when it is ready.</p>
-
-<p>If you want your package to map to some specific hardware, either
-add a "Xb-Modaliases" header to your control file like I did in
-<a href="http://packages.qa.debian.org/pymissile">the pymissile
-package</a> or submit a bug report with the details to the isenkram
-package. See also
-<a href="http://people.skolelinux.org/pere/blog/tags/isenkram/">all my
-blog posts tagged isenkram</a> for details on the notation. I expect
-the information will be migrated to AppStream eventually, but for the
-moment I got no better place to store it.</p>
+ <title>Unlimited randomness with the ChaosKey?</title>
+ <link>http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</guid>
+ <pubDate>Wed, 1 Mar 2017 20:50:00 +0100</pubDate>
+ <description><p>A few days ago I ordered a small batch of
+<a href="http://altusmetrum.org/ChaosKey/">the ChaosKey</a>, a small
+USB dongle for generating entropy, created by Bdale Garbee and Keith
+Packard. Yesterday it arrived, and I am very happy to report that it
+works great! According to its designers, to get it to work out of the
+box, you need Linux kernel version 4.1 or later. I tested it on a
+Debian Stretch machine (kernel version 4.9), and there it worked just
+fine, increasing the available entropy very quickly. I wrote a small
+one-liner to test it. It first prints the current entropy level,
+drains /dev/random, and then prints the entropy level once per second
+for five seconds. Here is the situation without the ChaosKey
+inserted:</p>
+
+<blockquote><pre>
+% cat /proc/sys/kernel/random/entropy_avail; \
+ dd bs=1M if=/dev/random of=/dev/null count=1; \
+ for n in $(seq 1 5); do \
+ cat /proc/sys/kernel/random/entropy_avail; \
+ sleep 1; \
+ done
+300
+0+1 oppføringer inn
+0+1 oppføringer ut
+28 byte kopiert, 0,000264565 s, 106 kB/s
+4
+8
+12
+17
+21
+%
+</pre></blockquote>
+
+<p>The entropy level increases by 3-4 every second. In such a case,
+any application requiring random bits (like an HTTPS enabled web
+server) will halt and wait for more entropy. And here is the situation
+with the ChaosKey inserted:</p>
+
+<blockquote><pre>
+% cat /proc/sys/kernel/random/entropy_avail; \
+ dd bs=1M if=/dev/random of=/dev/null count=1; \
+ for n in $(seq 1 5); do \
+ cat /proc/sys/kernel/random/entropy_avail; \
+ sleep 1; \
+ done
+1079
+0+1 oppføringer inn
+0+1 oppføringer ut
+104 byte kopiert, 0,000487647 s, 213 kB/s
+433
+1028
+1031
+1035
+1038
+%
+</pre></blockquote>
+
+<p>Quite the difference. :) I bought a few more than I need, in case
+someone wants to buy one here in Norway. :)</p>
+
+<p>Update: The dongle was presented at DebConf last year. You might
+find <a href="https://debconf16.debconf.org/talks/94/">the talk
+recording illuminating</a>. It explains exactly what the source of
+randomness is, if you are unable to spot it from the schematic drawing
+available from the ChaosKey web site linked at the start of this blog
+post.</p>
</description>
</item>
<item>
- <title>FreedomBox milestone - all packages now in Debian Sid</title>
- <link>http://people.skolelinux.org/pere/blog/FreedomBox_milestone___all_packages_now_in_Debian_Sid.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/FreedomBox_milestone___all_packages_now_in_Debian_Sid.html</guid>
- <pubDate>Tue, 15 Apr 2014 22:10:00 +0200</pubDate>
- <description><p>The <a href="https://wiki.debian.org/FreedomBox">Freedombox
-project</a> is working on providing the software and hardware to make
-it easy for non-technical people to host their data and communication
-at home, and being able to communicate with their friends and family
-encrypted and away from prying eyes. It is still going strong, and
-today a major mile stone was reached.</p>
-
-<p>Today, the last of the packages currently used by the project to
-created the system images were accepted into Debian Unstable. It was
-the freedombox-setup package, which is used to configure the images
-during build and on the first boot. Now all one need to get going is
-the build code from the freedom-maker git repository and packages from
-Debian. And once the freedombox-setup package enter testing, we can
-build everything directly from Debian. :)</p>
-
-<p>Some key packages used by Freedombox are
-<a href="http://packages.qa.debian.org/freedombox-setup">freedombox-setup</a>,
-<a href="http://packages.qa.debian.org/plinth">plinth</a>,
-<a href="http://packages.qa.debian.org/pagekite">pagekite</a>,
-<a href="http://packages.qa.debian.org/tor">tor</a>,
-<a href="http://packages.qa.debian.org/privoxy">privoxy</a>,
-<a href="http://packages.qa.debian.org/owncloud">owncloud</a> and
-<a href="http://packages.qa.debian.org/dnsmasq">dnsmasq</a>. There
-are plans to integrate more packages into the setup. User
-documentation is maintained on the Debian wiki. Please
-<a href="https://wiki.debian.org/FreedomBox/Manual/Jessie">check out
-the manual</a> and help us improve it.</p>
-
-<p>To test for yourself and create boot images with the FreedomBox
-setup, run this on a Debian machine using a user with sudo rights to
-become root:</p>
-
-<p><pre>
-sudo apt-get install git vmdebootstrap mercurial python-docutils \
- mktorrent extlinux virtualbox qemu-user-static binfmt-support \
- u-boot-tools
-git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
- freedom-maker
-make -C freedom-maker dreamplug-image raspberry-image virtualbox-image
-</pre></p>
-
-<p>Root access is needed to run debootstrap and mount loopback
-devices. See the README in the freedom-maker git repo for more
-details on the build. If you do not want all three images, trim the
-make line. Note that the virtualbox-image target is not really
-virtualbox specific. It create a x86 image usable in kvm, qemu,
-vmware and any other x86 virtual machine environment. You might need
-the version of vmdebootstrap in Jessie to get the build working, as it
-include fixes for a race condition with kpartx.</p>
-
-<p>If you instead want to install using a Debian CD and the preseed
-method, boot a Debian Wheezy ISO and use this boot argument to load
-the preseed values:</p>
-
-<p><pre>
-url=<a href="http://www.reinholdtsen.name/freedombox/preseed-jessie.dat">http://www.reinholdtsen.name/freedombox/preseed-jessie.dat</a>
-</pre></p>
-
-<p>I have not tested it myself the last few weeks, so I do not know if
-it still work.</p>
-
-<p>If you wonder how to help, one task you could look at is using
-systemd as the boot system. It will become the default for Linux in
-Jessie, so we need to make sure it is usable on the Freedombox. I did
-a simple test a few weeks ago, and noticed dnsmasq failed to start
-during boot when using systemd. I suspect there are other problems
-too. :) To detect problems, there is a test suite included, which can
-be run from the plinth web interface.</p>
-
-<p>Give it a go and let us know how it goes on the mailing list, and help
-us get the new release published. :) Please join us on
-<a href="irc://irc.debian.org:6667/%23freedombox">IRC (#freedombox on
-irc.debian.org)</a> and
-<a href="http://lists.alioth.debian.org/mailman/listinfo/freedombox-discuss">the
-mailing list</a> if you want to help make this vision come true.</p>
+ <title>Detect OOXML files with undefined behaviour?</title>
+ <link>http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</guid>
+ <pubDate>Tue, 21 Feb 2017 00:20:00 +0100</pubDate>
+ <description><p>I just noticed that
+<a href="http://www.arkivrad.no/aktuelt/riksarkivarens-forskrift-pa-horing">the
+new Norwegian proposal for archiving rules in the government</a> lists
+<a href="http://www.ecma-international.org/publications/standards/Ecma-376.htm">ECMA-376</a>
+/ ISO/IEC 29500 (aka OOXML) as valid formats to put in long term
+storage. Luckily such files will only be accepted based on
+pre-approval from the National Archive. Allowing OOXML files to be
+used for long term storage might seem like a good idea, as long as we
+forget that there are plenty of ways for a "valid" OOXML document to
+have content with no defined interpretation in the standard, which
+leads to a question and an idea.</p>
+
+<p>Is there any tool to detect if an OOXML document depends on such
+undefined behaviour? It would be useful for the National Archive (and
+anyone else interested in verifying that a document is well defined)
+to have such a tool available when considering whether to approve the
+use of OOXML. I'm aware of the
+<a href="https://github.com/arlm/officeotron/">officeotron OOXML
+validator</a>, but do not know how complete it is, nor whether it will
+report use of undefined behaviour. Are there other similar tools
+available? Please send me an email if you know of any such tool.</p>
</description>
</item>
<item>
- <title>Språkkoder for POSIX locale i Norge</title>
- <link>http://people.skolelinux.org/pere/blog/Spr_kkoder_for_POSIX_locale_i_Norge.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Spr_kkoder_for_POSIX_locale_i_Norge.html</guid>
- <pubDate>Fri, 11 Apr 2014 21:30:00 +0200</pubDate>
- <description><p>For 12 år siden, skrev jeg et lite notat om
-<a href="http://i18n.skolelinux.no/localekoder.txt">bruk av språkkoder
-i Norge</a>. Jeg ble nettopp minnet på dette da jeg fikk spørsmål om
-notatet fortsatt var aktuelt, og tenkte det var greit å repetere hva
-som fortsatt gjelder. Det jeg skrev da er fortsatt like aktuelt.</p>
-
-<p>Når en velger språk i programmer på unix, så velger en blant mange
-språkkoder. For språk i Norge anbefales følgende språkkoder (anbefalt
-locale i parantes):</p>
-
-<p><dl>
-<dt>nb (nb_NO)</dt><dd>Bokmål i Norge</dd>
-<dt>nn (nn_NO)</dt><dd>Nynorsk i Norge</dd>
-<dt>se (se_NO)</dt><dd>Nordsamisk i Norge</dd>
-</dl></p>
-
-<p>Alle programmer som bruker andre koder bør endres.</p>
-
-<p>Språkkoden bør brukes når .po-filer navngis og installeres. Dette
-er ikke det samme som locale-koden. For Norsk Bokmål, så bør filene
-være navngitt nb.po, mens locale (LANG) bør være nb_NO.</p>
-
-<p>Hvis vi ikke får standardisert de kodene i alle programmene med
-norske oversettelser, så er det umulig å gi LANG-variablen ett innhold
-som fungerer for alle programmer.</p>
-
-<p>Språkkodene er de offisielle kodene fra ISO 639, og bruken av dem i
-forbindelse med POSIX localer er standardisert i RFC 3066 og ISO
-15897. Denne anbefalingen er i tråd med de angitte standardene.</p>
-
-<p>Følgende koder er eller har vært i bruk som locale-verdier for
-"norske" språk. Disse bør unngås, og erstattes når de oppdages:</p>
-
-<p><table>
-<tr><td>norwegian</td><td>-> nb_NO</td></tr>
-<tr><td>bokmål </td><td>-> nb_NO</td></tr>
-<tr><td>bokmal </td><td>-> nb_NO</td></tr>
-<tr><td>nynorsk </td><td>-> nn_NO</td></tr>
-<tr><td>no </td><td>-> nb_NO</td></tr>
-<tr><td>no_NO </td><td>-> nb_NO</td></tr>
-<tr><td>no_NY </td><td>-> nn_NO</td></tr>
-<tr><td>sme_NO </td><td>-> se_NO</td></tr>
-</table></p>
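The table above can be expressed as a small shell helper that normalises a legacy value before it is used in LANG (a sketch; the function name is mine):

```shell
# Sketch: map legacy "Norwegian" locale values to the
# recommended codes from the table above.
normalize_locale() {
  case "$1" in
    norwegian|bokmål|bokmal|no|no_NO) echo nb_NO ;;
    nynorsk|no_NY)                    echo nn_NO ;;
    sme_NO)                           echo se_NO ;;
    *)                                echo "$1" ;;  # already a sane value
  esac
}

LANG=$(normalize_locale no)  # yields nb_NO
export LANG
```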
-
-<p>Note that for the Sami languages, se_NO in practice refers to
-Northern Sami in Norway, while for example smj_NO refers to Lule
-Sami. This note is not meant to give advice on Sami language codes,
-though; <a href="http://www.divvun.no/">the Divvun project</a> does a
-better job of that.</p>
-
-<p><strong>References:</strong></p>
-
-<ul>
-
- <li><a href="http://www.rfc-base.org/rfc-3066.html">RFC 3066 - Tags
- for the Identification of Languages</a> (replaces RFC 1766)</li>
-
- <li><a href="http://www.loc.gov/standards/iso639-2/langcodes.html">ISO
- 639</a> - Codes for the Representation of Names of Languages</li>
-
- <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n897-14652w25.pdf">ISO
- DTR 14652</a> - locale-standard Specification method for cultural
- conventions</li>
-
- <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n610.pdf">ISO
- 15897: Registration procedures for cultural elements (cultural
- registry)</a>,
- <a href="http://std.dkuug.dk/jtc1/sc22/wg20/docs/n849-15897wd6.pdf">(newer
- draft)</a></li>
-
- <li><a href="http://std.dkuug.dk/jtc1/sc22/wg20/">ISO/IEC
- JTC1/SC22/WG20</a> - The i18n standardisation group in ISO</li>
-
-</ul>
+ <title>Ruling ignored our objections to the seizure of popcorn-time.no (#domstolkontroll)</title>
+ <link>http://people.skolelinux.org/pere/blog/Ruling_ignored_our_objections_to_the_seizure_of_popcorn_time_no___domstolkontroll_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Ruling_ignored_our_objections_to_the_seizure_of_popcorn_time_no___domstolkontroll_.html</guid>
+ <pubDate>Mon, 13 Feb 2017 21:30:00 +0100</pubDate>
+ <description><p>A few days ago, we received the ruling from
+<a href="http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html">my
+day in court</a>. The case in question is a challenge of the seizure
+of the DNS domain popcorn-time.no. The ruling simply did not mention
+most of our arguments, and seemed to take everything ØKOKRIM said at
+face value, ignoring our demonstration and explanations. But it is
+hard to tell for sure, as we still have not seen most of the documents
+in the case and thus were unprepared and unable to contradict several
+of the claims made in court by the opposition. We are considering an
+appeal, but it is partly a question of funding, as it is costing us
+quite a bit to pay for our lawyer. If you want to help, please
+<a href="http://www.nuug.no/dns-beslag-donasjon.shtml">donate to the
+NUUG defense fund</a>.</p>
+
+<p>The details of the case, as far as we know them, are available
+in Norwegian from
+<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the NUUG
+blog</a>. This also includes
+<a href="https://www.nuug.no/news/Avslag_etter_rettslig_h_ring_om_DNS_beslaget___vurderer_veien_videre.shtml">the
+ruling itself</a>.</p>
</description>
</item>
<item>
- <title>S3QL, a locally mounted cloud file system - nice free software</title>
- <link>http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html</guid>
- <pubDate>Wed, 9 Apr 2014 11:30:00 +0200</pubDate>
- <description><p>For a while now, I have been looking for a sensible
-offsite backup solution for use at home. My requirements are simple:
-it must be cheap and locally encrypted (in other words, I keep the
-encryption keys, and the storage provider does not have access to my
-private files). One idea my friends and I had many years ago, before
-the cloud storage providers showed up, was to use Google mail as
-storage by writing a Linux block device storing its blocks as emails
-in the mail service provided by Google, and thus get heaps of free
-space. On top of this one could add encryption, RAID and volume
-management to get lots of (fairly slow, I admit that) cheap and
-encrypted storage. But I never found time to implement such a system.
-The last few weeks I have instead looked at a system called
-<a href="https://bitbucket.org/nikratio/s3ql/">S3QL</a>, a locally
-mounted network backed file system with the features I need.</p>
-
-<p>S3QL is a FUSE file system with a local cache and cloud storage,
-supporting several different storage providers, any with an Amazon
-S3, Google Drive or OpenStack API. There are heaps of such storage
-providers. S3QL can also use a local directory as storage, which
-combined with sshfs allows for file storage on any ssh server. S3QL
-includes support for encryption, compression, de-duplication,
-snapshots and immutable file systems, allowing me to mount the remote
-storage as a local mount point, look at and use the files as if they
-were local, while the content is stored in the cloud as well. This
-allows me to have a backup that should survive a fire. The file
-system can not be shared between several machines at the same time,
-as only one can mount it at a time, but any machine with the
-encryption key and access to the storage service can mount it if it
-is unmounted.</p>
-
-<p>It is simple to use. I'm using it on Debian Wheezy, where the
-package is already included. So to get started, run <tt>apt-get
-install s3ql</tt>. Next, pick a storage provider. I ended up picking
-Greenqloud, after reading their nice recipe on
-<a href="https://greenqloud.zendesk.com/entries/44611757-How-To-Use-S3QL-to-mount-a-StorageQloud-bucket-on-Debian-Wheezy">how
-to use S3QL with their Amazon S3 service</a>, because I trust the laws
-in Iceland more than those in the USA when it comes to keeping my
-personal data safe and private, and thus would rather spend money on
-a company in Iceland. Another nice recipe is available from the article
-<a href="http://www.admin-magazine.com/HPC/Articles/HPC-Cloud-Storage">S3QL
-Filesystem for HPC Storage</a> by Jeff Layton in the HPC section of
-Admin magazine. When the provider is picked, figure out how to get
-the API key needed to connect to the storage API. With Greenqloud,
-the key did not show up until I had added payment details to my
-account.</p>
-
-<p>Armed with the API access details, it is time to create the file
-system. First, create a new bucket in the cloud. This bucket is the
-file system storage area. I picked a bucket name reflecting the
-machine that was going to store data there, but any name will do.
-I'll refer to it as <tt>bucket-name</tt> below. In addition, one
-needs the API login and password, and a locally created password.
-Store it all in ~root/.s3ql/authinfo2 like this:</p>
-
-<p><blockquote><pre>
-[s3c]
-storage-url: s3c://s.greenqloud.com:443/bucket-name
-backend-login: API-login
-backend-password: API-password
-fs-passphrase: local-password
-</pre></blockquote></p>
-
-<p>I create my local passphrase using <tt>pwget 50</tt> or similar,
-but any sensible way to create a fairly random password should do.
-Armed with these details, it is now time to run mkfs.s3ql, entering
-the API details and passwords to create the file system:</p>
-
-<p><blockquote><pre>
-# mkdir -m 700 /var/lib/s3ql-cache
-# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
- --ssl s3c://s.greenqloud.com:443/bucket-name
-Enter backend login:
-Enter backend password:
-Before using S3QL, make sure to read the user's guide, especially
-the 'Important Rules to Avoid Loosing Data' section.
-Enter encryption password:
-Confirm encryption password:
-Generating random encryption key...
-Creating metadata tables...
-Dumping metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Compressing and uploading metadata...
-Wrote 0.00 MB of compressed metadata.
-# </pre></blockquote></p>
-
-<p>The next step is mounting the file system to make the storage
-available.</p>
-
-<p><blockquote><pre>
-# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
- --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
-Using 4 upload threads.
-Downloading and decompressing metadata...
-Reading metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Mounting filesystem...
-# df -h /s3ql
-Filesystem Size Used Avail Use% Mounted on
-s3c://s.greenqloud.com:443/bucket-name 1.0T 0 1.0T 0% /s3ql
-#
-</pre></blockquote></p>
-
-<p>The file system is now ready for use. I use rsync to store my
-backups in it, and as the metadata used by rsync is downloaded at
-mount time, no network traffic (and no storage cost) is triggered by
-running rsync. To unmount, one should not use the normal umount
-command, as it will not flush the cache to the cloud storage;
-instead, run the umount.s3ql command like this:</p>
-
-<p><blockquote><pre>
-# umount.s3ql /s3ql
-#
-</pre></blockquote></p>
-
-<p>There is a fsck command available to check the file system and
-correct any problems detected. This can be used if the local server
-crashes while the file system is mounted, to reset the "already
-mounted" flag. This is what it looks like when processing a working
-file system:</p>
-
-<p><blockquote><pre>
-# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
-Using cached metadata.
-File system seems clean, checking anyway.
-Checking DB integrity...
-Creating temporary extra indices...
-Checking lost+found...
-Checking cached objects...
-Checking names (refcounts)...
-Checking contents (names)...
-Checking contents (inodes)...
-Checking contents (parent inodes)...
-Checking objects (reference counts)...
-Checking objects (backend)...
-..processed 5000 objects so far..
-..processed 10000 objects so far..
-..processed 15000 objects so far..
-Checking objects (sizes)...
-Checking blocks (referenced objects)...
-Checking blocks (refcounts)...
-Checking inode-block mapping (blocks)...
-Checking inode-block mapping (inodes)...
-Checking inodes (refcounts)...
-Checking inodes (sizes)...
-Checking extended attributes (names)...
-Checking extended attributes (inodes)...
-Checking symlinks (inodes)...
-Checking directory reachability...
-Checking unix conventions...
-Checking referential integrity...
-Dropping temporary indices...
-Backing up old metadata...
-Dumping metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Compressing and uploading metadata...
-Wrote 0.89 MB of compressed metadata.
-#
-</pre></blockquote></p>
-
-<p>Thanks to the cache, working on files that fit in the cache is very
-quick, about the same speed as local file access. Uploading large
-amounts of data is for me limited by the bandwidth out of and into my
-house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s,
-which is very close to my upload speed, and downloading the same
-Debian installation ISO gave me 610 kiB/s, close to my download speed.
-Both were measured using <tt>dd</tt>. So for me, the bottleneck is my
-network, not the file system code. I do not know what a good cache
-size would be, but suspect that the cache should be larger than your
-working set.</p>
-
-<p>I mentioned that only one machine can mount the file system at the
-time. If another machine tries, it is told that the file system is
-busy:</p>
-
-<p><blockquote><pre>
-# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
- --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
-Using 8 upload threads.
-Backend reports that fs is still mounted elsewhere, aborting.
-#
-</pre></blockquote></p>
-
-<p>The file content is uploaded when the cache is full, while the
-metadata is uploaded once every 24 hours by default. To ensure the
-file system content is flushed to the cloud, one can either unmount
-the file system, or ask S3QL to flush the cache and metadata using
-s3qlctrl:</p>
-
-<p><blockquote><pre>
-# s3qlctrl upload-meta /s3ql
-# s3qlctrl flushcache /s3ql
-#
-</pre></blockquote></p>
-
-<p>If you are curious about how much space your data uses in the
-cloud, and how much compression and deduplication cut down on the
-storage usage, you can use s3qlstat on the mounted file system to get
-a report:</p>
-
-<p><blockquote><pre>
-# s3qlstat /s3ql
-Directory entries: 9141
-Inodes: 9143
-Data blocks: 8851
-Total data size: 22049.38 MB
-After de-duplication: 21955.46 MB (99.57% of total)
-After compression: 21877.28 MB (99.22% of total, 99.64% of de-duplicated)
-Database size: 2.39 MB (uncompressed)
-(some values do not take into account not-yet-uploaded dirty blocks in cache)
-#
-</pre></blockquote></p>
-
-<p>I mentioned earlier that there are several possible suppliers of
-storage. I did not try to locate them all, but am aware of at least
-<a href="https://www.greenqloud.com/">Greenqloud</a>,
-<a href="http://drive.google.com/">Google Drive</a>,
-<a href="http://aws.amazon.com/s3/">Amazon S3 web services</a>,
-<a href="http://www.rackspace.com/">Rackspace</a> and
-<a href="http://crowncloud.net/">Crowncloud</a>. The latter even
-accepts payment in Bitcoin. Pick one that suits your needs. Some of
-them provide several GiB of free storage, but the price models are
-quite different and you will have to figure out what suits you
-best.</p>
-
-<p>While researching this blog post, I had a look at research papers
-and posters discussing the S3QL file system. There are several, which
-told me that the file system is getting critical review from the
-research community, and this increased my confidence in using it. One nice
-poster is titled
-"<a href="http://www.lanl.gov/orgs/adtsc/publications/science_highlights_2013/docs/pg68_69.pdf">An
-Innovative Parallel Cloud Storage System using OpenStack’s SwiftObject
-Store and Transformative Parallel I/O Approach</a>" by Hsing-Bung
-Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields
-and Pamela Smith. Please have a look.</p>
-
-<p>Given my problems with different file systems earlier, I decided to
-check out the mounted S3QL file system to see if it would be usable as
-a home directory (in other words, that it provides POSIX semantics
-when it comes to locking, umask handling, etc). Running
-<a href="http://people.skolelinux.org/pere/blog/Testing_if_a_file_system_can_be_used_for_home_directories___.html">my
-test code to check file system semantics</a>, I was happy to discover that
-no error was found. So the file system can be used for home
-directories, if one chooses to do so.</p>
-
-<p>If you do not want a locally mounted file system, and want
-something that works without the Linux FUSE file system, I would like
-to mention the <a href="http://www.tarsnap.com/">Tarsnap
-service</a>, which also provides locally encrypted backup using a
-command line client. It has a nicer access control system, where one
-can split out read and write access, allowing some systems to write
-to the backup and others to only read from it.</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+ <title>A day in court challenging seizure of popcorn-time.no for #domstolkontroll</title>
+ <link>http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html</guid>
+ <pubDate>Fri, 3 Feb 2017 11:10:00 +0100</pubDate>
+ <description><p align="center"><img width="70%" src="http://people.skolelinux.org/pere/blog/images/2017-02-01-popcorn-time-in-court.jpeg"></p>
+
+<p>On Wednesday, I spent the entire day in court in Follo Tingrett
+representing <a href="https://www.nuug.no/">the member association
+NUUG</a>, alongside <a href="https://www.efn.no/">the member
+association EFN</a> and <a href="http://www.imc.no">the DNS registrar
+IMC</a>, challenging the seizure of the DNS name popcorn-time.no. It
+was interesting to sit in a court of law for the first time in my
+life. Our team can be seen in the picture above: attorney Ola
+Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil
+Eriksen and NUUG board member Petter Reinholdtsen.</p>
+
+<p><a href="http://www.domstol.no/no/Enkelt-domstol/follo-tingrett/Nar-gar-rettssaken/Beramming/?cid=AAAA1701301512081262234UJFBVEZZZZZEJBAvtale">The
+case at hand</a> is that the Norwegian National Authority for
+Investigation and Prosecution of Economic and Environmental Crime (aka
+Økokrim) decided on its own to seize a DNS domain early last year,
+without following
+<a href="https://www.norid.no/no/regelverk/navnepolitikk/#link12">the
+official policy of the Norwegian DNS authority</a>, which requires a
+court decision. The web site in question covered Popcorn Time, the
+name of a technology with both legal and illegal applications.
+Popcorn Time is a client that combines searching a Bittorrent
+directory available on the Internet with downloading/distributing
+content via Bittorrent and playing the downloaded content on screen.
+It can be used illegally to distribute content against the will of
+the rights holder, but it can also be used legally to play a lot of
+content, for example the
+millions of movies
+<a href="https://archive.org/details/movies">available from the
+Internet Archive</a> or the collection
+<a href="http://vodo.net/films/">available from Vodo</a>. We created
+<a href="magnet:?xt=urn:btih:86c1802af5a667ca56d3918aecb7d3c0f7173084&dn=PresentasjonFolloTingrett.mov&tr=udp%3A%2F%2Fpublic.popcorn-tracker.org%3A6969%2Fannounce">a
+video demonstrating legal use of Popcorn Time</a> and played it in
+court. It can of course be downloaded using Bittorrent.</p>
+
+<p>I did not quite know what to expect from a day in court. The
+government held on to their version of the story and we held on to
+ours, and I hope the judge is able to make sense of it all. We will
+know in two weeks' time. Unfortunately I do not have high hopes, as
+the government has the upper hand here, with more knowledge about the
+case, better training in handling criminal law and in general higher
+standing in the courts than a fairly unknown DNS registrar and two
+member associations. It is expensive to be right, also in Norway. So
+far the case has cost more than NOK 70 000. To help fund the case,
+NUUG and EFN have asked for donations, and have managed to collect
+around NOK 25 000 so far. Given the presentation from the government,
+I expect it to appeal if the case goes our way. And if the case does
+not go our way, I hope we have enough funding to appeal.</p>
+
+<p>From the other side came two people from Økokrim. On the benches,
+appearing to be part of the group from the government, were two
+people from the law firm Simonsen Vogt Wiig, and three others I am
+not quite sure about. Økokrim had proposed to present two witnesses
+from The Motion Picture Association, but this was rejected because
+they did not speak Norwegian and it was a bit late to bring in a
+translator, though perhaps the two from the MPA were present anyway.
+All seven appeared to know each other. Good to see the case is taken
+seriously.</p>
+
+<p>If you, like me, believe the courts should be involved before a DNS
+domain is hijacked by the government, or you believe the Popcorn Time
+technology has a lot of useful and legal applications, I suggest you
+too <a href="http://www.nuug.no/dns-beslag-donasjon.shtml">donate to
+the NUUG defense fund</a>. Both Bitcoin and bank transfer are
+available. If NUUG gets more than we need for the legal action (very
+unlikely), the rest will be spent promoting free software, open
+standards and unix-like operating systems in Norway, so no matter what
+happens the money will be put to good use.</p>
+
+<p>If you want to learn more about the case, I recommend you check out
+<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the blog
+posts from NUUG covering the case</a>. They cover the legal arguments
+on both sides.</p>
</description>
</item>
<item>
- <title>The EU Court of Justice confirmed today that the Data Retention Directive is illegal</title>
- <link>http://people.skolelinux.org/pere/blog/EU_domstolen_bekreftet_i_dag_at_datalagringsdirektivet_er_ulovlig.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/EU_domstolen_bekreftet_i_dag_at_datalagringsdirektivet_er_ulovlig.html</guid>
- <pubDate>Tue, 8 Apr 2014 11:30:00 +0200</pubDate>
- <description><p>Today the ruling from the EU Court of Justice on the
-Data Retention Directive finally arrived; unsurprisingly, the
-directive was found to be illegal and in violation of citizens'
-fundamental rights. If you wonder what the Data Retention Directive
-is, there is
-<a href="http://tv.nrk.no/program/koid75005313/tema-dine-digitale-spor-datalagringsdirektivet">a
-great documentary available from NRK</a> which I have previously
-<a href="http://people.skolelinux.org/pere/blog/Dokumentaren_om_Datalagringsdirektivet_sendes_endelig_p__NRK.html">recommended</a>
-everyone to watch.</p>
-
-<p>Here is a small collection of news reports on the matter, and I
-expect more will appear during the day. More can be found
-<a href="http://www.mylder.no/?drill=datalagringsdirektivet&intern=1">via
-mylder</a>.</p>
-
-<p><ul>
-
-<li><a href="http://e24.no/digital/eu-domstolen-datalagringsdirektivet-er-ugyldig/22879592">EU-domstolen:
-Datalagringsdirektivet er ugyldig</a> - e24.no 2014-04-08
-
-<li><a href="http://www.aftenposten.no/nyheter/iriks/EU-domstolen-Datalagringsdirektivet-er-ulovlig-7529032.html">EU-domstolen:
-Datalagringsdirektivet er ulovlig</a> - aftenposten.no 2014-04-08
-
-<li><a href="http://www.aftenposten.no/nyheter/iriks/politikk/Krever-DLD-stopp-i-Norge-7530086.html">Krever
-DLD-stopp i Norge</a> - aftenposten.no 2014-04-08
-
-<li><a href="http://www.p4.no/story.aspx?id=566431">Apenes: - En
-gledens dag</a> - p4.no 2014-04-08
-
-<li><a href="http://www.nrk.no/norge/_-datalagringsdirektivet-er-ugyldig-1.11655929">EU-domstolen:
-– Datalagringsdirektivet er ugyldig</a> - nrk.no 2014-04-08</li>
-
-<li><a href="http://www.vg.no/nyheter/utenriks/data-og-nett/eu-domstolen-datalagringsdirektivet-er-ugyldig/a/10130280/">EU-domstolen:
-Datalagringsdirektivet er ugyldig</a> - vg.no 2014-04-08</li>
-
-<li><a href="http://www.dagbladet.no/2014/04/08/nyheter/innenriks/datalagringsdirektivet/personvern/32711646/">-
-Vi bør skrote hele datalagringsdirektivet</a> - dagbladet.no
-2014-04-08</li>
-
-<li><a href="http://www.digi.no/928137/eu-domstolen-dld-er-ugyldig">EU-domstolen:
-DLD er ugyldig</a> - digi.no 2014-04-08</li>
-
-<li><a href="http://www.irishtimes.com/business/sectors/technology/european-court-declares-data-retention-directive-invalid-1.1754150">European
-court declares data retention directive invalid</a> - irishtimes.com
-2014-04-08</li>
-
-<li><a href="http://www.reuters.com/article/2014/04/08/us-eu-data-ruling-idUSBREA370F020140408?feedType=RSS">EU
-court rules against requirement to keep data of telecom users</a> -
-reuters.com 2014-04-08</li>
-
-</ul>
-</p>
-
-<p>I think it is great that yet another voice confirms that
-totalitarian surveillance of the population is unacceptable, but it
-is still as important as ever to protect the private sphere, as the
-technical capabilities still exist and are being exploited, and I
-believe efforts in projects like
-<a href="https://wiki.debian.org/FreedomBox">Freedombox</a> and
-<a href="http://www.dugnadsnett.no/">Dugnadsnett</a> are more
-important than ever.</p>
-
-<p><strong>Update 2014-04-08 12:10</strong>: The fundraising to stop
-the Data Retention Directive in Norway is handled by the association
-<a href="http://www.digitaltpersonvern.no/">Digitalt Personvern</a>,
-which has collected NOK 843 215 so far, but will probably need much
-more unless Høyre or Arbeiderpartiet change their position on the
-issue. It was
-<a href="http://www.holderdeord.no/parliament-issues/48650">only the
-parties Høyre and Arbeiderpartiet</a> that voted for the Data
-Retention Directive, and one of them must change its mind for there
-to be a majority against it in the Storting. See more about the case
-at
-<a href="http://www.holderdeord.no/issues/69-innfore-datalagringsdirektivet">Holder
-de ord</a>.</p>
+ <title>Nasjonalbiblioteket ends its unlawful use of Google Forms</title>
+ <link>http://people.skolelinux.org/pere/blog/Nasjonalbiblioteket_avslutter_sin_ulovlige_bruk_av_Google_Skjemaer.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Nasjonalbiblioteket_avslutter_sin_ulovlige_bruk_av_Google_Skjemaer.html</guid>
+ <pubDate>Thu, 12 Jan 2017 09:40:00 +0100</pubDate>
+ <description><p>Today I received some really good news. The
+background is that before Christmas, Nasjonalbiblioteket arranged
+<a href="http://www.nb.no/Bibliotekutvikling/Kunnskapsorganisering/Nasjonalt-verksregister/Seminar-om-verksregister">a
+seminar about its excellent «verksregister» (work registry)
+initiative</a>. The only way to sign up for the seminar was to send
+personal data to Google via Google Forms. I found this a questionable
+practice, as it should be possible to attend seminars arranged by the
+public sector without having to share one's interests, position and
+other personal data with Google. I therefore requested access via
+<a href="https://www.mimesbronn.no/">Mimes brønn</a> to
+<a href="https://www.mimesbronn.no/request/personopplysninger_til_google_sk">the
+agreements and assessments Nasjonalbiblioteket had made around
+this</a>. The Personal Data Act sets clear limits on what must be in
+place before one can ask third parties, especially abroad, to process
+personal data on one's behalf, so thorough documentation ought to
+exist before such a practice can be legal. Two lawyers at
+Nasjonalbiblioteket initially found this perfectly fine, claiming
+Google's standard agreement could serve as a data processing
+agreement. I found that odd, but did not have the capacity to follow
+up on the matter until two days ago.</p>
+
+<p>Today's good news, which came after I tipped off
+Nasjonalbiblioteket that the Norwegian Data Protection Authority
+rejected Google's standard agreements as data processing agreements
+in 2011, is that Nasjonalbiblioteket has decided to stop using Google
+Forms/Apps and to enter a dialogue with DIFI to find better ways to
+handle seminar registrations in line with the Personal Data Act. It
+is wonderful to see that it sometimes helps to ask what on earth the
+public sector is up to.</p>
</description>
</item>
<item>
- <title>ReactOS Windows clone - nice free software</title>
- <link>http://people.skolelinux.org/pere/blog/ReactOS_Windows_clone___nice_free_software.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/ReactOS_Windows_clone___nice_free_software.html</guid>
- <pubDate>Tue, 1 Apr 2014 12:10:00 +0200</pubDate>
- <description><p>Microsoft have announced that Windows XP reaches its end of life
-2014-04-08, in 7 days. But there are heaps of machines still running
-Windows XP, and depending on Windows XP to run their applications, and
-upgrading will be expensive, both when it comes to money and when it
-comes to the amount of effort needed to migrate from Windows XP to a
-new operating system. Some obvious options (buy new a Windows
-machine, buy a MacOSX machine, install Linux on the existing machine)
-are already well known and covered elsewhere. Most of them involve
-leaving the user applications installed on Windows XP behind and
-trying out replacements or updated versions. In this blog post I want
-to mention one strange bird that allow people to keep the hardware and
-the existing Windows XP applications and run them on a free software
-operating system that is Windows XP compatible.</p>
-
-<p><a href="http://www.reactos.org/">ReactOS</a> is a free software
-operating system (GNU GPL licensed) working on providing a operating
-system that is binary compatible with Windows, able to run windows
-programs directly and to use Windows drivers for hardware directly.
-The project goal is for Windows user to keep their existing machines,
-drivers and software, and gain the advantages from user a operating
-system without usage limitations caused by non-free licensing. It is
-a Windows clone running directly on the hardware, so quite different
-from the approach taken by <a href="http://www.winehq.org/">the Wine
-project</a>, which make it possible to run Windows binaries on
-Linux.</p>
-
-<p>The ReactOS project share code with the Wine project, so most
-shared libraries available on Windows are already implemented already.
-There is also a software manager like the one we are used to on Linux,
-allowing the user to install free software applications with a simple
-click directly from the Internet. Check out the
-<a href="http://www.reactos.org/screenshots">screen shots on the
-project web site</a> for an idea what it look like (it looks just like
-Windows before metro).</p>
-
-<p>I do not use ReactOS myself, preferring Linux and Unix like
-operating systems. I've tested it, and it work fine in a virt-manager
-virtual machine. The browser, minesweeper, notepad etc is working
-fine as far as I can tell. Unfortunately, my main test application
-is the software included on a CD with the Lego Mindstorms NXT, which
-seem to install just fine from CD but fail to leave any binaries on
-the disk after the installation. So no luck with that test software.
-No idea why, but hope someone else figure out and fix the problem.
-I've tried the ReactOS Live ISO on a physical machine, and it seemed
-to work just fine. If you like Windows and want to keep running your
-old Windows binaries, check it out by
-<a href="http://www.reactos.org/download">downloading</a> the
-installation CD, the live CD or the preinstalled virtual machine
-image.</p>
+ <title>Is NAV violating its own privacy policy?</title>
+ <link>http://people.skolelinux.org/pere/blog/Bryter_NAV_sin_egen_personvernerkl_ring_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Bryter_NAV_sin_egen_personvernerkl_ring_.html</guid>
+ <pubDate>Wed, 11 Jan 2017 06:50:00 +0100</pubDate>
+ <description><p>I read with interest a news story at
+<a href="http://www.digi.no/artikler/nav-avslorer-trygdemisbruk-ved-a-spore-ip-adresser/367394">digi.no</a>
+and
+<a href="https://www.nrk.no/buskerud/trygdesvindlere-avslores-av-utenlandske-ip-adresser-1.13313461">NRK</a>
+about how it is not just me: NAV, too, geolocates IP addresses, and
+analyses the IP addresses of those submitting their benefit report
+cards («meldekort») to see whether the cards are submitted from
+foreign IP addresses. Police attorney Hans Lyder Haare in Drammen is
+quoted by NRK as saying «The two were exposed by, among other things,
+IP addresses. One could see that the report card came from abroad.»</p>
+
+<p>I think it is good that it becomes better known that IP addresses
+are tied to individuals and that collected information is used to
+determine people's location, also by actors here in Norway. I see it
+as yet another argument for using
+<a href="https://www.torproject.org/">Tor</a> as much as possible to
+make IP geolocation harder, so one can protect one's privacy and
+avoid sharing one's physical location with strangers.</p>
+
+<p>But one thing about this news worries me. I was tipped off
+(thanks, #nuug) about
+<a href="https://www.nav.no/no/NAV+og+samfunn/Kontakt+NAV/Teknisk+brukerstotte/Snarveier/personvernerkl%C3%A6ring-for-arbeids-og-velferdsetaten">NAV's
+privacy policy</a>, which under the heading «Personvern og
+statistikk» (privacy and statistics) reads:</p>
+
+<p><blockquote>
+
+<p>«When you visit nav.no, you leave electronic traces behind. The
+traces are created because your browser automatically sends a series
+of details to NAV's server every time you request a page. These
+include, for example, which browser and version you use, and your
+Internet address (IP address). For every page shown, the following
+details are stored:</p>
+
+<ul>
+<li>hvilken side du ser på</li>
+<li>dato og tid</li>
+<li>hvilken nettleser du bruker</li>
+<li>din ip-adresse</li>
+</ul>
+
+<p>Ingen av opplysningene vil bli brukt til å identifisere
+enkeltpersoner. NAV bruker disse opplysningene til å generere en
+samlet statistikk som blant annet viser hvilke sider som er mest
+populære. Statistikken er et redskap til å forbedre våre
+tjenester.»</p>
+
+</blockquote></p>
+
+<p>I fail to see how analysing the visitors' IP addresses to find out
+who submits report cards via the web from an IP address abroad can be
+done without contradicting the claim that «none of this information
+will be used to identify individuals». It thus seems to me that NAV
+is breaking its own privacy policy, which
+<a href="http://people.skolelinux.org/pere/blog/Er_lover_brutt_n_r_personvernpolicy_ikke_stemmer_med_praksis_.html">the
+Norwegian Data Protection Authority told me in early December is
+probably a violation of the Personal Data Act</a>.</p>
+
+<p>In addition, the privacy policy is quite misleading, given that
+NAV's web pages not only provide NAV with personal data, but also ask
+the users' browsers to contact five other web servers
+(script.hotjar.com, static.hotjar.com, vars.hotjar.com,
+www.google-analytics.com and www.googletagmanager.com), making
+personal data available to the companies Hotjar and Google, and to
+anyone able to listen in on the traffic along the way (such as FRA,
+GCHQ and NSA). Nor can I see how such spreading of personal data can
+comply with the requirements of the Personal Data Act, or with NAV's
+privacy policy.</p>
+
+<p>Perhaps NAV should take a close look at its privacy policy? Or
+perhaps the Data Protection Authority should?</p>
</description>
</item>
<item>
- <title>Debian Edu interview: Roger Marsal</title>
- <link>http://people.skolelinux.org/pere/blog/Debian_Edu_interview__Roger_Marsal.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Debian_Edu_interview__Roger_Marsal.html</guid>
- <pubDate>Sun, 30 Mar 2014 11:40:00 +0200</pubDate>
- <description><p><a href="http://www.skolelinux.org/">Debian Edu / Skolelinux</a>
-keep gaining new users. Some weeks ago, a person showed up on IRC,
-<a href="irc://irc.debian.org/#debian-edu">#debian-edu</a>, with a
-wish to contribute, and I managed to get a interview with this great
-contributor Roger Marsal to learn more about his background.</p>
-
-<p><strong>Who are you, and how do you spend your days?</strong></p>
-
-<p>My name is Roger Marsal, I'm 27 years old (1986 generation) and I
-live in Barcelona, Spain. I've got a strong business background and I
-work as a patrimony manager and as a real estate agent. Additionally,
-I've co-founded a British based tech company that is nowadays on the
-last development phase of a new social networking concept.</p>
-
-<p>I'm a Linux enthusiast that started its journey with Ubuntu four years
-ago and have recently switched to Debian seeking rock solid stability
-and as a necessary step to gain expertise.</p>
-
-<p>In a nutshell, I spend my days working and learning as much as I
-can to face both my job, entrepreneur project and feed my Linux
-hunger.</p>
-
-<p><strong>How did you get in contact with the Skolelinux / Debian Edu
-project?</strong></p>
-
-<p>I discovered the <a href="http://www.ltsp.org/">LTSP</a> advantages
-with "Ubuntu 12.04 alternate install" and after a year of use I
-started looking for an alternative. Even though I highly value and
-respect the Ubuntu project, I thought it was necessary for me to
-change to a more robust and stable alternative. As far as I was using
-Debian on my personal laptop I thought it would be fine to install
-Debian and configure an LTSP server myself. Surprised, I discovered
-that the Debian project also supported a kind of Edubuntu equivalent,
-and after having some pain I obtained a Debian Edu network up and
-running. I just loved it.</p>
-
-<p><strong>What do you see as the advantages of Skolelinux / Debian
-Edu?</strong></p>
-
-<p>I found a main advantage in that, once you know "the tips and
-tricks", a new installation just works out of the box. It's the most
-complete alternative I've found to create an LTSP network. All the
-other distributions seems to be made of plastic, Debian Edu seems to
-be made of steel.</p>
-
-<p><strong>What do you see as the disadvantages of Skolelinux / Debian
-Edu?</strong></p>
-
-<p>I found two main disadvantages.</p>
-
-<p>I'm not an expert but I've got notions and I had to spent a considerable
-amount of time trying to bring up a standard network topology. I'm quite
-stubborn and I just worked until I did but I'm sure many people with few
-resources (not big schools, but academies for example) would have switched
-or dropped.</p>
-
-<p>It's amazing how such a complex system like Debian Edu has achieved
-this out-of-the-box state. Even though tweaking without breaking gets
-more difficult, as more factors have to be considered. This can
-discourage many people too.</p>
-
-<p><strong>Which free software do you use daily?</strong></p>
-
-<p>I use Debian, Firefox, Okular, Inkscape, LibreOffice and
-Virtualbox.</p>
-
-
-<p><strong>Which strategy do you believe is the right one to use to
-get schools to use free software?</strong></p>
-
-<p>I don't think there is a need for a particular strategy. The free
-attribute in both "freedom" and "no price" meanings is what will
-really bring free software to schools. In my experience I can think of
-the <a href="http://www.r-project.org/">"R" statistical language</a>; a
-few years a ago was an extremely nerd tool for university people.
-Today it's being increasingly used to teach statistics at many
-different level of studies. I believe free and open software will
-increasingly gain popularity, but I'm sure schools will be one of the
-first scenarios where this will happen.</p>
+ <title>Where did that package go? &mdash; geolocated IP traceroute</title>
+ <link>http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</guid>
+ <pubDate>Mon, 9 Jan 2017 12:20:00 +0100</pubDate>
+ <description><p>Did you ever wonder where the web traffic really
+flows to reach the web servers, and who owns the network equipment it
+is flowing through? It is possible to get a glimpse of this using
+traceroute, but it is hard to find all the details. Many years ago, I
+wrote a system to map the Norwegian Internet (trying to figure out if
+our plans for a network game service would get low enough latency, and
+who we needed to talk to about setting up game servers close to the
+users). Back then I used traceroute output from many locations (I
+asked my friends to run a script and send me their traceroute output)
+to create the graph and the map. The output from traceroute typically
+looks like this:</p>
+
+<p><pre>
+traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
+ 1 uio-gw10.uio.no (129.240.202.1) 0.447 ms 0.486 ms 0.621 ms
+ 2 uio-gw8.uio.no (129.240.24.229) 0.467 ms 0.578 ms 0.675 ms
+ 3 oslo-gw1.uninett.no (128.39.65.17) 0.385 ms 0.373 ms 0.358 ms
+ 4 te3-1-2.br1.fn3.as2116.net (193.156.90.3) 1.174 ms 1.172 ms 1.153 ms
+ 5 he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48) 3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.857 ms
+ 6 ae1.ar8.oslosda310.as2116.net (195.0.242.39) 0.662 ms 0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23) 0.622 ms
+ 7 89.191.10.146 (89.191.10.146) 0.931 ms 0.917 ms 0.955 ms
+ 8 * * *
+ 9 * * *
+[...]
+</pre></p>
+
+<p>This shows the DNS names and IP addresses of (at least some of
+the) network equipment involved in getting the data traffic from me to
+the www.stortinget.no server, and how long it took in milliseconds for
+a packet to reach the equipment and return to me. Three packets are
+sent, and sometimes the packets do not follow the same path. This is
+shown for hop 5, where three different IP addresses replied to the
+traceroute request.</p>
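<p>Such output is easy to post-process if one wants to build graphs or
maps from it. As a small illustration (this parser is my own sketch,
not code from the old mapping system), the hop names and IP addresses
can be pulled out with a few lines of Python:</p>

```python
import re

def parse_traceroute(output):
    """Extract (hop number, DNS name, IP address) tuples from
    traceroute output.  Hops answering only '* * *' are skipped."""
    # Matches e.g. " 3  oslo-gw1.uninett.no (128.39.65.17)  0.385 ms ..."
    pattern = re.compile(r'^\s*(\d+)\s+(\S+)\s+\((\d+\.\d+\.\d+\.\d+)\)')
    hops = []
    for line in output.splitlines():
        m = pattern.match(line)
        if m:
            hops.append((int(m.group(1)), m.group(2), m.group(3)))
    return hops

sample = """traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (129.240.202.1)  0.447 ms  0.486 ms  0.621 ms
 2  uio-gw8.uio.no (129.240.24.229)  0.467 ms  0.578 ms  0.675 ms
 8  * * *
"""
print(parse_traceroute(sample))
```

<p>The resulting list of (hop, name, IP) tuples can then be fed to
whatever graphing or geolocation tool one prefers.</p>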
+
+<p>There are many ways to measure trace routes. Other good
+traceroute implementations I use are traceroute (using ICMP packets),
+mtr (which can do ICMP, UDP and TCP) and scapy (a Python library with
+ICMP, UDP and TCP traceroute and a lot of other capabilities). All of
+them are easily available in
+<a href="https://www.debian.org/">Debian</a>.</p>
+
+<p>This time around, I wanted to know the geographic location of the
+different route points, to visualize how visiting a web page spreads
+information about the visit to a lot of servers around the globe. The
+background is that a web site today often will ask the browser to
+fetch the parts (for example HTML, JSON, fonts, JavaScript, CSS and
+video) required to display the content from many servers. This will
+leak information about the visit to those controlling these servers
+and to anyone able to peek at the data traffic passing by (like your
+ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).</p>
+
+<p>Let's pick an example, the Norwegian parliament web site
+www.stortinget.no. It is read daily by all members of parliament and
+their staff, as well as political journalists, activists and many
+other citizens of Norway. A visit to the www.stortinget.no web site
+will ask your browser to contact 8 other servers:
+ajax.googleapis.com, insights.hotjar.com, script.hotjar.com,
+static.hotjar.com, stats.g.doubleclick.net, www.google-analytics.com,
+www.googletagmanager.com and www.netigate.se. I extracted this by
+asking <a href="http://phantomjs.org/">PhantomJS</a> to visit the
+Stortinget web page and tell me all the URLs PhantomJS downloaded to
+render the page (in HAR format using
+<a href="https://github.com/ariya/phantomjs/blob/master/examples/netsniff.js">their
+netsniff example</a>; I am very grateful to Gorm for showing me how to
+do this). My goal is to visualize network traces to all the IP
+addresses behind these DNS names, to show where visitors' personal
+information is spread when visiting the page.</p>
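<p>The HAR format is plain JSON, so extracting the list of contacted
servers from such a capture takes only a few lines. A minimal sketch
(the function name and the tiny inline capture are made up for
illustration; a real HAR file from the netsniff example contains many
more fields per entry):</p>

```python
import json
from urllib.parse import urlparse

def hosts_from_har(har_text):
    """Return the sorted set of server names contacted,
    according to a HAR capture."""
    har = json.loads(har_text)
    return sorted({urlparse(entry["request"]["url"]).hostname
                   for entry in har["log"]["entries"]})

# Hand-made HAR fragment for illustration only.
sample = json.dumps({"log": {"entries": [
    {"request": {"url": "https://www.stortinget.no/"}},
    {"request": {"url": "https://ajax.googleapis.com/ajax/libs/jquery.js"}},
    {"request": {"url": "https://www.google-analytics.com/analytics.js"}},
]}})
print(hosts_from_har(sample))
```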
+
+<p align="center"><a href="www.stortinget.no-geoip.kml"><img
+src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geoip-small.png" alt="map of combined traces for URLs used by www.stortinget.no using GeoIP"/></a></p>
+
+<p>When I had a look around for options, I could not find any good
+free software tools to do this, and decided I needed my own traceroute
+wrapper outputting KML based on locations looked up using GeoIP. KML
+is easy to work with and easy to generate, and understood by several
+of the GIS tools I have available. I got good help from my NUUG
+colleague Anders Einar with this, and the result can be seen in
+<a href="https://github.com/petterreinholdtsen/kmltraceroute">my
+kmltraceroute git repository</a>. Unfortunately, the quality of the
+free GeoIP databases I could find (and the for-pay databases my
+friends had access to) is not up to the task. The IP addresses of
+central Internet infrastructure would typically be placed near the
+controlling company's main office, and not where the router is really
+located, as you can see from <a href="www.stortinget.no-geoip.kml">the
+KML file I created</a> using the GeoLite City dataset from
+MaxMind.</p>
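<p>The KML format itself is simple enough to generate by hand. A much
simplified sketch of the idea behind such a traceroute-to-KML wrapper
(with made-up coordinates standing in for real GeoIP lookups):</p>

```python
def hops_to_kml(hops):
    """Build a minimal KML document with one placemark per hop and a
    line string connecting them.  hops is a list of (name, lat, lon).
    Note that KML wants coordinates in longitude,latitude order."""
    placemarks = "\n".join(
        '<Placemark><name>%s</name>'
        '<Point><coordinates>%f,%f</coordinates></Point></Placemark>'
        % (name, lon, lat) for name, lat, lon in hops)
    path = " ".join("%f,%f" % (lon, lat) for _, lat, lon in hops)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            '%s\n<Placemark><LineString><coordinates>%s'
            '</coordinates></LineString></Placemark>\n'
            '</Document></kml>' % (placemarks, path))

# Coordinates below are invented for the example, not GeoIP results.
kml = hops_to_kml([("uio-gw10.uio.no", 59.94, 10.72),
                   ("oslo-gw1.uninett.no", 59.91, 10.75)])
print(kml)
```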
+
+<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg"><img
+src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy-small.png" alt="scapy traceroute graph for URLs used by www.stortinget.no"/></a></p>
+
+<p>I also had a look at the visual traceroute graph created by
+<a href="http://www.secdev.org/projects/scapy/">the scapy project</a>,
+showing IP network ownership (aka AS owner) for the IP addresses in
+question.
+<a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg">The
+graph displays a lot of useful information about the traceroute in SVG
+format</a>, and gives a good indication of who controls the network
+equipment involved, but it does not include geolocation. The graph
+makes it possible to see that the information is made available at
+least to UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon,
+Telia, Level 3 Communications and NetDNA.</p>
+
+<p align="center"><a href="https://geotraceroute.com/index.php?node=4&amp;host=www.stortinget.no"><img
+src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-small.png" alt="example geotraceroute view for www.stortinget.no"/></a></p>
+
+<p>In the process, I came across the
+<a href="https://geotraceroute.com/">web service GeoTraceroute</a> by
+Salim Gasmi. Its methodology of combining guesses based on DNS names
+and various location databases, and finally using latency times to
+rule out candidate locations, seemed to do a very good job of guessing
+the correct geolocation. But it could only do one trace at a time,
+did not have a sensor in Norway and did not make the geolocations
+easily available for postprocessing. So I contacted the developer and
+asked if he would be willing to share the code (he declined until he
+has had time to clean it up), but he was interested in providing the
+geolocations in a machine readable format, and willing to set up a
+sensor in Norway. So since yesterday, it is possible to run traces
+from Norway in this service thanks to a sensor node set up by
+<a href="https://www.nuug.no/">the NUUG association</a>, and to get
+the trace in KML format for further processing.</p>
+
+<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.kml"><img
+src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.png" alt="map of combined traces for URLs used by www.stortinget.no using geotraceroute"/></a></p>
+
+<p>Here we can see that a lot of traffic passes through Sweden on its
+way to Denmark, Germany, the Netherlands and Ireland. Plenty of
+places where the Snowden revelations confirmed that the traffic is
+read by various actors without your best interest as their top
+priority.</p>
+
+<p>Combining KML files is trivial using a text editor, so I could
+loop over all the hosts behind the URLs fetched by www.stortinget.no,
+ask for the KML file from GeoTraceroute, and create a combined KML
+file with all the traces (unfortunately, only one of the IP addresses
+behind each DNS name is traced this time; to get them all, one would
+have to request traces using IP numbers instead of DNS names from
+GeoTraceroute). That might be the next step in this project.</p>
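<p>The combining step really is that trivial. A naive sketch of the
idea, expressed in Python instead of a text editor (it assumes simple,
well-formed KML files without nested Document elements):</p>

```python
import re

def combine_kml(kml_texts):
    """Merge the <Placemark> elements from several KML documents
    into a single new KML document, using plain text processing."""
    placemarks = []
    for text in kml_texts:
        placemarks.extend(re.findall(r'<Placemark>.*?</Placemark>',
                                     text, re.DOTALL))
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            + "\n".join(placemarks) + '\n</Document></kml>')

a = '<kml><Document><Placemark><name>trace-a</name></Placemark></Document></kml>'
b = '<kml><Document><Placemark><name>trace-b</name></Placemark></Document></kml>'
print(combine_kml([a, b]))
```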
+
+<p>Armed with these tools, I find it a lot easier to figure out where
+the IP traffic moves and who controls the boxes involved in moving it.
+And every time the link crosses for example the Swedish border, we can
+be sure the Swedish signals intelligence agency (FRA) is listening, as
+GCHQ does in Britain and the NSA does in the USA and on cables around
+the globe. (Hm, what should we tell them? :) Keep that in mind if
+you ever send anything unencrypted over the Internet.</p>
+
+<p>PS: The KML files are drawn using
+<a href="http://ivanrublev.me/kml/">the KML viewer from Ivan
+Rublev</a>, as it was less cluttered than the local Linux application
+Marble. There are heaps of other options too.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>