+ <item>
+ <title>The SysVinit upstream project just migrated to git</title>
+ <link>http://people.skolelinux.org/pere/blog/The_SysVinit_upstream_project_just_migrated_to_git.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/The_SysVinit_upstream_project_just_migrated_to_git.html</guid>
+ <pubDate>Sun, 18 Feb 2018 09:20:00 +0100</pubDate>
+<description><p>Surprising as it might sound, there are still computers using the
+traditional Sys V init system, and there probably will be until
+systemd starts working on Hurd and FreeBSD.
+<a href="https://savannah.nongnu.org/projects/sysvinit">The upstream
+project still exists</a>, though, and until today the upstream
+source was available from Savannah via Subversion. I am happy to
+report that this just changed.</p>
+
+<p>The upstream source is now in Git, and consists of three
+repositories:</p>
+
+<ul>
+
+<li><a href="http://git.savannah.nongnu.org/cgit/sysvinit.git">sysvinit</a></li>
+<li><a href="http://git.savannah.nongnu.org/cgit/sysvinit/insserv.git">insserv</a></li>
+<li><a href="http://git.savannah.nongnu.org/cgit/sysvinit/startpar.git">startpar</a></li>
+
+</ul>
+
+<p>I do not really spend much time on the project these days and have
+mostly retired from it, but I found it best to migrate the source to a
+good version control system to help those willing to move it forward.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Using VLC to stream bittorrent sources</title>
+ <link>http://people.skolelinux.org/pere/blog/Using_VLC_to_stream_bittorrent_sources.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Using_VLC_to_stream_bittorrent_sources.html</guid>
+ <pubDate>Wed, 14 Feb 2018 08:00:00 +0100</pubDate>
+ <description><p>A few days ago, a new major version of
+<a href="https://www.videolan.org/">VLC</a> was announced, and I
+decided to check whether it now supported streaming over
+<a href="http://bittorrent.org/">bittorrent</a> and
+<a href="https://webtorrent.io">webtorrent</a>. Bittorrent is one of
+the most efficient ways to distribute large files on the Internet, and
+Webtorrent is a variant of Bittorrent using
+<a href="https://webrtc.org">WebRTC</a> as its transport channel,
+allowing web pages to stream and share files using the same technique.
+The network protocols are similar but not identical, so a client
+supporting one of them cannot talk to a client supporting the other.
+I was a bit surprised by what I discovered when I started to look.
+Looking at
+<a href="https://www.videolan.org/vlc/releases/3.0.0.html">the release
+notes</a> did not help answer this question, so I started searching
+the web. I found several news articles from 2013, most of them
+tracing the news from Torrentfreak
+("<a href="https://torrentfreak.com/open-source-giant-vlc-mulls-bittorrent-support-130211/">Open
+Source Giant VLC Mulls BitTorrent Streaming Support</a>"), about an
+initiative to pay someone to create a VLC patch for bittorrent
+support. To figure out what happened with this initiative, I headed
+over to the #videolan IRC channel and asked if there were any bug or
+feature request tickets tracking such a feature. I got an answer from
+lead developer Jean-Baptiste Kempf, telling me that there was a patch
+but neither he nor anyone else knew where it was. So I searched a bit
+more, and came across an independent
+<a href="https://github.com/johang/vlc-bittorrent">VLC plugin to add
+bittorrent support</a>, created by Johan Gunnarsson in 2016/2017.
+Again according to Jean-Baptiste, this is not the patch he was talking
+about.</p>
+
+<p>Anyway, to test the plugin, I made a working Debian package from
+the git repository, with some modifications. After installing this
+package, I could stream videos from
+<a href="https://www.archive.org/">The Internet Archive</a> using VLC
+commands like this:</p>
+
+<p><blockquote><pre>
+vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent
+</pre></blockquote></p>
+
+<p>The plugin is supposed to handle magnet links too, but since The
+Internet Archive does not use magnet links and I did not want to spend
+time tracking down another source, I have not tested it. It can take
+quite a while before the video starts playing, without any indication
+from VLC of what is going on. It took 10-20 seconds when I measured
+it. Sometimes the plugin seems unable to find the correct video file
+to play, and shows the metadata XML file name in the VLC status line.
+I have no idea why.</p>
+
+<p>I have created a <a href="https://bugs.debian.org/890360">request for
+a new package in Debian (RFP)</a> and
+<a href="https://github.com/johang/vlc-bittorrent/issues/1">asked if
+the upstream author is willing to help make this happen</a>. Now we
+wait to see what comes out of this. I do not want to maintain a
+package that is not maintained upstream, nor do I really have time to
+maintain more packages myself, so I might leave it at this. But I
+really hope someone steps up to do the packaging, and I hope upstream
+is still maintaining the source. If you want to help, please update
+the RFP request or the upstream issue.</p>
+
+<p>I have not found any traces of webtorrent support for VLC.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Version 3.1 of Cura, the 3D print slicer, is now in Debian</title>
+ <link>http://people.skolelinux.org/pere/blog/Version_3_1_of_Cura__the_3D_print_slicer__is_now_in_Debian.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Version_3_1_of_Cura__the_3D_print_slicer__is_now_in_Debian.html</guid>
+ <pubDate>Tue, 13 Feb 2018 06:20:00 +0100</pubDate>
+ <description><p>A new version of the
+<a href="https://tracker.debian.org/pkg/cura">3D printer slicer
+software Cura</a>, version 3.1.0, is now available in Debian Testing
+(aka Buster) and Debian Unstable (aka Sid). I hope you find it
+useful. It was uploaded over the last few days, and the last update
+will enter testing tomorrow. See the
+<a href="https://ultimaker.com/en/products/cura-software/release-notes">release
+notes</a> for the list of bug fixes and new features. Version 3.2
+was announced 6 days ago. We will try to get it into Debian as
+well.</p>
+
+<p>More information related to 3D printing is available on the
+<a href="https://wiki.debian.org/3DPrinting">3D printing</a> and
+<a href="https://wiki.debian.org/3D-printer">3D printer</a> wiki pages
+in Debian.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>How hard can æ, ø and å be?</title>
+ <link>http://people.skolelinux.org/pere/blog/How_hard_can______and___be_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_hard_can______and___be_.html</guid>
+ <pubDate>Sun, 11 Feb 2018 17:10:00 +0100</pubDate>
+ <description><img src="http://people.skolelinux.org/pere/blog/images/2018-02-11-peppes-unicode.jpeg" align="right"/>
+
+<p>It is 2018, and 30 years since Unicode was first proposed. Most
+of us in Norway have come to expect the use of our alphabet to just
+work with any computer system. But it is apparently beyond the reach
+of the computers printing receipts at a restaurant. Recently I
+visited a Peppes Pizza restaurant, and noticed a few details on the
+receipt. Notice how 'ø' and 'å' are replaced with strange symbols in
+'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi
+gleder oss til å se deg igjen'.</p>
+
+<p>I would say this state of affairs has passed sad and moved into
+embarrassing.</p>
+
+<p>I removed personal and private information to be nice.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Legal to share more than 11,000 movies listed on IMDB?</title>
+ <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html</guid>
+ <pubDate>Sun, 7 Jan 2018 23:30:00 +0100</pubDate>
+<description><p>I've continued to track down lists of movies that are legal to
+distribute on the Internet, and have identified more than 11,000 title
+IDs in The Internet Movie Database (IMDB) so far. Most of them (57%)
+are feature films from the USA published before 1923. I've also
+tracked down more than 24,000 movies I have not yet been able to map
+to an IMDB title ID, so the real number could be a lot higher.
+According to the front page of
+<a href="https://retrofilmvault.com/">Retro Film Vault</a>, there are
+44,000 public domain films, so I guess there are still some left to
+identify.</p>
+
+<p>The complete data set is available from
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+public git repository</a>, including the scripts used to create it.
+Most of the data is collected using web scraping, for example from the
+"product catalog" of companies selling copies of public domain movies,
+but any source I find credible is used. So far I have had to throw
+out three sources because I did not trust the public domain status of
+the movies listed.</p>
+
+<p>Anyway, this is the summary of the collected data sources so
+far:</p>
+
+<p><pre>
+ 2352 entries ( 66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
+ 2302 entries ( 120 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
+ 195 entries ( 63 unique) with and 200 without IMDB title ID in free-movies-cinemovies.json
+ 89 entries ( 52 unique) with and 38 without IMDB title ID in free-movies-creative-commons.json
+ 344 entries ( 28 unique) with and 655 without IMDB title ID in free-movies-fesfilm.json
+ 668 entries ( 209 unique) with and 1064 without IMDB title ID in free-movies-filmchest-com.json
+ 830 entries ( 21 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
+ 19 entries ( 19 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
+ 6822 entries ( 6669 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-us.json
+ 137 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-imdb-externlist.json
+ 1205 entries ( 57 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
+ 84 entries ( 20 unique) with and 167 without IMDB title ID in free-movies-infodigi-pd.json
+ 158 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
+ 113 entries ( 4 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
+ 182 entries ( 100 unique) with and 0 without IMDB title ID in free-movies-letterboxd-silent.json
+ 229 entries ( 87 unique) with and 1 without IMDB title ID in free-movies-manual.json
+ 44 entries ( 2 unique) with and 64 without IMDB title ID in free-movies-openflix.json
+ 291 entries ( 33 unique) with and 474 without IMDB title ID in free-movies-profilms-pd.json
+ 211 entries ( 7 unique) with and 0 without IMDB title ID in free-movies-publicdomainmovies-info.json
+ 1232 entries ( 57 unique) with and 1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
+ 46 entries ( 13 unique) with and 81 without IMDB title ID in free-movies-publicdomainreview.json
+ 698 entries ( 64 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
+ 1758 entries ( 882 unique) with and 3786 without IMDB title ID in free-movies-retrofilmvault.json
+ 16 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-thehillproductions.json
+ 63 entries ( 16 unique) with and 141 without IMDB title ID in free-movies-vodo.json
+11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID
+</pre></p>
+
+<p>I keep finding more data sources. I found the cinemovies source
+just a few days ago, and as you can see from the summary, it extended
+my list by 63 movies. Check out the mklist-* scripts in the git
+repository if you are curious how the lists are created. Many of the
+titles are extracted using searches on IMDB, where I look for the
+title and year, and accept search results with only one movie listed
+if the year matches. This allows me to automatically use many lists
+of movies without IMDB title ID references, at the cost of increasing
+the risk of wrongly identifying an IMDB title ID as public domain.
+So far my random manual checks have indicated that the method is
+solid, but I really wish all lists of public domain movies would
+include a unique movie identifier like the IMDB title ID. It would
+make the job of counting movies in the public domain a lot easier.</p>
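<p>To make the acceptance rule above concrete, here is a minimal
sketch in Python. The candidate list and the helper function are my
own illustration, not code from the mklist-* scripts; the point is
only accepting a search hit when it is unambiguous and the year
matches:</p>

```python
# Sketch of the matching heuristic described above: accept an IMDB
# search hit only when exactly one candidate carries the expected
# title and year. The candidate data below is illustrative, not real
# search output.

def accept_match(candidates, wanted_title, wanted_year):
    """Return the sole matching IMDB title ID, or None for manual review."""
    hits = [c for c in candidates
            if c["year"] == wanted_year
            and c["title"].lower() == wanted_title.lower()]
    return hits[0]["id"] if len(hits) == 1 else None

candidates = [
    {"id": "tt0017588", "title": "Adam and Evil", "year": 1927},
    {"id": "tt0000001", "title": "Adam and Evil", "year": 2004},
]
print(accept_match(candidates, "Adam and Evil", 1927))  # tt0017588
```

<p>If two candidates shared both title and year, the helper would
return None and the entry would be left for manual checking.</p>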
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Cura, the nice 3D print slicer, is now in Debian Unstable</title>
+ <link>http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</guid>
+ <pubDate>Sun, 17 Dec 2017 07:00:00 +0100</pubDate>
+ <description><p>After several months of working and waiting, I am happy to report
+that the nice and user friendly 3D printer slicer software Cura just
+entered Debian Unstable. It consists of six packages,
+<a href="https://tracker.debian.org/pkg/cura">cura</a>,
+<a href="https://tracker.debian.org/pkg/cura-engine">cura-engine</a>,
+<a href="https://tracker.debian.org/pkg/libarcus">libarcus</a>,
+<a href="https://tracker.debian.org/pkg/fdm-materials">fdm-materials</a>,
+<a href="https://tracker.debian.org/pkg/libsavitar">libsavitar</a> and
+<a href="https://tracker.debian.org/pkg/uranium">uranium</a>. The last
+two, uranium and cura, entered Unstable yesterday. This should make
+it easier for Debian users to print on at least the Ultimaker class of
+3D printers. My nearest 3D printer is an Ultimaker 2+, so it will
+make life easier for at least me. :)</p>
+
+<p>The work to make this happen was done by Gregor Riepl, and I was
+happy to assist him by sponsoring the packages. With the introduction
+of Cura, Debian now has three 3D printer slicers at your service:
+Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D
+printer, give it a go. :)</p>
+
+<p>The 3D printer software is maintained by the 3D printer Debian
+team, flocking together on the
+<a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/3dprinter-general">3dprinter-general</a>
+mailing list and the
+<a href="irc://irc.debian.org/#debian-3dprinting">#debian-3dprinting</a>
+IRC channel.</p>
+
+<p>The next step for Cura in Debian is to update the cura package to
+version 3.0.3, and then update the entire set of packages to version
+3.1.0, which showed up in the last few days.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Idea for finding all public domain movies in the USA</title>
+ <link>http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</guid>
+ <pubDate>Wed, 13 Dec 2017 10:15:00 +0100</pubDate>
+ <description><p>While looking at
+<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
+for the copyright renewal entries for movies published in the USA</a>,
+an idea occurred to me. There are so few renewals per year that it
+should be fairly quick to transcribe them all and add references to
+the corresponding IMDB title IDs. This would give the (presumably)
+complete list of movies published 28 years earlier that did _not_
+enter the public domain in the transcribed year. By fetching the
+list of USA movies published 28 years earlier and subtracting the
+movies with renewals, we should be left with the movies registered in
+IMDB that are now in the public domain. For the year 1955 (which is the one I
+have looked at the most), the total number of pages to transcribe is
+21. For the 28 years from 1950 to 1978, it should be in the range
+500-600 pages. It is just a few days of work, and spread among a
+small group of people it should be doable in a few weeks of spare
+time.</p>
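<p>The subtraction step can be sketched in a few lines of Python. The
title IDs below are made-up placeholders, not real data; the method is
simply a set difference between the published titles and the renewed
ones:</p>

```python
# Illustration of the renewal-subtraction method described above,
# using made-up IMDB title IDs. Movies published 28 years before the
# renewal year, minus those with a renewal on file, are the candidates
# for having entered the public domain.

published_28_years_earlier = {"tt0000010", "tt0000011", "tt0000012"}
renewed = {"tt0000011"}

public_domain_candidates = published_28_years_earlier - renewed
print(sorted(public_domain_candidates))  # ['tt0000010', 'tt0000012']
```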
+
+<p>A typical copyright renewal entry looks like this (the first one
+listed for 1955):</p>
+
+<p><blockquote>
+ ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
+ Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
+ 10Jun55; R151558.
+</blockquote></p>
+
+<p>The movie title as well as the registration and renewal dates are
+easy enough to locate with a program (split on the first comma and
+look for DDmmmYY). The rest of the text is not required to find the
+movie in IMDB, but is useful to confirm that the correct movie is
+found. I am not quite sure what the L and R numbers mean, but I
+suspect they are reference numbers into the archive of the US
+Copyright Office.</p>
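<p>As a rough illustration, the splitting rule above can be expressed
in a short Python snippet. The regular expression is my own guess
based on the single example entry quoted above, and would need testing
against more of the catalog:</p>

```python
import re

# Rough parser for the renewal entry format quoted above: the title
# runs up to the first comma, and dates appear as DDmmmYY (e.g.
# 17Aug27). This is a sketch based on one example entry, not a tested
# parser for the whole catalog.

entry = ("ADAM AND EVIL, a photoplay in seven reels by "
         "Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293. "
         "Loew's Incorporated (PWH); 10Jun55; R151558.")

title = entry.split(",", 1)[0]
dates = re.findall(r"\b(\d{1,2}[A-Z][a-z]{2}\d{2})\b", entry)
print(title)   # ADAM AND EVIL
print(dates)   # ['17Aug27', '10Jun55']
```

<p>The first date found would be the registration date and the second
the renewal date, with the L and R numbers left for the confirmation
step.</p>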
+
+<p>Tracking down the equivalent IMDB title ID is probably going to be
+a manual task, but given the year it is fairly easy to search for the
+movie title using for example
+<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
+Using this search, I find that the equivalent IMDB title ID for the
+first renewal entry from 1955 is
+<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p>
+
+<p>I suspect the best way to do this would be to make a specialised
+web service that makes it easy for contributors to transcribe and
+track down IMDB title IDs. In the web service, once an entry is
+transcribed, the title and year could be extracted from the text and
+a search in IMDB conducted, letting the user pick the equivalent IMDB
+title ID right away. By spreading the work among volunteers, it would
+also be possible to have at least two people transcribe the same
+entries, to be able to discover any typos introduced. But I will need
+help to make this happen, as I lack the spare time to do all of this
+on my own. If you would like to help, please get in touch. Perhaps
+you can draft a web service for crowdsourcing the task?</p>
+
+<p>Note, Project Gutenberg already has some
+<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed
+copies of the US Copyright Office renewal protocols</a>, but I have
+not been able to find any film renewals there, so I suspect they only
+cover renewals for written works. I have not been able to find any
+transcribed versions of movie renewals so far. Perhaps they exist
+somewhere?</p>
+
+<p>I would love to figure out methods for finding all the public
+domain works in other countries too, but it is a lot harder. At least
+for Norway and Great Britain, such work involves tracking down the
+people involved in making the movie and figuring out when they died.
+It is hard enough to figure out who was part of making a movie, and I
+do not know how to automate such a procedure without a registry of
+every person involved in making movies, along with their year of
+death.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Is the short movie «Empty Socks» from 1927 in the public domain or not?</title>
+ <link>http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</guid>
+ <pubDate>Tue, 5 Dec 2017 12:30:00 +0100</pubDate>
+ <description><p>Three years ago, a presumed lost animation film,
+<a href="https://en.wikipedia.org/wiki/Empty_Socks">Empty Socks from
+1927</a>, was discovered in the Norwegian National Library. At the
+time it was discovered, it was generally assumed to be copyrighted by
+The Walt Disney Company, and I blogged about
+<a href="http://people.skolelinux.org/pere/blog/Opphavsretts_status_for__Empty_Socks__fra_1927_.html">my
+reasoning to conclude</a> that it would enter the Norwegian
+equivalent of the public domain in 2053, based on my understanding of
+Norwegian Copyright Law. But a few days ago, I came across
+<a href="http://www.toonzone.net/forums/threads/exposed-disneys-repurchase-of-oswald-the-rabbit-a-sham.4792291/">a
+blog post claiming the movie was already in the public domain</a>, at
+least in the USA. The reasoning is as follows: The film was released
+in November or December 1927 (sources disagree), and its copyright was
+presumably registered that year. At that time, rights holders of
+movies registered with the copyright office received government
+protection for their work for 28 years. After 28 years, the copyright
+had to be renewed if they wanted the government to protect it further.
+The blog post I found claims such a renewal did not happen for this
+movie, and that it thus entered the public domain in 1956. Yet others
+claim the copyright was renewed and the movie is still copyright
+protected. Can anyone help me figure out which claim is correct?
+I have not been able to find Empty Socks in Catalog of copyright
+entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures
+<a href="http://onlinebooks.library.upenn.edu/cce/1955r.html#film">available
+from the University of Pennsylvania</a>, neither in
+<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=83;num=45">page
+45 for the first half of 1955</a>, nor in
+<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=175;num=119">page
+119 for the second half of 1955</a>. It is of course possible that
+the renewal entry was left out of the printed catalog by mistake. Is
+there some way to rule out this possibility? Please help, and update
+the Wikipedia page with your findings.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Metadata proposal for movies on the Internet Archive</title>
+ <link>http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</guid>
+ <pubDate>Tue, 28 Nov 2017 12:00:00 +0100</pubDate>
+ <description><p>It would be easier to locate the movie you want to watch in
+<a href="https://www.archive.org/">the Internet Archive</a>, if the
+metadata about each movie was more complete and accurate. In the
+archiving community, a well-known saying states that good metadata is
+a love letter to the future. The metadata in the Internet Archive
+could use a face lift for the future to love us back. Here is a
+proposal for a small improvement that would make the metadata more
+useful today. I've been unable to find any document describing the
+various standard fields available when uploading videos to the
+archive, so this proposal is based on my best guess and on searching
+through several of the existing movies.</p>
+
+<p>I have a few use cases in mind. First of all, I would like to be
+able to count the number of distinct movies in the Internet Archive,
+without duplicates. I would further like to identify the IMDB title
+IDs of the movies in the Internet Archive, to be able to look up an
+IMDB title ID and know if I can fetch the video from there and share
+it with my friends.</p>
+
+<p>Second, I would like the Butter data provider for The Internet
+Archive
+(<a href="https://github.com/butterproviders/butter-provider-archive">available
+from github</a>) to list as many of the good movies as possible. The
+plugin currently does a search in the archive with the following
+parameters:</p>
+
+<p><pre>
+collection:moviesandfilms
+AND NOT collection:movie_trailers
+AND -mediatype:collection
+AND format:"Archive BitTorrent"
+AND year
+</pre></p>
+
+<p>Most of the cool movies that fail to show up in Butter do so
+because the 'year' field is missing. The 'year' field is populated by
+the year part from the 'date' field, and should be when the movie was
+released (date or year). Two such examples are
+<a href="https://archive.org/details/SidneyOlcottsBen-hur1905">Ben Hur
+from 1905</a> and
+<a href="https://archive.org/details/Caminandes2GranDillama">Caminandes
+2: Gran Dillama from 2013</a>, where the year metadata field is
+missing.</p>
+
+<p>So, my proposal is simply this: for every movie in The Internet
+Archive where an IMDB title ID exists, please fill in these metadata
+fields (note, they can be updated even long after the video was
+uploaded, but as far as I can tell, only by the uploader):</p>
+
+<dl>
+
+<dt>mediatype</dt>
+<dd>Should be 'movie' for movies.</dd>
+
+<dt>collection</dt>
+<dd>Should contain 'moviesandfilms'.</dd>
+
+<dt>title</dt>
+<dd>The title of the movie, without the publication year.</dd>
+
+<dt>date</dt>
+<dd>The date or year the movie was released. This makes the movie
+show up in Butter, makes it possible to know the age of the movie,
+and is useful for figuring out its copyright status.</dd>
+
+<dt>director</dt>
+<dd>The director of the movie. This makes it easier to know if the
+correct movie is found in movie databases.</dd>
+
+<dt>publisher</dt>
+<dd>The production company making the movie. Also useful for
+identifying the correct movie.</dd>
+
+<dt>links</dt>
+
+<dd>Add a link to the IMDB title page, for example like this: &lt;a
+href="http://www.imdb.com/title/tt0028496/"&gt;Movie in
+IMDB&lt;/a&gt;. This makes it easier to find duplicates and allows
+counting the number of unique movies in the Archive. Other external
+references, like to TMDB, could be added like this too.</dd>
+
+</dl>
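<p>To summarise the proposal, here is what the fields could look like
for a single upload, written as a Python dictionary. The title,
director, publisher and IMDB title ID values are illustrative
placeholders, not facts about any specific archive item:</p>

```python
# Example of the proposed metadata fields for one hypothetical upload.
# All values are placeholders for illustration; the field names are
# the ones proposed above.

metadata = {
    "mediatype": "movie",
    "collection": "moviesandfilms",
    "title": "Some Example Movie",        # title without the year
    "date": "1927",                       # release date or year
    "director": "Jane Example",           # placeholder name
    "publisher": "Example Film Company",  # placeholder company
    "links": '<a href="http://www.imdb.com/title/tt0000000/">Movie in IMDB</a>',
}

for field, value in metadata.items():
    print(field, "=", value)
```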
+
+<p>I did consider proposing a custom field for the IMDB title ID (for
+example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but I suspect
+it will be easier to simply place it in the links free text field.</p>
+
+<p>I created
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+list of IMDB title IDs for several thousand movies in the Internet
+Archive</a>, but I also got a list of several thousand movies without
+such an IMDB title ID (and quite a few duplicates). It would be great
+if this data set could be integrated into the Internet Archive
+metadata to be available for everyone in the future, but with the
+current policy of leaving metadata editing to the uploaders, it will
+take a while before this happens. If you have uploaded movies to the
+Internet Archive, you can help. Please consider following my proposal
+above for your movies, to ensure they are properly counted. :)</p>
+
+<p>The list is mostly generated using Wikidata, which, based on
+Wikipedia articles, makes it possible to link between IMDB and movies
+in the Internet Archive. But there are lots of movies without a
+Wikipedia article, and some movies where only a collection page exists
+(like for <a href="https://en.wikipedia.org/wiki/Caminandes">the
+Caminandes example above</a>, where there are three movies but only
+one Wikidata entry).</p>
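<p>For those curious about the Wikidata side, such a mapping can be
expressed as a query over two identifier properties. This is my own
sketch, assuming the commonly used Wikidata properties P345 (IMDb ID)
and P724 (Internet Archive ID); it is not the exact query used by the
scripts in the repository:</p>

```python
# Sketch of a Wikidata SPARQL query linking IMDB and Internet Archive
# identifiers. The property IDs P345 (IMDb ID) and P724 (Internet
# Archive ID) are assumptions based on common Wikidata usage, not
# taken from the repository's scripts.

query = """
SELECT ?item ?imdb ?archive WHERE {
  ?item wdt:P345 ?imdb ;
        wdt:P724 ?archive .
}
"""

# Such a query could be sent to https://query.wikidata.org/sparql
# with any SPARQL or HTTP client; here we only show the query text.
print(query)
```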
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Legal to share more than 3000 movies listed on IMDB?</title>
+ <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</guid>
+ <pubDate>Sat, 18 Nov 2017 21:20:00 +0100</pubDate>
+ <description><p>A month ago, I blogged about my work to
+<a href="http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html">automatically
+check the copyright status of IMDB entries</a>, and to try to count
+the number of movies listed in IMDB that are legal to distribute on
+the Internet. I have continued to look for good data sources, and
+identified a few more. The code used to extract information from
+various data sources is available in
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+git repository</a>, currently available from github.</p>
+
+<p>So far I have identified 3186 unique IMDB title IDs. To gain a
+better understanding of the structure of the data set, I created a
+histogram of the year associated with each movie (typically release
+year). It is interesting to notice where the peaks and dips in the
+graph are located. I wonder why they are placed there. I suspect
+World War II caused the dip around 1940, but what caused the peak
+around 2010?</p>
+
+<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2017-11-18-verk-i-det-fri-filmer.png" /></p>
+
+<p>I've so far identified ten sources for IMDB title IDs for movies in
+the public domain or with a free license. These are the statistics
+reported by running 'make stats' in the git repository:</p>
+
+<pre>
+ 249 entries ( 6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json
+ 2301 entries ( 540 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
+ 830 entries ( 29 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
+ 2109 entries ( 377 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
+ 291 entries ( 122 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
+ 144 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-manual.json
+ 350 entries ( 1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json
+ 4 entries ( 0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json
+ 698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
+ 8 entries ( 8 unique) with and 196 without IMDB title ID in free-movies-vodo.json
+ 3186 unique IMDB title IDs in total
+</pre>
+
+<p>The entries without an IMDB title ID are candidates to increase
+the data set, but might equally well be duplicates of entries already
+listed with an IMDB title ID in one of the other sources, or represent
+movies that lack an IMDB title ID. I've seen examples of all these
+situations when peeking at the entries without an IMDB title ID.
+Based on these data sources, the lower bound for the number of movies
+listed in IMDB that are legal to distribute on the Internet is
+between 3186 and 4713.</p>
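<p>For the record, the upper end of that range follows directly from
the statistics above: the 3186 titles that already have an IMDB title
ID, plus the combined count of entries still lacking one:</p>

```python
# Reproducing the 3186..4713 range quoted above from the 'make stats'
# output: the entries without an IMDB title ID could at most all turn
# out to be distinct, legally distributable movies.

with_id = 3186
without_id = [288, 801, 124, 118, 196]  # nonzero "without" counts per source
upper_bound = with_id + sum(without_id)
print(with_id, upper_bound)  # 3186 4713
```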
+
+<p>It would greatly improve the accuracy of this measurement if the
+various sources added IMDB title IDs to their metadata. I have tried
+to reach the people behind the various sources to ask if they are
+interested in doing this, without any replies so far. Perhaps you
+can help me get in touch with the people behind VODO, Public Domain
+Torrents, Public Domain Movies and Public Domain Review to try to
+convince them to add more metadata to their movie entries?</p>
+
+<p>Another way you could help is by adding pages to Wikipedia about
+movies that are legal to distribute on the Internet. If such a page
+exists and includes links to both IMDB and The Internet Archive, the
+script used to generate free-movies-archive-org-wikidata.json should
+pick up the mapping as soon as Wikidata is updated.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Some notes on fault tolerant storage systems</title>
+ <link>http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</guid>
+ <pubDate>Wed, 1 Nov 2017 15:35:00 +0100</pubDate>
+ <description><p>If you care about how fault tolerant your storage is, you might
+find these articles and papers interesting. They have shaped how I
+think when designing a storage system.</p>
+
+<ul>
+
+<li>USENIX ;login: <a
+href="https://www.usenix.org/publications/login/summer2017/ganesan">Redundancy
+Does Not Imply Fault Tolerance. Analysis of Distributed Storage
+Reactions to Single Errors and Corruptions</a> by Aishwarya Ganesan,
+Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi
+H. Arpaci-Dusseau</li>
+
+<li>ZDNet
+<a href="http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/">Why
+RAID 5 stops working in 2009</a> by Robin Harris</li>
+
+<li>ZDNet
+<a href="http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/">Why
+RAID 6 stops working in 2019</a> by Robin Harris</li>
+
+<li>USENIX FAST'07
+<a href="http://research.google.com/archive/disk_failures.pdf">Failure
+Trends in a Large Disk Drive Population</a> by Eduardo Pinheiro,
+Wolf-Dietrich Weber and Luiz André Barroso</li>
+
+<li>USENIX ;login: <a
+href="https://www.usenix.org/system/files/login/articles/hughes12-04.pdf">Data
+Integrity. Finding Truth in a World of Guesses and Lies</a> by Doug
+Hughes</li>
+
+<li>USENIX FAST'08
+<a href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
+Analysis of Data Corruption in the Storage Stack</a> by
+L. N. Bairavasundaram, G. R. Goodson, B. Schroeder, A. C.
+Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>
+
+<li>USENIX FAST'07 <a
+href="https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder_html/">Disk
+failures in the real world: what does an MTTF of 1,000,000 hours mean
+to you?</a> by B. Schroeder and G. A. Gibson.</li>
+
+<li>USENIX FAST'08 <a
+href="https://www.usenix.org/events/fast08/tech/full_papers/jiang/jiang_html/">Are
+Disks the Dominant Contributor for Storage Failures? A Comprehensive
+Study of Storage Subsystem Failure Characteristics</a> by Weihang
+Jiang, Chongfeng Hu, Yuanyuan Zhou, and Arkady Kanevsky</li>
+
+<li>SIGMETRICS 2007
+<a href="http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf">An
+analysis of latent sector errors in disk drives</a> by
+L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler</li>
+
+</ul>
+
+<p>Several of these research papers are based on data collected from
+hundreds of thousands or millions of disks, and their findings are
+eye-opening. The short story: do not blindly trust RAID or redundant
+storage systems. Details matter. And unfortunately there are few
+options on Linux addressing all the identified issues. Both ZFS and
+Btrfs are doing a fairly good job, but have legal and practical
+issues of their own. I wonder how cluster file systems like Ceph do
+in this regard. After all, there is an old saying: you know you have
+a distributed system when the crash of a computer you have never
+heard of stops you from getting any work done. The same holds true
+if fault tolerance does not work.</p>
+
+<p>Just remember, in the end it does not matter how redundant or how
+fault tolerant your storage is, if you do not continuously monitor
+its status to detect and replace failed disks.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Web services for writing academic LaTeX papers as a team</title>
+ <link>http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html</guid>
+ <pubDate>Tue, 31 Oct 2017 21:00:00 +0100</pubDate>
+ <description><p>I was surprised today to learn that a friend in academia did not
+know that there are easily available web services for writing LaTeX
+documents as a team. I thought it was common knowledge, but to make
+sure at least my readers are aware of them, I would like to mention
+these useful services for writing LaTeX documents. Some of them even
+provide a WYSIWYG editor to ease writing further.</p>
+
+<p>There are two commercial services available,
+<a href="https://sharelatex.com">ShareLaTeX</a> and
+<a href="https://overleaf.com">Overleaf</a>. They are very easy to
+use. Just start a new document, select which publisher to write for
+(ie which LaTeX style to use), and start writing. Note, these two
+have announced their intention to join forces, so soon there will
+only be one joint service. I've used both for different documents,
+and they work just fine.
+<a href="https://github.com/sharelatex/sharelatex">ShareLaTeX is free
+software</a>, while Overleaf is not. According to <a
+href="https://www.overleaf.com/help/17-is-overleaf-open-source">an
+announcement from Overleaf</a>, they plan to keep the ShareLaTeX code
+base maintained as free software.</p>
+
+<p>But these two are not the only alternatives.
+<a href="https://app.fiduswriter.org/">Fidus Writer</a> is another free
+software solution with <a href="https://github.com/fiduswriter">the
+source available on GitHub</a>. I have not used it myself. Several
+others can be found on the nice
+<a href="https://alternativeto.net/software/sharelatex/">alternativeTo
+web service</a>.</p>
+
+<p>If you like Google Docs or Etherpad, but would like to write
+documents in LaTeX, you should check out these services. You can even
+host your own, if you want to. :)</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Locating IMDB IDs of movies in the Internet Archive using Wikidata</title>
+ <link>http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html</guid>
+ <pubDate>Wed, 25 Oct 2017 12:20:00 +0200</pubDate>
+ <description><p>Recently, I needed to automatically check the copyright status of a
+set of <a href="http://www.imdb.com/">The Internet Movie Database
+(IMDB)</a> entries, to figure out which of the movies they refer to
+can be freely distributed on the Internet. This proved to be harder
+than it sounds. IMDB certainly lists movies without any copyright
+protection, where the copyright protection has expired or where the
+movie is licensed using a permissive license like one from Creative
+Commons. But these are mixed with copyright protected movies, and
+there seems to be no way to separate these classes of movies using
+the information in IMDB.</p>
+
+<p>First I tried to look up entries manually in IMDB,
+<a href="https://www.wikipedia.org/">Wikipedia</a> and
+<a href="https://www.archive.org/">The Internet Archive</a>, to get a
+feel for how to do this. It is hard to know for sure using these
+sources, but it should be possible to be reasonably confident a movie
+is "out of copyright" with a few hours of work per movie. As I needed
+to check almost 20,000 entries, this approach was not sustainable. I
+simply can not work around the clock for about 6 years to check this
+data set.</p>
+
+<p>I asked the people behind The Internet Archive if they could
+introduce a new metadata field in their metadata XML for the IMDB ID,
+but was told that they leave it completely to the uploaders to update
+the metadata. Some of the metadata entries had IMDB links in the
+description, but I found no way to download all metadata files in
+bulk to locate those, so I put that approach aside.</p>
+
+<p>In the process I noticed several Wikipedia articles about movies
+had links to both IMDB and The Internet Archive, and it occurred to
+me that I could use the Wikipedia RDF data set to locate entries with
+both, to at least get a lower bound on the number of movies on The
+Internet Archive with an IMDB ID. This is useful based on the
+assumption that movies distributed by The Internet Archive can be
+legally distributed on the Internet. With some help from the RDF
+community (thank you DanC), I was able to come up with this query to
+pass to <a href="https://query.wikidata.org/">the SPARQL interface on
+Wikidata</a>:</p>
+
+<p><pre>
+SELECT ?work ?imdb ?ia ?when ?label
+WHERE
+{
+ ?work wdt:P31/wdt:P279* wd:Q11424.
+ ?work wdt:P345 ?imdb.
+ ?work wdt:P724 ?ia.
+ OPTIONAL {
+ ?work wdt:P577 ?when.
+ ?work rdfs:label ?label.
+ FILTER(LANG(?label) = "en").
+ }
+}
+</pre></p>
+
+<p>If I understand the query right, for every film entry anywhere in
+Wikipedia, it will return the IMDB ID and The Internet Archive ID,
+along with the release date and English title if either or both of
+the latter two are available. At the moment the result set contains
+2338 entries. Of course, it depends on volunteers including both
+correct IMDB and Internet Archive IDs in the Wikipedia articles for
+the movies. It should be noted that the result will include
+duplicates if the movie has entries in several languages. There are
+some bogus entries, either because The Internet Archive ID contains a
+typo or because the movie is not available from The Internet Archive.
+I did not verify the IMDB IDs, as I am unsure how to do that
+automatically.</p>
+
+<p>I wrote a small Python script to extract the data set from
+Wikidata and check if the XML metadata for each movie is available
+from The Internet Archive, and after around 1.5 hours it produced a
+list of 2097 free movies and their IMDB IDs. In total, 171 entries
+in Wikidata lack the referenced Internet Archive entry. I assume the
+70 "disappearing" entries (ie 2338-2097-171) are duplicate entries.</p>
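+
+<p>The script itself is not reproduced here, but the approach can be
+sketched as follows. The sketch uses the public Wikidata SPARQL
+endpoint and the Internet Archive metadata API; the function names,
+the User-Agent string and the deduplication step are my own, not
+taken from the original script:</p>

```python
import json
import urllib.parse
import urllib.request

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"


def fetch_bindings(query):
    """Run a SPARQL query against Wikidata and return the JSON bindings."""
    url = SPARQL_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": query, "format": "json"})
    # Wikidata asks clients to identify themselves; the name is arbitrary.
    req = urllib.request.Request(
        url, headers={"User-Agent": "free-movies-check/0.1"})
    with urllib.request.urlopen(req) as f:
        return json.load(f)["results"]["bindings"]


def ia_id(value):
    """Reduce a P724 value to a bare Internet Archive identifier,
    guarding against entries stored as full URLs."""
    return value.rsplit("/", 1)[-1]


def unique_pairs(bindings):
    """Deduplicate (imdb, ia) pairs; the query returns one row per
    language label, so the same movie can show up several times."""
    return sorted({(b["imdb"]["value"], ia_id(b["ia"]["value"]))
                   for b in bindings})


def ia_metadata_exists(identifier):
    """True if The Internet Archive knows the identifier; the metadata
    endpoint returns an empty JSON object for unknown ones."""
    url = "https://archive.org/metadata/" + urllib.parse.quote(identifier)
    with urllib.request.urlopen(url) as f:
        return bool(json.load(f))
```

+<p>Running <tt>fetch_bindings()</tt> with the query above and calling
+<tt>ia_metadata_exists()</tt> for each unique pair should reproduce
+the counts reported here, modulo changes in Wikidata since then.</p>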
+
+<p>This is not too bad, given that The Internet Archive reports that
+it contains <a href="https://archive.org/details/feature_films">5331
+feature films</a> at the moment, but it also means more than 3000
+movies are either missing on Wikipedia or missing the pair of
+references on Wikipedia.</p>
+
+<p>I was curious about the distribution by release year, and made a
+little graph to show how the number of free movies is spread over the
+years:</p>
+
+<p><img src="http://people.skolelinux.org/pere/blog/images/2017-10-25-verk-i-det-fri-filmer.png"></p>
+
+<p>I expect the relative distribution of the remaining 3000 movies to
+be similar.</p>
+
+<p>If you want to help, and want to ensure Wikipedia can be used to
+cross reference The Internet Archive and The Internet Movie Database,
+please make sure entries like this are listed under the "External
+links" heading on the Wikipedia article for the movie:</p>
+
+<p><pre>
+* {{Internet Archive film|id=FightingLady}}
+* {{IMDb title|id=0036823|title=The Fighting Lady}}
+</pre></p>
+
+<p>Please verify the links on the final page, to make sure you did not
+introduce a typo.</p>
+
+<p>Here is the complete list, if you want to correct the 171
+identified Wikipedia entries with broken links to The Internet
+Archive: <a href="http://www.wikidata.org/entity/Q1140317">Q1140317</a>,
+<a href="http://www.wikidata.org/entity/Q458656">Q458656</a>,
+<a href="http://www.wikidata.org/entity/Q458656">Q458656</a>,
+<a href="http://www.wikidata.org/entity/Q470560">Q470560</a>,
+<a href="http://www.wikidata.org/entity/Q743340">Q743340</a>,
+<a href="http://www.wikidata.org/entity/Q822580">Q822580</a>,
+<a href="http://www.wikidata.org/entity/Q480696">Q480696</a>,
+<a href="http://www.wikidata.org/entity/Q128761">Q128761</a>,
+<a href="http://www.wikidata.org/entity/Q1307059">Q1307059</a>,
+<a href="http://www.wikidata.org/entity/Q1335091">Q1335091</a>,
+<a href="http://www.wikidata.org/entity/Q1537166">Q1537166</a>,
+<a href="http://www.wikidata.org/entity/Q1438334">Q1438334</a>,
+<a href="http://www.wikidata.org/entity/Q1479751">Q1479751</a>,
+<a href="http://www.wikidata.org/entity/Q1497200">Q1497200</a>,
+<a href="http://www.wikidata.org/entity/Q1498122">Q1498122</a>,
+<a href="http://www.wikidata.org/entity/Q865973">Q865973</a>,
+<a href="http://www.wikidata.org/entity/Q834269">Q834269</a>,
+<a href="http://www.wikidata.org/entity/Q841781">Q841781</a>,
+<a href="http://www.wikidata.org/entity/Q841781">Q841781</a>,
+<a href="http://www.wikidata.org/entity/Q1548193">Q1548193</a>,
+<a href="http://www.wikidata.org/entity/Q499031">Q499031</a>,
+<a href="http://www.wikidata.org/entity/Q1564769">Q1564769</a>,
+<a href="http://www.wikidata.org/entity/Q1585239">Q1585239</a>,
+<a href="http://www.wikidata.org/entity/Q1585569">Q1585569</a>,
+<a href="http://www.wikidata.org/entity/Q1624236">Q1624236</a>,
+<a href="http://www.wikidata.org/entity/Q4796595">Q4796595</a>,
+<a href="http://www.wikidata.org/entity/Q4853469">Q4853469</a>,
+<a href="http://www.wikidata.org/entity/Q4873046">Q4873046</a>,
+<a href="http://www.wikidata.org/entity/Q915016">Q915016</a>,
+<a href="http://www.wikidata.org/entity/Q4660396">Q4660396</a>,
+<a href="http://www.wikidata.org/entity/Q4677708">Q4677708</a>,
+<a href="http://www.wikidata.org/entity/Q4738449">Q4738449</a>,
+<a href="http://www.wikidata.org/entity/Q4756096">Q4756096</a>,
+<a href="http://www.wikidata.org/entity/Q4766785">Q4766785</a>,
+<a href="http://www.wikidata.org/entity/Q880357">Q880357</a>,
+<a href="http://www.wikidata.org/entity/Q882066">Q882066</a>,
+<a href="http://www.wikidata.org/entity/Q882066">Q882066</a>,
+<a href="http://www.wikidata.org/entity/Q204191">Q204191</a>,
+<a href="http://www.wikidata.org/entity/Q204191">Q204191</a>,
+<a href="http://www.wikidata.org/entity/Q1194170">Q1194170</a>,
+<a href="http://www.wikidata.org/entity/Q940014">Q940014</a>,
+<a href="http://www.wikidata.org/entity/Q946863">Q946863</a>,
+<a href="http://www.wikidata.org/entity/Q172837">Q172837</a>,
+<a href="http://www.wikidata.org/entity/Q573077">Q573077</a>,
+<a href="http://www.wikidata.org/entity/Q1219005">Q1219005</a>,
+<a href="http://www.wikidata.org/entity/Q1219599">Q1219599</a>,
+<a href="http://www.wikidata.org/entity/Q1643798">Q1643798</a>,
+<a href="http://www.wikidata.org/entity/Q1656352">Q1656352</a>,
+<a href="http://www.wikidata.org/entity/Q1659549">Q1659549</a>,
+<a href="http://www.wikidata.org/entity/Q1660007">Q1660007</a>,
+<a href="http://www.wikidata.org/entity/Q1698154">Q1698154</a>,
+<a href="http://www.wikidata.org/entity/Q1737980">Q1737980</a>,
+<a href="http://www.wikidata.org/entity/Q1877284">Q1877284</a>,
+<a href="http://www.wikidata.org/entity/Q1199354">Q1199354</a>,
+<a href="http://www.wikidata.org/entity/Q1199354">Q1199354</a>,
+<a href="http://www.wikidata.org/entity/Q1199451">Q1199451</a>,
+<a href="http://www.wikidata.org/entity/Q1211871">Q1211871</a>,
+<a href="http://www.wikidata.org/entity/Q1212179">Q1212179</a>,
+<a href="http://www.wikidata.org/entity/Q1238382">Q1238382</a>,
+<a href="http://www.wikidata.org/entity/Q4906454">Q4906454</a>,
+<a href="http://www.wikidata.org/entity/Q320219">Q320219</a>,
+<a href="http://www.wikidata.org/entity/Q1148649">Q1148649</a>,
+<a href="http://www.wikidata.org/entity/Q645094">Q645094</a>,
+<a href="http://www.wikidata.org/entity/Q5050350">Q5050350</a>,
+<a href="http://www.wikidata.org/entity/Q5166548">Q5166548</a>,
+<a href="http://www.wikidata.org/entity/Q2677926">Q2677926</a>,
+<a href="http://www.wikidata.org/entity/Q2698139">Q2698139</a>,
+<a href="http://www.wikidata.org/entity/Q2707305">Q2707305</a>,
+<a href="http://www.wikidata.org/entity/Q2740725">Q2740725</a>,
+<a href="http://www.wikidata.org/entity/Q2024780">Q2024780</a>,
+<a href="http://www.wikidata.org/entity/Q2117418">Q2117418</a>,
+<a href="http://www.wikidata.org/entity/Q2138984">Q2138984</a>,
+<a href="http://www.wikidata.org/entity/Q1127992">Q1127992</a>,
+<a href="http://www.wikidata.org/entity/Q1058087">Q1058087</a>,
+<a href="http://www.wikidata.org/entity/Q1070484">Q1070484</a>,
+<a href="http://www.wikidata.org/entity/Q1080080">Q1080080</a>,
+<a href="http://www.wikidata.org/entity/Q1090813">Q1090813</a>,
+<a href="http://www.wikidata.org/entity/Q1251918">Q1251918</a>,
+<a href="http://www.wikidata.org/entity/Q1254110">Q1254110</a>,
+<a href="http://www.wikidata.org/entity/Q1257070">Q1257070</a>,
+<a href="http://www.wikidata.org/entity/Q1257079">Q1257079</a>,
+<a href="http://www.wikidata.org/entity/Q1197410">Q1197410</a>,
+<a href="http://www.wikidata.org/entity/Q1198423">Q1198423</a>,
+<a href="http://www.wikidata.org/entity/Q706951">Q706951</a>,
+<a href="http://www.wikidata.org/entity/Q723239">Q723239</a>,
+<a href="http://www.wikidata.org/entity/Q2079261">Q2079261</a>,
+<a href="http://www.wikidata.org/entity/Q1171364">Q1171364</a>,
+<a href="http://www.wikidata.org/entity/Q617858">Q617858</a>,
+<a href="http://www.wikidata.org/entity/Q5166611">Q5166611</a>,
+<a href="http://www.wikidata.org/entity/Q5166611">Q5166611</a>,
+<a href="http://www.wikidata.org/entity/Q324513">Q324513</a>,
+<a href="http://www.wikidata.org/entity/Q374172">Q374172</a>,
+<a href="http://www.wikidata.org/entity/Q7533269">Q7533269</a>,
+<a href="http://www.wikidata.org/entity/Q970386">Q970386</a>,
+<a href="http://www.wikidata.org/entity/Q976849">Q976849</a>,
+<a href="http://www.wikidata.org/entity/Q7458614">Q7458614</a>,
+<a href="http://www.wikidata.org/entity/Q5347416">Q5347416</a>,
+<a href="http://www.wikidata.org/entity/Q5460005">Q5460005</a>,
+<a href="http://www.wikidata.org/entity/Q5463392">Q5463392</a>,
+<a href="http://www.wikidata.org/entity/Q3038555">Q3038555</a>,
+<a href="http://www.wikidata.org/entity/Q5288458">Q5288458</a>,
+<a href="http://www.wikidata.org/entity/Q2346516">Q2346516</a>,
+<a href="http://www.wikidata.org/entity/Q5183645">Q5183645</a>,
+<a href="http://www.wikidata.org/entity/Q5185497">Q5185497</a>,
+<a href="http://www.wikidata.org/entity/Q5216127">Q5216127</a>,
+<a href="http://www.wikidata.org/entity/Q5223127">Q5223127</a>,
+<a href="http://www.wikidata.org/entity/Q5261159">Q5261159</a>,
+<a href="http://www.wikidata.org/entity/Q1300759">Q1300759</a>,
+<a href="http://www.wikidata.org/entity/Q5521241">Q5521241</a>,
+<a href="http://www.wikidata.org/entity/Q7733434">Q7733434</a>,
+<a href="http://www.wikidata.org/entity/Q7736264">Q7736264</a>,
+<a href="http://www.wikidata.org/entity/Q7737032">Q7737032</a>,
+<a href="http://www.wikidata.org/entity/Q7882671">Q7882671</a>,
+<a href="http://www.wikidata.org/entity/Q7719427">Q7719427</a>,
+<a href="http://www.wikidata.org/entity/Q7719444">Q7719444</a>,
+<a href="http://www.wikidata.org/entity/Q7722575">Q7722575</a>,
+<a href="http://www.wikidata.org/entity/Q2629763">Q2629763</a>,
+<a href="http://www.wikidata.org/entity/Q2640346">Q2640346</a>,
+<a href="http://www.wikidata.org/entity/Q2649671">Q2649671</a>,
+<a href="http://www.wikidata.org/entity/Q7703851">Q7703851</a>,
+<a href="http://www.wikidata.org/entity/Q7747041">Q7747041</a>,
+<a href="http://www.wikidata.org/entity/Q6544949">Q6544949</a>,
+<a href="http://www.wikidata.org/entity/Q6672759">Q6672759</a>,
+<a href="http://www.wikidata.org/entity/Q2445896">Q2445896</a>,
+<a href="http://www.wikidata.org/entity/Q12124891">Q12124891</a>,
+<a href="http://www.wikidata.org/entity/Q3127044">Q3127044</a>,
+<a href="http://www.wikidata.org/entity/Q2511262">Q2511262</a>,
+<a href="http://www.wikidata.org/entity/Q2517672">Q2517672</a>,
+<a href="http://www.wikidata.org/entity/Q2543165">Q2543165</a>,
+<a href="http://www.wikidata.org/entity/Q426628">Q426628</a>,
+<a href="http://www.wikidata.org/entity/Q426628">Q426628</a>,
+<a href="http://www.wikidata.org/entity/Q12126890">Q12126890</a>,
+<a href="http://www.wikidata.org/entity/Q13359969">Q13359969</a>,
+<a href="http://www.wikidata.org/entity/Q13359969">Q13359969</a>,
+<a href="http://www.wikidata.org/entity/Q2294295">Q2294295</a>,
+<a href="http://www.wikidata.org/entity/Q2294295">Q2294295</a>,
+<a href="http://www.wikidata.org/entity/Q2559509">Q2559509</a>,
+<a href="http://www.wikidata.org/entity/Q2559912">Q2559912</a>,
+<a href="http://www.wikidata.org/entity/Q7760469">Q7760469</a>,
+<a href="http://www.wikidata.org/entity/Q6703974">Q6703974</a>,
+<a href="http://www.wikidata.org/entity/Q4744">Q4744</a>,
+<a href="http://www.wikidata.org/entity/Q7766962">Q7766962</a>,
+<a href="http://www.wikidata.org/entity/Q7768516">Q7768516</a>,
+<a href="http://www.wikidata.org/entity/Q7769205">Q7769205</a>,
+<a href="http://www.wikidata.org/entity/Q7769988">Q7769988</a>,
+<a href="http://www.wikidata.org/entity/Q2946945">Q2946945</a>,
+<a href="http://www.wikidata.org/entity/Q3212086">Q3212086</a>,
+<a href="http://www.wikidata.org/entity/Q3212086">Q3212086</a>,
+<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
+<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
+<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
+<a href="http://www.wikidata.org/entity/Q6909175">Q6909175</a>,
+<a href="http://www.wikidata.org/entity/Q7405709">Q7405709</a>,
+<a href="http://www.wikidata.org/entity/Q7416149">Q7416149</a>,
+<a href="http://www.wikidata.org/entity/Q7239952">Q7239952</a>,
+<a href="http://www.wikidata.org/entity/Q7317332">Q7317332</a>,
+<a href="http://www.wikidata.org/entity/Q7783674">Q7783674</a>,
+<a href="http://www.wikidata.org/entity/Q7783704">Q7783704</a>,
+<a href="http://www.wikidata.org/entity/Q7857590">Q7857590</a>,
+<a href="http://www.wikidata.org/entity/Q3372526">Q3372526</a>,
+<a href="http://www.wikidata.org/entity/Q3372642">Q3372642</a>,
+<a href="http://www.wikidata.org/entity/Q3372816">Q3372816</a>,
+<a href="http://www.wikidata.org/entity/Q3372909">Q3372909</a>,
+<a href="http://www.wikidata.org/entity/Q7959649">Q7959649</a>,
+<a href="http://www.wikidata.org/entity/Q7977485">Q7977485</a>,
+<a href="http://www.wikidata.org/entity/Q7992684">Q7992684</a>,
+<a href="http://www.wikidata.org/entity/Q3817966">Q3817966</a>,
+<a href="http://www.wikidata.org/entity/Q3821852">Q3821852</a>,
+<a href="http://www.wikidata.org/entity/Q3420907">Q3420907</a>,
+<a href="http://www.wikidata.org/entity/Q3429733">Q3429733</a>,
+<a href="http://www.wikidata.org/entity/Q774474">Q774474</a></p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>A one-way wall on the border?</title>
+ <link>http://people.skolelinux.org/pere/blog/A_one_way_wall_on_the_border_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/A_one_way_wall_on_the_border_.html</guid>
+ <pubDate>Sat, 14 Oct 2017 22:10:00 +0200</pubDate>
+ <description><p>I find it fascinating how many of the people being locked inside
+the proposed border wall between the USA and Mexico support the idea.
+The proposal to keep Mexicans out reminds me of
+<a href="http://www.history.com/news/10-things-you-may-not-know-about-the-berlin-wall">the
+propaganda twist from the East German government</a> calling the wall
+the “Antifascist Bulwark” after erecting the Berlin Wall, claiming
+that the wall was erected to keep enemies from creeping into East
+Germany, while it was obvious to the people locked inside it that it
+was erected to keep the people from escaping.</p>
+
+<p>Do the people in the USA supporting this wall really believe it is
+a one-way wall, only keeping people on the outside from getting in,
+while not keeping people on the inside from getting out?</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Generating 3D prints in Debian using Cura and Slic3r(-prusa)</title>
+ <link>http://people.skolelinux.org/pere/blog/Generating_3D_prints_in_Debian_using_Cura_and_Slic3r__prusa_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Generating_3D_prints_in_Debian_using_Cura_and_Slic3r__prusa_.html</guid>
+ <pubDate>Mon, 9 Oct 2017 10:50:00 +0200</pubDate>
+ <description><p>At my nearby maker space,
+<a href="http://sonen.ifi.uio.no/">Sonen</a>, I heard the story that
+it was easier to generate gcode files for their 3D printers
+(Ultimaker 2+) on Windows and MacOS X than on Linux, because the
+software involved had to be manually compiled and set up on Linux
+while premade packages worked out of the box on Windows and MacOS X.
+I found this annoying, as the software involved,
+<a href="https://github.com/Ultimaker/Cura">Cura</a>, is free software
+and should be trivial to get up and running on Linux if someone took
+the time to package it for the relevant distributions. I even found
+<a href="https://bugs.debian.org/706656">a request from 2013 to add
+it to Debian</a>, which had seen some activity over the years but
+never resulted in the software showing up in Debian. So a few days
+ago I offered my help to try to improve the situation.</p>
+
+<p>Now I am very happy to see that all the packages required for a
+working Cura in Debian have been uploaded and are waiting in the NEW
+queue for the ftpmasters to have a look. You can track the progress
+on
+<a href="https://qa.debian.org/developer.php?email=3dprinter-general%40lists.alioth.debian.org">the
+status page for the 3D printer team</a>.</p>
+
+<p>The uploaded packages are a bit behind upstream, and were uploaded
+now to get slots in <a href="https://ftp-master.debian.org/new.html">the NEW
+queue</a> while we work on updating the packages to the latest
+upstream version.</p>
+
+<p>On a related note, two competitors to Cura, which I found harder
+to use and was unable to configure correctly for the Ultimaker 2+ in
+the short time I spent on them, are already in Debian. If you are
+looking for 3D printer "slicers" and want something already available
+in Debian, check out
+<a href="https://tracker.debian.org/pkg/slic3r">slic3r</a> and
+<a href="https://tracker.debian.org/pkg/slic3r-prusa">slic3r-prusa</a>.
+The latter is a fork of the former.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Visualizing GSM radio chatter using gr-gsm and Hopglass</title>
+ <link>http://people.skolelinux.org/pere/blog/Visualizing_GSM_radio_chatter_using_gr_gsm_and_Hopglass.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Visualizing_GSM_radio_chatter_using_gr_gsm_and_Hopglass.html</guid>
+ <pubDate>Fri, 29 Sep 2017 10:30:00 +0200</pubDate>
+ <description><p>Every mobile phone announces its existence over radio to the
+nearby mobile cell towers, and this radio chatter is available to
+anyone with a radio receiver capable of picking it up. Details about
+the mobile phones are of course collected with very good accuracy by
+the phone companies, but that is not the topic of this blog post.
+The mobile phone radio chatter makes it possible to figure out when a
+cell phone is nearby, as it includes the SIM card ID (IMSI). By
+paying attention over time, one can see when a phone arrives and when
+it leaves an area. I believe it would be nice to make this
+information more available to the general public, to make more people
+aware of how their phones are announcing their whereabouts to anyone
+that cares to listen.</p>
+
+<p>I am very happy to report that we managed to get something
+visualizing this information up and running for
+<a href="http://norwaymakers.org/osf17">Oslo Skaperfestival 2017</a>
+(Oslo Makers Festival) taking place today and tomorrow at Deichmanske
+library. The solution is based on the
+<a href="http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html">simple
+recipe for listening to GSM chatter</a> I posted a few days ago, and
+will show up at the stand of <a href="http://sonen.ifi.uio.no/">Åpen
+Sone from the Computer Science department of the University of
+Oslo</a>. The presentation will show the nearby mobile phones (aka
+IMSIs) as dots in a web browser graph, with lines to the dots
+representing the mobile base stations they are talking to. It was
+working in the lab yesterday, and was moved into place this
+morning.</p>
+
+<p>We set up a fairly powerful desktop machine using Debian
+Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers
+connected and visualize the visible cell phone towers using an
+<a href="https://github.com/marlow925/hopglass">English version of
+Hopglass</a>. A fairly powerful machine is needed, as the
+grgsm_livemon_headless processes from
+<a href="https://tracker.debian.org/pkg/gr-gsm">gr-gsm</a> converting
+the radio signal to data packets are quite CPU intensive.</p>
+
+<p>The frequencies to listen to are identified using a slightly
+patched scan-and-livemon (to set the --args values for each receiver),
+and the Hopglass data is generated using the
+<a href="https://github.com/petterreinholdtsen/IMSI-catcher/tree/meshviewer-output">patches
+in my meshviewer-output branch</a>. For some reason we could not get
+more than four SDRs working. There is also a geographical map trying
+to show the location of the base stations, but I believe their
+coordinates are hardcoded to some random location in Germany. The
+code should be replaced with code to look up locations in a text
+file, a sqlite database or one of the online databases mentioned in
+<a href="https://github.com/Oros42/IMSI-catcher/issues/14">the github
+issue for the topic</a>.</p>
+
+<p>If this sounds interesting, visit the stand at the festival!</p>
+</description>
+ </item>
+
+ <item>
+ <title>Easier recipe to observe the cell phones around you</title>
+ <link>http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html</guid>
+ <pubDate>Sun, 24 Sep 2017 08:30:00 +0200</pubDate>
+ <description><p>A little more than a month ago I wrote
+<a href="http://people.skolelinux.org/pere/blog/Simpler_recipe_on_how_to_make_a_simple__7_IMSI_Catcher_using_Debian.html">how
+to observe the SIM card ID (aka IMSI number) of mobile phones talking
+to nearby mobile phone base stations using Debian GNU/Linux and a
+cheap USB software defined radio</a>, and thus being able to pinpoint
+the location of people and equipment (like cars and trains) with an
+accuracy of a few kilometers. Since then we have worked to make the
+procedure even simpler, and it is now possible to do this without any
+manual frequency tuning and without building your own packages.</p>
+
+<p>The <a href="https://tracker.debian.org/pkg/gr-gsm">gr-gsm</a>
+package is now included in Debian testing and unstable, and the
+IMSI-catcher code no longer requires root access to fetch and decode
+the GSM data collected using gr-gsm.</p>
+
+<p>Here is an updated recipe, using packages built by Debian and a git
+clone of two python scripts:</p>
+
+<ol>
+
+<li>Start with a Debian machine running the Buster version (aka
+ testing).</li>
+
+<li>Run '<tt>apt install gr-gsm python-numpy python-scipy
+ python-scapy</tt>' as root to install required packages.</li>
+
+<li>Fetch the code decoding GSM packets using '<tt>git clone
+  https://github.com/Oros42/IMSI-catcher.git</tt>'.</li>
+
+<li>Insert a USB software defined radio supported by GNU Radio.</li>
+
+<li>Enter the IMSI-catcher directory and run '<tt>python
+ scan-and-livemon</tt>' to locate the frequency of nearby base
+  stations and start listening for GSM packets on one of them.</li>
+
+<li>Enter the IMSI-catcher directory and run '<tt>python
+ simple_IMSI-catcher.py</tt>' to display the collected information.</li>
+
+</ol>
+
+<p>Note, due to a bug somewhere, the scan-and-livemon program (actually
+<a href="https://github.com/ptrkrysik/gr-gsm/issues/336">its underlying
+program grgsm_scanner</a>) does not work with the HackRF radio. It does
+work with RTL2832 and other similar USB radio receivers you can get
+very cheaply
+(<a href="https://www.ebay.com/sch/items/?_nkw=rtl+2832">for example
+from ebay</a>), so for now the solution is to scan using the RTL radio
+and only use the HackRF for fetching GSM data.</p>
+
+<p>As far as I can tell, a cell phone only shows up on one of the
+frequencies at a time, so if you are going to track and count every
+cell phone around you, you need to listen to all the frequencies used.
+To listen to several frequencies, use the --numrecv argument to
+scan-and-livemon to use several receivers. Further, I am not sure if
+phones using 3G or 4G will show up as talking GSM to base stations, so
+this approach might not see all phones around you. I typically see
+0-400 IMSI numbers an hour when looking around where I live.</p>
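+Per-hour counts like these can be tallied from a log of observations.
+A minimal sketch, assuming a hypothetical log of (timestamp, IMSI)
+pairs rather than the actual simple_IMSI-catcher output format:

```python
# Sketch: count distinct IMSIs per hour from (timestamp, imsi) pairs.
# The input format is hypothetical, not simple_IMSI-catcher's output.
from collections import defaultdict
from datetime import datetime

def imsis_per_hour(records):
    """records: iterable of (ISO timestamp string, IMSI string) pairs."""
    hourly = defaultdict(set)
    for ts, imsi in records:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        hourly[hour].add(imsi)
    return {hour: len(imsis) for hour, imsis in hourly.items()}
```

+Counting distinct IMSIs per hour, rather than raw sightings, avoids
+double-counting phones that re-register on the same base station.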
+
+<p>I've tried to run the scanner on a
+<a href="https://wiki.debian.org/RaspberryPi">Raspberry Pi 2 and 3
+running Debian Buster</a>, but the grgsm_livemon_headless process seems
+to be too CPU intensive to keep up. When GNU Radio prints 'O' to
+stdout, I am told it is caused by a buffer overflow between the
+radio and GNU Radio, caused by the program being unable to read the
+GSM data fast enough. If you see a stream of 'O's from the terminal
+where you started scan-and-livemon, you need to give the process more
+CPU power. Perhaps someone is able to optimize the code to a point
+where it becomes possible to set up RPi3 based GSM sniffers? I tried
+using Raspbian instead of Debian, but there seems to be something wrong
+with GNU Radio on Raspbian, causing glibc to abort().</p>
+</description>
+ </item>
+
+ <item>
+ <title>Simpler recipe on how to make a simple $7 IMSI Catcher using Debian</title>
+ <link>http://people.skolelinux.org/pere/blog/Simpler_recipe_on_how_to_make_a_simple__7_IMSI_Catcher_using_Debian.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Simpler_recipe_on_how_to_make_a_simple__7_IMSI_Catcher_using_Debian.html</guid>
+ <pubDate>Wed, 9 Aug 2017 23:59:00 +0200</pubDate>
+ <description><p>On friday, I came across an interesting article in the Norwegian
+web based ICT news magazine digi.no on
+<a href="https://www.digi.no/artikler/sikkerhetsforsker-lagde-enkel-imsi-catcher-for-60-kroner-na-kan-mobiler-kartlegges-av-alle/398588">how
+to collect the IMSI numbers of nearby cell phones</a> using the cheap
+DVB-T software defined radios. The article referred to instructions
+and <a href="https://www.youtube.com/watch?v=UjwgNd_as30">a recipe by
+Keld Norman on Youtube on how to make a simple $7 IMSI Catcher</a>, and I decided to test them out.</p>
+
+<p>The instructions said to use Ubuntu, install pip using apt (to
+bypass apt), use pip to install pybombs (to bypass both apt and pip),
+and then ask pybombs to fetch and build everything you need from
+scratch. I wanted to see if I could do the same on the most recent
+Debian packages, but this did not work because pybombs tried to build
+stuff that no longer builds with the most recent openssl library or
+some other version skew problem. While trying to get this recipe
+working, I learned that the apt->pip->pybombs route was a long detour,
+and the only piece of software dependency missing in Debian was the
+gr-gsm package. I also found out that the lead upstream developer of
+the gr-gsm (the name stands for GNU Radio GSM) project already had a
+set of Debian packages provided in an Ubuntu PPA repository. All I
+needed to do was to dget the Debian source package and build it.</p>
+
+<p>The IMSI collector is a python script listening for packets on the
+loopback network device and printing to the terminal some specific GSM
+packets with IMSI numbers in them. The code is fairly short and easy
+to understand. The reason this works is that gr-gsm includes a tool
+to read GSM data from a software defined radio like a DVB-T USB stick
+and other software defined radios, decode it and inject it into a
+network device on your Linux machine (using the loopback device by
+default). This proved to work just fine, and I've been testing the
+collector for a few days now.</p>
+
+<p>The updated and simpler recipe is thus to</p>
+
+<ol>
+
+<li>start with a Debian machine running Stretch or newer,</li>
+
+<li>build and install the gr-gsm package available from
+<a href="http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/">http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/</a>,</li>
+
+<li>clone the git repostory from <a href="https://github.com/Oros42/IMSI-catcher">https://github.com/Oros42/IMSI-catcher</a>,</li>
+
+<li>run grgsm_livemon and adjust the frequency until the terminal
+where it was started is filled with a stream of text (meaning you
+found a GSM station).</li>
+
+<li>go into the IMSI-catcher directory and run 'sudo python simple_IMSI-catcher.py' to extract the IMSI numbers.</li>
+
+</ol>
+
+<p>To make it even easier in the future to get this sniffer up and
+running, I decided to package
+<a href="https://github.com/ptrkrysik/gr-gsm/">the gr-gsm project</a>
+for Debian (<a href="https://bugs.debian.org/871055">WNPP
+#871055</a>), and the package was uploaded into the NEW queue today.
+Luckily the gnuradio maintainer has promised to help me, as I do not
+know much about gnuradio stuff yet.</p>
+
+<p>I doubt this "IMSI catcher" is anywhere near as powerful as
+commercial tools like
+<a href="https://www.thespyphone.com/portable-imsi-imei-catcher/">The
+Spy Phone Portable IMSI / IMEI Catcher</a> or the
+<a href="https://en.wikipedia.org/wiki/Stingray_phone_tracker">Harris
+Stingray</a>, but I hope the existence of cheap alternatives can make
+more people realise how easily their whereabouts are tracked when
+carrying a cell phone. Seeing the data flow on the screen, realizing
+that I live close to a police station and knowing that police officers
+also carry cell phones, I wonder how hard it would be for criminals to
+track the position of police officers to discover when there are
+police nearby, or for foreign military forces to track the location
+of the Norwegian military forces, or for anyone to track the location
+of government officials...</p>
+
+<p>It is worth noting that the data reported by the IMSI-catcher
+script mentioned above is only a fraction of the data broadcast on
+the GSM network. It will only collect one frequency at a time,
+while a typical phone will be using several frequencies, and not all
+phones will be using the frequencies tracked by the grgsm_livemon
+program. Also, there is a lot of radio chatter being ignored by the
+simple_IMSI-catcher script, which would be collected by extending the
+parser code. I wonder if gr-gsm can be set up to listen to more than
+one frequency?</p>
+</description>
+ </item>
+
+ <item>
+ <title>Norwegian Bokmål edition of Debian Administrator's Handbook is now available</title>
+ <link>http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_edition_of_Debian_Administrator_s_Handbook_is_now_available.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_edition_of_Debian_Administrator_s_Handbook_is_now_available.html</guid>
+ <pubDate>Tue, 25 Jul 2017 21:10:00 +0200</pubDate>
+ <description><p align="center"><img align="center" src="http://people.skolelinux.org/pere/blog/images/2017-07-25-debian-handbook-nb-testprint.png"/></p>
+
+<p>I finally received a copy of the Norwegian Bokmål edition of
+"<a href="https://debian-handbook.info/">The Debian Administrator's
+Handbook</a>". This test copy arrived in the mail a few days ago, and
+I am very happy to hold the result in my hand. We spent around one
+and a half years translating it. This paperback edition
+<a href="https://debian-handbook.info/get/#norwegian">is available
+from lulu.com</a>. If you buy it quickly, you save 25% on the list
+price. The book is also available for download in electronic form as
+PDF, EPUB and Mobipocket, and can be
+<a href="https://debian-handbook.info/browse/nb-NO/stable/">read online
+as a web page</a>.</p>
+
+<p>This is the second book I have published (the first was the book
+"<a href="http://free-culture.cc/">Free Culture</a>" by Lawrence Lessig
+in
+<a href="http://www.lulu.com/shop/lawrence-lessig/free-culture/paperback/product-22440520.html">English</a>,
+<a href="http://www.lulu.com/shop/lawrence-lessig/culture-libre/paperback/product-22645082.html">French</a>
+and
+<a href="http://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Norwegian
+Bokmål</a>), and I am very excited to finally wrap up this
+project. I hope
+"<a href="http://www.lulu.com/shop/rapha%C3%ABl-hertzog-and-roland-mas/h%C3%A5ndbok-for-debian-administratoren/paperback/product-23262290.html">Håndbok
+for Debian-administratoren</a>" will be well received.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Updated sales number for my Free Culture paper editions</title>
+ <link>http://people.skolelinux.org/pere/blog/Updated_sales_number_for_my_Free_Culture_paper_editions.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Updated_sales_number_for_my_Free_Culture_paper_editions.html</guid>
+ <pubDate>Mon, 12 Jun 2017 11:40:00 +0200</pubDate>
+<description><p>It is pleasing to see that the work we put into publishing new
+editions of the classic <a href="http://www.free-culture.cc/">Free
+Culture book</a> by the founder of the Creative Commons movement,
+Lawrence Lessig, is still being appreciated. I had a look at the
+latest sales numbers for the paper edition today. Not too impressive,
+but I am happy to see some buyers still exist. All the revenue from the
+books is sent to the <a href="https://creativecommons.org/">Creative
+Commons Corporation</a>, and they receive the largest cut if you buy
+directly from Lulu. Most books are sold via Amazon, with Ingram
+second and only a small fraction directly from Lulu. The ebook
+edition is available for free from
+<a href="https://github.com/petterreinholdtsen/free-culture-lessig">Github</a>.</p>
+
+<table border="0">
+<tr><th rowspan="2" valign="bottom">Title / language</th><th colspan="3">Quantity</th></tr>
+<tr><th>2016 jan-jun</th><th>2016 jul-dec</th><th>2017 jan-may</th></tr>
+
+<tr>
+ <td><a href="http://www.lulu.com/shop/lawrence-lessig/culture-libre/paperback/product-22645082.html">Culture Libre / French</a></td>
+ <td align="right">3</td>
+ <td align="right">6</td>
+ <td align="right">15</td>
+</tr>
+
+<tr>
+ <td><a href="http://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Fri kultur / Norwegian</a></td>
+ <td align="right">7</td>
+ <td align="right">1</td>
+ <td align="right">0</td>
+</tr>
+
+<tr>
+ <td><a href="http://www.lulu.com/shop/lawrence-lessig/free-culture/paperback/product-22440520.html">Free Culture / English</a></td>
+ <td align="right">14</td>
+ <td align="right">27</td>
+ <td align="right">16</td>
+</tr>
+
+<tr>
+ <td>Total</td>
+ <td align="right">24</td>
+ <td align="right">34</td>
+ <td align="right">31</td>
+</tr>
+
+</table>
+
+<p>It is a bit sad to see the low sales numbers for the Norwegian
+edition, and a bit surprising that the English edition is still
+selling so well.</p>
+
+<p>If you would like to translate and publish the book in your native
+language, I would be happy to help make it happen. Please get in
+touch.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Release 0.1.1 of free software archive system Nikita announced</title>
+ <link>http://people.skolelinux.org/pere/blog/Release_0_1_1_of_free_software_archive_system_Nikita_announced.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Release_0_1_1_of_free_software_archive_system_Nikita_announced.html</guid>
+ <pubDate>Sat, 10 Jun 2017 00:40:00 +0200</pubDate>
+ <description><p>I am very happy to report that the
+<a href="https://github.com/hiOA-ABI/nikita-noark5-core">Nikita Noark 5
+core project</a> tagged its second release today. The free software
+solution is an implementation of the Norwegian archive standard Noark
+5 used by government offices in Norway. These were the changes in
+version 0.1.1 since version 0.1.0 (from NEWS.md):</p>
+
+<ul>
+
+ <li>Continued work on the angularjs GUI, including document upload.</li>
+ <li>Implemented correspondencepartPerson, correspondencepartUnit and
+ correspondencepartInternal</li>
+  <li>Applied for Coverity coverage and started submitting code on a
+  regular basis.</li>
+  <li>Started fixing bugs reported by Coverity.</li>
+  <li>Corrected and completed HATEOAS links to make sure the entire API
+  is available via URLs in _links.</li>
+ <li>Corrected all relation URLs to use trailing slash.</li>
+ <li>Add initial support for storing data in ElasticSearch.</li>
+ <li>Now able to receive and store uploaded files in the archive.</li>
+ <li>Changed JSON output for object lists to have relations in _links.</li>
+ <li>Improve JSON output for empty object lists.</li>
+ <li>Now uses correct MIME type application/vnd.noark5-v4+json.</li>
+ <li>Added support for docker container images.</li>
+ <li>Added simple API browser implemented in JavaScript/Angular.</li>
+ <li>Started on archive client implemented in JavaScript/Angular.</li>
+ <li>Started on prototype to show the public mail journal.</li>
+  <li>Improved performance by disabling the Spring FileWatcher.</li>
+ <li>Added support for 'arkivskaper', 'saksmappe' and 'journalpost'.</li>
+ <li>Added support for some metadata codelists.</li>
+ <li>Added support for Cross-origin resource sharing (CORS).</li>
+ <li>Changed login method from Basic Auth to JSON Web Token (RFC 7519)
+ style.</li>
+ <li>Added support for GET-ing ny-* URLs.</li>
+ <li>Added support for modifying entities using PUT and eTag.</li>
+ <li>Added support for returning XML output on request.</li>
+  <li>Removed support for English field and class names, limiting
+  ourselves to the official names.</li>
+ <li>...</li>
+
+</ul>
+
+<p>If this sounds interesting to you, please contact us on IRC (#nikita
+on irc.freenode.net) or email
+(<a href="https://lists.nuug.no/mailman/listinfo/nikita-noark">the
+nikita-noark mailing list</a>).</p>
+</description>
+ </item>
+
+ <item>
+ <title>Idea for storing trusted timestamps in a Noark 5 archive</title>
+ <link>http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html</guid>
+ <pubDate>Wed, 7 Jun 2017 21:40:00 +0200</pubDate>
+ <description><p><em>This is a copy of
+<a href="https://lists.nuug.no/pipermail/nikita-noark/2017-June/000297.html">an
+email I posted to the nikita-noark mailing list</a>. Please follow up
+there if you would like to discuss this topic. The background is that
+we are making a free software archive system based on the Norwegian
+<a href="https://www.arkivverket.no/forvaltning-og-utvikling/regelverk-og-standarder/noark-standarden">Noark
+5 standard</a> for government archives.</em></p>
+
+<p>I've been wondering a bit lately how trusted timestamps could be
+stored in Noark 5.
+<a href="https://en.wikipedia.org/wiki/Trusted_timestamping">Trusted
+timestamps</a> can be used to verify that some information
+(document/file/checksum/metadata) has not been changed since a
+specific time in the past. This is useful to verify the integrity of
+the documents in the archive.</p>
+
+<p>Then it occurred to me, perhaps the trusted timestamps could be
+stored as dokument variants (ie dokumentobjekt referred to from
+dokumentbeskrivelse) with the filename set to the hash it is
+stamping?</p>
+
+<p>Given a "dokumentbeskrivelse" with an associated "dokumentobjekt",
+a new dokumentobjekt is associated with "dokumentbeskrivelse" with the
+same attributes as the stamped dokumentobjekt except these
+attributes:</p>
+
+<ul>
+
+<li>format -> "RFC3161"</li>
+<li>mimeType -> "application/timestamp-reply"</li>
+<li>formatDetaljer -> "&lt;source URL for timestamp service&gt;"</li>
+<li>filenavn -> "&lt;sjekksum&gt;.tsr"</li>
+
+</ul>
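+As an illustration, such a timestamp variant might be rendered in JSON
+roughly like this; apart from the format and mimeType values listed
+above, the field values are invented examples, not taken from the
+Noark 5 specification:

```json
{
  "format": "RFC3161",
  "mimeType": "application/timestamp-reply",
  "formatDetaljer": "http://zeitstempel.dfn.de",
  "filenavn": "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c.tsr"
}
```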
+
+<p>This assumes a service following
+<a href="https://tools.ietf.org/html/rfc3161">IETF RFC 3161</a> is
+used, which specifies the given MIME type for replies and the .tsr file
+ending for the content of such trusted timestamps. As far as I can
+tell from the Noark 5 specifications, it is OK to have several
+variants/renderings of a dokument attached to a given
+dokumentbeskrivelse objekt. It might be stretching it a bit to make
+some of these variants represent crypto-signatures useful for
+verifying the document integrity instead of representing the dokument
+itself.</p>
+
+<p>Using the source of the service in formatDetaljer allows several
+timestamping services to be used. This is useful to spread the risk
+of key compromise over several organisations. It would only be a
+problem to trust the timestamps if all of the organisations are
+compromised.</p>
+
+<p>The following one-liner on Linux can be used to generate the tsr
+file. $inputfile is the path to the file to checksum, and $sha256 is the
+SHA-256 checksum of the file (ie the "&lt;sjekksum&gt;.tsr" value mentioned
+above).</p>
+
+<p><blockquote><pre>
+openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
+ | curl -s -H "Content-Type: application/timestamp-query" \
+ --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr
+</pre></blockquote></p>
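+The $sha256 value in the command above can also be computed
+programmatically. A minimal sketch in Python (the sample file path is a
+stand-in for the real $inputfile):

```python
# Compute the SHA-256 checksum of a file and derive the .tsr file name,
# matching the "<sjekksum>.tsr" naming convention described above.
import hashlib

def tsr_name(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archive documents do not fill memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() + ".tsr"
```

+The returned name is what the filenavn attribute of the timestamp
+variant would be set to.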
+
+<p>To verify the timestamp, you first need to download the public key
+of the trusted timestamp service, for example using this command:</p>
+
+<p><blockquote><pre>
+wget -O ca-cert.txt \
+ https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
+</pre></blockquote></p>
+
+<p>Note, the public key should be stored alongside the timestamps in
+the archive to make sure it is also available 100 years from now. It
+is probably a good idea to standardise how and where to store such
+public keys, to make it easier to find for those trying to verify
+documents 100 or 1000 years from now. :)</p>
+
+<p>The verification itself is a simple openssl command:</p>
+
+<p><blockquote><pre>
+openssl ts -verify -data $inputfile -in $sha256.tsr \
+ -CAfile ca-cert.txt -text
+</pre></blockquote></p>
+
+<p>Is there any reason this approach would not work? Is it somehow against
+the Noark 5 specification?</p>
+</description>
+ </item>
+
+ <item>
+ <title>Free software archive system Nikita now able to store documents</title>
+ <link>http://people.skolelinux.org/pere/blog/Free_software_archive_system_Nikita_now_able_to_store_documents.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Free_software_archive_system_Nikita_now_able_to_store_documents.html</guid>
+ <pubDate>Sun, 19 Mar 2017 08:00:00 +0100</pubDate>
+ <description><p>The <a href="https://github.com/hiOA-ABI/nikita-noark5-core">Nikita
+Noark 5 core project</a> is implementing the Norwegian standard for
+keeping an electronic archive of government documents.
+<a href="http://www.arkivverket.no/arkivverket/Offentlig-forvaltning/Noark/Noark-5/English-version">The
+Noark 5 standard</a> documents the requirements for data systems used by
+the archives in the Norwegian government, and the Noark 5 web interface
+specification documents a REST web service for storing, searching and
+retrieving documents and metadata in such an archive. I've been involved
+in the project since a few weeks before Christmas, when the Norwegian
+Unix User Group
+<a href="https://www.nuug.no/news/NOARK5_kjerne_som_fri_programvare_f_r_epostliste_hos_NUUG.shtml">announced
+it supported the project</a>. I believe this is an important project,
+and hope it can make it possible for the government archives in the
+future to use free software to keep the archives we citizens depend
+on. But as I do not hold such an archive myself, my first use
+case is to store and analyse public mail journal metadata published
+from the government. I find it useful to have a clear use case in
+mind when developing, to make sure the system scratches one of my
+itches.</p>
+
+<p>If you would like to help make sure there is a free software
+alternative for the archives, please join our IRC channel
+(<a href="irc://irc.freenode.net/%23nikita">#nikita on
+irc.freenode.net</a>) and
+<a href="https://lists.nuug.no/mailman/listinfo/nikita-noark">the
+project mailing list</a>.</p>
+
+<p>When I got involved, the web service could store metadata about
+documents. But a few weeks ago, a new milestone was reached when it
+became possible to store full text documents too. Yesterday, I
+completed an implementation of a command line tool
+<tt>archive-pdf</tt> to upload a PDF file to the archive using this
+API. The tool is very simple at the moment, and finds existing
+<a href="https://en.wikipedia.org/wiki/Fonds">fonds</a>, series and
+files while asking the user to select which one to use if more than
+one exists. Once a file is identified, the PDF is associated with the
+file and uploaded, using the title extracted from the PDF itself. The
+process is fairly similar to visiting the archive, opening a cabinet,
+locating a file and storing a piece of paper in the archive. Here is
+a test run directly after populating the database with test data using
+our API tester:</p>
+
+<p><blockquote><pre>
+~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
+using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
+using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446
+
+ 0 - Title of the test case file created 2017-03-18T23:49:32.103446
+ 1 - Title of the test file created 2017-03-18T23:49:32.103446
+Select which mappe you want (or search term): 0
+Uploading mangelmelding/mangler.pdf
+ PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
+ File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
+~/src//noark5-tester$
+</pre></blockquote></p>
+
+<p>You can see here how the fonds (arkiv) and series (arkivdel) only had
+one option, while the user needs to choose which file (mappe) to use
+among the two created by the API tester. The <tt>archive-pdf</tt>
+tool can be found in the git repository for the API tester.</p>
+
+<p>In the project, I have been mostly working on
+<a href="https://github.com/petterreinholdtsen/noark5-tester">the API
+tester</a> so far, while getting to know the code base. The API
+tester currently uses
+<a href="https://en.wikipedia.org/wiki/HATEOAS">the HATEOAS links</a>
+to traverse the entire exposed service API and verify that the exposed
+operations and objects match the specification, as well as trying to
+create objects holding metadata and uploading a simple XML file to
+store. The tester has proved very useful for finding flaws in our
+implementation, as well as flaws in the reference site and the
+specification.</p>
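+The traversal itself can be sketched as a breadth-first walk over the
+_links relations. The fetch callable and the link structure below are
+simplified, hypothetical stand-ins for the real Noark 5 JSON responses:

```python
# Sketch of HATEOAS traversal: visit every href reachable via _links.
# The _links list-of-dicts shape is a simplified, hypothetical stand-in.
from collections import deque

def walk(fetch, start):
    """Return the set of hrefs reachable from start, fetching each once."""
    seen, queue = set(), deque([start])
    while queue:
        href = queue.popleft()
        if href in seen:
            continue
        seen.add(href)
        for link in fetch(href).get("_links", []):
            queue.append(link["href"])
    return seen
```

+Tracking already-visited hrefs keeps the walk from looping when the
+service links back to entry points it has already exposed.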
+
+<p>The test document I uploaded is a summary of all the specification
+defects we have collected so far while implementing the web service.
+There are several unclear and conflicting parts of the specification,
+and we have
+<a href="https://github.com/petterreinholdtsen/noark5-tester/tree/master/mangelmelding">started
+writing down</a> the questions we get from implementing it. We use a
+format inspired by how <a href="http://www.opengroup.org/austin/">The
+Austin Group</a> collect defect reports for the POSIX standard with
+<a href="http://www.opengroup.org/austin/mantis.html">their
+instructions for the MANTIS defect tracker system</a>, for lack of an
+official way to structure defect reports for Noark 5. (Our first
+submitted defect report was a
+<a href="https://github.com/petterreinholdtsen/noark5-tester/blob/master/mangelmelding/sendt/2017-03-15-mangel-prosess.md">request
+for a procedure for submitting defect reports</a> :)</p>
+
+<p>The Nikita project is implemented using Java and Spring, and is
+fairly easy to get up and running using Docker containers for those
+that want to test the current code base. The API tester is
+implemented in Python.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Detecting NFS hangs on Linux without hanging yourself...</title>
+ <link>http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</guid>
+ <pubDate>Thu, 9 Mar 2017 15:20:00 +0100</pubDate>
+<description><p>Over the years, administrating thousands of NFS-mounting Linux
+computers at a time, I often needed a way to detect if a machine
+was experiencing an NFS hang. If you try to use <tt>df</tt> or look at a
+file or directory affected by the hang, the process (and possibly the
+shell) will hang too. So you want to be able to detect this without
+risking the detection process getting stuck too. It has not been
+obvious how to do this. When the hang has lasted a while, it is
+possible to find messages like these in dmesg:</p>
+
+<p><blockquote>
+nfs: server nfsserver not responding, still trying
+<br>nfs: server nfsserver OK
+</blockquote></p>
+
+<p>It is hard to know if the hang is still going on, and it is hard to
+be sure looking in dmesg is going to work. If there are lots of other
+messages in dmesg, the lines might have rotated out of sight before they
+are noticed.</p>
+
+<p>While reading through the NFS client implementation in the Linux
+kernel code, I came across some statistics that seem to give a way to
+detect it. The om_timeouts sunrpc value in the kernel will increase
+every time the above log entry is inserted into dmesg. And after
+digging a bit further, I discovered that this value shows up in
+/proc/self/mountstats on Linux.</p>
+
+<p>The mountstats content seems to be shared between files using the
+same file system context, so it is enough to check one of the
+mountstats files to get the state of the mount point for the machine.
+I assume this will not show lazily umounted NFS points, nor NFS mount
+points in a different process context (ie with a different filesystem
+view), but that does not worry me.</p>
+
+<p>The content for an NFS mount point looks similar to this:</p>
+
+<p><blockquote><pre>
+[...]
+device /dev/mapper/Debian-var mounted on /var with fstype ext3
+device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
+ opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
+ age: 7863311
+ caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
+ sec: flavor=1,pseudoflavor=1
+ events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
+ bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
+ RPC iostats version: 1.0 p/v: 100003/3 (nfs)
+ xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
+ per-op statistics
+ NULL: 0 0 0 0 0 0 0 0
+ GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
+ SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
+ LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
+ ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
+ READLINK: 125 125 0 20472 18620 0 1112 1118
+ READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
+ WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
+ CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
+ MKDIR: 3680 3680 0 773980 993920 26 23990 24245
+ SYMLINK: 903 903 0 233428 245488 6 5865 5917
+ MKNOD: 80 80 0 20148 21760 0 299 304
+ REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
+ RMDIR: 3367 3367 0 645112 484848 22 5782 6002
+ RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
+ LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
+ READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
+ READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
+ FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
+ FSINFO: 2 2 0 232 328 0 1 1
+ PATHCONF: 1 1 0 116 140 0 0 0
+ COMMIT: 0 0 0 0 0 0 0 0
+
+device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
+[...]
+</pre></blockquote></p>
+
+<p>The key number to look at is the third number in the per-op list.
+It is the number of NFS timeouts experienced per file system
+operation, here 22 write timeouts and 5 access timeouts. If these
+numbers are increasing, I believe the machine is experiencing an NFS
+hang. Unfortunately the timeout value does not start to increase right
+away. The NFS operations need to time out first, and this can take a
+while. The exact timeout value depends on the setup. For example the
+defaults for TCP and UDP mount points are quite different, and the
+timeout value is affected by the soft, hard, timeo and retrans NFS
+mount options.</p>
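+A small parser can make this check scriptable without touching the
+hung mount point itself. A sketch, assuming the statvers=1.1 layout
+shown above; sample it twice a few seconds apart and compare counts:

```python
# Sketch: extract the per-operation NFS timeout counts (third column
# of the per-op statistics) from /proc/self/mountstats text.
def op_timeouts(mountstats_text):
    """Return {operation: timeout_count} for the per-op statistics."""
    timeouts = {}
    in_ops = False
    for line in mountstats_text.splitlines():
        line = line.strip()
        if line == "per-op statistics":
            in_ops = True
        elif in_ops and line.startswith("device"):
            # Next mount point begins; stop collecting for this one.
            in_ops = False
        elif in_ops and ":" in line:
            op, rest = line.split(":", 1)
            fields = rest.split()
            if len(fields) >= 3:
                timeouts[op] = int(fields[2])
    return timeouts
```

+Reading /proc/self/mountstats only touches procfs, so the check itself
+cannot get stuck on the hung NFS mount.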
+
+<p>The only way I have found on Debian and Red Hat Enterprise Linux
+to get the timeout count is to peek in /proc/. But according to
+<a href="http://docs.oracle.com/cd/E19253-01/816-4555/netmonitor-12/index.html">Solaris
+10 System Administration Guide: Network Services</a>, the 'nfsstat -c'
+command can be used to get these timeout values. But this does not work
+on Linux, as far as I can tell. I
+<a href="http://bugs.debian.org/857043">asked Debian about this</a>,
+but have not seen any replies yet.</p>
+
+<p>Is there a better way to figure out if a Linux NFS client is
+experiencing NFS hangs? Is there a way to detect which processes are
+affected? Is there a way to get the NFS mount going quickly once the
+network problem causing the NFS hang has been cleared? I would very
+much welcome some clues, as we regularly run into NFS hangs.</p>
+</description>
+ </item>
+
+ <item>
+ <title>How does it feel to be wiretapped, when you should be doing the wiretapping...</title>
+ <link>http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html</guid>
+ <pubDate>Wed, 8 Mar 2017 11:50:00 +0100</pubDate>
+ <description><p>So the new president in the United States of America claims to be
+surprised to discover that he was wiretapped during the election,
+before he was elected president. He even claims this must be illegal.
+Well, doh, if there is one thing the Snowden revelations documented,
+it is that the entire population of the USA is wiretapped, one
+way or another. Of course the presidential candidates were wiretapped,
+alongside the senators, judges and the rest of the people in the USA.</p>
+
+<p>Next, the Federal Bureau of Investigation asks the Department of
+Justice to publicly reject the claims that Donald Trump was
+wiretapped illegally. I fail to see the relevance, given that I am
+sure the surveillance industry in the USA believes it has all the legal
+backing it needs to conduct mass surveillance on the entire
+world.</p>
+
+<p>There is even the director of the FBI stating that he never saw an
+order requesting wiretapping of Donald Trump. That is not very
+surprising, given how the FISA court works, with all its activity being
+secret. Perhaps he only heard about it?</p>
+
+<p>What I find most sad in this story is how Norwegian journalists
+present it. In a news report on the radio the other day from the
+Norwegian National Broadcasting Company (NRK), I heard the journalist
+claim that 'the FBI denies any wiretapping', while the reality is that
+'the FBI denies any illegal wiretapping'. There is a fundamental and
+important difference, and it makes me sad that the journalists are
+unable to grasp it.</p>
+
+<p><strong>Update 2017-03-13:</strong> It looks like
+<a href="https://theintercept.com/2017/03/13/rand-paul-is-right-nsa-routinely-monitors-americans-communications-without-warrants/">The
+Intercept reports that US Senator Rand Paul confirms what I state above</a>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Norwegian Bokmål translation of The Debian Administrator's Handbook complete, proofreading in progress</title>
+ <link>http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_translation_of_The_Debian_Administrator_s_Handbook_complete__proofreading_in_progress.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_translation_of_The_Debian_Administrator_s_Handbook_complete__proofreading_in_progress.html</guid>
+ <pubDate>Fri, 3 Mar 2017 14:50:00 +0100</pubDate>
+ <description><p>For almost a year now, we have been working on making a Norwegian
+Bokmål edition of <a href="https://debian-handbook.info/">The Debian
+Administrator's Handbook</a>. Now, thanks to the tireless effort of
+Ole-Erik, Ingrid and Andreas, the initial translation is complete, and
+we are working on the proofreading to ensure consistent language and
+use of correct computer science terms. The plan is to make the book
+available on paper, as well as in electronic form. For that to
+happen, the proofreading must be completed and all the figures need
+to be translated. If you want to help out, get in touch.</p>
+
+<p><a href="http://people.skolelinux.org/pere/debian-handbook/debian-handbook-nb-NO.pdf">A
+fresh PDF edition</a> of the book in A4 format (the final book will
+have smaller pages) is created every morning and is available for
+proofreading. If you find any errors, please
+<a href="https://hosted.weblate.org/projects/debian-handbook/">visit
+Weblate and correct the error</a>. The
+<a href="http://l.github.io/debian-handbook/stat/nb-NO/index.html">state
+of the translation including figures</a> is a useful source for those
+providing Norwegian Bokmål screen shots and figures.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Unlimited randomness with the ChaosKey?</title>
+ <link>http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</guid>
+ <pubDate>Wed, 1 Mar 2017 20:50:00 +0100</pubDate>
+ <description><p>A few days ago I ordered a small batch of
+<a href="http://altusmetrum.org/ChaosKey/">the ChaosKey</a>, a small
+USB dongle for generating entropy created by Bdale Garbee and Keith
+Packard. Yesterday it arrived, and I am very happy to report that it
+works great! According to its designers, to get it to work out of the
+box, you need Linux kernel version 4.1 or later. I tested it on a
+Debian Stretch machine (kernel version 4.9), and there it worked just
+fine, increasing the available entropy very quickly. I wrote a small
+test one-liner to check. It first prints the current entropy level,
+drains /dev/random, and then prints the entropy level once per second
+for five seconds. Here is the situation without the ChaosKey inserted:</p>
+
+<blockquote><pre>
+% cat /proc/sys/kernel/random/entropy_avail; \
+ dd bs=1M if=/dev/random of=/dev/null count=1; \
+ for n in $(seq 1 5); do \
+ cat /proc/sys/kernel/random/entropy_avail; \
+ sleep 1; \
+ done
+300
+0+1 oppføringer inn
+0+1 oppføringer ut
+28 byte kopiert, 0,000264565 s, 106 kB/s
+4
+8
+12
+17
+21
+%
+</pre></blockquote>
+
+<p>The entropy level increases by 3-4 every second. In such a
+situation, any application requiring random bits (like an HTTPS
+enabled web server) will halt and wait for more entropy. And here is
+the situation with the ChaosKey inserted:</p>
+
+<blockquote><pre>
+% cat /proc/sys/kernel/random/entropy_avail; \
+ dd bs=1M if=/dev/random of=/dev/null count=1; \
+ for n in $(seq 1 5); do \
+ cat /proc/sys/kernel/random/entropy_avail; \
+ sleep 1; \
+ done
+1079
+0+1 oppføringer inn
+0+1 oppføringer ut
+104 byte kopiert, 0,000487647 s, 213 kB/s
+433
+1028
+1031
+1035
+1038
+%
+</pre></blockquote>
+
+<p>Quite the difference. :) I bought a few more than I need, in case
+someone wants to buy one here in Norway. :)</p>
+
+<p>Update: The dongle was presented at DebConf last year. You might
+find <a href="https://debconf16.debconf.org/talks/94/">the talk
+recording illuminating</a>. It explains exactly what the source of
+randomness is, if you are unable to spot it from the schematic drawing
+available from the ChaosKey web site linked at the start of this blog
+post.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Detect OOXML files with undefined behaviour?</title>
+ <link>http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</guid>
+ <pubDate>Tue, 21 Feb 2017 00:20:00 +0100</pubDate>
+ <description><p>I just noticed that
+<a href="http://www.arkivrad.no/aktuelt/riksarkivarens-forskrift-pa-horing">the
+new Norwegian proposal for archiving rules in the government</a> lists
+<a href="http://www.ecma-international.org/publications/standards/Ecma-376.htm">ECMA-376</a>
+/ ISO/IEC 29500 (aka OOXML) as valid formats to put in long term
+storage. Luckily such files will only be accepted based on
+pre-approval from the National Archive. Allowing OOXML files to be
+used for long term storage might seem like a good idea as long as we
+forget that there are plenty of ways for a "valid" OOXML document to
+have content with no defined interpretation in the standard, which
+leads to a question and an idea.</p>
+
+<p>Is there any tool to detect if an OOXML document depends on such
+undefined behaviour? It would be useful for the National Archive (and
+anyone else interested in verifying that a document is well defined)
+to have such a tool available when considering whether to approve the
+use of OOXML. I'm aware of the
+<a href="https://github.com/arlm/officeotron/">officeotron OOXML
+validator</a>, but do not know how complete it is, nor whether it will
+report use of undefined behaviour. Are there other similar tools
+available? Please send me an email if you know of any such tool.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Ruling ignored our objections to the seizure of popcorn-time.no (#domstolkontroll)</title>
+ <link>http://people.skolelinux.org/pere/blog/Ruling_ignored_our_objections_to_the_seizure_of_popcorn_time_no___domstolkontroll_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Ruling_ignored_our_objections_to_the_seizure_of_popcorn_time_no___domstolkontroll_.html</guid>
+ <pubDate>Mon, 13 Feb 2017 21:30:00 +0100</pubDate>
+ <description><p>A few days ago, we received the ruling from
+<a href="http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html">my
+day in court</a>. The case in question is a challenge of the seizure
+of the DNS domain popcorn-time.no. The ruling simply did not mention
+most of our arguments, and seemed to take everything ØKOKRIM said at
+face value, ignoring our demonstration and explanations. But it is
+hard to tell for sure, as we still have not seen most of the documents
+in the case and thus were unprepared and unable to contradict several
+of the claims made in court by the opposition. We are considering an
+appeal, but it is partly a question of funding, as it is costing us
+quite a bit to pay for our lawyer. If you want to help, please
+<a href="http://www.nuug.no/dns-beslag-donasjon.shtml">donate to the
+NUUG defense fund</a>.</p>
+
+<p>The details of the case, as far as we know them, are available in
+Norwegian from
+<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the NUUG
+blog</a>. This also includes
+<a href="https://www.nuug.no/news/Avslag_etter_rettslig_h_ring_om_DNS_beslaget___vurderer_veien_videre.shtml">the
+ruling itself</a>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>A day in court challenging seizure of popcorn-time.no for #domstolkontroll</title>
+ <link>http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html</guid>
+ <pubDate>Fri, 3 Feb 2017 11:10:00 +0100</pubDate>
+ <description><p align="center"><img width="70%" src="http://people.skolelinux.org/pere/blog/images/2017-02-01-popcorn-time-in-court.jpeg"></p>
+
+<p>On Wednesday, I spent the entire day in court in Follo Tingrett
+representing <a href="https://www.nuug.no/">the member association
+NUUG</a>, alongside <a href="https://www.efn.no/">the member
+association EFN</a> and <a href="http://www.imc.no">the DNS registrar
+IMC</a>, challenging the seizure of the DNS name popcorn-time.no. It
+was interesting to sit in a court of law for the first time in my
+life. Our team can be seen in the picture above: attorney Ola
+Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil
+Eriksen and NUUG board member Petter Reinholdtsen.</p>
+
+<p><a href="http://www.domstol.no/no/Enkelt-domstol/follo-tingrett/Nar-gar-rettssaken/Beramming/?cid=AAAA1701301512081262234UJFBVEZZZZZEJBAvtale">The
+case at hand</a> is that the Norwegian National Authority for
+Investigation and Prosecution of Economic and Environmental Crime (aka
+Økokrim) decided on its own to seize a DNS domain early last
+year, without following
+<a href="https://www.norid.no/no/regelverk/navnepolitikk/#link12">the
+official policy of the Norwegian DNS authority</a>, which requires a
+court decision. The web site in question was a site covering Popcorn
+Time. And Popcorn Time is the name of a technology with both legal
+and illegal applications. Popcorn Time is a client combining
+searching a Bittorrent directory available on the Internet with
+downloading/distributing content via Bittorrent and playing the
+downloaded content on screen. It can be used illegally if it is used
+to distribute content against the will of the rights holder, but it can
+also be used legally to play a lot of content, for example the
+millions of movies
+<a href="https://archive.org/details/movies">available from the
+Internet Archive</a> or the collection
+<a href="http://vodo.net/films/">available from Vodo</a>. We created
+<a href="magnet:?xt=urn:btih:86c1802af5a667ca56d3918aecb7d3c0f7173084&dn=PresentasjonFolloTingrett.mov&tr=udp%3A%2F%2Fpublic.popcorn-tracker.org%3A6969%2Fannounce">a
+video demonstrating legal use of Popcorn Time</a> and played it in
+court. It can of course be downloaded using Bittorrent.</p>
+
+<p>I did not quite know what to expect from a day in court. The
+government held on to their version of the story and we held on to
+ours, and I hope the judge is able to make sense of it all. We will
+know in two weeks time. Unfortunately I do not have high hopes, as
+the government has the upper hand here, with more knowledge about the
+case, better training in handling criminal law and in general higher
+standing in the courts than a fairly unknown DNS registrar and member
+associations. It is expensive to be right, also in Norway. So far the
+case has cost more than NOK 70 000. To help fund the case, NUUG
+and EFN have asked for donations, and managed to collect around NOK
+25 000 so far. Given the presentation from the government, I expect
+the government to appeal if the case goes our way. And if the case
+does not go our way, I hope we have enough funding to appeal.</p>
+
+<p>From the other side came two people from Økokrim. On the benches,
+appearing to be part of the group from the government, were two people
+from the Simonsen Vogt Wiig lawyer office, and three others I am not
+quite sure who were. Økokrim had proposed to present two witnesses
+from The Motion Picture Association, but this was rejected because
+they did not speak Norwegian and it was a bit late to bring in a
+translator, but perhaps the two from the MPA were present anyway. All
+seven appeared to know each other. Good to see the case is taken
+seriously.</p>
+
+<p>If you, like me, believe the courts should be involved before a DNS
+domain is hijacked by the government, or you believe the Popcorn Time
+technology has a lot of useful and legal applications, I suggest you
+too <a href="http://www.nuug.no/dns-beslag-donasjon.shtml">donate to
+the NUUG defense fund</a>. Both Bitcoin and bank transfer are
+available. If NUUG gets more than we need for the legal action (very
+unlikely), the rest will be spent promoting free software, open
+standards and unix-like operating systems in Norway, so no matter what
+happens the money will be put to good use.</p>
+
+<p>If you want to learn more about the case, I recommend you check out
+<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the blog
+posts from NUUG covering the case</a>. They cover the legal arguments
+on both sides.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Where did that package go? &mdash; geolocated IP traceroute</title>
+ <link>http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</guid>
+ <pubDate>Mon, 9 Jan 2017 12:20:00 +0100</pubDate>
+ <description><p>Did you ever wonder where the web traffic really flows to reach the
+web servers, and who owns the network equipment it is flowing through?
+It is possible to get a glimpse of this from using traceroute, but it
+is hard to find all the details. Many years ago, I wrote a system to
+map the Norwegian Internet (trying to figure out if our plans for a
+network game service would get low enough latency, and who we needed
+to talk to about setting up game servers close to the users). Back
+then I used traceroute output from many locations (I asked my friends
+to run a script and send me their traceroute output) to create the
+graph and the map. The output from traceroute typically looks like
+this:
+
+<p><pre>
+traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
+ 1 uio-gw10.uio.no (129.240.202.1) 0.447 ms 0.486 ms 0.621 ms
+ 2 uio-gw8.uio.no (129.240.24.229) 0.467 ms 0.578 ms 0.675 ms
+ 3 oslo-gw1.uninett.no (128.39.65.17) 0.385 ms 0.373 ms 0.358 ms
+ 4 te3-1-2.br1.fn3.as2116.net (193.156.90.3) 1.174 ms 1.172 ms 1.153 ms
+ 5 he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48) 3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.857 ms
+ 6 ae1.ar8.oslosda310.as2116.net (195.0.242.39) 0.662 ms 0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23) 0.622 ms
+ 7 89.191.10.146 (89.191.10.146) 0.931 ms 0.917 ms 0.955 ms
+ 8 * * *
+ 9 * * *
+[...]
+</pre></p>
+
+<p>This shows the DNS names and IP addresses of (at least some of the)
+network equipment involved in getting the data traffic from me to the
+www.stortinget.no server, and how long it took in milliseconds for a
+packet to reach the equipment and return to me. Three packets are
+sent, and sometimes the packets do not follow the same path. This
+is shown for hop 5, where three different IP addresses replied to the
+traceroute request.</p>
+
+<p>There are many ways to measure trace routes. Other good traceroute
+implementations I use are traceroute (using ICMP packets), mtr (can do
+ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP and TCP
+traceroute and a lot of other capabilities). All of them are easily
+available in <a href="https://www.debian.org/">Debian</a>.</p>
+
+<p>This time around, I wanted to know the geographic location of the
+different route points, to visualize how visiting a web page spreads
+information about the visit to a lot of servers around the globe. The
+background is that a web site today often will ask the browser to
+fetch from many servers the parts (for example HTML, JSON, fonts,
+JavaScript, CSS and video) required to display the content. This will
+leak information about the visit to those controlling these servers
+and anyone able to peek at the data traffic passing by (like your ISP,
+the ISP's backbone provider, FRA, GCHQ, NSA and others).</p>
+
+<p>Let's pick an example, the Norwegian parliament web site
+www.stortinget.no. It is read daily by all members of parliament and
+their staff, as well as political journalists, activists and many other
+citizens of Norway. A visit to the www.stortinget.no web site will
+ask your browser to contact 8 other servers: ajax.googleapis.com,
+insights.hotjar.com, script.hotjar.com, static.hotjar.com,
+stats.g.doubleclick.net, www.google-analytics.com,
+www.googletagmanager.com and www.netigate.se. I extracted this by
+asking <a href="http://phantomjs.org/">PhantomJS</a> to visit the
+Stortinget web page and tell me all the URLs PhantomJS downloaded to
+render the page (in HAR format using
+<a href="https://github.com/ariya/phantomjs/blob/master/examples/netsniff.js">their
+netsniff example</a>. I am very grateful to Gorm for showing me how
+to do this). My goal is to visualize network traces to all IP
+addresses behind these DNS names, to show where visitors' personal
+information is spread when visiting the page.</p>
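+
+<p>Extracting the list of hosts from such a HAR file can be scripted
+with standard shell tools. This is my own quick sketch, not part of
+any of the tools mentioned; the har_hosts function name and the
+netsniff-output.har file name are made up for the example, and the
+text-based matching is crude (a JSON-aware tool would be more
+robust):</p>

```shell
# Sketch: list the distinct hosts referenced in a HAR capture read
# from stdin, by pulling out every "url" field and keeping only the
# host part.
har_hosts() {
  grep -o '"url" *: *"[^"]*"' \
    | sed -e 's/.*"url" *: *"//' -e 's/"$//' \
          -e 's|^[a-z]*://||' -e 's|/.*||' \
    | sort -u
}

# Hypothetical usage: har_hosts < netsniff-output.har
```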
+
+<p align="center"><a href="www.stortinget.no-geoip.kml"><img
+src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geoip-small.png" alt="map of combined traces for URLs used by www.stortinget.no using GeoIP"/></a></p>
+
+<p>When I had a look around for options, I could not find any good
+free software tools to do this, and decided I needed my own traceroute
+wrapper outputting KML based on locations looked up using GeoIP. KML
+is easy to work with and easy to generate, and understood by several
+of the GIS tools I have available. I got good help from my NUUG
+colleague Anders Einar with this, and the result can be seen in
+<a href="https://github.com/petterreinholdtsen/kmltraceroute">my
+kmltraceroute git repository</a>. Unfortunately, the quality of the
+free GeoIP databases I could find (and the for-pay databases my
+friends had access to) is not up to the task. The IP addresses of
+central Internet infrastructure would typically be placed near the
+controlling company's main office, and not where the router is really
+located, as you can see from <a href="www.stortinget.no-geoip.kml">the
+KML file I created</a> using the GeoLite City dataset from MaxMind.</p>
+
+<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg"><img
+src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy-small.png" alt="scapy traceroute graph for URLs used by www.stortinget.no"/></a></p>
+
+<p>I also had a look at the visual traceroute graph created by
+<a href="http://www.secdev.org/projects/scapy/">the scapy project</a>,
+showing IP network ownership (aka AS owner) for the IP addresses in
+question.
+<a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg">The
+graph displays a lot of useful information about the traceroute in SVG
+format</a>, and gives a good indication of who controls the network
+equipment involved, but it does not include geolocation. This graph
+makes it possible to see that the information is made available to at
+least UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia,
+Level 3 Communications and NetDNA.</p>
+
+<p align="center"><a href="https://geotraceroute.com/index.php?node=4&host=www.stortinget.no"><img
+src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-small.png" alt="example geotraceroute view for www.stortinget.no"/></a></p>
+
+<p>In the process, I came across the
+<a href="https://geotraceroute.com/">web service GeoTraceroute</a> by
+Salim Gasmi. Its methodology of combining guesses based on DNS names,
+various location databases and finally latency times to rule out
+candidate locations seemed to do a very good job of guessing the
+correct geolocation. But it could only do one trace at a time, did
+not have a sensor in Norway and did not make the geolocations easily
+available for postprocessing. So I contacted the developer and asked
+if he would be willing to share the code (he declined until he has had
+time to clean it up), but he was interested in providing the geolocations in a
+machine readable format, and willing to set up a sensor in Norway. So
+since yesterday, it is possible to run traces from Norway in this
+service thanks to a sensor node set up by
+<a href="https://www.nuug.no/">the NUUG association</a>, and get the
+trace in KML format for further processing.</p>
+
+<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.kml"><img
+src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.png" alt="map of combined traces for URLs used by www.stortinget.no using geotraceroute"/></a></p>
+
+<p>Here we can see that a lot of traffic passes through Sweden on its
+way to Denmark, Germany, Holland and Ireland. Plenty of places where
+the Snowden documents confirmed the traffic is read by various actors
+without your best interest as their top priority.</p>
+
+<p>Combining KML files is trivial using a text editor, so I could loop
+over all the hosts behind the URLs used by www.stortinget.no and
+ask for the KML file from GeoTraceroute, and create a combined KML
+file with all the traces (unfortunately only one of the IP addresses
+behind each DNS name is traced this time; to get them all, one would
+have to request traces using IP numbers instead of DNS names from
+GeoTraceroute). That might be the next step in this project.</p>
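+
+<p>The text-editor approach can also be scripted. Here is a rough
+sketch of my own (the combine_kml function is not an existing tool),
+assuming each input is a simple flat KML file with the &lt;Document&gt;
+tags on lines of their own, as the per-trace KML files are here:</p>

```shell
# Sketch: concatenate the <Document> contents of several KML files
# into one combined KML document. Crude text processing; assumes
# <Document> and </Document> appear on lines of their own.
combine_kml() {
  printf '<?xml version="1.0" encoding="UTF-8"?>\n'
  printf '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
  for f in "$@"; do
    # keep what is between the Document tags, dropping the tags
    sed -n '/<Document>/,/<\/Document>/p' "$f" | grep -v 'Document>'
  done
  printf '</Document></kml>\n'
}

# Hypothetical usage: combine_kml trace-*.kml > combined.kml
```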
+
+<p>Armed with these tools, I find it a lot easier to figure out where
+the IP traffic moves and who controls the boxes involved in moving it.
+And every time the link crosses for example the Swedish border, we can
+be sure Swedish Signals Intelligence (FRA) is listening, as GCHQ does
+in Britain and the NSA in the USA, and on cables around the globe.
+(Hm, what should we tell them? :) Keep that in mind if you ever send
+anything unencrypted over the Internet.</p>
+
+<p>PS: KML files are drawn using
+<a href="http://ivanrublev.me/kml/">the KML viewer from Ivan
+Rublev</a>, as it was less cluttered than the local Linux application
+Marble. There are heaps of other options too.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Introducing ical-archiver to split out old iCalendar entries</title>
+ <link>http://people.skolelinux.org/pere/blog/Introducing_ical_archiver_to_split_out_old_iCalendar_entries.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Introducing_ical_archiver_to_split_out_old_iCalendar_entries.html</guid>
+ <pubDate>Wed, 4 Jan 2017 12:20:00 +0100</pubDate>
+ <description><p>Do you have a large <a href="https://icalendar.org/">iCalendar</a>
+file with lots of old entries, and would like to archive them to save
+space and resources? At least those of us using KOrganizer know that
+turning an event set on and off becomes slower and slower the more
+entries are in the set. While working on migrating our calendars to a
+<a href="http://radicale.org/">Radicale CalDAV server</a> on our
+<a href="https://freedomboxfoundation.org/">Freedombox server</a>, my
+loved one wondered if I could find a way to split up the calendar file
+she had in KOrganizer, and I set out to write a tool. I spent a few
+days writing and polishing the system, and it is now ready for general
+consumption. The
+<a href="https://github.com/petterreinholdtsen/ical-archiver">code for
+ical-archiver</a> is publicly available from a git repository on
+GitHub. The system is written in Python and depends on
+<a href="http://eventable.github.io/vobject/">the vobject Python
+module</a>.</p>
+
+<p>To use it, locate the iCalendar file you want to operate on and
+give it as an argument to the ical-archiver script. This will
+generate a set of new files, one file per component type per year for
+all components expiring more than two years in the past. The vevent,
+vtodo and vjournal entries are handled by the script. The remaining
+entries are stored in a 'remaining' file.</p>
+
+<p>This is what a test run can look like:
+
+<p><pre>
+% ical-archiver t/2004-2016.ics
+Found 3612 vevents
+Found 6 vtodos
+Found 2 vjournals
+Writing t/2004-2016.ics-subset-vevent-2004.ics
+Writing t/2004-2016.ics-subset-vevent-2005.ics
+Writing t/2004-2016.ics-subset-vevent-2006.ics
+Writing t/2004-2016.ics-subset-vevent-2007.ics
+Writing t/2004-2016.ics-subset-vevent-2008.ics
+Writing t/2004-2016.ics-subset-vevent-2009.ics
+Writing t/2004-2016.ics-subset-vevent-2010.ics
+Writing t/2004-2016.ics-subset-vevent-2011.ics
+Writing t/2004-2016.ics-subset-vevent-2012.ics
+Writing t/2004-2016.ics-subset-vevent-2013.ics
+Writing t/2004-2016.ics-subset-vevent-2014.ics
+Writing t/2004-2016.ics-subset-vjournal-2007.ics
+Writing t/2004-2016.ics-subset-vjournal-2011.ics
+Writing t/2004-2016.ics-subset-vtodo-2012.ics
+Writing t/2004-2016.ics-remaining.ics
+%
+</pre></p>
+
+<p>As you can see, the original file is untouched and new files are
+written with names derived from the original file. If you are happy
+with their content, the *-remaining.ics file can replace the original
+and the others can be archived or imported as historical calendar
+collections.</p>
+
+<p>The script should probably be improved a bit. The error handling
+when discovering broken entries is not good, and I am not sure yet if
+it makes sense to split different entry types into separate files or
+not. The program is thus likely to change. If you find it
+interesting, please get in touch. :)</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Appstream just learned how to map hardware to packages too!</title>
+ <link>http://people.skolelinux.org/pere/blog/Appstream_just_learned_how_to_map_hardware_to_packages_too_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Appstream_just_learned_how_to_map_hardware_to_packages_too_.html</guid>
+ <pubDate>Fri, 23 Dec 2016 10:30:00 +0100</pubDate>
+ <description><p>I received a very nice Christmas present today. As my regular
+readers probably know, I have been working on
+<a href="http://packages.qa.debian.org/isenkram">the Isenkram
+system</a> for many years. The goal of the Isenkram system is to make
+it easier for users to figure out what to install to get a given piece
+of hardware to work in Debian, and a key part of this system is a way
+to map hardware to packages. Isenkram has its own mapping database,
+and also uses data provided by each package using the AppStream
+metadata format. And today,
+<a href="https://tracker.debian.org/pkg/appstream">AppStream</a> in
+Debian learned to look up hardware the same way Isenkram does it,
+i.e. using fnmatch():</p>
+
+<p><pre>
+% appstreamcli what-provides modalias \
+ usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
+Identifier: pymissile [generic]
+Name: pymissile
+Summary: Control original Striker USB Missile Launcher
+Package: pymissile
+% appstreamcli what-provides modalias usb:v0694p0002d0000
+Identifier: libnxt [generic]
+Name: libnxt
+Summary: utility library for talking to the LEGO Mindstorms NXT brick
+Package: libnxt
+---
+Identifier: t2n [generic]
+Name: t2n
+Summary: Simple command-line tool for Lego NXT
+Package: t2n
+---
+Identifier: python-nxt [generic]
+Name: python-nxt
+Summary: Python driver/interface/wrapper for the Lego Mindstorms NXT robot
+Package: python-nxt
+---
+Identifier: nbc [generic]
+Name: nbc
+Summary: C compiler for LEGO Mindstorms NXT bricks
+Package: nbc
+%
+</pre></p>
+
+<p>A similar query can be done against the combined AppStream and
+Isenkram databases using the isenkram-lookup tool:</p>
+
+<p><pre>
+% isenkram-lookup usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
+pymissile
+% isenkram-lookup usb:v0694p0002d0000
+libnxt
+nbc
+python-nxt
+t2n
+%
+</pre></p>
+
+<p>You can find modalias values relevant for your machine using
+<tt>cat $(find /sys/devices/ -name modalias)</tt>.</p>
+
+<p>If you want to make this system a success and help Debian users
+make the most of the hardware they have, please
+help <a href="https://wiki.debian.org/AppStream/Guidelines">add
+AppStream metadata for your package following the guidelines</a>
+documented in the wiki. So far only 11 packages provide such
+information, among the several hundred hardware specific packages in
+Debian. The Isenkram database on the other hand contains 101 packages,
+mostly related to USB dongles. Most of the packages with hardware
+mappings in AppStream are LEGO Mindstorms related, because I have, as
+part of my involvement in
+<a href="https://wiki.debian.org/LegoDesigners">the Debian LEGO
+team</a>, given priority to making sure LEGO users get proposed the
+complete set of packages in Debian for that particular hardware. The
+team also got a nice Christmas present today. The
+<a href="https://tracker.debian.org/pkg/nxt-firmware">nxt-firmware
+package</a> made it into Debian. With this package in place, it is
+now possible to use the LEGO Mindstorms NXT unit with only free
+software, as the nxt-firmware package contains the source and firmware
+binaries for the NXT brick.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Isenkram updated with a lot more hardware-package mappings</title>
+ <link>http://people.skolelinux.org/pere/blog/Isenkram_updated_with_a_lot_more_hardware_package_mappings.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Isenkram_updated_with_a_lot_more_hardware_package_mappings.html</guid>
+ <pubDate>Tue, 20 Dec 2016 11:55:00 +0100</pubDate>
+ <description><p><a href="http://packages.qa.debian.org/isenkram">The Isenkram
+system</a> I wrote two years ago to make it easier in Debian to find
+and install packages to get your hardware dongles to work, is still
+going strong. It is a system to look up the hardware present on or
+connected to the current system, and map the hardware to Debian
+packages. This can be done either using the tools in isenkram-cli or
+using the user space daemon in the isenkram package. The latter will
+notify you, when inserting new hardware, about what packages to
+install to get the dongle working. It will even provide a button to
+click on to ask packagekit to install the packages.</p>
+
+<p>Here is a command line example from my Thinkpad laptop:</p>
+
+<p><pre>
+% isenkram-lookup
+bluez
+cheese
+ethtool
+fprintd
+fprintd-demo
+gkrellm-thinkbat
+hdapsd
+libpam-fprintd
+pidgin-blinklight
+thinkfan
+tlp
+tp-smapi-dkms
+tp-smapi-source
+tpb
+%
+</pre></p>
+
+<p>It can also list the firmware packages providing firmware requested
+by the loaded kernel modules, which in my case is an empty list because
+I have all the firmware my machine needs:</p>
+
+<p><pre>
+% /usr/sbin/isenkram-autoinstall-firmware -l
+info: did not find any firmware files requested by loaded kernel modules. exiting
+%
+</pre></p>
+
+<p>Over the last few days I have had a look at several of the around
+250 packages in Debian with udev rules. These seem like good
+candidates to install when a given hardware dongle is inserted, and I
+found several that should be proposed by isenkram. I have not had time
+to check all of them, but am happy to report that there are now 97
+packages mapped to hardware by Isenkram. 11 of these
+packages provide hardware mapping using AppStream, while the rest are
+listed in the modaliases file provided in isenkram.</p>
+
+<p>These are the packages with hardware mappings at the moment. The
+<strong>marked packages</strong> are also announcing their hardware
+support using AppStream, for everyone to use:</p>
+
+<p>air-quality-sensor, alsa-firmware-loaders, argyll,
+<strong>array-info</strong>, avarice, avrdude, b43-fwcutter,
+bit-babbler, bluez, bluez-firmware, <strong>brltty</strong>,
+<strong>broadcom-sta-dkms</strong>, calibre, cgminer, cheese, colord,
+<strong>colorhug-client</strong>, dahdi-firmware-nonfree, dahdi-linux,
+dfu-util, dolphin-emu, ekeyd, ethtool, firmware-ipw2x00, fprintd,
+fprintd-demo, <strong>galileo</strong>, gkrellm-thinkbat, gphoto2,
+gpsbabel, gpsbabel-gui, gpsman, gpstrans, gqrx-sdr, gr-fcdproplus,
+gr-osmosdr, gtkpod, hackrf, hdapsd, hdmi2usb-udev, hpijs-ppds, hplip,
+ipw3945-source, ipw3945d, kde-config-tablet, kinect-audio-setup,
+<strong>libnxt</strong>, libpam-fprintd, <strong>lomoco</strong>,
+madwimax, minidisc-utils, mkgmap, msi-keyboard, mtkbabel,
+<strong>nbc</strong>, <strong>nqc</strong>, nut-hal-drivers, ola,
+open-vm-toolbox, open-vm-tools, openambit, pcgminer, pcmciautils,
+pcscd, pidgin-blinklight, printer-driver-splix,
+<strong>pymissile</strong>, python-nxt, qlandkartegt,
+qlandkartegt-garmin, rosegarden, rt2x00-source, sispmctl,
+soapysdr-module-hackrf, solaar, squeak-plugins-scratch, sunxi-tools,
+<strong>t2n</strong>, thinkfan, thinkfinger-tools, tlp, tp-smapi-dkms,
+tp-smapi-source, tpb, tucnak, uhd-host, usbmuxd, viking,
+virtualbox-ose-guest-x11, w1retap, xawtv, xserver-xorg-input-vmmouse,
+xserver-xorg-input-wacom, xserver-xorg-video-qxl,
+xserver-xorg-video-vmware, yubikey-personalization and
+zd1211-firmware</p>
+
+<p>If you know of other packages, please let me know with a wishlist
+bug report against the isenkram-cli package, and ask the package
+maintainer to
+<a href="https://wiki.debian.org/AppStream/Guidelines">add AppStream
+metadata according to the guidelines</a> to provide the information
+for everyone. In time, I hope to get rid of the isenkram specific
+hardware mapping and depend exclusively on AppStream.</p>
+
+<p>Note, the AppStream metadata for broadcom-sta-dkms matches too
+much hardware, and suggests that the package works with any ethernet
+card. See <a href="http://bugs.debian.org/838735">bug #838735</a> for
+the details. I hope the maintainer finds time to address it soon. In
+the meantime I provide an override in isenkram.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Oolite, a life in space as vagabond and mercenary - nice free software</title>
+ <link>http://people.skolelinux.org/pere/blog/Oolite__a_life_in_space_as_vagabond_and_mercenary___nice_free_software.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Oolite__a_life_in_space_as_vagabond_and_mercenary___nice_free_software.html</guid>
+ <pubDate>Sun, 11 Dec 2016 11:40:00 +0100</pubDate>
+ <description><p align="center"><img width="70%" src="http://people.skolelinux.org/pere/blog/images/2016-12-11-nice-oolite.png"/></p>
+
+<p>In my early years, I played
+<a href="http://wiki.alioth.net/index.php/Classic_Elite">the epic game
+Elite</a> on my PC. I spent many months trading and fighting in
+space, and reached the 'elite' fighting status before I moved on. The
+original Elite game was available on Commodore 64 and the IBM PC
+edition I played had a 64 KB executable. I am still impressed today
+that the authors managed to squeeze both a 3D engine and details about
+more than 2000 planet systems across 7 galaxies into a binary so
+small.</p>
+
+<p>I have known about <a href="http://www.oolite.org/">the free
+software game Oolite inspired by Elite</a> for a while, but did not
+really have time to test it properly until a few days ago. It was
+great to discover that my old knowledge about trading routes was
+still valid. But my fighting and flying abilities were gone, so I had
+to retrain to be able to dock on a space station. And I am still not
+able to put up much resistance when I am attacked by pirates, so I
+bought and mounted the most powerful laser in the rear to be able to
+fight back at least a little while fleeing for my life. :)</p>
+
+<p>When playing Elite in the late eighties, I had to discover
+everything on my own, and I had long lists of prices seen on different
+planets to be able to decide where to trade what. This time I had the
+advantages of the
+<a href="http://wiki.alioth.net/index.php/Main_Page">Elite wiki</a>,
+where information about each planet is easily available with common
+price ranges and suggested trading routes. This improved my ability
+to earn money, and I have been able to earn enough to buy a lot of
+useful equipment in a few days. I believe I originally played for
+months before I could get a docking computer, while now I could get it
+after less than a week.</p>
+
+<p>If you like science fiction and dreamed of a life as a vagabond in
+space, you should try out Oolite. It is available for Linux, Mac OS X
+and Windows, and is included in Debian and derivatives since 2011.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Quicker Debian installations using eatmydata</title>
+ <link>http://people.skolelinux.org/pere/blog/Quicker_Debian_installations_using_eatmydata.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Quicker_Debian_installations_using_eatmydata.html</guid>
+ <pubDate>Fri, 25 Nov 2016 14:50:00 +0100</pubDate>
+ <description><p>Two years ago, I did some experiments with eatmydata and the Debian
+installation system, observing how using
+<a href="http://people.skolelinux.org/pere/blog/Speeding_up_the_Debian_installer_using_eatmydata_and_dpkg_divert.html">eatmydata
+could speed up the installation</a> quite a bit. My testing measured
+a speedup of around 20-40 percent for Debian Edu, where we install around
+1000 packages from within the installer. The eatmydata package
+provides a way to disable/delay file system flushing. This is a bit
+risky in the general case, as files that should be stored on disk will
+stay only in memory a bit longer than expected, causing problems if a
+machine crashes at an inconvenient time. But for an installation, if
+the machine crashes during installation the process is normally
+restarted, so avoiding disk operations as much as possible to speed
+up the process makes perfect sense.</p>
+
+<p>I added code in the Debian Edu specific installation code to enable
+<a href="https://tracker.debian.org/pkg/libeatmydata">eatmydata</a>,
+but did not have time to push it any further. But a few months ago I
+picked it up again and worked with the libeatmydata package maintainer
+Mattia Rizzolo to make it easier for everyone to get this installation
+speedup in Debian. Thanks to our cooperation, there is now an
+eatmydata-udeb package in Debian testing and unstable, and simply
+enabling/installing it in debian-installer (d-i) is enough to get the
+quicker installations. It can be enabled using preseeding. The
+following untested kernel argument should do the trick:</p>
+
+<blockquote><pre>
+preseed/early_command="anna-install eatmydata-udeb"
+</pre></blockquote>
+
+<p>This should ask d-i to install the package inside the d-i
+environment early in the installation sequence. Having it installed
+in d-i in turn will make sure the relevant scripts are called just
+after debootstrap has filled /target/ with the freshly installed Debian
+system, to configure apt to run dpkg with eatmydata. This is enough to
+speed up the installation process. There is a proposal to
+<a href="https://bugs.debian.org/841153">extend the idea a bit further
+by using /etc/ld.so.preload instead of apt.conf</a>, but I have not
+tested its impact.</p>
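+
+<p>To give an idea of what that apt configuration step amounts to,
+here is a hypothetical sketch of how apt can be pointed at a dpkg
+wrapper running under eatmydata. The file names and the wrapper
+script are my own illustration, not necessarily what the
+eatmydata-udeb scripts actually create, but Dir::Bin::dpkg is a
+standard apt.conf setting:</p>
+
```shell
# Hypothetical sketch: make apt run dpkg under eatmydata.
# Paths and file names are illustrative, not taken from eatmydata-udeb.
target=/tmp/target-demo
mkdir -p $target/etc/apt/apt.conf.d $target/usr/local/sbin

# A wrapper that runs the real dpkg under eatmydata.
cat > $target/usr/local/sbin/dpkg-eatmydata <<'EOF'
#!/bin/sh
exec eatmydata /usr/bin/dpkg "$@"
EOF
chmod a+rx $target/usr/local/sbin/dpkg-eatmydata

# Tell apt to use the wrapper instead of plain dpkg.
cat > $target/etc/apt/apt.conf.d/90eatmydata <<'EOF'
Dir::Bin::dpkg "/usr/local/sbin/dpkg-eatmydata";
EOF
cat $target/etc/apt/apt.conf.d/90eatmydata
```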
+
+</description>
+ </item>
+
+ <item>
+ <title>Coz profiler for multi-threaded software is now in Debian</title>
+ <link>http://people.skolelinux.org/pere/blog/Coz_profiler_for_multi_threaded_software_is_now_in_Debian.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Coz_profiler_for_multi_threaded_software_is_now_in_Debian.html</guid>
+ <pubDate>Sun, 13 Nov 2016 12:30:00 +0100</pubDate>
+ <description><p><a href="http://coz-profiler.org/">The Coz profiler</a>, a nice
+profiler able to run benchmarking experiments on the instrumented
+multi-threaded program, finally
+<a href="https://tracker.debian.org/pkg/coz-profiler">made it into
+Debian unstable yesterday</a>. Lluís Vilanova and I have spent many
+months since
+<a href="http://people.skolelinux.org/pere/blog/Coz_can_help_you_find_bottlenecks_in_multi_threaded_software___nice_free_software.html">I
+blogged about the coz tool</a> in August working with upstream to make
+it suitable for Debian. There are still issues with clang
+compatibility, inline assembly only working on x86, and minimized
+JavaScript libraries.</p>
+
+<p>To test it, install 'coz-profiler' using apt and run it like this:</p>
+
+<p><blockquote>
+<tt>coz run --- /path/to/binary-with-debug-info</tt>
+</blockquote></p>
+
+<p>This will produce a profile.coz file in the current working
+directory with the profiling information. This is then given to a
+JavaScript application provided in the package and available from
+<a href="http://plasma-umass.github.io/coz/">a project web page</a>.
+To start the local copy, invoke it in a browser like this:</p>
+
+<p><blockquote>
+<tt>sensible-browser /usr/share/coz-profiler/viewer/index.htm</tt>
+</blockquote></p>
+
+<p>See the project home page and the
+<a href="https://www.usenix.org/publications/login/summer2016/curtsinger">USENIX
+;login: article on Coz</a> for more information on how it is
+working.</p>
+</description>
+ </item>
+
+ <item>
+ <title>How to talk with your loved ones in private</title>
+ <link>http://people.skolelinux.org/pere/blog/How_to_talk_with_your_loved_ones_in_private.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_to_talk_with_your_loved_ones_in_private.html</guid>
+ <pubDate>Mon, 7 Nov 2016 10:25:00 +0100</pubDate>
+ <description><p>A few days ago I ran a very biased and informal survey to get an
+idea about what options are being used to communicate with end to end
+encryption with friends and family. I explicitly asked people not to
+list options only used in a work setting. The background is the
+uneasy feeling I get when using Signal, a feeling shared by others in
+a blog post from Sander Venema about
+<a href="https://sandervenema.ch/2016/11/why-i-wont-recommend-signal-anymore/">why
+he no longer recommends Signal</a> (with
+<a href="https://news.ycombinator.com/item?id=12883410">feedback from
+the Signal author available from ycombinator</a>). I wanted an
+overview of the options being used, and hope to include those options
+in a less biased survey later on. So far I have not taken the time to
+look into the individual proposed systems. They range from text
+sharing web pages, via file sharing and email to instant messaging,
+VOIP and video conferencing. For those considering which system to
+use, it is also useful to have a look at
+<a href="https://www.eff.org/secure-messaging-scorecard">the EFF Secure
+messaging scorecard</a> which is slightly out of date but still
+provide valuable information.</p>
+
+<p>So, on to the list. There were some used by many, some used by a
+few, some rarely used ones and a few mentioned but without anyone
+claiming to use them. Notice the grouping is in reality quite random,
+given the biased, self-selected set of participants. First the ones
+used by many:</p>
+
+<ul>
+
+<li><a href="https://whispersystems.org/">Signal</a></li>
+<li>Email w/<a href="http://openpgp.org/">OpenPGP</a> (Enigmail, GPG Suite, etc.)</li>
+<li><a href="https://www.whatsapp.com/">Whatsapp</a></li>
+<li>IRC w/<a href="https://otr.cypherpunks.ca/">OTR</a></li>
+<li>XMPP w/<a href="https://otr.cypherpunks.ca/">OTR</a></li>
+
+</ul>
+
+<p>Then the ones used by a few.</p>
+
+<ul>
+
+<li><a href="https://wiki.mumble.info/wiki/Main_Page">Mumble</a></li>
+<li>iMessage (included in iOS from Apple)</li>
+<li><a href="https://telegram.org/">Telegram</a></li>
+<li><a href="https://jitsi.org/">Jitsi</a></li>
+<li><a href="https://keybase.io/download">Keybase file</a></li>
+
+</ul>
+
+<p>Then the ones used by even fewer people</p>
+
+<ul>
+
+<li><a href="https://ring.cx/">Ring</a></li>
+<li><a href="https://bitmessage.org/">Bitmessage</a></li>
+<li><a href="https://wire.com/">Wire</a></li>
+<li>VoIP w/<a href="https://en.wikipedia.org/wiki/ZRTP">ZRTP</a> or controlled <a href="https://en.wikipedia.org/wiki/Secure_Real-time_Transport_Protocol">SRTP</a> (e.g using <a href="https://en.wikipedia.org/wiki/CSipSimple">CSipSimple</a>, <a href="https://en.wikipedia.org/wiki/Linphone">Linphone</a>)</li>
+<li><a href="https://matrix.org/">Matrix</a></li>
+<li><a href="https://kontalk.org/">Kontalk</a></li>
+<li><a href="https://0bin.net/">0bin</a> (encrypted pastebin)</li>
+<li><a href="https://appear.in">Appear.in</a></li>
+<li><a href="https://riot.im/">riot</a></li>
+<li><a href="https://www.wickr.com/">Wickr Me</a></li>
+
+</ul>
+
+<p>And finally the ones mentioned but not marked as used by
+anyone. This might be a mistake; perhaps the person adding the entry
+forgot to flag it as used?</p>
+
+<ul>
+
+<li>Email w/Certificates <a href="https://en.wikipedia.org/wiki/S/MIME">S/MIME</a></li>
+<li><a href="https://www.crypho.com/">Crypho</a></li>
+<li><a href="https://cryptpad.fr/">CryptPad</a></li>
+<li><a href="https://github.com/ricochet-im/ricochet">ricochet</a></li>
+
+</ul>
+
+<p>Given the network effect, it seems obvious to me that we as a society
+have been divided and conquered by those interested in keeping
+encrypted and secure communication away from the masses. The
+finishing remarks <a href="https://vimeo.com/97505679">from Aral Balkan
+in his talk "Free is a lie"</a> about the usability of free software
+really come into effect when you want to communicate in private with
+your friends and family. We can not expect them to let the poor
+usability of a communication tool block their ability to talk to
+their loved ones.</p>
+
+<p>Note for example the option IRC w/OTR. Most IRC clients do not
+have OTR support, so in most cases OTR would not be an option, even if
+you wanted it. In my personal experience, about 1 in 20 of those I
+talk to have an IRC client with OTR. For private communication to
+really be available, most of the people one talks to must have the
+option in the client they already use. I can not simply ask my family
+to install an IRC client; I would also need to guide them through a
+technical multi-step process of adding extensions to the client to
+get them going. This is a non-starter for most.</p>
+
+<p>I would like to be able to do video phone calls, audio phone calls,
+exchange instant messages and share files with my loved ones, without
+being forced to share with people I do not know. I do not want to
+share the content of the conversations, and I do not want to share who
+I communicate with or the fact that I communicate with someone.
+Without all these factors in place, my private life is being more or
+less invaded.</p>
+</description>
+ </item>
+
+ <item>
+ <title>My own self balancing Lego Segway</title>
+ <link>http://people.skolelinux.org/pere/blog/My_own_self_balancing_Lego_Segway.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/My_own_self_balancing_Lego_Segway.html</guid>
+ <pubDate>Fri, 4 Nov 2016 10:15:00 +0100</pubDate>
+ <description><p>A while back I received a Gyro sensor for the NXT
+<a href="http://mindstorms.lego.com">Mindstorms</a> controller as a birthday
+present. It had been on my wishlist for a while, because I wanted to
+build a Segway like balancing lego robot. I had already built
+<a href="http://www.nxtprograms.com/NXT2/segway/">a simple balancing
+robot</a> with the kids, using the light/color sensor included in the
+NXT kit as the balance sensor, but it was not working very well. It
+could balance for a while, but was very sensitive to the lighting
+conditions in the room and the reflective properties of the surface and
+would fall over after a short while. I wanted something more robust,
+and had
+<a href="https://www.hitechnic.com/cgi-bin/commerce.cgi?preadd=action&key=NGY1044">the
+gyro sensor from HiTechnic</a> I believed would solve it on my
+wishlist for some years before it suddenly showed up as a gift from my
+loved ones. :)</p>
+
+<p>Unfortunately I have not had time to sit down and play with it
+since then. But that changed some days ago, when I was searching for
+lego segway information and came across a recipe from HiTechnic for
+building
+<a href="http://www.hitechnic.com/blog/gyro-sensor/htway/">the
+HTWay</a>, a segway like balancing robot. Build instructions and
+<a href="https://www.hitechnic.com/upload/786-HTWayC.nxc">source
+code</a> were included, so it was just a question of putting it all
+together. And thanks to the great work of many Debian developers, the
+compiler needed to build the source for the NXT is already included in
+Debian, so I was ready to go in less than an hour. The resulting robot
+does not look very impressive in its simplicity:</p>
+
+<p align="center"><img width="70%" src="http://people.skolelinux.org/pere/blog/images/2016-11-04-lego-htway-robot.jpeg"></p>
+
+<p>Because I lack the infrared sensor used to control the robot in the
+design from HiTechnic, I had to comment out the last task
+(taskControl). I simply placed /* and */ around it to get the program
+working without that sensor present. Now it balances just fine until
+the battery runs low:</p>
+
+<p align="center"><video width="70%" controls="true">
+ <source src="http://people.skolelinux.org/pere/blog/images/2016-11-04-lego-htway-balancing.ogv" type="video/ogg">
+</video></p>
+
+<p>Now we would like to teach it how to follow a line and take remote
+control instructions using the included Bluetooth receiver in the NXT.</p>
+
+<p>If you, like me, love LEGO and want to make sure the tools needed
+to work with LEGO are available in Debian and all our derivative
+distributions like Ubuntu, check out
+<a href="http://wiki.debian.org/LegoDesigners">the LEGO designers
+project page</a> and join the Debian LEGO team. Personally I own a
+RCX and NXT controller (no EV3), and would like to make sure the
+Debian tools needed to program the systems I own work as they
+should.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Experience and updated recipe for using the Signal app without a mobile phone</title>
+ <link>http://people.skolelinux.org/pere/blog/Experience_and_updated_recipe_for_using_the_Signal_app_without_a_mobile_phone.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Experience_and_updated_recipe_for_using_the_Signal_app_without_a_mobile_phone.html</guid>
+ <pubDate>Mon, 10 Oct 2016 11:30:00 +0200</pubDate>
+ <description><p>In July
+<a href="http://people.skolelinux.org/pere/blog/How_to_use_the_Signal_app_if_you_only_have_a_land_line__ie_no_mobile_phone_.html">I
+wrote how to get the Signal Chrome/Chromium app working</a> without
+the ability to receive SMS messages (aka without a cell phone). It is
+time to share some experiences and provide an updated setup.</p>
+
+<p>The Signal app has worked fine for several months now, and I use
+it regularly to chat with my loved ones. I had a major snag at the
+end of my summer vacation, when the app completely forgot my
+setup, identity and keys. The reason behind this major mess was
+running out of disk space. To avoid that ever happening again I have
+started storing everything in <tt>userdata/</tt> in git, to be able to
+roll back to an earlier version if the files are wiped by mistake. I
+had to use it once after introducing the git backup. When rolling
+back to an earlier version, one needs to use the 'reset session' option
+in Signal to get going, and notify the people you talk with about the
+problem. I assume there is some sequence number tracking in the
+protocol to detect rollback attacks. The git repository is rather big
+(674 MiB so far), but I have not tried to figure out if some of the
+content can be added to a .gitignore file due to lack of spare
+time.</p>
+
+<p>I've also hit the 90 days timeout blocking, and noticed that this
+makes it impossible to send messages using Signal. I could still
+receive them, but had to patch the code with a new timestamp to send.
+I believe the timeout is added by the developers to force people to
+upgrade to the latest version of the app, even when there are no
+protocol changes, to reduce the version skew among the user base and
+thus try to keep the number of support requests down.</p>
+
+<p>Since my original recipe, the Signal source code changed slightly,
+making the old patch fail to apply cleanly. Below is an updated
+patch, including the shell wrapper I use to start Signal. The
+original version required a new user to locate the JavaScript console
+and call a function from there. I got help from a friend with more
+JavaScript knowledge than me to modify the code to provide a GUI
+button instead. This means that to get started, you just need to run
+the wrapper and click the 'Register without mobile phone' button.
+I've also modified the timeout code to always set it to 90 days
+in the future, to avoid having to patch the code regularly.</p>
+
+<p>So, the updated recipe for Debian Jessie:</p>
+
+<ol>
+
+<li>First, install the required packages to get the source code and the
+browser you need. Signal only works with Chrome/Chromium, as far as I
+know, so you need to install it.
+
+<pre>
+apt install git tor chromium
+git clone https://github.com/WhisperSystems/Signal-Desktop.git
+</pre></li>
+
+<li>Modify the source code using the commands listed in the patch
+block below.</li>
+
+<li>Start Signal using the run-signal-app wrapper (for example using
+<tt>`pwd`/run-signal-app</tt>).</li>
+
+<li>Click on the 'Register without mobile phone' button, fill in a phone
+number you can receive calls on within the next minute, receive the
+verification code and enter it into the form field, and press
+'Register'. Note, the phone number you use will be your Signal
+username, ie the way others can find you on Signal.</li>
+
+<li>You can now use Signal to contact others. Note, new contacts do
+not show up in the contact list until you restart Signal, and there is
+no way to assign names to contacts. There is also no way to create or
+update chat groups. I suspect this is because the web app does not have
+an associated contact database.</li>
+
+</ol>
+
+<p>I am still a bit uneasy about using Signal, because of the way its
+main author moxie0 rejects federation and accepts dependencies on major
+corporations like Google (part of the code is fetched from Google) and
+Amazon (the central coordination point is owned by Amazon). See for
+example
+<a href="https://github.com/LibreSignal/LibreSignal/issues/37">the
+LibreSignal issue tracker</a> for a thread documenting the author's
+view on these issues. But the network effect is strong in this case,
+and several of the people I want to communicate with already use
+Signal. Perhaps we can all move to <a href="https://ring.cx/">Ring</a>
+once it <a href="https://bugs.debian.org/830265">works on my
+laptop</a>? It already works on Windows and Android, and is included
+in <a href="https://tracker.debian.org/pkg/ring">Debian</a> and
+<a href="https://launchpad.net/ubuntu/+source/ring">Ubuntu</a>, but not
+yet working on Debian stable.</p>
+
+<p>Anyway, this is the patch I apply to the Signal code to get it
+working. It switches to the production servers, disables the timeout,
+makes registration easier and adds the shell wrapper:</p>
+
+<pre>
+cd Signal-Desktop; cat &lt;&lt;EOF | patch -p1
+diff --git a/js/background.js b/js/background.js
+index 24b4c1d..579345f 100644
+--- a/js/background.js
++++ b/js/background.js
+@@ -33,9 +33,9 @@
+ });
+ });
+
+- var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
++ var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org';
+ var SERVER_PORTS = [80, 4433, 8443];
+- var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
++ var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
+ var messageReceiver;
+ window.getSocketStatus = function() {
+ if (messageReceiver) {
+diff --git a/js/expire.js b/js/expire.js
+index 639aeae..beb91c3 100644
+--- a/js/expire.js
++++ b/js/expire.js
+@@ -1,6 +1,6 @@
+ ;(function() {
+ 'use strict';
+- var BUILD_EXPIRATION = 0;
++ var BUILD_EXPIRATION = Date.now() + (90 * 24 * 60 * 60 * 1000);
+
+ window.extension = window.extension || {};
+
+diff --git a/js/views/install_view.js b/js/views/install_view.js
+index 7816f4f..1d6233b 100644
+--- a/js/views/install_view.js
++++ b/js/views/install_view.js
+@@ -38,7 +38,8 @@
+ return {
+ 'click .step1': this.selectStep.bind(this, 1),
+ 'click .step2': this.selectStep.bind(this, 2),
+- 'click .step3': this.selectStep.bind(this, 3)
++ 'click .step3': this.selectStep.bind(this, 3),
++ 'click .callreg': function() { extension.install('standalone') },
+ };
+ },
+ clearQR: function() {
+diff --git a/options.html b/options.html
+index dc0f28e..8d709f6 100644
+--- a/options.html
++++ b/options.html
+@@ -14,7 +14,10 @@
+ &lt;div class='nav'>
+ &lt;h1>{{ installWelcome }}&lt;/h1>
+ &lt;p>{{ installTagline }}&lt;/p>
+- &lt;div> &lt;a class='button step2'>{{ installGetStartedButton }}&lt;/a> &lt;/div>
++ &lt;div> &lt;a class='button step2'>{{ installGetStartedButton }}&lt;/a>
++ &lt;br> &lt;a class="button callreg">Register without mobile phone&lt;/a>
++
++ &lt;/div>
+ &lt;span class='dot step1 selected'>&lt;/span>
+ &lt;span class='dot step2'>&lt;/span>
+ &lt;span class='dot step3'>&lt;/span>
+--- /dev/null 2016-10-07 09:55:13.730181472 +0200
++++ b/run-signal-app 2016-10-10 08:54:09.434172391 +0200
+@@ -0,0 +1,12 @@
++#!/bin/sh
++set -e
++cd $(dirname $0)
++mkdir -p userdata
++userdata="`pwd`/userdata"
++if [ -d "$userdata" ] && [ ! -d "$userdata/.git" ] ; then
++ (cd $userdata && git init)
++fi
++(cd $userdata && git add . && git commit -m "Current status." || true)
++exec chromium \
++ --proxy-server="socks://localhost:9050" \
++ --user-data-dir=$userdata --load-and-launch-app=`pwd`
+EOF
+chmod a+rx run-signal-app
+</pre>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Isenkram, Appstream and udev make life as a LEGO builder easier</title>
+ <link>http://people.skolelinux.org/pere/blog/Isenkram__Appstream_and_udev_make_life_as_a_LEGO_builder_easier.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Isenkram__Appstream_and_udev_make_life_as_a_LEGO_builder_easier.html</guid>
+ <pubDate>Fri, 7 Oct 2016 09:50:00 +0200</pubDate>
+ <description><p><a href="http://packages.qa.debian.org/isenkram">The Isenkram
+system</a> provides a practical and easy way to figure out which
+packages support the hardware in a given machine. The command line
+tool <tt>isenkram-lookup</tt> and the tasksel options provide a
+convenient way to list and install packages relevant for the current
+hardware during system installation, both user space packages and
+firmware packages. The GUI background daemon on the other hand provides
+a pop-up proposing to install packages when a new dongle is inserted
+while using the computer. For example, if you plug in a smart card
+reader, the system will ask if you want to install <tt>pcscd</tt> if
+that package isn't already installed, and if you plug in a USB video
+camera the system will ask if you want to install <tt>cheese</tt> if
+cheese is currently missing. This already works just fine.</p>
+
+<p>But Isenkram depends on a database mapping from hardware IDs to
+package names. When I started no such database existed in Debian, so
+I made my own data set and included it with the isenkram package and
+made isenkram fetch the latest version of this database from git using
+http. This way the isenkram users would get updated package proposals
+as soon as I learned more about hardware related packages.</p>
+
+<p>The hardware is identified using modalias strings. The modalias
+design is from the Linux kernel, where most hardware descriptors are
+made available as strings that can be matched using filename style
+globbing. It handles USB, PCI, DMI and a lot of other hardware related
+identifiers.</p>
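+
+<p>To illustrate, matching a modalias is nothing more than plain
+shell-style globbing, as this small sketch shows. The modalias string
+below is a made-up USB example for demonstration purposes:</p>
+
```shell
# Modalias matching is plain filename-style globbing.
# The modalias string below is an illustrative USB example.
modalias="usb:v0694p0002d0000dc00dsc00dp00ic00isc00ip00"
for pattern in 'usb:v0694p*' 'pci:v00008086*'; do
    case "$modalias" in
        $pattern) echo "$pattern matches" ;;
        *)        echo "$pattern does not match" ;;
    esac
done
```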
+
+<p>The downside to the Isenkram specific database is that there is no
+information about relevant distribution / Debian version, making
+isenkram propose obsolete packages too. But along came AppStream, a
+cross distribution mechanism to store and collect metadata about
+software packages. When I heard about the proposal, I contacted the
+people involved and suggested to add a hardware matching rule using
+modalias strings in the specification, to be able to use AppStream for
+mapping hardware to packages. This idea was accepted and AppStream is
+now a great way for a package to announce the hardware it supports in a
+distribution neutral way. I wrote
+<a href="http://people.skolelinux.org/pere/blog/Using_appstream_with_isenkram_to_install_hardware_related_packages_in_Debian.html">a
+recipe on how to add such meta-information</a> in a blog post last
+December. If you have a hardware related package in Debian, please
+announce the relevant hardware IDs using AppStream.</p>
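+
+<p>For reference, the hardware announcement in a package's AppStream
+metainfo file looks roughly like the following sketch. The component
+id below is made up for the example; see the recipe blog post for the
+full details:</p>
+
```xml
&lt;!-- Hypothetical metainfo fragment; the component id is made up. -->
&lt;component>
  &lt;id>org.example.nxt-tool&lt;/id>
  &lt;provides>
    &lt;!-- Glob style modalias patterns this package supports -->
    &lt;modalias>usb:v0694p*&lt;/modalias>
  &lt;/provides>
&lt;/component>
```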
+
+<p>In Debian, almost all packages that can talk to a LEGO Mindstorms
+RCX or NXT unit announce this support using AppStream. The effect is
+that when you insert such a LEGO robot controller into your Debian
+machine, Isenkram will propose to install the packages needed to get
+it working. The intention is that this should allow the local user to
+start programming the robot controller right away without having to
+guess what packages to use or which permissions to fix.</p>
+
+<p>But when I sat down with my son the other day to program our NXT
+unit using his Debian Stretch computer, I discovered something
+annoying. The local console user (i.e. my son) did not get access to
+the USB device for programming the unit. This used to work, but no
+longer does in Jessie and Stretch. After some investigation and asking
+around on #debian-devel, I discovered that this was because udev had
+changed the mechanism used to grant access to local devices. The
+ConsoleKit mechanism from <tt>/lib/udev/rules.d/70-udev-acl.rules</tt>
+no longer applied, because LDAP users were no longer added to the
+plugdev group during login. Michael Biebl told me that this method
+was obsolete and that the new method used ACLs instead. This was good
+news, as the plugdev mechanism is a mess when using a remote user
+directory like LDAP. Using ACLs makes sure a user loses device
+access when she logs out, even if she leaves behind a background
+process, which would have retained the plugdev membership under the
+ConsoleKit setup. Armed with this knowledge I moved on to fix the
+access problem for the LEGO Mindstorms related packages.</p>
+
+<p>The new system uses a udev tag, 'uaccess'. It can either be
+applied directly to a device, or be applied in
+/lib/udev/rules.d/70-uaccess.rules for classes of devices. As the
+LEGO Mindstorms udev rules did not have a class, I decided to add the
+tag directly in the udev rules files included in the packages. Here
+is one example. For the nqc C compiler for the RCX, the
+<tt>/lib/udev/rules.d/60-nqc.rules</tt> file now looks like this:</p>
+
+<p><pre>
+SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0694", ATTR{idProduct}=="0001", \
+ SYMLINK+="rcx-%k", TAG+="uaccess"
+</pre></p>
+
+<p>The key part is the 'TAG+="uaccess"' at the end. I suspect all
+packages using plugdev in their /lib/udev/rules.d/ files should be
+changed to use this tag (either directly or indirectly via
+<tt>70-uaccess.rules</tt>). Perhaps a lintian check should be created
+to detect this?</p>
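<p>A rough sketch of what such a check could look for (the rules file
below is a made-up example written to /tmp, not a real package's
file):</p>

```shell
# Count rules lines still granting access via the plugdev group -
# candidates for migration to TAG+="uaccess".
cat > /tmp/60-example.rules <<'EOF'
SUBSYSTEM=="usb", ATTR{idVendor}=="0694", MODE="0660", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0694", TAG+="uaccess"
EOF
plugdev_hits=$(grep -c 'GROUP="plugdev"' /tmp/60-example.rules)
echo "$plugdev_hits"    # 1
```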
+
+<p>I've been unable to find good documentation on the uaccess feature.
+It is unclear to me if the uaccess tag is an internal implementation
+detail like the udev-acl tag used by
+<tt>/lib/udev/rules.d/70-udev-acl.rules</tt>. If it is, I guess the
+indirect method is the preferred way. Michael
+<a href="https://github.com/systemd/systemd/issues/4288">asked for more
+documentation from the systemd project</a> and I hope it will make
+this clearer. For now I use the generic classes when they exist and
+are already handled by <tt>70-uaccess.rules</tt>, and add the tag
+directly if no such class exists.</p>
+
+<p>To learn more about the isenkram system, please check out
+<a href="http://people.skolelinux.org/pere/blog/tags/isenkram/">my
+blog posts tagged isenkram</a>.</p>
+
+<p>To help make life easier for LEGO constructors in Debian,
+please join us on our IRC channel
+<a href="irc://irc.debian.org/%23debian-lego">#debian-lego</a> and join
+the <a href="https://alioth.debian.org/projects/debian-lego/">Debian
+LEGO team</a> in the Alioth project we created yesterday. A mailing
+list is not yet created, but we are working on it. :)</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>First draft Norwegian Bokmål edition of The Debian Administrator's Handbook now public</title>
+ <link>http://people.skolelinux.org/pere/blog/First_draft_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook_now_public.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/First_draft_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook_now_public.html</guid>
+ <pubDate>Tue, 30 Aug 2016 10:10:00 +0200</pubDate>
+ <description><p>In April we
+<a href="http://people.skolelinux.org/pere/blog/Lets_make_a_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook.html">started
+to work</a> on a Norwegian Bokmål edition of the "open access" book on
+how to set up and administrate a Debian system. Today I am happy to
+report that the first draft is now publicly available. You can find
+it via the <a href="https://debian-handbook.info/get/">Get the Debian
+Administrator's Handbook</a> page (under "Other languages"). The first
+eight chapters have a first draft translation, and we are working on
+proofreading the content. If you want to help out, please start
+contributing using
+<a href="https://hosted.weblate.org/projects/debian-handbook/">the
+hosted weblate project page</a>, and get in touch using
+<a href="http://lists.alioth.debian.org/mailman/listinfo/debian-handbook-translators">the
+translators mailing list</a>. Please also check out
+<a href="https://debian-handbook.info/contribute/">the instructions for
+contributors</a>. A good way to contribute is to proofread the text
+and update weblate if you find errors.</p>
+
+<p>Our goal is still to make the Norwegian book available on paper as
+well as in electronic form.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Coz can help you find bottlenecks in multi-threaded software - nice free software</title>
+ <link>http://people.skolelinux.org/pere/blog/Coz_can_help_you_find_bottlenecks_in_multi_threaded_software___nice_free_software.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Coz_can_help_you_find_bottlenecks_in_multi_threaded_software___nice_free_software.html</guid>
+ <pubDate>Thu, 11 Aug 2016 12:00:00 +0200</pubDate>
+<description><p>This summer, I read a great article
+"<a href="https://www.usenix.org/publications/login/summer2016/curtsinger">coz:
+This Is the Profiler You're Looking For</a>" in USENIX ;login: about
+how to profile multi-threaded programs. It presented a system for
+profiling software by running experiments on the running program,
+testing how run time performance is affected by "speeding up" parts of
+the code to various degrees compared to a normal run. It does this by
+slowing down parallel threads while the "sped up" code is running
+and measuring how this affects processing time. The processing time is
+measured using probes inserted into the code, either as progress
+counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It
+can also measure unmodified code, by measuring the complete program
+runtime and running the program several times instead.</p>
+
+<p>The project and presentation were so inspiring that I would like to
+get the system into Debian. I
+<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=830708">created
+a WNPP request for it</a> and contacted upstream to try to make the
+system ready for Debian by sending patches. The build process needs to
+be changed a bit to avoid running 'git clone' to get dependencies, and
+the JavaScript web page used to visualize the collected profiling
+information needs to be included in the source package.
+But I expect that should work out fairly soon.</p>
+
+<p>The way the system works is fairly simple. To run a coz experiment
+on a binary with debug symbols available, start the program like this:</p>
+
+<p><blockquote><pre>
+coz run --- program-to-run
+</pre></blockquote></p>
+
+<p>This will create a text file profile.coz with the instrumentation
+information. To show which parts of the code affect the performance
+most, use a web browser and either point it to
+<a href="http://plasma-umass.github.io/coz/">http://plasma-umass.github.io/coz/</a>
+or use the copy from git (in the gh-pages branch). Check out this web
+site to have a look at several example profiling runs and get an idea
+what the end result from the profile runs looks like. To make the
+profiling more useful, include &lt;coz.h&gt; and insert
+COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the
+code, rebuild and run the profiler. This allows coz to do more
+targeted experiments.</p>
+
+<p>A video published by ACM
+<a href="https://www.youtube.com/watch?v=jE0V-p1odPg">presenting the
+Coz profiler</a> is available from YouTube. There is also a paper
+from the 25th Symposium on Operating Systems Principles available
+titled
+<a href="https://www.usenix.org/conference/atc16/technical-sessions/presentation/curtsinger">Coz:
+finding code that counts with causal profiling</a>.</p>
+
+<p><a href="https://github.com/plasma-umass/coz">The source code</a>
+for Coz is available from github. It will only build with clang
+because it uses a
+<a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55606">C++
+feature missing in GCC</a>, but I've submitted
+<a href="https://github.com/plasma-umass/coz/pull/67">a patch to solve
+it</a> and hope it will be included in the upstream source soon.</p>
+
+<p>Please get in touch if you, like me, would like to see this piece
+of software in Debian. I would very much like some help with the
+packaging effort, as I lack in-depth knowledge on how to package
+C++ libraries.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Sales number for the Free Culture translation, first half of 2016</title>
+ <link>http://people.skolelinux.org/pere/blog/Sales_number_for_the_Free_Culture_translation__first_half_of_2016.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Sales_number_for_the_Free_Culture_translation__first_half_of_2016.html</guid>
+ <pubDate>Fri, 5 Aug 2016 22:45:00 +0200</pubDate>
+<description><p>As my regular readers probably remember, last year I published
+French and Norwegian translations of the classic
+<a href="http://www.free-culture.cc/">Free Culture book</a> by the
+founder of the Creative Commons movement, Lawrence Lessig. A bit less
+known is the fact that due to the way I created the translations,
+using docbook and po4a, I also recreated the English original. And
+because I had already created a new PDF edition, I published it
+too. The revenue from the books is sent to the Creative Commons
+Corporation. In other words, I do not earn any money from this
+project, I just earn the warm fuzzy feeling that the text is available
+for a wider audience and more people can learn why the Creative
+Commons is needed.</p>
+
+<p>Today, just for fun, I had a look at the sales numbers over at
+Lulu.com, which takes care of payment, printing and shipping. Much to
+my surprise, the English edition is selling better than both the
+French and Norwegian editions, despite the fact that it has been
+available in English since it was first published. In total, 24 paper
+books were sold for USD $19.99 between 2016-01-01 and 2016-07-31:</p>
+
+<table border="0">
+<tr><th>Title / language</th><th>Quantity</th></tr>
+<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/culture-libre/paperback/product-22645082.html">Culture Libre / French</a></td><td align="right">3</td></tr>
+<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Fri kultur / Norwegian</a></td><td align="right">7</td></tr>
+<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/free-culture/paperback/product-22440520.html">Free Culture / English</a></td><td align="right">14</td></tr>
+</table>
+
+<p>The books are available both from Lulu.com and from large book
+stores like Amazon and Barnes &amp; Noble. Most revenue, around $10 per
+book, is sent to the Creative Commons project when the book is sold
+directly by Lulu.com. The other channels give less revenue. The
+summary from Lulu tells me 10 books were sold via the Amazon channel, 10
+via Ingram (what is this?) and 4 directly by Lulu. And Lulu.com tells
+me that the revenue sent so far this year is USD $101.42. I have no idea
+what kind of sales numbers to expect, so I do not know if that is a
+good amount of sales for a 10 year old book or not. But it makes me
+happy that the buyers find the book, and I hope they enjoy reading it
+as much as I did.</p>
+
+<p>The ebook edition is available for free from
+<a href="https://github.com/petterreinholdtsen/free-culture-lessig">Github</a>.</p>
+
+<p>If you would like to translate and publish the book in your native
+language, I would be happy to help make it happen. Please get in
+touch.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Techno TV broadcasting live across Norway and the Internet (#debconf16, #nuug) on @frikanalen</title>
+ <link>http://people.skolelinux.org/pere/blog/Techno_TV_broadcasting_live_across_Norway_and_the_Internet___debconf16___nuug__on__frikanalen.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Techno_TV_broadcasting_live_across_Norway_and_the_Internet___debconf16___nuug__on__frikanalen.html</guid>
+ <pubDate>Mon, 1 Aug 2016 10:30:00 +0200</pubDate>
+ <description><p>Did you know there is a TV channel broadcasting talks from DebConf
+16 across an entire country? Or that there is a TV channel
+broadcasting talks by or about
+<a href="http://beta.frikanalen.no/video/625529/">Linus Torvalds</a>,
+<a href="http://beta.frikanalen.no/video/625599/">Tor</a>,
+<a href="http://beta.frikanalen.no/video/624019/">OpenID</a>,
+<a href="http://beta.frikanalen.no/video/625624/">Common Lisp</a>,
+<a href="http://beta.frikanalen.no/video/625446/">Civic Tech</a>,
+<a href="http://beta.frikanalen.no/video/625090/">EFF founder John Barlow</a>,
+<a href="http://beta.frikanalen.no/video/625432/">how to make 3D
+printer electronics</a> and many more fascinating topics? It works
+using only free software (all of it
+<a href="http://github.com/Frikanalen">available from Github</a>), and
+is administrated using a web browser and a web API.</p>
+
+<p>The TV channel is the Norwegian open channel
+<a href="http://www.frikanalen.no/">Frikanalen</a>, and I am involved
+via <a href="https://www.nuug.no/">the NUUG member association</a> in
+running and developing the software for the channel. The channel is
+organised as a member organisation where its members can upload and
+broadcast what they want (think of it as Youtube for national
+broadcasting television). Individuals can broadcast too. The time
+slots are handled on a first come, first served basis. Because the
+channel has almost no viewers and very few active members, we can
+experiment with TV technology without too much flak when we make
+mistakes. And thanks to the few active members, most of the slots on
+the schedule are free. I see this as an opportunity to spread
+knowledge about technology and free software, and have a script I run
+regularly to fill up all the open slots the next few days with
+technology related video. The end result is a channel I like to
+describe as Techno TV - filled with interesting talks and
+presentations.</p>
+
+<p>It is available on channel 50 on the Norwegian national digital TV
+network (RiksTV). It is also available as a multicast stream on
+Uninett. And finally, it is available as
+<a href="http://beta.frikanalen.no/">a WebM unicast stream</a> from
+Frikanalen and NUUG. Check it out. :)</p>
+</description>
+ </item>
+
+ <item>
+ <title>Unlocking HTC Desire HD on Linux using unruu and fastboot</title>
+ <link>http://people.skolelinux.org/pere/blog/Unlocking_HTC_Desire_HD_on_Linux_using_unruu_and_fastboot.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Unlocking_HTC_Desire_HD_on_Linux_using_unruu_and_fastboot.html</guid>
+ <pubDate>Thu, 7 Jul 2016 11:30:00 +0200</pubDate>
+ <description><p>Yesterday, I tried to unlock a HTC Desire HD phone, and it proved
+to be a slight challenge. Here is the recipe if I ever need to do it
+again. It all started by me wanting to try the recipe to set up
+<a href="https://blog.torproject.org/blog/mission-impossible-hardening-android-security-and-privacy">an
+hardened Android installation</a> from the Tor project blog on a
+device I had access to. It is an old mobile phone with a broken
+microphone. The initial idea had been to just
+<a href="http://wiki.cyanogenmod.org/w/Install_CM_for_ace">install
+CyanogenMod on it</a>, but I did not quite find time to start on it
+until a few days ago.</p>
+
+<p>The unlock process is supposed to be simple: (1) Boot into the boot
+loader (press volume down and power at the same time), (2) select
+'fastboot' before (3) connecting the device via USB to a Linux
+machine, (4) request the device identifier token by running 'fastboot
+oem get_identifier_token', (5) request the device unlocking key using
+the <a href="http://www.htcdev.com/bootloader/">HTC developer web
+site</a>, and (6) unlock the phone using the key file emailed to you.</p>
+
+<p>Unfortunately, this only works if you have hboot version 2.00.0029
+or newer, and the device I was working on had 2.00.0027. This
+apparently can be easily fixed by downloading a Windows program and
+running it on a Windows machine, if you accept the terms Microsoft
+requires you to accept to use Windows - which I do not. So I had to
+come up with a different approach. I got a lot of help from AndyCap
+on #nuug, and would not have been able to get this working without
+him.</p>
+
+<p>First I needed to extract the hboot firmware from
+<a href="http://www.htcdev.com/ruu/PD9810000_Ace_Sense30_S_hboot_2.00.0029.exe">the
+Windows binary for HTC Desire HD</a> downloaded as 'the RUU' from HTC.
+For this there is <a href="https://github.com/kmdm/unruu/">a GitHub
+project named unruu</a> using libunshield. The unshield tool did not
+recognise the file format, but unruu worked and extracted rom.zip,
+containing the new hboot firmware and a text file describing which
+devices it would work for.</p>
+
+<p>Next, I needed to get the new firmware into the device. For this I
+followed some instructions
+<a href="http://www.htc1guru.com/2013/09/new-ruu-zips-posted/">available
+from HTC1Guru.com</a>, and ran these commands as root on a Linux
+machine with Debian testing:</p>
+
+<p><pre>
+adb reboot-bootloader
+fastboot oem rebootRUU
+fastboot flash zip rom.zip
+fastboot flash zip rom.zip
+fastboot reboot
+</pre></p>
+
+<p>The flash command apparently needs to be run twice to take effect,
+as the first run is just preparation and the second one does the flashing.
+The adb command is just to get to the boot loader menu, so turning the
+device on while holding volume down and the power button should work
+too.</p>
+
+<p>With the new hboot version in place I could start following the
+instructions on the HTC developer web site. I got the device token
+like this:</p>
+
+<p><pre>
+fastboot oem get_identifier_token 2>&1 | sed 's/(bootloader) //'
+</pre></p>
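<p>The sed part strips the "(bootloader) " prefix that fastboot adds
to each line of bootloader output. Here it is demonstrated on canned
output (the token value is made up):</p>

```shell
# fastboot prefixes every bootloader message with "(bootloader) ";
# stripping it leaves the raw identifier token lines.
token=$(printf '(bootloader) ABCD1234\n' | sed 's/(bootloader) //')
echo "$token"    # ABCD1234
```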
+
+<p>And once I got the unlock code via email, I could use it like
+this:</p>
+
+<p><pre>
+fastboot flash unlocktoken Unlock_code.bin
+</pre></p>
+
+<p>And with that final step in place, the phone was unlocked and I
+could start stuffing software of my own choosing into the device.
+So far I have only inserted a replacement recovery image to wipe the phone
+before I start. We will see what happens next. Perhaps I should
+install <a href="https://www.debian.org/">Debian</a> on it. :)</p>
+</description>
+ </item>
+
+ <item>
+ <title>How to use the Signal app if you only have a land line (ie no mobile phone)</title>
+ <link>http://people.skolelinux.org/pere/blog/How_to_use_the_Signal_app_if_you_only_have_a_land_line__ie_no_mobile_phone_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_to_use_the_Signal_app_if_you_only_have_a_land_line__ie_no_mobile_phone_.html</guid>
+ <pubDate>Sun, 3 Jul 2016 14:20:00 +0200</pubDate>
+<description><p>For a while now, I have wanted to test
+<a href="https://whispersystems.org/">the Signal app</a>, as it is
+said to provide end-to-end encrypted communication and several of my
+friends and family are already using it. As I by choice do not own a
+mobile phone, this proved to be harder than expected. I also wanted to
+have the source of the client and know that it was the code used on my
+machine. But yesterday I managed to get it working. I used the
+Github source, compared it to the source in
+<a href="https://chrome.google.com/webstore/detail/signal-private-messenger/bikioccmkafdpakkkcpdbppfkghcmihk?hl=en-US">the
+Signal Chrome app</a> available from the Chrome web store, applied
+patches to use the production Signal servers, started the app and
+asked for the hidden "register without a smart phone" form. Here is
+the recipe for how I did it.</p>
+
+<p>First, I fetched the Signal desktop source from Github, using:</p>
+
+<pre>
+git clone https://github.com/WhisperSystems/Signal-Desktop.git
+</pre>
+
+<p>Next, I patched the source to use the production servers, to be
+able to talk to other Signal users:</p>
+
+<pre>
+cat &lt;&lt;EOF | patch -p0
+diff -ur ./js/background.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js
+--- ./js/background.js 2016-06-29 13:43:15.630344628 +0200
++++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js 2016-06-29 14:06:29.530300934 +0200
+@@ -47,8 +47,8 @@
+ });
+ });
+
+- var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
+- var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
++ var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org:4433';
++ var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
+ var messageReceiver;
+ window.getSocketStatus = function() {
+ if (messageReceiver) {
+diff -ur ./js/expire.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js
+--- ./js/expire.js 2016-06-29 13:43:15.630344628 +0200
++++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js	2016-06-29 14:06:29.530300934 +0200
+@@ -1,6 +1,6 @@
+ ;(function() {
+ 'use strict';
+- var BUILD_EXPIRATION = 0;
++ var BUILD_EXPIRATION = 1474492690000;
+
+ window.extension = window.extension || {};
+
+EOF
+</pre>
+
+<p>The first part changes the servers, and the second updates
+an expiration timestamp. This timestamp needs to be updated regularly.
+It is set 90 days in the future by the build process (Gruntfile.js).
+The value is seconds since 1970 times 1000 (i.e. milliseconds since
+the epoch), as far as I can tell.</p>
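<p>A sketch of how such a timestamp can be computed with GNU date and
shell arithmetic, assuming the milliseconds interpretation above is
right:</p>

```shell
# Compute a BUILD_EXPIRATION value 90 days in the future, expressed as
# milliseconds since the Unix epoch.
now=$(date +%s)                                # seconds since 1970
expire=$(( (now + 90 * 24 * 60 * 60) * 1000 ))
echo "$expire"
```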
+
+<p>Based on a tip and good help from the #nuug IRC channel, I wrote a
+script to launch Signal in Chromium.</p>
+
+<pre>
+#!/bin/sh
+cd $(dirname $0)
+mkdir -p userdata
+exec chromium \
+ --proxy-server="socks://localhost:9050" \
+ --user-data-dir=`pwd`/userdata --load-and-launch-app=`pwd`
+</pre>
+
+<p>The script starts the app and configures Chromium to use the Tor
+SOCKS5 proxy, to make sure those controlling the Signal servers (today
+Amazon and Whisper Systems) as well as those listening on the lines
+will have a harder time locating my laptop based on the source IP
+address of the Signal connections.</p>
+
+<p>When the script starts, one needs to follow the instructions under
+"Standalone Registration" in the CONTRIBUTING.md file in the git
+repository. I right clicked on the Signal window to bring up the
+Chromium debugging tool, visited the 'Console' tab and wrote
+'extension.install("standalone")' on the console prompt to get the
+registration form. Then I entered my land line phone number and
+pressed 'Call'. 5 seconds later the phone rang and a robot voice
+repeated the verification code three times. After entering the number
+into the verification code field in the form, I could start using
+Signal from my laptop.</p>
+
+<p>As far as I can tell, the Signal app will leak who is talking to
+whom, and thus who knows whom, to those controlling the central server,
+but such leakage is hard to avoid with a centrally controlled server
+setup. It is something to keep in mind when using Signal - the
+content of your chats is harder to intercept, but the metadata
+exposing your contact network is available to people you do not know.
+So it is better than many options, but not great. And sadly the usage is
+tied to my land line, thus allowing those controlling the server
+to associate it with my home and person. I would prefer it if only
+those I knew could tell who I was on Signal. There are options
+avoiding such information leakage, but most of my friends are not
+using them, so I am stuck with Signal for now.</p>
+
+<p><strong>Update 2017-01-10</strong>: There is an updated blog post
+on this topic in
+<a href="http://people.skolelinux.org/pere/blog/Experience_and_updated_recipe_for_using_the_Signal_app_without_a_mobile_phone.html">Experience
+and updated recipe for using the Signal app without a mobile
+phone</a>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>The new "best" multimedia player in Debian?</title>
+ <link>http://people.skolelinux.org/pere/blog/The_new__best__multimedia_player_in_Debian_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/The_new__best__multimedia_player_in_Debian_.html</guid>
+ <pubDate>Mon, 6 Jun 2016 12:50:00 +0200</pubDate>
+<description><p>When I set out a few weeks ago to figure out
+<a href="http://people.skolelinux.org/pere/blog/What_is_the_best_multimedia_player_in_Debian_.html">which
+multimedia player in Debian claimed to support most file formats /
+MIME types</a>, I was a bit surprised by how varied the sets of MIME
+types the various players claimed support for were. The range was from
+55 to 130 MIME types. I suspect most media formats are supported by all
+players, but this is not really reflected in the MimeTypes values in
+their desktop files. There are probably also some bogus MIME types
+listed, but it is hard to identify which ones they are.</p>
+
+<p>Anyway, in the mean time I got in touch with upstream for some of
+the players suggesting to add more MIME types to their desktop files,
+and decided to spend some time myself improving the situation for my
+favorite media player VLC. The fixes for VLC entered Debian unstable
+yesterday. The complete list of MIME types can be seen on the
+<a href="https://wiki.debian.org/DebianMultimedia/PlayerSupport">Multimedia
+player MIME type support status</a> Debian wiki page.</p>
+
+<p>The new "best" multimedia player in Debian? It is VLC, followed by
+totem, parole, kplayer, gnome-mpv, mpv, smplayer, mplayer-gui and
+kmplayer. I am sure some of the other players' desktop files support
+several of the formats currently listed as working only with vlc,
+totem and parole.</p>
+
+<p>A sad observation is that only 14 MIME types are listed as
+supported by all the tested multimedia players in Debian in their
+desktop files: audio/mpeg, audio/vnd.rn-realaudio, audio/x-mpegurl,
+audio/x-ms-wma, audio/x-scpls, audio/x-wav, video/mp4, video/mpeg,
+video/quicktime, video/vnd.rn-realvideo, video/x-matroska,
+video/x-ms-asf, video/x-ms-wmv and video/x-msvideo. Personally I find
+it sad that video/ogg and video/webm are not supported by all the media
+players in Debian. As far as I can tell, all of them can handle both
+formats.</p>
+</description>
+ </item>
+
+ <item>
+ <title>A program should be able to open its own files on Linux</title>
+ <link>http://people.skolelinux.org/pere/blog/A_program_should_be_able_to_open_its_own_files_on_Linux.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/A_program_should_be_able_to_open_its_own_files_on_Linux.html</guid>
+ <pubDate>Sun, 5 Jun 2016 08:30:00 +0200</pubDate>
+ <description><p>Many years ago, when koffice was fresh and with few users, I
+decided to test its presentation tool when making the slides for a
+talk I was giving for NUUG on Japhar, a free Java virtual machine. I
+wrote the first draft of the slides, saved the result and went to bed
+the day before I would give the talk. The next day I took a plane to
+the location where the meeting was to take place, and on the plane I
+started up koffice again to polish the talk a bit, only to discover
+that kpresenter refused to load its own data file. I cursed a bit and
+started making the slides again from memory, to have something to
+present when I arrived. I tested that the saved files could be
+loaded, and the day seemed to be rescued. I continued to polish the
+slides until I suddenly discovered that the saved file could no longer
+be loaded into kpresenter. In the end I had to rewrite the slides
+three times, condensing the content until the talk became shorter and
+shorter. After the talk I was able to pinpoint the problem &ndash;
+kpresenter wrote inline images in a way it itself could not understand.
+Eventually that bug was fixed and kpresenter ended up being a great
+program to make slides. The point I'm trying to make is that we
+expect a program to be able to load its own data files, and it is
+embarrassing to its developers if it can't.</p>
+
+<p>Did you ever experience a program failing to load its own data
+files from the desktop file browser? It is not an uncommon problem. A
+while back I discovered that the screencast recorder
+gtk-recordmydesktop would save an Ogg Theora video file the KDE file
+browser would refuse to open. No video player claimed to understand
+such a file. I tracked the cause down to <tt>file --mime-type</tt>
+returning the application/ogg MIME type, which no video player I had
+installed listed as a MIME type they would understand. I asked for
+<a href="http://bugs.gw.com/view.php?id=382">file to change its
+behaviour</a> and use the MIME type video/ogg instead. I also asked
+several video players to add video/ogg to their desktop files, to give
+the file browser an idea what to do about Ogg Theora files. After a
+while, the desktop file browsers in Debian started to handle the
+output from gtk-recordmydesktop properly.</p>
+
+<p>But history repeats itself. A few days ago I tested the music
+system Rosegarden again, and I discovered that the KDE and xfce file
+browsers did not know what to do with the Rosegarden project files
+(*.rg). I've reported <a href="http://bugs.debian.org/825993">the
+rosegarden problem to the BTS</a>; a fix is committed to git and will be
+included in the next upload. To increase the chance of me remembering
+how to fix the problem next time some program fails to load its files
+from the file browser, here are some notes on how to fix it.</p>
+
+<p>The file browsers in Debian generally operate on MIME types.
+There are two sources for the MIME type of a given file: the output from
+<tt>file --mime-type</tt> mentioned above, and the content of the
+shared MIME type registry (under /usr/share/mime/). The file's MIME
+type is mapped to programs supporting that MIME type, and this
+information is collected from
+<a href="https://www.freedesktop.org/wiki/Specifications/desktop-entry-spec/">the
+desktop files</a> available in /usr/share/applications/. If there is
+one desktop file claiming support for the MIME type of the file, it is
+activated when asking to open a given file. If there are more, one
+can normally select which one to use by right-clicking on the file and
+selecting the wanted one using 'Open with' or similar. In general
+this works well. But it depends on each program picking a good MIME
+type (preferably
+<a href="http://www.iana.org/assignments/media-types/media-types.xhtml">a
+MIME type registered with IANA</a>), on file and/or the shared MIME
+registry recognizing the file, and on the desktop file listing the MIME
+type in its list of supported MIME types.</p>
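<p>The first of these sources can be checked directly from the shell.
Here is a small sketch on a throw-away file (when the xdg-utils
package is installed, 'xdg-mime query filetype' consults the shared
MIME registry instead):</p>

```shell
# Ask libmagic (via file) what MIME type it detects for a sample file.
printf 'hello\n' > /tmp/example.txt
mime=$(file --mime-type -b /tmp/example.txt)
echo "$mime"    # text/plain
```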
+
+<p>The <tt>/usr/share/mime/packages/rosegarden.xml</tt> entry for
+<a href="http://www.freedesktop.org/wiki/Specifications/shared-mime-info-spec">the
+Shared MIME database</a> look like this:</p>
+
+<p><blockquote><pre>
+&lt;?xml version="1.0" encoding="UTF-8"?&gt;
+&lt;mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info"&gt;
+ &lt;mime-type type="audio/x-rosegarden"&gt;
+ &lt;sub-class-of type="application/x-gzip"/&gt;
+ &lt;comment&gt;Rosegarden project file&lt;/comment&gt;
+ &lt;glob pattern="*.rg"/&gt;
+ &lt;/mime-type&gt;
+&lt;/mime-info&gt;
+</pre></blockquote></p>
+
+<p>This states that audio/x-rosegarden is a kind of application/x-gzip
+(it is a gzipped XML file). Note, it is much better to use an
+official MIME type registered with IANA than to make up one's own
+unofficial ones like the x-rosegarden type used by rosegarden.</p>
+
+<p>The desktop file of the rosegarden program failed to list
+audio/x-rosegarden in its list of supported MIME types, causing the
+file browsers to have no idea what to do with *.rg files:</p>
+
+<p><blockquote><pre>
+% grep Mime /usr/share/applications/rosegarden.desktop
+MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;
+X-KDE-NativeMimeType=audio/x-rosegarden-composition
+%
+</pre></blockquote></p>
+
+<p>The fix was to add "audio/x-rosegarden;" at the end of the
+MimeType= line.</p>
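+
+<p>The edit itself is a one line change. The following sketch makes
+the same change on a scratch copy of the MimeType= line instead of the
+real /usr/share/applications/rosegarden.desktop, to be safe to run
+anywhere; on a real system one would edit the desktop file itself and
+run update-desktop-database afterwards to refresh the MIME cache:</p>
+
+<p><blockquote><pre>
+# Scratch copy of the relevant line from rosegarden.desktop
+printf 'MimeType=audio/x-rosegarden-composition;audio/midi;\n' \
+    > /tmp/rosegarden.desktop
+# Append the missing MIME type at the end of the MimeType= line
+sed -i 's|^\(MimeType=.*\)$|\1audio/x-rosegarden;|' /tmp/rosegarden.desktop
+grep MimeType /tmp/rosegarden.desktop
+</pre></blockquote></p>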
+
+<p>If you run into a file which fails to open in the correct program
+when selected from the file browser, please check the output from
+<tt>file --mime-type</tt> for the file, ensure the file ending and
+MIME type are registered somewhere under /usr/share/mime/, and check
+that some desktop file under /usr/share/applications/ is claiming
+support for this MIME type. If not, please report a bug to have it
+fixed. :)</p>
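+
+<p>The checks themselves are quick to do from the command line. The
+file name and the output lines here are illustrations, not output
+from a real machine:</p>
+
+<p><blockquote><pre>
+% file --mime-type -b song.rg
+application/gzip
+% grep -rl x-rosegarden /usr/share/mime/packages/
+/usr/share/mime/packages/rosegarden.xml
+% grep -l audio/x-rosegarden /usr/share/applications/*.desktop
+/usr/share/applications/rosegarden.desktop
+%
+</pre></blockquote></p>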
+</description>
+ </item>
+
+ <item>
+ <title>Tor - from its creators mouth 11 years ago</title>
+ <link>http://people.skolelinux.org/pere/blog/Tor___from_its_creators_mouth_11_years_ago.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Tor___from_its_creators_mouth_11_years_ago.html</guid>
+ <pubDate>Sat, 28 May 2016 14:20:00 +0200</pubDate>
+ <description><p>A little more than 11 years ago, one of the creators of Tor, and
+the current President of <a href="https://www.torproject.org/">the Tor
+project</a>, Roger Dingledine, gave a talk for the members of the
+<a href="http://www.nuug.no/">Norwegian Unix User group</a> (NUUG). A
+video of the talk was recorded, and today, thanks to the great help
+from David Noble, I finally was able to publish the video of the talk
+on Frikanalen, the Norwegian open channel TV station where NUUG
+currently publishes its talks. You can
+<a href="http://frikanalen.no/se">watch the live stream using a web
+browser</a> with WebM support, or check out the recording on the video
+on demand page for the talk
+"<a href="http://beta.frikanalen.no/video/625599">Tor: Anonymous
+communication for the US Department of Defence...and you.</a>".</p>
+
+<p>Here is the video included for those of you using browsers with
+HTML video and Ogg Theora support:</p>
+
+<p><video width="70%" poster="http://simula.gunkies.org/media/625599/large_thumb/20050421-tor-frikanalen.jpg" controls>
+ <source src="http://simula.gunkies.org/media/625599/theora/20050421-tor-frikanalen.ogv" type="video/ogg"/>
+</video></p>
+
+<p>I guess the gist of the talk can be summarised quite simply: If you
+want to help the military in USA (and everyone else), use Tor. :)</p>
+</description>
+ </item>
+
+ <item>
+ <title>Isenkram with PackageKit support - new version 0.23 available in Debian unstable</title>
+ <link>http://people.skolelinux.org/pere/blog/Isenkram_with_PackageKit_support___new_version_0_23_available_in_Debian_unstable.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Isenkram_with_PackageKit_support___new_version_0_23_available_in_Debian_unstable.html</guid>
+ <pubDate>Wed, 25 May 2016 10:20:00 +0200</pubDate>
+ <description><p><a href="https://tracker.debian.org/pkg/isenkram">The isenkram
+system</a> is a user-focused solution in Debian for handling hardware
+related packages. The idea is to have a database of mappings between
+hardware and packages, and pop up a dialog suggesting that the user
+install the packages needed to use a given hardware dongle. Some use
+cases are when you insert a Yubikey, it proposes to install the
+software needed to control it; when you insert a braille reader, it
+proposes to install the packages needed to send text to the reader;
+and when you insert a ColorHug screen calibrator, it suggests to
+install the driver for it. The system works well, and even has a few
+command line tools to install firmware packages and packages for the
+hardware already in the machine (as opposed to hotpluggable hardware).</p>
+
+<p>The system was initially written using aptdaemon, because I found
+good documentation and example code on how to use it. But aptdaemon
+is going away and is generally being replaced by
+<a href="http://www.freedesktop.org/software/PackageKit/">PackageKit</a>,
+so Isenkram needed a rewrite. And today, thanks to the great patch
+from my colleague Sunil Mohan Adapa in the FreedomBox project, the
+rewrite finally took place. I've just uploaded a new version of
+Isenkram into Debian Unstable with the patch included, and the default
+for the background daemon is now to use PackageKit. To check it out,
+install the <tt>isenkram</tt> package and insert some hardware dongle
+and see if it is recognised.</p>
+
+<p>If you want to know what kind of packages isenkram would propose for
+the machine it is running on, you can check out the isenkram-lookup
+program. This is what it looks like on a Thinkpad X230:</p>
+
+<p><blockquote><pre>
+% isenkram-lookup
+bluez
+cheese
+fprintd
+fprintd-demo
+gkrellm-thinkbat
+hdapsd
+libpam-fprintd
+pidgin-blinklight
+thinkfan
+tleds
+tp-smapi-dkms
+tp-smapi-source
+tpb
+%
+</pre></blockquote></p>
+
+<p>The hardware mappings come from several places. The preferred way
+is for packages to announce their hardware support using
+<a href="https://www.freedesktop.org/software/appstream/docs/">the
+cross distribution appstream system</a>.
+See
+<a href="http://people.skolelinux.org/pere/blog/tags/isenkram/">previous
+blog posts about isenkram</a> to learn how to do that.</p>
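+
+<p>Such an announcement is a small XML file shipped by the package.
+Below is a hand-written sketch of what a metainfo file with a
+hardware mapping could look like; the component id and the modalias
+pattern are made-up examples, not taken from a real package:</p>
+
+<p><blockquote><pre>
+&lt;?xml version="1.0" encoding="UTF-8"?&gt;
+&lt;component&gt;
+  &lt;id&gt;org.example.dongletool&lt;/id&gt;
+  &lt;name&gt;dongletool&lt;/name&gt;
+  &lt;summary&gt;Control program for the example dongle&lt;/summary&gt;
+  &lt;provides&gt;
+    &lt;!-- USB vendor/product ID glob matched against the hardware --&gt;
+    &lt;modalias&gt;usb:v1234pABCD*&lt;/modalias&gt;
+  &lt;/provides&gt;
+&lt;/component&gt;
+</pre></blockquote></p>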
+</description>
+ </item>
+
+ <item>
+ <title>Discharge rate estimate in new battery statistics collector for Debian</title>
+ <link>http://people.skolelinux.org/pere/blog/Discharge_rate_estimate_in_new_battery_statistics_collector_for_Debian.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Discharge_rate_estimate_in_new_battery_statistics_collector_for_Debian.html</guid>
+ <pubDate>Mon, 23 May 2016 09:35:00 +0200</pubDate>
+ <description><p>Yesterday I updated the
+<a href="https://tracker.debian.org/pkg/battery-stats">battery-stats
+package in Debian</a> with a few patches sent to me by skilled and
+enterprising users. There were some nice user-visible changes.
+First of all, both desktop menu entries now work. A design flaw in
+one of the scripts made the history graph fail to show up (its PNG was
+dumped in ~/.xsession-errors) if no controlling TTY was available.
+The script worked when called from the command line, but not when
+called from the desktop menu. I changed this to look for a DISPLAY
+variable or a TTY before deciding where to draw the graph, and now the
+graph window pops up as expected.</p>
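+
+<p>The decision can be sketched in a few lines of shell. This is an
+illustration of the logic, not the actual script; the function name
+and the output labels are made up:</p>
+
+<p><blockquote><pre>
+# Pick where to draw the graph: a window when X is available, the
+# terminal when attached to a TTY, and nowhere otherwise.
+choose_output() {
+    if [ -n "$DISPLAY" ]; then
+        echo window
+    elif tty -s; then
+        echo terminal
+    else
+        echo none
+    fi
+}
+</pre></blockquote></p>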
+
+<p>The next new feature is a discharge rate estimator in one of the
+graphs (the one showing the last few hours). Also new is the use of
+colours, showing charging in blue and discharge in red. The
+percentages in this graph are relative to the last full charge, not
+battery design capacity.</p>
+
+<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2016-05-23-battery-stats-rate.png"/></p>
+
+<p>The other graph shows the entire history of the collected battery
+statistics, comparing it to the design capacity of the battery to
+visualise how the battery life time gets shorter over time. The red
+line in this graph is what the previous graph considers 100 percent:</p>
+
+<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2016-05-23-battery-stats-history.png"/></p>
+
+<p>In this graph you can see that I only charge the battery to 80
+percent of last full capacity, and how the capacity of the battery is
+shrinking. :(</p>
+
+<p>The last new feature is in the collector, which now will handle
+more hardware models. On some hardware, Linux power supply
+information is stored in /sys/class/power_supply/ACAD/, while the
+collector previously only looked in /sys/class/power_supply/AC/. Now
+both are checked to figure out whether power is connected to the
+machine.</p>
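+
+<p>The idea behind the check is simple enough to sketch in a few
+lines of shell. This is not the code from the collector, just an
+illustration of the logic, with the sysfs base directory as a
+parameter to make it easy to test:</p>
+
+<p><blockquote><pre>
+# Return success if any known AC power supply directory reports
+# being online (1).  Base directory defaults to the real sysfs path.
+on_ac_power_sketch() {
+    base="${1:-/sys/class/power_supply}"
+    for ps in "$base"/AC "$base"/ACAD; do
+        if [ -r "$ps/online" ]; then
+            if [ "$(cat "$ps/online")" = 1 ]; then
+                return 0
+            fi
+        fi
+    done
+    return 1
+}
+</pre></blockquote></p>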
+
+<p>If you are interested in how your laptop battery is doing, please
+check out the
+<a href="https://tracker.debian.org/pkg/battery-stats">battery-stats</a>
+package in Debian unstable, or rebuild it on Jessie to get it working on
+Debian stable. :) The upstream source is available from <a
+href="https://github.com/petterreinholdtsen/battery-stats">github</a>.
+Patches are very welcome.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+