<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
<item>
- <title>Legal to share more than 3000 movies listed on IMDB?</title>
- <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</guid>
- <pubDate>Sat, 18 Nov 2017 21:20:00 +0100</pubDate>
- <description><p>A month ago, I blogged about my work to
-<a href="http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html">automatically
-check the copyright status of IMDB entries</a>, and try to count the
-number of movies listed in IMDB that are legal to distribute on the
-Internet. I have continued to look for good data sources, and
-identified a few more. The code used to extract information from
-various data sources is available in
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-git repository</a>, currently available from github.</p>
-
-<p>So far I have identified 3186 unique IMDB title IDs. To gain
-better understanding of the structure of the data set, I created a
-histogram of the year associated with each movie (typically release
-year). It is interesting to notice where the peaks and dips in the
-graph are located. I wonder why they are placed there. I suspect
-World War II caused the dip around 1940, but what caused the peak
-around 2010?</p>
-
-<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2017-11-18-verk-i-det-fri-filmer.png" /></p>
-
-<p>I've so far identified ten sources for IMDB title IDs for movies in
-the public domain or with a free license. This is the statistics
-reported when running 'make stats' in the git repository:</p>
-
-<pre>
- 249 entries ( 6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json
- 2301 entries ( 540 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
- 830 entries ( 29 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
- 2109 entries ( 377 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
- 291 entries ( 122 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
- 144 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-manual.json
- 350 entries ( 1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json
- 4 entries ( 0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json
- 698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
- 8 entries ( 8 unique) with and 196 without IMDB title ID in free-movies-vodo.json
- 3186 unique IMDB title IDs in total
-</pre>
-
-<p>The entries without IMDB title ID are candidates to increase the
-data set, but might equally well be duplicates of entries already
-listed with IMDB title ID in one of the other sources, or represent
-movies that lack an IMDB title ID. I've seen examples of all these
-situations when peeking at the entries without IMDB title ID. Based
-on these data sources, the lower bound for movies listed in IMDB that
-are legal to distribute on the Internet is between 3186 and 4713.</p>
-
-<p>It would be great for improving the accuracy of this measurement,
-if the various sources added IMDB title ID to their metadata. I have
-tried to reach the people behind the various sources to ask if they
-are interested in doing this, without any replies so far. Perhaps you
-can help me get in touch with the people behind VODO, Public Domain
-Torrents, Public Domain Movies and Public Domain Review to try to
-convince them to add more metadata to their movie entries?</p>
-
-<p>Another way you could help is by adding pages to Wikipedia about
-movies that are legal to distribute on the Internet. If such a page
-exists and includes a link to both IMDB and The Internet Archive, the
-script used to generate free-movies-archive-org-wikidata.json should
-pick up the mapping as soon as Wikidata is updated.</p>
+ <title>Time for an official MIME type for patches?</title>
+ <link>http://people.skolelinux.org/pere/blog/Time_for_an_official_MIME_type_for_patches_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Time_for_an_official_MIME_type_for_patches_.html</guid>
+ <pubDate>Thu, 1 Nov 2018 08:15:00 +0100</pubDate>
+ <description><p>As part of my involvement in
+<a href="https://gitlab.com/OsloMet-ABI/nikita-noark5-core">the Nikita
+archive API project</a>, I've been importing a fairly large lump of
+emails into a test instance of the archive to see how well this would
+go. I picked a subset of <a href="https://notmuchmail.org/">my
+notmuch email database</a>, all public emails sent to me via
+@lists.debian.org, giving me a set of around 216 000 emails to import.
+In the process, I had a look at the various attachments included in
+these emails, to figure out what to do with attachments, and noticed
+that one of the most common attachment formats does not have
+<a href="https://www.iana.org/assignments/media-types/media-types.xhtml">an
+official MIME type</a> registered with IANA/IETF. The output from
+diff, ie the input for patch, is on the top 10 list of formats
+included in these emails. At the moment people seem to use either
+text/x-patch or text/x-diff, but neither is officially registered. It
+would be better if one official MIME type were registered and used
+everywhere.</p>
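+
+<p>If you want a similar overview of the attachments in your own
+email archive, a small sketch like the following can count attachment
+MIME types in an mbox file using only the Python standard library.
+This is not the code used for the import described above, and the
+mbox file name is just an example:</p>
+
+<pre>
+#!/usr/bin/python3
+
+# Hedged sketch: count the MIME types of attachments in a local mbox
+# file. The mbox path below is an example and needs to be adjusted.
+
+import collections
+import mailbox
+
+def attachment_types(path):
+    counts = collections.Counter()
+    for message in mailbox.mbox(path):
+        for part in message.walk():
+            # Only count parts marked as attachments
+            if part.get_content_disposition() == 'attachment':
+                counts[part.get_content_type()] += 1
+    return counts
+
+if __name__ == '__main__':
+    for mimetype, count in attachment_types('debian-lists.mbox').most_common(10):
+        print("%6d %s" % (count, mimetype))
+</pre>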
+
+<p>To try to get one official MIME type for these files, I've brought
+up the topic on
+<a href="https://www.ietf.org/mailman/listinfo/media-types">the
+media-types mailing list</a>. If you are interested in discussing
+which MIME type to use as the official one for patch files, or involved in
+making software using a MIME type for patches, perhaps you would like
+to join the discussion?</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Some notes on fault tolerant storage systems</title>
- <link>http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</guid>
- <pubDate>Wed, 1 Nov 2017 15:35:00 +0100</pubDate>
- <description><p>If you care about how fault tolerant your storage is, you might
-find these articles and papers interesting. They have shaped how I
-think when designing a storage system.</p>
-
-<ul>
-
-<li>USENIX :login; <a
-href="https://www.usenix.org/publications/login/summer2017/ganesan">Redundancy
-Does Not Imply Fault Tolerance. Analysis of Distributed Storage
-Reactions to Single Errors and Corruptions</a> by Aishwarya Ganesan,
-Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi
-H. Arpaci-Dusseau</li>
-
-<li>ZDNet
-<a href="http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/">Why
-RAID 5 stops working in 2009</a> by Robin Harris</li>
-
-<li>ZDNet
-<a href="http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/">Why
-RAID 6 stops working in 2019</a> by Robin Harris</li>
-
-<li>USENIX FAST'07
-<a href="http://research.google.com/archive/disk_failures.pdf">Failure
-Trends in a Large Disk Drive Population</a> by Eduardo Pinheiro,
-Wolf-Dietrich Weber and Luiz André Barroso</li>
-
-<li>USENIX ;login: <a
-href="https://www.usenix.org/system/files/login/articles/hughes12-04.pdf">Data
-Integrity. Finding Truth in a World of Guesses and Lies</a> by Doug
-Hughes</li>
-
-<li>USENIX FAST'08
-<a href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-Analysis of Data Corruption in the Storage Stack</a> by
-L. N. Bairavasundaram, G. R. Goodson, B. Schroeder, A. C.
-Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>
-
-<li>USENIX FAST'07 <a
-href="https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder_html/">Disk
-failures in the real world: what does an MTTF of 1,000,000 hours mean
-to you?</a> by B. Schroeder and G. A. Gibson.</li>
-
-<li>USENIX ;login: <a
-href="https://www.usenix.org/events/fast08/tech/full_papers/jiang/jiang_html/">Are
-Disks the Dominant Contributor for Storage Failures? A Comprehensive
-Study of Storage Subsystem Failure Characteristics</a> by Weihang
-Jiang, Chongfeng Hu, Yuanyuan Zhou, and Arkady Kanevsky</li>
-
-<li>SIGMETRICS 2007
-<a href="http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf">An
-analysis of latent sector errors in disk drives</a> by
-L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler</li>
-
-</ul>
-
-<p>Several of these research papers are based on data collected from
-hundreds of thousands or millions of disks, and their findings are eye
-opening. The short story is: simply do not implicitly trust RAID or
-redundant storage systems. Details matter. And unfortunately there
-are few options on Linux addressing all the identified issues. Both
-ZFS and Btrfs are doing a fairly good job, but have legal and
-practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After all, there is an old saying, you know
-you have a distributed system when the crash of a computer you have
-never heard of stops you from getting any work done. The same holds
-true if the fault tolerance does not work.</p>
-
-<p>Just remember, in the end, it does not matter how redundant or how
-fault tolerant your storage is, if you do not continuously monitor its
-status to detect and replace failed disks.</p>
+ <title>Measuring the speaker frequency response using the AUDMES free software GUI - nice free software</title>
+ <link>http://people.skolelinux.org/pere/blog/Measuring_the_speaker_frequency_response_using_the_AUDMES_free_software_GUI___nice_free_software.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Measuring_the_speaker_frequency_response_using_the_AUDMES_free_software_GUI___nice_free_software.html</guid>
+ <pubDate>Mon, 22 Oct 2018 08:40:00 +0200</pubDate>
+ <description><p><img src="http://people.skolelinux.org/pere/blog/images/2018-10-22-audmes-measure-speakers.png" align="right" width="40%"/></p>
+
+<p>My current home stereo is a patchwork of various pieces I got on
+flea markets over the years. It is amazing what kind of equipment
+shows up there. I've been wondering for a while whether it is possible to
+measure how well this equipment is working together, and decided to
+see how far I could get using free software. After trawling the web I
+came across an article from DIY Audio and Video on
+<a href="https://www.diyaudioandvideo.com/Tutorial/SpeakerResponseTesting/">Speaker
+Testing and Analysis</a> describing how to test speakers and listing
+several software options, among them
+<a href="https://sourceforge.net/projects/audmes/">AUDio MEasurement
+System (AUDMES)</a>. It is the only free software system I could find
+focusing on measuring speakers and audio frequency response. In the
+process I also found an interesting article from NOVO on
+<a href="http://novo.press/understanding-speaker-specifications-and-frequency-response/">Understanding
+Speaker Specifications and Frequency Response</a> and an article from
+ecoustics on
+<a href="https://www.ecoustics.com/articles/understanding-speaker-frequency-response/">Understanding
+Speaker Frequency Response</a>, with a lot of information on what to
+look for and how to interpret the graphs. Armed with this knowledge,
+I set out to measure the state of my speakers.</p>
+
+<p>The first hurdle was that AUDMES hadn't seen a commit for 10 years
+and did not build with current compilers and libraries. I got in
+touch with its author, who was no longer spending time on the program
+but gave me write access to the subversion repository on Sourceforge.
+The end result is that the code now builds on Linux and is capable of
+saving and loading the collected frequency response data in CSV
+format. The application is quite nice and flexible, and I was able to
+select the input and output audio interfaces independently. This made
+it possible to use a USB mixer as the input source, while sending
+output via my laptop headphone connection. I lacked the hardware and
+cabling to figure out a different way to get independent cabling to
+speakers and microphone.</p>
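+
+<p>Since AUDMES can now save the measurements as CSV, the data can
+also be inspected with other tools. Here is a small sketch plotting
+such a file with the Python matplotlib library. The file name and the
+column layout (a frequency column in Hz followed by a level column in
+dB) are assumptions on my part and may need adjusting to match the
+files AUDMES actually writes:</p>
+
+<pre>
+#!/usr/bin/python3
+
+# Hedged sketch: plot a frequency response stored as CSV, assuming
+# one frequency (Hz) column and one level (dB) column.
+
+import csv
+import matplotlib.pyplot as plt
+
+freqs, levels = [], []
+with open('speaker-response.csv') as f:
+    for row in csv.reader(f):
+        try:
+            freqs.append(float(row[0]))
+            levels.append(float(row[1]))
+        except (ValueError, IndexError):
+            continue  # skip headers and malformed lines
+
+plt.semilogx(freqs, levels)
+plt.xlabel('Frequency (Hz)')
+plt.ylabel('Level (dB)')
+plt.title('Speaker frequency response')
+plt.grid(True, which='both')
+plt.show()
+</pre>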
+
+<p>Using this setup I could see how a large range of high frequencies
+apparently were not making it out of my speakers. The picture shows
+the frequency response measurement of one of the speakers. Note that the
+frequency lines seem to be slightly misaligned, compared to the CSV
+output from the program. I can not hear several of these high
+frequencies, according to measurements from
+<a href="http://freehearingtestsoftware.com">Free Hearing Test
+Software</a>, a freeware system to measure your hearing (still
+looking for a free software alternative), so I do not know if they are
+coming out of the speakers. I thus do not quite know how to figure
+out if the missing frequencies are a problem with the microphone, the
+amplifier or the speakers, but I managed to rule out the audio card in my
+PC by measuring my Bose noise canceling headset using its own
+microphone. This setup was able to see the high frequency tones, so
+the problem with my stereo had to be in the amplifier or speakers.</p>
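+
+<p>To check by ear whether a given frequency makes it out of the
+speakers at all, one can also play pure test tones. Below is a
+minimal sketch doing this with the Python numpy and sounddevice
+modules. The frequencies and duration are arbitrary example values,
+and loud high frequency tones should be played with care:</p>
+
+<pre>
+#!/usr/bin/python3
+
+# Hedged sketch: play pure sine tones on the default audio output to
+# manually check which frequencies are audible from the speakers.
+
+import numpy as np
+import sounddevice as sd
+
+def play_tone(frequency, duration=2.0, samplerate=44100):
+    t = np.linspace(0, duration, int(samplerate * duration), endpoint=False)
+    tone = 0.3 * np.sin(2 * np.pi * frequency * t)
+    sd.play(tone, samplerate)
+    sd.wait()
+
+for frequency in (440, 8000, 12000, 16000):
+    print("Playing %d Hz" % frequency)
+    play_tone(frequency)
+</pre>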
+
+<p>Anyway, to try to rule out one factor I ended up picking up a new
+set of speakers at a flea market, and these work a lot better than the
+old speakers, so I guess the microphone and amplifier are OK. If you
+need to measure your own speakers, check out AUDMES. If more people
+get involved, perhaps the project could become good enough to
+<a href="https://bugs.debian.org/910876">include in Debian</a>? And if
+you know of some other free software to measure speakers and amplifier
+performance, please let me know. I am aware of the freeware option
+<a href="https://www.roomeqwizard.com/">REW</a>, but I want something
+that can be developed further even when the vendor loses interest.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Web services for writing academic LaTeX papers as a team</title>
- <link>http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html</guid>
- <pubDate>Tue, 31 Oct 2017 21:00:00 +0100</pubDate>
- <description><p>I was surprised today to learn that a friend in academia did not
-know there are easily available web services for writing
-LaTeX documents as a team. I thought it was common knowledge, but to
-make sure at least my readers are aware of it, I would like to mention
-these useful services for writing LaTeX documents. Some of them even
-provide a WYSIWYG editor to ease writing even further.</p>
-
-<p>There are two commercial services available,
-<a href="https://sharelatex.com">ShareLaTeX</a> and
-<a href="https://overleaf.com">Overleaf</a>. They are very easy to
-use. Just start a new document, select which publisher to write for
-(ie which LaTeX style to use), and start writing. Note, these two
-have announced their intention to join forces, so soon it will only be
-one joint service. I've used both for different documents, and they
-work just fine.
-<a href="https://github.com/sharelatex/sharelatex">ShareLaTeX is free
-software</a>, while the latter is not. According to <a
-href="https://www.overleaf.com/help/17-is-overleaf-open-source">an
-announcement from Overleaf</a>, they plan to keep the ShareLaTeX code
-base maintained as free software.</p>
-
-<p>But these two are not the only alternatives.
-<a href="https://app.fiduswriter.org/">Fidus Writer</a> is another free
-software solution with <a href="https://github.com/fiduswriter">the
-source available on github</a>. I have not used it myself. Several
-others can be found on the nice
-<a href="https://alternativeto.net/software/sharelatex/">alterntiveTo
-web service</a>.
-
-<p>If you like Google Docs or Etherpad, but would like to write
-documents in LaTeX, you should check out these services. You can even
-host your own, if you want to. :)</p>
+ <title>Web browser integration of VLC with Bittorrent support</title>
+ <link>http://people.skolelinux.org/pere/blog/Web_browser_integration_of_VLC_with_Bittorrent_support.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Web_browser_integration_of_VLC_with_Bittorrent_support.html</guid>
+ <pubDate>Sun, 21 Oct 2018 09:50:00 +0200</pubDate>
+ <description><p>Bittorrent is as far as I know, currently the most efficient way to
+distribute content on the Internet. It is used by all sorts of
+content providers, from national TV stations like
+<a href="https://www.nrk.no/">NRK</a>, Linux distributors like
+<a href="https://www.debian.org/">Debian</a> and
+<a href="https://www.ubuntu.com/">Ubuntu</a>, and of course the
+<a href="https://archive.org/">Internet archive</A>.
+
+<p>Almost a month ago
+<a href="https://tracker.debian.org/pkg/vlc-plugin-bittorrent">a new
+package adding Bittorrent support to VLC</a> became available in
+Debian testing and unstable. To test it, simply install it like
+this:</p>
+
+<p><pre>
+apt install vlc-plugin-bittorrent
+</pre></p>
+<p>Since the plugin was made available for the first time in Debian,
+several improvements have been made to it. In version 2.2-4, now
+available in both testing and unstable, a desktop file is provided to
+teach browsers to start VLC when the user clicks on torrent files or
+magnet links. The last part is thanks to me finally understanding
+what the strange x-scheme-handler style MIME types in desktop files
+are used for. By adding x-scheme-handler/magnet to the MimeType entry
+in the desktop file, at least the browsers Firefox and Chromium will
+suggest starting VLC when selecting a magnet URI on a web page. The
+end result is that now, with the plugin installed in Buster and Sid,
+one can visit any
+<a href="https://archive.org/details/CopyingIsNotTheft1080p">Internet
+Archive page with movies</a> using a web browser and click on the
+torrent link to start streaming the movie.</p>
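+
+<p>To check which application is registered to handle magnet links
+and torrent files after installing the plugin, one can ask the
+xdg-mime tool from xdg-utils. Here is a small Python sketch wrapping
+that query; it is just a convenience for inspection and not part of
+the Debian package:</p>
+
+<pre>
+#!/usr/bin/python3
+
+# Hedged sketch: ask xdg-mime which desktop file handles magnet links
+# and torrent files for the current user.
+
+import subprocess
+
+for mimetype in ('x-scheme-handler/magnet', 'application/x-bittorrent'):
+    handler = subprocess.run(
+        ['xdg-mime', 'query', 'default', mimetype],
+        capture_output=True, text=True).stdout.strip()
+    print("%s is handled by %s" % (mimetype, handler or "(nothing)"))
+</pre>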
+
+<p>Note, there are still some misfeatures in the plugin. One is the
+fact that it will hang and
+<a href="https://github.com/johang/vlc-bittorrent/issues/13">block VLC
+from exiting until the torrent streaming starts</a>. Another is the
+fact that it
+<a href="https://github.com/johang/vlc-bittorrent/issues/9">will pick
+and play a random file in a multi file torrent</a>. This is not
+always the video file you want. Combined with the first, it can be a
+bit hard to get the video streaming going. But when it works, it seems
+to do a good job.</p>
+
+<p>For the Debian packaging, I would love to find a good way to test
+if the plugin works with VLC using autopkgtest. I tried, but do not
+know enough about the inner workings of VLC to get it working. For now
+the autopkgtest script is only checking if the .so file was
+successfully loaded by VLC. If you have any suggestions, please
+submit a patch to the Debian bug tracking system.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Locating IMDB IDs of movies in the Internet Archive using Wikidata</title>
- <link>http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html</guid>
- <pubDate>Wed, 25 Oct 2017 12:20:00 +0200</pubDate>
- <description><p>Recently, I needed to automatically check the copyright status of a
-set of <a href="http://www.imdb.com/">The Internet Movie database
-(IMDB)</a> entries, to figure out which one of the movies they refer
-to can be freely distributed on the Internet. This proved to be
-harder than it sounds. IMDB for sure lists movies without any
-copyright protection, where the copyright protection has expired or
-where the movie is licensed using a permissive license like one from
-Creative Commons. These are mixed with copyright protected movies,
-and there seems to be no way to separate these classes of movies using
-the information in IMDB.</p>
-
-<p>First I tried to look up entries manually in IMDB,
-<a href="https://www.wikipedia.org/">Wikipedia</a> and
-<a href="https://www.archive.org/">The Internet Archive</a>, to get a
-feel how to do this. It is hard to know for sure using these sources,
-but it should be possible to be reasonably confident a movie is "out
-of copyright" with a few hours work per movie. As I needed to check
-almost 20,000 entries, this approach was not sustainable. I simply
-can not work around the clock for about 6 years to check this data
-set.</p>
-
-<p>I asked the people behind The Internet Archive if they could
-introduce a new metadata field in their metadata XML for IMDB ID, but
-was told that they leave it completely to the uploaders to update the
-metadata. Some of the metadata entries had IMDB links in the
-description, but I found no way to download all metadata files in bulk
-to locate those ones and put that approach aside.</p>
-
-<p>In the process I noticed several Wikipedia articles about movies
-had links to both IMDB and The Internet Archive, and it occurred to me
-that I could use the Wikipedia RDF data set to locate entries with
-both, to at least get a lower bound on the number of movies on The
-Internet Archive with a IMDB ID. This is useful based on the
-assumption that movies distributed by The Internet Archive can be
-legally distributed on the Internet. With some help from the RDF
-community (thank you DanC), I was able to come up with this query to
-pass to <a href="https://query.wikidata.org/">the SPARQL interface on
-Wikidata</a>:
+ <title>Release 0.2 of free software archive system Nikita announced</title>
+ <link>http://people.skolelinux.org/pere/blog/Release_0_2_of_free_software_archive_system_Nikita_announced.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Release_0_2_of_free_software_archive_system_Nikita_announced.html</guid>
+ <pubDate>Thu, 18 Oct 2018 14:40:00 +0200</pubDate>
+ <description><p>This morning, the new release of the
+<a href="https://gitlab.com/OsloMet-ABI/nikita-noark5-core/">Nikita
+Noark 5 core project</a> was
+<a href="https://lists.nuug.no/pipermail/nikita-noark/2018-October/000406.html">announced
+on the project mailing list</a>. The free software solution is an
+implementation of the Norwegian archive standard Noark 5 used by
+government offices in Norway. These were the changes in version 0.2
+since version 0.1.1 (from NEWS.md):</p>
-<p><pre>
-SELECT ?work ?imdb ?ia ?when ?label
-WHERE
-{
- ?work wdt:P31/wdt:P279* wd:Q11424.
- ?work wdt:P345 ?imdb.
- ?work wdt:P724 ?ia.
- OPTIONAL {
- ?work wdt:P577 ?when.
- ?work rdfs:label ?label.
- FILTER(LANG(?label) = "en").
- }
-}
-</pre></p>
+<ul>
+ <li>Fix typos in REL names</li>
+ <li>Tidy up error message reporting</li>
+ <li>Fix issue where we used Integer.valueOf(), not Integer.getInteger()</li>
+ <li>Change some String handling to StringBuffer</li>
+ <li>Fix error reporting</li>
+ <li>Code tidy-up</li>
+ <li>Fix issue using static non-synchronized SimpleDateFormat to avoid
+ race conditions</li>
+ <li>Fix problem where deserialisers were treating integers as strings</li>
+ <li>Update methods to make them null-safe</li>
+ <li>Fix many issues reported by coverity</li>
+ <li>Improve equals(), compareTo() and hash() in domain model</li>
+ <li>Improvements to the domain model for metadata classes</li>
+ <li>Fix CORS issues when downloading document</li>
+ <li>Implementation of case-handling with registryEntry and document upload</li>
+ <li>Better support in Javascript for OPTIONS</li>
+ <li>Adding concept description of mail integration</li>
+ <li>Improve setting of default values for GET on ny-journalpost</li>
+ <li>Better handling of required values during deserialisation </li>
+ <li>Changed tilknyttetDato (M620) from date to dateTime</li>
+ <li>Corrected some opprettetDato (M600) (de)serialisation errors.</li>
+ <li>Improve parse error reporting.</li>
+ <li>Started on OData search and filtering.</li>
+ <li>Added Contributor Covenant Code of Conduct to project.</li>
+ <li>Moved repository and project from Github to Gitlab.</li>
+ <li>Restructured repository, moved code into src/ and web/.</li>
+ <li>Updated code to use Spring Boot version 2.</li>
+ <li>Added support for OAuth2 authentication.</li>
+ <li>Fixed several bugs discovered by Coverity.</li>
+ <li>Corrected handling of date/datetime fields.</li>
+  <li>Improved error reporting when rejecting during deserialisation.</li>
+ <li>Adjusted default values provided for ny-arkivdel, ny-mappe,
+ ny-saksmappe, ny-journalpost and ny-dokumentbeskrivelse.</li>
+ <li>Several fixes for korrespondansepart*.</li>
+ <li>Updated web GUI:
+ <ul>
+ <li>Now handle both file upload and download.</li>
+ <li>Uses new OAuth2 authentication for login.</li>
+ <li>Forms now fetches default values from API using GET.</li>
+ <li>Added RFC 822 (email), TIFF and JPEG to list of possible file formats.</li>
+ </ul></li>
+</ul>
-<p>If I understand the query right, for every film entry anywhere in
-Wikipedia, it will return the IMDB ID and The Internet Archive ID, and
-when the movie was released and its English title, if either or both
-of the latter two are available. At the moment the result set contains
-2338 entries. Of course, it depends on volunteers including both
-correct IMDB and The Internet Archive IDs in the Wikipedia articles
-for the movie. It should be noted that the result will include
-duplicates if the movie has entries in several languages. There are
-some bogus entries, either because The Internet Archive ID contains a
-typo or because the movie is not available from The Internet Archive.
-I did not verify the IMDB IDs, as I am unsure how to do that
-automatically.</p>
-
-<p>I wrote a small python script to extract the data set from Wikidata
-and check if the XML metadata for the movie is available from The
-Internet Archive, and after around 1.5 hour it produced a list of 2097
-free movies and their IMDB ID. In total, 171 entries in Wikidata lack
-the referenced Internet Archive entry. I assume the 70 "disappearing"
-entries (ie 2338-2097-171) are duplicate entries.</p>
-
-<p>This is not too bad, given that The Internet Archive reports to
-contain <a href="https://archive.org/details/feature_films">5331
-feature films</a> at the moment, but it also means more than 3000
-movies are missing on Wikipedia or are missing the pair of references
-on Wikipedia.</p>
-
-<p>I was curious about the distribution by release year, and made a
-little graph to show how the amount of free movies is spread over the
-years:</p>
-
-<p><img src="http://people.skolelinux.org/pere/blog/images/2017-10-25-verk-i-det-fri-filmer.png"></p>
-
-<p>I expect the relative distribution of the remaining 3000 movies to
-be similar.</p>
-
-<p>If you want to help, and want to ensure Wikipedia can be used to
-cross reference The Internet Archive and The Internet Movie Database,
-please make sure entries like this are listed under the "External
-links" heading on the Wikipedia article for the movie:</p>
+<p>The changes and improvements are extensive. Running diffstat on
+the changes between git tags 0.1.1 and 0.2 shows 1098 files changed,
+108666 insertions(+), 54066 deletions(-).</p>
-<p><pre>
-* {{Internet Archive film|id=FightingLady}}
-* {{IMDb title|id=0036823|title=The Fighting Lady}}
-</pre></p>
+<p>If a free and open standardized archiving API sounds interesting to
+you, please contact us on IRC
+(<a href="irc://irc.freenode.net/%23nikita">#nikita on
+irc.freenode.net</a>) or email
+(<a href="https://lists.nuug.no/mailman/listinfo/nikita-noark">nikita-noark
+mailing list</a>).</p>
-<p>Please verify the links on the final page, to make sure you did not
-introduce a typo.</p>
-
-<p>Here is the complete list, if you want to correct the 171
-identified Wikipedia entries with broken links to The Internet
-Archive: <a href="http://www.wikidata.org/entity/Q1140317">Q1140317</a>,
-<a href="http://www.wikidata.org/entity/Q458656">Q458656</a>,
-<a href="http://www.wikidata.org/entity/Q458656">Q458656</a>,
-<a href="http://www.wikidata.org/entity/Q470560">Q470560</a>,
-<a href="http://www.wikidata.org/entity/Q743340">Q743340</a>,
-<a href="http://www.wikidata.org/entity/Q822580">Q822580</a>,
-<a href="http://www.wikidata.org/entity/Q480696">Q480696</a>,
-<a href="http://www.wikidata.org/entity/Q128761">Q128761</a>,
-<a href="http://www.wikidata.org/entity/Q1307059">Q1307059</a>,
-<a href="http://www.wikidata.org/entity/Q1335091">Q1335091</a>,
-<a href="http://www.wikidata.org/entity/Q1537166">Q1537166</a>,
-<a href="http://www.wikidata.org/entity/Q1438334">Q1438334</a>,
-<a href="http://www.wikidata.org/entity/Q1479751">Q1479751</a>,
-<a href="http://www.wikidata.org/entity/Q1497200">Q1497200</a>,
-<a href="http://www.wikidata.org/entity/Q1498122">Q1498122</a>,
-<a href="http://www.wikidata.org/entity/Q865973">Q865973</a>,
-<a href="http://www.wikidata.org/entity/Q834269">Q834269</a>,
-<a href="http://www.wikidata.org/entity/Q841781">Q841781</a>,
-<a href="http://www.wikidata.org/entity/Q841781">Q841781</a>,
-<a href="http://www.wikidata.org/entity/Q1548193">Q1548193</a>,
-<a href="http://www.wikidata.org/entity/Q499031">Q499031</a>,
-<a href="http://www.wikidata.org/entity/Q1564769">Q1564769</a>,
-<a href="http://www.wikidata.org/entity/Q1585239">Q1585239</a>,
-<a href="http://www.wikidata.org/entity/Q1585569">Q1585569</a>,
-<a href="http://www.wikidata.org/entity/Q1624236">Q1624236</a>,
-<a href="http://www.wikidata.org/entity/Q4796595">Q4796595</a>,
-<a href="http://www.wikidata.org/entity/Q4853469">Q4853469</a>,
-<a href="http://www.wikidata.org/entity/Q4873046">Q4873046</a>,
-<a href="http://www.wikidata.org/entity/Q915016">Q915016</a>,
-<a href="http://www.wikidata.org/entity/Q4660396">Q4660396</a>,
-<a href="http://www.wikidata.org/entity/Q4677708">Q4677708</a>,
-<a href="http://www.wikidata.org/entity/Q4738449">Q4738449</a>,
-<a href="http://www.wikidata.org/entity/Q4756096">Q4756096</a>,
-<a href="http://www.wikidata.org/entity/Q4766785">Q4766785</a>,
-<a href="http://www.wikidata.org/entity/Q880357">Q880357</a>,
-<a href="http://www.wikidata.org/entity/Q882066">Q882066</a>,
-<a href="http://www.wikidata.org/entity/Q882066">Q882066</a>,
-<a href="http://www.wikidata.org/entity/Q204191">Q204191</a>,
-<a href="http://www.wikidata.org/entity/Q204191">Q204191</a>,
-<a href="http://www.wikidata.org/entity/Q1194170">Q1194170</a>,
-<a href="http://www.wikidata.org/entity/Q940014">Q940014</a>,
-<a href="http://www.wikidata.org/entity/Q946863">Q946863</a>,
-<a href="http://www.wikidata.org/entity/Q172837">Q172837</a>,
-<a href="http://www.wikidata.org/entity/Q573077">Q573077</a>,
-<a href="http://www.wikidata.org/entity/Q1219005">Q1219005</a>,
-<a href="http://www.wikidata.org/entity/Q1219599">Q1219599</a>,
-<a href="http://www.wikidata.org/entity/Q1643798">Q1643798</a>,
-<a href="http://www.wikidata.org/entity/Q1656352">Q1656352</a>,
-<a href="http://www.wikidata.org/entity/Q1659549">Q1659549</a>,
-<a href="http://www.wikidata.org/entity/Q1660007">Q1660007</a>,
-<a href="http://www.wikidata.org/entity/Q1698154">Q1698154</a>,
-<a href="http://www.wikidata.org/entity/Q1737980">Q1737980</a>,
-<a href="http://www.wikidata.org/entity/Q1877284">Q1877284</a>,
-<a href="http://www.wikidata.org/entity/Q1199354">Q1199354</a>,
-<a href="http://www.wikidata.org/entity/Q1199354">Q1199354</a>,
-<a href="http://www.wikidata.org/entity/Q1199451">Q1199451</a>,
-<a href="http://www.wikidata.org/entity/Q1211871">Q1211871</a>,
-<a href="http://www.wikidata.org/entity/Q1212179">Q1212179</a>,
-<a href="http://www.wikidata.org/entity/Q1238382">Q1238382</a>,
-<a href="http://www.wikidata.org/entity/Q4906454">Q4906454</a>,
-<a href="http://www.wikidata.org/entity/Q320219">Q320219</a>,
-<a href="http://www.wikidata.org/entity/Q1148649">Q1148649</a>,
-<a href="http://www.wikidata.org/entity/Q645094">Q645094</a>,
-<a href="http://www.wikidata.org/entity/Q5050350">Q5050350</a>,
-<a href="http://www.wikidata.org/entity/Q5166548">Q5166548</a>,
-<a href="http://www.wikidata.org/entity/Q2677926">Q2677926</a>,
-<a href="http://www.wikidata.org/entity/Q2698139">Q2698139</a>,
-<a href="http://www.wikidata.org/entity/Q2707305">Q2707305</a>,
-<a href="http://www.wikidata.org/entity/Q2740725">Q2740725</a>,
-<a href="http://www.wikidata.org/entity/Q2024780">Q2024780</a>,
-<a href="http://www.wikidata.org/entity/Q2117418">Q2117418</a>,
-<a href="http://www.wikidata.org/entity/Q2138984">Q2138984</a>,
-<a href="http://www.wikidata.org/entity/Q1127992">Q1127992</a>,
-<a href="http://www.wikidata.org/entity/Q1058087">Q1058087</a>,
-<a href="http://www.wikidata.org/entity/Q1070484">Q1070484</a>,
-<a href="http://www.wikidata.org/entity/Q1080080">Q1080080</a>,
-<a href="http://www.wikidata.org/entity/Q1090813">Q1090813</a>,
-<a href="http://www.wikidata.org/entity/Q1251918">Q1251918</a>,
-<a href="http://www.wikidata.org/entity/Q1254110">Q1254110</a>,
-<a href="http://www.wikidata.org/entity/Q1257070">Q1257070</a>,
-<a href="http://www.wikidata.org/entity/Q1257079">Q1257079</a>,
-<a href="http://www.wikidata.org/entity/Q1197410">Q1197410</a>,
-<a href="http://www.wikidata.org/entity/Q1198423">Q1198423</a>,
-<a href="http://www.wikidata.org/entity/Q706951">Q706951</a>,
-<a href="http://www.wikidata.org/entity/Q723239">Q723239</a>,
-<a href="http://www.wikidata.org/entity/Q2079261">Q2079261</a>,
-<a href="http://www.wikidata.org/entity/Q1171364">Q1171364</a>,
-<a href="http://www.wikidata.org/entity/Q617858">Q617858</a>,
-<a href="http://www.wikidata.org/entity/Q5166611">Q5166611</a>,
-<a href="http://www.wikidata.org/entity/Q5166611">Q5166611</a>,
-<a href="http://www.wikidata.org/entity/Q324513">Q324513</a>,
-<a href="http://www.wikidata.org/entity/Q374172">Q374172</a>,
-<a href="http://www.wikidata.org/entity/Q7533269">Q7533269</a>,
-<a href="http://www.wikidata.org/entity/Q970386">Q970386</a>,
-<a href="http://www.wikidata.org/entity/Q976849">Q976849</a>,
-<a href="http://www.wikidata.org/entity/Q7458614">Q7458614</a>,
-<a href="http://www.wikidata.org/entity/Q5347416">Q5347416</a>,
-<a href="http://www.wikidata.org/entity/Q5460005">Q5460005</a>,
-<a href="http://www.wikidata.org/entity/Q5463392">Q5463392</a>,
-<a href="http://www.wikidata.org/entity/Q3038555">Q3038555</a>,
-<a href="http://www.wikidata.org/entity/Q5288458">Q5288458</a>,
-<a href="http://www.wikidata.org/entity/Q2346516">Q2346516</a>,
-<a href="http://www.wikidata.org/entity/Q5183645">Q5183645</a>,
-<a href="http://www.wikidata.org/entity/Q5185497">Q5185497</a>,
-<a href="http://www.wikidata.org/entity/Q5216127">Q5216127</a>,
-<a href="http://www.wikidata.org/entity/Q5223127">Q5223127</a>,
-<a href="http://www.wikidata.org/entity/Q5261159">Q5261159</a>,
-<a href="http://www.wikidata.org/entity/Q1300759">Q1300759</a>,
-<a href="http://www.wikidata.org/entity/Q5521241">Q5521241</a>,
-<a href="http://www.wikidata.org/entity/Q7733434">Q7733434</a>,
-<a href="http://www.wikidata.org/entity/Q7736264">Q7736264</a>,
-<a href="http://www.wikidata.org/entity/Q7737032">Q7737032</a>,
-<a href="http://www.wikidata.org/entity/Q7882671">Q7882671</a>,
-<a href="http://www.wikidata.org/entity/Q7719427">Q7719427</a>,
-<a href="http://www.wikidata.org/entity/Q7719444">Q7719444</a>,
-<a href="http://www.wikidata.org/entity/Q7722575">Q7722575</a>,
-<a href="http://www.wikidata.org/entity/Q2629763">Q2629763</a>,
-<a href="http://www.wikidata.org/entity/Q2640346">Q2640346</a>,
-<a href="http://www.wikidata.org/entity/Q2649671">Q2649671</a>,
-<a href="http://www.wikidata.org/entity/Q7703851">Q7703851</a>,
-<a href="http://www.wikidata.org/entity/Q7747041">Q7747041</a>,
-<a href="http://www.wikidata.org/entity/Q6544949">Q6544949</a>,
-<a href="http://www.wikidata.org/entity/Q6672759">Q6672759</a>,
-<a href="http://www.wikidata.org/entity/Q2445896">Q2445896</a>,
-<a href="http://www.wikidata.org/entity/Q12124891">Q12124891</a>,
-<a href="http://www.wikidata.org/entity/Q3127044">Q3127044</a>,
-<a href="http://www.wikidata.org/entity/Q2511262">Q2511262</a>,
-<a href="http://www.wikidata.org/entity/Q2517672">Q2517672</a>,
-<a href="http://www.wikidata.org/entity/Q2543165">Q2543165</a>,
-<a href="http://www.wikidata.org/entity/Q426628">Q426628</a>,
-<a href="http://www.wikidata.org/entity/Q426628">Q426628</a>,
-<a href="http://www.wikidata.org/entity/Q12126890">Q12126890</a>,
-<a href="http://www.wikidata.org/entity/Q13359969">Q13359969</a>,
-<a href="http://www.wikidata.org/entity/Q13359969">Q13359969</a>,
-<a href="http://www.wikidata.org/entity/Q2294295">Q2294295</a>,
-<a href="http://www.wikidata.org/entity/Q2294295">Q2294295</a>,
-<a href="http://www.wikidata.org/entity/Q2559509">Q2559509</a>,
-<a href="http://www.wikidata.org/entity/Q2559912">Q2559912</a>,
-<a href="http://www.wikidata.org/entity/Q7760469">Q7760469</a>,
-<a href="http://www.wikidata.org/entity/Q6703974">Q6703974</a>,
-<a href="http://www.wikidata.org/entity/Q4744">Q4744</a>,
-<a href="http://www.wikidata.org/entity/Q7766962">Q7766962</a>,
-<a href="http://www.wikidata.org/entity/Q7768516">Q7768516</a>,
-<a href="http://www.wikidata.org/entity/Q7769205">Q7769205</a>,
-<a href="http://www.wikidata.org/entity/Q7769988">Q7769988</a>,
-<a href="http://www.wikidata.org/entity/Q2946945">Q2946945</a>,
-<a href="http://www.wikidata.org/entity/Q3212086">Q3212086</a>,
-<a href="http://www.wikidata.org/entity/Q3212086">Q3212086</a>,
-<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
-<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
-<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
-<a href="http://www.wikidata.org/entity/Q6909175">Q6909175</a>,
-<a href="http://www.wikidata.org/entity/Q7405709">Q7405709</a>,
-<a href="http://www.wikidata.org/entity/Q7416149">Q7416149</a>,
-<a href="http://www.wikidata.org/entity/Q7239952">Q7239952</a>,
-<a href="http://www.wikidata.org/entity/Q7317332">Q7317332</a>,
-<a href="http://www.wikidata.org/entity/Q7783674">Q7783674</a>,
-<a href="http://www.wikidata.org/entity/Q7783704">Q7783704</a>,
-<a href="http://www.wikidata.org/entity/Q7857590">Q7857590</a>,
-<a href="http://www.wikidata.org/entity/Q3372526">Q3372526</a>,
-<a href="http://www.wikidata.org/entity/Q3372642">Q3372642</a>,
-<a href="http://www.wikidata.org/entity/Q3372816">Q3372816</a>,
-<a href="http://www.wikidata.org/entity/Q3372909">Q3372909</a>,
-<a href="http://www.wikidata.org/entity/Q7959649">Q7959649</a>,
-<a href="http://www.wikidata.org/entity/Q7977485">Q7977485</a>,
-<a href="http://www.wikidata.org/entity/Q7992684">Q7992684</a>,
-<a href="http://www.wikidata.org/entity/Q3817966">Q3817966</a>,
-<a href="http://www.wikidata.org/entity/Q3821852">Q3821852</a>,
-<a href="http://www.wikidata.org/entity/Q3420907">Q3420907</a>,
-<a href="http://www.wikidata.org/entity/Q3429733">Q3429733</a>,
-<a href="http://www.wikidata.org/entity/Q774474">Q774474</a></p>
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>A one-way wall on the border?</title>
- <link>http://people.skolelinux.org/pere/blog/A_one_way_wall_on_the_border_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/A_one_way_wall_on_the_border_.html</guid>
- <pubDate>Sat, 14 Oct 2017 22:10:00 +0200</pubDate>
- <description><p>I find it fascinating how many of the people being locked inside
-the proposed border wall between the USA and Mexico support the idea. The
-proposal to keep Mexicans out reminds me of
-<a href="http://www.history.com/news/10-things-you-may-not-know-about-the-berlin-wall">the
-propaganda twist from the East Germany government</a> calling the wall
-the “Antifascist Bulwark” after erecting the Berlin Wall, claiming
-that the wall was erected to keep enemies from creeping into East
-Germany, while it was obvious to the people locked inside it that it
-was erected to keep the people from escaping.</p>
-
-<p>Do the people in USA supporting this wall really believe it is a
-one way wall, only keeping people on the outside from getting in,
-while not keeping people in the inside from getting out?</p>
+ <title>Fetching trusted timestamps using the rfc3161ng python module</title>
+ <link>http://people.skolelinux.org/pere/blog/Fetching_trusted_timestamps_using_the_rfc3161ng_python_module.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Fetching_trusted_timestamps_using_the_rfc3161ng_python_module.html</guid>
+ <pubDate>Mon, 8 Oct 2018 12:30:00 +0200</pubDate>
+ <description><p>I have earlier covered the basics of trusted timestamping using the
+'openssl ts' client. See the blog posts from
+<a href="http://people.skolelinux.org/pere/blog/Public_Trusted_Timestamping_services_for_everyone.html">2014</a>,
+<a href="http://people.skolelinux.org/pere/blog/syslog_trusted_timestamp___chain_of_trusted_timestamps_for_your_syslog.html">2016</a>
+and
+<a href="http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html">2017</a>
+for those stories. But sometimes I want to integrate the timestamping
+in other code, and recently I needed to integrate it into Python.
+After searching a bit, I found
+<a href="https://dev.entrouvert.org/projects/python-rfc3161">the
+rfc3161 library</a> which seemed like a good fit, but I soon
+discovered it only worked for python version 2, and I needed something
+that works with python version 3. Luckily I next came across
+<a href="https://github.com/trbs/rfc3161ng/">the rfc3161ng library</a>,
+a fork of the original rfc3161 library. Not only is it working with
+python 3, it has fixed a few of the bugs in the original library, and
+it has an active maintainer. I decided to wrap it up and make it
+<a href="https://tracker.debian.org/pkg/python-rfc3161ng">available in
+Debian</a>, and a few days ago it entered Debian unstable and testing.</p>
+
+<p>Using the library is fairly straightforward. The only slightly
+problematic step is to fetch the required certificates to verify the
+timestamp. For some services it is straightforward, while for others
+I have not yet figured out how to do it. Here is a small standalone
+code example based on one of the integration tests in the library code:</p>
+
+<pre>
+#!/usr/bin/python3
+
+"""
+
+Python 3 script demonstrating how to use the rfc3161ng module to
+get trusted timestamps.
+
+The license of this code is the same as the license of the rfc3161ng
+library, ie MIT/BSD.
+
+"""
+
+import os
+import pyasn1.codec.der
+import rfc3161ng
+import subprocess
+import tempfile
+import urllib.request
+
+def store(f, data):
+ f.write(data)
+ f.flush()
+ f.seek(0)
+
+def fetch(url, f=None):
+ response = urllib.request.urlopen(url)
+ data = response.read()
+ if f:
+ store(f, data)
+ return data
+
+def main():
+ with tempfile.NamedTemporaryFile() as cert_f,\
+ tempfile.NamedTemporaryFile() as ca_f,\
+ tempfile.NamedTemporaryFile() as msg_f,\
+ tempfile.NamedTemporaryFile() as tsr_f:
+
+ # First fetch certificates used by service
+ certificate_data = fetch('https://freetsa.org/files/tsa.crt', cert_f)
+ ca_data_data = fetch('https://freetsa.org/files/cacert.pem', ca_f)
+
+ # Then timestamp the message
+ timestamper = \
+ rfc3161ng.RemoteTimestamper('http://freetsa.org/tsr',
+ certificate=certificate_data)
+ data = b"Python forever!\n"
+ tsr = timestamper(data=data, return_tsr=True)
+
+ # Finally, convert message and response to something 'openssl ts' can verify
+ store(msg_f, data)
+ store(tsr_f, pyasn1.codec.der.encoder.encode(tsr))
+ args = ["openssl", "ts", "-verify",
+ "-data", msg_f.name,
+ "-in", tsr_f.name,
+ "-CAfile", ca_f.name,
+ "-untrusted", cert_f.name]
+ subprocess.check_call(args)
+
+if '__main__' == __name__:
+ main()
+</pre>
+
+<p>The code fetches the required certificates, stores them as temporary
+files, timestamps a simple message, stores the message and timestamp to
+disk and asks 'openssl ts' to verify the timestamp. A timestamp is
+around 1.5 kiB in size, and should be fairly easy to store for future
+use.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Generating 3D prints in Debian using Cura and Slic3r(-prusa)</title>
- <link>http://people.skolelinux.org/pere/blog/Generating_3D_prints_in_Debian_using_Cura_and_Slic3r__prusa_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Generating_3D_prints_in_Debian_using_Cura_and_Slic3r__prusa_.html</guid>
- <pubDate>Mon, 9 Oct 2017 10:50:00 +0200</pubDate>
- <description><p>At my nearby maker space,
-<a href="http://sonen.ifi.uio.no/">Sonen</a>, I heard the story that it
-was easier to generate gcode files for their 3D printers (Ultimaker 2+)
-on Windows and MacOS X than Linux, because the software involved had
-to be manually compiled and set up on Linux while premade packages
-worked out of the box on Windows and MacOS X. I found this annoying,
-as the software involved,
-<a href="https://github.com/Ultimaker/Cura">Cura</a>, is free software
-and should be trivial to get up and running on Linux if someone took
-the time to package it for the relevant distributions. I even found
-<a href="https://bugs.debian.org/706656">a request for adding into
-Debian</a> from 2013, which had seem some activity over the years but
-never resulted in the software showing up in Debian. So a few days
-ago I offered my help to try to improve the situation.</p>
-
-<p>Now I am very happy to see that all the packages required by a
-working Cura in Debian are uploaded into Debian and waiting in the NEW
-queue for the ftpmasters to have a look. You can track the progress
-on
-<a href="https://qa.debian.org/developer.php?email=3dprinter-general%40lists.alioth.debian.org">the
-status page for the 3D printer team</a>.</p>
-
-<p>The uploaded packages are a bit behind upstream, and were uploaded
-now to get slots in <a href="https://ftp-master.debian.org/new.html">the NEW
-queue</a> while we work on updating the packages to the latest
-upstream version.</p>
-
-<p>On a related note, two competitors for Cura, which I found harder
-to use and was unable to configure correctly for Ultimaker 2+ in the
-short time I spent on it, are already in Debian. If you are looking
-for 3D printer "slicers" and want something already available in
-Debian, check out
-<a href="https://tracker.debian.org/pkg/slic3r">slic3r</a> and
-<a href="https://tracker.debian.org/pkg/slic3r-prusa">slic3r-prusa</a>.
-The latter is a fork of the former.</p>
+ <title>Automatic Google Drive sync using grive in Debian</title>
+ <link>http://people.skolelinux.org/pere/blog/Automatic_Google_Drive_sync_using_grive_in_Debian.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Automatic_Google_Drive_sync_using_grive_in_Debian.html</guid>
+ <pubDate>Thu, 4 Oct 2018 15:20:00 +0200</pubDate>
+ <description><p>A few days ago, I rescued a Windows victim over to Debian. To try to
+rescue the remains, I helped set up automatic sync with Google Drive.
+I did not find any sensible Debian package handling this
+automatically, so I rebuilt the grive2 source from
+<a href="http://www.webupd8.org/">the Ubuntu UPD8 PPA</a> to do the
+task and added an autostart desktop entry and a small shell script to
+run in the background while the user is logged in to do the sync.
+Here is a sketch of the setup for future reference.</p>
+
+<p>I first created <tt>~/googledrive</tt>, entered the directory and
+ran '<tt>grive -a</tt>' to authenticate the machine/user. Next, I
+created an autostart hook in <tt>~/.config/autostart/grive.desktop</tt>
+to start the sync when the user logs in:</p>
+
+<p><blockquote><pre>
+[Desktop Entry]
+Name=Google drive autosync
+Type=Application
+Exec=/home/user/bin/grive-sync
+</pre></blockquote></p>
+
+<p>Finally, I wrote the <tt>~/bin/grive-sync</tt> script to sync
+~/googledrive/ with the files in Google Drive.</p>
+
+<p><blockquote><pre>
+#!/bin/sh
+set -e
+cd ~/
+cleanup() {
+ if [ "$syncpid" ] ; then
+ kill $syncpid
+ fi
+}
+trap cleanup EXIT INT QUIT
+/usr/lib/grive/grive-sync.sh listen googledrive 2>&1 | sed "s%^%$0:%" &
+syncpid=$!
+while true; do
+ if ! xhost >/dev/null 2>&1 ; then
+ echo "no DISPLAY, exiting as the user probably logged out"
+ exit 1
+ fi
+ if [ ! -e /run/user/1000/grive-sync.sh_googledrive ] ; then
+ /usr/lib/grive/grive-sync.sh sync googledrive
+ fi
+ sleep 300
+done 2>&1 | sed "s%^%$0:%"
+</pre></blockquote></p>
+
+<p>Feel free to use the setup if you want. It can be assumed to be
+GNU GPL v2 licensed (or any later version, at your option), but I
+doubt it is possible to claim copyright on this code.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Mangler du en skrue, eller har du en skrue løs?</title>
- <link>http://people.skolelinux.org/pere/blog/Mangler_du_en_skrue__eller_har_du_en_skrue_l_s_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Mangler_du_en_skrue__eller_har_du_en_skrue_l_s_.html</guid>
- <pubDate>Wed, 4 Oct 2017 09:40:00 +0200</pubDate>
- <description>When I work on various projects, I constantly need various
-screws. The latest project I am working on is making
-<a href="https://www.thingiverse.com/thing:676916">a box for an
-HDMI touch screen</a> to be used with a Raspberry Pi. The box is put
-together with screws and bolts, and I have been unsure where to get
-hold of the right screws. The nearby Clas Ohlson and Jernia stores
-rarely have what I need. But the other day I got a fantastic tip
-for those of us living in Oslo.
-<a href="http://www.zachskruer.no/">Zachariassen Jernvare AS</a> in
-<a href="http://www.openstreetmap.org/?mlat=59.93421&mlon=10.76795#map=19/59.93421/10.76795">Hegermannsgate
-23A at Torshov</a> has a fantastic selection, and is open between 09:00 and
-17:00. They sell screws, nuts, bolts, washers etc in bulk, and so
-far I have found everything I have been looking for. In addition they
-carry most other hardware, like tools, lamps, wiring, etc. I hope
-they have enough customers to keep going for a long time, as this is a
-store I will be visiting often. The store is a real find to have in
-the neighbourhood for those of us who like to build things ourselves. :)</p>
+ <title>Valutakrambod - A python and bitcoin love story</title>
+ <link>http://people.skolelinux.org/pere/blog/Valutakrambod___A_python_and_bitcoin_love_story.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Valutakrambod___A_python_and_bitcoin_love_story.html</guid>
+ <pubDate>Sat, 29 Sep 2018 22:20:00 +0200</pubDate>
+ <description><p>It would come as no surprise to anyone that I am interested in
+bitcoins and virtual currencies. I've been keeping an eye on virtual
+currencies for many years, and it is part of the reason a few months
+ago, I started writing a python library for collecting currency
+exchange rates and trade on virtual currency exchanges. I decided to
+name the end result valutakrambod, which perhaps can be translated to
+small currency shop.</p>
+
+<p>The library uses the tornado python library to handle HTTP and
+websocket connections, and provides an asynchronous system for
+connecting to and tracking several services. The code is available
+from
+<a href="http://github.com/petterreinholdtsen/valutakrambod">github</a>.</p>
+
+<p>There are two example clients of the library. One is very simple and
+lists every updated buy/sell price received from the various services.
+This code is started by running bin/btc-rates and calls the client code
+in valutakrambod/client.py. The simple client looks like this:</p>
+
+<p><blockquote><pre>
+import functools
+import tornado.ioloop
+import valutakrambod
+class SimpleClient(object):
+ def __init__(self):
+ self.services = []
+ self.streams = []
+ pass
+ def newdata(self, service, pair, changed):
+ print("%-15s %s-%s: %8.3f %8.3f" % (
+ service.servicename(),
+ pair[0],
+ pair[1],
+ service.rates[pair]['ask'],
+ service.rates[pair]['bid'])
+ )
+ async def refresh(self, service):
+ await service.fetchRates(service.wantedpairs)
+ def run(self):
+ self.ioloop = tornado.ioloop.IOLoop.current()
+ self.services = valutakrambod.service.knownServices()
+ for e in self.services:
+ service = e()
+ service.subscribe(self.newdata)
+ stream = service.websocket()
+ if stream:
+ self.streams.append(stream)
+ else:
+ # Fetch information from non-streaming services immediately
+ self.ioloop.call_later(len(self.services),
+ functools.partial(self.refresh, service))
+ # as well as regularly
+ service.periodicUpdate(60)
+ for stream in self.streams:
+ stream.connect()
+ try:
+ self.ioloop.start()
+ except KeyboardInterrupt:
+ print("Interrupted by keyboard, closing all connections.")
+ pass
+ for stream in self.streams:
+ stream.close()
+</pre></blockquote></p>
+
+<p>The library client loops over all known "public" services,
+initialises each of them, subscribes to any updates from the service, checks and
+activates websocket streaming if the service provides it, and if no
+streaming is supported, fetches information from the service and sets
+up a periodic update every 60 seconds. The output from this client
+can look like this:</p>
+
+<p><blockquote><pre>
+Bl3p BTC-EUR: 5687.110 5653.690
+Bl3p BTC-EUR: 5687.110 5653.690
+Bl3p BTC-EUR: 5687.110 5653.690
+Hitbtc BTC-USD: 6594.560 6593.690
+Hitbtc BTC-USD: 6594.560 6593.690
+Bl3p BTC-EUR: 5687.110 5653.690
+Hitbtc BTC-USD: 6594.570 6593.690
+Bitstamp EUR-USD: 1.159 1.154
+Hitbtc BTC-USD: 6594.570 6593.690
+Hitbtc BTC-USD: 6594.580 6593.690
+Hitbtc BTC-USD: 6594.580 6593.690
+Hitbtc BTC-USD: 6594.580 6593.690
+Bl3p BTC-EUR: 5687.110 5653.690
+Paymium BTC-EUR: 5680.000 5620.240
+</pre></blockquote></p>
+
+<p>The exchange order book is tracked in addition to the best buy/sell
+price, for those that need to know the details.</p>
+
+<p>The other example client focuses on providing a curses view
+with updated buy/sell prices as soon as they are received from the
+services. This code is located in bin/btc-rates-curses and activated
+by using the '-c' argument. Without the argument the "curses" output
+is printed without using curses, which is useful for debugging. The
+curses view looks like this:</p>
+
+<p><blockquote><pre>
+ Name Pair Bid Ask Spr Ftcd Age
+ BitcoinsNorway BTCEUR 5591.8400 5711.0800 2.1% 16 nan 60
+ Bitfinex BTCEUR 5671.0000 5671.2000 0.0% 16 22 59
+ Bitmynt BTCEUR 5580.8000 5807.5200 3.9% 16 41 60
+ Bitpay BTCEUR 5663.2700 nan nan% 15 nan 60
+ Bitstamp BTCEUR 5664.8400 5676.5300 0.2% 0 1 1
+ Bl3p BTCEUR 5653.6900 5684.9400 0.5% 0 nan 19
+ Coinbase BTCEUR 5600.8200 5714.9000 2.0% 15 nan nan
+ Kraken BTCEUR 5670.1000 5670.2000 0.0% 14 17 60
+ Paymium BTCEUR 5620.0600 5680.0000 1.1% 1 7515 nan
+ BitcoinsNorway BTCNOK 52898.9700 54034.6100 2.1% 16 nan 60
+ Bitmynt BTCNOK 52960.3200 54031.1900 2.0% 16 41 60
+ Bitpay BTCNOK 53477.7833 nan nan% 16 nan 60
+ Coinbase BTCNOK 52990.3500 54063.0600 2.0% 15 nan nan
+ MiraiEx BTCNOK 52856.5300 54100.6000 2.3% 16 nan nan
+ BitcoinsNorway BTCUSD 6495.5300 6631.5400 2.1% 16 nan 60
+ Bitfinex BTCUSD 6590.6000 6590.7000 0.0% 16 23 57
+ Bitpay BTCUSD 6564.1300 nan nan% 15 nan 60
+ Bitstamp BTCUSD 6561.1400 6565.6200 0.1% 0 2 1
+ Coinbase BTCUSD 6504.0600 6635.9700 2.0% 14 nan 117
+ Gemini BTCUSD 6567.1300 6573.0700 0.1% 16 89 nan
+ Hitbtc+BTCUSD 6592.6200 6594.2100 0.0% 0 0 0
+ Kraken BTCUSD 6565.2000 6570.9000 0.1% 15 17 58
+ Exchangerates EURNOK 9.4665 9.4665 0.0% 16 107789 nan
+ Norgesbank EURNOK 9.4665 9.4665 0.0% 16 107789 nan
+ Bitstamp EURUSD 1.1537 1.1593 0.5% 4 5 1
+ Exchangerates EURUSD 1.1576 1.1576 0.0% 16 107789 nan
+ BitcoinsNorway LTCEUR 1.0000 49.0000 98.0% 16 nan nan
+ BitcoinsNorway LTCNOK 492.4800 503.7500 2.2% 16 nan 60
+ BitcoinsNorway LTCUSD 1.0221 49.0000 97.9% 15 nan nan
+ Norgesbank USDNOK 8.1777 8.1777 0.0% 16 107789 nan
+</pre></blockquote></p>
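+
+<p>To try it yourself, something along these lines should work from a
+checkout of the valutakrambod git repository (a rough sketch; it
+assumes the directory name of the checkout and that the Python
+dependencies are already in place):</p>
+
+<p><blockquote><pre>
+cd valutakrambod
+./bin/btc-rates-curses -c
+</pre></blockquote></p>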
+
+<p>The code for this client is too complex for a simple blog post, so
+you will have to check out the git repository to figure out how it
+works. What I can explain is how the last three numbers on each line
+should be interpreted. The first is how many seconds ago information
+was received from the service. The second is how long ago, according
+to the service, the provided information was updated. The last is an
+estimate of how often the buy/sell values change.</p>
+
+<p>If you find this library useful, or would like to improve it, I
+would love to hear from you. Note that for some of the services I've
+implemented a trading API. It might be the topic of a future blog
+post.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Visualizing GSM radio chatter using gr-gsm and Hopglass</title>
- <link>http://people.skolelinux.org/pere/blog/Visualizing_GSM_radio_chatter_using_gr_gsm_and_Hopglass.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Visualizing_GSM_radio_chatter_using_gr_gsm_and_Hopglass.html</guid>
- <pubDate>Fri, 29 Sep 2017 10:30:00 +0200</pubDate>
- <description><p>Every mobile phone announce its existence over radio to the nearby
-mobile cell towers. And this radio chatter is available for anyone
-with a radio receiver capable of receiving them. Details about the
-mobile phones with very good accuracy is of course collected by the
-phone companies, but this is not the topic of this blog post. The
-mobile phone radio chatter make it possible to figure out when a cell
-phone is nearby, as it include the SIM card ID (IMSI). By paying
-attention over time, one can see when a phone arrive and when it leave
-an area. I believe it would be nice to make this information more
-available to the general public, to make more people aware of how
-their phones are announcing their whereabouts to anyone that care to
-listen.</p>
-
-<p>I am very happy to report that we managed to get something
-visualizing this information up and running for
-<a href="http://norwaymakers.org/osf17">Oslo Skaperfestival 2017</a>
-(Oslo Makers Festival) taking place today and tomorrow at Deichmanske
-library. The solution is based on the
-<a href="http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html">simple
-recipe for listening to GSM chatter</a> I posted a few days ago, and
-will show up at the stand of <a href="http://sonen.ifi.uio.no/">Åpen
-Sone from the Computer Science department of the University of
-Oslo</a>. The presentation will show the nearby mobile phones (aka
-IMSIs) as dots in a web browser graph, with lines to the dot
-representing mobile base station it is talking to. It was working in
-the lab yesterday, and was moved into place this morning.</p>
-
-<p>We set up a fairly powerful desktop machine using Debian
-Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers
-connected and visualize the visible cell phone towers using an
-<a href="https://github.com/marlow925/hopglass">English version of
-Hopglass</a>. A fairly powerfull machine is needed as the
-grgsm_livemon_headless processes from
-<a href="https://tracker.debian.org/pkg/gr-gsm">gr-gsm</a> converting
-the radio signal to data packages is quite CPU intensive.</p>
-
-<p>The frequencies to listen to, are identified using a slightly
-patched scan-and-livemon (to set the --args values for each receiver),
-and the Hopglass data is generated using the
-<a href="https://github.com/petterreinholdtsen/IMSI-catcher/tree/meshviewer-output">patches
-in my meshviewer-output branch</a>. For some reason we could not get
-more than four SDRs working. There is also a geographical map trying
-to show the location of the base stations, but I believe their
-coordinates are hardcoded to some random location in Germany, I
-believe. The code should be replaced with code to look up location in
-a text file, a sqlite database or one of the online databases
-mentioned in
-<a href="https://github.com/Oros42/IMSI-catcher/issues/14">the github
-issue for the topic</a>.
-
-<p>If this sound interesting, visit the stand at the festival!</p>
+ <title>VLC in Debian now can do bittorrent streaming</title>
+ <link>http://people.skolelinux.org/pere/blog/VLC_in_Debian_now_can_do_bittorrent_streaming.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/VLC_in_Debian_now_can_do_bittorrent_streaming.html</guid>
+ <pubDate>Mon, 24 Sep 2018 21:20:00 +0200</pubDate>
+ <description><p>Back in February, I got curious to see
+<a href="http://people.skolelinux.org/pere/blog/Using_VLC_to_stream_bittorrent_sources.html">if
+VLC now supported Bittorrent streaming</a>. It did not, despite the
+fact that the idea and code to handle such streaming had been floating
+around for years. I did however find
+<a href="https://github.com/johang/vlc-bittorrent">a standalone plugin
+for VLC</a> to do it, and half a year later I decided to wrap up the
+plugin and get it into Debian. I uploaded it to NEW a few days ago,
+and am very happy to report that it
+<a href="https://tracker.debian.org/pkg/vlc-plugin-bittorrent">entered
+Debian</a> a few hours ago, and should be available in Debian/Unstable
+tomorrow, and Debian/Testing in a few days.</p>
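+
+<p>Once the package has reached your suite, installing it should be as
+simple as this (run as root):</p>
+
+<p><blockquote><pre>
+apt install vlc-plugin-bittorrent
+</pre></blockquote></p>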
+
+<p>With the vlc-plugin-bittorrent package installed you should be able
+to stream videos using a simple call to</p>
+
+<p><blockquote><pre>
+vlc https://archive.org/download/TheGoat/TheGoat_archive.torrent
+</pre></blockquote></p>
+
+<p>It can handle magnet links too. Now if only VLC had native
+bittorrent support, a lot more people would be helping each other to
+share public domain and creative commons movies. The plugin needs
+some stability work with seeking and with picking the right file in a
+torrent with many files, but is already usable. Please note that the
+plugin does not remove downloaded files when vlc is stopped, so it can
+fill up your disk if you are not careful. Have fun. :)</p>
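+
+<p>For magnet links the call would look something like this (the hash
+below is just a placeholder, not a real torrent):</p>
+
+<p><blockquote><pre>
+vlc 'magnet:?xt=urn:btih:INFOHASH'
+</pre></blockquote></p>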
+
+<p>I would love to get help maintaining this package. Get in touch if
+you are interested.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Easier recipe to observe the cell phones around you</title>
- <link>http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html</guid>
- <pubDate>Sun, 24 Sep 2017 08:30:00 +0200</pubDate>
- <description><p>A little more than a month ago I wrote
-<a href="http://people.skolelinux.org/pere/blog/Simpler_recipe_on_how_to_make_a_simple__7_IMSI_Catcher_using_Debian.html">how
-to observe the SIM card ID (aka IMSI number) of mobile phones talking
-to nearby mobile phone base stations using Debian GNU/Linux and a
-cheap USB software defined radio</a>, and thus being able to pinpoint
-the location of people and equipment (like cars and trains) with an
-accuracy of a few kilometer. Since then we have worked to make the
-procedure even simpler, and it is now possible to do this without any
-manual frequency tuning and without building your own packages.</p>
-
-<p>The <a href="https://tracker.debian.org/pkg/gr-gsm">gr-gsm</a>
-package is now included in Debian testing and unstable, and the
-IMSI-catcher code no longer require root access to fetch and decode
-the GSM data collected using gr-gsm.</p>
-
-<p>Here is an updated recipe, using packages built by Debian and a git
-clone of two python scripts:</p>
-
-<ol>
-
-<li>Start with a Debian machine running the Buster version (aka
- testing).</li>
-
-<li>Run '<tt>apt install gr-gsm python-numpy python-scipy
- python-scapy</tt>' as root to install required packages.</li>
-
-<li>Fetch the code decoding GSM packages using '<tt>git clone
- github.com/Oros42/IMSI-catcher.git</tt>'.</li>
-
-<li>Insert USB software defined radio supported by GNU Radio.</li>
-
-<li>Enter the IMSI-catcher directory and run '<tt>python
- scan-and-livemon</tt>' to locate the frequency of nearby base
- stations and start listening for GSM packages on one of them.</li>
-
-<li>Enter the IMSI-catcher directory and run '<tt>python
- simple_IMSI-catcher.py</tt>' to display the collected information.</li>
-
-</ol>
-
-<p>Note, due to a bug somewhere the scan-and-livemon program (actually
-<a href="https://github.com/ptrkrysik/gr-gsm/issues/336">its underlying
-program grgsm_scanner</a>) do not work with the HackRF radio. It does
-work with RTL 8232 and other similar USB radio receivers you can get
-very cheaply
-(<a href="https://www.ebay.com/sch/items/?_nkw=rtl+2832">for example
-from ebay</a>), so for now the solution is to scan using the RTL radio
-and only use HackRF for fetching GSM data.</p>
-
-<p>As far as I can tell, a cell phone only show up on one of the
-frequencies at the time, so if you are going to track and count every
-cell phone around you, you need to listen to all the frequencies used.
-To listen to several frequencies, use the --numrecv argument to
-scan-and-livemon to use several receivers. Further, I am not sure if
-phones using 3G or 4G will show as talking GSM to base stations, so
-this approach might not see all phones around you. I typically see
-0-400 IMSI numbers an hour when looking around where I live.</p>
-
-<p>I've tried to run the scanner on a
-<a href="https://wiki.debian.org/RaspberryPi">Raspberry Pi 2 and 3
-running Debian Buster</a>, but the grgsm_livemon_headless process seem
-to be too CPU intensive to keep up. When GNU Radio print 'O' to
-stdout, I am told there it is caused by a buffer overflow between the
-radio and GNU Radio, caused by the program being unable to read the
-GSM data fast enough. If you see a stream of 'O's from the terminal
-where you started scan-and-livemon, you need a give the process more
-CPU power. Perhaps someone are able to optimize the code to a point
-where it become possible to set up RPi3 based GSM sniffers? I tried
-using Raspbian instead of Debian, but there seem to be something wrong
-with GNU Radio on raspbian, causing glibc to abort().</p>
+ <title>Using the Kodi API to play Youtube videos</title>
+ <link>http://people.skolelinux.org/pere/blog/Using_the_Kodi_API_to_play_Youtube_videos.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Using_the_Kodi_API_to_play_Youtube_videos.html</guid>
+ <pubDate>Sun, 2 Sep 2018 23:40:00 +0200</pubDate>
+ <description><p>I continue to explore my Kodi installation, and today I wanted to
+tell it to play a youtube URL I received in a chat, without having to
+insert search terms using the on-screen keyboard. After searching the
+web for API access to the Youtube plugin and testing a bit, I managed
+to find a recipe that worked. If you have a Kodi instance with its API
+available from http://kodihost/jsonrpc, you can try the following to
+check out a nice cover band.</p>
+
+<p><blockquote><pre>curl --silent --header 'Content-Type: application/json' \
+ --data-binary '{ "id": 1, "jsonrpc": "2.0", "method": "Player.Open",
+ "params": {"item": { "file":
+ "plugin://plugin.video.youtube/play/?video_id=LuRGVM9O0qg" } } }' \
+ http://projector.local/jsonrpc</pre></blockquote></p>
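+
+<p>The same request is easy to send from a small script too. Here is
+a minimal Python sketch doing the equivalent of the curl call above;
+the host name and video id are placeholders for your own setup:</p>
+
+<p><blockquote><pre>
+#!/usr/bin/env python3
+# Ask a Kodi instance to play a Youtube video using the JSON-RPC API.
+import json
+import urllib.request
+
+def kodi_play_youtube(host, video_id):
+    payload = {
+        "id": 1,
+        "jsonrpc": "2.0",
+        "method": "Player.Open",
+        "params": {"item": {
+            "file": "plugin://plugin.video.youtube/play/?video_id=" + video_id,
+        }},
+    }
+    req = urllib.request.Request(
+        "http://" + host + "/jsonrpc",
+        data=json.dumps(payload).encode("utf-8"),
+        headers={"Content-Type": "application/json"},
+    )
+    with urllib.request.urlopen(req) as response:
+        return json.loads(response.read().decode("utf-8"))
+
+if __name__ == "__main__":
+    print(kodi_play_youtube("kodihost", "LuRGVM9O0qg"))
+</pre></blockquote></p>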
+
+<p>I've extended the kodi-stream program to take a video source as its
+first argument. It can now handle direct video links, youtube links
+and 'desktop' to stream my desktop to Kodi. It is almost like a
+Chromecast. :)</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Datalagringsdirektivet kaster skygger over Høyre og Arbeiderpartiet</title>
- <link>http://people.skolelinux.org/pere/blog/Datalagringsdirektivet_kaster_skygger_over_H_yre_og_Arbeiderpartiet.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Datalagringsdirektivet_kaster_skygger_over_H_yre_og_Arbeiderpartiet.html</guid>
- <pubDate>Thu, 7 Sep 2017 21:35:00 +0200</pubDate>
- <description><p>For noen dager siden publiserte Jon Wessel-Aas en bloggpost om
-«<a href="http://www.uhuru.biz/?p=1821">Konklusjonen om datalagring som
-EU-kommisjonen ikke ville at vi skulle få se</a>». Det er en
-interessant gjennomgang av EU-domstolens syn på snurpenotovervåkning
-av befolkningen, som er klar på at det er i strid med
-EU-lovgivingen.</p>
-
-<p>Valgkampen går for fullt i Norge, og om noen få dager er siste
-frist for å avgi stemme. En ting er sikkert, Høyre og Arbeiderpartiet
-får ikke min stemme
-<a href="http://people.skolelinux.org/pere/blog/Datalagringsdirektivet_gj_r_at_Oslo_H_yre_og_Arbeiderparti_ikke_f_r_min_stemme_i__r.html">denne
-gangen heller</a>. Jeg har ikke glemt at de tvang igjennom loven som
-skulle pålegge alle data- og teletjenesteleverandører å overvåke alle
-sine kunder. En lov som er vedtatt, og aldri opphevet igjen.</p>
-
-<p>Det er tydelig fra diskusjonen rundt grenseløs digital overvåkning
-(eller "Digital Grenseforsvar" som det kalles i Orvellisk nytale) at
-hverken Høyre og Arbeiderpartiet har noen prinsipielle sperrer mot å
-overvåke hele befolkningen, og diskusjonen så langt tyder på at flere
-av de andre partiene heller ikke har det. Mange av
-<a href="https://data.holderdeord.no/votes/1301946411e">de som stemte
-for Datalagringsdirektivet i Stortinget</a> (64 fra Arbeiderpartiet,
-25 fra Høyre) er fortsatt aktive og argumenterer fortsatt for å radere
-vekk mer av innbyggernes privatsfære.</p>
-
-<p>Når myndighetene demonstrerer sin mistillit til folket, tror jeg
-folket selv bør legge litt innsats i å verne sitt privatliv, ved å ta
-i bruk ende-til-ende-kryptert kommunikasjon med sine kjente og kjære,
-og begrense hvor mye privat informasjon som deles med uvedkommende.
-Det er jo ingenting som tyder på at myndighetene kommer til å være vår
-privatsfære.
-<a href="http://people.skolelinux.org/pere/blog/How_to_talk_with_your_loved_ones_in_private.html">Det
-er mange muligheter</a>. Selv har jeg litt sans for
-<a href="https://ring.cx/">Ring</a>, som er basert på p2p-teknologi
-uten sentral kontroll, er fri programvare, og støtter meldinger, tale
-og video. Systemet er tilgjengelig ut av boksen fra
-<a href="https://tracker.debian.org/pkg/ring">Debian</a> og
-<a href="https://launchpad.net/ubuntu/+source/ring">Ubuntu</a>, og det
-finnes pakker for Android, MacOSX og Windows. Foreløpig er det få
-brukere med Ring, slik at jeg også bruker
-<a href="https://signal.org/">Signal</a> som nettleserutvidelse.</p>
+ <title>Software created using taxpayers’ money should be Free Software</title>
+ <link>http://people.skolelinux.org/pere/blog/Software_created_using_taxpayers__money_should_be_Free_Software.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Software_created_using_taxpayers__money_should_be_Free_Software.html</guid>
+ <pubDate>Thu, 30 Aug 2018 13:50:00 +0200</pubDate>
+ <description><p>It might seem obvious that software created using tax money should
+be available for everyone to use and improve. Free Software
+Foundation Europe recently started a campaign to help more people
+understand this, and I just signed the petition on
+<a href="https://publiccode.eu/">Public Money, Public Code</a> to help
+them. I hope you will do the same.</p>
</description>
</item>