<link>http://people.skolelinux.org/pere/blog/</link>
<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
+ <item>
+ <title>H, Ap, Frp and Venstre back DNA collection from the entire population</title>
+ <link>http://people.skolelinux.org/pere/blog/H__Ap__Frp_og_Venstre_g_r_for_DNA_innsamling_av_hele_befolkingen.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/H__Ap__Frp_og_Venstre_g_r_for_DNA_innsamling_av_hele_befolkingen.html</guid>
+ <pubDate>Wed, 14 Mar 2018 14:15:00 +0100</pubDate>
+ <description><p>Yesterday brought yet another argument for staying
+away from the Norwegian health care system. A parliamentary majority
+consisting of Høyre, Arbeiderpartiet, Fremskrittspartiet and Venstre
+announced that it wants to collect and store DNA from the entire
+population of Norway for all time. The change applies to the blood
+samples collected from newborns in Norway. It will thus take a while
+before the whole population is covered, but that is where we end up
+given enough time. Today nearly one hundred percent take part in the
+screening performed right after birth, based on the blood sample in
+question, to detect a number of congenital diseases. Today the blood
+sample is stored for up to six years.
+<a href="https://www.stortinget.no/no/Saker-og-publikasjoner/Publikasjoner/Innstillinger/Stortinget/2017-2018/inns-201718-182l/?all=true">The
+majority recommendation from the Storting</a> is to remove this time
+limit, arguing that unlimited storage will not affect participation
+in the screening.</p>
+
+<p>The Norwegian Data Protection Authority (Datatilsynet) has not
+exactly applauded the proposal:</p>
+
+<p><blockquote>
+
+  <p>«The Data Protection Authority believes the proposal does not
+  sufficiently make visible the ethical and privacy challenges that
+  must be discussed before establishing a national biobank with blood
+  samples from the entire population.»</p>
+
+</blockquote></p>
+
+<p>There are several stories of collected biological material being
+used for purposes other than those it was collected for, and the
+story of
+<a href="https://www.aftenposten.no/norge/i/Ql0WR/Na-ma-Folkehelsa-slette-uskyldiges-DNA-info">the
+Norwegian Institute of Public Health storing collected biological
+material and DNA information on behalf of the police (Kripos) in
+violation of the law</a> shows that one cannot trust laws and
+intentions to protect those affected against misuse of such private
+and personal information.</p>
+
+<p>It is worth noting that, after a change in the law a while back,
+research can be conducted on the collected blood samples without
+consent from the person concerned (or the parents in the case of
+children), unless a form reserving against research without consent
+has been submitted. The form is available from
+<a href="https://www.fhi.no/arkiv/publikasjoner/for-pasienter-skjema-for-reservasjo/">the
+web pages of the Norwegian Institute of Public Health</a>, and
+regardless of this case I warmly recommend that everyone submit the
+form, to document how many people do not find it acceptable to drop
+the requirement for consent.</p>
+
+<p>In addition, one should demand the destruction of all biological
+material collected about oneself, to reduce possible negative
+consequences in the future should the material go astray or be used
+without consent, but as far as I know there is no system for this
+today.</p>
+</description>
+ </item>
+
+ <item>
+ <title>First rough draft Norwegian and Spanish edition of the book Made with Creative Commons</title>
+ <link>http://people.skolelinux.org/pere/blog/First_rough_draft_Norwegian_and_Spanish_edition_of_the_book_Made_with_Creative_Commons.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/First_rough_draft_Norwegian_and_Spanish_edition_of_the_book_Made_with_Creative_Commons.html</guid>
+ <pubDate>Tue, 13 Mar 2018 13:00:00 +0100</pubDate>
+ <description><p>I am working on publishing yet another book related
+to Creative Commons. This time it is a book filled with interviews
+and stories from people around the globe making a living using
+Creative Commons.</p>
+
+<p>Yesterday, after many months of hard work by several volunteer
+translators, the first draft of a Norwegian Bokmål edition of the book
+<a href="https://madewith.cc">Made with Creative Commons from 2017</a>
+was complete. The Spanish translation is also complete, while the
+Dutch, Polish, German and Ukrainian editions still need a lot of
+work. Get in touch if you want to help make those happen, or would
+like to translate the book into your mother tongue.</p>
+
+<p>The whole book project started when
+<a href="http://gwolf.org/node/4102">Gunnar Wolf announced</a> that he
+was going to make a Spanish edition of the book. I noticed, and
+offered some input on how to make a book, based on my experience with
+translating the
+<a href="https://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Free
+Culture</a> and
+<a href="https://debian-handbook.info/get/#norwegian">The Debian
+Administrator's Handbook</a> books to Norwegian Bokmål. To make a
+long story short, we ended up working on a Bokmål edition, and now the
+first rough translation is complete, thanks to the hard work of
+Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first
+round of proofreading is almost done, and only the second and third
+rounds remain. We will also need to translate the 14 figures and
+create a book cover. Once that is done, we will publish the book on
+paper, as well as in PDF, ePub and possibly Mobi formats.</p>
+
+<p>The book itself originates as a manuscript on Google Docs, is
+downloaded from there as ODT and converted to Markdown using pandoc.
+The Markdown is modified by a script before it is converted to
+DocBook using pandoc. The DocBook is modified again using a script
+before it is used to create a Gettext POT file for translators. The
+translated PO file is then combined with the earlier mentioned
+DocBook file to create a translated DocBook file, which is finally
+given to dblatex to create the final PDF. The end result is a set of
+editions of the manuscript: one English, and one for each of the
+translations.</p>
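The pipeline above can be sketched as an ordered list of commands. The fix-up script names and the po4a invocations are assumptions for illustration; only pandoc and dblatex are named in the text as the actual tools.

```python
# Sketch of the book conversion pipeline described above.
# "fixup-markdown", "fixup-docbook" and the po4a steps are hypothetical
# placeholders; the project may use different scripts for those stages.

def pipeline_commands(lang="nb"):
    """Return the conversion steps for one translated edition, in order."""
    return [
        "pandoc -f odt -t markdown book.odt -o book.md",       # ODT -> Markdown
        "./fixup-markdown book.md",                            # hypothetical fix-up script
        "pandoc -f markdown -t docbook book.md -o book.xml",   # Markdown -> DocBook
        "./fixup-docbook book.xml",                            # hypothetical fix-up script
        "po4a-gettextize -f docbook -m book.xml -p book.pot",  # POT for translators
        f"po4a-translate -f docbook -m book.xml -p {lang}.po -l book.{lang}.xml",
        f"dblatex book.{lang}.xml",                            # translated DocBook -> PDF
    ]

for cmd in pipeline_commands():
    print(cmd)
```

Keeping the steps in one place like this makes it easy to rerun the whole chain whenever the Google Docs manuscript changes.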
+
+<p>The translation is conducted using
+<a href="https://hosted.weblate.org/projects/madewithcc/translation/">the
+Weblate web-based translation system</a>. Please have a look there,
+and get in touch if you would like to help out with the
+proofreading. :)</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Debian used in the subway info screens in Oslo, Norway</title>
+ <link>http://people.skolelinux.org/pere/blog/Debian_used_in_the_subway_info_screens_in_Oslo__Norway.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Debian_used_in_the_subway_info_screens_in_Oslo__Norway.html</guid>
+ <pubDate>Fri, 2 Mar 2018 13:10:00 +0100</pubDate>
+ <description><p>Today I was pleasantly surprised to discover that my
+operating system of choice, Debian, is used on the info screens at
+the subway stations. While passing Nydalen subway station in Oslo,
+Norway, I noticed an info screen booting with some text scrolling by.
+I was not quick enough with my camera to record a video of the
+scrolling boot screen, but I did get a photo from when the boot got
+stuck with a corrupt file system:</p>
+
+<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2018-03-02-ruter-debian-lenny.jpeg"><img align="center" width="40%" src="http://people.skolelinux.org/pere/blog/images/2018-03-02-ruter-debian-lenny.jpeg" alt="[photo of subway info screen]"></a></p>
+
+<p>While I am happy to see Debian used in more places, some details
+of the content on the screen worry me.</p>
+
+<p>The image shows that the version booting is 'Debian GNU/Linux
+lenny/sid', indicating that it is based on code taken from Debian
+Unstable/Sid after Debian Etch (version 4) was released 2007-04-08
+and before Debian Lenny (version 5) was released 2009-02-14. Since
+Lenny, Debian has released version 6 (Squeeze) 2011-02-06, 7 (Wheezy)
+2013-05-04, 8 (Jessie) 2015-04-25 and 9 (Stretch) 2017-06-15,
+according to
+<a href="https://en.wikipedia.org/wiki/Debian_version_history">the
+Debian version history on Wikipedia</a>. This means the system is
+running around ten year old code, with no security fixes from the
+vendor for many years.</p>
+
+<p>This is not the first time I have discovered the Oslo subway
+company, Ruter, running outdated software. In 2012,
+<a href="http://people.skolelinux.org/pere/blog/Er_billettautomatene_til_kollektivtrafikken_i_Oslo_uten_sikkerhetsoppdateringer_.html">I
+discovered the ticket vending machines were running Windows 2000</a>,
+and this was
+<a href="http://people.skolelinux.org/pere/blog/Fortsatt_ingen_sikkerhetsoppdateringer_for_billettautomatene_til_kollektivtrafikken_i_Oslo_.html">still
+the case in 2016</a>. Given the response from the responsible people
+in 2016, I would assume the machines are still running unpatched
+Windows 2000. Thus, an unpatched Debian setup comes as no
+surprise.</p>
+
+<p>The photo is made available under the license terms
+<a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons
+4.0 Attribution International (CC BY 4.0)</a>.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>The SysVinit upstream project just migrated to git</title>
+ <link>http://people.skolelinux.org/pere/blog/The_SysVinit_upstream_project_just_migrated_to_git.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/The_SysVinit_upstream_project_just_migrated_to_git.html</guid>
+ <pubDate>Sun, 18 Feb 2018 09:20:00 +0100</pubDate>
+ <description><p>Surprising as it might sound, there are still
+computers using the traditional Sys V init system, and there probably
+will be until systemd starts working on Hurd and FreeBSD.
+<a href="https://savannah.nongnu.org/projects/sysvinit">The upstream
+project still exists</a>, though, and up until today the upstream
+source was available from Savannah via Subversion. I am happy to
+report that this just changed.</p>
+
+<p>The upstream source is now in Git, and consists of three
+repositories:</p>
+
+<ul>
+
+<li><a href="http://git.savannah.nongnu.org/cgit/sysvinit.git">sysvinit</a></li>
+<li><a href="http://git.savannah.nongnu.org/cgit/sysvinit/insserv.git">insserv</a></li>
+<li><a href="http://git.savannah.nongnu.org/cgit/sysvinit/startpar.git">startpar</a></li>
+
+</ul>
+
+<p>I do not really spend much time on the project these days, and I
+have mostly retired from it, but I found it best to migrate the
+source to a good version control system to help those willing to move
+it forward.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Using VLC to stream bittorrent sources</title>
+ <link>http://people.skolelinux.org/pere/blog/Using_VLC_to_stream_bittorrent_sources.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Using_VLC_to_stream_bittorrent_sources.html</guid>
+ <pubDate>Wed, 14 Feb 2018 08:00:00 +0100</pubDate>
+ <description><p>A few days ago, a new major version of
+<a href="https://www.videolan.org/">VLC</a> was announced, and I
+decided to check whether it now supports streaming over
+<a href="http://bittorrent.org/">bittorrent</a> and
+<a href="https://webtorrent.io">webtorrent</a>. Bittorrent is one of
+the most efficient ways to distribute large files on the Internet,
+and Webtorrent is a variant of Bittorrent using
+<a href="https://webrtc.org">WebRTC</a> as its transport channel,
+allowing web pages to stream and share files using the same
+technique. The network protocols are similar but not identical, so a
+client supporting one of them cannot talk to a client supporting the
+other. I was a bit surprised by what I discovered when I started to
+look.
+Looking at
+<a href="https://www.videolan.org/vlc/releases/3.0.0.html">the release
+notes</a> did not answer the question, so I started searching the
+web. I found several news articles from 2013, most of them tracing
+the news back to Torrentfreak
+("<a href="https://torrentfreak.com/open-source-giant-vlc-mulls-bittorrent-support-130211/">Open
+Source Giant VLC Mulls BitTorrent Streaming Support</a>"), about an
+initiative to pay someone to create a VLC patch for bittorrent
+support. To figure out what happened with this initiative, I headed
+over to the #videolan IRC channel and asked if there were any bug or
+feature request tickets tracking such a feature. I got an answer from
+lead developer Jean-Baptiste Kempf, telling me that there was a patch
+but neither he nor anyone else knew where it was. So I searched a bit
+more, and came across an independent
+<a href="https://github.com/johang/vlc-bittorrent">VLC plugin to add
+bittorrent support</a>, created by Johan Gunnarsson in 2016/2017.
+Again according to Jean-Baptiste, this is not the patch he was talking
+about.</p>
+
+<p>Anyway, to test the plugin, I made a working Debian package from
+the git repository, with some modifications. After installing this
+package, I could stream videos from
+<a href="https://www.archive.org/">The Internet Archive</a> using VLC
+commands like this:</p>
+
+<p><blockquote><pre>
+vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent
+</pre></blockquote></p>
+
+<p>The plugin is supposed to handle magnet links too, but since The
+Internet Archive does not offer magnet links and I did not want to
+spend time tracking down another source, I have not tested that. It
+can take quite a while before the video starts playing, without any
+indication from VLC of what is going on. It took 10-20 seconds when I
+measured it. Sometimes the plugin seems unable to find the correct
+video file to play, and shows the metadata XML file name in the VLC
+status line instead. I have no idea why.</p>
+
+<p>I have created a <a href="https://bugs.debian.org/890360">request
+for a new package in Debian (RFP)</a> and
+<a href="https://github.com/johang/vlc-bittorrent/issues/1">asked if
+the upstream author is willing to help make this happen</a>. Now we
+wait to see what comes out of this. I do not want to maintain a
+package that is not maintained upstream, nor do I really have time to
+maintain more packages myself, so I might leave it at this. But I
+really hope someone steps up to do the packaging, and hope upstream
+is still maintaining the source. If you want to help, please update
+the RFP request or the upstream issue.</p>
+
+<p>I have not found any traces of webtorrent support for VLC.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+</description>
+ </item>
+
<item>
<title>Version 3.1 of Cura, the 3D print slicer, is now in Debian</title>
<link>http://people.skolelinux.org/pere/blog/Version_3_1_of_Cura__the_3D_print_slicer__is_now_in_Debian.html</link>
useful. It was uploaded the last few days, and the last update will
enter testing tomorrow. See the
<a href="https://ultimaker.com/en/products/cura-software/release-notes">release
-notes</a> for the list of bug fixes and new features.</p> Version 3.2
+notes</a> for the list of bug fixes and new features. Version 3.2
was announced 6 days ago. We will try to get it into Debian as
well.</p>
</description>
</item>
- <item>
- <title>Cura, the nice 3D print slicer, is now in Debian Unstable</title>
- <link>http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</guid>
- <pubDate>Sun, 17 Dec 2017 07:00:00 +0100</pubDate>
- <description><p>After several months of working and waiting, I am happy to report
-that the nice and user friendly 3D printer slicer software Cura just
-entered Debian Unstable. It consist of five packages,
-<a href="https://tracker.debian.org/pkg/cura">cura</a>,
-<a href="https://tracker.debian.org/pkg/cura-engine">cura-engine</a>,
-<a href="https://tracker.debian.org/pkg/libarcus">libarcus</a>,
-<a href="https://tracker.debian.org/pkg/fdm-materials">fdm-materials</a>,
-<a href="https://tracker.debian.org/pkg/libsavitar">libsavitar</a> and
-<a href="https://tracker.debian.org/pkg/uranium">uranium</a>. The last
-two, uranium and cura, entered Unstable yesterday. This should make
-it easier for Debian users to print on at least the Ultimaker class of
-3D printers. My nearest 3D printer is an Ultimaker 2+, so it will
-make life easier for at least me. :)</p>
-
-<p>The work to make this happen was done by Gregor Riepl, and I was
-happy to assist him in sponsoring the packages. With the introduction
-of Cura, Debian is up to three 3D printer slicers at your service,
-Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D
-printer, give it a go. :)</p>
-
-<p>The 3D printer software is maintained by the 3D printer Debian
-team, flocking together on the
-<a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/3dprinter-general">3dprinter-general</a>
-mailing list and the
-<a href="irc://irc.debian.org/#debian-3dprinting">#debian-3dprinting</a>
-IRC channel.</p>
-
-<p>The next step for Cura in Debian is to update the cura package to
-version 3.0.3 and then update the entire set of packages to version
-3.1.0 which showed up the last few days.</p>
-</description>
- </item>
-
- <item>
- <title>Idea for finding all public domain movies in the USA</title>
- <link>http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</guid>
- <pubDate>Wed, 13 Dec 2017 10:15:00 +0100</pubDate>
- <description><p>While looking at
-<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
-for the copyright renewal entries for movies published in the USA</a>,
-an idea occurred to me. The number of renewals are so few per year, it
-should be fairly quick to transcribe them all and add references to
-the corresponding IMDB title ID. This would give the (presumably)
-complete list of movies published 28 years earlier that did _not_
-enter the public domain for the transcribed year. By fetching the
-list of USA movies published 28 years earlier and subtract the movies
-with renewals, we should be left with movies registered in IMDB that
-are now in the public domain. For the year 1955 (which is the one I
-have looked at the most), the total number of pages to transcribe is
-21. For the 28 years from 1950 to 1978, it should be in the range
-500-600 pages. It is just a few days of work, and spread among a
-small group of people it should be doable in a few weeks of spare
-time.</p>
-
-<p>A typical copyright renewal entry look like this (the first one
-listed for 1955):</p>
-
-<p><blockquote>
- ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
- Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
- 10Jun55; R151558.
-</blockquote></p>
-
-<p>The movie title as well as registration and renewal dates are easy
-enough to locate by a program (split on first comma and look for
-DDmmmYY). The rest of the text is not required to find the movie in
-IMDB, but is useful to confirm the correct movie is found. I am not
-quite sure what the L and R numbers mean, but suspect they are
-reference numbers into the archive of the US Copyright Office.</p>
-
-<p>Tracking down the equivalent IMDB title ID is probably going to be
-a manual task, but given the year it is fairly easy to search for the
-movie title using for example
-<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
-Using this search, I find that the equivalent IMDB title ID for the
-first renewal entry from 1955 is
-<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p>
-
-<p>I suspect the best way to do this would be to make a specialised
-web service to make it easy for contributors to transcribe and track
-down IMDB title IDs. In the web service, once a entry is transcribed,
-the title and year could be extracted from the text, a search in IMDB
-conducted for the user to pick the equivalent IMDB title ID right
-away. By spreading out the work among volunteers, it would also be
-possible to make at least two persons transcribe the same entries to
-be able to discover any typos introduced. But I will need help to
-make this happen, as I lack the spare time to do all of this on my
-own. If you would like to help, please get in touch. Perhaps you can
-draft a web service for crowd sourcing the task?</p>
-
-<p>Note, Project Gutenberg already have some
-<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed
-copies of the US Copyright Office renewal protocols</a>, but I have
-not been able to find any film renewals there, so I suspect they only
-have copies of renewal for written works. I have not been able to find
-any transcribed versions of movie renewals so far. Perhaps they exist
-somewhere?</p>
-
-<p>I would love to figure out methods for finding all the public
-domain works in other countries too, but it is a lot harder. At least
-for Norway and Great Britain, such work involve tracking down the
-people involved in making the movie and figuring out when they died.
-It is hard enough to figure out who was part of making a movie, but I
-do not know how to automate such procedure without a registry of every
-person involved in making movies and their death year.</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-</description>
- </item>
-
- <item>
- <title>Is the short movie «Empty Socks» from 1927 in the public domain or not?</title>
- <link>http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</guid>
- <pubDate>Tue, 5 Dec 2017 12:30:00 +0100</pubDate>
- <description><p>Three years ago, a presumed lost animation film,
-<a href="https://en.wikipedia.org/wiki/Empty_Socks">Empty Socks from
-1927</a>, was discovered in the Norwegian National Library. At the
-time it was discovered, it was generally assumed to be copyrighted by
-The Walt Disney Company, and I blogged about
-<a href="http://people.skolelinux.org/pere/blog/Opphavsretts_status_for__Empty_Socks__fra_1927_.html">my
-reasoning to conclude</a> that it would would enter the Norwegian
-equivalent of the public domain in 2053, based on my understanding of
-Norwegian Copyright Law. But a few days ago, I came across
-<a href="http://www.toonzone.net/forums/threads/exposed-disneys-repurchase-of-oswald-the-rabbit-a-sham.4792291/">a
-blog post claiming the movie was already in the public domain</a>, at
-least in USA. The reasoning is as follows: The film was released in
-November or Desember 1927 (sources disagree), and presumably
-registered its copyright that year. At that time, right holders of
-movies registered by the copyright office received government
-protection for there work for 28 years. After 28 years, the copyright
-had to be renewed if the wanted the government to protect it further.
-The blog post I found claim such renewal did not happen for this
-movie, and thus it entered the public domain in 1956. Yet someone
-claim the copyright was renewed and the movie is still copyright
-protected. Can anyone help me to figure out which claim is correct?
-I have not been able to find Empty Socks in Catalog of copyright
-entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures
-<a href="http://onlinebooks.library.upenn.edu/cce/1955r.html#film">available
-from the University of Pennsylvania</a>, neither in
-<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=83;num=45">page
-45 for the first half of 1955</a>, nor in
-<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=175;num=119">page
-119 for the second half of 1955</a>. It is of course possible that
-the renewal entry was left out of the printed catalog by mistake. Is
-there some way to rule out this possibility? Please help, and update
-the wikipedia page with your findings.
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-</description>
- </item>
-
- <item>
- <title>Metadata proposal for movies on the Internet Archive</title>
- <link>http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</guid>
- <pubDate>Tue, 28 Nov 2017 12:00:00 +0100</pubDate>
- <description><p>It would be easier to locate the movie you want to watch in
-<a href="https://www.archive.org/">the Internet Archive</a>, if the
-metadata about each movie was more complete and accurate. In the
-archiving community, a well known saying state that good metadata is a
-love letter to the future. The metadata in the Internet Archive could
-use a face lift for the future to love us back. Here is a proposal
-for a small improvement that would make the metadata more useful
-today. I've been unable to find any document describing the various
-standard fields available when uploading videos to the archive, so
-this proposal is based on my best quess and searching through several
-of the existing movies.</p>
-
-<p>I have a few use cases in mind. First of all, I would like to be
-able to count the number of distinct movies in the Internet Archive,
-without duplicates. I would further like to identify the IMDB title
-ID of the movies in the Internet Archive, to be able to look up a IMDB
-title ID and know if I can fetch the video from there and share it
-with my friends.</p>
-
-<p>Second, I would like the Butter data provider for The Internet
-archive
-(<a href="https://github.com/butterproviders/butter-provider-archive">available
-from github</a>), to list as many of the good movies as possible. The
-plugin currently do a search in the archive with the following
-parameters:</p>
-
-<p><pre>
-collection:moviesandfilms
-AND NOT collection:movie_trailers
-AND -mediatype:collection
-AND format:"Archive BitTorrent"
-AND year
-</pre></p>
-
-<p>Most of the cool movies that fail to show up in Butter do so
-because the 'year' field is missing. The 'year' field is populated by
-the year part from the 'date' field, and should be when the movie was
-released (date or year). Two such examples are
-<a href="https://archive.org/details/SidneyOlcottsBen-hur1905">Ben Hur
-from 1905</a> and
-<a href="https://archive.org/details/Caminandes2GranDillama">Caminandes
-2: Gran Dillama from 2013</a>, where the year metadata field is
-missing.</p>
-
-So, my proposal is simply, for every movie in The Internet Archive
-where an IMDB title ID exist, please fill in these metadata fields
-(note, they can be updated also long after the video was uploaded, but
-as far as I can tell, only by the uploader):
-
-<dl>
-
-<dt>mediatype</dt>
-<dd>Should be 'movie' for movies.</dd>
-
-<dt>collection</dt>
-<dd>Should contain 'moviesandfilms'.</dd>
-
-<dt>title</dt>
-<dd>The title of the movie, without the publication year.</dd>
-
-<dt>date</dt>
-<dd>The data or year the movie was released. This make the movie show
-up in Butter, as well as make it possible to know the age of the
-movie and is useful to figure out copyright status.</dd>
-
-<dt>director</dt>
-<dd>The director of the movie. This make it easier to know if the
-correct movie is found in movie databases.</dd>
-
-<dt>publisher</dt>
-<dd>The production company making the movie. Also useful for
-identifying the correct movie.</dd>
-
-<dt>links</dt>
-
-<dd>Add a link to the IMDB title page, for example like this: &lt;a
-href="http://www.imdb.com/title/tt0028496/"&gt;Movie in
-IMDB&lt;/a&gt;. This make it easier to find duplicates and allow for
-counting of number of unique movies in the Archive. Other external
-references, like to TMDB, could be added like this too.</dd>
-
-</dl>
-
-<p>I did consider proposing a Custom field for the IMDB title ID (for
-example 'imdb_title_url', 'imdb_code' or simply 'imdb', but suspect it
-will be easier to simply place it in the links free text field.</p>
-
-<p>I created
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-list of IMDB title IDs for several thousand movies in the Internet
-Archive</a>, but I also got a list of several thousand movies without
-such IMDB title ID (and quite a few duplicates). It would be great if
-this data set could be integrated into the Internet Archive metadata
-to be available for everyone in the future, but with the current
-policy of leaving metadata editing to the uploaders, it will take a
-while before this happen. If you have uploaded movies into the
-Internet Archive, you can help. Please consider following my proposal
-above for your movies, to ensure that movie is properly
-counted. :)</p>
-
-<p>The list is mostly generated using wikidata, which based on
-Wikipedia articles make it possible to link between IMDB and movies in
-the Internet Archive. But there are lots of movies without a
-Wikipedia article, and some movies where only a collection page exist
-(like for <a href="https://en.wikipedia.org/wiki/Caminandes">the
-Caminandes example above</a>, where there are three movies but only
-one Wikidata entry).</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-</description>
- </item>
-
- <item>
- <title>Legal to share more than 3000 movies listed on IMDB?</title>
- <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</guid>
- <pubDate>Sat, 18 Nov 2017 21:20:00 +0100</pubDate>
- <description><p>A month ago, I blogged about my work to
-<a href="http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html">automatically
-check the copyright status of IMDB entries</a>, and try to count the
-number of movies listed in IMDB that is legal to distribute on the
-Internet. I have continued to look for good data sources, and
-identified a few more. The code used to extract information from
-various data sources is available in
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-git repository</a>, currently available from github.</p>
-
-<p>So far I have identified 3186 unique IMDB title IDs. To gain
-better understanding of the structure of the data set, I created a
-histogram of the year associated with each movie (typically release
-year). It is interesting to notice where the peaks and dips in the
-graph are located. I wonder why they are placed there. I suspect
-World War II caused the dip around 1940, but what caused the peak
-around 2010?</p>
-
-<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2017-11-18-verk-i-det-fri-filmer.png" /></p>
-
-<p>I've so far identified ten sources for IMDB title IDs for movies in
-the public domain or with a free license. This is the statistics
-reported when running 'make stats' in the git repository:</p>
-
-<pre>
- 249 entries ( 6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json
- 2301 entries ( 540 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
- 830 entries ( 29 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
- 2109 entries ( 377 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
- 291 entries ( 122 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
- 144 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-manual.json
- 350 entries ( 1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json
- 4 entries ( 0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json
- 698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
- 8 entries ( 8 unique) with and 196 without IMDB title ID in free-movies-vodo.json
- 3186 unique IMDB title IDs in total
-</pre>
-
-<p>The entries without IMDB title ID are candidates to increase the
-data set, but might equally well be duplicates of entries already
-listed with IMDB title ID in one of the other sources, or represent
-movies that lack a IMDB title ID. I've seen examples of all these
-situations when peeking at the entries without IMDB title ID. Based
-on these data sources, the lower bound for movies listed in IMDB that
-are legal to distribute on the Internet is between 3186 and 4713.
-
-<p>It would be great for improving the accuracy of this measurement,
-if the various sources added IMDB title ID to their metadata. I have
-tried to reach the people behind the various sources to ask if they
-are interested in doing this, without any replies so far. Perhaps you
-can help me get in touch with the people behind VODO, Public Domain
-Torrents, Public Domain Movies and Public Domain Review to try to
-convince them to add more metadata to their movie entries?</p>
-
-<p>Another way you could help is by adding pages to Wikipedia about
-movies that are legal to distribute on the Internet. If such page
-exist and include a link to both IMDB and The Internet Archive, the
-script used to generate free-movies-archive-org-wikidata.json should
-pick up the mapping as soon as wikidata is updates.</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-</description>
- </item>
-
</channel>
</rss>