<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
<item>
- <title>Version 3.1 of Cura, the 3D print slicer, is now in Debian</title>
- <link>http://people.skolelinux.org/pere/blog/Version_3_1_of_Cura__the_3D_print_slicer__is_now_in_Debian.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Version_3_1_of_Cura__the_3D_print_slicer__is_now_in_Debian.html</guid>
- <pubDate>Tue, 13 Feb 2018 06:20:00 +0100</pubDate>
- <description><p>A new version of the
-<a href="https://tracker.debian.org/pkg/cura">3D printer slicer
-software Cura</a>, version 3.1.0, is now available in Debian Testing
-(aka Buster) and Debian Unstable (aka Sid). I hope you find it
-useful. It was uploaded over the last few days, and the last update will
-enter testing tomorrow. See the
-<a href="https://ultimaker.com/en/products/cura-software/release-notes">release
-notes</a> for the list of bug fixes and new features. Version 3.2
-was announced 6 days ago. We will try to get it into Debian as
-well.</p>
-
-<p>More information related to 3D printing is available on the
-<a href="https://wiki.debian.org/3DPrinting">3D printing</a> and
-<a href="https://wiki.debian.org/3D-printer">3D printer</a> wiki pages
-in Debian.</p>
+ <title>Time for an official MIME type for patches?</title>
+ <link>http://people.skolelinux.org/pere/blog/Time_for_an_official_MIME_type_for_patches_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Time_for_an_official_MIME_type_for_patches_.html</guid>
+ <pubDate>Thu, 1 Nov 2018 08:15:00 +0100</pubDate>
+ <description><p>As part of my involvement in
+<a href="https://gitlab.com/OsloMet-ABI/nikita-noark5-core">the Nikita
+archive API project</a>, I've been importing a fairly large lump of
+emails into a test instance of the archive to see how well this would
+go. I picked a subset of <a href="https://notmuchmail.org/">my
+notmuch email database</a>, all public emails sent to me via
+@lists.debian.org, giving me a set of around 216 000 emails to import.
+In the process, I had a look at the various attachments included in
+these emails, to figure out what to do with them, and noticed
+that one of the most common attachment formats does not have
+<a href="https://www.iana.org/assignments/media-types/media-types.xhtml">an
+official MIME type</a> registered with IANA/IETF. The output from
+diff, i.e. the input for patch, is on the top 10 list of formats
+included in these emails. At the moment people seem to use either
+text/x-patch or text/x-diff, but neither is officially registered. It
+would be better if one official MIME type were registered and used
+everywhere.</p>
+
+<p>To try to get one official MIME type for these files, I've brought
+up the topic on
+<a href="https://www.ietf.org/mailman/listinfo/media-types">the
+media-types mailing list</a>. If you are interested in discussing
+which MIME type to use as the official one for patch files, or are involved
+in making software that uses a MIME type for patches, perhaps you would
+like to join the discussion?</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
</item>
<item>
- <title>Surveillance in China vs. Norway</title>
- <link>http://people.skolelinux.org/pere/blog/Overv_kning_i_Kina_vs__Norge.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Overv_kning_i_Kina_vs__Norge.html</guid>
- <pubDate>Mon, 12 Feb 2018 09:40:00 +0100</pubDate>
- <description><p>I am fascinated by an article
-<a href="https://www.dagbladet.no/kultur/terroristene-star-pa-dora/69436116">in
-Dagbladet about China's handling of Xinjiang</a>, in particular the
-following excerpt:</p>
-
-<blockquote>
-
-<p>«In the southwestern city of Kashgar, closer to the border with
-Central Asia, it is now reported that 120,000 Uyghurs are interned in
-so-called re-education camps. At the same time, a comprehensive
-health check programme has been introduced, collecting and storing
-DNA samples from every single inhabitant. The most advanced
-surveillance methods are being tested out here. Programs to recognise
-faces and voices are in place in the region. There the local
-authorities have started installing GPS systems in all vehicles and
-dedicated tracking apps on mobile phones.</p>
-
-<p>The police methods intrude so deeply into people's daily lives that
-resistance to the Beijing regime is growing.»</p>
-
-</blockquote>
-
-<p>Unfortunately, the description does not differ all that much from
-the state of affairs here in Norway.</p>
-
-<table>
-<tr>
-<th>Data collection</th>
-<th>China</th>
-<th>Norway</th>
-</tr>
-
-<tr>
-<td>Collection and storage of DNA samples from the population</td>
-<td>Yes</td>
-<td>Partially, planned for all newborns.</td>
-</tr>
-
-<tr>
-<td>Face recognition</td>
-<td>Yes</td>
-<td>Yes</td>
-</tr>
-
-<tr>
-<td>Voice recognition</td>
-<td>Yes</td>
-<td>No</td>
-</tr>
-
-<tr>
-<td>Location tracking of mobile phones</td>
-<td>Yes</td>
-<td>Yes</td>
-</tr>
-
-<tr>
-<td>Location tracking of cars</td>
-<td>Yes</td>
-<td>Yes</td>
-</tr>
-
-</table>
-
-<p>In Norway, the situation around the Norwegian Institute of Public
-Health's storage of DNA information on behalf of the police, where the
-institute refused to delete information the police were not allowed to
-keep, has made it clear that DNA is stored for quite a long time. In
-addition there are countless biobanks stored in perpetuity, and there
-are plans to introduce
-<a href="https://www.aftenposten.no/norge/i/75E9/4-av-10-mener-staten-bor-lagre-DNA-profiler-pa-alle-nyfodte">permanent
-storage of DNA material from all newborn babies</a> (with the option
-of requesting deletion).</p>
-
-<p>In Norway a system for face recognition is in place, which
-<a href="https://www.nrk.no/norge/kun-gardermoen-har-teknologi-for-ansiktsgjenkjenning-i-norge-1.12719461">an
-NRK article from 2015</a> reports is active at Gardermoen, and which
-<a href="https://www.dagbladet.no/nyheter/inntil-27-000-bor-i-norge-under-falsk-id/60500781">is used
-to analyse images collected by the authorities</a>. Is it used in
-more places as well? Surveillance cameras controlled by the police
-and other authorities are densely deployed in, for example, central
-Oslo.</p>
-
-<p>I am not aware of Norway having any system for identifying people
-by means of voice recognition.</p>
-
-<p>Location tracking of mobile phones is routinely available to,
-among others, the police, NAV and the Financial Supervisory Authority
-of Norway, in line with the requirements in the phone companies'
-licences. In addition, smartphones report their position to the
-developers of countless mobile apps, from whom the authorities and
-others can retrieve the information when needed. There is no need
-for a dedicated app for this.</p>
-
-<p>Location tracking of cars is routinely available via a dense
-network of measuring points along the roads (automatic toll stations,
-toll tag registration, automatic speed cameras and other road
-cameras). It has in addition been decided that all new cars must be
-sold with equipment for GPS tracking (eCall).</p>
-
-<p>It sure is good that we live in a liberal democracy, and not in a
-surveillance state. Or do we?</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-</description>
- </item>
-
- <item>
- <title>How hard can æ, ø and å be?</title>
- <link>http://people.skolelinux.org/pere/blog/How_hard_can______and___be_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_hard_can______and___be_.html</guid>
- <pubDate>Sun, 11 Feb 2018 17:10:00 +0100</pubDate>
- <description><img src="http://people.skolelinux.org/pere/blog/images/2018-02-11-peppes-unicode.jpeg" align="right"/>
-
-<p>The year is 2018, 30 years since Unicode was introduced.
-Most of us in Norway have come to expect the use of our alphabet to
-just work with any computer system. But it is apparently beyond the reach
-of the computers printing receipts at a restaurant. Recently I visited
-a Peppes pizza restaurant, and noticed a few details on the receipt.
-Notice how 'ø' and 'å' are replaced with strange symbols in
-'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi
-gleder oss til å se deg igjen'.</p>
-
-<p>I would say that this state of affairs is past sad and well into embarrassing.</p>
-
-<p>I removed personal and private information to be nice.</p>
+ <title>Measuring the speaker frequency response using the AUDMES free software GUI - nice free software</title>
+ <link>http://people.skolelinux.org/pere/blog/Measuring_the_speaker_frequency_response_using_the_AUDMES_free_software_GUI___nice_free_software.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Measuring_the_speaker_frequency_response_using_the_AUDMES_free_software_GUI___nice_free_software.html</guid>
+ <pubDate>Mon, 22 Oct 2018 08:40:00 +0200</pubDate>
+ <description><p><img src="http://people.skolelinux.org/pere/blog/images/2018-10-22-audmes-measure-speakers.png" align="right" width="40%"/></p>
+
+<p>My current home stereo is a patchwork of various pieces I got at
+flea markets over the years. It is amazing what kind of equipment
+shows up there. I've been wondering for a while if it was possible to
+measure how well this equipment is working together, and decided to
+see how far I could get using free software. After trawling the web I
+came across an article from DIY Audio and Video on
+<a href="https://www.diyaudioandvideo.com/Tutorial/SpeakerResponseTesting/">Speaker
+Testing and Analysis</a> describing how to test speakers and listing
+several software options, among them
+<a href="https://sourceforge.net/projects/audmes/">AUDio MEasurement
+System (AUDMES)</a>. It is the only free software system I could find
+focusing on measuring speakers and audio frequency response. In the
+process I also found an interesting article from NOVO on
+<a href="http://novo.press/understanding-speaker-specifications-and-frequency-response/">Understanding
+Speaker Specifications and Frequency Response</a> and an article from
+ecoustics on
+<a href="https://www.ecoustics.com/articles/understanding-speaker-frequency-response/">Understanding
+Speaker Frequency Response</a>, with a lot of information on what to
+look for and how to interpret the graphs. Armed with this knowledge,
+I set out to measure the state of my speakers.</p>
+
+<p>The first hurdle was that AUDMES hadn't seen a commit for 10 years
+and did not build with current compilers and libraries. I got in
+touch with its author, who was no longer spending time on the program
+but gave me write access to the subversion repository on Sourceforge.
+The end result is that the code now builds on Linux and is capable of
+saving and loading the collected frequency response data in CSV
+format. The application is quite nice and flexible, and I was able to
+select the input and output audio interfaces independently. This made
+it possible to use a USB mixer as the input source, while sending
+output via my laptop headphone connection. I lacked the hardware and
+cabling to figure out a different way to get independent cabling to
+speakers and microphone.</p>
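<p>At its core, a frequency response measurement like the one AUDMES
performs amounts to: play a known test tone, record it, and estimate the
recorded level at that frequency. As a rough sketch of that last step
(my own illustration, not AUDMES code), the Goertzel algorithm gives the
power at a single frequency without computing a full FFT:</p>

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Signal power at a single frequency, using the Goertzel algorithm."""
    n = len(samples)
    # Round the target frequency to the nearest analysis bin.
    k = round(n * freq / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 * s_prev2 + s_prev * s_prev - coeff * s_prev * s_prev2

def level_db(samples, sample_rate, freq, ref_power=1.0):
    """Level at freq relative to a reference power, in decibels."""
    return 10.0 * math.log10(goertzel_power(samples, sample_rate, freq) / ref_power)
```

<p>Repeating this for a sweep of test frequencies and plotting the level
in dB against frequency is essentially what the frequency response graph
shows.</p>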
+
+<p>Using this setup I could see how a large range of high frequencies
+apparently were not making it out of my speakers. The picture shows
+the frequency response measurement of one of the speakers. Note that the
+frequency lines seem to be slightly misaligned compared to the CSV
+output from the program. I cannot hear several of these high
+frequencies myself, according to a measurement from
+<a href="http://freehearingtestsoftware.com">Free Hearing Test
+Software</a>, a freeware program to measure your hearing (I am still
+looking for a free software alternative), so I do not know if they are
+coming out of the speakers. I thus do not quite know how to figure
+out if the missing frequencies are a problem with the microphone, the
+amplifier or the speakers, but I managed to rule out the audio card in my
+PC by measuring my Bose noise cancelling headset using its own
+microphone. This setup was able to pick up the high frequency tones, so
+the problem with my stereo had to be in the amplifier or speakers.</p>
+
+<p>Anyway, to try to rule out one factor I ended up picking up a new
+set of speakers at a flea market, and these work a lot better than the
+old speakers, so I guess the microphone and amplifier are OK. If you
+need to measure your own speakers, check out AUDMES. If more people
+get involved, perhaps the project could become good enough to
+<a href="https://bugs.debian.org/910876">include in Debian</a>? And if
+you know of some other free software to measure speaker and amplifier
+performance, please let me know. I am aware of the freeware option
+<a href="https://www.roomeqwizard.com/">REW</a>, but I want something
+that can be developed even after the vendor loses interest.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
</item>
<item>
- <title>Legal to share more than 11,000 movies listed on IMDB?</title>
- <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html</guid>
- <pubDate>Sun, 7 Jan 2018 23:30:00 +0100</pubDate>
- <description><p>I've continued to track down lists of movies that are legal to
-distribute on the Internet, and identified more than 11,000 title IDs
-in The Internet Movie Database (IMDB) so far. Most of them (57%) are
-feature films from the USA published before 1923. I've also tracked down
-more than 24,000 movies I have not yet been able to map to an IMDB title
-ID, so the real number could be a lot higher. According to the front
-web page for <a href="https://retrofilmvault.com/">Retro Film
-Vault</a>, there are 44,000 public domain films, so I guess there are
-still some left to identify.</p>
-
-<p>The complete data set is available from
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-public git repository</a>, including the scripts used to create it.
-Most of the data is collected using web scraping, for example from the
-"product catalog" of companies selling copies of public domain movies,
-but any source I find believable is used. I've so far had to throw
-out three sources because I did not trust the public domain status of
-the movies listed.</p>
-
-<p>Anyway, this is the summary of the 28 collected data sources so
-far:</p>
+ <title>Web browser integration of VLC with Bittorrent support</title>
+ <link>http://people.skolelinux.org/pere/blog/Web_browser_integration_of_VLC_with_Bittorrent_support.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Web_browser_integration_of_VLC_with_Bittorrent_support.html</guid>
+ <pubDate>Sun, 21 Oct 2018 09:50:00 +0200</pubDate>
+ <description><p>Bittorrent is, as far as I know, currently the most efficient way to
+distribute content on the Internet. It is used by all sorts of
+content providers, from national TV stations like
+<a href="https://www.nrk.no/">NRK</a>, via Linux distributors like
+<a href="https://www.debian.org/">Debian</a> and
+<a href="https://www.ubuntu.com/">Ubuntu</a>, to the
+<a href="https://archive.org/">Internet Archive</a>.</p>
+
+<p>Almost a month ago
+<a href="https://tracker.debian.org/pkg/vlc-plugin-bittorrent">a new
+package adding Bittorrent support to VLC</a> became available in
+Debian testing and unstable. To test it, simply install it like
+this:</p>
<p><pre>
- 2352 entries ( 66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
- 2302 entries ( 120 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
- 195 entries ( 63 unique) with and 200 without IMDB title ID in free-movies-cinemovies.json
- 89 entries ( 52 unique) with and 38 without IMDB title ID in free-movies-creative-commons.json
- 344 entries ( 28 unique) with and 655 without IMDB title ID in free-movies-fesfilm.json
- 668 entries ( 209 unique) with and 1064 without IMDB title ID in free-movies-filmchest-com.json
- 830 entries ( 21 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
- 19 entries ( 19 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
- 6822 entries ( 6669 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-us.json
- 137 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-imdb-externlist.json
- 1205 entries ( 57 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
- 84 entries ( 20 unique) with and 167 without IMDB title ID in free-movies-infodigi-pd.json
- 158 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
- 113 entries ( 4 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
- 182 entries ( 100 unique) with and 0 without IMDB title ID in free-movies-letterboxd-silent.json
- 229 entries ( 87 unique) with and 1 without IMDB title ID in free-movies-manual.json
- 44 entries ( 2 unique) with and 64 without IMDB title ID in free-movies-openflix.json
- 291 entries ( 33 unique) with and 474 without IMDB title ID in free-movies-profilms-pd.json
- 211 entries ( 7 unique) with and 0 without IMDB title ID in free-movies-publicdomainmovies-info.json
- 1232 entries ( 57 unique) with and 1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
- 46 entries ( 13 unique) with and 81 without IMDB title ID in free-movies-publicdomainreview.json
- 698 entries ( 64 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
- 1758 entries ( 882 unique) with and 3786 without IMDB title ID in free-movies-retrofilmvault.json
- 16 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-thehillproductions.json
- 63 entries ( 16 unique) with and 141 without IMDB title ID in free-movies-vodo.json
-11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID
+apt install vlc-plugin-bittorrent
</pre></p>
-<p>I keep finding more data sources. I found the cinemovies source
-just a few days ago, and as you can see from the summary, it extended
-my list with 63 movies. Check out the mklist-* scripts in the git
-repository if you are curious how the lists are created. Many of the
-titles are extracted using searches on IMDB, where I look for the
-title and year, and accept search results with only one movie listed
-if the year matches. This allows me to automatically use many lists of
-movies without IMDB title ID references, at the cost of increasing the
-risk of wrongly identifying an IMDB title ID as public domain. So far my
-random manual checks have indicated that the method is solid, but I
-really wish all lists of public domain movies would include a unique
-movie identifier like the IMDB title ID. It would make the job of
-counting movies in the public domain a lot easier.</p>
+<p>Since the plugin was first made available in Debian,
+several improvements have been made to it. In version 2.2-4, now
+available in both testing and unstable, a desktop file is provided to
+teach browsers to start VLC when the user clicks on torrent files or
+magnet links. The last part is thanks to me finally understanding
+what the strange x-scheme-handler style MIME types in desktop files
+are used for. By adding x-scheme-handler/magnet to the MimeType entry
+in the desktop file, at least the Firefox and Chromium browsers will
+offer to start VLC when selecting a magnet URI on a web page. The
+end result is that now, with the plugin installed in Buster and Sid,
+one can visit any
+<a href="https://archive.org/details/CopyingIsNotTheft1080p">Internet
+Archive page with movies</a> using a web browser and click on the
+torrent link to start streaming the movie.</p>
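<p>For reference, the mechanism involved is just the MimeType line in a
desktop file. The following is a simplified sketch of such an entry,
not the exact file shipped by the Debian package:</p>

```ini
[Desktop Entry]
Type=Application
Name=VLC media player
Exec=vlc %U
# application/x-bittorrent covers .torrent files, while the
# x-scheme-handler/magnet entry tells browsers this application
# can handle links using the magnet: URI scheme.
MimeType=application/x-bittorrent;x-scheme-handler/magnet;
```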
+
+<p>Note, there are still some misfeatures in the plugin. One is the
+fact that it will hang and
+<a href="https://github.com/johang/vlc-bittorrent/issues/13">block VLC
+from exiting until the torrent streaming starts</a>. Another is the
+fact that it
+<a href="https://github.com/johang/vlc-bittorrent/issues/9">will pick
+and play a random file in a multi-file torrent</a>, which is not
+always the video file you want. Combined with the first issue, it can be a
+bit hard to get the video streaming going. But when it works, it seems
+to do a good job.</p>
+
+<p>For the Debian packaging, I would love to find a good way to test
+whether the plugin works with VLC using autopkgtest. I tried, but do not
+know enough about the inner workings of VLC to get it working. For now
+the autopkgtest script only checks that the .so file was
+successfully loaded by VLC. If you have any suggestions, please
+submit a patch to the Debian bug tracking system.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
</item>
<item>
- <title>Comments on «Evaluation of (il)legality» for Popcorn Time</title>
- <link>http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html</guid>
- <pubDate>Wed, 20 Dec 2017 11:40:00 +0100</pubDate>
- <description><p>Yesterday I appeared in Follo district court as an expert witness
- and presented my investigations into
- <a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">counting
- film works in the public domain</a>, related to
- <a href="https://www.nuug.no/">the NUUG association</a>'s involvement in
- <a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the case
- concerning Økokrim's seizure, and later confiscation, of the DNS
- domain popcorn-time.no</a>. I talked about several things, but mostly
- about my assessment of how the film industry has measured how
- unlawful Popcorn Time is. As far as I can tell, the film industry's
- measurement was passed on unchanged by the Norwegian police, and the
- courts have relied on it when assessing Popcorn Time both in Norway
- and abroad (the figure of 99% is cited in foreign court decisions as
- well).</p>
-
-<p>Ahead of my testimony I wrote a note, mostly to myself, with the
- points I wanted to get across. Here is a copy of the note I wrote
- and handed to the prosecution. Oddly enough, the judges did not want
- the note, so if I understood the court procedure correctly, only the
- histogram graph was entered into the case documentation. The judges
- were apparently only interested in what I said in court, not what I
- had written beforehand. In any case, I assume that others besides me
- may find the text useful, and I am therefore publishing it here. I
- am attaching a transcript of document 09,13, which is the central
- document I comment on.</p>
-
-<p><strong>Comments on «Evaluation of (il)legality» for Popcorn
- Time</strong></p>
-
-<p><strong>Summary</strong></p>
-
-<p>The measurement method Økokrim relies on when claiming that 99% of
- the films available from Popcorn Time are shared unlawfully has
- weaknesses.</p>
-
-<p>Whoever assessed which films can be lawfully shared did not succeed
- in identifying films that can be shared lawfully, and apparently
- assumed that only very old films can be. Økokrim assumes there is
- only one film, the Charlie Chaplin film «The Circus» from 1928, that
- can be freely shared among those observed to be available via various
- Popcorn Time variants. I find three more among the observed films:
- «The Brain That Wouldn't Die» from 1962, «God's Little Acre» from
- 1958 and «She Wore a Yellow Ribbon» from 1949. There may well be
- more. The data set Økokrim relies on when claiming that less than
- 1% can be shared lawfully thus contains at least four times as many
- films that can be lawfully shared on the Internet.</p>
-
-<p>Furthermore, the sample, selected by searching for random words
- taken from the Dale-Chall word list, deviates from the year
- distribution of the film catalogues used as a whole, which affects
- the ratio between films that can and cannot be lawfully shared. In
- addition, picking the top part (the first five entries) of the
- search results introduces a deviation from the correct year
- distribution, which affects the share of public domain works in the
- search results.</p>
-
-<p>What is measured is not the (il)legality of using Popcorn Time,
- but the (il)legality of the content of bittorrent film catalogues
- that are maintained independently of Popcorn Time.</p>
-
-<p>Documents discussed: 09,12, <a href="#dok-09-13">09,13</a>, 09,14,
-09,18, 09,19, 09,20.</p>
-
-<p><strong>Detailed comments</strong></p>
-
-<p>Økokrim has told the courts that at least 99% of everything
- available from various Popcorn Time variants is shared unlawfully on
- the Internet. I became curious about how they arrived at this
- figure, and this note is a collection of comments on the measurement
- Økokrim refers to. Part of the reason I chose to look into the case
- is that I am interested in identifying and counting how many
- artistic works have fallen into the public domain or for other
- reasons can be lawfully shared on the Internet, and I was therefore
- interested in how the one percent that may be lawfully shared had
- been found.</p>
-
-<p>The 99% share comes from an uncredited and undated note that sets
- out to document a method for measuring how (un)lawful various
- Popcorn Time variants are.</p>
-
-<p>Briefly summarised, the method document explains that because it
- is not possible to obtain a complete list of all film titles
- available via Popcorn Time, what is meant to be a representative
- sample is created by picking 50 search words longer than three
- characters from the word list known as Dale-Chall. For each search
- word a search is performed, and the first five films in the search
- result are collected until 100 unique film titles have been found.
- If 50 search words were not enough to reach 100 unique film titles,
- more films from each search result were added. If this was still
- not enough, additional randomly chosen search words were picked and
- searched for until 100 unique film titles had been identified.</p>
-
-<p>Then, for each film title, it was «assessed whether it was
- reasonable to expect that the work was protected by copyright, by
- looking at whether the film was available in IMDB, as well as the
- director, the release year, when it was released for specific market
- regions, and which production and distribution companies were
- registered» (my translation).</p>
-
-<p>The method is reproduced in both of the uncredited documents 09,13
- and 09,19, and is described from page 47 onwards in document 09,20,
- slides dated 2017-02-01. The latter is credited to Geerart Bourlon
- of the Motion Picture Association EMEA. The method appears to have
- several weaknesses that bias the results. It starts by stating that
- it is not possible to retrieve a complete list of all available
- film titles, and that this is the reason for the choice of method.
- This premise is inconsistent with what is stated in document 09,12,
- which also lacks an author and a date. Document 09,12 explains how
- the entire catalogue content was downloaded and counted. Document
- 09,12 is possibly the same report referred to in the judgment from
- Oslo district court 2017-11-03
- (<a href="https://www.domstol.no/no/Enkelt-domstol/Oslo--tingrett/Nyheter/ma-sperre-for-popcorn-time/">case
- 17-093347TVI-OTIR/05</a>) as the report of 1 June 2017 by Alexander
- Kind Petersen, but I have not compared the documents word for word
- to verify this.</p>
-
-<p>IMDB is short for The Internet Movie Database, a well-regarded
- commercial web service used actively by the film industry and others
- to keep track of which feature films (and some other films) exist or
- are in production, along with information about these films. The
- data quality is high, with few errors and few missing films. IMDB
- does not show information about the copyright status of a film on
- its info page. As part of the IMDB service there are lists, created
- by volunteers, of films believed to be in the public domain.</p>
-
-<p>There are several sources that can be used to find films that are
- in the public domain or carry usage terms that make it lawful for
- everyone to share them on the Internet. Over the last few weeks I
- have tried to collect and cross-reference these lists in an attempt
- to count the number of films in the public domain. Starting from
- such lists (and, in the case of the Internet Archive, its published
- films), I have so far managed to identify more than 11,000 films,
- mainly feature films.</p>
-
-<p>The vast majority of the entries are taken from IMDB itself, based
- on the fact that all films made in the USA before 1923 have fallen
- into the public domain. The corresponding cut-off date for the
- United Kingdom is 1912-07-01, but this accounts for only a very
- small share of the feature films in IMDB (19 in total). Another
- large share comes from the Internet Archive, where I have identified
- films with references to IMDB. The Internet Archive, which is based
- in the USA, has a
- <a href="https://archive.org/about/terms.php">policy of only
- publishing films that are lawful to distribute</a>. During this
- work I have come across several films that have been removed from
- the Internet Archive, which leads me to conclude that the people in
- charge of the Internet Archive take an active approach to only
- hosting lawful content, even though it is largely run by volunteers.
- Another large list of films comes from the commercial company Retro
- Film Vault, which sells public domain films to the TV and film
- industry. I have also made use of lists of films claimed to be in
- the public domain, namely Public Domain Review, Public Domain
- Torrents and Public Domain Movies (.net and .info), as well as lists
- of films with Creative Commons licensing from Wikipedia, VODO and
- The Hill Productions. I have done some spot checks by assessing
- films mentioned on only one list. Where I found errors that made me
- doubt the judgement of those who compiled a list, I discarded that
- list entirely (this applies to one list from IMDB).</p>
-
-<p>Starting from works that can be assumed to be lawfully shared on
- the Internet (from, among others, the Internet Archive, Public
- Domain Torrents, Public Domain Review and Public Domain Movies), and
- linking them to entries in IMDB, I have so far managed to identify
- more than 11,000 films (mainly feature films) that there is reason
- to believe may be lawfully distributed by anyone on the Internet.
- As additional sources, lists of films assumed or claimed to be in
- the public domain have been used. These sources come from
- communities that work to make available to the general public all
- works that have fallen into the public domain or carry usage terms
- permitting sharing.</p>
-
-<p>In addition to the more than 11,000 films for which the IMDB title
- ID has been identified, I have found more than 20,000 entries for
- which I have not yet had the capacity to track down an IMDB title
- ID. Some of these are probably duplicates of the IMDB entries
- identified so far, but hardly all of them. Retro Film Vault claims
- to have 44,000 public domain film works in its catalogue, so the
- real number may be considerably higher than what I have managed to
- identify so far. The conclusion is that 11,000 is a lower bound on
- the number of films in IMDB that can be lawfully shared on the
- Internet. According to <a href="http://www.imdb.com/stats">statistics
- from IMDB</a> there are 4.6 million titles registered, of which 3
- million are TV series episodes. I have not worked out how they are
- distributed per year.</p>
-
-<p>Hvis en fordeler på år alle tittel-IDene i IMDB som hevdes å lovlig
- kunne deles på Internett, får en følgende histogram:</p>
-
-<p align="center"><img width="80%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year.png"></p>
-
-<p>One can see in the histogram that the effect of missing
- registration or renewal of registration is that many movies
- released in the USA before 1978 are in the public domain today. In
- addition, one can see that there are several movies released in
- recent years with terms of use permitting sharing, possibly due to
- the rise of the <a href="https://creativecommons.org/">Creative
- Commons</a> movement.</p>
-
-<p>For machine analysis of the catalogs, I have written a small
- program that connects to the bittorrent catalogs used by various
- Popcorn Time variants and downloads the complete list of movies in
- the catalogs, which confirms that it is possible to fetch the
- complete list of all available movie titles. I have looked at four
- bittorrent catalogs. The first is used by the client available
- from www.popcorntime.sh and is named 'sh' in this document. The
- second is, according to document 09,12, used by the client
- available from popcorntime.ag and popcorntime.sh and is named 'yts'
- in this document. The third is used by the web pages available
- from popcorntime-online.tv and is named 'apidomain' in this
- document. The fourth is used by the client available from
- popcorn-time.to according to document 09,12, and is named
- 'ukrfnlge' in this document.</p>
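The download can be sketched as follows, with the paging logic separated out so it can be exercised offline; the endpoint layout and JSON shape are illustrative assumptions, not the actual catalog APIs:

```python
import json
import urllib.request

def fetch_catalog(fetch_page, limit=50):
    """Collect all catalog entries by paging until an empty page is returned."""
    entries = []
    page = 1
    while True:
        batch = fetch_page(page, limit)
        if not batch:
            return entries
        entries.extend(batch)
        page += 1

def http_page_fetcher(baseurl):
    """Build a fetch_page function for a hypothetical JSON list endpoint."""
    def fetch_page(page, limit):
        url = "%s?limit=%d&page=%d" % (baseurl, limit, page)
        with urllib.request.urlopen(url) as response:
            return json.load(response).get("movies", [])
    return fetch_page

# Usage against a real service would look something like:
# titles = fetch_catalog(http_page_fetcher("https://example.org/api/list_movies.json"))
```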
-
-<p>The method Økokrim relies on states in its point four that
- discretion is a suitable way to find out whether a movie can be
- legally shared on the Internet or not, and says that it was
- «assessed whether it was reasonable to expect that the work was
- protected by copyright». First of all, establishing whether a
- movie is «protected by copyright» is not enough to know whether it
- is legal to share it on the Internet or not, as there are several
- movies with copyright terms of use that permit sharing on the
- Internet. Examples of this are Creative Commons-licensed movies
- such as Citizenfour from 2014 and Sintel from 2010. In addition to
- these, there are several movies that are now in the public domain
- because of missing registration or renewal of registration, even
- though the director, the production company and the distributor all
- want protection. Examples of this are Plan 9 from Outer Space from
- 1959 and Night of the Living Dead from 1968. All movies from the
- USA that were in the public domain before 1989-03-01 remained in
- the public domain, as the Berne Convention, which took effect in
- the USA at that time, was not given retroactive force. If there is
- anything
- <a href="http://www.latimes.com/local/lanow/la-me-ln-happy-birthday-song-lawsuit-decision-20150922-story.html">the
- story of the song «Happy Birthday»</a> tells us, where payment for
- use was collected for decades even though the song was not actually
- protected by copyright law, it is that each individual work must be
- evaluated carefully and in detail before one can establish whether
- the work is in the public domain or not; it is not enough to trust
- self-declared rights holders. More examples of public domain works
- misclassified as protected come from document 09,18, which lists
- search results for the client referred to as popcorntime.sh and
- which according to the memo contains only one movie (The Circus
- from 1928) that can, with some doubt, be assumed to be in the
- public domain.</p>
-
-<p>On a quick read-through of document 09,18, which contains
- screenshots from use of a Popcorn Time variant, I found mentioned
- both the movie «The Brain That Wouldn't Die» from 1962, which is
- <a href="https://archive.org/details/brain_that_wouldnt_die">available
- from the Internet Archive</a> and which
- <a href="https://en.wikipedia.org/wiki/List_of_films_in_the_public_domain_in_the_United_States">according
- to Wikipedia is in the public domain in the USA</a>, as it was
- released in 1962 without a 'copyright' notice, and the movie «God’s
- Little Acre» from 1958,
- <a href="https://en.wikipedia.org/wiki/God%27s_Little_Acre_%28film%29">which
- is described on Wikipedia</a>, where it is stated that the
- black-and-white version is in the public domain. It is not clear
- from document 09,18 whether the movie mentioned there is the
- black-and-white version. For capacity reasons, and because the
- movie list in document 09,18 is not machine readable, I have not
- tried to check all the movies listed there against the list of
- movies that presumably can be legally distributed on the
- Internet.</p>
-
-<p>On machine processing of the list of IMDB references under the
- spreadsheet tab 'Unique titles' in document 09.14, I additionally
- found the movie «She Wore a Yellow Ribbon» from 1949, which is
- probably also misclassified. The movie «She Wore a Yellow Ribbon»
- is available from the Internet Archive and marked as public domain
- there. There thus seem to be at least four times as many movies
- that can be legally shared on the Internet than what was assumed
- when claiming that at least 99% of the content is illegal. I do
- not rule out that closer examination could uncover more. The point
- in any case is that the method's criterion of «reasonable to expect
- that the work was protected by copyright» makes the method
- unreliable.</p>
-
-<p>The measurement method in question selects random search terms
- from the Dale-Chall word list. That word list contains 3000 simple
- English words that fourth graders in the USA are expected to
- understand. It is not stated why this particular word list was
- chosen, and it is unclear to me whether it is suited to obtaining a
- representative sample of movies. Many of the words give an empty
- search result. By simulating similar searches, I see large
- deviations from the distribution in the catalog for individual
- measurements. This suggests that individual measurements of 100
- movies, as the measurement method describes, are not well suited to
- finding the share of illegal content in the bittorrent
- catalogs.</p>
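The keyword selection used by the method corresponds to something like the following sketch; the miniature word list here merely stands in for the 3000-word Dale-Chall list:

```python
import random

def pick_keywords(words, count, minlen=4):
    """Randomly pick search keywords longer than 3 letters from a word list."""
    candidates = [w for w in words if len(w) >= minlen]
    return random.sample(candidates, count)

# Stand-in for the Dale-Chall list.
words = ["a", "am", "and", "about", "after", "animal",
         "basket", "bread", "candy", "corner", "dance"]
print(pick_keywords(words, count=3))
```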
-
-<p>One can counteract this large deviation for individual
- measurements by doing many searches and merging the results. I
- have tested this by carrying out 100 individual measurements
- (i.e. measuring (100x100=) 10,000 randomly selected movies), which
- gives a smaller, but still significant, deviation compared to
- counting movies per year in the entire catalog.</p>
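The effect of sample size can be illustrated with a small simulation on a synthetic catalog; the 30/70 split below is made up for illustration and does not describe any of the real catalogs:

```python
import random

def sample_share(years, n, first, last):
    """Share of a random n-title sample released between first and last."""
    picked = random.sample(years, n)
    return sum(1 for y in picked if first <= y <= last) / n

# Synthetic catalog: 30% of 10000 titles from the 1950s.
catalog = [1955] * 3000 + [1985] * 7000
singles = [sample_share(catalog, 100, 1950, 1959) for _ in range(20)]
merged = sample_share(catalog, 5000, 1950, 1959)
print("single 100-title samples:", min(singles), "-", max(singles))
print("5000-title sample:", merged)
```

Single 100-title samples scatter noticeably around the true share, while the merged sample lands much closer to it.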
-
-<p>The measurement method picks the top five entries of each search
- result. The search results are sorted by the number of bittorrent
- clients registered as sharers in the catalogs, which may bias the
- selection towards the movies that are popular among those who use
- the bittorrent catalogs, without saying anything about what content
- is available or what content is shared with Popcorn Time clients.
- I have tried to measure how large such a bias might be by instead
- comparing the distribution when taking the bottom five entries of
- each search result. The deviation between these two approaches is
- clearly visible in the histograms for several of the catalogs.
- Here are histograms of the movies found in the complete catalog
- (green line), and of the movies found by searching for words from
- Dale-Chall. Graphs labeled 'top' take the first five entries of
- each search result, while those labeled 'bottom' take the last
- five. One can see that the results are significantly affected by
- whether one looks at the first or the last movies in a search
- hit.</p>
-
-<p align="center">
- <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-top.png"/>
- <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-bottom.png"/>
- <br>
- <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-top.png"/>
- <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-bottom.png"/>
- <br>
- <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-top.png"/>
- <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-bottom.png"/>
- <br>
- <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-top.png"/>
- <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-bottom.png"/>
-</p>
-
-<p>It is worth noting that the bittorrent catalogs in question were
- not created for use with Popcorn Time. For example, the catalog
- YTS, used by the client downloaded from popcorntime.sh, belongs to
- an independent file sharing-related web site, YTS.AG, with a
- separate user community. The measurement method proposed by
- Økokrim thus does not measure the (il)legality of the use of
- Popcorn Time, but the (il)legality of the content of these
- catalogs.</p>
-
-<hr>
-
-<p id="dok-09-13">The method from Økokrim's document 09,13 in the
-criminal case about the DNS seizure.</p>
-
-<p><strong>1. Evaluation of (il)legality</strong></p>
-
-<p><strong>1.1. Methodology</strong></p>
-
-<p>Due to its technical configuration, Popcorn Time applications do
-not allow making a full list of all titles made available. In order
-to evaluate the level of illegal operation of PCT, the following
-methodology was applied:</p>
-
-<ol>
-
- <li>A random selection of 50 keywords, greater than 3 letters, was
- made from the Dale-Chall list that contains 3000 simple English
- words. The selection was made by using a Random Number
- Generator.</li>
-
- <li>For each keyword, starting with the first randomly selected
- keyword, a search query was conducted in the movie section of the
- respective Popcorn Time application. For each keyword, the first
- five results were added to the title list until the number of 100
- unique titles was reached (duplicates were removed).</li>
-
- <li>For one fork, .CH, insufficient titles were generated via this
- approach to reach 100 titles. This was solved by adding any
- additional query results above five for each of the 50 keywords.
- Since this still was not enough, another 42 random keywords were
- selected to finally reach 100 titles.</li>
-
- <li>It was verified whether or not there is a reasonable expectation
- that the work is copyrighted by checking if they are available on
- IMDb, also verifying the director, the year when the title was
- released, the release date for a certain market, the production
- company/ies of the title and the distribution company/ies.</li>
-
-</ol>
-
-<p><strong>1.2. Results</strong></p>
-
-<p>Between 6 and 9 June 2016, four forks of Popcorn Time were
-investigated: popcorn-time.to, popcorntime.ag, popcorntime.sh and
-popcorntime.ch. An excel sheet with the results is included in
-Appendix 1. Screenshots were secured in separate Appendixes for each
-respective fork, see Appendix 2-5.</p>
-
-<p>For each fork, out of 100 de-duplicated titles, it was possible to
-retrieve data according to the parameters set out above that indicate
-that the title is commercially available. Per fork, there was 1 title
-that presumably falls within the public domain, i.e. the 1928 movie
-"The Circus" by and with Charles Chaplin.</p>
-
-<p>Based on the above it is reasonable to assume that 99% of the movie
-content of each fork is copyright protected and is made available
-illegally.</p>
-
-<p>This exercise was not repeated for TV series, but considering that
-besides production companies and distribution companies also
-broadcasters may have relevant rights, it is reasonable to assume that
-at least a similar level of infringement will be established.</p>
-
-<p>Based on the above it is reasonable to assume that 99% of all the
-content of each fork is copyright protected and are made available
-illegally.</p>
+ <title>Release 0.2 of free software archive system Nikita announced</title>
+ <link>http://people.skolelinux.org/pere/blog/Release_0_2_of_free_software_archive_system_Nikita_announced.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Release_0_2_of_free_software_archive_system_Nikita_announced.html</guid>
+ <pubDate>Thu, 18 Oct 2018 14:40:00 +0200</pubDate>
+ <description><p>This morning, the new release of the
+<a href="https://gitlab.com/OsloMet-ABI/nikita-noark5-core/">Nikita
+Noark 5 core project</a> was
+<a href="https://lists.nuug.no/pipermail/nikita-noark/2018-October/000406.html">announced
+on the project mailing list</a>. The free software solution is an
+implementation of the Norwegian archive standard Noark 5 used by
+government offices in Norway. These were the changes in version 0.2
+since version 0.1.1 (from NEWS.md):</p>
+
+<ul>
+ <li>Fix typos in REL names</li>
+ <li>Tidy up error message reporting</li>
+ <li>Fix issue where we used Integer.valueOf(), not Integer.getInteger()</li>
+ <li>Change some String handling to StringBuffer</li>
+ <li>Fix error reporting</li>
+ <li>Code tidy-up</li>
+ <li>Fix issue using static non-synchronized SimpleDateFormat to avoid
+ race conditions</li>
+ <li>Fix problem where deserialisers were treating integers as strings</li>
+ <li>Update methods to make them null-safe</li>
+ <li>Fix many issues reported by coverity</li>
+ <li>Improve equals(), compareTo() and hash() in domain model</li>
+ <li>Improvements to the domain model for metadata classes</li>
+ <li>Fix CORS issues when downloading document</li>
+ <li>Implementation of case-handling with registryEntry and document upload</li>
+ <li>Better support in Javascript for OPTIONS</li>
+ <li>Adding concept description of mail integration</li>
+ <li>Improve setting of default values for GET on ny-journalpost</li>
+ <li>Better handling of required values during deserialisation </li>
+ <li>Changed tilknyttetDato (M620) from date to dateTime</li>
+ <li>Corrected some opprettetDato (M600) (de)serialisation errors.</li>
+ <li>Improve parse error reporting.</li>
+ <li>Started on OData search and filtering.</li>
+ <li>Added Contributor Covenant Code of Conduct to project.</li>
+ <li>Moved repository and project from Github to Gitlab.</li>
+ <li>Restructured repository, moved code into src/ and web/.</li>
+ <li>Updated code to use Spring Boot version 2.</li>
+ <li>Added support for OAuth2 authentication.</li>
+ <li>Fixed several bugs discovered by Coverity.</li>
+ <li>Corrected handling of date/datetime fields.</li>
+ <li>Improved error reporting when rejecting during deserialization.</li>
+ <li>Adjusted default values provided for ny-arkivdel, ny-mappe,
+ ny-saksmappe, ny-journalpost and ny-dokumentbeskrivelse.</li>
+ <li>Several fixes for korrespondansepart*.</li>
+ <li>Updated web GUI:
+ <ul>
+ <li>Now handle both file upload and download.</li>
+ <li>Uses new OAuth2 authentication for login.</li>
+ <li>Forms now fetches default values from API using GET.</li>
+ <li>Added RFC 822 (email), TIFF and JPEG to list of possible file formats.</li>
+ </ul></li>
+</ul>
+
+<p>The changes and improvements are extensive. Running diffstat on
+the changes between git tag 0.1.1 and 0.2 shows 1098 files changed,
+108666 insertions(+), 54066 deletions(-).</p>
+
+<p>If a free and open standardized archiving API sounds interesting
+to you, please contact us on IRC
+(<a href="irc://irc.freenode.net/%23nikita">#nikita on
+irc.freenode.net</a>) or email
+(<a href="https://lists.nuug.no/mailman/listinfo/nikita-noark">nikita-noark
+mailing list</a>).</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Cura, the nice 3D print slicer, is now in Debian Unstable</title>
- <link>http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</guid>
- <pubDate>Sun, 17 Dec 2017 07:00:00 +0100</pubDate>
- <description><p>After several months of working and waiting, I am happy to report
-that the nice and user friendly 3D printer slicer software Cura just
-entered Debian Unstable. It consists of six packages,
-<a href="https://tracker.debian.org/pkg/cura">cura</a>,
-<a href="https://tracker.debian.org/pkg/cura-engine">cura-engine</a>,
-<a href="https://tracker.debian.org/pkg/libarcus">libarcus</a>,
-<a href="https://tracker.debian.org/pkg/fdm-materials">fdm-materials</a>,
-<a href="https://tracker.debian.org/pkg/libsavitar">libsavitar</a> and
-<a href="https://tracker.debian.org/pkg/uranium">uranium</a>. The last
-two, uranium and cura, entered Unstable yesterday. This should make
-it easier for Debian users to print on at least the Ultimaker class of
-3D printers. My nearest 3D printer is an Ultimaker 2+, so it will
-make life easier for at least me. :)</p>
-
-<p>The work to make this happen was done by Gregor Riepl, and I was
-happy to assist him in sponsoring the packages. With the introduction
-of Cura, Debian is up to three 3D printer slicers at your service,
-Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D
-printer, give it a go. :)</p>
-
-<p>The 3D printer software is maintained by the 3D printer Debian
-team, flocking together on the
-<a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/3dprinter-general">3dprinter-general</a>
-mailing list and the
-<a href="irc://irc.debian.org/#debian-3dprinting">#debian-3dprinting</a>
-IRC channel.</p>
-
-<p>The next step for Cura in Debian is to update the cura package to
-version 3.0.3 and then update the entire set of packages to version
-3.1.0 which showed up the last few days.</p>
+ <title>Fetching trusted timestamps using the rfc3161ng python module</title>
+ <link>http://people.skolelinux.org/pere/blog/Fetching_trusted_timestamps_using_the_rfc3161ng_python_module.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Fetching_trusted_timestamps_using_the_rfc3161ng_python_module.html</guid>
+ <pubDate>Mon, 8 Oct 2018 12:30:00 +0200</pubDate>
+ <description><p>I have earlier covered the basics of trusted timestamping using the
+'openssl ts' client. See blog post for
+<a href="http://people.skolelinux.org/pere/blog/Public_Trusted_Timestamping_services_for_everyone.html">2014</a>,
+<a href="http://people.skolelinux.org/pere/blog/syslog_trusted_timestamp___chain_of_trusted_timestamps_for_your_syslog.html">2016</a>
+and
+<a href="http://people.skolelinux.org/pere/blog/Idea_for_storing_trusted_timestamps_in_a_Noark_5_archive.html">2017</a>
+for those stories. But sometimes I want to integrate the timestamping
+into other code, and recently I needed to integrate it into Python.
+After searching a bit, I found
+<a href="https://dev.entrouvert.org/projects/python-rfc3161">the
+rfc3161 library</a> which seemed like a good fit, but I soon
+discovered it only worked with Python version 2, and I needed
+something that works with Python version 3. Luckily I next came across
+<a href="https://github.com/trbs/rfc3161ng/">the rfc3161ng library</a>,
+a fork of the original rfc3161 library. Not only does it work with
+Python 3, it has also fixed a few of the bugs in the original library,
+and it has an active maintainer. I decided to wrap it up and make it
+<a href="https://tracker.debian.org/pkg/python-rfc3161ng">available in
+Debian</a>, and a few days ago it entered Debian unstable and testing.</p>
+
+<p>Using the library is fairly straightforward. The only slightly
+problematic step is fetching the required certificates to verify the
+timestamp. For some services it is straightforward, while for others
+I have not yet figured out how to do it. Here is a small standalone
+code example based on one of the integration tests in the library code:</p>
+
+<pre>
+#!/usr/bin/python3
+
+"""
+
+Python 3 script demonstrating how to use the rfc3161ng module to
+get trusted timestamps.
+
+The license of this code is the same as the license of the rfc3161ng
+library, ie MIT/BSD.
+
+"""
+
+import pyasn1.codec.der.encoder
+import rfc3161ng
+import subprocess
+import tempfile
+import urllib.request
+
+def store(f, data):
+ f.write(data)
+ f.flush()
+ f.seek(0)
+
+def fetch(url, f=None):
+ response = urllib.request.urlopen(url)
+ data = response.read()
+ if f:
+ store(f, data)
+ return data
+
+def main():
+ with tempfile.NamedTemporaryFile() as cert_f,\
+ tempfile.NamedTemporaryFile() as ca_f,\
+ tempfile.NamedTemporaryFile() as msg_f,\
+ tempfile.NamedTemporaryFile() as tsr_f:
+
+ # First fetch certificates used by service
+ certificate_data = fetch('https://freetsa.org/files/tsa.crt', cert_f)
+ ca_data = fetch('https://freetsa.org/files/cacert.pem', ca_f)
+
+ # Then timestamp the message
+ timestamper = \
+ rfc3161ng.RemoteTimestamper('http://freetsa.org/tsr',
+ certificate=certificate_data)
+ data = b"Python forever!\n"
+ tsr = timestamper(data=data, return_tsr=True)
+
+ # Finally, convert message and response to something 'openssl ts' can verify
+ store(msg_f, data)
+ store(tsr_f, pyasn1.codec.der.encoder.encode(tsr))
+ args = ["openssl", "ts", "-verify",
+ "-data", msg_f.name,
+ "-in", tsr_f.name,
+ "-CAfile", ca_f.name,
+ "-untrusted", cert_f.name]
+ subprocess.check_call(args)
+
+if '__main__' == __name__:
+ main()
+</pre>
+
+<p>The code fetches the required certificates, stores them as
+temporary files, timestamps a simple message, stores the message and
+timestamp response to disk and asks 'openssl ts' to verify the
+timestamp. A timestamp is around 1.5 kiB in size, and should be
+fairly easy to store for future use.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Idea for finding all public domain movies in the USA</title>
- <link>http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</guid>
- <pubDate>Wed, 13 Dec 2017 10:15:00 +0100</pubDate>
- <description><p>While looking at
-<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
-for the copyright renewal entries for movies published in the USA</a>,
-an idea occurred to me. The number of renewals per year is so small
-that it should be fairly quick to transcribe them all and add references to
-the corresponding IMDB title ID. This would give the (presumably)
-complete list of movies published 28 years earlier that did _not_
-enter the public domain for the transcribed year. By fetching the
-list of USA movies published 28 years earlier and subtracting the movies
-with renewals, we should be left with movies registered in IMDB that
-are now in the public domain. For the year 1955 (which is the one I
-have looked at the most), the total number of pages to transcribe is
-21. For the 28 years from 1950 to 1978, it should be in the range
-500-600 pages. It is just a few days of work, and spread among a
-small group of people it should be doable in a few weeks of spare
-time.</p>
-
-<p>A typical copyright renewal entry look like this (the first one
-listed for 1955):</p>
-
-<p><blockquote>
- ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
- Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
- 10Jun55; R151558.
-</blockquote></p>
-
-<p>The movie title as well as registration and renewal dates are easy
-enough to locate by a program (split on first comma and look for
-DDmmmYY). The rest of the text is not required to find the movie in
-IMDB, but is useful to confirm the correct movie is found. I am not
-quite sure what the L and R numbers mean, but suspect they are
-reference numbers into the archive of the US Copyright Office.</p>
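A rough sketch of that extraction; the regular expression and helper name are my own illustration, not part of any existing tool, and the century prefix assumes all records are from the 1900s:

```python
import re

# DDmmmYY dates such as 17Aug27 or 10Jun55.
DATE_RE = re.compile(r"\b(\d{1,2})([A-Z][a-z]{2})(\d{2})\b")

def parse_renewal(entry):
    """Extract the title and all DDmmmYY dates from one renewal entry."""
    title = entry.split(",", 1)[0].strip()
    # All renewal records in this corpus are 19xx dates.
    dates = ["%s %s 19%s" % m.groups() for m in DATE_RE.finditer(entry)]
    return title, dates

entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")
print(parse_renewal(entry))
```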
-
-<p>Tracking down the equivalent IMDB title ID is probably going to be
-a manual task, but given the year it is fairly easy to search for the
-movie title using for example
-<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
-Using this search, I find that the equivalent IMDB title ID for the
-first renewal entry from 1955 is
-<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p>
-
-<p>I suspect the best way to do this would be to make a specialised
-web service to make it easy for contributors to transcribe and track
-down IMDB title IDs. In the web service, once an entry is transcribed,
-the title and year could be extracted from the text, and a search in
-IMDB conducted for the user to pick the equivalent IMDB title ID right
-away. By spreading out the work among volunteers, it would also be
-possible to make at least two persons transcribe the same entries to
-be able to discover any typos introduced. But I will need help to
-make this happen, as I lack the spare time to do all of this on my
-own. If you would like to help, please get in touch. Perhaps you can
-draft a web service for crowd sourcing the task?</p>
-
-<p>Note, Project Gutenberg already has some
-<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed
-copies of the US Copyright Office renewal protocols</a>, but I have
-not been able to find any film renewals there, so I suspect they only
-have copies of renewals for written works. I have not been able to find
-any transcribed versions of movie renewals so far. Perhaps they exist
-somewhere?</p>
-
-<p>I would love to figure out methods for finding all the public
-domain works in other countries too, but it is a lot harder. At least
-for Norway and Great Britain, such work involves tracking down the
-people involved in making the movie and figuring out when they died.
-It is hard enough to figure out who was part of making a movie, but I
-do not know how to automate such a procedure without a registry of every
-person involved in making movies and their death year.</p>
+ <title>Automatic Google Drive sync using grive in Debian</title>
+ <link>http://people.skolelinux.org/pere/blog/Automatic_Google_Drive_sync_using_grive_in_Debian.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Automatic_Google_Drive_sync_using_grive_in_Debian.html</guid>
+ <pubDate>Thu, 4 Oct 2018 15:20:00 +0200</pubDate>
+<description><p>A few days ago, I rescued a Windows victim over to
+Debian. To try to rescue the remains, I helped set up automatic sync
+with Google Drive. I did not find any sensible Debian package
+handling this automatically, so I rebuilt the grive2 source from
+<a href="http://www.webupd8.org/">the Ubuntu UPD8 PPA</a> to do the
+task and added an autostart desktop entry and a small shell script to
+run in the background while the user is logged in to do the sync.
+Here is a sketch of the setup for future reference.</p>
+
+<p>I first created <tt>~/googledrive</tt>, entered the directory and
+ran '<tt>grive -a</tt>' to authenticate the machine/user. Next, I
+created an autostart hook in <tt>~/.config/autostart/grive.desktop</tt>
+to start the sync when the user logs in:</p>
+
+<p><blockquote><pre>
+[Desktop Entry]
+Name=Google drive autosync
+Type=Application
+Exec=/home/user/bin/grive-sync
+</pre></blockquote></p>
+
+<p>Finally, I wrote the <tt>~/bin/grive-sync</tt> script to sync
+~/googledrive/ with the files in Google Drive.</p>
+
+<p><blockquote><pre>
+#!/bin/sh
+set -e
+cd ~/
+cleanup() {
+ if [ "$syncpid" ] ; then
+ kill $syncpid
+ fi
+}
+trap cleanup EXIT INT QUIT
+/usr/lib/grive/grive-sync.sh listen googledrive 2>&1 | sed "s%^%$0:%" &
+syncpid=$!
+while true; do
+ if ! xhost >/dev/null 2>&1 ; then
+ echo "no DISPLAY, exiting as the user probably logged out"
+ exit 1
+ fi
+ if [ ! -e /run/user/1000/grive-sync.sh_googledrive ] ; then
+ /usr/lib/grive/grive-sync.sh sync googledrive
+ fi
+ sleep 300
+done 2>&1 | sed "s%^%$0:%"
+</pre></blockquote></p>
+
+<p>Feel free to use the setup if you want. It can be assumed to be
+GNU GPL v2 licensed (or any later version, at your leisure), but I
+doubt this code is possible to claim copyright on.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Is the short movie «Empty Socks» from 1927 in the public domain or not?</title>
- <link>http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</guid>
- <pubDate>Tue, 5 Dec 2017 12:30:00 +0100</pubDate>
- <description><p>Three years ago, a presumed lost animation film,
-<a href="https://en.wikipedia.org/wiki/Empty_Socks">Empty Socks from
-1927</a>, was discovered in the Norwegian National Library. At the
-time it was discovered, it was generally assumed to be copyrighted by
-The Walt Disney Company, and I blogged about
-<a href="http://people.skolelinux.org/pere/blog/Opphavsretts_status_for__Empty_Socks__fra_1927_.html">my
-reasoning to conclude</a> that it would would enter the Norwegian
-equivalent of the public domain in 2053, based on my understanding of
-Norwegian Copyright Law. But a few days ago, I came across
-<a href="http://www.toonzone.net/forums/threads/exposed-disneys-repurchase-of-oswald-the-rabbit-a-sham.4792291/">a
-blog post claiming the movie was already in the public domain</a>, at
-least in the USA. The reasoning is as follows: The film was released
-in November or December 1927 (sources disagree), and its copyright was
-presumably registered that year. At that time, right holders of
-movies registered by the copyright office received government
-protection for their work for 28 years. After 28 years, the copyright
-had to be renewed if they wanted the government to protect it further.
-The blog post I found claims such renewal did not happen for this
-movie, and thus it entered the public domain in 1956. Yet someone
-claims the copyright was renewed and the movie is still copyright
-protected. Can anyone help me figure out which claim is correct?
-I have not been able to find Empty Socks in Catalog of copyright
-entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures
-<a href="http://onlinebooks.library.upenn.edu/cce/1955r.html#film">available
-from the University of Pennsylvania</a>, neither in
-<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=83;num=45">page
-45 for the first half of 1955</a>, nor in
-<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=175;num=119">page
-119 for the second half of 1955</a>. It is of course possible that
-the renewal entry was left out of the printed catalog by mistake. Is
-there some way to rule out this possibility? Please help, and update
-the Wikipedia page with your findings.</p>
+ <title>Valutakrambod - A python and bitcoin love story</title>
+ <link>http://people.skolelinux.org/pere/blog/Valutakrambod___A_python_and_bitcoin_love_story.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Valutakrambod___A_python_and_bitcoin_love_story.html</guid>
+ <pubDate>Sat, 29 Sep 2018 22:20:00 +0200</pubDate>
+ <description><p>It would come as no surprise to anyone that I am interested in
+bitcoins and virtual currencies. I've been keeping an eye on virtual
+currencies for many years, and it is part of the reason why, a few
+months ago, I started writing a python library for collecting currency
+exchange rates and trading on virtual currency exchanges. I decided
+to name the end result valutakrambod, which can perhaps be translated
+as 'small currency shop'.</p>
+
+<p>The library uses the tornado python library to handle HTTP and
+websocket connections, and provides an asynchronous system for
+connecting to and tracking several services. The code is available
+from
+<a href="http://github.com/petterreinholdtsen/valutakrambod">github</a>.</p>
+
+<p>There are two example clients of the library. One is very simple
+and lists every updated buy/sell price received from the various
+services. This code is started by running bin/btc-rates and calls the
+client code in valutakrambod/client.py. The simple client looks like
+this:</p>
+
+<p><blockquote><pre>
+import functools
+import tornado.ioloop
+import valutakrambod
+
+class SimpleClient(object):
+    def __init__(self):
+        self.services = []
+        self.streams = []
+    def newdata(self, service, pair, changed):
+        # Print the service name and the latest ask/bid for the pair.
+        print("%-15s %s-%s: %8.3f %8.3f" % (
+            service.servicename(),
+            pair[0],
+            pair[1],
+            service.rates[pair]['ask'],
+            service.rates[pair]['bid'])
+        )
+    async def refresh(self, service):
+        await service.fetchRates(service.wantedpairs)
+    def run(self):
+        self.ioloop = tornado.ioloop.IOLoop.current()
+        self.services = valutakrambod.service.knownServices()
+        for e in self.services:
+            service = e()
+            service.subscribe(self.newdata)
+            stream = service.websocket()
+            if stream:
+                self.streams.append(stream)
+            else:
+                # Fetch information from non-streaming services immediately,
+                self.ioloop.call_later(len(self.services),
+                                       functools.partial(self.refresh, service))
+                # as well as regularly.
+                service.periodicUpdate(60)
+        for stream in self.streams:
+            stream.connect()
+        try:
+            self.ioloop.start()
+        except KeyboardInterrupt:
+            print("Interrupted by keyboard, closing all connections.")
+        for stream in self.streams:
+            stream.close()
+</pre></blockquote></p>
+
+<p>The library client loops over all known "public" services,
+initialises each of them, subscribes to any updates from the service,
+checks for and activates websocket streaming if the service provides
+it, and if no streaming is supported, fetches information from the
+service and sets up a periodic update every 60 seconds. The output
+from this client can look like this:</p>
+
+<p><blockquote><pre>
+Bl3p            BTC-EUR: 5687.110 5653.690
+Bl3p            BTC-EUR: 5687.110 5653.690
+Bl3p            BTC-EUR: 5687.110 5653.690
+Hitbtc          BTC-USD: 6594.560 6593.690
+Hitbtc          BTC-USD: 6594.560 6593.690
+Bl3p            BTC-EUR: 5687.110 5653.690
+Hitbtc          BTC-USD: 6594.570 6593.690
+Bitstamp        EUR-USD:    1.159    1.154
+Hitbtc          BTC-USD: 6594.570 6593.690
+Hitbtc          BTC-USD: 6594.580 6593.690
+Hitbtc          BTC-USD: 6594.580 6593.690
+Hitbtc          BTC-USD: 6594.580 6593.690
+Bl3p            BTC-EUR: 5687.110 5653.690
+Paymium         BTC-EUR: 5680.000 5620.240
+</pre></blockquote></p>
+
+<p>The exchange order book is tracked in addition to the best buy/sell
+price, for those that need to know the details.</p>
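As a sketch of what one can do with such order book data, here is how the spread and buy-side depth can be computed. Note that the dictionary layout below is purely illustrative and not the exact structure valutakrambod uses; check the git repository for the real representation.

```python
# Illustrative sketch only: assumes an order book exposed as two
# price -> volume mappings.  The real valutakrambod structures may differ.

def book_summary(bids, asks):
    """Return (best_bid, best_ask, spread, bid_depth) for an order book."""
    best_bid = max(bids)            # highest price a buyer will pay
    best_ask = min(asks)            # lowest price a seller will accept
    spread = best_ask - best_bid    # difference between the two
    bid_depth = sum(bids.values())  # total volume waiting on the buy side
    return best_bid, best_ask, spread, bid_depth

if __name__ == '__main__':
    bids = {5653.69: 0.5, 5650.00: 1.2}
    asks = {5687.11: 0.3, 5690.00: 2.0}
    print(book_summary(bids, asks))
```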
+
+<p>The other example client focuses on providing a curses view with
+updated buy/sell prices as soon as they are received from the
+services. This code is located in bin/btc-rates-curses and activated
+by using the '-c' argument. Without the argument the "curses" output
+is printed without using curses, which is useful for debugging. The
+curses view looks like this:</p>
+
+<p><blockquote><pre>
           Name Pair           Bid         Ask   Spr Ftcd    Age
 BitcoinsNorway BTCEUR   5591.8400   5711.0800  2.1%   16    nan    60
       Bitfinex BTCEUR   5671.0000   5671.2000  0.0%   16     22    59
        Bitmynt BTCEUR   5580.8000   5807.5200  3.9%   16     41    60
         Bitpay BTCEUR   5663.2700         nan  nan%   15    nan    60
       Bitstamp BTCEUR   5664.8400   5676.5300  0.2%    0      1     1
           Bl3p BTCEUR   5653.6900   5684.9400  0.5%    0    nan    19
       Coinbase BTCEUR   5600.8200   5714.9000  2.0%   15    nan   nan
         Kraken BTCEUR   5670.1000   5670.2000  0.0%   14     17    60
        Paymium BTCEUR   5620.0600   5680.0000  1.1%    1   7515   nan
 BitcoinsNorway BTCNOK  52898.9700  54034.6100  2.1%   16    nan    60
        Bitmynt BTCNOK  52960.3200  54031.1900  2.0%   16     41    60
         Bitpay BTCNOK  53477.7833         nan  nan%   16    nan    60
       Coinbase BTCNOK  52990.3500  54063.0600  2.0%   15    nan   nan
        MiraiEx BTCNOK  52856.5300  54100.6000  2.3%   16    nan   nan
 BitcoinsNorway BTCUSD   6495.5300   6631.5400  2.1%   16    nan    60
       Bitfinex BTCUSD   6590.6000   6590.7000  0.0%   16     23    57
         Bitpay BTCUSD   6564.1300         nan  nan%   15    nan    60
       Bitstamp BTCUSD   6561.1400   6565.6200  0.1%    0      2     1
       Coinbase BTCUSD   6504.0600   6635.9700  2.0%   14    nan   117
         Gemini BTCUSD   6567.1300   6573.0700  0.1%   16     89   nan
         Hitbtc BTCUSD   6592.6200   6594.2100  0.0%    0      0     0
         Kraken BTCUSD   6565.2000   6570.9000  0.1%   15     17    58
  Exchangerates EURNOK       9.4665      9.4665  0.0%   16 107789   nan
     Norgesbank EURNOK       9.4665      9.4665  0.0%   16 107789   nan
       Bitstamp EURUSD       1.1537      1.1593  0.5%    4      5     1
  Exchangerates EURUSD       1.1576      1.1576  0.0%   16 107789   nan
 BitcoinsNorway LTCEUR       1.0000     49.0000 98.0%   16    nan   nan
 BitcoinsNorway LTCNOK     492.4800    503.7500  2.2%   16    nan    60
 BitcoinsNorway LTCUSD       1.0221     49.0000 97.9%   15    nan   nan
     Norgesbank USDNOK       8.1777      8.1777  0.0%   16 107789   nan
+</pre></blockquote></p>
+
+<p>The code for this client is too complex for a simple blog post, so
+you will have to check out the git repository to figure out how it
+works. What I can tell you is how the last three numbers on each line
+should be interpreted. The first is how many seconds ago information
+was received from the service. The second is how long ago, according
+to the service, the provided information was updated. The last is an
+estimate of how often the buy/sell values change.</p>
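A sketch of how such age columns can be derived, assuming each service keeps one wall-clock timestamp for when data was last fetched and one reported by the service itself (the function and argument names here are made up for illustration, not taken from valutakrambod):

```python
import time

def age_columns(fetched_at, service_updated_at, now=None):
    """Return (seconds since we fetched, seconds since the service's
    own update timestamp).  Both inputs are Unix timestamps."""
    if now is None:
        now = time.time()
    ftcd = now - fetched_at          # how long ago we received data
    age = now - service_updated_at   # how old the service says the data is
    return ftcd, age

if __name__ == '__main__':
    # Fixed timestamps, mirroring the Bitfinex BTCEUR row above.
    print(age_columns(984.0, 978.0, now=1000.0))  # -> (16.0, 22.0)
```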
+
+<p>If you find this library useful, or would like to improve it, I
+would love to hear from you. Note that for some of the services I've
+implemented a trading API. It might be the topic of a future blog
+post.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
</item>
<item>
- <title>Metadata proposal for movies on the Internet Archive</title>
- <link>http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</guid>
- <pubDate>Tue, 28 Nov 2017 12:00:00 +0100</pubDate>
- <description><p>It would be easier to locate the movie you want to watch in
-<a href="https://www.archive.org/">the Internet Archive</a> if the
-metadata about each movie were more complete and accurate. In the
-archiving community, a well known saying states that good metadata is
-a love letter to the future. The metadata in the Internet Archive
-could use a face lift for the future to love us back. Here is a
-proposal for a small improvement that would make the metadata more
-useful today. I've been unable to find any document describing the
-various standard fields available when uploading videos to the
-archive, so this proposal is based on my best guess and on searching
-through several of the existing movies.</p>
-
-<p>I have a few use cases in mind. First of all, I would like to be
-able to count the number of distinct movies in the Internet Archive,
-without duplicates. I would further like to identify the IMDB title
-IDs of the movies in the Internet Archive, to be able to look up an
-IMDB title ID and know if I can fetch the video from there and share
-it with my friends.</p>
-
-<p>Second, I would like the Butter data provider for the Internet
-Archive
-(<a href="https://github.com/butterproviders/butter-provider-archive">available
-from github</a>) to list as many of the good movies as possible. The
-plugin currently does a search in the archive with the following
-parameters:</p>
-
-<p><pre>
-collection:moviesandfilms
-AND NOT collection:movie_trailers
-AND -mediatype:collection
-AND format:"Archive BitTorrent"
-AND year
-</pre></p>
-
-<p>Most of the cool movies that fail to show up in Butter do so
-because the 'year' field is missing. The 'year' field is populated
-from the year part of the 'date' field, which should contain the date
-or year the movie was released. Two such examples are
-<a href="https://archive.org/details/SidneyOlcottsBen-hur1905">Ben Hur
-from 1905</a> and
-<a href="https://archive.org/details/Caminandes2GranDillama">Caminandes
-2: Gran Dillama from 2013</a>, where the year metadata field is
-missing.</p>
-
-<p>So my proposal is simply this: for every movie in the Internet
-Archive where an IMDB title ID exists, please fill in these metadata
-fields (note, they can be updated long after the video was uploaded,
-but as far as I can tell, only by the uploader):</p>
-
-<dl>
-
-<dt>mediatype</dt>
-<dd>Should be 'movie' for movies.</dd>
-
-<dt>collection</dt>
-<dd>Should contain 'moviesandfilms'.</dd>
-
-<dt>title</dt>
-<dd>The title of the movie, without the publication year.</dd>
-
-<dt>date</dt>
-<dd>The date or year the movie was released. This makes the movie
-show up in Butter, makes it possible to know the age of the movie,
-and is useful for figuring out its copyright status.</dd>
-
-<dt>director</dt>
-<dd>The director of the movie. This makes it easier to know if the
-correct movie is found in movie databases.</dd>
-
-<dt>publisher</dt>
-<dd>The production company making the movie. Also useful for
-identifying the correct movie.</dd>
-
-<dt>links</dt>
-
-<dd>Add a link to the IMDB title page, for example like this: &lt;a
-href="http://www.imdb.com/title/tt0028496/"&gt;Movie in
-IMDB&lt;/a&gt;. This makes it easier to find duplicates and allows
-counting the number of unique movies in the Archive. Other external
-references, like to TMDB, could be added like this too.</dd>
-
-</dl>
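The fields above can be expressed as a simple update dictionary. The sketch below builds one; to actually apply it, one could use something like the internetarchive Python library's modify_metadata() call, which only works for items you have upload access to. The item identifier and all field values here are made-up examples, and the IMDB id is the illustrative one from the links field above.

```python
# Sketch of the proposed metadata fields as an update dict.  All values
# below are illustrative examples, not real Internet Archive items.

def movie_metadata(title, date, director, publisher, imdb_title_id):
    """Build the proposed metadata fields for one movie item."""
    return {
        'mediatype': 'movie',            # 'movie' for movies
        'collection': 'moviesandfilms',
        'title': title,                  # without the publication year
        'date': date,                    # release date or year
        'director': director,
        'publisher': publisher,
        # Free text links field; other references (TMDB, ...) fit here too.
        'links': '<a href="http://www.imdb.com/title/%s/">Movie in IMDB</a>'
                 % imdb_title_id,
    }

if __name__ == '__main__':
    md = movie_metadata('Example Movie', '1936', 'Example Director',
                        'Example Studio', 'tt0028496')
    # To push the update as the uploader (assumption: internetarchive
    # library installed and configured with your credentials):
    # from internetarchive import modify_metadata
    # modify_metadata('example-movie-item', metadata=md)
    print(md['links'])
```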
-
-<p>I did consider proposing a custom field for the IMDB title ID (for
-example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but suspect
-it will be easier to simply place it in the links free text field.</p>
-
-<p>I created
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-list of IMDB title IDs for several thousand movies in the Internet
-Archive</a>, but I also got a list of several thousand movies without
-such an IMDB title ID (and quite a few duplicates). It would be great
-if this data set could be integrated into the Internet Archive
-metadata to be available for everyone in the future, but with the
-current policy of leaving metadata editing to the uploaders, it will
-take a while before this happens. If you have uploaded movies to the
-Internet Archive, you can help. Please consider following my proposal
-above for your movies, to ensure they are properly counted. :)</p>
-
-<p>The list is mostly generated using wikidata, which, based on
-Wikipedia articles, makes it possible to link between IMDB and movies
-in the Internet Archive. But there are lots of movies without a
-Wikipedia article, and some movies where only a collection page
-exists (like for <a href="https://en.wikipedia.org/wiki/Caminandes">the
-Caminandes example above</a>, where there are three movies but only
-one Wikidata entry).</p>
+ <title>VLC in Debian now can do bittorrent streaming</title>
+ <link>http://people.skolelinux.org/pere/blog/VLC_in_Debian_now_can_do_bittorrent_streaming.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/VLC_in_Debian_now_can_do_bittorrent_streaming.html</guid>
+ <pubDate>Mon, 24 Sep 2018 21:20:00 +0200</pubDate>
+ <description><p>Back in February, I got curious to see
+<a href="http://people.skolelinux.org/pere/blog/Using_VLC_to_stream_bittorrent_sources.html">if
+VLC now supported Bittorrent streaming</a>. It did not, despite the
+fact that the idea and code to handle such streaming had been floating
+around for years. I did however find
+<a href="https://github.com/johang/vlc-bittorrent">a standalone plugin
+for VLC</a> to do it, and half a year later I decided to wrap up the
+plugin and get it into Debian. I uploaded it to NEW a few days ago,
+and am very happy to report that it
+<a href="https://tracker.debian.org/pkg/vlc-plugin-bittorrent">entered
+Debian</a> a few hours ago, and should be available in Debian/Unstable
+tomorrow, and Debian/Testing in a few days.</p>
+
+<p>With the vlc-plugin-bittorrent package installed you should be able
+to stream videos using a simple call to</p>
+
+<p><blockquote><pre>
+vlc https://archive.org/download/TheGoat/TheGoat_archive.torrent
+</pre></blockquote></p>
+
+<p>It can handle magnet links too. Now if only native vlc had
+bittorrent support. Then a lot more people would be helping each
+other to share public domain and creative commons movies. The plugin
+needs some stability work with seeking and picking the right file in
+a torrent with many files, but is already usable. Please note that
+the plugin does not remove downloaded files when vlc is stopped, so
+it can fill up your disk if you are not careful. Have fun. :)</p>
+
+<p>I would love to get help maintaining this package. Get in touch if
+you are interested.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
</item>
<item>
- <title>Legal to share more than 3000 movies listed on IMDB?</title>
- <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</guid>
- <pubDate>Sat, 18 Nov 2017 21:20:00 +0100</pubDate>
- <description><p>A month ago, I blogged about my work to
-<a href="http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html">automatically
-check the copyright status of IMDB entries</a>, and to try to count
-the number of movies listed in IMDB that are legal to distribute on
-the Internet. I have continued to look for good data sources, and
-have identified a few more. The code used to extract information
-from various data sources is available in
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-git repository</a>, currently hosted on github.</p>
-
-<p>So far I have identified 3186 unique IMDB title IDs. To gain
-better understanding of the structure of the data set, I created a
-histogram of the year associated with each movie (typically release
-year). It is interesting to notice where the peaks and dips in the
-graph are located. I wonder why they are placed there. I suspect
-World War II caused the dip around 1940, but what caused the peak
-around 2010?</p>
-
-<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2017-11-18-verk-i-det-fri-filmer.png" /></p>
-
-<p>I've so far identified ten sources of IMDB title IDs for movies in
-the public domain or with a free license. These are the statistics
-reported when running 'make stats' in the git repository:</p>
-
-<pre>
- 249 entries (   6 unique) with and  288 without IMDB title ID in free-movies-archive-org-butter.json
-2301 entries ( 540 unique) with and    0 without IMDB title ID in free-movies-archive-org-wikidata.json
- 830 entries (  29 unique) with and    0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
-2109 entries ( 377 unique) with and    0 without IMDB title ID in free-movies-imdb-pd.json
- 291 entries ( 122 unique) with and    0 without IMDB title ID in free-movies-letterboxd-pd.json
- 144 entries ( 135 unique) with and    0 without IMDB title ID in free-movies-manual.json
- 350 entries (   1 unique) with and  801 without IMDB title ID in free-movies-publicdomainmovies.json
-   4 entries (   0 unique) with and  124 without IMDB title ID in free-movies-publicdomainreview.json
- 698 entries ( 119 unique) with and  118 without IMDB title ID in free-movies-publicdomaintorrents.json
-   8 entries (   8 unique) with and  196 without IMDB title ID in free-movies-vodo.json
-3186 unique IMDB title IDs in total
-</pre>
-
-<p>The entries without IMDB title ID are candidates to increase the
-data set, but might equally well be duplicates of entries already
-listed with an IMDB title ID in one of the other sources, or
-represent movies that lack an IMDB title ID. I've seen examples of
-all these situations when peeking at the entries without IMDB title
-ID. Based on these data sources, the lower bound for movies listed
-in IMDB that are legal to distribute on the Internet is between 3186
-and 4713.</p>
-
-<p>It would improve the accuracy of this measurement if the various
-sources added IMDB title IDs to their metadata. I have tried to
-reach the people behind the various sources to ask if they are
-interested in doing this, without any replies so far. Perhaps you
-can help me get in touch with the people behind VODO, Public Domain
-Torrents, Public Domain Movies and Public Domain Review to try to
-convince them to add more metadata to their movie entries?</p>
-
-<p>Another way you could help is by adding pages to Wikipedia about
-movies that are legal to distribute on the Internet. If such a page
-exists and includes a link to both IMDB and the Internet Archive,
-the script used to generate free-movies-archive-org-wikidata.json
-should pick up the mapping as soon as Wikidata is updated.</p>
+ <title>Using the Kodi API to play Youtube videos</title>
+ <link>http://people.skolelinux.org/pere/blog/Using_the_Kodi_API_to_play_Youtube_videos.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Using_the_Kodi_API_to_play_Youtube_videos.html</guid>
+ <pubDate>Sun, 2 Sep 2018 23:40:00 +0200</pubDate>
+ <description><p>I continue to explore my Kodi installation, and today I wanted to
+tell it to play a youtube URL I received in a chat, without having to
+insert search terms using the on-screen keyboard. After searching the
+web for API access to the Youtube plugin and testing a bit, I managed
+to find a recipe that worked. If you have a Kodi instance with its API
+available from http://kodihost/jsonrpc, you can try the following to
+check out a nice cover band.</p>
+
+<p><blockquote><pre>curl --silent --header 'Content-Type: application/json' \
+ --data-binary '{ "id": 1, "jsonrpc": "2.0", "method": "Player.Open",
+ "params": {"item": { "file":
+ "plugin://plugin.video.youtube/play/?video_id=LuRGVM9O0qg" } } }' \
+ http://projector.local/jsonrpc</pre></blockquote></p>
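The same JSON-RPC request can of course be built from Python instead of curl. The sketch below only constructs and prints the payload; the actual HTTP POST is left commented out, and the hostname and video id are the ones from the curl example above.

```python
import json

def youtube_play_payload(video_id, request_id=1):
    """Build a Kodi JSON-RPC Player.Open request for the Youtube plugin."""
    return {
        'id': request_id,
        'jsonrpc': '2.0',
        'method': 'Player.Open',
        'params': {'item': {'file':
            'plugin://plugin.video.youtube/play/?video_id=%s' % video_id}},
    }

if __name__ == '__main__':
    payload = json.dumps(youtube_play_payload('LuRGVM9O0qg'))
    # To actually send it to a Kodi instance (assumption: API reachable
    # on your host, here the projector.local host from the example):
    # import urllib.request
    # req = urllib.request.Request(
    #     'http://projector.local/jsonrpc', data=payload.encode(),
    #     headers={'Content-Type': 'application/json'})
    # urllib.request.urlopen(req)
    print(payload)
```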
+
+<p>I've extended the kodi-stream program to take a video source as its
+first argument. It can now handle direct video links, youtube links
+and 'desktop' to stream my desktop to Kodi. It is almost like a
+Chromecast. :)</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
</description>
</item>
+ <item>
+ <title>Software created using taxpayers’ money should be Free Software</title>
+ <link>http://people.skolelinux.org/pere/blog/Software_created_using_taxpayers__money_should_be_Free_Software.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Software_created_using_taxpayers__money_should_be_Free_Software.html</guid>
+ <pubDate>Thu, 30 Aug 2018 13:50:00 +0200</pubDate>
+ <description><p>It might seem obvious that software created using tax money should
+be available for everyone to use and improve. The Free Software
+Foundation Europe recently started a campaign to help get more people
+to understand this, and I just signed the petition on
+<a href="https://publiccode.eu/">Public Money, Public Code</a> to help
+them. I hope you too will do the same.</p>
+</description>
+ </item>
+
</channel>
</rss>