<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
<item>
- <title>Unlimited randomness with the ChaosKey?</title>
- <link>http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Unlimited_randomness_with_the_ChaosKey_.html</guid>
- <pubDate>Wed, 1 Mar 2017 20:50:00 +0100</pubDate>
- <description><p>A few days ago I ordered a small batch of
-<a href="http://altusmetrum.org/ChaosKey/">the ChaosKey</a>, a small
-USB dongle for generating entropy created by Bdale Garbee and Keith
-Packard. Yesterday it arrived, and I am very happy to report that it
-works great! According to its designers, you need Linux kernel
-version 4.1 or later for it to work out of the box. I tested on a
-Debian Stretch machine (kernel version 4.9), and there it worked just
-fine, increasing the available entropy very quickly. I wrote a small
-test one-liner. It first prints the current entropy level, drains
-/dev/random, and then prints the entropy level once a second for five
-seconds. Here is the situation without the ChaosKey inserted:</p>
-
-<blockquote><pre>
-% cat /proc/sys/kernel/random/entropy_avail; \
- dd bs=1M if=/dev/random of=/dev/null count=1; \
- for n in $(seq 1 5); do \
- cat /proc/sys/kernel/random/entropy_avail; \
- sleep 1; \
- done
-300
-0+1 oppføringer inn
-0+1 oppføringer ut
-28 byte kopiert, 0,000264565 s, 106 kB/s
-4
-8
-12
-17
-21
-%
-</pre></blockquote>
-
-<p>The entropy level increases by 3-4 every second. In such a case,
-any application requiring random bits (like an HTTPS enabled web
-server) will halt and wait for more entropy. And here is the
-situation with the ChaosKey inserted:</p>
-
-<blockquote><pre>
-% cat /proc/sys/kernel/random/entropy_avail; \
- dd bs=1M if=/dev/random of=/dev/null count=1; \
- for n in $(seq 1 5); do \
- cat /proc/sys/kernel/random/entropy_avail; \
- sleep 1; \
- done
-1079
-0+1 oppføringer inn
-0+1 oppføringer ut
-104 byte kopiert, 0,000487647 s, 213 kB/s
-433
-1028
-1031
-1035
-1038
-%
-</pre></blockquote>
-
-<p>Quite the difference. :) I bought a few more than I need, in case
-someone wants to buy one here in Norway. :)</p>
+ <title>Surveillance in China vs. Norway</title>
+ <link>http://people.skolelinux.org/pere/blog/Overv_kning_i_Kina_vs__Norge.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Overv_kning_i_Kina_vs__Norge.html</guid>
+ <pubDate>Mon, 12 Feb 2018 09:40:00 +0100</pubDate>
+ <description><p>I am fascinated by an article
+<a href="https://www.dagbladet.no/kultur/terroristene-star-pa-dora/69436116">in
+Dagbladet about China's handling of Xinjiang</a>, in particular the
+following excerpt (my translation):</p>
+
+<p><blockquote>
+
+<p>«In the south-western city of Kashgar, closer to the border with
+Central Asia, it is now reported that 120,000 Uyghurs are interned in
+so-called re-education camps. At the same time, a comprehensive
+health check programme has been introduced, collecting and storing
+DNA samples from absolutely every inhabitant. The most advanced
+surveillance methods are tested out here. Programs for recognising
+faces and voices are in place in the region. There, the local
+authorities have started installing GPS systems in all vehicles and
+dedicated tracking apps on mobile phones.</p>
+
+<p>The police methods intrude so deeply into people's daily lives that
+resistance to the Beijing regime is growing.»</p>
+
+</blockquote></p>
+
+<p>Sadly, the description does not deviate all that much from the
+state of affairs here in Norway.</p>
+
+<table>
+<tr>
+<th>Data collection</th>
+<th>China</th>
+<th>Norway</th>
+</tr>
+
+<tr>
+<td>Collection and storage of DNA samples from the population</td>
+<td>Yes</td>
+<td>Partially; planned for all newborns.</td>
+</tr>
+
+<tr>
+<td>Face recognition</td>
+<td>Yes</td>
+<td>Yes</td>
+</tr>
+
+<tr>
+<td>Voice recognition</td>
+<td>Yes</td>
+<td>No</td>
+</tr>
+
+<tr>
+<td>Location tracking of mobile phones</td>
+<td>Yes</td>
+<td>Yes</td>
+</tr>
+
+<tr>
+<td>Location tracking of cars</td>
+<td>Yes</td>
+<td>Yes</td>
+</tr>
+
+</table>
+
+<p>In Norway, the situation around the Norwegian Institute of Public
+Health's storage of DNA information on behalf of the police, where the
+institute refused to delete information the police were not allowed to
+keep, has made it clear that DNA is stored for quite a long time. In
+addition there are countless biobanks stored indefinitely, and there
+are plans to introduce
+<a href="https://www.aftenposten.no/norge/i/75E9/4-av-10-mener-staten-bor-lagre-DNA-profiler-pa-alle-nyfodte">permanent
+storage of DNA material from every newborn baby</a> (with the option
+of requesting deletion).</p>
+
+<p>In Norway, systems for face recognition are in place, which
+<a href="https://www.nrk.no/norge/kun-gardermoen-har-teknologi-for-ansiktsgjenkjenning-i-norge-1.12719461">an
+NRK article from 2015</a> reports is active at Gardermoen, and which
+<a href="https://www.dagbladet.no/nyheter/inntil-27-000-bor-i-norge-under-falsk-id/60500781">is
+used to analyse pictures collected by the authorities</a>. Is it used
+in more places as well? Surveillance cameras controlled by the police
+and other authorities are densely placed in, for example, the centre
+of Oslo.</p>
+
+<p>I am not aware of Norway having any system for identifying people
+by means of voice recognition.</p>
+
+<p>Location tracking of mobile phones is routinely available to, among
+others, the police, NAV and the Financial Supervisory Authority of
+Norway, in line with the requirements in the telephone companies'
+licenses. In addition, smartphones report their position to the
+developers of countless mobile apps, from whom authorities and others
+can extract the information when needed. No dedicated app is needed
+for this.</p>
+
+<p>Location tracking of cars is routinely available via a dense
+network of measuring points along the roads (automatic toll stations,
+toll tag registration, automatic speed cameras and other road
+cameras). It has in addition been decided that all new cars are to be
+sold with equipment for GPS tracking (eCall).</p>
+
+<p>It sure is good that we live in a liberal democracy, and not in a
+surveillance state. Or do we?</p>
</description>
</item>
<item>
- <title>Detect OOXML files with undefined behaviour?</title>
- <link>http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detect_OOXML_files_with_undefined_behaviour_.html</guid>
- <pubDate>Tue, 21 Feb 2017 00:20:00 +0100</pubDate>
- <description><p>I just noticed that
-<a href="http://www.arkivrad.no/aktuelt/riksarkivarens-forskrift-pa-horing">the
-new Norwegian proposal for archiving rules in the government</a> lists
-<a href="http://www.ecma-international.org/publications/standards/Ecma-376.htm">ECMA-376</a>
-/ ISO/IEC 29500 (aka OOXML) as a valid format to put in long term
-storage. Luckily such files will only be accepted based on
-pre-approval from the National Archive. Allowing OOXML files to be
-used for long term storage might seem like a good idea as long as we
-forget that there are plenty of ways for a "valid" OOXML document to
-have content with no defined interpretation in the standard, which
-leads to a question and an idea.</p>
-
-<p>Is there any tool to detect if an OOXML document depends on such
-undefined behaviour? It would be useful for the National Archive (and
-anyone else interested in verifying that a document is well defined)
-to have such a tool available when considering whether to approve the
-use of OOXML. I'm aware of the
-<a href="https://github.com/arlm/officeotron/">officeotron OOXML
-validator</a>, but do not know how complete it is, nor whether it will
-report use of undefined behaviour. Are there other similar tools
-available? Please send me an email if you know of any.</p>
+ <title>How hard can æ, ø and å be?</title>
+ <link>http://people.skolelinux.org/pere/blog/How_hard_can______and___be_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/How_hard_can______and___be_.html</guid>
+ <pubDate>Sun, 11 Feb 2018 17:10:00 +0100</pubDate>
+ <description><img src="http://people.skolelinux.org/pere/blog/images/2018-02-11-peppes-unicode.jpeg" align="right"/>
+
+<p>The year is 2018, and it is 30 years since Unicode was introduced.
+Most of us in Norway have come to expect the use of our alphabet to
+just work with any computer system. But it is apparently beyond the
+reach of the computers printing receipts at restaurants. Recently I
+visited a Peppes pizza restaurant, and noticed a few details on the
+receipt. Notice how 'ø' and 'å' are replaced with strange symbols in
+'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi
+gleder oss til å se deg igjen'.</p>
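+
+<p>My guess, and it is only a guess, is the classic code page
+mismatch: UTF-8 bytes from the ordering system rendered by a printer
+expecting a legacy DOS code page. A minimal sketch in Python of what
+that would look like, assuming CP437 (the printer's actual code page
+is unknown to me):</p>
+
+<p><pre>
+# Sketch: UTF-8 bytes for 'ø' and 'å' interpreted as CP437, a code
+# page many receipt printers default to (an assumption on my part).
+for word in ("Servitør", "Beløp", "å se deg igjen"):
+    print(word, "->", word.encode("utf-8").decode("cp437"))
+# Servitør -> Servit├╕r
+</pre></p>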
+
+<p>I would say this state of affairs is past sad and well into embarrassing.</p>
+
+<p>I removed personal and private information to be nice.</p>
</description>
</item>
<item>
- <title>Ruling ignored our objections to the seizure of popcorn-time.no (#domstolkontroll)</title>
- <link>http://people.skolelinux.org/pere/blog/Ruling_ignored_our_objections_to_the_seizure_of_popcorn_time_no___domstolkontroll_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Ruling_ignored_our_objections_to_the_seizure_of_popcorn_time_no___domstolkontroll_.html</guid>
- <pubDate>Mon, 13 Feb 2017 21:30:00 +0100</pubDate>
- <description><p>A few days ago, we received the ruling from
-<a href="http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html">my
-day in court</a>. The case in question is a challenge of the seizure
-of the DNS domain popcorn-time.no. The ruling simply did not mention
-most of our arguments, and seemed to take everything ØKOKRIM said at
-face value, ignoring our demonstration and explanations. But it is
-hard to tell for sure, as we still have not seen most of the documents
-in the case and thus were unprepared and unable to contradict several
-of the claims made in court by the opposition. We are considering an
-appeal, but it is partly a question of funding, as it is costing us
-quite a bit to pay for our lawyer. If you want to help, please
-<a href="http://www.nuug.no/dns-beslag-donasjon.shtml">donate to the
-NUUG defense fund</a>.</p>
-
-<p>The details of the case, as far as we know them, are available in
-Norwegian from
-<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the NUUG
-blog</a>. This also includes
-<a href="https://www.nuug.no/news/Avslag_etter_rettslig_h_ring_om_DNS_beslaget___vurderer_veien_videre.shtml">the
-ruling itself</a>.</p>
+ <title>Legal to share more than 11,000 movies listed on IMDB?</title>
+ <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html</guid>
+ <pubDate>Sun, 7 Jan 2018 23:30:00 +0100</pubDate>
+ <description><p>I've continued to track down lists of movies that are legal to
+distribute on the Internet, and have identified more than 11,000 title
+IDs in The Internet Movie Database (IMDB) so far. Most of them (57%)
+are feature films from the USA published before 1923. I've also
+tracked down more than 24,000 movies I have not yet been able to map
+to an IMDB title ID, so the real number could be a lot higher.
+According to the front web page of
+<a href="https://retrofilmvault.com/">Retro Film Vault</a>, there are
+44,000 public domain films, so I guess there are still some left to
+identify.</p>
+
+<p>The complete data set is available from
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+public git repository</a>, including the scripts used to create it.
+Most of the data is collected using web scraping, for example from the
+"product catalog" of companies selling copies of public domain movies,
+but any source I find believable is used. I've so far had to throw
+out three sources because I did not trust the public domain status of
+the movies listed.</p>
+
+<p>Anyway, this is the summary of the 28 collected data sources so
+far:</p>
+
+<p><pre>
+ 2352 entries ( 66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
+ 2302 entries ( 120 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
+ 195 entries ( 63 unique) with and 200 without IMDB title ID in free-movies-cinemovies.json
+ 89 entries ( 52 unique) with and 38 without IMDB title ID in free-movies-creative-commons.json
+ 344 entries ( 28 unique) with and 655 without IMDB title ID in free-movies-fesfilm.json
+ 668 entries ( 209 unique) with and 1064 without IMDB title ID in free-movies-filmchest-com.json
+ 830 entries ( 21 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
+ 19 entries ( 19 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
+ 6822 entries ( 6669 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-us.json
+ 137 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-imdb-externlist.json
+ 1205 entries ( 57 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
+ 84 entries ( 20 unique) with and 167 without IMDB title ID in free-movies-infodigi-pd.json
+ 158 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
+ 113 entries ( 4 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
+ 182 entries ( 100 unique) with and 0 without IMDB title ID in free-movies-letterboxd-silent.json
+ 229 entries ( 87 unique) with and 1 without IMDB title ID in free-movies-manual.json
+ 44 entries ( 2 unique) with and 64 without IMDB title ID in free-movies-openflix.json
+ 291 entries ( 33 unique) with and 474 without IMDB title ID in free-movies-profilms-pd.json
+ 211 entries ( 7 unique) with and 0 without IMDB title ID in free-movies-publicdomainmovies-info.json
+ 1232 entries ( 57 unique) with and 1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
+ 46 entries ( 13 unique) with and 81 without IMDB title ID in free-movies-publicdomainreview.json
+ 698 entries ( 64 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
+ 1758 entries ( 882 unique) with and 3786 without IMDB title ID in free-movies-retrofilmvault.json
+ 16 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-thehillproductions.json
+ 63 entries ( 16 unique) with and 141 without IMDB title ID in free-movies-vodo.json
+11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID
+</pre></p>
+
+<p>I keep finding more data sources. I found the cinemovies source
+just a few days ago, and as you can see from the summary, it extended
+my list with 63 movies. Check out the mklist-* scripts in the git
+repository if you are curious how the lists are created. Many of the
+titles are extracted using searches on IMDB, where I look for the
+title and year, and accept search results with only one movie listed
+if the year matches. This allows me to automatically use many lists
+of movies without IMDB title ID references, at the cost of increasing
+the risk of wrongly identifying an IMDB title ID as public domain. So
+far my random manual checks have indicated that the method is solid,
+but I really wish all lists of public domain movies would include a
+unique movie identifier like the IMDB title ID. It would make the job
+of counting movies in the public domain a lot easier.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>A day in court challenging seizure of popcorn-time.no for #domstolkontroll</title>
- <link>http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/A_day_in_court_challenging_seizure_of_popcorn_time_no_for__domstolkontroll.html</guid>
- <pubDate>Fri, 3 Feb 2017 11:10:00 +0100</pubDate>
- <description><p align="center"><img width="70%" src="http://people.skolelinux.org/pere/blog/images/2017-02-01-popcorn-time-in-court.jpeg"></p>
-
-<p>On Wednesday, I spent the entire day in court in Follo Tingrett
-representing <a href="https://www.nuug.no/">the member association
-NUUG</a>, alongside <a href="https://www.efn.no/">the member
-association EFN</a> and <a href="http://www.imc.no">the DNS registrar
-IMC</a>, challenging the seizure of the DNS name popcorn-time.no. It
-was interesting to sit in a court of law for the first time in my
-life. Our team can be seen in the picture above: attorney Ola
-Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil
-Eriksen and NUUG board member Petter Reinholdtsen.</p>
-
-<p><a href="http://www.domstol.no/no/Enkelt-domstol/follo-tingrett/Nar-gar-rettssaken/Beramming/?cid=AAAA1701301512081262234UJFBVEZZZZZEJBAvtale">The
-case at hand</a> is that the Norwegian National Authority for
-Investigation and Prosecution of Economic and Environmental Crime (aka
-Økokrim) decided on their own, to seize a DNS domain early last
-year, without following
-<a href="https://www.norid.no/no/regelverk/navnepolitikk/#link12">the
-official policy of the Norwegian DNS authority</a> which require a
-court decision. The web site in question was a site covering Popcorn
-Time. And Popcorn Time is the name of a technology with both legal
-and illegal applications. Popcorn Time is a client combining
-searching a Bittorrent directory available on the Internet with
-downloading/distribute content via Bittorrent and playing the
-downloaded content on screen. It can be used illegally if it is used
-to distribute content against the will of the right holder, but it can
-also be used legally to play a lot of content, for example the
-millions of movies
-<a href="https://archive.org/details/movies">available from the
-Internet Archive</a> or the collection
-<a href="http://vodo.net/films/">available from Vodo</a>. We created
-<a href="magnet:?xt=urn:btih:86c1802af5a667ca56d3918aecb7d3c0f7173084&dn=PresentasjonFolloTingrett.mov&tr=udp%3A%2F%2Fpublic.popcorn-tracker.org%3A6969%2Fannounce">a
-video demonstrating legally use of Popcorn Time</a> and played it in
-Court. It can of course be downloaded using Bittorrent.</p>
-
-<p>I did not quite know what to expect from a day in court. The
-government held on to their version of the story and we held on to
-ours, and I hope the judge is able to make sense of it all. We will
-know in two weeks' time. Unfortunately I do not have high hopes, as
-the government has the upper hand here, with more knowledge about the
-case, better training in handling criminal law, and in general a
-higher standing in the courts than a fairly unknown DNS registrar and
-member associations. It is expensive to be right, also in Norway. So
-far the case has cost more than NOK 70 000,-. To help fund the case,
-NUUG and EFN have asked for donations, and have managed to collect
-around NOK 25 000,- so far. Given the presentation from the
-government, I expect the government to appeal if the case goes our
-way. And if the case does not go our way, I hope we have enough
-funding to appeal.</p>
-
-<p>From the other side came two people from Økokrim. On the benches,
-appearing to be part of the group from the government, were two people
-from the Simonsen Vogt Wiik law firm, and three others I am not quite
-sure about. Økokrim had proposed to present two witnesses from The
-Motion Picture Association, but this was rejected because they did not
-speak Norwegian and it was a bit late to bring in a translator, but
-perhaps the two from the MPA were present anyway. All seven appeared
-to know each other. Good to see the case is taken seriously.</p>
-
-<p>If you, like me, believe the courts should be involved before a DNS
-domain is hijacked by the government, or you believe the Popcorn Time
-technology has a lot of useful and legal applications, I suggest you
-too <a href="http://www.nuug.no/dns-beslag-donasjon.shtml">donate to
-the NUUG defense fund</a>. Both Bitcoin and bank transfer are
-available. If NUUG gets more than we need for the legal action (very
-unlikely), the rest will be spent promoting free software, open
-standards and unix-like operating systems in Norway, so no matter what
-happens the money will be put to good use.</p>
-
-<p>If you want to learn more about the case, I recommend you check out
-<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the blog
-posts from NUUG covering the case</a>. They cover the legal arguments
-on both sides.</p>
+ <title>Comments on «Evaluation of (il)legality» for Popcorn Time</title>
+ <link>http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html</guid>
+ <pubDate>Wed, 20 Dec 2017 11:40:00 +0100</pubDate>
+ <description><p>Yesterday I appeared in Follo district court as an
+ expert witness, presenting my investigations into
+ <a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">counting
+ films in the public domain</a>, related to
+ <a href="https://www.nuug.no/">the association NUUG</a>'s involvement
+ in <a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the case
+ about Økokrim's seizure and later confiscation of the DNS domain
+ popcorn-time.no</a>. I talked about several things, but mostly about
+ my assessment of how the film industry has measured how illegal
+ Popcorn Time is. As far as I can tell, the film industry's
+ measurement has been passed on unchanged by the Norwegian police, and
+ the courts have relied on the measurement when assessing Popcorn Time
+ both in Norway and abroad (the 99% figure is cited in foreign court
+ rulings as well).</p>
+
+<p>Ahead of my testimony I wrote a note, mostly to myself, with the
+ points I wanted to get across. Here is a copy of the note I wrote
+ and handed to the prosecution. Oddly enough, the judges did not want
+ the note, so if I understood the court process correctly, only the
+ histogram graph was entered into the case documentation. The judges
+ were apparently only interested in relating to what I said in court,
+ not what I had written beforehand. In any case, I assume more people
+ than me may find the text useful, and I am therefore publishing it
+ here. I attach a transcript of document 09,13, which is the central
+ document I comment on.</p>
+
+<p><strong>Comments on «Evaluation of (il)legality» for Popcorn
+ Time</strong></p>
+
+<p><strong>Summary</strong></p>
+
+<p>The measurement method Økokrim relies on when claiming that 99% of
+ the films available from Popcorn Time are shared illegally has
+ weaknesses.</p>
+
+<p>Those who assessed whether films can be legally shared did not
+ succeed in identifying films that can be shared legally, and have
+ apparently assumed that only very old films can be legally shared.
+ Økokrim takes as given that there is only one film, the Charlie
+ Chaplin film «The Circus» from 1928, that can be freely shared among
+ those observed available via various Popcorn Time variants. I find
+ three more among the observed films: «The Brain That Wouldn't Die»
+ from 1962, «God's Little Acre» from 1958 and «She Wore a Yellow
+ Ribbon» from 1949. There may well be more. The data set Økokrim
+ relies on thus contains at least four times as many films that can be
+ legally shared on the Internet as assumed when it is claimed that
+ less than 1% can be shared legally.</p>
+
+<p>Second, the sample, selected by searching for random words taken
+ from the Dale-Chall word list, deviates from the year distribution of
+ the film catalogues used as a whole, which affects the ratio between
+ films that can be legally shared and films that cannot. In addition,
+ picking the upper part (the first five) of the search results gives a
+ deviation from the correct year distribution, which affects the share
+ of public domain works in the search result.</p>
+
+<p>What is measured is not the (il)legality of the use of Popcorn
+ Time, but the (il)legality of the content of bittorrent film
+ catalogues that are maintained independently of Popcorn Time.</p>
+
+<p>Documents discussed: 09,12, <a href="#dok-09-13">09,13</a>, 09,14,
+09,18, 09,19, 09,20.</p>
+
+<p><strong>Detailed comments</strong></p>
+
+<p>Økokrim has told the courts that at least 99% of everything
+ available from various Popcorn Time variants is shared illegally on
+ the Internet. I became curious about how they arrived at this
+ figure, and this note is a collection of comments on the measurement
+ Økokrim refers to. Part of my reason for looking into the case is
+ that I am interested in identifying and counting how many artistic
+ works have fallen into the public domain or for other reasons can be
+ legally shared on the Internet, and I was therefore interested in how
+ the one percent that can perhaps be legally shared had been
+ found.</p>
+
+<p>The 99% share comes from an uncredited and undated note that sets
+ out to document a method for measuring how (il)legal various Popcorn
+ Time variants are.</p>
+
+<p>Briefly summarised, the method document explains that because it is
+ not possible to obtain a complete list of all film titles available
+ via Popcorn Time, something meant to be a representative sample is
+ created by selecting 50 search words longer than three characters
+ from the word list known as Dale-Chall. For each search word, a
+ search is performed and the first five films in the search result are
+ collected until 100 unique film titles have been found. If 50 search
+ words were not sufficient to reach 100 unique film titles, more films
+ from each search result were added. If this was still not
+ sufficient, additional randomly chosen search words were drawn and
+ searched for until 100 unique film titles had been identified.</p>
+
+<p>Then, for each of the film titles, it was «verified whether or not
+ there is a reasonable expectation that the work is copyrighted by
+ checking if they are available on IMDb, also verifying the director,
+ the year when the title was released, the release date for a certain
+ market, the production company/ies of the title and the distribution
+ company/ies».</p>
+
+<p>The method is reproduced both in the uncredited documents 09,13 and
+ 09,19, and is described from page 47 onwards in document 09,20,
+ slides dated 2017-02-01. The latter is credited to Geerart Bourlon
+ of the Motion Picture Association EMEA. The method appears to have
+ several weaknesses that bias the results. It starts by stating that
+ it is not possible to extract a complete list of all available film
+ titles, and that this is the reason for the choice of method. This
+ premise is inconsistent with document 09,12, which also lacks an
+ author and a date. Document 09,12 explains how the entire catalogue
+ content was downloaded and counted. Document 09,12 is possibly the
+ same report that a ruling from Oslo district court of 2017-11-03
+ (<a href="https://www.domstol.no/no/Enkelt-domstol/Oslo--tingrett/Nyheter/ma-sperre-for-popcorn-time/">case
+ 17-093347TVI-OTIR/05</a>) referred to as a report of 1 June 2017 by
+ Alexander Kind Petersen, but I have not compared the documents word
+ for word to verify this.</p>
+
+<p>IMDB is short for The Internet Movie Database, a renowned
+ commercial web service used actively by the film industry and others
+ to keep track of which feature films (and some other films) exist or
+ are in production, along with information about these films. The
+ data quality is high, with few errors and few missing films. IMDB
+ does not show information about the copyright status of a film on its
+ info page. As part of the IMDB service there are lists of films,
+ compiled by volunteers, enumerating what are believed to be works in
+ the public domain.</p>
+
+<p>There are several sources that can be used to find films that are
+ in the public domain or have terms of use that make it legal for
+ everyone to share them on the Internet. Over the last few weeks I
+ have tried to collect and cross-reference these lists in an attempt
+ to count the number of films in the public domain. Starting from
+ such lists (and from published films, in the case of the Internet
+ Archive), I have so far managed to identify more than 11,000 films,
+ mainly feature films.</p>
+
+<p>The vast majority of the entries are taken from IMDB itself, based
+ on the fact that all films made in the USA before 1923 have fallen
+ into the public domain. The corresponding cut-off date for Great
+ Britain is 1912-07-01, but these films make up only a very small
+ share of the feature films in IMDB (19 in total). Another large
+ share comes from the Internet Archive, where I have identified films
+ with a reference to IMDB. The Internet Archive, which is based in
+ the USA, has a <a href="https://archive.org/about/terms.php">policy
+ of only publishing films that are legal to distribute</a>. During
+ this work I have come across several films that have been removed
+ from the Internet Archive, which makes me conclude that the people
+ controlling the Internet Archive take an active approach to only
+ hosting legal content, even though it is largely run by volunteers.
+ Another large list of films comes from the commercial company Retro
+ Film Vault, which sells public domain films to the TV and film
+ industry. I have also made use of lists of films claimed to be in
+ the public domain, namely Public Domain Review, Public Domain
+ Torrents and Public Domain Movies (.net and .info), as well as lists
+ of films with Creative Commons licensing from Wikipedia, VODO and The
+ Hill Productions. I have done some spot checks by assessing films
+ that are only mentioned in one list. Where I found errors that made
+ me doubt the judgement of those who compiled the list, I discarded
+ the list completely (this applies to one list from IMDB).</p>
+
+<p>Starting from works that can be assumed to be legally shared on the
+ Internet (from, among others, the Internet Archive, Public Domain
+ Torrents, Public Domain Review and Public Domain Movies), and linking
+ them to entries in IMDB, I have so far managed to identify more than
+ 11,000 films (mainly feature films) that there is reason to believe
+ can be legally distributed by everyone on the Internet. As extra
+ sources, lists of films assumed or claimed to be in the public domain
+ have been used. These sources come from communities working to make
+ all works that have fallen into the public domain, or have terms of
+ use permitting sharing, available to the general public.</p>
+
+<p>In addition to the more than 11,000 films whose IMDB title ID has
+ been identified, I have found more than 20,000 entries where I have
+ not yet had the capacity to track down the IMDB title ID. Some of
+ these are probably duplicates of the IMDB entries identified so far,
+ but hardly all of them. Retro Film Vault claims to have 44,000
+ public domain film works in its catalogue, so the real number may be
+ considerably higher than what I have managed to identify so far. The
+ conclusion is that 11,000 is a lower bound on how many films in IMDB
+ can be legally shared on the Internet. According to
+ <a href="http://www.imdb.com/stats">statistics from IMDB</a>, 4.6
+ million titles are registered, of which 3 million are TV series
+ episodes. I have not worked out how they are distributed per
+ year.</p>
+
+<p>Distributing by year all the title IDs in IMDB that are claimed to
+ be legally shareable on the Internet gives the following histogram:</p>
+
+<p align="center"><img width="80%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year.png"></p>
+
+<p>The histogram shows that the effect of missing registration, or
+ missing renewal of a registration, is that many films released in the
+ USA before 1978 are in the public domain today. One can also see
+ that several films released in recent years have terms of use that
+ permit sharing, possibly due to the rise of the
+ <a href="https://creativecommons.org/">Creative
+ Commons</a> movement.</p>
+
+<p>For machine analysis of the catalogues I wrote a small program that
+ connects to the bittorrent catalogues used by various Popcorn Time
+ variants and downloads the complete list of films in each catalogue,
+ which confirms that it is possible to fetch a complete list of all
+ available film titles. I have looked at four bittorrent catalogues.
+ The first is used by the client available from www.popcorntime.sh and
+ is named 'sh' in this document. The second is, according to document
+ 09,12, used by the client available from popcorntime.ag and
+ popcorntime.sh, and is named 'yts' in this document. The third is
+ used by the web pages available from popcorntime-online.tv and is
+ named 'apidomain' in this document. The fourth is used by the client
+ available from popcorn-time.to according to document 09,12, and is
+ named 'ukrfnlge' in this document.</p>
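+
+<p>The core of such a program is simply paging through the catalogue's
+ JSON API. Here is a minimal sketch of the idea in Python, assuming a
+ YTS-style API (the URL and field names follow the publicly documented
+ YTS API and may differ for the other catalogues; this is an
+ illustration, not the program used for the measurements):</p>
+
+<p><pre>
+# Sketch: page through a YTS-style JSON catalogue API until the
+# complete film list has been fetched.
+import json
+import urllib.request
+
+def fetch_catalogue(base="https://yts.ag/api/v2/list_movies.json"):
+    movies, page = [], 1
+    while True:
+        url = "%s?limit=50&page=%d" % (base, page)
+        with urllib.request.urlopen(url) as response:
+            data = json.load(response)["data"]
+        movies.extend(data.get("movies") or [])
+        if len(movies) >= data["movie_count"]:
+            return movies  # every page has been fetched
+        page += 1
+
+films = fetch_catalogue()
+print(len(films), "titles, first:", films[0]["title"], films[0]["year"])
+</pre></p>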
+
+<p>Point four of the method Økokrim relies on presents judgement as a
+ suitable way to find out whether a film can be legally shared on the
+ Internet or not, saying it was «verified whether or not there is a
+ reasonable expectation that the work is copyrighted». First,
+ establishing that a film is «copyrighted» is not enough to know
+ whether sharing it on the Internet is legal or not, as several films
+ have copyright terms of use that permit sharing on the Internet.
+ Examples of this are Creative Commons licensed films such as
+ Citizenfour from 2014 and Sintel from 2010. Beyond those, several
+ films are now in the public domain because of missing registration or
+ missing renewal of a registration, even though the director, the
+ production company and the distributor all want protection. Examples
+ of this are Plan 9 from Outer Space from 1959 and Night of the Living
+ Dead from 1968. All films from the USA that were in the public
+ domain before 1989-03-01 remained in the public domain, as the Berne
+ Convention, which took effect in the USA at that time, was not given
+ retroactive force. If the
+ <a href="http://www.latimes.com/local/lanow/la-me-ln-happy-birthday-song-lawsuit-decision-20150922-story.html">story
+ of the song «Happy Birthday»</a>, where payment for use was collected
+ for decades even though the song was not actually protected by
+ copyright law, tells us anything, it is that every single work must
+ be assessed carefully and in detail before one can determine whether
+ it is in the public domain or not; it is not enough to take
+ self-declared rights holders at their word. Several examples of
+ public domain works misclassified as protected come from document
+ 09,18, which lists search results for the client referred to as
+ popcorntime.sh and which, according to the note, contains only one
+ film (The Circus from 1928) that can, with some doubt, be assumed to
+ be in the public domain.</p>
+
+<p>On a quick read-through of document 09,18, which contains
+ screenshots from the use of a Popcorn Time variant, I found mentioned
+ both the film «The Brain That Wouldn't Die» from 1962, which is
+ <a href="https://archive.org/details/brain_that_wouldnt_die">available
+ from the Internet Archive</a> and which
+ <a href="https://en.wikipedia.org/wiki/List_of_films_in_the_public_domain_in_the_United_States">according
+ to Wikipedia is in the public domain in the USA</a> because it was
+ released in 1962 without a 'copyright' notice, and the film «God's
+ Little Acre» from 1958,
+ <a href="https://en.wikipedia.org/wiki/God%27s_Little_Acre_%28film%29">covered
+ on Wikipedia</a>, where it is explained that the black and white
+ version is in the public domain. Document 09,18 does not make clear
+ whether the film mentioned there is the black and white version. For
+ capacity reasons, and because the film list in document 09,18 is not
+ machine readable, I have not tried to check all the films listed
+ there against the list of films that presumably can be legally
+ distributed on the Internet.</p>
+
+<p>In a machine pass over the list of IMDB references under the
+ spreadsheet tab 'Unique titles' in document 09,14, I additionally
+ found the film «She Wore a Yellow Ribbon» from 1949, which is
+ probably also misclassified. The film «She Wore a Yellow Ribbon» is
+ available from the Internet Archive and is marked as public domain
+ there. There thus appear to be at least four times as many films
+ that can be legally shared on the Internet as assumed in the claim
+ that at least 99% of the content is illegal. I do not rule out that
+ closer investigation could uncover more. The point is in any case
+ that the method's reliance on a «reasonable expectation that the work
+ is copyrighted» makes the method unreliable.</p>
+
+<p>The measurement method in question picks random search terms from
+ the Dale-Chall word list. That list contains 3000 simple English
+ words that fourth graders in the USA are expected to understand. It
+ is not stated why this particular word list was chosen, and it is
+ unclear to me whether it is suited to produce a representative sample
+ of films. Many of the words give an empty search result. By
+ simulating similar searches, I see large deviations from the
+ distribution in the catalogue for individual measurements. This
+ suggests that individual measurements of 100 films, the way the
+ measurement method describes, are not well suited to determine the
+ share of illegal content in the bittorrent catalogues.</p>
+
+<p>One can counteract this large deviation for individual measurements
+ by performing many searches and merging the results. I have tested
+ this by carrying out 100 individual measurements (i.e. measuring
+ (100x100=) 10,000 randomly selected films), which gives a smaller,
+ but still significant, deviation from the per-year film counts for
+ the entire catalogue.</p>
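+
+<p>To give an idea of what such a simulation looks like, here is a
+ small self-contained sketch in Python (my illustration, not the code
+ used for the measurements above; the film list and word list are
+ placeholders):</p>
+
+<p><pre>
+# Sketch: compare the year distribution of keyword-based samples
+# against the full catalogue, as in the simulation described above.
+import collections
+import random
+
+def year_distribution(films):
+    """films: list of (title, year) tuples."""
+    counts = collections.Counter(year for _, year in films)
+    return {year: n / len(films) for year, n in counts.items()}
+
+def sample_by_keywords(films, words, per_word=5, wanted=100):
+    """Mimic the method: search random words, keep top hits per word."""
+    words = list(words)
+    found = {}
+    while len(found) < wanted and words:
+        word = words.pop(random.randrange(len(words)))
+        hits = [f for f in films if word in f[0].lower()]
+        for film in hits[:per_word]:
+            found.setdefault(film[0], film)
+    return list(found.values())
+
+def total_variation(p, q):
+    """Distance between two year distributions (0 = identical)."""
+    return sum(abs(p.get(y, 0) - q.get(y, 0)) for y in set(p) | set(q)) / 2
+</pre></p>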
+
+<p>The measurement method takes the top five hits from each search.
+ The search results are sorted by the number of bittorrent clients
+ registered as sharers in the catalogues, which can give a bias
+ towards films that are popular among the users of the bittorrent
+ catalogues, without saying anything about what content is available
+ or what content is shared using Popcorn Time clients. I have tried
+ to gauge how large such a bias might be by comparing with the
+ distribution obtained by instead taking the bottom five hits of each
+ search. For several catalogues, the deviation between these two
+ approaches is clearly visible in the histograms. Below are
+ histograms of the films found in the complete catalogue (green line)
+ and of the films found by searching for words from Dale-Chall.
+ Graphs labelled 'top' take the first five hits of each search, while
+ those labelled 'bottom' take the last five. One can see that the
+ results are significantly affected by whether one looks at the first
+ or the last films in a search hit.</p>
+
+<p align="center">
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-top.png"/>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-bottom.png"/>
+ <br>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-top.png"/>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-bottom.png"/>
+ <br>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-top.png"/>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-bottom.png"/>
+ <br>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-top.png"/>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-bottom.png"/>
+</p>
+
+<p>It is worth noting that the bittorrent catalogues in question were
+ not made for use with Popcorn Time. For example, the catalogue YTS,
+ used by the client downloaded from popcorntime.sh, belongs to an
+ independent file sharing-related web site, YTS.AG, with a separate
+ user community. The measurement method proposed by Økokrim thus does
+ not measure the (il)legality of the use of Popcorn Time, but the
+ (il)legality of the content of these catalogues.</p>
+
+<hr>
+
+<p id="dok-09-13">Metoden fra Økokrims dokument 09,13 i straffesaken
+om DNS-beslag.</p>
+
+<p><strong>1. Evaluation of (il)legality</strong></p>
+
+<p><strong>1.1. Methodology</strong></p>
+
+<p>Due to its technical configuration, Popcorn Time applications don't
+allow to make a full list of all titles made available. In order to
+evaluate the level of illegal operation of PCT, the following
+methodology was applied:</p>
+
+<ol>
+
+ <li>A random selection of 50 keywords, greater than 3 letters, was
+ made from the Dale-Chall list that contains 3000 simple English
+ words1. The selection was made by using a Random Number
+ Generator2.</li>
+
+ <li>For each keyword, starting with the first randomly selected
+ keyword, a search query was conducted in the movie section of the
+ respective Popcorn Time application. For each keyword, the first
+ five results were added to the title list until the number of 100
+ unique titles was reached (duplicates were removed).</li>
+
+ <li>For one fork, .CH, insufficient titles were generated via this
+ approach to reach 100 titles. This was solved by adding any
+ additional query results above five for each of the 50 keywords.
+ Since this still was not enough, another 42 random keywords were
+ selected to finally reach 100 titles.</li>
+
+ <li>It was verified whether or not there is a reasonable expectation
+ that the work is copyrighted by checking if they are available on
+ IMDb, also verifying the director, the year when the title was
+ released, the release date for a certain market, the production
+ company/ies of the title and the distribution company/ies.</li>
+
+</ol>
+
+<p><strong>1.2. Results</strong></p>
+
+<p>Between 6 and 9 June 2016, four forks of Popcorn Time were
+investigated: popcorn-time.to, popcorntime.ag, popcorntime.sh and
+popcorntime.ch. An excel sheet with the results is included in
+Appendix 1. Screenshots were secured in separate Appendixes for each
+respective fork, see Appendix 2-5.</p>
+
+<p>For each fork, out of 100, de-duplicated titles it was possible to
+retrieve data according to the parameters set out above that indicate
+that the title is commercially available. Per fork, there was 1 title
+that presumably falls within the public domain, i.e. the 1928 movie
+"The Circus" by and with Charles Chaplin.</p>
+
+<p>Based on the above it is reasonable to assume that 99% of the movie
+content of each fork is copyright protected and is made available
+illegally.</p>
+
+<p>This exercise was not repeated for TV series, but considering that
+besides production companies and distribution companies also
+broadcasters may have relevant rights, it is reasonable to assume that
+at least a similar level of infringement will be established.</p>
+
+<p>Based on the above it is reasonable to assume that 99% of all the
+content of each fork is copyright protected and are made available
+illegally.</p>
</description>
</item>
<item>
- <title>The National Library of Norway ends its unlawful use of Google Forms</title>
- <link>http://people.skolelinux.org/pere/blog/Nasjonalbiblioteket_avslutter_sin_ulovlige_bruk_av_Google_Skjemaer.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Nasjonalbiblioteket_avslutter_sin_ulovlige_bruk_av_Google_Skjemaer.html</guid>
- <pubDate>Thu, 12 Jan 2017 09:40:00 +0100</pubDate>
- <description><p>Today I received some really good news. The
-background is that before Christmas, the National Library of Norway
-arranged
-<a href="http://www.nb.no/Bibliotekutvikling/Kunnskapsorganisering/Nasjonalt-verksregister/Seminar-om-verksregister">a
-seminar about its brilliant «register of works» initiative</a>. The
-only way to sign up for the seminar was to send personal data to
-Google via Google Forms. I found this a questionable practice, as it
-should be possible to attend seminars arranged by the public sector
-without having to share one's interests, position and other personal
-data with Google. I therefore used
-<a href="https://www.mimesbronn.no/">Mimes brønn</a> to request access
-to
-<a href="https://www.mimesbronn.no/request/personopplysninger_til_google_sk">the
-agreements and assessments the National Library had around this</a>.
-The Personal Data Act sets clear limits on what must be in place
-before third parties, especially abroad, can be asked to process
-personal data on one's behalf, so thorough documentation should exist
-before something like this can be legal. Two lawyers at the National
-Library initially believed this was perfectly fine, and that Google's
-standard agreement could be used as a data processing agreement. I
-found that odd, but did not have the capacity to follow up on the case
-until two days ago.</p>
-
-<p>Today's good news, which came after I informed the National Library
-that the Norwegian Data Protection Authority rejected Google's
-standard agreements as data processing agreements back in 2011, is
-that the National Library has decided to end its use of Google
-Forms/Apps and to enter into a dialogue with DIFI to find better ways
-to handle sign-ups in line with the Personal Data Act. It is
-wonderful to see that asking what on earth the public sector is up to
-sometimes helps.</p>
+ <title>Cura, the nice 3D print slicer, is now in Debian Unstable</title>
+ <link>http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</guid>
+ <pubDate>Sun, 17 Dec 2017 07:00:00 +0100</pubDate>
+ <description><p>After several months of working and waiting, I am happy to report
+that the nice and user friendly 3D printer slicer software Cura just
+entered Debian Unstable. It consists of six packages,
+<a href="https://tracker.debian.org/pkg/cura">cura</a>,
+<a href="https://tracker.debian.org/pkg/cura-engine">cura-engine</a>,
+<a href="https://tracker.debian.org/pkg/libarcus">libarcus</a>,
+<a href="https://tracker.debian.org/pkg/fdm-materials">fdm-materials</a>,
+<a href="https://tracker.debian.org/pkg/libsavitar">libsavitar</a> and
+<a href="https://tracker.debian.org/pkg/uranium">uranium</a>. The last
+two, uranium and cura, entered Unstable yesterday. This should make
+it easier for Debian users to print on at least the Ultimaker class of
+3D printers. My nearest 3D printer is an Ultimaker 2+, so it will
+make life easier for at least me. :)</p>
+
+<p>The work to make this happen was done by Gregor Riepl, and I was
+happy to assist him in sponsoring the packages. With the introduction
+of Cura, Debian is up to three 3D printer slicers at your service,
+Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D
+printer, give it a go. :)</p>
+
+<p>The 3D printer software is maintained by the 3D printer Debian
+team, flocking together on the
+<a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/3dprinter-general">3dprinter-general</a>
+mailing list and the
+<a href="irc://irc.debian.org/#debian-3dprinting">#debian-3dprinting</a>
+IRC channel.</p>
+
+<p>The next step for Cura in Debian is to update the cura package to
+version 3.0.3, and then update the entire set of packages to version
+3.1.0, which showed up in the last few days.</p>
</description>
</item>
<item>
- <title>Is NAV violating its own privacy policy?</title>
- <link>http://people.skolelinux.org/pere/blog/Bryter_NAV_sin_egen_personvernerkl_ring_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Bryter_NAV_sin_egen_personvernerkl_ring_.html</guid>
- <pubDate>Wed, 11 Jan 2017 06:50:00 +0100</pubDate>
- <description><p>I read with interest a news story at
-<a href="http://www.digi.no/artikler/nav-avslorer-trygdemisbruk-ved-a-spore-ip-adresser/367394">digi.no</a>
-and
-<a href="https://www.nrk.no/buskerud/trygdesvindlere-avslores-av-utenlandske-ip-adresser-1.13313461">NRK</a>
-reporting that it is not just me: NAV, too, geolocates IP addresses,
-and the IP addresses of those submitting their unemployment status
-forms (meldekort) are analysed to see whether the forms are submitted
-from foreign IP addresses. Police attorney Hans Lyder Haare in
-Drammen is quoted by NRK saying «The two were exposed by, among other
-things, IP addresses. One can see that the status form comes from
-abroad.» (my translation)</p>
-
-<p>I think it is good that it becomes better known that IP addresses
-are linked to individuals, and that the collected information is used
-to determine people's location, also by actors here in Norway. I see
-it as yet another argument for using
-<a href="https://www.torproject.org/">Tor</a> as much as possible to
-make IP geolocation harder, so one can protect one's privacy and avoid
-sharing one's physical location with unauthorised parties.</p>
-
-<p>But one thing about this news story worries me. I was tipped off
-(thanks, #nuug) about
-<a href="https://www.nav.no/no/NAV+og+samfunn/Kontakt+NAV/Teknisk+brukerstotte/Snarveier/personvernerkl%C3%A6ring-for-arbeids-og-velferdsetaten">NAV's
-privacy policy</a>, which under the heading «Privacy and statistics»
-reads (my translation):</p>
+ <title>Idea for finding all public domain movies in the USA</title>
+ <link>http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</guid>
+ <pubDate>Wed, 13 Dec 2017 10:15:00 +0100</pubDate>
+ <description><p>While looking at
+<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
+of the copyright renewal entries for movies published in the USA</a>,
+an idea occurred to me. The number of renewals per year is so small
+that it should be fairly quick to transcribe them all and add
+references to the corresponding IMDB title IDs. This would give the
+(presumably) complete list of movies published 28 years earlier that
+did _not_ enter the public domain for the transcribed year. By
+fetching the list of USA movies published 28 years earlier and
+subtracting the movies with renewals, we should be left with the
+movies registered in IMDB that are now in the public domain. For the
+year 1955 (which is the one I have looked at the most), the total
+number of pages to transcribe is 21. For the 28 years from 1950 to
+1978, it should be in the range of 500-600 pages. It is just a few
+days of work, and spread among a small group of people it should be
+doable in a few weeks of spare time.</p>
+
+<p>A typical copyright renewal entry looks like this (the first one
+listed for 1955):</p>
<p><blockquote>
+ ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
+ Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
+ 10Jun55; R151558.
+</blockquote></p>
-<p>«When you visit nav.no, you leave electronic traces behind. The
-traces are created because your browser automatically sends a number
-of pieces of information to NAV's server machine every time you
-request a page. Examples are information about which browser and
-version you use, and your Internet address (IP address). For each
-page shown, the following information is stored:</p>
+<p>The movie title as well as the registration and renewal dates are
+easy enough to locate with a program (split on the first comma and
+look for DDmmmYY). The rest of the text is not required to find the
+movie in IMDB, but is useful to confirm that the correct movie is
+found. I am not quite sure what the L and R numbers mean, but suspect
+they are reference numbers into the archive of the US Copyright
+Office.</p>
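+
+<p>A minimal sketch in Python of that extraction step (my
+illustration; the entry text is the example quoted above):</p>
+
+<p><pre>
+# Sketch: pull the title, the DDmmmYY dates and the L/R reference
+# numbers out of a transcribed renewal entry.
+import re
+
+ENTRY = """ADAM AND EVIL, a photoplay in seven reels by
+Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293.
+Loew's Incorporated (PWH); 10Jun55; R151558."""
+
+def parse_renewal(text):
+    text = " ".join(text.split())      # join the wrapped lines
+    title = text.split(",", 1)[0]      # split on the first comma
+    dates = re.findall(r"\b\d{1,2}[A-Z][a-z]{2}\d{2}\b", text)
+    refs = re.findall(r"\b[LR]\d+\b", text)
+    return title, dates, refs
+
+print(parse_renewal(ENTRY))
+# ('ADAM AND EVIL', ['17Aug27', '10Jun55'], ['L24293', 'R151558'])
+</pre></p>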
-<ul>
-<li>which page you are looking at</li>
-<li>date and time</li>
-<li>which browser you use</li>
-<li>your IP address</li>
-</ul>
+<p>Tracking down the equivalent IMDB title ID is probably going to be
+a manual task, but given the year it is fairly easy to search for the
+movie title using for example
+<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
+Using this search, I find that the equivalent IMDB title ID for the
+first renewal entry from 1955 is
+<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p>
-<p>None of this information will be used to identify individuals. NAV
-uses the information to generate aggregate statistics showing, among
-other things, which pages are the most popular. The statistics are a
-tool for improving our services.»</p>
+<p>I suspect the best way to do this would be to make a specialised
+web service to make it easy for contributors to transcribe and track
+down IMDB title IDs. In the web service, once an entry is
+transcribed, the title and year could be extracted from the text and a
+search conducted in IMDB, letting the user pick the equivalent IMDB
+title ID right away. By spreading the work among volunteers, it would
+also be possible to have at least two people transcribe the same
+entries, to be able to discover any typos introduced. But I will need
+help to make this happen, as I lack the spare time to do all of this
+on my own. If you would like to help, please get in touch. Perhaps
+you can draft a web service for crowd sourcing the task?</p>
-</blockquote></p>
+<p>Note, Project Gutenberg already has some
+<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed
+copies of the US Copyright Office renewal protocols</a>, but I have
+not been able to find any film renewals there, so I suspect they only
+have copies of renewals for written works. I have not been able to
+find any transcribed versions of movie renewals so far. Perhaps they
+exist somewhere?</p>
+
+<p>I would love to figure out methods for finding all the public
+domain works in other countries too, but that is a lot harder. At
+least for Norway and Great Britain, such work involves tracking down
+the people involved in making the movie and figuring out when they
+died. It is hard enough to figure out who was part of making a movie,
+and I do not know how to automate such a procedure without a registry
+of every person involved in making movies and their year of death.</p>
-<p>I cannot quite see how analysing visitors' IP addresses to find out
-who submits status forms via the web from an IP address abroad can be
-done without conflicting with the claim that «none of this information
-will be used to identify individuals». It thus seems to me that NAV
-is violating its own privacy policy, which
-<a href="http://people.skolelinux.org/pere/blog/Er_lover_brutt_n_r_personvernpolicy_ikke_stemmer_med_praksis_.html">the
-Data Protection Authority told me at the beginning of December is
-probably a violation of the Personal Data Act</a>.</p>
-
-<p>In addition, the privacy policy is rather misleading, given that
-NAV's web pages not only supply NAV with personal data, but also ask
-the users' browsers to contact five other web servers
-(script.hotjar.com, static.hotjar.com, vars.hotjar.com,
-www.google-analytics.com and www.googletagmanager.com), making
-personal data available to the companies Hotjar and Google, and to
-anyone able to listen in on the traffic along the way (such as FRA,
-GCHQ and NSA). I also cannot see how such spreading of personal data
-can be in line with the requirements of the Personal Data Act, or with
-NAV's privacy policy.</p>
-
-<p>Perhaps NAV should take a close look at its own privacy policy? Or
-perhaps the Data Protection Authority should?</p>
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Where did that package go? &mdash; geolocated IP traceroute</title>
- <link>http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</guid>
- <pubDate>Mon, 9 Jan 2017 12:20:00 +0100</pubDate>
- <description><p>Did you ever wonder where the web traffic really
-flows to reach the web servers, and who owns the network equipment it
-is flowing through? It is possible to get a glimpse of this using
-traceroute, but it is hard to find all the details. Many years ago, I
-wrote a system to map the Norwegian Internet (trying to figure out if
-our plans for a network game service would get low enough latency, and
-who we needed to talk to about setting up game servers close to the
-users). Back then I used traceroute output from many locations (I
-asked my friends to run a script and send me their traceroute output)
-to create the graph and the map. The output from traceroute typically
-looks like this:</p>
-
-<p><pre>
-traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.202.1) 0.447 ms 0.486 ms 0.621 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.467 ms 0.578 ms 0.675 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.385 ms 0.373 ms 0.358 ms
- 4 te3-1-2.br1.fn3.as2116.net (193.156.90.3) 1.174 ms 1.172 ms 1.153 ms
- 5 he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48) 3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.857 ms
- 6 ae1.ar8.oslosda310.as2116.net (195.0.242.39) 0.662 ms 0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23) 0.622 ms
- 7 89.191.10.146 (89.191.10.146) 0.931 ms 0.917 ms 0.955 ms
- 8 * * *
- 9 * * *
-[...]
-</pre></p>
-
-<p>This shows the DNS names and IP addresses of (at least some of
-the) network equipment involved in getting the data traffic from me
-to the www.stortinget.no server, and how long it took in milliseconds
-for a packet to reach the equipment and return to me. Three packets
-are sent, and sometimes the packets do not follow the same path.
-This is shown for hop 5, where three different IP addresses replied
-to the traceroute request.</p>
-
-<p>There are many ways to measure trace routes. Other good traceroute
-implementations I use are traceroute (using ICMP packets), mtr (can
-do both ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP
-and TCP traceroute and a lot of other capabilities). All of them are
-easily available in <a href="https://www.debian.org/">Debian</a>.</p>
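-
-<p>For those wanting to experiment, here is a minimal sketch of a TCP
-traceroute using scapy. This is not the exact invocation I used, and
-it must run as root to send raw packets:</p>
-
-<p><pre>
-# Sketch: TCP traceroute with scapy (python3-scapy in Debian).
-from scapy.all import traceroute
-
-# Send TCP SYN probes to port 80 with TTL 1 to 20, print the hops
-# that replied, and draw a graph of the paths (requires graphviz).
-res, unanswered = traceroute(["www.stortinget.no"], dport=80, maxttl=20)
-res.graph(target="> traceroute-graph.svg")
-</pre></p>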
-
-<p>This time around, I wanted to know the geographic location of the
-different route points, to visualize how visiting a web page spreads
-information about the visit to a lot of servers around the globe. The
-background is that a web site today often will ask the browser to get
-the parts required to display the content (for example HTML, JSON,
-fonts, JavaScript, CSS, video) from many servers. This will leak
-information about the visit to those controlling these servers and to
-anyone able to peek at the data traffic passing by (like your ISP,
-the ISP's backbone provider, FRA, GCHQ, NSA and others).</p>
-
-<p>Let's pick an example, the Norwegian parliament web site
-www.stortinget.no. It is read daily by all members of parliament and
-their staff, as well as political journalists, activists and many
-other citizens of Norway. A visit to the www.stortinget.no web site
-will ask your browser to contact 8 other servers: ajax.googleapis.com,
-insights.hotjar.com, script.hotjar.com, static.hotjar.com,
-stats.g.doubleclick.net, www.google-analytics.com,
-www.googletagmanager.com and www.netigate.se. I extracted this by
-asking <a href="http://phantomjs.org/">PhantomJS</a> to visit the
-Stortinget web page and tell me all the URLs PhantomJS downloaded to
-render the page (in HAR format using
-<a href="https://github.com/ariya/phantomjs/blob/master/examples/netsniff.js">their
-netsniff example</a>. I am very grateful to Gorm for showing me how
-to do this). My goal is to visualize network traces to all IP
-addresses behind these DNS names, to show where visitors' personal
-information is spread when visiting the page.</p>
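-
-<p>As a sketch, the host list can be extracted from the resulting HAR
-file like this (the file name is hypothetical):</p>
-
-<p><pre>
-# Sketch: list the hosts contacted when rendering a page, from a HAR
-# file as produced by the PhantomJS netsniff.js example.
-import json
-from urllib.parse import urlparse
-
-with open('stortinget.har') as f:
-    har = json.load(f)
-
-hosts = sorted(set(urlparse(entry['request']['url']).hostname
-                   for entry in har['log']['entries']))
-for host in hosts:
-    print(host)
-</pre></p>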
-
-<p align="center"><a href="www.stortinget.no-geoip.kml"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geoip-small.png" alt="map of combined traces for URLs used by www.stortinget.no using GeoIP"/></a></p>
-
-<p>When I had a look around for options, I could not find any good
-free software tools to do this, and decided I needed my own traceroute
-wrapper outputting KML based on locations looked up using GeoIP. KML
-is easy to work with and easy to generate, and understood by several
-of the GIS tools I have available. I got good help from my NUUG
-colleague Anders Einar with this, and the result can be seen in
-<a href="https://github.com/petterreinholdtsen/kmltraceroute">my
-kmltraceroute git repository</a>. Unfortunately, the quality of the
-free GeoIP databases I could find (and the for-pay databases my
-friends had access to) is not up to the task. The IP addresses of
-central Internet infrastructure would typically be placed near the
-controlling company's main office, and not where the router is really
-located, as you can see from <a href="www.stortinget.no-geoip.kml">the
-KML file I created</a> using the GeoLite City dataset from MaxMind.</p>
-
-<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy-small.png" alt="scapy traceroute graph for URLs used by www.stortinget.no"/></a></p>
-
-<p>I also had a look at the visual traceroute graph created by
-<a href="http://www.secdev.org/projects/scapy/">the scapy project</a>,
-showing IP network ownership (aka AS owner) for the IP address in
-question.
-<a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg">The
-graph displays a lot of useful information about the traceroute in
-SVG format</a>, and gives a good indication of who controls the
-network equipment involved, but it does not include geolocation. The
-graph makes it possible to see that the information is made available
-at least to UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon,
-Telia, Level 3 Communications and NetDNA.</p>
-
-<p align="center"><a href="https://geotraceroute.com/index.php?node=4&host=www.stortinget.no"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-small.png" alt="example geotraceroute view for www.stortinget.no"/></a></p>
-
-<p>In the process, I came across the
-<a href="https://geotraceroute.com/">web service GeoTraceroute</a> by
-Salim Gasmi. Its methodology of combining guesses based on DNS names
-and various location databases, and finally using latency times to
-rule out candidate locations, seemed to do a very good job of
-guessing the correct geolocation. But it could only do one trace at a
-time, did not have a sensor in Norway and did not make the
-geolocations easily available for postprocessing. So I contacted the
-developer and asked if he would be willing to share the code (he
-refused until he had time to clean it up), but he was interested in
-providing the geolocations in a machine readable format, and willing
-to set up a sensor in Norway. So since yesterday, it is possible to
-run traces from Norway in this service thanks to a sensor node set up
-by <a href="https://www.nuug.no/">the NUUG association</a>, and get
-the trace in KML format for further processing.</p>
-
-<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.kml"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.png" alt="map of combined traces for URLs used by www.stortinget.no using geotraceroute"/></a></p>
-
-<p>Here we can see that a lot of the traffic passes through Sweden on
-its way to Denmark, Germany, Holland and Ireland. Plenty of places
-where, as the Snowden revelations confirmed, the traffic is read by
-various actors without your best interest as their top priority.</p>
-
-<p>Combining KML files is trivial using a text editor, so I could
-loop over all the hosts behind the URLs imported by www.stortinget.no,
-ask for the KML file from GeoTraceroute, and create a combined KML
-file with all the traces (unfortunately only one of the IP addresses
-behind each DNS name is traced this time; to get them all, one would
-have to request traces using IP numbers instead of DNS names from
-GeoTraceroute). That might be the next step in this project.</p>
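-
-<p>A sketch of how the merging could be automated instead, assuming
-simple flat KML files with Placemark entries like the ones returned
-by GeoTraceroute:</p>
-
-<p><pre>
-# Sketch: merge the Placemark entries of several KML files into one.
-# Usage: merge-kml.py trace1.kml trace2.kml ...
-import sys
-import xml.etree.ElementTree as ET
-
-ns = 'http://www.opengis.net/kml/2.2'
-ET.register_namespace('', ns)
-merged = ET.Element('{%s}kml' % ns)
-doc = ET.SubElement(merged, '{%s}Document' % ns)
-
-for filename in sys.argv[1:]:
-    tree = ET.parse(filename)
-    for placemark in tree.iter('{%s}Placemark' % ns):
-        doc.append(placemark)
-
-ET.ElementTree(merged).write('combined.kml',
-                             xml_declaration=True, encoding='utf-8')
-</pre></p>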
-
-<p>Armed with these tools, I find it a lot easier to figure out where
-the IP traffic moves and who controls the boxes involved in moving
-it. And every time the link crosses for example the Swedish border,
-we can be sure Swedish Signal Intelligence (FRA) is listening, as
-GCHQ does in Britain and the NSA does in the USA and on cables around
-the globe. (Hm, what should we tell them? :) Keep that in mind if you
-ever send anything unencrypted over the Internet.</p>
-
-<p>PS: KML files are drawn using
-<a href="http://ivanrublev.me/kml/">the KML viewer from Ivan
-Rublev</a>, as it was less cluttered than the local Linux application
-Marble. There are heaps of other options too.</p>
+ <title>Is the short movie «Empty Socks» from 1927 in the public domain or not?</title>
+ <link>http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</guid>
+ <pubDate>Tue, 5 Dec 2017 12:30:00 +0100</pubDate>
+ <description><p>Three years ago, a presumed lost animation film,
+<a href="https://en.wikipedia.org/wiki/Empty_Socks">Empty Socks from
+1927</a>, was discovered in the Norwegian National Library. At the
+time it was discovered, it was generally assumed to be copyrighted by
+The Walt Disney Company, and I blogged about
+<a href="http://people.skolelinux.org/pere/blog/Opphavsretts_status_for__Empty_Socks__fra_1927_.html">my
+reasoning to conclude</a> that it would enter the Norwegian
+equivalent of the public domain in 2053, based on my understanding of
+Norwegian Copyright Law. But a few days ago, I came across
+<a href="http://www.toonzone.net/forums/threads/exposed-disneys-repurchase-of-oswald-the-rabbit-a-sham.4792291/">a
+blog post claiming the movie is already in the public domain</a>, at
+least in the USA. The reasoning is as follows: The film was released
+in November or December 1927 (sources disagree), and its copyright
+was presumably registered that year. At that time, right holders of
+movies registered by the copyright office received government
+protection for their work for 28 years. After 28 years, the copyright
+had to be renewed if they wanted the government to protect it
+further. The blog post I found claims such renewal did not happen for
+this movie, and thus it entered the public domain in 1956. Yet
+someone claims the copyright was renewed and the movie is still
+copyright protected. Can anyone help me figure out which claim is
+correct? I have not been able to find Empty Socks in the Catalog of
+copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures
+<a href="http://onlinebooks.library.upenn.edu/cce/1955r.html#film">available
+from the University of Pennsylvania</a>, neither on
+<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=83;num=45">page
+45 for the first half of 1955</a>, nor on
+<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=175;num=119">page
+119 for the second half of 1955</a>. It is of course possible that
+the renewal entry was left out of the printed catalog by mistake. Is
+there some way to rule out this possibility? Please help, and update
+the Wikipedia page with your findings.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Introducing ical-archiver to split out old iCalendar entries</title>
- <link>http://people.skolelinux.org/pere/blog/Introducing_ical_archiver_to_split_out_old_iCalendar_entries.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Introducing_ical_archiver_to_split_out_old_iCalendar_entries.html</guid>
- <pubDate>Wed, 4 Jan 2017 12:20:00 +0100</pubDate>
- <description><p>Do you have a large <a href="https://icalendar.org/">iCalendar</a>
-file with lots of old entries, and would like to archive them to save
-space and resources? At least those of us using KOrganizer know that
-turning an event set on and off becomes slower and slower the more
-entries are in the set. While working on migrating our calendars to a
-<a href="http://radicale.org/">Radicale CalDAV server</a> on our
-<a href="https://freedomboxfoundation.org/">Freedombox server</a>, my
-loved one wondered if I could find a way to split up the calendar file
-she had in KOrganizer, and I set out to write a tool. I spent a few
-days writing and polishing the system, and it is now ready for general
-consumption. The
-<a href="https://github.com/petterreinholdtsen/ical-archiver">code for
-ical-archiver</a> is publicly available from a git repository on
-github. The system is written in Python and depends on
-<a href="http://eventable.github.io/vobject/">the vobject Python
-module</a>.</p>
-
-<p>To use it, locate the iCalendar file you want to operate on and
-give it as an argument to the ical-archiver script. This will
-generate a set of new files, one file per component type per year for
-all components expiring more than two years in the past. The vevent,
-vtodo and vjournal entries are handled by the script. The remaining
-entries are stored in a 'remaining' file.</p>
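-
-<p>The core idea is simple enough to sketch in a few lines using
-vobject. This is not the actual ical-archiver code, and it assumes
-every vevent has a DTSTART:</p>
-
-<p><pre>
-# Sketch: group the vevents of a calendar by year using vobject
-# (python3-vobject in Debian).
-import vobject
-
-with open('calendar.ics') as f:
-    cal = vobject.readOne(f.read())
-
-by_year = {}
-for vevent in cal.contents.get('vevent', []):
-    year = vevent.dtstart.value.year
-    by_year.setdefault(year, []).append(vevent)
-
-for year, events in sorted(by_year.items()):
-    print('%d: %d events' % (year, len(events)))
-</pre></p>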
-
-<p>This is what a test run can look like:
+ <title>Metadata proposal for movies on the Internet Archive</title>
+ <link>http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</guid>
+ <pubDate>Tue, 28 Nov 2017 12:00:00 +0100</pubDate>
+ <description><p>It would be easier to locate the movie you want to watch in
+<a href="https://www.archive.org/">the Internet Archive</a>, if the
+metadata about each movie was more complete and accurate. In the
+archiving community, a well known saying states that good metadata is
+a love letter to the future. The metadata in the Internet Archive
+could use a face lift for the future to love us back. Here is a
+proposal for a small improvement that would make the metadata more
+useful today. I've been unable to find any document describing the
+various standard fields available when uploading videos to the
+archive, so this proposal is based on my best guess and on searching
+through several of the existing movies.</p>
+
+<p>I have a few use cases in mind. First of all, I would like to be
+able to count the number of distinct movies in the Internet Archive,
+without duplicates. I would further like to identify the IMDB title
+IDs of the movies in the Internet Archive, to be able to look up an
+IMDB title ID and know if I can fetch the video from there and share
+it with my friends.</p>
+
+<p>Second, I would like the Butter data provider for The Internet
+Archive
+(<a href="https://github.com/butterproviders/butter-provider-archive">available
+from github</a>) to list as many of the good movies as possible. The
+plugin currently does a search in the archive with the following
+parameters:</p>
<p><pre>
-% ical-archiver t/2004-2016.ics
-Found 3612 vevents
-Found 6 vtodos
-Found 2 vjournals
-Writing t/2004-2016.ics-subset-vevent-2004.ics
-Writing t/2004-2016.ics-subset-vevent-2005.ics
-Writing t/2004-2016.ics-subset-vevent-2006.ics
-Writing t/2004-2016.ics-subset-vevent-2007.ics
-Writing t/2004-2016.ics-subset-vevent-2008.ics
-Writing t/2004-2016.ics-subset-vevent-2009.ics
-Writing t/2004-2016.ics-subset-vevent-2010.ics
-Writing t/2004-2016.ics-subset-vevent-2011.ics
-Writing t/2004-2016.ics-subset-vevent-2012.ics
-Writing t/2004-2016.ics-subset-vevent-2013.ics
-Writing t/2004-2016.ics-subset-vevent-2014.ics
-Writing t/2004-2016.ics-subset-vjournal-2007.ics
-Writing t/2004-2016.ics-subset-vjournal-2011.ics
-Writing t/2004-2016.ics-subset-vtodo-2012.ics
-Writing t/2004-2016.ics-remaining.ics
-%
+collection:moviesandfilms
+AND NOT collection:movie_trailers
+AND -mediatype:collection
+AND format:"Archive BitTorrent"
+AND year
</pre></p>
-<p>As you can see, the original file is untouched and new files are
-written with names derived from the original file. If you are happy
-with their content, the *-remaining.ics file can replace the original
-and the others can be archived or imported as historical calendar
-collections.</p>
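+
+<p>As a sketch, the same query can be run against the archive's
+advancedsearch API to see which movies Butter will find:</p>
+
+<p><pre>
+# Sketch: list identifiers matching the Butter provider query.
+import requests
+
+query = ('collection:moviesandfilms '
+         'AND NOT collection:movie_trailers '
+         'AND -mediatype:collection '
+         'AND format:"Archive BitTorrent" '
+         'AND year')
+r = requests.get('https://archive.org/advancedsearch.php',
+                 params={'q': query, 'fl[]': 'identifier',
+                         'rows': 50, 'output': 'json'})
+for doc in r.json()['response']['docs']:
+    print(doc['identifier'])
+</pre></p>
+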
+<p>Most of the cool movies that fail to show up in Butter do so
+because the 'year' field is missing. The 'year' field is populated
+from the year part of the 'date' field, which should hold when the
+movie was released (a date or a year). Two such examples are
+<a href="https://archive.org/details/SidneyOlcottsBen-hur1905">Ben Hur
+from 1905</a> and
+<a href="https://archive.org/details/Caminandes2GranDillama">Caminandes
+2: Gran Dillama from 2013</a>, where the year metadata field is
+missing.</p>
+
+<p>So, my proposal is simply, for every movie in The Internet Archive
+where an IMDB title ID exists, please fill in these metadata fields
+(note, they can be updated long after the video was uploaded, but as
+far as I can tell, only by the uploader):</p>
+
+<dl>
+
+<dt>mediatype</dt>
+<dd>Should be 'movie' for movies.</dd>
+
+<dt>collection</dt>
+<dd>Should contain 'moviesandfilms'.</dd>
+
+<dt>title</dt>
+<dd>The title of the movie, without the publication year.</dd>
+
+<dt>date</dt>
+<dd>The date or year the movie was released. This makes the movie
+show up in Butter, makes it possible to know the age of the movie,
+and is useful for figuring out the copyright status.</dd>
+
+<dt>director</dt>
+<dd>The director of the movie. This makes it easier to check whether
+the correct movie has been found in movie databases.</dd>
-<p>The script should probably be improved a bit. The error handling
-when discovering broken entries is not good, and I am not sure yet if
-it makes sense to split different entry types into separate files or
-not. The program is thus likely to change. If you find it
-interesting, please get in touch. :)</p>
+<dt>publisher</dt>
+<dd>The production company making the movie. Also useful for
+identifying the correct movie.</dd>
+
+<dt>links</dt>
+
+<dd>Add a link to the IMDB title page, for example like this: &lt;a
+href="http://www.imdb.com/title/tt0028496/"&gt;Movie in
+IMDB&lt;/a&gt;. This makes it easier to find duplicates and allows
+counting the number of unique movies in the Archive. Other external
+references, like to TMDB, could be added like this too.</dd>
+
+</dl>
+
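+<p>Uploaders can fill in these fields with the internetarchive Python
+module (or its 'ia' command line tool). A minimal sketch, assuming
+credentials are already set up with 'ia configure', and using the
+Caminandes entry mentioned above as the example:</p>
+
+<p><pre>
+# Sketch: fill in a missing metadata field on an item you uploaded.
+from internetarchive import modify_metadata
+
+modify_metadata('Caminandes2GranDillama', metadata={'date': '2013'})
+</pre></p>
+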
+<p>I did consider proposing a custom field for the IMDB title ID (for
+example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but suspect
+it will be easier to simply place it in the links free text field.</p>
+
+<p>I created
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+list of IMDB title IDs for several thousand movies in the Internet
+Archive</a>, but I also got a list of several thousand movies without
+such an IMDB title ID (and quite a few duplicates). It would be great
+if this data set could be integrated into the Internet Archive
+metadata to be available for everyone in the future, but with the
+current policy of leaving metadata editing to the uploaders, it will
+take a while before this happens. If you have uploaded movies to the
+Internet Archive, you can help. Please consider following my proposal
+above for your movies, to ensure they are properly counted. :)</p>
+
+<p>The list is mostly generated using Wikidata, which, based on
+Wikipedia articles, makes it possible to link between IMDB and movies
+in the Internet Archive. But there are lots of movies without a
+Wikipedia article, and some movies where only a collection page exists
+(like for <a href="https://en.wikipedia.org/wiki/Caminandes">the
+Caminandes example above</a>, where there are three movies but only
+one Wikidata entry).</p>
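+
+<p>The Wikidata mapping can be fetched with a query along these
+lines. This is a sketch based on my understanding of the relevant
+properties, not necessarily the exact query used to build the
+list:</p>
+
+<p><pre>
+# Sketch: items with both an IMDB title ID (P345) and an
+# Internet Archive identifier (P724) in Wikidata.
+import requests
+
+query = """
+SELECT ?item ?imdb ?iaid WHERE {
+  ?item wdt:P345 ?imdb ;
+        wdt:P724 ?iaid .
+}
+"""
+r = requests.get('https://query.wikidata.org/sparql',
+                 params={'query': query, 'format': 'json'})
+for row in r.json()['results']['bindings']:
+    print(row['imdb']['value'], row['iaid']['value'])
+</pre></p>
+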
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Appstream just learned how to map hardware to packages too!</title>
- <link>http://people.skolelinux.org/pere/blog/Appstream_just_learned_how_to_map_hardware_to_packages_too_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Appstream_just_learned_how_to_map_hardware_to_packages_too_.html</guid>
- <pubDate>Fri, 23 Dec 2016 10:30:00 +0100</pubDate>
- <description><p>I received a very nice Christmas present today. As my regular
-readers probably know, I have been working on
-<a href="http://packages.qa.debian.org/isenkram">the Isenkram
-system</a> for many years. The goal of the Isenkram system is to make
-it easier for users to figure out what to install to get a given piece
-of hardware to work in Debian, and a key part of this system is a way
-to map hardware to packages. Isenkram has its own mapping database,
-and also uses data provided by each package using the AppStream
-metadata format. And today,
-<a href="https://tracker.debian.org/pkg/appstream">AppStream</a> in
-Debian learned to look up hardware the same way Isenkram does it,
-i.e. using fnmatch():</p>
+ <title>Legal to share more than 3000 movies listed on IMDB?</title>
+ <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</guid>
+ <pubDate>Sat, 18 Nov 2017 21:20:00 +0100</pubDate>
+ <description><p>A month ago, I blogged about my work to
+<a href="http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html">automatically
+check the copyright status of IMDB entries</a>, and to try to count
+the number of movies listed in IMDB that are legal to distribute on
+the Internet. I have continued to look for good data sources, and
+have identified a few more. The code used to extract information from
+various data sources is available in
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+git repository</a>, currently available from github.</p>
-<p><pre>
-% appstreamcli what-provides modalias \
- usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
-Identifier: pymissile [generic]
-Name: pymissile
-Summary: Control original Striker USB Missile Launcher
-Package: pymissile
-% appstreamcli what-provides modalias usb:v0694p0002d0000
-Identifier: libnxt [generic]
-Name: libnxt
-Summary: utility library for talking to the LEGO Mindstorms NXT brick
-Package: libnxt
----
-Identifier: t2n [generic]
-Name: t2n
-Summary: Simple command-line tool for Lego NXT
-Package: t2n
----
-Identifier: python-nxt [generic]
-Name: python-nxt
-Summary: Python driver/interface/wrapper for the Lego Mindstorms NXT robot
-Package: python-nxt
----
-Identifier: nbc [generic]
-Name: nbc
-Summary: C compiler for LEGO Mindstorms NXT bricks
-Package: nbc
-%
-</pre></p>
+<p>So far I have identified 3186 unique IMDB title IDs. To gain a
+better understanding of the structure of the data set, I created a
+histogram of the year associated with each movie (typically the
+release year). It is interesting to notice where the peaks and dips
+in the graph are located. I wonder why they are placed there. I
+suspect World War II caused the dip around 1940, but what caused the
+peak around 2010?</p>
-<p>A similar query can be done against the combined AppStream and
-Isenkram databases using the isenkram-lookup tool:</p>
+<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2017-11-18-verk-i-det-fri-filmer.png" /></p>
-<p><pre>
-% isenkram-lookup usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
-pymissile
-% isenkram-lookup usb:v0694p0002d0000
-libnxt
-nbc
-python-nxt
-t2n
-%
-</pre></p>
+<p>I've so far identified ten sources of IMDB title IDs for movies in
+the public domain or with a free license. These are the statistics
+reported when running 'make stats' in the git repository:</p>
+
+<pre>
+ 249 entries ( 6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json
+ 2301 entries ( 540 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
+ 830 entries ( 29 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
+ 2109 entries ( 377 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
+ 291 entries ( 122 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
+ 144 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-manual.json
+ 350 entries ( 1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json
+ 4 entries ( 0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json
+ 698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
+ 8 entries ( 8 unique) with and 196 without IMDB title ID in free-movies-vodo.json
+ 3186 unique IMDB title IDs in total
+</pre>
+
+<p>The entries without an IMDB title ID are candidates to increase
+the data set, but might equally well be duplicates of entries already
+listed with an IMDB title ID in one of the other sources, or represent
+movies that lack an IMDB title ID. I've seen examples of all these
+situations when peeking at the entries without an IMDB title ID.
+Based on these data sources, the lower bound for the number of movies
+listed in IMDB that are legal to distribute on the Internet is
+somewhere between 3186 and 4713.</p>
-<p>You can find the modalias values relevant for your machine using
-<tt>cat $(find /sys/devices/ -name modalias)</tt>.</p>
-
-<p>If you want to make this system a success and help Debian users
-make the most of the hardware they have, please help
-<a href="https://wiki.debian.org/AppStream/Guidelines">add AppStream
-metadata for your package following the guidelines</a> documented in
-the wiki. So far only 11 packages provide such information, among the
-several hundred hardware specific packages in Debian. The Isenkram
-database on the other hand contains 101 packages, mostly related to
-USB dongles. Most of the packages with hardware mapping in AppStream
-are LEGO Mindstorms related, because I have, as part of my
-involvement in <a href="https://wiki.debian.org/LegoDesigners">the
-Debian LEGO team</a>, given priority to making sure LEGO users get
-proposed the complete set of packages in Debian for that particular
-hardware. The team also got a nice Christmas present today. The
-<a href="https://tracker.debian.org/pkg/nxt-firmware">nxt-firmware
-package</a> made it into Debian. With this package in place, it is
-now possible to use the LEGO Mindstorms NXT unit with only free
-software, as the nxt-firmware package contains the source and
-firmware binaries for the NXT brick.</p>
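-
-<p>To give an idea what such metadata looks like, here is a minimal
-sketch of the modalias part of a metainfo file, based on my reading
-of the guidelines (the identifier and pattern are examples):</p>
-
-<p><pre>
-&lt;component&gt;
-  &lt;id&gt;pymissile&lt;/id&gt;
-  &lt;provides&gt;
-    &lt;modalias&gt;usb:v1130p0202d*&lt;/modalias&gt;
-  &lt;/provides&gt;
-&lt;/component&gt;
-</pre></p>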
+<p>It would be great for improving the accuracy of this measurement
+if the various sources added IMDB title IDs to their metadata. I have
+tried to reach the people behind the various sources to ask if they
+are interested in doing this, without any replies so far. Perhaps you
+can help me get in touch with the people behind VODO, Public Domain
+Torrents, Public Domain Movies and Public Domain Review to try to
+convince them to add more metadata to their movie entries?</p>
+
+<p>Another way you could help is by adding pages to Wikipedia about
+movies that are legal to distribute on the Internet. If such a page
+exists and includes a link to both IMDB and The Internet Archive, the
+script used to generate free-movies-archive-org-wikidata.json should
+pick up the mapping as soon as Wikidata is updated.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Isenkram updated with a lot more hardware-package mappings</title>
- <link>http://people.skolelinux.org/pere/blog/Isenkram_updated_with_a_lot_more_hardware_package_mappings.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Isenkram_updated_with_a_lot_more_hardware_package_mappings.html</guid>
- <pubDate>Tue, 20 Dec 2016 11:55:00 +0100</pubDate>
- <description><p><a href="http://packages.qa.debian.org/isenkram">The Isenkram
-system</a> I wrote two years ago to make it easier in Debian to find
-and install packages to get your hardware dongles to work, is still
-going strong. It is a system to look up the hardware present on or
-connected to the current system, and map the hardware to Debian
-packages. It can either be done using the tools in isenkram-cli or
-using the user space daemon in the isenkram package. The latter will
-notify you, when inserting new hardware, about what packages to
-install to get the dongle working. It will even provide a button to
-click on to ask packagekit to install the packages.</p>
-
-<p>Here is a command line example from my Thinkpad laptop:</p>
+ <title>Some notes on fault tolerant storage systems</title>
+ <link>http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</guid>
+ <pubDate>Wed, 1 Nov 2017 15:35:00 +0100</pubDate>
+ <description><p>If you care about how fault tolerant your storage is, you might
+find these articles and papers interesting. They have shaped how I
+think when designing a storage system.</p>
-<p><pre>
-% isenkram-lookup
-bluez
-cheese
-ethtool
-fprintd
-fprintd-demo
-gkrellm-thinkbat
-hdapsd
-libpam-fprintd
-pidgin-blinklight
-thinkfan
-tlp
-tp-smapi-dkms
-tp-smapi-source
-tpb
-%
-</pre></p>
+<ul>
-<p>It can also list the firmware packages providing firmware requested
-by the loaded kernel modules, which in my case is an empty list
-because I have all the firmware my machine needs:</p>
+<li>USENIX ;login: <a
+href="https://www.usenix.org/publications/login/summer2017/ganesan">Redundancy
+Does Not Imply Fault Tolerance. Analysis of Distributed Storage
+Reactions to Single Errors and Corruptions</a> by Aishwarya Ganesan,
+Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi
+H. Arpaci-Dusseau</li>
-<p><pre>
-% /usr/sbin/isenkram-autoinstall-firmware -l
-info: did not find any firmware files requested by loaded kernel modules. exiting
-%
-</pre></p>
+<li>ZDNet
+<a href="http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/">Why
+RAID 5 stops working in 2009</a> by Robin Harris</li>
+
+<li>ZDNet
+<a href="http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/">Why
+RAID 6 stops working in 2019</a> by Robin Harris</li>
+
+<li>USENIX FAST'07
+<a href="http://research.google.com/archive/disk_failures.pdf">Failure
+Trends in a Large Disk Drive Population</a> by Eduardo Pinheiro,
+Wolf-Dietrich Weber and Luiz André Barroso</li>
+
+<li>USENIX ;login: <a
+href="https://www.usenix.org/system/files/login/articles/hughes12-04.pdf">Data
+Integrity. Finding Truth in a World of Guesses and Lies</a> by Doug
+Hughes</li>
+
+<li>USENIX FAST'08
+<a href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
+Analysis of Data Corruption in the Storage Stack</a> by
+L. N. Bairavasundaram, G. R. Goodson, B. Schroeder, A. C.
+Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>
+
+<li>USENIX FAST'07 <a
+href="https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder_html/">Disk
+failures in the real world: what does an MTTF of 1,000,000 hours mean
+to you?</a> by B. Schroeder and G. A. Gibson.</li>
+
+<li>USENIX FAST'08 <a
+href="https://www.usenix.org/events/fast08/tech/full_papers/jiang/jiang_html/">Are
+Disks the Dominant Contributor for Storage Failures? A Comprehensive
+Study of Storage Subsystem Failure Characteristics</a> by Weihang
+Jiang, Chongfeng Hu, Yuanyuan Zhou, and Arkady Kanevsky</li>
-<p>The last few days I had a look at several of the around 250
-packages in Debian with udev rules. These seem like good candidates
-to install when a given hardware dongle is inserted, and I found
-several that should be proposed by isenkram. I have not had time to
-check all of them, but am happy to report that there are now 97
-packages mapped to hardware by Isenkram. 11 of these packages provide
-hardware mapping using AppStream, while the rest are listed in the
-modaliases file provided in isenkram.</p>
-
-<p>These are the packages with hardware mappings at the moment. The
-<strong>marked packages</strong> are also announcing their hardware
-support using AppStream, for everyone to use:</p>
-
-<p>air-quality-sensor, alsa-firmware-loaders, argyll,
-<strong>array-info</strong>, avarice, avrdude, b43-fwcutter,
-bit-babbler, bluez, bluez-firmware, <strong>brltty</strong>,
-<strong>broadcom-sta-dkms</strong>, calibre, cgminer, cheese, colord,
-<strong>colorhug-client</strong>, dahdi-firmware-nonfree, dahdi-linux,
-dfu-util, dolphin-emu, ekeyd, ethtool, firmware-ipw2x00, fprintd,
-fprintd-demo, <strong>galileo</strong>, gkrellm-thinkbat, gphoto2,
-gpsbabel, gpsbabel-gui, gpsman, gpstrans, gqrx-sdr, gr-fcdproplus,
-gr-osmosdr, gtkpod, hackrf, hdapsd, hdmi2usb-udev, hpijs-ppds, hplip,
-ipw3945-source, ipw3945d, kde-config-tablet, kinect-audio-setup,
-<strong>libnxt</strong>, libpam-fprintd, <strong>lomoco</strong>,
-madwimax, minidisc-utils, mkgmap, msi-keyboard, mtkbabel,
-<strong>nbc</strong>, <strong>nqc</strong>, nut-hal-drivers, ola,
-open-vm-toolbox, open-vm-tools, openambit, pcgminer, pcmciautils,
-pcscd, pidgin-blinklight, printer-driver-splix,
-<strong>pymissile</strong>, python-nxt, qlandkartegt,
-qlandkartegt-garmin, rosegarden, rt2x00-source, sispmctl,
-soapysdr-module-hackrf, solaar, squeak-plugins-scratch, sunxi-tools,
-<strong>t2n</strong>, thinkfan, thinkfinger-tools, tlp, tp-smapi-dkms,
-tp-smapi-source, tpb, tucnak, uhd-host, usbmuxd, viking,
-virtualbox-ose-guest-x11, w1retap, xawtv, xserver-xorg-input-vmmouse,
-xserver-xorg-input-wacom, xserver-xorg-video-qxl,
-xserver-xorg-video-vmware, yubikey-personalization and
-zd1211-firmware</p>
-
-<p>If you know of other packages, please let me know with a wishlist
-bug report against the isenkram-cli package, and ask the package
-maintainer to
-<a href="https://wiki.debian.org/AppStream/Guidelines">add AppStream
-metadata according to the guidelines</a> to provide the information
-for everyone. In time, I hope to get rid of the isenkram specific
-hardware mapping and depend exclusively on AppStream.</p>
-
-<p>Note, the AppStream metadata for broadcom-sta-dkms is matching too
-much hardware, and suggests that the package works with any Ethernet
-card. See <a href="http://bugs.debian.org/838735">bug #838735</a> for
-the details. I hope the maintainer finds time to address it soon. In
-the meantime I provide an override in isenkram.</p>
+<li>SIGMETRICS 2007
+<a href="http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf">An
+analysis of latent sector errors in disk drives</a> by
+L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler</li>
+
+</ul>
+
+<p>Several of these research papers are based on data collected from
+hundreds of thousands or millions of disks, and their findings are
+eye opening. The short story is: do not implicitly trust RAID or
+redundant storage systems. Details matter. And unfortunately there
+are few options on Linux addressing all the identified issues. Both
+ZFS and Btrfs are doing a fairly good job, but have legal and
+practical issues of their own. I wonder how cluster file systems like
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if the fault tolerance does not work.</p>
+
+<p>Just remember, in the end, it does not matter how redundant or how
+fault tolerant your storage is, if you do not continuously monitor
+its status to detect and replace failed disks.</p>
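+
+<p>As a final illustration, here is a minimal sketch of such a check.
+The device names are examples, and a real setup would rather use the
+smartd daemon from smartmontools:</p>
+
+<p><pre>
+# Sketch: warn when a disk fails its SMART health self-assessment.
+import subprocess
+
+for dev in ('/dev/sda', '/dev/sdb'):
+    status = subprocess.run(['smartctl', '-H', dev],
+                            capture_output=True, text=True)
+    if 'PASSED' not in status.stdout:
+        print('warning: %s may be failing:' % dev)
+        print(status.stdout)
+</pre></p>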
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>