<atom:link href="http://people.skolelinux.org/pere/blog/index.rss" rel="self" type="application/rss+xml" />
<item>
- <title>My own self balancing Lego Segway</title>
- <link>http://people.skolelinux.org/pere/blog/My_own_self_balancing_Lego_Segway.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/My_own_self_balancing_Lego_Segway.html</guid>
- <pubDate>Fri, 4 Nov 2016 10:15:00 +0100</pubDate>
- <description><p>A while back I received a Gyro sensor for the NXT
-<a href="http://mindstorms.lego.com">Mindstorms</a> controller as a birthday
-present. It had been on my wishlist for a while, because I wanted to
-build a Segway-like balancing Lego robot. I had already built
-<a href="http://www.nxtprograms.com/NXT2/segway/">a simple balancing
-robot</a> with the kids, using the light/color sensor included in the
-NXT kit as the balance sensor, but it did not work very well. It
-could balance for a while, but was very sensitive to the lighting
-conditions in the room and the reflective properties of the surface,
-and would fall over after a short while. I wanted something more
-robust, and
-<a href="https://www.hitechnic.com/cgi-bin/commerce.cgi?preadd=action&amp;key=NGY1044">the
-gyro sensor from HiTechnic</a>, which I believed would solve the
-problem, had been on my wishlist for some years before it suddenly
-showed up as a gift from my loved ones. :)</p>
-
-<p>Unfortunately I have not had time to sit down and play with it
-since then. But that changed some days ago, when I was searching for
-Lego Segway information and came across a recipe from HiTechnic for
-building
-<a href="http://www.hitechnic.com/blog/gyro-sensor/htway/">the
-HTWay</a>, a Segway-like balancing robot. Build instructions and
-<a href="https://www.hitechnic.com/upload/786-HTWayC.nxc">source
-code</a> were included, so it was just a question of putting it all
-together. And thanks to the great work of many Debian developers, the
-compiler needed to build the source for the NXT is already included in
-Debian, so I was ready to go in less than an hour. The resulting
-robot does not look very impressive in its simplicity:</p>
-
-<p align="center"><img width="70%" src="http://people.skolelinux.org/pere/blog/images/2016-11-04-lego-htway-robot.jpeg"></p>
-
-<p>Because I lack the infrared sensor used to control the robot in the
-design from HiTechnic, I had to comment out the last task
-(taskControl). I simply placed /* and */ around it to get the program
-working without that sensor present. Now it balances just fine until
-the battery runs low:</p>
-
-<p align="center"><video width="70%" controls="true">
- <source src="http://people.skolelinux.org/pere/blog/images/2016-11-04-lego-htway-balancing.ogv" type="video/ogg">
-</video></p>
-
-<p>Now we would like to teach it how to follow a line and take remote
-control instructions using the included Bluetooth receiver in the NXT.</p>
-
-<p>If you, like me, love LEGO and want to make sure the tools needed
-to work with LEGO are available in Debian and all our derivative
-distributions like Ubuntu, check out
-<a href="http://wiki.debian.org/LegoDesigners">the LEGO designers
-project page</a> and join the Debian LEGO team. Personally I own an
-RCX and NXT controller (no EV3), and would like to make sure the
-Debian tools needed to program the systems I own work as they
-should.</p>
-</description>
- </item>
-
- <item>
- <title>Activity trackers that protect your privacy</title>
- <link>http://people.skolelinux.org/pere/blog/Aktivitetsb_nd_som_beskytter_privatsf_ren.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Aktivitetsb_nd_som_beskytter_privatsf_ren.html</guid>
- <pubDate>Thu, 3 Nov 2016 09:55:00 +0100</pubDate>
- <description><p>I was so impressed by
-<a href="https://www.nrk.no/norge/forbrukerradet-mener-aktivitetsarmband-strider-mot-norsk-lov-1.13209079">today's
-good news from NRK</a>, that the Norwegian Consumer Council is
-reporting the terms of use for activity trackers from Fitbit, Garmin,
-Jawbone and Mio to the Data Protection Authority and the Consumer
-Ombudsman, that I sent the following letter to the Consumer Council to
-express my support:
-
-<blockquote>
-
-<p>I was very happy to read that the Consumer Council is
-<a href="http://www.forbrukerradet.no/siste-nytt/klager-inn-aktivitetsarmband-for-brudd-pa-norsk-lov/">reporting
-several activity trackers to the Data Protection Authority over
-unacceptable terms of use</a>. For a while now I have wanted an
-activity tracker that can measure pulse, motion and preferably other
-health-related indicators as well. The only ones I have found for
-sale commit, as you too have discovered, grave intrusions into the
-private sphere and send the information out of the house to people and
-organisations I do not want to share activity and health information
-with. I want an alternative that does <em>not</em> send information
-to the cloud, but instead uses
-<a href="http://people.skolelinux.org/pere/blog/Fri_og__pen_standard__slik_Digistan_ser_det.html">a
-freely and openly standardised</a> protocol (or at least a documented
-protocol without patent and copyright restrictions on its use) to
-communicate with computer equipment I control. After all, I am not
-interested in paying anyone to appropriate my personal data.
-Unfortunately I have found no alternative so far.</p>
-
-<p>It is not enough to change the terms of use for the devices, which
-is what the Data Protection Authority often settles for in its case
-handling, when they work the way e.g. Fitbit (the one I have looked at
-most closely) does. Fitbit encrypts the information on the device and
-sends it encrypted to the vendor. This makes it in practice
-impossible both to check what kind of information is transferred, and
-to receive the information yourself instead of Fitbit. No matter what
-story the terms of use tell, you are at the mercy of both the vendor's
-goodwill and of its government not forcing it to lie to its customers
-about whether personal data is spread beyond what the terms of use
-say. It is well documented how e.g. the USA forces companies, by
-means of so-called National Security Letters, to hand over personal
-data while forbidding them from telling their customers about it.</p>
-
-<p>Keep up the good work; I am very glad you have looked into the
-matter. Do you know of any activity tracker for sale today that does
-not force you to share activity and health information with the
-vendor?</p>
-
-</blockquote>
-
-<p>I hope a competitor that respects its customers' privacy manages to
-gain a foothold in the market, so that there is a real alternative for
-those of us who fully expect cloud vendors to put their own revenue
-and government orders far ahead of their customers' right to privacy.
-I have no confidence that the Data Protection Authority will demand
-anything more than changed terms that explain explicitly just how
-thoroughly use of the products eradicates the customers' privacy.
-That will probably make the reported wristbands «legal», but still
-force customers to share their personal data with the vendor.</p>
+ <title>Legal to share more than 11,000 movies listed on IMDB?</title>
+ <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html</guid>
+ <pubDate>Sun, 7 Jan 2018 23:30:00 +0100</pubDate>
+ <description><p>I've continued to track down lists of movies that
+are legal to distribute on the Internet, and identified more than
+11,000 title IDs in The Internet Movie Database so far. Most of them
+(57%) are feature films from the USA published before 1923. I've also
+tracked down more than 24,000 movies I have not yet been able to map
+to an IMDB title ID, so the real number could be a lot higher.
+According to the front web page of
+<a href="https://retrofilmvault.com/">Retro Film Vault</a>, there are
+44,000 public domain films, so I guess there are still some left to
+identify.</p>
+
+<p>The complete data set is available from
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+public git repository</a>, including the scripts used to create it.
+Most of the data is collected using web scraping, for example from the
+"product catalog" of companies selling copies of public domain movies,
+but any source I find believable is used. I've so far had to throw
+out three sources because I did not trust the public domain status of
+the movies listed.</p>
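Each scraped source ends up as one JSON file, so a new scraper boils down to two steps: pull title/year pairs out of the HTML and serialise them. A minimal sketch of that shape, assuming a hypothetical `<li>Title (year)</li>` page layout and a simplified output schema (the real mklist-* scripts in the repository are the authoritative versions):

```python
import json
import re

def parse_catalog(html):
    """Extract title/year pairs from one catalog page.

    The '<li>Title (1923)</li>' layout is hypothetical; every real
    source needs its own parsing rules.
    """
    pattern = re.compile(r"<li>([^<(]+?)\s*\((\d{4})\)</li>")
    return [{"title": title.strip(), "year": int(year)}
            for title, year in pattern.findall(html)]

def write_source_file(path, entries):
    # One JSON file per data source, mirroring the free-movies-*.json
    # naming convention (the exact schema in the repository may differ).
    with open(path, "w") as f:
        json.dump({"entries": entries}, f, indent=2, sort_keys=True)
```

A real scraper would fetch the page with urllib or similar, normalise the titles, and then try to cross-link each entry to an IMDB title ID.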
+
+<p>Anyway, this is the summary of the 28 collected data sources so
+far:</p>
+
+<p><pre>
+ 2352 entries ( 66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
+ 2302 entries ( 120 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
+ 195 entries ( 63 unique) with and 200 without IMDB title ID in free-movies-cinemovies.json
+ 89 entries ( 52 unique) with and 38 without IMDB title ID in free-movies-creative-commons.json
+ 344 entries ( 28 unique) with and 655 without IMDB title ID in free-movies-fesfilm.json
+ 668 entries ( 209 unique) with and 1064 without IMDB title ID in free-movies-filmchest-com.json
+ 830 entries ( 21 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
+ 19 entries ( 19 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
+ 6822 entries ( 6669 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-us.json
+ 137 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-imdb-externlist.json
+ 1205 entries ( 57 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
+ 84 entries ( 20 unique) with and 167 without IMDB title ID in free-movies-infodigi-pd.json
+ 158 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
+ 113 entries ( 4 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
+ 182 entries ( 100 unique) with and 0 without IMDB title ID in free-movies-letterboxd-silent.json
+ 229 entries ( 87 unique) with and 1 without IMDB title ID in free-movies-manual.json
+ 44 entries ( 2 unique) with and 64 without IMDB title ID in free-movies-openflix.json
+ 291 entries ( 33 unique) with and 474 without IMDB title ID in free-movies-profilms-pd.json
+ 211 entries ( 7 unique) with and 0 without IMDB title ID in free-movies-publicdomainmovies-info.json
+ 1232 entries ( 57 unique) with and 1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
+ 46 entries ( 13 unique) with and 81 without IMDB title ID in free-movies-publicdomainreview.json
+ 698 entries ( 64 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
+ 1758 entries ( 882 unique) with and 3786 without IMDB title ID in free-movies-retrofilmvault.json
+ 16 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-thehillproductions.json
+ 63 entries ( 16 unique) with and 141 without IMDB title ID in free-movies-vodo.json
+11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID
+</pre></p>
+
+<p>I keep finding more data sources. I found the cinemovies source
+just a few days ago, and as you can see from the summary, it extended
+my list with 63 movies. Check out the mklist-* scripts in the git
+repository if you are curious how the lists are created. Many of the
+titles are extracted using searches on IMDB, where I look for the
+title and year, and accept search results with only one movie listed
+if the year matches. This allows me to automatically use many lists
+of movies without IMDB title ID references, at the cost of an
+increased risk of wrongly identifying an IMDB title ID as public
+domain. So far my random manual checks have indicated that the
+method is solid, but I really wish all lists of public domain movies
+would include a unique movie identifier like the IMDB title ID. It
+would make the job of counting movies in the public domain a lot
+easier.</p>
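The conservative accept rule described above (exactly one hit, matching year) is small enough to show directly. A sketch, with the result-dict field names (`imdb_id`, `year`) invented for the example:

```python
def accept_imdb_match(year, search_results):
    """Return the IMDB title ID only when the search produced exactly
    one movie and its release year matches; otherwise give up and
    leave the entry for manual review.  This trades coverage for a
    lower risk of wrongly tagging an IMDB title as public domain."""
    if len(search_results) == 1 and search_results[0]["year"] == year:
        return search_results[0]["imdb_id"]
    return None
```

Everything the rule rejects simply stays in the "without IMDB title ID" bucket for later manual work.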
</description>
</item>
<item>
- <title>Experience and updated recipe for using the Signal app without a mobile phone</title>
- <link>http://people.skolelinux.org/pere/blog/Experience_and_updated_recipe_for_using_the_Signal_app_without_a_mobile_phone.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Experience_and_updated_recipe_for_using_the_Signal_app_without_a_mobile_phone.html</guid>
- <pubDate>Mon, 10 Oct 2016 11:30:00 +0200</pubDate>
- <description><p>In July
-<a href="http://people.skolelinux.org/pere/blog/How_to_use_the_Signal_app_if_you_only_have_a_land_line__ie_no_mobile_phone_.html">I
-wrote how to get the Signal Chrome/Chromium app working</a> without
-the ability to receive SMS messages (aka without a cell phone). It is
-time to share some experiences and provide an updated setup.</p>
-
-<p>The Signal app has worked fine for several months now, and I use
-it regularly to chat with my loved ones. I had a major snag at the
-end of my summer vacation, when the app completely forgot my
-setup, identity and keys. The reason behind this major mess was
-running out of disk space. To avoid that ever happening again I have
-started storing everything in <tt>userdata/</tt> in git, to be able to
-roll back to an earlier version if the files are wiped by mistake. I
-have had to use it once since introducing the git backup. When rolling
-back to an earlier version, one needs to use the 'reset session' option
-in Signal to get going again, and notify the people you talk with about
-the problem. I assume there is some sequence number tracking in the
-protocol to detect rollback attacks. The git repository is rather big
-(674 MiB so far), but I have not tried to figure out if some of the
-content can be added to a .gitignore file, due to lack of spare
-time.</p>
-
-<p>I've also hit the 90 day timeout blocking, and noticed that this
-makes it impossible to send messages using Signal. I could still
-receive them, but had to patch the code with a new timestamp to send.
-I believe the timeout is added by the developers to force people to
-upgrade to the latest version of the app, even when there are no
-protocol changes, to reduce the version skew among the user base and
-thus try to keep the number of support requests down.</p>
-
-<p>Since my original recipe, the Signal source code has changed
-slightly, making the old patch fail to apply cleanly. Below is an
-updated patch, including the shell wrapper I use to start Signal. The
-original version required a new user to locate the JavaScript console
-and call a function from there. I got help from a friend with more
-JavaScript knowledge than me to modify the code to provide a GUI
-button instead. This means that to get started you just need to run
-the wrapper and click the 'Register without mobile phone' button.
-I've also modified the timeout code to always set it to 90 days in
-the future, to avoid having to patch the code regularly.</p>
-
-<p>So, the updated recipe for Debian Jessie:</p>
+ <title>Comments on «Evaluation of (il)legality» for Popcorn Time</title>
+ <link>http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html</guid>
+ <pubDate>Wed, 20 Dec 2017 11:40:00 +0100</pubDate>
+ <description><p>Yesterday I appeared in Follo district court as an
+ expert witness and presented my work on
+ <a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">counting
+ movies in the public domain</a>, related to
+ <a href="https://www.nuug.no/">the NUUG association</a>'s
+ involvement in
+ <a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the case
+ concerning Økokrim's seizure, and later confiscation, of the DNS
+ domain popcorn-time.no</a>. I talked about several things, but
+ mostly about my assessment of how the movie industry has measured
+ how illegal Popcorn Time is. As far as I can tell, the industry's
+ measurement was passed on unchanged by the Norwegian police, and
+ the courts have relied on it when ruling on Popcorn Time both in
+ Norway and abroad (the 99% figure is cited in foreign court
+ decisions as well).</p>
+
+<p>Ahead of my testimony I wrote a note, mostly for myself, with the
+ points I wanted to get across. Here is a copy of the note I wrote
+ and handed to the prosecution. Oddly enough, the judges did not
+ want the note, so if I understood the court procedure correctly
+ only the histogram graph was entered into the case documentation.
+ The judges were apparently only interested in what I said in court,
+ not in what I had written beforehand. In any case I assume others
+ besides me may find the text useful, so I am publishing it here. I
+ attach a transcript of document 09,13, the central document I
+ comment on.</p>
+
+<p><strong>Comments on «Evaluation of (il)legality» for Popcorn
+ Time</strong></p>
+
+<p><strong>Summary</strong></p>
+
+<p>The measurement method Økokrim relies on when claiming that 99% of
+ the movies available from Popcorn Time are shared illegally has
+ weaknesses.</p>
+
+<p>Whoever assessed which movies can be legally shared failed to
+ identify the movies that can be, and apparently assumed that only
+ very old movies qualify. Økokrim takes as given that among the
+ movies observed available via the various Popcorn Time variants,
+ only one, the Charlie Chaplin movie «The Circus» from 1928, can be
+ freely shared. I find three more among the observed movies: «The
+ Brain That Wouldn't Die» from 1962, «God's Little Acre» from 1958
+ and «She Wore a Yellow Ribbon» from 1949. There may well be more.
+ The data set Økokrim relies on thus contains at least four times as
+ many movies that can be legally shared on the Internet as assumed
+ when claiming that less than 1% can be legally shared.</p>
+
+<p>Second, the sample obtained by searching for random words drawn
+ from the Dale-Chall word list deviates from the year distribution
+ of the underlying movie catalogs as a whole, which affects the
+ ratio between movies that can and cannot be legally shared. In
+ addition, keeping only the upper part (the first five entries) of
+ each search result skews the year distribution further, which
+ affects the share of public domain works in the sample.</p>
+
+<p>What is measured is not the (il)legality of using Popcorn Time,
+ but the (il)legality of the content of bittorrent movie catalogs
+ that are maintained independently of Popcorn Time.</p>
+
+<p>Documents discussed: 09,12, <a href="#dok-09-13">09,13</a>, 09,14,
+09,18, 09,19, 09,20.</p>
+
+<p><strong>Detailed comments</strong></p>
+
+<p>Økokrim has told the courts that at least 99% of everything
+ available from the various Popcorn Time variants is shared
+ illegally on the Internet. I became curious how they arrived at
+ that figure, and this note collects my comments on the measurement
+ Økokrim refers to. Part of my reason for looking into the case is
+ that I am interested in identifying and counting how many artistic
+ works have fallen into the public domain, or for other reasons can
+ be legally shared on the Internet, so I wanted to know how the one
+ percent that might be legal had been found.</p>
+
+<p>The 99% share comes from an uncredited and undated note that sets
+ out to document a method for measuring how (il)legal the various
+ Popcorn Time variants are.</p>
+
+<p>Briefly summarised, the method document explains that because a
+ complete list of all movie titles available via Popcorn Time cannot
+ be obtained, something meant to be a representative sample is
+ created by picking 50 search terms longer than three characters
+ from the word list known as Dale-Chall. For each term a search is
+ performed, and the first five movies of each search result are
+ collected until 100 unique movie titles have been found. If the 50
+ terms were not enough to reach 100 unique titles, more movies from
+ each search result were added. If that was still not enough, more
+ randomly chosen terms were drawn and searched for until 100 unique
+ movie titles had been identified.</p>
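To experiment with how such a procedure behaves, the sampling loop can be re-implemented in a few lines. This Python sketch is my reconstruction, not Økokrim's code; `search(word)` stands in for the Popcorn Time search box, and the sketch simplifies the batching by drawing words until the target is reached rather than in rounds of 50:

```python
import random

def sample_titles(wordlist, search, wanted=100, per_search=5, seed=None):
    """Collect `wanted` unique titles by searching for random words
    longer than three characters, keeping the first `per_search` hits
    per word, roughly as the method document describes.
    `search(word)` must return a ranked list of titles."""
    rng = random.Random(seed)
    words = [w for w in wordlist if len(w) > 3]
    rng.shuffle(words)
    titles = []
    for word in words:
        for title in search(word)[:per_search]:
            if title not in titles:
                titles.append(title)
            if len(titles) == wanted:
                return titles
    return titles  # word list exhausted before reaching `wanted`
```

Running this against a catalog with a known year distribution makes it easy to compare sample and population, which is the basis for the deviations discussed below.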
+
+<p>Then, for each movie title, it was «assessed whether it was
+ reasonable to expect that the work was protected by copyright, by
+ looking at whether the movie was available in IMDB, as well as at
+ the director, the release year, when it was released for specific
+ market regions, and which production and distribution companies
+ were registered» (my paraphrase).</p>
+
+<p>The method is reproduced in the uncredited documents 09,13 and
+ 09,19, and is described from page 47 onwards in document 09,20,
+ slides dated 2017-02-01. The latter is credited to Geerart Bourlon
+ of the Motion Picture Association EMEA. The method appears to have
+ several weaknesses that bias the results. It starts by asserting
+ that it is not possible to extract a complete list of all available
+ movie titles, and gives this as the rationale for the choice of
+ method. That premise is inconsistent with document 09,12, which
+ also lacks author and date. Document 09,12 describes how the
+ entire catalog content was downloaded and counted. Document 09,12
+ is possibly the same report that the judgment of Oslo district
+ court of 2017-11-03
+ (<a href="https://www.domstol.no/no/Enkelt-domstol/Oslo--tingrett/Nyheter/ma-sperre-for-popcorn-time/">case
+ 17-093347TVI-OTIR/05</a>) refers to as the report of 1 June 2017 by
+ Alexander Kind Petersen, but I have not compared the documents word
+ for word to verify this.</p>
+
+<p>IMDB is short for The Internet Movie Database, a reputable
+ commercial web service used actively by the movie industry and
+ others to keep track of which feature films (and some other kinds
+ of movies) exist or are in production, and of information about
+ those movies. The data quality is high, with few errors and few
+ movies missing. IMDB does not show the copyright status of a movie
+ on its info page. As part of the IMDB service there are
+ volunteer-made lists of movies assumed to be in the public
+ domain.</p>
+
+<p>There are several sources that can be used to find movies that are
+ in the public domain or carry terms of use that make it legal for
+ everyone to share them on the Internet. Over the last few weeks I
+ have tried to collect and cross-reference these lists in an attempt
+ to count the movies in the public domain. Starting from such lists
+ (and, in the case of the Internet Archive, from the published
+ movies), I have so far managed to identify more than 11,000 movies,
+ mainly feature films.</p>
+
+<p>The vast majority of the entries come from IMDB itself, based on
+ the fact that all movies made in the USA before 1923 have fallen
+ into the public domain. The corresponding cut-off date for Great
+ Britain is 1912-07-01, but that accounts for only a very small
+ share of the feature films in IMDB (19 in total). Another large
+ share comes from the Internet Archive, where I have identified
+ movies with a reference to IMDB. The Internet Archive, based in
+ the USA, has a
+ <a href="https://archive.org/about/terms.php">policy of only
+ publishing movies that are legal to distribute</a>. During this
+ work I have come across several movies that had been removed from
+ the Internet Archive, which makes me conclude that the people
+ controlling the Internet Archive take an active approach to hosting
+ only legal content, even though it is largely run by volunteers.
+ Another large list of movies comes from the commercial company
+ Retro Film Vault, which sells public domain movies to the TV and
+ movie industry. I have also used lists of movies claimed to be in
+ the public domain, namely Public Domain Review, Public Domain
+ Torrents and Public Domain Movies (.net and .info), as well as
+ lists of Creative Commons licensed movies from Wikipedia, VODO and
+ The Hill Productions. I have spot checked by assessing movies that
+ are only mentioned on a single list. Where I found errors that
+ made me doubt the judgment of those who compiled a list, I
+ discarded that list entirely (this happened to one list from
+ IMDB).</p>
+
+<p>Starting from works that can be assumed to be legally shared on
+ the Internet (from, among others, the Internet Archive, Public
+ Domain Torrents, Public Domain Review and Public Domain Movies),
+ and linking them to entries in IMDB, I have so far identified more
+ than 11,000 movies (mainly feature films) that there is reason to
+ believe can be legally distributed by anyone on the Internet. As
+ additional sources I have used lists of movies assumed or claimed
+ to be in the public domain. These sources come from communities
+ working to make available to the general public all works that have
+ fallen into the public domain or carry terms of use permitting
+ sharing.</p>
+
+<p>In addition to the more than 11,000 movies with an identified IMDB
+ title ID, I have found more than 20,000 entries for which I have
+ not yet had the capacity to track down an IMDB title ID. Some of
+ these are probably duplicates of the IMDB entries identified so
+ far, but hardly all of them. Retro Film Vault claims to have
+ 44,000 public domain movie works in its catalog, so the real number
+ may be considerably higher than what I have managed to identify so
+ far. The conclusion is that 11,000 is a lower bound on the number
+ of movies in IMDB that can be legally shared on the Internet.
+ According to
+ <a href="http://www.imdb.com/stats">statistics from IMDB</a> there
+ are 4.6 million titles registered, 3 million of which are TV series
+ episodes. I have not found out how they are distributed per
+ year.</p>
+
+<p>Distributing by year all the IMDB title IDs that are claimed to be
+ legally shareable on the Internet gives the following histogram:</p>
+
+<p align="center"><img width="80%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year.png"></p>
+
+<p>The histogram shows that the effect of missing registration, or
+ missing renewal of registration, is that many movies released in
+ the USA before 1978 are in the public domain today. One can also
+ see that several movies released in recent years carry terms of use
+ that permit sharing, possibly owing to the rise of the
+ <a href="https://creativecommons.org/">Creative
+ Commons</a> movement.</p>
+
+<p>For machine analysis of the catalogs I wrote a small program that
+ connects to the bittorrent catalogs used by the various Popcorn
+ Time variants and downloads the complete list of movies in each,
+ which confirms that it is possible to fetch a complete list of all
+ available movie titles. I have looked at four bittorrent catalogs.
+ The first is used by the client available from www.popcorntime.sh
+ and is named 'sh' in this document. The second is, according to
+ document 09,12, used by the client available from popcorntime.ag
+ and popcorntime.sh, and is named 'yts' in this document. The third
+ is used by the web pages available from popcorntime-online.tv and
+ is named 'apidomain' in this document. The fourth is used by the
+ client available from popcorn-time.to according to document 09,12,
+ and is named 'ukrfnlge' in this document.</p>
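The download itself is mostly a matter of paging through each catalog's JSON API until the reported movie count is reached. A sketch of that pagination loop, assuming a YTS-style `list_movies` response with `data.movies` and `data.movie_count` fields (the four catalogs differ in detail, so treat the field names as assumptions):

```python
def fetch_complete_catalog(fetch_page):
    """Page through a YTS-style list_movies API until every title has
    been seen.  `fetch_page(page)` must return the decoded JSON for
    one page, e.g. by fetching
    https://<apidomain>/api/v2/list_movies.json?page=N with urllib."""
    movies = []
    page = 1
    while True:
        data = fetch_page(page)["data"]
        batch = data.get("movies") or []
        movies.extend(batch)
        # Stop when the server says we have everything, or when a
        # page comes back empty (defensive, in case counts disagree).
        if not batch or len(movies) >= data["movie_count"]:
            return movies
        page += 1
```

Separating the HTTP fetch from the loop keeps the logic testable and makes it easy to point the same code at each of the four catalogs.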
+
+<p>Point four of the method Økokrim relies on states that judgment is
+ a suitable way to find out whether a movie can be legally shared on
+ the Internet or not, saying that it was «assessed whether it was
+ reasonable to expect that the work was protected by copyright».
+ First, establishing that a movie is «protected by copyright» is not
+ enough to know whether sharing it on the Internet is legal, as
+ several movies carry copyright terms of use that permit sharing on
+ the Internet. Examples are Creative Commons licensed movies such
+ as Citizenfour from 2014 and Sintel from 2010. In addition, there
+ are several movies that are now in the public domain because of
+ missing registration or missing renewal of registration, even
+ though the director, the production company and the distributor
+ would all have preferred them protected. Examples are Plan 9 from
+ Outer Space from 1959 and Night of the Living Dead from 1968. All
+ US movies that were in the public domain before 1989-03-01 remained
+ free, because the Berne Convention, which took effect in the USA on
+ that date, was not given retroactive force. If the
+ <a href="http://www.latimes.com/local/lanow/la-me-ln-happy-birthday-song-lawsuit-decision-20150922-story.html">story
+ of the song «Happy Birthday»</a> teaches us anything, with payment
+ for its use collected for decades even though the song was never
+ actually protected by copyright law, it is that every single work
+ must be assessed carefully and in detail before one can determine
+ whether it is in the public domain or not; it is not enough to take
+ self-declared rights holders at their word. More examples of
+ public domain works misclassified as protected appear in document
+ 09,18, which lists search results for the client referred to as
+ popcorntime.sh and, according to the note, contains only one movie
+ (The Circus from 1928) that can, with some doubt, be assumed to be
+ in the public domain.</p>
+
+<p>On a quick read-through of document 09,18, which contains
+ screenshots from the use of a Popcorn Time variant, I found
+ mentioned both the movie «The Brain That Wouldn't Die» from 1962,
+ which is
+ <a href="https://archive.org/details/brain_that_wouldnt_die">available
+ from the Internet Archive</a> and which
+ <a href="https://en.wikipedia.org/wiki/List_of_films_in_the_public_domain_in_the_United_States">according
+ to Wikipedia is in the public domain in the USA</a> because it was
+ released in 1962 without a copyright notice, and the movie «God's
+ Little Acre» from 1958,
+ <a href="https://en.wikipedia.org/wiki/God%27s_Little_Acre_%28film%29">described
+ on Wikipedia</a>, which states that the black-and-white version is
+ in the public domain. Document 09,18 does not say whether the
+ movie mentioned there is the black-and-white version. For capacity
+ reasons, and because the movie overview in document 09,18 is not
+ machine readable, I have not tried to check all the movies listed
+ there against the list of movies assumed to be legally
+ distributable on the Internet.</p>
+
+<p>A machine pass over the list of IMDB references in the spreadsheet
+ tab 'Unique titles' of document 09,14 additionally turned up the
+ movie «She Wore a Yellow Ribbon» from 1949, which is probably also
+ misclassified. «She Wore a Yellow Ribbon» is available from the
+ Internet Archive and is marked as public domain there. There thus
+ appear to be at least four times as many movies that can be legally
+ shared on the Internet than assumed when claiming that at least 99%
+ of the content is illegal. I do not rule out that closer
+ investigation could uncover more. The point, in any case, is that
+ the method's criterion of what is «reasonable to expect to be
+ protected by copyright» makes the method unreliable.</p>
+
+<p>The measurement method in question picks random search terms from
+ the Dale-Chall word list. That list contains 3000 simple English
+ words that fourth graders in the USA are expected to understand.
+ It is not stated why this particular word list was chosen, and it
+ is unclear to me whether it is suited to producing a representative
+ sample of movies. Many of the words give an empty search result.
+ By simulating equivalent searches, I see large deviations from the
+ catalog's distribution in individual measurements. This suggests
+ that single measurements of 100 movies, performed the way the
+ method describes, are not well suited to determining the share of
+ illegal content in the bittorrent catalogs.</p>
+
+<p>This large deviation in individual measurements can be countered
+ by performing many searches and merging the results. I have tested
+ this by running 100 individual measurements (i.e. measuring
+ (100x100=) 10,000 randomly chosen movies), which gives a smaller,
+ but still considerable, deviation from the count of movies per year
+ in the complete catalog.</p>
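One way to put a number on the deviation between a sample and the full catalog is to compare their per-year distributions. This sketch uses total-variation distance, which is my choice of metric for illustration; the note itself only compares the histograms visually:

```python
from collections import Counter

def year_histogram(movies):
    """Number of titles per release year."""
    return Counter(m["year"] for m in movies)

def total_variation(hist_a, hist_b):
    """Total-variation distance between two year distributions:
    0.0 means identical per-year shares, 1.0 means disjoint years."""
    n_a = sum(hist_a.values())
    n_b = sum(hist_b.values())
    return 0.5 * sum(abs(hist_a[y] / n_a - hist_b[y] / n_b)
                     for y in set(hist_a) | set(hist_b))
```

Computing this for single 100-title samples versus the merged 10,000-title run makes the shrinking, but non-vanishing, deviation directly comparable.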
+
+<p>The measurement method keeps the top five entries of each search
+ result. The search results are sorted by the number of bittorrent
+ clients registered as sharers in the catalogs, which can bias the
+ sample towards the movies that are popular among the users of the
+ bittorrent catalogs, while saying nothing about what content is
+ available or what content is shared using Popcorn Time clients. I
+ have tried to gauge the size of any such bias by comparing with the
+ distribution one gets by instead taking the bottom five entries of
+ each search result. For several catalogs the deviation between the
+ two approaches is clearly visible in the histograms. Below are
+ histograms of the movies found in the complete catalog (green line)
+ and of the movies found by searching for Dale-Chall words. Graphs
+ labeled 'top' take the first five entries of each search result,
+ while those labeled 'bottom' take the last five. One can see that
+ the results are affected considerably by whether one looks at the
+ first or the last movies of a search hit.</p>
+
+<p align="center">
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-top.png"/>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-bottom.png"/>
+ <br>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-top.png"/>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-bottom.png"/>
+ <br>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-top.png"/>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-bottom.png"/>
+ <br>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-top.png"/>
+ <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-bottom.png"/>
+</p>
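The top-versus-bottom selection can be sketched like this, with a made-up search result sorted by seeder count (titles, years and counts are all hypothetical):

```python
from statistics import mean

# Hypothetical search result for one keyword: (title, year, seeders).
# The catalogs sort hits by number of sharing clients, so recent
# popular titles float to the top of the list.
hits = [
    ("blockbuster remake", 2016, 9400),
    ("recent hit", 2015, 7100),
    ("franchise sequel", 2014, 5200),
    ("award winner", 2011, 2600),
    ("cult classic", 1982, 310),
    ("forgotten noir", 1948, 14),
    ("early talkie", 1931, 6),
    ("silent short", 1927, 3),
]
hits.sort(key=lambda h: h[2], reverse=True)

top5_years = [year for _, year, _ in hits[:5]]       # method as described
bottom5_years = [year for _, year, _ in hits[-5:]]   # control variant
print(mean(top5_years), mean(bottom5_years))
```

With this toy data, the top-five selection skews heavily towards recent, popular releases, which is exactly the kind of bias the histograms above hint at.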
+
+<p>It is worth noting that the bittorrent catalogs in question were
+ not made for use with Popcorn Time. For example, the YTS catalog,
+ used by the client downloaded from popcorntime.sh, belongs to an
+ independent file-sharing-related web site, YTS.AG, with a separate
+ user community. The measurement method proposed by Økokrim thus
+ does not measure the (il)legality of using Popcorn Time, but the
+ (il)legality of the content of these catalogs.</p>
+
+<hr>
+
+<p id="dok-09-13">Metoden fra Økokrims dokument 09,13 i straffesaken
+om DNS-beslag.</p>
+
+<p><strong>1. Evaluation of (il)legality</strong></p>
+
<p><strong>1.1. Methodology</strong></p>
+
<p>Due to its technical configuration, Popcorn Time applications do
not allow making a full list of all titles made available. In order
to evaluate the level of illegal operation of PCT, the following
methodology was applied:</p>
<ol>
-<li>First, install the required packages to get the source code and
-the browser you need. Signal only works with Chrome/Chromium, as far
-as I know, so you need to install it.
-
-<pre>
-apt install git tor chromium
-git clone https://github.com/WhisperSystems/Signal-Desktop.git
-</pre></li>
-
-<li>Modify the source code using the commands listed in the patch
-block below.</li>
-
-<li>Start Signal using the run-signal-app wrapper (for example using
-<tt>`pwd`/run-signal-app</tt>).</li>
-
-<li>Click on 'Register without mobile phone', fill in a phone number
-where you can receive calls within the next minute, receive the
-verification code, enter it into the form field and press 'Register'.
-Note, the phone number you use will be your Signal username, i.e. the
-way others can find you on Signal.</li>
-
-<li>You can now use Signal to contact others. Note, new contacts do
-not show up in the contact list until you restart Signal, and there is
-no way to assign names to contacts. There is also no way to create or
-update chat groups. I suspect this is because the web app does not
-have an associated contact database.</li>
+  <li>A random selection of 50 keywords, greater than 3 letters, was
+  made from the Dale-Chall list that contains 3000 simple English
+  words. The selection was made by using a Random Number
+  Generator.</li>
+
+ <li>For each keyword, starting with the first randomly selected
+ keyword, a search query was conducted in the movie section of the
+ respective Popcorn Time application. For each keyword, the first
+ five results were added to the title list until the number of 100
+ unique titles was reached (duplicates were removed).</li>
+
+ <li>For one fork, .CH, insufficient titles were generated via this
+ approach to reach 100 titles. This was solved by adding any
+ additional query results above five for each of the 50 keywords.
+ Since this still was not enough, another 42 random keywords were
+ selected to finally reach 100 titles.</li>
+
+ <li>It was verified whether or not there is a reasonable expectation
+ that the work is copyrighted by checking if they are available on
+ IMDb, also verifying the director, the year when the title was
+ released, the release date for a certain market, the production
+ company/ies of the title and the distribution company/ies.</li>
</ol>
-<p>I am still a bit uneasy about using Signal, because of the way its
-main author moxie0 rejects federation and accepts dependencies on
-major corporations like Google (part of the code is fetched from
-Google) and Amazon (the central coordination point is owned by
-Amazon). See for example
-<a href="https://github.com/LibreSignal/LibreSignal/issues/37">the
-LibreSignal issue tracker</a> for a thread documenting the author's
-view on these issues. But the network effect is strong in this case,
-and several of the people I want to communicate with already use
-Signal. Perhaps we can all move to <a href="https://ring.cx/">Ring</a>
-once it <a href="https://bugs.debian.org/830265">works on my
-laptop</a>? It already works on Windows and Android, and is included
-in <a href="https://tracker.debian.org/pkg/ring">Debian</a> and
-<a href="https://launchpad.net/ubuntu/+source/ring">Ubuntu</a>, but it
-is not working on Debian Stable.</p>
-
-<p>Anyway, this is the patch I apply to the Signal code to get it
-working. It switches to the production servers, adjusts the build
-expiration, makes registration easier and adds the shell wrapper:</p>
+<p><strong>1.2. Results</strong></p>
+
+<p>Between 6 and 9 June 2016, four forks of Popcorn Time were
+investigated: popcorn-time.to, popcorntime.ag, popcorntime.sh and
+popcorntime.ch. An excel sheet with the results is included in
+Appendix 1. Screenshots were secured in separate Appendixes for each
+respective fork, see Appendix 2-5.</p>
+
+<p>For each fork, out of 100 de-duplicated titles it was possible to
+retrieve data according to the parameters set out above that indicate
+that the title is commercially available. Per fork, there was 1 title
+that presumably falls within the public domain, i.e. the 1928 movie
+"The Circus" by and with Charles Chaplin.</p>
+
+<p>Based on the above it is reasonable to assume that 99% of the movie
+content of each fork is copyright protected and is made available
+illegally.</p>
+
+<p>This exercise was not repeated for TV series, but considering that
+besides production companies and distribution companies also
+broadcasters may have relevant rights, it is reasonable to assume that
+at least a similar level of infringement will be established.</p>
+
+<p>Based on the above it is reasonable to assume that 99% of all the
+content of each fork is copyright protected and is made available
+illegally.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Cura, the nice 3D print slicer, is now in Debian Unstable</title>
+ <link>http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html</guid>
+ <pubDate>Sun, 17 Dec 2017 07:00:00 +0100</pubDate>
+ <description><p>After several months of working and waiting, I am happy to report
+that the nice and user friendly 3D printer slicer software Cura just
+entered Debian Unstable. It consists of six packages,
+<a href="https://tracker.debian.org/pkg/cura">cura</a>,
+<a href="https://tracker.debian.org/pkg/cura-engine">cura-engine</a>,
+<a href="https://tracker.debian.org/pkg/libarcus">libarcus</a>,
+<a href="https://tracker.debian.org/pkg/fdm-materials">fdm-materials</a>,
+<a href="https://tracker.debian.org/pkg/libsavitar">libsavitar</a> and
+<a href="https://tracker.debian.org/pkg/uranium">uranium</a>. The
+last two to enter, uranium and cura, arrived in Unstable yesterday.
+This should make it easier for Debian users to print on at least the
+Ultimaker class of 3D printers. My nearest 3D printer is an
+Ultimaker 2+, so it will make life easier for at least me. :)</p>
+
+<p>The work to make this happen was done by Gregor Riepl, and I was
+happy to assist him by sponsoring the packages. With the introduction
+of Cura, Debian now has three 3D printer slicers at your service:
+Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D
+printer, give it a go. :)</p>
+
+<p>The 3D printer software is maintained by the 3D printer Debian
+team, flocking together on the
+<a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/3dprinter-general">3dprinter-general</a>
+mailing list and the
+<a href="irc://irc.debian.org/#debian-3dprinting">#debian-3dprinting</a>
+IRC channel.</p>
+
+<p>The next step for Cura in Debian is to update the cura package to
+version 3.0.3 and then update the entire set of packages to version
+3.1.0, which showed up in the last few days.</p>
+</description>
+ </item>
+
+ <item>
+ <title>Idea for finding all public domain movies in the USA</title>
+ <link>http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html</guid>
+ <pubDate>Wed, 13 Dec 2017 10:15:00 +0100</pubDate>
+ <description><p>While looking at
+<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
+of the copyright renewal entries for movies published in the USA</a>,
+an idea occurred to me. The number of renewals per year is so small
+that it should be fairly quick to transcribe them all and add
+references to the corresponding IMDB title IDs. This would give the
+(presumably) complete list of movies published 28 years earlier that
+did _not_ enter the public domain for the transcribed year. By
+fetching the list of USA movies published 28 years earlier and
+subtracting the movies with renewals, we should be left with movies
+registered in IMDB that are now in the public domain. For the year
+1955 (which is the one I have looked at the most), the total number
+of pages to transcribe is 21. For the 28 years from 1950 to 1978, it
+should be in the range of 500-600 pages. It is just a few days of
+work, and spread among a small group of people it should be doable in
+a few weeks of spare time.</p>
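The subtraction step sketched above is a simple set difference over title IDs. In this sketch, tt0017588 is the IMDB ID for the 1927 movie mentioned later in this post; the other IDs are placeholders, not real data:

```python
# Hypothetical ID sets; the real data would come from the transcribed
# renewal records and the list of US movies registered in IMDB.
# tt0017588 is "Adam and Evil" (1927); the other IDs are placeholders.
published_1927 = {"tt0017588", "tt0000001", "tt0000002", "tt0000003"}
renewed_in_1955 = {"tt0017588", "tt0000002"}

# Movies published in 1927 whose copyright was not renewed 28 years
# later should have entered the public domain in 1956.
public_domain = published_1927 - renewed_in_1955
print(sorted(public_domain))
```

With complete transcriptions for 1950-1978, the same two-set subtraction per year would yield the candidate public domain list directly.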
-<pre>
-cd Signal-Desktop; cat &lt;&lt;EOF | patch -p1
-diff --git a/js/background.js b/js/background.js
-index 24b4c1d..579345f 100644
---- a/js/background.js
-+++ b/js/background.js
-@@ -33,9 +33,9 @@
- });
- });
-
-- var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
-+ var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org';
- var SERVER_PORTS = [80, 4433, 8443];
-- var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
-+ var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
- var messageReceiver;
- window.getSocketStatus = function() {
- if (messageReceiver) {
-diff --git a/js/expire.js b/js/expire.js
-index 639aeae..beb91c3 100644
---- a/js/expire.js
-+++ b/js/expire.js
-@@ -1,6 +1,6 @@
- ;(function() {
- 'use strict';
-- var BUILD_EXPIRATION = 0;
-+ var BUILD_EXPIRATION = Date.now() + (90 * 24 * 60 * 60 * 1000);
-
- window.extension = window.extension || {};
-
-diff --git a/js/views/install_view.js b/js/views/install_view.js
-index 7816f4f..1d6233b 100644
---- a/js/views/install_view.js
-+++ b/js/views/install_view.js
-@@ -38,7 +38,8 @@
- return {
- 'click .step1': this.selectStep.bind(this, 1),
- 'click .step2': this.selectStep.bind(this, 2),
-- 'click .step3': this.selectStep.bind(this, 3)
-+ 'click .step3': this.selectStep.bind(this, 3),
-+ 'click .callreg': function() { extension.install('standalone') },
- };
- },
- clearQR: function() {
-diff --git a/options.html b/options.html
-index dc0f28e..8d709f6 100644
---- a/options.html
-+++ b/options.html
-@@ -14,7 +14,10 @@
- &lt;div class='nav'>
- &lt;h1>{{ installWelcome }}&lt;/h1>
- &lt;p>{{ installTagline }}&lt;/p>
-- &lt;div> &lt;a class='button step2'>{{ installGetStartedButton }}&lt;/a> &lt;/div>
-+ &lt;div> &lt;a class='button step2'>{{ installGetStartedButton }}&lt;/a>
-+ &lt;br> &lt;a class="button callreg">Register without mobile phone&lt;/a>
-+
-+ &lt;/div>
- &lt;span class='dot step1 selected'>&lt;/span>
- &lt;span class='dot step2'>&lt;/span>
- &lt;span class='dot step3'>&lt;/span>
---- /dev/null 2016-10-07 09:55:13.730181472 +0200
-+++ b/run-signal-app 2016-10-10 08:54:09.434172391 +0200
-@@ -0,0 +1,12 @@
-+#!/bin/sh
-+set -e
-+cd $(dirname $0)
-+mkdir -p userdata
-+userdata="`pwd`/userdata"
-+if [ -d "$userdata" ] && [ ! -d "$userdata/.git" ] ; then
-+ (cd $userdata && git init)
-+fi
-+(cd $userdata && git add . && git commit -m "Current status." || true)
-+exec chromium \
-+ --proxy-server="socks://localhost:9050" \
-+ --user-data-dir=$userdata --load-and-launch-app=`pwd`
-EOF
-chmod a+rx run-signal-app
-</pre>
+<p>A typical copyright renewal entry looks like this (the first one
+listed for 1955):</p>
+
+<p><blockquote>
+ ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
+ Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
+ 10Jun55; R151558.
+</blockquote></p>
+
+<p>The movie title as well as the registration and renewal dates are
+easy enough for a program to locate (split on the first comma and
+look for DDmmmYY). The rest of the text is not required to find the
+movie in IMDB, but is useful to confirm the correct movie is found.
+I am not quite sure what the L and R numbers mean, but suspect they
+are reference numbers into the archive of the US Copyright Office.</p>
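A rough sketch of such a parser, applied to the entry quoted above; the regular expressions are my guess at the format, not a tested transcription tool:

```python
import re

# The first renewal entry listed for 1955, quoted from the catalog.
entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")

# The title is everything before the first comma.
title, rest = entry.split(",", 1)

# Dates use the DDmmmYY pattern, e.g. 17Aug27.
dates = re.findall(r"\b\d{1,2}[A-Z][a-z]{2}\d{2}\b", rest)

# The L and R reference numbers: a letter followed by digits.
numbers = re.findall(r"\b([LR]\d+)\b", rest)

print(title, dates, numbers)
```

For this entry, the sketch extracts the title, the registration and renewal dates, and the two reference numbers, which is enough to seed an IMDB search by title and year.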
+
+<p>Tracking down the equivalent IMDB title ID is probably going to be
+a manual task, but given the year it is fairly easy to search for the
+movie title using for example
+<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
+Using this search, I find that the equivalent IMDB title ID for the
+first renewal entry from 1955 is
+<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p>
+
+<p>I suspect the best way to do this would be to make a specialised
+web service making it easy for contributors to transcribe and track
+down IMDB title IDs. In the web service, once an entry is
+transcribed, the title and year could be extracted from the text and
+a search conducted in IMDB, letting the user pick the equivalent
+IMDB title ID right away. By spreading the work among volunteers,
+it would also be possible to have at least two people transcribe the
+same entries, to be able to discover any typos introduced. But I
+will need help to make this happen, as I lack the spare time to do
+all of this on my own. If you would like to help, please get in
+touch. Perhaps you can draft a web service for crowdsourcing the
+task?</p>
+
+<p>Note, Project Gutenberg already has some
+<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed
+copies of the US Copyright Office renewal protocols</a>, but I have
+not been able to find any film renewals there, so I suspect they only
+have copies of renewals for written works. I have not found any
+transcribed versions of movie renewals anywhere so far. Perhaps they
+exist somewhere?</p>
+
+<p>I would love to figure out methods for finding all the public
+domain works in other countries too, but that is a lot harder. At
+least for Norway and Great Britain, such work involves tracking down
+the people involved in making the movie and figuring out when they
+died. It is hard enough to figure out who was part of making a
+movie, and I do not know how to automate such a procedure without a
+registry of every person involved in making movies and their year of
+death.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>NRK's source protection when NRK email is shared with foreign intelligence?</title>
- <link>http://people.skolelinux.org/pere/blog/NRKs_kildevern_n_r_NRK_epost_deles_med_utenlands_etterretning_.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/NRKs_kildevern_n_r_NRK_epost_deles_med_utenlands_etterretning_.html</guid>
- <pubDate>Sat, 8 Oct 2016 08:15:00 +0200</pubDate>
- <description><p>A few weeks ago, NRK
-<a href="https://nrkbeta.no/2016/09/02/securing-whistleblowers/">launched</a>
-a new <a href="https://www.nrk.no/varsle/">whistleblower portal that
-uses SecureDrop to receive tips</a>, where it is essential that no
-outsiders learn that NRK has been tipped off. It is a large step
-forward for NRK, and reading the blog post about what they have
-considered and how the solution is set up, it looks like they have
-done a thorough job. But receiving tips via SecureDrop is quite a
-bit of extra work, so the whistleblower page says "News tips that do
-not require this kind of extra protection are welcome at
-nrk.no/03030", and the 03030 page suggests, in addition to a web
-form, using email, SMS, telephone, showing up in person and paper
-mail. This article is about these other methods.</p>
-
-<p>When you send email to an @nrk.no address, the email is sent out
-of the country to computers controlled by Microsoft. You can check
-this yourself by looking up the mail delivery address (MX) in DNS.
-For NRK, this is currently "nrk-no.mail.protection.outlook.com". As
-you can see, NRK has chosen to outsource its mail reception to those
-behind outlook.com, i.e. Microsoft. You can check which route the
-network traffic takes through the Internet to the mail exchanger
-using the <tt>traceroute</tt> program, and find out who owns an
-Internet address using the whois system. Doing this for email
-traffic to @nrk.no shows that the traffic from Norway towards
-nrk-no.mail.protection.outlook.com goes via Sweden towards either
-Ireland or Germany (it varies from time to time and may change over
-time).</p>
-
-<p>We know from
-<a href="https://no.wikipedia.org/wiki/FRA-loven">the introduction of
-the FRA law</a> that IP traffic crossing the border into Sweden is
-intercepted by Försvarets radioanstalt (FRA). We further know,
-thanks to the Snowden confirmations, that traffic crossing the
-border into Great Britain is intercepted by Government
-Communications Headquarters (GCHQ). In addition, a proposal was
-just launched in Norway to let the military intelligence service
-intercept traffic crossing the Norwegian border. I am not aware of
-documentation that Ireland and Germany do the same. The point is in
-any case that foreign intelligence services are able to pick up the
-traffic when email is sent to @nrk.no. It is of course also
-available to Microsoft, which is under US jurisdiction and
-<a href="https://www.theguardian.com/world/2013/jul/11/microsoft-nsa-collaboration-user-data">cooperates
-with US intelligence in several areas</a>. Those who tip NRK about
-news via email can thus assume that it will become known to many
-others than NRK that they did so.</p>
-
-<p>Use of SMS and telephone is registered by the phone companies,
-among others, and is by law and regulation available to, among
-others, the Police, NAV and the Financial Supervisory Authority, in
-addition to the IT staff at the phone companies and their superiors.
-If the caller or recipient uses a smartphone, such contact will also
-be made available to various app providers and to those listening in
-on the traffic between the phone and the app provider, depending on
-what is installed on the phones in use.</p>
-
-<p>Paper mail may seem safe, and I do not know how much is
-registered and stored by the postal service's computerized mail
-sorting centres. It would not surprise me if it is recorded where
-in the country each envelope comes from and where it is addressed,
-at least for a short period. Nor do I know who such information is
-made available to. It could be enough to pinpoint potential sources
-when cross-referenced with who knew the relevant information and
-where they were located (available, for example, if they carry a
-mobile phone or live nearby).</p>
-
-<p>Showing up in person at an NRK journalist's door is probably the
-safest, but beware of the NRK cafeteria. There they violate
-<a href="http://www.lovdata.no/all/hl-19850524-028.html#14">section 14
-of the Central Bank Act</a> and refuse to let people pay with cash.
-Instead they require you to notify your bank card issuer of your
-location by using a bank card. Bank transactions are available to
-the card issuer (be it VISA, Mastercard, Nets and/or a bank), in
-addition to the police and, at least in the past, Se & Hør (via
-disloyal employees, as was revealed after the publication of the
-book «Livet, det forbannede» by Ken B. Rasmussen). But how many
-people know an NRK journalist personally? Visiting NRK at
-Marienlyst requires registering your arrival electronically in the
-visitor system. I do not know what happens to that data set, but
-have reason to believe an SMS with the name provided is sent to the
-person you are visiting. Perhaps wise to give a false name.</p>
-
-<p>Once the tip has reached NRK, it is to be handled editorially
-within NRK. I know from various sources that most of the
-journalists use locally installed software, but some use Google Docs
-and other cloud services when writing, in violation of internal
-guidelines. How do you know who that applies to? I have no idea,
-but it may be worth asking, to check that the journalist has
-considered the issue, before handing over a tip. And if the tip is
-discussed internally by email, there is reason to believe that the
-internal email will also be shared with Microsoft and foreign
-intelligence, as mentioned earlier, though it may be that it stays
-within NRK's internal MS Exchange setup. But Microsoft wants to
-move all Exchange customers "into the cloud" (or onto other
-people's computers, which is what it amounts to), so I do not know
-how long that would last.</p>
-
-<p>In addition, we know that
-<a href="https://www.nrk.no/ytring/elektronisk-kildevern-i-nrk-1.11941196">NRK
-has chosen to give the National Security Authority (NSM) access to
-look at internal and external Internet traffic</a> at NRK by setting
-up so-called VDI nodes, despite
-<a href="https://www.nrk.no/ytring/bekymring-for-nrks-kildevern-1.11941584">protests
-from NRK's journalist union</a>. I do not know whether it can pick
-up documents stored on internal file servers or documents created in
-the internal web based publishing systems, but I do know that what
-the node looks for on the network is controlled by NSM and updated
-automatically, so it makes little sense to check what the node looks
-for today when that can change automatically tomorrow.</p>
-
-<p>Personally, I do not know if I would have dared to tip off NRK if
-I were sitting on something that could threaten the established
-powers in Norway or the world. There seem to be too many openings
-for outsiders with other priorities than NRK's journalistic focus.
-And the biggest threat to a whistleblower is metadata going astray,
-i.e. information showing that one has been in contact with a
-journalist. That can be enough to put a person in the authorities'
-spotlight, and few have enough operational security to withstand
-such floodlights on their private life.</p>
+ <title>Is the short movie «Empty Socks» from 1927 in the public domain or not?</title>
+ <link>http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html</guid>
+ <pubDate>Tue, 5 Dec 2017 12:30:00 +0100</pubDate>
+ <description><p>Three years ago, a presumed lost animation film,
+<a href="https://en.wikipedia.org/wiki/Empty_Socks">Empty Socks from
+1927</a>, was discovered in the Norwegian National Library. At the
+time it was discovered, it was generally assumed to be copyrighted by
+The Walt Disney Company, and I blogged about
+<a href="http://people.skolelinux.org/pere/blog/Opphavsretts_status_for__Empty_Socks__fra_1927_.html">my
+reasoning to conclude</a> that it would enter the Norwegian
+equivalent of the public domain in 2053, based on my understanding of
+Norwegian Copyright Law. But a few days ago, I came across
+<a href="http://www.toonzone.net/forums/threads/exposed-disneys-repurchase-of-oswald-the-rabbit-a-sham.4792291/">a
+blog post claiming the movie was already in the public domain</a>, at
+least in the USA. The reasoning is as follows: The film was
+released in November or December 1927 (sources disagree), and
+presumably had its copyright registered that year. At that time,
+right holders of movies registered with the copyright office
+received government protection for their work for 28 years. After
+28 years, the copyright had to be renewed if they wanted the
+government to protect it further. The blog post I found claims such
+a renewal did not happen for this movie, and thus it entered the
+public domain in 1956. Yet someone else claims the copyright was
+renewed and the movie is still copyright protected. Can anyone help
+me figure out which claim is correct?
+I have not been able to find Empty Socks in the Catalog of Copyright
+Entries, Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures,
+<a href="http://onlinebooks.library.upenn.edu/cce/1955r.html#film">available
+from the University of Pennsylvania</a>, neither on
+<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=83;num=45">page
+45 for the first half of 1955</a>, nor on
+<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=175;num=119">page
+119 for the second half of 1955</a>. It is of course possible that
+the renewal entry was left out of the printed catalog by mistake.
+Is there some way to rule out this possibility? Please help, and
+update the Wikipedia page with your findings.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Isenkram, Appstream and udev make life as a LEGO builder easier</title>
- <link>http://people.skolelinux.org/pere/blog/Isenkram__Appstream_and_udev_make_life_as_a_LEGO_builder_easier.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Isenkram__Appstream_and_udev_make_life_as_a_LEGO_builder_easier.html</guid>
- <pubDate>Fri, 7 Oct 2016 09:50:00 +0200</pubDate>
- <description><p><a href="http://packages.qa.debian.org/isenkram">The Isenkram
-system</a> provide a practical and easy way to figure out which
-packages support the hardware in a given machine. The command line
-tool <tt>isenkram-lookup</tt> and the tasksel options provide a
-convenient way to list and install packages relevant for the current
-hardware during system installation, both user space packages and
-firmware packages. The GUI background daemon on the other hand provide
-a pop-up proposing to install packages when a new dongle is inserted
-while using the computer. For example, if you plug in a smart card
-reader, the system will ask if you want to install <tt>pcscd</tt> if
-that package isn't already installed, and if you plug in a USB video
-camera the system will ask if you want to install <tt>cheese</tt> if
-cheese is currently missing. This already work just fine.</p>
-
-<p>But Isenkram depends on a database mapping hardware IDs to
-package names. When I started, no such database existed in Debian,
-so I made my own data set, included it with the isenkram package and
-made isenkram fetch the latest version of this database from git
-using http. This way the isenkram users would get updated package
-proposals as soon as I learned more about hardware related
-packages.</p>
-
-<p>The hardware is identified using modalias strings. The modalias
-design is from the Linux kernel, where most hardware descriptors are
-made available as strings that can be matched using filename style
-globbing. It handles USB, PCI, DMI and a lot of other hardware
-related identifiers.</p>
-
-<p>The downside to the Isenkram specific database is that there is no
-information about the relevant distribution / Debian version, making
-isenkram propose obsolete packages too. But along came AppStream, a
-cross distribution mechanism to store and collect metadata about
-software packages. When I heard about the proposal, I contacted the
-people involved and suggested adding a hardware matching rule using
-modalias strings to the specification, to be able to use AppStream
-for mapping hardware to packages. This idea was accepted and
-AppStream is now a great way for a package to announce the hardware
-it supports in a distribution neutral way. I wrote
-<a href="http://people.skolelinux.org/pere/blog/Using_appstream_with_isenkram_to_install_hardware_related_packages_in_Debian.html">a
-recipe on how to add such meta-information</a> in a blog post last
-December. If you have a hardware related package in Debian, please
-announce the relevant hardware IDs using AppStream.</p>
-
-<p>In Debian, almost all packages that can talk to a LEGO Mindstorms
-RCX or NXT unit announce this support using AppStream. The effect
-is that when you insert such a LEGO robot controller into your
-Debian machine, Isenkram will propose to install the packages needed
-to get it working. The intention is that this should allow the
-local user to start programming the robot controller right away,
-without having to guess what packages to use or which permissions to
-fix.</p>
-
-<p>But when I sat down with my son the other day to program our NXT
-unit using his Debian Stretch computer, I discovered something
-annoying. The local console user (i.e. my son) did not get access
-to the USB device for programming the unit. This used to work, but
-no longer does in Jessie and Stretch. After some investigation and
-asking around on #debian-devel, I discovered that this was because
-udev had changed the mechanism used to grant access to local
-devices. The ConsoleKit mechanism from
-<tt>/lib/udev/rules.d/70-udev-acl.rules</tt> no longer applied,
-because LDAP users were no longer added to the plugdev group during
-login. Michael Biebl told me that this method was obsolete and that
-the new method uses ACLs instead. This was good news, as the
-plugdev mechanism is a mess when using a remote user directory like
-LDAP. Using ACLs would make sure a user lost device access when she
-logged out, even if she left behind a background process which would
-have retained the plugdev membership with the ConsoleKit setup.
-Armed with this knowledge, I moved on to fix the access problem for
-the LEGO Mindstorms related packages.</p>
-
-<p>The new system uses a udev tag, 'uaccess'. It can either be
-applied directly to a device, or be applied in
-/lib/udev/rules.d/70-uaccess.rules for classes of devices. As the
-LEGO Mindstorms udev rules did not have a class, I decided to add
-the tag directly in the udev rules files included in the packages.
-Here is one example. For the nqc C compiler for the RCX, the
-<tt>/lib/udev/rules.d/60-nqc.rules</tt> file now looks like this:
+ <title>Metadata proposal for movies on the Internet Archive</title>
+ <link>http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html</guid>
+ <pubDate>Tue, 28 Nov 2017 12:00:00 +0100</pubDate>
+ <description><p>It would be easier to locate the movie you want to watch in
+<a href="https://www.archive.org/">the Internet Archive</a> if the
+metadata about each movie were more complete and accurate. In the
+archiving community, a well-known saying states that good metadata is
+a love letter to the future. The metadata in the Internet Archive
+could use a face lift for the future to love us back. Here is a
+proposal for a small improvement that would make the metadata more
+useful today. I've been unable to find any document describing the
+various standard fields available when uploading videos to the
+archive, so this proposal is based on my best guess and on searching
+through several of the existing movies.</p>
+
+<p>I have a few use cases in mind. First of all, I would like to be
+able to count the number of distinct movies in the Internet Archive,
+without duplicates. I would further like to identify the IMDB title
+ID of each movie in the Internet Archive, to be able to look up an
+IMDB title ID and know if I can fetch the video from there and share
+it with my friends.</p>
+
+<p>Second, I would like the Butter data provider for the Internet
+Archive
+(<a href="https://github.com/butterproviders/butter-provider-archive">available
+from github</a>) to list as many of the good movies as possible. The
+plugin currently does a search in the archive with the following
+parameters:</p>
<p><pre>
-SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0694", ATTR{idProduct}=="0001", \
- SYMLINK+="rcx-%k", TAG+="uaccess"
+collection:moviesandfilms
+AND NOT collection:movie_trailers
+AND -mediatype:collection
+AND format:"Archive BitTorrent"
+AND year
</pre></p>
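<p>As a rough sketch (not the actual Butter provider code), the filter
above can be sent to the Internet Archive advanced search endpoint at
archive.org/advancedsearch.php; the exact parameter names here are my
reading of that API and should be verified against its documentation:</p>

```python
# Sketch: build an Internet Archive advanced-search URL using the same
# filter the Butter archive provider is described as using above.
from urllib.parse import urlencode

ARCHIVE_SEARCH = "https://archive.org/advancedsearch.php"

def build_search_url(rows=50, page=1):
    # 'AND year' requires the year field to be present, which is why
    # movies missing that field never show up in the results.
    query = ('collection:moviesandfilms'
             ' AND NOT collection:movie_trailers'
             ' AND -mediatype:collection'
             ' AND format:"Archive BitTorrent"'
             ' AND year')
    params = {
        'q': query,
        'fl[]': 'identifier',
        'rows': rows,
        'page': page,
        'output': 'json',
    }
    return ARCHIVE_SEARCH + '?' + urlencode(params, doseq=True)

print(build_search_url())
```

<p>Fetching that URL returns a JSON document whose response.docs list
holds one identifier per matching item.</p>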
-<p>The key part is the 'TAG+="uaccess"' at the end. I suspect all
-packages using plugdev in their /lib/udev/rules.d/ files should be
-changed to use this tag (either directly or indirectly via
-<tt>70-uaccess.rules</tt>). Perhaps a lintian check should be created
-to detect this?</p>
-
-<p>I've been unable to find good documentation on the uaccess feature.
-It is unclear to me if the uaccess tag is an internal implementation
-detail like the udev-acl tag used by
-<tt>/lib/udev/rules.d/70-udev-acl.rules</tt>. If it is, I guess the
-indirect method is the preferred way. Michael
-<a href="https://github.com/systemd/systemd/issues/4288">asked for more
-documentation from the systemd project</a> and I hope it will make
-this clearer. For now I use the generic classes when they exist and
-is already handled by <tt>70-uaccess.rules</tt>, and add the tag
-directly if no such class exist.</p>
-
-<p>To learn more about the isenkram system, please check out
-<a href="http://people.skolelinux.org/pere/blog/tags/isenkram/">my
-blog posts tagged isenkram</a>.</p>
-
-<p>To help out making life for LEGO constructors in Debian easier,
-please join us on our IRC channel
-<a href="irc://irc.debian.org/%23debian-lego">#debian-lego</a> and join
-the <a href="https://alioth.debian.org/projects/debian-lego/">Debian
-LEGO team</a> in the Alioth project we created yesterday. A mailing
-list is not yet created, but we are working on it. :)</p>
+<p>Most of the cool movies that fail to show up in Butter do so
+because the 'year' field is missing. The 'year' field is populated
+from the year part of the 'date' field, and should hold the date or
+year the movie was released. Two such examples are
+<a href="https://archive.org/details/SidneyOlcottsBen-hur1905">Ben Hur
+from 1905</a> and
+<a href="https://archive.org/details/Caminandes2GranDillama">Caminandes
+2: Gran Dillama from 2013</a>, where the year metadata field is
+missing.</p>
+
+<p>So, my proposal is simply this: for every movie in the Internet
+Archive where an IMDB title ID exists, please fill in these metadata
+fields (note, they can be updated long after the video was uploaded,
+but as far as I can tell, only by the uploader):</p>
+
+<dl>
+
+<dt>mediatype</dt>
+<dd>Should be 'movie' for movies.</dd>
+
+<dt>collection</dt>
+<dd>Should contain 'moviesandfilms'.</dd>
+
+<dt>title</dt>
+<dd>The title of the movie, without the publication year.</dd>
+
+<dt>date</dt>
+<dd>The date or year the movie was released. This makes the movie
+show up in Butter, makes it possible to know the age of the movie,
+and is useful for figuring out copyright status.</dd>
+
+<dt>director</dt>
+<dd>The director of the movie. This makes it easier to know if the
+correct movie is found in movie databases.</dd>
+
+<dt>publisher</dt>
+<dd>The production company making the movie. Also useful for
+identifying the correct movie.</dd>
+
+<dt>links</dt>
+
+<dd>Add a link to the IMDB title page, for example like this: &lt;a
+href="http://www.imdb.com/title/tt0028496/"&gt;Movie in
+IMDB&lt;/a&gt;. This makes it easier to find duplicates and allows
+counting the number of unique movies in the Archive. Other external
+references, like to TMDB, could be added like this too.</dd>
+
+</dl>
+
+<p>I did consider proposing a custom field for the IMDB title ID (for
+example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but suspect
+it will be easier to simply place it in the links free text field.</p>
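<p>To illustrate the proposal, here is a sketch of the field values an
uploader could fill in, using the Caminandes example. The director,
publisher and IMDB title ID values are hypothetical placeholders for
illustration, and the internetarchive library call in the comment is my
assumption about how an uploader would apply the change; check the
library documentation before relying on it.</p>

```python
# Sketch: assemble the proposed metadata fields for one movie entry.
def proposed_metadata(title, date, director, publisher, imdb_title_id):
    # The links field is free text, so the IMDB reference is embedded
    # as a plain HTML link, as proposed above.
    imdb_link = ('<a href="http://www.imdb.com/title/%s/">Movie in IMDB</a>'
                 % imdb_title_id)
    return {
        'mediatype': 'movie',
        'collection': 'moviesandfilms',
        'title': title,
        'date': date,
        'director': director,
        'publisher': publisher,
        'links': imdb_link,
    }

metadata = proposed_metadata(
    title='Caminandes 2: Gran Dillama',
    date='2013',
    director='Pablo Vazquez',        # hypothetical value for illustration
    publisher='Blender Foundation',  # hypothetical value for illustration
    imdb_title_id='tt0000000',       # hypothetical ID for illustration
)

# An uploader could then apply it along these lines (assumption, see
# the internetarchive library documentation):
#   import internetarchive
#   internetarchive.modify_metadata('Caminandes2GranDillama',
#                                   metadata=metadata)
print(metadata['links'])
```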
+
+<p>I created
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+list of IMDB title IDs for several thousand movies in the Internet
+Archive</a>, but I also got a list of several thousand movies without
+such an IMDB title ID (and quite a few duplicates). It would be great
+if this data set could be integrated into the Internet Archive
+metadata to be available for everyone in the future, but with the
+policy of leaving metadata editing to the uploaders, it will take a
+while before this happens. If you have uploaded movies to the
+Internet Archive, you can help. Please consider following my proposal
+above for your movies, to ensure they are properly counted. :)</p>
+
+<p>The list is mostly generated using Wikidata, which, based on
+Wikipedia articles, makes it possible to link between IMDB and movies
+in the Internet Archive. But there are lots of movies without a
+Wikipedia article, and some movies where only a collection page exists
+(like for <a href="https://en.wikipedia.org/wiki/Caminandes">the
+Caminandes example above</a>, where there are three movies but only
+one Wikidata entry).</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-</description>
- </item>
-
- <item>
- <title>Aftenposten-redaktøren med lua i hånda</title>
- <link>http://people.skolelinux.org/pere/blog/Aftenposten_redakt_ren_med_lua_i_h_nda.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Aftenposten_redakt_ren_med_lua_i_h_nda.html</guid>
- <pubDate>Fri, 9 Sep 2016 11:30:00 +0200</pubDate>
- <description><p>En av dagens nyheter er at Aftenpostens redaktør Espen Egil Hansen
-bruker
-<a href="https://www.nrk.no/kultur/aftenposten-brukar-heile-forsida-pa-facebook-kritikk-1.13126918">forsiden
-av papiravisen på et åpent brev til Facebooks sjef Mark Zuckerberg om
-Facebooks fjerning av bilder, tekster og sider de ikke liker</a>. Det
-må være uvant for redaktøren i avisen Aftenposten å stå med lua i
-handa og håpe på å bli hørt. Spesielt siden Aftenposten har vært med
-på å gi Facebook makten de nå demonstrerer at de har. Ved å melde seg
-inn i Facebook-samfunnet har de sagt ja til bruksvilkårene og inngått
-en antagelig bindende avtale. Kanskje de skulle lest og vurdert
-vilkårene litt nærmere før de sa ja, i stedet for å klage over at
-reglende de har valgt å akseptere blir fulgt? Personlig synes jeg
-vilkårene er uakseptable og det ville ikke falle meg inn å gå inn på
-en avtale med slike vilkår. I tillegg til uakseptable vilkår er det
-mange andre grunner til å unngå Facebook. Du kan finne en solid
-gjennomgang av flere slike argumenter hos
-<a href="https://stallman.org/facebook.html">Richard Stallmans side om
-Facebook</a>.
-
-<p>Jeg håper flere norske redaktører på samme vis må stå med lua i
-hånden inntil de forstår at de selv er med på å føre samfunnet på
-ville veier ved å omfavne Facebook slik de gjør når de omtaler og
-løfter frem saker fra Facebook, og tar i bruk Facebook som
-distribusjonskanal for sine nyheter. De bidrar til
-overvåkningssamfunnet og raderer ut lesernes privatsfære når de lenker
-til Facebook på sine sider, og låser seg selv inne i en omgivelse der
-det er Facebook, og ikke redaktøren, som sitter med makta.</p>
-
-<p>Men det vil nok ta tid, i et Norge der de fleste nettredaktører
-<a href="http://people.skolelinux.org/pere/blog/Snurpenot_overv_kning_av_sensitiv_personinformasjon.html">deler
-sine leseres personopplysinger med utenlands etterretning</a>.</p>
-
-<p>For øvrig burde varsleren Edward Snowden få politisk asyl i
-Norge.</p>
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>E-tjenesten ber om innsyn i eposten til partiene på Stortinget</title>
- <link>http://people.skolelinux.org/pere/blog/E_tjenesten_ber_om_innsyn_i_eposten_til_partiene_p__Stortinget.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/E_tjenesten_ber_om_innsyn_i_eposten_til_partiene_p__Stortinget.html</guid>
- <pubDate>Tue, 6 Sep 2016 23:00:00 +0200</pubDate>
- <description><p>I helga kom det et hårreisende forslag fra Lysne II-utvalget satt
-ned av Forsvarsdepartementet. Lysne II-utvalget var bedt om å vurdere
-ønskelista til Forsvarets etterretningstjeneste (e-tjenesten), og har
-kommet med
-<a href="http://www.aftenposten.no/norge/Utvalg-sier-ja-til-at-E-tjenesten-far-overvake-innholdet-i-all-internett--og-telefontrafikk-som-krysser-riksgrensen-603232b.html">forslag
-om at e-tjenesten skal få lov til a avlytte all Internett-trafikk</a>
-som passerer Norges grenser. Få er klar over at dette innebærer at
-e-tjenesten får tilgang til epost sendt til de fleste politiske
-partiene på Stortinget. Regjeringspartiet Høyre (@hoyre.no),
-støttepartiene Venstre (@venstre.no) og Kristelig Folkeparti (@krf.no)
-samt Sosialistisk Ventreparti (@sv.no) og Miljøpartiet de grønne
-(@mdg.no) har nemlig alle valgt å ta imot eposten sin via utenlandske
-tjenester. Det betyr at hvis noen sender epost til noen med en slik
-adresse vil innholdet i eposten, om dette forslaget blir vedtatt, gjøres
-tilgjengelig for e-tjenesten. Venstre, Sosialistisk Ventreparti og
-Miljøpartiet De Grønne har valgt å motta sin epost hos Google,
-Kristelig Folkeparti har valgt å motta sin epost hos Microsoft, og
-Høyre har valgt å motta sin epost hos Comendo med mottak i Danmark og
-Irland. Kun Arbeiderpartiet og Fremskrittspartiet har valgt å motta
-eposten sin i Norge, hos henholdsvis Intility AS og Telecomputing
-AS.</p>
-
-<p>Konsekvensen er at epost inn og ut av de politiske organisasjonene,
-til og fra partimedlemmer og partiets tillitsvalgte vil gjøres
-tilgjengelig for e-tjenesten for analyse og sortering. Jeg mistenker
-at kunnskapen som slik blir tilgjengelig vil være nyttig hvis en
-ønsker å vite hvilke argumenter som treffer publikum når en ønsker å
-påvirke Stortingets representanter.</p
-
-<p>Ved hjelp av MX-oppslag i DNS for epost-domene, tilhørende
-whois-oppslag av IP-adressene og traceroute for å se hvorvidt
-trafikken går via utlandet kan enhver få bekreftet at epost sendt til
-de omtalte partiene vil gjøres tilgjengelig for forsvarets
-etterretningstjeneste hvis forslaget blir vedtatt. En kan også bruke
-den kjekke nett-tjenesten <a href="http://ipinfo.io/">ipinfo.io</a>
-for å få en ide om hvor i verden en IP-adresse hører til.</p>
-
-<p>På den positive siden vil forslaget gjøre at enda flere blir
-motivert til å ta grep for å bruke
-<a href="https://www.torproject.org/">Tor</a> og krypterte
-kommunikasjonsløsninger for å kommunisere med sine kjære, for å sikre
-at privatsfæren vernes. Selv bruker jeg blant annet
-<a href="https://www.freedomboxfoundation.org/">FreedomBox</a> og
-<a href="https://whispersystems.org/">Signal</a> til slikt. Ingen av
-dem er optimale, men de fungerer ganske bra allerede og øker kostnaden
-for dem som ønsker å invadere mitt privatliv.</p>
-
-<p>For øvrig burde varsleren Edward Snowden få politisk asyl i
-Norge.</p>
-
-<!--
-
-venstre.no
- venstre.no mail is handled by 10 aspmx.l.google.com.
- venstre.no mail is handled by 20 alt1.aspmx.l.google.com.
- venstre.no mail is handled by 20 alt2.aspmx.l.google.com.
- venstre.no mail is handled by 30 aspmx2.googlemail.com.
- venstre.no mail is handled by 30 aspmx3.googlemail.com.
-
-traceroute to aspmx.l.google.com (173.194.222.27), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.411 ms 0.438 ms 0.536 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.375 ms 0.452 ms 0.548 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 1.940 ms 1.950 ms 1.942 ms
- 4 se-tug.nordu.net (109.105.102.108) 6.910 ms 6.949 ms 7.283 ms
- 5 google-gw.nordu.net (109.105.98.6) 6.975 ms 6.967 ms 6.958 ms
- 6 209.85.250.192 (209.85.250.192) 7.337 ms 7.286 ms 10.890 ms
- 7 209.85.254.13 (209.85.254.13) 7.394 ms 209.85.254.31 (209.85.254.31) 7.586 ms 209.85.254.33 (209.85.254.33) 7.570 ms
- 8 209.85.251.255 (209.85.251.255) 15.686 ms 209.85.249.229 (209.85.249.229) 16.118 ms 209.85.251.255 (209.85.251.255) 16.073 ms
- 9 74.125.37.255 (74.125.37.255) 16.794 ms 216.239.40.248 (216.239.40.248) 16.113 ms 74.125.37.44 (74.125.37.44) 16.764 ms
-10 * * *
-
-mdg.no
- mdg.no mail is handled by 1 aspmx.l.google.com.
- mdg.no mail is handled by 5 alt2.aspmx.l.google.com.
- mdg.no mail is handled by 5 alt1.aspmx.l.google.com.
- mdg.no mail is handled by 10 aspmx2.googlemail.com.
- mdg.no mail is handled by 10 aspmx3.googlemail.com.
-sv.no
- sv.no mail is handled by 1 aspmx.l.google.com.
- sv.no mail is handled by 5 alt1.aspmx.l.google.com.
- sv.no mail is handled by 5 alt2.aspmx.l.google.com.
- sv.no mail is handled by 10 aspmx3.googlemail.com.
- sv.no mail is handled by 10 aspmx2.googlemail.com.
-hoyre.no
- hoyre.no mail is handled by 10 hoyre-no.mx1.comendosystems.com.
- hoyre.no mail is handled by 20 hoyre-no.mx2.comendosystems.net.
-
-traceroute to hoyre-no.mx1.comendosystems.com (89.104.206.4), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.450 ms 0.510 ms 0.591 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.383 ms 0.508 ms 0.596 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.311 ms 0.315 ms 0.300 ms
- 4 se-tug.nordu.net (109.105.102.108) 6.837 ms 6.842 ms 6.834 ms
- 5 dk-uni.nordu.net (109.105.97.10) 26.073 ms 26.085 ms 26.076 ms
- 6 dix.1000m.soeborg.ip.comendo.dk (192.38.7.22) 15.372 ms 15.046 ms 15.123 ms
- 7 89.104.192.65 (89.104.192.65) 15.875 ms 15.990 ms 16.239 ms
- 8 89.104.192.179 (89.104.192.179) 15.676 ms 15.674 ms 15.664 ms
- 9 03dm-com.mx1.staysecuregroup.com (89.104.206.4) 15.637 ms * *
-
-krf.no
- krf.no mail is handled by 10 krf-no.mail.protection.outlook.com.
-
-traceroute to krf-no.mail.protection.outlook.com (213.199.154.42), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.401 ms 0.438 ms 0.536 ms
- 2 uio-gw8.uio.no (129.240.24.229) 11.076 ms 11.120 ms 11.204 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.232 ms 0.234 ms 0.271 ms
- 4 se-tug.nordu.net (109.105.102.108) 6.811 ms 6.820 ms 6.815 ms
- 5 netnod-ix-ge-a-sth-4470.microsoft.com (195.245.240.181) 7.074 ms 7.013 ms 7.061 ms
- 6 ae1-0.sto-96cbe-1b.ntwk.msn.net (104.44.225.161) 7.227 ms 7.362 ms 7.293 ms
- 7 be-8-0.ibr01.ams.ntwk.msn.net (104.44.5.7) 41.993 ms 43.334 ms 41.939 ms
- 8 be-1-0.ibr02.ams.ntwk.msn.net (104.44.4.214) 43.153 ms 43.507 ms 43.404 ms
- 9 ae3-0.fra-96cbe-1b.ntwk.msn.net (104.44.5.17) 29.897 ms 29.831 ms 29.794 ms
-10 ae10-0.vie-96cbe-1a.ntwk.msn.net (198.206.164.1) 42.309 ms 42.130 ms 41.808 ms
-11 * ae8-0.vie-96cbe-1b.ntwk.msn.net (104.44.227.29) 41.425 ms *
-12 * * *
-
-arbeiderpartiet.no
- arbeiderpartiet.no mail is handled by 10 mail.intility.com.
- arbeiderpartiet.no mail is handled by 20 mail2.intility.com.
-
-traceroute to mail.intility.com (188.95.245.87), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.486 ms 0.508 ms 0.649 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.416 ms 0.508 ms 0.620 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.276 ms 0.278 ms 0.275 ms
- 4 te3-1-2.br1.fn3.as2116.net (193.156.90.3) 0.374 ms 0.371 ms 0.416 ms
- 5 he16-1-1.cr1.san110.as2116.net (195.0.244.234) 3.132 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48) 10.079 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234) 3.353 ms
- 6 te1-2-0.ar2.ulv89.as2116.net (195.0.243.194) 0.569 ms te5-0-0.ar2.ulv89.as2116.net (195.0.243.192) 0.661 ms 0.653 ms
- 7 cD2EC45C1.static.as2116.net (193.69.236.210) 0.654 ms 0.615 ms 0.590 ms
- 8 185.7.132.38 (185.7.132.38) 1.661 ms 1.808 ms 1.695 ms
- 9 185.7.132.100 (185.7.132.100) 1.793 ms 1.943 ms 1.546 ms
-10 * * *
-
-frp.no
- frp.no mail is handled by 10 mx03.telecomputing.no.
- frp.no mail is handled by 20 mx01.telecomputing.no.
-
-traceroute to mx03.telecomputing.no (95.128.105.102), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.6.1) 0.378 ms 0.402 ms 0.479 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.361 ms 0.458 ms 0.548 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.361 ms 0.352 ms 0.336 ms
- 4 xe-2-2-0-0.san-peer2.osl.no.ip.tdc.net (193.156.90.16) 0.375 ms 0.366 ms 0.346 ms
- 5 xe-2-0-2-0.ost-pe1.osl.no.ip.tdc.net (85.19.121.97) 0.780 ms xe-2-0-0-0.ost-pe1.osl.no.ip.tdc.net (85.19.121.101) 0.713 ms xe-2-0-2-0.ost-pe1.osl.no.ip.tdc.net (85.19.121.97) 0.759 ms
- 6 cpe.xe-0-2-0-100.ost-pe1.osl.no.customer.tdc.net (85.19.26.46) 0.837 ms 0.755 ms 0.759 ms
- 7 95.128.105.3 (95.128.105.3) 1.050 ms 1.288 ms 1.182 ms
- 8 mx03.telecomputing.no (95.128.105.102) 0.717 ms 0.703 ms 0.692 ms
-
--->
+ <title>Legal to share more than 3000 movies listed on IMDB?</title>
+ <link>http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html</guid>
+ <pubDate>Sat, 18 Nov 2017 21:20:00 +0100</pubDate>
+ <description><p>A month ago, I blogged about my work to
+<a href="http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html">automatically
+check the copyright status of IMDB entries</a>, and to try to count
+the number of movies listed in IMDB that are legal to distribute on
+the Internet. I have continued to look for good data sources, and
+identified a few more. The code used to extract information from
+various data sources is available in
+<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
+git repository</a>, currently available from github.</p>
+
+<p>So far I have identified 3186 unique IMDB title IDs. To gain a
+better understanding of the structure of the data set, I created a
+histogram of the year associated with each movie (typically release
+year). It is interesting to notice where the peaks and dips in the
+graph are located. I wonder why they are placed there. I suspect
+World War II caused the dip around 1940, but what caused the peak
+around 2010?</p>
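<p>A histogram like this can be produced along the following lines.
The 'year' key is my assumption about the structure of the JSON data
files, used only for illustration:</p>

```python
# Sketch: count movies per release year, skipping entries where the
# year is unknown, to produce a simple text histogram.
from collections import Counter

def year_histogram(entries):
    years = Counter()
    for entry in entries:
        year = entry.get('year')
        if year:
            years[int(year)] += 1
    return years

sample = [{'year': 1941}, {'year': 2010}, {'year': 2010}, {'year': None}]
hist = year_histogram(sample)
for year in sorted(hist):
    print(year, '#' * hist[year])
```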
+
+<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2017-11-18-verk-i-det-fri-filmer.png" /></p>
+
+<p>I've so far identified ten sources for IMDB title IDs for movies in
+the public domain or with a free license. These are the statistics
+reported when running 'make stats' in the git repository:</p>
+
+<pre>
+ 249 entries ( 6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json
+ 2301 entries ( 540 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
+ 830 entries ( 29 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
+ 2109 entries ( 377 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
+ 291 entries ( 122 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
+ 144 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-manual.json
+ 350 entries ( 1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json
+ 4 entries ( 0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json
+ 698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
+ 8 entries ( 8 unique) with and 196 without IMDB title ID in free-movies-vodo.json
+ 3186 unique IMDB title IDs in total
+</pre>
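<p>The total on the last line is the size of the union of IDs across
all the sources. A minimal sketch of that deduplication, assuming each
data file decodes to a list of entries with an 'imdb' key (my guess at
the format, for illustration only):</p>

```python
# Sketch: merge per-source entry lists into one set of unique IMDB
# title IDs, and count entries that lack an ID.
def unique_imdb_ids(sources):
    # sources: one list of entries per JSON data file
    ids = set()
    without_id = 0
    for entries in sources:
        for entry in entries:
            imdb = entry.get('imdb')
            if imdb:
                ids.add(imdb)
            else:
                without_id += 1
    return ids, without_id

sources = [
    [{'imdb': 'tt0028496'}, {'imdb': None}],
    [{'imdb': 'tt0028496'}, {'imdb': 'tt0015864'}],
]
ids, without_id = unique_imdb_ids(sources)
print(len(ids), 'unique IMDB title IDs,', without_id, 'entries without')
```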
+
+<p>The entries without IMDB title ID are candidates to increase the
+data set, but might equally well be duplicates of entries already
+listed with IMDB title ID in one of the other sources, or represent
+movies that lack an IMDB title ID. I've seen examples of all these
+situations when peeking at the entries without IMDB title ID. Based
+on these data sources, the lower bound for movies listed in IMDB that
+are legal to distribute on the Internet is between 3186 and 4713.</p>
+
+<p>It would greatly improve the accuracy of this measurement if the
+various sources added IMDB title IDs to their metadata. I have
+tried to reach the people behind the various sources to ask if they
+are interested in doing this, without any replies so far. Perhaps you
+can help me get in touch with the people behind VODO, Public Domain
+Torrents, Public Domain Movies and Public Domain Review to try to
+convince them to add more metadata to their movie entries?</p>
+
+<p>Another way you could help is by adding pages to Wikipedia about
+movies that are legal to distribute on the Internet. If such a page
+exists and includes a link to both IMDB and The Internet Archive, the
+script used to generate free-movies-archive-org-wikidata.json should
+pick up the mapping as soon as Wikidata is updated.</p>
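<p>The underlying Wikidata lookup can be sketched as a SPARQL query for
items carrying both an IMDb ID (property P345) and an Internet Archive
ID (property P724). The property numbers are my reading of Wikidata;
verify them before relying on this:</p>

```python
# Sketch: build a Wikidata SPARQL query-service URL that returns the
# IMDb-to-Internet-Archive mapping as JSON.
from urllib.parse import urlencode

SPARQL_ENDPOINT = 'https://query.wikidata.org/sparql'

QUERY = '''
SELECT ?item ?imdb ?archive WHERE {
  ?item wdt:P345 ?imdb .
  ?item wdt:P724 ?archive .
}
'''

def sparql_url(query):
    return SPARQL_ENDPOINT + '?' + urlencode({'query': query,
                                              'format': 'json'})

print(sparql_url(QUERY))
```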
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>First draft Norwegian Bokmål edition of The Debian Administrator's Handbook now public</title>
- <link>http://people.skolelinux.org/pere/blog/First_draft_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook_now_public.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/First_draft_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook_now_public.html</guid>
- <pubDate>Tue, 30 Aug 2016 10:10:00 +0200</pubDate>
- <description><p>In April we
-<a href="http://people.skolelinux.org/pere/blog/Lets_make_a_Norwegian_Bokm_l_edition_of_The_Debian_Administrator_s_Handbook.html">started
-to work</a> on a Norwegian Bokmål edition of the "open access" book on
-how to set up and administrate a Debian system. Today I am happy to
-report that the first draft is now publicly available. You can find
-it on <a href="https://debian-handbook.info/get/">get the Debian
-Administrator's Handbook page</a> (under Other languages). The first
-eight chapters have a first draft translation, and we are working on
-proofreading the content. If you want to help out, please start
-contributing using
-<a href="https://hosted.weblate.org/projects/debian-handbook/">the
-hosted weblate project page</a>, and get in touch using
-<a href="http://lists.alioth.debian.org/mailman/listinfo/debian-handbook-translators">the
-translators mailing list</a>. Please also check out
-<a href="https://debian-handbook.info/contribute/">the instructions for
-contributors</a>. A good way to contribute is to proofread the text
-and update weblate if you find errors.</p>
-
-<p>Our goal is still to make the Norwegian book available on paper as well as
-electronic form.</p>
+ <title>Some notes on fault tolerant storage systems</title>
+ <link>http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</guid>
+ <pubDate>Wed, 1 Nov 2017 15:35:00 +0100</pubDate>
+ <description><p>If you care about how fault tolerant your storage is, you might
+find these articles and papers interesting. They have shaped how I
+think when designing a storage system.</p>
+
+<ul>
+
+<li>USENIX :login; <a
+href="https://www.usenix.org/publications/login/summer2017/ganesan">Redundancy
+Does Not Imply Fault Tolerance. Analysis of Distributed Storage
+Reactions to Single Errors and Corruptions</a> by Aishwarya Ganesan,
+Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi
+H. Arpaci-Dusseau</li>
+
+<li>ZDNet
+<a href="http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/">Why
+RAID 5 stops working in 2009</a> by Robin Harris</li>
+
+<li>ZDNet
+<a href="http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/">Why
+RAID 6 stops working in 2019</a> by Robin Harris</li>
+
+<li>USENIX FAST'07
+<a href="http://research.google.com/archive/disk_failures.pdf">Failure
+Trends in a Large Disk Drive Population</a> by Eduardo Pinheiro,
+Wolf-Dietrich Weber and Luiz André Barroso</li>
+
+<li>USENIX ;login: <a
+href="https://www.usenix.org/system/files/login/articles/hughes12-04.pdf">Data
+Integrity. Finding Truth in a World of Guesses and Lies</a> by Doug
+Hughes</li>
+
+<li>USENIX FAST'08
+<a href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
+Analysis of Data Corruption in the Storage Stack</a> by
+L. N. Bairavasundaram, G. R. Goodson, B. Schroeder, A. C.
+Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>
+
+<li>USENIX FAST'07 <a
+href="https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder_html/">Disk
+failures in the real world: what does an MTTF of 1,000,000 hours mean
+to you?</a> by B. Schroeder and G. A. Gibson.</li>
+
+<li>USENIX ;login: <a
+href="https://www.usenix.org/events/fast08/tech/full_papers/jiang/jiang_html/">Are
+Disks the Dominant Contributor for Storage Failures? A Comprehensive
+Study of Storage Subsystem Failure Characteristics</a> by Weihang
+Jiang, Chongfeng Hu, Yuanyuan Zhou, and Arkady Kanevsky</li>
+
+<li>SIGMETRICS 2007
+<a href="http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf">An
+analysis of latent sector errors in disk drives</a> by
+L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler</li>
+
+</ul>
+
+<p>Several of these research papers are based on data collected from
+hundreds of thousands or millions of disks, and their findings are
+eye-opening. The short story is: do not implicitly trust RAID or
+redundant storage systems. Details matter. And unfortunately there
+are few options on Linux addressing all the identified issues. Both
+ZFS and Btrfs are doing a fairly good job, but have legal and
+practical issues of their own. I wonder how cluster file systems like
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>
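<p>A back-of-the-envelope calculation illustrates the argument in the
RAID 5 article above: with a typical consumer-disk unrecoverable read
error (URE) rate of one per 1e14 bits, reading every bit of a large
disk during a rebuild is more likely than not to hit an error. The
numbers are illustrative, not taken from the article:</p>

```python
# Sketch: probability of at least one unrecoverable read error while
# reading an entire disk once, as during a RAID 5 rebuild.
def rebuild_failure_probability(disk_bytes, ure_rate_per_bit=1e-14):
    bits = disk_bytes * 8
    # Each bit fails independently with probability ure_rate_per_bit.
    return 1 - (1 - ure_rate_per_bit) ** bits

p = rebuild_failure_probability(12e12)  # a 12 TB disk
print('P(URE during full read): %.2f' % p)
```

<p>With these assumptions the probability comes out above one half,
which is why a second parity disk, or checksumming file systems like
ZFS and Btrfs, matter at today's disk sizes.</p>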
+
+<p>Just remember, in the end, it does not matter how redundant or how
+fault tolerant your storage is if you do not continuously monitor its
+status to detect and replace failed disks.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Coz can help you find bottlenecks in multi-threaded software - nice free software</title>
- <link>http://people.skolelinux.org/pere/blog/Coz_can_help_you_find_bottlenecks_in_multi_threaded_software___nice_free_software.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Coz_can_help_you_find_bottlenecks_in_multi_threaded_software___nice_free_software.html</guid>
- <pubDate>Thu, 11 Aug 2016 12:00:00 +0200</pubDate>
- <description><p>This summer, I read a great article
-"<a href="https://www.usenix.org/publications/login/summer2016/curtsinger">coz:
-This Is the Profiler You're Looking For</a>" in USENIX ;login: about
-how to profile multi-threaded programs. It presented a system for
-profiling software by running experiences in the running program,
-testing how run time performance is affected by "speeding up" parts of
-the code to various degrees compared to a normal run. It does this by
-slowing down parallel threads while the "faster up" code is running
-and measure how this affect processing time. The processing time is
-measured using probes inserted into the code, either using progress
-counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It
-can also measure unmodified code by measuring complete the program
-runtime and running the program several times instead.</p>
-
-<p>The project and presentation was so inspiring that I would like to
-get the system into Debian. I
-<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=830708">created
-a WNPP request for it</a> and contacted upstream to try to make the
-system ready for Debian by sending patches. The build process need to
-be changed a bit to avoid running 'git clone' to get dependencies, and
-to include the JavaScript web page used to visualize the collected
-profiling information included in the source package.
-But I expect that should work out fairly soon.</p>
-
-<p>The way the system work is fairly simple. To run an coz experiment
-on a binary with debug symbols available, start the program like this:
-
-<p><blockquote><pre>
-coz run --- program-to-run
-</pre></blockquote></p>
-
-<p>This will create a text file profile.coz with the instrumentation
-information. To show what part of the code affect the performance
-most, use a web browser and either point it to
-<a href="http://plasma-umass.github.io/coz/">http://plasma-umass.github.io/coz/</a>
-or use the copy from git (in the gh-pages branch). Check out this web
-site to have a look at several example profiling runs and get an idea what the end result from the profile runs look like. To make the
-profiling more useful you include &lt;coz.h&gt; and insert the
-COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the
-code, rebuild and run the profiler. This allow coz to do more
-targeted experiments.</p>
-
-<p>A video published by ACM
-<a href="https://www.youtube.com/watch?v=jE0V-p1odPg">presenting the
-Coz profiler</a> is available from Youtube. There is also a paper
-from the 25th Symposium on Operating Systems Principles available
-titled
-<a href="https://www.usenix.org/conference/atc16/technical-sessions/presentation/curtsinger">Coz:
-finding code that counts with causal profiling</a>.</p>
-
-<p><a href="https://github.com/plasma-umass/coz">The source code</a>
-for Coz is available from github. It will only build with clang
-because it uses a
-<a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55606">C++
-feature missing in GCC</a>, but I've submitted
-<a href="https://github.com/plasma-umass/coz/pull/67">a patch to solve
-it</a> and hope it will be included in the upstream source soon.</p>
-
-<p>Please get in touch if you, like me, would like to see this piece
-of software in Debian. I would very much like some help with the
-packaging effort, as I lack the in depth knowledge on how to package
-C++ libraries.</p>
+ <title>Web services for writing academic LaTeX papers as a team</title>
+ <link>http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html</guid>
+ <pubDate>Tue, 31 Oct 2017 21:00:00 +0100</pubDate>
+ <description><p>I was surprised today to learn that a friend in academia did not
+know there are easily available web services for writing LaTeX
+documents as a team.  I thought it was common knowledge, but to make
+sure at least my readers are aware of them, I would like to mention
+these useful services for writing LaTeX documents.  Some of them even
+provide a WYSIWYG editor to ease writing even further.</p>
+
+<p>There are two commercial services available,
+<a href="https://sharelatex.com">ShareLaTeX</a> and
+<a href="https://overleaf.com">Overleaf</a>. They are very easy to
+use. Just start a new document, select which publisher to write for
+(i.e. which LaTeX style to use), and start writing.  Note that these
+two have announced their intention to join forces, so soon there will
+only be one joint service.  I've used both for different documents,
+and they work just fine.
+<a href="https://github.com/sharelatex/sharelatex">ShareLaTeX is free
+software</a>, while Overleaf is not.  According to <a
+href="https://www.overleaf.com/help/17-is-overleaf-open-source">an
+announcement from Overleaf</a>, they plan to keep the ShareLaTeX code
+base maintained as free software.</p>
+
+<p>But these two are not the only alternatives.
+<a href="https://app.fiduswriter.org/">Fidus Writer</a> is another free
+software solution with <a href="https://github.com/fiduswriter">the
+source available on GitHub</a>.  I have not used it myself.  Several
+others can be found on the nice
+<a href="https://alternativeto.net/software/sharelatex/">AlternativeTo
+web service</a>.</p>
+
+<p>If you like Google Docs or Etherpad, but would like to write
+documents in LaTeX, you should check out these services. You can even
+host your own, if you want to. :)</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>
<item>
- <title>Sales number for the Free Culture translation, first half of 2016</title>
- <link>http://people.skolelinux.org/pere/blog/Sales_number_for_the_Free_Culture_translation__first_half_of_2016.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Sales_number_for_the_Free_Culture_translation__first_half_of_2016.html</guid>
- <pubDate>Fri, 5 Aug 2016 22:45:00 +0200</pubDate>
- <description><p>As my regular readers probably remember, the last year I published
-a French and Norwegian translation of the classic
-<a href="http://www.free-culture.cc/">Free Culture book</a> by the
-founder of the Creative Commons movement, Lawrence Lessig. A bit less
-known is the fact that due to the way I created the translations,
-using docbook and po4a, I also recreated the English original. And
-because I already had created a new the PDF edition, I published it
-too. The revenue from the books are sent to the Creative Commons
-Corporation. In other words, I do not earn any money from this
-project, I just earn the warm fuzzy feeling that the text is available
-for a wider audience and more people can learn why the Creative
-Commons is needed.</p>
-
-<p>Today, just for fun, I had a look at the sales number over at
-Lulu.com, which take care of payment, printing and shipping. Much to
-my surprise, the English edition is selling better than both the
-French and Norwegian edition, despite the fact that it has been
-available in English since it was first published. In total, 24 paper
-books was sold for USD $19.99 between 2016-01-01 and 2016-07-31:</p>
-
-<table border="0">
-<tr><th>Title / language</th><th>Quantity</th></tr>
-<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/culture-libre/paperback/product-22645082.html">Culture Libre / French</a></td><td align="right">3</td></tr>
-<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Fri kultur / Norwegian</a></td><td align="right">7</td></tr>
-<tr><td><a href="http://www.lulu.com/shop/lawrence-lessig/free-culture/paperback/product-22440520.html">Free Culture / English</a></td><td align="right">14</td></tr>
-</table>
-
-<p>The books are available both from Lulu.com and from large book
-stores like Amazon and Barnes&Noble. Most revenue, around $10 per
-book, is sent to the Creative Commons project when the book is sold
-directly by Lulu.com. The other channels give less revenue. The
-summary from Lulu tell me 10 books was sold via the Amazon channel, 10
-via Ingram (what is this?) and 4 directly by Lulu. And Lulu.com tells
-me that the revenue sent so far this year is USD $101.42. No idea
-what kind of sales numbers to expect, so I do not know if that is a
-good amount of sales for a 10 year old book or not. But it make me
-happy that the buyers find the book, and I hope they enjoy reading it
-as much as I did.</p>
-
-<p>The ebook edition is available for free from
-<a href="https://github.com/petterreinholdtsen/free-culture-lessig">Github</a>.</p>
-
-<p>If you would like to translate and publish the book in your native
-language, I would be happy to help make it happen. Please get in
-touch.</p>
+ <title>Locating IMDB IDs of movies in the Internet Archive using Wikidata</title>
+ <link>http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html</guid>
+ <pubDate>Wed, 25 Oct 2017 12:20:00 +0200</pubDate>
+ <description><p>Recently, I needed to automatically check the copyright status of a
+set of <a href="http://www.imdb.com/">The Internet Movie Database
+(IMDB)</a> entries, to figure out which of the movies they refer to
+can be freely distributed on the Internet.  This proved to be harder
+than it sounds.  IMDB certainly lists movies without any copyright
+protection, movies where the copyright protection has expired, and
+movies licensed using a permissive license like one from Creative
+Commons.  But these are mixed with copyright-protected movies, and
+there seems to be no way to separate these classes of movies using
+the information in IMDB.</p>
+
+<p>First I tried to look up entries manually in IMDB,
+<a href="https://www.wikipedia.org/">Wikipedia</a> and
+<a href="https://www.archive.org/">The Internet Archive</a>, to get a
+feel for how to do this.  It is hard to know for sure using these
+sources, but it should be possible to be reasonably confident a movie
+is "out of copyright" with a few hours' work per movie.  As I needed
+to check almost 20,000 entries, this approach was not sustainable.  I
+simply cannot work around the clock for about six years to check this
+data set.</p>
+
+<p>I asked the people behind The Internet Archive if they could
+introduce a new metadata field in their metadata XML for IMDB ID, but
+was told that they leave it completely to the uploaders to update the
+metadata.  Some of the metadata entries had IMDB links in the
+description, but I found no way to download all the metadata files in
+bulk to locate those, so I put that approach aside.</p>
+
+<p>In the process I noticed several Wikipedia articles about movies
+had links to both IMDB and The Internet Archive, and it occurred to me
+that I could use the Wikidata RDF data set to locate entries with
+both, to at least get a lower bound on the number of movies on The
+Internet Archive with an IMDB ID.  This is useful based on the
+assumption that movies distributed by The Internet Archive can be
+legally distributed on the Internet.  With some help from the RDF
+community (thank you DanC), I was able to come up with this query to
+pass to <a href="https://query.wikidata.org/">the SPARQL interface on
+Wikidata</a>:</p>
+
+<p><pre>
+SELECT ?work ?imdb ?ia ?when ?label
+WHERE
+{
+ ?work wdt:P31/wdt:P279* wd:Q11424.
+ ?work wdt:P345 ?imdb.
+ ?work wdt:P724 ?ia.
+ OPTIONAL {
+ ?work wdt:P577 ?when.
+ ?work rdfs:label ?label.
+ FILTER(LANG(?label) = "en").
+ }
+}
+</pre></p>
+
+<p>If I understand the query right, for every film entry anywhere in
+Wikipedia, it will return the IMDB ID and The Internet Archive ID,
+and when the movie was released and its English title, if either or
+both of the latter two are available.  At the moment the result set
+contains 2338 entries.  Of course, it depends on volunteers including
+both correct IMDB and The Internet Archive IDs in the Wikipedia
+articles for the movie.  It should be noted that the result will
+include duplicates if the movie has entries in several languages.
+There are some bogus entries, either because The Internet Archive ID
+contains a typo or because the movie is not available from The
+Internet Archive.  I did not verify the IMDB IDs, as I am unsure how
+to do that automatically.</p>
+
+<p>I wrote a small Python script to extract the data set from Wikidata
+and check if the XML metadata for the movie is available from The
+Internet Archive, and after around 1.5 hours it produced a list of 2097
+free movies and their IMDB IDs.  In total, 171 entries in Wikidata lack
+the referred Internet Archive entry.  I assume the 70 "disappearing"
+entries (i.e. 2338 - 2097 - 171) are duplicate entries.</p>
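My script is not included in the post, but the core of such a check can be sketched roughly like this. The function names are my own; the Wikidata SPARQL endpoint and its JSON result format are documented, and the per-item *_meta.xml download URL is The Internet Archive's convention for item metadata:

```python
# Rough sketch of the kind of script described above (names and
# details are my own, not the original script).
import json
import urllib.error
import urllib.parse
import urllib.request

def run_wikidata_query(query):
    """Run a SPARQL query against Wikidata and return the result rows."""
    url = ("https://query.wikidata.org/sparql?format=json&query="
           + urllib.parse.quote(query))
    with urllib.request.urlopen(url) as response:
        return json.load(response)["results"]["bindings"]

def dedup_movies(bindings):
    """Collapse duplicate rows (e.g. one per language label) into a
    single (IMDB ID, Internet Archive ID) pair per Wikidata work."""
    movies = {}
    for row in bindings:
        movies[row["work"]["value"]] = (row["imdb"]["value"],
                                        row["ia"]["value"])
    return movies

def ia_metadata_url(identifier):
    """URL of the XML metadata file for an Internet Archive item."""
    return "https://archive.org/download/%s/%s_meta.xml" % (
        identifier, identifier)

def ia_entry_exists(identifier):
    """True if The Internet Archive serves metadata for the item."""
    try:
        with urllib.request.urlopen(ia_metadata_url(identifier)) as r:
            return r.status == 200
    except urllib.error.URLError:
        return False
```

Running dedup_movies() over the SPARQL result and ia_entry_exists() over each Internet Archive ID separates the reachable movies from the broken references.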
+
+<p>This is not too bad, given that The Internet Archive reports to
+contain <a href="https://archive.org/details/feature_films">5331
+feature films</a> at the moment, but it also means more than 3000
+movies are missing from Wikipedia or are missing the pair of
+references on Wikipedia.</p>
+
+<p>I was curious about the distribution by release year, and made a
+little graph to show how the number of free movies is spread over the
+years:</p>
+
+<p><img src="http://people.skolelinux.org/pere/blog/images/2017-10-25-verk-i-det-fri-filmer.png"></p>
+
+<p>I expect the relative distribution of the remaining 3000 movies to
+be similar.</p>
+
+<p>If you want to help, and want to ensure Wikipedia can be used to
+cross reference The Internet Archive and The Internet Movie Database,
+please make sure entries like this are listed under the "External
+links" heading on the Wikipedia article for the movie:</p>
+
+<p><pre>
+* {{Internet Archive film|id=FightingLady}}
+* {{IMDb title|id=0036823|title=The Fighting Lady}}
+</pre></p>
+
+<p>Please verify the links on the final page, to make sure you did not
+introduce a typo.</p>
+
+<p>Here is the complete list, if you want to correct the 171
+identified Wikipedia entries with broken links to The Internet
+Archive: <a href="http://www.wikidata.org/entity/Q1140317">Q1140317</a>,
+<a href="http://www.wikidata.org/entity/Q458656">Q458656</a>,
+<a href="http://www.wikidata.org/entity/Q458656">Q458656</a>,
+<a href="http://www.wikidata.org/entity/Q470560">Q470560</a>,
+<a href="http://www.wikidata.org/entity/Q743340">Q743340</a>,
+<a href="http://www.wikidata.org/entity/Q822580">Q822580</a>,
+<a href="http://www.wikidata.org/entity/Q480696">Q480696</a>,
+<a href="http://www.wikidata.org/entity/Q128761">Q128761</a>,
+<a href="http://www.wikidata.org/entity/Q1307059">Q1307059</a>,
+<a href="http://www.wikidata.org/entity/Q1335091">Q1335091</a>,
+<a href="http://www.wikidata.org/entity/Q1537166">Q1537166</a>,
+<a href="http://www.wikidata.org/entity/Q1438334">Q1438334</a>,
+<a href="http://www.wikidata.org/entity/Q1479751">Q1479751</a>,
+<a href="http://www.wikidata.org/entity/Q1497200">Q1497200</a>,
+<a href="http://www.wikidata.org/entity/Q1498122">Q1498122</a>,
+<a href="http://www.wikidata.org/entity/Q865973">Q865973</a>,
+<a href="http://www.wikidata.org/entity/Q834269">Q834269</a>,
+<a href="http://www.wikidata.org/entity/Q841781">Q841781</a>,
+<a href="http://www.wikidata.org/entity/Q841781">Q841781</a>,
+<a href="http://www.wikidata.org/entity/Q1548193">Q1548193</a>,
+<a href="http://www.wikidata.org/entity/Q499031">Q499031</a>,
+<a href="http://www.wikidata.org/entity/Q1564769">Q1564769</a>,
+<a href="http://www.wikidata.org/entity/Q1585239">Q1585239</a>,
+<a href="http://www.wikidata.org/entity/Q1585569">Q1585569</a>,
+<a href="http://www.wikidata.org/entity/Q1624236">Q1624236</a>,
+<a href="http://www.wikidata.org/entity/Q4796595">Q4796595</a>,
+<a href="http://www.wikidata.org/entity/Q4853469">Q4853469</a>,
+<a href="http://www.wikidata.org/entity/Q4873046">Q4873046</a>,
+<a href="http://www.wikidata.org/entity/Q915016">Q915016</a>,
+<a href="http://www.wikidata.org/entity/Q4660396">Q4660396</a>,
+<a href="http://www.wikidata.org/entity/Q4677708">Q4677708</a>,
+<a href="http://www.wikidata.org/entity/Q4738449">Q4738449</a>,
+<a href="http://www.wikidata.org/entity/Q4756096">Q4756096</a>,
+<a href="http://www.wikidata.org/entity/Q4766785">Q4766785</a>,
+<a href="http://www.wikidata.org/entity/Q880357">Q880357</a>,
+<a href="http://www.wikidata.org/entity/Q882066">Q882066</a>,
+<a href="http://www.wikidata.org/entity/Q882066">Q882066</a>,
+<a href="http://www.wikidata.org/entity/Q204191">Q204191</a>,
+<a href="http://www.wikidata.org/entity/Q204191">Q204191</a>,
+<a href="http://www.wikidata.org/entity/Q1194170">Q1194170</a>,
+<a href="http://www.wikidata.org/entity/Q940014">Q940014</a>,
+<a href="http://www.wikidata.org/entity/Q946863">Q946863</a>,
+<a href="http://www.wikidata.org/entity/Q172837">Q172837</a>,
+<a href="http://www.wikidata.org/entity/Q573077">Q573077</a>,
+<a href="http://www.wikidata.org/entity/Q1219005">Q1219005</a>,
+<a href="http://www.wikidata.org/entity/Q1219599">Q1219599</a>,
+<a href="http://www.wikidata.org/entity/Q1643798">Q1643798</a>,
+<a href="http://www.wikidata.org/entity/Q1656352">Q1656352</a>,
+<a href="http://www.wikidata.org/entity/Q1659549">Q1659549</a>,
+<a href="http://www.wikidata.org/entity/Q1660007">Q1660007</a>,
+<a href="http://www.wikidata.org/entity/Q1698154">Q1698154</a>,
+<a href="http://www.wikidata.org/entity/Q1737980">Q1737980</a>,
+<a href="http://www.wikidata.org/entity/Q1877284">Q1877284</a>,
+<a href="http://www.wikidata.org/entity/Q1199354">Q1199354</a>,
+<a href="http://www.wikidata.org/entity/Q1199354">Q1199354</a>,
+<a href="http://www.wikidata.org/entity/Q1199451">Q1199451</a>,
+<a href="http://www.wikidata.org/entity/Q1211871">Q1211871</a>,
+<a href="http://www.wikidata.org/entity/Q1212179">Q1212179</a>,
+<a href="http://www.wikidata.org/entity/Q1238382">Q1238382</a>,
+<a href="http://www.wikidata.org/entity/Q4906454">Q4906454</a>,
+<a href="http://www.wikidata.org/entity/Q320219">Q320219</a>,
+<a href="http://www.wikidata.org/entity/Q1148649">Q1148649</a>,
+<a href="http://www.wikidata.org/entity/Q645094">Q645094</a>,
+<a href="http://www.wikidata.org/entity/Q5050350">Q5050350</a>,
+<a href="http://www.wikidata.org/entity/Q5166548">Q5166548</a>,
+<a href="http://www.wikidata.org/entity/Q2677926">Q2677926</a>,
+<a href="http://www.wikidata.org/entity/Q2698139">Q2698139</a>,
+<a href="http://www.wikidata.org/entity/Q2707305">Q2707305</a>,
+<a href="http://www.wikidata.org/entity/Q2740725">Q2740725</a>,
+<a href="http://www.wikidata.org/entity/Q2024780">Q2024780</a>,
+<a href="http://www.wikidata.org/entity/Q2117418">Q2117418</a>,
+<a href="http://www.wikidata.org/entity/Q2138984">Q2138984</a>,
+<a href="http://www.wikidata.org/entity/Q1127992">Q1127992</a>,
+<a href="http://www.wikidata.org/entity/Q1058087">Q1058087</a>,
+<a href="http://www.wikidata.org/entity/Q1070484">Q1070484</a>,
+<a href="http://www.wikidata.org/entity/Q1080080">Q1080080</a>,
+<a href="http://www.wikidata.org/entity/Q1090813">Q1090813</a>,
+<a href="http://www.wikidata.org/entity/Q1251918">Q1251918</a>,
+<a href="http://www.wikidata.org/entity/Q1254110">Q1254110</a>,
+<a href="http://www.wikidata.org/entity/Q1257070">Q1257070</a>,
+<a href="http://www.wikidata.org/entity/Q1257079">Q1257079</a>,
+<a href="http://www.wikidata.org/entity/Q1197410">Q1197410</a>,
+<a href="http://www.wikidata.org/entity/Q1198423">Q1198423</a>,
+<a href="http://www.wikidata.org/entity/Q706951">Q706951</a>,
+<a href="http://www.wikidata.org/entity/Q723239">Q723239</a>,
+<a href="http://www.wikidata.org/entity/Q2079261">Q2079261</a>,
+<a href="http://www.wikidata.org/entity/Q1171364">Q1171364</a>,
+<a href="http://www.wikidata.org/entity/Q617858">Q617858</a>,
+<a href="http://www.wikidata.org/entity/Q5166611">Q5166611</a>,
+<a href="http://www.wikidata.org/entity/Q5166611">Q5166611</a>,
+<a href="http://www.wikidata.org/entity/Q324513">Q324513</a>,
+<a href="http://www.wikidata.org/entity/Q374172">Q374172</a>,
+<a href="http://www.wikidata.org/entity/Q7533269">Q7533269</a>,
+<a href="http://www.wikidata.org/entity/Q970386">Q970386</a>,
+<a href="http://www.wikidata.org/entity/Q976849">Q976849</a>,
+<a href="http://www.wikidata.org/entity/Q7458614">Q7458614</a>,
+<a href="http://www.wikidata.org/entity/Q5347416">Q5347416</a>,
+<a href="http://www.wikidata.org/entity/Q5460005">Q5460005</a>,
+<a href="http://www.wikidata.org/entity/Q5463392">Q5463392</a>,
+<a href="http://www.wikidata.org/entity/Q3038555">Q3038555</a>,
+<a href="http://www.wikidata.org/entity/Q5288458">Q5288458</a>,
+<a href="http://www.wikidata.org/entity/Q2346516">Q2346516</a>,
+<a href="http://www.wikidata.org/entity/Q5183645">Q5183645</a>,
+<a href="http://www.wikidata.org/entity/Q5185497">Q5185497</a>,
+<a href="http://www.wikidata.org/entity/Q5216127">Q5216127</a>,
+<a href="http://www.wikidata.org/entity/Q5223127">Q5223127</a>,
+<a href="http://www.wikidata.org/entity/Q5261159">Q5261159</a>,
+<a href="http://www.wikidata.org/entity/Q1300759">Q1300759</a>,
+<a href="http://www.wikidata.org/entity/Q5521241">Q5521241</a>,
+<a href="http://www.wikidata.org/entity/Q7733434">Q7733434</a>,
+<a href="http://www.wikidata.org/entity/Q7736264">Q7736264</a>,
+<a href="http://www.wikidata.org/entity/Q7737032">Q7737032</a>,
+<a href="http://www.wikidata.org/entity/Q7882671">Q7882671</a>,
+<a href="http://www.wikidata.org/entity/Q7719427">Q7719427</a>,
+<a href="http://www.wikidata.org/entity/Q7719444">Q7719444</a>,
+<a href="http://www.wikidata.org/entity/Q7722575">Q7722575</a>,
+<a href="http://www.wikidata.org/entity/Q2629763">Q2629763</a>,
+<a href="http://www.wikidata.org/entity/Q2640346">Q2640346</a>,
+<a href="http://www.wikidata.org/entity/Q2649671">Q2649671</a>,
+<a href="http://www.wikidata.org/entity/Q7703851">Q7703851</a>,
+<a href="http://www.wikidata.org/entity/Q7747041">Q7747041</a>,
+<a href="http://www.wikidata.org/entity/Q6544949">Q6544949</a>,
+<a href="http://www.wikidata.org/entity/Q6672759">Q6672759</a>,
+<a href="http://www.wikidata.org/entity/Q2445896">Q2445896</a>,
+<a href="http://www.wikidata.org/entity/Q12124891">Q12124891</a>,
+<a href="http://www.wikidata.org/entity/Q3127044">Q3127044</a>,
+<a href="http://www.wikidata.org/entity/Q2511262">Q2511262</a>,
+<a href="http://www.wikidata.org/entity/Q2517672">Q2517672</a>,
+<a href="http://www.wikidata.org/entity/Q2543165">Q2543165</a>,
+<a href="http://www.wikidata.org/entity/Q426628">Q426628</a>,
+<a href="http://www.wikidata.org/entity/Q426628">Q426628</a>,
+<a href="http://www.wikidata.org/entity/Q12126890">Q12126890</a>,
+<a href="http://www.wikidata.org/entity/Q13359969">Q13359969</a>,
+<a href="http://www.wikidata.org/entity/Q13359969">Q13359969</a>,
+<a href="http://www.wikidata.org/entity/Q2294295">Q2294295</a>,
+<a href="http://www.wikidata.org/entity/Q2294295">Q2294295</a>,
+<a href="http://www.wikidata.org/entity/Q2559509">Q2559509</a>,
+<a href="http://www.wikidata.org/entity/Q2559912">Q2559912</a>,
+<a href="http://www.wikidata.org/entity/Q7760469">Q7760469</a>,
+<a href="http://www.wikidata.org/entity/Q6703974">Q6703974</a>,
+<a href="http://www.wikidata.org/entity/Q4744">Q4744</a>,
+<a href="http://www.wikidata.org/entity/Q7766962">Q7766962</a>,
+<a href="http://www.wikidata.org/entity/Q7768516">Q7768516</a>,
+<a href="http://www.wikidata.org/entity/Q7769205">Q7769205</a>,
+<a href="http://www.wikidata.org/entity/Q7769988">Q7769988</a>,
+<a href="http://www.wikidata.org/entity/Q2946945">Q2946945</a>,
+<a href="http://www.wikidata.org/entity/Q3212086">Q3212086</a>,
+<a href="http://www.wikidata.org/entity/Q3212086">Q3212086</a>,
+<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
+<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
+<a href="http://www.wikidata.org/entity/Q18218448">Q18218448</a>,
+<a href="http://www.wikidata.org/entity/Q6909175">Q6909175</a>,
+<a href="http://www.wikidata.org/entity/Q7405709">Q7405709</a>,
+<a href="http://www.wikidata.org/entity/Q7416149">Q7416149</a>,
+<a href="http://www.wikidata.org/entity/Q7239952">Q7239952</a>,
+<a href="http://www.wikidata.org/entity/Q7317332">Q7317332</a>,
+<a href="http://www.wikidata.org/entity/Q7783674">Q7783674</a>,
+<a href="http://www.wikidata.org/entity/Q7783704">Q7783704</a>,
+<a href="http://www.wikidata.org/entity/Q7857590">Q7857590</a>,
+<a href="http://www.wikidata.org/entity/Q3372526">Q3372526</a>,
+<a href="http://www.wikidata.org/entity/Q3372642">Q3372642</a>,
+<a href="http://www.wikidata.org/entity/Q3372816">Q3372816</a>,
+<a href="http://www.wikidata.org/entity/Q3372909">Q3372909</a>,
+<a href="http://www.wikidata.org/entity/Q7959649">Q7959649</a>,
+<a href="http://www.wikidata.org/entity/Q7977485">Q7977485</a>,
+<a href="http://www.wikidata.org/entity/Q7992684">Q7992684</a>,
+<a href="http://www.wikidata.org/entity/Q3817966">Q3817966</a>,
+<a href="http://www.wikidata.org/entity/Q3821852">Q3821852</a>,
+<a href="http://www.wikidata.org/entity/Q3420907">Q3420907</a>,
+<a href="http://www.wikidata.org/entity/Q3429733">Q3429733</a>,
+<a href="http://www.wikidata.org/entity/Q774474">Q774474</a></p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
</description>
</item>