X-Git-Url: http://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/d00856cc47cd2a9e90b298b284988fe623f24269..f1963359ef2359f4f23e961ff32071d5cb5f93dc:/blog/index.rss diff --git a/blog/index.rss b/blog/index.rss index 94283c83ef..58708d527b 100644 --- a/blog/index.rss +++ b/blog/index.rss @@ -6,6 +6,953 @@ http://people.skolelinux.org/pere/blog/ + + Legal to share more than 11,000 movies listed on IMDB? + http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html + http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html + Sun, 7 Jan 2018 23:30:00 +0100 + <p>I've continued to track down lists of movies that are legal to
distribute on the Internet, and identified more than 11,000 title IDs
in The Internet Movie Database so far.  Most of them (57%) are feature
films from USA published before 1923.  I've also tracked down more
than 24,000 movies I have not yet been able to map to IMDB title ID,
so the real number could be a lot higher.  According to the front web
page for <a href="https://retrofilmvault.com/">Retro Film Vault</a>,
there are 44,000 public domain films, so I guess there are still some
left to identify.</p> +
+<p>The complete data set is available from
<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
public git repository</a>, including the scripts used to create it.
Most of the data is collected using web scraping, for example from the
"product catalog" of companies selling copies of public domain movies,
but any source I find believable is used. 
I've so far had to throw +out three sources because I did not trust the public domain status of +the movies listed.</p> + +<p>Anyway, this is the summary of the 28 collected data sources so +far:</p> + +<p><pre> + 2352 entries ( 66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json + 2302 entries ( 120 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json + 195 entries ( 63 unique) with and 200 without IMDB title ID in free-movies-cinemovies.json + 89 entries ( 52 unique) with and 38 without IMDB title ID in free-movies-creative-commons.json + 344 entries ( 28 unique) with and 655 without IMDB title ID in free-movies-fesfilm.json + 668 entries ( 209 unique) with and 1064 without IMDB title ID in free-movies-filmchest-com.json + 830 entries ( 21 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json + 19 entries ( 19 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-gb.json + 6822 entries ( 6669 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-us.json + 137 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-imdb-externlist.json + 1205 entries ( 57 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json + 84 entries ( 20 unique) with and 167 without IMDB title ID in free-movies-infodigi-pd.json + 158 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json + 113 entries ( 4 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json + 182 entries ( 100 unique) with and 0 without IMDB title ID in free-movies-letterboxd-silent.json + 229 entries ( 87 unique) with and 1 without IMDB title ID in free-movies-manual.json + 44 entries ( 2 unique) with and 64 without IMDB title ID in free-movies-openflix.json + 291 entries ( 33 unique) with and 474 without IMDB title ID in free-movies-profilms-pd.json + 211 entries ( 7 unique) with and 0 without IMDB 
title ID in free-movies-publicdomainmovies-info.json +  1232 entries (   57 unique) with and  1875 without IMDB title ID in free-movies-publicdomainmovies-net.json +    46 entries (   13 unique) with and    81 without IMDB title ID in free-movies-publicdomainreview.json +   698 entries (   64 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json +  1758 entries (  882 unique) with and  3786 without IMDB title ID in free-movies-retrofilmvault.json +    16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json +    63 entries (   16 unique) with and   141 without IMDB title ID in free-movies-vodo.json +11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID +</pre></p> + +<p> I keep finding more data sources.  I found the cinemovies source
just a few days ago, and as you can see from the summary, it extended
my list with 63 movies.  Check out the mklist-* scripts in the git
repository if you are curious how the lists are created.  Many of the
titles are extracted using searches on IMDB, where I look for the
title and year, and accept search results with only one movie listed
if the year matches.  This allows me to automatically use many lists of
movies without IMDB title ID references, at the cost of increasing the
risk of wrongly identifying an IMDB title ID as public domain.  So far my
random manual checks have indicated that the method is solid, but I
really wish all lists of public domain movies would include a unique
movie identifier like the IMDB title ID. 
It would make the job of
counting movies in the public domain a lot easier.</p> + + + + + Kommentarer til «Evaluation of (il)legality» for Popcorn Time + http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html + http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html + Wed, 20 Dec 2017 11:40:00 +0100 + <p>I går var jeg i Follo tingrett som sakkyndig vitne og presenterte
  mine undersøkelser rundt
  <a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">telling
  av filmverk i det fri</a>, relatert til
  <a href="https://www.nuug.no/">foreningen NUUG</a>s involvering i
  <a href="https://www.nuug.no/news/tags/dns-domenebeslag/">saken om
  Økokrims beslag og senere inndragning av DNS-domenet
  popcorn-time.no</a>.  Jeg snakket om flere ting, men mest om min
  vurdering av hvordan filmbransjen har målt hvor ulovlig Popcorn Time
  er.  Filmbransjens måling er så vidt jeg kan se videreformidlet uten
  endringer av norsk politi, og domstolene har lagt målingen til grunn
  når de har vurdert Popcorn Time både i Norge og i utlandet (tallet
  99% er referert også i utenlandske domsavgjørelser).</p> +
+<p>I forkant av mitt vitnemål skrev jeg et notat, mest til meg selv,
  med de punktene jeg ønsket å få frem.  Her er en kopi av notatet jeg
  skrev og ga til aktoratet.  Merkelig nok ville ikke dommerne ha
  notatet, så hvis jeg forsto rettsprosessen riktig ble kun
  histogram-grafen lagt inn i dokumentasjonen i saken.  Dommerne var
  visst bare interessert i å forholde seg til det jeg sa i retten,
  ikke det jeg hadde skrevet i forkant.  Uansett så antar jeg at flere
  enn meg kan ha glede av teksten, og publiserer den derfor her. 
+ Legger ved avskrift av dokument 09,13, som er det sentrale
  dokumentet jeg kommenterer.</p> +
+<p><strong>Kommentarer til «Evaluation of (il)legality» for Popcorn
  Time</strong></p> +
+<p><strong>Oppsummering</strong></p> +
+<p>Målemetoden som Økokrim har lagt til grunn når de påstår at 99% av
  filmene tilgjengelig fra Popcorn Time deles ulovlig har
  svakheter.</p> +
+<p>De eller den som har vurdert hvorvidt filmer kan lovlig deles har
  ikke lyktes med å identifisere filmer som kan deles lovlig og har
  tilsynelatende antatt at kun veldig gamle filmer kan deles lovlig.
  Økokrim legger til grunn at det bare finnes én film, Charlie
  Chaplin-filmen «The Circus» fra 1928, som kan deles fritt blant de
  som ble observert tilgjengelig via ulike Popcorn Time-varianter.
  Jeg finner tre til blant de observerte filmene: «The Brain That
  Wouldn't Die» fra 1962, «God’s Little Acre» fra 1958 og «She Wore a
  Yellow Ribbon» fra 1949.  Det er godt mulig det finnes flere.  Det
  finnes dermed minst fire ganger så mange filmer som lovlig kan deles
  på Internett i datasettet Økokrim har lagt til grunn når det påstås
  at mindre enn 1 % kan deles lovlig.</p> +
+<p>Dernest, utvalget som gjøres ved søk på tilfeldige ord hentet fra
  ordlisten til Dale-Chall, avviker fra årsfordelingen til de brukte
  filmkatalogene som helhet, hvilket påvirker fordelingen mellom
  filmer som kan lovlig deles og filmer som ikke kan lovlig deles. 
I
  tillegg gir valg av øvre del (de fem første) av søkeresultatene et
  avvik fra riktig årsfordeling, hvilket påvirker fordelingen av verk
  i det fri i søkeresultatet.</p> +
+<p>Det som måles er ikke (u)lovligheten knyttet til bruken av Popcorn
  Time, men (u)lovligheten til innholdet i bittorrent-filmkataloger
  som vedlikeholdes uavhengig av Popcorn Time.</p> +
+<p>Omtalte dokumenter: 09,12, <a href="#dok-09-13">09,13</a>, 09,14,
09,18, 09,19, 09,20.</p> +
+<p><strong>Utfyllende kommentarer</strong></p> +
+<p>Økokrim har forklart domstolene at minst 99% av alt som er
  tilgjengelig fra ulike Popcorn Time-varianter deles ulovlig på
  Internett.  Jeg ble nysgjerrig på hvordan de er kommet frem til dette
  tallet, og dette notatet er en samling kommentarer rundt målingen
  Økokrim henviser til.  Litt av bakgrunnen for at jeg valgte å se på
  saken er at jeg er interessert i å identifisere og telle hvor mange
  kunstneriske verk som er falt i det fri eller av andre grunner kan
  lovlig deles på Internett, og dermed var interessert i hvordan en
  hadde funnet den ene prosenten som kanskje deles lovlig.</p> +
+<p>Andelen på 99% kommer fra et ukreditert og udatert notat som tar
  mål av seg å dokumentere en metode for å måle hvor (u)lovlig ulike
  Popcorn Time-varianter er.</p> +
+<p>Raskt oppsummert, så forteller metodedokumentet at på grunn av at
  det ikke er mulig å få tak i komplett liste over alle filmtitler
  tilgjengelig via Popcorn Time, så lages noe som skal være et
  representativt utvalg ved å velge 50 søkeord lengre enn tre tegn fra
  ordlisten kjent som Dale-Chall.  For hvert søkeord gjøres et søk og
  de første fem filmene i søkeresultatet samles inn inntil 100 unike
  filmtitler er funnet.  Hvis 50 søkeord ikke var tilstrekkelig for å
  nå 100 unike filmtitler ble flere filmer fra hvert søkeresultat lagt
  til. 
Hvis dette heller ikke var tilstrekkelig, så ble det hentet ut
  og søkt på flere tilfeldig valgte søkeord inntil 100 unike
  filmtitler var identifisert.</p> +
+<p>Deretter ble det for hver av filmtitlene «vurdert hvorvidt det var
  rimelig å forvente om at verket var vernet av copyright, ved å se på
  om filmen var tilgjengelig i IMDB, samt se på regissør,
  utgivelsesår, når det var utgitt for bestemte markedsområder samt
  hvilke produksjons- og distribusjonsselskap som var registrert» (min
  oversettelse).</p> +
+<p>Metoden er gjengitt både i de ukrediterte dokumentene 09,13 og
  09,19, samt beskrevet fra side 47 i dokument 09,20, lysark datert
  2017-02-01.  Sistnevnte er kreditert Geerart Bourlon fra Motion
  Picture Association EMEA.  Metoden virker å ha flere svakheter som
  gir resultatene en slagside.  Den starter med å slå fast at det ikke
  er mulig å hente ut en komplett liste over alle filmtitler som er
  tilgjengelig, og at dette er bakgrunnen for metodevalget.  Denne
  forutsetningen er ikke i tråd med det som står i dokument 09,12, som
  heller ikke har oppgitt forfatter og dato.  Dokument 09,12 forteller
  hvordan hele kataloginnholdet ble lastet ned og talt opp.  Dokument
  09,12 er muligens samme rapport som ble referert til i dom fra Oslo
  Tingrett 2017-11-03
  (<a href="https://www.domstol.no/no/Enkelt-domstol/Oslo--tingrett/Nyheter/ma-sperre-for-popcorn-time/">sak
  17-093347TVI-OTIR/05</a>) som rapport av 1. juni 2017 av Alexander
  Kind Petersen, men jeg har ikke sammenlignet dokumentene ord for ord
  for å kontrollere dette.</p> +
+<p>IMDB er en forkortelse for The Internet Movie Database, en
  anerkjent kommersiell nettjeneste som brukes aktivt av både
  filmbransjen og andre til å holde rede på hvilke spillefilmer (og
  endel andre filmer) som finnes eller er under produksjon, og
  informasjon om disse filmene.  Datakvaliteten er høy, med få feil og
  få filmer som mangler. 
IMDB viser ikke informasjon om
  opphavsrettslig status for filmene på infosiden for hver film.  Som
  del av IMDB-tjenesten finnes det lister med filmer laget av
  frivillige som lister opp det som antas å være verk i det fri.</p> +
+<p>Det finnes flere kilder som kan brukes til å finne filmer som er
  allemannseie (public domain) eller har bruksvilkår som gjør det
  lovlig for alle å dele dem på Internett.  Jeg har de siste ukene
  forsøkt å samle og krysskoble disse listene for å forsøke å telle
  antall filmer i det fri.  Ved å ta utgangspunkt i slike lister (og
  publiserte filmer for Internett-arkivets del), har jeg så langt
  klart å identifisere over 11 000 filmer, hovedsakelig spillefilmer.</p> +
+<p>De aller fleste oppføringene er hentet fra IMDB selv, basert på det
  faktum at alle filmer laget i USA før 1923 er falt i det fri.
  Tilsvarende tidsgrense for Storbritannia er 1912-07-01, men dette
  utgjør bare veldig liten del av spillefilmene i IMDB (19 totalt).
  En annen stor andel kommer fra Internett-arkivet, der jeg har
  identifisert filmer med referanse til IMDB.  Internett-arkivet, som
  holder til i USA, har som
  <a href="https://archive.org/about/terms.php">policy å kun publisere
  filmer som det er lovlig å distribuere</a>.  Jeg har under arbeidet
  kommet over flere filmer som har blitt fjernet fra
  Internett-arkivet, hvilket gjør at jeg konkluderer med at folkene
  som kontrollerer Internett-arkivet har et aktivt forhold til å kun
  ha lovlig innhold der, selv om det i stor grad er drevet av
  frivillige.  En annen stor liste med filmer kommer fra det
  kommersielle selskapet Retro Film Vault, som selger allemannseide
  filmer til TV- og filmbransjen.  Jeg har også benyttet meg av lister
  over filmer som hevdes å være allemannseie, det være seg Public
  Domain Review, Public Domain Torrents og Public Domain Movies (.net
  og .info), samt lister over filmer med Creative Commons-lisensiering
  fra Wikipedia, VODO og The Hill Productions. 
Jeg har gjort en del
  stikkprøver ved å vurdere filmer som kun omtales på en liste.  Der
  jeg har funnet feil som har gjort meg i tvil om vurderingen til de
  som har laget listen har jeg forkastet listen fullstendig (gjelder
  en liste fra IMDB).</p> +
+<p>Ved å ta utgangspunkt i verk som kan antas å være lovlig delt på
  Internett (fra blant annet Internett-arkivet, Public Domain
  Torrents, Public Domain Review og Public Domain Movies), og knytte
  dem til oppføringer i IMDB, så har jeg så langt klart å identifisere
  over 11 000 filmer (hovedsakelig spillefilmer) det er grunn til å tro
  kan lovlig distribueres av alle på Internett.  Som ekstra kilder er
  det brukt lister over filmer som antas/påstås å være allemannseie.
  Disse kildene kommer fra miljøer som jobber for å gjøre tilgjengelig
  for allmennheten alle verk som er falt i det fri eller har
  bruksvilkår som tillater deling.</p> +
+<p>I tillegg til de over 11 000 filmene der tittel-ID i IMDB er
  identifisert, har jeg funnet mer enn 20 000 oppføringer der jeg ennå
  ikke har hatt kapasitet til å spore opp tittel-ID i IMDB.  Noen av
  disse er nok duplikater av de IMDB-oppføringene som er identifisert
  så langt, men neppe alle.  Retro Film Vault hevder å ha 44 000
  filmverk i det fri i sin katalog, så det er mulig at det reelle
  tallet er betydelig høyere enn de jeg har klart å identifisere så
  langt.  Konklusjonen er at tallet 11 000 er nedre grense for hvor
  mange filmer i IMDB som kan lovlig deles på Internett.  Ifølge <a
  href="http://www.imdb.com/stats">statistikk fra IMDB</a> er det 4,6
  millioner titler registrert, hvorav 3 millioner er TV-serieepisoder. 
Jeg har ikke funnet ut hvordan de fordeler seg per år.</p> +
+<p>Hvis en fordeler på år alle tittel-IDene i IMDB som hevdes å lovlig
  kunne deles på Internett, får en følgende histogram:</p> +
+<p align="center"><img width="80%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year.png"></p> +
+<p>En kan i histogrammet se at effekten av manglende registrering
  eller fornying av registrering er at mange filmer gitt ut i USA før
  1978 er allemannseie i dag.  I tillegg kan en se at det finnes flere
  filmer gitt ut de siste årene med bruksvilkår som tillater deling,
  muligens på grunn av fremveksten av
  <a href="https://creativecommons.org/">Creative
  Commons</a>-bevegelsen.</p> +
+<p>For maskinell analyse av katalogene har jeg laget et lite program
  som kobler seg til bittorrent-katalogene som brukes av ulike Popcorn
  Time-varianter og laster ned komplett liste over filmer i
  katalogene, noe som bekrefter at det er mulig å hente ned komplett
  liste med alle filmtitler som er tilgjengelig.  Jeg har sett på fire
  bittorrent-kataloger.  Den ene brukes av klienten tilgjengelig fra
  www.popcorntime.sh og er navngitt 'sh' i dette dokumentet.  Den
  andre brukes ifølge dokument 09,12 av klienten tilgjengelig fra
  popcorntime.ag og popcorntime.sh og er navngitt 'yts' i dette
  dokumentet.  Den tredje brukes av websidene tilgjengelig fra
  popcorntime-online.tv og er navngitt 'apidomain' i dette dokumentet.
  Den fjerde brukes av klienten tilgjengelig fra popcorn-time.to
  ifølge dokument 09,12, og er navngitt 'ukrfnlge' i dette
  dokumentet.</p> +
+<p>Metoden Økokrim legger til grunn skriver i sitt punkt fire at
  skjønn er en egnet metode for å finne ut om en film kan lovlig deles
  på Internett eller ikke, og sier at det ble «vurdert hvorvidt det
  var rimelig å forvente om at verket var vernet av copyright». 
For
  det første er det ikke nok å slå fast om en film er «vernet av
  copyright» for å vite om det er lovlig å dele den på Internett eller
  ikke, da det finnes flere filmer med opphavsrettslige bruksvilkår
  som tillater deling på Internett.  Eksempler på dette er Creative
  Commons-lisensierte filmer som Citizenfour fra 2014 og Sintel fra
  2010.  I tillegg til slike finnes det flere filmer som nå er
  allemannseie (public domain) på grunn av manglende registrering
  eller fornying av registrering selv om både regissør,
  produksjonsselskap og distributør ønsker seg vern.  Eksempler på
  dette er Plan 9 from Outer Space fra 1959 og Night of the Living
  Dead fra 1968.  Alle filmer fra USA som var allemannseie før
  1989-03-01 forble i det fri da Bern-konvensjonen, som trådte i kraft i
  USA på det tidspunktet, ikke ble gitt tilbakevirkende kraft.  Hvis
  det er noe
  <a href="http://www.latimes.com/local/lanow/la-me-ln-happy-birthday-song-lawsuit-decision-20150922-story.html">historien
  om sangen «Happy birthday»</a> forteller oss, der betaling for bruk
  har vært krevd inn i flere tiår selv om sangen ikke egentlig var
  vernet av åndsverkloven, så er det at hvert enkelt verk må vurderes
  nøye og i detalj før en kan slå fast om verket er allemannseie eller
  ikke.  Det holder ikke å tro på selverklærte rettighetshavere. 
Flere
  eksempler på verk i det fri som feilklassifiseres som vernet er fra
  dokument 09,18, som lister opp søkeresultater for klienten omtalt
  som popcorntime.sh og ifølge notatet kun inneholder en film (The
  Circus fra 1928) som under tvil kan antas å være allemannseie.</p> +
+<p>Ved rask gjennomlesning av dokument 09,18, som inneholder
  skjermbilder fra bruk av en Popcorn Time-variant, fant jeg omtalt
  både filmen «The Brain That Wouldn't Die» fra 1962 som er
  <a href="https://archive.org/details/brain_that_wouldnt_die">tilgjengelig
  fra Internett-arkivet</a> og som
  <a href="https://en.wikipedia.org/wiki/List_of_films_in_the_public_domain_in_the_United_States">ifølge
  Wikipedia er allemannseie i USA</a> da den ble gitt ut i
  1962 uten 'copyright'-merking, og filmen «God’s Little Acre» fra
  1958 <a href="https://en.wikipedia.org/wiki/God%27s_Little_Acre_%28film%29">som
  er lagt ut på Wikipedia</a>, der det fortelles at
  sort/hvit-utgaven er allemannseie.  Det fremgår ikke fra dokument
  09,18 om filmen omtalt der er sort/hvit-utgaven.  Av
  kapasitetsårsaker og på grunn av at filmoversikten i dokument 09,18
  ikke er maskinlesbar har jeg ikke forsøkt å sjekke alle filmene som
  listes opp der opp mot listen med filmer som antas å lovlig kunne
  distribueres på Internett.</p> +
+<p>Ved maskinell gjennomgang av listen med IMDB-referanser under
  regnearkfanen 'Unique titles' i dokument 09,14, fant jeg i tillegg
  filmen «She Wore a Yellow Ribbon» fra 1949 som nok også er
  feilklassifisert.  Filmen «She Wore a Yellow Ribbon» er tilgjengelig
  fra Internett-arkivet og markert som allemannseie der.  Det virker
  dermed å være minst fire ganger så mange filmer som lovlig kan deles
  på Internett som det som er lagt til grunn når en påstår at minst
  99% av innholdet er ulovlig.  Jeg ser ikke bort fra at nærmere
  undersøkelser kan avdekke flere. 
Poenget er uansett at metodens
  punkt om «rimelig å forvente om at verket var vernet av copyright»
  gjør metoden upålitelig.</p> +
+<p>Den omtalte målemetoden velger ut tilfeldige søketermer fra
  ordlisten Dale-Chall.  Den ordlisten inneholder 3000 enkle engelske
  ord som fjerdeklassinger i USA er forventet å forstå.  Det fremgår
  ikke hvorfor akkurat denne ordlisten er valgt, og det er uklart for
  meg om den er egnet til å få et representativt utvalg av filmer.  Mange
  av ordene gir tomt søkeresultat.  Ved å simulere tilsvarende søk
  ser jeg store avvik fra fordelingen i katalogen for enkeltmålinger.
  Dette antyder at enkeltmålinger av 100 filmer, slik målemetoden
  beskriver at de er gjort, ikke er velegnet til å finne andel ulovlig
  innhold i bittorrent-katalogene.</p> +
+<p>En kan motvirke dette store avviket for enkeltmålinger ved å gjøre
  mange søk og slå sammen resultatet.  Jeg har testet ved å
  gjennomføre 100 enkeltmålinger (dvs. måling av (100x100=) 10 000
  tilfeldig valgte filmer) som gir mindre, men fortsatt betydelig
  avvik, i forhold til telling av filmer per år i hele katalogen.</p> +
+<p>Målemetoden henter ut de fem øverste i søkeresultatet.
  Søkeresultatene er sortert på antall bittorrent-klienter registrert
  som delere i katalogene, hvilket kan gi en slagside mot hvilke
  filmer som er populære blant de som bruker bittorrent-katalogene,
  uten at det forteller noe om hvilket innhold som er tilgjengelig
  eller hvilket innhold som deles med Popcorn Time-klienter.  Jeg har
  forsøkt å måle hvor stor en slik slagside eventuelt er ved å
  sammenligne fordelingen hvis en tar de 5 nederste i søkeresultatet i
  stedet.  Avviket for disse to metodene for en del kataloger er godt
  synlig på histogrammet.  Her er histogram over filmer funnet i den
  komplette katalogen (grønn strek), og filmer funnet ved søk etter
  ord i Dale-Chall.  Grafer merket 'top' henter fra de 5 første i
  søkeresultatet, mens de merket 'bottom' henter fra de 5 siste. 
En
  kan her se at resultatene påvirkes betydelig av hvorvidt en ser på
  de første eller de siste filmene i et søketreff.</p> +
+<p align="center"> +  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-top.png"/> +  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-bottom.png"/> +  <br> +  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-top.png"/> +  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-bottom.png"/> +  <br> +  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-top.png"/> +  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-bottom.png"/> +  <br> +  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-top.png"/> +  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-bottom.png"/> +</p> +
+<p>Det er verdt å bemerke at de omtalte bittorrent-katalogene ikke er
  laget for bruk med Popcorn Time.  Eksempelvis tilhører katalogen
  YTS, som brukes av klienten som lastes ned fra popcorntime.sh,
  et selvstendig fildelings-relatert nettsted YTS.AG med et separat
  brukermiljø.  Målemetoden foreslått av Økokrim måler dermed ikke
  (u)lovligheten rundt bruken av Popcorn Time, men (u)lovligheten til
  innholdet i disse katalogene.</p> +
+<hr> +
+<p id="dok-09-13">Metoden fra Økokrims dokument 09,13 i straffesaken
om DNS-beslag.</p> +
+<p><strong>1. Evaluation of (il)legality</strong></p> +
+<p><strong>1.1. Methodology</strong></p> +
+<p>Due to its technical configuration, Popcorn Time applications don't
allow to make a full list of all titles made available. 
In order to +evaluate the level of illegal operation of PCT, the following +methodology was applied:</p> + +<ol> + + <li>A random selection of 50 keywords, greater than 3 letters, was + made from the Dale-Chall list that contains 3000 simple English + words1. The selection was made by using a Random Number + Generator2.</li> + + <li>For each keyword, starting with the first randomly selected + keyword, a search query was conducted in the movie section of the + respective Popcorn Time application. For each keyword, the first + five results were added to the title list until the number of 100 + unique titles was reached (duplicates were removed).</li> + + <li>For one fork, .CH, insufficient titles were generated via this + approach to reach 100 titles. This was solved by adding any + additional query results above five for each of the 50 keywords. + Since this still was not enough, another 42 random keywords were + selected to finally reach 100 titles.</li> + + <li>It was verified whether or not there is a reasonable expectation + that the work is copyrighted by checking if they are available on + IMDb, also verifying the director, the year when the title was + released, the release date for a certain market, the production + company/ies of the title and the distribution company/ies.</li> + +</ol> + +<p><strong>1.2. Results</strong></p> + +<p>Between 6 and 9 June 2016, four forks of Popcorn Time were +investigated: popcorn-time.to, popcorntime.ag, popcorntime.sh and +popcorntime.ch. An excel sheet with the results is included in +Appendix 1. Screenshots were secured in separate Appendixes for each +respective fork, see Appendix 2-5.</p> + +<p>For each fork, out of 100, de-duplicated titles it was possible to +retrieve data according to the parameters set out above that indicate +that the title is commercially available. Per fork, there was 1 title +that presumably falls within the public domain, i.e. 
the 1928 movie
"The Circus" by and with Charles Chaplin.</p> +
+<p>Based on the above it is reasonable to assume that 99% of the movie
content of each fork is copyright protected and is made available
illegally.</p> +
+<p>This exercise was not repeated for TV series, but considering that
besides production companies and distribution companies also
broadcasters may have relevant rights, it is reasonable to assume that
at least a similar level of infringement will be established.</p> +
+<p>Based on the above it is reasonable to assume that 99% of all the
content of each fork is copyright protected and are made available
illegally.</p> + + + + + Cura, the nice 3D print slicer, is now in Debian Unstable + http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html + http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html + Sun, 17 Dec 2017 07:00:00 +0100 + <p>After several months of working and waiting, I am happy to report
that the nice and user friendly 3D printer slicer software Cura just
entered Debian Unstable.  It consists of six packages,
<a href="https://tracker.debian.org/pkg/cura">cura</a>,
<a href="https://tracker.debian.org/pkg/cura-engine">cura-engine</a>,
<a href="https://tracker.debian.org/pkg/libarcus">libarcus</a>,
<a href="https://tracker.debian.org/pkg/fdm-materials">fdm-materials</a>,
<a href="https://tracker.debian.org/pkg/libsavitar">libsavitar</a> and
<a href="https://tracker.debian.org/pkg/uranium">uranium</a>.  The last
two, uranium and cura, entered Unstable yesterday.  This should make
it easier for Debian users to print on at least the Ultimaker class of
3D printers.  My nearest 3D printer is an Ultimaker 2+, so it will
make life easier for at least me. :)</p> +
+<p>The work to make this happen was done by Gregor Riepl, and I was
happy to assist him in sponsoring the packages. 
With the introduction
of Cura, Debian is up to three 3D printer slicers at your service:
Cura, Slic3r and Slic3r Prusa.  If you own or have access to a 3D
printer, give it a go. :)</p> +
+<p>The 3D printer software is maintained by the 3D printer Debian
team, flocking together on the
<a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/3dprinter-general">3dprinter-general</a>
mailing list and the
<a href="irc://irc.debian.org/#debian-3dprinting">#debian-3dprinting</a>
IRC channel.</p> +
+<p>The next step for Cura in Debian is to update the cura package to
version 3.0.3 and then update the entire set of packages to version
3.1.0, which showed up in the last few days.</p> + + + + + Idea for finding all public domain movies in the USA + http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html + http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html + Wed, 13 Dec 2017 10:15:00 +0100 + <p>While looking at
<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
for the copyright renewal entries for movies published in the USA</a>,
an idea occurred to me.  The number of renewals is so small per year
that it should be fairly quick to transcribe them all and add references to
the corresponding IMDB title ID.  This would give the (presumably)
complete list of movies published 28 years earlier that did _not_
enter the public domain for the transcribed year.  By fetching the
list of USA movies published 28 years earlier and subtracting the movies
with renewals, we should be left with movies registered in IMDB that
are now in the public domain.  For the year 1955 (which is the one I
have looked at the most), the total number of pages to transcribe is
21.  For the 28 years from 1950 to 1978, it should be in the range
500-600 pages. 
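The subtraction described here is essentially a set difference. A minimal sketch, assuming both lists have already been mapped to IMDB title IDs (tt0017588 is the real ID for "Adam and Evil" from the 1955 renewal example in this post; the other IDs and the variable names are made up for illustration):

```python
# Movies published in the USA in 1927, and the 1955 renewals, both as
# IMDB title IDs.  The data is made up for illustration; only tt0017588
# ("Adam and Evil") is a real ID from the renewal protocol.
published_1927 = {"tt0017588", "tt0018033", "tt0018742"}
renewed_1955 = {"tt0017588"}

# Whatever was published but not renewed should now be in the public domain.
public_domain_1927 = published_1927 - renewed_1955
print(sorted(public_domain_1927))
```

With real input, the published set would come from an IMDB query for US movies from 1927, and the renewed set from the transcribed protocol pages.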
It is just a few days of work, and spread among a
small group of people it should be doable in a few weeks of spare
time.</p> +
+<p>A typical copyright renewal entry looks like this (the first one
listed for 1955):</p> +
+<p><blockquote> +  ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
  Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
  10Jun55; R151558. +</blockquote></p> +
+<p>The movie title as well as registration and renewal dates are easy
enough to locate by a program (split on first comma and look for
DDmmmYY).  The rest of the text is not required to find the movie in
IMDB, but is useful to confirm the correct movie is found.  I am not
quite sure what the L and R numbers mean, but suspect they are
reference numbers into the archive of the US Copyright Office.</p> +
+<p>Tracking down the equivalent IMDB title ID is probably going to be
a manual task, but given the year it is fairly easy to search for the
movie title using for example
<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
Using this search, I find that the equivalent IMDB title ID for the
first renewal entry from 1955 is
<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p> +
+<p>I suspect the best way to do this would be to make a specialised
web service to make it easy for contributors to transcribe and track
down IMDB title IDs.  In the web service, once an entry is transcribed,
the title and year could be extracted from the text and a search in IMDB
conducted for the user to pick the equivalent IMDB title ID right
away.  By spreading out the work among volunteers, it would also be
possible to have at least two people transcribe the same entries to
be able to discover any typos introduced.  But I will need help to
make this happen, as I lack the spare time to do all of this on my
own.  If you would like to help, please get in touch. 
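The parsing rule described here (split on the first comma, then look for DDmmmYY dates) can be sketched in a few lines; the function name and the returned field names are my own choices, not part of any existing tool:

```python
import re

def parse_renewal_entry(entry):
    """Split a renewal entry on the first comma to get the title, then
    pick out the DDmmmYY dates and the L/R reference numbers."""
    title, rest = entry.split(",", 1)
    dates = re.findall(r"\b(\d{1,2}[A-Z][a-z]{2}\d{2})\b", rest)
    refs = re.findall(r"\b([LR]\d+)\b", rest)
    return {
        "title": title.strip(),
        "registered": dates[0] if dates else None,  # first date in the entry
        "renewed": dates[-1] if dates else None,    # last date in the entry
        "refs": refs,
    }

entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")
print(parse_renewal_entry(entry))
```

The L and R reference numbers are collected with a separate pattern so they can be kept alongside the entry for manual verification.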
Perhaps you can +draft a web service for crowdsourcing the task?</p> + +<p>Note, Project Gutenberg already has some +<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed +copies of the US Copyright Office renewal protocols</a>, but I have +not been able to find any film renewals there, so I suspect they only +have copies of renewals for written works. I have not been able to find +any transcribed versions of movie renewals so far. Perhaps they exist +somewhere?</p> + +<p>I would love to figure out methods for finding all the public +domain works in other countries too, but it is a lot harder. At least +for Norway and Great Britain, such work involves tracking down the +people involved in making the movie and figuring out when they died. +It is hard enough to figure out who was part of making a movie, and I +do not know how to automate such a procedure without a registry of every +person involved in making movies and their death year.</p> + +<p>As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p> + + + + + Is the short movie «Empty Socks» from 1927 in the public domain or not? + http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html + http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html + Tue, 5 Dec 2017 12:30:00 +0100 + <p>Three years ago, a presumed lost animation film, +<a href="https://en.wikipedia.org/wiki/Empty_Socks">Empty Socks from +1927</a>, was discovered in the Norwegian National Library.
At the +time it was discovered, it was generally assumed to be copyrighted by +The Walt Disney Company, and I blogged about +<a href="http://people.skolelinux.org/pere/blog/Opphavsretts_status_for__Empty_Socks__fra_1927_.html">my +reasoning to conclude</a> that it would enter the Norwegian +equivalent of the public domain in 2053, based on my understanding of +Norwegian Copyright Law. But a few days ago, I came across +<a href="http://www.toonzone.net/forums/threads/exposed-disneys-repurchase-of-oswald-the-rabbit-a-sham.4792291/">a +blog post claiming the movie was already in the public domain</a>, at +least in the USA. The reasoning is as follows: The film was released in +November or December 1927 (sources disagree), and its copyright was +presumably registered that year. At that time, rights holders of +movies registered by the copyright office received government +protection for their work for 28 years. After 28 years, the copyright +had to be renewed if they wanted the government to protect it further. +The blog post I found claims such a renewal did not happen for this +movie, and thus it entered the public domain in 1956. Yet someone +claims the copyright was renewed and the movie is still copyright +protected. Can anyone help me figure out which claim is correct? +I have not been able to find Empty Socks in Catalog of copyright +entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures +<a href="http://onlinebooks.library.upenn.edu/cce/1955r.html#film">available +from the University of Pennsylvania</a>, neither in +<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=83;num=45">page +45 for the first half of 1955</a>, nor in +<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=175;num=119">page +119 for the second half of 1955</a>. It is of course possible that +the renewal entry was left out of the printed catalog by mistake.
Is +there some way to rule out this possibility? Please help, and update +the Wikipedia page with your findings.</p> + +<p>As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p> + + + + + Metadata proposal for movies on the Internet Archive + http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html + http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html + Tue, 28 Nov 2017 12:00:00 +0100 + <p>It would be easier to locate the movie you want to watch in +<a href="https://www.archive.org/">the Internet Archive</a> if the +metadata about each movie were more complete and accurate. In the +archiving community, a well-known saying states that good metadata is a +love letter to the future. The metadata in the Internet Archive could +use a face lift for the future to love us back. Here is a proposal +for a small improvement that would make the metadata more useful +today. I've been unable to find any document describing the various +standard fields available when uploading videos to the archive, so +this proposal is based on my best guess and on searching through several +of the existing movies.</p> + +<p>I have a few use cases in mind. First of all, I would like to be +able to count the number of distinct movies in the Internet Archive, +without duplicates. I would further like to identify the IMDB title +IDs of the movies in the Internet Archive, to be able to look up an IMDB +title ID and know if I can fetch the video from there and share it +with my friends.</p> + +<p>Second, I would like the Butter data provider for the Internet +Archive +(<a href="https://github.com/butterproviders/butter-provider-archive">available +from github</a>) to list as many of the good movies as possible.
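The kind of search such a provider performs can be reproduced against the archive.org advancedsearch endpoint. Here is a sketch that merely builds the query URL (the 'rows' and 'fl[]' values are my own illustration; the plugin's actual parameters are listed below):

```python
from urllib.parse import urlencode

# Build an advancedsearch query URL for free movies with BitTorrent copies.
query = ('collection:moviesandfilms'
         ' AND NOT collection:movie_trailers'
         ' AND -mediatype:collection'
         ' AND format:"Archive BitTorrent"'
         ' AND year')
params = urlencode({'q': query, 'fl[]': 'identifier',
                    'rows': 50, 'output': 'json'})
url = 'https://archive.org/advancedsearch.php?' + params
print(url)  # fetch this with any HTTP client to get matching identifiers
```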
The +plugin currently does a search in the archive with the following +parameters:</p> + +<p><pre> +collection:moviesandfilms +AND NOT collection:movie_trailers +AND -mediatype:collection +AND format:"Archive BitTorrent" +AND year +</pre></p> + +<p>Most of the cool movies that fail to show up in Butter do so +because the 'year' field is missing. The 'year' field is populated by +the year part from the 'date' field, and should be when the movie was +released (date or year). Two such examples are +<a href="https://archive.org/details/SidneyOlcottsBen-hur1905">Ben Hur +from 1905</a> and +<a href="https://archive.org/details/Caminandes2GranDillama">Caminandes +2: Gran Dillama from 2013</a>, where the year metadata field is +missing.</p> + +<p>So, my proposal is simply: for every movie in The Internet Archive +where an IMDB title ID exists, please fill in these metadata fields +(note, they can be updated even long after the video was uploaded, but +as far as I can tell, only by the uploader):</p> + +<dl> + +<dt>mediatype</dt> +<dd>Should be 'movie' for movies.</dd> + +<dt>collection</dt> +<dd>Should contain 'moviesandfilms'.</dd> + +<dt>title</dt> +<dd>The title of the movie, without the publication year.</dd> + +<dt>date</dt> +<dd>The date or year the movie was released. This makes the movie show +up in Butter, makes it possible to know the age of the +movie, and is useful for figuring out copyright status.</dd> + +<dt>director</dt> +<dd>The director of the movie. This makes it easier to know if the +correct movie is found in movie databases.</dd> + +<dt>publisher</dt> +<dd>The production company making the movie. Also useful for +identifying the correct movie.</dd> + +<dt>links</dt> + +<dd>Add a link to the IMDB title page, for example like this: &lt;a +href="http://www.imdb.com/title/tt0028496/"&gt;Movie in +IMDB&lt;/a&gt;. This makes it easier to find duplicates and allows +counting the number of unique movies in the Archive.
Other external +references, like to TMDB, could be added like this too.</dd> + +</dl> + +<p>I did consider proposing a Custom field for the IMDB title ID (for +example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but suspect it +will be easier to simply place it in the links free text field.</p> + +<p>I created +<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a +list of IMDB title IDs for several thousand movies in the Internet +Archive</a>, but I also got a list of several thousand movies without +such an IMDB title ID (and quite a few duplicates). It would be great if +this data set could be integrated into the Internet Archive metadata +to be available for everyone in the future, but with the current +policy of leaving metadata editing to the uploaders, it will take a +while before this happens. If you have uploaded movies into the +Internet Archive, you can help. Please consider following my proposal +above for your movies, to ensure they are properly +counted. :)</p> + +<p>The list is mostly generated using Wikidata, which based on +Wikipedia articles makes it possible to link between IMDB and movies in +the Internet Archive. But there are lots of movies without a +Wikipedia article, and some movies where only a collection page exists +(like for <a href="https://en.wikipedia.org/wiki/Caminandes">the +Caminandes example above</a>, where there are three movies but only +one Wikidata entry).</p> + +<p>As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p> + + + + + Legal to share more than 3000 movies listed on IMDB?
+ http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html + http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html + Sat, 18 Nov 2017 21:20:00 +0100 + <p>A month ago, I blogged about my work to +<a href="http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html">automatically +check the copyright status of IMDB entries</a>, and my attempt to count the +number of movies listed in IMDB that are legal to distribute on the +Internet. I have continued to look for good data sources, and +identified a few more. The code used to extract information from +various data sources is available in +<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a +git repository</a>, currently hosted on github.</p> + +<p>So far I have identified 3186 unique IMDB title IDs. To gain a +better understanding of the structure of the data set, I created a +histogram of the year associated with each movie (typically the release +year). It is interesting to notice where the peaks and dips in the +graph are located, and I wonder what caused them. I suspect +World War II caused the dip around 1940, but what caused the peak +around 2010?</p> + +<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2017-11-18-verk-i-det-fri-filmer.png" /></p> + +<p>I've so far identified ten sources for IMDB title IDs for movies in +the public domain or with a free license.
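A histogram like the one above can be recreated from the data set in a few lines. A sketch, with made-up (title ID, year) pairs standing in for the real data:

```python
from collections import Counter

# Count movies per release year; drop exact duplicate entries first.
movies = [('tt0017588', 1927), ('tt0012349', 1921),
          ('tt0012349', 1921), ('tt1254207', 2008)]
histogram = Counter(year for _, year in set(movies))
for year, count in sorted(histogram.items()):
    print(year, '#' * count)
```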
These are the statistics +reported when running 'make stats' in the git repository:</p> + +<pre> + 249 entries ( 6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json + 2301 entries ( 540 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json + 830 entries ( 29 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json + 2109 entries ( 377 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json + 291 entries ( 122 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json + 144 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-manual.json + 350 entries ( 1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json + 4 entries ( 0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json + 698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json + 8 entries ( 8 unique) with and 196 without IMDB title ID in free-movies-vodo.json + 3186 unique IMDB title IDs in total +</pre> + +<p>The entries without an IMDB title ID are candidates to increase the +data set, but might equally well be duplicates of entries already +listed with an IMDB title ID in one of the other sources, or represent +movies that lack an IMDB title ID. I've seen examples of all these +situations when peeking at the entries without an IMDB title ID. Based +on these data sources, the number of movies listed in IMDB that +are legal to distribute on the Internet is somewhere between 3186 and 4713.</p> + +<p>It would greatly improve the accuracy of this measurement +if the various sources added IMDB title IDs to their metadata. I have +tried to reach the people behind the various sources to ask if they +are interested in doing this, without any replies so far.
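The 3186 to 4713 range follows directly from the table above: the lower bound is the number of unique IMDB title IDs, and the upper bound assumes every entry without a title ID is a distinct IMDB-listed movie. A sketch of the arithmetic:

```python
# Derive the bounds from the 'make stats' output.
unique_ids = 3186                       # unique IMDB title IDs found
without_id = [288, 801, 124, 118, 196]  # non-zero 'without' counts
lower = unique_ids
upper = unique_ids + sum(without_id)
print(lower, upper)  # 3186 4713
```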
Perhaps you +can help me get in touch with the people behind VODO, Public Domain +Torrents, Public Domain Movies and Public Domain Review to try to +convince them to add more metadata to their movie entries?</p> + +<p>Another way you could help is by adding pages to Wikipedia about +movies that are legal to distribute on the Internet. If such a page +exists and includes a link to both IMDB and The Internet Archive, the +script used to generate free-movies-archive-org-wikidata.json should +pick up the mapping as soon as Wikidata is updated.</p> + +<p>As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p> + + + + + Some notes on fault tolerant storage systems + http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html + http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html + Wed, 1 Nov 2017 15:35:00 +0100 + <p>If you care about how fault tolerant your storage is, you might +find these articles and papers interesting. They have shaped how I +think when designing a storage system.</p> + +<ul> + +<li>USENIX :login; <a +href="https://www.usenix.org/publications/login/summer2017/ganesan">Redundancy +Does Not Imply Fault Tolerance. Analysis of Distributed Storage +Reactions to Single Errors and Corruptions</a> by Aishwarya Ganesan, +Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi +H.
Arpaci-Dusseau</li> + +<li>ZDNet +<a href="http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/">Why +RAID 5 stops working in 2009</a> by Robin Harris</li> + +<li>ZDNet +<a href="http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/">Why +RAID 6 stops working in 2019</a> by Robin Harris</li> + +<li>USENIX FAST'07 +<a href="http://research.google.com/archive/disk_failures.pdf">Failure +Trends in a Large Disk Drive Population</a> by Eduardo Pinheiro, +Wolf-Dietrich Weber and Luiz André Barroso</li> + +<li>USENIX ;login: <a +href="https://www.usenix.org/system/files/login/articles/hughes12-04.pdf">Data +Integrity. Finding Truth in a World of Guesses and Lies</a> by Doug +Hughes</li> + +<li>USENIX FAST'08 +<a href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An +Analysis of Data Corruption in the Storage Stack</a> by +L. N. Bairavasundaram, G. R. Goodson, B. Schroeder, A. C. +Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li> + +<li>USENIX FAST'07 <a +href="https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder_html/">Disk +failures in the real world: what does an MTTF of 1,000,000 hours mean +to you?</a> by B. Schroeder and G. A. Gibson.</li> + +<li>USENIX ;login: <a +href="https://www.usenix.org/events/fast08/tech/full_papers/jiang/jiang_html/">Are +Disks the Dominant Contributor for Storage Failures? A Comprehensive +Study of Storage Subsystem Failure Characteristics</a> by Weihang +Jiang, Chongfeng Hu, Yuanyuan Zhou, and Arkady Kanevsky</li> + +<li>SIGMETRICS 2007 +<a href="http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf">An +analysis of latent sector errors in disk drives</a> by +L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler</li> + +</ul> + +<p>Several of these research papers are based on data collected from +hundreds of thousands or millions of disks, and their findings are +eye-opening.
The short story is: do not implicitly trust RAID or +redundant storage systems. Details matter. And unfortunately there +are few options on Linux addressing all the identified issues. Both +ZFS and Btrfs are doing a fairly good job, but have legal and +practical issues of their own. I wonder how cluster file systems like +Ceph do in this regard. After all, there is an old saying: you know +you have a distributed system when the crash of a computer you have +never heard of stops you from getting any work done. The same holds +true if fault tolerance does not work.</p> + +<p>Just remember, in the end, it does not matter how redundant or how +fault tolerant your storage is, if you do not continuously monitor its +status to detect and replace failed disks.</p> + +<p>As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p> + + + + + Web services for writing academic LaTeX papers as a team + http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html + http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html + Tue, 31 Oct 2017 21:00:00 +0100 + <p>I was surprised today to learn that a friend in academia did not +know there are easily available web services for writing +LaTeX documents as a team. I thought it was common knowledge, but to +make sure at least my readers are aware of it, I would like to mention +these useful services for writing LaTeX documents. Some of them even +provide a WYSIWYG editor to ease writing even further.</p> + +<p>There are two commercial services available, +<a href="https://sharelatex.com">ShareLaTeX</a> and +<a href="https://overleaf.com">Overleaf</a>. They are very easy to +use.
Just start a new document, select which publisher to write for +(ie which LaTeX style to use), and start writing. Note, these two +have announced their intention to join forces, so soon it will only be +one joint service. I've used both for different documents, and they +work just fine. While +<a href="https://github.com/sharelatex/sharelatex">ShareLaTeX is free +software</a>, the latter is not. According to <a +href="https://www.overleaf.com/help/17-is-overleaf-open-source">an +announcement from Overleaf</a>, they plan to keep the ShareLaTeX code +base maintained as free software.</p> + +<p>But these two are not the only alternatives. +<a href="https://app.fiduswriter.org/">Fidus Writer</a> is another free +software solution with <a href="https://github.com/fiduswriter">the +source available on github</a>. I have not used it myself. Several +others can be found on the nice +<a href="https://alternativeto.net/software/sharelatex/">AlternativeTo +web service</a>.</p> + +<p>If you like Google Docs or Etherpad, but would like to write +documents in LaTeX, you should check out these services. You can even +host your own, if you want to. :)</p> + +<p>As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p> + + + Locating IMDB IDs of movies in the Internet Archive using Wikidata http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html @@ -82,7 +1029,7 @@ automatically.</p> and check if the XML metadata for the movie is available from The Internet Archive, and after around 1.5 hour it produced a list of 2097 free movies and their IMDB ID. In total, 171 entries in Wikidata lack -the refered Internet Archive entry.
I assume the 70 "disappearing" entries (ie 2338-2097-171) are duplicate entries.</p> <p>This is not too bad, given that The Internet Archive report to @@ -286,449 +1233,10 @@ Archive: <a href="http://www.wikidata.org/entity/Q1140317">Q1140 <a href="http://www.wikidata.org/entity/Q3420907">Q3420907</a>, <a href="http://www.wikidata.org/entity/Q3429733">Q3429733</a>, <a href="http://www.wikidata.org/entity/Q774474">Q774474</a></p> - - - - - A one-way wall on the border? - http://people.skolelinux.org/pere/blog/A_one_way_wall_on_the_border_.html - http://people.skolelinux.org/pere/blog/A_one_way_wall_on_the_border_.html - Sat, 14 Oct 2017 22:10:00 +0200 - <p>I find it fascinating how many of the people being locked inside -the proposed border wall between USA and Mexico support the idea. The -proposal to keep Mexicans out reminds me of -<a href="http://www.history.com/news/10-things-you-may-not-know-about-the-berlin-wall">the -propaganda twist from the East Germany government</a> calling the wall -the “Antifascist Bulwark” after erecting the Berlin Wall, claiming -that the wall was erected to keep enemies from creeping into East -Germany, while it was obvious to the people locked inside it that it -was erected to keep the people from escaping.</p> - -<p>Do the people in USA supporting this wall really believe it is a -one way wall, only keeping people on the outside from getting in, -while not keeping people in the inside from getting out?</p> - - - - - Generating 3D prints in Debian using Cura and Slic3r(-prusa) - http://people.skolelinux.org/pere/blog/Generating_3D_prints_in_Debian_using_Cura_and_Slic3r__prusa_.html - http://people.skolelinux.org/pere/blog/Generating_3D_prints_in_Debian_using_Cura_and_Slic3r__prusa_.html - Mon, 9 Oct 2017 10:50:00 +0200 - <p>At my nearby maker space, -<a href="http://sonen.ifi.uio.no/">Sonen</a>, I heard the story that it -was easier to generate gcode files for theyr 3D printers (Ultimake 2+) -on Windows and MacOS X than Linux, 
because the software involved had -to be manually compiled and set up on Linux while premade packages -worked out of the box on Windows and MacOS X. I found this annoying, -as the software involved, -<a href="https://github.com/Ultimaker/Cura">Cura</a>, is free software -and should be trivial to get up and running on Linux if someone took -the time to package it for the relevant distributions. I even found -<a href="https://bugs.debian.org/706656">a request for adding into -Debian</a> from 2013, which had seem some activity over the years but -never resulted in the software showing up in Debian. So a few days -ago I offered my help to try to improve the situation.</p> - -<p>Now I am very happy to see that all the packages required by a -working Cura in Debian are uploaded into Debian and waiting in the NEW -queue for the ftpmasters to have a look. You can track the progress -on -<a href="https://qa.debian.org/developer.php?email=3dprinter-general%40lists.alioth.debian.org">the -status page for the 3D printer team</a>.</p> - -<p>The uploaded packages are a bit behind upstream, and was uploaded -now to get slots in <a href="https://ftp-master.debian.org/new.html">the NEW -queue</a> while we work up updating the packages to the latest -upstream version.</p> - -<p>On a related note, two competitors for Cura, which I found harder -to use and was unable to configure correctly for Ultimaker 2+ in the -short time I spent on it, are already in Debian. If you are looking -for 3D printer "slicers" and want something already available in -Debian, check out -<a href="https://tracker.debian.org/pkg/slic3r">slic3r</a> and -<a href="https://tracker.debian.org/pkg/slic3r-prusa">slic3r-prusa</a>. -The latter is a fork of the former.</p> - - - - - Mangler du en skrue, eller har du en skrue løs? 
- http://people.skolelinux.org/pere/blog/Mangler_du_en_skrue__eller_har_du_en_skrue_l_s_.html - http://people.skolelinux.org/pere/blog/Mangler_du_en_skrue__eller_har_du_en_skrue_l_s_.html - Wed, 4 Oct 2017 09:40:00 +0200 - Når jeg holder på med ulike prosjekter, så trenger jeg stadig ulike -skruer. Det siste prosjektet jeg holder på med er å lage -<a href="https://www.thingiverse.com/thing:676916">en boks til en -HDMI-touch-skjerm</a> som skal brukes med Raspberry Pi. Boksen settes -sammen med skruer og bolter, og jeg har vært i tvil om hvor jeg kan -få tak i de riktige skruene. Clas Ohlson og Jernia i nærheten har -sjelden hatt det jeg trenger. Men her om dagen fikk jeg et fantastisk -tips for oss som bor i Oslo. -<a href="http://www.zachskruer.no/">Zachariassen Jernvare AS</a> i -<a href="http://www.openstreetmap.org/?mlat=59.93421&mlon=10.76795#map=19/59.93421/10.76795">Hegermannsgate -23A på Torshov</a> har et fantastisk utvalg, og åpent mellom 09:00 og -17:00. De selger skruer, muttere, bolter, skiver etc i løs vekt, og -så langt har jeg fått alt jeg har lett etter. De har i tillegg det -meste av annen jernvare, som verktøy, lamper, ledninger, etc. Jeg -håper de har nok kunder til å holde det gående lenge, da dette er en -butikk jeg kommer til å besøke ofte. Butikken er et funn å ha i -nabolaget for oss som liker å bygge litt selv. :)</p> - - - - - Visualizing GSM radio chatter using gr-gsm and Hopglass - http://people.skolelinux.org/pere/blog/Visualizing_GSM_radio_chatter_using_gr_gsm_and_Hopglass.html - http://people.skolelinux.org/pere/blog/Visualizing_GSM_radio_chatter_using_gr_gsm_and_Hopglass.html - Fri, 29 Sep 2017 10:30:00 +0200 - <p>Every mobile phone announce its existence over radio to the nearby -mobile cell towers. And this radio chatter is available for anyone -with a radio receiver capable of receiving them. 
Details about the -mobile phones with very good accuracy is of course collected by the -phone companies, but this is not the topic of this blog post. The -mobile phone radio chatter make it possible to figure out when a cell -phone is nearby, as it include the SIM card ID (IMSI). By paying -attention over time, one can see when a phone arrive and when it leave -an area. I believe it would be nice to make this information more -available to the general public, to make more people aware of how -their phones are announcing their whereabouts to anyone that care to -listen.</p> - -<p>I am very happy to report that we managed to get something -visualizing this information up and running for -<a href="http://norwaymakers.org/osf17">Oslo Skaperfestival 2017</a> -(Oslo Makers Festival) taking place today and tomorrow at Deichmanske -library. The solution is based on the -<a href="http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html">simple -recipe for listening to GSM chatter</a> I posted a few days ago, and -will show up at the stand of <a href="http://sonen.ifi.uio.no/">Åpen -Sone from the Computer Science department of the University of -Oslo</a>. The presentation will show the nearby mobile phones (aka -IMSIs) as dots in a web browser graph, with lines to the dot -representing mobile base station it is talking to. It was working in -the lab yesterday, and was moved into place this morning.</p> - -<p>We set up a fairly powerful desktop machine using Debian -Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers -connected and visualize the visible cell phone towers using an -<a href="https://github.com/marlow925/hopglass">English version of -Hopglass</a>. 
A fairly powerfull machine is needed as the -grgsm_livemon_headless processes from -<a href="https://tracker.debian.org/pkg/gr-gsm">gr-gsm</a> converting -the radio signal to data packages is quite CPU intensive.</p> - -<p>The frequencies to listen to, are identified using a slightly -patched scan-and-livemon (to set the --args values for each receiver), -and the Hopglass data is generated using the -<a href="https://github.com/petterreinholdtsen/IMSI-catcher/tree/meshviewer-output">patches -in my meshviewer-output branch</a>. For some reason we could not get -more than four SDRs working. There is also a geographical map trying -to show the location of the base stations, but I believe their -coordinates are hardcoded to some random location in Germany, I -believe. The code should be replaced with code to look up location in -a text file, a sqlite database or one of the online databases -mentioned in -<a href="https://github.com/Oros42/IMSI-catcher/issues/14">the github -issue for the topic</a>. - -<p>If this sound interesting, visit the stand at the festival!</p> - - - - - Easier recipe to observe the cell phones around you - http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html - http://people.skolelinux.org/pere/blog/Easier_recipe_to_observe_the_cell_phones_around_you.html - Sun, 24 Sep 2017 08:30:00 +0200 - <p>A little more than a month ago I wrote -<a href="http://people.skolelinux.org/pere/blog/Simpler_recipe_on_how_to_make_a_simple__7_IMSI_Catcher_using_Debian.html">how -to observe the SIM card ID (aka IMSI number) of mobile phones talking -to nearby mobile phone base stations using Debian GNU/Linux and a -cheap USB software defined radio</a>, and thus being able to pinpoint -the location of people and equipment (like cars and trains) with an -accuracy of a few kilometer. 
Since then we have worked to make the -procedure even simpler, and it is now possible to do this without any -manual frequency tuning and without building your own packages.</p> - -<p>The <a href="https://tracker.debian.org/pkg/gr-gsm">gr-gsm</a> -package is now included in Debian testing and unstable, and the -IMSI-catcher code no longer require root access to fetch and decode -the GSM data collected using gr-gsm.</p> - -<p>Here is an updated recipe, using packages built by Debian and a git -clone of two python scripts:</p> - -<ol> - -<li>Start with a Debian machine running the Buster version (aka - testing).</li> - -<li>Run '<tt>apt install gr-gsm python-numpy python-scipy - python-scapy</tt>' as root to install required packages.</li> - -<li>Fetch the code decoding GSM packages using '<tt>git clone - github.com/Oros42/IMSI-catcher.git</tt>'.</li> - -<li>Insert USB software defined radio supported by GNU Radio.</li> - -<li>Enter the IMSI-catcher directory and run '<tt>python - scan-and-livemon</tt>' to locate the frequency of nearby base - stations and start listening for GSM packages on one of them.</li> - -<li>Enter the IMSI-catcher directory and run '<tt>python - simple_IMSI-catcher.py</tt>' to display the collected information.</li> - -</ol> - -<p>Note, due to a bug somewhere the scan-and-livemon program (actually -<a href="https://github.com/ptrkrysik/gr-gsm/issues/336">its underlying -program grgsm_scanner</a>) do not work with the HackRF radio. It does -work with RTL 8232 and other similar USB radio receivers you can get -very cheaply -(<a href="https://www.ebay.com/sch/items/?_nkw=rtl+2832">for example -from ebay</a>), so for now the solution is to scan using the RTL radio -and only use HackRF for fetching GSM data.</p> - -<p>As far as I can tell, a cell phone only show up on one of the -frequencies at the time, so if you are going to track and count every -cell phone around you, you need to listen to all the frequencies used. 
-To listen to several frequencies, use the --numrecv argument to -scan-and-livemon to use several receivers. Further, I am not sure if -phones using 3G or 4G will show as talking GSM to base stations, so -this approach might not see all phones around you. I typically see -0-400 IMSI numbers an hour when looking around where I live.</p> - -<p>I've tried to run the scanner on a -<a href="https://wiki.debian.org/RaspberryPi">Raspberry Pi 2 and 3 -running Debian Buster</a>, but the grgsm_livemon_headless process seem -to be too CPU intensive to keep up. When GNU Radio print 'O' to -stdout, I am told there it is caused by a buffer overflow between the -radio and GNU Radio, caused by the program being unable to read the -GSM data fast enough. If you see a stream of 'O's from the terminal -where you started scan-and-livemon, you need a give the process more -CPU power. Perhaps someone are able to optimize the code to a point -where it become possible to set up RPi3 based GSM sniffers? I tried -using Raspbian instead of Debian, but there seem to be something wrong -with GNU Radio on raspbian, causing glibc to abort().</p> - - - - - Datalagringsdirektivet kaster skygger over Høyre og Arbeiderpartiet - http://people.skolelinux.org/pere/blog/Datalagringsdirektivet_kaster_skygger_over_H_yre_og_Arbeiderpartiet.html - http://people.skolelinux.org/pere/blog/Datalagringsdirektivet_kaster_skygger_over_H_yre_og_Arbeiderpartiet.html - Thu, 7 Sep 2017 21:35:00 +0200 - <p>For noen dager siden publiserte Jon Wessel-Aas en bloggpost om -«<a href="http://www.uhuru.biz/?p=1821">Konklusjonen om datalagring som -EU-kommisjonen ikke ville at vi skulle få se</a>». Det er en -interessant gjennomgang av EU-domstolens syn på snurpenotovervåkning -av befolkningen, som er klar på at det er i strid med -EU-lovgivingen.</p> - -<p>Valgkampen går for fullt i Norge, og om noen få dager er siste -frist for å avgi stemme. 
One thing is certain: Høyre and Arbeiderpartiet will not get my vote
<a href="http://people.skolelinux.org/pere/blog/Datalagringsdirektivet_gj_r_at_Oslo_H_yre_og_Arbeiderparti_ikke_f_r_min_stemme_i__r.html">this
time either</a>. I have not forgotten that they forced through the law
that was to require every data and telecommunications service provider
to monitor all of their customers. A law that was passed, and never
repealed.</p>

<p>It is clear from the debate about limitless digital surveillance
(or "Digital Border Defence" as it is called in Orwellian newspeak)
that neither Høyre nor Arbeiderpartiet has any principled objections
to monitoring the entire population, and the debate so far suggests
that several of the other parties do not have any either. Many of
<a href="https://data.holderdeord.no/votes/1301946411e">those who voted
for the Data Retention Directive in the Storting</a> (64 from
Arbeiderpartiet, 25 from Høyre) are still active and still argue for
erasing more of the citizens' private sphere.</p>

<p>When the authorities demonstrate their distrust of the people, I
believe the people themselves should put some effort into protecting
their privacy, by adopting end-to-end encrypted communication with
their loved ones, and by limiting how much private information is
shared with those it does not concern. After all, nothing suggests
that the authorities will protect our private sphere.
<a href="http://people.skolelinux.org/pere/blog/How_to_talk_with_your_loved_ones_in_private.html">There
are many options</a>. I am rather fond of
<a href="https://ring.cx/">Ring</a>, which is based on p2p technology
without central control, is free software, and supports messaging,
voice and video. The system is available out of the box from
<a href="https://tracker.debian.org/pkg/ring">Debian</a> and
<a href="https://launchpad.net/ubuntu/+source/ring">Ubuntu</a>, and
there are packages for Android, MacOSX and Windows.
So far Ring has few users, so I also use
<a href="https://signal.org/">Signal</a> as a browser extension.</p>


Simpler recipe on how to make a simple $7 IMSI Catcher using Debian
http://people.skolelinux.org/pere/blog/Simpler_recipe_on_how_to_make_a_simple__7_IMSI_Catcher_using_Debian.html
http://people.skolelinux.org/pere/blog/Simpler_recipe_on_how_to_make_a_simple__7_IMSI_Catcher_using_Debian.html
Wed, 9 Aug 2017 23:59:00 +0200

<p>On Friday, I came across an interesting article in the Norwegian
web based ICT news magazine digi.no on
<a href="https://www.digi.no/artikler/sikkerhetsforsker-lagde-enkel-imsi-catcher-for-60-kroner-na-kan-mobiler-kartlegges-av-alle/398588">how
to collect the IMSI numbers of nearby cell phones</a> using cheap
DVB-T software defined radios. The article referred to instructions
and <a href="https://www.youtube.com/watch?v=UjwgNd_as30">a recipe by
Keld Norman on Youtube on how to make a simple $7 IMSI Catcher</a>,
and I decided to test them out.</p>

<p>The instructions said to use Ubuntu, install pip using apt (to
bypass apt), use pip to install pybombs (to bypass both apt and pip),
and then ask pybombs to fetch and build everything you need from
scratch. I wanted to see if I could do the same with the most recent
Debian packages, but this did not work because pybombs tried to build
software that no longer builds with the most recent openssl library,
or some other version skew problem. While trying to get this recipe
working, I learned that the apt->pip->pybombs route was a long detour,
and that the only software dependency missing in Debian was the gr-gsm
package. I also found out that the lead upstream developer of the
gr-gsm (the name stands for GNU Radio GSM) project already provided a
set of Debian packages in an Ubuntu PPA repository.
All I needed to do was to dget the Debian source package and build
it.</p>

<p>The IMSI collector is a Python script listening for packets on the
loopback network device and printing to the terminal some specific GSM
packets with IMSI numbers in them. The code is fairly short and easy
to understand. The reason this works is that gr-gsm includes a tool to
read GSM data from a software defined radio like a DVB-T USB stick and
other software defined radios, decode it and inject it into a network
device on your Linux machine (using the loopback device by default).
This proved to work just fine, and I've been testing the collector for
a few days now.</p>

<p>The updated and simpler recipe is thus to</p>

<ol>

<li>start with a Debian machine running Stretch or newer,</li>

<li>build and install the gr-gsm package available from
<a href="http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/">http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/</a>,</li>

<li>clone the git repository from <a href="https://github.com/Oros42/IMSI-catcher">https://github.com/Oros42/IMSI-catcher</a>,</li>

<li>run grgsm_livemon and adjust the frequency until the terminal
where it was started is filled with a stream of text (meaning you
found a GSM station), and</li>

<li>go into the IMSI-catcher directory and run '<tt>sudo python
simple_IMSI-catcher.py</tt>' to extract the IMSI numbers.</li>

</ol>

<p>To make it even easier in the future to get this sniffer up and
running, I decided to package
<a href="https://github.com/ptrkrysik/gr-gsm/">the gr-gsm project</a>
for Debian (<a href="https://bugs.debian.org/871055">WNPP
#871055</a>), and the package was uploaded into the NEW queue today.
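</p>

<p>The simple_IMSI-catcher.py script mentioned above works by picking
the IMSI digits out of decoded GSM packets. As an illustration of the
kind of decoding involved, here is a rough sketch of my own of
unpacking a BCD-coded Mobile Identity field, assuming the layout
described in 3GPP TS 24.008; it is not code from the project:</p>

```python
# Sketch: decode an IMSI from a GSM Mobile Identity information element.
# Layout assumed from 3GPP TS 24.008: the first octet carries identity
# digit 1 in its high nibble, an odd/even flag in bit 4 and the identity
# type in the low three bits; the remaining digits are packed two per
# octet, low nibble first, with a 0xF filler nibble if the count is even.
def decode_mobile_identity(data):
    digits = [data[0] >> 4]           # digit 1
    odd = bool(data[0] & 0x08)        # odd number of digits?
    for octet in data[1:]:
        digits.append(octet & 0x0F)
        digits.append(octet >> 4)
    if not odd:
        digits.pop()                  # drop the 0xF filler nibble
    return ''.join(str(d) for d in digits)

# A 15 digit test IMSI packed by hand following the same assumed layout:
imsi = decode_mobile_identity(bytes([0x09, 0x10, 0x10, 0x10,
                                     0x32, 0x54, 0x76, 0x98]))
# imsi == '001010123456789'
```

<p>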
Luckily the gnuradio maintainer has promised to help me, as I do not
know much about gnuradio yet.</p>

<p>I doubt this "IMSI catcher" is anywhere near as powerful as
commercial tools like
<a href="https://www.thespyphone.com/portable-imsi-imei-catcher/">The
Spy Phone Portable IMSI / IMEI Catcher</a> or the
<a href="https://en.wikipedia.org/wiki/Stingray_phone_tracker">Harris
Stingray</a>, but I hope the existence of cheap alternatives can make
more people realise how easily their whereabouts are tracked when they
carry a cell phone. Seeing the data flow on the screen, realizing that
I live close to a police station and knowing that police officers also
carry cell phones, I wonder how hard it would be for criminals to
track the positions of the police officers to discover when police are
nearby, or for foreign military forces to track the location of the
Norwegian military forces, or for anyone to track the location of
government officials...</p>

<p>It is worth noting that the data reported by the IMSI-catcher
script mentioned above is only a fraction of the data broadcast on the
GSM network. It will only collect from one frequency at a time, while
a typical phone will be using several frequencies, and not all phones
will be using the frequencies tracked by the grgsm_livemon program.
Also, there is a lot of radio chatter being ignored by the
simple_IMSI-catcher script, which could be collected by extending the
parser code.
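</p>

<p>Each packet gr-gsm injects on the loopback device starts with a
GSMTAP header, so a parser extension would begin by unpacking that.
Here is a hedged sketch of mine, assuming the fixed 16-byte GSMTAP v2
layout from my reading of osmocom's gsmtap.h; verify the field order
and flag values against the header file before relying on it:</p>

```python
import struct

# Sketch: unpack the fixed 16 byte GSMTAP v2 header assumed to prefix
# each GSM burst gr-gsm injects on the loopback device.  Field layout
# taken from my reading of osmocom's gsmtap.h; treat it as an assumption.
def parse_gsmtap_header(data):
    (version, hdr_len, pkt_type, timeslot, arfcn, signal_dbm, snr_db,
     frame_number, sub_type, antenna, sub_slot, _res) = \
        struct.unpack('!BBBBHbbIBBBB', data[:16])
    return {
        'version': version,
        'header_bytes': hdr_len * 4,      # length field is in 32-bit words
        'type': pkt_type,
        'timeslot': timeslot,
        'uplink': bool(arfcn & 0x4000),   # uplink flag rides in the ARFCN field
        'arfcn': arfcn & 0x3FFF,
        'signal_dbm': signal_dbm,
        'frame_number': frame_number,
    }

# Example header packed with the same assumed layout:
header = parse_gsmtap_header(struct.pack('!BBBBHbbIBBBB',
                                         2, 4, 1, 0, 0x4000 | 57,
                                         -70, 10, 123456, 1, 0, 0, 0))
# header['arfcn'] == 57 and header['uplink'] is True
```

<p>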
I wonder if gr-gsm can be set up to listen to more than one
frequency?</p>


Norwegian Bokmål edition of Debian Administrator's Handbook is now available
http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_edition_of_Debian_Administrator_s_Handbook_is_now_available.html
http://people.skolelinux.org/pere/blog/Norwegian_Bokm_l_edition_of_Debian_Administrator_s_Handbook_is_now_available.html
Tue, 25 Jul 2017 21:10:00 +0200

<p align="center"><img align="center" src="http://people.skolelinux.org/pere/blog/images/2017-07-25-debian-handbook-nb-testprint.png"/></p>

<p>I finally received a copy of the Norwegian Bokmål edition of
"<a href="https://debian-handbook.info/">The Debian Administrator's
Handbook</a>". This test copy arrived in the mail a few days ago, and
I am very happy to hold the result in my hands. We spent around one
and a half years translating it. This paperback edition
<a href="https://debian-handbook.info/get/#norwegian">is available
from lulu.com</a>. If you buy it quickly, you save 25% on the list
price. The book is also available for download in electronic form as
PDF, EPUB and Mobipocket, and can be
<a href="https://debian-handbook.info/browse/nb-NO/stable/">read online
as a web page</a>.</p>

<p>This is the second book I have published (the first was the book
"<a href="http://free-culture.cc/">Free Culture</a>" by Lawrence Lessig
in
<a href="http://www.lulu.com/shop/lawrence-lessig/free-culture/paperback/product-22440520.html">English</a>,
<a href="http://www.lulu.com/shop/lawrence-lessig/culture-libre/paperback/product-22645082.html">French</a>
and
<a href="http://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Norwegian
Bokmål</a>), and I am very excited to finally wrap up this project.
I hope
"<a href="http://www.lulu.com/shop/rapha%C3%ABl-hertzog-and-roland-mas/h%C3%A5ndbok-for-debian-administratoren/paperback/product-23262290.html">Håndbok
for Debian-administratoren</a>" will be well received.</p>


«The report does not look at information security related to personal integrity»
http://people.skolelinux.org/pere/blog/_Rapporten_ser_ikke_p__informasjonssikkerhet_knyttet_til_personlig_integritet_.html
http://people.skolelinux.org/pere/blog/_Rapporten_ser_ikke_p__informasjonssikkerhet_knyttet_til_personlig_integritet_.html
Tue, 27 Jun 2017 17:50:00 +0200

<p>Today I came across the text
«<a href="https://freedom-to-tinker.com/2017/06/21/killing-car-privacy-by-federal-mandate/">Killing
car privacy by federal mandate</a>» by Leonid Reyzin on Freedom to
Tinker, and it pleases me to see a good review of why it is an
unreasonable intrusion into the private sphere to have every car
broadcast its position and movements by radio. The proposal in
question, based on Dedicated Short Range Communication (DSRC), is
called Basic Safety Message (BSM) in the USA and Cooperative Awareness
Message (CAM) in Europe, and the Norwegian Public Roads Administration
appears to be among those who might want to require all cars to give
up yet another piece of the citizens' private sphere. I recommend
everyone to read what is written there.</p>

<p>While looking into DSRC on cars in Norway, I came across a quote I
find illustrative of how the Norwegian public sector handles issues
around citizens' privacy, in the SINTEF report
«<a href="https://www.sintef.no/publikasjoner/publikasjon/Download/?pubid=SINTEF+A23933">Informasjonssikkerhet
i AutoPASS-brikker</a>» (information security in AutoPASS tags) by
Trond Foss:</p>

<p><blockquote>
«Rapporten ser ikke på informasjonssikkerhet knyttet til personlig
integritet.» (The report does not look at information security related
to personal integrity.)
</blockquote></p>

<p>Apparently it can be done that simply when assessing information
security.
It is evidently enough that the people at the top can say that
«privacy has been taken care of», the popular but empty phrase that
makes many believe the integrity of individuals is being looked after.
The quote made me wonder how often the same approach, simply ignoring
the need for personal integrity, is chosen when yet another intrusion
into the private sphere of people in Norway is being prepared. It
rarely draws any reactions. The story of the reactions to Helse
Sør-Øst's outsourcing is, sadly, an exception and the tip of the
iceberg. I think I will keep saying no to AutoPASS and stay as far
away from the Norwegian health service as I can, until they have
demonstrated and documented that they value the individual's private
sphere and personal integrity higher than short-term gains and
societal benefit.</p>

<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>