X-Git-Url: http://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/7d79a3ed2b071df82e69eb512a6f3ed985516cd2..2aafae50dc90da8ae2432aaeca9569fbe49cf7d7:/blog/index.rss

diff --git a/blog/index.rss b/blog/index.rss
index 43191070cf..0581cf3d48 100644
--- a/blog/index.rss
+++ b/blog/index.rss
@@ -7,96 +7,78 @@
-
-  How hard can æ, ø and å be?
-  http://people.skolelinux.org/pere/blog/How_hard_can______and___be_.html
-  http://people.skolelinux.org/pere/blog/How_hard_can______and___be_.html
-  Sun, 11 Feb 2018 17:10:00 +0100
-  <img src="http://people.skolelinux.org/pere/blog/images/2018-02-11-peppes-unicode.jpeg" align="right"/>
-
-<p>It is 2018, and it is 30 years since Unicode was introduced.  Most
-of us in Norway have come to expect the use of our alphabet to just
-work with any computer system.  But it is apparently beyond the reach
-of the computers printing receipts at a restaurant.  Recently I
-visited a Peppes pizza restaurant, and noticed a few details on the
-receipt.  Notice how 'ø' and 'å' are replaced with strange symbols in
-'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi
-gleder oss til å se deg igjen'.</p>
-
-<p>I would say that this state of affairs is past sad and well into
-embarrassing.</p>
-
-<p>I removed personal and private information to be nice.</p>
+
+  Software created using taxpayers’ money should be Free Software
+  http://people.skolelinux.org/pere/blog/Software_created_using_taxpayers__money_should_be_Free_Software.html
+  http://people.skolelinux.org/pere/blog/Software_created_using_taxpayers__money_should_be_Free_Software.html
+  Thu, 30 Aug 2018 13:50:00 +0200
+  <p>It might seem obvious that software created using tax money should
+be available for everyone to use and improve.  Free Software
+Foundation Europe recently started a campaign to help more people
+understand this, and I just signed the petition on
+<a href="https://publiccode.eu/">Public Money, Public Code</a> to help
+them.  I hope you too will do the same.</p>

-
-  Legal to share more than 11,000 movies listed on IMDB?
-  http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html
-  http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_11_000_movies_listed_on_IMDB_.html
-  Sun, 7 Jan 2018 23:30:00 +0100
-  <p>I've continued to track down lists of movies that are legal to
-distribute on the Internet, and have identified more than 11,000 title
-IDs in The Internet Movie Database (IMDB) so far.  Most of them (57%)
-are feature films from the USA published before 1923.  I've also
-tracked down more than 24,000 movies I have not yet been able to map
-to an IMDB title ID, so the real number could be a lot higher.
-According to the front web page for
-<a href="https://retrofilmvault.com/">Retro Film Vault</a>, there are
-44,000 public domain films, so I guess there are still some left to
-identify.</p>
-
-<p>The complete data set is available from
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-public git repository</a>, including the scripts used to create it.
-Most of the data is collected using web scraping, for example from the
-"product catalog" of companies selling copies of public domain movies,
-but any source I find believable is used.
-I've so far had to throw out three sources because I did not trust
-the public domain status of the movies listed.</p>
-
-<p>Anyway, this is the summary of the 28 collected data sources so
-far:</p>
-
-<p><pre>
- 2352 entries (  66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
- 2302 entries ( 120 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
-  195 entries (  63 unique) with and   200 without IMDB title ID in free-movies-cinemovies.json
-   89 entries (  52 unique) with and    38 without IMDB title ID in free-movies-creative-commons.json
-  344 entries (  28 unique) with and   655 without IMDB title ID in free-movies-fesfilm.json
-  668 entries ( 209 unique) with and  1064 without IMDB title ID in free-movies-filmchest-com.json
-  830 entries (  21 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
-   19 entries (  19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
- 6822 entries (6669 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
-  137 entries (   0 unique) with and     0 without IMDB title ID in free-movies-imdb-externlist.json
- 1205 entries (  57 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
-   84 entries (  20 unique) with and   167 without IMDB title ID in free-movies-infodigi-pd.json
-  158 entries ( 135 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
-  113 entries (   4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
-  182 entries ( 100 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
-  229 entries (  87 unique) with and     1 without IMDB title ID in free-movies-manual.json
-   44 entries (   2 unique) with and    64 without IMDB title ID in free-movies-openflix.json
-  291 entries (  33 unique) with and   474 without IMDB title ID in free-movies-profilms-pd.json
-  211 entries (   7 unique) with and     0 without IMDB title ID in free-movies-publicdomainmovies-info.json
- 1232 entries (  57 unique) with and  1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
-   46 entries (  13 unique) with and    81 without IMDB title ID in free-movies-publicdomainreview.json
-  698 entries (  64 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
- 1758 entries ( 882 unique) with and  3786 without IMDB title ID in free-movies-retrofilmvault.json
-   16 entries (   0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
-   63 entries (  16 unique) with and   141 without IMDB title ID in free-movies-vodo.json
-11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID
-</pre></p>
-
-<p>I keep finding more data sources.  I found the cinemovies source
-just a few days ago, and as you can see from the summary, it extended
-my list by 63 movies.  Check out the mklist-* scripts in the git
-repository if you are curious how the lists are created.  Many of the
-titles are extracted using searches on IMDB, where I look for the
-title and year, and accept search results with only one movie listed
-if the year matches.  This allows me to automatically use many lists
-of movies without IMDB title ID references, at the cost of an
-increased risk of wrongly identifying an IMDB title ID as public
-domain.  So far my random manual checks have indicated that the method
-is solid, but I really wish all lists of public domain movies would
-include a unique movie identifier like the IMDB title ID.  It would
-make the job of counting movies in the public domain a lot
-easier.</p>
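-
-<p>The matching rule can be sketched in a few lines of shell.  This is
-not the actual mklist-* code, just an illustration; the
-search-results.json file and its layout are hypothetical stand-ins for
-the scraped search output:</p>
-
-<p><pre>
-#!/bin/sh
-# Sketch of the title/year matching rule: accept an IMDB title ID only
-# when the search returned exactly one candidate and its year matches.
-# Assumes a hypothetical search-results.json with one JSON object per
-# line: {"query": "...", "hits": [{"imdbid": "tt0017588", "year": 1927}]}
-title="$1"
-year="$2"
-hits=$(jq -c --arg t "$title" 'select(.query == $t) | .hits' search-results.json)
-if [ "$(echo "$hits" | jq 'length')" = 1 ] &&
-   [ "$(echo "$hits" | jq '.[0].year')" = "$year" ]; then
-    echo "$hits" | jq -r '.[0].imdbid'
-fi
-</pre></p>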
+
+  A bit more on privacy respecting health monitor / fitness tracker
+  http://people.skolelinux.org/pere/blog/A_bit_more_on_privacy_respecting_health_monitor___fitness_tracker.html
+  http://people.skolelinux.org/pere/blog/A_bit_more_on_privacy_respecting_health_monitor___fitness_tracker.html
+  Mon, 13 Aug 2018 09:00:00 +0200
+  <p>A few days ago, I wondered if there are any privacy respecting
+health monitors and/or fitness trackers available for sale these days.
+I would like to buy one, but do not want to share my personal data
+with strangers, nor be forced to have a mobile phone to get data out
+of the unit.  I've received some ideas, and would like to share them
+with you.</p>
+
+<p>One interesting data point was a pointer to a Free Software app for
+Android named
+<a href="https://github.com/Freeyourgadget/Gadgetbridge/">Gadgetbridge</a>.
+It provides cloudless collection and storage of data from a variety of
+trackers.  Its
+<a href="https://github.com/Freeyourgadget/Gadgetbridge/#supported-devices">list
+of supported devices</a> is a good indicator of units where the
+protocol is fairly open, as it is obviously being handled by Free
+Software.  Other units reportedly encrypt the collected information
+with their own public key, making sure only the vendor cloud service
+is able to extract data from the unit.  The people contacting me about
+Gadgetbridge said they were using
+<a href="https://us.amazfit.com/shop/bip?variant=336750">Amazfit
+Bip</a> and
+<a href="http://www.xiaomimi6phone.com/xiaomi-mi-band-3-features-release-date-rumors/">Xiaomi
+Band 3</a>.</p>
+
+<p>I also got a suggestion to look at some of the units from Garmin.
+I was told their GPS watches can be connected via USB and show up as a
+USB storage device with
+<a href="https://www.gpsbabel.org/htmldoc-development/fmt_garmin_fit.html">Garmin
+FIT files</a> containing the collected measurements.  While
+proprietary, FIT files apparently can be read at least by
+<a href="https://www.gpsbabel.org">GPSBabel</a> and the
+<a href="https://apps.nextcloud.com/apps/gpxpod">GpxPod</a> Nextcloud
+app.  It is unclear to me if they can read step count and heart rate
+data.  The person I talked to was using a
+<a href="https://buy.garmin.com/en-US/US/p/564291">Garmin Forerunner
+935</a>, which is a fairly expensive unit.  I doubt it is worth it for
+a unit where the vendor clearly is trying its best to move from open
+to closed systems.  I still remember when Garmin dropped NMEA support
+in its GPSes.</p>
+
+<p>A final idea was to build one's own unit, perhaps by basing it on a
+wearable hardware platform like
+<a href="https://learn.adafruit.com/flora-geo-watch">the Flora Geo
+Watch</a>.  Sounds like fun, but I had more money than time to spend
+on the topic, so I suspect it will have to wait for another time.</p>
+
+<p>While I was working on tracking down links, I came across an
+inspiring TED talk by Dave deBronkart about
+<a href="https://archive.org/details/DavedeBronkart_2010X">being an
+e-patient</a>, and discovered the web site
+<a href="https://participatorymedicine.org/epatients/">Participatory
+Medicine</a>.  If you too want to track your own health and fitness
+without having information about your private life floating around on
+computers owned by others, I recommend checking it out.</p>
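+
+<p>As a footnote on the Garmin option: if the FIT files really can be
+read by GPSBabel, converting one to GPX for further processing should
+look something like the following.  The garmin_fit format name is
+documented by GPSBabel; the file name is a hypothetical example.</p>
+
+<blockquote><pre>
+# Convert a FIT file copied off the watch into GPX using GPSBabel.
+gpsbabel -i garmin_fit -f activity.fit -o gpx -F activity.gpx
+</pre></blockquote>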

 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address
 <b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>

@@ -105,513 +87,99 @@
-
-  Comments on «Evaluation of (il)legality» for Popcorn Time
-  http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html
-  http://people.skolelinux.org/pere/blog/Kommentarer_til__Evaluation_of__il_legality__for_Popcorn_Time.html
-  Wed, 20 Dec 2017 11:40:00 +0100
-  <p>Yesterday I appeared in Follo district court as an expert witness,
-presenting my investigations into
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">counting
-film works in the public domain</a>, related to the involvement of
-<a href="https://www.nuug.no/">the NUUG association</a> in
-<a href="https://www.nuug.no/news/tags/dns-domenebeslag/">the case
-concerning Økokrim's seizure, and later confiscation, of the DNS
-domain popcorn-time.no</a>.  I talked about several things, but mostly
-about my assessment of how the film industry has measured how illegal
-Popcorn Time is.  As far as I can tell, the film industry's
-measurement was passed on unchanged by the Norwegian police, and the
-courts have relied on the measurement when evaluating Popcorn Time
-both in Norway and abroad (the 99% figure is also cited in foreign
-court decisions).</p>
-
-<p>Ahead of my testimony I wrote a note, mostly to myself, with the
-points I wanted to get across.  Here is a copy of the note I wrote and
-gave to the prosecution.  Oddly enough, the judges did not want the
-note, so if I understood the court process correctly only the
-histogram graph was entered into the case documentation.  The judges
-were apparently only interested in what I said in court, not what I
-had written beforehand.  In any case, I assume more people than me may
-enjoy the text, and therefore publish it here.  I attach a transcript
-of document 09,13, which is the central document I comment on.</p>
-
-<p><strong>Comments on «Evaluation of (il)legality» for Popcorn
-Time</strong></p>
-
-<p><strong>Summary</strong></p>
-
-<p>The measurement method Økokrim relies on when claiming that 99% of
-the films available from Popcorn Time are shared illegally has
-weaknesses.</p>
-
-<p>Whoever assessed whether films can be legally shared has failed to
-identify films that can be legally shared, and has apparently assumed
-that only very old films can be shared legally.  Økokrim assumes there
-is only one film, the 1928 Charlie Chaplin film «The Circus», that can
-be freely shared among those observed available via various Popcorn
-Time variants.  I find three more among the observed films: «The Brain
-That Wouldn't Die» from 1962, «God’s Little Acre» from 1958 and «She
-Wore a Yellow Ribbon» from 1949.  It is quite possible there are more.
-The data set Økokrim relies on when claiming that less than 1% can be
-shared legally thus contains at least four times as many films that
-can legally be shared on the Internet.</p>
-
-<p>Secondly, the sample picked by searching for random words taken
-from the Dale-Chall word list deviates from the year distribution of
-the film catalogs used as a whole, which affects the ratio between
-films that can be legally shared and films that cannot.
-In addition, picking the top part (the first five) of the search
-results deviates from the correct year distribution, which affects the
-share of public domain works in the search result.</p>
-
-<p>What is measured is not the (il)legality of using Popcorn Time, but
-the (il)legality of the content of bittorrent film catalogs that are
-maintained independently of Popcorn Time.</p>
-
-<p>Documents discussed: 09,12, <a href="#dok-09-13">09,13</a>, 09,14,
-09,18, 09,19, 09,20.</p>
-
-<p><strong>Further comments</strong></p>
-
-<p>Økokrim has told the courts that at least 99% of everything
-available from various Popcorn Time variants is shared illegally on
-the Internet.  I became curious about how they arrived at this figure,
-and this note is a collection of comments on the measurement Økokrim
-refers to.  Part of my reason for looking into the case is that I am
-interested in identifying and counting how many artistic works have
-entered the public domain or for other reasons can be legally shared
-on the Internet, and I was therefore curious about how the one percent
-that might be legally shared had been found.</p>
-
-<p>The 99% figure comes from an uncredited and undated note that sets
-out to document a method for measuring how (il)legal various Popcorn
-Time variants are.</p>
-
-<p>Briefly summarized, the method document explains that because it is
-not possible to obtain a complete list of all film titles available
-via Popcorn Time, something meant to be a representative sample is
-created by picking 50 search words longer than three characters from
-the word list known as Dale-Chall.  For each search word a search is
-conducted, and the first five films in the search result are collected
-until 100 unique film titles have been found.  If 50 search words were
-not enough to reach 100 unique film titles, more films from each
-search result were added.  If this was still not enough, further
-randomly selected search words were drawn and searched for until 100
-unique film titles had been identified.</p>
-
-<p>Then, for each film title, it was «verified whether or not there is
-a reasonable expectation that the work is copyrighted» by checking
-whether the film was available in IMDB, and by looking at the
-director, the release year, the release date for specific markets, and
-which production and distribution companies were registered.</p>
-
-<p>The method is reproduced in the uncredited documents 09,13 and
-09,19, and is also described from page 47 of document 09,20, slides
-dated 2017-02-01.  The latter is credited to Geerart Bourlon from the
-Motion Picture Association EMEA.  The method appears to have several
-weaknesses that bias the results.  It starts by stating that it is not
-possible to extract a complete list of all available film titles, and
-that this is the rationale for the choice of method.  This assumption
-is not consistent with document 09,12, which also lacks author and
-date.  Document 09,12 describes how the entire catalog content was
-downloaded and counted.  Document 09,12 is possibly the same report
-that was referred to in the judgment from Oslo district court of
-2017-11-03
-(<a href="https://www.domstol.no/no/Enkelt-domstol/Oslo--tingrett/Nyheter/ma-sperre-for-popcorn-time/">case
-17-093347TVI-OTIR/05</a>) as a report of 1 June 2017 by Alexander Kind
-Petersen, but I have not compared the documents word for word to
-verify this.</p>
-
-<p>IMDB is short for The Internet Movie Database, a well-regarded
-commercial web service used actively by the film industry and others
-to keep track of which feature films (and a number of other films)
-exist or are in production, and of information about these films.  The
-data quality is high, with few errors and few missing films.  IMDB
-does not show information about the copyright status of films on the
-info page for each film.  As part of the IMDB service there are lists
-of films, compiled by volunteers, of what is believed to be works in
-the public domain.</p>
-
-<p>There are several sources that can be used to find films that are
-in the public domain or have terms of use that make it legal for
-everyone to share them on the Internet.  In recent weeks I have tried
-to collect and cross-link these lists in an attempt to count the
-number of films in the public domain.  Starting from such lists (and,
-for the Internet Archive, published films), I have so far managed to
-identify more than 11,000 films, mainly feature films.</p>
-
-<p>The vast majority of the entries are taken from IMDB itself, based
-on the fact that all films made in the USA before 1923 have entered
-the public domain.  The corresponding cut-off date for Great Britain
-is 1912-07-01, but these make up only a very small share of the
-feature films in IMDB (19 in total).  Another large share comes from
-the Internet Archive, where I have identified films with references to
-IMDB.  The Internet Archive, based in the USA, has a
-<a href="https://archive.org/about/terms.php">policy of only
-publishing films that are legal to distribute</a>.  During this work I
-have come across several films that have been removed from the
-Internet Archive, which leads me to conclude that the people
-controlling the Internet Archive take an active approach to only
-having legal content there, even though it is largely run by
-volunteers.  Another large list of films comes from the commercial
-company Retro Film Vault, which sells public domain films to the TV
-and film industry.  I have also made use of lists of films claimed to
-be in the public domain, namely Public Domain Review, Public Domain
-Torrents and Public Domain Movies (.net and .info), as well as lists
-of Creative Commons licensed films from Wikipedia, VODO and The Hill
-Productions.  I have done some spot checks by evaluating films that
-are only mentioned on one list.  Where I found errors that made me
-doubt the judgment of those who compiled the list, I discarded the
-list entirely (this applies to one list from IMDB).</p>
-
-<p>Starting from works that can be assumed to be legally shared on the
-Internet (from, among others, the Internet Archive, Public Domain
-Torrents, Public Domain Review and Public Domain Movies), and linking
-them to entries in IMDB, I have so far managed to identify more than
-11,000 films (mainly feature films) that there is reason to believe
-can be legally distributed by everyone on the Internet.  As additional
-sources, lists of films assumed or claimed to be in the public domain
-have been used.  These sources come from communities working to make
-all works that have entered the public domain, or have terms of use
-that allow sharing, available to the general public.</p>
-
-<p>In addition to the more than 11,000 films where the IMDB title ID
-has been identified, I have found more than 20,000 entries where I
-have not yet had the capacity to track down the IMDB title ID.  Some
-of these are probably duplicates of the IMDB entries already
-identified, but hardly all of them.  Retro Film Vault claims to have
-44,000 public domain film works in its catalog, so the real number may
-be considerably higher than what I have managed to identify so far.
-The conclusion is that 11,000 is a lower bound for how many films in
-IMDB can be legally shared on the Internet.  According to
-<a href="http://www.imdb.com/stats">statistics from IMDB</a> there are
-4.6 million titles registered, of which 3 million are TV series
-episodes.  I have not worked out how they are distributed per
-year.</p>
-
-<p>If one distributes by year all the title IDs in IMDB that are
-claimed to be legally shareable on the Internet, one gets the
-following histogram:</p>
-
-<p align="center"><img width="80%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year.png"></p>
-
-<p>In the histogram one can see that the effect of missing
-registration, or missing renewal of registration, is that many films
-released in the USA before 1978 are in the public domain today.  One
-can also see that several films released in recent years have terms of
-use that allow sharing, possibly due to the rise of the
-<a href="https://creativecommons.org/">Creative Commons</a>
-movement.</p>
-
-<p>For machine analysis of the catalogs I wrote a small program that
-connects to the bittorrent catalogs used by various Popcorn Time
-variants and downloads the complete list of films in the catalogs,
-which confirms that it is possible to fetch a complete list of all
-available film titles.  I have looked at four bittorrent catalogs.
-The first is used by the client available from www.popcorntime.sh and
-is named 'sh' in this document.  The second is, according to document
-09,12, used by the client available from popcorntime.ag and
-popcorntime.sh, and is named 'yts' in this document.  The third is
-used by the web pages available from popcorntime-online.tv and is
-named 'apidomain' in this document.  The fourth is used by the client
-available from popcorn-time.to according to document 09,12, and is
-named 'ukrfnlge' in this document.</p>
-
-<p>The method Økokrim relies on states in its point four that judgment
-is a suitable way to find out whether a film can be legally shared on
-the Internet or not, saying it was «verified whether or not there is a
-reasonable expectation that the work is copyrighted».  First, it is
-not enough to establish whether a film is «copyrighted» to know
-whether it is legal to share it on the Internet or not, as there are
-several films with copyright terms of use that allow sharing on the
-Internet.  Examples of this are Creative Commons licensed films such
-as Citizenfour from 2014 and Sintel from 2010.  In addition, there are
-several films that are now in the public domain due to missing
-registration, or missing renewal of registration, even though the
-director, the production company and the distributor all want
-protection.  Examples of this are Plan 9 from Outer Space from 1959
-and Night of the Living Dead from 1968.  All films from the USA that
-were in the public domain before 1989-03-01 remained in the public
-domain, as the Berne Convention, which took effect in the USA at that
-time, was not given retroactive force.
-If there is anything the
-<a href="http://www.latimes.com/local/lanow/la-me-ln-happy-birthday-song-lawsuit-decision-20150922-story.html">story
-of the song «Happy birthday»</a> tells us, where payment for use was
-collected for decades even though the song was not actually protected
-by copyright law, it is that each individual work must be assessed
-carefully and in detail before one can establish whether the work is
-in the public domain or not; it is not enough to take the word of
-self-declared rights holders.  Several examples of public domain works
-misclassified as protected can be found in document 09,18, which lists
-search results for the client referred to as popcorntime.sh and,
-according to the note, contains only one film (The Circus from 1928)
-that can, with some doubt, be assumed to be in the public domain.</p>
-
-<p>On a quick read-through of document 09,18, which contains
-screenshots from the use of a Popcorn Time variant, I found mentioned
-both the film «The Brain That Wouldn't Die» from 1962, which is
-<a href="https://archive.org/details/brain_that_wouldnt_die">available
-from the Internet Archive</a> and which
-<a href="https://en.wikipedia.org/wiki/List_of_films_in_the_public_domain_in_the_United_States">according
-to Wikipedia is in the public domain in the USA</a> because it was
-released in 1962 without 'copyright' marking, and the film «God’s
-Little Acre» from 1958,
-<a href="https://en.wikipedia.org/wiki/God%27s_Little_Acre_%28film%29">described
-on Wikipedia</a>, where it is stated that the black-and-white version
-is in the public domain.  It is not clear from document 09,18 whether
-the film mentioned there is the black-and-white version.  For capacity
-reasons, and because the film list in document 09,18 is not machine
-readable, I have not tried to check all the films listed there against
-the list of films assumed to be legally distributable on the
-Internet.</p>
-
-<p>In a machine pass over the list of IMDB references under the
-spreadsheet tab 'Unique titles' in document 09,14, I additionally
-found the film «She Wore a Yellow Ribbon» from 1949, which is probably
-also misclassified.  The film «She Wore a Yellow Ribbon» is available
-from the Internet Archive and is marked as public domain there.  There
-thus appear to be at least four times as many films that can be
-legally shared on the Internet as what is assumed when claiming that
-at least 99% of the content is illegal.  I do not rule out that closer
-investigation could uncover more.  The point is in any case that the
-method's step of judging whether it is «reasonable to expect that the
-work is copyrighted» makes the method unreliable.</p>
-
-<p>The measurement method in question picks random search terms from
-the Dale-Chall word list.  That word list contains 3000 simple English
-words that fourth graders in the USA are expected to understand.  It
-is not stated why this particular word list was chosen, and it is
-unclear to me whether it is suited to obtaining a representative
-sample of films.  Many of the words give an empty search result.  By
-simulating similar searches I see large deviations from the catalog's
-distribution for single measurements.  This suggests that single
-measurements of 100 films, the way the measurement method describes,
-are not well suited to finding the share of illegal content in the
-bittorrent catalogs.</p>
-
-<p>One can counteract this large deviation for single measurements by
-doing many searches and merging the results.  I have tested this by
-performing 100 single measurements (i.e. measuring (100x100=) 10,000
-randomly selected films), which gives a smaller, but still
-significant, deviation compared to counting films per year in the
-entire catalog.</p>
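-
-<p>The simulation mentioned above can be sketched with two shell
-commands, assuming a hypothetical catalog.json dump holding a JSON
-array of objects with a "year" field for each film:</p>
-
-<p><pre>
-# Year distribution of the complete catalog, and of one random sample
-# of 100 films.  The counts must be normalized by their totals before
-# the two distributions are compared; repeating the sampling many
-# times and merging the samples, as described above, brings the sample
-# distribution closer to the catalog distribution.
-jq -r '.[].year' catalog.json | sort | uniq -c > dist-catalog.txt
-jq -r '.[].year' catalog.json | shuf -n 100 | sort | uniq -c > dist-sample.txt
-</pre></p>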
-
-<p>The measurement method takes the top five hits from the search
-result.  The search results are sorted by the number of bittorrent
-clients registered as sharers in the catalogs, which can skew the
-sample towards films that are popular among those using the bittorrent
-catalogs, without saying anything about which content is available or
-which content is shared using Popcorn Time clients.  I have tried to
-measure how large such a bias might be by comparing with the
-distribution one gets by instead taking the bottom 5 hits of the
-search result.  The difference between these two methods is clearly
-visible in the histograms for several of the catalogs.  Here are
-histograms of films found in the complete catalog (green line), and of
-films found by searching for words from Dale-Chall.  Graphs marked
-'top' pick the first 5 hits of the search result, while those marked
-'bottom' pick the last 5.  One can see that the results are
-significantly affected by whether one looks at the first or the last
-films in a search hit.</p>
-
-<p align="center">
-  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-top.png"/>
-  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-sh-bottom.png"/>
-  <br>
-  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-top.png"/>
-  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-yts-bottom.png"/>
-  <br>
-  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-top.png"/>
-  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-ukrfnlge-bottom.png"/>
-  <br>
-  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-top.png"/>
-  <img width="40%" src="http://people.skolelinux.org/pere/blog/images/2017-12-20-histogram-year-apidomain-bottom.png"/>
-</p>
-
-<p>It is worth noting that the bittorrent catalogs in question were
-not made for use with Popcorn Time.  For example, the YTS catalog,
-used by the client downloaded from popcorntime.sh, belongs to an
-independent file sharing related web site, YTS.AG, with a separate
-user community.  The measurement method proposed by Økokrim thus does
-not measure the (il)legality of using Popcorn Time, but the
-(il)legality of the content of these catalogs.</p>
-
-<hr>
-
-<p id="dok-09-13">The method from Økokrim's document 09,13 in the
-criminal case concerning the DNS seizure.</p>
-
-<p><strong>1. Evaluation of (il)legality</strong></p>
-
-<p><strong>1.1. Methodology</strong></p>
-
-<p>Due to its technical configuration, Popcorn Time applications don't
-allow to make a full list of all titles made available.  In order to
-evaluate the level of illegal operation of PCT, the following
-methodology was applied:</p>
-
-<ol>
-
-  <li>A random selection of 50 keywords, greater than 3 letters, was
-  made from the Dale-Chall list that contains 3000 simple English
-  words1.  The selection was made by using a Random Number
-  Generator2.</li>
-
-  <li>For each keyword, starting with the first randomly selected
-  keyword, a search query was conducted in the movie section of the
-  respective Popcorn Time application.  For each keyword, the first
-  five results were added to the title list until the number of 100
-  unique titles was reached (duplicates were removed).</li>
-
-  <li>For one fork, .CH, insufficient titles were generated via this
-  approach to reach 100 titles.  This was solved by adding any
-  additional query results above five for each of the 50 keywords.
-  Since this still was not enough, another 42 random keywords were
-  selected to finally reach 100 titles.</li>
-
-  <li>It was verified whether or not there is a reasonable expectation
-  that the work is copyrighted by checking if they are available on
-  IMDb, also verifying the director, the year when the title was
-  released, the release date for a certain market, the production
-  company/ies of the title and the distribution company/ies.</li>
-
-</ol>
-
-<p><strong>1.2. Results</strong></p>
-
-<p>Between 6 and 9 June 2016, four forks of Popcorn Time were
-investigated: popcorn-time.to, popcorntime.ag, popcorntime.sh and
-popcorntime.ch.  An excel sheet with the results is included in
-Appendix 1.  Screenshots were secured in separate Appendixes for each
-respective fork, see Appendix 2-5.</p>
-
-<p>For each fork, out of 100, de-duplicated titles it was possible to
-retrieve data according to the parameters set out above that indicate
-that the title is commercially available.  Per fork, there was 1 title
-that presumably falls within the public domain, i.e. the 1928 movie
-"The Circus" by and with Charles Chaplin.</p>
-
-<p>Based on the above it is reasonable to assume that 99% of the movie
-content of each fork is copyright protected and is made available
-illegally.</p>
-
-<p>This exercise was not repeated for TV series, but considering that
-besides production companies and distribution companies also
-broadcasters may have relevant rights, it is reasonable to assume that
-at least a similar level of infringement will be established.</p>
-
-<p>Based on the above it is reasonable to assume that 99% of all the
-content of each fork is copyright protected and are made available
-illegally.</p>

-
-  Cura, the nice 3D print slicer, is now in Debian Unstable
-  http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html
-  http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html
-  Sun, 17 Dec 2017 07:00:00 +0100
-  <p>After several months of working and waiting, I am happy to report
-that the nice and user friendly 3D printer slicer software Cura just
-entered Debian Unstable.  It consists of six packages,
-<a href="https://tracker.debian.org/pkg/cura">cura</a>,
-<a href="https://tracker.debian.org/pkg/cura-engine">cura-engine</a>,
-<a href="https://tracker.debian.org/pkg/libarcus">libarcus</a>,
-<a href="https://tracker.debian.org/pkg/fdm-materials">fdm-materials</a>,
-<a href="https://tracker.debian.org/pkg/libsavitar">libsavitar</a> and
-<a href="https://tracker.debian.org/pkg/uranium">uranium</a>.  The last
-two, uranium and cura, entered Unstable yesterday.  This should make
-it easier for Debian users to print on at least the Ultimaker class of
-3D printers.  My nearest 3D printer is an Ultimaker 2+, so it will
-make life easier for at least me. :)</p>
-
-<p>The work to make this happen was done by Gregor Riepl, and I was
-happy to assist him in sponsoring the packages.  With the introduction
-of Cura, Debian is up to three 3D printer slicers at your service:
-Cura, Slic3r and Slic3r Prusa.  If you own or have access to a 3D
-printer, give it a go. :)</p>
-
-<p>The 3D printer software is maintained by the 3D printer Debian
-team, flocking together on the
-<a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/3dprinter-general">3dprinter-general</a>
-mailing list and the
-<a href="irc://irc.debian.org/#debian-3dprinting">#debian-3dprinting</a>
-IRC channel.</p>
-
-<p>The next step for Cura in Debian is to update the cura package to
-version 3.0.3, and then update the entire set of packages to version
-3.1.0, which showed up in the last few days.</p>
+
+  Privacy respecting health monitor / fitness tracker?
+  http://people.skolelinux.org/pere/blog/Privacy_respecting_health_monitor___fitness_tracker_.html
+  http://people.skolelinux.org/pere/blog/Privacy_respecting_health_monitor___fitness_tracker_.html
+  Tue, 7 Aug 2018 16:00:00 +0200
+  <p>Dear lazyweb,</p>
+
+<p>I wonder, is there a fitness tracker / health monitor available for
+sale today that respects the user's privacy?  By this I mean a
+watch/bracelet capable of measuring pulse rate and other
+fitness/health related values (and by all means, also the correct time
+and location if possible), where the measurements are
+<strong>only</strong> provided for me to extract/read from the unit
+with a computer, without a radio beacon and Internet connection.  In
+other words, it does not depend on a cell phone app, and does not make
+the measurements available via other people's computers (aka "the
+cloud").  The collected data should be available using only free
+software.  I'm not interested in depending on some non-free software
+that will leave me high and dry some time in the future.  I've been
+unable to find any such unit.  I would like to buy it.  The ones I
+have seen for sale here in Norway are proud to report that they share
+my health data with strangers (aka "cloud enabled").  Is there an
+alternative?  I'm not interested in giving money to people requiring
+me to accept "privacy terms" to allow myself to measure my own
+health.</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>

-
-  Idea for finding all public domain movies in the USA
-  http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html
-  http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html
-  Wed, 13 Dec 2017 10:15:00 +0100
-  <p>While looking at
-<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
-of the copyright renewal entries for movies published in the USA</a>,
-an idea occurred to me.  The number of renewals per year is so small
-that it should be fairly quick to transcribe them all and add
-references to the corresponding IMDB title IDs.  This would give the
-(presumably) complete list of movies published 28 years earlier that
-did _not_ enter the public domain for the transcribed year.  By
-fetching the list of USA movies published 28 years earlier and
-subtracting the movies with renewals, we should be left with movies
-registered in IMDB that are now in the public domain.  For the year
-1955 (which is the one I have looked at the most), the total number of
-pages to transcribe is 21.  For the years from 1950 to 1978, it should
-be in the range 500-600 pages.  It is just a few days of work, and
-spread among a small group of people it should be doable in a few
-weeks of spare time.</p>
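-
-<p>The subtraction step itself is trivial once the two lists exist.  A
-minimal sketch, assuming hypothetical text files with one IMDB title
-ID per line (all USA movies from 1927, and the 1955 renewals mapped to
-title IDs):</p>
-
-<p><pre>
-# Movies published in 1927 that do not show up among the 1955 renewals
-# are candidates for the public domain.  comm -23 prints the lines
-# that only occur in the first (sorted) input file.
-sort all-movies-1927.txt > all-sorted.txt
-sort renewals-1955.txt > renewed-sorted.txt
-comm -23 all-sorted.txt renewed-sorted.txt > public-domain-candidates.txt
-</pre></p>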
-
-<p>A typical copyright renewal entry looks like this (the first one
-listed for 1955):</p>
-
-<p><blockquote>
-  ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
-  Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
-  10Jun55; R151558.
-</blockquote></p>
-
-<p>The movie title as well as the registration and renewal dates are
-easy enough for a program to locate (split on the first comma and look
-for DDmmmYY).  The rest of the text is not required to find the movie
-in IMDB, but is useful to confirm that the correct movie is found.  I
-am not quite sure what the L and R numbers mean, but suspect they are
-reference numbers into the archive of the US Copyright Office.</p>
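-
-<p>A rough shell sketch of that parsing rule, assuming a hypothetical
-entries.txt with one renewal entry per line:</p>
-
-<p><pre>
-#!/bin/sh
-# Print the title (the part before the first comma) and the DDmmmYY
-# dates (registration and renewal) for each renewal entry.
-while IFS= read -r entry; do
-    title=${entry%%,*}
-    dates=$(printf '%s\n' "$entry" |
-        grep -oE '[0-9]{1,2}[A-Z][a-z]{2}[0-9]{2}' | paste -sd' ' -)
-    printf '%s\t%s\n' "$title" "$dates"
-done < entries.txt
-</pre></p>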
-
-<p>Tracking down the equivalent IMDB title ID is probably going to be
-a manual task, but given the year it is fairly easy to search for the
-movie title using for example
-<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
-Using this search, I find that the equivalent IMDB title ID for the
-first renewal entry from 1955 is
-<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p>
-
-<p>I suspect the best way to do this would be to make a specialised
-web service to make it easy for contributors to transcribe and track
-down IMDB title IDs.  In the web service, once an entry is
-transcribed, the title and year could be extracted from the text, and
-a search conducted in IMDB for the user to pick the equivalent IMDB
-title ID right away.  By spreading the work among volunteers, it would
-also be possible to have at least two persons transcribe the same
-entries, to be able to discover any typos introduced.  But I will need
-help to make this happen, as I lack the spare time to do all of this
-on my own.  If you would like to help, please get in touch.  Perhaps
-you can draft a web service for crowd sourcing the task?</p>
-
-<p>Note, Project Gutenberg already has some
-<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed
-copies of the US Copyright Office renewal protocols</a>, but I have
-not been able to find any film renewals there, so I suspect they only
-have copies of renewals for written works.  I have not been able to
-find any transcribed versions of movie renewals so far.  Perhaps they
-exist somewhere?</p>
-
-<p>I would love to figure out methods for finding all the public
-domain works in other countries too, but it is a lot harder.  At least
-for Norway and Great Britain, such work involves tracking down the
-people involved in making the movie and figuring out when they died.
-It is hard enough to figure out who was part of making a movie, but I
-do not know how to automate such a procedure without a registry of
-every person involved in making movies and their death year.</p>
+
+  Sharing images with friends and family using RSS and EXIF/XMP metadata
+  http://people.skolelinux.org/pere/blog/Sharing_images_with_friends_and_family_using_RSS_and_EXIF_XMP_metadata.html
+  http://people.skolelinux.org/pere/blog/Sharing_images_with_friends_and_family_using_RSS_and_EXIF_XMP_metadata.html
+  Tue, 31 Jul 2018 23:30:00 +0200
+  <p>For a while now, I have looked for a sensible way to share images
+with my family using a self hosted solution, as it is unacceptable to
+place images from my personal life under the control of strangers
+working for data hoarders like Google or Dropbox.  The last few days I
+have drafted an approach that might work out, and I would like to
+share it with you.  I would like to publish images on a server under
+my control, and point some Internet connected display units, using
+some free and open standard, to the images I published.  As my primary
+language is not limited to ASCII, I need to store metadata using
+UTF-8.  Many years ago, I hoped to find a digital photo frame capable
+of reading an RSS feed with image references (aka using the
+&lt;enclosure&gt; RSS tag), but was unable to find a current supplier
+of such frames.  In the end I gave up that approach.</p>
+
+<p>Some months ago, I discovered that
+<a href="https://www.jwz.org/xscreensaver/">XScreensaver</a> is able to
+read images from an RSS feed, and used it to set up a screen saver on
+my home info screen, showing images from the Daily images feed from
+NASA.  This proved to work well.  More recently I discovered that
+<a href="https://kodi.tv">Kodi</a> (both using
+<a href="https://www.openelec.tv/">OpenELEC</a> and
+<a href="https://libreelec.tv">LibreELEC</a>) provides the
+<a href="https://github.com/grinsted/script.screensaver.feedreader">Feedreader</a>
+screen saver capable of reading an RSS feed with images and news.  For
+fun, I used it this summer to test Kodi on my parents' TV by hooking
+up a Raspberry Pi unit with LibreELEC, and wanted to provide them with
+a screen saver showing selected pictures from my collection.</p>
+
+<p>Armed with motivation and a test photo frame, I set out to generate
+an RSS feed for the Kodi instance.  I adjusted my <a
+href="https://freedombox.org/">Freedombox</a> instance, created
+/var/www/html/privatepictures/, and wrote a small Perl script to
+extract title and description metadata from the photo files and
+generate the RSS file.  I ended up using Perl instead of python, as
+the libimage-exiftool-perl Debian package seemed to handle the
+EXIF/XMP tags I ended up using, while python3-exif did not.  The
+relevant EXIF tags only support ASCII, so I had to find better
+alternatives.  XMP seems to have the support I need.</p>
+
+<p>I am a bit unsure which EXIF/XMP tags to use, as I would like to
+use tags that can be easily added/updated using normal free software
+photo managing software.  I ended up using the tags set using this
+exiftool command, as these tags can also be set using digiKam:</p>
+
+<blockquote><pre>
+exiftool -headline='The RSS image title' \
+  -description='The RSS image description.' \
+  -subject+=for-family photo.jpeg
+</pre></blockquote>
+
+<p>I initially tried the "-title" and "-keyword" tags, but they were
+invisible in digiKam, so I changed to "-headline" and "-subject".  I
+use the keyword/subject 'for-family' to flag that the photo should be
+shared with my family.  Images with this keyword set are located and
+copied into my Freedombox for the RSS generating script to find.</p>
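+
+<p>The Perl script itself is not included here, but its job can be
+outlined in shell using exiftool directly.  This is a rough
+illustration of the idea, not the actual script; the base URL and
+directory are hypothetical:</p>
+
+<blockquote><pre>
+#!/bin/sh
+# Emit a minimal RSS feed with one &lt;enclosure&gt; item per photo
+# flagged with the for-family subject.
+base=https://example.com/privatepictures
+dir=/var/www/html/privatepictures
+printf '&lt;rss version="2.0"&gt;&lt;channel&gt;&lt;title&gt;Family pictures&lt;/title&gt;\n'
+for f in "$dir"/*.jpeg; do
+    exiftool -q -if '$Subject =~ /for-family/' -p \
+        "&lt;item&gt;&lt;title&gt;\$Headline&lt;/title&gt;&lt;description&gt;\$Description&lt;/description&gt;&lt;enclosure url=\"$base/$(basename "$f")\" type=\"image/jpeg\"/&gt;&lt;/item&gt;" "$f"
+done
+printf '&lt;/channel&gt;&lt;/rss&gt;\n'
+</pre></blockquote>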
+
+<p>Are there better ways to do this?  Get in touch if you have better
+suggestions.</p>

 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address
 <b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>

@@ -620,42 +188,98 @@
-
-  Is the short movie «Empty Socks» from 1927 in the public domain or not?
-  http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html
-  http://people.skolelinux.org/pere/blog/Is_the_short_movie__Empty_Socks__from_1927_in_the_public_domain_or_not_.html
-  Tue, 5 Dec 2017 12:30:00 +0100
-  <p>Three years ago, a presumed lost animation film,
-<a href="https://en.wikipedia.org/wiki/Empty_Socks">Empty Socks from
-1927</a>, was discovered in the Norwegian National Library.  At the
-time it was discovered, it was generally assumed to be copyrighted by
-The Walt Disney Company, and I blogged about
-<a href="http://people.skolelinux.org/pere/blog/Opphavsretts_status_for__Empty_Socks__fra_1927_.html">my
-reasoning to conclude</a> that it would enter the Norwegian
-equivalent of the public domain in 2053, based on my understanding of
-Norwegian Copyright Law.  But a few days ago, I came across
-<a href="http://www.toonzone.net/forums/threads/exposed-disneys-repurchase-of-oswald-the-rabbit-a-sham.4792291/">a
-blog post claiming the movie was already in the public domain</a>, at
-least in the USA.  The reasoning is as follows: The film was released
-in November or December 1927 (sources disagree), and presumably
-registered its copyright that year.  At that time, rights holders of
-movies registered by the copyright office received government
-protection for their work for 28 years.  After 28 years, the copyright
-had to be renewed if they wanted the government to protect it further.
-The blog post I found claims such renewal did not happen for this
-movie, and thus it entered the public domain in 1956.  Yet someone
-claims the copyright was renewed and the movie is still copyright
-protected.  Can anyone help me figure out which claim is correct?  I
-have not been able to find Empty Socks in the Catalog of copyright
-entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures
-<a href="http://onlinebooks.library.upenn.edu/cce/1955r.html#film">available
-from the University of Pennsylvania</a>, neither on
-<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=83;num=45">page
-45 for the first half of 1955</a>, nor on
-<a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015084451130;page=root;view=image;size=100;seq=175;num=119">page
-119 for the second half of 1955</a>.  It is of course possible that
-the renewal entry was left out of the printed catalog by mistake.  Is
-there some way to rule out this possibility?  Please help, and update
-the Wikipedia page with your findings.</p>
+
+  Simple streaming the Linux desktop to Kodi using GStreamer and RTP
+  http://people.skolelinux.org/pere/blog/Simple_streaming_the_Linux_desktop_to_Kodi_using_GStreamer_and_RTP.html
+  http://people.skolelinux.org/pere/blog/Simple_streaming_the_Linux_desktop_to_Kodi_using_GStreamer_and_RTP.html
+  Thu, 12 Jul 2018 17:55:00 +0200
+  <p>Last night, I wrote
+<a href="http://people.skolelinux.org/pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html">a
+recipe to stream a Linux desktop using VLC to an instance of Kodi</a>.
+During the day I received valuable feedback, and thanks to the
+suggestions I have been able to rewrite the recipe into a much simpler
+approach requiring no setup at all.  It is a single script that takes
+care of it all.</p>
+
+<p>This new script uses GStreamer instead of VLC to capture the
+desktop and stream it to Kodi.  This fixed the video quality issue I
+saw initially.  It further removes the need to add an m3u file on the
+Kodi machine, as it instead connects to
+<a href="https://kodi.wiki/view/JSON-RPC_API/v8">the JSON-RPC API in
+Kodi</a> and simply asks Kodi to play from the stream created using
+GStreamer.  Streaming the desktop to Kodi now becomes trivial.  Copy
+the script below, run it with the DNS name or IP address of the Kodi
+server to stream to as the only argument, and watch your screen show
+up on the Kodi screen.  Note, it depends on multicast on the local
+network, so if you need to stream outside the local network, the
+script must be modified.  Also note, I have no idea if audio works, as
+I only care about the picture part.</p>
+
+<blockquote><pre>
+#!/bin/sh
+#
+# Stream the Linux desktop view to Kodi.  See
+# http://people.skolelinux.org/pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html
+# for background information.
+
+# Make sure the stream is stopped in Kodi and the gstreamer process is
+# killed if something goes wrong (for example if curl is unable to find the
+# kodi server).  Do the same when interrupting this script.
+kodicmd() {
+    host="$1"
+    cmd="$2"
+    params="$3"
+    curl --silent --header 'Content-Type: application/json' \
+         --data-binary "{ \"id\": 1, \"jsonrpc\": \"2.0\", \"method\": \"$cmd\", \"params\": $params }" \
+         "http://$host/jsonrpc"
+}
+cleanup() {
+    if [ -n "$kodihost" ] ; then
+        # Stop the playing when we end
+        playerid=$(kodicmd "$kodihost" Player.GetActivePlayers "{}" |
+                       jq .result[].playerid)
+        kodicmd "$kodihost" Player.Stop "{ \"playerid\" : $playerid }" > /dev/null
+    fi
+    if [ "$gstpid" ] && kill -0 "$gstpid" >/dev/null 2>&1; then
+        kill "$gstpid"
+    fi
+}
+trap cleanup EXIT INT
+
+if [ -n "$1" ]; then
+    kodihost=$1
+    shift
+else
+    kodihost=kodi.local
+fi
+
+mcast=239.255.0.1
+mcastport=1234
+mcastttl=1
+
+pasrc=$(pactl list | grep -A2 'Source #' | grep 'Name: .*\.monitor$' | \
+        cut -d" " -f2|head -1)
+gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
+  videoconvert ! queue2 ! \
+  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
+  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
+  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
+  udpsink host=$mcast port=$mcastport ttl-mc=$mcastttl auto-multicast=1 sync=0 \
+  pulsesrc device=$pasrc ! audioconvert ! queue2 ! avenc_aac ! queue2 ! mux. \
+  > /dev/null 2>&1 &
+gstpid=$!
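+
+# A note on the pipeline just started: ximagesrc captures the X11
+# display, x264enc encodes it tuned for low latency, mpegtsmux wraps
+# video and AAC audio in an MPEG transport stream, and udpsink sends
+# it to the multicast group.  rndbuffersize max=1316 min=1316 packs
+# 7*188 = 1316 bytes of transport stream data per UDP datagram,
+# matching the mpegtsmux alignment=7 setting.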
+
+# Give the stream a second to get going
+sleep 1
+
+# Ask Kodi to start streaming using its JSON-RPC API
+kodicmd "$kodihost" Player.Open \
+        "{\"item\": { \"file\": \"udp://@$mcast:$mcastport\" } }" > /dev/null
+
+# wait for gst to end
+wait "$gstpid"
+</pre></blockquote>
+
+<p>I hope you find the approach useful.  I know I do.</p>

 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address
 <b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>

-
-  Metadata proposal for movies on the Internet Archive
-  http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html
-  http://people.skolelinux.org/pere/blog/Metadata_proposal_for_movies_on_the_Internet_Archive.html
-  Tue, 28 Nov 2017 12:00:00 +0100
-  <p>It would be easier to locate the movie you want to watch in
-<a href="https://www.archive.org/">the Internet Archive</a> if the
-metadata about each movie were more complete and accurate.  In the
-archiving community, a well known saying states that good metadata is
-a love letter to the future.  The metadata in the Internet Archive
-could use a face lift for the future to love us back.  Here is a
-proposal for a small improvement that would make the metadata more
-useful today.  I've been unable to find any document describing the
-various standard fields available when uploading videos to the
-archive, so this proposal is based on my best guess and on searching
-through several of the existing movies.</p>
-
-<p>I have a few use cases in mind.  First of all, I would like to be
-able to count the number of distinct movies in the Internet Archive,
-without duplicates.  I would further like to identify the IMDB title
-IDs of the movies in the Internet Archive, to be able to look up an
-IMDB title ID and know if I can fetch the video from there and share
-it with my friends.</p>
-
-<p>Second, I would like the Butter data provider for The Internet
-Archive
-(<a href="https://github.com/butterproviders/butter-provider-archive">available
-from github</a>) to list as many of the good movies as possible.  The
-plugin currently does a search in the archive with the following
-parameters:</p>
-
-<p><pre>
-collection:moviesandfilms
-AND NOT collection:movie_trailers
-AND -mediatype:collection
-AND format:"Archive BitTorrent"
-AND year
-</pre></p>
-
-<p>Most of the cool movies that fail to show up in Butter do so
-because the 'year' field is missing.  The 'year' field is populated by
-the year part of the 'date' field, and should be when the movie was
-released (date or year).  Two such examples are
-<a href="https://archive.org/details/SidneyOlcottsBen-hur1905">Ben Hur
-from 1905</a> and
-<a href="https://archive.org/details/Caminandes2GranDillama">Caminandes
-2: Gran Dillama from 2013</a>, where the year metadata field is
-missing.</p>
-
-<p>So, my proposal is simply, for every movie in The Internet Archive
-where an IMDB title ID exists, please fill in these metadata fields
-(note, they can be updated also long after the video was uploaded, but
-as far as I can tell, only by the uploader):</p>
-
-<dl>
-
-<dt>mediatype</dt>
-<dd>Should be 'movie' for movies.</dd>
-
-<dt>collection</dt>
-<dd>Should contain 'moviesandfilms'.</dd>
-
-<dt>title</dt>
-<dd>The title of the movie, without the publication year.</dd>
-
-<dt>date</dt>
-<dd>The date or year the movie was released.
-This makes the movie show
-up in Butter, as well as making it possible to know the age of the
-movie, and is useful for figuring out copyright status.</dd>
-
-<dt>director</dt>
-<dd>The director of the movie.  This makes it easier to know if the
-correct movie is found in movie databases.</dd>
-
-<dt>publisher</dt>
-<dd>The production company making the movie.  Also useful for
-identifying the correct movie.</dd>
-
-<dt>links</dt>
-
-<dd>Add a link to the IMDB title page, for example like this: &lt;a
-href="http://www.imdb.com/title/tt0028496/"&gt;Movie in
-IMDB&lt;/a&gt;.  This makes it easier to find duplicates and allows
-for counting the number of unique movies in the Archive.  Other
-external references, like to TMDB, could be added like this too.</dd>
-
-</dl>
-
-<p>I did consider proposing a custom field for the IMDB title ID (for
-example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but suspect
-it will be easier to simply place it in the links free text field.</p>
-
-<p>I created
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-list of IMDB title IDs for several thousand movies in the Internet
-Archive</a>, but I also got a list of several thousand movies without
-such an IMDB title ID (and quite a few duplicates).  It would be great
-if this data set could be integrated into the Internet Archive
-metadata to be available for everyone in the future, but with the
-current policy of leaving metadata editing to the uploaders, it will
-take a while before this happens.  If you have uploaded movies into
-the Internet Archive, you can help.  Please consider following my
-proposal above for your movies, to ensure each movie is properly
-counted. :)</p>
-
-<p>The list is mostly generated using Wikidata, which, based on
-Wikipedia articles, makes it possible to link between IMDB and movies
-in the Internet Archive.  But there are lots of movies without a
-Wikipedia article, and some movies where only a collection page exists
-(like for <a href="https://en.wikipedia.org/wiki/Caminandes">the
-Caminandes example above</a>, where there are three movies but only
-one Wikidata entry).</p>
+
+  Streaming the Linux desktop to Kodi using VLC and RTSP
+  http://people.skolelinux.org/pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html
+  http://people.skolelinux.org/pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html
+  Thu, 12 Jul 2018 02:00:00 +0200
+  <p>PS: See
+<a href="http://people.skolelinux.org/pere/blog/Simple_streaming_the_Linux_desktop_to_Kodi_using_GStreamer_and_RTP.html">the
+followup post</a> for an even better approach.</p>
+
+<p>A while back, I was asked by a friend how to stream the desktop to
+my projector connected to Kodi.  I sadly had to admit that I had no
+idea, as it was a task I had never tried.  Since then, I have been
+looking for a way to do so, preferably without much extra software to
+install on either side.  Today I found a way that seems to kind of
+work.  Not great, but it is a start.</p>
+
+<p>I had a look at several approaches, for example
+<a href="https://github.com/mfoetsch/dlna_live_streaming">using uPnP
+DLNA as described in 2011</a>, but it required a uPnP server, fuse and
+enough local storage to store the stream locally.  This is not going
+to work well for me, lacking enough free space, and it would be
+impossible for my friend to get working.</p>
+
+<p>Next, it occurred to me that perhaps I could use VLC to create a
+video stream that Kodi could play.
+Preferably using
+broadcast/multicast, to avoid having to change any setup on the Kodi
+side when starting such a stream.  Unfortunately, the only recipe I
+could find using multicast used the rtp protocol, and this protocol
+seems not to be supported by Kodi.</p>
+
+<p>On the other hand, the rtsp protocol is working!  Unfortunately I
+have to specify the IP address of the streaming machine in both the
+sending command and the file on the Kodi server.  But it is showing my
+desktop, and thus allows us to have a shared look on the big screen at
+the programs I work on.</p>
+
+<p>I did not spend much time investigating codecs.  I combined the
+rtp and rtsp recipes from
+<a href="https://wiki.videolan.org/Documentation:Streaming_HowTo/Command_Line_Examples/">the
+VLC Streaming HowTo/Command Line Examples</a>, and was able to get
+this working on the desktop/streaming end.</p>
+
+<blockquote><pre>
+vlc screen:// --sout \
+  '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{dst=projector.local,port=1234,sdp=rtsp://192.168.11.4:8080/test.sdp}'
+</pre></blockquote>
+
+<p>I ssh-ed into my Kodi box and created a file like this with the
+same IP address:</p>
+
+<blockquote><pre>
+echo rtsp://192.168.11.4:8080/test.sdp \
+  > /storage/videos/screenstream.m3u
+</pre></blockquote>
+
+<p>Note the 192.168.11.4 IP address is my desktop's IP address.  As
+far as I can tell the IP must be hardcoded for this to work.  In other
+words, if someone else's machine is going to do the streaming, you
+have to update screenstream.m3u on the Kodi machine and adjust the vlc
+recipe.  To get started, locate the file in Kodi and select the m3u
+file while the VLC stream is running.  The desktop then shows up on my
+big screen. :)</p>
+
+<p>When using the same technique to stream a video file with audio,
+the audio quality is really bad.  No idea if the problem is packet
+loss or bad parameters for the transcode.  I do not know VLC nor Kodi
+well enough to tell.</p>
+
+<p><strong>Update 2018-07-12</strong>: Johannes Schauer sent me a few
+suggestions and reminded me about an important step.  The "screen://"
+input source is only available once the vlc-plugin-access-extra
+package is installed on Debian.  Without it, you will see this error
+message: "VLC is unable to open the MRL 'screen://'. Check the log
+for details."  He further found that it is possible to drop some parts
+of the VLC command line to reduce the amount of hardcoded information.
+It is also useful to consider using cvlc to avoid having the VLC
+window in the desktop view.  In sum, this gives us this command line
+on the source end:</p>
+
+<blockquote><pre>
+cvlc screen:// --sout \
+  '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{sdp=rtsp://:8080/}'
+</pre></blockquote>
+
+<p>and this on the Kodi end:</p>
+
+<blockquote><pre>
+echo rtsp://192.168.11.4:8080/ \
+  > /storage/videos/screenstream.m3u
+</pre></blockquote>
+
+<p>Still bad image quality, though.  But I did discover that streaming
+a DVD using dvdsimple:///dev/dvd as the source had excellent video and
+audio quality, so I guess the issue is in the input or transcoding
+parts, not the rtsp part.  I've tried to change the vb and ab
+parameters to use more bandwidth, but it did not make a
+difference.</p>
+
+<p>I further received a suggestion from Einar Haraldseid to try using
+gstreamer instead of VLC, and this proved to work great!  He also
+provided me with the trick to get Kodi to use a multicast stream as
+its source.</p>
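+
+<p>Once such a multicast stream is running, it can be checked from any
+machine on the local network with a player that understands MPEG-TS
+over UDP, before involving Kodi at all, for example VLC:</p>
+
+<blockquote><pre>
+cvlc udp://@239.255.0.1:1234
+</pre></blockquote>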
+<p>I further received a suggestion from Einar Haraldseid to try using
+gstreamer instead of VLC, and this proved to work great!  He also
+provided me with the trick to get Kodi to use a multicast stream as
+its source.  By using this monstrous oneliner, I can stream my desktop
+with good video quality at a reasonable framerate to the 239.255.0.1
+multicast address on port 1234:</p>
+
+<blockquote><pre>
+gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
+  videoconvert ! queue2 ! \
+  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
+  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
+  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
+  udpsink host=239.255.0.1 port=1234 ttl-mc=1 auto-multicast=1 sync=0 \
+  pulsesrc device=$(pactl list | grep -A2 'Source #' | \
+    grep 'Name: .*\.monitor$' | cut -d" " -f2|head -1) ! \
+  audioconvert ! queue2 ! avenc_aac ! queue2 ! mux.
+</pre></blockquote>
+
+<p>and this on the Kodi end:</p>
+
+<blockquote><pre>
+echo udp://@239.255.0.1:1234 \
+  > /storage/videos/screenstream.m3u
+</pre></blockquote>
+
+<p>Note the trick used to pick a valid pulseaudio source.  It might
+not pick the one you need.  This approach will of course lead to
+trouble if more than one source uses the same multicast port and
+address.  Note also the ttl-mc=1 setting, which limits the multicast
+packets to the local network.  If the value is increased, your screen
+will be broadcast further, one network "hop" for each increase (read
+up on multicast to learn more). :)</p>
+
+<p>Having cracked how to get Kodi to receive multicast streams, I
+could use this VLC command to stream to the same multicast address.
+The image quality is way better than with the rtsp approach, but
+gstreamer still seems to be doing a better job.</p>
+
+<blockquote><pre>
+cvlc screen:// --sout '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{mux=ts,dst=239.255.0.1,port=1234,sdp=sap}'
+</pre></blockquote>
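+
+<p>A hint if you try this yourself: before involving Kodi at all, the
+multicast stream can be verified from any other machine on the same
+network using VLC and the same URL Kodi is given:</p>
+
+<blockquote><pre>
+# play the multicast stream directly, to check that it reaches this host
+cvlc udp://@239.255.0.1:1234
+</pre></blockquote>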
 
 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address
@@ -783,69 +435,192 @@ activities, please send Bitcoin donations to my address
 
 
 
-	
-	Legal to share more than 3000 movies listed on IMDB?
-	http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html
-	http://people.skolelinux.org/pere/blog/Legal_to_share_more_than_3000_movies_listed_on_IMDB_.html
-	Sat, 18 Nov 2017 21:20:00 +0100
-	<p>A month ago, I blogged about my work to
-<a href="http://people.skolelinux.org/pere/blog/Locating_IMDB_IDs_of_movies_in_the_Internet_Archive_using_Wikidata.html">automatically
-check the copyright status of IMDB entries</a> and to count the
-number of movies listed in IMDB that are legal to distribute on the
-Internet.  I have continued to look for good data sources, and have
-identified a few more.  The code used to extract information from the
-various data sources is available in
-<a href="https://github.com/petterreinholdtsen/public-domain-free-imdb">a
-git repository</a>, currently hosted on github.</p>
-
-<p>So far I have identified 3186 unique IMDB title IDs.  To gain a
-better understanding of the structure of the data set, I created a
-histogram of the year associated with each movie (typically the
-release year).  It is interesting to notice where the peaks and dips
-in the graph are located, and I wonder why they are placed there.  I
-suspect World War II caused the dip around 1940, but what caused the
-peak around 2010?</p>
-
-<p align="center"><img src="http://people.skolelinux.org/pere/blog/images/2017-11-18-verk-i-det-fri-filmer.png" /></p>
-
-<p>I've so far identified ten sources for IMDB title IDs for movies in
-the public domain or with a free license.  This is the statistics
-reported when running 'make stats' in the git repository:</p>
-
-<pre>
-  249 entries (   6 unique) with and   288 without IMDB title ID in free-movies-archive-org-butter.json
- 2301 entries ( 540 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
-  830 entries (  29 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
- 2109 entries ( 377 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
-  291 entries ( 122 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
-  144 entries ( 135 unique) with and     0 without IMDB title ID in free-movies-manual.json
-  350 entries (   1 unique) with and   801 without IMDB title ID in free-movies-publicdomainmovies.json
-    4 entries (   0 unique) with and   124 without IMDB title ID in free-movies-publicdomainreview.json
-  698 entries ( 119 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
-    8 entries (   8 unique) with and   196 without IMDB title ID in free-movies-vodo.json
- 3186 unique IMDB title IDs in total
-</pre>
-
-<p>The entries without IMDB title ID are candidates to increase the
-data set, but might equally well be duplicates of entries already
-listed with IMDB title ID in one of the other sources, or represent
-movies that lack an IMDB title ID.  I've seen examples of all these
-situations when peeking at the entries without IMDB title ID.  Based
-on these data sources, the lower bound for movies listed in IMDB that
-are legal to distribute on the Internet is between 3186 and 4713.</p>
-
-<p>It would be great for improving the accuracy of this measurement
-if the various sources added IMDB title IDs to their metadata.  I have
-tried to reach the people behind the various sources to ask if they
-are interested in doing this, without any replies so far.  Perhaps you
-can help me get in touch with the people behind VODO, Public Domain
-Torrents, Public Domain Movies and Public Domain Review to try to
-convince them to add more metadata to their movie entries?</p>
-
-<p>Another way you could help is by adding pages to Wikipedia about
-movies that are legal to distribute on the Internet.  If such a page
-exists and includes a link to both IMDB and The Internet Archive, the
-script used to generate free-movies-archive-org-wikidata.json should
-pick up the mapping as soon as wikidata is updated.</p>
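-
-<p>To illustrate the kind of lookup involved, the mapping can be
-approximated from the command line.  This is an illustrative sketch,
-not the actual script from the repository, and it assumes the Wikidata
-properties P345 (IMDb ID) and P724 (Internet Archive ID):</p>
-
-<blockquote><pre>
-# ask the Wikidata SPARQL endpoint for items with both identifiers
-curl --silent --get https://query.wikidata.org/sparql \
-  --data-urlencode format=json \
-  --data-urlencode query='
-SELECT ?item ?imdb ?iaid WHERE {
-  ?item wdt:P345 ?imdb ;
-        wdt:P724 ?iaid .
-} LIMIT 10'
-</pre></blockquote>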
+	
+	What is the most supported MIME type in Debian in 2018?
+	http://people.skolelinux.org/pere/blog/What_is_the_most_supported_MIME_type_in_Debian_in_2018_.html
+	http://people.skolelinux.org/pere/blog/What_is_the_most_supported_MIME_type_in_Debian_in_2018_.html
+	Mon, 9 Jul 2018 08:05:00 +0200
+	<p>Five years ago,
+<a href="http://people.skolelinux.org/pere/blog/What_is_the_most_supported_MIME_type_in_Debian_.html">I
+measured what the most supported MIME type in Debian was</a>, by
+analysing the desktop files in all packages in the archive.  Since
+then, the DEP-11 AppStream system has been put into production, making
+the task a lot easier.  This made me want to repeat the measurement,
+to see how much things have changed.  Here are the new numbers, for
+unstable only this time:</p>
+
+<p><strong>Debian Unstable:</strong></p>
+
+<pre>
+  count MIME type
+  ----- -----------------------
+     56 image/jpeg
+     55 image/png
+     49 image/tiff
+     48 image/gif
+     39 image/bmp
+     38 text/plain
+     37 audio/mpeg
+     34 application/ogg
+     33 audio/x-flac
+     32 audio/x-mp3
+     30 audio/x-wav
+     30 audio/x-vorbis+ogg
+     29 image/x-portable-pixmap
+     27 inode/directory
+     27 image/x-portable-bitmap
+     27 audio/x-mpeg
+     26 application/x-ogg
+     25 audio/x-mpegurl
+     25 audio/ogg
+     24 text/html
+</pre>
+
+<p>The list was created like this using a sid chroot:</p>
+
+<blockquote><pre>
+cat /var/lib/apt/lists/*sid*_dep11_Components-amd64.yml.gz | zcat | \
+  awk '/^  - \S+\/\S+$/ {print $2 }' | \
+  sort | uniq -c | sort -nr | head -20
+</pre></blockquote>
+
+<p>It is interesting to see how image formats have passed text/plain
+as the most announced supported MIME type.  These days, thanks to the
+AppStream system, if you run into a file format you do not know and
+want to figure out which packages support the format, you can find the
+MIME type of the file using "file --mime &lt;filename&gt;", and then
+look up all packages announcing support for this format in their
+AppStream metadata (XML or .desktop file) using "appstreamcli
+what-provides mimetype &lt;mime-type&gt;".  For example if you, like
+me, want to know which packages support inode/directory, you can get a
+list like this:</p>
+
+<p><blockquote><pre>
+% appstreamcli what-provides mimetype inode/directory | grep Package: | sort
+Package: anjuta
+Package: audacious
+Package: baobab
+Package: cervisia
+Package: chirp
+Package: dolphin
+Package: doublecmd-common
+Package: easytag
+Package: enlightenment
+Package: ephoto
+Package: filelight
+Package: gwenview
+Package: k4dirstat
+Package: kaffeine
+Package: kdesvn
+Package: kid3
+Package: kid3-qt
+Package: nautilus
+Package: nemo
+Package: pcmanfm
+Package: pcmanfm-qt
+Package: qweborf
+Package: ranger
+Package: sirikali
+Package: spacefm
+Package: spacefm
+Package: vifm
+%
+</pre></blockquote></p>
+
+<p>Using the same method, I can quickly discover that the Sketchup file
+format is not yet supported by any package in Debian:</p>
+
+<p><blockquote><pre>
+% appstreamcli what-provides mimetype application/vnd.sketchup.skp
+Could not find component providing 'mimetype::application/vnd.sketchup.skp'.
+%
+</pre></blockquote></p>
+
+<p>Yesterday I used it to figure out which packages support the STL 3D
+format:</p>
+
+<p><blockquote><pre>
+% appstreamcli what-provides mimetype application/sla|grep Package
+Package: cura
+Package: meshlab
+Package: printrun
+%
+</pre></blockquote></p>
+
+<p>PS: A new version of Cura was uploaded to Debian yesterday.</p>
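+
+<p>A natural next step is combining the two commands, asking which
+packages support a given file directly.  A possible sketch, using the
+--brief and --mime-type options of file(1) to print only the MIME type
+itself:</p>
+
+<blockquote><pre>
+# look up AppStream support for the MIME type of one specific file
+appstreamcli what-provides mimetype "$(file --brief --mime-type some-file.skp)"
+</pre></blockquote>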
 
 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address
 <b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
 
 
 
+	
+	Debian APT upgrade without enough free space on the disk...
+	http://people.skolelinux.org/pere/blog/Debian_APT_upgrade_without_enough_free_space_on_the_disk___.html
+	http://people.skolelinux.org/pere/blog/Debian_APT_upgrade_without_enough_free_space_on_the_disk___.html
+	Sun, 8 Jul 2018 12:10:00 +0200
+	<p>Quite regularly, I let my Debian Sid/Unstable chroot stay
+untouched for a while, and when I need to update it there is not
+enough free space on the disk for apt to do a normal 'apt upgrade'.
+I normally resolve the issue by doing 'apt install
+&lt;somepackages&gt;' to upgrade only some of the packages in one
+batch, until the amount of packages to download falls below the amount
+of free space available.  Today, I had about 500 packages to upgrade,
+and after a while I got tired of trying to install chunks of packages
+manually.  I concluded that I did not have the spare hours required to
+complete the task, and decided to see if I could automate it.  I came
+up with this small script, which I call 'apt-in-chunks':</p>
+
+<p><blockquote><pre>
+#!/bin/sh
+#
+# Upgrade packages when the disk is too full to upgrade every
+# upgradable package in one lump.  Fetch the packages to upgrade using
+# apt, and install them using dpkg, to avoid changing the package
+# flag for manual/automatic.
+
+set -e
+
+# Filter out the package named in the first argument, if any.
+ignore() {
+    if [ "$1" ]; then
+	grep -v "$1"
+    else
+	cat
+    fi
+}
+
+for p in $(apt list --upgradable | ignore "$@" | cut -d/ -f1 | grep -v '^Listing...'); do
+    echo "Upgrading $p"
+    apt clean
+    apt install --download-only -y $p
+    # Install the downloaded packages with dpkg, if any were fetched.
+    for f in /var/cache/apt/archives/*.deb; do
+	if [ -e "$f" ]; then
+	    dpkg -i /var/cache/apt/archives/*.deb
+	    break
+	fi
+    done
+done
+</pre></blockquote></p>
+
+<p>The script will extract the list of packages to upgrade, try to
+download the packages needed to upgrade one package at a time, and
+install the downloaded packages using dpkg.  The idea is to upgrade
+the packages without changing the APT mark for them (i.e. the flag
+recording whether a package was manually requested or pulled in as a
+dependency).  To use it, simply run it as root from the command line.
+If it fails, try 'apt install -f' to clean up the mess and run the
+script again.  This might happen if the new packages conflict with one
+of the old packages, which dpkg is unable to remove while apt can.</p>
+
+<p>It takes one optional argument, a package to ignore in the list of
+packages to upgrade.  The option to ignore a package is there to be
+able to skip the packages that are simply too large to unpack.  Today
+this was 'ghc', but I have run into other large packages causing
+similar problems earlier (like TeX).</p>
+
+<p>Update 2018-07-08: Thanks to Paul Wise, I am aware of two
+alternative ways to handle this.  The "unattended-upgrades
+--minimal-upgrade-steps" option will try to calculate upgrade sets for
+each package to upgrade, and then upgrade them in order, smallest set
+first.  It might be a better option than the script above.  Also,
+"aptitude upgrade" can upgrade single packages, thus avoiding the need
+for using "dpkg -i" in the script above.</p>
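+
+<p>An example run, assuming the script is saved as
+/usr/local/sbin/apt-in-chunks (the path is my choice here, not part of
+the script), skipping the too-large ghc package:</p>
+
+<blockquote><pre>
+chmod +x /usr/local/sbin/apt-in-chunks
+apt-in-chunks ghc
+</pre></blockquote>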
 
 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address
@@ -854,80 +629,25 @@ activities, please send Bitcoin donations to my address
 
 
 
-	
-	Some notes on fault tolerant storage systems
-	http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html
-	http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html
-	Wed, 1 Nov 2017 15:35:00 +0100
-	<p>If you care about how fault tolerant your storage is, you might
-find these articles and papers interesting.  They have shaped how I
-think when designing a storage system.</p>
-
-<ul>
-
-<li>USENIX ;login: <a
-href="https://www.usenix.org/publications/login/summer2017/ganesan">Redundancy
-Does Not Imply Fault Tolerance. Analysis of Distributed Storage
-Reactions to Single Errors and Corruptions</a> by Aishwarya Ganesan,
-Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi
-H. Arpaci-Dusseau</li>
-
-<li>ZDNet
-<a href="http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/">Why
-RAID 5 stops working in 2009</a> by Robin Harris</li>
-
-<li>ZDNet
-<a href="http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/">Why
-RAID 6 stops working in 2019</a> by Robin Harris</li>
-
-<li>USENIX FAST'07
-<a href="http://research.google.com/archive/disk_failures.pdf">Failure
-Trends in a Large Disk Drive Population</a> by Eduardo Pinheiro,
-Wolf-Dietrich Weber and Luiz André Barroso</li>
-
-<li>USENIX ;login: <a
-href="https://www.usenix.org/system/files/login/articles/hughes12-04.pdf">Data
-Integrity. Finding Truth in a World of Guesses and Lies</a> by Doug
-Hughes</li>
-
-<li>USENIX FAST'08
-<a href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-Analysis of Data Corruption in the Storage Stack</a> by
-L. N. Bairavasundaram, G. R. Goodson, B. Schroeder, A. C.
-Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>
-
-<li>USENIX FAST'07 <a
-href="https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder_html/">Disk
-failures in the real world: what does an MTTF of 1,000,000 hours mean
-to you?</a> by B. Schroeder and G. A. Gibson.</li>
-
-<li>USENIX ;login: <a
-href="https://www.usenix.org/events/fast08/tech/full_papers/jiang/jiang_html/">Are
-Disks the Dominant Contributor for Storage Failures? A Comprehensive
-Study of Storage Subsystem Failure Characteristics</a> by Weihang
-Jiang, Chongfeng Hu, Yuanyuan Zhou, and Arkady Kanevsky</li>
-
-<li>SIGMETRICS 2007
-<a href="http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf">An
-analysis of latent sector errors in disk drives</a> by
-L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler</li>
-
-</ul>
-
-<p>Several of these research papers are based on data collected from
-hundreds of thousands or millions of disks, and their findings are
-eye-opening.  The short story is: do not implicitly trust RAID or
-redundant storage systems.  Details matter.  And unfortunately there
-are few options on Linux addressing all the identified issues.  Both
-ZFS and Btrfs are doing a fairly good job, but have legal and
-practical issues of their own.  I wonder how cluster file systems like
-Ceph do in this regard.  After all, there is an old saying: you know
-you have a distributed system when the crash of a computer you have
-never heard of stops you from getting any work done.  The same holds
-true if fault tolerance does not work.</p>
-
-<p>Just remember, in the end, it does not matter how redundant or how
-fault tolerant your storage is, if you do not continuously monitor its
-status to detect and replace failed disks.</p>
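-
-<p>As a starting point, a minimal manual health check on a Linux
-machine with software RAID might look like the sketch below, assuming
-mdadm and smartmontools are installed.  Those packages also provide
-the continuously running equivalents, mdadm --monitor and the smartd
-daemon:</p>
-
-<blockquote><pre>
-# show the state of all software RAID arrays
-cat /proc/mdstat
-# ask every disk for its SMART self-assessment
-for d in /dev/sd?; do smartctl -H "$d"; done
-</pre></blockquote>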
+	
+	The world's only stone power plant?
+	http://people.skolelinux.org/pere/blog/The_worlds_only_stone_power_plant_.html
+	http://people.skolelinux.org/pere/blog/The_worlds_only_stone_power_plant_.html
+	Sat, 30 Jun 2018 10:35:00 +0200
+	<p>So far, at least hydro-electric power, coal power, wind power,
+solar power, and wood power are well known.  Until a few days ago, I
+had never heard of stone power.  Then I learned about a quarry in a
+mountain in
+<a href="https://en.wikipedia.org/wiki/Bremanger">Bremanger</a> in
+Norway, where
+<a href="https://www.bontrup.com/en/activities/raw-materials/bremanger-quarry/">the
+Bremanger Quarry</a> company is extracting stone and dumping it into a
+shaft leading down to its shipping harbour.  The downward movement in
+this shaft is used to produce electricity.  In short, it is using
+falling rocks instead of falling water to produce electricity, and
+according to its own statements it is producing more power than it is
+using, selling the surplus electricity to the Norwegian power grid.  I
+find the concept truly amazing.  Is this the world's only stone power
+plant?</p>
 
 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address
@@ -936,42 +656,59 @@ activities, please send Bitcoin donations to my address
 
 
 
-	
-	Web services for writing academic LaTeX papers as a team
-	http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html
-	http://people.skolelinux.org/pere/blog/Web_services_for_writing_academic_LaTeX_papers_as_a_team.html
-	Tue, 31 Oct 2017 21:00:00 +0100
-	<p>I was surprised today to learn that a friend in academia did not
-know there are easily available web services for writing LaTeX
-documents as a team.  I thought it was common knowledge, but to make
-sure at least my readers are aware of it, I would like to mention
-these useful services for writing LaTeX documents.  Some of them even
-provide a WYSIWYG editor to ease writing even further.</p>
-
-<p>There are two commercial services available,
-<a href="https://sharelatex.com">ShareLaTeX</a> and
-<a href="https://overleaf.com">Overleaf</a>.  They are very easy to
-use.  Just start a new document, select which publisher to write for
-(i.e. which LaTeX style to use), and start writing.  Note that these
-two have announced their intention to join forces, so soon they will
-be one joint service.  I've used both for different documents, and
-they work just fine.  While
-<a href="https://github.com/sharelatex/sharelatex">ShareLaTeX is free
-software</a>, the latter is not.  According to <a
-href="https://www.overleaf.com/help/17-is-overleaf-open-source">an
-announcement from Overleaf</a>, they plan to keep the ShareLaTeX code
-base maintained as free software.</p>
-
-<p>But these two are not the only alternatives.
-<a href="https://app.fiduswriter.org/">Fidus Writer</a> is another free
-software solution with <a href="https://github.com/fiduswriter">the
-source available on github</a>.  I have not used it myself.  Several
-others can be found on the nice
-<a href="https://alternativeto.net/software/sharelatex/">alternativeTo
-web service</a>.</p>
-
-<p>If you like Google Docs or Etherpad, but would like to write
-documents in LaTeX, you should check out these services.  You can even
-host your own, if you want to. :)</p>
+	
+	Add-on to control the projector from within Kodi
+	http://people.skolelinux.org/pere/blog/Add_on_to_control_the_projector_from_within_Kodi.html
+	http://people.skolelinux.org/pere/blog/Add_on_to_control_the_projector_from_within_Kodi.html
+	Tue, 26 Jun 2018 23:55:00 +0200
+	<p>My movie playing setup involves <a href="https://kodi.tv/">Kodi</a>,
+<a href="https://openelec.tv">OpenELEC</a> (probably soon to be
+replaced with <a href="https://libreelec.tv/">LibreELEC</a>) and an
+Infocus IN76 video projector.  My projector can be controlled via both
+an infrared remote control and an RS-232 serial line.  The vendor of
+my projector, <a href="https://www.infocus.com/">InFocus</a>, was
+sensible enough to document the serial protocol in its user manual, so
+it is easily available, and I used it some years ago to write
+<a href="https://github.com/petterreinholdtsen/infocus-projector-control">a
+small script to control the projector</a>.</p>
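+
+<p>To illustrate the idea, the essence of such serial control fits in
+a couple of shell lines.  The command string below is a placeholder,
+not the real IN76 protocol (the actual strings are listed in the user
+manual), and the serial device name depends on your setup:</p>
+
+<blockquote><pre>
+# set up the serial line, then send a (placeholder) power-on request
+stty -F /dev/ttyUSB0 19200 cs8 -cstopb -parenb
+printf 'POWER-ON-COMMAND-HERE\r' > /dev/ttyUSB0
+</pre></blockquote>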
+<p>For a while now, I have longed for a setup where the projector is
+controlled by Kodi, for example in such a way that when the screen
+saver goes on, the projector is turned off, and when the screen saver
+exits, the projector is turned on again.</p>
+
+<p>A few days ago, with very good help from parts of my family, I
+managed to find a Kodi add-on for controlling an Epson projector, and
+got in touch with its author to see if we could join forces and make
+an add-on with support for several projectors.  To my pleasure, he was
+positive about the idea, and we set out to add InFocus support to his
+add-on and make the add-on suitable for the official Kodi add-on
+repository.</p>
+
+<p>The add-on is now working (for me, at least), with a few minor
+adjustments.  The most important change I made relative to the master
+branch in the github repository is embedding the
+<a href="https://github.com/pyserial/pyserial">pyserial module</a> in
+the add-on.  The long term solution is to make a "script" type
+pyserial module for Kodi that can be pulled in as a dependency by
+Kodi.  But until that is in place, I embed it.</p>
+
+<p>The add-on can be configured to turn the projector on when Kodi
+starts and off when Kodi stops, as well as to turn the projector off
+when the screen saver starts and on when the screen saver stops.  It
+can also be told to set the projector source when turning on the
+projector.</p>
+
+<p>If this sounds interesting to you, check out
+<a href="https://github.com/fredrik-eriksson/kodi_projcontrol">the
+project github repository</a>.  Perhaps you can send patches to
+support your projector too?  As soon as we find time to wrap up the
+latest changes, it should be available for easy installation using any
+Kodi instance.</p>
+
+<p>For future improvements, I would like to add projector model
+detection and the ability to adjust the brightness level of the
+projector from within Kodi.  We also need to figure out how to handle
+the cooling period of the projector.  My projector refuses to turn on
+for 60 seconds after it has been turned off.  This is not handled well
+by the add-on at the moment.</p>
 
 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address