A new version of the 3D printer slicer software Cura, version 3.1.0, is now available in Debian Testing (aka Buster) and Debian Unstable (aka Sid). I hope you find it useful. It was uploaded in the last few days, and the latest update will enter testing tomorrow. See the release notes for the list of bug fixes and new features.
Version 3.2 was announced 6 days ago. We will try to get it into Debian as well.

More information related to 3D printing is available on the 3D printing and 3D printer wiki pages in Debian.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
It might seem obvious that software created using tax money should be available for everyone to use and improve. Free Software Foundation Europe recently started a campaign to help more people understand this, and I just signed the petition on Public Money, Public Code to help them. I hope you too will do the same.
I was fascinated by an article in Dagbladet about China's handling of Xinjiang, in particular the following excerpt:

«In the southwestern city of Kashgar, closer to the border with Central Asia, it is now reported that 120,000 Uighurs are interned in so-called re-education camps. At the same time, an extensive health check programme has been introduced, collecting and storing DNA samples from every single inhabitant. The most advanced surveillance methods are being tested out here. Programs for recognising faces and voices are in place in the region. There, the local authorities have begun installing GPS systems in all vehicles and dedicated tracking apps on mobile phones.

The police methods intrude so deeply into people's daily lives that resistance to the Beijing regime is growing.»

Sadly, this description does not differ all that much from the state of affairs here in Norway.
Data collection                                           | China | Norway
----------------------------------------------------------|-------|-------
Collection and storage of DNA samples from the population | Yes   | Partially; planned for all newborns
Face recognition                                          | Yes   | Yes
Voice recognition                                         | Yes   | No
Location tracking of mobile phones                        | Yes   | Yes
Location tracking of cars                                 | Yes   | Yes
In Norway, the situation around the National Institute of Public Health's storage of DNA information on behalf of the police, where they refused to delete information the police were not allowed to keep, has made it clear that DNA is stored for quite a long time. In addition, there are countless biobanks stored in perpetuity, and there are plans to introduce permanent storage of DNA material from all newborn babies (with the option of requesting deletion).

In Norway, a system for face recognition is in place; an NRK article from 2015 reports that it is active at Gardermoen, and face recognition is also used to analyse images collected by the authorities. Is it used in more places? Central Oslo, for example, is dense with surveillance cameras controlled by the police and other authorities.

I am not aware of Norway having any system for identifying people by means of voice recognition.

Location tracking of mobile phones is routinely available to, among others, the police, NAV and the Financial Supervisory Authority, in line with the requirements in the telephone companies' licences. In addition, smartphones report their position to the developers of countless mobile apps, from which the authorities and others can extract information when needed. No dedicated app is required for this.

Location tracking of cars is routinely available via a dense network of measuring points on the roads (automatic toll stations, toll-tag registration, automatic speed cameras and other road cameras). It has also been decided that all new cars must be sold with equipment for GPS tracking (eCall).

It sure is good that we live in a liberal democracy, and not a surveillance state. Or is it?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
A few days ago, I wondered if there are any privacy respecting health monitors and/or fitness trackers available for sale these days. I would like to buy one, but do not want to share my personal data with strangers, nor be forced to have a mobile phone to get data out of the unit. I've received some ideas, and would like to share them with you.

One interesting data point was a pointer to a Free Software app for Android named Gadgetbridge. It provides cloudless collection and storage of data from a variety of trackers. Its list of supported devices is a good indicator for units where the protocol is fairly open, as it is obviously being handled by Free Software. Other units reportedly encrypt the collected information with the vendor's own public key, making sure only the vendor cloud service is able to extract data from the unit. The people contacting me about Gadgetbridge said they were using the Amazfit Bip and the Xiaomi Band 3.
I also got a suggestion to look at some of the units from Garmin. I was told their GPS watches can be connected via USB and show up as a USB storage device with Garmin FIT files containing the collected measurements. While proprietary, FIT files apparently can be read at least by GPSBabel and the GpxPod Nextcloud app. It is unclear to me if they can read step count and heart rate data. The person I talked to was using a Garmin Forerunner 935, which is a fairly expensive unit. I doubt it is worth it for a unit where the vendor clearly is trying its best to move from open to closed systems. I still remember when Garmin dropped NMEA support in its GPSes.
A final idea was to build one's own unit, perhaps by basing it on a wearable hardware platform like the Flora Geo Watch. Sounds like fun, but I had more money than time to spend on the topic, so I suspect it will have to wait for another time.
While I was working on tracking down links, I came across an inspiring TED talk by Dave deBronkart about being an e-patient, and discovered the web site Participatory Medicine. If you too want to track your own health and fitness without having information about your private life floating around on computers owned by others, I recommend checking it out.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

The year is 2018, and it is 30 years since Unicode was introduced. Most of us in Norway have come to expect our alphabet to just work with any computer system. But it is apparently beyond the reach of the computers printing receipts at a restaurant. Recently I visited a Peppes pizza restaurant, and noticed a few details on the receipt. Notice how 'ø' and 'å' are replaced with strange symbols in 'Servitør', 'à BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi gleder oss til å se deg igjen'.
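I can only guess at the cause, but the symptoms are consistent with a classic code page mismatch. A small sketch in Python, assuming (purely for illustration, I do not know what the actual till does) that text encoded as Latin-1 is sent to a printer interpreting bytes as the old DOS code page CP437:

```python
# Hypothetical reconstruction, not a diagnosis of the actual printer:
# bytes written as Latin-1 but decoded as CP437 garble the Norwegian
# letters while plain ASCII survives untouched.
for text in ["Servitør", "Beløp pr. gjest", "à BETALE"]:
    garbled = text.encode("latin-1").decode("cp437")
    print(f"{text} -> {garbled}")
# 'Servitør' comes out as 'Servit°r', for example.
```

Any other pair of mismatched single-byte encodings would produce a similar effect, just with different stray symbols.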
I would say that this state of affairs is past sad and well into embarrassing.

I removed personal and private information to be nice.
Dear lazyweb,
I wonder, is there a fitness tracker / health monitor available for sale today that respects the user's privacy? By this I mean a watch/bracelet capable of measuring pulse rate and other fitness/health related values (and by all means, also the correct time and location if possible), where the measurements are only provided for me to extract/read from the unit with a computer, without a radio beacon and Internet connection. In other words, it does not depend on a cell phone app, and does not make the measurements available via other people's computers (aka "the cloud"). The collected data should be available using only free software. I'm not interested in depending on some non-free software that will leave me high and dry some time in the future. I've been unable to find any such unit. I would like to buy one. The ones I have seen for sale here in Norway are proud to report that they share my health data with strangers (aka "cloud enabled"). Is there an alternative? I'm not interested in giving money to people requiring me to accept "privacy terms" to allow myself to measure my own health.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
I've continued to track down lists of movies that are legal to distribute on the Internet, and have identified more than 11,000 title IDs in The Internet Movie Database (IMDB) so far. Most of them (57%) are feature films from the USA published before 1923. I've also tracked down more than 24,000 movies I have not yet been able to map to an IMDB title ID, so the real number could be a lot higher. According to the front web page of Retro Film Vault, there are 44,000 public domain films, so I guess there are still some left to identify.
The complete data set is available from a public git repository, including the scripts used to create it. Most of the data is collected using web scraping, for example from the "product catalog" of companies selling copies of public domain movies, but any source I find believable is used. I've so far had to throw out three sources because I did not trust the public domain status of the movies listed.
Anyway, this is the summary of the 28 collected data sources so far:
 2352 entries (   66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
 2302 entries (  120 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  195 entries (   63 unique) with and   200 without IMDB title ID in free-movies-cinemovies.json
   89 entries (   52 unique) with and    38 without IMDB title ID in free-movies-creative-commons.json
  344 entries (   28 unique) with and   655 without IMDB title ID in free-movies-fesfilm.json
  668 entries (  209 unique) with and  1064 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   21 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 6822 entries ( 6669 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
  137 entries (    0 unique) with and     0 without IMDB title ID in free-movies-imdb-externlist.json
 1205 entries (   57 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
   84 entries (   20 unique) with and   167 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  135 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (  100 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  229 entries (   87 unique) with and     1 without IMDB title ID in free-movies-manual.json
   44 entries (    2 unique) with and    64 without IMDB title ID in free-movies-openflix.json
  291 entries (   33 unique) with and   474 without IMDB title ID in free-movies-profilms-pd.json
  211 entries (    7 unique) with and     0 without IMDB title ID in free-movies-publicdomainmovies-info.json
 1232 entries (   57 unique) with and  1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
   46 entries (   13 unique) with and    81 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   64 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 1758 entries (  882 unique) with and  3786 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
   63 entries (   16 unique) with and   141 without IMDB title ID in free-movies-vodo.json
11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID
I keep finding more data sources. I found the cinemovies source just a few days ago, and as you can see from the summary, it extended my list with 63 movies. Check out the mklist-* scripts in the git repository if you are curious how the lists are created. Many of the titles are extracted using searches on IMDB, where I look for the title and year, and accept search results with only one movie listed if the year matches. This allows me to automatically use many lists of movies without IMDB title ID references, at the cost of increasing the risk of wrongly identifying an IMDB title ID as public domain. So far my random manual checks have indicated that the method is solid, but I really wish all lists of public domain movies would include a unique movie identifier like the IMDB title ID. It would make the job of counting movies in the public domain a lot easier.
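The acceptance rule described above can be sketched roughly as follows. This is not the actual mklist-* code; `search_imdb` is a hypothetical stand-in for whatever IMDB lookup the scripts perform, and the stubbed data is made up:

```python
def match_title_id(search_imdb, title, year):
    """Accept a match only when the search returns exactly one movie
    for the title and its year agrees with the source list's year."""
    hits = search_imdb(title)
    if len(hits) == 1 and hits[0]["year"] == year:
        return hits[0]["id"]
    return None  # ambiguous or mismatched: left for manual checking

# Stubbed example with a made-up title ID:
def fake_search(title):
    return [{"id": "tt0000001", "year": 1920, "title": title}]

print(match_title_id(fake_search, "Some Old Film", 1920))  # tt0000001
print(match_title_id(fake_search, "Some Old Film", 1935))  # None
```

Requiring both a unique hit and a matching year is what keeps the false positive rate low at the cost of leaving some titles unmatched.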
For a while now, I have looked for a sensible way to share images with my family using a self hosted solution, as it is unacceptable to place images from my personal life under the control of strangers working for data hoarders like Google or Dropbox. The last few days I have drafted an approach that might work out, and I would like to share it with you. I would like to publish images on a server under my control, and point some Internet connected display units, using some free and open standard, to the images I published. As my primary language is not limited to ASCII, I need to store metadata using UTF-8. Many years ago, I hoped to find a digital photo frame capable of reading an RSS feed with image references (aka using the <enclosure> RSS tag), but was unable to find a current supplier of such frames. In the end I gave up that approach.
Some months ago, I discovered that XScreensaver is able to read images from an RSS feed, and used it to set up a screen saver on my home info screen, showing images from the Daily images feed from NASA. This proved to work well. More recently I discovered that Kodi (both using OpenELEC and LibreELEC) provides the Feedreader screen saver, capable of reading an RSS feed with images and news. For fun, I used it this summer to test Kodi on my parents' TV by hooking up a Raspberry Pi unit with LibreELEC, and wanted to provide them with a screen saver showing selected pictures from my collection.
Armed with motivation and a test photo frame, I set out to generate an RSS feed for the Kodi instance. I adjusted my Freedombox instance, created /var/www/html/privatepictures/, and wrote a small Perl script to extract title and description metadata from the photo files and generate the RSS file. I ended up using Perl instead of Python, as the libimage-exiftool-perl Debian package seemed to handle the EXIF/XMP tags I ended up using, while python3-exif did not. The relevant EXIF tags only support ASCII, so I had to find better alternatives; XMP seems to have the support I need.
I am a bit unsure which EXIF/XMP tags to use, as I would like to use tags that can be easily added/updated using normal free software photo management software. I ended up using the tags set by this exiftool command, as these tags can also be set using digiKam:
exiftool -headline='The RSS image title' \
  -description='The RSS image description.' \
  -subject+=for-family photo.jpeg
I initially tried the "-title" and "-keyword" tags, but they were invisible in digiKam, so I changed to "-headline" and "-subject". I use the keyword/subject 'for-family' to flag that a photo should be shared with my family. Images with this keyword set are located and copied into my Freedombox for the RSS generating script to find.
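My script is written in Perl, but the output is ordinary RSS 2.0 with <enclosure> elements, which is what Feedreader and XScreensaver consume. A minimal sketch of the generation step in Python; the channel title, base URL and file data below are made up for illustration, and a real feed would also want channel link/description elements:

```python
from xml.sax.saxutils import escape

def rss_feed(base_url, items):
    """Build a minimal RSS 2.0 feed where each item carries an
    <enclosure> element pointing at the image file."""
    parts = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<rss version="2.0"><channel><title>Family photos</title>']
    for filename, title, description, size in items:
        parts.append(
            '<item><title>%s</title><description>%s</description>'
            '<enclosure url="%s/%s" length="%d" type="image/jpeg"/></item>'
            % (escape(title), escape(description), base_url, filename, size))
    parts.append('</channel></rss>')
    return '\n'.join(parts)

# Hypothetical usage with one photo:
print(rss_feed("https://example.com/privatepictures",
               [("photo.jpeg", "The RSS image title",
                 "The RSS image description.", 123456)]))
```

Declaring the encoding as UTF-8 in the XML prologue is what lets titles and descriptions outside ASCII survive the round trip.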
Are there better ways to do this? Get in touch if you have better suggestions.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address @@ -276,7 +217,7 @@ activities, please send Bitcoin donations to my address
Yesterday I appeared in Follo district court as an expert witness, presenting my investigations into counting films in the public domain, related to the NUUG association's involvement in the case about Økokrim's seizure, and later confiscation, of the DNS domain popcorn-time.no. I talked about several things, but mostly about my assessment of how the film industry has measured how illegal Popcorn Time is. As far as I can tell, the film industry's measurement was passed on unchanged by the Norwegian police, and the courts have relied on the measurement when assessing Popcorn Time both in Norway and abroad (the 99% figure is referenced in foreign court decisions as well).
Ahead of my testimony I wrote a note, mostly to myself, with the points I wanted to get across. Here is a copy of the note I wrote and handed to the prosecution. Oddly enough, the judges did not want the note, so if I understood the court process correctly, only the histogram graph was entered into the case documentation. The judges were apparently only interested in what I said in court, not what I had written beforehand. In any case, I assume more people than me may find the text useful, so I am publishing it here. I enclose a transcript of document 09,13, which is the central document I comment on.
Comments on «Evaluation of (il)legality» for Popcorn Time

Summary

The measurement method Økokrim relies on when claiming that 99% of the films available from Popcorn Time are shared illegally has weaknesses.

Whoever assessed whether films can be legally shared has not succeeded in identifying films that can be shared legally, and has apparently assumed that only very old films can be shared legally. Økokrim assumes there is only one film, the Charlie Chaplin film «The Circus» from 1928, that can be shared freely among those observed available via the various Popcorn Time variants. I find three more among the observed films: «The Brain That Wouldn't Die» from 1962, «God's Little Acre» from 1958 and «She Wore a Yellow Ribbon» from 1949. There may well be more. There are thus at least four times as many films that can legally be shared on the Internet in the data set Økokrim relies on when claiming that less than 1% can be shared legally.

Secondly, the sample produced by searching for random words taken from the Dale-Chall word list deviates from the year distribution of the underlying film catalogues as a whole, which affects the ratio between films that can be legally shared and films that cannot. In addition, picking from the top (the first five) of the search results causes a deviation from the correct year distribution, which affects the share of public domain works in the search result.

What is measured is not the (il)legality of the use of Popcorn Time, but the (il)legality of the content of bittorrent film catalogues that are maintained independently of Popcorn Time.

Documents discussed: 09,12, 09,13, 09,14, 09,18, 09,19, 09,20.
Detailed comments

Økokrim has told the courts that at least 99% of everything available from the various Popcorn Time variants is shared illegally on the Internet. I became curious about how they arrived at this figure, and this note is a collection of comments on the measurement Økokrim refers to. Part of the reason I chose to look into the case is that I am interested in identifying and counting how many artistic works have fallen into the public domain, or for other reasons can be legally shared on the Internet, and thus was interested in how the one percent that can perhaps be shared legally had been found.

The 99% share comes from an uncredited and undated note that sets out to document a method for measuring how (il)legal various Popcorn Time variants are.

Briefly summarised, the method document explains that since it is not possible to obtain a complete list of all film titles available via Popcorn Time, something meant to be a representative sample is created by picking 50 search words longer than three characters from the word list known as Dale-Chall. For each search word a search is performed, and the first five films in the search result are collected until 100 unique film titles have been found. If 50 search words were not enough to reach 100 unique film titles, more films from each search result were added. If this was still not enough, more randomly chosen search words were drawn and searched for until 100 unique film titles had been identified.
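As I read the method document, the sampling boils down to something like the sketch below. It is simplified (the fallback steps for when 50 words are not enough are folded into one loop), and `search_catalog` is a stand-in for a Popcorn Time search returning titles in the client's sort order:

```python
import random

def sample_titles(wordlist, search_catalog, wanted=100, per_search=5):
    """Collect the first per_search hits for randomly ordered search
    words (longer than three characters) until `wanted` unique film
    titles have been seen."""
    titles = set()
    words = [w for w in wordlist if len(w) > 3]
    random.shuffle(words)
    for word in words:
        for title in search_catalog(word)[:per_search]:
            titles.add(title)
            if len(titles) >= wanted:
                return titles
    return titles  # word list exhausted before reaching `wanted`

# Tiny stubbed catalogue, made up for illustration:
catalog = {"about": ["Movie A", "Movie B"], "afraid": ["Movie B", "Movie C"]}
print(sample_titles(["about", "afraid", "act"],
                    lambda w: catalog.get(w, []), wanted=3))
```

Writing the procedure out like this makes the two sources of bias discussed below visible: which words are drawn, and the fact that only the top of each sorted search result is kept.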
Then, for each of the film titles, it was «assessed whether it was reasonable to expect that the work was protected by copyright, by looking at whether the film was available in IMDB, as well as looking at the director, the release year, when it was released for specific market regions, and which production and distribution companies were registered» (my translation).

The method is reproduced both in the uncredited documents 09,13 and 09,19, and described from page 47 of document 09,20, slides dated 2017-02-01. The latter is credited to Geerart Bourlon from Motion Picture Association EMEA. The method appears to have several weaknesses that bias the results. It starts by stating that it is not possible to extract a complete list of all available film titles, and that this is the background for the choice of method. This premise is not consistent with what is stated in document 09,12, which also lacks an author and a date. Document 09,12 describes how the entire catalogue content was downloaded and counted. Document 09,12 is possibly the same report that was referred to in a ruling from Oslo district court 2017-11-03 (case 17-093347TVI-OTIR/05) as a report of 1 June 2017 by Alexander Kind Petersen, but I have not compared the documents word by word to verify this.
IMDB is short for The Internet Movie Database, a well-regarded commercial web service actively used by the film industry and others to keep track of which feature films (and some other films) exist or are in production, and information about these films. The data quality is high, with few errors and few missing films. IMDB does not display information about the copyright status of a film on its info page. As part of the IMDB service there are lists of films, created by volunteers, enumerating what is believed to be public domain works.

There are several sources that can be used to find films that are in the public domain or have terms of use that make it legal for everyone to share them on the Internet. In recent weeks I have tried to collect and cross-link these lists in an attempt to count the number of films in the public domain. Starting from such lists (and published films in the case of the Internet Archive), I have so far managed to identify more than 11,000 films, mainly feature films.

The vast majority of the entries are taken from IMDB itself, based on the fact that all films made in the USA before 1923 have fallen into the public domain. The corresponding cut-off for the United Kingdom is 1912-07-01, but this accounts for only a very small share of the feature films in IMDB (19 in total). Another large share comes from the Internet Archive, where I have identified films with a reference to IMDB. The Internet Archive, which is based in the USA, has a policy of only publishing films that are legal to distribute. During this work I have come across several films that have been removed from the Internet Archive, which leads me to conclude that the people controlling the Internet Archive take an active approach to keeping only legal content there, even though it is largely run by volunteers. Another large list of films comes from the commercial company Retro Film Vault, which sells public domain films to the TV and film industry. I have also made use of lists of films claimed to be in the public domain, namely Public Domain Review, Public Domain Torrents and Public Domain Movies (.net and .info), as well as lists of films with Creative Commons licensing from Wikipedia, VODO and The Hill Productions. I have done some spot checks by assessing films that are only mentioned on a single list. Where I found errors that made me doubt the judgement of those who created the list, I discarded the list completely (this applies to one list from IMDB).

Starting from works that can be assumed to be legally shared on the Internet (from, among others, the Internet Archive, Public Domain Torrents, Public Domain Review and Public Domain Movies), and linking them to entries in IMDB, I have so far managed to identify more than 11,000 films (mainly feature films) that there is reason to believe can be legally distributed by everyone on the Internet. As additional sources, lists of films assumed or claimed to be in the public domain have been used. These sources come from communities working to make available to the public all works that have fallen into the public domain or have terms of use that permit sharing.

In addition to the more than 11,000 films where the IMDB title ID has been identified, I have found more than 20,000 entries where I have not yet had the capacity to track down the IMDB title ID. Some of these are probably duplicates of the IMDB entries identified so far, but hardly all of them. Retro Film Vault claims to have 44,000 public domain film works in its catalogue, so the real number may be considerably higher than what I have managed to identify so far. The conclusion is that 11,000 is a lower bound on how many films in IMDB can be legally shared on the Internet. According to statistics from IMDB, 4.6 million titles are registered, of which 3 million are TV series episodes. I have not worked out how they are distributed per year.
If one distributes by year all the title IDs in IMDB that are claimed to be legally shareable on the Internet, one gets the following histogram:

One can see in the histogram that the effect of missing registration, or missing renewal of registration, is that many films released in the USA before 1978 are in the public domain today. One can also see that there are several films released in recent years with terms of use that permit sharing, possibly due to the rise of the Creative Commons movement.
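The tabulation behind such a histogram is simply a count of title IDs per release year. A sketch, with made-up entries rather than real data from the lists:

```python
from collections import Counter

def per_year(entries):
    """Count (title ID, year) pairs per release year, the tabulation
    behind a per-year histogram."""
    return sorted(Counter(year for _tid, year in entries).items())

# Hypothetical entries, not real title IDs:
sample = [("tt0000001", 1920), ("tt0000002", 1920), ("tt0000003", 1958)]
print(per_year(sample))  # [(1920, 2), (1958, 1)]
```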
For machine analysis of the catalogues I wrote a small program that connects to the bittorrent catalogues used by the various Popcorn Time variants and downloads the complete list of films in the catalogues, which confirms that it is possible to retrieve a complete list of all available film titles. I have looked at four bittorrent catalogues. The first is used by the client available from www.popcorntime.sh and is named 'sh' in this document. The second is, according to document 09,12, used by the client available from popcorntime.ag and popcorntime.sh, and is named 'yts' in this document. The third is used by the web pages available from popcorntime-online.tv and is named 'apidomain' in this document. The fourth is used by the client available from popcorn-time.to according to document 09,12, and is named 'ukrfnlge' in this document.
The method Økokrim relies on states in its item four that judgement is a suitable way to find out whether a film can legally be shared on the Internet or not, and says that it was «assessed whether it was reasonable to expect that the work was protected by copyright». First, it is not enough to establish whether a film is «protected by copyright» to know whether it is legal to share it on the Internet, since there are several films whose copyright terms of use permit sharing on the Internet. Examples of this are Creative Commons licensed films such as Citizenfour from 2014 and Sintel from 2010. In addition, there are several films that are now in the public domain because of missing registration, or missing renewal of registration, even though the director, the production company and the distributor all want protection. Examples of this are Plan 9 from Outer Space from 1959 and Night of the Living Dead from 1968. All films from the USA that were in the public domain before 1989-03-01 remained in the public domain, since the Berne Convention, which took effect in the USA at that point, was not applied retroactively. If there is anything the story of the song «Happy Birthday» teaches us, where payment for use was collected for decades even though the song was not actually protected by copyright, it is that each individual work must be assessed carefully and in detail before one can establish whether the work is in the public domain or not; it is not enough to take self-declared rights holders at their word. More examples of public domain works misclassified as protected are found in document 09,18, which lists search results for the client referred to as popcorntime.sh and, according to the note, contains only one film (The Circus from 1928) that can, with some doubt, be assumed to be in the public domain.

On a quick read-through of document 09,18, which contains screenshots from the use of a Popcorn Time variant, I found mentioned both the film «The Brain That Wouldn't Die» from 1962, which is available from the Internet Archive and which according to Wikipedia is in the public domain in the USA because it was released in 1962 without a 'copyright' notice, and the film «God's Little Acre» from 1958, which has been posted on Wikipedia, where it is stated that the black and white version is in the public domain. It is not clear from document 09,18 whether the film mentioned there is the black and white version. For capacity reasons, and because the film listing in document 09,18 is not machine readable, I have not attempted to check all the films listed there against the list of films assumed to be legally distributable on the Internet.

On a machine pass over the list of IMDB references in the spreadsheet tab 'Unique titles' in document 09,14, I additionally found the film «She Wore a Yellow Ribbon» from 1949, which is probably also misclassified. «She Wore a Yellow Ribbon» is available from the Internet Archive and marked as public domain there. There thus seem to be at least four times as many films that can legally be shared on the Internet than what was assumed when claiming that at least 99% of the content is illegal. I do not rule out that closer investigation could uncover more. The point, in any case, is that the method's reliance on what is «reasonable to expect to be protected by copyright» makes the method unreliable.
The measurement method in question picks random search terms from the Dale-Chall word list. That word list contains 3000 simple English words that fourth graders in the USA are expected to understand. It is not stated why this particular word list was chosen, and it is unclear to me whether it is suited to producing a representative sample of films. Many of the words give an empty search result. By simulating such searches I see large deviations from the catalogue distribution for individual measurements. This suggests that individual measurements of 100 films, as the measurement method describes, are not well suited to finding the share of illegal content in the bittorrent catalogues.

One can counteract this large deviation for individual measurements by performing many searches and merging the results. I have tested this by performing 100 individual measurements (i.e. measuring (100x100=) 10,000 randomly chosen films), which gives a smaller, but still significant, deviation compared to counting films per year in the entire catalogue.

The measurement method extracts the top five of the search results. The search results are sorted by the number of bittorrent clients registered as sharers in the catalogues, which may bias the sample towards films that are popular among those using the bittorrent catalogues, without saying anything about what content is available or what content is shared with Popcorn Time clients. I have tried to measure how large such a bias might be by comparing the distribution when taking the bottom five of the search results instead. The deviation between these two methods is clearly visible in the histograms for several catalogues. Here are histograms of films found in the complete catalogue (green line), and films found by searching for words from Dale-Chall. Graphs labelled 'top' take the first five of the search results, while those labelled 'bottom' take the last five. One can see here that the results are considerably affected by whether one looks at the first or the last films in a search hit.
[Histograms omitted: movies per year in each complete bittorrent
catalogue (green line) versus movies found via Dale-Chall word
searches, in 'top' (first five results) and 'bottom' (last five
results) variants.]
It is worth noting that the bittorrent catalogues in question were not
created for use with Popcorn Time. For example, the YTS catalogue,
used by the client downloaded from popcorntime.sh, belongs to an
independent file-sharing-related web site, YTS.AG, with a separate user
community. The measurement method proposed by Økokrim thus does not
measure the (il)legality of the use of Popcorn Time, but the
(il)legality of the content of these catalogues.
- -- -
The method from Økokrim's document 09.13 in the criminal case
concerning the DNS seizure.
- -1. Evaluation of (il)legality
Last night, I wrote a recipe to stream a Linux desktop using VLC to an
instance of Kodi. During the day I received valuable feedback, and
thanks to the suggestions I have been able to rewrite the recipe into a
much simpler approach requiring no setup at all. It is a single script
that takes care of it all.
This new script uses GStreamer instead of VLC to capture the desktop
and stream it to Kodi. This fixed the video quality issue I saw
initially. It also removes the need to add an m3u file on the Kodi
machine: instead, it connects to the JSON-RPC API in Kodi and simply
asks Kodi to play from the stream created using GStreamer. Streaming
the desktop to Kodi has now become trivial. Copy the script below, run
it with the DNS name or IP address of the Kodi server to stream to as
the only argument, and watch your screen show up on the Kodi screen.
Note that it depends on multicast on the local network, so if you need
to stream outside the local network, the script must be modified. Also
note that I have no idea if audio works, as I only care about the
picture part.
#!/bin/sh
#
# Stream the Linux desktop view to Kodi.  See
# http://people.skolelinux.org/pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html
# for background information.

# Make sure the stream is stopped in Kodi and the gstreamer process is
# killed if something goes wrong (for example if curl is unable to find the
# kodi server).  Do the same when interrupting this script.
kodicmd() {
    host="$1"
    cmd="$2"
    params="$3"
    curl --silent --header 'Content-Type: application/json' \
	 --data-binary "{ \"id\": 1, \"jsonrpc\": \"2.0\", \"method\": \"$cmd\", \"params\": $params }" \
	 "http://$host/jsonrpc"
}
cleanup() {
    if [ -n "$kodihost" ] ; then
	# Stop the playing when we end
	playerid=$(kodicmd "$kodihost" Player.GetActivePlayers "{}" |
			  jq .result[].playerid)
	kodicmd "$kodihost" Player.Stop "{ \"playerid\" : $playerid }" > /dev/null
    fi
    if [ "$gstpid" ] && kill -0 "$gstpid" >/dev/null 2>&1; then
	kill "$gstpid"
    fi
}
trap cleanup EXIT INT

if [ -n "$1" ]; then
    kodihost=$1
    shift
else
    kodihost=kodi.local
fi

mcast=239.255.0.1
mcastport=1234
mcastttl=1

pasrc=$(pactl list | grep -A2 'Source #' | grep 'Name: .*\.monitor$' | \
  cut -d" " -f2|head -1)
gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
  videoconvert ! queue2 ! \
  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
  udpsink host=$mcast port=$mcastport ttl-mc=$mcastttl auto-multicast=1 sync=0 \
  pulsesrc device=$pasrc ! audioconvert ! queue2 ! avenc_aac ! queue2 ! mux. \
  > /dev/null 2>&1 &
gstpid=$!

# Give stream a second to get going
sleep 1

# Ask kodi to start streaming using its JSON-RPC API
kodicmd "$kodihost" Player.Open \
	"{\"item\": { \"file\": \"udp://@$mcast:$mcastport\" } }" > /dev/null

# wait for gst to end
wait "$gstpid"
I hope you find the approach useful. I know I do.
-1.1. Methodology - -
Due to its technical configuration, Popcorn Time applications don't -allow to make a full list of all titles made available. In order to -evaluate the level of illegal operation of PCT, the following -methodology was applied:
- --
-
-
- A random selection of 50 keywords, greater than 3 letters, was - made from the Dale-Chall list that contains 3000 simple English - words1. The selection was made by using a Random Number - Generator2. - -
- For each keyword, starting with the first randomly selected - keyword, a search query was conducted in the movie section of the - respective Popcorn Time application. For each keyword, the first - five results were added to the title list until the number of 100 - unique titles was reached (duplicates were removed). - -
- For one fork, .CH, insufficient titles were generated via this - approach to reach 100 titles. This was solved by adding any - additional query results above five for each of the 50 keywords. - Since this still was not enough, another 42 random keywords were - selected to finally reach 100 titles. - -
- It was verified whether or not there is a reasonable expectation - that the work is copyrighted by checking if they are available on - IMDb, also verifying the director, the year when the title was - released, the release date for a certain market, the production - company/ies of the title and the distribution company/ies. - -
1.2. Results
- -Between 6 and 9 June 2016, four forks of Popcorn Time were -investigated: popcorn-time.to, popcorntime.ag, popcorntime.sh and -popcorntime.ch. An excel sheet with the results is included in -Appendix 1. Screenshots were secured in separate Appendixes for each -respective fork, see Appendix 2-5.
- -For each fork, out of 100, de-duplicated titles it was possible to -retrieve data according to the parameters set out above that indicate -that the title is commercially available. Per fork, there was 1 title -that presumably falls within the public domain, i.e. the 1928 movie -"The Circus" by and with Charles Chaplin.
- -Based on the above it is reasonable to assume that 99% of the movie -content of each fork is copyright protected and is made available -illegally.
- -This exercise was not repeated for TV series, but considering that -besides production companies and distribution companies also -broadcasters may have relevant rights, it is reasonable to assume that -at least a similar level of infringement will be established.
- -Based on the above it is reasonable to assume that 99% of all the -content of each fork is copyright protected and are made available -illegally.
+As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
After several months of working and waiting, I am happy to report
that the nice and user friendly 3D printer slicer software Cura just
entered Debian Unstable. It consists of six packages:
cura,
cura-engine,
libarcus,
fdm-materials,
libsavitar and
uranium. The last
two, uranium and cura, entered Unstable yesterday. This should make
it easier for Debian users to print on at least the Ultimaker class of
3D printers. My nearest 3D printer is an Ultimaker 2+, so it will
make life easier for at least me. :)
The work to make this happen was done by Gregor Riepl, and I was
happy to assist him by sponsoring the packages. With the introduction
of Cura, Debian now has three 3D printer slicers at your service:
Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D
printer, give it a go. :)
- -The 3D printer software is maintained by the 3D printer Debian -team, flocking together on the -3dprinter-general -mailing list and the -#debian-3dprinting -IRC channel.
The next step for Cura in Debian is to update the cura package to
version 3.0.3, and then update the entire set of packages to version
3.1.0, which showed up in the last few days.
+ +PS: See
+
A while back, I was asked by a friend how to stream the desktop to
my projector connected to Kodi. I sadly had to admit that I had no
idea, as it was a task I had never tried. Since then, I have been
looking for a way to do so, preferably without much extra software to
install on either side. Today I found a way that seems to kind of
work. Not great, but it is a start.
I had a look at several approaches, for example using uPnP DLNA as
described in 2011, but it required a uPnP server, fuse and enough local
storage to store the stream locally. This is not going to work well
for me, lacking enough free space, and it would be impossible for my
friend to get working.
Next, it occurred to me that perhaps I could use VLC to create a
video stream that Kodi could play, preferably using
broadcast/multicast to avoid having to change any setup on the Kodi
side when starting such a stream. Unfortunately, the only recipe I
could find using multicast used the rtp protocol, and this protocol
seems not to be supported by Kodi.
On the other hand, the rtsp protocol is working! Unfortunately I
have to specify the IP address of the streaming machine in both the
sending command and the file on the Kodi server. But it is showing my
desktop, and thus allows us to have a shared look on the big screen at
the programs I work on.
I did not spend much time investigating codecs. I combined the
rtp and rtsp recipes from
the VLC Streaming HowTo/Command Line Examples, and was able to get
this working on the desktop/streaming end.
+ ++ ++vlc screen:// --sout \ + '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{dst=projector.local,port=1234,sdp=rtsp://192.168.11.4:8080/test.sdp}' +
I ssh-ed into my Kodi box and created a file like this with the +same IP address:
+ ++ ++echo rtsp://192.168.11.4:8080/test.sdp \ + > /storage/videos/screenstream.m3u +
Note that the 192.168.11.4 IP address is my desktop's IP address. As
far as I can tell the IP must be hardcoded for this to work. In other
words, if someone else's machine is going to do the streaming, you have
to update screenstream.m3u on the Kodi machine and adjust the vlc
recipe. To get started, locate the file in Kodi and select the m3u
file while the VLC stream is running. The desktop then shows up on my
big screen. :)
When using the same technique to stream a video file with audio,
the audio quality is really bad. I have no idea if the problem is
packet loss or bad parameters for the transcode. I do not know VLC nor
Kodi well enough to tell.
Update 2018-07-12: Johannes Schauer sent me a few
suggestions and reminded me about an important step. The "screen:"
input source is only available once the vlc-plugin-access-extra
package is installed on Debian. Without it, you will see this error
message: "VLC is unable to open the MRL 'screen://'. Check the log
for details." He further found that it is possible to drop some parts
of the VLC command line to reduce the amount of hardcoded information.
It is also useful to consider using cvlc to avoid having the VLC
window in the desktop view. In sum, this gives us this command line on
the source end:
+ ++cvlc screen:// --sout \ + '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{sdp=rtsp://:8080/}' +
and this on the Kodi end
+ +
+ ++echo rtsp://192.168.11.4:8080/ \ + > /storage/videos/screenstream.m3u +
Still bad image quality, though. But I did discover that streaming +a DVD using dvdsimple:///dev/dvd as the source had excellent video and +audio quality, so I guess the issue is in the input or transcoding +parts, not the rtsp part. I've tried to change the vb and ab +parameters to use more bandwidth, but it did not make a +difference.
I further received a suggestion from Einar Haraldseid to try using
gstreamer instead of VLC, and this proved to work great! He also
provided me with the trick to get Kodi to use a multicast stream as
its source. Using this monstrous oneliner, I can stream my desktop
with good video quality at a reasonable framerate to the 239.255.0.1
multicast address on port 1234:
+ ++gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \ + videoconvert ! queue2 ! \ + x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \ + key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \ + mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \ + udpsink host=239.255.0.1 port=1234 ttl-mc=1 auto-multicast=1 sync=0 \ + pulsesrc device=$(pactl list | grep -A2 'Source #' | \ + grep 'Name: .*\.monitor$' | cut -d" " -f2|head -1) ! \ + audioconvert ! queue2 ! avenc_aac ! queue2 ! mux. +
and this on the Kodi end
+ +
+ ++echo udp://@239.255.0.1:1234 \ + > /storage/videos/screenstream.m3u +
Note the trick to pick a valid pulseaudio source. It might not +pick the one you need. This approach will of course lead to trouble +if more than one source uses the same multicast port and address. +Note the ttl-mc=1 setting, which limit the multicast packages to the +local network. If the value is increased, your screen will be +broadcasted further, one network "hop" for each increase (read up on +multicast to learn more. :)!
Having cracked how to get Kodi to receive multicast streams, I
could use this VLC command to stream to the same multicast address.
The image quality is way better than with the rtsp approach, but
gstreamer still seems to be doing a better job.
+ ++ ++cvlc screen:// --sout '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{mux=ts,dst=239.255.0.1,port=1234,sdp=sap}' +
As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
While looking at the scanned copies of the copyright renewal entries
for movies published in the USA, an idea occurred to me. The number of
renewals per year is so small that it should be fairly quick to
transcribe them all and add references to the corresponding IMDB title
IDs. This would give the (presumably) complete list of movies
published 28 years earlier that did _not_ enter the public domain in
the transcribed year. By fetching the list of USA movies published 28
years earlier and subtracting the movies with renewals, we should be
left with movies registered in IMDB that are now in the public domain.
For the year 1955 (which is the one I have looked at the most), the
total number of pages to transcribe is 21. For the 28 years from 1950
to 1978, it should be in the range of 500-600 pages. That is just a
few days of work, and spread among a small group of people it should be
doable in a few weeks of spare time.
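The subtraction step itself is then trivial once both lists exist as
sets of IMDB title IDs. A sketch in Python; tt0017588 (Adam and Evil)
is a real 1955 renewal taken from the entry quoted below, while the
remaining IDs are made-up placeholders:

```python
# Hypothetical data: IMDB title IDs for USA movies released in 1927,
# and the IDs found among the transcribed 1955 renewal entries.
# Only tt0017588 (Adam and Evil) is a real renewal; the rest are
# placeholder IDs for illustration.
released_1927 = {"tt0017588", "tt1000001", "tt1000002", "tt1000003"}
renewed_1955 = {"tt0017588"}

# Movies released in 1927 whose copyright was not renewed in 1955
# should now be in the public domain in the USA.
public_domain = released_1927 - renewed_1955
print(sorted(public_domain))
```

With real data, the two sets would be built from the IMDB year index
and the transcribed renewal pages respectively.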
- -A typical copyright renewal entry look like this (the first one -listed for 1955):
- -- ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer - Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); - 10Jun55; R151558. -- -
The movie title as well as the registration and renewal dates are easy
enough for a program to locate (split on the first comma and look for
DDmmmYY). The rest of the text is not required to find the movie in
IMDB, but is useful to confirm that the correct movie is found. I am
not quite sure what the L and R numbers mean, but suspect they are
reference numbers into the archive of the US Copyright Office.
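A sketch of such a program in Python, using only the single entry
quoted above; the regular expression for DDmmmYY dates is my guess at
the format and would need testing against more entries:

```python
import re

entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")

# The title is everything before the first comma.
title = entry.split(",", 1)[0]

# Dates look like DDmmmYY, for example 17Aug27.
dates = re.findall(r"\b\d{1,2}[A-Z][a-z]{2}\d{2}\b", entry)

print(title)   # ADAM AND EVIL
print(dates)   # ['17Aug27', '10Jun55']
```

The first date would be the registration and the second the renewal,
assuming the entries consistently list them in that order.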
- -Tracking down the equivalent IMDB title ID is probably going to be -a manual task, but given the year it is fairly easy to search for the -movie title using for example -http://www.imdb.com/find?q=adam+and+evil+1927&s=all. -Using this search, I find that the equivalent IMDB title ID for the -first renewal entry from 1955 is -http://www.imdb.com/title/tt0017588/.
I suspect the best way to do this would be to make a specialised
web service to make it easy for contributors to transcribe and track
down IMDB title IDs. In the web service, once an entry is transcribed,
the title and year could be extracted from the text and a search in
IMDB conducted for the user to pick the equivalent IMDB title ID right
away. By spreading the work among volunteers, it would also be
possible to have at least two persons transcribe the same entries, to
be able to discover any typos introduced. But I will need help to
make this happen, as I lack the spare time to do all of this on my
own. If you would like to help, please get in touch. Perhaps you can
draft a web service for crowdsourcing the task?
Note, Project Gutenberg already has some transcribed copies of the US
Copyright Office renewal protocols, but I have not been able to find
any film renewals there, so I suspect they only have copies of
renewals for written works. I have not been able to find any
transcribed versions of movie renewals so far. Perhaps they exist
somewhere?
I would love to figure out methods for finding all the public
domain works in other countries too, but it is a lot harder. At least
for Norway and Great Britain, such work involves tracking down the
people involved in making the movie and figuring out when they died.
It is hard enough to figure out who was part of making a movie, but I
do not know how to automate such a procedure without a registry of
every person involved in making movies and their year of death.
+ +Five years ago, +I +measured what the most supported MIME type in Debian was, by +analysing the desktop files in all packages in the archive. Since +then, the DEP-11 AppStream system has been put into production, making +the task a lot easier. This made me want to repeat the measurement, +to see how much things changed. Here are the new numbers, for +unstable only this time: + +
Debian Unstable:
+ ++ count MIME type + ----- ----------------------- + 56 image/jpeg + 55 image/png + 49 image/tiff + 48 image/gif + 39 image/bmp + 38 text/plain + 37 audio/mpeg + 34 application/ogg + 33 audio/x-flac + 32 audio/x-mp3 + 30 audio/x-wav + 30 audio/x-vorbis+ogg + 29 image/x-portable-pixmap + 27 inode/directory + 27 image/x-portable-bitmap + 27 audio/x-mpeg + 26 application/x-ogg + 25 audio/x-mpegurl + 25 audio/ogg + 24 text/html ++ +
The list was created like this using a sid chroot: "cat +/var/lib/apt/lists/*sid*_dep11_Components-amd64.yml.gz| zcat | awk '/^ +- \S+\/\S+$/ {print $2 }' | sort | uniq -c | sort -nr | head -20"
It is interesting to see how image formats have passed text/plain
as the most announced supported MIME type. These days, thanks to the
AppStream system, if you run into a file format you do not know and
want to figure out which packages support it, you can find the
MIME type of the file using "file --mime <filename>", and then
look up all packages announcing support for this format in their
AppStream metadata (XML or .desktop file) using "appstreamcli
what-provides mimetype <mime-type>". For example if you, like
me, want to know which packages support inode/directory, you can get a
list like this:
+ ++ ++% appstreamcli what-provides mimetype inode/directory | grep Package: | sort +Package: anjuta +Package: audacious +Package: baobab +Package: cervisia +Package: chirp +Package: dolphin +Package: doublecmd-common +Package: easytag +Package: enlightenment +Package: ephoto +Package: filelight +Package: gwenview +Package: k4dirstat +Package: kaffeine +Package: kdesvn +Package: kid3 +Package: kid3-qt +Package: nautilus +Package: nemo +Package: pcmanfm +Package: pcmanfm-qt +Package: qweborf +Package: ranger +Package: sirikali +Package: spacefm +Package: spacefm +Package: vifm +% +
Using the same method, I can quickly discover that the Sketchup file +format is not yet supported by any package in Debian:
+ ++ ++% appstreamcli what-provides mimetype application/vnd.sketchup.skp +Could not find component providing 'mimetype::application/vnd.sketchup.skp'. +% +
Yesterday I used it to figure out which packages support the STL 3D +format:
+ ++ ++% appstreamcli what-provides mimetype application/sla|grep Package +Package: cura +Package: meshlab +Package: printrun +% +
PS: A new version of Cura was uploaded to Debian yesterday.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address @@ -809,7 +598,7 @@ activities, please send Bitcoin donations to my address
@@ -817,40 +606,74 @@ activities, please send Bitcoin donations to my addressThree years ago, a presumed lost animation film, -Empty Socks from -1927, was discovered in the Norwegian National Library. At the -time it was discovered, it was generally assumed to be copyrighted by -The Walt Disney Company, and I blogged about -my -reasoning to conclude that it would would enter the Norwegian -equivalent of the public domain in 2053, based on my understanding of -Norwegian Copyright Law. But a few days ago, I came across -a -blog post claiming the movie was already in the public domain, at -least in USA. The reasoning is as follows: The film was released in -November or Desember 1927 (sources disagree), and presumably -registered its copyright that year. At that time, right holders of -movies registered by the copyright office received government -protection for there work for 28 years. After 28 years, the copyright -had to be renewed if the wanted the government to protect it further. -The blog post I found claim such renewal did not happen for this -movie, and thus it entered the public domain in 1956. Yet someone -claim the copyright was renewed and the movie is still copyright -protected. Can anyone help me to figure out which claim is correct? -I have not been able to find Empty Socks in Catalog of copyright -entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures -available -from the University of Pennsylvania, neither in -page -45 for the first half of 1955, nor in -page -119 for the second half of 1955. It is of course possible that -the renewal entry was left out of the printed catalog by mistake. Is -there some way to rule out this possibility? Please help, and update -the wikipedia page with your findings. +
Quite regularly, I let my Debian Sid/Unstable chroot stay untouched
for a while, and when I need to update it there is not enough free
space on the disk for apt to do a normal 'apt upgrade'. I normally
resolve the issue by doing 'apt install <somepackages>' to
upgrade only some of the packages in one batch, until the download
size falls below the amount of free space available. Today, I had
about 500 packages to upgrade, and after a while I got tired of trying
to install chunks of packages manually. I concluded that I did not
have the spare hours required to complete the task, and decided to see
if I could automate it. I came up with this small script, which I call
'apt-in-chunks':
+ ++ ++#!/bin/sh +# +# Upgrade packages when the disk is too full to upgrade every +# upgradable package in one lump. Fetching packages to upgrade using +# apt, and then installing using dpkg, to avoid changing the package +# flag for manual/automatic. + +set -e + +ignore() { + if [ "$1" ]; then + grep -v "$1" + else + cat + fi +} + +for p in $(apt list --upgradable | ignore "$@" |cut -d/ -f1 | grep -v '^Listing...'); do + echo "Upgrading $p" + apt clean + apt install --download-only -y $p + for f in /var/cache/apt/archives/*.deb; do + if [ -e "$f" ]; then + dpkg -i /var/cache/apt/archives/*.deb + break + fi + done +done +
The script will extract the list of packages to upgrade, try to
download the packages needed to upgrade one package, and install the
downloaded packages using dpkg. The idea is to upgrade packages
without changing the APT mark for the package (i.e. the one recording
whether the package was manually requested or pulled in as a
dependency). To use it, simply run it as root from the command line.
If it fails, try 'apt install -f' to clean up the mess and run the
script again. This might happen if the new packages conflict with one
of the old packages that dpkg is unable to remove, but apt can.
It takes one option: a package to ignore in the list of packages to
upgrade. The option to ignore a package is there to be able to skip
the packages that are simply too large to unpack. Today this was
'ghc', but I have run into other large packages causing similar
problems earlier (like TeX).
Update 2018-07-08: Thanks to Paul Wise, I am aware of two
alternative ways to handle this. The "unattended-upgrades
--minimal-upgrade-steps" option will try to calculate upgrade sets for
each package to upgrade, and then upgrade them in order, smallest set
first. It might be a better option than my above mentioned script.
Also, "aptitude upgrade" can upgrade single packages, thus avoiding
the need for using "dpkg -i" in the script above.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address @@ -859,7 +682,7 @@ activities, please send Bitcoin donations to my address
@@ -867,115 +690,23 @@ activities, please send Bitcoin donations to my addressIt would be easier to locate the movie you want to watch in -the Internet Archive, if the -metadata about each movie was more complete and accurate. In the -archiving community, a well known saying state that good metadata is a -love letter to the future. The metadata in the Internet Archive could -use a face lift for the future to love us back. Here is a proposal -for a small improvement that would make the metadata more useful -today. I've been unable to find any document describing the various -standard fields available when uploading videos to the archive, so -this proposal is based on my best quess and searching through several -of the existing movies.
- -I have a few use cases in mind. First of all, I would like to be -able to count the number of distinct movies in the Internet Archive, -without duplicates. I would further like to identify the IMDB title -ID of the movies in the Internet Archive, to be able to look up a IMDB -title ID and know if I can fetch the video from there and share it -with my friends.
- -Second, I would like the Butter data provider for The Internet -archive -(available -from github), to list as many of the good movies as possible. The -plugin currently do a search in the archive with the following -parameters:
- --collection:moviesandfilms -AND NOT collection:movie_trailers -AND -mediatype:collection -AND format:"Archive BitTorrent" -AND year -- -
Most of the cool movies that fail to show up in Butter do so -because the 'year' field is missing. The 'year' field is populated by -the year part from the 'date' field, and should be when the movie was -released (date or year). Two such examples are -Ben Hur -from 1905 and -Caminandes -2: Gran Dillama from 2013, where the year metadata field is -missing.
- -So, my proposal is simply, for every movie in The Internet Archive -where an IMDB title ID exist, please fill in these metadata fields -(note, they can be updated also long after the video was uploaded, but -as far as I can tell, only by the uploader): - --
-
-
- mediatype -
- Should be 'movie' for movies. - -
- collection -
- Should contain 'moviesandfilms'. - -
- title -
- The title of the movie, without the publication year. - -
- date -
- The data or year the movie was released. This make the movie show -up in Butter, as well as make it possible to know the age of the -movie and is useful to figure out copyright status. - -
- director -
- The director of the movie. This make it easier to know if the -correct movie is found in movie databases. - -
- publisher -
- The production company making the movie. Also useful for -identifying the correct movie. - -
- links - -
- Add a link to the IMDB title page, for example like this: <a -href="http://www.imdb.com/title/tt0028496/">Movie in -IMDB</a>. This make it easier to find duplicates and allow for -counting of number of unique movies in the Archive. Other external -references, like to TMDB, could be added like this too. - -
I did consider proposing a Custom field for the IMDB title ID (for -example 'imdb_title_url', 'imdb_code' or simply 'imdb', but suspect it -will be easier to simply place it in the links free text field.
- -I created -a -list of IMDB title IDs for several thousand movies in the Internet -Archive, but I also got a list of several thousand movies without -such IMDB title ID (and quite a few duplicates). It would be great if -this data set could be integrated into the Internet Archive metadata -to be available for everyone in the future, but with the current -policy of leaving metadata editing to the uploaders, it will take a -while before this happen. If you have uploaded movies into the -Internet Archive, you can help. Please consider following my proposal -above for your movies, to ensure that movie is properly -counted. :)
- -The list is mostly generated using wikidata, which based on -Wikipedia articles make it possible to link between IMDB and movies in -the Internet Archive. But there are lots of movies without a -Wikipedia article, and some movies where only a collection page exist -(like for the -Caminandes example above, where there are three movies but only -one Wikidata entry).
So far, at least hydro-electric power, coal power, wind power,
solar power, and wood power are well known. Until a few days ago, I
had never heard of stone power. Then I learned about a quarry in a
mountain in Bremanger in Norway, where the Bremanger Quarry company is
extracting stone and dumping it into a shaft leading to its shipping
harbour. The downward movement in this shaft is used to produce
electricity. In short, it is using falling rocks instead of falling
water to produce electricity, and according to its own statements it
is producing more power than it is using, and selling the surplus
electricity to the Norwegian power grid. I find the concept truly
amazing. Is this the world's only stone power plant?
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address @@ -984,7 +715,7 @@ activities, please send Bitcoin donations to my address
@@ -992,67 +723,57 @@ activities, please send Bitcoin donations to my addressA month ago, I blogged about my work to -automatically -check the copyright status of IMDB entries, and try to count the -number of movies listed in IMDB that is legal to distribute on the -Internet. I have continued to look for good data sources, and -identified a few more. The code used to extract information from -various data sources is available in -a -git repository, currently available from github.
So far I have identified 3186 unique IMDB title IDs. To gain a
better understanding of the structure of the data set, I created a
histogram of the year associated with each movie (typically the
release year). It is interesting to notice where the peaks and dips
in the graph are located. I wonder why they are placed there. I
suspect World War II caused the dip around 1940, but what caused the
peak around 2010?
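The histogram itself is straightforward to produce from a list of
(IMDB title ID, year) pairs. A sketch in Python with made-up data,
since the real list lives in the git repository:

```python
from collections import Counter

# Made-up subset of (IMDB title ID, release year) pairs; the real data
# would come from the JSON files in the git repository.
movies = [("tt0000001", 1938), ("tt0000002", 1940), ("tt0000003", 1940),
          ("tt0000004", 2010), ("tt0000005", 2010), ("tt0000006", 2010)]

histogram = Counter(year for _, year in movies)
for year in sorted(histogram):
    print(year, "#" * histogram[year])
```

Run against the full data set, this is the kind of per-year count the
peaks and dips show up in.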
- -I've so far identified ten sources for IMDB title IDs for movies in -the public domain or with a free license. This is the statistics -reported when running 'make stats' in the git repository:
- -- 249 entries ( 6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json - 2301 entries ( 540 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json - 830 entries ( 29 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json - 2109 entries ( 377 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json - 291 entries ( 122 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json - 144 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-manual.json - 350 entries ( 1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json - 4 entries ( 0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json - 698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json - 8 entries ( 8 unique) with and 196 without IMDB title ID in free-movies-vodo.json - 3186 unique IMDB title IDs in total -- -
The entries without an IMDB title ID are candidates to increase the
data set, but might equally well be duplicates of entries already
listed with an IMDB title ID in one of the other sources, or represent
movies that lack an IMDB title ID. I have seen examples of all these
situations when peeking at the entries without an IMDB title ID.
Based on these data sources, the number of movies listed in IMDB that
are legal to distribute on the Internet is at least 3186, and possibly
as high as 4713.
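The 4713 figure can be reproduced from the 'make stats' output above:
the 3186 unique IMDB title IDs plus every entry still lacking one,
optimistically assuming none of those are duplicates or missing from
IMDB:

```python
unique_with_id = 3186

# Entries without an IMDB title ID, per source, taken from the
# 'make stats' output above (sources with zero omitted).
without_id = {
    "free-movies-archive-org-butter.json": 288,
    "free-movies-publicdomainmovies.json": 801,
    "free-movies-publicdomainreview.json": 124,
    "free-movies-publicdomaintorrents.json": 118,
    "free-movies-vodo.json": 196,
}

upper_bound = unique_with_id + sum(without_id.values())
print(upper_bound)  # 4713
```

Every duplicate or non-IMDB entry among those 1527 would pull the true
number down from this upper bound towards 3186.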
It would greatly improve the accuracy of this measurement if the
-various sources added IMDB title IDs to their metadata. I have tried
-to reach the people behind the various sources to ask if they are
-interested in doing this, but have received no replies so far.
-Perhaps you can help me get in touch with the people behind VODO,
-Public Domain Torrents, Public Domain Movies and Public Domain Review
-to try to convince them to add more metadata to their movie
-entries?</p>
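Recomputing the measurement from the JSON sources takes only a few
lines of Python. A minimal sketch, assuming each file simply holds a
list of objects with an optional 'imdb' field; the actual field names
and file layout in the repository may differ:

```python
import glob
import json

def count_ids(paths):
    """Tally IMDB title IDs across a set of JSON source files.

    ASSUMPTION: each file holds a list of objects with an optional
    'imdb' field -- the real field name in the repository may differ.
    Returns (lower, upper): lower is the number of unique IDs found,
    upper additionally counts every entry lacking an ID, as if each
    such entry turned out to be a distinct movie listed in IMDB.
    """
    unique_ids = set()
    missing = 0
    for path in paths:
        with open(path) as handle:
            for entry in json.load(handle):
                title_id = entry.get("imdb")
                if title_id:
                    unique_ids.add(title_id)
                else:
                    missing += 1
    return len(unique_ids), len(unique_ids) + missing

# Example: lower, upper = count_ids(glob.glob("free-movies-*.json"))
```

Entries sharing an ID across sources are counted once, which is why
the lower figure is a deduplicated total rather than a sum of the
per-file numbers.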
-
-Another way you could help is by adding pages to Wikipedia about
-movies that are legal to distribute on the Internet. If such a page
-exists and includes a link to both IMDB and the Internet Archive, the
-script used to generate free-movies-archive-org-wikidata.json should
-pick up the mapping as soon as Wikidata is updated.</p>
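For the curious, the kind of Wikidata lookup such a script performs
can be written as a SPARQL query. A sketch, assuming the properties
P345 (IMDb ID) and P724 (Internet Archive ID) are the relevant pair;
double-check the property numbers before relying on them:

```python
import urllib.parse

# SPARQL query for Wikidata items carrying both an IMDb ID (P345) and
# an Internet Archive ID (P724) -- assumed to be the property pair
# that free-movies-archive-org-wikidata.json is derived from.
QUERY = """SELECT ?item ?imdb ?archive WHERE {
  ?item wdt:P345 ?imdb ;
        wdt:P724 ?archive .
}"""

def query_url(endpoint="https://query.wikidata.org/sparql"):
    """Build a GET URL for the query; actually fetching it is left out."""
    return endpoint + "?format=json&query=" + urllib.parse.quote(QUERY)
```

A new Wikipedia page with both links eventually shows up in this
query result, and from there in the generated JSON file.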
+
+My movie playing setup involves Kodi,
+OpenELEC (probably soon to be
+replaced with LibreELEC) and an
+InFocus IN76 video projector. The projector can be controlled via
+both an infrared remote control and an RS-232 serial line. The vendor
+of my projector, InFocus, was
+sensible enough to document the serial protocol in its user manual, so
+it is easily available, and I used it some years ago to write
+a
+small script to control the projector. For a while now, I have longed
+for a setup where the projector is controlled by Kodi, for example in
+such a way that when the screen saver goes on, the projector is turned
+off, and when the screen saver exits, the projector is turned on
+again.</p>
+
+A few days ago, with very good help from parts of my family, I
+managed to find a Kodi add-on for controlling an Epson projector, and
+got in touch with its author to see if we could join forces and make
+an add-on with support for several projectors. To my pleasure, he
+was positive about the idea, and we set out to add InFocus support to
+his add-on, and to make the add-on suitable for the official Kodi
+add-on repository.</p>
+
+The add-on is now working (for me, at least), with a few minor
+adjustments. The most important change I made relative to the master
+branch in the GitHub repository is embedding the
+pyserial module in
+the add-on. The long term solution is to make a "script" type
+pyserial module for Kodi that can be pulled in as a dependency in
+Kodi, but until that is in place, I embed it.</p>
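To illustrate the kind of serial traffic involved, here is a sketch
of sending a command with pyserial. The command name, the parenthesis
framing and the 19200 baud setting are placeholders of mine, not taken
from the IN76 manual, so consult the manual for the real protocol:

```python
def frame(command):
    """Wrap a bare command name in parenthesis framing.

    ASSUMPTION: the framing and the command names used with it are
    placeholders; the InFocus IN76 user manual documents the real
    serial protocol.
    """
    return b"(" + command.encode("ascii") + b")"

def send_command(command, port="/dev/ttyS0"):
    """Send one framed command to the projector and return its reply."""
    import serial  # pyserial -- the module the add-on embeds
    with serial.Serial(port, baudrate=19200, timeout=1) as link:
        link.write(frame(command))
        return link.read(32)  # collect any status bytes sent back

# Example (needs the projector wired up): send_command("PWR1")
```

Keeping the framing separate from the I/O makes it easy to swap in
per-vendor command tables, which is roughly what a multi-projector
add-on has to do.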
+
+The add-on can be configured to turn the projector on when Kodi
+starts and off when Kodi stops, as well as to turn the projector off
+when the screen saver starts and back on when the screen saver stops.
+It can also be told to set the projector source when turning the
+projector on.
+
+
If this sounds interesting to you, check out
+the
+project github repository. Perhaps you can send patches to add
+support for your projector too? As soon as we find time to wrap up
+the latest changes, it should be available for easy installation from
+any Kodi instance.</p>
+
+For future improvements, I would like to add projector model
+detection and the ability to adjust the brightness level of the
+projector from within Kodi. We also need to figure out how to handle
+the cooling period of the projector. My projector refuses to turn on
+for 60 seconds after it has been turned off, and this is not handled
+well by the add-on at the moment.</p>
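One way to handle the cooling period would be a small guard object
that remembers when the projector was last switched off and refuses
(or delays) power-on until the cool-down has passed. A sketch, with
the 60 second figure taken from my projector; other models differ:

```python
import time

class CoolingGuard:
    """Track power-off time and gate power-on during the cooling period."""

    def __init__(self, cooldown=60.0, clock=time.monotonic):
        self.cooldown = cooldown  # seconds the projector needs to cool
        self.clock = clock        # injectable clock, handy for testing
        self.off_at = None        # last power-off timestamp, if any

    def record_power_off(self):
        self.off_at = self.clock()

    def seconds_until_ready(self):
        """How long power-on must still wait; 0.0 when it is safe."""
        if self.off_at is None:
            return 0.0
        remaining = self.cooldown - (self.clock() - self.off_at)
        return max(0.0, remaining)

    def can_power_on(self):
        return self.seconds_until_ready() == 0.0
```

The add-on could then poll seconds_until_ready() and postpone the
power-on command instead of letting it fail silently.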
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address @@ -1061,7 +782,7 @@ activities, please send Bitcoin donations to my address
@@ -1081,7 +802,17 @@ activities, please send Bitcoin donations to my address