A new version of the 3D printer slicer software Cura, version 3.1.0, is now available in Debian Testing (aka Buster) and Debian Unstable (aka Sid). I hope you find it useful. It was uploaded over the last few days, and the final update will enter testing tomorrow. See the release notes for the list of bug fixes and new features.
Version 3.2 was announced 6 days ago. We will try to get it into Debian as well.

More information related to 3D printing is available on the 3D printing and 3D printer wiki pages in Debian.
As part of my involvement in the Nikita archive API project, I've been importing a fairly large lump of emails into a test instance of the archive to see how well this would go. I picked a subset of my notmuch email database, all public emails sent to me via @lists.debian.org, giving me a set of around 216 000 emails to import. In the process, I had a look at the various attachments included in these emails, to figure out what to do with attachments, and noticed that one of the most common attachment formats does not have an official MIME type registered with IANA/IETF. The output from diff, i.e. the input for patch, is on the top 10 list of formats included in these emails. At the moment people seem to use either text/x-patch or text/x-diff, but neither is officially registered. It would be better if one official MIME type were registered and used everywhere.
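For those curious how such a census can be done, here is a minimal sketch of the approach, assuming the notmuch Python bindings are installed; the query string is an example and should be adjusted to match your own list mail:

import email
import email.policy
from collections import Counter

import notmuch

# Open the default notmuch database and count the MIME types of named
# message parts, a rough proxy for "attachments".
db = notmuch.Database()
counts = Counter()
for msg in db.create_query('to:lists.debian.org').search_messages():
    with open(msg.get_filename(), 'rb') as f:
        parsed = email.message_from_binary_file(f, policy=email.policy.default)
    for part in parsed.walk():
        if part.get_filename():
            counts[part.get_content_type()] += 1

for mimetype, count in counts.most_common(10):
    print(count, mimetype)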
To try to get one official MIME type for these files, I've brought up the topic on the media-types mailing list. If you are interested in discussing which MIME type to use as the official one for patch files, or are involved in making software that uses a MIME type for patches, perhaps you would like to join the discussion?
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
I was fascinated by an article in Dagbladet about China's handling of Xinjiang, in particular the following excerpt:

«In the southwestern city of Kashgar, closer to the border with Central Asia, it is now reported that 120,000 Uyghurs are interned in so-called re-education camps. At the same time, a comprehensive health check program has been introduced, collecting and storing DNA samples from absolutely all inhabitants. The most advanced surveillance methods are being tested here. Programs for recognizing faces and voices are in place in the region. There, the local authorities have started installing GPS systems in all vehicles and dedicated tracking apps on mobile phones.

The police methods intrude so deeply into people's daily lives that resistance to the Beijing regime is growing.»

Sadly, the description does not differ all that much from the state of affairs here in Norway.

Data collection                                         China  Norway
Collection and storage of DNA samples from the people   Yes    Partially; planned for all newborns
Face recognition                                        Yes    Yes
Voice recognition                                       Yes    No
Position tracking of mobile phones                      Yes    Yes
Position tracking of cars                               Yes    Yes

In Norway, the situation around the Norwegian Institute of Public Health's storage of DNA information on behalf of the police, where they refused to delete information the police were not allowed to keep, has made it clear that DNA is kept for quite a long time. In addition, there are countless biobanks stored indefinitely, and there are plans to introduce perpetual storage of DNA material from every newborn baby (with the option of requesting deletion).

In Norway, a system for face recognition is in place; an NRK article from 2015 reports that it is active at Gardermoen, and that it is used to analyze images collected by the authorities. Is it used in more places? Surveillance cameras controlled by the police and other authorities are packed tightly in, for example, downtown Oslo.

I am not aware of Norway having any system for identifying people using voice recognition.

Position tracking of mobile phones is routinely available to, among others, the police, NAV and the Financial Supervisory Authority of Norway, in line with the requirements in the phone companies' licenses. In addition, smartphones report their position to the developers of countless mobile apps, from which authorities and others can retrieve information when needed. There is no need for a dedicated app for this.

Position tracking of cars is routinely available via a dense network of measurement points on the roads (automatic toll stations, toll tag registration, automatic speed cameras and other road cameras). In addition, it has been decided that all new cars must be sold with equipment for GPS tracking (eCall).

It sure is good that we live in a liberal democracy, and not a surveillance state. Or do we?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
My current home stereo is a patchwork of various pieces I picked up at flea markets over the years. It is amazing what kind of equipment shows up there. I've been wondering for a while if it was possible to measure how well this equipment works together, and decided to see how far I could get using free software. After trawling the web I came across an article from DIY Audio and Video on Speaker Testing and Analysis describing how to test speakers, which lists several software options, among them AUDio MEasurement System (AUDMES). It is the only free software system I could find focusing on measuring speakers and audio frequency response. In the process I also found an interesting article from NOVO on Understanding Speaker Specifications and Frequency Response and an article from ecoustics on Understanding Speaker Frequency Response, with a lot of information on what to look for and how to interpret the graphs. Armed with this knowledge, I set out to measure the state of my speakers.
The first hurdle was that AUDMES hadn't seen a commit for 10 years and did not build with current compilers and libraries. I got in touch with its author, who no longer spends time on the program but gave me write access to the Subversion repository on Sourceforge. The end result is that the code now builds on Linux and is capable of saving and loading the collected frequency response data in CSV format. The application is quite nice and flexible, and I was able to select the input and output audio interfaces independently. This made it possible to use a USB mixer as the input source, while sending output via my laptop's headphone connection. I lacked the hardware and cabling to set up independent cabling to the speakers and microphone in any other way.
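Having the measurements in CSV makes them easy to process outside the application. Here is a minimal plotting sketch, assuming a simple two-column frequency,level layout in the exported file; check the actual export format before relying on this:

import csv

import matplotlib.pyplot as plt

# Read (frequency, level) pairs, skipping any non-numeric header lines.
freqs, levels = [], []
with open('response.csv') as f:
    for row in csv.reader(f):
        try:
            freqs.append(float(row[0]))
            levels.append(float(row[1]))
        except (ValueError, IndexError):
            continue

# Frequency response plots conventionally use a logarithmic frequency axis.
plt.semilogx(freqs, levels)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Level (dB)')
plt.title('Speaker frequency response')
plt.savefig('response.png')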
Using this setup I could see how a large range of high frequencies apparently was not making it out of my speakers. The picture shows the frequency response measurement of one of the speakers. Note that the frequency lines seem slightly misaligned compared to the CSV output from the program. According to measurements with Free Hearing Test Software, a freeware system to measure your hearing (I am still looking for a free software alternative), I cannot hear several of these high frequencies myself, so I do not know if they are coming out of the speakers. I thus do not quite know how to figure out whether the missing frequencies are a problem with the microphone, the amplifier or the speakers, but I managed to rule out the audio card in my PC by measuring my Bose noise cancelling headset using its own microphone. That setup was able to pick up the high frequency tones, so the problem with my stereo had to be in the amplifier or the speakers.
Anyway, to try to rule out one factor I ended up picking up a new set of speakers at a flea market, and these work a lot better than the old speakers, so I guess the microphone and amplifier are OK. If you need to measure your own speakers, check out AUDMES. If more people get involved, perhaps the project could become good enough to include in Debian? And if you know of some other free software to measure speaker and amplifier performance, please let me know. I am aware of the freeware option REW, but I want something that can be developed even after the vendor loses interest.
+ +As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

We write 2018, and it is 30 years since Unicode was introduced. Most of us in Norway have come to expect the use of our alphabet to just work with any computer system. But it is apparently beyond the reach of the computers printing receipts at a restaurant. Recently I visited a Peppes Pizza restaurant, and noticed a few details on the receipt. Notice how 'ø' and 'å' are replaced with strange symbols in 'Servitør', 'à BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi gleder oss til å se deg igjen'.

I would say that this state of affairs is past sad and well into embarrassing.

I removed personal and private information to be nice.

Bittorrent is, as far as I know, currently the most efficient way to distribute content on the Internet. It is used by all sorts of content providers, from national TV stations like NRK and Linux distributors like Debian and Ubuntu, to the Internet Archive.

Almost a month ago a new package adding Bittorrent support to VLC became available in Debian testing and unstable. To test it, simply install it like this:

apt install vlc-plugin-bittorrent
Since the plugin was first made available in Debian, several improvements have been made to it. In version 2.2-4, now available in both testing and unstable, a desktop file is provided to teach browsers to start VLC when the user clicks on torrent files or magnet links. The last part is thanks to me finally understanding what the strange x-scheme-handler style MIME types in desktop files are used for. By adding x-scheme-handler/magnet to the MimeType entry in the desktop file, at least the browsers Firefox and Chromium will suggest starting VLC when a magnet URI is selected on a web page. The end result is that now, with the plugin installed in Buster and Sid, one can visit any Internet Archive page with movies using a web browser and click on the torrent link to start streaming the movie.
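To illustrate the mechanism, here is a minimal sketch of such a desktop file. It is not a copy of the one shipped in the package, just the shape of the MimeType trick described above:

[Desktop Entry]
Type=Application
Name=VLC Bittorrent handler (example)
Exec=vlc %U
MimeType=application/x-bittorrent;x-scheme-handler/magnet;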
Note, there are still some misfeatures in the plugin. One is the fact that it will hang and block VLC from exiting until the torrent streaming starts. Another is the fact that it will pick and play a random file in a multi-file torrent, which is not always the video file you want. Combined with the first, it can be a bit hard to get the video streaming going. But when it works, it seems to do a good job.
For the Debian packaging, I would love to find a good way to test that the plugin works with VLC using autopkgtest. I tried, but do not know enough about the inner workings of VLC to get it working. For now the autopkgtest script only checks that the .so file was successfully loaded by VLC. If you have any suggestions, please submit a patch to the Debian bug tracking system.
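As a starting point for a better test, something along these lines might work. This is only a sketch, not the script in the package, and it assumes that a loaded plugin shows up in the module list printed by 'vlc --list', which should be verified first:

import subprocess

# Ask VLC to list its modules and look for the bittorrent access plugin.
# Assumption: the plugin is listed by name once it loads correctly.
out = subprocess.run(['vlc', '--intf', 'dummy', '--list'],
                     capture_output=True, text=True, timeout=60)
if 'bittorrent' in (out.stdout + out.stderr).lower():
    print('plugin found')
else:
    raise SystemExit('bittorrent plugin not found in vlc --list output')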
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
I've continued to track down lists of movies that are legal to distribute on the Internet, and have identified more than 11,000 title IDs in The Internet Movie Database (IMDB) so far. Most of them (57%) are feature films from the USA published before 1923. I've also tracked down more than 24,000 movies I have not yet been able to map to an IMDB title ID, so the real number could be a lot higher. According to the front web page of Retro Film Vault, there are 44,000 public domain films, so I guess there are still some left to identify.
The complete data set is available from a public git repository, including the scripts used to create it. Most of the data is collected using web scraping, for example from the "product catalog" of companies selling copies of public domain movies, but any source I find believable is used. I've so far had to throw out three sources because I did not trust the public domain status of the movies they listed.
Anyway, this is the summary of the collected data sources so far:

 2352 entries (  66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
 2302 entries ( 120 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  195 entries (  63 unique) with and   200 without IMDB title ID in free-movies-cinemovies.json
   89 entries (  52 unique) with and    38 without IMDB title ID in free-movies-creative-commons.json
  344 entries (  28 unique) with and   655 without IMDB title ID in free-movies-fesfilm.json
  668 entries ( 209 unique) with and  1064 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (  21 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (  19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 6822 entries (6669 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
  137 entries (   0 unique) with and     0 without IMDB title ID in free-movies-imdb-externlist.json
 1205 entries (  57 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
   84 entries (  20 unique) with and   167 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries ( 135 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (   4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries ( 100 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  229 entries (  87 unique) with and     1 without IMDB title ID in free-movies-manual.json
   44 entries (   2 unique) with and    64 without IMDB title ID in free-movies-openflix.json
  291 entries (  33 unique) with and   474 without IMDB title ID in free-movies-profilms-pd.json
  211 entries (   7 unique) with and     0 without IMDB title ID in free-movies-publicdomainmovies-info.json
 1232 entries (  57 unique) with and  1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
   46 entries (  13 unique) with and    81 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (  64 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 1758 entries ( 882 unique) with and  3786 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (   0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
   63 entries (  16 unique) with and   141 without IMDB title ID in free-movies-vodo.json
11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID

I keep finding more data sources. I found the cinemovies source just a few days ago, and as you can see from the summary, it extended my list with 63 movies. Check out the mklist-* scripts in the git repository if you are curious how the lists are created. Many of the titles are extracted using searches on IMDB, where I look for the title and year, and accept search results with only one movie listed if the year matches. This allows me to automatically use many lists of movies without IMDB title ID references, at the cost of an increased risk of wrongly identifying an IMDB title ID as public domain. So far my random manual checks have indicated that the method is solid, but I really wish all lists of public domain movies would include a unique movie identifier like the IMDB title ID. It would make the job of counting movies in the public domain a lot easier.
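The same title-and-year matching heuristic can also be expressed with the IMDbPY library instead of the web scraping the repository scripts use. This is a sketch of the idea, not the code actually used:

from imdb import IMDb

def find_unique_title_id(title, year):
    """Accept a match only when exactly one search hit has the wanted year."""
    ia = IMDb()
    hits = [m for m in ia.search_movie(title) if m.get('year') == year]
    if len(hits) == 1:
        return hits[0].movieID
    return None  # ambiguous or missing, needs manual review

print(find_unique_title_id('Night of the Living Dead', 1968))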
This morning, the new release of the Nikita Noark 5 core project was announced on the project mailing list. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.2 since version 0.1.1 (from NEWS.md):
- Fix typos in REL names
- Tidy up error message reporting
- Fix issue where we used Integer.valueOf(), not Integer.getInteger()
- Change some String handling to StringBuffer
- Fix error reporting
- Code tidy-up
- Fix issue using static non-synchronized SimpleDateFormat to avoid race conditions
- Fix problem where deserialisers were treating integers as strings
- Update methods to make them null-safe
- Fix many issues reported by coverity
- Improve equals(), compareTo() and hash() in domain model
- Improvements to the domain model for metadata classes
- Fix CORS issues when downloading document
- Implementation of case-handling with registryEntry and document upload
- Better support in Javascript for OPTIONS
- Adding concept description of mail integration
- Improve setting of default values for GET on ny-journalpost
- Better handling of required values during deserialisation
- Changed tilknyttetDato (M620) from date to dateTime
- Corrected some opprettetDato (M600) (de)serialisation errors
- Improve parse error reporting
- Started on OData search and filtering
- Added Contributor Covenant Code of Conduct to project
- Moved repository and project from Github to Gitlab
- Restructured repository, moved code into src/ and web/
- Updated code to use Spring Boot version 2
- Added support for OAuth2 authentication
- Fixed several bugs discovered by Coverity
- Corrected handling of date/datetime fields
- Improved error reporting when rejecting during deserialization
- Adjusted default values provided for ny-arkivdel, ny-mappe, ny-saksmappe, ny-journalpost and ny-dokumentbeskrivelse
- Several fixes for korrespondansepart*
- Updated web GUI:
  - Now handles both file upload and download
  - Uses new OAuth2 authentication for login
  - Forms now fetch default values from API using GET
  - Added RFC 822 (email), TIFF and JPEG to list of possible file formats
The changes and improvements are extensive. Running diffstat on the changes between git tag 0.1.1 and 0.2 shows 1098 files changed, 108666 insertions(+) and 54066 deletions(-).
If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Yesterday I appeared in Follo district court as an expert witness, presenting my investigations into counting movies in the public domain, related to the NUUG association's involvement in the case about Økokrim's seizure, and later confiscation, of the DNS domain popcorn-time.no. I talked about several things, but mostly about my assessment of how the film industry has measured how illegal Popcorn Time is. As far as I can tell, the film industry's measurement was passed on unchanged by the Norwegian police, and the courts have relied on it in their assessments of Popcorn Time both in Norway and abroad (the figure of 99% is cited in foreign court decisions as well).

Ahead of my testimony I wrote a note, mostly for myself, with the points I wanted to get across. Here is a copy of the note I wrote and handed to the prosecution. Oddly enough, the judges did not want the note, so if I understood the court procedure correctly, only the histogram graph was entered into the case documentation. The judges were apparently only interested in what I said in court, not in what I had written beforehand. In any case, I assume others besides me may enjoy the text, so I publish it here. I attach a transcript of document 09,13, which is the central document I comment on.

Comments on «Evaluation of (il)legality» for Popcorn Time

Summary

The measurement method Økokrim has relied on when claiming that 99% of the films available from Popcorn Time are shared illegally has weaknesses.

Whoever assessed which films can be legally shared did not succeed in identifying films that can be shared legally, and apparently assumed that only very old films qualify. Økokrim takes as given that there is only one film, the Charlie Chaplin film «The Circus» from 1928, that can be freely shared among those observed as available via the various Popcorn Time variants. I find three more among the observed films: «The Brain That Wouldn't Die» from 1962, «God's Little Acre» from 1958 and «She Wore a Yellow Ribbon» from 1949. There may well be more. There are thus at least four times as many films that can be legally shared on the Internet in the data set Økokrim relied on when claiming that less than 1% can be shared legally.

Next, the sample, drawn by searching for random words from the Dale-Chall word list, deviates from the year distribution of the underlying film catalogs as a whole, which affects the ratio between films that can and cannot be legally shared. In addition, picking the top of the search results (the first five) skews the year distribution further, which affects the share of public domain works in the sample.

What is measured is not the (il)legality of using Popcorn Time, but the (il)legality of the content of bittorrent film catalogs that are maintained independently of Popcorn Time.

Documents discussed: 09,12, 09,13, 09,14, 09,18, 09,19, 09,20.
Detailed comments

Økokrim has told the courts that at least 99% of everything available from the various Popcorn Time variants is shared illegally on the Internet. I became curious about how they arrived at this figure, and this note is a collection of comments on the measurement Økokrim refers to. Part of my reason for looking into the case is that I am interested in identifying and counting the artistic works that have entered the public domain or for other reasons can be legally shared on the Internet, and I was therefore curious about how the one percent that can perhaps be shared legally had been found.

The 99% figure comes from an uncredited and undated note that sets out to document a method for measuring how (il)legal the various Popcorn Time variants are.

Briefly summarized, the method document explains that since it is not possible to obtain a complete list of all film titles available via Popcorn Time, something intended as a representative sample is created by picking 50 search terms longer than three characters from the word list known as Dale-Chall. For each search term a search is performed, and the first five films in the search result are collected until 100 unique film titles have been found. If 50 search terms were not enough to reach 100 unique film titles, more films from each search result were added. If this was still not enough, additional randomly chosen search terms were drawn and searched for until 100 unique film titles had been identified.

Then each film title was "assessed as to whether it was reasonable to expect that the work was protected by copyright, by checking whether the film was available in IMDB, as well as looking at the director, the release year, the release dates for specific markets, and which production and distribution companies were registered" (my translation).

The method is reproduced in the uncredited documents 09,13 and 09,19, and described from page 47 onwards in document 09,20, slides dated 2017-02-01. The latter is credited to Geerart Bourlon of the Motion Picture Association EMEA. The method appears to have several weaknesses that bias the results. It starts by stating that it is not possible to extract a complete list of all available film titles, and that this is the reason for the chosen method. This assumption does not match document 09,12, which also lacks author and date. Document 09,12 describes how the entire catalog content was downloaded and counted. Document 09,12 is possibly the same report that was referred to in the judgment of Oslo District Court 2017-11-03 (case 17-093347TVI-OTIR/05) as the report of 1 June 2017 by Alexander Kind Petersen, but I have not compared the documents word for word to verify this.

IMDB is short for The Internet Movie Database, a renowned commercial web service used actively by both the film industry and others to keep track of which feature films (and some other films) exist or are in production, along with information about those films. The data quality is high, with few errors and few missing films. IMDB does not display information about the copyright status of a film on its info page. As part of the IMDB service there are lists of films, maintained by volunteers, of what is believed to be works in the public domain.

There are several sources that can be used to find films that are in the public domain or carry terms of use that make it legal for everyone to share them on the Internet. Over the last few weeks I have tried to collect and cross-reference these lists in order to count the number of films in the public domain. Starting from such lists (and, for the Internet Archive, its published films), I have so far managed to identify more than 11,000 films, mainly feature films.

The vast majority of the entries are taken from IMDB itself, based on the fact that all films made in the USA before 1923 are in the public domain. The corresponding cut-off date for Great Britain is 1912-07-01, but these make up only a very small share of the feature films in IMDB (19 in total). Another large share comes from the Internet Archive, where I have identified films with a reference to IMDB. The Internet Archive, which is based in the USA, has a policy of only publishing films that are legal to distribute. During this work I have come across several films that have been removed from the Internet Archive, which makes me conclude that the people controlling the Internet Archive take an active approach to hosting only legal content, even though it is largely run by volunteers. Another large list of films comes from the commercial company Retro Film Vault, which sells public domain films to the TV and film industry. I have also used lists of films claimed to be in the public domain, namely Public Domain Review, Public Domain Torrents and Public Domain Movies (.net and .info), as well as lists of Creative Commons licensed films from Wikipedia, VODO and The Hill Productions. I have done some spot checks by assessing films mentioned on only one list. Where I found errors that made me doubt the judgment of those who compiled a list, I have discarded that list completely (this applies to one list from IMDB).

By starting from works that can be assumed to be legally shareable on the Internet (from, among others, the Internet Archive, Public Domain Torrents, Public Domain Review and Public Domain Movies) and linking them to entries in IMDB, I have so far managed to identify more than 11,000 films (mainly feature films) that there is reason to believe may be legally distributed by anyone on the Internet. As additional sources, lists of films assumed or claimed to be in the public domain have been used. These sources come from communities that work to make available to the public all works that have entered the public domain or carry terms of use permitting sharing.

In addition to the more than 11,000 films whose IMDB title ID has been identified, I have found more than 20,000 entries for which I have not yet had the capacity to track down an IMDB title ID. Some of these are probably duplicates of the IMDB entries already identified, but hardly all of them. Retro Film Vault claims to have 44,000 public domain film works in its catalog, so the real number may be considerably higher than what I have managed to identify so far. The conclusion is that 11,000 is a lower bound on the number of films in IMDB that can be legally shared on the Internet. According to statistics from IMDB there are 4.6 million titles registered, of which 3 million are TV series episodes. I have not found out how they are distributed per year.
If one distributes per year all the title IDs in IMDB that are claimed to be legally shareable on the Internet, the following histogram emerges:

The histogram shows that the effect of missing registration, or missing renewal of registration, is that many films released in the USA before 1978 are in the public domain today. One can also see that several films released in recent years carry terms of use permitting sharing, possibly due to the rise of the Creative Commons movement.

For machine analysis of the catalogs I wrote a small program that connects to the bittorrent catalogs used by the various Popcorn Time variants and downloads the complete list of films in each catalog, which confirms that it is possible to retrieve the complete list of available film titles. I have looked at four bittorrent catalogs. The first is used by the client available from www.popcorntime.sh and is named 'sh' in this document. The second is, according to document 09,12, used by the clients available from popcorntime.ag and popcorntime.sh, and is named 'yts' in this document. The third is used by the web pages available from popcorntime-online.tv and is named 'apidomain' in this document. The fourth is used by the client available from popcorn-time.to according to document 09,12, and is named 'ukrfnlge' in this document.

Point four of the method Økokrim relies on states that judgment is a suitable way to determine whether a film can be legally shared on the Internet, saying that it was "assessed whether it was reasonable to expect that the work was protected by copyright". First, it is not enough to establish that a film is "protected by copyright" to know whether it is legal to share it on the Internet, as several films carry copyright terms that permit sharing on the Internet. Examples are Creative Commons licensed films such as Citizenfour from 2014 and Sintel from 2010. In addition, several films are now in the public domain due to missing registration or missing renewal of registration, even though the director, the production company and the distributor all want protection. Examples are Plan 9 from Outer Space from 1959 and Night of the Living Dead from 1968. All films from the USA that were in the public domain before 1989-03-01 remained in the public domain, as the Berne Convention, which took effect in the USA at that date, was not given retroactive force. If the story of the song "Happy Birthday", where payment for use was collected for decades even though the song was not actually protected by copyright, tells us anything, it is that each individual work must be assessed carefully and in detail before one can establish whether it is in the public domain or not; it is not enough to trust self-declared rights holders. Several examples of public domain works misclassified as protected can be found in document 09,18, which lists search results for the client referred to as popcorntime.sh and which according to the note contains only one film (The Circus from 1928) that can, with some doubt, be assumed to be in the public domain.

On a quick read-through of document 09,18, which contains screenshots from the use of one Popcorn Time variant, I found mentions of both the film «The Brain That Wouldn't Die» from 1962, which is available from the Internet Archive and which according to Wikipedia is in the public domain in the USA because it was released in 1962 without a 'copyright' notice, and the film «God's Little Acre» from 1958, which has been posted on Wikipedia with a note that the black-and-white version is in the public domain. It is not clear from document 09,18 whether the film referred to there is the black-and-white version. For capacity reasons, and because the film list in document 09,18 is not machine readable, I have not tried to check all the films listed there against the list of films presumed legal to distribute on the Internet.

In a machine pass over the list of IMDB references in the spreadsheet tab 'Unique titles' of document 09,14, I additionally found the film «She Wore a Yellow Ribbon» from 1949, which is probably also misclassified. «She Wore a Yellow Ribbon» is available from the Internet Archive and is marked as public domain there. There thus seem to be at least four times as many films that can be legally shared on the Internet than was assumed when claiming that at least 99% of the content is illegal. I do not rule out that closer investigation could uncover more. The point is in any case that the method's criterion of "reasonable to expect that the work was protected by copyright" makes the method unreliable.

The measurement method picks random search terms from the Dale-Chall word list. That list contains 3000 simple English words that fourth graders in the USA are expected to understand. It is not stated why this particular word list was chosen, and it is unclear to me whether it is suited to producing a representative sample of films. Many of the words give an empty search result. By simulating similar searches, I see large deviations from the catalog's distribution in individual measurements. This suggests that individual measurements of 100 films, as described by the method, are not suited to determining the share of illegal content in the bittorrent catalogs.

One can counteract this large deviation in individual measurements by doing many searches and merging the results. I have tested this by carrying out 100 individual measurements (that is, measuring (100x100=) 10,000 randomly selected films), which gives a smaller, but still significant, deviation from the count of films per year in the complete catalog.
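To make the argument concrete, here is a small self-contained simulation of the effect, using synthetic data rather than the real catalogs; the catalog layout and the popularity-sorted search are stand-ins for the real thing:

import random
from collections import Counter

random.seed(0)
# Synthetic catalog: (year, seeders) pairs standing in for a bittorrent catalog.
catalog = [(random.randint(1920, 2017), int(random.expovariate(0.001)))
           for _ in range(10000)]

def search_top5():
    # Stand-in for a keyword search: a random subset of hits,
    # sorted by popularity, with only the top five kept.
    hits = random.sample(catalog, 50)
    return sorted(hits, key=lambda film: film[1], reverse=True)[:5]

def decade_histogram(films):
    return Counter(10 * (year // 10) for year, _ in films)

# Collect a sample of at least 100 films the way the method describes,
# then compare its decade distribution with the full catalog's.
sample = []
while len(set(sample)) < 100:
    sample.extend(search_top5())

print("catalog:", sorted(decade_histogram(catalog).items()))
print("sample: ", sorted(decade_histogram(sample).items()))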
The measurement method extracts the top five of each search result. The search results are sorted by the number of bittorrent clients registered as sharers in the catalogs, which can bias the sample towards films that are popular among the users of the bittorrent catalogs, without saying anything about what content is available or what content is shared using Popcorn Time clients. I have tried to measure how large such a bias might be by comparing with the distribution obtained by taking the bottom five of each search result instead. For several catalogs the deviation between these two methods is clearly visible in the histograms. Here are histograms of the films found in the complete catalog (green line) and of the films found by searching for words from Dale-Chall. Graphs marked 'top' take the first five of each search result, while those marked 'bottom' take the last five. One can see that the results are noticeably affected by whether one looks at the first or the last films of a search hit.
[Histograms per catalog, comparing the year distribution of the complete catalog with the 'top' and 'bottom' Dale-Chall search samples.]
It is worth noting that the bittorrent catalogs in question are not made for use with Popcorn Time. For example, the YTS catalog, used by the client downloaded from popcorntime.sh, belongs to an independent file-sharing web site, YTS.AG, with a separate user community. The measurement method proposed by Økokrim thus does not measure the (il)legality of using Popcorn Time, but the (il)legality of the content of these catalogs.
I have earlier covered the basics of trusted timestamping using the 'openssl ts' client. See the blog posts from 2014, 2016 and 2017 for those stories. But sometimes I want to integrate the timestamping into other code, and recently I needed to integrate it into Python. After searching a bit, I found the rfc3161 library, which seemed like a good fit, but I soon discovered it only worked with Python version 2, and I needed something that works with Python version 3. Luckily, I next came across the rfc3161ng library, a fork of the original rfc3161 library. Not only does it work with Python 3, it has fixed a few of the bugs in the original library, and it has an active maintainer. I decided to wrap it up and make it available in Debian, and a few days ago it entered Debian unstable and testing.
Using the library is fairly straightforward. The only slightly problematic step is fetching the certificates required to verify the timestamp. For some services it is straightforward, while for others I have not yet figured out how to do it. Here is a small standalone code example based on one of the integration tests in the library code:
#!/usr/bin/python3

"""

Python 3 script demonstrating how to use the rfc3161ng module to
get trusted timestamps.

The license of this code is the same as the license of the rfc3161ng
library, ie MIT/BSD.

"""

import pyasn1.codec.der.encoder
import rfc3161ng
import subprocess
import tempfile
import urllib.request

def store(f, data):
    f.write(data)
    f.flush()
    f.seek(0)

def fetch(url, f=None):
    response = urllib.request.urlopen(url)
    data = response.read()
    if f:
        store(f, data)
    return data

def main():
    with tempfile.NamedTemporaryFile() as cert_f,\
         tempfile.NamedTemporaryFile() as ca_f,\
         tempfile.NamedTemporaryFile() as msg_f,\
         tempfile.NamedTemporaryFile() as tsr_f:

        # First fetch the certificates used by the service
        certificate_data = fetch('https://freetsa.org/files/tsa.crt', cert_f)
        ca_data = fetch('https://freetsa.org/files/cacert.pem', ca_f)

        # Then timestamp the message
        timestamper = \
            rfc3161ng.RemoteTimestamper('http://freetsa.org/tsr',
                                        certificate=certificate_data)
        data = b"Python forever!\n"
        tsr = timestamper(data=data, return_tsr=True)

        # Finally, convert message and response to something 'openssl ts' can verify
        store(msg_f, data)
        store(tsr_f, pyasn1.codec.der.encoder.encode(tsr))
        args = ["openssl", "ts", "-verify",
                "-data", msg_f.name,
                "-in", tsr_f.name,
                "-CAfile", ca_f.name,
                "-untrusted", cert_f.name]
        subprocess.check_call(args)

if '__main__' == __name__:
    main()
The code fetches the required certificates, stores them as temporary files, timestamps a simple message, stores the message and timestamp to disk and asks 'openssl ts' to verify the timestamp. A timestamp is around 1.5 kiB in size, and should be fairly easy to store for future use.

The method from Økokrim's document 09,13 in the criminal case about the DNS seizure:

1. Evaluation of (il)legality
1.1. Methodology

Due to its technical configuration, Popcorn Time applications don't allow to make a full list of all titles made available. In order to evaluate the level of illegal operation of PCT, the following methodology was applied:

- A random selection of 50 keywords, greater than 3 letters, was made from the Dale-Chall list that contains 3000 simple English words1. The selection was made by using a Random Number Generator2.

- For each keyword, starting with the first randomly selected keyword, a search query was conducted in the movie section of the respective Popcorn Time application. For each keyword, the first five results were added to the title list until the number of 100 unique titles was reached (duplicates were removed).

- For one fork, .CH, insufficient titles were generated via this approach to reach 100 titles. This was solved by adding any additional query results above five for each of the 50 keywords. Since this still was not enough, another 42 random keywords were selected to finally reach 100 titles.

- It was verified whether or not there is a reasonable expectation that the work is copyrighted by checking if they are available on IMDb, also verifying the director, the year when the title was released, the release date for a certain market, the production company/ies of the title and the distribution company/ies.
1.2. Results

Between 6 and 9 June 2016, four forks of Popcorn Time were investigated: popcorn-time.to, popcorntime.ag, popcorntime.sh and popcorntime.ch. An excel sheet with the results is included in Appendix 1. Screenshots were secured in separate Appendixes for each respective fork, see Appendix 2-5.

For each fork, out of 100 de-duplicated titles it was possible to retrieve data according to the parameters set out above that indicate that the title is commercially available. Per fork, there was 1 title that presumably falls within the public domain, i.e. the 1928 movie "The Circus" by and with Charles Chaplin.

Based on the above it is reasonable to assume that 99% of the movie content of each fork is copyright protected and is made available illegally.

This exercise was not repeated for TV series, but considering that besides production companies and distribution companies also broadcasters may have relevant rights, it is reasonable to assume that at least a similar level of infringement will be established.

Based on the above it is reasonable to assume that 99% of all the content of each fork is copyright protected and are made available illegally.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
After several months of working and waiting, I am happy to report that the nice and user friendly 3D printer slicer software Cura just entered Debian Unstable. It consists of six packages: cura, cura-engine, libarcus, fdm-materials, libsavitar and uranium. The last two, uranium and cura, entered Unstable yesterday. This should make it easier for Debian users to print on at least the Ultimaker class of 3D printers. My nearest 3D printer is an Ultimaker 2+, so it will make life easier for at least me. :)

The work to make this happen was done by Gregor Riepl, and I was happy to assist him in sponsoring the packages. With the introduction of Cura, Debian is up to three 3D printer slicers at your service: Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D printer, give it a go. :)

The 3D printer software is maintained by the 3D printer Debian team, flocking together on the 3dprinter-general mailing list and the #debian-3dprinting IRC channel.

The next step for Cura in Debian is to update the cura package to version 3.0.3 and then update the entire set of packages to version 3.1.0, which showed up the last few days.
A few days ago, I rescued a Windows victim over to Debian. To try to salvage the remains, I helped set up automatic synchronization with Google Drive. I did not find any sensible Debian package handling this automatically, so I rebuilt the grive2 source from the Ubuntu UPD8 PPA to do the task, and added an autostart desktop entry and a small shell script to run in the background while the user is logged in to do the sync. Here is a sketch of the setup for future reference.
I first created ~/googledrive, entered the directory and ran 'grive -a' to authenticate the machine/user. Next, I created an autostart hook in ~/.config/autostart/grive.desktop to start the sync when the user logs in:
[Desktop Entry]
Name=Google drive autosync
Type=Application
Exec=/home/user/bin/grive-sync
Finally, I wrote the ~/bin/grive-sync script to sync +~/googledrive/ with the files in Google Drive.
#!/bin/sh
set -e
cd ~/
cleanup() {
    if [ "$syncpid" ] ; then
        kill $syncpid
    fi
}
trap cleanup EXIT INT QUIT
# Run the change listener in the background and remember its pid,
# so cleanup() can kill it when this script exits.
/usr/lib/grive/grive-sync.sh listen googledrive 2>&1 | sed "s%^%$0:%" &
syncpid=$!
while true; do
    # Exit when the X session is gone, ie the user logged out.
    if ! xhost >/dev/null 2>&1 ; then
        echo "no DISPLAY, exiting as the user probably logged out"
        exit 1
    fi
    if [ ! -e /run/user/1000/grive-sync.sh_googledrive ] ; then
        /usr/lib/grive/grive-sync.sh sync googledrive
    fi
    sleep 300
done 2>&1 | sed "s%^%$0:%"
Feel free to use the setup if you want. It can be assumed to be GNU GPL v2 licensed (or any later version, at your leisure), but I doubt this code is substantial enough to claim copyright on.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
While looking at the scanned copies of the copyright renewal entries for movies published in the USA, an idea occurred to me. The number of renewals is so small per year that it should be fairly quick to transcribe them all and add references to the corresponding IMDB title IDs. This would give the (presumably) complete list of movies published 28 years earlier that did _not_ enter the public domain in the transcribed year. By fetching the list of USA movies published 28 years earlier and subtracting the movies with renewals, we should be left with the movies registered in IMDB that are now in the public domain. For the year 1955 (which is the one I have looked at the most), the total number of pages to transcribe is 21. For the 28 years from 1950 to 1978, it should be in the range of 500-600 pages. It is just a few days of work, and spread among a small group of people it should be doable in a few weeks of spare time.
A typical copyright renewal entry looks like this (the first one listed for 1955):
ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
10Jun55; R151558.
The movie title as well as the registration and renewal dates are easy enough to locate with a program (split on the first comma and look for DDmmmYY dates). The rest of the text is not required to find the movie in IMDB, but is useful for confirming that the correct movie is found. I am not quite sure what the L and R numbers mean, but suspect they are reference numbers into the archive of the US Copyright Office.
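A minimal sketch of that extraction, using the entry above as test data, could look like this:

import re

entry = """ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
10Jun55; R151558."""

# Normalize whitespace, split the title off at the first comma, and
# pick out the DDmmmYY style dates for registration and renewal.
text = " ".join(entry.split())
title = text.split(",", 1)[0]
dates = re.findall(r"\b\d{2}[A-Z][a-z]{2}\d{2}\b", text)

print(title)   # ADAM AND EVIL
print(dates)   # ['17Aug27', '10Jun55']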
Tracking down the equivalent IMDB title ID is probably going to be a manual task, but given the year it is fairly easy to search for the movie title using for example http://www.imdb.com/find?q=adam+and+evil+1927&s=all. Using this search, I find that the equivalent IMDB title ID for the first renewal entry from 1955 is http://www.imdb.com/title/tt0017588/.
I suspect the best way to do this would be to make a specialised web service that makes it easy for contributors to transcribe entries and track down IMDB title IDs. In the web service, once an entry is transcribed, the title and year could be extracted from the text and a search conducted in IMDB, letting the user pick the equivalent IMDB title ID right away. By spreading the work among volunteers, it would also be possible to have at least two people transcribe the same entries, to catch any typos introduced. But I will need help to make this happen, as I lack the spare time to do all of this on my own. If you would like to help, please get in touch. Perhaps you can draft a web service for crowdsourcing the task?
Note, Project Gutenberg already has some transcribed copies of the US Copyright Office renewal protocols, but I have not been able to find any film renewals there, so I suspect they only have copies of renewals for written works. I have not been able to find any transcribed versions of movie renewals so far. Perhaps they exist somewhere?
I would love to figure out methods for finding all the public domain works in other countries too, but that is a lot harder. At least for Norway and Great Britain, such work involves tracking down the people involved in making the movie and figuring out when they died. It is hard enough to figure out who was part of making a movie, but I do not know how to automate such a procedure without a registry of every person involved in making movies and their year of death.
It would come as no surprise to anyone that I am interested in bitcoins and virtual currencies. I've been keeping an eye on virtual currencies for many years, and it is part of the reason why, a few months ago, I started writing a Python library for collecting currency exchange rates and trading on virtual currency exchanges. I decided to name the end result valutakrambod, which can perhaps be translated as "small currency shop".
The library uses the tornado Python library to handle HTTP and websocket connections, and provides an asynchronous system for connecting to and tracking several services. The code is available from github.
There are two example clients of the library. One is very simple and lists every updated buy/sell price received from the various services. This code is started by running bin/btc-rates and calls the client code in valutakrambod/client.py. The simple client looks like this:

import functools
import tornado.ioloop
import valutakrambod
class SimpleClient(object):
    def __init__(self):
        self.services = []
        self.streams = []
        pass
    def newdata(self, service, pair, changed):
        print("%-15s %s-%s: %8.3f %8.3f" % (
            service.servicename(),
            pair[0],
            pair[1],
            service.rates[pair]['ask'],
            service.rates[pair]['bid'])
        )
    async def refresh(self, service):
        await service.fetchRates(service.wantedpairs)
    def run(self):
        self.ioloop = tornado.ioloop.IOLoop.current()
        self.services = valutakrambod.service.knownServices()
        for e in self.services:
            service = e()
            service.subscribe(self.newdata)
            stream = service.websocket()
            if stream:
                self.streams.append(stream)
            else:
                # Fetch information from non-streaming services immediately
                self.ioloop.call_later(len(self.services),
                                       functools.partial(self.refresh, service))
                # as well as regularly
                service.periodicUpdate(60)
        for stream in self.streams:
            stream.connect()
        try:
            self.ioloop.start()
        except KeyboardInterrupt:
            print("Interrupted by keyboard, closing all connections.")
            pass
        for stream in self.streams:
            stream.close()
The library client loops over all known "public" services, initialises each of them, subscribes to any updates from the service, checks for and activates websocket streaming if the service provides it, and if no streaming is supported, fetches information from the service and sets up a periodic update every 60 seconds. The output from this client can look like this:
Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.560 6593.690
Hitbtc          BTC-USD: 6594.560 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.570 6593.690
Bitstamp        EUR-USD:    1.159    1.154
Hitbtc          BTC-USD: 6594.570 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Paymium         BTC-EUR: 5680.000 5620.240
The exchange order book is tracked in addition to the best buy/sell price, for those that need to know the details.
The other example client focuses on providing a curses view with updated buy/sell prices as soon as they are received from the services. This code is located in bin/btc-rates-curses and activated by using the '-c' argument. Without the argument, the "curses" output is printed without using curses, which is useful for debugging. The curses view looks like this:
           Name Pair        Bid        Ask   Spr  Ftcd    Age
 BitcoinsNorway BTCEUR  5591.8400  5711.0800  2.1%   16    nan     60
       Bitfinex BTCEUR  5671.0000  5671.2000  0.0%   16     22     59
        Bitmynt BTCEUR  5580.8000  5807.5200  3.9%   16     41     60
         Bitpay BTCEUR  5663.2700        nan  nan%   15    nan     60
       Bitstamp BTCEUR  5664.8400  5676.5300  0.2%    0      1      1
           Bl3p BTCEUR  5653.6900  5684.9400  0.5%    0    nan     19
       Coinbase BTCEUR  5600.8200  5714.9000  2.0%   15    nan    nan
         Kraken BTCEUR  5670.1000  5670.2000  0.0%   14     17     60
        Paymium BTCEUR  5620.0600  5680.0000  1.1%    1   7515    nan
 BitcoinsNorway BTCNOK 52898.9700 54034.6100  2.1%   16    nan     60
        Bitmynt BTCNOK 52960.3200 54031.1900  2.0%   16     41     60
         Bitpay BTCNOK 53477.7833        nan  nan%   16    nan     60
       Coinbase BTCNOK 52990.3500 54063.0600  2.0%   15    nan    nan
        MiraiEx BTCNOK 52856.5300 54100.6000  2.3%   16    nan    nan
 BitcoinsNorway BTCUSD  6495.5300  6631.5400  2.1%   16    nan     60
       Bitfinex BTCUSD  6590.6000  6590.7000  0.0%   16     23     57
         Bitpay BTCUSD  6564.1300        nan  nan%   15    nan     60
       Bitstamp BTCUSD  6561.1400  6565.6200  0.1%    0      2      1
       Coinbase BTCUSD  6504.0600  6635.9700  2.0%   14    nan    117
         Gemini BTCUSD  6567.1300  6573.0700  0.1%   16     89    nan
         Hitbtc BTCUSD  6592.6200  6594.2100  0.0%    0      0      0
         Kraken BTCUSD  6565.2000  6570.9000  0.1%   15     17     58
  Exchangerates EURNOK     9.4665     9.4665  0.0%   16 107789    nan
     Norgesbank EURNOK     9.4665     9.4665  0.0%   16 107789    nan
       Bitstamp EURUSD     1.1537     1.1593  0.5%    4      5      1
  Exchangerates EURUSD     1.1576     1.1576  0.0%   16 107789    nan
 BitcoinsNorway LTCEUR     1.0000    49.0000 98.0%   16    nan    nan
 BitcoinsNorway LTCNOK   492.4800   503.7500  2.2%   16    nan     60
 BitcoinsNorway LTCUSD     1.0221    49.0000 97.9%   15    nan    nan
     Norgesbank USDNOK     8.1777     8.1777  0.0%   16 107789    nan
The code for this client is too complex for a simple blog post, so you will have to check out the git repository to figure out how it works. What I can tell you is how the three last numbers on each line should be interpreted. The first is how many seconds ago information was received from the service. The second is how long ago, according to the service, the provided information was updated. The last is an estimate of how often the buy/sell values change.
If you find this library useful, or would like to improve it, I would love to hear from you. Note that for some of the services I've implemented a trading API. It might be the topic of a future blog post.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian copyright law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: The film was released in November or December 1927 (sources disagree), and presumably its copyright was registered that year. At that time, right holders of movies registered by the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such renewal did not happen for this movie, and thus it entered the public domain in 1956. Yet someone claims the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in the Catalog of copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures available from the University of Pennsylvania, neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the Wikipedia page with your findings.
Back in February, I got curious to see if VLC now supported Bittorrent streaming. It did not, despite the fact that the idea and code to handle such streaming had been floating around for years. I did however find a standalone plugin for VLC to do it, and half a year later I decided to wrap up the plugin and get it into Debian. I uploaded it to NEW a few days ago, and am very happy to report that it entered Debian a few hours ago, and should be available in Debian/Unstable tomorrow, and Debian/Testing in a few days.
With the vlc-plugin-bittorrent package installed you should be able to stream videos using a simple call to

vlc https://archive.org/download/TheGoat/TheGoat_archive.torrent

It can handle magnet links too. Now if only native vlc had bittorrent support. Then a lot more people would be helping each other to share public domain and Creative Commons movies. The plugin needs some stability work with seeking and picking the right file in a torrent with many files, but is already usable. Please note that the plugin does not remove downloaded files when vlc is stopped, so it can fill up your disk if you are not careful. Have fun. :)
I would love to get help maintaining this package. Get in touch if you are interested.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
It would be easier to locate the movie you want to watch in the Internet Archive if the metadata about each movie were more complete and accurate. In the archiving community, a well known saying states that good metadata is a love letter to the future. The metadata in the Internet Archive could use a face lift for the future to love us back. Here is a proposal for a small improvement that would make the metadata more useful today. I've been unable to find any document describing the various standard fields available when uploading videos to the archive, so this proposal is based on my best guess and on searching through several of the existing movies.
I have a few use cases in mind. First of all, I would like to be able to count the number of distinct movies in the Internet Archive, without duplicates. I would further like to identify the IMDB title ID of the movies in the Internet Archive, to be able to look up an IMDB title ID and know if I can fetch the video from there and share it with my friends.
Second, I would like the Butter data provider for The Internet Archive (available from github) to list as many of the good movies as possible. The plugin currently does a search in the archive with the following parameters:
collection:moviesandfilms
AND NOT collection:movie_trailers
AND -mediatype:collection
AND format:"Archive BitTorrent"
AND year
Most of the cool movies that fail to show up in Butter do so because the 'year' field is missing. The 'year' field is populated from the year part of the 'date' field, and should be set to when the movie was released (date or year). Two such examples are Ben Hur from 1905 and Caminandes 2: Gran Dillama from 2013, where the year metadata field is missing.
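If you want to check which items the Butter plugin would pick up, here is a minimal sketch that runs the same query using the internetarchive Python package (my choice of tool, an assumption on my part; the plugin itself talks to the archive.org search API directly):

# Minimal sketch: run the Butter provider's archive.org query and
# print the identifier of each matching item.  Assumes the
# 'internetarchive' Python package (pip install internetarchive).
from internetarchive import search_items

query = ('collection:moviesandfilms'
         ' AND NOT collection:movie_trailers'
         ' AND -mediatype:collection'
         ' AND format:"Archive BitTorrent"'
         ' AND year')

for result in search_items(query):
    print(result['identifier'])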
So, my proposal is simply: for every movie in The Internet Archive where an IMDB title ID exists, please fill in these metadata fields (note, they can be updated long after the video was uploaded, but as far as I can tell, only by the uploader):
mediatype
  Should be 'movie' for movies.

collection
  Should contain 'moviesandfilms'.

title
  The title of the movie, without the publication year.

date
  The date or year the movie was released. This makes the movie show
  up in Butter, and makes it possible to know the age of the movie,
  which is useful when figuring out its copyright status.

director
  The director of the movie. This makes it easier to check that the
  correct movie has been found in movie databases.

publisher
  The production company making the movie. Also useful for
  identifying the correct movie.

links
  Add a link to the IMDB title page, for example like this: <a
  href="http://www.imdb.com/title/tt0028496/">Movie in IMDB</a>.
  This makes it easier to find duplicates and allows counting the
  number of unique movies in the Archive. Other external references,
  like to TMDB, could be added the same way.
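To illustrate, here is a rough sketch of how such an update could be scripted using the internetarchive Python package; the item identifier and all the field values below are made-up examples, not real items:

# Rough sketch: fill in the proposed metadata fields on an Internet
# Archive item using the 'internetarchive' Python package.  The
# identifier and the field values are made-up examples, and the call
# only succeeds if you are the uploader of the item (set up your
# archive.org credentials with 'ia configure' first).
from internetarchive import modify_metadata

response = modify_metadata(
    'some-movie-item',  # hypothetical item identifier
    metadata={
        'mediatype': 'movie',
        'collection': 'moviesandfilms',
        'title': 'Example Movie',
        'date': '1927',
        'director': 'Example Director',
        'publisher': 'Example Studio',
    },
)
print(response.status_code)  # expect 200 on success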
I did consider proposing a custom field for the IMDB title ID (for example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but I suspect it will be easier to simply place it in the links free text field.
I created a list of IMDB title IDs for several thousand movies in the Internet Archive, but I also got a list of several thousand movies without such an IMDB title ID (and quite a few duplicates). It would be great if this data set could be integrated into the Internet Archive metadata, to be available for everyone in the future, but with the current policy of leaving metadata editing to the uploaders, it will take a while before this happens. If you have uploaded movies to the Internet Archive, you can help. Please consider following my proposal above for your movies, to ensure they are properly counted. :)
The list is mostly generated using wikidata, which, based on Wikipedia articles, makes it possible to link IMDB entries to movies in the Internet Archive. But there are lots of movies without a Wikipedia article, and some movies where only a collection page exists (like for the Caminandes example above, where there are three movies but only one Wikidata entry).
I continue to explore my Kodi installation, and today I wanted to tell it to play a youtube URL I received in a chat, without having to insert search terms using the on-screen keyboard. After searching the web for API access to the Youtube plugin and testing a bit, I managed to find a recipe that worked. If you have a Kodi instance with its API available from http://kodihost/jsonrpc, you can try the following to check out a nice cover band.
curl --silent --header 'Content-Type: application/json' \
  --data-binary '{ "id": 1, "jsonrpc": "2.0", "method": "Player.Open",
  "params": {"item": { "file":
  "plugin://plugin.video.youtube/play/?video_id=LuRGVM9O0qg" } } }' \
  http://projector.local/jsonrpc
I've extended the kodi-stream program to take a video source as its first argument. It can now handle direct video links, youtube links and 'desktop' to stream my desktop to Kodi. It is almost like a Chromecast. :)
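To give an idea of what such a wrapper can look like, here is a minimal Python sketch of the same JSON-RPC call; the host name and the youtube URL handling are my assumptions, not the actual kodi-stream implementation:

#!/usr/bin/env python3
# Minimal sketch of a kodi-stream style wrapper: map a video source
# to something Kodi's Player.Open method understands and post it to
# the JSON-RPC endpoint.  The host name and the youtube URL handling
# are assumptions, and the 'desktop' mode mentioned above is left out.
import json
import sys
import urllib.parse
import urllib.request

KODI = 'http://projector.local/jsonrpc'

def to_kodi_file(source):
    # Turn a youtube watch URL into the plugin:// form, and pass
    # anything else through as a direct link.
    url = urllib.parse.urlparse(source)
    if url.netloc.endswith('youtube.com'):
        video_id = urllib.parse.parse_qs(url.query)['v'][0]
        return 'plugin://plugin.video.youtube/play/?video_id=' + video_id
    return source

def play(source):
    payload = {
        'id': 1,
        'jsonrpc': '2.0',
        'method': 'Player.Open',
        'params': {'item': {'file': to_kodi_file(source)}},
    }
    request = urllib.request.Request(
        KODI,
        data=json.dumps(payload).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(request) as response:
        print(response.read().decode('utf-8'))

if __name__ == '__main__':
    play(sys.argv[1])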
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and to try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and identified a few more. The code used to extract information from the various data sources is available in a git repository, currently available from github.
So far I have identified 3186 unique IMDB title IDs. To gain a better understanding of the structure of the data set, I created a histogram of the year associated with each movie (typically the release year). It is interesting to notice where the peaks and dips in the graph are located. I wonder why they are placed there. I suspect World War II caused the dip around 1940, but what caused the peak around 2010?
I've so far identified ten sources of IMDB title IDs for movies in the public domain or with a free license. These are the statistics reported when running 'make stats' in the git repository:
  249 entries (   6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json
 2301 entries ( 540 unique) with and   0 without IMDB title ID in free-movies-archive-org-wikidata.json
  830 entries (  29 unique) with and   0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
 2109 entries ( 377 unique) with and   0 without IMDB title ID in free-movies-imdb-pd.json
  291 entries ( 122 unique) with and   0 without IMDB title ID in free-movies-letterboxd-pd.json
  144 entries ( 135 unique) with and   0 without IMDB title ID in free-movies-manual.json
  350 entries (   1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json
    4 entries (   0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
    8 entries (   8 unique) with and 196 without IMDB title ID in free-movies-vodo.json
 3186 unique IMDB title IDs in total
The entries without an IMDB title ID are candidates for increasing the data set, but might equally well be duplicates of entries already listed with an IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID. I've seen examples of all these situations when peeking at the entries without an IMDB title ID. Based on these data sources, the lower bound for the number of movies listed in IMDB that are legal to distribute on the Internet is between 3186 and 4713 (the 3186 known title IDs, plus up to 1527 entries that so far lack one).
It would be great for improving the accuracy of this measurement if the various sources added IMDB title IDs to their metadata. I have tried to reach the people behind the various sources to ask if they are interested in doing this, without any replies so far. Perhaps you can help me get in touch with the people behind VODO, Public Domain Torrents, Public Domain Movies and Public Domain Review, to try to convince them to add more metadata to their movie entries?
Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes links to both IMDB and The Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as Wikidata is updated.
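The underlying Wikidata lookup is simple in principle. Here is a rough sketch of how the mapping could be fetched over the public SPARQL endpoint, assuming the properties P345 (IMDb ID) and P724 (Internet Archive ID); it is not the actual script from the git repository:

# Rough sketch: fetch the IMDB to Internet Archive mapping from
# Wikidata over its public SPARQL endpoint, using only the Python
# standard library.  Property P345 is the IMDb ID and P724 is the
# Internet Archive identifier.
import json
import urllib.parse
import urllib.request

query = '''
SELECT ?item ?imdb ?iaid WHERE {
  ?item wdt:P345 ?imdb .
  ?item wdt:P724 ?iaid .
}
'''

url = ('https://query.wikidata.org/sparql?format=json&query='
       + urllib.parse.quote(query))
request = urllib.request.Request(
    url, headers={'User-Agent': 'free-movies-example/0.1'})
with urllib.request.urlopen(request) as response:
    data = json.load(response)

for row in data['results']['bindings']:
    print(row['imdb']['value'], row['iaid']['value'])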
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
It might seem obvious that software created using tax money should be available for everyone to use and improve. The Free Software Foundation Europe recently started a campaign to help more people understand this, and I just signed the petition on Public Money, Public Code to support it. I hope you will do the same.