X-Git-Url: http://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/ae5db6d19f3d85fdd5e7bd4c12be28fa3f15fc43..cd775096bf095558010073a71ca149dbfe487c3f:/blog/index.html diff --git a/blog/index.html b/blog/index.html index 96c2134fd9..032f861b4b 100644 --- a/blog/index.html +++ b/blog/index.html @@ -20,69 +20,85 @@
-
Aktivitetsbånd som beskytter privatsfæren
-
3rd November 2016
-

Jeg ble så imponert over -dagens -gladnyhet på NRK, om at Forbrukerrådet klager inn vilkårene for -bruk av aktivitetsbånd fra Fitbit, Garmin, Jawbone og Mio til -Datatilsynet og forbrukerombudet, at jeg sendte følgende brev til -forbrukerrådet for å uttrykke min støtte: - -

- -

Jeg ble veldig glad over å lese at Forbrukerrådet -klager -inn flere aktivitetsbånd til Datatilsynet for dårlige vilkår. Jeg -har ønsket meg et aktivitetsbånd som kan måle puls, bevegelse og -gjerne også andre helserelaterte indikatorer en stund nå. De eneste -jeg har funnet i salg gjør, som dere også har oppdaget, graverende -inngrep i privatsfæren og sender informasjonen ut av huset til folk og -organisasjoner jeg ikke ønsker å dele aktivitets- og helseinformasjon -med. Jeg ønsker et alternativ som _ikke_ sender informasjon til -skyen, men derimot bruker -en -fritt og åpent standardisert protokoll (eller i det minste en -dokumentert protokoll uten patent- og opphavsrettslige -bruksbegrensinger) til å kommunisere med datautstyr jeg kontrollerer. -Er jo ikke interessert i å betale noen for å tilrøve seg -personopplysninger fra meg. Desverre har jeg ikke funnet noe -alternativ så langt.

- -

Det holder ikke å endre på bruksvilkårene for enhetene, slik -Datatilsynet ofte legger opp til i sin behandling, når de gjør slik -f.eks. Fitbit (den jeg har sett mest på). Fitbit krypterer -informasjonen på enheten og sender den kryptert til leverandøren. Det -gjør det i praksis umulig både å sjekke hva slags informasjon som -sendes over, og umulig å ta imot informasjonen selv i stedet for -Fitbit. Uansett hva slags historie som forteller i bruksvilkårene er -en jo både prisgitt leverandørens godvilje og at de ikke tvinges av -sitt lands myndigheter til å lyve til sine kunder om hvorvidt -personopplysninger spres ut over det bruksvilkårene sier. Det er -veldokumentert hvordan f.eks. USA tvinger selskaper vha. såkalte -National security letters til å utlevere personopplysninger samtidig -som de ikke får lov til å fortelle dette til kundene sine.

- -

Stå på, jeg er veldig glade for at dere har sett på saken. Vet -dere om aktivitetsbånd i salg i dag som ikke tvinger en til å utlevere -aktivitets- og helseopplysninger med leverandøren?

- -
- -

Jeg håper en konkurrent som respekterer kundenes privatliv klarer å -nå opp i markedet, slik at det finnes et reelt alternativ for oss som -har full tillit til at skyleverandører vil prioritere egen inntjening -og myndighetspålegg langt over kundenes rett til privatliv. Jeg har -ingen tiltro til at Datatilsynet vil kreve noe mer enn at vilkårene -endres slik at de forklarer eksplisitt i hvor stor grad bruk av -produktene utraderer privatsfæren til kundene. Det vil nok gjøre de -innklagede armbåndene "lovlige", men fortsatt tvinge kundene til å -dele sine personopplysninger med leverandøren.

+ +
13th December 2017
+

While looking at the scanned copies of the copyright renewal entries for movies published in the USA, an idea occurred to me. The number of renewals per year is so small that it should be fairly quick to transcribe them all and add references to the corresponding IMDB title IDs. This would give the (presumably) complete list of movies published 28 years earlier that did _not_ enter the public domain for the transcribed year. By fetching the list of USA movies published 28 years earlier and subtracting the movies with renewals, we should be left with the movies registered in IMDB that are now in the public domain. For the year 1955 (the one I have looked at the most), the total number of pages to transcribe is 21. For the years from 1950 to 1978, it should be in the range of 500-600 pages. It is just a few days of work, and spread among a small group of people it should be doable in a few weeks of spare time.

+ +

A typical copyright renewal entry looks like this (the first one listed for 1955):

+ +

+ ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer + Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); + 10Jun55; R151558. +

+ +

The movie title as well as registration and renewal dates are easy +enough to locate by a program (split on first comma and look for +DDmmmYY). The rest of the text is not required to find the movie in +IMDB, but is useful to confirm the correct movie is found. I am not +quite sure what the L and R numbers mean, but suspect they are +reference numbers into the archive of the US Copyright Office.
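To illustrate, here is a minimal Python sketch of the "split on first comma, look for DDmmmYY" idea, using the 1955 entry quoted above as test data. The field layout is my guess from that single entry, so treat it as a starting point rather than a finished parser:

import re

# Dates like 17Aug27 or 10Jun55 (DDmmmYY).
DATE_RE = re.compile(r'\b(\d{1,2}[A-Z][a-z]{2}\d{2})\b')

def parse_renewal(entry):
    # Assume the title is everything before the first comma, and that
    # the registration date comes before the renewal date.
    title, _, rest = entry.partition(',')
    dates = DATE_RE.findall(rest)
    return {
        'title': title.strip(),
        'registration_date': dates[0] if dates else None,
        'renewal_date': dates[1] if len(dates) > 1 else None,
    }

entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")
print(parse_renewal(entry))
# {'title': 'ADAM AND EVIL', 'registration_date': '17Aug27', 'renewal_date': '10Jun55'}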

+ +

Tracking down the equivalent IMDB title ID is probably going to be +a manual task, but given the year it is fairly easy to search for the +movie title using for example +http://www.imdb.com/find?q=adam+and+evil+1927&s=all. +Using this search, I find that the equivalent IMDB title ID for the +first renewal entry from 1955 is +http://www.imdb.com/title/tt0017588/.
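For what it is worth, the search URL above can be generated from the parsed title and registration date. A small sketch, assuming two-digit years all belong to the 1900s:

from urllib.parse import urlencode

def imdb_search_url(title, registration_date):
    # registration_date is on the DDmmmYY form, for example '17Aug27'.
    year = 1900 + int(registration_date[-2:])
    return 'http://www.imdb.com/find?' + urlencode(
        {'q': '%s %d' % (title.lower(), year), 's': 'all'})

print(imdb_search_url('ADAM AND EVIL', '17Aug27'))
# http://www.imdb.com/find?q=adam+and+evil+1927&s=all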

+ +

I suspect the best way to do this would be to make a specialised web service that makes it easy for contributors to transcribe entries and track down IMDB title IDs. In such a service, once an entry is transcribed, the title and year could be extracted from the text and a search in IMDB conducted, letting the user pick the equivalent IMDB title ID right away. By spreading the work among volunteers, it would also be possible to have at least two people transcribe each entry, to catch any typos introduced. But I will need help to make this happen, as I lack the spare time to do all of this on my own. If you would like to help, please get in touch. Perhaps you can draft a web service for crowd sourcing the task?
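No such service exists yet, so the following Flask skeleton is only meant to illustrate the workflow I have in mind (submit a transcribed entry, extract the title, let the client follow up with the chosen IMDB title ID). All names and routes are made up for the example:

from flask import Flask, jsonify, request

app = Flask(__name__)
transcriptions = []  # a real service would use a database and track contributors

@app.route('/transcribe', methods=['POST'])
def transcribe():
    # A volunteer submits the raw text of one renewal entry.
    entry = request.form['entry']
    title = entry.split(',', 1)[0].strip()
    transcriptions.append({'entry': entry, 'title': title, 'imdb_title_id': None})
    # The client would now search IMDB for the title and submit the
    # chosen IMDB title ID in a second request (not shown here).
    return jsonify({'title': title, 'queued': len(transcriptions)})

if __name__ == '__main__':
    app.run()

Double transcription for typo detection could be added by simply handing the same pages to two different volunteers and comparing the results.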

+ +

Note, Project Gutenberg already has some transcribed copies of the US Copyright Office renewal protocols, but I have not been able to find any film renewals there, so I suspect it only has renewals for written works. I have not been able to find any transcribed versions of movie renewals so far. Perhaps they exist somewhere?

+ +

I would love to figure out methods for finding all the public domain works in other countries too, but it is a lot harder. At least for Norway and Great Britain, such work involves tracking down the people involved in making the movie and figuring out when they died. It is hard enough to figure out who was part of making a movie, and I do not know how to automate such a procedure without a registry of every person involved in making movies and their year of death.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -90,189 +106,49 @@ dele sine personopplysninger med leverandøren.

- -
10th October 2016
-

In July -I -wrote how to get the Signal Chrome/Chromium app working without -the ability to receive SMS messages (aka without a cell phone). It is -time to share some experiences and provide an updated setup.

- -

The Signal app has worked fine for several months now, and I use it regularly to chat with my loved ones. I had a major snag at the end of my summer vacation, when the app completely forgot my setup, identity and keys. The reason behind this major mess was running out of disk space. To avoid that ever happening again I have started storing everything in userdata/ in git, to be able to roll back to an earlier version if the files are wiped by mistake. I had to use it once after introducing the git backup. When rolling back to an earlier version, one needs to use the 'reset session' option in Signal to get going, and notify the people you talk with about the problem. I assume there is some sequence number tracking in the protocol to detect rollback attacks. The git repository is rather big (674 MiB so far), but I have not tried to figure out if some of the content can be added to a .gitignore file due to lack of spare time.

- -

I've also hit the 90 days timeout blocking, and noticed that this -make it impossible to send messages using Signal. I could still -receive them, but had to patch the code with a new timestamp to send. -I believe the timeout is added by the developers to force people to -upgrade to the latest version of the app, even when there is no -protocol changes, to reduce the version skew among the user base and -thus try to keep the number of support requests down.

- -

Since my original recipe, the Signal source code changed slightly, -making the old patch fail to apply cleanly. Below is an updated -patch, including the shell wrapper I use to start Signal. The -original version required a new user to locate the JavaScript console -and call a function from there. I got help from a friend with more -JavaScript knowledge than me to modify the code to provide a GUI -button instead. This mean that to get started you just need to run -the wrapper and click the 'Register without mobile phone' to get going -now. I've also modified the timeout code to always set it to 90 days -in the future, to avoid having to patch the code regularly.

- -

So, the updated recipe for Debian Jessie:

- -
    - -
  1. First, install required packages to get the source code and the -browser you need. Signal only work with Chrome/Chromium, as far as I -know, so you need to install it. - -
    -apt install git tor chromium
    -git clone https://github.com/WhisperSystems/Signal-Desktop.git
    -
  2. - -
  3. Modify the source code using command listed in the the patch -block below.
  4. - -
  5. Start Signal using the run-signal-app wrapper (for example using -`pwd`/run-signal-app). - -
6. Click on 'Register without mobile phone', fill in a phone number you can receive calls on within the next minute, receive the verification code, enter it into the form field and press 'Register'. Note, the phone number you use will be your Signal username, ie the way others can find you on Signal.
  7. - -
  8. You can now use Signal to contact others. Note, new contacts do -not show up in the contact list until you restart Signal, and there is -no way to assign names to Contacts. There is also no way to create or -update chat groups. I suspect this is because the web app do not have -a associated contact database.
  9. - -
- -

I am still a bit uneasy about using Signal, because of the way its -main author moxie0 reject federation and accept dependencies to major -corporations like Google (part of the code is fetched from Google) and -Amazon (the central coordination point is owned by Amazon). See for -example -the -LibreSignal issue tracker for a thread documenting the authors -view on these issues. But the network effect is strong in this case, -and several of the people I want to communicate with already use -Signal. Perhaps we can all move to Ring -once it work on my -laptop? It already work on Windows and Android, and is included -in Debian and -Ubuntu, but not -working on Debian Stable.

- -

Anyway, this is the patch I apply to the Signal code to get it working. It switches to the production servers, disables the timeout, makes registration easier and adds the shell wrapper:

- -
-cd Signal-Desktop; cat <<EOF | patch -p1
-diff --git a/js/background.js b/js/background.js
-index 24b4c1d..579345f 100644
---- a/js/background.js
-+++ b/js/background.js
-@@ -33,9 +33,9 @@
-         });
-     });
- 
--    var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
-+    var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org';
-     var SERVER_PORTS = [80, 4433, 8443];
--    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
-+    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
-     var messageReceiver;
-     window.getSocketStatus = function() {
-         if (messageReceiver) {
-diff --git a/js/expire.js b/js/expire.js
-index 639aeae..beb91c3 100644
---- a/js/expire.js
-+++ b/js/expire.js
-@@ -1,6 +1,6 @@
- ;(function() {
-     'use strict';
--    var BUILD_EXPIRATION = 0;
-+    var BUILD_EXPIRATION = Date.now() + (90 * 24 * 60 * 60 * 1000);
- 
-     window.extension = window.extension || {};
- 
-diff --git a/js/views/install_view.js b/js/views/install_view.js
-index 7816f4f..1d6233b 100644
---- a/js/views/install_view.js
-+++ b/js/views/install_view.js
-@@ -38,7 +38,8 @@
-             return {
-                 'click .step1': this.selectStep.bind(this, 1),
-                 'click .step2': this.selectStep.bind(this, 2),
--                'click .step3': this.selectStep.bind(this, 3)
-+                'click .step3': this.selectStep.bind(this, 3),
-+                'click .callreg': function() { extension.install('standalone') },
-             };
-         },
-         clearQR: function() {
-diff --git a/options.html b/options.html
-index dc0f28e..8d709f6 100644
---- a/options.html
-+++ b/options.html
-@@ -14,7 +14,10 @@
-         <div class='nav'>
-           <h1>{{ installWelcome }}</h1>
-           <p>{{ installTagline }}</p>
--          <div> <a class='button step2'>{{ installGetStartedButton }}</a> </div>
-+          <div> <a class='button step2'>{{ installGetStartedButton }}</a>
-+	    <br> <a class="button callreg">Register without mobile phone</a>
-+
-+	  </div>
-           <span class='dot step1 selected'></span>
-           <span class='dot step2'></span>
-           <span class='dot step3'></span>
---- /dev/null   2016-10-07 09:55:13.730181472 +0200
-+++ b/run-signal-app   2016-10-10 08:54:09.434172391 +0200
-@@ -0,0 +1,12 @@
-+#!/bin/sh
-+set -e
-+cd $(dirname $0)
-+mkdir -p userdata
-+userdata="`pwd`/userdata"
-+if [ -d "$userdata" ] && [ ! -d "$userdata/.git" ] ; then
-+    (cd $userdata && git init)
-+fi
-+(cd $userdata && git add . && git commit -m "Current status." || true)
-+exec chromium \
-+  --proxy-server="socks://localhost:9050" \
-+  --user-data-dir=$userdata --load-and-launch-app=`pwd`
-EOF
-chmod a+rx run-signal-app
-
+ +
5th December 2017
+

Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian Copyright Law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: The film was released in November or December 1927 (sources disagree), and its copyright was presumably registered that year. At that time, rights holders of movies registered by the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such a renewal did not happen for this movie, and thus it entered the public domain in 1956. Yet others claim the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in Catalog of copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures available from the University of Pennsylvania, neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the wikipedia page with your findings.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address -15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

+15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -280,129 +156,124 @@ activities, please send Bitcoin donations to my address
- -
8th October 2016
-

NRK -lanserte -for noen uker siden en ny -varslerportal som bruker -SecureDrop til å ta imot tips der det er vesentlig at ingen -utenforstående får vite at NRK er tipset. Det er et langt steg -fremover for NRK, og når en leser bloggposten om hva de har tenkt på -og hvordan løsningen er satt opp virker det som om de har gjort en -grundig jobb der. Men det er ganske mye ekstra jobb å motta tips via -SecureDrop, så varslersiden skriver "Nyhetstips som ikke krever denne -typen ekstra vern vil vi gjerne ha på nrk.no/03030", og 03030-siden -foreslår i tillegg til et webskjema å bruke epost, SMS, telefon, -personlig oppmøte og brevpost. Denne artikkelen handler disse andre -metodene.

- -

Når en sender epost til en @nrk.no-adresse så vil eposten sendes ut -av landet til datamaskiner kontrollert av Microsoft. En kan sjekke -dette selv ved å slå opp epostleveringsadresse (MX) i DNS. For NRK er -dette i dag "nrk-no.mail.protection.outlook.com". NRK har som en ser -valgt å sette bort epostmottaket sitt til de som står bak outlook.com, -dvs. Microsoft. En kan sjekke hvor nettverkstrafikken tar veien -gjennom Internett til epostmottaket vha. programmet -traceroute, og finne ut hvem som eier en Internett-adresse -vha. whois-systemet. Når en gjør dette for epost-trafikk til @nrk.no -ser en at trafikken fra Norge mot nrk-no.mail.protection.outlook.com -går via Sverige mot enten Irland eller Tyskland (det varierer fra gang -til gang og kan endre seg over tid).

- -

Vi vet fra -introduksjonen av -FRA-loven at IP-trafikk som passerer grensen til Sverige avlyttes -av Försvarets radioanstalt (FRA). Vi vet videre takket være -Snowden-bekreftelsene at trafikk som passerer grensen til -Storbritannia avlyttes av Government Communications Headquarters -(GCHQ). I tillegg er er det nettopp lansert et forslag i Norge om at -forsvarets E-tjeneste skal få avlytte trafikk som krysser grensen til -Norge. Jeg er ikke kjent med dokumentasjon på at Irland og Tyskland -gjør det samme. Poenget er uansett at utenlandsk etterretning har -mulighet til å snappe opp trafikken når en sender epost til @nrk.no. -I tillegg er det selvsagt tilgjengelig for Microsoft som er underlagt USAs -jurisdiksjon og -samarbeider -med USAs etterretning på flere områder. De som tipser NRK om -nyheter via epost kan dermed gå ut fra at det blir kjent for mange -andre enn NRK at det er gjort.

- -

Bruk av SMS og telefon registreres av blant annet telefonselskapene -og er tilgjengelig i følge lov og forskrift for blant annet Politi, -NAV og Finanstilsynet, i tillegg til IT-folkene hos telefonselskapene -og deres overordnede. Hvis innringer eller mottaker bruker -smarttelefon vil slik kontakt også gjøres tilgjengelig for ulike -app-leverandører og de som lytter på trafikken mellom telefon og -app-leverandør, alt etter hva som er installert på telefonene som -brukes.

- -

Brevpost kan virke trygt, og jeg vet ikke hvor mye som registreres -og lagres av postens datastyrte postsorteringssentraler. Det vil ikke -overraske meg om det lagres hvor i landet hver konvolutt kommer fra og -hvor den er adressert, i hvert fall for en kortere periode. Jeg vet -heller ikke hvem slik informasjon gjøres tilgjengelig for. Det kan -være nok til å ringe inn potensielle kilder når det krysses med hvem -som kjente til aktuell informasjon og hvor de befant seg (tilgjengelig -f.eks. hvis de bærer mobiltelefon eller bor i nærheten).

- -

Personlig oppmøte hos en NRK-journalist er antagelig det tryggeste, -men en bør passe seg for å bruke NRK-kantina. Der bryter de nemlig -Sentralbanklovens -paragraf 14 og nekter folk å betale med kontanter. I stedet -krever de at en varsle sin bankkortutsteder om hvor en befinner seg -ved å bruke bankkort. Banktransaksjoner er tilgjengelig for -bankkortutsteder (det være seg VISA, Mastercard, Nets og/eller en -bank) i tillegg til politiet og i hvert fall tidligere med Se & Hør -(via utro tjenere, slik det ble avslørt etter utgivelsen av boken -«Livet, det forbannede» av Ken B. Rasmussen). Men hvor mange kjenner -en NRK-journalist personlig? Besøk på NRK på Marienlyst krever at en -registrerer sin ankost elektronisk i besøkssystemet. Jeg vet ikke hva -som skjer med det datasettet, men har grunn til å tro at det sendes ut -SMS til den en skal besøke med navnet som er oppgitt. Kanskje greit å -oppgi falskt navn.

- -

Når så tipset er kommet frem til NRK skal det behandles -redaksjonelt i NRK. Der vet jeg via ulike kilder at de fleste -journalistene bruker lokalt installert programvare, men noen bruker -Google Docs og andre skytjenester i strid med interne retningslinjer -når de skriver. Hvordan vet en hvem det gjelder? Ikke vet jeg, men -det kan være greit å spørre for å sjekke at journalisten har tenkt på -problemstillingen, før en gir et tips. Og hvis tipset omtales internt -på epost, er det jo grunn til å tro at også intern eposten vil deles -med Microsoft og utenlands etterretning, slik tidligere nevnt, men det -kan hende at det holdes internt i NRKs interne MS Exchange-løsning. -Men Microsoft ønsker å få alle Exchange-kunder over "i skyen" (eller -andre folks datamaskiner, som det jo innebærer), så jeg vet ikke hvor -lenge det i så fall vil vare.

- -

I tillegg vet en jo at -NRK -har valgt å gi nasjonal sikkerhetsmyndighet (NSM) tilgang til å se på -intern og ekstern Internett-trafikk hos NRK ved oppsett av såkalte -VDI-noder, på tross av -protester -fra NRKs journalistlag. Jeg vet ikke om den vil kunne snappe opp -dokumenter som lagres på interne filtjenere eller dokumenter som lages -i de interne webbaserte publiseringssystemene, men vet at hva noden -ser etter på nettet kontrolleres av NSM og oppdateres automatisk, slik -at det ikke gir så mye mening å sjekke hva noden ser etter i dag når -det kan endres automatisk i morgen.

- -

Personlig vet jeg ikke om jeg hadde turt tipse NRK hvis jeg satt på -noe som kunne være en trussel mot den bestående makten i Norge eller -verden. Til det virker det å være for mange åpninger for -utenforstående med andre prioriteter enn NRKs journalistiske fokus. -Og den største truslen for en varsler er jo om metainformasjon kommer -på avveie, dvs. informasjon om at en har vært i kontakt med en -journalist. Det kan være nok til at en kommer i myndighetenes -søkelys, og de færreste har nok operasjonell sikkerhet til at vil tåle -slik flombelysning på sitt privatliv.

+ +
28th November 2017
+

It would be easier to locate the movie you want to watch in the Internet Archive if the metadata about each movie were more complete and accurate. In the archiving community, a well known saying states that good metadata is a love letter to the future. The metadata in the Internet Archive could use a face lift for the future to love us back. Here is a proposal for a small improvement that would make the metadata more useful today. I've been unable to find any document describing the various standard fields available when uploading videos to the archive, so this proposal is based on my best guess and on searching through several of the existing movies.

+ +

I have a few use cases in mind. First of all, I would like to be able to count the number of distinct movies in the Internet Archive, without duplicates. I would further like to identify the IMDB title ID of the movies in the Internet Archive, to be able to look up an IMDB title ID and know if I can fetch the video from there and share it with my friends.

+ +

Second, I would like the Butter data provider for The Internet Archive (available from github) to list as many of the good movies as possible. The plugin currently does a search in the archive with the following parameters:

+ +

+collection:moviesandfilms
+AND NOT collection:movie_trailers
+AND -mediatype:collection
+AND format:"Archive BitTorrent"
+AND year
+

+ +

Most of the cool movies that fail to show up in Butter do so because the 'year' field is missing. The 'year' field is populated from the year part of the 'date' field, and should hold the year (or full date) the movie was released. Two such examples are Ben Hur from 1905 and Caminandes 2: Gran Dillama from 2013, both of which lack the year metadata field.
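To get an idea of how many movies are affected, the archive's advanced search API can be asked for items matching the Butter query except for the year requirement. The following sketch is based on my reading of the advancedsearch.php interface, and the '-year:[* TO *]' syntax for "year is missing" is an assumption that should be verified:

import json
import urllib.parse
import urllib.request

# The Butter query quoted above, minus 'AND year', restricted to
# items without a year field (syntax assumed, please verify).
query = ('collection:moviesandfilms AND NOT collection:movie_trailers '
         'AND -mediatype:collection AND format:"Archive BitTorrent" '
         'AND -year:[* TO *]')
params = urllib.parse.urlencode({
    'q': query,
    'fl[]': 'identifier',
    'rows': 50,
    'output': 'json',
})
url = 'https://archive.org/advancedsearch.php?' + params
with urllib.request.urlopen(url) as response:
    result = json.load(response)
print(result['response']['numFound'], 'movies seem to lack a year field')
for doc in result['response']['docs']:
    print('https://archive.org/details/' + doc['identifier'])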

So, my proposal is simply this: for every movie in The Internet Archive where an IMDB title ID exists, please fill in these metadata fields (note, they can be updated long after the video was uploaded, but as far as I can tell, only by the uploader):
+ +
mediatype
+
Should be 'movie' for movies.
+ +
collection
+
Should contain 'moviesandfilms'.
+ +
title
+
The title of the movie, without the publication year.
+ +
date
+
The date or year the movie was released. This makes the movie show up in Butter, makes it possible to know the age of the movie, and is useful for figuring out its copyright status.
+ +
director
+
The director of the movie. This makes it easier to check that the correct movie has been found in movie databases.
+ +
publisher
+
The production company making the movie. Also useful for +identifying the correct movie.
+ +
links
+ +
Add a link to the IMDB title page, for example like this: <a href="http://www.imdb.com/title/tt0028496/">Movie in IMDB</a>. This makes it easier to find duplicates and allows counting the number of unique movies in the Archive. Other external references, like TMDB, could be added in the same way.
+ +
+ +

I did consider proposing a custom field for the IMDB title ID (for example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but suspect it will be easier to simply place it in the 'links' free text field.
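For uploaders who want to follow the proposal, the fields can also be filled in from the command line instead of the web interface, using the internetarchive Python library (or its 'ia' command line tool). A sketch with made up item and movie values; note that it requires archive.org credentials set up with 'ia configure':

from internetarchive import modify_metadata

# Hypothetical identifier and example values, replace with your own.
response = modify_metadata('SomeUploadedMovie1927', metadata={
    'title': 'Adam and Evil',
    'date': '1927',
    'director': 'Robert Z. Leonard',
    'publisher': 'Metro-Goldwyn-Mayer',
    # The links field is free text, so the IMDB reference can go here.
    'links': '<a href="http://www.imdb.com/title/tt0017588/">Movie in IMDB</a>',
})
print(response.status_code)

The mediatype and collection fields are normally set when the item is uploaded, so I left them out of the example.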

+ +

I created a list of IMDB title IDs for several thousand movies in the Internet Archive, but I also got a list of several thousand movies without such an IMDB title ID (and quite a few duplicates). It would be great if this data set could be integrated into the Internet Archive metadata to be available for everyone in the future, but with the current policy of leaving metadata editing to the uploaders, it will take a while before this happens. If you have uploaded movies into the Internet Archive, you can help. Please consider following my proposal above for your movies, to ensure each movie is properly counted. :)

+ +

The list is mostly generated using Wikidata, which, based on Wikipedia articles, makes it possible to link IMDB entries to movies in the Internet Archive. But there are lots of movies without a Wikipedia article, and some movies where only a collection page exists (like the Caminandes example above, where there are three movies but only one Wikidata entry).

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -410,125 +281,76 @@ slik flombelysning på sitt privatliv.

- -
7th October 2016
-

The Isenkram -system provide a practical and easy way to figure out which -packages support the hardware in a given machine. The command line -tool isenkram-lookup and the tasksel options provide a -convenient way to list and install packages relevant for the current -hardware during system installation, both user space packages and -firmware packages. The GUI background daemon on the other hand provide -a pop-up proposing to install packages when a new dongle is inserted -while using the computer. For example, if you plug in a smart card -reader, the system will ask if you want to install pcscd if -that package isn't already installed, and if you plug in a USB video -camera the system will ask if you want to install cheese if -cheese is currently missing. This already work just fine.

- -

But Isenkram depend on a database mapping from hardware IDs to -package names. When I started no such database existed in Debian, so -I made my own data set and included it with the isenkram package and -made isenkram fetch the latest version of this database from git using -http. This way the isenkram users would get updated package proposals -as soon as I learned more about hardware related packages.

- -

The hardware is identified using modalias strings. The modalias -design is from the Linux kernel where most hardware descriptors are -made available as a strings that can be matched using filename style -globbing. It handle USB, PCI, DMI and a lot of other hardware related -identifiers.

- -

The downside to the Isenkram specific database is that there is no -information about relevant distribution / Debian version, making -isenkram propose obsolete packages too. But along came AppStream, a -cross distribution mechanism to store and collect metadata about -software packages. When I heard about the proposal, I contacted the -people involved and suggested to add a hardware matching rule using -modalias strings in the specification, to be able to use AppStream for -mapping hardware to packages. This idea was accepted and AppStream is -now a great way for a package to announce the hardware it support in a -distribution neutral way. I wrote -a -recipe on how to add such meta-information in a blog post last -December. If you have a hardware related package in Debian, please -announce the relevant hardware IDs using AppStream.

- -

In Debian, almost all packages that can talk to a LEGO Mindstorms RCX or NXT unit announce this support using AppStream. The effect is that when you insert such a LEGO robot controller into your Debian machine, Isenkram will propose to install the packages needed to get it working. The intention is that this should allow the local user to start programming the robot controller right away without having to guess which packages to use or which permissions to fix.

- -

But when I sat down with my son the other day to program our NXT -unit using his Debian Stretch computer, I discovered something -annoying. The local console user (ie my son) did not get access to -the USB device for programming the unit. This used to work, but no -longer in Jessie and Stretch. After some investigation and asking -around on #debian-devel, I discovered that this was because udev had -changed the mechanism used to grant access to local devices. The -ConsoleKit mechanism from /lib/udev/rules.d/70-udev-acl.rules -no longer applied, because LDAP users no longer was added to the -plugdev group during login. Michael Biebl told me that this method -was obsolete and the new method used ACLs instead. This was good -news, as the plugdev mechanism is a mess when using a remote user -directory like LDAP. Using ACLs would make sure a user lost device -access when she logged out, even if the user left behind a background -process which would retain the plugdev membership with the ConsoleKit -setup. Armed with this knowledge I moved on to fix the access problem -for the LEGO Mindstorms related packages.

- -

The new system uses a udev tag, 'uaccess'. It can either be -applied directly for a device, or is applied in -/lib/udev/rules.d/70-uaccess.rules for classes of devices. As the -LEGO Mindstorms udev rules did not have a class, I decided to add the -tag directly in the udev rules files included in the packages. Here -is one example. For the nqc C compiler for the RCX, the -/lib/udev/rules.d/60-nqc.rules file now look like this: +

+
18th November 2017
+

A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and have identified a few more. The code used to extract information from the various data sources is available in a git repository, currently hosted on github.

+ +

So far I have identified 3186 unique IMDB title IDs. To gain +better understanding of the structure of the data set, I created a +histogram of the year associated with each movie (typically release +year). It is interesting to notice where the peaks and dips in the +graph are located. I wonder why they are placed there. I suspect +World War II caused the dip around 1940, but what caused the peak +around 2010?

+ +

+ +

I've so far identified ten sources of IMDB title IDs for movies in the public domain or with a free license. These are the statistics reported when running 'make stats' in the git repository:

-

-SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0694", ATTR{idProduct}=="0001", \
-    SYMLINK+="rcx-%k", TAG+="uaccess"
-

+
+  249 entries (    6 unique) with and   288 without IMDB title ID in free-movies-archive-org-butter.json
+ 2301 entries (  540 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
+  830 entries (   29 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
+ 2109 entries (  377 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
+  291 entries (  122 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
+  144 entries (  135 unique) with and     0 without IMDB title ID in free-movies-manual.json
+  350 entries (    1 unique) with and   801 without IMDB title ID in free-movies-publicdomainmovies.json
+    4 entries (    0 unique) with and   124 without IMDB title ID in free-movies-publicdomainreview.json
+  698 entries (  119 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
+    8 entries (    8 unique) with and   196 without IMDB title ID in free-movies-vodo.json
+ 3186 unique IMDB title IDs in total
+
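The counting itself is straightforward. Here is a sketch of how the totals above could be recomputed, assuming each JSON file contains a list of entries where the IMDB title ID, when known, is stored under an 'imdb' key (the exact key name used in the repository may differ):

import glob
import json

unique_ids = set()
without_id = 0
for path in glob.glob('free-movies-*.json'):
    with open(path) as handle:
        entries = json.load(handle)
    for entry in entries:
        imdb_id = entry.get('imdb')  # assumed key name
        if imdb_id:
            unique_ids.add(imdb_id)
        else:
            without_id += 1
print(len(unique_ids), 'unique IMDB title IDs in total')
print('between', len(unique_ids), 'and', len(unique_ids) + without_id,
      'movies are legal to distribute')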
-

The key part is the 'TAG+="uaccess"' at the end. I suspect all -packages using plugdev in their /lib/udev/rules.d/ files should be -changed to use this tag (either directly or indirectly via -70-uaccess.rules). Perhaps a lintian check should be created -to detect this?

- -

I've been unable to find good documentation on the uaccess feature. -It is unclear to me if the uaccess tag is an internal implementation -detail like the udev-acl tag used by -/lib/udev/rules.d/70-udev-acl.rules. If it is, I guess the -indirect method is the preferred way. Michael -asked for more -documentation from the systemd project and I hope it will make -this clearer. For now I use the generic classes when they exist and -is already handled by 70-uaccess.rules, and add the tag -directly if no such class exist.

- -

To learn more about the isenkram system, please check out -my -blog posts tagged isenkram.

- -

To help out making life for LEGO constructors in Debian easier, -please join us on our IRC channel -#debian-lego and join -the Debian -LEGO team in the Alioth project we created yesterday. A mailing -list is not yet created, but we are working on it. :)

+

The entries without an IMDB title ID are candidates for increasing the data set, but might equally well be duplicates of entries already listed with an IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID altogether. I've seen examples of all these situations when peeking at the entries without an IMDB title ID. Based on these data sources, the lower bound for the number of movies listed in IMDB that are legal to distribute on the Internet is somewhere between 3186 (the unique IMDB title IDs found so far) and 4713 (that number plus the 1527 entries currently lacking an IMDB title ID).

It would be great for improving the accuracy of this measurement, +if the various sources added IMDB title ID to their metadata. I have +tried to reach the people behind the various sources to ask if they +are interested in doing this, without any replies so far. Perhaps you +can help me get in touch with the people behind VODO, Public Domain +Torrents, Public Domain Movies and Public Domain Review to try to +convince them to add more metadata to their movie entries?

+ +

Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes links to both IMDB and The Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as Wikidata is updated.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address -15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

+15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -536,47 +358,87 @@ activities, please send Bitcoin donations to my address
- -
9th September 2016
-

En av dagens nyheter er at Aftenpostens redaktør Espen Egil Hansen -bruker -forsiden -av papiravisen på et åpent brev til Facebooks sjef Mark Zuckerberg om -Facebooks fjerning av bilder, tekster og sider de ikke liker. Det -må være uvant for redaktøren i avisen Aftenposten å stå med lua i -handa og håpe på å bli hørt. Spesielt siden Aftenposten har vært med -på å gi Facebook makten de nå demonstrerer at de har. Ved å melde seg -inn i Facebook-samfunnet har de sagt ja til bruksvilkårene og inngått -en antagelig bindende avtale. Kanskje de skulle lest og vurdert -vilkårene litt nærmere før de sa ja, i stedet for å klage over at -reglende de har valgt å akseptere blir fulgt? Personlig synes jeg -vilkårene er uakseptable og det ville ikke falle meg inn å gå inn på -en avtale med slike vilkår. I tillegg til uakseptable vilkår er det -mange andre grunner til å unngå Facebook. Du kan finne en solid -gjennomgang av flere slike argumenter hos -Richard Stallmans side om -Facebook. - -

Jeg håper flere norske redaktører på samme vis må stå med lua i -hånden inntil de forstår at de selv er med på å føre samfunnet på -ville veier ved å omfavne Facebook slik de gjør når de omtaler og -løfter frem saker fra Facebook, og tar i bruk Facebook som -distribusjonskanal for sine nyheter. De bidrar til -overvåkningssamfunnet og raderer ut lesernes privatsfære når de lenker -til Facebook på sine sider, og låser seg selv inne i en omgivelse der -det er Facebook, og ikke redaktøren, som sitter med makta.

- -

Men det vil nok ta tid, i et Norge der de fleste nettredaktører -deler -sine leseres personopplysinger med utenlands etterretning.

- -

For øvrig burde varsleren Edward Snowden få politisk asyl i -Norge.

+ +
1st November 2017
+

If you care about how fault tolerant your storage is, you might find these articles and papers interesting. They have shaped how I think when designing a storage system.

+ + + +

Several of these research papers are based on data collected from hundreds of thousands or millions of disks, and their findings are eye opening. The short story is: do not implicitly trust RAID or redundant storage systems. Details matter. And unfortunately there are few options on Linux addressing all the identified issues. Both ZFS and Btrfs do a fairly good job, but have legal and practical issues of their own. I wonder how cluster file systems like Ceph do in this regard. After all, there is an old saying: you know you have a distributed system when the crash of a computer you have never heard of stops you from getting any work done. The same holds true when fault tolerance does not work.

+ +

Just remember, in the end it does not matter how redundant or how fault tolerant your storage is if you do not continuously monitor its status to detect and replace failed disks.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

- Tags: norsk, surveillance. + Tags: english, raid, sysadmin.
@@ -584,159 +446,49 @@ Norge.

- -
6th September 2016
-

I helga kom det et hårreisende forslag fra Lysne II-utvalget satt -ned av Forsvarsdepartementet. Lysne II-utvalget var bedt om å vurdere -ønskelista til Forsvarets etterretningstjeneste (e-tjenesten), og har -kommet med -forslag -om at e-tjenesten skal få lov til a avlytte all Internett-trafikk -som passerer Norges grenser. Få er klar over at dette innebærer at -e-tjenesten får tilgang til epost sendt til de fleste politiske -partiene på Stortinget. Regjeringspartiet Høyre (@hoyre.no), -støttepartiene Venstre (@venstre.no) og Kristelig Folkeparti (@krf.no) -samt Sosialistisk Ventreparti (@sv.no) og Miljøpartiet de grønne -(@mdg.no) har nemlig alle valgt å ta imot eposten sin via utenlandske -tjenester. Det betyr at hvis noen sender epost til noen med en slik -adresse vil innholdet i eposten, om dette forslaget blir vedtatt, gjøres -tilgjengelig for e-tjenesten. Venstre, Sosialistisk Ventreparti og -Miljøpartiet De Grønne har valgt å motta sin epost hos Google, -Kristelig Folkeparti har valgt å motta sin epost hos Microsoft, og -Høyre har valgt å motta sin epost hos Comendo med mottak i Danmark og -Irland. Kun Arbeiderpartiet og Fremskrittspartiet har valgt å motta -eposten sin i Norge, hos henholdsvis Intility AS og Telecomputing -AS.

- -

Konsekvensen er at epost inn og ut av de politiske organisasjonene, -til og fra partimedlemmer og partiets tillitsvalgte vil gjøres -tilgjengelig for e-tjenesten for analyse og sortering. Jeg mistenker -at kunnskapen som slik blir tilgjengelig vil være nyttig hvis en -ønsker å vite hvilke argumenter som treffer publikum når en ønsker å -påvirke Stortingets representanter.

Ved hjelp av MX-oppslag i DNS for epost-domene, tilhørende -whois-oppslag av IP-adressene og traceroute for å se hvorvidt -trafikken går via utlandet kan enhver få bekreftet at epost sendt til -de omtalte partiene vil gjøres tilgjengelig for forsvarets -etterretningstjeneste hvis forslaget blir vedtatt. En kan også bruke -den kjekke nett-tjenesten ipinfo.io -for å få en ide om hvor i verden en IP-adresse hører til.

- -

På den positive siden vil forslaget gjøre at enda flere blir -motivert til å ta grep for å bruke -Tor og krypterte -kommunikasjonsløsninger for å kommunisere med sine kjære, for å sikre -at privatsfæren vernes. Selv bruker jeg blant annet -FreedomBox og -Signal til slikt. Ingen av -dem er optimale, men de fungerer ganske bra allerede og øker kostnaden -for dem som ønsker å invadere mitt privatliv.

- -

For øvrig burde varsleren Edward Snowden få politisk asyl i -Norge.

- - + +
31st October 2017
+

I was surprised today to learn that a friend in academia did not know there are easily available web services for writing LaTeX documents as a team. I thought it was common knowledge, but to make sure at least my readers are aware of them, I would like to mention these useful services for writing LaTeX documents. Some of them even provide a WYSIWYG editor to ease writing even further.

+ +

There are two commercial services available, ShareLaTeX and Overleaf. They are very easy to use. Just start a new document, select which publisher to write for (ie which LaTeX style to use), and start writing. Note, these two have announced their intention to join forces, so soon there will be only one joint service. I've used both for different documents, and they work just fine. ShareLaTeX is free software, while Overleaf is not. According to an announcement from Overleaf, they plan to keep the ShareLaTeX code base maintained as free software.

But these two are not the only alternatives. Fidus Writer is another free software solution, with the source available on github. I have not used it myself. Several others can be found on the nice alternativeTo web service.

If you like Google Docs or Etherpad, but would like to write +documents in LaTeX, you should check out these services. You can even +host your own, if you want to. :)

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

- Tags: norsk, surveillance. + Tags: english.
@@ -744,33 +496,292 @@ traceroute to mx03.telecomputing.no (95.128.105.102), 30 hops max, 60 byte packe
- -
30th August 2016
-

In April we -started -to work on a Norwegian Bokmål edition of the "open access" book on -how to set up and administrate a Debian system. Today I am happy to -report that the first draft is now publicly available. You can find -it on get the Debian -Administrator's Handbook page (under Other languages). The first -eight chapters have a first draft translation, and we are working on -proofreading the content. If you want to help out, please start -contributing using -the -hosted weblate project page, and get in touch using -the -translators mailing list. Please also check out -the instructions for -contributors. A good way to contribute is to proofread the text -and update weblate if you find errors.

- -

Our goal is still to make the Norwegian book available on paper as well as -electronic form.

+ +
25th October 2017
+

Recently, I needed to automatically check the copyright status of a set of The Internet Movie Database (IMDB) entries, to figure out which of the movies they refer to can be freely distributed on the Internet. This proved to be harder than it sounds. IMDB certainly lists movies without any copyright protection, where the copyright protection has expired or where the movie is licensed using a permissive license like one from Creative Commons. These are mixed in with copyright protected movies, and there seems to be no way to separate these classes of movies using the information in IMDB.

+ +

First I tried to look up entries manually in IMDB, Wikipedia and The Internet Archive, to get a feel for how to do this. It is hard to know for sure using these sources, but it should be possible to be reasonably confident a movie is "out of copyright" with a few hours of work per movie. As I needed to check almost 20,000 entries, this approach was not sustainable. I simply cannot work around the clock for about 6 years to check this data set.

+ +

I asked the people behind The Internet Archive if they could introduce a new metadata field in their metadata XML for the IMDB ID, but was told that they leave it completely to the uploaders to update the metadata. Some of the metadata entries had IMDB links in the description, but I found no way to download all the metadata files in bulk to locate those, so I put that approach aside.

+ +

In the process I noticed several Wikipedia articles about movies had links to both IMDB and The Internet Archive, and it occurred to me that I could use the Wikipedia RDF data set to locate entries with both, to at least get a lower bound on the number of movies on The Internet Archive with an IMDB ID. This is useful based on the assumption that movies distributed by The Internet Archive can be legally distributed on the Internet. With some help from the RDF community (thank you DanC), I was able to come up with this query to pass to the SPARQL interface on Wikidata:

+SELECT ?work ?imdb ?ia ?when ?label
+WHERE
+{
+  ?work wdt:P31/wdt:P279* wd:Q11424.
+  ?work wdt:P345 ?imdb.
+  ?work wdt:P724 ?ia.
+  OPTIONAL {
+        ?work wdt:P577 ?when.
+        ?work rdfs:label ?label.
+        FILTER(LANG(?label) = "en").
+  }
+}
+

+ +

If I understand the query right, for every film entry anywhere in Wikipedia, it will return the IMDB ID and The Internet Archive ID, as well as when the movie was released and its English title, if either or both of the latter two are available. At the moment the result set contains 2338 entries. Of course, it depends on volunteers including both correct IMDB and Internet Archive IDs in the Wikipedia articles for the movie. It should be noted that the result will include duplicates if the movie has entries in several languages. There are some bogus entries, either because the Internet Archive ID contains a typo or because the movie is not available from The Internet Archive. I did not verify the IMDB IDs, as I am unsure how to do that automatically.

+ +

I wrote a small Python script to extract the data set from Wikidata and check if the XML metadata for the movie is available from The Internet Archive, and after around 1.5 hours it produced a list of 2097 free movies and their IMDB IDs. In total, 171 entries in Wikidata lack the referred Internet Archive entry. I assume the 70 "disappearing" entries (ie 2338 - 2097 - 171) are duplicates.
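For those who want to repeat the exercise, here is a simplified sketch of the same idea, using the SPARQLWrapper library against the Wikidata query service and the archive.org metadata endpoint to check whether the referred item exists. It drops the optional label and release date from the query above, and it makes one HTTP request per entry, so it is slow:

import json
import urllib.request

from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
SELECT ?work ?imdb ?ia WHERE {
  ?work wdt:P31/wdt:P279* wd:Q11424 .
  ?work wdt:P345 ?imdb .
  ?work wdt:P724 ?ia .
}
"""

def archive_item_exists(identifier):
    # The metadata endpoint returns an empty JSON object for unknown
    # identifiers, so this doubles as an existence check.
    url = 'https://archive.org/metadata/' + identifier
    with urllib.request.urlopen(url) as response:
        return bool(json.load(response))

sparql = SPARQLWrapper('https://query.wikidata.org/sparql',
                       agent='free-movies-sketch/0.1')
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
bindings = sparql.query().convert()['results']['bindings']

free_movies = {}
for row in bindings:
    if archive_item_exists(row['ia']['value']):
        free_movies[row['imdb']['value']] = row['ia']['value']
print(len(free_movies), 'movies with both an IMDB ID and a live archive.org item')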

+ +

This is not too bad, given that The Internet Archive reports containing 5331 feature films at the moment, but it also means that more than 3000 movies either lack a Wikipedia article or lack the pair of references in the article.

+ +

I was curious about the distribution by release year, and made a +little graph to show how the amount of free movies is spread over the +years:

+ +

+ +

I expect the relative distribution of the remaining 3000 movies to +be similar.

+ +

If you want to help, and want to ensure Wikipedia can be used to +cross reference The Internet Archive and The Internet Movie Database, +please make sure entries like this are listed under the "External +links" heading on the Wikipedia article for the movie:

+ +

+* {{Internet Archive film|id=FightingLady}}
+* {{IMDb title|id=0036823|title=The Fighting Lady}}
+

+ +

Please verify the links on the final page, to make sure you did not +introduce a typo.

+ +

Here is the complete list, if you want to correct the 171 +identified Wikipedia entries with broken links to The Internet +Archive: Q1140317, +Q458656, +Q458656, +Q470560, +Q743340, +Q822580, +Q480696, +Q128761, +Q1307059, +Q1335091, +Q1537166, +Q1438334, +Q1479751, +Q1497200, +Q1498122, +Q865973, +Q834269, +Q841781, +Q841781, +Q1548193, +Q499031, +Q1564769, +Q1585239, +Q1585569, +Q1624236, +Q4796595, +Q4853469, +Q4873046, +Q915016, +Q4660396, +Q4677708, +Q4738449, +Q4756096, +Q4766785, +Q880357, +Q882066, +Q882066, +Q204191, +Q204191, +Q1194170, +Q940014, +Q946863, +Q172837, +Q573077, +Q1219005, +Q1219599, +Q1643798, +Q1656352, +Q1659549, +Q1660007, +Q1698154, +Q1737980, +Q1877284, +Q1199354, +Q1199354, +Q1199451, +Q1211871, +Q1212179, +Q1238382, +Q4906454, +Q320219, +Q1148649, +Q645094, +Q5050350, +Q5166548, +Q2677926, +Q2698139, +Q2707305, +Q2740725, +Q2024780, +Q2117418, +Q2138984, +Q1127992, +Q1058087, +Q1070484, +Q1080080, +Q1090813, +Q1251918, +Q1254110, +Q1257070, +Q1257079, +Q1197410, +Q1198423, +Q706951, +Q723239, +Q2079261, +Q1171364, +Q617858, +Q5166611, +Q5166611, +Q324513, +Q374172, +Q7533269, +Q970386, +Q976849, +Q7458614, +Q5347416, +Q5460005, +Q5463392, +Q3038555, +Q5288458, +Q2346516, +Q5183645, +Q5185497, +Q5216127, +Q5223127, +Q5261159, +Q1300759, +Q5521241, +Q7733434, +Q7736264, +Q7737032, +Q7882671, +Q7719427, +Q7719444, +Q7722575, +Q2629763, +Q2640346, +Q2649671, +Q7703851, +Q7747041, +Q6544949, +Q6672759, +Q2445896, +Q12124891, +Q3127044, +Q2511262, +Q2517672, +Q2543165, +Q426628, +Q426628, +Q12126890, +Q13359969, +Q13359969, +Q2294295, +Q2294295, +Q2559509, +Q2559912, +Q7760469, +Q6703974, +Q4744, +Q7766962, +Q7768516, +Q7769205, +Q7769988, +Q2946945, +Q3212086, +Q3212086, +Q18218448, +Q18218448, +Q18218448, +Q6909175, +Q7405709, +Q7416149, +Q7239952, +Q7317332, +Q7783674, +Q7783704, +Q7857590, +Q3372526, +Q3372642, +Q3372816, +Q3372909, +Q7959649, +Q7977485, +Q7992684, +Q3817966, +Q3821852, +Q3420907, +Q3429733, +Q774474

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -778,75 +789,30 @@ electronic form.

- -
11th August 2016
-

This summer, I read a great article "coz: This Is the Profiler You're Looking For" in USENIX ;login: about how to profile multi-threaded programs. It presented a system for profiling software by running experiments in the running program, testing how run time performance is affected by "speeding up" parts of the code to various degrees compared to a normal run. It does this by slowing down parallel threads while the "sped up" code is running and measuring how this affects processing time. The processing time is measured using probes inserted into the code, either as progress counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It can also measure unmodified code by measuring the complete program runtime and running the program several times instead.

- -

The project and presentation was so inspiring that I would like to -get the system into Debian. I -created -a WNPP request for it and contacted upstream to try to make the -system ready for Debian by sending patches. The build process need to -be changed a bit to avoid running 'git clone' to get dependencies, and -to include the JavaScript web page used to visualize the collected -profiling information included in the source package. -But I expect that should work out fairly soon.

- -

The way the system works is fairly simple. To run a coz experiment on a binary with debug symbols available, start the program like this:

-coz run --- program-to-run
-

- -

This will create a text file profile.coz with the instrumentation -information. To show what part of the code affect the performance -most, use a web browser and either point it to -http://plasma-umass.github.io/coz/ -or use the copy from git (in the gh-pages branch). Check out this web -site to have a look at several example profiling runs and get an idea what the end result from the profile runs look like. To make the -profiling more useful you include <coz.h> and insert the -COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the -code, rebuild and run the profiler. This allow coz to do more -targeted experiments.

- -

A video published by ACM -presenting the -Coz profiler is available from Youtube. There is also a paper -from the 25th Symposium on Operating Systems Principles available -titled -Coz: -finding code that counts with causal profiling.

- -

The source code -for Coz is available from github. It will only build with clang -because it uses a -C++ -feature missing in GCC, but I've submitted -a patch to solve -it and hope it will be included in the upstream source soon.

- -

Please get in touch if you, like me, would like to see this piece -of software in Debian. I would very much like some help with the -packaging effort, as I lack the in depth knowledge on how to package -C++ libraries.

+ +
14th October 2017
+

I find it fascinating how many of the people being locked inside the proposed border wall between USA and Mexico support the idea. The proposal to keep Mexicans out reminds me of the propaganda twist from the East German government, which called the Berlin Wall the “Antifascist Bulwark” after erecting it, claiming that the wall was there to keep enemies from creeping into East Germany, while it was obvious to the people locked inside that it was erected to keep them from escaping.

+ +

Do the people in USA supporting this wall really believe it is a +one way wall, only keeping people on the outside from getting in, +while not keeping people in the inside from getting out?

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -854,58 +820,52 @@ C++ libraries.

- -
5th August 2016
-

As my regular readers probably remember, the last year I published -a French and Norwegian translation of the classic -Free Culture book by the -founder of the Creative Commons movement, Lawrence Lessig. A bit less -known is the fact that due to the way I created the translations, -using docbook and po4a, I also recreated the English original. And -because I already had created a new the PDF edition, I published it -too. The revenue from the books are sent to the Creative Commons -Corporation. In other words, I do not earn any money from this -project, I just earn the warm fuzzy feeling that the text is available -for a wider audience and more people can learn why the Creative -Commons is needed.

- -

Today, just for fun, I had a look at the sales number over at -Lulu.com, which take care of payment, printing and shipping. Much to -my surprise, the English edition is selling better than both the -French and Norwegian edition, despite the fact that it has been -available in English since it was first published. In total, 24 paper -books was sold for USD $19.99 between 2016-01-01 and 2016-07-31:

- - - - - - -
Title / languageQuantity
Culture Libre / French3
Fri kultur / Norwegian7
Free Culture / English14
- -

The books are available both from Lulu.com and from large book -stores like Amazon and Barnes&Noble. Most revenue, around $10 per -book, is sent to the Creative Commons project when the book is sold -directly by Lulu.com. The other channels give less revenue. The -summary from Lulu tell me 10 books was sold via the Amazon channel, 10 -via Ingram (what is this?) and 4 directly by Lulu. And Lulu.com tells -me that the revenue sent so far this year is USD $101.42. No idea -what kind of sales numbers to expect, so I do not know if that is a -good amount of sales for a 10 year old book or not. But it make me -happy that the buyers find the book, and I hope they enjoy reading it -as much as I did.

- -

The ebook edition is available for free from -Github.

- -

If you would like to translate and publish the book in your native -language, I would be happy to help make it happen. Please get in -touch.

+ +
9th October 2017
+

At my nearby maker space, Sonen, I heard the story that it was easier to generate gcode files for their 3D printers (Ultimaker 2+) on Windows and MacOS X than on Linux, because the software involved had to be manually compiled and set up on Linux while premade packages worked out of the box on Windows and MacOS X. I found this annoying, as the software involved, Cura, is free software and should be trivial to get up and running on Linux if someone took the time to package it for the relevant distributions. I even found a request from 2013 for adding it to Debian, which had seen some activity over the years but never resulted in the software showing up in Debian. So a few days ago I offered my help to try to improve the situation.

+ +

Now I am very happy to see that all the packages required for a working Cura in Debian have been uploaded and are waiting in the NEW queue for the ftpmasters to have a look. You can track the progress on the status page for the 3D printer team.

+ +

The uploaded packages are a bit behind upstream, and were uploaded now to get slots in the NEW queue while we work on updating the packages to the latest upstream version.

+ +

On a related note, two competitors for Cura, which I found harder +to use and was unable to configure correctly for Ultimaker 2+ in the +short time I spent on it, are already in Debian. If you are looking +for 3D printer "slicers" and want something already available in +Debian, check out +slic3r and +slic3r-prusa. +The latter is a fork of the former.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -913,40 +873,35 @@ touch.

- -
1st August 2016
-

For mange år siden leste jeg en klassisk tekst som gjorde såpass -inntrykk på meg at jeg husker den fortsatt, flere år senere, og bruker -argumentene fra den stadig vekk. Teksten var «The Relativity of -Wrong» som Isaac Asimov publiserte i Skeptical Inquirer i 1989. Den -gir litt perspektiv rundt formidlingen av vitenskapelige resultater. -Jeg har hatt lyst til å kunne dele den også med folk som ikke -behersker engelsk så godt, som barn og noen av mine eldre slektninger, -og har savnet å ha den tilgjengelig på norsk. For to uker siden tok -jeg meg sammen og kontaktet Asbjørn Dyrendal i foreningen Skepsis om -de var interessert i å publisere en norsk utgave på bloggen sin, og da -han var positiv tok jeg kontakt med Skeptical Inquirer og spurte om -det var greit for dem. I løpet av noen dager fikk vi tilbakemelding -fra Barry Karr hos The Skeptical Inquirer som hadde sjekket og fått OK -fra Robyn Asimov som representerte arvingene i Asmiov-familien og gikk -igang med oversettingen.

- -

Resultatet, «Relativt -feil», ble publisert på skepsis-bloggen for noen minutter siden. -Jeg anbefaler deg på det varmeste å lese denne teksten og dele den med -dine venner.

- -

For å håndtere oversettelsen og sikre at original og oversettelse -var i sync brukte vi git, po4a, GNU make og Transifex. Det hele -fungerte utmerket og gjorde det enkelt å dele tekstene og jobbe sammen -om finpuss på formuleringene. Hadde hosted.weblate.org latt meg -opprette nye prosjekter selv i stedet for å måtte kontakte -administratoren der, så hadde jeg brukt weblate i stedet.

+ +
4th October 2017
+
When I work on my various projects, I constantly need different kinds of screws. The latest project I am working on is building a case for an HDMI touch screen to be used with a Raspberry Pi. The case is assembled with screws and bolts, and I have been unsure where to get hold of the right screws. The nearby Clas Ohlson and Jernia stores rarely have what I need. But the other day I got a fantastic tip for those of us living in Oslo. Zachariassen Jernvare AS in Hegermannsgate 23A at Torshov has a fantastic selection, and is open between 09:00 and 17:00. They sell screws, nuts, bolts, washers etc. in loose quantities, and so far I have found everything I have been looking for. They also carry most other kinds of hardware, such as tools, lamps, wiring, etc. I hope they have enough customers to keep going for a long time, as this is a shop I will be visiting often. The shop is a real find to have in the neighbourhood for those of us who like to build things ourselves. :)

+ +

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

- Tags: norsk, skepsis. + Tags: norsk.
@@ -961,6 +916,33 @@ administratoren der, så hadde jeg brukt weblate i stedet.

Archive