Petter Reinholdtsen

How does it feel to be wiretapped, when you should be doing the wiretapping...
8th March 2017

So the new president in the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the confirmations from Snowden documented, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.

Next, the Federal Bureau of Investigation asks the Department of Justice to go public rejecting the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA, according to themselves, believes it has all the legal backing it needs to conduct mass surveillance on the entire world.

There is even the director of the FBI stating that he never saw an order requesting wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?

What I find most sad in this story is how Norwegian journalists present it. In a news report on the radio the other day from the Norwegian National Broadcasting Company (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.

Tags: english, surveillance.
Norwegian Bokmål translation of The Debian Administrator's Handbook complete, proofreading in progress
3rd March 2017

For almost a year now, we have been working on making a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless effort of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proofreading to ensure consistent language and use of correct computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proofreading must be completed and all the figures need to be translated. If you want to help out, get in touch.

A fresh PDF edition of the book in A4 format (the final book will have smaller pages), created every morning, is available for proofreading. If you find any errors, please visit Weblate and correct the error. The state of the translation, including figures, is a useful source for those providing Norwegian Bokmål screenshots and figures.

Tags: debian, debian-handbook, english.
Unlimited randomness with the ChaosKey?
1st March 2017

A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, to get it to work out of the box, you need Linux kernel version 4.1 or later. I tested on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small test one-liner. It first prints the current entropy level, drains /dev/random, and then prints the entropy level once per second for five seconds. Here is the situation without the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
300
0+1 oppføringer inn
0+1 oppføringer ut
28 byte kopiert, 0,000264565 s, 106 kB/s
4
8
12
17
21
%

The entropy level increases by 3-4 every second. In such a case, any application requiring random bits (like an HTTPS enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
1079
0+1 oppføringer inn
0+1 oppføringer ut
104 byte kopiert, 0,000487647 s, 213 kB/s
433
1028
1031
1035
1038
%

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)
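
To see the blocking behaviour described above for yourself, here is a minimal sketch (my own illustration, not part of the ChaosKey tools) that times a small blocking read from /dev/random, assuming a Linux machine with Python 3 installed. With a drained entropy pool and no hardware entropy source the read can stall for a long time, while with the ChaosKey inserted it should return almost immediately.

#!/usr/bin/env python3
# Minimal sketch: time a blocking read from /dev/random.
# Assumes Linux and Python 3; not part of the ChaosKey tools.
import os
import time

def timed_random_read(nbytes=64):
    start = time.time()
    fd = os.open('/dev/random', os.O_RDONLY)
    try:
        data = b''
        while len(data) < nbytes:
            # os.read() blocks here until the kernel has enough entropy
            data += os.read(fd, nbytes - len(data))
    finally:
        os.close(fd)
    return time.time() - start

if __name__ == '__main__':
    print('Read 64 bytes from /dev/random in %.2f seconds'
          % timed_random_read())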

Update: The dongle was presented at Debconf last year. You might find the talk recording illuminating. It explains exactly what the source of randomness is, if you are unable to spot it from the schematic drawing available from the ChaosKey web site linked at the start of this blog post.

Tags: debian, english.
Detect OOXML files with undefined behaviour?
21st February 2017

I just noticed that the new Norwegian proposal for archiving rules in the government lists ECMA-376 / ISO/IEC 29500 (aka OOXML) as a valid format to put in long term storage. Luckily such files will only be accepted based on pre-approval from the National Archive. Allowing OOXML files to be used for long term storage might seem like a good idea as long as we forget that there are plenty of ways for a "valid" OOXML document to have content with no defined interpretation in the standard, which leads to a question and an idea.

Is there any tool to detect if an OOXML document depends on such undefined behaviour? It would be useful for the National Archive (and anyone else interested in verifying that a document is well defined) to have such a tool available when considering whether to approve the use of OOXML. I'm aware of the officeotron OOXML validator, but do not know how complete it is, nor whether it reports use of undefined behaviour. Are there other similar tools available? Please send me an email if you know of any such tool.
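
I do not have such a tool myself, but to illustrate the kind of check I have in mind, here is a rough sketch (my own illustration, with an intentionally incomplete namespace whitelist) that unpacks an OOXML file, which is just a ZIP archive of XML parts, and lists any XML namespaces not on the whitelist. Unknown namespaces are only one of many possible sources of undefined behaviour, so a real tool would have to check a lot more than this.

#!/usr/bin/env python3
# Rough sketch: list XML namespaces in an OOXML file that are not on a
# (very incomplete, purely illustrative) whitelist.  This is not a
# validator and only spots unexpected namespaces, nothing more.
import sys
import zipfile
import xml.etree.ElementTree as ET

# Illustrative whitelist only; a real tool would need the full set of
# namespaces defined by ECMA-376 / ISO/IEC 29500.
KNOWN_PREFIXES = (
    'http://schemas.openxmlformats.org/',
    'http://purl.org/dc/',
    'http://www.w3.org/',
)

def unknown_namespaces(path):
    found = set()
    with zipfile.ZipFile(path) as z:
        for name in z.namelist():
            if not name.endswith('.xml'):
                continue
            try:
                root = ET.fromstring(z.read(name))
            except ET.ParseError:
                continue
            for elem in root.iter():
                if elem.tag.startswith('{'):
                    ns = elem.tag[1:].split('}')[0]
                    if not ns.startswith(KNOWN_PREFIXES):
                        found.add(ns)
    return found

if __name__ == '__main__':
    for ns in sorted(unknown_namespaces(sys.argv[1])):
        print(ns)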

Tags: english, nuug, standard.
Ruling ignored our objections to the seizure of popcorn-time.no (#domstolkontroll)
13th February 2017

A few days ago, we received the ruling from my day in court. The case in question is a challenge of the seizure of the DNS domain popcorn-time.no. The ruling simply did not mention most of our arguments, and seemed to take everything ØKOKRIM said at face value, ignoring our demonstration and explanations. But it is hard to tell for sure, as we still have not seen most of the documents in the case and thus were unprepared and unable to contradict several of the claims made in court by the opposition. We are considering an appeal, but it is partly a question of funding, as it is costing us quite a bit to pay for our lawyer. If you want to help, please donate to the NUUG defense fund.

The details of the case, as far as we know them, are available in Norwegian from the NUUG blog. This also includes the ruling itself.

Tags: english, nuug, offentlig innsyn, opphavsrett.
A day in court challenging seizure of popcorn-time.no for #domstolkontroll
3rd February 2017

On Wednesday, I spent the entire day in court in Follo Tingrett representing the member association NUUG, alongside the member association EFN and the DNS registrar IMC, challenging the seizure of the DNS name popcorn-time.no. It was interesting to sit in a court of law for the first time in my life. Our team can be seen in the picture above: attorney Ola Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil Eriksen and NUUG board member Petter Reinholdtsen.

The case at hand is that the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime (aka Økokrim) decided on their own to seize a DNS domain early last year, without following the official policy of the Norwegian DNS authority, which requires a court decision. The web site in question was a site covering Popcorn Time. And Popcorn Time is the name of a technology with both legal and illegal applications. Popcorn Time is a client combining search of a Bittorrent directory available on the Internet with downloading/distributing content via Bittorrent and playing the downloaded content on screen. It can be used illegally if it is used to distribute content against the will of the rights holder, but it can also be used legally to play a lot of content, for example the millions of movies available from the Internet Archive or the collection available from Vodo. We created a video demonstrating legal use of Popcorn Time and played it in court. It can of course be downloaded using Bittorrent.

I did not quite know what to expect from a day in court. The government held on to their version of the story and we held on to ours, and I hope the judge is able to make sense of it all. We will know in two weeks' time. Unfortunately I do not have high hopes, as the Government has the upper hand here, with more knowledge about the case, better training in handling criminal law and in general higher standing in the courts than a fairly unknown DNS registrar and member associations. It is expensive to be right, also in Norway. So far the case has cost more than NOK 70 000,-. To help fund the case, NUUG and EFN have asked for donations, and managed to collect around NOK 25 000,- so far. Given the presentation from the Government, I expect the government to appeal if the case goes our way. And if the case does not go our way, I hope we have enough funding to appeal.

From the other side came two people from Økokrim. On the benches, appearing to be part of the group from the government, were two people from the Simonsen Vogt Wiik lawyer office, and three others I am not quite sure who were. Økokrim had proposed to present two witnesses from The Motion Picture Association, but this was rejected because they did not speak Norwegian and it was a bit late to bring in a translator; perhaps the two from the MPA were present anyway. All seven appeared to know each other. Good to see the case is taken seriously.

If you, like me, believe the courts should be involved before a DNS domain is hijacked by the government, or you believe the Popcorn Time technology has a lot of useful and legal applications, I suggest you too donate to the NUUG defense fund. Both Bitcoin and bank transfer are available. If NUUG gets more than we need for the legal action (very unlikely), the rest will be spent promoting free software, open standards and unix-like operating systems in Norway, so no matter what happens the money will be put to good use.

If you want to learn more about the case, I recommend you check out the blog posts from NUUG covering the case. They cover the legal arguments on both sides.

Tags: english, nuug, offentlig innsyn, opphavsrett.
Nasjonalbiblioteket ends its unlawful use of Google Forms
12th January 2017

Today I received a really good piece of news. The background is that before Christmas, Nasjonalbiblioteket (the National Library of Norway) organised a seminar about its excellent 'verksregister' (register of works) initiative. The only way to sign up for the seminar was to send personal data to Google via Google Forms (Google Skjemaer). I found this a questionable practice, as it should be possible to attend seminars organised by the public sector without having to share one's interests, position and other personal data with Google. I therefore requested access, via Mimes brønn, to the agreements and assessments Nasjonalbiblioteket had made around this. The Personal Data Act (personopplysningsloven) sets clear limits on what must be in place before one can ask third parties, especially abroad, to process personal data on one's behalf, so thorough documentation ought to exist before anything like this can be legal. Two lawyers at Nasjonalbiblioteket initially thought this was perfectly fine, and that Google's standard agreement could be used as a data processing agreement. I found that strange, but did not have the capacity to follow up on the matter until two days ago.

The good news today, which came after I tipped off Nasjonalbiblioteket that Datatilsynet (the Norwegian Data Protection Authority) rejected Google's standard agreements as data processing agreements in 2011, is that Nasjonalbiblioteket has decided to end its use of Google Forms/Apps and to enter into a dialogue with DIFI to find better ways to handle registrations in line with the Personal Data Act. It is fantastic to see that asking what on earth the public sector is up to sometimes helps.

Tags: norsk, personvern, surveillance, web.
Is NAV violating its own privacy statement?
11th January 2017

I read with interest a news story at digi.no and NRK about how it is not just me: NAV also performs geolocation of IP addresses, analysing the IP addresses of those who submit benefit report cards (meldekort) to see whether the cards are submitted from foreign IP addresses. Police attorney Hans Lyder Haare in Drammen is quoted by NRK as saying that 'the two were, among other things, exposed by IP addresses. By seeing that the report card came from abroad.'

I think it is good that it becomes better known that IP addresses are tied to individuals, and that collected information is used to determine people's location, also by actors here in Norway. I see it as yet another argument for using Tor as much as possible to make IP geolocation harder, so that one can protect one's privacy and avoid sharing one's physical location with unauthorised parties.

But there is one thing about this news that worries me. I was tipped off (thanks, #nuug) about NAV's privacy statement, which under the heading «Personvern og statistikk» (privacy and statistics) reads:

«When you visit nav.no, you leave electronic traces behind. The traces are created because your browser automatically sends a number of pieces of information to NAV's server machine every time you request a page. These are, for example, details about which browser and version you use, and your Internet address (IP address). For each page displayed, the following information is stored:

  • which page you are viewing
  • date and time
  • which browser you are using
  • your IP address

None of this information will be used to identify individuals. NAV uses the information to generate aggregate statistics showing, among other things, which pages are the most popular. The statistics are a tool for improving our services.»

I cannot quite see how analysing visitors' IP addresses to find out who submits benefit report cards via the web from an IP address abroad can be done without contradicting the claim that «none of this information will be used to identify individuals». It thus seems to me that NAV is violating its own privacy statement, which Datatilsynet told me at the beginning of December is probably a violation of the Personal Data Act.

In addition, the privacy statement is rather misleading, since NAV's web pages not only supply NAV with personal data, but also ask the user's browser to contact five other web servers (script.hotjar.com, static.hotjar.com, vars.hotjar.com, www.google-analytics.com and www.googletagmanager.com), making personal data available to the companies Hotjar and Google, and to anyone able to listen in on the traffic along the way (such as FRA, GCHQ and NSA). Nor can I see how such spreading of personal data can be in line with the requirements of the Personal Data Act, or with NAV's own privacy statement.

Perhaps NAV should take a close look at its privacy statement? Or perhaps Datatilsynet should?

Tags: norsk, nuug, personvern, surveillance.
Where did that package go? — geolocated IP traceroute
9th January 2017

Did you ever wonder where the web traffic really flows to reach the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out if our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:

traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (129.240.202.1)  0.447 ms  0.486 ms  0.621 ms
 2  uio-gw8.uio.no (129.240.24.229)  0.467 ms  0.578 ms  0.675 ms
 3  oslo-gw1.uninett.no (128.39.65.17)  0.385 ms  0.373 ms  0.358 ms
 4  te3-1-2.br1.fn3.as2116.net (193.156.90.3)  1.174 ms  1.172 ms  1.153 ms
 5  he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48)  3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.857 ms
 6  ae1.ar8.oslosda310.as2116.net (195.0.242.39)  0.662 ms  0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23)  0.622 ms
 7  89.191.10.146 (89.191.10.146)  0.931 ms  0.917 ms  0.955 ms
 8  * * *
 9  * * *
[...]

This shows the DNS names and IP addresses of (at least some of the) network equipment involved in getting the data traffic from me to the www.stortinget.no server, and how long it took in milliseconds for a packet to reach the equipment and return to me. Three packets are sent, and sometimes the packets do not follow the same path. This is shown for hop 5, where three different IP addresses replied to the traceroute request.

There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packets), mtr (can do ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP and TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.
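
To illustrate the scapy option, here is a minimal sketch (assuming scapy is installed and the script is run as root, as sending raw packets requires it) tracing the route to www.stortinget.no and printing the address answering each hop. As far as I can tell, calling graph() on the result should also be able to draw the kind of AS ownership graph mentioned further down, given that graphviz is installed.

#!/usr/bin/env python3
# Minimal scapy traceroute sketch; needs root privileges to send raw packets.
from scapy.all import traceroute

# Probe up to 20 hops towards the example host.
res, unans = traceroute(["www.stortinget.no"], maxttl=20, verbose=0)

# Each answered probe is a (sent, received) packet pair; print the hop
# number (the TTL of the probe we sent) and the address that replied.
for snd, rcv in res:
    print(snd.ttl, rcv.src)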

This time around, I wanted to know the geographic location of the different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today will often ask the browser to fetch the parts (for example HTML, JSON, fonts, JavaScript, CSS, video) required to display the content from many servers. This will leak information about the visit to those controlling these servers and to anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).

Let's pick an example, the Norwegian parliament web site www.stortinget.no. It is read daily by all members of parliament and their staff, as well as political journalists, activists and many other citizens of Norway. A visit to the www.stortinget.no web site will ask your browser to contact 8 other servers: ajax.googleapis.com, insights.hotjar.com, script.hotjar.com, static.hotjar.com, stats.g.doubleclick.net, www.google-analytics.com, www.googletagmanager.com and www.netigate.se. I extracted this by asking PhantomJS to visit the Stortinget web page and tell me all the URLs PhantomJS downloaded to render the page (in HAR format, using their netsniff example; I am very grateful to Gorm for showing me how to do this). My goal is to visualize network traces to all IP addresses behind these DNS names, to show where visitors' personal information is spread when visiting the page.
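
For those who want to repeat the exercise, here is a small sketch listing the unique host names found in such a capture, assuming the HAR file from the PhantomJS netsniff example has already been saved (the file name stortinget.har is just my example):

#!/usr/bin/env python3
# Sketch: list the unique host names contacted according to a HAR capture.
# Assumes the HAR file was produced by the PhantomJS netsniff example.
import json
from urllib.parse import urlparse

with open('stortinget.har') as f:
    har = json.load(f)

hosts = set()
for entry in har['log']['entries']:
    hosts.add(urlparse(entry['request']['url']).hostname)

for host in sorted(hosts):
    print(host)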

map of combined traces for URLs used by www.stortinget.no using GeoIP

When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure would typically be placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.
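
The idea behind such a wrapper is simple. The following is a simplified sketch of it, not the actual kmltraceroute code, assuming the geoip2 Python module and a local copy of the GeoLite2-City database from MaxMind: look up the coordinates of each hop address and emit a KML placemark for it.

#!/usr/bin/env python3
# Simplified sketch of the GeoIP-to-KML idea, not the actual kmltraceroute
# code.  Assumes the geoip2 module and a local GeoLite2-City.mmdb database.
import geoip2.database
import geoip2.errors

def hops_to_kml(hop_ips, dbpath='GeoLite2-City.mmdb'):
    reader = geoip2.database.Reader(dbpath)
    placemarks = []
    for ip in hop_ips:
        try:
            rec = reader.city(ip)
        except geoip2.errors.AddressNotFoundError:
            continue
        lat, lon = rec.location.latitude, rec.location.longitude
        if lat is None or lon is None:
            continue
        placemarks.append(
            '<Placemark><name>%s</name>'
            '<Point><coordinates>%f,%f</coordinates></Point>'
            '</Placemark>' % (ip, lon, lat))
    reader.close()
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            + '\n'.join(placemarks) + '\n</Document></kml>')

if __name__ == '__main__':
    # Hop addresses taken from the traceroute example above.
    print(hops_to_kml(['129.240.202.1', '193.156.90.3']))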

scapy traceroute graph for URLs used by www.stortinget.no

I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format and gives a good indication of who controls the network equipment involved, but it does not include geolocation. The graph makes it possible to see that the information is made available to at least UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.

example geotraceroute view for www.stortinget.no

In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names, various location databases and finally latency times to rule out candidate locations seemed to do a very good job of guessing the correct geolocation. But it could only do one trace at a time, did not have a sensor in Norway and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he declined until he had time to clean it up), but he was interested in providing the geolocations in a machine readable format, and willing to set up a sensor in Norway. So since yesterday, it is possible to run traces from Norway in this service thanks to a sensor node set up by the NUUG association, and to get the trace in KML format for further processing.

map of combined traces for URLs used by www.stortinget.no using geotraceroute

Here we can see that a lot of traffic passes through Sweden on its way to Denmark, Germany, the Netherlands and Ireland. Plenty of places where the Snowden confirmations verified that the traffic is read by various actors who do not have your best interests as their top priority.

Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by www.stortinget.no, ask for the KML file from GeoTraceroute, and create a combined KML file with all the traces (unfortunately, only one of the IP addresses behind each DNS name is traced this time; to get them all, one would have to request traces using IP numbers instead of DNS names from GeoTraceroute). That might be the next step in this project.
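
For those who prefer a script over a text editor, here is a small sketch merging the Placemark elements from several KML files into one combined document, assuming plain KML 2.2 input files like the ones GeoTraceroute returned to me. Run it as, for example, 'python3 merge-kml.py trace-*.kml > combined.kml' (the script name is just my example).

#!/usr/bin/env python3
# Sketch: merge the Placemark elements from several KML files given on the
# command line into one combined KML document written to stdout.
import sys
import xml.etree.ElementTree as ET

KMLNS = 'http://www.opengis.net/kml/2.2'
ET.register_namespace('', KMLNS)

combined = ET.Element('{%s}kml' % KMLNS)
document = ET.SubElement(combined, '{%s}Document' % KMLNS)

for path in sys.argv[1:]:
    tree = ET.parse(path)
    for placemark in tree.iter('{%s}Placemark' % KMLNS):
        document.append(placemark)

ET.ElementTree(combined).write(sys.stdout.buffer, encoding='utf-8',
                               xml_declaration=True)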

Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses for example the Swedish border, we can be sure Swedish Signal Intelligence (FRA) is listening, as GCHQ does in Britain and the NSA in the USA and on cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, kart, nuug, personvern, stortinget, surveillance, web.
Introducing ical-archiver to split out old iCalendar entries
4th January 2017

Do you have a large iCalendar file with lots of old entries, and would like to archive them to save space and resources? At least those of us using KOrganizer know that turning an event set on and off becomes slower and slower the more entries are in the set. While working on migrating our calendars to a Radicale CalDAV server on our Freedombox server, my loved one wondered if I could find a way to split up the calendar file she had in KOrganizer, and I set out to write a tool. I spent a few days writing and polishing the system, and it is now ready for general consumption. The code for ical-archiver is publicly available from a git repository on github. The system is written in Python and depends on the vobject Python module.

To use it, locate the iCalendar file you want to operate on and give it as an argument to the ical-archiver script. This will generate a set of new files, one file per component type per year for all components expiring more than two years in the past. The vevent, vtodo and vjournal entries are handled by the script. The remaining entries are stored in a 'remaining' file.
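
The core of the approach is small. Here is a simplified sketch of the idea, not the actual ical-archiver code, using the vobject module to group VEVENT entries by the year of their start time and write one calendar file per year (it assumes every VEVENT has a DTSTART, and skips the two-year cutoff and the other component types for brevity):

#!/usr/bin/env python3
# Simplified sketch of the splitting idea, not the actual ical-archiver code.
# Groups VEVENT entries by the year of their DTSTART using the vobject module.
import sys
import vobject

with open(sys.argv[1]) as f:
    cal = vobject.readOne(f.read())

byyear = {}
for event in cal.contents.get('vevent', []):
    byyear.setdefault(event.dtstart.value.year, []).append(event)

for year, events in sorted(byyear.items()):
    newcal = vobject.iCalendar()
    for event in events:
        newcal.add(event)
    filename = '%s-subset-vevent-%d.ics' % (sys.argv[1], year)
    with open(filename, 'w') as out:
        out.write(newcal.serialize())
    print('Writing', filename)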

This is what a test run can look like:

% ical-archiver t/2004-2016.ics 
Found 3612 vevents
Found 6 vtodos
Found 2 vjournals
Writing t/2004-2016.ics-subset-vevent-2004.ics
Writing t/2004-2016.ics-subset-vevent-2005.ics
Writing t/2004-2016.ics-subset-vevent-2006.ics
Writing t/2004-2016.ics-subset-vevent-2007.ics
Writing t/2004-2016.ics-subset-vevent-2008.ics
Writing t/2004-2016.ics-subset-vevent-2009.ics
Writing t/2004-2016.ics-subset-vevent-2010.ics
Writing t/2004-2016.ics-subset-vevent-2011.ics
Writing t/2004-2016.ics-subset-vevent-2012.ics
Writing t/2004-2016.ics-subset-vevent-2013.ics
Writing t/2004-2016.ics-subset-vevent-2014.ics
Writing t/2004-2016.ics-subset-vjournal-2007.ics
Writing t/2004-2016.ics-subset-vjournal-2011.ics
Writing t/2004-2016.ics-subset-vtodo-2012.ics
Writing t/2004-2016.ics-remaining.ics
%

As you can see, the original file is untouched and new files are written with names derived from the original file. If you are happy with their content, the *-remaining.ics file can replace the original and the others can be archived or imported as historical calendar collections.

The script should probably be improved a bit. The error handling when discovering broken entries is not good, and I am not sure yet if it makes sense to split different entry types into separate files or not. The program is thus likely to change. If you find it interesting, please get in touch. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, standard.

RSS feed

Created by Chronicle v4.6