Petter Reinholdtsen

Unlimited randomness with the ChaosKey?
1st March 2017

A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, to get it to work out of the box, you need Linux kernel version 4.1 or later. I tested it on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small one-liner to test it. It first prints the current entropy level, drains /dev/random, and then prints the entropy level once per second for five seconds. Here is the situation without the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
300
0+1 oppføringer inn
0+1 oppføringer ut
28 byte kopiert, 0,000264565 s, 106 kB/s
4
8
12
17
21
%

The entropy level increases by 3-4 every second. In such a situation, any application requiring random bits (like an HTTPS-enabled web server) will block and wait for more entropy. And here is the situation with the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
1079
0+1 oppføringer inn
0+1 oppføringer ut
104 byte kopiert, 0,000487647 s, 213 kB/s
433
1028
1031
1035
1038
%

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)

Tags: debian, english.
Detect OOXML files with undefined behaviour?
21st February 2017

I just noticed that the new Norwegian proposal for archiving rules in the government lists ECMA-376 / ISO/IEC 29500 (aka OOXML) as a valid format to put in long term storage. Luckily such files will only be accepted based on pre-approval from the National Archive. Allowing OOXML files to be used for long term storage might seem like a good idea as long as we forget that there are plenty of ways for a "valid" OOXML document to have content with no defined interpretation in the standard, which leads to a question and an idea.

Is there any tool to detect if an OOXML document depends on such undefined behaviour? It would be useful for the National Archive (and anyone else interested in verifying that a document is well defined) to have such a tool available when considering whether to approve the use of OOXML. I'm aware of the officeotron OOXML validator, but do not know how complete it is, nor whether it will report use of undefined behaviour. Are there other similar tools available? Please send me an email if you know of any such tool.
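
Lacking such a tool, a starting point could be to inspect which XML parts and namespaces a document actually uses, since an OOXML file is just a ZIP container of XML parts. Here is a minimal sketch of that idea in Python (my own illustration, not an existing validator); a real checker would have to compare the output against what the standard actually defines:

import sys
import zipfile
import xml.etree.ElementTree as ET

def list_namespaces(path):
    """Print every XML part in an OOXML container and the namespaces it uses."""
    with zipfile.ZipFile(path) as container:
        for name in container.namelist():
            # Both .xml parts and .rels relationship files are XML
            if not name.endswith(('.xml', '.rels')):
                continue
            namespaces = set()
            with container.open(name) as part:
                # 'start-ns' events yield (prefix, uri) pairs as they appear
                for _, ns in ET.iterparse(part, events=['start-ns']):
                    namespaces.add(ns[1])
            print(name)
            for uri in sorted(namespaces):
                print('  ' + uri)

if __name__ == '__main__':
    list_namespaces(sys.argv[1])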

Tags: english, nuug, standard.
Ruling ignored our objections to the seizure of popcorn-time.no (#domstolkontroll)
13th February 2017

A few days ago, we received the ruling from my day in court. The case in question is a challenge of the seizure of the DNS domain popcorn-time.no. The ruling simply did not mention most of our arguments, and seemed to take everything ØKOKRIM said at face value, ignoring our demonstration and explanations. But it is hard to tell for sure, as we still have not seen most of the documents in the case and thus were unprepared and unable to contradict several of the claims made in court by the opposition. We are considering an appeal, but it is partly a question of funding, as it is costing us quite a bit to pay for our lawyer. If you want to help, please donate to the NUUG defense fund.

The details of the case, as far as we know them, are available in Norwegian from the NUUG blog. This also includes the ruling itself.

Tags: english, nuug, offentlig innsyn, opphavsrett.
A day in court challenging seizure of popcorn-time.no for #domstolkontroll
3rd February 2017

On Wednesday, I spent the entire day in court in Follo Tingrett representing the member association NUUG, alongside the member association EFN and the DNS registrar IMC, challenging the seizure of the DNS name popcorn-time.no. It was interesting to sit in a court of law for the first time in my life. Our team can be seen in the picture above: attorney Ola Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil Eriksen and NUUG board member Petter Reinholdtsen.

The case at hand is that the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime (aka Økokrim) decided on its own to seize a DNS domain early last year, without following the official policy of the Norwegian DNS authority, which requires a court decision. The web site in question was a site covering Popcorn Time. And Popcorn Time is the name of a technology with both legal and illegal applications. Popcorn Time is a client combining searching a Bittorrent directory available on the Internet with downloading/distributing content via Bittorrent and playing the downloaded content on screen. It can be used illegally if it is used to distribute content against the will of the rights holder, but it can also be used legally to play a lot of content, for example the millions of movies available from the Internet Archive or the collection available from Vodo. We created a video demonstrating legal use of Popcorn Time and played it in court. It can of course be downloaded using Bittorrent.

I did not quite know what to expect from a day in court. The government held on to their version of the story and we held on to ours, and I hope the judge is able to make sense of it all. We will know in two weeks' time. Unfortunately I do not have high hopes, as the government has the upper hand here, with more knowledge about the case, better training in handling criminal law and in general higher standing in the courts than a fairly unknown DNS registrar and member associations. It is expensive to be right, also in Norway. So far the case has cost more than NOK 70 000,-. To help fund the case, NUUG and EFN have asked for donations, and managed to collect around NOK 25 000,- so far. Given the presentation from the government, I expect the government to appeal if the case goes our way. And if the case does not go our way, I hope we have enough funding to appeal.

From the other side came two people from Økokrim. On the benches, appearing to be part of the group from the government, were two people from the Simonsen Vogt Wiik law office, and three others I am not quite sure who were. Økokrim had proposed to present two witnesses from The Motion Picture Association, but this was rejected because they did not speak Norwegian and it was a bit late to bring in a translator, but perhaps the two from the MPA were present anyway. All seven appeared to know each other. Good to see the case is taken seriously.

If you, like me, believe the courts should be involved before a DNS domain is hijacked by the government, or you believe the Popcorn Time technology has a lot of useful and legal applications, I suggest you too donate to the NUUG defense fund. Both Bitcoin and bank transfer are available. If NUUG gets more than we need for the legal action (very unlikely), the rest will be spent promoting free software, open standards and unix-like operating systems in Norway, so no matter what happens the money will be put to good use.

If you want to learn more about the case, I recommend you check out the blog posts from NUUG covering the case. They cover the legal arguments on both sides.

Tags: english, nuug, offentlig innsyn, opphavsrett.
Nasjonalbiblioteket ends its unlawful use of Google Forms
12th January 2017

Today I received some really good news. The background is that before Christmas, Nasjonalbiblioteket (the National Library of Norway) arranged a seminar about its excellent «verksregister» (register of works) initiative. The only way to sign up for the seminar was to send personal information to Google via Google Forms. I found this a dubious practice, as it should be possible to attend seminars arranged by the public sector without having to share one's interests, position and other personal information with Google. I therefore requested access, via Mimes brønn, to the agreements and assessments Nasjonalbiblioteket had made around this. The Personal Data Act sets clear limits on what must be in place before one can ask third parties, especially abroad, to process personal data on one's behalf, so thorough documentation should exist before anything like this can be lawful. Two lawyers at Nasjonalbiblioteket initially believed this was perfectly fine, and that Google's standard terms could be used as a data processing agreement. I found that strange, but did not have the capacity to follow up on the case until two days ago.

The good news today, which came after I tipped Nasjonalbiblioteket off that Datatilsynet (the Norwegian Data Protection Authority) rejected Google's standard terms as data processing agreements back in 2011, is that Nasjonalbiblioteket has decided to end its use of Google Forms/Apps and enter into a dialogue with DIFI to find better ways of handling seminar registrations in line with the Personal Data Act. It is wonderful to see that asking what on earth the public sector is up to sometimes helps.

Tags: norsk, personvern, surveillance, web.
Is NAV violating its own privacy policy?
11th January 2017

I read with interest a news story at digi.no and NRK about how it is not just me, but also NAV (the Norwegian Labour and Welfare Administration) that geolocates IP addresses, analysing the IP addresses of those submitting benefit report cards to see whether the cards are submitted from foreign IP addresses. Police prosecutor Hans Lyder Haare in Drammen is quoted by NRK as saying that «The two were, among other things, exposed by IP addresses. One could see that the report card came from abroad.»

I think it is good that it becomes better known that IP addresses are linked to individuals and that collected information is used to determine people's location, also by actors here in Norway. I see it as yet another argument for using Tor as much as possible to make IP geolocation harder, so that one can protect one's privacy and avoid sharing one's physical location with unauthorised parties.

But there is one thing that worries me about this news. I was tipped off (thanks, #nuug) about NAV's privacy policy, which under the heading «Personvern og statistikk» (privacy and statistics) reads:

«When you visit nav.no, you leave electronic traces behind. The traces are created because your browser automatically sends a number of pieces of information to NAV's server every time you request a page. This includes, for example, which browser and version you use, and your Internet address (IP address). For each page viewed, the following information is stored:

  • which page you are looking at
  • date and time
  • which browser you use
  • your IP address

None of this information will be used to identify individuals. NAV uses the information to generate aggregate statistics showing, among other things, which pages are most popular. The statistics are a tool for improving our services.»

I fail to see how analysing visitors' IP addresses to find out who submits benefit report cards via the web from an IP address abroad can be done without contradicting the claim that «none of this information will be used to identify individuals». It thus seems to me that NAV is violating its own privacy policy, which Datatilsynet told me in early December is probably a breach of the Personal Data Act.

In addition, the privacy policy is quite misleading, given that NAV's web pages not only provide NAV with personal information, but also ask the user's browser to contact five other web servers (script.hotjar.com, static.hotjar.com, vars.hotjar.com, www.google-analytics.com and www.googletagmanager.com), making personal information available to the companies Hotjar and Google, and to everyone able to listen to the traffic along the way (such as FRA, GCHQ and NSA). Nor can I see how such spreading of personal information can be in line with the requirements of the Personal Data Act, or with NAV's own privacy policy.

Perhaps NAV should take a close look at its privacy policy? Or perhaps Datatilsynet should?

Tags: norsk, nuug, personvern, surveillance.
Where did that package go? — geolocated IP traceroute
9th January 2017

Did you ever wonder where the web traffic really flows to reach the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this by using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out if our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:

traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (129.240.202.1)  0.447 ms  0.486 ms  0.621 ms
 2  uio-gw8.uio.no (129.240.24.229)  0.467 ms  0.578 ms  0.675 ms
 3  oslo-gw1.uninett.no (128.39.65.17)  0.385 ms  0.373 ms  0.358 ms
 4  te3-1-2.br1.fn3.as2116.net (193.156.90.3)  1.174 ms  1.172 ms  1.153 ms
 5  he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48)  3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.857 ms
 6  ae1.ar8.oslosda310.as2116.net (195.0.242.39)  0.662 ms  0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23)  0.622 ms
 7  89.191.10.146 (89.191.10.146)  0.931 ms  0.917 ms  0.955 ms
 8  * * *
 9  * * *
[...]

This shows the DNS names and IP addresses of (at least some of the) network equipment involved in getting the data traffic from me to the www.stortinget.no server, and how long it took in milliseconds for a packet to reach the equipment and return to me. Three packets are sent, and sometimes the packets do not follow the same path. This is shown for hop 5, where three different IP addresses replied to the traceroute request.

There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packets), mtr (can do ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP and TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.
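
As a small illustration of the scapy approach, here is a minimal sketch of a TCP traceroute to a single host (assuming scapy is installed and the script is run as root, since it sends raw packets; the port and TTL limit are arbitrary choices):

from scapy.all import traceroute

# TCP traceroute to port 80; sending raw packets requires root privileges
answered, unanswered = traceroute(["www.stortinget.no"], dport=80, maxttl=30)

# Print each answering hop as (TTL of the probe, IP address that replied)
for sent, received in answered:
    print(sent.ttl, received.src)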

This time around, I wanted to know the geographic location of different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today often asks the browser to fetch the parts (for example HTML, JSON, fonts, JavaScript, CSS and video) required to display the content from many servers. This will leak information about the visit to those controlling these servers and to anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).

Let's pick an example, the Norwegian parliament web site www.stortinget.no. It is read daily by all members of parliament and their staff, as well as political journalists, activists and many other citizens of Norway. A visit to the www.stortinget.no web site will ask your browser to contact 8 other servers: ajax.googleapis.com, insights.hotjar.com, script.hotjar.com, static.hotjar.com, stats.g.doubleclick.net, www.google-analytics.com, www.googletagmanager.com and www.netigate.se. I extracted this by asking PhantomJS to visit the Stortinget web page and tell me all the URLs PhantomJS downloaded to render the page (in HAR format, using their netsniff example; I am very grateful to Gorm for showing me how to do this). My goal is to visualize network traces to all IP addresses behind these DNS names, to show where visitors' personal information is spread when visiting the page.
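
The HAR file produced by netsniff is plain JSON, so extracting the host names contacted is straightforward. Here is a minimal sketch, assuming the output was saved as stortinget.har (the file name is just an example):

import json
from urllib.parse import urlparse

# Load the HAR file produced by the PhantomJS netsniff example
with open('stortinget.har') as f:
    har = json.load(f)

# Collect the host name of every URL the browser was asked to fetch
hosts = set()
for entry in har['log']['entries']:
    hosts.add(urlparse(entry['request']['url']).hostname)

for host in sorted(hosts):
    print(host)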

map of combined traces for URLs used by www.stortinget.no using GeoIP

When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure would typically be placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.
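
The idea behind the wrapper is simple: for every IP address seen in a trace, look up an estimated latitude and longitude and emit a KML placemark. Here is a minimal sketch of that lookup step, assuming the geoip2 Python module and a downloaded GeoLite2-City.mmdb database; it is an illustration only, not the actual kmltraceroute code:

import geoip2.database

# Open the MaxMind city database (the path is just an example)
reader = geoip2.database.Reader('GeoLite2-City.mmdb')

def placemark(ip):
    """Return a KML Placemark with the estimated location of an IP address."""
    record = reader.city(ip)
    lat = record.location.latitude
    lon = record.location.longitude
    # KML wants coordinates as longitude,latitude
    return ('<Placemark><name>%s</name>'
            '<Point><coordinates>%s,%s</coordinates></Point>'
            '</Placemark>' % (ip, lon, lat))

print(placemark('129.240.202.1'))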

scapy traceroute graph for URLs used by www.stortinget.no

I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format, and gives a good indication of who controls the network equipment involved, but it does not include geolocation. This graph makes it possible to see that the information is made available at least to UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.

example geotraceroute view for www.stortinget.no

In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names, various location databases and finally latency times to rule out candidate locations seemed to do a very good job of guessing the correct geolocation. But it could only do one trace at a time, did not have a sensor in Norway and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he declined until he had time to clean it up), but he was interested in providing the geolocations in a machine readable format, and willing to set up a sensor in Norway. So since yesterday, it is possible to run traces from Norway in this service thanks to a sensor node set up by the NUUG association, and to get the trace in KML format for further processing.

map of combined traces for URLs used by www.stortinget.no using geotraceroute

Here we can see that a lot of traffic passes through Sweden on its way to Denmark, Germany, Holland and Ireland. Plenty of places where the Snowden revelations confirmed that the traffic is read by various actors without your best interest as their top priority.

Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by www.stortinget.no, ask for the KML file from GeoTraceroute, and create a combined KML file with all the traces (unfortunately only one of the IP addresses behind each DNS name is traced this time; to get them all, one would have to request traces from GeoTraceroute using IP numbers instead of DNS names). That might be the next step in this project.
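
For those who prefer a script over a text editor, here is a minimal sketch of such merging, collecting the Placemark elements from a set of per-host KML files into one combined document (the file names are just examples):

import glob
import xml.etree.ElementTree as ET

KMLNS = 'http://www.opengis.net/kml/2.2'
ET.register_namespace('', KMLNS)

# New empty KML document to hold the combined traces
root = ET.Element('{%s}kml' % KMLNS)
document = ET.SubElement(root, '{%s}Document' % KMLNS)

# Copy every Placemark from each per-host trace into the combined document
for filename in glob.glob('traces/*.kml'):
    tree = ET.parse(filename)
    for placemark in tree.iter('{%s}Placemark' % KMLNS):
        document.append(placemark)

ET.ElementTree(root).write('combined.kml', xml_declaration=True, encoding='utf-8')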

Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses for example the Swedish border, we can be sure the Swedish signals intelligence service (FRA) is listening, as GCHQ does in Britain and the NSA in the USA and on cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: The KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, kart, nuug, personvern, stortinget, surveillance, web.
Introducing ical-archiver to split out old iCalendar entries
4th January 2017

Do you have a large iCalendar file with lots of old entries, and would like to archive them to save space and resources? At least those of us using KOrganizer know that turning an event set on and off becomes slower and slower the more entries are in the set. While working on migrating our calendars to a Radicale CalDAV server on our Freedombox server, my loved one wondered if I could find a way to split up the calendar file she had in KOrganizer, and I set out to write a tool. I spent a few days writing and polishing the system, and it is now ready for general consumption. The code for ical-archiver is publicly available from a git repository on github. The system is written in Python and depends on the vobject Python module.

To use it, locate the iCalendar file you want to operate on and give it as an argument to the ical-archiver script. This will generate a set of new files, one file per component type per year for all components expiring more than two years in the past. The vevent, vtodo and vjournal entries are handled by the script. The remaining entries are stored in a 'remaining' file.
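
The core of that job is to parse the calendar and group the components by type and year. Here is a minimal sketch of that grouping step using the vobject module (an illustration only, not the full ical-archiver logic with its two year expiry rule):

from collections import defaultdict
import vobject

# Parse the calendar file (the file name is just an example)
with open('t/2004-2016.ics') as f:
    calendar = vobject.readOne(f.read())

# Group vevent, vtodo and vjournal components by type and start year
groups = defaultdict(list)
for component in calendar.components():
    kind = component.name.lower()
    if kind not in ('vevent', 'vtodo', 'vjournal'):
        continue
    if not hasattr(component, 'dtstart'):
        continue
    year = component.dtstart.value.year
    groups[(kind, year)].append(component)

for (kind, year), entries in sorted(groups.items()):
    print('%s %d: %d entries' % (kind, year, len(entries)))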

This is what a test run can look like:

% ical-archiver t/2004-2016.ics 
Found 3612 vevents
Found 6 vtodos
Found 2 vjournals
Writing t/2004-2016.ics-subset-vevent-2004.ics
Writing t/2004-2016.ics-subset-vevent-2005.ics
Writing t/2004-2016.ics-subset-vevent-2006.ics
Writing t/2004-2016.ics-subset-vevent-2007.ics
Writing t/2004-2016.ics-subset-vevent-2008.ics
Writing t/2004-2016.ics-subset-vevent-2009.ics
Writing t/2004-2016.ics-subset-vevent-2010.ics
Writing t/2004-2016.ics-subset-vevent-2011.ics
Writing t/2004-2016.ics-subset-vevent-2012.ics
Writing t/2004-2016.ics-subset-vevent-2013.ics
Writing t/2004-2016.ics-subset-vevent-2014.ics
Writing t/2004-2016.ics-subset-vjournal-2007.ics
Writing t/2004-2016.ics-subset-vjournal-2011.ics
Writing t/2004-2016.ics-subset-vtodo-2012.ics
Writing t/2004-2016.ics-remaining.ics
%

As you can see, the original file is untouched and new files are written with names derived from the original file name. If you are happy with their content, the *-remaining.ics file can replace the original, and the others can be archived or imported as historical calendar collections.

The script should probably be improved a bit. The error handling when discovering broken entries is not good, and I am not sure yet if it makes sense to split different entry types into separate files or not. The program is thus likely to change. If you find it interesting, please get in touch. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, standard.
Appstream just learned how to map hardware to packages too!
23rd December 2016

I received a very nice Christmas present today. As my regular readers probably know, I have been working on the Isenkram system for many years. The goal of the Isenkram system is to make it easier for users to figure out what to install to get a given piece of hardware to work in Debian, and a key part of this system is a way to map hardware to packages. Isenkram has its own mapping database, and also uses data provided by each package using the AppStream metadata format. And today, AppStream in Debian learned to look up hardware the same way Isenkram is doing it, i.e. using fnmatch():

% appstreamcli what-provides modalias \
  usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
Identifier: pymissile [generic]
Name: pymissile
Summary: Control original Striker USB Missile Launcher
Package: pymissile
% appstreamcli what-provides modalias usb:v0694p0002d0000
Identifier: libnxt [generic]
Name: libnxt
Summary: utility library for talking to the LEGO Mindstorms NXT brick
Package: libnxt
---
Identifier: t2n [generic]
Name: t2n
Summary: Simple command-line tool for Lego NXT
Package: t2n
---
Identifier: python-nxt [generic]
Name: python-nxt
Summary: Python driver/interface/wrapper for the Lego Mindstorms NXT robot
Package: python-nxt
---
Identifier: nbc [generic]
Name: nbc
Summary: C compiler for LEGO Mindstorms NXT bricks
Package: nbc
%

A similar query can be done using the combined AppStream and Isenkram databases using the isenkram-lookup tool:

% isenkram-lookup usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
pymissile
% isenkram-lookup usb:v0694p0002d0000
libnxt
nbc
python-nxt
t2n
%

You can find modalias values relevant for your machine using cat $(find /sys/devices/ -name modalias).
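
The matching itself is plain shell-style globbing on these strings. Here is a minimal sketch of how a modalias value from /sys can be matched against a pattern announced in package metadata (the pattern below is just an example):

import fnmatch

# A modalias value as found under /sys/devices/ for a USB device
modalias = 'usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00'

# A pattern a package could announce in its metadata (example only)
pattern = 'usb:v1130p0202d*'

if fnmatch.fnmatch(modalias, pattern):
    print('modalias matches, propose the package')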

If you want to make this system a success and help Debian users make the most of the hardware they have, please help add AppStream metadata for your package following the guidelines documented in the wiki. So far only 11 packages provide such information, among the several hundred hardware specific packages in Debian. The Isenkram database on the other hand contains 101 packages, mostly related to USB dongles. Most of the packages with hardware mapping in AppStream are LEGO Mindstorms related, because I have, as part of my involvement in the Debian LEGO team, given priority to making sure LEGO users get proposed the complete set of packages in Debian for that particular hardware. The team also got a nice Christmas present today. The nxt-firmware package made it into Debian. With this package in place, it is now possible to use the LEGO Mindstorms NXT unit with only free software, as the nxt-firmware package contains the source and firmware binaries for the NXT brick.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, isenkram.
Isenkram updated with a lot more hardware-package mappings
20th December 2016

The Isenkram system I wrote two years ago to make it easier in Debian to find and install packages to get your hardware dongles to work is still going strong. It is a system to look up the hardware present on or connected to the current system, and map the hardware to Debian packages. It can either be done using the tools in isenkram-cli or using the user space daemon in the isenkram package. The latter will notify you, when inserting new hardware, about what packages to install to get the dongle working. It will even provide a button to click on to ask packagekit to install the packages.

Here is a command line example from my Thinkpad laptop:

% isenkram-lookup  
bluez
cheese
ethtool
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tlp
tp-smapi-dkms
tp-smapi-source
tpb
%

It can also list the firmware packages providing firmware requested by the loaded kernel modules, which in my case is an empty list because I have all the firmware my machine needs:

% /usr/sbin/isenkram-autoinstall-firmware -l
info: did not find any firmware files requested by loaded kernel modules.  exiting
%

The last few days I have had a look at several of the around 250 packages in Debian with udev rules. These seem like good candidates to install when a given hardware dongle is inserted, and I found several that should be proposed by isenkram. I have not had time to check all of them, but am happy to report that there are now 97 packages mapped to hardware by Isenkram. 11 of these packages provide hardware mapping using AppStream, while the rest are listed in the modaliases file provided in isenkram.

These are the packages with hardware mappings at the moment. The marked packages are also announcing their hardware support using AppStream, for everyone to use:

air-quality-sensor, alsa-firmware-loaders, argyll, array-info, avarice, avrdude, b43-fwcutter, bit-babbler, bluez, bluez-firmware, brltty, broadcom-sta-dkms, calibre, cgminer, cheese, colord, colorhug-client, dahdi-firmware-nonfree, dahdi-linux, dfu-util, dolphin-emu, ekeyd, ethtool, firmware-ipw2x00, fprintd, fprintd-demo, galileo, gkrellm-thinkbat, gphoto2, gpsbabel, gpsbabel-gui, gpsman, gpstrans, gqrx-sdr, gr-fcdproplus, gr-osmosdr, gtkpod, hackrf, hdapsd, hdmi2usb-udev, hpijs-ppds, hplip, ipw3945-source, ipw3945d, kde-config-tablet, kinect-audio-setup, libnxt, libpam-fprintd, lomoco, madwimax, minidisc-utils, mkgmap, msi-keyboard, mtkbabel, nbc, nqc, nut-hal-drivers, ola, open-vm-toolbox, open-vm-tools, openambit, pcgminer, pcmciautils, pcscd, pidgin-blinklight, printer-driver-splix, pymissile, python-nxt, qlandkartegt, qlandkartegt-garmin, rosegarden, rt2x00-source, sispmctl, soapysdr-module-hackrf, solaar, squeak-plugins-scratch, sunxi-tools, t2n, thinkfan, thinkfinger-tools, tlp, tp-smapi-dkms, tp-smapi-source, tpb, tucnak, uhd-host, usbmuxd, viking, virtualbox-ose-guest-x11, w1retap, xawtv, xserver-xorg-input-vmmouse, xserver-xorg-input-wacom, xserver-xorg-video-qxl, xserver-xorg-video-vmware, yubikey-personalization and zd1211-firmware

If you know of other packages, please let me know with a wishlist bug report against the isenkram-cli package, and ask the package maintainer to add AppStream metadata according to the guidelines to provide the information for everyone. In time, I hope to get rid of the isenkram specific hardware mapping and depend exclusively on AppStream.

Note that the AppStream metadata for broadcom-sta-dkms matches too much hardware, and suggests that the package works with any Ethernet card. See bug #838735 for the details. I hope the maintainer finds time to address it soon. In the meantime I provide an override in isenkram.

Tags: debian, english, isenkram.
