I am very happy to report that the Nikita Noark 5 core project tagged its second release today. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.1.1 since version 0.1.0 (from NEWS.md):
- Continued work on the AngularJS GUI, including document upload.
- Implemented correspondencepartPerson, correspondencepartUnit and correspondencepartInternal.
- Applied for Coverity coverage and started submitting code on a regular basis.
- Started fixing bugs reported by Coverity.
- Corrected and completed HATEOAS links to make sure the entire API is available via URLs in _links.
- Corrected all relation URLs to use a trailing slash.
- Added initial support for storing data in ElasticSearch.
- Now able to receive and store uploaded files in the archive.
- Changed JSON output for object lists to have relations in _links.
- Improved JSON output for empty object lists.
- Now uses the correct MIME type application/vnd.noark5-v4+json.
- Added support for Docker container images.
- Added a simple API browser implemented in JavaScript/Angular.
- Started on an archive client implemented in JavaScript/Angular.
- Started on a prototype to show the public mail journal.
- Improved performance by disabling the Spring FileWatcher.
- Added support for 'arkivskaper', 'saksmappe' and 'journalpost'.
- Added support for some metadata code lists.
- Added support for Cross-Origin Resource Sharing (CORS).
- Changed login method from Basic Auth to JSON Web Token (RFC 7519) style.
- Added support for GET-ing ny-* URLs.
- Added support for modifying entities using PUT and eTag.
- Added support for returning XML output on request.
- Removed support for English field and class names, limiting ourselves to the official names.
- ...
If this sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or by email (the nikita-noark mailing list).
I just noticed that the new Norwegian proposal for archiving rules in the government lists ECMA-376 / ISO/IEC 29500 (aka OOXML) as a valid format to put in long term storage. Luckily such files will only be accepted based on pre-approval from the National Archive. Allowing OOXML files to be used for long term storage might seem like a good idea, as long as we forget that there are plenty of ways for a "valid" OOXML document to have content with no defined interpretation in the standard, which leads to a question and an idea.
Is there any tool to detect whether an OOXML document depends on such undefined behaviour? It would be useful for the National Archive (and anyone else interested in verifying that a document is well defined) to have such a tool available when considering whether to approve the use of OOXML. I'm aware of the officeotron OOXML validator, but do not know how complete it is, nor whether it will report use of undefined behaviour. Are there other similar tools available? Please send me an email if you know of any such tool.
This is a copy of an email I posted to the nikita-noark mailing list. Please follow up there if you would like to discuss this topic. The background is that we are making a free software archive system based on the Norwegian Noark 5 standard for government archives.
I've been wondering a bit lately how trusted timestamps could be stored in Noark 5. Trusted timestamps can be used to verify that some information (document/file/checksum/metadata) has not been changed since a specific time in the past. This is useful to verify the integrity of the documents in the archive.
Then it occurred to me: perhaps the trusted timestamps could be stored as document variants (ie dokumentobjekt referred to from dokumentbeskrivelse) with the filename set to the hash it is stamping?
Given a "dokumentbeskrivelse" with an associated "dokumentobjekt", a new dokumentobjekt is associated with the "dokumentbeskrivelse", with the same attributes as the stamped dokumentobjekt except for these attributes:
- format -> "RFC3161"
- mimeType -> "application/timestamp-reply"
- formatDetaljer -> "<source URL for timestamp service>"
- filenavn -> "<sjekksum>.tsr"
This assumes a service following IETF RFC 3161 is used, which specifies the given MIME type for replies and the .tsr file ending for the content of such trusted timestamps. As far as I can tell from the Noark 5 specifications, it is OK to have several variants/renderings of a document attached to a given dokumentbeskrivelse objekt. It might be stretching it a bit to make some of these variants represent crypto-signatures useful for verifying the document integrity instead of representing the document itself.
Using the source of the service in formatDetaljer allows several timestamping services to be used. This is useful to spread the risk of key compromise over several organisations. It would only be a problem to trust the timestamps if all of the organisations were compromised.
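To make the idea concrete, here is a small Python sketch printing the JSON for such an extra dokumentobjekt. The checksum and service URL are made-up placeholders, and the exact field set will of course depend on the Noark 5 web service in use:

import json

# Hedged example of the extra dokumentobjekt described above.  The
# checksum is a made-up placeholder for the SHA-256 of the stamped
# document, and the field names follow the attribute list above.
sjekksum = "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"
timestamp_variant = {
    "format": "RFC3161",
    "mimeType": "application/timestamp-reply",
    "formatDetaljer": "http://zeitstempel.dfn.de",
    "filenavn": sjekksum + ".tsr",
}
print(json.dumps(timestamp_variant, indent=2))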
The following oneliner on Linux can be used to generate the tsr file. $inputfile is the path to the file to checksum, and $sha256 is the SHA-256 checksum of the file (ie the "<sjekksum>" value mentioned above):
openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
    --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr
To verify the timestamp, you first need to download the public key of the trusted timestamp service, for example using this command:
wget -O ca-cert.txt \
  https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
Note, the public key should be stored alongside the timestamps in the archive to make sure it is also available 100 years from now. It is probably a good idea to standardise how and where to store such public keys, to make them easier to find for those trying to verify documents 100 or 1000 years from now. :)
The verification itself is a simple openssl command:
openssl ts -verify -data $inputfile -in $sha256.tsr \
  -CAfile ca-cert.txt -text
Is there any reason this approach would not work? Is it somehow against the Noark 5 specification?
A few days ago, we received the ruling from my day in court. The case in question is a challenge of the seizure of the DNS domain popcorn-time.no. The ruling simply did not mention most of our arguments, and seemed to take everything ØKOKRIM said at face value, ignoring our demonstration and explanations. But it is hard to tell for sure, as we still have not seen most of the documents in the case and thus were unprepared for, and unable to contradict, several of the claims made in court by the opposition. We are considering an appeal, but it is partly a question of funding, as it is costing us quite a bit to pay for our lawyer. If you want to help, please donate to the NUUG defense fund.
The details of the case, as far as we know them, are available in Norwegian from the NUUG blog. This also includes the ruling itself.
Aftenposten reports today on errors in the exam questions for the exam in politics and human rights, where the texts in the Bokmål and Nynorsk editions were not the same. The exam text is quoted in the article, and I got curious about whether the free translation solution Apertium would have done a better job than Utdanningsdirektoratet. It looks like it might have.

Here is the Bokmål exam text:
Drøft utfordringene knyttet til nasjonalstatenes og andre aktørers rolle og muligheter til å håndtere internasjonale utfordringer, som for eksempel flykningekrisen.

Vedlegge er eksempler på tekster som kan gi relevante perspektiver på temaet:

- Flykningeregnskapet 2016, UNHCR og IDMC
- «Grenseløst Europa for fall» A-Magasinet, 26. november 2015
Apertium translates this into Nynorsk like this:
Drøft utfordringane knytte til nasjonalstatane sine og rolla til andre aktørar og høve til å handtera internasjonale utfordringar, som til dømes *flykningekrisen.

Vedleggja er døme på tekster som kan gje relevante perspektiv på temaet:

- *Flykningeregnskapet 2016, *UNHCR og *IDMC
- «*Grenseløst Europa for fall» A-Magasinet, 26. november 2015
Words that were not understood are marked with an asterisk (*) and need an extra language check. But no words have disappeared, unlike in the text the pupils were presented with at the exam. I suspect, though, that "andre aktørers rolle og muligheter til ..." should have been translated to "rolla til andre aktørar og deira høve til ..." or something like that, but that is perhaps nitpicking. It just underlines that proofreading is always needed after machine translation.
On Wednesday, I spent the entire day in court in Follo Tingrett representing the member association NUUG, alongside the member association EFN and the DNS registrar IMC, challenging the seizure of the DNS name popcorn-time.no. It was interesting to sit in a court of law for the first time in my life. Our team can be seen in the picture above: attorney Ola Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil Eriksen and NUUG board member Petter Reinholdtsen.
The case at hand is that the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime (aka Økokrim) decided on their own, early last year, to seize a DNS domain without following the official policy of the Norwegian DNS authority, which requires a court decision. The web site in question was a site covering Popcorn Time. And Popcorn Time is the name of a technology with both legal and illegal applications. Popcorn Time is a client combining searching a Bittorrent directory available on the Internet with downloading/distributing content via Bittorrent and playing the downloaded content on screen. It can be used illegally if it is used to distribute content against the will of the rights holder, but it can also be used legally to play a lot of content, for example the millions of movies available from the Internet Archive or the collection available from Vodo. We created a video demonstrating legal use of Popcorn Time and played it in court. It can of course be downloaded using Bittorrent.
I did not quite know what to expect from a day in court. The government held on to their version of the story and we held on to ours, and I hope the judge is able to make sense of it all. We will know in two weeks' time. Unfortunately I do not have high hopes, as the government has the upper hand here, with more knowledge about the case, better training in handling criminal law and in general higher standing in the courts than a fairly unknown DNS registrar and member associations. It is expensive to be right, also in Norway. So far the case has cost more than NOK 70 000. To help fund the case, NUUG and EFN have asked for donations, and managed to collect around NOK 25 000 so far. Given the presentation from the government, I expect the government to appeal if the case goes our way. And if the case does not go our way, I hope we have enough funding to appeal.
From the other side came two people from Økokrim. On the benches, appearing to be part of the group from the government, were two people from the Simonsen Vogt Wiig law office, and three others I am not quite sure about. Økokrim had proposed to present two witnesses from The Motion Picture Association, but this was rejected because they did not speak Norwegian and it was a bit late to bring in a translator, but perhaps the two from MPA were present anyway. All seven appeared to know each other. Good to see the case is taken seriously.
If you, like me, believe the courts should be involved before a DNS domain is hijacked by the government, or you believe the Popcorn Time technology has a lot of useful and legal applications, I suggest you too donate to the NUUG defense fund. Both Bitcoin and bank transfer are available. If NUUG gets more than we need for the legal action (very unlikely), the rest will be spent promoting free software, open standards and Unix-like operating systems in Norway, so no matter what happens the money will be put to good use.
If you want to learn more about the case, I recommend you check out the blog posts from NUUG covering the case. They cover the legal arguments on both sides.
These days, with a deadline of May 1st, Riksarkivaren (the National Archivist of Norway) has a regulation out for public hearing. As you can see, there is not much time left before the deadline, which expires on Sunday. This regulation is what lists the formats that are acceptable for archiving in Noark 5 solutions in Norway.
I found the hearing documents at Norsk Arkivråd after being tipped off on the mailing list of the free software project Nikita Noark5-Core, which is implementing a Noark 5 Tjenestegrensesnitt (API). I am involved in the Nikita project, and thanks to my interest in the API project I have read quite a few Noark 5 related documents, and to my surprise discovered that standard email is not on the list of approved formats that can be archived. The hearing with its Sunday deadline is an excellent opportunity to try to do something about that. I am working on my own hearing submission, and wonder if others are interested in supporting the proposal to allow archiving email as email in the archive.
Are you already writing your own hearing submission? If so, you could consider including a sentence about email storage. I do not think much is needed. Here is a short proposed text, kept in Norwegian since it is meant for the submission itself:
Viser til høring sendt ut 2017-02-17 (Riksarkivarens referanse 2016/9840 HELHJO), og tillater oss å sende inn noen innspill om revisjon av Forskrift om utfyllende tekniske og arkivfaglige bestemmelser om behandling av offentlige arkiver (Riksarkivarens forskrift).

Svært mye av vår kommunikasjon foregår i dag på e-post. Vi foreslår derfor at Internett-e-post, slik det er beskrevet i IETF RFC 5322 (https://tools.ietf.org/html/rfc5322), bør inn som godkjent dokumentformat. Vi foreslår at forskriftens oversikt over godkjente dokumentformater ved innlevering i § 5-16 endres til å ta med Internett-e-post.
As part of the work on the API, we have tested how email can be stored in a Noark 5 structure, and are writing a proposal for how this can be done, which will be sent to Arkivverket as soon as it is finished. Those interested can follow the progress on the web.
Update 2017-04-28: Today the hearing submission I wrote was submitted by the NUUG association.
Today I received a piece of really good news. The background is that before Christmas, Nasjonalbiblioteket (the National Library) organised a seminar about its excellent "verksregister" initiative. The only way to sign up for this seminar was to send personal information to Google via Google Forms. I found this a questionable practice, as it should be possible to attend seminars organised by the public sector without having to share one's interests, position and other personal information with Google. I therefore requested access, via Mimes brønn, to the agreements and assessments Nasjonalbiblioteket had made around this. The Personal Data Act sets clear limits for what must be in place before third parties, especially abroad, can be asked to process personal information on one's behalf, so thorough documentation should exist before something like this can be legal. Two lawyers at Nasjonalbiblioteket initially believed this was perfectly fine, and that Google's standard agreement could be used as a data processing agreement. I found that odd, but did not have the capacity to follow up on the case until two days ago.
The good news today, which came after I tipped off Nasjonalbiblioteket that Datatilsynet (the Data Protection Authority) rejected Google's standard agreements as data processing agreements back in 2011, is that Nasjonalbiblioteket has decided to stop using Google Forms/Apps and to enter into a dialogue with DIFI to find better ways to handle sign-ups in line with the Personal Data Act. It is fantastic to see that asking what on earth the public sector is up to sometimes helps.
I discovered today that the web site publishing public mail journals from government agencies, OEP, has started blocking certain types of web clients from getting access. I do not know how many are affected, but it at least applies to libwww-perl and curl. To test for yourself, run the following:
% curl -v -s https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
< HTTP/1.1 404 Not Found
% curl -v -s --header 'User-Agent:Opera/12.0' https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
< HTTP/1.1 200 OK
%
Here you can see that the service returns "404 Not Found" for curl with its default setup, while it returns "200 OK" if curl claims to be Opera version 12.0. Offentlig elektronisk postjournal started the blocking on 2017-03-02.
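The same check can be scripted; here is a hedged Python sketch using an example selection of User-Agent values:

import urllib.error
import urllib.request

# Probe how the OEP service responds to different User-Agent values.
# The selection of user agents below is just an example.
url = "https://www.oep.no/pub/report.xhtml?reportId=3"
for agent in ("curl/7.52.1", "Opera/12.0", "Mozilla/5.0"):
    request = urllib.request.Request(url, headers={"User-Agent": agent})
    try:
        status = urllib.request.urlopen(request).getcode()
    except urllib.error.HTTPError as err:
        status = err.code
    print(agent, "-> HTTP", status)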
The blocking will make it a bit harder to automatically fetch information from oep.no. Could the blocking have been done to prevent automated collection of information from OEP, the way Pressens Offentlighetsutvalg did to document how the ministries obstruct access to information in the report «Slik hindrer departementer innsyn», published in January 2017? That seems unlikely, as it is trivial to change the User-Agent to something new.

Is there any legal basis for the public sector to discriminate between web clients the way it is done here, where access is granted or denied depending on what the client claims its own name is? As OEP is owned by DIFI and operated by Basefarm, perhaps there are documents exchanged between these two parties that one could request access to in order to understand what has happened. But DIFI's mail journal shows only two documents between DIFI and Basefarm during the last year. Mimes brønn next, I think.
I read with interest a news story at digi.no and NRK about how it is not just me: NAV also geolocates IP addresses, analysing the IP addresses of those submitting their unemployment status cards (meldekort) to see whether the cards are submitted from foreign IP addresses. Police attorney Hans Lyder Haare in Drammen is quoted by NRK as saying: "The two were, among other things, exposed by IP addresses. By seeing that the status card was submitted from abroad."
I think it is good that it is becoming better known that IP addresses are linked to individuals and that collected information is used to geolocate people, also by actors here in Norway. I see it as yet another argument for using Tor as much as possible to make IP geolocation harder, so that one can protect one's privacy and avoid sharing one's physical location with strangers.

But there is one thing about this news that worries me. I was tipped off (thanks, #nuug) about NAV's privacy statement, which under the heading "Personvern og statistikk" (privacy and statistics) reads:
"When you visit nav.no, you leave electronic traces behind. The traces are created because your browser automatically sends a series of pieces of information to NAV's server machine every time you request a page. This includes, for example, information about which browser and version you use, and your Internet address (IP address). For each page shown, the following information is stored:

- which page you are looking at
- date and time
- which browser you use
- your IP address

None of this information will be used to identify individuals. NAV uses the information to generate aggregate statistics showing, among other things, which pages are most popular. The statistics are a tool for improving our services."
I fail to see how analysing visitors' IP addresses to find out who submits status cards via the web from an IP address abroad can be done without contradicting the claim that "none of this information will be used to identify individuals". It therefore looks to me like NAV is violating its own privacy statement, which Datatilsynet told me at the beginning of December is probably a violation of the Personal Data Act.
In addition, the privacy statement is quite misleading, given that NAV's web pages not only provide NAV with personal information, but also ask the user's browser to contact five other web servers (script.hotjar.com, static.hotjar.com, vars.hotjar.com, www.google-analytics.com and www.googletagmanager.com), making personal information available to the companies Hotjar and Google, and to anyone able to listen in on the traffic along the way (such as FRA, GCHQ and NSA). I also fail to see how such spreading of personal information can be in line with the requirements of the Personal Data Act, or with NAV's own privacy statement.
Perhaps NAV should take a careful look at its privacy statement? Or perhaps Datatilsynet should?
The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, personally my first use case is to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.
If you would like to help make sure there is a free software alternative for the archives, please join our IRC channel (#nikita on irc.freenode.net) and the project mailing list.
When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment, and finds existing fonds, series and files, asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:
~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446

 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
~/src//noark5-tester$
You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needed to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.
In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.
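To give an idea of what such HATEOAS traversal can look like, here is a minimal hedged sketch in Python. The entry point URL and the exact _links layout are assumptions for illustration, not a description of the real service:

import requests

# Breadth-first traversal of a HATEOAS API, following every href found
# in the _links sections.  The entry point is a placeholder for a
# local test instance.
base = "http://localhost:8092/noark5v4/"
seen = set()
queue = [base]
while queue:
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)
    doc = requests.get(url).json()
    links = doc.get("_links", {}) if isinstance(doc, dict) else {}
    for rel, link in links.items():
        href = link.get("href") if isinstance(link, dict) else None
        if href and href.startswith(base) and href not in seen:
            queue.append(href)
print("visited", len(seen), "URLs")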
The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in the absence of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).
The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those who want to test the current code base. The API tester is implemented in Python.
Did you ever wonder where the web traffic really flows to reach the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this by using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out if our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:
traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1 uio-gw10.uio.no (129.240.202.1) 0.447 ms 0.486 ms 0.621 ms
 2 uio-gw8.uio.no (129.240.24.229) 0.467 ms 0.578 ms 0.675 ms
 3 oslo-gw1.uninett.no (128.39.65.17) 0.385 ms 0.373 ms 0.358 ms
 4 te3-1-2.br1.fn3.as2116.net (193.156.90.3) 1.174 ms 1.172 ms 1.153 ms
 5 he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48) 3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.857 ms
 6 ae1.ar8.oslosda310.as2116.net (195.0.242.39) 0.662 ms 0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23) 0.622 ms
 7 89.191.10.146 (89.191.10.146) 0.931 ms 0.917 ms 0.955 ms
 8 * * *
 9 * * *
This shows the DNS names and IP addresses of (at least some of) the network equipment involved in getting the data traffic from me to the www.stortinget.no server, and how long it took in milliseconds for a packet to reach the equipment and return to me. Three packets are sent, and sometimes the packets do not follow the same path. This is shown for hop 5, where three different IP addresses replied to the traceroute request.
There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packets), mtr (can do ICMP, UDP and TCP) and scapy (a python library with ICMP, UDP and TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.
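For instance, a quick scapy run looks like this (a sketch: scapy must be installed, and it normally needs root privileges to send raw packets):

from scapy.all import traceroute

# Trace the route to a single host; the result object can also be
# rendered as an AS-clustered graph with res.graph().
res, unans = traceroute(["www.stortinget.no"], maxttl=20, verbose=0)
res.show()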
This time around, I wanted to know the geographic location of different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today often will ask the browser to fetch from many servers the parts (for example HTML, JSON, fonts, JavaScript, CSS, video) required to display the content. This will leak information about the visit to those controlling these servers and anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).

Let's pick an example, the Norwegian parliament web site www.stortinget.no. It is read daily by all members of parliament and their staff, as well as political journalists, activists and many other citizens of Norway. A visit to the www.stortinget.no web site will ask your browser to contact 8 other servers: ajax.googleapis.com, insights.hotjar.com, script.hotjar.com, static.hotjar.com, stats.g.doubleclick.net, www.google-analytics.com, www.googletagmanager.com and www.netigate.se. I extracted this by asking PhantomJS to visit the Stortinget web page and tell me all the URLs PhantomJS downloaded to render the page (in HAR format using their netsniff example; I am very grateful to Gorm for showing me how to do this). My goal is to visualize network traces to all IP addresses behind these DNS names, to show where visitors' personal information is spread when visiting the page.
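As an illustration of the extraction step, the host list can be pulled out of such a HAR capture with a few lines of Python (the capture file name is a placeholder):

import json
from urllib.parse import urlparse

# List the hosts contacted while rendering a page, based on a HAR
# capture like the one produced by the PhantomJS netsniff example.
with open("stortinget.har") as harfile:
    har = json.load(harfile)

hosts = {urlparse(entry["request"]["url"]).netloc
         for entry in har["log"]["entries"]}
for host in sorted(hosts):
    print(host)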
When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure would typically be placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.

I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format, and gives a good indication of who controls the network equipment involved, but it does not include geolocation. This graph makes it possible to see that the information is made available at least to UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.

In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names, various location databases and finally using latency times to rule out candidate locations seemed to do a very good job of guessing correct geolocations. But it could only do one trace at a time, did not have a sensor in Norway and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he refused until he had time to clean it up), but he was interested in providing the geolocations in a machine readable format, and willing to set up a sensor in Norway. So since yesterday, it is possible to run traces from Norway in this service thanks to a sensor node set up by the NUUG association, and to get the trace in KML format for further processing.

Here we can see that a lot of traffic passes through Sweden on its way to Denmark, Germany, Holland and Ireland. Plenty of places where the Snowden confirmations verified the traffic is read by various actors without your best interests as their top priority.

Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by www.stortinget.no, ask for the KML file from GeoTraceroute, and create a combined KML file with all the traces (unfortunately only one of the IP addresses behind each DNS name is traced this time; to get them all, one would have to request traces using IP numbers instead of DNS names from GeoTraceroute). That might be the next step in this project. A scripted version of the merge is sketched below.
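Here is the sketch, which copies every Placemark element into one combined document (assuming the traces use the standard KML 2.2 namespace; the file names are placeholders):

import xml.etree.ElementTree as ET

# Merge the Placemark elements from several KML traces into a single
# combined KML file.
ns = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", ns)
combined = ET.Element("{%s}kml" % ns)
document = ET.SubElement(combined, "{%s}Document" % ns)
for filename in ["trace-ajax.googleapis.com.kml", "trace-www.netigate.se.kml"]:
    tree = ET.parse(filename)
    for placemark in tree.getroot().iter("{%s}Placemark" % ns):
        document.append(placemark)
ET.ElementTree(combined).write("combined.kml", encoding="UTF-8", xml_declaration=True)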
Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses for example the Swedish border, we can be sure Swedish Signals Intelligence (FRA) is listening, as GCHQ does in Britain and the NSA in the USA and on cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: The KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
9th March 2017

Over the years, administrating thousands of NFS-mounting Linux computers at a time, I have often needed a way to detect if a machine is experiencing NFS hang. If you try to use df or look at a file or directory affected by the hang, the process (and possibly the shell) will hang too. So you want to be able to detect this without risking the detection process getting stuck too. It has not been obvious how to do this. When the hang has lasted a while, it is possible to find messages like these in dmesg:

nfs: server nfsserver not responding, still trying
nfs: server nfsserver OK

It is hard to know if the hang is still going on, and it is hard to be sure looking in dmesg is going to work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed.

While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect it. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux.

The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount point for the machine. I assume this will not show lazily umounted NFS points, nor NFS mount points in a different process context (ie with a different file system view), but that does not worry me.

The content for a NFS mount point looks similar to this:

[...]
device /dev/mapper/Debian-var mounted on /var with fstype ext3
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
 opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
 age: 7863311
 caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
 sec: flavor=1,pseudoflavor=1
 events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
 bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
 RPC iostats version: 1.0 p/v: 100003/3 (nfs)
 xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
 per-op statistics
 NULL: 0 0 0 0 0 0 0 0
 GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
 SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
 LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
 ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
 READLINK: 125 125 0 20472 18620 0 1112 1118
 READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
 WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
 CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
 MKDIR: 3680 3680 0 773980 993920 26 23990 24245
 SYMLINK: 903 903 0 233428 245488 6 5865 5917
 MKNOD: 80 80 0 20148 21760 0 299 304
 REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
 RMDIR: 3367 3367 0 645112 484848 22 5782 6002
 RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
 LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
 READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
 READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
 FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
 FSINFO: 2 2 0 232 328 0 1 1
 PATHCONF: 1 1 0 116 140 0 0 0
 COMMIT: 0 0 0 0 0 0 0 0

device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
[...]

The key number to look at is the third number in the per-op list. It is the number of NFS timeouts experienced per file system operation, here 22 write timeouts and 5 access timeouts. If these numbers are increasing, I believe the machine is experiencing NFS hang. Unfortunately the timeout value does not start to increase right away. The NFS operations need to time out first, and this can take a while. The exact timeout value depends on the setup. For example the defaults for TCP and UDP mount points are quite different, and the timeout value is affected by the soft, hard, timeo and retrans NFS mount options.

The only way I have been able to get working on Debian and RedHat Enterprise Linux for getting the timeout count is to peek in /proc/. But according to the Solaris 10 System Administration Guide: Network Services, the 'nfsstat -c' command can be used to get these timeout values. But this does not work on Linux, as far as I can tell. I asked Debian about this, but have not seen any replies yet.

Is there a better way to figure out if a Linux NFS client is experiencing NFS hangs? Is there a way to detect which processes are affected? Is there a way to get the NFS mount going again quickly once the network problem causing the NFS hang has been cleared? I would very much welcome some clues, as we regularly run into NFS hangs.
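Based on this, a small Python sketch (my reading of the format, not a tested monitoring tool) can pull the counters out of /proc/self/mountstats; comparing two samples taken some time apart would reveal an ongoing hang:

# Report the per-operation timeout counter (the third number in each
# per-op line) for every NFS mount point in /proc/self/mountstats.

def nfs_timeouts(path="/proc/self/mountstats"):
    timeouts = {}
    mountpoint = None
    in_per_op = False
    with open(path) as statsfile:
        for line in statsfile:
            words = line.split()
            if line.startswith("device "):
                in_per_op = False
                is_nfs = len(words) > 7 and words[7].startswith("nfs")
                mountpoint = words[4] if is_nfs else None
            elif mountpoint and line.strip() == "per-op statistics":
                in_per_op = True
            elif mountpoint and in_per_op and len(words) >= 4 and words[0].endswith(":"):
                operation, timeout_count = words[0].rstrip(":"), int(words[3])
                if timeout_count:
                    timeouts.setdefault(mountpoint, {})[operation] = timeout_count
    return timeouts

for mountpoint, ops in nfs_timeouts().items():
    for operation, count in ops.items():
        print("%s: %s timeouts = %d" % (mountpoint, operation, count))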
4th January 2017

Do you have a large iCalendar file with lots of old entries, and would like to archive them to save space and resources? At least those of us using KOrganizer know that turning on and off an event set becomes slower and slower the more entries there are in the set. While working on migrating our calendars to a Radicale CalDAV server on our Freedombox server, my loved one wondered if I could find a way to split up the calendar file she had in KOrganizer, and I set out to write a tool. I spent a few days writing and polishing the system, and it is now ready for general consumption. The code for ical-archiver is publicly available from a git repository on github. The system is written in Python and depends on the vobject Python module.
To use it, locate the iCalendar file you want to operate on and give it as an argument to the ical-archiver script. This will generate a set of new files, one file per component type per year, for all components expiring more than two years in the past. The vevent, vtodo and vjournal entries are handled by the script. The remaining entries are stored in a 'remaining' file.
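Before looking at a test run, here is a minimal sketch of the underlying idea, grouping events by their start year using the vobject module. It is a simplified illustration, not the actual ical-archiver code:

import vobject

# Split the VEVENT components of one iCalendar file into one file per
# year, based on the DTSTART value.  Simplified illustration only.
with open("t/2004-2016.ics") as icsfile:
    calendar = vobject.readOne(icsfile.read())

by_year = {}
for component in calendar.components():
    if component.name == "VEVENT":
        year = component.dtstart.value.year
        by_year.setdefault(year, []).append(component)

for year, events in sorted(by_year.items()):
    subset = vobject.iCalendar()
    for event in events:
        subset.add(event)
    with open("subset-vevent-%d.ics" % year, "w") as outfile:
        outfile.write(subset.serialize())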
This is what a test run can look like:

% ical-archiver t/2004-2016.ics
Found 3612 vevents
Found 6 vtodos
Found 2 vjournals
Writing t/2004-2016.ics-subset-vevent-2004.ics
Writing t/2004-2016.ics-subset-vevent-2005.ics
Writing t/2004-2016.ics-subset-vevent-2006.ics
Writing t/2004-2016.ics-subset-vevent-2007.ics
Writing t/2004-2016.ics-subset-vevent-2008.ics
Writing t/2004-2016.ics-subset-vevent-2009.ics
Writing t/2004-2016.ics-subset-vevent-2010.ics
Writing t/2004-2016.ics-subset-vevent-2011.ics
Writing t/2004-2016.ics-subset-vevent-2012.ics
Writing t/2004-2016.ics-subset-vevent-2013.ics
Writing t/2004-2016.ics-subset-vevent-2014.ics
Writing t/2004-2016.ics-subset-vjournal-2007.ics
Writing t/2004-2016.ics-subset-vjournal-2011.ics
Writing t/2004-2016.ics-subset-vtodo-2012.ics
Writing t/2004-2016.ics-remaining.ics
%

As you can see, the original file is untouched and new files are written with names derived from the original file. If you are happy with their content, the *-remaining.ics file can replace the original and the others can be archived or imported as historical calendar collections.

The script should probably be improved a bit. The error handling when discovering broken entries is not good, and I am not sure yet if it makes sense to split different entry types into separate files or not. The program is thus likely to change. If you find it interesting, please get in touch. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
8th March 2017

So the new president in the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the confirmations from Snowden documented, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.
Next, the Federal Bureau of Investigation asks the Department of Justice to go public rejecting the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.
There is even the director of the FBI stating that he never saw an order requesting wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?
What I find most sad in this story is how Norwegian journalists present it. In a news report the other day on the radio from the Norwegian National Broadcasting Company (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.
Update 2017-03-13: Looks like The Intercept reports that US Senator Rand Paul confirms what I state above.
23rd December 2016

I received a very nice Christmas present today. As my regular readers probably know, I have been working on the Isenkram system for many years. The goal of the Isenkram system is to make it easier for users to figure out what to install to get a given piece of hardware to work in Debian, and a key part of this system is a way to map hardware to packages. Isenkram has its own mapping database, and also uses data provided by each package using the AppStream metadata format. And today, AppStream in Debian learned to look up hardware the same way Isenkram is doing it, ie using fnmatch():

% appstreamcli what-provides modalias \
  usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
Identifier: pymissile [generic]
Name: pymissile
Summary: Control original Striker USB Missile Launcher
Package: pymissile
% appstreamcli what-provides modalias usb:v0694p0002d0000
Identifier: libnxt [generic]
Name: libnxt
Summary: utility library for talking to the LEGO Mindstorms NXT brick
Package: libnxt
---
Identifier: t2n [generic]
Name: t2n
Summary: Simple command-line tool for Lego NXT
Package: t2n
---
Identifier: python-nxt [generic]
Name: python-nxt
Summary: Python driver/interface/wrapper for the Lego Mindstorms NXT robot
Package: python-nxt
---
Identifier: nbc [generic]
Name: nbc
Summary: C compiler for LEGO Mindstorms NXT bricks
Package: nbc
%

A similar query can be done using the combined AppStream and Isenkram databases using the isenkram-lookup tool:

% isenkram-lookup usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
pymissile
% isenkram-lookup usb:v0694p0002d0000
libnxt
nbc
python-nxt
t2n
%

You can find the modalias values relevant for your machine using cat $(find /sys/devices/ -name modalias).
If you want to make this system a success and help Debian users make the most of the hardware they have, please help add AppStream metadata for your package following the guidelines documented in the wiki. So far only 11 packages provide such information, among the several hundred hardware-specific packages in Debian. The Isenkram database on the other hand contains 101 packages, mostly related to USB dongles. Most of the packages with hardware mapping in AppStream are LEGO Mindstorms related, because I have, as part of my involvement in the Debian LEGO team, given priority to making sure LEGO users get proposed the complete set of packages in Debian for that particular hardware. The team also got a nice Christmas present today: the nxt-firmware package made it into Debian. With this package in place, it is now possible to use the LEGO Mindstorms NXT unit with only free software, as the nxt-firmware package contains the source and firmware binaries for the NXT brick.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
3rd March 2017

For almost a year now, we have been working on making a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless effort of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proofreading to ensure consistent language and use of correct computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proofreading must be completed and all the figures need to be translated. If you want to help out, get in touch.
A fresh PDF edition of the book in A4 format (the final book will have smaller pages), created every morning, is available for proofreading. If you find any errors, please visit Weblate and correct them. The state of the translation, including figures, is a useful source for those who provide Norwegian Bokmål screenshots and figures.
20th December 2016

The Isenkram system I wrote two years ago to make it easier in Debian to find and install packages to get your hardware dongles to work is still going strong. It is a system to look up the hardware present on or connected to the current system, and map the hardware to Debian packages. It can either be done using the tools in isenkram-cli or using the user space daemon in the isenkram package. The latter will notify you, when inserting new hardware, about what packages to install to get the dongle working. It will even provide a button to click on to ask packagekit to install the packages.
Here is a command line example from my Thinkpad laptop:
% isenkram-lookup
bluez
cheese
ethtool
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tlp
tp-smapi-dkms
tp-smapi-source
tpb
%

It can also list the firmware packages providing firmware requested by the loaded kernel modules, which in my case is an empty list because I have all the firmware my machine needs:

% /usr/sbin/isenkram-autoinstall-firmware -l
info: did not find any firmware files requested by loaded kernel modules. exiting
%

The last few days I had a look at several of the around 250 packages in Debian with udev rules. These seem like good candidates to install when a given hardware dongle is inserted, and I found several that should be proposed by isenkram. I have not had time to check all of them, but am happy to report that now there are 97 packages mapped to hardware by Isenkram. 11 of these packages provide hardware mapping using AppStream, while the rest are listed in the modaliases file provided in isenkram.

These are the packages with hardware mappings at the moment. The marked packages are also announcing their hardware support using AppStream, for everyone to use:

air-quality-sensor, alsa-firmware-loaders, argyll, array-info, avarice, avrdude, b43-fwcutter, bit-babbler, bluez, bluez-firmware, brltty, broadcom-sta-dkms, calibre, cgminer, cheese, colord, colorhug-client, dahdi-firmware-nonfree, dahdi-linux, dfu-util, dolphin-emu, ekeyd, ethtool, firmware-ipw2x00, fprintd, fprintd-demo, galileo, gkrellm-thinkbat, gphoto2, gpsbabel, gpsbabel-gui, gpsman, gpstrans, gqrx-sdr, gr-fcdproplus, gr-osmosdr, gtkpod, hackrf, hdapsd, hdmi2usb-udev, hpijs-ppds, hplip, ipw3945-source, ipw3945d, kde-config-tablet, kinect-audio-setup, libnxt, libpam-fprintd, lomoco, madwimax, minidisc-utils, mkgmap, msi-keyboard, mtkbabel, nbc, nqc, nut-hal-drivers, ola, open-vm-toolbox, open-vm-tools, openambit, pcgminer, pcmciautils, pcscd, pidgin-blinklight, printer-driver-splix, pymissile, python-nxt, qlandkartegt, qlandkartegt-garmin, rosegarden, rt2x00-source, sispmctl, soapysdr-module-hackrf, solaar, squeak-plugins-scratch, sunxi-tools, t2n, thinkfan, thinkfinger-tools, tlp, tp-smapi-dkms, tp-smapi-source, tpb, tucnak, uhd-host, usbmuxd, viking, virtualbox-ose-guest-x11, w1retap, xawtv, xserver-xorg-input-vmmouse, xserver-xorg-input-wacom, xserver-xorg-video-qxl, xserver-xorg-video-vmware, yubikey-personalization and zd1211-firmware

If you know of other packages, please let me know with a wishlist bug report against the isenkram-cli package, and ask the package maintainer to add AppStream metadata according to the guidelines to provide the information for everyone. In time, I hope to get rid of the isenkram-specific hardware mapping and depend exclusively on AppStream.

Note, the AppStream metadata for broadcom-sta-dkms is matching too much hardware, and suggests that the package works with any Ethernet card. See bug #838735 for the details. I hope the maintainer finds time to address it soon. In the mean time I provide an override in isenkram.

1st March 2017

A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, to get it to work out of the box you need Linux kernel version 4.1 or later. I tested it on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small test oneliner to check. It first prints the current entropy level, drains /dev/random, and then prints the entropy level once a second for five seconds. Here is the situation without the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
    cat /proc/sys/kernel/random/entropy_avail; \
    sleep 1; \
  done
300
0+1 oppføringer inn
0+1 oppføringer ut
28 byte kopiert, 0,000264565 s, 106 kB/s
4
8
12
17
21
%

The entropy level increases by 3-4 every second. In such a case any application requiring random bits (like an HTTPS enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
    cat /proc/sys/kernel/random/entropy_avail; \
    sleep 1; \
  done
1079
0+1 oppføringer inn
0+1 oppføringer ut
104 byte kopiert, 0,000487647 s, 213 kB/s
433
1028
1031
1035
1038
%

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)

Update: The dongle was presented at Debconf last year. You might find the talk recording illuminating. It explains exactly what the source of randomness is, if you are unable to spot it from the schema drawing available from the ChaosKey web site linked at the start of this blog post.