X-Git-Url: https://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/477d730373d4136424f11718d89288096af838a9..48242bda221d48f034db275c4758ac81cc92d31b:/blog/index.html?ds=inline diff --git a/blog/index.html b/blog/index.html index 5fc74795d6..95d6887c41 100644 --- a/blog/index.html +++ b/blog/index.html @@ -20,67 +20,111 @@
-
Email as an archive format in the National Archivist's regulations?
-
27th April 2017
-

These days, with a deadline of May 1st, the National Archivist has a public consultation out on his regulations. As one can see, there is not much time left before the deadline, which expires on Sunday. These regulations are what list which formats are acceptable for archiving in Noark 5 solutions in Norway.

- -

I found the consultation documents at Norsk Arkivråd after being tipped off on the mailing list of the free software project Nikita Noark5-Core, which is creating a Noark 5 web service interface ("Tjenestegrensesnitt"). I am involved in the Nikita project, and thanks to my interest in the web service interface project I have read quite a few Noark 5-related documents, and to my surprise discovered that standard email is not on the list of approved formats that can be archived. The consultation with its Sunday deadline is an excellent opportunity to try to do something about that. I am working on my own consultation response, and wonder if others are interested in supporting the proposal to allow archiving email as email in the archive.

- -

Are you already writing your own consultation response? If so, consider including a statement about email storage. I do not think much is needed. Here is a short proposed text:

- -

- -

Referring to the consultation sent out 2017-02-17 (the National Archivist's reference 2016/9840 HELHJO), we take the liberty of submitting some input on the revision of the "Forskrift om utfyllende tekniske og arkivfaglige bestemmelser om behandling av offentlige arkiver" (the National Archivist's regulations).

- -

A very large share of our communication today takes place via email.  We therefore propose that Internet email, as described in IETF RFC 5322, https://tools.ietf.org/html/rfc5322, should be included as an approved document format.  We propose that the regulation's list of approved document formats at deposit in § 5-16 be amended to include Internet email.

- -

- -

As part of the work on the web service interface, we have tested how email can be stored in a Noark 5 structure, and we are writing a proposal for how this can be done, which will be sent to the National Archives as soon as it is finished. Those interested can follow the progress on the web.

- -

Update 2017-04-28: Today the consultation response I wrote was submitted by the association NUUG.

+ +
8th October 2018
+

I have earlier covered the basics of trusted timestamping using the 'openssl ts' client. See the blog posts from 2014, 2016 and 2017 for those stories. But sometimes I want to integrate the timestamping into other code, and recently I needed to integrate it into Python. After searching a bit, I found the rfc3161 library, which seemed like a good fit, but I soon discovered it only worked with Python version 2, and I needed something that works with Python version 3. Luckily I next came across the rfc3161ng library, a fork of the original rfc3161 library. Not only does it work with Python 3, it has fixed a few of the bugs in the original library, and it has an active maintainer. I decided to wrap it up and make it available in Debian, and a few days ago it entered Debian unstable and testing.

+ +

Using the library is fairly straightforward. The only slightly problematic step is to fetch the required certificates to verify the timestamp. For some services it is straightforward, while for others I have not yet figured out how to do it. Here is a small standalone code example based on one of the integration tests in the library code:

+ +
+#!/usr/bin/python3
+
+"""
+
+Python 3 script demonstrating how to use the rfc3161ng module to
+get trusted timestamps.
+
+The license of this code is the same as the license of the rfc3161ng
+library, ie MIT/BSD.
+
+"""
+
+import os
+import pyasn1.codec.der.encoder
+import rfc3161ng
+import subprocess
+import tempfile
+import urllib.request
+
+def store(f, data):
+    f.write(data)
+    f.flush()
+    f.seek(0)
+
+def fetch(url, f=None):
+    response = urllib.request.urlopen(url)
+    data = response.read()
+    if f:
+        store(f, data)
+    return data
+
+def main():
+    with tempfile.NamedTemporaryFile() as cert_f,\
+         tempfile.NamedTemporaryFile() as ca_f,\
+         tempfile.NamedTemporaryFile() as msg_f,\
+         tempfile.NamedTemporaryFile() as tsr_f:
+
+        # First fetch certificates used by service
+        certificate_data = fetch('https://freetsa.org/files/tsa.crt', cert_f)
+        ca_data = fetch('https://freetsa.org/files/cacert.pem', ca_f)
+
+        # Then timestamp the message
+        timestamper = \
+            rfc3161ng.RemoteTimestamper('http://freetsa.org/tsr',
+                                        certificate=certificate_data)
+        data = b"Python forever!\n"
+        tsr = timestamper(data=data, return_tsr=True)
+
+        # Finally, convert message and response to something 'openssl ts' can verify
+        store(msg_f, data)
+        store(tsr_f, pyasn1.codec.der.encoder.encode(tsr))
+        args = ["openssl", "ts", "-verify",
+                "-data", msg_f.name,
+                "-in", tsr_f.name,
+                "-CAfile", ca_f.name,
+                "-untrusted", cert_f.name]
+        subprocess.check_call(args)
+
+if '__main__' == __name__:
+    main()
+
+ +

The code fetches the required certificates, stores them as temporary files, timestamps a simple message, stores the message and timestamp to disk, and asks 'openssl ts' to verify the timestamp. A timestamp is around 1.5 kiB in size, and should be fairly easy to store for future use.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

- Tags: norsk, offentlig innsyn. + Tags: english, sikkerhet.
@@ -88,52 +132,68 @@ fremdriften på web.

- -
20th April 2017
-

I discovered today that the web site publishing public mail journals from government agencies, OEP, has started blocking certain types of web clients from getting access. I do not know how many are affected, but at least libwww-perl and curl are. To test for yourself, run the following:

+ +
4th October 2018
+

A few days ago, I rescued a Windows victim over to Debian. To try to rescue the remains, I helped set up automatic sync with Google Drive. I did not find any sensible Debian package handling this automatically, so I rebuilt the grive2 source from the Ubuntu UPD8 PPA to do the task, and added an autostart desktop entry and a small shell script to run in the background while the user is logged in to do the sync. Here is a sketch of the setup for future reference.

+ +

I first created ~/googledrive, entered the directory and ran 'grive -a' to authenticate the machine/user. Next, I created an autostart hook in ~/.config/autostart/grive.desktop to start the sync when the user logs in:

-
-% curl -v -s https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
-< HTTP/1.1 404 Not Found
-% curl -v -s --header 'User-Agent:Opera/12.0' https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
-< HTTP/1.1 200 OK
-%
-
+

+[Desktop Entry]
+Name=Google drive autosync
+Type=Application
+Exec=/home/user/bin/grive-sync
+

+ +

Finally, I wrote the ~/bin/grive-sync script to sync +~/googledrive/ with the files in Google Drive.

+ +

+#!/bin/sh
+set -e
+cd ~/
+cleanup() {
+    if [ "$syncpid" ] ; then
+        kill $syncpid
+    fi
+}
+trap cleanup EXIT INT QUIT
+/usr/lib/grive/grive-sync.sh listen googledrive 2>&1 | sed "s%^%$0:%" &
+syncpid=$!
+while true; do
+    if ! xhost >/dev/null 2>&1 ; then
+        echo "no DISPLAY, exiting as the user probably logged out"
+        exit 1
+    fi
+    if [ ! -e /run/user/1000/grive-sync.sh_googledrive ] ; then
+        /usr/lib/grive/grive-sync.sh sync googledrive
+    fi
+    sleep 300
+done 2>&1 | sed "s%^%$0:%"
+

+ +

Feel free to use the setup if you want. It can be assumed to be GNU GPL v2 licensed (or any later version, at your leisure), but I doubt it is possible to claim copyright on this code.

-

Here one can see that the service returns «404 Not Found» for curl in its default setup, while it returns «200 OK» if curl claims to be Opera version 12.0. Offentlig elektronisk postjournal started the blocking on 2017-03-02.

- -

The blocking will make it a bit harder to fetch information from oep.no automatically. Could the blocking have been done to prevent automated collection of information from OEP, the way Pressens Offentlighetsutvalg did to document how the ministries obstruct access in the report «Slik hindrer departementer innsyn», published in January 2017? That seems unlikely, as it is trivial to change the User-Agent to something new.

- -

Is there a legal basis for the government to discriminate between web clients the way it is done here, where access is granted or denied depending on what the client claims its name is? As OEP is owned by DIFI and operated by Basefarm, there may be documents exchanged between those two parties one could request access to in order to understand what has happened. But the mail journal of DIFI shows only two documents between DIFI and Basefarm in the last year. Mimes brønn next, I think.

+

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

- Tags: norsk, offentlig innsyn. + Tags: debian, english.
@@ -141,101 +201,161 @@ tenker jeg.

- -
19th March 2017
-

The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, personally my first use case is to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.

- -

If you would like to help make sure there is a free software -alternatives for the archives, please join our IRC channel -(#nikita on -irc.freenode.net) and -the -project mailing list.

- -

When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment, and finds existing fonds, series and files, asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:

+ +
29th September 2018
+

It would come as no surprise to anyone that I am interested in bitcoins and virtual currencies. I've been keeping an eye on virtual currencies for many years, and it is part of the reason that, a few months ago, I started writing a Python library for collecting currency exchange rates and trading on virtual currency exchanges. I decided to name the end result valutakrambod, which perhaps can be translated as "small currency shop".

+ +

The library uses the tornado Python library to handle HTTP and websocket connections, and provides an asynchronous system for connecting to and tracking several services. The code is available from github.

+ +

There are two example clients of the library. One is very simple and lists every updated buy/sell price received from the various services. This code is started by running bin/btc-rates and calls the client code in valutakrambod/client.py. The simple client looks like this:

+ +

+import functools
+import tornado.ioloop
+import valutakrambod
+class SimpleClient(object):
+    def __init__(self):
+        self.services = []
+        self.streams = []
+        pass
+    def newdata(self, service, pair, changed):
+        print("%-15s %s-%s: %8.3f %8.3f" % (
+            service.servicename(),
+            pair[0],
+            pair[1],
+            service.rates[pair]['ask'],
+            service.rates[pair]['bid'])
+        )
+    async def refresh(self, service):
+        await service.fetchRates(service.wantedpairs)
+    def run(self):
+        self.ioloop = tornado.ioloop.IOLoop.current()
+        self.services = valutakrambod.service.knownServices()
+        for e in self.services:
+            service = e()
+            service.subscribe(self.newdata)
+            stream = service.websocket()
+            if stream:
+                self.streams.append(stream)
+            else:
+                # Fetch information from non-streaming services immediately
+                self.ioloop.call_later(len(self.services),
+                                       functools.partial(self.refresh, service))
+                # as well as regularly
+                service.periodicUpdate(60)
+        for stream in self.streams:
+            stream.connect()
+        try:
+            self.ioloop.start()
+        except KeyboardInterrupt:
+            print("Interrupted by keyboard, closing all connections.")
+            pass
+        for stream in self.streams:
+            stream.close()
+

+ +

The library client loops over all known "public" services, initialises each one, subscribes to any updates from the service, checks for and activates websocket streaming if the service provides it, and if no streaming is supported, fetches information from the service and sets up a periodic update every 60 seconds. The output from this client can look like this:

-~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
-using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
-using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446
-
- 0 - Title of the test case file created 2017-03-18T23:49:32.103446
- 1 - Title of the test file created 2017-03-18T23:49:32.103446
-Select which mappe you want (or search term): 0
-Uploading mangelmelding/mangler.pdf
-  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
-  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
-~/src//noark5-tester$
+Bl3p            BTC-EUR: 5687.110 5653.690
+Bl3p            BTC-EUR: 5687.110 5653.690
+Bl3p            BTC-EUR: 5687.110 5653.690
+Hitbtc          BTC-USD: 6594.560 6593.690
+Hitbtc          BTC-USD: 6594.560 6593.690
+Bl3p            BTC-EUR: 5687.110 5653.690
+Hitbtc          BTC-USD: 6594.570 6593.690
+Bitstamp        EUR-USD:    1.159    1.154
+Hitbtc          BTC-USD: 6594.570 6593.690
+Hitbtc          BTC-USD: 6594.580 6593.690
+Hitbtc          BTC-USD: 6594.580 6593.690
+Hitbtc          BTC-USD: 6594.580 6593.690
+Bl3p            BTC-EUR: 5687.110 5653.690
+Paymium         BTC-EUR: 5680.000 5620.240
 

-

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.

- -

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.

- -

The test document I uploaded is a summary of all the specification -defects we have collected so far while implementing the web service. -There are several unclear and conflicting parts of the specification, -and we have -started -writing down the questions we get from implementing it. We use a -format inspired by how The -Austin Group collect defect reports for the POSIX standard with -their -instructions for the MANTIS defect tracker system, in lack of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :). - -

The Nikita project is implemented using Java and Spring, and is -fairly easy to get up and running using Docker containers for those -that want to test the current code base. The API tester is -implemented in Python.

+

The exchange order book is tracked in addition to the best buy/sell +price, for those that need to know the details.

+ +

The other example client focuses on providing a curses view with updated buy/sell prices as soon as they are received from the services. This code is located in bin/btc-rates-curses and activated by using the '-c' argument. Without the argument the "curses" output is printed without using curses, which is useful for debugging. The curses view looks like this:

+ +

+           Name Pair   Bid         Ask         Spr    Ftcd    Age
+ BitcoinsNorway BTCEUR   5591.8400   5711.0800   2.1%   16    nan     60
+       Bitfinex BTCEUR   5671.0000   5671.2000   0.0%   16     22     59
+        Bitmynt BTCEUR   5580.8000   5807.5200   3.9%   16     41     60
+         Bitpay BTCEUR   5663.2700         nan   nan%   15    nan     60
+       Bitstamp BTCEUR   5664.8400   5676.5300   0.2%    0      1      1
+           Bl3p BTCEUR   5653.6900   5684.9400   0.5%    0    nan     19
+       Coinbase BTCEUR   5600.8200   5714.9000   2.0%   15    nan    nan
+         Kraken BTCEUR   5670.1000   5670.2000   0.0%   14     17     60
+        Paymium BTCEUR   5620.0600   5680.0000   1.1%    1   7515    nan
+ BitcoinsNorway BTCNOK  52898.9700  54034.6100   2.1%   16    nan     60
+        Bitmynt BTCNOK  52960.3200  54031.1900   2.0%   16     41     60
+         Bitpay BTCNOK  53477.7833         nan   nan%   16    nan     60
+       Coinbase BTCNOK  52990.3500  54063.0600   2.0%   15    nan    nan
+        MiraiEx BTCNOK  52856.5300  54100.6000   2.3%   16    nan    nan
+ BitcoinsNorway BTCUSD   6495.5300   6631.5400   2.1%   16    nan     60
+       Bitfinex BTCUSD   6590.6000   6590.7000   0.0%   16     23     57
+         Bitpay BTCUSD   6564.1300         nan   nan%   15    nan     60
+       Bitstamp BTCUSD   6561.1400   6565.6200   0.1%    0      2      1
+       Coinbase BTCUSD   6504.0600   6635.9700   2.0%   14    nan    117
+         Gemini BTCUSD   6567.1300   6573.0700   0.1%   16     89    nan
+         Hitbtc+BTCUSD   6592.6200   6594.2100   0.0%    0      0      0
+         Kraken BTCUSD   6565.2000   6570.9000   0.1%   15     17     58
+  Exchangerates EURNOK      9.4665      9.4665   0.0%   16 107789    nan
+     Norgesbank EURNOK      9.4665      9.4665   0.0%   16 107789    nan
+       Bitstamp EURUSD      1.1537      1.1593   0.5%    4      5      1
+  Exchangerates EURUSD      1.1576      1.1576   0.0%   16 107789    nan
+ BitcoinsNorway LTCEUR      1.0000     49.0000  98.0%   16    nan    nan
+ BitcoinsNorway LTCNOK    492.4800    503.7500   2.2%   16    nan     60
+ BitcoinsNorway LTCUSD      1.0221     49.0000  97.9%   15    nan    nan
+     Norgesbank USDNOK      8.1777      8.1777   0.0%   16 107789    nan
+

+ +

The code for this client is too complex for a simple blog post, so you will have to check out the git repository to figure out how it works. What I can tell you is how the three last numbers on each line should be interpreted. The first is how many seconds ago information was received from the service. The second is how long ago, according to the service, the provided information was updated. The last is an estimate of how often the buy/sell values change.
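For reference, the Spr column in the table above appears to be the relative spread, (ask - bid) / ask, as a percentage. This is my reading of the output, not taken from the library code; a quick sketch:

```python
def spread_percent(bid, ask):
    # Relative spread between best bid and best ask, expressed as a
    # percentage of the ask price.  NaN handling is left out here.
    return (ask - bid) / ask * 100

# Bitstamp BTCEUR from the table above: bid 5664.84, ask 5676.53
print("%.1f%%" % spread_percent(5664.84, 5676.53))  # prints 0.2%
```

The Bitmynt BTCEUR line (bid 5580.80, ask 5807.52) comes out at 3.9% with the same formula, matching the table.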

+ +

If you find this library useful, or would like to improve it, I +would love to hear from you. Note that for some of the services I've +implemented a trading API. It might be the topic of a future blog +post.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -243,114 +363,47 @@ implemented in Python.

- -
9th March 2017
-

Over the years, administrating thousands of NFS-mounting Linux computers at a time, I often needed a way to detect if a machine was experiencing an NFS hang. If you try to use df or look at a file or directory affected by the hang, the process (and possibly the shell) will hang too. So you want to be able to detect this without risking the detection process getting stuck too. It has not been obvious how to do this. When the hang has lasted a while, it is possible to find messages like these in dmesg:

- -

-nfs: server nfsserver not responding, still trying -
nfs: server nfsserver OK -

- -

It is hard to know if the hang is still going on, and it is hard to be sure looking in dmesg is going to work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed.

- -

While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect it. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux.

- -

The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount points for the machine. I assume this will not show lazily umounted NFS points, nor NFS mount points in a different process context (i.e. with a different filesystem view), but that does not worry me.

- -

The content for an NFS mount point looks similar to this:

+ +
24th September 2018
+

Back in February, I got curious to see +if +VLC now supported Bittorrent streaming. It did not, despite the +fact that the idea and code to handle such streaming had been floating +around for years. I did however find +a standalone plugin +for VLC to do it, and half a year later I decided to wrap up the +plugin and get it into Debian. I uploaded it to NEW a few days ago, +and am very happy to report that it +entered +Debian a few hours ago, and should be available in Debian/Unstable +tomorrow, and Debian/Testing in a few days.

+ +

With the vlc-plugin-bittorrent package installed you should be able +to stream videos using a simple call to

-[...]
-device /dev/mapper/Debian-var mounted on /var with fstype ext3
-device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
-        opts:   rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
-        age:    7863311
-        caps:   caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
-        sec:    flavor=1,pseudoflavor=1
-        events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0 
-        bytes:  166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809 
-        RPC iostats version: 1.0  p/v: 100003/3 (nfs)
-        xprt:   tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
-        per-op statistics
-                NULL: 0 0 0 0 0 0 0 0
-             GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
-             SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
-              LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
-              ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
-            READLINK: 125 125 0 20472 18620 0 1112 1118
-                READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
-               WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
-              CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
-               MKDIR: 3680 3680 0 773980 993920 26 23990 24245
-             SYMLINK: 903 903 0 233428 245488 6 5865 5917
-               MKNOD: 80 80 0 20148 21760 0 299 304
-              REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
-               RMDIR: 3367 3367 0 645112 484848 22 5782 6002
-              RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
-                LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
-             READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
-         READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
-              FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
-              FSINFO: 2 2 0 232 328 0 1 1
-            PATHCONF: 1 1 0 116 140 0 0 0
-              COMMIT: 0 0 0 0 0 0 0 0
-
-device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
-[...]
+vlc https://archive.org/download/TheGoat/TheGoat_archive.torrent
 

-

The key number to look at is the third number in the per-op list. It is the number of NFS timeouts experienced per file system operation. Here 22 write timeouts and 5 access timeouts. If these numbers are increasing, I believe the machine is experiencing an NFS hang. Unfortunately the timeout value does not start to increase right away. The NFS operations need to time out first, and this can take a while. The exact timeout value depends on the setup. For example the defaults for TCP and UDP mount points are quite different, and the timeout value is affected by the soft, hard, timeo and retrans NFS mount options.
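Extracting that third column programmatically is straightforward. A sketch of a parser for the per-op section, based on the sample output above (the field layout may vary between kernel versions, and a real monitor would track these counters per mount and over time):

```python
def nfs_timeouts(mountstats_text):
    """Extract the timeout counter (third field) from the per-op
    statistics lines of /proc/self/mountstats content.

    Returns a dict mapping operation name (e.g. 'WRITE') to its
    timeout count, covering the most recent per-op section seen.
    """
    timeouts = {}
    in_perop = False
    for line in mountstats_text.splitlines():
        line = line.strip()
        if line.startswith("device "):
            # A new mount entry ends any per-op section in progress.
            in_perop = False
            continue
        if line.startswith("per-op statistics"):
            in_perop = True
            continue
        if in_perop and ":" in line:
            op, numbers = line.split(":", 1)
            fields = numbers.split()
            if len(fields) >= 3:
                timeouts[op.strip()] = int(fields[2])
    return timeouts
```

Calling this twice with some delay and comparing the counts would flag a likely NFS hang when the numbers grow.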

- -

The only way I have been able to get working on Debian and Red Hat Enterprise Linux for getting the timeout count is to peek in /proc/. But according to the Solaris 10 System Administration Guide: Network Services, the 'nfsstat -c' command can be used to get these timeout values. But this does not work on Linux, as far as I can tell. I asked Debian about this, but have not seen any replies yet.

- -

Is there a better way to figure out if a Linux NFS client is -experiencing NFS hangs? Is there a way to detect which processes are -affected? Is there a way to get the NFS mount going quickly once the -network problem causing the NFS hang has been cleared? I would very -much welcome some clues, as we regularly run into NFS hangs.

+

It can handle magnet links too. Now if only native vlc had bittorrent support. Then a lot more people would be helping each other to share public domain and creative commons movies. The plugin needs some stability work with seeking and picking the right file in a torrent with many files, but is already usable. Please note that the plugin does not remove downloaded files when vlc is stopped, so it can fill up your disk if you are not careful. Have fun. :)
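For those unfamiliar with magnet links: a BitTorrent magnet URI carries the info-hash in the `xt=urn:btih:` field, with an optional `dn` display name. A small sketch of building one to pass to vlc; the hash below is a placeholder, not a real torrent:

```python
import urllib.parse


def magnet_uri(infohash, name=None):
    # infohash: hex-encoded BitTorrent info-hash (40 hex chars for SHA-1).
    uri = "magnet:?xt=urn:btih:" + infohash
    if name:
        uri += "&dn=" + urllib.parse.quote(name)
    return uri


# Placeholder hash for illustration; the result can be passed to vlc,
# e.g.  vlc 'magnet:?xt=urn:btih:...'
print(magnet_uri("0123456789abcdef0123456789abcdef01234567", "Example"))
```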

+ +

I would love to get help maintaining this package. Get in touch if +you are interested.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -358,44 +411,35 @@ much welcome some clues, as we regularly run into NFS hangs.

- -
8th March 2017
-

So the new president in the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the confirmations from Snowden documented, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.

- -

Next, the Federal Bureau of Investigation asked the Department of Justice to go public rejecting the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.

- -

There is even the director of the FBI stating that he never saw an order requesting wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?

- -

What I find most sad in this story is how Norwegian journalists present it. In a news report on the radio the other day from the Norwegian Broadcasting Corporation (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.

- -

Update 2017-03-13: Looks like The Intercept reports that US Senator Rand Paul confirms what I stated above.

+ +
2nd September 2018
+

I continue to explore my Kodi installation, and today I wanted to tell it to play a youtube URL I received in a chat, without having to insert search terms using the on-screen keyboard. After searching the web for API access to the Youtube plugin and testing a bit, I managed to find a recipe that worked. If you have a Kodi instance with its API available from http://kodihost/jsonrpc, you can try the following to check out a nice cover band.

+ +

curl --silent --header 'Content-Type: application/json' \
+  --data-binary '{ "id": 1, "jsonrpc": "2.0", "method": "Player.Open",
+  "params": {"item": { "file":
+  "plugin://plugin.video.youtube/play/?video_id=LuRGVM9O0qg" } } }' \
+  http://projector.local/jsonrpc

+ +

I've extended the kodi-stream program to take a video source as its first argument. It can now handle direct video links, youtube links and 'desktop' to stream my desktop to Kodi. It is almost like a Chromecast. :)
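The same JSON-RPC call can also be made from Python with only the standard library. A sketch based on the curl example above; the 'kodihost' URL is a placeholder for your own Kodi instance:

```python
import json
import urllib.request


def youtube_play_request(video_id):
    # The same Player.Open request body as in the curl example above.
    return {
        "id": 1,
        "jsonrpc": "2.0",
        "method": "Player.Open",
        "params": {"item": {"file":
            "plugin://plugin.video.youtube/play/?video_id=" + video_id}},
    }


def send(payload, url="http://kodihost/jsonrpc"):
    # 'kodihost' is a placeholder; point this at your own Kodi instance.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as response:
        return json.load(response)
```

Calling `send(youtube_play_request("LuRGVM9O0qg"))` should then be equivalent to the curl command above.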

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

- Tags: english, surveillance. + Tags: debian, english, kodi, video.
@@ -403,33 +447,19 @@ Intercept report that US Senator Rand Paul confirm what I state above.

- -
3rd March 2017
-

For almost a year now, we have been working on making a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless effort of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proofreading to ensure consistent language and use of correct computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proofreading must be completed and all the figures need to be translated. If you want to help out, get in touch.

- -

A fresh PDF edition in A4 format (the final book will have smaller pages) of the book, created every morning, is available for proofreading. If you find any errors, please visit Weblate and correct the error. The state of the translation including figures is a useful source for those providing Norwegian Bokmål screen shots and figures.

+ +
30th August 2018
+

It might seem obvious that software created using tax money should
+be available for everyone to use and improve. Free Software
+Foundation Europe recently started a campaign to help get more people
+to understand this, and I just signed the petition on
+Public Money, Public Code to help
+them. I hope you too will do the same.

@@ -437,77 +467,71 @@ provide Norwegian bokmål screen shots and figures.

- -
1st March 2017
-

A few days ago I ordered a small batch of -the ChaosKey, a small -USB dongle for generating entropy created by Bdale Garbee and Keith -Packard. Yesterday it arrived, and I am very happy to report that it -work great! According to its designers, to get it to work out of the -box, you need the Linux kernel version 4.1 or later. I tested on a -Debian Stretch machine (kernel version 4.9), and there it worked just -fine, increasing the available entropy very quickly. I wrote a small -test oneliner to test. It first print the current entropy level, -drain /dev/random, and then print the entropy level for five seconds. -Here is the situation without the ChaosKey inserted:

- -
-% cat /proc/sys/kernel/random/entropy_avail; \
-  dd bs=1M if=/dev/random of=/dev/null count=1; \
-  for n in $(seq 1 5); do \
-     cat /proc/sys/kernel/random/entropy_avail; \
-     sleep 1; \
-  done
-300
-0+1 oppføringer inn
-0+1 oppføringer ut
-28 byte kopiert, 0,000264565 s, 106 kB/s
-4
-8
-12
-17
-21
-%
-
- -

The entropy level increases by 3-4 every second. In such case any -application requiring random bits (like a HTTPS enabled web server) -will halt and wait for more entrpy. And here is the situation with -the ChaosKey inserted:

- -
-% cat /proc/sys/kernel/random/entropy_avail; \
-  dd bs=1M if=/dev/random of=/dev/null count=1; \
-  for n in $(seq 1 5); do \
-     cat /proc/sys/kernel/random/entropy_avail; \
-     sleep 1; \
-  done
-1079
-0+1 oppføringer inn
-0+1 oppføringer ut
-104 byte kopiert, 0,000487647 s, 213 kB/s
-433
-1028
-1031
-1035
-1038
-%
-
- -

Quite the difference. :) I bought a few more than I need, in case -someone want to buy one here in Norway. :)

- -

Update: The dongle was presented at Debconf last year. You might -find the talk -recording illuminating. It explains exactly what the source of -randomness is, if you are unable to spot it from the schema drawing -available from the ChaosKey web site linked at the start of this blog -post.

+ +
13th August 2018
+

A few days ago, I wondered if there are any privacy respecting
+health monitors and/or fitness trackers available for sale these days.
+I would like to buy one, but do not want to share my personal data
+with strangers, nor be forced to have a mobile phone to get data out
+of the unit. I've received some ideas, and would like to share them
+with you.
+
+One interesting data point was a pointer to a Free Software app for
+Android named
+Gadgetbridge.
+It provides cloudless collection and storing of data from a variety of
+trackers. Its
+list
+of supported devices is a good indicator for units where the
+protocol is fairly open, as it is obviously being handled by Free
+Software. Other units are reportedly encrypting the collected
+information with their own public key, making sure only the vendor
+cloud service is able to extract data from the unit. The people
+contacting me about Gadgetbridge said they were using
+Amazfit
+Bip and
+Xiaomi
+Band 3.

+ +

I also got a suggestion to look at some of the units from Garmin. +I was told their GPS watches can be connected via USB and show up as a +USB storage device with +Garmin +FIT files containing the collected measurements. While +proprietary, FIT files apparently can be read at least by +GPSBabel and the +GpxPod Nextcloud +app. It is unclear to me if they can read step count and heart rate +data. The person I talked to was using a +Garmin Forerunner +935, which is a fairly expensive unit. I doubt it is worth it for +a unit where the vendor clearly is trying its best to move from open +to closed systems. I still remember when Garmin dropped NMEA support +in its GPSes.
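If you want to experiment with such a conversion, it could look like the following sketch. I have not verified this against a real watch; it assumes GPSBabel's garmin_fit input format, uses made-up file names, and only runs the conversion when gpsbabel is actually installed.

```python
import shutil
import subprocess

def fit_to_gpx_cmd(fit_path, gpx_path):
    # GPSBabel reads Garmin FIT files with its "garmin_fit" input format
    return ["gpsbabel", "-i", "garmin_fit", "-f", fit_path,
            "-o", "gpx", "-F", gpx_path]

cmd = fit_to_gpx_cmd("activity.fit", "activity.gpx")
if shutil.which("gpsbabel"):  # only run if GPSBabel is installed
    subprocess.run(cmd, check=True)
```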

+ +

A final idea was to build one's own unit, perhaps by basing it on a
+wearable hardware platform like
+the Flora Geo
+Watch. Sounds like fun, but I had more money than time to spend on
+the topic, so I suspect it will have to wait for another time.

+ +

While I was working on tracking down links, I came across an
+inspiring TED talk by Dave Debronkart about
+being an
+e-patient, and discovered the web site
+Participatory
+Medicine. If you too want to track your own health and fitness
+without having information about your private life floating around on
+computers owned by others, I recommend checking it out.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

- Tags: debian, english. + Tags: english.
@@ -515,34 +539,36 @@ post.

- -
21st February 2017
-

I just noticed -the -new Norwegian proposal for archiving rules in the goverment list -ECMA-376 -/ ISO/IEC 29500 (aka OOXML) as valid formats to put in long term -storage. Luckily such files will only be accepted based on -pre-approval from the National Archive. Allowing OOXML files to be -used for long term storage might seem like a good idea as long as we -forget that there are plenty of ways for a "valid" OOXML document to -have content with no defined interpretation in the standard, which -lead to a question and an idea.

- -

Is there any tool to detect if a OOXML document depend on such -undefined behaviour? It would be useful for the National Archive (and -anyone else interested in verifying that a document is well defined) -to have such tool available when considering to approve the use of -OOXML. I'm aware of the -officeotron OOXML -validator, but do not know how complete it is nor if it will -report use of undefined behaviour. Are there other similar tools -available? Please send me an email if you know of any such tool.

+ +
7th August 2018
+

Dear lazyweb,

+ +

I wonder, is there a fitness tracker / health monitor available for
+sale today that respects the user's privacy? With this I mean a
+watch/bracelet capable of measuring pulse rate and other
+fitness/health related values (and by all means, also the correct time
+and location if possible), where the measurements are only provided for
+me to extract/read from the unit with a computer, without a radio beacon
+and Internet connection. In other words, it does not depend on a cell
+phone app, and does not make the measurements available via other people's
+computers (aka "the cloud"). The collected data should be available
+using only free software. I'm not interested in depending on some
+non-free software that will leave me high and dry some time in the
+future. I've been unable to find any such unit. I would like to buy
+it. The ones I have seen for sale here in Norway are proud to report
+that they share my health data with strangers (aka "cloud enabled").
+Is there an alternative? I'm not interested in giving money to people
+requiring me to accept "privacy terms" to allow myself to measure my
+own health.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

- Tags: english, nuug, standard. + Tags: english.
@@ -550,33 +576,75 @@ available? Please send me an email if you know of any such tool.

- -
13th February 2017
-

A few days ago, we received the ruling from -my -day in court. The case in question is a challenge of the seizure -of the DNS domain popcorn-time.no. The ruling simply did not mention -most of our arguments, and seemed to take everything ØKOKRIM said at -face value, ignoring our demonstration and explanations. But it is -hard to tell for sure, as we still have not seen most of the documents -in the case and thus were unprepared and unable to contradict several -of the claims made in court by the opposition. We are considering an -appeal, but it is partly a question of funding, as it is costing us -quite a bit to pay for our lawyer. If you want to help, please -donate to the -NUUG defense fund.

- -

The details of the case, as far as we know it, is available in -Norwegian from -the NUUG -blog. This also include -the -ruling itself.

+ +
31st July 2018
+

For a while now, I have looked for a sensible way to share images
+with my family using a self hosted solution, as it is unacceptable to
+place images from my personal life under the control of strangers
+working for data hoarders like Google or Dropbox. The last few days I
+have drafted an approach that might work out, and I would like to
+share it with you. I would like to publish images on a server under
+my control, and point some Internet connected display units at the
+images I publish using some free and open standard. As my primary
+language is not limited to ASCII, I need to store metadata using
+UTF-8. Many years ago, I hoped to find a digital photo frame capable
+of reading an RSS feed with image references (aka using the
+<enclosure> RSS tag), but was unable to find a current supplier
+of such frames. In the end I gave up that approach.

+ +

Some months ago, I discovered that
+XScreensaver is able to
+read images from an RSS feed, and used it to set up a screen saver on
+my home info screen, showing images from the Daily images feed from
+NASA. This proved to work well. More recently I discovered that
+Kodi (both using
+OpenELEC and
+LibreELEC) provide the
+Feedreader
+screen saver capable of reading an RSS feed with images and news. For
+fun, I used it this summer to test Kodi on my parents' TV by hooking up
+a Raspberry Pi unit with LibreELEC, and wanted to provide them with a
+screen saver showing selected pictures from my collection.

+ +

Armed with motivation and a test photo frame, I set out to generate
+an RSS feed for the Kodi instance. I adjusted my Freedombox instance, created
+/var/www/html/privatepictures/, and wrote a small Perl script to extract
+title and description metadata from the photo files and generate the
+RSS file. I ended up using Perl instead of Python, as the
+libimage-exiftool-perl Debian package seemed to handle the EXIF/XMP
+tags I ended up using, while python3-exif did not. The relevant EXIF
+tags only support ASCII, so I had to find better alternatives. XMP
+seems to have the support I need.

+ +

I am a bit unsure which EXIF/XMP tags to use, as I would like to +use tags that can be easily added/updated using normal free software +photo managing software. I ended up using the tags set using this +exiftool command, as these tags can also be set using digiKam:

+ +
+exiftool -headline='The RSS image title' \
+  -description='The RSS image description.' \
+  -subject+=for-family photo.jpeg
+
+ +

I initially tried the "-title" and "keyword" tags, but they were +invisible in digiKam, so I changed to "-headline" and "-subject". I +use the keyword/subject 'for-family' to flag that the photo should be +shared with my family. Images with this keyword set are located and +copied into my Freedombox for the RSS generating script to find.
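The RSS generating script itself is in Perl, but the core idea can be sketched in a few lines of Python using only the standard library. The base URL, feed title, file name, and image metadata below are made up for the example; in my setup the titles and descriptions come from the values exiftool extracts.

```python
import xml.etree.ElementTree as ET

def make_feed(base_url, images):
    """Build a minimal RSS 2.0 feed with one enclosure per image."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Private pictures"
    ET.SubElement(channel, "link").text = base_url
    for filename, title, description in images:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "description").text = description
        # The <enclosure> tag is what RSS-capable photo frames look for
        ET.SubElement(item, "enclosure",
                      url=base_url + filename, type="image/jpeg")
    return ET.tostring(rss, encoding="unicode")

feed = make_feed("http://freedombox.local/privatepictures/",
                 [("img1.jpeg", "Blåbærtur", "Blueberry picking.")])
```

Using ElementTree with unicode output keeps the UTF-8 titles intact, which was the whole point of moving the metadata to XMP.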

+ +

Are there better ways to do this? Get in touch if you have better +suggestions.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -584,86 +652,105 @@ ruling itself.

- -
3rd February 2017
-

- -

On Wednesday, I spent the entire day in court in Follo Tingrett -representing the member association -NUUG, alongside the member -association EFN and the DNS registrar -IMC, challenging the seizure of the DNS name popcorn-time.no. It -was interesting to sit in a court of law for the first time in my -life. Our team can be seen in the picture above: attorney Ola -Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil -Eriksen and NUUG board member Petter Reinholdtsen.

- -

The -case at hand is that the Norwegian National Authority for -Investigation and Prosecution of Economic and Environmental Crime (aka -Økokrim) decided on their own, to seize a DNS domain early last -year, without following -the -official policy of the Norwegian DNS authority which require a -court decision. The web site in question was a site covering Popcorn -Time. And Popcorn Time is the name of a technology with both legal -and illegal applications. Popcorn Time is a client combining -searching a Bittorrent directory available on the Internet with -downloading/distribute content via Bittorrent and playing the -downloaded content on screen. It can be used illegally if it is used -to distribute content against the will of the right holder, but it can -also be used legally to play a lot of content, for example the -millions of movies -available from the -Internet Archive or the collection -available from Vodo. We created -a -video demonstrating legally use of Popcorn Time and played it in -Court. It can of course be downloaded using Bittorrent.

- -

I did not quite know what to expect from a day in court. The -government held on to their version of the story and we held on to -ours, and I hope the judge is able to make sense of it all. We will -know in two weeks time. Unfortunately I do not have high hopes, as -the Government have the upper hand here with more knowledge about the -case, better training in handling criminal law and in general higher -standing in the courts than fairly unknown DNS registrar and member -associations. It is expensive to be right also in Norway. So far the -case have cost more than NOK 70 000,-. To help fund the case, NUUG -and EFN have asked for donations, and managed to collect around NOK 25 -000,- so far. Given the presentation from the Government, I expect -the government to appeal if the case go our way. And if the case do -not go our way, I hope we have enough funding to appeal.

- -

From the other side came two people from Økokrim. On the benches, -appearing to be part of the group from the government were two people -from the Simonsen Vogt Wiik lawyer office, and three others I am not -quite sure who was. Økokrim had proposed to present two witnesses -from The Motion Picture Association, but this was rejected because -they did not speak Norwegian and it was a bit late to bring in a -translator, but perhaps the two from MPA were present anyway. All -seven appeared to know each other. Good to see the case is take -seriously.

- -

If you, like me, believe the courts should be involved before a DNS -domain is hijacked by the government, or you believe the Popcorn Time -technology have a lot of useful and legal applications, I suggest you -too donate to -the NUUG defense fund. Both Bitcoin and bank transfer are -available. If NUUG get more than we need for the legal action (very -unlikely), the rest will be spend promoting free software, open -standards and unix-like operating systems in Norway, so no matter what -happens the money will be put to good use.

- -

If you want to lean more about the case, I recommend you check out -the blog -posts from NUUG covering the case. They cover the legal arguments -on both sides.

+ +
12th July 2018
+

Last night, I wrote
+a
+recipe to stream a Linux desktop using VLC to an instance of Kodi.
+During the day I received valuable feedback, and thanks to the
+suggestions I have been able to rewrite the recipe into a much simpler
+approach requiring no setup at all. It is a single script that takes
+care of it all.

+ +

This new script uses GStreamer instead of VLC to capture the
+desktop and stream it to Kodi. This fixed the video quality issue I
+saw initially. It further removes the need to add an m3u file on the
+Kodi machine, as it instead connects to
+the JSON-RPC API in
+Kodi and simply asks Kodi to play from the stream created using
+GStreamer. Streaming the desktop to Kodi now becomes trivial. Copy
+the script below, run it with the DNS name or IP address of the Kodi
+server to stream to as the only argument, and watch your screen show
+up on the Kodi screen. Note, it depends on multicast on the local
+network, so if you need to stream outside the local network, the
+script must be modified. Also note, I have no idea if audio works, as
+I only care about the picture part.

+ +
+#!/bin/sh
+#
+# Stream the Linux desktop view to Kodi.  See
+# http://people.skolelinux.org/pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html
+# for backgorund information.
+
+# Make sure the stream is stopped in Kodi and the gstreamer process is
+# killed if something go wrong (for example if curl is unable to find the
+# kodi server).  Do the same when interrupting this script.
+kodicmd() {
+    host="$1"
+    cmd="$2"
+    params="$3"
+    curl --silent --header 'Content-Type: application/json' \
+	 --data-binary "{ \"id\": 1, \"jsonrpc\": \"2.0\", \"method\": \"$cmd\", \"params\": $params }" \
+	 "http://$host/jsonrpc"
+}
+cleanup() {
+    if [ -n "$kodihost" ] ; then
+	# Stop the playing when we end
+	playerid=$(kodicmd "$kodihost" Player.GetActivePlayers "{}" |
+			    jq .result[].playerid)
+	kodicmd "$kodihost" Player.Stop "{ \"playerid\" : $playerid }" > /dev/null
+    fi
+    if [ "$gstpid" ] && kill -0 "$gstpid" >/dev/null 2>&1; then
+	kill "$gstpid"
+    fi
+}
+trap cleanup EXIT INT
+
+if [ -n "$1" ]; then
+    kodihost=$1
+    shift
+else
+    kodihost=kodi.local
+fi
+
+mcast=239.255.0.1
+mcastport=1234
+mcastttl=1
+
+pasrc=$(pactl list | grep -A2 'Source #' | grep 'Name: .*\.monitor$' | \
+  cut -d" " -f2|head -1)
+gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
+  videoconvert ! queue2 ! \
+  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
+  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
+  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
+  udpsink host=$mcast port=$mcastport ttl-mc=$mcastttl auto-multicast=1 sync=0 \
+  pulsesrc device=$pasrc ! audioconvert ! queue2 ! avenc_aac ! queue2 ! mux. \
+  > /dev/null 2>&1 &
+gstpid=$!
+
+# Give stream a second to get going
+sleep 1
+
+# Ask kodi to start streaming using its JSON-RPC API
+kodicmd "$kodihost" Player.Open \
+	"{\"item\": { \"file\": \"udp://@$mcast:$mcastport\" } }" > /dev/null
+
+# wait for gst to end
+wait "$gstpid"
+
+ +

I hope you find the approach useful. I know I do.

+ +

As usual, if you use Bitcoin and want to show your support of my +activities, please send Bitcoin donations to my address +15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

@@ -678,6 +765,29 @@ on both sides.

Archive