These days, with a deadline of May 1st, the National Archivist of Norway (Riksarkivaren) has a regulation out for public consultation. As one can see, there is not much time left before the deadline, which expires on Sunday. This regulation is what lists the formats that are acceptable for archiving in Noark 5 solutions in Norway.

I found the consultation documents at Norsk Arkivråd after being tipped off on the mailing list of the free software project Nikita Noark5-Core, which is creating a Noark 5 Tjenestegrensesnitt (web API). I am involved in the Nikita project, and thanks to my interest in the API project I have read quite a few Noark 5 related documents, and discovered to my surprise that standard email is not on the list of approved formats that can be archived. The consultation with its Sunday deadline is an excellent opportunity to try to do something about that. I am working on my own consultation response, and wonder if others are interested in supporting the proposal to allow archiving email as email in the archive.

Are you already writing your own consultation response? If so, you could consider including a wording about email storage. I do not think much is needed. Here is a short text proposal:

  We refer to the consultation sent out 2017-02-17 (the National
  Archivist's reference 2016/9840 HELHJO), and take the liberty of
  submitting some input on the revision of the Regulation on
  supplementary technical and archival provisions on the handling of
  public archives (the National Archivist's regulation).

  A great deal of our communication today takes place via email. We
  therefore propose that Internet email, as described in IETF RFC
  5322, https://tools.ietf.org/html/rfc5322, should be added as an
  approved document format. We propose that the regulation's overview
  of approved document formats at deposit in § 5-16 be changed to
  include Internet email.

As part of the work on the API, we have tested how email can be stored in a Noark 5 structure, and are writing a proposal for how this can be done, which will be sent to the National Archives as soon as it is finished. Those interested can follow the progress on the web.
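As a small illustration of why "email as email" is archive-friendly, the sketch below uses Python's standard email module to parse an RFC 5322 message and pull out the header fields an archive record might keep alongside the raw bytes. The choice of fields is my own assumption for the example, not something taken from the Noark 5 specification.

```python
import email
from email import policy

# A minimal RFC 5322 message, as it might arrive for archiving.
raw = b"""From: saksbehandler@example.org
To: arkiv@example.org
Subject: Test message
Date: Mon, 24 Apr 2017 10:00:00 +0200
Message-ID: <20170424100000.1234@example.org>

This is the message body.
"""

msg = email.message_from_bytes(raw, policy=policy.default)

# Header fields an archive record might store alongside the raw message.
record = {
    "from": str(msg["From"]),
    "subject": str(msg["Subject"]),
    "date": str(msg["Date"]),
    "message_id": str(msg["Message-ID"]),
}
print(record["subject"])  # → Test message
```

The point is that the metadata an archive needs is already present in a well-specified, machine-readable form inside the message itself.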
So, Cambridge Analytica is getting some well deserved criticism for (mis)using information it got from Facebook about 50 million people, mostly in the USA. What I find a bit surprising is how little criticism Facebook is getting for handing the information over to Cambridge Analytica and others in the first place. And what about the people handing their private and personal information to Facebook? And last, but not least, what about the government offices handing information about the visitors of their web pages to Facebook? No-one who has read the terms of use of Facebook should be surprised that information about people's interests, political views, personal lives and whereabouts would be sold by Facebook.

What I find to be the real scandal is the fact that Facebook is selling your personal information, not that one of the buyers used it in a way Facebook did not approve of once exposed. That Facebook is selling out its users' privacy is well known, but it is a scandal nevertheless. Of course some of the parties given access to personal information about the millions of Facebook users would misuse it. Collected information will be misused sooner or later. The only way to avoid such misuse is to not collect the information in the first place. If you do not want Facebook to hand out information about yourself for the use and misuse of its customers, do not give Facebook the information.

Personally, I would recommend completely removing your Facebook account, to take back some control of your personal information. According to The Guardian, it is a bit hard to find out how to request account removal (and not just 'disabling'). You need to visit a specific Facebook page and click on 'let us know' on that page to get to the real account deletion screen. Perhaps something to consider? I would not trust the information to really be deleted (who knows, perhaps NSA, GCHQ and FRA already got a copy), but it might reduce the exposure a bit.

If you want to learn more about the capabilities of Cambridge Analytica, I recommend the video recording of the one hour talk Paul-Olivier Dehaye gave to NUUG last April about Data collection, psychometric profiling and their impact on politics.

And if you want to communicate with your friends and loved ones, use some end-to-end encrypted method like Signal or Ring, and stop sharing your private messages with strangers like Facebook and Google.
I discovered today that the web site publishing public mail journals from Norwegian government agencies, OEP, has started blocking some types of web clients from getting access. I do not know how many are affected, but it applies at least to libwww-perl and curl. To test it yourself, run the following:

  % curl -v -s https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
  < HTTP/1.1 404 Not Found
  % curl -v -s --header 'User-Agent:Opera/12.0' https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
  < HTTP/1.1 200 OK
  %

Here one can see that the service returns «404 Not Found» for curl with its default settings, while it returns «200 OK» if curl claims to be Opera version 12.0. The public electronic mail journal (OEP) started the blocking 2017-03-02.

The blocking makes it a bit harder to collect information from oep.no automatically. Could the blocking have been put in place to prevent automated collection of information from OEP, like the one Pressens Offentlighetsutvalg did to document how the ministries obstruct access to information for the report «Slik hindrer departementer innsyn», published in January 2017? It seems unlikely, as it is trivial to change the User-Agent to something new.

Is there any legal basis for the government to discriminate between web clients the way it is done here, where access is granted or denied depending on what the client claims its own name is? As OEP is owned by DIFI and operated by Basefarm, perhaps there are documents exchanged between these two parties one could request access to, to understand what has happened. But the public mail journal of DIFI only shows two documents between DIFI and Basefarm during the last year. Mimes brønn next, I guess.
Yesterday brought yet another argument for staying away from the Norwegian health system. A majority in the Norwegian Parliament, consisting of Høyre, Arbeiderpartiet, Fremskrittspartiet and Venstre, announced that it wants to collect and store DNA samples from the entire population of Norway forever. The change applies to the blood samples collected from newborns in Norway. It will thus take some time before the entire population is covered, but that is where we end up, given enough time. Today there is almost one hundred percent participation in the screening done shortly after birth, based on the blood sample in question, to detect certain inborn diseases. Today the blood sample is stored for up to six years. The recommendation from the parliamentary majority is to remove this time limit, claiming that indefinite storage will not affect participation in the screening.

Datatilsynet, the Norwegian Data Protection Authority, has not exactly applauded the proposal:

  «The Data Protection Authority believes the proposal does not
  sufficiently make visible the ethical and privacy challenges that
  must be discussed before establishing a national biobank with blood
  samples from the entire population.»

There are several stories of collected biological material being used for purposes other than those it was collected for, and the story of the Norwegian Institute of Public Health storing collected biological material and DNA information on behalf of the police (Kripos) in violation of the law shows that one cannot trust laws and intentions to protect those affected against misuse of such private and personal information.

It is worth noting that the collected blood samples can be used for research without consent from the person in question (or the parents, in the case of children), after a law change some time back, unless a form has been submitted opting out of research without consent. The form is available from the web pages of the Institute of Public Health, and regardless of this case I warmly recommend everyone to submit the form, to document how many people do not find it acceptable to drop the requirement for consent.

In addition, one should demand the destruction of all biological material collected about oneself, to reduce possible negative consequences in the future when the material goes astray or is used without consent, but as far as I know there is no system for this today.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, personally my first use case is to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.

If you would like to help make sure there is a free software alternative for the archives, please join our IRC channel (#nikita on irc.freenode.net) and the project mailing list.

When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment, and finds existing fonds, series and files while asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:

  ~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
  using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
  using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446

   0 - Title of the test case file created 2017-03-18T23:49:32.103446
   1 - Title of the test file created 2017-03-18T23:49:32.103446
  Select which mappe you want (or search term): 0
  Uploading mangelmelding/mangler.pdf
    PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
    File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
  ~/src//noark5-tester$

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needed to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.
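The traversal idea can be sketched like this: start from the service root, follow every relation link, and record which resources are exposed. This is a simplified sketch, not the actual API tester; the `_links`/`rel`/`href` field names and the relation names are stand-ins for whatever the specification mandates, and the "service" here is a static dictionary instead of a live HTTP endpoint.

```python
# Minimal HATEOAS-style traversal over a fake, in-memory service.
# Each "resource" lists links to other resources by relation name.
fake_service = {
    "/": {"_links": [
        {"rel": "arkivstruktur", "href": "/arkivstruktur/"},
    ]},
    "/arkivstruktur/": {"_links": [
        {"rel": "arkiv", "href": "/arkiv/"},
    ]},
    "/arkiv/": {"_links": []},
}

def traverse(service, start="/"):
    """Visit every reachable href exactly once and return the set."""
    seen = set()
    stack = [start]
    while stack:
        href = stack.pop()
        if href in seen or href not in service:
            continue
        seen.add(href)
        for link in service[href].get("_links", []):
            stack.append(link["href"])
    return seen

visited = traverse(fake_service)
```

Against a real service the dictionary lookup would be an HTTP GET, and each visited resource would be checked against the list of operations and objects the specification promises.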
The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in lack of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those who want to test the current code base. The API tester is implemented in Python.
I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and stories from people around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the book Made with Creative Commons from 2017 was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books into Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first proof reading is almost done, and only the second and third proof readings remain. We will also need to translate the 14 figures and create a book cover. Once that is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded as ODT from there and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for the translators. The translated PO file is then combined with the earlier mentioned DocBook file to create a translated DocBook file, which finally is given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one English and one for each of the translations.
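The chain of conversions can be summarised as a pipeline. The sketch below only assembles the steps as data (a dry run) so the flow is explicit; the file names, script names and the PO extraction/merge tools (xml2po/po2xml) are illustrative stand-ins, as the post does not name the exact scripts used.

```python
# Dry-run sketch of the book build pipeline described above.
# Each step is (description, command); names are illustrative only.
pipeline = [
    ("ODT to Markdown", ["pandoc", "-f", "odt", "-t", "markdown",
                         "manuscript.odt", "-o", "manuscript.md"]),
    ("fix up Markdown", ["./fixup-markdown.sh", "manuscript.md"]),
    ("Markdown to DocBook", ["pandoc", "-f", "markdown", "-t", "docbook",
                             "manuscript.md", "-o", "manuscript.xml"]),
    ("fix up DocBook", ["./fixup-docbook.sh", "manuscript.xml"]),
    ("extract POT for translators", ["xml2po", "-o", "book.pot",
                                     "manuscript.xml"]),
    ("merge translated PO into DocBook", ["po2xml", "manuscript.xml",
                                          "nb.po"]),
    ("DocBook to PDF", ["dblatex", "manuscript-nb.xml"]),
]

for description, command in pipeline:
    print(f"{description}: {' '.join(command)}")
```

Running each command in order (with the real file names) reproduces the flow from manuscript to translated PDF.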
The translation is conducted using the Weblate web based translation system. Please have a look there and get in touch if you would like to help out with proof reading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Over the years, administrating thousands of NFS-mounting Linux computers at a time, I often needed a way to detect if a machine was experiencing an NFS hang. If you try to use df or look at a file or directory affected by the hang, the process (and possibly the shell) will hang too. So you want to be able to detect this without risking the detection process getting stuck too. It has not been obvious how to do this. When the hang has lasted a while, it is possible to find messages like these in dmesg:

  nfs: server nfsserver not responding, still trying
  nfs: server nfsserver OK

It is hard to know if the hang is still going on, and it is hard to be sure looking in dmesg is going to work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed.
While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect it. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux.

The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount points for the machine. I assume this will not show lazily umounted NFS points, nor NFS mount points in a different process context (ie with a different file system view), but that does not worry me.
The content for an NFS mount point looks similar to this:

  [...]
  device /dev/mapper/Debian-var mounted on /var with fstype ext3
  device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
   opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
   age: 7863311
   caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
   sec: flavor=1,pseudoflavor=1
   events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
   bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
   RPC iostats version: 1.0  p/v: 100003/3 (nfs)
   xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
   per-op statistics
           NULL: 0 0 0 0 0 0 0 0
        GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
        SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
         LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
         ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
       READLINK: 125 125 0 20472 18620 0 1112 1118
           READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
          WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
         CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
          MKDIR: 3680 3680 0 773980 993920 26 23990 24245
        SYMLINK: 903 903 0 233428 245488 6 5865 5917
          MKNOD: 80 80 0 20148 21760 0 299 304
         REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
          RMDIR: 3367 3367 0 645112 484848 22 5782 6002
         RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
           LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
        READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
    READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
         FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
         FSINFO: 2 2 0 232 328 0 1 1
       PATHCONF: 1 1 0 116 140 0 0 0
         COMMIT: 0 0 0 0 0 0 0 0

  device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
  [...]
The key number to look at is the third number in the per-op list. It is the number of NFS timeouts experienced per file system operation: here 22 write timeouts and 5 access timeouts. If these numbers are increasing, I believe the machine is experiencing an NFS hang. Unfortunately the timeout value does not start to increase right away. The NFS operations need to time out first, and this can take a while. The exact timeout value depends on the setup. For example the defaults for TCP and UDP mount points are quite different, and the timeout value is affected by the soft, hard, timeo and retrans NFS mount options.
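A small parser makes the check concrete. The sketch below reads the per-op table of a mountstats dump and reports the third column (the timeout count) for each operation, so repeated runs can be compared to see whether the counters are increasing. This is my own sketch, not a hardened tool: it parses a string here, and pointing it at /proc/self/mountstats on a real system (and handling multiple NFS mount points) is left to the reader.

```python
def op_timeouts(mountstats_text):
    """Map NFS operation name -> timeout count, i.e. the third number
    in the per-op statistics table of /proc/self/mountstats."""
    timeouts = {}
    in_per_op = False
    for line in mountstats_text.splitlines():
        line = line.strip()
        if line == "per-op statistics":
            in_per_op = True
            continue
        if in_per_op:
            if ":" not in line:  # end of the per-op table
                break
            op, numbers = line.split(":", 1)
            fields = numbers.split()
            if len(fields) >= 3:
                timeouts[op] = int(fields[2])
    return timeouts

# Shortened sample in the format shown above.
sample = """\
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
 per-op statistics
         WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
        ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
"""
print(op_timeouts(sample))  # → {'WRITE': 22, 'ACCESS': 5}
```

Running it twice a few minutes apart and comparing the dictionaries would show whether any operation is accumulating timeouts.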
The only way I have found to get the timeout count working on Debian and Red Hat Enterprise Linux is to peek in /proc/.
Is there a better way to figure out if a Linux NFS client is experiencing NFS hangs? Is there a way to detect which processes are affected? Is there a way to get the NFS mount going quickly once the network problem causing the NFS hang has been cleared? I would very much welcome some clues, as we regularly run into NFS hangs.
Today I was pleasantly surprised to discover my operating system of choice, Debian, in use on the info screens of the subway stations. While passing Nydalen subway station in Oslo, Norway, I discovered an info screen booting with some text scrolling. I was not quick enough with my camera to record a video of the scrolling boot screen, but I did get a photo from when the boot got stuck with a corrupt file system.

While I am happy to see Debian used in more places, some details of the content on the screen worry me.

The image shows that the version booting is 'Debian GNU/Linux lenny/sid', indicating that this is based on code taken from Debian Unstable/Sid after Debian Etch (version 4) was released 2007-04-08 and before Debian Lenny (version 5) was released 2009-02-14. Since Lenny, Debian has released version 6 (Squeeze) 2011-02-06, 7 (Wheezy) 2013-05-04, 8 (Jessie) 2015-04-25 and 9 (Stretch) 2017-06-15, according to the Debian version history on Wikipedia. This means the system is running around 10 year old code, with no security fixes from the vendor for many years.

This is not the first time I have discovered the Oslo subway company, Ruter, running outdated software. In 2012, I discovered the ticket vending machines were running Windows 2000, and this was still the case in 2016. Given the response from the responsible people in 2016, I would assume the machines are still running unpatched Windows 2000. Thus, an unpatched Debian setup comes as no surprise.

The photo is made available under the license terms Creative Commons 4.0 Attribution International (CC BY 4.0).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
So the new president in the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the documents from Snowden confirmed, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.

Next, the Federal Bureau of Investigation asked the Department of Justice to publicly reject the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.

There is even the director of the FBI stating that he never saw an order requesting wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?

What I find most sad in this story is how Norwegian journalists present it. In a news report on the radio the other day from the Norwegian Broadcasting Corporation (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.

Update 2017-03-13: It looks like The Intercept reports that US Senator Rand Paul confirms what I state above.
Surprising as it might sound, there are still computers using the traditional Sys V init system, and there probably will be until systemd starts working on Hurd and FreeBSD. The upstream project still exists, though, and up until today the upstream source was available from Savannah via Subversion. I am happy to report that this just changed.

The upstream source is now in Git, split across three repositories.

I do not really spend much time on the project these days, and have mostly retired, but found it best to migrate the source to a good version control system to help those willing to move it forward.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
For almost a year now, we have been working on making a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless effort of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proof reading to ensure consistent language and use of correct computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proof reading must be completed and all the figures need to be translated. If you want to help out, get in touch.

A fresh PDF edition of the book in A4 format (the final book will have smaller pages) is created every morning and is available for proofreading. If you find any errors, please visit Weblate and correct them. The state of the translation, including the figures, is a useful source for those providing Norwegian Bokmål screen shots and figures.
A few days ago, a new major version of VLC was announced, and I decided to check whether it now supports streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them can not talk to a client supporting the other. I was a bit surprised by what I discovered when I started to look. Looking at the release notes did not help answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing the news from Torrentfreak ("Open Source Giant VLC Mulls BitTorrent Streaming Support"), about an initiative to pay someone to create a VLC patch for bittorrent support. To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were some bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch, but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin to add bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:

  vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent

The plugin is supposed to handle magnet links too, but since The Internet Archive does not have magnet links and I did not want to spend time tracking down another source, I have not tested it. It can take quite a while before the video starts playing, without any indication from VLC of what is going on. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line instead. I have no idea why.

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and hope upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, to get it to work out of the box you need Linux kernel version 4.1 or later. I tested it on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small oneliner to test it. It first prints the current entropy level, drains /dev/random, and then prints the entropy level every second for five seconds. Here is the situation without the ChaosKey inserted:

  % cat /proc/sys/kernel/random/entropy_avail; \
    dd bs=1M if=/dev/random of=/dev/null count=1; \
    for n in $(seq 1 5); do \
      cat /proc/sys/kernel/random/entropy_avail; \
      sleep 1; \
    done
  300
  0+1 oppføringer inn
  0+1 oppføringer ut
  28 byte kopiert, 0,000264565 s, 106 kB/s
  4
  8
  12
  17
  21
  %

The entropy level increases by 3-4 every second. In such a case, any application requiring random bits (like a HTTPS enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:

  % cat /proc/sys/kernel/random/entropy_avail; \
    dd bs=1M if=/dev/random of=/dev/null count=1; \
    for n in $(seq 1 5); do \
      cat /proc/sys/kernel/random/entropy_avail; \
      sleep 1; \
    done
  1079
  0+1 oppføringer inn
  0+1 oppføringer ut
  104 byte kopiert, 0,000487647 s, 213 kB/s
  433
  1028
  1031
  1035
  1038
  %

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)

Update: The dongle was presented at Debconf last year. You might find the talk recording illuminating. It explains exactly what the source of randomness is, if you are unable to spot it from the schematic available from the ChaosKey web site linked at the start of this blog post.
A new version of the 3D printer slicer software Cura, version 3.1.0, is now available in Debian Testing (aka Buster) and Debian Unstable (aka Sid). I hope you find it useful. It was uploaded during the last few days, and the latest update will enter testing tomorrow. See the release notes for the list of bug fixes and new features. Version 3.2 was announced 6 days ago. We will try to get it into Debian as well.

More information related to 3D printing is available on the 3D printing and 3D printer wiki pages in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
I just noticed that the new Norwegian proposal for archiving rules in the government lists ECMA-376 / ISO/IEC 29500 (aka OOXML) as a valid format to put in long term storage. Luckily such files will only be accepted based on pre-approval from the National Archive. Allowing OOXML files for long term storage might seem like a good idea, as long as we forget that there are plenty of ways for a "valid" OOXML document to contain content with no defined interpretation in the standard, which leads to a question and an idea.

Is there any tool to detect whether an OOXML document depends on such undefined behaviour? It would be useful for the National Archive (and anyone else interested in verifying that a document is well defined) to have such a tool available when considering whether to approve the use of OOXML. I'm aware of the officeotron OOXML validator, but do not know how complete it is, nor whether it will report use of undefined behaviour. Are there other similar tools available? Please send me an email if you know of any.
I am fascinated by an article in Dagbladet about China's handling of Xinjiang, in particular the following excerpt:
«In the southwestern city of Kashgar, closer to the border with Central Asia, it is now reported that 120,000 Uighurs are interned in so-called re-education camps. At the same time, a comprehensive health check program has been introduced, with collection and storage of DNA samples from absolutely all inhabitants. The most advanced surveillance methods are being tested out here. Programs for recognizing faces and voices are in place in the region. There, the local authorities have started installing GPS systems in all vehicles and dedicated tracking apps on mobile phones.
The police methods reach so deeply into people's everyday lives that resistance against the Beijing regime is growing.»
Sadly, the description does not differ all that much from the state of affairs here in Norway.
Data collection | China | Norway
---|---|---
Collection and storage of DNA samples from the population | Yes | Partially, planned for all newborns
Face recognition | Yes | Yes
Voice recognition | Yes | No
Location tracking of mobile phones | Yes | Yes
Location tracking of cars | Yes | Yes
In Norway, the situation around Folkehelseinstituttet's storage of DNA information on behalf of the police, where they refused to delete information the police were not allowed to keep, has made it clear that DNA is kept for quite a long time. In addition, there are countless biobanks stored in perpetuity, and there are plans to introduce eternal storage of DNA material from all newborn babies (with the option of requesting deletion).
In Norway, systems for face recognition are in place; an NRK article from 2015 reports that one is active at Gardermoen, and that face recognition is used to analyze images collected by the authorities. Is it used in more places? Central Oslo, for example, is thick with surveillance cameras controlled by the police and other authorities.
I am not aware of Norway having any system for identifying people by means of voice recognition.
Location tracking of mobile phones is routinely available to, among others, the police, NAV and Finanstilsynet, in line with the requirements in the telephone companies' licenses. In addition, smartphones report their position to the developers of countless mobile apps, from whom governments and others can extract the information when needed. There is no need for a dedicated app for this.
Location tracking of cars is routinely available via a dense net of measuring points along the roads (automatic toll stations, congestion tag registration, automatic speed cameras and other road cameras). It has in addition been decided that all new cars must be sold with equipment for GPS tracking (eCall).
It is a good thing we live in a liberal democracy, and not a surveillance state. Or do we?
A few days ago, we received the ruling from my day in court. The case in question is a challenge of the seizure of the DNS domain popcorn-time.no. The ruling simply did not mention most of our arguments, and seemed to take everything ØKOKRIM said at face value, ignoring our demonstration and explanations. But it is hard to tell for sure, as we still have not seen most of the documents in the case and thus were unprepared and unable to contradict several of the claims made in court by the opposition. We are considering an appeal, but it is partly a question of funding, as it is costing us quite a bit to pay for our lawyer. If you want to help, please donate to the NUUG defense fund.
The details of the case, as far as we know them, are available in Norwegian from the NUUG blog. This also includes the ruling itself.
The year is 2018, and it is 30 years since Unicode was introduced. Most of us in Norway have come to expect the use of our alphabet to just work with any computer system. But it is apparently beyond the reach of the computers printing receipts at a restaurant. Recently I visited a Peppes Pizza restaurant, and noticed a few details on the receipt. Notice how 'ø' and 'å' are replaced with strange symbols in 'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi gleder oss til å se deg igjen'.
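This kind of damage is classic mojibake: UTF-8 bytes fed to printer firmware that interprets them with a legacy single-byte code page. A minimal sketch of the effect, assuming the printer uses IBM code page 865 (a Nordic code page common on receipt printers; which code page this particular printer uses is a guess):

```python
def mojibake(text, printer_codepage="cp865"):
    """Show what UTF-8 encoded text looks like when a receipt printer
    decodes the bytes with a legacy single-byte code page instead."""
    return text.encode("utf-8").decode(printer_codepage)

# Each two-byte UTF-8 sequence for 'ø' or 'å' comes out as two
# unrelated characters from the legacy code page.
print(mojibake("Servitør"))
```

The fix is for the point-of-sale software to transcode its output to the code page the printer expects, or to switch the printer to a UTF-8 mode if it has one.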
I would say that this state of affairs is past sad and well into embarrassing.
I removed personal and private information to be nice.
On Wednesday, I spent the entire day in court in Follo Tingrett representing the member association NUUG, alongside the member association EFN and the DNS registrar IMC, challenging the seizure of the DNS name popcorn-time.no. It was interesting to sit in a court of law for the first time in my life. Our team can be seen in the picture above: attorney Ola Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil Eriksen and NUUG board member Petter Reinholdtsen.
The case at hand is that the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime (aka Økokrim) decided on their own to seize a DNS domain early last year, without following the official policy of the Norwegian DNS authority, which requires a court decision. The web site in question was a site covering Popcorn Time. And Popcorn Time is the name of a technology with both legal and illegal applications. Popcorn Time is a client combining search in a Bittorrent directory available on the Internet with downloading/distributing content via Bittorrent and playing the downloaded content on screen. It can be used illegally if it is used to distribute content against the will of the rights holder, but it can also be used legally to play a lot of content, for example the millions of movies available from the Internet Archive or the collection available from Vodo. We created a video demonstrating legal use of Popcorn Time and played it in court. It can of course be downloaded using Bittorrent.
I did not quite know what to expect from a day in court. The government held on to their version of the story and we held on to ours, and I hope the judge is able to make sense of it all. We will know in two weeks' time. Unfortunately I do not have high hopes, as the government has the upper hand here, with more knowledge about the case, better training in handling criminal law and in general higher standing in the courts than a fairly unknown DNS registrar and member associations. It is expensive to be right, also in Norway. So far the case has cost more than NOK 70 000. To help fund the case, NUUG and EFN have asked for donations, and managed to collect around NOK 25 000 so far. Given the presentation from the government, I expect the government to appeal if the case goes our way. And if the case does not go our way, I hope we have enough funding to appeal.
From the other side came two people from Økokrim. On the benches, appearing to be part of the group from the government, were two people from the Simonsen Vogt Wiik lawyer office, and three others I am not quite sure who were. Økokrim had proposed to present two witnesses from The Motion Picture Association, but this was rejected because they did not speak Norwegian and it was a bit late to bring in a translator, but perhaps the two from MPA were present anyway. All seven appeared to know each other. Good to see the case is taken seriously.
If you, like me, believe the courts should be involved before a DNS domain is hijacked by the government, or you believe the Popcorn Time technology has a lot of useful and legal applications, I suggest you too donate to the NUUG defense fund. Both Bitcoin and bank transfer are available. If NUUG gets more than we need for the legal action (very unlikely), the rest will be spent promoting free software, open standards and unix-like operating systems in Norway, so no matter what happens the money will be put to good use.
If you want to learn more about the case, I recommend you check out the blog posts from NUUG covering the case. They cover the legal arguments on both sides.
I've continued to track down lists of movies that are legal to distribute on the Internet, and have identified more than 11,000 title IDs in The Internet Movie Database (IMDB) so far. Most of them (57%) are feature films from the USA published before 1923. I've also tracked down more than 24,000 movies I have not yet been able to map to an IMDB title ID, so the real number could be a lot higher. According to the front web page for Retro Film Vault, there are 44,000 public domain films, so I guess there are still some left to identify.
The complete data set is available from a public git repository, including the scripts used to create it. Most of the data is collected using web scraping, for example from the "product catalog" of companies selling copies of public domain movies, but any source I find believable is used. I've so far had to throw out three sources because I did not trust the public domain status of the movies listed.
Anyway, this is the summary of the 28 collected data sources so far:
 2352 entries ( 66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
 2302 entries ( 120 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
  195 entries ( 63 unique) with and 200 without IMDB title ID in free-movies-cinemovies.json
   89 entries ( 52 unique) with and 38 without IMDB title ID in free-movies-creative-commons.json
  344 entries ( 28 unique) with and 655 without IMDB title ID in free-movies-fesfilm.json
  668 entries ( 209 unique) with and 1064 without IMDB title ID in free-movies-filmchest-com.json
  830 entries ( 21 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries ( 19 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 6822 entries ( 6669 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-us.json
  137 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-imdb-externlist.json
 1205 entries ( 57 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
   84 entries ( 20 unique) with and 167 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries ( 4 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries ( 100 unique) with and 0 without IMDB title ID in free-movies-letterboxd-silent.json
  229 entries ( 87 unique) with and 1 without IMDB title ID in free-movies-manual.json
   44 entries ( 2 unique) with and 64 without IMDB title ID in free-movies-openflix.json
  291 entries ( 33 unique) with and 474 without IMDB title ID in free-movies-profilms-pd.json
  211 entries ( 7 unique) with and 0 without IMDB title ID in free-movies-publicdomainmovies-info.json
 1232 entries ( 57 unique) with and 1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
   46 entries ( 13 unique) with and 81 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries ( 64 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
 1758 entries ( 882 unique) with and 3786 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-thehillproductions.json
   63 entries ( 16 unique) with and 141 without IMDB title ID in free-movies-vodo.json
11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID
I keep finding more data sources. I found the cinemovies source just a few days ago, and as you can see from the summary, it extended my list with 63 movies. Check out the mklist-* scripts in the git repository if you are curious how the lists are created. Many of the titles are extracted using searches on IMDB, where I look for the title and year, and accept search results with only one movie listed if the year matches. This allows me to automatically use many lists of movies without IMDB title ID references, at the cost of increasing the risk of wrongly identifying an IMDB title ID as public domain. So far my random manual checks have indicated that the method is solid, but I really wish all lists of public domain movies would include a unique movie identifier like the IMDB title ID. It would make the job of counting movies in the public domain a lot easier.
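The matching rule described above (accept a title/year search hit only when it is unambiguous) can be sketched as a small function. This is an illustration of the rule as described, not the actual code from the git repository, and the search-result format is an assumption:

```python
def pick_imdb_id(search_results, wanted_year):
    """Return an IMDB title ID only when the search is unambiguous:
    exactly one movie in the result list, and its year matches.

    search_results: list of dicts like {"id": "tt0012345", "year": 1921}
    (a hypothetical format; real IMDB search results look different).
    """
    if len(search_results) != 1:
        return None  # zero or several hits: leave for manual review
    hit = search_results[0]
    if hit.get("year") != wanted_year:
        return None  # single hit, but the year disagrees
    return hit["id"]
```

Requiring both a unique hit and a matching year keeps the false-positive rate low, at the cost of leaving ambiguous titles unmapped.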