I still occasionally need to download recordings from NRK's web site to watch later when I am offline, but my recipe from 2011 stopped working when NRK switched player technology. Today I finally got around to looking for an updated solution, and I am very happy to report that the easiest way to download a recording is to use the latest version, 2014.06.07, of youtube-dl. The support in youtube-dl landed 23 days ago, and the version in Debian works fine, also as a backport to Debian Wheezy. There is one small problem: it only handles URLs with lower case letters, but if you have a URL with upper case letters you can simply convert them all to lower case to get youtube-dl to download. I just reported the problem to the developers, and expect they will fix it soon.
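The lower case workaround can be done with tr; a small sketch (the URL is a hypothetical example, not a known working one):

```shell
# youtube-dl 2014.06.07 only accepts NRK URLs in lower case, so convert
# any upper case letters before downloading.
url='http://tv.NRK.no/serie/Brennpunkt'   # hypothetical example URL
lcurl=$(printf '%s' "$url" | tr '[:upper:]' '[:lower:]')
echo "$lcurl"
# youtube-dl "$lcurl"    # uncomment to actually download
```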
With that in place, everything is ready to download the documentaries on the USA's secret wiretapping and the companies behind the USA's wiretapping, as well as the interview with Edward Snowden done by the German TV channel ARD. I recommend everyone to watch these, together with the talk by Jacob Appelbaum at the latest CCC conference, to understand more about how the surveillance of citizens keeps spreading.

Thanks to good friends on the NUUG association's IRC channel #nuug on irc.freenode.net for the tips that got me there.
On Friday, I came across an interesting article in the Norwegian web based ICT news magazine digi.no on how to collect the IMSI numbers of nearby cell phones using cheap DVB-T software defined radios. The article referred to instructions and a recipe by Keld Norman on Youtube on how to make a simple $7 IMSI catcher, and I decided to test them out.
The instructions said to use Ubuntu, install pip using apt (to bypass apt), use pip to install pybombs (to bypass both apt and pip), and then ask pybombs to fetch and build everything you need from scratch. I wanted to see if I could do the same using the most recent Debian packages, but this did not work, because pybombs tried to build stuff that no longer builds with the most recent OpenSSL library, or some other version skew problem. While trying to get the recipe working, I learned that the apt->pip->pybombs route was a long detour, and that the only software dependency missing in Debian was the gr-gsm package. I also found out that the lead upstream developer of the gr-gsm project (the name stands for GNU Radio GSM) already provided a set of Debian packages in an Ubuntu PPA repository. All I needed to do was to dget the Debian source package and build it.
The IMSI collector is a Python script listening for packets on the loopback network device and printing to the terminal certain GSM packets containing IMSI numbers. The code is fairly short and easy to understand. The reason this works is that gr-gsm includes a tool to read GSM data from a software defined radio like a DVB-T USB stick, decode it and inject it into a network device on your Linux machine (using the loopback device by default). This proved to work just fine, and I've been testing the collector for a few days now.
The updated and simpler recipe is thus to:

- start with a Debian machine running Stretch or newer,
- build and install the gr-gsm package available from http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/,
- clone the git repository from https://github.com/Oros42/IMSI-catcher,
- run grgsm_livemon and adjust the frequency until the terminal where it was started is filled with a stream of text (meaning you found a GSM station),
- go into the IMSI-catcher directory and run 'sudo python simple_IMSI-catcher.py' to extract the IMSI numbers.
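A sketch of the same steps as shell commands (the exact .dsc file name must be picked from the PPA directory listing, so the `<version>` placeholder below has to be filled in by hand, and build dependencies may need installing too):

```
sudo apt install build-essential devscripts git
# pick the newest .dsc file from the PPA pool directory mentioned above
dget http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/gr-gsm_<version>.dsc
cd gr-gsm-*/ && dpkg-buildpackage -us -uc
sudo dpkg -i ../gr-gsm_*.deb
git clone https://github.com/Oros42/IMSI-catcher
grgsm_livemon    # adjust the frequency until GSM frames scroll by
cd IMSI-catcher && sudo python simple_IMSI-catcher.py
```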
To make it even easier in the future to get this sniffer up and running, I decided to package the gr-gsm project for Debian (WNPP #871055), and the package was uploaded into the NEW queue today. Luckily the gnuradio maintainer has promised to help me, as I do not know much about gnuradio stuff yet.
I doubt this "IMSI catcher" is anywhere near as powerful as commercial tools like The Spy Phone Portable IMSI / IMEI Catcher or the Harris Stingray, but I hope the existence of cheap alternatives can make more people realise how easily their whereabouts are tracked when carrying a cell phone. Seeing the data flow on the screen, realizing that I live close to a police station and knowing that police officers also carry cell phones, I wonder how hard it would be for criminals to track the position of the police officers to discover when there are police nearby, or for foreign military forces to track the location of the Norwegian military forces, or for anyone to track the location of government officials...
It is worth noting that the data reported by the IMSI-catcher script mentioned above is only a fraction of the data broadcast on the GSM network. It will only collect one frequency at a time, while a typical phone will be using several frequencies, and not all phones will be using the frequencies tracked by the grgsm_livemon program. Also, there is a lot of radio chatter being ignored by the simple_IMSI-catcher script, which could be collected by extending the parser code. I wonder if gr-gsm can be set up to listen to more than one frequency?
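For those wanting to extend the parser: the packets grgsm_livemon injects on the loopback device are GSMTAP frames (sent to UDP port 4729 by default). A minimal sketch of decoding the fixed GSMTAP header, assuming the standard 16 byte header layout (this is illustrative code, not the simple_IMSI-catcher.py implementation):

```python
import struct

GSMTAP_PORT = 4729  # default UDP port grgsm_livemon sends GSMTAP frames to

def parse_gsmtap_header(data: bytes):
    """Decode the fixed GSMTAP header: version, header length in bytes,
    payload type and ARFCN (the GSM carrier number)."""
    version, hdr_len_words, msg_type = struct.unpack_from('!BBB', data, 0)
    arfcn = struct.unpack_from('!H', data, 4)[0] & 0x3FFF  # strip flag bits
    return version, hdr_len_words * 4, msg_type, arfcn

# Hand-crafted example frame: version 2, 4-word (16 byte) header,
# type 1 (GSM Um interface), ARFCN 40, remaining header fields zeroed.
frame = bytes([2, 4, 1, 0, 0x00, 0x28]) + bytes(10)
print(parse_gsmtap_header(frame))  # (2, 16, 1, 40)
```

The extra GSM payload (and any IMSI inside it) follows after the header length given in the second byte.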
Dear lazyweb. I'm planning to set up a small Raspberry Pi computer in my car, connected to a small screen next to the rear view mirror. I plan to hook it up with a GPS and a USB wifi card too. The idea is to build my own "carputer". But I wonder if someone has already created a good free software solution for such a car computer.
This is my current wish list for such a system:

- Work on a Raspberry Pi.
- Show the current speed limit based on location, and warn if going too fast (for example using the colour codes yellow and red on the screen, or making a sound). This could be done either using data from Openstreetmap or OCR info gathered from a dashboard camera.
- Track automatic toll road passes and their cost, show the total spent and make it possible to calculate toll costs for a planned route.
- Collect GPX tracks for use with OpenStreetMap.
- Automatically detect and use any wireless connection to connect to the home server. Try IP over DNS (iodine) or ICMP (Hans) if a direct connection does not work.
- Set up a mesh network to talk to other cars with the same system, or some standard car mesh protocol.
- Warn when approaching speed cameras and speed camera ranges (speed calculated between two cameras).
- Support a dashboard/front facing camera to discover speed limits and run OCR to track the registration numbers of passing cars.
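The average-speed check between two cameras mentioned in the wish list is simple arithmetic; a sketch (the distance, times and speed limit are made-up values):

```python
def average_speed_kmh(distance_km, t_start_s, t_end_s):
    """Average speed over a stretch between two camera passes."""
    return distance_km / ((t_end_s - t_start_s) / 3600.0)

# Passing two cameras 5 km apart, 4 minutes (240 s) apart in time:
speed = average_speed_kmh(5.0, 0.0, 240.0)
print(round(speed), "km/h")  # 75 km/h
if speed > 70.0:  # hypothetical speed limit for the stretch
    print("warning: average speed above the limit")
```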
If you know of any free software car computer system supporting some or all of these features, please let me know.
I've been following the Gnash project for quite a while now. It is a free software implementation of Adobe Flash, both a standalone player and a browser plugin. Gnash implements support for the AVM1 format (and not the newer AVM2 format, see Lightspark for that one), allowing several Flash based sites to work. Thanks to the friendly developers at Youtube, it also works with Youtube videos, because the Javascript code at Youtube detects Gnash and serves an AVM1 player to those users. :) It would be great if someone found time to implement AVM2 support, but it has not happened yet. If you install both Lightspark and Gnash, Lightspark will invoke Gnash if it finds an AVM1 flash file, so you can get both handled as free software. Unfortunately, Lightspark so far only implements a small subset of AVM2, and many sites do not work yet.
A few months ago, I started looking at Coverity, the static source checker used to find heaps and heaps of bugs in free software (thanks to the donation of a scanning service to free software projects by the company developing this non-free code checker), and Gnash was one of the projects I decided to check out. Coverity is able to find lock errors, memory errors, dead code and more. A few days ago they even extended it to be able to find the Heartbleed bug in OpenSSL. There are heaps of checks being done on the instrumented code, and the number of bogus warnings is quite low compared to the other static code checkers I have tested over the years.
Since a few weeks ago, I've been working with the other Gnash developers on squashing the bugs discovered by Coverity. I was quite happy today when I checked the current status and saw that of the 777 issues detected so far, 374 are marked as fixed. This makes me confident that the next Gnash release will be more stable and more dependable than the previous one. Most of the reported issues were and are in the test suite, but it also found a few in the rest of the code.
If you want to help out, you will find us on the gnash-dev mailing list and in the #gnash channel on the irc.freenode.net IRC server.
I finally received a copy of the Norwegian Bokmål edition of "The Debian Administrator's Handbook". This test copy arrived in the mail a few days ago, and I am very happy to hold the result in my hands. We spent around one and a half years translating it. This paperback edition is available from lulu.com. If you buy it quickly, you save 25% on the list price. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as a web page.
This is the second book I have published (the first was the book "Free Culture" by Lawrence Lessig, in English, French and Norwegian Bokmål), and I am very excited to finally wrap up this project. I hope "Håndbok for Debian-administratoren" will be well received.
It would be nice if it were easier in Debian to get all the hardware related packages relevant for the computer installed automatically. So I implemented one such method, using my Isenkram package. To use it, install the tasksel and isenkram packages and run tasksel as user root. You should be presented with a new option, "Hardware specific packages (autodetected by isenkram)". When you select it, tasksel will install the packages isenkram claims fit the current hardware, hot pluggable or not.
The implementation is in two files: one is the tasksel menu entry description, and the other is the script used to extract the list of packages to install. The first part is in /usr/share/tasksel/descs/isenkram.desc and looks like this:

Task: isenkram
Section: hardware
Description: Hardware specific packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific packages are
 proposed.
Test-new-install: mark show
Relevance: 8
Packages: for-current-hardware
The second part is in /usr/lib/tasksel/packages/for-current-hardware and looks like this:

#!/bin/sh
#
(
    isenkram-lookup
    isenkram-autoinstall-firmware -l
) | sort -u
All in all, a very short and simple implementation making it trivial to install the hardware dependent packages we all may want to have installed on our machines. I've not been able to find a way to get tasksel to tell you exactly which packages it plans to install before doing the installation. So if you are curious or careful, check the output from the isenkram-* command line tools first.

The information about which packages handle which hardware is fetched either from the isenkram package itself (in /usr/share/isenkram/), from git.debian.org, or from the APT package database (using the Modaliases header). The APT package database parsing has caused a nasty resource leak in the isenkram daemon (bugs #719837 and #730704). The cause is in the python-apt code (bug #745487), but using a workaround I was able to get rid of the file descriptor leak and reduce the memory leak from ~30 MiB per hardware detection down to around 2 MiB per detection. It should make the desktop daemon a lot more useful. The fix is in version 0.7, uploaded to unstable today.

I believe the current way of mapping hardware to packages in Isenkram is a good draft, but in the future I expect isenkram to use the AppStream data source for this. A proposal for getting proper AppStream support into Debian is floating around as DEP-11, and a GSoC project will take place this summer to improve the situation. I look forward to seeing the result, and welcome patches for isenkram to start using the information when it is ready.

If you want your package to map to some specific hardware, either add an "Xb-Modaliases" header to your control file like I did in the pymissile package, or submit a bug report with the details to the isenkram package. See also all my blog posts tagged isenkram for details on the notation. I expect the information will be migrated to AppStream eventually, but for the moment I have no better place to store it.
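As a sketch of what such a header can look like, the stanza below shows a hypothetical package (the package name, description and USB vendor/product IDs are made up for illustration; see the pymissile package for a real example of the notation):

```
Package: foo-rocket-launcher
Architecture: all
XB-Modaliases: foo-rocket-launcher(usb:v1234p5678d*)
Description: control program for a USB toy rocket launcher
```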
I came across the text «Killing car privacy by federal mandate» by Leonid Reyzin on Freedom to Tinker today, and I am pleased to see a good writeup of why it is an unreasonable invasion of the private sphere to have every car broadcast its position and movements by radio. The proposal in question, based on Dedicated Short Range Communication (DSRC), is called Basic Safety Message (BSM) in the USA and Cooperative Awareness Message (CAM) in Europe, and the Norwegian Public Roads Administration (Vegvesenet) appears to be among those considering mandating that all cars give up yet another piece of their owners' private sphere. I recommend everyone to read the article.

While looking into DSRC on cars in Norway, I came across a quote I find illustrative of how the Norwegian public sector handles issues around the privacy of its citizens, from the SINTEF report «Informasjonssikkerhet i AutoPASS-brikker» by Trond Foss:

«The report does not look at information security related to personal integrity.»

Apparently it can be done that simply when evaluating information security. I guess it is enough that the people at the top can say that "privacy is taken care of", the popular but empty phrase that makes many believe the integrity of individuals is being protected. The quote made me wonder how often the same approach, simply ignoring the need for personal integrity, is chosen when yet another intrusion into the private sphere of people in Norway is being facilitated. It rarely triggers reactions. The story of the reactions to the outsourcing by the Helse Sør-Øst health authority is sadly the exception and the tip of the iceberg. I think I will keep declining AutoPASS and stay as far away from the Norwegian health services as I can, until they have demonstrated and documented that they value the privacy and personal integrity of the individual higher than short term gains and public benefit.
The Freedombox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to be able to communicate with their friends and family encrypted and away from prying eyes. It is still going strong, and today a major milestone was reached.

Today, the last of the packages currently used by the project to create the system images was accepted into Debian Unstable. It was the freedombox-setup package, which is used to configure the images during build and on the first boot. Now all one needs to get going is the build code from the freedom-maker git repository and packages from Debian. And once the freedombox-setup package enters testing, we can build everything directly from Debian. :)
Some key packages used by Freedombox are freedombox-setup, plinth, pagekite, tor, privoxy, owncloud and dnsmasq. There are plans to integrate more packages into the setup. User documentation is maintained on the Debian wiki. Please check out the manual and help us improve it.
To test it for yourself and create boot images with the FreedomBox setup, run this on a Debian machine as a user with sudo rights to become root:

sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image
Root access is needed to run debootstrap and mount loopback devices. See the README in the freedom-maker git repo for more details on the build. If you do not want all three images, trim the make line. Note that the virtualbox-image target is not really virtualbox specific. It creates an x86 image usable in kvm, qemu, vmware and any other x86 virtual machine environment. You might need the version of vmdebootstrap in Jessie to get the build working, as it includes fixes for a race condition with kpartx.
If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:

url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat
I have not tested it myself the last few weeks, so I do not know if it still works.

If you wonder how to help, one task you could look at is using systemd as the boot system. It will become the default for Linux in Jessie, so we need to make sure it is usable on the Freedombox. I did a simple test a few weeks ago, and noticed that dnsmasq failed to start during boot when using systemd. I suspect there are other problems too. :) To detect problems, there is a test suite included, which can be run from the plinth web interface.

Give it a go and let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and on the mailing list if you want to help make this vision come true.
It is pleasing to see that the work we put into publishing new editions of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig, is still being appreciated. I had a look at the latest sales numbers for the paper edition today. Not too impressive, but I am happy to see that some buyers still exist. All the revenue from the books is sent to the Creative Commons Corporation, and they receive the largest cut if you buy directly from Lulu. Most books are sold via Amazon, with Ingram second and only a small fraction directly from Lulu. The ebook edition is available for free from Github.

Title / language         2016 jan-jun   2016 jul-dec   2017 jan-may
Culture Libre / French              3              6             15
Fri kultur / Norwegian              7              1              0
Free Culture / English             14             27             16
Total                              24             34             31
It is a bit sad to see the low sales numbers for the Norwegian edition, and a bit surprising that the English edition is still selling so well.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.
Twelve years ago, I wrote a short note about the use of language codes in Norway. I was just reminded of it when I got a question about whether the note was still relevant, and figured it was worth repeating what still applies. What I wrote back then is just as relevant today.

When choosing a language in programs on Unix, one chooses among many language codes. For languages in Norway the following language codes are recommended (recommended locale in parentheses):

- nb (nb_NO): Norwegian Bokmål in Norway
- nn (nn_NO): Norwegian Nynorsk in Norway
- se (se_NO): Northern Sami in Norway

All programs using other codes should be changed.

The language code should be used when .po files are named and installed. This is not the same as the locale code. For Norwegian Bokmål, the files should be named nb.po, while the locale (LANG) should be nb_NO.

If we do not standardise these codes across all programs with Norwegian translations, it is impossible to give the LANG variable a value that works for all programs.

The language codes are the official codes from ISO 639, and their use with POSIX locales is standardised in RFC 3066 and ISO 15897. This recommendation is in line with those standards.

The following codes are or have been in use as locale values for "Norwegian" languages. They should be avoided, and replaced when discovered:
norwegian -> nb_NO
bokmål    -> nb_NO
bokmal    -> nb_NO
nynorsk   -> nn_NO
no        -> nb_NO
no_NO     -> nb_NO
no_NY     -> nn_NO
sme_NO    -> se_NO
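A small sketch of how the mapping above could be applied in code, normalising deprecated codes to the recommended ones:

```python
# Mapping from the deprecated locale codes listed above to the
# recommended replacements.
LEGACY_TO_RECOMMENDED = {
    "norwegian": "nb_NO",
    "bokmål": "nb_NO",
    "bokmal": "nb_NO",
    "nynorsk": "nn_NO",
    "no": "nb_NO",
    "no_NO": "nb_NO",
    "no_NY": "nn_NO",
    "sme_NO": "se_NO",
}

def normalise_locale(code: str) -> str:
    """Return the recommended locale for a possibly deprecated code."""
    return LEGACY_TO_RECOMMENDED.get(code, code)

print(normalise_locale("no_NY"))  # nn_NO
print(normalise_locale("nb_NO"))  # nb_NO (already correct)
```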
Note that for the Sami languages, se_NO in practice refers to Northern Sami in Norway, while e.g. smj_NO refers to Lule Sami. This note is not meant to give advice on Sami language codes, though; the Divvun project does a better job of that.

References:
- RFC 3066 - Tags for the Identification of Languages (replaces RFC 1766)
- ISO 639 - Codes for the Representation of Names of Languages
- ISO DTR 14652 - locale-standard Specification method for cultural conventions
- ISO 15897: Registration procedures for cultural elements (cultural registry) (new draft)
- ISO/IEC JTC1/SC22/WG20 - The ISO group for i18n standardisation

I am very happy to report that the Nikita Noark 5 core project tagged its second release today. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.1.1 since version 0.1.0 (from NEWS.md):

- Continued work on the angularjs GUI, including document upload.
- Implemented correspondencepartPerson, correspondencepartUnit and correspondencepartInternal.
- Applied for Coverity coverage and started submitting code on a regular basis.
- Started fixing bugs reported by Coverity.
- Corrected and completed HATEOAS links to make sure the entire API is available via URLs in _links.
- Corrected all relation URLs to use a trailing slash.
- Added initial support for storing data in ElasticSearch.
- Now able to receive and store uploaded files in the archive.
- Changed JSON output for object lists to have relations in _links.
- Improved JSON output for empty object lists.
- Now uses the correct MIME type application/vnd.noark5-v4+json.
- Added support for Docker container images.
- Added a simple API browser implemented in JavaScript/Angular.
- Started on an archive client implemented in JavaScript/Angular.
- Started on a prototype to show the public mail journal.
- Improved performance by disabling the Spring FileWatcher.
- Added support for 'arkivskaper', 'saksmappe' and 'journalpost'.
- Added support for some metadata code lists.
- Added support for Cross-Origin Resource Sharing (CORS).
- Changed the login method from Basic Auth to JSON Web Token (RFC 7519) style.
- Added support for GET-ing ny-* URLs.
- Added support for modifying entities using PUT and eTag.
- Added support for returning XML output on request.
- Removed support for English field and class names, limiting ourselves to the official names.
- ...
If this sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).
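As a sketch of how the MIME type and _links entries from the changelog fit together, a client can ask a running Nikita instance for JSON like this (the host, port and base path are assumptions for a local development setup, not part of the release notes):

```
curl -s -H "Accept: application/vnd.noark5-v4+json" \
     http://localhost:8092/noark5v4/
```

The returned JSON should carry relation URLs in _links, making the rest of the API discoverable from a single entry point.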
For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google Mail as storage: write a Linux block device storing blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to get lots of (fairly slow, I admit) cheap and encrypted storage. But I never found time to implement such a system. The last few weeks I have instead looked at a system called S3QL, a locally mounted network backed file system with the features I need.

S3QL is a FUSE file system with a local cache and cloud storage, handling several different storage providers, any with an Amazon S3, Google Drive or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point and to look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.

It is simple to use. I'm using it on Debian Wheezy, where the package is already included. So to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use S3QL with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article "S3QL Filesystem for HPC Storage" by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.
Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
This is a copy of +an +email I posted to the nikita-noark mailing list. Please follow up +there if you would like to discuss this topic. The background is that +we are making a free software archive system based on the Norwegian +Noark +5 standard for government archives.
+ +I've been wondering a bit lately how trusted timestamps could be +stored in Noark 5. +Trusted +timestamps can be used to verify that some information +(document/file/checksum/metadata) have not been changed since a +specific time in the past. This is useful to verify the integrity of +the documents in the archive.
+ +Then it occured to me, perhaps the trusted timestamps could be +stored as dokument variants (ie dokumentobjekt referered to from +dokumentbeskrivelse) with the filename set to the hash it is +stamping?
+ +Given a "dokumentbeskrivelse" with an associated "dokumentobjekt", +a new dokumentobjekt is associated with "dokumentbeskrivelse" with the +same attributes as the stamped dokumentobjekt except these +attributes:
-I create my local passphrase using pwget 50 or similar, -but any sensible way to create a fairly random password should do it. -Armed with these details, it is now time to run mkfs, entering the API -details and password to create it:
- -- --# mkdir -m 700 /var/lib/s3ql-cache -# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \ - --ssl s3c://s.greenqloud.com:443/bucket-name -Enter backend login: -Enter backend password: -Before using S3QL, make sure to read the user's guide, especially -the 'Important Rules to Avoid Loosing Data' section. -Enter encryption password: -Confirm encryption password: -Generating random encryption key... -Creating metadata tables... -Dumping metadata... -..objects.. -..blocks.. -..inodes.. -..inode_blocks.. -..symlink_targets.. -..names.. -..contents.. -..ext_attributes.. -Compressing and uploading metadata... -Wrote 0.00 MB of compressed metadata. -#
The next step is mounting the file system to make the storage available. - -
+-# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \ - --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql -Using 4 upload threads. -Downloading and decompressing metadata... -Reading metadata... -..objects.. -..blocks.. -..inodes.. -..inode_blocks.. -..symlink_targets.. -..names.. -..contents.. -..ext_attributes.. -Mounting filesystem... -# df -h /s3ql -Filesystem Size Used Avail Use% Mounted on -s3c://s.greenqloud.com:443/bucket-name 1.0T 0 1.0T 0% /s3ql -# -
-
-
- format -> "RFC3161" +
- mimeType -> "application/timestamp-reply" +
- formatDetaljer -> "<source URL for timestamp service>" +
- filenavn -> "<sjekksum>.tsr"
-
+-# umount.s3ql /s3ql -# -
The file system is now ready for use. I use rsync to store my -backups in it, and as the metadata used by rsync is downloaded at -mount time, no network traffic (and storage cost) is triggered by -running rsync. To unmount, one should not use the normal umount -command, as this will not flush the cache to the cloud storage, but -instead running the umount.s3ql command like this: +
There is a fsck command available to check the file system and -correct any problems detected. This can be used if the local server -crashes while the file system is mounted, to reset the "already -mounted" flag. This is what it look like when processing a working -file system:
+This assume a service following +IETF RFC 3161 is +used, which specifiy the given MIME type for replies and the .tsr file +ending for the content of such trusted timestamp. As far as I can +tell from the Noark 5 specifications, it is OK to have several +variants/renderings of a dokument attached to a given +dokumentbeskrivelse objekt. It might be stretching it a bit to make +some of these variants represent crypto-signatures useful for +verifying the document integrity instead of representing the dokument +itself.
+ +Using the source of the service in formatDetaljer allow several +timestamping services to be used. This is useful to spread the risk +of key compromise over several organisations. It would only be a +problem to trust the timestamps if all of the organisations are +compromised.
+ +The following oneliner on Linux can be used to generate the tsr
+file. $input is the path to the file to checksum, and $sha256 is the
+SHA-256 checksum of the file (ie the "
--# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name -Using cached metadata. -File system seems clean, checking anyway. -Checking DB integrity... -Creating temporary extra indices... -Checking lost+found... -Checking cached objects... -Checking names (refcounts)... -Checking contents (names)... -Checking contents (inodes)... -Checking contents (parent inodes)... -Checking objects (reference counts)... -Checking objects (backend)... -..processed 5000 objects so far.. -..processed 10000 objects so far.. -..processed 15000 objects so far.. -Checking objects (sizes)... -Checking blocks (referenced objects)... -Checking blocks (refcounts)... -Checking inode-block mapping (blocks)... -Checking inode-block mapping (inodes)... -Checking inodes (refcounts)... -Checking inodes (sizes)... -Checking extended attributes (names)... -Checking extended attributes (inodes)... -Checking symlinks (inodes)... -Checking directory reachability... -Checking unix conventions... -Checking referential integrity... -Dropping temporary indices... -Backing up old metadata... -Dumping metadata... -..objects.. -..blocks.. -..inodes.. -..inode_blocks.. -..symlink_targets.. -..names.. -..contents.. -..ext_attributes.. -Compressing and uploading metadata... -Wrote 0.89 MB of compressed metadata. -# +openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \ + | curl -s -H "Content-Type: application/timestamp-query" \ + --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr
Thanks to the cache, working on files that fit in the cache is very -quick, about the same speed as local file access. Uploading large -amount of data is to me limited by the bandwidth out of and into my -house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, -which is very close to my upload speed, and downloading the same -Debian installation ISO gave me 610 kiB/s, close to my download speed. -Both were measured using dd. So for me, the bottleneck is my -network, not the file system code. I do not know what a good cache -size would be, but suspect that the cache should e larger than your -working set.
- -I mentioned that only one machine can mount the file system at the -time. If another machine try, it is told that the file system is -busy:
+To verify the timestamp, you first need to download the public key +of the trusted timestamp service, for example using this command:
--# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \ - --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql -Using 8 upload threads. -Backend reports that fs is still mounted elsewhere, aborting. -# +wget -O ca-cert.txt \ + https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
The file content is uploaded when the cache is full, while the -metadata is uploaded once every 24 hour by default. To ensure the -file system content is flushed to the cloud, one can either umount the -file system, or ask S3QL to flush the cache and metadata using -s3qlctrl: - -
+-# s3qlctrl upload-meta /s3ql -# s3qlctrl flushcache /s3ql -# -
Note: the public key should be stored alongside the timestamps in +the archive to make sure it is also available 100 years from now. It +is probably a good idea to standardise how and where to store such +public keys, to make them easier to find for those trying to verify +documents 100 or 1000 years from now. :)
-If you are curious about how much space your data uses in the -cloud, and how much compression and deduplication cut down on the -storage usage, you can use s3qlstat on the mounted file system to get -a report:
+The verification itself is a simple openssl command:
--# s3qlstat /s3ql -Directory entries: 9141 -Inodes: 9143 -Data blocks: 8851 -Total data size: 22049.38 MB -After de-duplication: 21955.46 MB (99.57% of total) -After compression: 21877.28 MB (99.22% of total, 99.64% of de-duplicated) -Database size: 2.39 MB (uncompressed) -(some values do not take into account not-yet-uploaded dirty blocks in cache) -# +openssl ts -verify -data $inputfile -in $sha256.tsr \ + -CAfile ca-cert.txt -text
I mentioned earlier that there are several possible suppliers of -storage. I did not try to locate them all, but am aware of at least -Greenqloud, -Google Drive, -Amazon S3 web serivces, -Rackspace and -Crowncloud. The latter even -accept payment in Bitcoin. Pick one that suit your need. Some of -them provide several GiB of free storage, but the prize models are -quite different and you will have to figure out what suits you -best.
- -While researching this blog post, I had a look at research papers -and posters discussing the S3QL file system. There are several, which -told me that the file system is getting a critical check by the -science community and increased my confidence in using it. One nice -poster is titled -"An -Innovative Parallel Cloud Storage System using OpenStackâs SwiftObject -Store and Transformative Parallel I/O Approach" by Hsing-Bung -Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields -and Pamela Smith. Please have a look.
- -Given my problems with different file systems earlier, I decided to -check out the mounted S3QL file system to see if it would be usable as -a home directory (in other word, that it provided POSIX semantics when -it come to locking and umask handling etc). Running -my -test code to check file system semantics, I was happy to discover that -no error was found. So the file system can be used for home -directories, if one chooses to do so.
- -If you do not want a locally file system, and want something that -work without the Linux fuse file system, I would like to mention the -Tarsnap service, which also -provide locally encrypted backup using a command line client. It have -a nicer access control system, where one can split out read and write -access, allowing some systems to write to the backup and others to -only read from it.
- -As usual, if you use Bitcoin and want to show your support of my -activities, please send Bitcoin donations to my address -15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
+Is there any reason this approach would not work? Is it somehow against +the Noark 5 specification?
I dag kom endelig avgjørelsen fra EU-domstolen om -datalagringsdirektivet, som ikke overraskende ble dømt ulovlig og i -strid med borgernes grunnleggende rettigheter. Hvis du lurer på hva -datalagringsdirektivet er for noe, så er det -en -flott dokumentar tilgjengelig hos NRK som jeg tidligere -har -anbefalt alle å se.
- -Her er et liten knippe nyhetsoppslag om saken, og jeg regner med at -det kommer flere ut over dagen. Flere kan finnes -via -mylder.
- --
-
-
- EU-domstolen: -Datalagringsdirektivet er ugyldig - e24.no 2014-04-08 - -
- EU-domstolen: -Datalagringsdirektivet er ulovlig - aftenposten.no 2014-04-08 - -
- Krever -DLD-stopp i Norge - aftenposten.no 2014-04-08 - -
- Apenes: - En -gledens dag - p4.no 2014-04-08 - -
- EU-domstolen: -â Datalagringsdirektivet er ugyldig - nrk.no 2014-04-08 - -
- EU-domstolen: -Datalagringsdirektivet er ugyldig - vg.no 2014-04-08 - -
- - -Vi bør skrote hele datalagringsdirektivet - dagbladet.no -2014-04-08 - -
- EU-domstolen: -DLD er ugyldig - digi.no 2014-04-08 - -
- European -court declares data retention directive invalid - irishtimes.com -2014-04-08 - -
- EU -court rules against requirement to keep data of telecom users - -reuters.com 2014-04-08 - -
Jeg synes det er veldig fint at nok en stemme slår fast at -totalitær overvåkning av befolkningen er uakseptabelt, men det er -fortsatt like viktig å beskytte privatsfæren som før, da de -teknologiske mulighetene fortsatt finnes og utnyttes, og jeg tror -innsats i prosjekter som -Freedombox og -Dugnadsnett er viktigere enn -noen gang.
- -Update 2014-04-08 12:10: Kronerullingen for å -stoppe datalagringsdirektivet i Norge gjøres hos foreningen -Digitalt Personvern, -som har samlet inn 843 215,- så langt men trenger nok mye mer hvis - -ikke Høyre og Arbeiderpartiet bytter mening i saken. Det var -kun -partinene Høyre og Arbeiderpartiet som stemte for -Datalagringsdirektivet, og en av dem må bytte mening for at det skal -bli flertall mot i Stortinget. Se mer om saken -Holder -de ord.
+ +Aftenposten +melder i dag om feil i eksamensoppgavene for eksamen i politikk og +menneskerettigheter, der teksten i bokmåls- og nynorskutgaven ikke var +lik. Oppgaveteksten er gjengitt i artikkelen, og jeg ble nysgjerrig +på om den frie oversetterløsningen +Apertium ville gjort en bedre +jobb enn Utdanningsdirektoratet. Det kan se slik ut.
+ +Her er bokmålsoppgaven fra eksamenen:
+ +++ +Drøft utfordringene knyttet til nasjonalstatenes og andre aktørers +rolle og muligheter til å håndtere internasjonale utfordringer, som +for eksempel flykningekrisen.
+ +Vedlegge er eksempler på tekster som kan gi relevante perspektiver +på temaet:
++
+ +- Flykningeregnskapet 2016, UNHCR og IDMC +
- «Grenseløst Europa for fall» A-Magasinet, 26. november 2015 +
Dette oversetter Apertium slik:
+ +++ +Drøft utfordringane knytte til nasjonalstatane sine og rolla til +andre aktørar og høve til å handtera internasjonale utfordringar, som +til dømes *flykningekrisen.
+ +Vedleggja er døme på tekster som kan gje relevante perspektiv på +temaet:
+ ++
+ +- *Flykningeregnskapet 2016, *UNHCR og *IDMC
+- «*Grenseløst Europa for fall» A-Magasinet, 26. november 2015
+
Ord som ikke ble forstått er markert med stjerne (*), og trenger +ekstra språksjekk. Men ingen ord er forsvunnet, slik det var i +oppgaven elevene fikk presentert på eksamen. Jeg mistenker dog at +"andre aktørers rolle og muligheter til ..." burde vært oversatt til +"rolla til andre aktørar og deira høve til ..." eller noe slikt, men +det er kanskje flisespikking. Det understreker vel bare at det alltid +trengs korrekturlesning etter automatisk oversettelse.
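Stjernemarkeringen gjør det enkelt å plukke ut maskinelt hvilke ord som trenger ekstra språksjekk. Her er en liten Python-skisse (funksjonsnavnet er mitt eget påfunn, ikke en del av Apertium) som finner de stjernemerkede ordene i Apertium-utdata:

```python
import re

def ukjente_ord(apertium_utdata):
    """Finn ord Apertium ikke forsto, dvs. ord merket med stjerne (*)."""
    # Apertium setter '*' foran ord som ikke finnes i ordboken til
    # språkparet, slik at de kan korrekturleses for hånd etterpå.
    return re.findall(r'\*(\w+)', apertium_utdata)
```

Kjørt på oversettelsen over gir den lista over ord som må sjekkes manuelt, uten at noe av resten av teksten går tapt.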
+I disse dager, med frist 1. mai, har Riksarkivaren ute en høring på +sin forskrift. Som en kan se er det ikke mye tid igjen før fristen +som går ut på søndag. Denne forskriften er det som lister opp hvilke +formater det er greit å arkivere i +Noark +5-løsninger i Norge.
+ +Jeg fant høringsdokumentene hos +Norsk +Arkivråd etter å ha blitt tipset på epostlisten til +fri +programvareprosjektet Nikita Noark5-Core, som lager et Noark 5 +Tjenestegrensesnitt. Jeg er involvert i Nikita-prosjektet, og takket +være min interesse for tjenestegrensesnittsprosjektet har jeg lest en +god del Noark 5-relaterte dokumenter, og til min overraskelse oppdaget +at standard epost ikke er på listen over godkjente formater som kan +arkiveres. Høringen med frist søndag er en glimrende mulighet til å +forsøke å gjøre noe med det. Jeg holder på med +egen +høringsuttalelse, og lurer på om andre er interessert i å støtte +forslaget om å tillate arkivering av epost som epost i arkivet.
+ +Er du i gang med å skrive egen høringsuttalelse allerede? I så fall +kan du jo vurdere å ta med en formulering om epost-lagring. Jeg tror +ikke det trengs så mye. Her er et kort forslag til tekst:
+ ++ ++ +Viser til høring sendt ut 2017-02-17 (Riksarkivarens referanse + 2016/9840 HELHJO), og tillater oss å sende inn noen innspill om + revisjon av Forskrift om utfyllende tekniske og arkivfaglige + bestemmelser om behandling av offentlige arkiver (Riksarkivarens + forskrift).
+ +Svært mye av vår kommunikasjon foregår i dag på e-post. Vi + foreslår derfor at Internett-e-post, slik det er beskrevet i IETF + RFC 5322, + https://tools.ietf.org/html/rfc5322, bør + inn som godkjent dokumentformat. Vi foreslår at forskriftens + oversikt over godkjente dokumentformater ved innlevering i § 5-16 + endres til å ta med Internett-e-post.
+ +
Som del av arbeidet med tjenestegrensesnitt har vi testet hvordan +epost kan lagres i en Noark 5-struktur, og holder på å skrive et +forslag om hvordan dette kan gjøres som vil bli sendt over til +arkivverket så snart det er ferdig. De som er interesserte kan +følge +fremdriften på web.
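Som en illustrasjon på hvor lite som skal til for å hente ut journal-metadata fra en e-post lagret som e-post, her en liten Python-skisse som leser RFC 5322-hoder med standardbiblioteket (feltutvalget er min egen antagelse, ikke hentet fra forslaget som er under arbeid):

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

def epost_metadata(raatekst):
    """Hent ut metadata-felt fra en RFC 5322-melding av typen en
    kan tenke seg registrert i en Noark 5-struktur."""
    melding = message_from_string(raatekst)
    return {
        'avsender': melding['From'],
        'mottaker': melding['To'],
        'tittel':   melding['Subject'],
        # Date-feltet normaliseres til ISO 8601 for arkivformål.
        'dato':     parsedate_to_datetime(melding['Date']).isoformat(),
    }
```

Selve meldingen kan så lagres uendret som dokumentfil, mens felt som disse fyller metadata-strukturen rundt.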
+ +Oppdatering 2017-04-28: I dag ble høringsuttalelsen jeg skrev + sendt + inn av foreningen NUUG.
Microsoft have announced that Windows XP reaches its end of life -2014-04-08, in 7 days. But there are heaps of machines still running -Windows XP, and depending on Windows XP to run their applications, and -upgrading will be expensive, both when it comes to money and when it -comes to the amount of effort needed to migrate from Windows XP to a -new operating system. Some obvious options (buy new a Windows -machine, buy a MacOSX machine, install Linux on the existing machine) -are already well known and covered elsewhere. Most of them involve -leaving the user applications installed on Windows XP behind and -trying out replacements or updated versions. In this blog post I want -to mention one strange bird that allow people to keep the hardware and -the existing Windows XP applications and run them on a free software -operating system that is Windows XP compatible.
- -ReactOS is a free software -operating system (GNU GPL licensed) working on providing a operating -system that is binary compatible with Windows, able to run windows -programs directly and to use Windows drivers for hardware directly. -The project goal is for Windows user to keep their existing machines, -drivers and software, and gain the advantages from user a operating -system without usage limitations caused by non-free licensing. It is -a Windows clone running directly on the hardware, so quite different -from the approach taken by the Wine -project, which make it possible to run Windows binaries on -Linux.
- -The ReactOS project share code with the Wine project, so most -shared libraries available on Windows are already implemented already. -There is also a software manager like the one we are used to on Linux, -allowing the user to install free software applications with a simple -click directly from the Internet. Check out the -screen shots on the -project web site for an idea what it look like (it looks just like -Windows before metro).
- -I do not use ReactOS myself, preferring Linux and Unix like -operating systems. I've tested it, and it work fine in a virt-manager -virtual machine. The browser, minesweeper, notepad etc is working -fine as far as I can tell. Unfortunately, my main test application -is the software included on a CD with the Lego Mindstorms NXT, which -seem to install just fine from CD but fail to leave any binaries on -the disk after the installation. So no luck with that test software. -No idea why, but hope someone else figure out and fix the problem. -I've tried the ReactOS Live ISO on a physical machine, and it seemed -to work just fine. If you like Windows and want to keep running your -old Windows binaries, check it out by -downloading the -installation CD, the live CD or the preinstalled virtual machine -image.
+ +Jeg oppdaget i dag at nettstedet som +publiserer offentlige postjournaler fra statlige etater, OEP, har +begynt å blokkere enkelte typer webklienter fra å få tilgang. Vet +ikke hvor mange det gjelder, men det gjelder i hvert fall libwww-perl +og curl. For å teste selv, kjør følgende:
+ ++ ++% curl -v -s https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP' +< HTTP/1.1 404 Not Found +% curl -v -s --header 'User-Agent:Opera/12.0' https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP' +< HTTP/1.1 200 OK +% +
Her kan en se at tjenesten gir «404 Not Found» for curl i +standardoppsettet, mens den gir «200 OK» hvis curl hevder å være Opera +versjon 12.0. Offentlig elektronisk postjournal startet blokkeringen +2017-03-02.
+ +Blokkeringen vil gjøre det litt vanskeligere å maskinelt hente +informasjon fra oep.no. Kan blokkeringen være gjort for å hindre +automatisert innsamling av informasjon fra OEP, slik Pressens +Offentlighetsutvalg gjorde for å dokumentere hvordan departementene +hindrer innsyn i +rapporten +«Slik hindrer departementer innsyn» som ble publisert i januar +2017? Det virker usannsynlig, da det jo er trivielt å bytte +User-Agent til noe nytt.
+ +Finnes det juridisk grunnlag for det offentlige å diskriminere +webklienter slik det gjøres her? Der tilgang gis eller ikke alt etter +hva klienten sier at den heter? Da OEP eies av DIFI og driftes av +Basefarm, finnes det kanskje noen dokumenter sendt mellom disse to +aktørene man kan be om innsyn i for å forstå hva som har skjedd. Men +postjournalen +til DIFI viser kun to dokumenter det siste året mellom DIFI og +Basefarm. +Mimes brønn neste, +tenker jeg.
Debian Edu / Skolelinux -keep gaining new users. Some weeks ago, a person showed up on IRC, -#debian-edu, with a -wish to contribute, and I managed to get a interview with this great -contributor Roger Marsal to learn more about his background.
- -Who are you, and how do you spend your days?
- -My name is Roger Marsal, I'm 27 years old (1986 generation) and I -live in Barcelona, Spain. I've got a strong business background and I -work as a patrimony manager and as a real estate agent. Additionally, -I've co-founded a British based tech company that is nowadays on the -last development phase of a new social networking concept.
- -I'm a Linux enthusiast that started its journey with Ubuntu four years -ago and have recently switched to Debian seeking rock solid stability -and as a necessary step to gain expertise.
- -In a nutshell, I spend my days working and learning as much as I -can to face both my job, entrepreneur project and feed my Linux -hunger.
- -How did you get in contact with the Skolelinux / Debian Edu -project?
- -I discovered the LTSP advantages -with "Ubuntu 12.04 alternate install" and after a year of use I -started looking for an alternative. Even though I highly value and -respect the Ubuntu project, I thought it was necessary for me to -change to a more robust and stable alternative. As far as I was using -Debian on my personal laptop I thought it would be fine to install -Debian and configure an LTSP server myself. Surprised, I discovered -that the Debian project also supported a kind of Edubuntu equivalent, -and after having some pain I obtained a Debian Edu network up and -running. I just loved it.
- -What do you see as the advantages of Skolelinux / Debian -Edu?
- -I found a main advantage in that, once you know "the tips and -tricks", a new installation just works out of the box. It's the most -complete alternative I've found to create an LTSP network. All the -other distributions seems to be made of plastic, Debian Edu seems to -be made of steel.
- -What do you see as the disadvantages of Skolelinux / Debian -Edu?
- -I found two main disadvantages.
- -I'm not an expert but I've got notions and I had to spent a considerable -amount of time trying to bring up a standard network topology. I'm quite -stubborn and I just worked until I did but I'm sure many people with few -resources (not big schools, but academies for example) would have switched -or dropped.
- -It's amazing how such a complex system like Debian Edu has achieved -this out-of-the-box state. Even though tweaking without breaking gets -more difficult, as more factors have to be considered. This can -discourage many people too.
- -Which free software do you use daily?
- -I use Debian, Firefox, Okular, Inkscape, LibreOffice and -Virtualbox.
- - -Which strategy do you believe is the right one to use to -get schools to use free software?
- -I don't think there is a need for a particular strategy. The free -attribute in both "freedom" and "no price" meanings is what will -really bring free software to schools. In my experience I can think of -the "R" statistical language; a -few years a ago was an extremely nerd tool for university people. -Today it's being increasingly used to teach statistics at many -different level of studies. I believe free and open software will -increasingly gain popularity, but I'm sure schools will be one of the -first scenarios where this will happen.
+ +The Nikita +Noark 5 core project is implementing the Norwegian standard for +keeping an electronic archive of government documents. +The +Noark 5 standard documents the requirements for data systems used by +the archives in the Norwegian government, and the Noark 5 web interface +specification documents a REST web service for storing, searching and +retrieving documents and metadata in such an archive. I've been involved +in the project since a few weeks before Christmas, when the Norwegian +Unix User Group +announced +it supported the project. I believe this is an important project, +and hope it can make it possible for the government archives in the +future to use free software to keep the archives we citizens depend +on. But as I do not hold such an archive myself, personally my first use +case is to store and analyse public mail journal metadata published +by the government. I find it useful to have a clear use case in +mind when developing, to make sure the system scratches one of my +itches.
+ +If you would like to help make sure there is a free software +alternative for the archives, please join our IRC channel +(#nikita on +irc.freenode.net) and +the +project mailing list.
+ +When I got involved, the web service could store metadata about +documents. But a few weeks ago, a new milestone was reached when it +became possible to store full text documents too. Yesterday, I +completed an implementation of a command line tool +archive-pdf to upload a PDF file to the archive using this +API. The tool is very simple at the moment: it finds existing +fonds, series and +files, asking the user to select which one to use if more than +one exists. Once a file is identified, the PDF is associated with the +file and uploaded, using the title extracted from the PDF itself. The +process is fairly similar to visiting the archive, opening a cabinet, +locating a file and storing a piece of paper in the archive. Here is +a test run directly after populating the database with test data using +our API tester:
+ ++ ++~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf +using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446 +using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446 + + 0 - Title of the test case file created 2017-03-18T23:49:32.103446 + 1 - Title of the test file created 2017-03-18T23:49:32.103446 +Select which mappe you want (or search term): 0 +Uploading mangelmelding/mangler.pdf + PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt + File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446 +~/src//noark5-tester$ +
You can see here how the fonds (arkiv) and series (arkivdel) only had +one option, while the user needs to choose which file (mappe) to use +among the two created by the API tester. The archive-pdf +tool can be found in the git repository for the API tester.
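The selection step shown in the test run can be sketched in a few lines of Python. This is a simplified stand-in for the behaviour described above, not the actual archive-pdf code, and the function name and prompt are my own:

```python
def select_entry(entries, prompt, ask=input):
    """Pick an entry by title: use it directly when only one exists,
    otherwise list the candidates and ask the user which one to use."""
    if len(entries) == 1:
        return entries[0]
    for index, title in enumerate(entries):
        print(" %d - %s" % (index, title))
    return entries[int(ask(prompt))]
```

With one candidate the choice is automatic, matching how the fonds and series were picked without a question in the transcript, while two mapper trigger the numbered menu.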
+ +In the project, I have been mostly working on +the API +tester so far, while getting to know the code base. The API +tester currently uses +the HATEOAS links +to traverse the entire exposed service API and verify that the exposed +operations and objects match the specification, as well as trying to +create objects holding metadata and uploading a simple XML file to +store. The tester has proved very useful for finding flaws in our +implementation, as well as flaws in the reference site and the +specification.
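The HATEOAS traversal boils down to following the href registered for each relation in the `_links` part of every JSON response. A simplified sketch of that lookup, assuming a link structure like the one Spring HATEOAS produces (the rel name and URL in the usage note are examples, not taken from the Noark 5 interface specification):

```python
def link_for(response, rel):
    """Return the href registered for a given relation in a HATEOAS
    style JSON response, or None when the service does not expose it."""
    links = response.get('_links', {})
    entry = links.get(rel)
    # Each link entry is expected to be a dict with at least 'href'.
    return entry['href'] if entry else None
```

Starting from the service root, the tester can then repeatedly fetch a document, look up the next relation of interest and follow its href, without hardcoding any URL except the entry point.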
+ +The test document I uploaded is a summary of all the specification +defects we have collected so far while implementing the web service. +There are several unclear and conflicting parts of the specification, +and we have +started +writing down the questions we get from implementing it. We use a +format inspired by how The +Austin Group collects defect reports for the POSIX standard with +their +instructions for the MANTIS defect tracker system, for lack of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).
The Nikita project is implemented using Java and Spring, and is +fairly easy to get up and running using Docker containers for those +that want to test the current code base. The API tester is +implemented in Python.