X-Git-Url: http://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/85824ebf237f0be723ef83d9a18f134376f52cf1..2d047348b0dfe1d3bab7955e9bf9b52223e84373:/blog/index.html?ds=sidebyside diff --git a/blog/index.html b/blog/index.html index 1cacd78367..7ee76c94b8 100644 --- a/blog/index.html +++ b/blog/index.html @@ -20,287 +20,96 @@
-
S3QL, a locally mounted cloud file system - nice free software
-
9th April 2014
-

For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage, writing a Linux block device that stored blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to get lots of (fairly slow, I admit that) cheap and encrypted storage. But I never found time to implement such a system. The last few weeks I have been looking at a system called S3QL, a locally mounted network backed file system with the features I need.

- -

S3QL is a FUSE file system with a local cache and cloud storage, handling several different storage providers, any with an Amazon S3, Google Drive or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point, look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.

- -

It is simple to use. I'm using it on Debian Wheezy, where the package is already included. So to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use S3QL with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available from the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.

- -

Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:

-[s3c]
-storage-url: s3c://s.greenqloud.com:443/bucket-name
-backend-login: API-login
-backend-password: API-password
-fs-passphrase: local-password
-

- -

I create my local passphrase using pwget 50 or similar, -but any sensible way to create a fairly random password should do it. -Armed with these details, it is now time to run mkfs, entering the API -details and password to create it:

- -

-# mkdir -m 700 /var/lib/s3ql-cache
-# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
-  --ssl s3c://s.greenqloud.com:443/bucket-name
-Enter backend login: 
-Enter backend password: 
-Before using S3QL, make sure to read the user's guide, especially
-the 'Important Rules to Avoid Loosing Data' section.
-Enter encryption password: 
-Confirm encryption password: 
-Generating random encryption key...
-Creating metadata tables...
-Dumping metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Compressing and uploading metadata...
-Wrote 0.00 MB of compressed metadata.
-# 

- -

The next step is mounting the file system to make the storage available. - -

-# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
-  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
-Using 4 upload threads.
-Downloading and decompressing metadata...
-Reading metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Mounting filesystem...
-# df -h /s3ql
-Filesystem                              Size  Used Avail Use% Mounted on
-s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
-#
-

- -

The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as this will not flush the cache to the cloud storage; instead, run the umount.s3ql command like this:

-# umount.s3ql /s3ql
-# 
-
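To give an idea of the kind of rsync run mentioned above, a backup into the mounted file system could look something like this; the source and target paths are only examples, not my actual setup:

rsync -aHAX --delete /home/ /s3ql/backup/$(hostname)/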

- -

There is a fsck command available to check the file system and correct any problems detected. This can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

- -

-# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
-Using cached metadata.
-File system seems clean, checking anyway.
-Checking DB integrity...
-Creating temporary extra indices...
-Checking lost+found...
-Checking cached objects...
-Checking names (refcounts)...
-Checking contents (names)...
-Checking contents (inodes)...
-Checking contents (parent inodes)...
-Checking objects (reference counts)...
-Checking objects (backend)...
-..processed 5000 objects so far..
-..processed 10000 objects so far..
-..processed 15000 objects so far..
-Checking objects (sizes)...
-Checking blocks (referenced objects)...
-Checking blocks (refcounts)...
-Checking inode-block mapping (blocks)...
-Checking inode-block mapping (inodes)...
-Checking inodes (refcounts)...
-Checking inodes (sizes)...
-Checking extended attributes (names)...
-Checking extended attributes (inodes)...
-Checking symlinks (inodes)...
-Checking directory reachability...
-Checking unix conventions...
-Checking referential integrity...
-Dropping temporary indices...
-Backing up old metadata...
-Dumping metadata...
-..objects..
-..blocks..
-..inodes..
-..inode_blocks..
-..symlink_targets..
-..names..
-..contents..
-..ext_attributes..
-Compressing and uploading metadata...
-Wrote 0.89 MB of compressed metadata.
-# 
-

- -

Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is for me limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.
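A throughput test like the one mentioned can be done with dd directly against the mount point; the file names below are only examples:

dd if=debian-image.iso of=/s3ql/debian-image.iso bs=1M conv=fsync
dd if=/s3ql/debian-image.iso of=/dev/null bs=1M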

- -

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

- -

-# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
-  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
-Using 8 upload threads.
-Backend reports that fs is still mounted elsewhere, aborting.
-#
-

- -

The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:

-# s3qlctrl upload-meta /s3ql
-# s3qlctrl flushcache /s3ql
-# 
-

- -

If you are curious about how much space your data uses in the -cloud, and how much compression and deduplication cut down on the -storage usage, you can use s3qlstat on the mounted file system to get -a report:

- -

-# s3qlstat /s3ql
-Directory entries:    9141
-Inodes:               9143
-Data blocks:          8851
-Total data size:      22049.38 MB
-After de-duplication: 21955.46 MB (99.57% of total)
-After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
-Database size:        2.39 MB (uncompressed)
-(some values do not take into account not-yet-uploaded dirty blocks in cache)
-#
-

- -

I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the pricing models are quite different and you will have to figure out what suits you best.

- -

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which tells me that the file system is getting critical scrutiny from the research community, and this increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack’s SwiftObject Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

- -

Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provided POSIX semantics when it comes to locking, umask handling and so on). Running my test code to check file system semantics, I was happy to discover that no error was found. So the file system can be used for home directories, if one chooses to do so.

- -

If you do not want a locally mounted file system, and want something that works without the Linux FUSE file system, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.

- -

As usual, if you use Bitcoin and want to show your support of my -activities, please send Bitcoin donations to my address -15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

+ +
9th August 2017
+

On Friday, I came across an interesting article in the Norwegian web based ICT news magazine digi.no on how to collect the IMSI numbers of nearby cell phones using cheap DVB-T software defined radios. The article referred to instructions and a recipe by Keld Norman on Youtube on how to make a simple $7 IMSI Catcher, and I decided to test them out.

+ +

The instructions said to use Ubuntu, install pip using apt (to bypass apt), use pip to install pybombs (to bypass both apt and pip), and then ask pybombs to fetch and build everything you need from scratch. I wanted to see if I could do the same on the most recent Debian packages, but this did not work because pybombs tried to build stuff that no longer builds with the most recent openssl library, or had some other version skew problem. While trying to get this recipe working, I learned that the apt->pip->pybombs route was a long detour, and the only software dependency missing in Debian was the gr-gsm package. I also found out that the lead upstream developer of the gr-gsm (the name stands for GNU Radio GSM) project already provides a set of Debian packages in an Ubuntu PPA repository. All I needed to do was to dget the Debian source package and build it.

+ +

The IMSI collector is a python script listening for packets on the loopback network device and printing to the terminal some specific GSM packets with IMSI numbers in them. The code is fairly short and easy to understand. The reason this works is that gr-gsm includes a tool to read GSM data from a software defined radio like a DVB-T USB stick and other software defined radios, decode it and inject it into a network device on your Linux machine (using the loopback device by default). This proved to work just fine, and I've been testing the collector for a few days now.
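A quick way to convince yourself that decoded GSM frames really are arriving on the loopback device is to watch for GSMTAP traffic there; GSMTAP conventionally uses UDP port 4729, so something like the following should show a steady stream of packets while the receiver is running (this check is my own addition, not part of the original recipe):

sudo tcpdump -i lo -n udp port 4729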

+ +

The updated and simpler recipe is thus the following; a rough command line sketch of these steps is included after the list:

+ +
    + +
  1. start with a Debian machine running Stretch or newer,
  2. + +
  3. build and install the gr-gsm package available from +http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/,
  4. + +
  5. clone the git repository from https://github.com/Oros42/IMSI-catcher,
  6. + +
  7. run grgsm_livemon and adjust the frequency until the terminal +where it was started is filled with a stream of text (meaning you +found a GSM station).
  8. + +
  9. go into the IMSI-catcher directory and run 'sudo python simple_IMSI-catcher.py' to extract the IMSI numbers.
  10. + +
+ +
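For those who want a starting point, the steps above could look something like this on the command line. The gr-gsm file name below is a placeholder (pick the newest .dsc file listed in the PPA pool directory), and the exact build dependencies and resulting binary package names may differ:

# dget is in the devscripts package; install the gr-gsm build dependencies first
dget http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/gr-gsm_VERSION.dsc
cd gr-gsm-*/ && dpkg-buildpackage -us -uc -b && cd ..
sudo dpkg -i gr-gsm_*.deb        # binary package names may differ

git clone https://github.com/Oros42/IMSI-catcher

# terminal 1: tune until a stream of GSM frames fills the output
grgsm_livemon

# terminal 2: print the IMSI numbers seen on the loopback device
cd IMSI-catcher && sudo python simple_IMSI-catcher.py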

To make it even easier in the future to get this sniffer up and +running, I decided to package +the gr-gsm project +for Debian (WNPP +#871055), and the package was uploaded into the NEW queue today. +Luckily the gnuradio maintainer has promised to help me, as I do not +know much about gnuradio stuff yet.

+ +

I doubt this "IMSI catcher" is anywhere near as powerful as commercial tools like The Spy Phone Portable IMSI / IMEI Catcher or the Harris Stingray, but I hope the existence of cheap alternatives can make more people realise how easily their whereabouts can be tracked when they carry a cell phone. Seeing the data flow on the screen, realizing that I live close to a police station and knowing that police officers also carry cell phones, I wonder how hard it would be for criminals to track the position of the police officers to discover when there are police nearby, or for foreign military forces to track the location of the Norwegian military forces, or for anyone to track the location of government officials...

+ +

It is worth noting that the data reported by the IMSI-catcher script mentioned above is only a fraction of the data broadcast on the GSM network. It will only collect one frequency at a time, while a typical phone will be using several frequencies, and not all phones will be using the frequencies tracked by the grgsm_livemon program. Also, there is a lot of radio chatter being ignored by the simple_IMSI-catcher script, which could be collected by extending the parser code. I wonder if gr-gsm can be set up to listen to more than one frequency?

@@ -308,86 +117,37 @@ activities, please send Bitcoin donations to my address
- -
8th April 2014
-

Today the ruling from the European Court of Justice on the Data Retention Directive finally arrived, and not surprisingly the directive was found to be unlawful and in violation of citizens' fundamental rights. If you are wondering what the Data Retention Directive is, there is a great documentary available from NRK which I have previously recommended everyone to watch.

- -

Here is a small collection of news reports on the case, and I expect more to appear during the day. More can be found via mylder.

- -

-

- -

I think it is very good that yet another voice confirms that totalitarian surveillance of the population is unacceptable, but protecting the private sphere is just as important as before, as the technical capabilities still exist and are being exploited, and I believe efforts in projects like Freedombox and Dugnadsnett are more important than ever.

- -

Update 2014-04-08 12:10: The fundraising campaign to stop the Data Retention Directive in Norway is run by the association Digitalt Personvern, which has collected NOK 843,215 so far, but will need much more unless Høyre or Arbeiderpartiet change their position on the matter. Only the parties Høyre and Arbeiderpartiet voted for the Data Retention Directive, and one of them must change its position for there to be a majority against it in the Storting. See more about the case at Holder de ord.

+ +
25th July 2017
+

+ +

I finally received a copy of the Norwegian Bokmål edition of "The Debian Administrator's Handbook". This test copy arrived in the mail a few days ago, and I am very happy to hold the result in my hand. We spent around one and a half years translating it. This paperback edition is available from lulu.com. If you buy it quickly, you save 25% on the list price. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as a web page.

+ +

This is the second book I have published (the first was the book "Free Culture" by Lawrence Lessig in English, French and Norwegian Bokmål), and I am very excited to finally wrap up this project. I hope "Håndbok for Debian-administratoren" will be well received.

@@ -395,62 +155,50 @@ de ord.

- -
1st April 2014
-

Microsoft has announced that Windows XP reaches its end of life 2014-04-08, in 7 days. But there are heaps of machines still running Windows XP, and depending on Windows XP to run their applications, and upgrading will be expensive, both when it comes to money and when it comes to the amount of effort needed to migrate from Windows XP to a new operating system. Some obvious options (buy a new Windows machine, buy a MacOSX machine, install Linux on the existing machine) are already well known and covered elsewhere. Most of them involve leaving the user applications installed on Windows XP behind and trying out replacements or updated versions. In this blog post I want to mention one strange bird that allows people to keep the hardware and the existing Windows XP applications, and run them on a free software operating system that is Windows XP compatible.

- -

ReactOS is a free software operating system (GNU GPL licensed) working on providing an operating system that is binary compatible with Windows, able to run Windows programs directly and to use Windows hardware drivers directly. The project goal is for Windows users to keep their existing machines, drivers and software, and gain the advantages of using an operating system without the usage limitations caused by non-free licensing. It is a Windows clone running directly on the hardware, so quite different from the approach taken by the Wine project, which makes it possible to run Windows binaries on Linux.

- -

The ReactOS project shares code with the Wine project, so most shared libraries available on Windows are already implemented. There is also a software manager like the one we are used to on Linux, allowing the user to install free software applications with a simple click directly from the Internet. Check out the screen shots on the project web site for an idea of what it looks like (it looks just like Windows before Metro).

- -

I do not use ReactOS myself, preferring Linux and Unix-like operating systems. I've tested it, and it works fine in a virt-manager virtual machine. The browser, minesweeper, notepad etc. are working fine as far as I can tell. Unfortunately, my main test application is the software included on a CD with the Lego Mindstorms NXT, which seems to install just fine from CD but fails to leave any binaries on the disk after the installation. So no luck with that test software. No idea why, but I hope someone else will figure out and fix the problem. I've tried the ReactOS Live ISO on a physical machine, and it seemed to work just fine. If you like Windows and want to keep running your old Windows binaries, check it out by downloading the installation CD, the live CD or the preinstalled virtual machine image.

+ +
27th June 2017
+

Today I came across the text «Killing car privacy by federal mandate» by Leonid Reyzin on Freedom to Tinker, and it pleases me to see a good review of why it is an unreasonable intrusion into the private sphere to have every car broadcast its position and movements by radio. The proposal in question, based on Dedicated Short Range Communication (DSRC), is called Basic Safety Message (BSM) in the USA and Cooperative Awareness Message (CAM) in Europe, and the Norwegian Public Roads Administration (Vegvesenet) appears to be among those willing to require all cars to give up yet another piece of the citizens' private sphere. I recommend everyone to read what is written there.

While looking a bit into DSRC on cars in Norway, I came across a quote in the SINTEF report «Informasjonssikkerhet i AutoPASS-brikker» by Trond Foss that I find illustrative of how the Norwegian public sector handles issues concerning citizens' privacy:

+ +

«The report does not look at information security related to personal integrity.»

+ +

Apparently it can be that simple when assessing information security. It is presumably enough that the people at the top can say that «Personvernet er ivaretatt» (privacy has been taken care of), the popular but meaningless phrase that makes many believe the integrity of individuals is being protected. The quote made me wonder how often the same approach, simply ignoring the need for personal integrity, is chosen when yet another intrusion into the private sphere of people in Norway is being facilitated. It rarely causes any reactions. The story about the reactions to the outsourcing by Helse Sør-Øst is sadly an exception, and the tip of the iceberg. I think I will keep saying no to AutoPASS and stay as far away from the Norwegian health care system as I can, until they have demonstrated and documented that they value the privacy and personal integrity of the individual higher than short-term gain and public benefit.

- Tags: english, reactos. + Tags: norsk, personvern, sikkerhet.
@@ -458,92 +206,66 @@ image.

- -
30th March 2014
-

Debian Edu / Skolelinux keeps gaining new users. Some weeks ago, a person showed up on IRC, #debian-edu, with a wish to contribute, and I managed to get an interview with this great contributor, Roger Marsal, to learn more about his background.

- -

Who are you, and how do you spend your days?

- -

My name is Roger Marsal, I'm 27 years old (1986 generation) and I -live in Barcelona, Spain. I've got a strong business background and I -work as a patrimony manager and as a real estate agent. Additionally, -I've co-founded a British based tech company that is nowadays on the -last development phase of a new social networking concept.

- -

I'm a Linux enthusiast who started his journey with Ubuntu four years ago and recently switched to Debian, seeking rock-solid stability and as a necessary step to gain expertise.

- -

In a nutshell, I spend my days working and learning as much as I can to handle both my job and my entrepreneurial project, and to feed my Linux hunger.

- -

How did you get in contact with the Skolelinux / Debian Edu -project?

- -

I discovered the LTSP advantages with "Ubuntu 12.04 alternate install" and after a year of use I started looking for an alternative. Even though I highly value and respect the Ubuntu project, I thought it was necessary for me to change to a more robust and stable alternative. Since I was already using Debian on my personal laptop, I thought it would be fine to install Debian and configure an LTSP server myself. To my surprise, I discovered that the Debian project also supported a kind of Edubuntu equivalent, and after some pain I got a Debian Edu network up and running. I just loved it.

- -

What do you see as the advantages of Skolelinux / Debian -Edu?

- -

I found a main advantage in that, once you know "the tips and tricks", a new installation just works out of the box. It's the most complete alternative I've found to create an LTSP network. All the other distributions seem to be made of plastic; Debian Edu seems to be made of steel.

- -

What do you see as the disadvantages of Skolelinux / Debian -Edu?

- -

I found two main disadvantages.

- -

I'm not an expert, but I've got some notions, and I had to spend a considerable amount of time trying to bring up a standard network topology. I'm quite stubborn and I just kept working until I succeeded, but I'm sure many people with few resources (not big schools, but academies for example) would have switched or given up.

- -

It's amazing how a system as complex as Debian Edu has achieved this out-of-the-box state. On the other hand, tweaking without breaking gets more difficult, as more factors have to be considered, and this can discourage many people too.

- -

Which free software do you use daily?

- -

I use Debian, Firefox, Okular, Inkscape, LibreOffice and -Virtualbox.

- - -

Which strategy do you believe is the right one to use to -get schools to use free software?

- -

I don't think there is a need for a particular strategy. The free attribute, in both the "freedom" and "no price" meanings, is what will really bring free software to schools. In my experience I can think of the "R" statistical language; a few years ago it was an extremely nerdy tool for university people. Today it is increasingly used to teach statistics at many different levels of study. I believe free and open software will increasingly gain popularity, and I'm sure schools will be one of the first places where this will happen.

+ +
12th June 2017
+

It is pleasing to see that the work we put into publishing new editions of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig, is still being appreciated. I had a look at the latest sales numbers for the paper edition today. Not too impressive, but I am happy to see that some buyers still exist. All the revenue from the books is sent to the Creative Commons Corporation, and they receive the largest cut if you buy directly from Lulu. Most books are sold via Amazon, with Ingram second and only a small fraction directly from Lulu. The ebook edition is available for free from Github.

Title / language          Quantity
                          2016 jan-jun   2016 jul-dec   2017 jan-may
Culture Libre / French               3              6             15
Fri kultur / Norwegian               7              1              0
Free Culture / English              14             27             16
Total                               24             34             31
+ +

It is a bit sad to see the low sales numbers for the Norwegian edition, and a bit surprising that the English edition is still selling so well.

+ +

If you would like to translate and publish the book in your native +language, I would be happy to help make it happen. Please get in +touch.

@@ -551,39 +273,59 @@ first scenarios where this will happen.

- -
26th March 2014
-

The NUUG association reported last night that NRK has now decided when the Norwegian documentary about the Data Retention Directive will be broadcast (see IMDB for details about the film). The first showing will be on NRK2 on Monday 2014-03-31 at 19:50, with further showings on Wednesday 2014-04-02 at 12:30, Friday 2014-04-04 at 19:40 and Sunday 2014-04-06 at 15:10. I have seen the documentary, and I recommend everyone to watch it themselves. As a warm-up while we wait, I recommend Bjørn Stærk's op-ed in Aftenposten from yesterday, Autoritær gjøkunge, where he gives a fair sketch of how bad things are for the right to privacy and the protection of democracy in Norway and the rest of the world, and quite rightly states that it is we in the IT industry who hold the key to doing something about it. I have involved myself in the projects dugnadsnett.no and FreedomBox to try to do a little myself to improve the situation, but a lot of hard work remains, from many more than me, before we can be said to have restored the balance.

- -

I expect the web edition will show up on NRK's page about the Data Retention Directive film in five days. Keep an eye on the page, and tell friends and family that they should watch it too.

+ +
10th June 2017
+

I am very happy to report that the +Nikita Noark 5 +core project tagged its second release today. The free software +solution is an implementation of the Norwegian archive standard Noark +5 used by government offices in Norway. These were the changes in +version 0.1.1 since version 0.1.0 (from NEWS.md): + +

    + +
  • Continued work on the angularjs GUI, including document upload.
  • +
  • Implemented correspondencepartPerson, correspondencepartUnit and + correspondencepartInternal
  • +
  • Applied for Coverity coverage and started submitting code on a regular basis.
  • +
  • Started fixing bugs reported by coverity
  • +
  • Corrected and completed HATEOAS links to make sure entire API is + available via URLs in _links.
  • +
  • Corrected all relation URLs to use trailing slash.
  • +
  • Add initial support for storing data in ElasticSearch.
  • +
  • Now able to receive and store uploaded files in the archive.
  • +
  • Changed JSON output for object lists to have relations in _links.
  • +
  • Improve JSON output for empty object lists.
  • +
  • Now uses correct MIME type application/vnd.noark5-v4+json.
  • +
  • Added support for docker container images.
  • +
  • Added simple API browser implemented in JavaScript/Angular.
  • +
  • Started on archive client implemented in JavaScript/Angular.
  • +
  • Started on prototype to show the public mail journal.
  • +
  • Improved performance by disabling Spring FileWatcher.
  • +
  • Added support for 'arkivskaper', 'saksmappe' and 'journalpost'.
  • +
  • Added support for some metadata codelists.
  • +
  • Added support for Cross-origin resource sharing (CORS).
  • +
  • Changed login method from Basic Auth to JSON Web Token (RFC 7519) + style.
  • +
  • Added support for GET-ing ny-* URLs.
  • +
  • Added support for modifying entities using PUT and eTag.
  • +
  • Added support for returning XML output on request.
  • +
  • Removed support for English field and class names, limiting ourself + to the official names.
  • +
  • ...
  • + +
+ +

If this sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (nikita-noark mailing list).

@@ -591,102 +333,99 @@ side om filmen om datalagringsdirektivet om fem dager. Hold et
- -
25th March 2014
-

Did you ever need to store logs or other files in a way that would allow them to be used as evidence in court, and need a way to demonstrate beyond reasonable doubt that the files had not been changed since they were created? Or did you ever need to document that a given document was received at some point in time, like some archived document or the answer to an exam, and not changed after it was received? The problem in these settings is to remove the need to trust yourself and your computers, while still being able to prove that a file is the same as it was at some given time in the past.

- -

A solution to these problems is to have a trusted third party "stamp" the document and verify that at some given time the document looked a given way. Such notary services have been around for thousands of years, and their digital equivalent is called a trusted timestamping service. The Internet Engineering Task Force standardised how such a service could work a few years ago in RFC 3161. The mechanism is simple. Create a hash of the file in question and send it to a trusted third party, which adds a time stamp to the hash, signs the result with its private key, and sends back the signed hash + timestamp. Email, FTP and HTTP can all be used to request such a signature, depending on what is provided by the service used. Anyone with the document and the signature can then verify that the document matches the signature by creating their own hash and checking the signature using the trusted third party's public key. There are several commercial services around providing such timestamping. A quick search for "rfc 3161 service" pointed me to at least DigiStamp, Quo Vadis, Global Sign and Global Trust Finder. The system works as long as the private key of the trusted third party is not compromised.

- -

But as far as I can tell, there are very few public trusted timestamp services available for everyone. I've been looking for one for a while now. But yesterday I found one over at Deutsches Forschungsnetz, mentioned in a blog by David Müller. I then found a good recipe on how to use the service over at the University of Greifswald.

- -

The OpenSSL library contains both a server and tools to use and set up your own signing service. See the ts(1SSL) and tsget(1SSL) manual pages for more details. The following shell script demonstrates how to fetch a signed timestamp for any file on the disk in a Debian environment:

+ +
7th June 2017
+

This is a copy of +an +email I posted to the nikita-noark mailing list. Please follow up +there if you would like to discuss this topic. The background is that +we are making a free software archive system based on the Norwegian +Noark +5 standard for government archives.

+ +

I've been wondering a bit lately how trusted timestamps could be stored in Noark 5. Trusted timestamps can be used to verify that some information (document/file/checksum/metadata) has not been changed since a specific time in the past. This is useful to verify the integrity of the documents in the archive.

+ +

Then it occurred to me: perhaps the trusted timestamps could be stored as dokument variants (i.e. dokumentobjekt referred to from dokumentbeskrivelse) with the filename set to the hash it is stamping?

+ +

Given a "dokumentbeskrivelse" with an associated "dokumentobjekt", +a new dokumentobjekt is associated with "dokumentbeskrivelse" with the +same attributes as the stamped dokumentobjekt except these +attributes:

+ +
    + +
  • format -> "RFC3161" +
  • mimeType -> "application/timestamp-reply" +
  • formatDetaljer -> "<source URL for timestamp service>" +
  • filenavn -> "<sjekksum>.tsr" + +
+ +

This assumes a service following IETF RFC 3161 is used, which specifies the given MIME type for replies and the .tsr file ending for the content of such a trusted timestamp. As far as I can tell from the Noark 5 specifications, it is OK to have several variants/renderings of a dokument attached to a given dokumentbeskrivelse objekt. It might be stretching it a bit to make some of these variants represent crypto-signatures useful for verifying the document integrity instead of representing the dokument itself.

+ +

Using the source of the service in formatDetaljer allows several timestamping services to be used. This is useful to spread the risk of key compromise over several organisations. It would only be a problem to trust the timestamps if all of the organisations were compromised.

+ +

The following oneliner on Linux can be used to generate the tsr file. $inputfile is the path to the file to checksum, and $sha256 is the SHA-256 checksum of the file (i.e. the "<sjekksum>" part of the file name mentioned above).

-#!/bin/sh
-set -e
-url="http://zeitstempel.dfn.de"
-caurl="https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt"
-reqfile=$(mktemp -t tmp.XXXXXXXXXX.tsq)
-resfile=$(mktemp -t tmp.XXXXXXXXXX.tsr)
-cafile=chain.txt
-if [ ! -f $cafile ] ; then
-    wget -O $cafile "$caurl"
-fi
-openssl ts -query -data "$1" -cert | tee "$reqfile" \
-    | /usr/lib/ssl/misc/tsget -h "$url" -o "$resfile"
-openssl ts -reply -in "$resfile" -text 1>&2
-openssl ts -verify -data "$1" -in "$resfile" -CAfile "$cafile" 1>&2
-base64 < "$resfile"
-rm "$reqfile" "$resfile"
+openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
+  | curl -s -H "Content-Type: application/timestamp-query" \
+      --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr
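# ($sha256 above would have been computed beforehand, for example with
#  sha256=$(sha256sum "$inputfile" | awk '{print $1}') -- this helper is my
#  own assumption, not part of the original mail.)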
 

-

The argument to the script is the file to timestamp, and the output -is a base64 encoded version of the signature to STDOUT and details -about the signature to STDERR. Note that due to -a bug -in the tsget script, you might need to modify the included script -and remove the last line. Or just write your own HTTP uploader using -curl. :) Now you too can prove and verify that files have not been -changed.

- -

But the Internet needs more public trusted timestamp services. Perhaps something for Uninett or my workplace, the University of Oslo, to set up?

+

To verify the timestamp, you first need to download the public key +of the trusted timestamp service, for example using this command:

+ +

+wget -O ca-cert.txt \
+  https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
+

+ +

Note, the public key should be stored alongside the timestamps in the archive to make sure it is also available 100 years from now. It is probably a good idea to standardise how and where to store such public keys, to make it easier to find for those trying to verify documents 100 or 1000 years from now. :)

+ +

The verification itself is a simple openssl command:

+ +

+openssl ts -verify -data $inputfile -in $sha256.tsr \
+  -CAfile ca-cert.txt -text
+

+ +

Is there any reason this approach would not work? Is it somehow against +the Noark 5 specification?

@@ -694,54 +433,61 @@ to set up?

- -
21st March 2014
-

Keeping your DVD collection safe from scratches and curious children's fingers, while still having it available when you want to see a movie, is not straightforward. My preferred method at the moment is to store a full copy of the ISO on a hard drive, and use VLC, Popcorn Hour or other useful players to view the resulting file. This way the subtitles and bonus material are still available, and using the ISO is just like inserting the original DVD in the DVD player.

- -

Earlier I used dd for taking security copies, but it does not handle DVDs with read errors (and there are quite a few of those). I've also tried using dvdbackup and genisoimage, but these days I use the marvellous python library and program python-dvdvideo written by Bastian Blank. It is in Debian already, and the binary package name is python3-dvdvideo. Instead of trying to read every block from the DVD, it parses the file structure and figures out which blocks on the DVD are actually in use, and only reads those blocks from the DVD. This works surprisingly well, and I have been able to back up almost my entire DVD collection using this method.

- -

So far, python-dvdvideo has failed on between 10 and 20 DVDs, which is a small fraction of my collection. The most common problem is DVDs using UTF-16 instead of UTF-8 characters, which according to Bastian is against the DVD specification (and seems to cause some players to fail too). A rarer problem is what seems to be inconsistent DVD structures, where the python library claims there is an overlap between objects. An equally rare problem is a claim that some value is out of range. No idea what is going on there. I wish I knew enough about the DVD format to fix these, to ensure my movie collection will stay with me in the future.

- -

So, if you need to keep your DVDs safe, back them up using -python-dvdvideo. :)

+ +
3rd June 2017
+

Aftenposten reports today on errors in the exam questions for the exam in politics and human rights, where the text in the Bokmål and Nynorsk editions was not the same. The exam text is quoted in the article, and I became curious whether the free translation solution Apertium would have done a better job than Utdanningsdirektoratet. It looks like it might have.

+ +

Here is the Bokmål version of the exam question:

+ +
+

Drøft utfordringene knyttet til nasjonalstatenes og andre aktørers +rolle og muligheter til å håndtere internasjonale utfordringer, som +for eksempel flykningekrisen.

+ +

Vedlegge er eksempler på tekster som kan gi relevante perspektiver +på temaet:

+
    +
  1. Flykningeregnskapet 2016, UNHCR og IDMC +
  2. «Grenseløst Europa for fall» A-Magasinet, 26. november 2015 +
+ +
+ +

Apertium translates this as follows:

+ +
+

Drøft utfordringane knytte til nasjonalstatane sine og rolla til +andre aktørar og høve til å handtera internasjonale utfordringar, som +til dømes *flykningekrisen.

+ +

Vedleggja er døme på tekster som kan gje relevante perspektiv på +temaet:

+ +
    +
  1. *Flykningeregnskapet 2016, *UNHCR og *IDMC
  2. +
  3. «*Grenseløst Europa for fall» A-Magasinet, 26. november 2015
  4. +
+ +
+ +

Words that were not understood are marked with an asterisk (*) and need an extra language check. But no words have disappeared, as they did in the exam question presented to the students. I do suspect that "andre aktørers rolle og muligheter til ..." should have been translated to "rolla til andre aktørar og deira høve til ..." or something like that, but that is perhaps nitpicking. It merely underlines that proofreading is always needed after automatic translation.
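For those who want to reproduce the comparison, the translation can presumably be done with the apertium command line tool. This assumes the Norwegian language pair is installed and that the Bokmål to Nynorsk mode is named nob-nno; check the files under /usr/share/apertium/modes/ for the exact mode name on your system:

echo 'Drøft utfordringene knyttet til nasjonalstatenes og andre aktørers rolle og muligheter til å håndtere internasjonale utfordringer, som for eksempel flykningekrisen.' | apertium nob-nno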

@@ -749,56 +495,67 @@ python-dvdvideo. :)

- -
16th March 2014
-

The Norwegian public sector holds a lot of knowledge and information. But how can one get access to it in a simple way? Thanks to a small set of laws and accompanying regulations, among them the Freedom of Information Act (offentlighetsloven), the Environmental Information Act (miljøinformasjonsloven) and the Public Administration Act (forvaltningsloven), everyone has the right to ask the public authorities and get an answer. But there is no public archive of what others have asked about, so one risks having to bother the authorities again and again to get hold of the same information. The British organisation mySociety has created the service WhatDoTheyKnow, which does something about this. In the United Kingdom, WhatDoTheyKnow is used for about 15% of all freedom of information requests to the central government. The project is called Alaveteli, and it has been taken into use in a number of places after the solution was generalised and made translatable. It helps citizens request access, advises on reminders and complaints, and lets everyone see which requests have been sent to the authorities and which answers have come back, in a searchable archive. Here in Norway, we in the NUUG association are working on getting a Norwegian edition of Alaveteli up and running, and we need your help with the translation.

- -

So far, 76% of Alaveteli has been translated into Norwegian Bokmål, but we would like to reach 100% before launch. The translation is done on Transifex, where anyone who registers and asks for access to the Bokmål translation can contribute. We have set up a test instance of the service (which does not send email to the authorities, only to those of us setting it up) on the machine alaveteli-dev.nuug.no, where one can see how the translated messages look on the web site. When the service launches it will be called Mimes brønn, after the well of wisdom that Odin had to give his eye to drink from. That web site is not yet ready for use.

- -

If someone wants to translate into Nynorsk as well, we will figure out how to make a multilingual service. But for now the focus is on the Bokmål translation, where we know enough to have translated 76% ourselves, but need help to get all the way to the finish line. :)

+ +
27th April 2017
+

These days the National Archivist of Norway (Riksarkivaren) has a public consultation open on its regulation, with a deadline of May 1st. As one can see, there is not much time left before the deadline, which expires on Sunday. This regulation is what lists which formats are acceptable for archiving in Noark 5 solutions in Norway.

+ +

I found the consultation documents at Norsk Arkivråd after being tipped off on the mailing list of the free software project Nikita Noark5-Core, which is creating a Noark 5 service interface (tjenestegrensesnitt). I am involved in the Nikita project, and thanks to my interest in the service interface project I have read quite a few Noark 5 related documents, and to my surprise discovered that standard email is not on the list of approved formats that can be archived. The consultation with its Sunday deadline is an excellent opportunity to try to do something about that. I am working on my own consultation response, and wonder if others are interested in supporting the proposal to allow archiving email as email in the archive.

+ +

Are you already writing your own consultation response? If so, please consider including a statement about email storage. I do not think much is needed. Here is a short text proposal:

+ +

+ +

We refer to the consultation sent out 2017-02-17 (the National Archivist's reference 2016/9840 HELHJO), and take the liberty of submitting some input on the revision of the regulation on supplementary technical and archival provisions on the handling of public archives (the National Archivist's regulation).

+ +

A very large part of our communication today takes place by email. We therefore propose that Internet email, as described in IETF RFC 5322, https://tools.ietf.org/html/rfc5322, should be added as an approved document format. We propose that the regulation's list of approved document formats for submission in § 5-16 is changed to include Internet email.

+ +

+ +

As part of the work on the service interface, we have tested how email can be stored in a Noark 5 structure, and are writing a proposal on how this can be done, which will be sent to the National Archives (Arkivverket) as soon as it is finished. Those interested can follow the progress on the web.

+ +

Update 2017-04-28: Today the consultation response I wrote was submitted by the NUUG association.

@@ -806,70 +563,52 @@ fått oversatt 76%, men trenger hjelp for å komme helt i mål. :)

- -
14th March 2014
-

The Freedombox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to communicate with their friends and family encrypted and away from prying eyes. It has been going on for a while, and is slowly progressing towards a new test release (0.2).

- -

And what day could be better than Pi Day to announce that the new version will provide "hard drive" / SD card / USB stick images for Dreamplug, Raspberry Pi and VirtualBox (or any other virtualization system), and can also be installed using a Debian installer preseed file. The Debian-based Freedombox is now based on Debian Jessie, where most of the needed packages are already present. Only one, the freedombox-setup package, is missing. To try to build your own boot image to test the current status, fetch the freedom-maker scripts and build using vmdebootstrap with a user with sudo access to become root:

-git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
-  freedom-maker
-sudo apt-get install git vmdebootstrap mercurial python-docutils \
-  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
-  u-boot-tools
-make -C freedom-maker dreamplug-image raspberry-image virtualbox-image
-
- -

Root access is needed to run debootstrap and mount loopback -devices. See the README for more details on the build. If you do not -want all three images, trim the make line. But note that thanks to a race condition in -vmdebootstrap, the build might fail without the patch to the -kpartx call.

- -

If you instead want to install using a Debian CD and the preseed -method, boot a Debian Wheezy ISO and use this boot argument to load -the preseed values:

- -
-url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat
-
- -

But note that due to a recently introduced bug in apt in Jessie, the installer will currently hang while setting up APT sources. Killing the 'apt-cdrom ident' process when it hangs a few times during the installation will get the installation going. This affects all installations in Jessie, and I expect it will be fixed soon.

- -

Give it a go and let us know how it goes on the mailing list, and help -us get the new release published. :) Please join us on -IRC (#freedombox on -irc.debian.org) and -the -mailing list if you want to help make this vision come true.

+ +
20th April 2017
+

I discovered today that the web site publishing public mail journals from state agencies, OEP, has started to block certain types of web clients from getting access. I do not know how many are affected, but at least libwww-perl and curl are. To test it yourself, run the following:

+ +
+% curl -v -s https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
+< HTTP/1.1 404 Not Found
+% curl -v -s --header 'User-Agent:Opera/12.0' https://www.oep.no/pub/report.xhtml?reportId=3 2>&1 |grep '< HTTP'
+< HTTP/1.1 200 OK
+%
+
+ +

Here one can see that the service returns «404 Not Found» for curl in its default setup, while it returns «200 OK» if curl claims to be Opera version 12.0. Offentlig elektronisk postjournal (OEP) started the blocking on 2017-03-02.

+ +

The blocking will make it a bit harder to fetch information from oep.no automatically. Could the blocking have been put in place to prevent automated collection of information from OEP, like Pressens Offentlighetsutvalg did to document how the ministries obstruct access to information in the report «Slik hindrer departementer innsyn», published in January 2017? It seems unlikely, as it is trivial to change the User-Agent to something new.

+ +

Is there any legal basis for the public sector to discriminate between web clients the way it is done here, where access is granted or not depending on what the client says its name is? As OEP is owned by DIFI and operated by Basefarm, there might be some documents sent between these two parties one could request access to in order to understand what has happened. But DIFI's mail journal shows only two documents between DIFI and Basefarm in the last year. Mimes brønn next, I think.

@@ -877,94 +616,101 @@ mailing list if you want to help make this vision come true.

- -
12th March 2014
-

On larger sites, it is useful to use a dedicated storage server for storing user home directories and data. The design for handling this in Debian Edu / Skolelinux is to update the automount rules in LDAP and let the automount daemon on the clients take care of the rest. I was reminded about the need to document this better when one of the customers of Skolelinux Drift AS, where I am on the board of directors, asked how to do this. The steps to get this working are the following:

- -

    - -
  1. Add new storage server in DNS. I use nas-server.intern as the -example host here.
  2. - -
  3. Add automount information about this server in LDAP, to allow all clients to automatically mount it on request.
  4. - -
  5. Add the relevant entries in tjener.intern:/etc/fstab, because tjener.intern does not use automount, to avoid mounting loops.
  6. - -

- -

DNS entries are added in GOsa², and not described here. Follow the -instructions -in the manual (Machine Management with GOsa² in section Getting -started).

- -

Ensure that the NFS export points on the server are exported to the -relevant subnets or machines:

- -

-root@tjener:~# showmount -e nas-server
-Export list for nas-server:
-/storage         10.0.0.0/8
-root@tjener:~#
-

- -

Here everything on the backbone network is granted access to the -/storage export. With NFSv3 it is slightly better to limit it to -netgroup membership or single IP addresses to have some limits on the -NFS access.

- -

The next step is to update LDAP. This can not be done using GOsa², because it lacks a module for automount. Instead, use ldapvi and add the required LDAP objects using an editor.

- -

-ldapvi --ldap-conf -ZD '(cn=admin)' -b ou=automount,dc=skole,dc=skolelinux,dc=no
-

- -

When the editor shows up, add the following LDAP objects at the bottom of the document. The "/&" part in the last LDAP object is a wildcard matching everything the nas-server exports, removing the need to list individual mount points in LDAP.

+ +
19th March 2017
+

The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, my first personal use case is to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.

+ +

If you would like to help make sure there is a free software +alternatives for the archives, please join our IRC channel +(#nikita on +irc.freenode.net) and +the +project mailing list.

+ +

When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment, and finds existing fonds, series and files, asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:

-add cn=nas-server,ou=auto.skole,ou=automount,dc=skole,dc=skolelinux,dc=no
-objectClass: automount
-cn: nas-server
-automountInformation: -fstype=autofs --timeout=60 ldap:ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
-
-add ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
-objectClass: top
-objectClass: automountMap
-ou: auto.nas-server
-
-add cn=/,ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
-objectClass: automount
-cn: /
-automountInformation: -fstype=nfs,tcp,rsize=32768,wsize=32768,rw,intr,hard,nodev,nosuid,noatime nas-server.intern:/&
+~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
+using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
+using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446
+
+ 0 - Title of the test case file created 2017-03-18T23:49:32.103446
+ 1 - Title of the test file created 2017-03-18T23:49:32.103446
+Select which mappe you want (or search term): 0
+Uploading mangelmelding/mangler.pdf
+  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
+  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
+~/src//noark5-tester$
 

-

The last step to remember is to mount the relevant mount points in -tjener.intern by adding them to /etc/fstab, creating the mount -directories using mkdir and running "mount -a" to mount them.
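To make that step more concrete, here is a sketch of what the fstab entry and mount commands on tjener.intern could look like. The mount point and options are assumptions on my part (the options simply mirror the automount entry above), so adjust them to the local setup:

# append to /etc/fstab on tjener.intern:
nas-server.intern:/storage  /tjener/nas-server/storage  nfs  rw,intr,hard,nodev,nosuid,noatime  0  0

mkdir -p /tjener/nas-server/storage
mount -a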

- -

When this is done, your users should be able to access the files on -the storage server directly by just visiting the -/tjener/nas-server/storage/ directory using any application on any -workstation, LTSP client or LTSP server.

+

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.

+ +

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.
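To get a feel for what the tester traverses, one can fetch the service root of a running instance and look at the _links it returns. The URL below is only an example of a local development instance, not an official endpoint, and the exact host, port and path may differ:

curl -s -H 'Accept: application/vnd.noark5-v4+json' http://localhost:8092/noark5v4/ | python -m json.tool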

+ +

The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in the absence of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is +fairly easy to get up and running using Docker containers for those +that want to test the current code base. The API tester is +implemented in Python.

@@ -979,6 +725,83 @@ workstation, LTSP client or LTSP server.

Archive