
Entries tagged "sikkerhet".

+
+
+ s3ql, a locally mounted cloud file system - nice free software +
+
+ 9th April 2014 +
+
+

For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage, writing a Linux block device that stored blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to get lots of cheap and encrypted storage (fairly slow, I admit that). But I never found time to implement such a system. In the last few weeks, however, I have looked at a system called S3QL, a locally mounted network backed file system with the features I need.

+ +

S3QL is a FUSE file system with a local cache and cloud storage, handling several different storage providers (any with an Amazon S3, Google Drive or OpenStack API). There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point and to look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.
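As a minimal sketch of the local directory backend mentioned above (the storage directory and mount point are made up for illustration, and the authinfo2 and passphrase handling described below still applies):

# Use a plain local directory as the S3QL storage backend
mkdir -p /srv/s3ql-storage
mkfs.s3ql local:///srv/s3ql-storage
mount.s3ql local:///srv/s3ql-storage /mnt/s3ql-local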

+ +

It is simple to use. I'm using it on Debian Wheezy, where the package is already included. To get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use s3ql with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. Once the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.

+ +

Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
+
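S3QL is picky about the permissions on this credentials file; if mkfs.s3ql or mount.s3ql complains that the file is too open, tightening it should help (a hedged suggestion, not part of the Greenqloud recipe):

# Make sure only root can read the stored API credentials and passphrase
chmod 600 /root/.s3ql/authinfo2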

+ +

I create my local passphrase using pwget 50 or similar, but any sensible way to create a fairly random password should do the job. Armed with these details, it is now time to run mkfs, entering the API details and password to create the file system:

+ +

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login: 
Enter backend password: 
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password: 
Confirm encryption password: 
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
# 

+ +

The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#
+

+ +

The file system is now ready for use. I use rsync to store my backups in it (a minimal sketch follows the unmount example below), and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, do not use the normal umount command, as it will not flush the cache to the cloud storage; instead, run the umount.s3ql command like this:

# umount.s3ql /s3ql
# 
+
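To illustrate the rsync-based backup mentioned above, here is a minimal sketch; the source and destination paths are made up for illustration:

# Mirror the home directories into the mounted S3QL file system,
# preserving hard links and sparse files and pruning files deleted at the source
rsync -aSH --delete /home/ /s3ql/backup/home/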

+ +

There is an fsck command available to check the file system and correct any problems detected. This can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

+ +

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
# 
+

+ +

Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is for me limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.
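A rough sketch of how such dd measurements can be made; the ISO name and test file path are illustrative, and the numbers will of course depend on your own connection:

# Write a file larger than the cache into the mount (exercises the upload path)
dd if=debian-installer.iso of=/s3ql/speedtest bs=1M
# Read it back; with a small cache most blocks have to be fetched from the cloud again
dd if=/s3ql/speedtest of=/dev/null bs=1M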

+ +

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

+ +

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#
+

+ +

The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system, or ask s3ql to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
# 
+

+ +

If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

+ +

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#
+

+ +

I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the price models are quite different and you will have to figure out what suits you best.

+ +

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which told me that the file system is getting critical scrutiny from the research community, and this increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack's SwiftObject Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

+ +

Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provides POSIX semantics when it comes to locking, umask handling and the like). Running my test code to check file system semantics, I was happy to discover that no errors were found. So the file system can be used for home directories, if one chooses to do so.
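For a quick manual spot check of the same kind of behaviour (this is not the linked test code, just an illustrative pair of commands), one can verify that umask is honoured and that file locking works on the mount:

# Create a file with a restrictive umask and inspect the resulting mode
( umask 077 && touch /s3ql/posix-test && ls -l /s3ql/posix-test )
# Take an exclusive lock on the file while running a command
flock /s3ql/posix-test -c 'echo got the lock'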

+ +

If you do not want a locally mounted file system, and want something that works without the Linux FUSE layer, I would like to mention the Tarsnap service, which also provides locally encrypted backups using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.

+ +

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

+ +
+
+ + + Tags: debian, english, personvern, sikkerhet. + + +
+
+
+ +
+
+ The EU Court of Justice confirmed today that the Data Retention Directive is unlawful
+
+ 8th April 2014 +
+
+

Today the ruling from the European Court of Justice on the Data Retention Directive finally arrived, and not surprisingly the directive was found unlawful and in conflict with citizens' fundamental rights. If you are wondering what the Data Retention Directive is, there is a great documentary available from NRK which I have previously recommended everyone to watch.

+ +

Here is a small collection of news reports on the matter, and I expect more will appear during the day. More can be found via mylder.

+ +

+

+ +

I think it is great that yet another voice makes it clear that totalitarian surveillance of the population is unacceptable, but it is still just as important as before to protect the private sphere, as the technical capabilities still exist and are being exploited, and I believe efforts in projects like Freedombox and Dugnadsnett are more important than ever.

+ +

Update 2014-04-08 12:10: The fundraising campaign to stop the Data Retention Directive in Norway is run by the association Digitalt Personvern, which has collected 843 215 NOK so far but will likely need much more unless Høyre and Arbeiderpartiet change their position on the matter. Only the parties Høyre and Arbeiderpartiet voted for the Data Retention Directive, and one of them must change its mind for there to be a majority against it in the Storting. See more about the case at Holder de ord.

+ +
+
+ + + Tags: norsk, personvern, sikkerhet, surveillance. + + +
+
+
+ +
+
+ The documentary about the Data Retention Directive is finally being shown on NRK
+
+ 26th March 2014 +
+
+

The NUUG association reported during the night that NRK has now decided when the Norwegian documentary about the Data Retention Directive will be broadcast (see IMDB for details about the film). The first showing is on NRK2 on Monday 2014-03-31 at 19:50, followed by showings on Wednesday 2014-04-02 at 12:30, Friday 2014-04-04 at 19:40 and Sunday 2014-04-06 at 15:10. I have seen the documentary, and I recommend everyone to watch it. As a warm-up while we wait, I recommend Bjørn Stærk's op-ed in Aftenposten from yesterday, "Autoritær gjøkunge", where he gives a good outline of how bad things are for the right to privacy and the protection of democracy in Norway and the rest of the world, and quite rightly points out that it is we in the IT industry who hold the key to doing something about it. I have become involved in the projects dugnadsnett.no and FreedomBox to try to do a little myself to improve the situation, but a lot of hard work remains, from many more people than me, before we can be said to have restored the balance.

+ +

I expect the online version will show up on NRK's page about the Data Retention Directive film in five days. Keep an eye on the page, and tip off friends and family that they should watch it too.

+ +
+
+ + + Tags: freedombox, mesh network, norsk, personvern, sikkerhet, surveillance. + + +
+
+
+ +
+
+ Public Trusted Timestamping services for everyone +
+
+ 25th March 2014 +
+
+

Did you ever need to store logs or other files in a way that would allow them to be used as evidence in court, and needed a way to demonstrate beyond reasonable doubt that the files had not been changed since they were created? Or did you ever need to document that a given document was received at some point in time, like an archived document or the answer to an exam, and not changed after it was received? The problem in these settings is to remove the need to trust yourself and your computers, while still being able to prove that a file is the same as it was at some given time in the past.

+ +

A solution to these problems is to have a trusted third party "stamp" the document and verify that at some given time the document looked a given way. Such notary services have been around for thousands of years, and the digital equivalent is called a trusted timestamping service. The Internet Engineering Task Force standardised how such a service could work a few years ago as RFC 3161. The mechanism is simple. Create a hash of the file in question, send it to a trusted third party which adds a time stamp to the hash, signs the result with its private key, and sends back the signed hash + timestamp. Email, FTP and HTTP can all be used to request such a signature, depending on what is provided by the service used. Anyone with the document and the signature can then verify that the document matches the signature by creating their own hash and checking the signature using the trusted third party's public key. There are several commercial services around providing such timestamping. A quick search for "rfc 3161 service" pointed me to at least DigiStamp, Quo Vadis, Global Sign and Global Trust Finder. The system works as long as the private key of the trusted third party is not compromised.

+ +

But as far as I can tell, there are very few public trusted timestamp services available to everyone. I've been looking for one for a while now. Yesterday I found one over at Deutsches Forschungsnetz, mentioned in a blog post by David Müller. I then found a good recipe on how to use the service over at the University of Greifswald.

+ +

The OpenSSL library contains both server and client tools to use and set up your own signing service. See the ts(1SSL) and tsget(1SSL) manual pages for more details. The following shell script demonstrates how to obtain a signed timestamp for any file on disk in a Debian environment:

+ +

#!/bin/sh
set -e
url="http://zeitstempel.dfn.de"
caurl="https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt"
reqfile=$(mktemp -t tmp.XXXXXXXXXX.tsq)
resfile=$(mktemp -t tmp.XXXXXXXXXX.tsr)
cafile=chain.txt
if [ ! -f $cafile ] ; then
    wget -O $cafile "$caurl"
fi
openssl ts -query -data "$1" -cert | tee "$reqfile" \
    | /usr/lib/ssl/misc/tsget -h "$url" -o "$resfile"
openssl ts -reply -in "$resfile" -text 1>&2
openssl ts -verify -data "$1" -in "$resfile" -CAfile "$cafile" 1>&2
base64 < "$resfile"
rm "$reqfile" "$resfile"
+

+ +

The argument to the script is the file to timestamp; the output is a base64 encoded version of the signature on STDOUT and details about the signature on STDERR. Note that due to a bug in the tsget script, you might need to modify the included script and remove its last line. Or just write your own HTTP uploader using curl. :) Now you too can prove and verify that files have not been changed.
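A rough sketch of such a curl-based uploader, assuming the same DFN service and CA chain file as in the script above (the file names are illustrative):

# Create an RFC 3161 time stamp request for the file
openssl ts -query -data file-to-stamp -cert -sha256 > request.tsq
# Submit it over HTTP with the content type required by RFC 3161
curl -s -H "Content-Type: application/timestamp-query" \
    --data-binary @request.tsq http://zeitstempel.dfn.de > response.tsr
# Inspect the returned time stamp and verify it against the CA chain
openssl ts -reply -in response.tsr -text
openssl ts -verify -data file-to-stamp -in response.tsr -CAfile chain.txt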

+ +

But the Internet needs more public trusted timestamp services. Perhaps something for Uninett or my workplace, the University of Oslo, to set up?

+ +
+
+ + + Tags: english, sikkerhet. + + +
+
+
+ +
+
+ Freedombox on Dreamplug, Raspberry Pi and virtual x86 machine +
+
+ 14th March 2014 +
+
+

The Freedombox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to communicate with their friends and family encrypted and away from prying eyes. It has been going on for a while, and is slowly progressing towards a new test release (0.2).

+ +

And what day could be better than Pi day to announce that the new version will provide "hard drive" / SD card / USB stick images for Dreamplug, Raspberry Pi and VirtualBox (or any other virtualization system), and can also be installed using a Debian installer preseed file? The Debian based Freedombox is now based on Debian Jessie, where most of the needed packages are already present. Only one, the freedombox-setup package, is missing. To try building your own boot image to test the current status, fetch the freedom-maker scripts and build using vmdebootstrap as a user with sudo access to become root:

git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image
+
+ +

Root access is needed to run debootstrap and mount loopback devices. See the README for more details on the build. If you do not want all three images, trim the make line (see the example below). But note that due to a race condition in vmdebootstrap, the build might fail without the patch to the kpartx call.
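For instance, to build only the VirtualBox image:

make -C freedom-maker virtualbox-image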

+ +

If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:

+ +
url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat
+
+ +

But note that due to a recently introduced bug in apt in Jessie, the installer will currently hang while setting up APT sources. Killing the 'apt-cdrom ident' process the few times it hangs during the installation will get the installation going. This affects all installations of Jessie, and I expect it will be fixed soon.

+ +

Give it a go and let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

+ +
+
+ + + Tags: debian, english, freedombox, sikkerhet, surveillance, web. + + +
+
+
+ +
+
+ A fist full of non-anonymous Bitcoins +
+
+ 29th January 2014 +
+
+

Bitcoin is an incredible use of peer-to-peer communication and encryption, allowing direct and immediate money transfer without any central control. It is sometimes claimed to be ideal for illegal activity, which I believe is quite a long way from the truth. At least I would not conduct illegal money transfers using a system where the details of every transaction are kept forever. This point is investigated in USENIX ;login: from December 2013, in the article "A Fistful of Bitcoins - Characterizing Payments Among Men with No Names" by Sarah Meiklejohn, Marjori Pomarole, Grant Jordan, Kirill Levchenko, Damon McCoy, Geoffrey M. Voelker, and Stefan Savage. They analyse the transaction log in the Bitcoin system, using it to find addresses belonging to individuals and organisations and follow the flow of money from both Bitcoin thefts and trades on Silk Road to where the money ends up. This is how they wrap up their article:

+ +

+

"To demonstrate the usefulness of this type of analysis, we turned +our attention to criminal activity. In the Bitcoin economy, criminal +activity can appear in a number of forms, such as dealing drugs on +Silk Road or simply stealing someone else’s bitcoins. We followed the +flow of bitcoins out of Silk Road (in particular, from one notorious +address) and from a number of highly publicized thefts to see whether +we could track the bitcoins to known services. Although some of the +thieves attempted to use sophisticated mixing techniques (or possibly +mix services) to obscure the flow of bitcoins, for the most part +tracking the bitcoins was quite straightforward, and we ultimately saw +large quantities of bitcoins flow to a variety of exchanges directly +from the point of theft (or the withdrawal from Silk Road).

+ +

As acknowledged above, following stolen bitcoins to the point at which they are deposited into an exchange does not in itself identify the thief; however, it does enable further de-anonymization in the case in which certain agencies can determine (through, for example, subpoena power) the real-world owner of the account into which the stolen bitcoins were deposited. Because such exchanges seem to serve as chokepoints into and out of the Bitcoin economy (i.e., there are few alternative ways to cash out), we conclude that using Bitcoin for money laundering or other illicit purposes does not (at least at present) seem to be particularly attractive."

+

+ +

These researchers are not the first to analyse the Bitcoin transaction log. The 2011 paper "An Analysis of Anonymity in the Bitcoin System" by Fergal Reid and Martin Harrigan is summarised like this:

+ +

"Anonymity in Bitcoin, a peer-to-peer electronic currency system, is a complicated issue. Within the system, users are identified by public-keys only. An attacker wishing to de-anonymize its users will attempt to construct the one-to-many mapping between users and public-keys and associate information external to the system with the users. Bitcoin tries to prevent this attack by storing the mapping of a user to his or her public-keys on that user's node only and by allowing each user to generate as many public-keys as required. In this chapter we consider the topological structure of two networks derived from Bitcoin's public transaction history. We show that the two networks have a non-trivial topological structure, provide complementary views of the Bitcoin system and have implications for anonymity. We combine these structures with external information and techniques such as context discovery and flow analysis to investigate an alleged theft of Bitcoins, which, at the time of the theft, had a market value of approximately half a million U.S. dollars."

+ +

I hope these references can help kill the urban myth that Bitcoin is anonymous. It is not really a good fit for illegal activities. Use cash if you need to stay anonymous, at least until regular DNA sampling of notes and coins becomes the norm. :)

+ +

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

+ +
+
+ + + Tags: bitcoin, english, personvern, sikkerhet. + + +
+
+
+
All drones should be radio marked with what they do and who they belong to

Archive