Petter Reinholdtsen

Entries tagged "english".

The sorry state of multimedia browser plugins in Debian
25th November 2008

Recently I have spent some time evaluating the multimedia browser plugins available in Debian Lenny, to see which one we should use by default in Debian Edu. We need an embedded video playing plugin with control buttons to pause or stop the video, capable of streaming all the multimedia content available on the web. The test results and notes are available on the Debian wiki. I was surprised how few of the plugins are able to fill this need. My personal video player favorite, VLC, has a really bad plugin which fails on a lot of the test pages. A lot of the MIME types I would expect to work with any free software player (like video/ogg) just do not work. And simple formats like the audio/x-mpegurl format (m3u playlists) are not supported by the totem and vlc plugins at all. I hope the situation will improve soon. No wonder sites use the proprietary Adobe Flash to play video.
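
For anyone wanting to reproduce the playlist part of the test, a minimal M3U file is enough; the URL below is a placeholder, not one of the actual test pages:

```shell
# Create a minimal M3U playlist; any http URL to an Ogg file will do.
cat > test.m3u <<'EOF'
#EXTM3U
http://example.org/video.ogg
EOF
# A web server has to serve this file with the audio/x-mpegurl MIME
# type for the browser to hand it over to the media plugin.
head -n 1 test.m3u
```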

For Lenny, we seem to end up with the mplayer plugin. It seems to be the only one fitting our needs. :/

Tags: debian, debian edu, english, multimedia, web.
Devcamp brought us closer to the Lenny based Debian Edu release
7th December 2008

This weekend we had a small developer gathering for Debian Edu in Oslo. Most of Saturday was used for the general assembly of the member organization, but the rest of the weekend I used to tune the LTSP installation. LTSP now works out of the box on the 10-network. The Acer Aspire One proved to be a very nice thin client, with screen, mouse and keyboard in a small box. I was working on getting the diskless workstation setup configured out of the box, but did not finish it before the weekend was up.

Did not find time to look at the box with 4 VGA cards that we got from the Brazilian group, so that will have to wait for the next development gathering. I would love to have the Debian Edu installer automatically detect and configure a multiseat setup when it finds one of these cards.

Tags: debian, debian edu, english, ltsp.
Software video mixer on a USB stick
28th December 2008

The Norwegian Unix User Group is recording our monthly presentations on video, and recently we have worked on improving the quality of the recordings by mixing the slides directly with the video stream. For this, we use the dvswitch package from the Debian video team. As this requires one computer per video source, and NUUG does not have enough laptops available, we need to borrow laptops. And to avoid having to install extra software on these borrowed laptops, I have wrapped up all the programs needed on a bootable USB stick. The software required is dvswitch with its associated source, sink and mixer applications, and dvgrab. To allow this setup to work without any configuration, I've patched dvswitch to use avahi to connect the various parts together. And to allow us to use laptops without FireWire plugs, I upgraded dvgrab to the one from Debian/unstable to get a version that works with USB sources. We have not yet tested this setup in production, but I hope it will work properly and allow us to set up a video mixer in a very short time frame. We will need it for Go Open 2009.
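
For reference, a manual (non-avahi) setup looks roughly like the sketch below. The tool names come from the dvswitch and dvgrab packages, but the host address, port and file names are made-up examples, so treat it as a sketch rather than a recipe:

```shell
#!/bin/sh
# Hypothetical address and port for the mixer machine; adjust to taste.
MIXER_HOST=192.168.1.10
MIXER_PORT=2000

# The commands are echoed instead of run here, as each of them
# belongs on a different machine:
echo "mixer:    dvswitch -h $MIXER_HOST -p $MIXER_PORT"
echo "camera:   dvsource-firewire -h $MIXER_HOST -p $MIXER_PORT"
echo "slides:   dvsource-file -h $MIXER_HOST -p $MIXER_PORT slides.dv"
echo "recorder: dvsink-files -h $MIXER_HOST -p $MIXER_PORT talk-"
```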

The USB image is for a 1 GB memory stick, but can be used on any larger stick as well.

Tags: english, nuug, video.
When web browser developers make a video player...
17th January 2009

As part of the work we do in NUUG to publish video recordings of our monthly presentations, we provide a page with embedded video for easy access to the recording. Putting a good set of HTML tags together to get working embedded video in all browsers and across all operating systems is not easy. I hope this will become easier when the <video> tag is implemented in all browsers, but I am not sure. We provide the recordings in several formats, MPEG1, Ogg Theora, H.264 and Quicktime, and want the browser/media plugin to pick one it supports and use it to play the recording, using whatever embed mechanism the browser understands. There are at least four different tags to use for this, the new HTML5 <video> tag, the <object> tag, the <embed> tag and the <applet> tag. All of these take a lot of options, and finding the best options is a major challenge.

I just tested the experimental Opera browser available from labs.opera.com, to see how it handled a <video> tag with a few video sources and no extra attributes. I was not very impressed. The browser starts by fetching a picture from the video stream. I am not sure if it is the first frame, but it is definitely very early in the recording. So far, so good. Next, instead of streaming the 76 MiB video file, it starts to download all of it, but does not start to play the video. This means I have to wait for several minutes for the download to finish. And when the download is done, the video still does not play! Waiting for the download, but I do not get to see the video? Some testing later, I discovered that I have to add the controls="true" attribute to get a play button to press to start the video. Adding autoplay="true" did not help. I sure hope this is a misfeature of the test version of Opera, and that future implementations of the <video> tag will stream recordings by default, or at least start playing when the download is done.
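
A minimal test page along the lines of what I tested could look something like this; the file names are placeholders, and note the controls attribute that turned out to be needed:

```shell
#!/bin/sh
# Write a minimal <video> test page with two alternative sources.
# The browser should pick the first format it supports.
cat > video-test.html <<'EOF'
<html><body>
<video controls="true" autoplay="true">
  <source src="recording.ogg" type="video/ogg">
  <source src="recording.mp4" type="video/mp4">
  Fallback text for browsers without video tag support.
</video>
</body></html>
EOF
```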

The test page I used (since changed to add more attributes) is available from the nuug site. I will have to test it with the new Firefox too.

In the test process, I discovered a missing feature. I was unable to find a way to get the URL of the playing video out of Opera, so I am not quite sure it picked the Ogg Theora version of the video. I sure hope it was using the announced Ogg Theora support. :)

Tags: english, multimedia, nuug, video, web.
Using bar codes at a computing center
20th February 2009

At work at the University of Oslo, we have several hundred computers in our computing center. This gives us a challenge in tracking the location and cabling of the computers when they are added, moved and removed. Sometimes the location register is not updated when a computer is inserted or moved, and we then have to search the room for the "missing" computer.

In the last issue of Linux Journal, I came across libdmtx, a project to write and read the bar code blocks defined in the Data Matrix standard. These are bar codes that can be read with a normal digital camera, for example the one on a cell phone, and libdmtx can read several such bar codes from one picture. The bar code standard allows up to 2 KiB to be written in a tag. There is another project with a bar code writer written in PostScript capable of creating such bar codes, but this was the first time I found a tool to read them.

It occurred to me that this could be used to tag and track the machines in our computing center. If both racks and computers are tagged this way, we can use a picture of the rack and all its computers to detect the rack location of any computer in that rack. If we do this regularly for the entire room, we will find all locations, and can detect movements and removals.

I decided to test if this would work in practice, and picked a random rack and tagged all the machines with their names. Next, I took pictures with my digital camera, and gave the dmtxread program these JPEG pictures to see how many tags it could read. This worked fairly well. If the pictures were well focused and not taken from the side, all tags in the image could be read. Because of limited space between the racks, I was unable to get a good picture of the entire rack, but could without problems read all tags from a picture covering about half the rack. I had to limit the search time used by dmtxread to 60000 ms to make sure it terminated in a reasonable time frame.
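
The commands involved are roughly these; the machine name and image file names are made up, and the exact dmtxwrite/dmtxread options may differ between libdmtx versions, so check the man pages:

```shell
#!/bin/sh
TIMEOUT_MS=60000   # give up searching a picture after one minute
if command -v dmtxwrite >/dev/null 2>&1; then
    # Encode a machine name as a Data Matrix tag, ready for printing
    echo "machine42.example.org" | dmtxwrite -o machine42.png
    # Read all the tags found in a photo of the rack
    dmtxread -m "$TIMEOUT_MS" rack-photo.jpg
else
    echo "libdmtx command line tools are not installed"
fi
```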

My conclusion is that this could work, and we should probably look at adjusting our computer tagging procedures to use bar codes for easier automatic tracking of computers.

Tags: english, nuug.
Checking server hardware support status for Dell, HP and IBM servers
28th February 2009

At work, we have a few hundred Linux servers, and with that amount of hardware it is important to keep track of when the hardware support contract expires for each server. We have a machine (and service) register, which until recently did not contain much useful information besides the machine room location and the contact information for the system owner of each machine. To make it easier for us to track support contract status, I've recently spent time on extending the machine register to include information about when the support contract expires, and to tag machines with expired contracts to make it easy to get a list of such machines. I extended a perl script already being used to import information about machines into the register, to also do some screen scraping off the sites of Dell, HP and IBM (the majority of our machines are from these vendors), and automatically check the support status for the relevant machines. This makes the support status information easily available, and I hope it will make it easier for the computer owner to know when to get new hardware or renew the support contract. The result of this work documented that 27% of the machines in the registry are without a support contract, and made it very easy to find them. 27% might seem like a lot, but I see it more as a consequence of us using machines a bit longer than the 3 years a normal support contract lasts, to have test machines and a platform for less important services. After all, the machines without a contract are working fine at the moment, and the lack of a contract is only a problem if any of them break down. When that happens, we can either fix it using spare parts from other machines or move the service to another old machine.

I believe the code for screen scraping the Dell site was originally written by Trond Hasle Amundsen, and later adjusted by me and Morten Werner Forsbring. The HP scraping was written by me after reading a nice article in ;login: about how to use WWW::Mechanize, and the IBM scraping was written by me based on the Dell code. I know the HTML parsing could be done using nice libraries, but did not want to introduce more dependencies. This is the current incarnation:

use LWP::Simple;
use POSIX;
use WWW::Mechanize;
use Date::Parse;
[...]
sub get_support_info {
    my ($machine, $model, $serial, $productnumber) = @_;
    my $str;

    if ( $model =~ m/^Dell / ) {
        # fetch website from Dell support
        my $url = "http://support.euro.dell.com/support/topics/topic.aspx/emea/shared/support/my_systems_info/no/details?c=no&cs=nodhs1&l=no&s=dhs&ServiceTag=$serial";
        my $webpage = get($url);
        return undef unless ($webpage);

        my $daysleft = -1;
        my @lines = split(/\n/, $webpage);
        foreach my $line (@lines) {
            next unless ($line =~ m/Beskrivelse/);
            $line =~ s/<[^>]+?>/;/gm;
            $line =~ s/^.+?;(Beskrivelse;)/$1/;

            my @f = split(/\;/, $line);
            @f = @f[13 .. $#f];
            my $lastend = "";
            while ($f[3] eq "DELL") {
                my ($type, $startstr, $endstr, $days) = @f[0, 5, 7, 10];

                my $start = POSIX::strftime("%Y-%m-%d",
                                            localtime(str2time($startstr)));
                my $end = POSIX::strftime("%Y-%m-%d",
                                          localtime(str2time($endstr)));
                $str .= "$type $start -> $end ";
                @f = @f[14 .. $#f];
                $lastend = $end if ($end gt $lastend);
            }
            my $today = POSIX::strftime("%Y-%m-%d", localtime(time));
            tag_machine_unsupported($machine)
                if ($lastend lt $today);
        }
    } elsif ( $model =~ m/^HP / ) {
        my $mech = WWW::Mechanize->new();
        my $url =
            'http://www1.itrc.hp.com/service/ewarranty/warrantyInput.do';
        $mech->get($url);
        my $fields = {
            'BODServiceID' => 'NA',
            'RegisteredPurchaseDate' => '',
            'country' => 'NO',
            'productNumber' => $productnumber,
            'serialNumber1' => $serial,
        };
        $mech->submit_form( form_number => 2,
                            fields      => $fields );
        # Next step is screen scraping
        my $content = $mech->content();

        $content =~ s/<[^>]+?>/;/gm;
        $content =~ s/\s+/ /gm;
        $content =~ s/;\s*;/;;/gm;
        $content =~ s/;[\s;]+/;/gm;

        my $today = POSIX::strftime("%Y-%m-%d", localtime(time));

        while ($content =~ m/;Warranty Type;/) {
            my ($type, $status, $startstr, $stopstr) = $content =~
                m/;Warranty Type;([^;]+);.+?;Status;(\w+);Start Date;([^;]+);End Date;([^;]+);/;
            $content =~ s/^.+?;Warranty Type;//;
            my $start = POSIX::strftime("%Y-%m-%d",
                                        localtime(str2time($startstr)));
            my $end = POSIX::strftime("%Y-%m-%d",
                                      localtime(str2time($stopstr)));

            $str .= "$type ($status) $start -> $end ";

            tag_machine_unsupported($machine)
                if ($end lt $today);
        }
    } elsif ( $model =~ m/^IBM / ) {
        # This code ignores extended support contracts.
        my ($producttype) = $model =~ m/.*-\[(.{4}).+\]-/;
        if ($producttype && $serial) {
            my $content =
                get("http://www-947.ibm.com/systems/support/supportsite.wss/warranty?action=warranty&brandind=5000008&Submit=Submit&type=$producttype&serial=$serial");
            if ($content) {
                $content =~ s/<[^>]+?>/;/gm;
                $content =~ s/\s+/ /gm;
                $content =~ s/;\s*;/;;/gm;
                $content =~ s/;[\s;]+/;/gm;

                $content =~ s/^.+?;Warranty status;//;
                my ($status, $end) = $content =~ m/;Warranty status;([^;]+)\s*;Expiration date;(\S+) ;/;

                $str .= "($status) -> $end ";

                my $today = POSIX::strftime("%Y-%m-%d", localtime(time));
                tag_machine_unsupported($machine)
                    if ($end lt $today);
            }
        }
    }
    return $str;
}

Here are some examples of how to use the function, using fake serial numbers. The information passed in as arguments is fetched from dmidecode.

print get_support_info("hp.host", "HP ProLiant BL460c G1", "1234567890",
                       "447707-B21");
print get_support_info("dell.host", "Dell Inc. PowerEdge 2950", "1234567");
print get_support_info("ibm.host", "IBM eserver xSeries 345 -[867061X]-",
                       "1234567");
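
Fetching the dmidecode values themselves can look like the sketch below. dmidecode needs root access to read the DMI table, so the sketch falls back to a placeholder when the values cannot be read; where the HP product number hides in the DMI table varies between models, so that part is left out:

```shell
#!/bin/sh
# Read a DMI string, falling back to "unknown" when dmidecode is
# unavailable or we are not running as root.
get_dmi() {
    dmidecode -s "$1" 2>/dev/null || echo unknown
}
model="$(get_dmi system-manufacturer) $(get_dmi system-product-name)"
serial=$(get_dmi system-serial-number)
echo "model:  $model"
echo "serial: $serial"
```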

I would recommend this approach for tracking support contracts for everyone with more than a few computers to administer. :)

Update 2009-03-06: The IBM page does not include extended support contracts, so it is useless in that case. The original Dell code did not handle extended support contracts either, but has been updated to do so.

Tags: english, nuug.
Time for new LDAP schemas replacing RFC 2307?
29th March 2009

The state of standardized LDAP schemas on Linux is far from optimal. There is RFC 2307, documenting one way to store NIS maps in LDAP, and a modified version of this normally called RFC 2307bis, with some modifications to be compatible with Active Directory. The RFC specification handles the content of a lot of system databases, but does not cover DNS zones and DHCP configuration.

In Debian Edu/Skolelinux, we would like to store information about users, SMB clients/hosts, filegroups, netgroups (users and hosts), DHCP and DNS configuration, and LTSP configuration in LDAP. These objects have a lot in common, but with the current LDAP schemas it is not possible to have one object per entity. For example, one needs at least three LDAP objects for a given computer: one with the SMB related stuff, one with DNS information and another with DHCP information. The schemas provided for DNS and DHCP are impossible to combine into one LDAP object. In addition, it is impossible to implement quick queries for netgroup membership, because of the way NIS triples are implemented. It just does not scale. I believe it is time for a few RFC specifications to clean up this mess.

I would like to have one LDAP object representing each computer in the network, and this object could then keep the SMB (i.e. host key), DHCP (MAC address/name) and DNS (name/IP address) settings in one place. It needs to be stored efficiently to make sure it scales well.
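
To illustrate, this is the kind of single object I have in mind. The combinedHost objectClass below is invented for the example (no such combined schema exists today, which is exactly the problem), while attributes like macAddress and ipHostNumber are borrowed from existing schemas:

```shell
# Write the wished-for combined host object as LDIF; the
# combinedHost objectClass is hypothetical.
cat > combined-host.ldif <<'EOF'
dn: cn=pc42,ou=hosts,dc=skole,dc=example
objectClass: combinedHost
cn: pc42
# DHCP: hand out a fixed address based on the MAC address
macAddress: 00:11:22:33:44:55
# DNS: forward and reverse entries from the same attributes
ipHostNumber: 10.0.2.42
associatedDomain: pc42.intern
EOF
```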

I would also like to have a quick way to map from a user or computer to the netgroups this user or computer is a member of.

Active Directory has done a better job than unix heads like myself in this regard, and the unix side needs to catch up. Time to start a new IETF working group?

Tags: debian, debian edu, english, ldap, nuug.
Returning from Skolelinux developer gathering
29th March 2009

I'm sitting on the train going home from this weekend's Debian Edu/Skolelinux development gathering. I got a bit done tuning the desktop, and looked into avahi, the dynamic service location protocol implementation. It looks like it could be useful for us. Almost 30 people participated, and I believe it was a great environment to get to know the Skolelinux system. Walter Bender, involved in the development of the Sugar educational platform, presented his stuff and also helped me improve my OLPC installation. He also showed me that his Turtle Art application can be used in standalone mode, and we agreed that I would help getting it packaged for Debian. As a standalone application it would be great for Debian Edu. We also tried to get video conferencing working between two OLPCs, but that proved to be too hard for us. The application seems to need more work before it is ready for me. I look forward to getting home and relaxing now. :)

Tags: debian, debian edu, english, nuug.
Standardize on protocols and formats, not vendors and applications
30th March 2009

Where I work at the University of Oslo, one decision stands out as a very good one for forming a long-lived computer infrastructure. It is the simple one, lost on many in today's computer industry: Standardize on open network protocols and open exchange/storage formats, not applications. Applications come and go, while protocols and files tend to stay, and thus one wants to make it easy to change application and vendor, while avoiding conversion costs and locking users to a specific platform or application.

This approach makes it possible to replace the client applications independently of the server applications. One can even allow users to use several different applications, as long as they handle the selected protocol and format. In the normal case, only one client application is recommended and users only get help if they choose to use this application, but those that want to deviate from the easy path are not blocked from doing so.

It also allows us to replace the server side without forcing the users to replace their applications, and thus lets us select the best server implementation at any moment, when scale and resource requirements change.

I strongly recommend standardizing - on open network protocols and open formats, but I would never recommend standardizing on a single application that does not use open network protocols or open formats.

Tags: debian, english, nuug, standard.
Recording video from cron using VLC
5th April 2009

One thing I have wanted to figure out for a long time is how to run vlc from cron to record video streams from the net. The task is trivial with mplayer, but I do not really trust the security of mplayer (it crashes too often on strange input), and thus prefer vlc. I finally found a way to do it today. I spent an hour or so searching the web for recipes and reading the documentation. The hardest part was to get rid of the GUI window, but after finding the dummy interface, the command line finally presented itself:

URL=http://www.ping.uio.no/video/rms-oslo_2009.ogg
SAVEFILE=rms.ogg
DISPLAY= vlc -q $URL \
  --sout="#duplicate{dst=std{access=file,url='$SAVEFILE'},dst=nodisplay}" \
  --intf=dummy

The command streams the URL and stores it in SAVEFILE by duplicating the output stream to "nodisplay" and the file, using the dummy interface. The dummy interface and the nodisplay output make sure no X interface is needed.

The cron job then needs to start this job with the appropriate URL and file name to save, sleep for the wanted duration, and then kill the vlc process with SIGTERM. Here is a complete script vlc-record to use from at or cron:

#!/bin/sh
set -e
URL="$1"
SAVEFILE="$2"
DURATION="$3"
DISPLAY= vlc -q "$URL" \
  --sout="#duplicate{dst=std{access=file,url='$SAVEFILE'},dst=nodisplay}" \
  --intf=dummy < /dev/null > /dev/null 2>&1 &
pid=$!
sleep $DURATION
kill $pid
wait $pid
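
To record a stream for two hours starting at 19:00, the script can be handed to at(1) like this; the URL path and file name are made up for the example:

```shell
#!/bin/sh
DURATION=7200   # two hours, in seconds
if command -v at >/dev/null 2>&1; then
    echo "vlc-record http://www.ping.uio.no/video/example.ogg /tmp/talk.ogg $DURATION" \
        | at 19:00
else
    echo "at is not installed"
fi
```
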
Tags: english, nuug, video.
No patch is not better than a useless patch
28th April 2009

Julien Blache claims that no patch is better than a useless patch. I completely disagree. A patch allows one to discuss a concrete and proposed solution, and also proves that the issue at hand is important enough for someone to spend time on fixing it. Having no patch provides none of these positive properties.

Tags: debian, english, nuug.
Two projects that have improved the quality of free software a lot
2nd May 2009

There are two software projects that have had huge influence on the quality of free software, and I wanted to mention both in case someone does not yet know them.

The first one is valgrind, a tool to detect and expose errors in the memory handling of programs. It is easy to use: all one needs to do is to run 'valgrind program', and it will report any problems on stdout. It is even better if the program includes debug information. With debug information, it is able to report the source file name and line number where the problem occurs. It can report things like 'reading past memory block in file X line N, the memory block was allocated in file Y, line M', and 'using uninitialised value in control logic'. This tool has made it trivial to investigate reproducible crash bugs in programs, and has reduced the number of this kind of bugs in free software a lot.
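
As a taste of the workflow, here is a tiny program with a deliberate out-of-bounds read; this is just an illustration I made up, not one of the reports quoted above:

```shell
#!/bin/sh
# A deliberately buggy C program: reads one element past the block.
cat > buggy.c <<'EOF'
#include <stdlib.h>
int main(void) {
    int *block = malloc(4 * sizeof(int));
    int past_end = block[4];  /* invalid read, one past the end */
    free(block);
    return past_end - past_end;  /* always 0, keeps the read used */
}
EOF
if command -v gcc >/dev/null 2>&1 && command -v valgrind >/dev/null 2>&1; then
    gcc -g -o buggy buggy.c   # -g gives valgrind file/line information
    valgrind ./buggy          # should report the invalid read
else
    echo "gcc or valgrind not installed; buggy.c written for inspection"
fi
```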

The second one is Coverity which is a source code checker. It is able to process the source of a program and find problems in the logic without running the program. It started out as the Stanford Checker and became well known when it was used to find bugs in the Linux kernel. It is now a commercial tool and the company behind it is running a community service for the free software community, where a lot of free software projects get their source checked for free. Several thousand defects have been found and fixed so far. It can find errors like 'lock L taken in file X line N is never released if exiting in line M', or 'the code in file Y lines O to P can never be executed'. The projects included in the community service project have managed to get rid of a lot of reliability problems thanks to Coverity.

I believe tools like this, that are able to automatically find errors in the source, are vital to improve the quality of software and make sure we can get rid of the crashing and failing software we are surrounded by today.

Tags: debian, english.
Debian boots quicker and quicker
24th June 2009

I spent Monday and Tuesday this week in London with a lot of the people involved in the boot system on Debian and Ubuntu, to see if we could find more ways to speed up the boot system. This was an Ubuntu-funded developer gathering. It was quite productive. We also discussed the future of boot systems, and ways to handle the increasing number of boot issues introduced by the Linux kernel becoming more and more asynchronous and event-based. The Ubuntu approach using udev and upstart might be a good way forward. Time will tell.

Anyway, there are a few ways at the moment to speed up the boot process in Debian. All of these should be applied to get a quick boot:

These points are based on the Google summer of code work done by Carlos Villegas.

Support for makefile-style concurrency during boot was uploaded to unstable yesterday. When we tested it, we were able to cut 6 seconds from the boot sequence. It depends on correct dependency declarations in all init.d scripts, so I expect us to find edge cases where the dependencies in some scripts are slightly wrong when we start using this.
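
If I remember the knob correctly, enabling the makefile-style concurrency amounts to setting CONCURRENCY in /etc/default/rcS; the sketch below edits a demo copy of the file so it is safe to run anywhere, but do check the documentation of your sysvinit version before touching the real file:

```shell
#!/bin/sh
# On a real system, point RCS_FILE at /etc/default/rcS (as root).
RCS_FILE=./rcS.demo
printf 'CONCURRENCY=none\n' > "$RCS_FILE"   # demo content
# Switch the boot concurrency method to the makefile-style one.
sed -i 's/^CONCURRENCY=.*/CONCURRENCY=makefile/' "$RCS_FILE"
cat "$RCS_FILE"
```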

On our IRC channel for this effort, #pkg-sysvinit, a new idea was introduced by Raphael Geissert today, one that could affect the startup speed as well. Instead of starting some scripts concurrently from rcS.d/ and another set of scripts from rc2.d/, it would be possible to run all of them in the same process. A quick way to test this would be to enable insserv and run 'mv /etc/rc2.d/S* /etc/rcS.d/; insserv'. Will need to test if that works. :)

Tags: bootsystem, debian, english.
Taking over sysvinit development
22nd July 2009

After several years of frustration with the lack of activity from the existing sysvinit upstream developer, I decided a few weeks ago to take over the package and become the new upstream. The number of patches to track for the Debian package was becoming a burden, and the lack of synchronization between the distributions made it hard to keep the package up to date.

On the new sysvinit team is the SuSe maintainer Dr. Werner Fink, and my Debian co-maintainer Kel Modderman. About 10 days ago, I made a new upstream tarball with version number 2.87dsf (for Debian, SuSe and Fedora), based on the patches currently in use in these distributions. We Debian maintainers plan to move to this tarball as the new upstream as soon as we find time to do the merge. Since the new tarball was created, we agreed with Werner at SuSe to make a new upstream project at Savannah, and continue development there. The project is registered and currently waiting for approval by the Savannah administrators, and as soon as it is approved, we will import the old versions from svn and continue working on the future release.

It is a bit ironic that this is done now, when some of the involved distributions are moving to upstart as a sysvinit replacement.

Tags: bootsystem, debian, english, nuug.
Debian has switched to dependency based boot sequencing
27th July 2009

Since this evening, with the upload of sysvinit version 2.87dsf-2, and the upload of insserv version 1.12.0-10 yesterday, Debian unstable has been migrated to using dependency based boot sequencing. This concludes the work me and others have been doing for the last three days. It feels great to see this finally part of the default Debian installation. Now we just need to weed out the last few problems that are bound to show up, to get everything ready for Squeeze.

The next step is migrating /sbin/init from sysvinit to upstart, and fixing the more fundamental problem of handling the event-based, non-predictable kernel in the early boot.

Tags: bootsystem, debian, english, nuug.
ISO still hope to fix OOXML
8th August 2009

According to a blog post from Torsten Werner, the current defect report for ISO 29500 (ISO OOXML) is 809 pages. His interesting point is that the defect report is 71 pages more than the full ODF 1.1 specification. Personally, I find it more interesting that ISO still believes ISO OOXML can be fixed in ISO. I believe it is broken beyond repair, and I completely lack any trust in ISO being able to get anywhere close to solving the problems. I was part of the Norwegian committee involved in the OOXML fast track process, and was not impressed with how Standard Norway and ISO handled it.

These days I focus on ODF instead, which seems like a specification with a future ahead of it. We are working in NUUG to organise an ODF seminar this autumn.

Tags: english, nuug, standard.
Relative popularity of document formats (MS Office vs. ODF)
12th August 2009

Just for fun, I did a search right now on Google for a few ODF and MS Office based file formats (not to be mistaken for ISO or ECMA OOXML), to get an idea of their relative usage. I searched using 'filetype:odt' and equivalent terms, and got these results:

Type          ODF          MS Office
Text          odt: 282000  docx: 308000
Presentation  odp: 75600   pptx: 183000
Spreadsheet   ods: 26500   xlsx: 145000

Next, I added a 'site:no' limit to get the numbers for Norway, and got these numbers:

Type          ODF        MS Office
Text          odt: 2480  docx: 4460
Presentation  odp: 299   pptx: 741
Spreadsheet   ods: 187   xlsx: 372

I wonder how these numbers change over time.

I am aware that Google returns different results and numbers based on where the search is done, so I guess these numbers will differ if the searches are conducted in another country. Because of this, I did the same search from a machine in California, USA, a few minutes after the search done from a machine here in Norway.

Type          ODF          MS Office
Text          odt: 129000  docx: 308000
Presentation  odp: 44200   pptx: 93900
Spreadsheet   ods: 26500   xlsx: 82400

And with 'site:no':

Type          ODF        MS Office
Text          odt: 2480  docx: 3410
Presentation  odp: 175   pptx: 604
Spreadsheet   ods: 186   xlsx: 296

An interesting difference; I am not sure what to conclude from these numbers.

Tags: english, nuug, standard, web.
Automatic Munin and Nagios configuration
27th January 2010

One of the new features in the next Debian/Lenny based release of Debian Edu/Skolelinux, which is scheduled for release in the next few days, is automatic configuration of the service monitoring system Nagios. The previous release had automatic configuration of trend analysis using Munin, and this Lenny based release takes that a step further.

When installing a Debian Edu Main-server, it is automatically configured as a Munin and Nagios server. In addition, it is configured to be a server for the SiteSummary system I have written for use in Debian Edu. The SiteSummary system is inspired by a system used by the University of Oslo, where I work. In short, the system provides a centralised collector of information about the computers on the network, and a client on each computer submitting information to this collector. This allows for automatic information on which packages are installed on each machine, which kernel the machines are using, what kind of configuration the packages got, etc. This also allows us to automatically generate the Munin and Nagios configuration.

All computers reporting to the sitesummary collector with the munin-node package installed are automatically enabled as Munin clients, and graphs from the statistics collected from each machine show up automatically on http://www/munin/ on the Main-server.

All non-laptop computers reporting to the sitesummary collector are automatically monitored for network presence (ping and any network services detected). In addition, all computers (also laptops) with the nagios-nrpe-server package installed and configured the way sitesummary would configure it are monitored for full disks, software RAID status, free swap and other checks that need to run locally on the machine.

The result is that the administrator at a school using Debian Edu based on Lenny will be able to check the health of the installation with one look at the Nagios settings, without having to spend any time keeping the Nagios configuration up-to-date.

The only configuration one needs to do to get Nagios up and running is to set the password used to get access via HTTP. The system administrator needs to run "htpasswd /etc/nagios3/htpasswd.users nagiosadmin" to create a nagiosadmin user and set a password for it, to be able to log into the Nagios web pages. After that, everything is taken care of.

Tags: debian edu, english, nuug, sitesummary.
Debian Edu / Skolelinux based on Lenny released, work continues
11th February 2010

On Tuesday, the Debian/Lenny based version of Skolelinux was finally shipped. This was a major leap forward for the project, and I am very pleased that we finally got the release wrapped up. Work on the first point release starts immediately, as we plan to get that one out a month after the major release, to include fixes for the bugs we found and fixed too late in the release process to make it into last Tuesday's release.

Perhaps it is even time for some partying?

After this first point release, my plan is to focus again on the next major release, based on Squeeze. We will try to get as many of the fixes we need into the official Debian packages before the freeze, and have just a few weeks or months to make it happen.

Tags: debian edu, english, nuug.
After 6 years of waiting, the Xreset.d feature is implemented
6th March 2010

Six years ago, as part of the Debian Edu development I am involved in, I asked for a hook in the kdm and gdm setup to run scripts as root when the user logs out. A bug was submitted against the xfree86-common package in 2004 (#230422), and revisited every time Debian Edu was working on a new release. Today, this finally paid off.

The framework for this feature was committed to the git repository for the xorg package today, and the git repository for xdm has been updated to use it. Next on my agenda is to make sure kdm and gdm also add code to use this framework.

In Debian Edu, we want the ability to run commands as root when the user logs out, to get rid of runaway processes and do general cleanup after a user. With this framework in place, we can finally do that in a generic way that works with all display managers using it. My goal is to get all display managers in Debian to use it, similar to how they use the Xsession.d framework today.

Tags: debian edu, english, nuug.
Kerberos for Debian Edu/Squeeze?
14th April 2010

Yesterday's NUUG presentation about Kerberos was inspiring, and reminded me of the need to start using Kerberos in Skolelinux. Setting up a Kerberos server seems to be straightforward, and if we get this in place long before the Squeeze version of Debian freezes, we have a chance to migrate Skolelinux away from NFSv3 for the home directories, over to an architecture where the infrastructure does not have to trust IP addresses and machines, and instead can trust users and cryptographic keys.

A challenge will be integration and administration. Is there a Kerberos implementation for Debian where administration access in Kerberos can be controlled using LDAP groups? Without it, the school administration will have to maintain access control using flat files on the main server, which gives a huge potential for errors.

A related question is how well Kerberos and pam-ccreds (offline password checking) work together. Anyone know?

The next step will be to use Kerberos for access control in Lwat and Nagios. I have no idea how much work that will be to implement. We would also need to document how to integrate with Windows AD, as such a shared network will require two Kerberos realms that need to cooperate to work properly.

I believe a good start would be to begin using Kerberos on the skolelinux.no machines, and this way gain experience with configuration and integration. A natural starting point would be setting up ldap.skolelinux.no as the Kerberos server, and migrating the rest of the machines from PAM via LDAP to PAM via Kerberos one at a time.
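The per-machine part of such a migration is mostly a change in the PAM stack. Roughly, something like the following in /etc/pam.d/common-auth; the module options here are illustrative, not a tested configuration:

```
# Before: authentication against LDAP
# auth    sufficient    pam_ldap.so
# After: authentication against Kerberos (options illustrative)
auth    sufficient    pam_krb5.so minimum_uid=1000
auth    required      pam_unix.so nullok_secure try_first_pass
```

The point is that each machine can be switched individually, since the account information can stay in LDAP while only the password verification moves to Kerberos.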

If you would like to contribute to getting this working in Skolelinux, I recommend watching the video recording of yesterday's NUUG presentation, and starting to use Kerberos at home. The video should show up in a few days.

Tags: debian edu, english, nuug.
Great book: "Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future"
19th April 2010

The last few weeks I have had the pleasure of reading a thought-provoking collection of essays by Cory Doctorow, on topics touching copyright, virtual worlds, the future of man when the conscious mind can be duplicated into a computer, and many more. The book, titled "Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future", is available with few restrictions on the web, for example from his own site. I read the epub version from feedbooks using fbreader on my N810. I strongly recommend this book.

Tags: english, fildeling, nuug, opphavsrett, personvern, sikkerhet, web.
Thoughts on roaming laptop setup for Debian Edu
28th April 2010

For some years now, I have wondered how we should handle laptops in Debian Edu. The Debian Edu infrastructure is mostly designed to handle stationary computers, and less suited for computers that come and go.

Now I finally believe I have a sensible idea of how to adjust Debian Edu for laptops, by introducing a new profile for them, for example called Roaming Workstations. Here are my thoughts on this. The setup would consist of the following:

I believe all the pieces needed to implement this are in Debian/testing at the moment. If we work quickly, we should be able to get this ready in time for the Squeeze freeze. Some of the pieces need tweaking: libpam-ccreds should get support for pam-auth-update (#566718), and nslcd (or perhaps debian-edu-config) should get some integration code to stop its daemon when the LDAP server is unavailable, to avoid long timeouts when disconnected from the net. If we get Kerberos enabled, we need to make sure we avoid long timeouts there too.

If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

Tags: debian edu, english, nuug.
Forcing new users to change their password on first login
2nd May 2010

One interesting feature of Active Directory is the ability to create a new user with an expired password, thus forcing the user to change the password on the first login attempt.

I'm not quite sure how to do that with the LDAP setup in Debian Edu, but did some initial testing with a local account. The account and password aging information is available in /etc/shadow, but unfortunately it is not possible to specify an expiration date for passwords, only a maximum password age.

A freshly created account (using adduser test) will have these settings in /etc/shadow:

root@tjener:~# chage -l test
Last password change                                    : May 02, 2010
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7
root@tjener:~#

The only way I could come up with to create a user with an expired password is to change the date of the last password change to the lowest value possible (January 1st 1970), and the maximum password age to the difference in days between that date and today. To make it simple, I went for 30 years (30 * 365 = 10950 days) and January 2nd (to avoid testing whether 0 is a valid value).

After using these commands to set it up, it seems to work as intended:

root@tjener:~# chage -d 1 test; chage -M 10950 test
root@tjener:~# chage -l test
Last password change                                    : Jan 02, 1970
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 10950
Number of days of warning before password expires       : 7
root@tjener:~#  
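The 30-year arithmetic above can be sanity checked: a password last changed on day 1 of the Unix epoch, with a 10950 day maximum age, counts as expired around the start of year 2000, well before today. A quick check of this, using GNU date syntax:

```shell
# Day number (counted from 1970-01-01) at which the password expires:
expiry_day=$((1 + 10950))
# Day number of a date around the time of writing (GNU date syntax):
today_day=$(( $(date -u -d 2010-05-02 +%s) / 86400 ))
# The expiry day is in the past, so the password counts as expired:
[ "$expiry_day" -lt "$today_day" ] && echo "password counts as expired"
```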

So far I have tested this with ssh, console and kdm (in Squeeze) logins, and all ask for a new password before logging the user in (with ssh, I was thrown out and had to log in again).

Perhaps we should set up something similar for Debian Edu, to make sure only the user knows the account password?

If you want to comment on or help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

Update 2010-05-02 17:20: Paul Tötterman tells me on IRC that the shadow(5) page in Debian/testing now states that setting the date of the last password change to zero (0) will force the password to be changed on the first login. This was not mentioned in the manual in Lenny, so I did not notice it in my initial testing. I have tested it on Squeeze, and 'chage -d 0 username' does work there. I have not tested it on Lenny yet.

Update 2010-05-02 19:05: Jim Paris tells me via email that an equivalent command to expire a password is 'passwd -e username', which inserts zero into the date of the last password change.

Tags: debian edu, english, nuug, sikkerhet.
Parallelizing the boot in Debian Squeeze - ready for wider testing
6th May 2010

These days, the init.d script dependencies in Squeeze are quite complete, so complete that it is actually possible to run all the init.d scripts in parallel based on these dependencies. If you want to test this on your Squeeze system, make sure dependency based boot sequencing is enabled, and add this line to /etc/default/rcS:

CONCURRENCY=makefile

That is it. It will cause sysv-rc to use the startpar tool to run scripts in parallel, using the dependency information stored in /etc/init.d/.depend.boot, /etc/init.d/.depend.start and /etc/init.d/.depend.stop to order them. Startpar is configured to start the kdm and gdm scripts as early as possible, and will start the facilities required by kdm and gdm as early as possible to make this happen.
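The dependency files mentioned above are plain text, so on a system with dependency based boot sequencing enabled they can be inspected directly. A small sketch, assuming nothing beyond the paths given above:

```shell
# Show the first few lines of the dependency information startpar
# uses for ordering, for each file that exists on this system.
for f in /etc/init.d/.depend.boot /etc/init.d/.depend.start \
         /etc/init.d/.depend.stop; do
    if [ -r "$f" ]; then
        echo "== $f =="
        head -n 5 "$f"
    fi
done
```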

Give it a try and see if you like the result. If some services fail to start properly, it is most likely because they have incomplete dependency information in their init.d script (or some of the scripts they depend on have incomplete dependencies). Report bugs and get the package maintainers to fix it. :)

Running scripts in parallel could become the default in Debian once we manage to get the init.d script dependencies complete and correct. I expect we will get there in Squeeze+1, if we manage to test and fix the remaining issues.

If you report any problems with dependencies in init.d scripts to the BTS, please usertag the report to make it show up on the list of usertagged bugs related to this.

Tags: bootsystem, debian, english.
systemd, an interesting alternative to upstart
13th May 2010

The last few days a new boot system called systemd has been introduced to the free software world. I have not yet had time to play with it, but it seems to be a very interesting alternative to upstart, and might prove to be a good alternative for Debian when we are able to switch to an event based boot system. Tollef is in the process of getting systemd into Debian, and I look forward to seeing how well it works. I like the fact that systemd handles init.d scripts with dependency information natively, allowing them to run in parallel, where upstart at the moment does not.

Unfortunately, systemd has the same problem as upstart regarding platform support. It only works on recent Linux kernels, and also needs some new kernel features enabled to function properly. This means the kFreeBSD and Hurd ports of Debian will need a port or a different boot system. I am not sure how that will be handled if systemd proves to be the way forward.

In the meantime, based on the input on debian-devel@ regarding parallel booting in Debian, I have decided to enable full parallel booting as the default in Debian as soon as possible (probably this weekend or early next week), to see if there are any remaining serious bugs in the init.d dependencies. A new version of the sysvinit package implementing this change is already in experimental. If all goes well, Squeeze will be released with parallel booting enabled by default.

Tags: bootsystem, debian, english, nuug.
Sitesummary tip: Listing MAC address of all clients
14th May 2010

In recent Debian Edu versions, the sitesummary system is used to keep track of the machines in the school network. Each machine automatically reports its status to the central server after boot and once per night. The network setup is also reported, and using this information it is possible to get the MAC addresses of all network interfaces in the machines. This is useful for updating the DHCP configuration.

To give some idea of how to use sitesummary, here is a one-liner to list the MAC addresses of all machines reporting to sitesummary. Run this on the collector host:

perl -MSiteSummary -e 'for_all_hosts(sub { print join(" ", get_macaddresses(shift)), "\n"; });'

This will list all MAC addresses associated with each machine, one line per machine and with spaces between the MAC addresses.

To make it easier for system administrators to add static DHCP addresses for hosts, it would be possible to extend this to fetch machine information from sitesummary and update the DHCP and DNS tables in LDAP using it. Such a tool is unfortunately not written yet.
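Until such a tool exists, the shape of the DHCP side of it can be sketched in a few lines of shell, turning the one-liner's output into skeleton dhcpd.conf host stanzas. The host names (pc-1, pc-2, ...) are invented placeholders; a real tool would use the host names sitesummary knows:

```shell
# Sample input in the format produced by the one-liner above:
# one line per machine, MAC addresses separated by spaces.
cat > macs.txt <<EOF
00:11:22:33:44:55 00:11:22:33:44:56
aa:bb:cc:dd:ee:ff
EOF

i=0
while read mac rest; do
    i=$((i + 1))
    # Only the first MAC address of each machine is used here;
    # "rest" holds any additional interfaces.
    printf 'host pc-%d {\n  hardware ethernet %s;\n}\n' "$i" "$mac"
done < macs.txt
```

The output can then be pasted into the host section of dhcpd.conf, once fixed-address lines are added.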

Tags: debian, debian edu, english, sitesummary.
Parallelized boot is now the default in Debian/unstable
14th May 2010

Since this evening, parallel booting is the default in Debian/unstable for machines using dependency based boot sequencing. Apparently the testing of concurrent booting has been wider than expected, if I am to believe the input on debian-devel@, and I concluded a few days ago to move forward with the feature this weekend, to give us some time to detect any remaining problems before Squeeze is frozen. If serious problems are detected, it is simple to change the default back to sequential booting. The upload of the new sysvinit package also activates a new upstream version.

More information about dependency based boot sequencing is available from the Debian wiki. It is possible to disable parallel booting if one runs into problems caused by it, by adding this line to /etc/default/rcS:

CONCURRENCY=none

If you report any problems with dependencies in init.d scripts to the BTS, please usertag the report to make it show up on the list of usertagged bugs related to this.

Tags: bootsystem, debian, debian edu, english.
Pieces of the roaming laptop puzzle in Debian
19th May 2010

Today, the last piece of the puzzle for roaming laptops in Debian Edu finally entered the Debian archive: the new libpam-mklocaluser package was accepted. Two days ago, two other pieces were accepted into unstable: the pam-python package needed by libpam-mklocaluser, and the sssd package, which passed NEW on Monday. In addition, the libpam-ccreds package we need has been in experimental (version 10-4) since Saturday, and will hopefully be moved to unstable soon.

This collection of packages allows for two different setups for roaming laptops. The traditional setup would be using libpam-ccreds, nscd and libpam-mklocaluser with LDAP or Kerberos authentication, which should work out of the box if the configuration changes proposed for nscd in BTS report #485282 are implemented. The alternative setup is to use sssd with libpam-mklocaluser to connect to LDAP or Kerberos and let sssd take care of caching passwords and group information.

I have so far been unable to get sssd to work with the LDAP server at the University, but suspect the issue is some SSL/GnuTLS related problem with the server certificate. I plan to update the Debian package to version 1.2, which is scheduled for next week, and hope to find time to make sure the next release includes the Debian/Ubuntu specific patches. Upstream is friendly and responsive, and I am sure we will find a good solution.

The idea is to set up the roaming laptops to authenticate using LDAP or Kerberos, and to create a local user with a home directory in /home/ when a user in LDAP logs in via KDM or GDM for the first time, caching the password for offline checking, as well as group memberships and other relevant LDAP information. The libpam-mklocaluser package was created to make sure the local home directory is in /home/, instead of the /site/server/directory/ that would be the home directory if pam_mkhomedir was used. To avoid confusion with support requests and configuration, we do not want local laptops to have users in a path that is used for the same users' home directories on the home directory servers.

One annoying problem with gdm is that it does not show the PAM message passed to the user from libpam-mklocaluser when the local user is created. Instead, gdm simply rejects the login with some generic message. The message is shown by kdm, ssh and login, so I guess it is a bug in gdm. I have not investigated whether there is some other message type that can be used instead to get gdm to show the message too.

If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

Tags: debian edu, english, nuug.
More flexible firmware handling in debian-installer
22nd May 2010

After a long break from debian-installer development, I finally found time today to return to the project. Having to spend less time working on dependency based boot in Debian, as it is almost complete now, definitely helped free some time.

A while back, I ran into a problem while working on Debian Edu. We include some firmware packages on the Debian Edu CDs, those needed to get disk and network controllers working. Without these firmware packages available during installation, it is impossible to install Debian Edu on the machines in question, and because our target group is non-technical people, asking them to provide firmware packages on an external medium is a support pain. Initially, I expected it to be enough to include the firmware packages on the CD for debian-installer to find and use them. This proved to be wrong. Next, I hoped it was enough to symlink the relevant firmware packages to some useful location on the CD (I tried /cdrom/ and /cdrom/firmware/). This also proved not to work, and at this point I found time to look at the debian-installer code to figure out what would work.

The firmware loading code is in the hw-detect package, and a closer look revealed that it would only look for firmware packages outside the installation media, so the CD was never checked. It would only check USB sticks, floppies and other "external" media devices. Today I changed it to also look in the /cdrom/firmware/ directory on the mounted CD or DVD, which should solve the problem I ran into with Debian Edu. I also changed it to look in /firmware/, to make sure the installer also finds firmware provided in the initrd when booting the installer via PXE, to allow us to provide the same feature in the PXE setup included in Debian Edu.

To make sure firmware deb packages with license questions are not activated without asking whether the license is accepted, I extended hw-detect to look for preinst scripts in the firmware packages, and to run these before activating the firmware during installation. The license question is asked using debconf in the preinst, so this should solve the issue for the firmware packages I have looked at so far.
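A preinst along these lines is what I have in mind. The template name foo-firmware/license is invented for illustration, but db_input, db_go and db_get are the standard debconf shell API from confmodule. The sketch below writes the hypothetical preinst to a file and syntax-checks it, since actually running it requires debconf:

```shell
# Write out a hypothetical firmware package preinst that asks a
# debconf license question, then check that it parses.
cat > preinst-sketch <<'EOF'
#!/bin/sh
set -e
. /usr/share/debconf/confmodule
# Ask the (invented) license question and abort unless accepted.
db_input critical foo-firmware/license || true
db_go
db_get foo-firmware/license
[ "$RET" = true ] || { echo "license not accepted" >&2; exit 1; }
EOF
sh -n preinst-sketch && echo "preinst sketch parses"
```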

If you want to discuss the details of these features, please contact us on debian-boot@lists.debian.org.

Tags: debian, debian edu, english.
Parallelized boot seems to hold up well in Debian/testing
27th May 2010

A few days ago, parallel booting was enabled in Debian/testing. The feature seems to hold up pretty well, but three fairly serious issues are known and should be solved:

All in all, not many surprising issues, and all of them seem solvable before Squeeze is released. In addition to these, there are some packages with bugs in their dependencies and runlevel settings, which I expect will be fixed within a reasonable time span.

If you report any problems with dependencies in init.d scripts to the BTS, please usertag the report to make it show up on the list of usertagged bugs related to this.

Update: Corrected the bug number for the file-rc issue.

Tags: bootsystem, debian, debian edu, english.
KDM fails at boot with NVidia cards - and no one tries to fix it?
1st June 2010

It is strange to watch how a bug in Debian causing KDM to fail to start at boot when an NVidia video card is used is being handled. The problem seems to be that the nvidia X.org driver takes a long time to initialize, longer than kdm is configured to wait.

I came across two bugs related to this issue: #583312, initially filed against initscripts and passed on to nvidia-glx when it became obvious that the nvidia drivers were involved, and #524751, initially filed against kdm and passed on to src:nvidia-graphics-drivers for unknown reasons.

To me, it seems that no one is interested in actually solving the problem nvidia video card owners experience, and in making sure the Debian distribution works out of the box for these users. The nvidia driver maintainers expect kdm to be set up to wait longer, while the kdm maintainers expect the nvidia driver maintainers to fix the driver to start faster, and while they wait for each other, I guess the users end up switching to a distribution that works for them. I have no idea what the solution is, but I am pretty sure that waiting for each other is not it.

I wonder why we end up handling bugs this way.

Tags: bootsystem, debian, debian edu, english.
Sitesummary tip: Listing computer hardware models used at site
3rd June 2010

When using sitesummary at a site to track machines, it is possible to get a list of the machine types in use, thanks to the DMI information extracted from each machine. A script to do so is included in the sitesummary package, and here is example output from the Skolelinux build servers:

maintainer:~# /usr/lib/sitesummary/hardware-model-summary
  vendor                    count
  Dell Computer Corporation     1
    PowerEdge 1750              1
  IBM                           1
    eserver xSeries 345 -[8670M1X]-     1
  Intel                         2
  [no-dmi-info]                 3
maintainer:~#

The quality of the report depends on the quality of the DMI tables provided in each machine. Here there are Intel machines without model information, listed with Intel as vendor and no model, and virtual Xen machines listed as [no-dmi-info]. One can add -l as a command line option to list the individual machines.

A larger list is available from the city of Narvik, which uses Skolelinux in all their schools and also provides the basic sitesummary report publicly. Their report covers ~1400 machines. I know they use both Ubuntu and Skolelinux on their machines, and as sitesummary is available in both distributions, it is trivial to get all of them to report to the same central collector.

Tags: debian, debian edu, english, sitesummary.
A manual for standards wars...
6th June 2010

Via Rob Weir's blog I came across the very interesting essay The Art of Standards Wars (PDF, 25 pages). I recommend it to everyone following the standards wars of today.

Tags: debian, debian edu, english, standard.
Upstart or sysvinit - as init.d scripts see it
6th June 2010

If Debian is to migrate to upstart on Linux, I expect some init.d scripts to migrate (some of) their operations to upstart jobs while keeping the init.d scripts for Hurd and kFreeBSD. Packages with such needs will require a way to make their init.d scripts behave differently when used with sysvinit and with upstart. Because of this, I had a look at the environment variables set when an init.d script is running under upstart, and when it is not.

With upstart, I notice these environment variables are set when a script is started from rcS.d/ (ignoring some irrelevant ones like COLUMNS):

DEFAULT_RUNLEVEL=2
previous=N
PREVLEVEL=
RUNLEVEL=
runlevel=S
UPSTART_EVENTS=startup
UPSTART_INSTANCE=
UPSTART_JOB=rc-sysinit

With sysvinit, these environment variables are set for the same script:

INIT_VERSION=sysvinit-2.88
previous=N
PREVLEVEL=N
RUNLEVEL=S
runlevel=S

The RUNLEVEL and PREVLEVEL environment variables provided by sysvinit are left empty by upstart. I am not sure whether this incompatibility with sysvinit is intentional.

For scripts needing to behave differently when upstart is used, checking for the UPSTART_JOB environment variable seems to be a good choice.
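In an init.d script, that test could look like this; the echo lines are just placeholders for whatever work should differ between the two boot systems:

```shell
# Branch on the boot system, using the UPSTART_JOB variable that
# upstart sets (see the listings above) and sysvinit does not.
if [ -n "$UPSTART_JOB" ]; then
    echo "started by upstart job $UPSTART_JOB"
else
    echo "started by sysvinit"
fi
```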

Tags: bootsystem, debian, english.
Automatic upgrade testing from Lenny to Squeeze
11th June 2010

The last few days I have done some upgrade testing in Debian, to see if the upgrade from Lenny to Squeeze will go smoothly. A few bugs have been discovered and reported in the process (#585410 in nagios3-cgi, #584879 already fixed in enscript, and #584861 in kdebase-workspace-data), and to get more regular testing going, I am working on a script to automate the test.

The idea is to create a Lenny chroot and use tasksel to install a Gnome or KDE desktop installation inside the chroot before upgrading it. To ensure no services are started in the chroot, a policy-rc.d script is inserted. To make tasksel believe it is installing a desktop on a laptop, the tasksel tests are replaced in the chroot (only acceptable because this is a throw-away chroot).

A naive upgrade from Lenny to Squeeze using aptitude dist-upgrade currently always fails because udev refuses to upgrade with the kernel in Lenny, so to avoid that problem the file /etc/udev/kernel-upgrade is created. The bug report #566000 makes me suspect this problem does not trigger in a chroot, but I touch the file anyway to make sure the upgrade goes well. Testing on virtual and real hardware has failed for me because of udev so far, and creating this file does the trick in such settings anyway. This is a known issue, and the current udev behaviour is intended by the udev maintainer because he lacks the resources to rewrite udev to keep working with old kernels, or something like that. I really wish the udev upstream would keep udev backwards compatible, to avoid such upgrade problems, but given that they fail to do so, I guess documenting the way out of this mess is the best option we have for Debian Squeeze.

Anyway, back to the task at hand: testing upgrades. This test script, which I call upgrade-test for now, does the trick:

#!/bin/sh
set -ex

if [ "$1" ] ; then
    desktop=$1
else
    desktop=gnome
fi

from=lenny
to=squeeze

exec < /dev/null
unset LANG
mirror=http://ftp.skolelinux.org/debian
tmpdir=chroot-$from-upgrade-$to-$desktop
fuser -mv .
debootstrap $from $tmpdir $mirror
chroot $tmpdir aptitude update
cat > $tmpdir/usr/sbin/policy-rc.d <<EOF
#!/bin/sh
exit 101
EOF
chmod a+rx $tmpdir/usr/sbin/policy-rc.d
exit_cleanup() {
    umount $tmpdir/proc
}
mount -t proc proc $tmpdir/proc
# Make sure proc is unmounted also on failure
trap exit_cleanup EXIT INT

chroot $tmpdir aptitude -y install debconf-utils

# Make sure the tasksel autoselection triggers.  It needs the test
# scripts to return the correct answers.
echo tasksel tasksel/desktop multiselect $desktop | \
    chroot $tmpdir debconf-set-selections

# Include the desktop and laptop task
for test in desktop laptop ; do
    cat > $tmpdir/usr/lib/tasksel/tests/$test <<EOF
#!/bin/sh
exit 2
EOF
    chmod a+rx $tmpdir/usr/lib/tasksel/tests/$test
done

DEBIAN_FRONTEND=noninteractive
DEBIAN_PRIORITY=critical
export DEBIAN_FRONTEND DEBIAN_PRIORITY
chroot $tmpdir tasksel --new-install

echo deb $mirror $to main > $tmpdir/etc/apt/sources.list
chroot $tmpdir aptitude update
touch $tmpdir/etc/udev/kernel-upgrade
chroot $tmpdir aptitude -y dist-upgrade
fuser -mv .

I suspect it would be useful to test upgrades with both apt-get and aptitude, but I have not had time to look at how they behave differently so far. I hope to get a cron job running to do the test regularly and post the results on the web. The Gnome upgrade currently works, while the KDE upgrade fails because of the bug in kdebase-workspace-data.

I am not quite sure what kind of extract from the huge upgrade logs (KDE 167 KiB, Gnome 516 KiB) it makes sense to include in this blog post, so I will refrain from trying. I can report that for Gnome, aptitude reports 760 packages upgraded, 448 newly installed, 129 to remove and 1 not upgraded, with 1024MB to download, while for KDE the same numbers are 702 packages upgraded, 507 newly installed, 193 to remove and 0 not upgraded, with 1117MB to download.

I am very happy to notice that the Gnome desktop + laptop upgrade is able to migrate to dependency based boot sequencing and parallel booting without a hitch. I was unsure whether there were still bugs with packages failing to clean up their obsolete init.d scripts during upgrades, but no such problem seems to affect the Gnome desktop + laptop packages.

Tags: bootsystem, debian, debian edu, english.
Lenny->Squeeze upgrades, removals by apt and aptitude
13th June 2010

My testing of Debian upgrades from Lenny to Squeeze continues, and I've finally made the upgrade logs available from http://people.skolelinux.org/pere/debian-upgrade-testing/. I am now testing dist-upgrade of Gnome and KDE in a chroot using both apt and aptitude, and found their differences interesting. This time I will only focus on their removal plans.

After installing a Gnome desktop and the laptop task, apt-get wants to remove 72 packages when dist-upgrading from Lenny to Squeeze. The surprising part is that it wants to remove xorg and all the xserver-xorg-video* drivers. Clearly not a good choice, but I am not sure why. When asking aptitude to do the same, it wants to remove 129 packages, but most of them are library packages I suspect are no longer needed. Both of them want to remove bluetooth packages, for reasons I do not know. Perhaps these bluetooth packages are obsolete?

For KDE, apt-get wants to remove 82 packages, among them kdebase, which seems like a bad idea, and xorg just as with Gnome. Asking aptitude for the same, it wants to remove 192 packages, none of which are too surprising.

I guess the removal of xorg during upgrades should be investigated and avoided, and perhaps others as well. Here are the complete lists of planned removals. The complete logs are available from the URL above. Note, if you want to repeat these tests, that the upgrade test for kde+apt-get hung in the tasksel setup because of dpkg asking conffile questions. No idea why. I worked around it by using 'echo >> /proc/pidofdpkg/fd/0' to tell dpkg to continue.

apt-get gnome 72
bluez-gnome cupsddk-drivers deskbar-applet gnome gnome-desktop-environment gnome-network-admin gtkhtml3.14 iceweasel-gnome-support libavcodec51 libdatrie0 libgdl-1-0 libgnomekbd2 libgnomekbdui2 libmetacity0 libslab0 libxcb-xlib0 nautilus-cd-burner python-gnome2-desktop python-gnome2-extras serpentine swfdec-mozilla update-manager xorg xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-input-evdev xserver-xorg-input-kbd xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-apm xserver-xorg-video-ark xserver-xorg-video-ati xserver-xorg-video-chips xserver-xorg-video-cirrus xserver-xorg-video-cyrix xserver-xorg-video-dummy xserver-xorg-video-fbdev xserver-xorg-video-glint xserver-xorg-video-i128 xserver-xorg-video-i740 xserver-xorg-video-imstt xserver-xorg-video-intel xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-neomagic xserver-xorg-video-nsc xserver-xorg-video-nv xserver-xorg-video-openchrome xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-radeonhd xserver-xorg-video-rendition xserver-xorg-video-s3 xserver-xorg-video-s3virge xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sis xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-tga xserver-xorg-video-trident xserver-xorg-video-tseng xserver-xorg-video-v4l xserver-xorg-video-vesa xserver-xorg-video-vga xserver-xorg-video-vmware xserver-xorg-video-voodoo xulrunner-1.9 xulrunner-1.9-gnome-support

aptitude gnome 129
bluez-gnome bluez-utils cpp-4.3 cupsddk-drivers dhcdbd djvulibre-desktop finger gnome-app-install gnome-mount gnome-network-admin gnome-spell gnome-vfs-obexftp gnome-volume-manager gstreamer0.10-gnomevfs gtkhtml3.14 libao2 libavahi-compat-libdnssd1 libavahi-core5 libavcodec51 libbluetooth2 libcamel1.2-11 libcdio7 libcucul0 libcupsys2 libcurl3 libdatrie0 libdirectfb-1.0-0 libdvdread3 libedataserver1.2-9 libeel2-2.20 libeel2-data libepc-1.0-1 libepc-ui-1.0-1 libfaad0 libgail-common libgd2-noxpm libgda3-3 libgda3-common libgdl-1-0 libgdl-1-common libggz2 libggzcore9 libggzmod4 libgksu1.2-0 libgksuui1.0-1 libgmyth0 libgnomecups1.0-1 libgnomekbd2 libgnomekbdui2 libgnomeprint2.2-0 libgnomeprint2.2-data libgnomeprintui2.2-0 libgnomeprintui2.2-common libgnomevfs2-bin libgpod3 libgraphviz4 libgtkhtml2-0 libgtksourceview-common libgtksourceview1.0-0 libgucharmap6 libhesiod0 libicu38 libiw29 libkpathsea4 libltdl3 libmagick++10 libmagick10 libmalaga7 libmetacity0 libmtp7 libmysqlclient15off libnautilus-burn4 libneon27 libnm-glib0 libnm-util0 libopal-2.2 libosp5 libparted1.8-10 libpoppler-glib3 libpoppler3 libpt-1.10.10 libpt-1.10.10-plugins-alsa libpt-1.10.10-plugins-v4l libraw1394-8 libsensors3 libslab0 libsmbios2 libsoup2.2-8 libssh2-1 libsuitesparse-3.1.0 libswfdec-0.6-90 libtalloc1 libtotem-plparser10 libtrackerclient0 libxalan2-java libxalan2-java-gcj libxcb-xlib0 libxerces2-java libxerces2-java-gcj libxklavier12 libxtrap6 libxxf86misc1 libzephyr3 mysql-common nautilus-cd-burner openoffice.org-writer2latex openssl-blacklist p7zip python-4suite-xml python-eggtrayicon python-gnome2-desktop python-gnome2-extras python-gtkhtml2 python-gtkmozembed python-numeric python-sexy serpentine svgalibg1 swfdec-gnome swfdec-mozilla totem-gstreamer update-manager wodim xserver-xorg-video-cyrix xserver-xorg-video-imstt xserver-xorg-video-nsc xserver-xorg-video-v4l xserver-xorg-video-vga zip

apt-get kde 82
cupsddk-drivers karm kaudiocreator kcoloredit kcontrol kde kde-core kdeaddons kdeartwork kdebase kdebase-bin kdebase-bin-kde3 kdebase-kio-plugins kdesktop kdeutils khelpcenter kicker kicker-applets knewsticker kolourpaint konq-plugins konqueror korn kpersonalizer kscreensaver ksplash libavcodec51 libdatrie0 libkiten1 libxcb-xlib0 quanta superkaramba texlive-base-bin xorg xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-input-evdev xserver-xorg-input-kbd xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-apm xserver-xorg-video-ark xserver-xorg-video-ati xserver-xorg-video-chips xserver-xorg-video-cirrus xserver-xorg-video-cyrix xserver-xorg-video-dummy xserver-xorg-video-fbdev xserver-xorg-video-glint xserver-xorg-video-i128 xserver-xorg-video-i740 xserver-xorg-video-imstt xserver-xorg-video-intel xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-neomagic xserver-xorg-video-nsc xserver-xorg-video-nv xserver-xorg-video-openchrome xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-radeonhd xserver-xorg-video-rendition xserver-xorg-video-s3 xserver-xorg-video-s3virge xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sis xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-tga xserver-xorg-video-trident xserver-xorg-video-tseng xserver-xorg-video-v4l xserver-xorg-video-vesa xserver-xorg-video-vga xserver-xorg-video-vmware xserver-xorg-video-voodoo xulrunner-1.9

aptitude kde 192
bluez-utils cpp-4.3 cupsddk-drivers cvs dcoprss dhcdbd djvulibre-desktop dosfstools eyesapplet fifteenapplet finger gettext ghostscript-x imlib-base imlib11 indi kandy karm kasteroids kaudiocreator kbackgammon kbstate kcoloredit kcontrol kcron kdat kdeadmin-kfile-plugins kdeartwork-misc kdeartwork-theme-window kdebase-bin-kde3 kdebase-kio-plugins kdeedu-data kdegraphics-kfile-plugins kdelirc kdemultimedia-kappfinder-data kdemultimedia-kfile-plugins kdenetwork-kfile-plugins kdepim-kfile-plugins kdepim-kio-plugins kdeprint kdesktop kdessh kdict kdnssd kdvi kedit keduca kenolaba kfax kfaxview kfouleggs kghostview khelpcenter khexedit kiconedit kitchensync klatin klickety kmailcvt kmenuedit kmid kmilo kmoon kmrml kodo kolourpaint kooka korn kpager kpdf kpercentage kpf kpilot kpoker kpovmodeler krec kregexpeditor ksayit ksim ksirc ksirtet ksmiletris ksmserver ksnake ksokoban ksplash ksvg ksysv ktip ktnef kuickshow kverbos kview kviewshell kvoctrain kwifimanager kwin kwin4 kworldclock kxsldbg libakode2 libao2 libarts1-akode libarts1-audiofile libarts1-mpeglib libarts1-xine libavahi-compat-libdnssd1 libavahi-core5 libavc1394-0 libavcodec51 libbluetooth2 libboost-python1.34.1 libcucul0 libcurl3 libcvsservice0 libdatrie0 libdirectfb-1.0-0 libdjvulibre21 libdvdread3 libfaad0 libfreebob0 libgail-common libgd2-noxpm libgraphviz4 libgsmme1c2a libgtkhtml2-0 libicu38 libiec61883-0 libindex0 libiw29 libk3b3 libkcal2b libkcddb1 libkdeedu3 libkdepim1a libkgantt0 libkiten1 libkleopatra1 libkmime2 libkpathsea4 libkpimexchange1 libkpimidentities1 libkscan1 libksieve0 libktnef1 liblockdev1 libltdl3 libmagick10 libmimelib1c2a libmozjs1d libmpcdec3 libneon27 libnm-util0 libopensync0 libpisock9 libpoppler-glib3 libpoppler-qt2 libpoppler3 libraw1394-8 libsmbios2 libssh2-1 libsuitesparse-3.1.0 libtalloc1 libtiff-tools libxalan2-java libxalan2-java-gcj libxcb-xlib0 libxerces2-java libxerces2-java-gcj libxtrap6 mpeglib networkstatus openoffice.org-writer2latex pmount poster psutils quanta 
quanta-data superkaramba svgalibg1 tex-common texlive-base texlive-base-bin texlive-common texlive-doc-base texlive-fonts-recommended xserver-xorg-video-cyrix xserver-xorg-video-imstt xserver-xorg-video-nsc xserver-xorg-video-v4l xserver-xorg-video-vga xulrunner-1.9

Tags: debian, debian edu, english.
Officeshots taking shape
13th June 2010

For those of us who care about document exchange and interoperability, OfficeShots is a great service. It is to ODF documents what BrowserShots is to web pages.

A while back, I was contacted by Knut Yrvin at the part of Nokia that used to be Trolltech, who wanted to help the OfficeShots project and wondered if the University of Oslo, where I work, would be interested in supporting it. I helped him navigate his request to the right people at work, and it was answered with a spot in the machine room with power and network connected, while Knut arranged funding for a machine to fill the spot. The machine is administrated by the OfficeShots people, so I do not follow its progress daily, and instead check back from time to time to see how the project is doing.

Today I had a look, and was happy to see that the Dell box in our machine room is now the host for several virtual machines running as OfficeShots factories, and that the project is able to render ODF documents in 17 different document processing implementations on Linux and Windows. This is great.

Tags: english, standard.
Calling tasksel like the installer, while still getting useful output
16th June 2010

A few times I have needed to simulate the way tasksel installs packages during a normal debian-installer run. Until now, I have ended up letting tasksel do the work, with the annoying problem of not getting any feedback at all when something fails (like a conffile question from dpkg or a failing download), using code like this:

export DEBIAN_FRONTEND=noninteractive
tasksel --new-install
This would invoke tasksel, let its automatic task selection pick the tasks to install, and continue to install the requested tasks without any output whatsoever. Recently I revisited this problem while working on the automatic package upgrade testing, because tasksel would sometimes hang without any useful feedback, and I wanted to see what was going on when it happened. Then it occurred to me: I can parse the output from tasksel when it is asked to run in test mode, and use the aptitude command line printed by tasksel to simulate the tasksel run. I ended up using code like this:
export DEBIAN_FRONTEND=noninteractive
cmd="$(in_target tasksel -t --new-install | sed 's/debconf-apt-progress -- //')"
$cmd

The content of $cmd is typically something like "aptitude -q --without-recommends -o APT::Install-Recommends=no -y install ~t^desktop$ ~t^gnome-desktop$ ~t^laptop$ ~pstandard ~prequired ~pimportant", which will install the gnome desktop task, the laptop task and all packages with priority standard, required and important, just like tasksel would have done during installation.
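The sed transformation can be illustrated on a captured test-mode line. A minimal sketch; the sample line below is invented for the example, a real one is printed by "tasksel -t --new-install" in the target system:

```shell
#!/bin/sh
# Hypothetical tasksel test-mode output line (a real one comes from
# "tasksel -t --new-install" and lists the selected tasks).
sample='debconf-apt-progress -- aptitude -q --without-recommends -y install ~t^desktop$ ~pstandard'

# Strip the debconf-apt-progress wrapper so the remaining aptitude
# command can be run directly, with its output visible on the terminal.
cmd=$(printf '%s\n' "$sample" | sed 's/debconf-apt-progress -- //')
echo "$cmd"   # prints: aptitude -q --without-recommends -y install ~t^desktop$ ~pstandard
```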

A better approach is probably to extend tasksel to be able to install packages without using debconf-apt-progress, for use cases like this.

Tags: debian, english, nuug.
Idea for a change to LDAP schemas allowing DNS and DHCP info to be combined into one object
24th June 2010

A while back, I complained about the fact that the schemas provided for storing DNS and DHCP information in LDAP make it impossible to combine the two sets of information into one LDAP object representing a computer.

In the meantime, I discovered that a simple fix would be to make the dhcpHost object class auxiliary, allowing it to be combined with the dNSDomain object class and thus forming one object per computer when storing both DHCP and DNS information in LDAP.

If I understand this correctly, it is not safe to do this change without also changing the assigned number for the object class, and I do not know enough about LDAP schema design to do that properly for Debian Edu.

Anyway, for future reference, this is how I believe we could change the DHCP schema to solve at least part of the problem with the LDAP schemas available today from IETF.

--- dhcp.schema    (revision 65192)
+++ dhcp.schema    (working copy)
@@ -376,7 +376,7 @@
 objectclass ( 2.16.840.1.113719.1.203.6.6
        NAME 'dhcpHost'
        DESC 'This represents information about a particular client'
-       SUP top
+       SUP top AUXILIARY
        MUST cn
        MAY  (dhcpLeaseDN $ dhcpHWAddress $ dhcpOptionsDN $ dhcpStatements $ dhcpComments $ dhcpOption)
        X-NDS_CONTAINMENT ('dhcpService' 'dhcpSubnet' 'dhcpGroup') )

I very much welcome clues on how to do this properly for Debian Edu/Squeeze. We provide the DHCP schema in our debian-edu-config package, and should thus be free to rewrite it as we see fit.

If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

Tags: debian, debian edu, english, ldap, nuug.
LUMA, a very nice LDAP GUI
28th June 2010

The last few days I have been looking into the status of the LDAP directory in Debian Edu, and in the process I started to miss a GUI tool for browsing the LDAP tree. The only one I was able to find in Debian/Squeeze and Lenny is LUMA, which has proved to be a great tool for getting an overview of the current LDAP directory populated by default in Skolelinux. Thanks to it, I have been able to find empty and obsolete subtrees, misplaced objects and duplicate objects. It will be installed by default in Debian/Squeeze. If you are working with LDAP, give it a go. :)

I did notice one problem with it that I have not had time to report to the BTS yet. There is no .desktop file in the package, so the tool does not show up in the Gnome and KDE menus, but only deep down in the Debian submenu in KDE. I hope that can be fixed before Squeeze is released.

I have not yet been able to get it to modify the tree. I would like to move objects and remove subtrees directly in the GUI, but have not found a way to do that with LUMA. So in the meantime, I use ldapvi for that.

If you have tips on other GUI tools for LDAP that might be useful in Debian Edu, please contact us on debian-edu@lists.debian.org.

Update 2010-06-29: Ross Reedstrom tipped us about the gq package as a useful GUI alternative. It seems like a good tool, but it is unmaintained in Debian and has an RC bug keeping it out of Squeeze. Unless that changes, it will not be an option for Debian Edu based on Squeeze.

Tags: debian, debian edu, english, ldap, nuug.
Caching password, user and group on a roaming Debian laptop
1st July 2010

For a laptop, centralized user directories and password checking are a bit troubling. Laptops are typically used also when not connected to the network, and it is vital for a user to be able to log in or unlock the screen saver also when a central server is unavailable. This is possible by caching passwords and directory information (user and group attributes) locally, and the packages to do so are available in Debian. Here follow two recipes to set this up in Debian/Squeeze. It is also possible in Debian/Lenny, but requires more manual setup there because pam-auth-update is missing in Lenny.

LDAP/Kerberos + nscd + libpam-ccreds + libpam-mklocaluser/pam_mkhomedir

This is the traditional method with a twist. The password caching is provided by libpam-ccreds (version 10-4 or later is needed on Squeeze), and the directory caching is done by nscd. The directory lookup and password checking are done using LDAP. If one wants to use Kerberos for password checking, the libpam-ldapd package can be replaced with libpam-krb5 or libpam-heimdal. If one is happy having a local home directory with the path listed in LDAP, one can use the pam_mkhomedir module from pam-modules instead of libpam-mklocaluser. Until a fix for bug #568577 is in the archive, a setup for pam-auth-update to enable pam_mkhomedir will have to be written by hand. Because I believe it is a bad idea to have local home directories using misleading paths like /site/server/partition/, I prefer to create a local user with the home directory in /home/. This is done using the libpam-mklocaluser package.

These packages need to be installed and configured:

libnss-ldapd libpam-ldapd nscd libpam-ccreds libpam-mklocaluser

The ldapd packages will ask for LDAP connection information, and one has to fill in the values that fit one's own site. Make sure the PAM part uses encrypted connections, so the password is not sent in clear text to the LDAP server. I've been unable to get TLS certificate checking for a self-signed certificate working, which makes LDAP authentication unsafe for Debian Edu (nslcd is not checking if it is talking to the correct LDAP server), and I very much welcome feedback on how to get this working.

Because nscd does not have a default configuration fit for offline caching until bug #485282 is fixed, this configuration should be used instead of the one currently in /etc/nscd.conf. The changes are in the reload-count and positive-time-to-live fields, and are based on the instructions I found in the LDAP for Mobile Laptops instructions by Flyn Computing.

	debug-level		0
	reload-count		unlimited
	paranoia		no

	enable-cache		passwd		yes
	positive-time-to-live	passwd		2592000
	negative-time-to-live	passwd		20
	suggested-size		passwd		211
	check-files		passwd		yes
	persistent		passwd		yes
	shared			passwd		yes
	max-db-size		passwd		33554432
	auto-propagate		passwd		yes

	enable-cache		group		yes
	positive-time-to-live	group		2592000
	negative-time-to-live	group		20
	suggested-size		group		211
	check-files		group		yes
	persistent		group		yes
	shared			group		yes
	max-db-size		group		33554432
	auto-propagate		group		yes

	enable-cache		hosts		no
	positive-time-to-live	hosts		2592000
	negative-time-to-live	hosts		20
	suggested-size		hosts		211
	check-files		hosts		yes
	persistent		hosts		yes
	shared			hosts		yes
	max-db-size		hosts		33554432

	enable-cache		services	yes
	positive-time-to-live	services	2592000
	negative-time-to-live	services	20
	suggested-size		services	211
	check-files		services	yes
	persistent		services	yes
	shared			services	yes
	max-db-size		services	33554432

While we wait for a mechanism to update /etc/nsswitch.conf automatically like the one proposed in bug #496915, the file content needs to be replaced manually to ensure LDAP is used as the directory service on the machine. /etc/nsswitch.conf should normally look like this:

passwd:         files ldap
group:          files ldap
shadow:         files ldap
hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4
networks:       files
protocols:      files
services:       files
ethers:         files
rpc:            files
netgroup:       files ldap

The important parts are that ldap is listed last for passwd, group, shadow and netgroup.

With these changes in place, any user in LDAP will be able to log in locally on the machine using for example kdm, get a local home directory created and have the password as well as user and group attributes cached.
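To check which entries the name service switch actually returns (and thus what nscd is caching), getent can query the passwd database through the same NSS code path the login programs use. A small sketch, using the root user since it always resolves from the local files:

```shell
#!/bin/sh
# Look up a user through NSS, the same way kdm and the screen saver do.
# With the nsswitch.conf above, local files are consulted before LDAP,
# so root resolves even when the LDAP server is unreachable.
entry=$(getent passwd root)

# The passwd entry is colon separated: name:passwd:uid:gid:gecos:dir:shell
name=$(echo "$entry" | cut -d: -f1)
uid=$(echo "$entry" | cut -d: -f3)
echo "user=$name uid=$uid"   # prints: user=root uid=0
```

The same works for the group, hosts and services databases, matching the cache sections in nscd.conf above.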

LDAP/Kerberos + nss-updatedb + libpam-ccreds + libpam-mklocaluser/pam_mkhomedir

Because nscd has had its share of problems, and seems to have problems doing proper caching, I've seen suggestions and recipes to use nss-updatedb to copy parts of the LDAP database locally when the LDAP database is available. I have not tested such a setup, because I discovered sssd.

LDAP/Kerberos + sssd + libpam-mklocaluser

A more flexible and robust setup than the nscd combination mentioned earlier, which has shown up recently, is the sssd package from Red Hat. It is part of the FreeIPA project to provide an Active Directory-like directory service for Linux machines. The sssd system combines the caching of passwords and user information into one package, and removes the need for nscd and libpam-ccreds. It supports LDAP and Kerberos, but not NIS. Version 1.2 does not support netgroups, but it is said that it will in version 1.5, expected to show up later in 2010. Because the sssd package was missing in Debian, I ended up co-maintaining it with Werner, and version 1.2 is now in testing.

These packages need to be installed and configured to get the roaming setup I want:

libpam-sss libnss-sss libpam-mklocaluser

The complete setup of sssd is done by creating or editing /etc/sssd/sssd.conf:
[sssd]
config_file_version = 2
reconnection_retries = 3
sbus_timeout = 30
services = nss, pam
domains = INTERN

[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3

[pam]
reconnection_retries = 3

[domain/INTERN]
enumerate = false
cache_credentials = true

id_provider = ldap
auth_provider = ldap
chpass_provider = ldap

ldap_uri = ldap://ldap
ldap_search_base = dc=skole,dc=skolelinux,dc=no
ldap_tls_reqcert = never
ldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt

I got the same problem with certificate checking here, and had to set "ldap_tls_reqcert = never" to get it working.

With the libnss-sss package in testing at the moment, the nsswitch.conf file is updated automatically, so there is no need to modify it manually.

If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

Tags: debian edu, english, ldap, nuug.
Lenny->Squeeze upgrades, apt vs aptitude with the Gnome desktop
3rd July 2010

Here is a short update on my Debian Lenny->Squeeze upgrade testing, with a summary of the differences for Gnome when it is upgraded by apt-get and by aptitude. I'm not reporting the status for KDE, because the upgrade crashes when aptitude tries, due to missing conflicts (#584861 and #585716).

At the end of the upgrade test script, dpkg -l is executed to get a complete list of the installed packages. Based on this, I see these differences from a test run today. As usual, I do not really know what the correct set of packages would be, but thought it best to publish the differences.

Installed using apt-get, missing with aptitude

at-spi cpp-4.3 finger gnome-spell gstreamer0.10-gnomevfs libatspi1.0-0 libcupsys2 libeel2-data libgail-common libgdl-1-common libgnomeprint2.2-data libgnomeprintui2.2-common libgnomevfs2-bin libgtksourceview-common libpt-1.10.10-plugins-alsa libpt-1.10.10-plugins-v4l libservlet2.4-java libxalan2-java libxerces2-java openoffice.org-writer2latex openssl-blacklist p7zip python-4suite-xml python-eggtrayicon python-gtkhtml2 python-gtkmozembed svgalibg1 xserver-xephyr zip

Installed using apt-get, removed with aptitude

bluez-utils dhcdbd djvulibre-desktop epiphany-gecko gnome-app-install gnome-mount gnome-vfs-obexftp gnome-volume-manager libao2 libavahi-compat-libdnssd1 libavahi-core5 libbind9-50 libbluetooth2 libcamel1.2-11 libcdio7 libcucul0 libcurl3 libdirectfb-1.0-0 libdvdread3 libedata-cal1.2-6 libedataserver1.2-9 libeel2-2.20 libepc-1.0-1 libepc-ui-1.0-1 libexchange-storage1.2-3 libfaad0 libgd2-noxpm libgda3-3 libgda3-common libggz2 libggzcore9 libggzmod4 libgksu1.2-0 libgksuui1.0-1 libgmyth0 libgnome-desktop-2 libgnome-pilot2 libgnomecups1.0-1 libgnomeprint2.2-0 libgnomeprintui2.2-0 libgpod3 libgraphviz4 libgtkhtml2-0 libgtksourceview1.0-0 libgucharmap6 libhesiod0 libicu38 libisccc50 libisccfg50 libiw29 libkpathsea4 libltdl3 liblwres50 libmagick++10 libmagick10 libmalaga7 libmtp7 libmysqlclient15off libnautilus-burn4 libneon27 libnm-glib0 libnm-util0 libopal-2.2 libosp5 libparted1.8-10 libpisock9 libpisync1 libpoppler-glib3 libpoppler3 libpt-1.10.10 libraw1394-8 libsensors3 libsmbios2 libsoup2.2-8 libssh2-1 libsuitesparse-3.1.0 libswfdec-0.6-90 libtalloc1 libtotem-plparser10 libtrackerclient0 libvoikko1 libxalan2-java-gcj libxerces2-java-gcj libxklavier12 libxtrap6 libxxf86misc1 libzephyr3 mysql-common swfdec-gnome totem-gstreamer wodim

Installed using aptitude, missing with apt-get

gnome gnome-desktop-environment hamster-applet python-gnomeapplet python-gnomekeyring python-wnck rhythmbox-plugins xorg xserver-xorg-input-all xserver-xorg-input-evdev xserver-xorg-input-kbd xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-video-all xserver-xorg-video-apm xserver-xorg-video-ark xserver-xorg-video-ati xserver-xorg-video-chips xserver-xorg-video-cirrus xserver-xorg-video-dummy xserver-xorg-video-fbdev xserver-xorg-video-glint xserver-xorg-video-i128 xserver-xorg-video-i740 xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-neomagic xserver-xorg-video-nouveau xserver-xorg-video-nv xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-radeonhd xserver-xorg-video-rendition xserver-xorg-video-s3 xserver-xorg-video-s3virge xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sis xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-tga xserver-xorg-video-trident xserver-xorg-video-tseng xserver-xorg-video-vesa xserver-xorg-video-vmware xserver-xorg-video-voodoo

Installed using aptitude, removed with apt-get

deskbar-applet xserver-xorg xserver-xorg-core xserver-xorg-input-wacom xserver-xorg-video-intel xserver-xorg-video-openchrome

I was told on IRC that the xorg-server package was changed in git today to try to get apt-get to not remove xorg completely. No idea when it will hit Squeeze, but when it does I hope it will reduce the difference somewhat.

Tags: debian, debian edu, english.
jXplorer, a very nice LDAP GUI
9th July 2010

Since my last post about available LDAP tools in Debian, I have been told about an LDAP GUI that is even better than LUMA. The Java application jXplorer is claimed to be capable of moving LDAP objects and subtrees using drag-and-drop, and can authenticate using Kerberos. I have only tested the Kerberos authentication, and do not yet have an LDAP setup allowing me to rewrite LDAP with my test user. It is available in Debian testing and unstable at the moment. The only problem I have with it is how it handles errors. If something goes wrong, its non-intuitive behaviour requires me to go through a query work list and remove the failing query. Nothing big, but very annoying.

Tags: debian, debian edu, english, ldap, nuug.
Idea for storing LTSP configuration in LDAP
11th July 2010

Vagrant mentioned on IRC today that ltsp_config now supports sourcing files from /usr/share/ltsp/ltsp_config.d/ on the thin clients, and that this can be used to fetch configuration from LDAP if Debian Edu chooses to store configuration there.

Armed with this information, I got inspired and wrote a test module to get configuration from LDAP. The idea is to look up the MAC address of the client in LDAP, look for attributes of the form ltspconfigsetting=value, and use this to export SETTING=value to the LTSP clients.

The goal is to be able to store the LTSP configuration attributes in a "computer" LDAP object used by both DNS and DHCP, and thus allowing us to store all information about a computer in one place.

This is an untested draft implementation, and I welcome feedback on this approach. A real LDAP schema for the ltspClientAux object class needs to be written. Comments, suggestions, etc.?

# Store in /opt/ltsp/$arch/usr/share/ltsp/ltsp_config.d/ldap-config
#
# Fetch LTSP client settings from LDAP based on MAC address
#
# Uses ethernet address as stored in the dhcpHost objectclass using
# the dhcpHWAddress attribute or ethernet address stored in the
# ieee802Device objectclass with the macAddress attribute.
#
# This module is written to be schema agnostic, and only depend on the
# existence of attribute names.
#
# The LTSP configuration variables are saved directly using a
# ltspConfig prefix and uppercasing the rest of the attribute name.
# To set the SERVER variable, set the ltspConfigServer attribute.
#
# Some LDAP schema should be created with all the relevant
# configuration settings.  Something like this should work:
# 
# objectclass ( 1.1.2.2 NAME 'ltspClientAux'
#     SUP top
#     AUXILIARY
#     MAY ( ltspConfigServer $ ltspConfigSound $ ... )

LDAPSERVER=$(debian-edu-ldapserver)
if [ "$LDAPSERVER" ] ; then
    LDAPBASE=$(debian-edu-ldapserver -b)
    for MAC in $(LANG=C ifconfig |grep -i hwaddr| awk '{print $5}'|sort -u) ; do
	filter="(|(dhcpHWAddress=ethernet $MAC)(macAddress=$MAC))"
	ldapsearch -h "$LDAPSERVER" -b "$LDAPBASE" -v -x "$filter" | \
	    grep '^ltspConfig' | while read attr value ; do
	    # Remove prefix and convert to upper case
	    attr=$(echo $attr | sed 's/^ltspConfig//i' | tr a-z A-Z)
	    # pass value on to clients
	    eval "$attr=$value; export $attr"
	done
    done
fi

I'm not sure this shell construction will work, because I suspect the while block might end up in a subshell, causing the variables set there to not show up in ltsp_config, but if that is the case I am sure the code can be restructured to make sure the variables are passed on. I expect that can be solved with some testing. :)
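The suspicion is justified for many shells: the commands in a pipeline may run in subshells, so variables set inside the while loop are lost. One way to restructure it, sketched here with simulated ldapsearch output instead of a live LDAP server, is to feed the loop from a temporary file:

```shell
#!/bin/sh
# Simulated ldapsearch output lines; a real run would use the
# ldapsearch command from the draft module above.
sample='ltspConfigServer: 192.168.0.254
ltspConfigSound: True'

# Write the filtered attribute lines to a temporary file and redirect
# it into the loop, so the loop runs in the current shell and the
# exported variables survive it.
tmp=$(mktemp)
printf '%s\n' "$sample" | grep '^ltspConfig' > "$tmp"
while read attr value ; do
    # Remove the prefix and the trailing colon, and convert to upper case.
    attr=$(echo "$attr" | sed 's/^ltspConfig//;s/:$//' | tr a-z A-Z)
    eval "$attr=\"$value\"; export $attr"
done < "$tmp"
rm -f "$tmp"

echo "SERVER=$SERVER SOUND=$SOUND"   # prints: SERVER=192.168.0.254 SOUND=True
```

Note that ldapsearch prints attributes as "name: value", so the trailing colon has to be stripped before the name can be used as a variable name.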

If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

Update 2010-07-17: I am aware of another effort to store LTSP configuration in LDAP, created around the year 2000 by PC Xperience, Inc. I found its files on a personal home page over at redhat.com.

Tags: debian, debian edu, english, ldap, nuug.
Combining PowerDNS and ISC DHCP LDAP objects
14th July 2010

For a while now, I have wanted to find a way to change the DNS and DHCP services in Debian Edu to use the same LDAP objects for a given computer, to avoid the possibility of an inconsistent state for a computer in LDAP (as in a DHCP but no DNS entry, or the other way around) and to make it easier to add computers to LDAP.

I've looked at how powerdns and dhcpd use LDAP, and using this information finally found a solution that seems to work.

The old setup required three LDAP objects for a given computer: one forward DNS entry, one reverse DNS entry and one DHCP entry. If we switch powerdns to use its strict LDAP method (ldap-method=strict in pdns-debian-edu.conf), the forward and reverse DNS entries are merged into one, at the cost of making it impossible to transfer the reverse map to a slave DNS server.
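For reference, the strict method is selected in the PowerDNS LDAP backend configuration. A minimal sketch of such a fragment; the host and base DN values here are assumptions for illustration, not the exact Debian Edu values:

```
# Fragment of a PowerDNS LDAP backend configuration (assumed values)
launch=ldap
ldap-host=ldap://ldap.intern/
ldap-basedn=ou=hosts,dc=skole,dc=skolelinux,dc=no
ldap-method=strict
```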

If we also replace the object class used to provide the DNS related attributes with one allowing these attributes to be combined with the dhcphost object class, we can merge the DNS and DHCP entries into one. I've written such an object class in the dnsdomainaux.schema file (it needs proper OIDs, but that is a minor issue), and tested the setup. It seems to work.

With this test setup in place, we can get away with one LDAP object for both DNS and DHCP, and even the LTSP configuration I suggested in an earlier email. The combined LDAP object will look something like this:

  dn: cn=hostname,cn=group1,cn=THINCLIENTS,cn=DHCP Config,dc=skole,dc=skolelinux,dc=no
  cn: hostname
  objectClass: dhcphost
  objectclass: domainrelatedobject
  objectclass: dnsdomainaux
  associateddomain: hostname.intern
  arecord: 10.11.12.13
  dhcphwaddress: ethernet 00:00:00:00:00:00
  dhcpstatements: fixed-address hostname
  ldapconfigsound: Y

The DNS server uses the associateddomain and arecord entries, while the DHCP server uses the dhcphwaddress and dhcpstatements entries before asking DNS to resolve the fixed-address. LTSP will use dhcphwaddress or associateddomain and the ldapconfig* attributes.

I am not yet sure if I can get the DHCP server to look for its dhcphost objects in a different location, to allow us to put the objects outside the "DHCP Config" subtree, but I hope to figure out a way to do that. If I can't, we can still get rid of the hosts subtree and move all its content into the DHCP Config tree (which probably should be renamed to be more related to the new content; I suspect cn=dnsdhcp,ou=services or something like that might be a good place to put it).

If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

Tags: debian, debian edu, english, ldap, nuug.
What are they searching for - PowerDNS and ISC DHCP in LDAP
17th July 2010

This is a followup on my previous work on merging all the computer related LDAP objects in Debian Edu.

As a step towards seeing if it is possible to merge the DNS and DHCP LDAP objects, I have had a look at how the packages pdns-backend-ldap and dhcp3-server-ldap in Debian use the LDAP server. The two implementations are quite different in how they use LDAP.

To get this information, I started slapd with debugging enabled and dumped the debug output to a file to get the LDAP searches performed on a Debian Edu main-server. Here is a summary.
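Mining the debug output for the searches can be done with a simple filter. A sketch, run on an invented sample line so it needs no running slapd; real lines come from a foreground slapd started with the stats debug level (slapd -d 256), which logs each search with its base and filter:

```shell
#!/bin/sh
# Invented sample of slapd stats-level debug output; real lines come
# from a foreground slapd started with "-d 256", with stderr saved
# to a file.
log='conn=1 op=2 SRCH base="ou=hosts,dc=skole,dc=skolelinux,dc=no" scope=2 deref=0 filter="(associatedDomain=tjener.intern)"'

# Extract the search base and filter from each SRCH line.
summary=$(printf '%s\n' "$log" | \
    sed -n 's/.*SRCH base="\([^"]*\)".*filter="\([^"]*\)".*/base=\1 filter=\2/p')
echo "$summary"
```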

powerdns

Clues on how to set up PowerDNS to use an LDAP backend are available on the web.

PowerDNS has two modes of operation using LDAP as its backend: a "strict" mode where the forward and reverse DNS lookups are done using the same LDAP objects, and a "tree" mode where the forward and reverse entries are in two different subtrees in LDAP, with a structure based on the DNS names, as in tjener.intern and 2.2.0.10.in-addr.arpa.

In tree mode, the server is set up to use an LDAP subtree as its base, and uses a "base" scoped search for the DNS name by adding "dc=tjener,dc=intern," to the base with a filter for "(associateddomain=tjener.intern)" for the forward entry and "dc=2,dc=2,dc=0,dc=10,dc=in-addr,dc=arpa," with a filter for "(associateddomain=2.2.0.10.in-addr.arpa)" for the reverse entry. For forward entries, it is looking for attributes named dnsttl, arecord, nsrecord, cnamerecord, soarecord, ptrrecord, hinforecord, mxrecord, txtrecord, rprecord, afsdbrecord, keyrecord, aaaarecord, locrecord, srvrecord, naptrrecord, kxrecord, certrecord, dsrecord, sshfprecord, ipseckeyrecord, rrsigrecord, nsecrecord, dnskeyrecord, dhcidrecord, spfrecord and modifytimestamp. For reverse entries it is looking for the attributes dnsttl, arecord, nsrecord, cnamerecord, soarecord, ptrrecord, hinforecord, mxrecord, txtrecord, rprecord, aaaarecord, locrecord, srvrecord, naptrrecord and modifytimestamp. The equivalent ldapsearch commands could look like this:

ldapsearch -h ldap \
  -b dc=tjener,dc=intern,ou=hosts,dc=skole,dc=skolelinux,dc=no \
  -s base -x '(associateddomain=tjener.intern)' dNSTTL aRecord nSRecord \
  cNAMERecord sOARecord pTRRecord hInfoRecord mXRecord tXTRecord \
  rPRecord aFSDBRecord KeyRecord aAAARecord lOCRecord sRVRecord \
  nAPTRRecord kXRecord certRecord dSRecord sSHFPRecord iPSecKeyRecord \
  rRSIGRecord nSECRecord dNSKeyRecord dHCIDRecord sPFRecord modifyTimestamp

ldapsearch -h ldap \
  -b dc=2,dc=2,dc=0,dc=10,dc=in-addr,dc=arpa,ou=hosts,dc=skole,dc=skolelinux,dc=no \
  -s base -x '(associateddomain=2.2.0.10.in-addr.arpa)' \
  dNSTTL aRecord nSRecord cNAMERecord sOARecord pTRRecord \
  hInfoRecord mXRecord tXTRecord rPRecord aAAARecord lOCRecord \
  sRVRecord nAPTRRecord modifyTimestamp

In Debian Edu/Lenny, the PowerDNS tree mode is used with ou=hosts,dc=skole,dc=skolelinux,dc=no as the base, and these are two example LDAP objects used there. In addition to these objects, the parent objects all the way up to ou=hosts,dc=skole,dc=skolelinux,dc=no also exist.

dn: dc=tjener,dc=intern,ou=hosts,dc=skole,dc=skolelinux,dc=no
objectclass: top
objectclass: dnsdomain
objectclass: domainrelatedobject
dc: tjener
arecord: 10.0.2.2
associateddomain: tjener.intern

dn: dc=2,dc=2,dc=0,dc=10,dc=in-addr,dc=arpa,ou=hosts,dc=skole,dc=skolelinux,dc=no
objectclass: top
objectclass: dnsdomain2
objectclass: domainrelatedobject
dc: 2
ptrrecord: tjener.intern
associateddomain: 2.2.0.10.in-addr.arpa

In strict mode, the server behaves differently. When looking for forward DNS entries, it does a "subtree" scoped search with the same base as in the tree mode for an object with filter "(associateddomain=tjener.intern)", and requests the attributes dnsttl, arecord, nsrecord, cnamerecord, soarecord, ptrrecord, hinforecord, mxrecord, txtrecord, rprecord, aaaarecord, locrecord, srvrecord, naptrrecord and modifytimestamp. For reverse entries it also does a subtree scoped search, but this time the filter is "(arecord=10.0.2.2)" and the requested attributes are associateddomain, dnsttl and modifytimestamp. In short, in strict mode the objects with ptrrecord go away, and the arecord attribute in the forward object is used instead.

The forward and reverse searches can be simulated using ldapsearch like this:

ldapsearch -h ldap -b ou=hosts,dc=skole,dc=skolelinux,dc=no -s sub -x \
  '(associateddomain=tjener.intern)' dNSTTL aRecord nSRecord \
  cNAMERecord sOARecord pTRRecord hInfoRecord mXRecord tXTRecord \
  rPRecord aFSDBRecord KeyRecord aAAARecord lOCRecord sRVRecord \
  nAPTRRecord kXRecord certRecord dSRecord sSHFPRecord iPSecKeyRecord \
  rRSIGRecord nSECRecord dNSKeyRecord dHCIDRecord sPFRecord modifyTimestamp

ldapsearch -h ldap -b ou=hosts,dc=skole,dc=skolelinux,dc=no -s sub -x \
  '(arecord=10.0.2.2)' associateddomain dnsttl modifytimestamp

In addition to the forward and reverse searches, there is also a search for SOA records, which behaves similarly to the forward and reverse lookups.

A thing to note about the PowerDNS behaviour is that it does not specify any objectclass names, and instead looks for the attributes it needs to generate a DNS reply. This makes it able to work with any objectclass that provides the needed attributes.
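The attribute-driven lookup can be illustrated with a small sketch (the helper name and data are made up for illustration, this is not PowerDNS code): any entry carrying the requested attributes produces a DNS answer, regardless of its object classes.

```python
# Sketch of attribute-driven record extraction, as PowerDNS does it in
# LDAP mode: object classes are ignored, only attribute names matter.
# Illustration only, not actual PowerDNS code.

def extract_records(entry, wanted=("arecord", "ptrrecord", "mxrecord")):
    """Return (attribute, value) pairs for the DNS-relevant attributes
    present in an LDAP entry, ignoring its objectclass values."""
    records = []
    for attr in wanted:
        for value in entry.get(attr, []):
            records.append((attr, value))
    return records

# Works for the dnsdomain object shown above ...
tjener = {
    "objectclass": ["top", "dnsdomain", "domainrelatedobject"],
    "arecord": ["10.0.2.2"],
    "associateddomain": ["tjener.intern"],
}
# ... and equally for any other class providing the same attributes.
custom = {
    "objectclass": ["top", "myLocalHostClass"],
    "arecord": ["10.0.2.3"],
}

print(extract_records(tjener))  # [('arecord', '10.0.2.2')]
print(extract_records(custom))  # [('arecord', '10.0.2.3')]
```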

The attributes are normally provided in the cosine (RFC 1274) and dnsdomain2 schemas. The latter is used for reverse entries like ptrrecord and recent DNS additions like aaaarecord and srvrecord.

In Debian Edu, we have created DNS objects using the object classes dcobject (for dc), dnsdomain or dnsdomain2 (structural, for the DNS attributes) and domainrelatedobject (for associatedDomain). The use of structural object classes makes it impossible to combine these classes with the object classes used by DHCP.

There are other schemas that could be used too, for example the dnszone structural object class used by Gosa and bind-sdb for the DNS attributes combined with the domainrelatedobject object class, but in this case some unused attributes would have to be included as well (zonename and relativedomainname).

My proposal for Debian Edu would be to switch PowerDNS to strict mode and, when one wants to combine the DNS information with DHCP information, not use any of the existing objectclasses (dnsdomain, dnsdomain2 and dnszone), but instead create an auxiliary object class defined something like this (using the attributes defined for dnsdomain and dnsdomain2 or dnszone):

objectclass ( some-oid NAME 'dnsDomainAux'
    SUP top
    AUXILIARY
    MAY ( ARecord $ MDRecord $ MXRecord $ NSRecord $ SOARecord $ CNAMERecord $
          DNSTTL $ DNSClass $ PTRRecord $ HINFORecord $ MINFORecord $
          TXTRecord $ SIGRecord $ KEYRecord $ AAAARecord $ LOCRecord $
          NXTRecord $ SRVRecord $ NAPTRRecord $ KXRecord $ CERTRecord $
          A6Record $ DNAMERecord
    ))

This will allow any object to become a DNS entry when combined with the domainrelatedobject object class, and allow any entity to include all the attributes PowerDNS wants. I've sent an email to the PowerDNS developers asking for their view on this schema and if they are interested in providing such schema with PowerDNS, and I hope my message will be accepted into their mailing list soon.

ISC dhcp

The DHCP server searches for a specific objectclass and requests all the object's attributes, and then uses the attributes it wants. This makes it harder to figure out exactly which attributes are used, but thanks to the working example in Debian Edu I can at least get an idea of what is needed without having to read the source code.

In the DHCP server configuration, the LDAP base to use and the search filter to use to locate the correct dhcpServer entity is stored. These are the relevant entries from /etc/dhcp3/dhcpd.conf:

ldap-base-dn "dc=skole,dc=skolelinux,dc=no";
ldap-dhcp-server-cn "dhcp";

The DHCP server uses this information to fetch all the DHCP configuration it needs. The cn "dhcp" is located using the given LDAP base and the filter "(&(objectClass=dhcpServer)(cn=dhcp))". The search result is this entry:

dn: cn=dhcp,dc=skole,dc=skolelinux,dc=no
cn: dhcp
objectClass: top
objectClass: dhcpServer
dhcpServiceDN: cn=DHCP Config,dc=skole,dc=skolelinux,dc=no

The content of the dhcpServiceDN attribute is next used to locate the subtree with DHCP configuration. The DHCP configuration subtree base is located using a base scope search with base "cn=DHCP Config,dc=skole,dc=skolelinux,dc=no" and filter "(&(objectClass=dhcpService)(|(dhcpPrimaryDN=cn=dhcp,dc=skole,dc=skolelinux,dc=no)(dhcpSecondaryDN=cn=dhcp,dc=skole,dc=skolelinux,dc=no)))". The search result is this entry:

dn: cn=DHCP Config,dc=skole,dc=skolelinux,dc=no
cn: DHCP Config
objectClass: top
objectClass: dhcpService
objectClass: dhcpOptions
dhcpPrimaryDN: cn=dhcp, dc=skole,dc=skolelinux,dc=no
dhcpStatements: ddns-update-style none
dhcpStatements: authoritative
dhcpOption: smtp-server code 69 = array of ip-address
dhcpOption: www-server code 72 = array of ip-address
dhcpOption: wpad-url code 252 = text
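
The two-step lookup, from dhcpServer entry to configuration subtree, can be mimicked with a toy in-memory directory (a sketch with made-up helper names, not dhcpd source code):

```python
# Toy simulation of dhcpd's two LDAP lookups: first the dhcpServer
# entry is found by cn, then its dhcpServiceDN attribute points to the
# configuration subtree.  Illustration only, not dhcpd code.

directory = {
    "cn=dhcp,dc=skole,dc=skolelinux,dc=no": {
        "objectClass": ["top", "dhcpServer"],
        "cn": ["dhcp"],
        "dhcpServiceDN": ["cn=DHCP Config,dc=skole,dc=skolelinux,dc=no"],
    },
    "cn=DHCP Config,dc=skole,dc=skolelinux,dc=no": {
        "objectClass": ["top", "dhcpService", "dhcpOptions"],
        "dhcpStatements": ["ddns-update-style none", "authoritative"],
    },
}

def find_service_config(directory, server_cn):
    # Step 1: locate the dhcpServer entry with the configured cn.
    for dn, entry in directory.items():
        if "dhcpServer" in entry["objectClass"] and server_cn in entry["cn"]:
            # Step 2: follow dhcpServiceDN to the configuration subtree.
            service_dn = entry["dhcpServiceDN"][0]
            return service_dn, directory[service_dn]
    return None, None

dn, config = find_service_config(directory, "dhcp")
print(dn)  # cn=DHCP Config,dc=skole,dc=skolelinux,dc=no
```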

Next, the entire subtree is processed, one level at a time. When all the DHCP configuration is loaded, the server is ready to receive requests. The subtree in Debian Edu contains objects with object classes top/dhcpService/dhcpOptions, top/dhcpSharedNetwork/dhcpOptions, top/dhcpSubnet, top/dhcpGroup and top/dhcpHost. These provide options and information about netmasks, dynamic ranges etc. I am leaving out the details here because they are not relevant for the focus of my investigation, which is to see if it is possible to merge DNS and DHCP related computer objects.

When a DHCP request comes in, LDAP is searched for the MAC address of the client (00:00:00:00:00:00 in this example), using a subtree scoped search with "cn=DHCP Config,dc=skole,dc=skolelinux,dc=no" as the base and "(&(objectClass=dhcpHost)(dhcpHWAddress=ethernet 00:00:00:00:00:00))" as the filter. This is what a host object looks like:

dn: cn=hostname,cn=group1,cn=THINCLIENTS,cn=DHCP Config,dc=skole,dc=skolelinux,dc=no
cn: hostname
objectClass: top
objectClass: dhcpHost
dhcpHWAddress: ethernet 00:00:00:00:00:00
dhcpStatements: fixed-address hostname

There is less flexibility in the way LDAP searches are done here. The object classes need to have fixed names, and the configuration needs to be stored in a fairly specific LDAP structure. On the positive side, the individual dhcpHost entries can be anywhere below the DN pointed to by the dhcpServer entries. The latter should make it possible to group all host entries in a subtree next to the configuration entries, and this subtree can also be shared with the DNS server if the schema proposed above is combined with the dhcpHost structural object class.

Conclusion

The PowerDNS implementation seems to be very flexible when it comes to which LDAP schemas to use. While its "tree" mode is rigid when it comes to the LDAP structure, the "strict" mode is very flexible, allowing DNS objects to be stored anywhere under the base cn specified in the configuration.

The DHCP implementation on the other hand is very inflexible, both regarding which LDAP schemas to use and which LDAP structure to use. I guess one could implement one's own schema, as long as the objectclasses and attributes have the expected names, but this does not really help when the DHCP subtree needs to have a fairly fixed structure.

Based on the observed behaviour, I suspect an LDAP structure like this might work for Debian Edu:

ou=services
  cn=machine-info (dhcpService) - dhcpServiceDN points here
    cn=dhcp (dhcpServer)
    cn=dhcp-internal (dhcpSharedNetwork/dhcpOptions)
      cn=10.0.2.0 (dhcpSubnet)
        cn=group1 (dhcpGroup/dhcpOptions)
    cn=dhcp-thinclients (dhcpSharedNetwork/dhcpOptions)
      cn=192.168.0.0 (dhcpSubnet)
        cn=group1 (dhcpGroup/dhcpOptions)
    ou=machines - PowerDNS base points here
      cn=hostname (dhcpHost/domainrelatedobject/dnsDomainAux)

This is not tested yet. If the DHCP server requires the dhcpHost entries to be in the dhcpGroup subtrees, the entries can be stored there instead of a common machines subtree, and the PowerDNS base would have to be moved one level up to the machine-info subtree.

The combined object under the machines subtree would look something like this:

dn: dc=hostname,ou=machines,cn=machine-info,dc=skole,dc=skolelinux,dc=no
dc: hostname
objectClass: top
objectClass: dhcpHost
objectclass: domainrelatedobject
objectclass: dnsDomainAux
associateddomain: hostname.intern
arecord: 10.11.12.13
dhcpHWAddress: ethernet 00:00:00:00:00:00
dhcpStatements: fixed-address hostname.intern

One could even add the LTSP configuration associated with a given machine, as long as the required attributes are available in an auxiliary object class.

Tags: debian, debian edu, english, ldap, nuug.
OpenStreetmap one step closer to having routing on its front page
18th July 2010

Thanks to today's opengeodata blog entry, I just discovered that the OpenStreetmap.org site has gained support for calculating routes. The support is still experimental and only available on the development server, until more experience is gathered on the user interface and any scalability issues.

Earlier, the only routing I knew about using the OpenStreetmap.org data was provided by Cloudmade, but having it on the main page is needed to make everyone aware of the feature. I've had people reject Openstreetmap.org as a viable alternative for them because the front page lacked routing support, and I hope their needs will be catered for when routing shows up on the www.openstreetmap.org front page.

Tags: english, kart, web.
One step closer to single signon in Debian Edu
25th July 2010

For the last few months, the other Debian Edu developers and I have been working hard to get the Debian/Squeeze based version of Debian Edu/Skolelinux into shape. This future version will use Kerberos for authentication, and services are slowly being migrated to single signon, getting rid of password questions one at a time.

It will also feature a roaming workstation profile with a local home directory, for laptops that are only sometimes on the Skolelinux network. For this profile a shortcut is created in Gnome and KDE to gain access to the user's home directory on the file server. This shortcut uses SMB at the moment, and yesterday I had time to test if SMB mounting had started working in KDE after we added the cifs-utils package. I was pleasantly surprised how well it worked.

Thanks to the recent changes to our Samba configuration to get it to use Kerberos for authentication, there was no question about the user password when mounting the SMB volume. A simple click on the shortcut in the KDE menu, and a window with the home directory popped up. :)

One step closer to a single signon solution out of the box in Debian Edu. We already had PAM, LDAP, IMAP and SMTP in place, and now also Samba. Next step is Cups and hopefully also NFS.

We had planned an alpha0 release of Debian Edu for today, but because the autobuilder administrators for some architectures were slow to sign packages, we are still missing the fixed LTSP package we need for the release. It was uploaded three days ago with urgency=high, and if it had entered testing yesterday we would have been able to test it in time for an alpha0 release today. As the binaries for ia64 and powerpc are still not uploaded to the Debian archive, we need to delay the alpha release another day.

If you want to help out with implementing Kerberos for Debian Edu, please contact us on debian-edu@lists.debian.org.

Tags: debian edu, english, nuug, sikkerhet.
First Debian Edu test release (alpha0) based on Squeeze is released
27th July 2010

I just posted this announcement, culminating several months of work on the next Debian Edu release. We are not nearly done, but one major step is completed.

This is the first test release based on Squeeze. The focus of this release is to test the user application selection. To have a look, install the standalone profile and let the developers know if the set of installed packages, i.e. applications, should be modified. If some user application is missing, or if there are applications that no longer make sense to include in Debian Edu, please let us know. Also, if a useful application is missing a translation for your language of choice, please let us know too.

In addition, feedback and help to polish the desktop (menus, artwork, starters, etc.) is appreciated. We would like to ship a nice and handy KDE4 desktop targeted for schools out of the box.

The other profiles should be installable, but there is a lot more work left to be done before they are ready, so do not expect too much.

Changes compared to the lenny based version

  • Everything from Debian Squeeze
    • Desktop environment KDE 4.4 => the new KDE desktop in combination with some new artwork
    • Web browser Iceweasel 3.5
    • OpenOffice.org 3.2
    • Educational toolbox GCompris 9.3
    • Music creator Rosegarden 10.04.2
    • Image editor Gimp 2.6.10
    • Virtual universe Celestia 1.6.0
    • Virtual stargazer Stellarium 0.10.4
    • 3D modeler Blender 2.49.2 (new application)
    • Video editor Kdenlive 0.7.7 (new application)
  • Now using Kerberos for password checking (migration not finished). Enabled for:
    • PAM
    • LDAP
    • IMAP
    • SMTP (sender verification)
  • New experimental roaming workstation profile for laptops.
  • Show welcome page to users when they first log in. The URL is fetched from LDAP.
  • New LXDE desktop option, in addition to KDE (default) and Gnome.
  • General cleanup (not finished)

The following features are not working as they should

  • No web based administration tool for creating users and groups. The scripts ldap-createuser-krb and ldap-add-user-to-group can be used for testing.
  • DVD installs are missing debian-installer images for the PXE boot, and do not set up the PXE menu on eth0 because of this. LTSP clients should still boot from eth1 on thin client servers.
  • The restructured KDE menu is not implemented.
  • The LDAP server setup needs to be reviewed for security.
  • The LDAP directory structure needs to be reworked.
  • Different sets of packages are installed when using the DVD and the netinst CD. More packages are installed using the netinst CD.
  • The jackd package fails to install. This is believed to be caused by some ongoing transition, and hopefully should be solved soon. The jackd1 package can be installed manually by those that need it.
  • Some packages lack translations. See http://wiki.debian.org/DebianEdu/Status/Squeeze for updated status, and help out with translations.

To download this multiarch netinstall release you can use

To download this multiarch dvd release you can use

There is no source DVD available yet. It will be prepared when we get closer to the final release.

The MD5SUM of these images are

  • 3dbf45d59f42a53518b6e3c9ec3b5eb6 debian-edu-6.0.0+edua0-CD.iso
  • 22f2cbfce281d1c6e478be452638675d debian-edu-6.0.0+edua0-DVD.iso

The SHA1SUM of these images are

  • c53d1b69b40cf37cd27aefaf33f6f6a3821bedf0 debian-edu-6.0.0+edua0-CD.iso
  • 2ec29d7db676d59d32197b05c277ffe16348376c debian-edu-6.0.0+edua0-DVD.iso

How to report bugs: http://wiki.debian.org/DebianEdu/HowTo/ReportBugsInBugzilla

Please direct replies to debian-edu@lists.debian.org

Tags: debian edu, english, nuug.
Circular package dependencies harms apt recovery
27th July 2010

I discovered this while doing automated testing of upgrades from Debian Lenny to Squeeze. A few packages in Debian still have circular dependencies, and while it is often claimed that apt and aptitude should be able to handle these just fine, sometimes the dependency loops cause apt to fail.

An example is from today's upgrade of KDE using aptitude. In it, a bug in kdebase-workspace-data causes perl-modules to fail to upgrade. The cause is simple. If a package fails to unpack, only some of the packages in the circular dependency might end up being unpacked when unpacking aborts, and the ones already unpacked will fail to configure in the recovery phase because their dependencies are unavailable.

In this log, the problem manifest itself with this error:

dpkg: dependency problems prevent configuration of perl-modules:
 perl-modules depends on perl (>= 5.10.1-1); however:
  Version of perl on system is 5.10.0-19lenny2.
dpkg: error processing perl-modules (--configure):
 dependency problems - leaving unconfigured

The perl/perl-modules circular dependency is already reported as a bug, and will hopefully be solved as soon as possible, but it is not the only one, and each one of these loops in the dependency tree can cause similar failures. Of course, they only occur when there are bugs in other packages causing the unpacking to fail, but it is rather nasty when the failure of one package causes the problem to become worse because of dependency loops.
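Such loops can be found mechanically. Here is a minimal sketch of cycle detection over a dependency map, using toy package names rather than the real Debian archive, just to show the principle:

```python
# Minimal cycle detection over a package dependency map, to illustrate
# how loops like perl <-> perl-modules can be found.  Toy data, not
# the real Debian dependency graph.

def find_cycle(deps, start):
    """Return a dependency cycle reachable from start, or None."""
    path, seen = [], set()
    def visit(pkg):
        if pkg in path:
            # We walked back into our own path: that is a cycle.
            return path[path.index(pkg):] + [pkg]
        if pkg in seen:
            return None
        seen.add(pkg)
        path.append(pkg)
        for dep in deps.get(pkg, []):
            cycle = visit(dep)
            if cycle:
                return cycle
        path.pop()
        return None
    return visit(start)

deps = {
    "perl": ["perl-modules", "libc6"],
    "perl-modules": ["perl"],
    "libc6": [],
}
print(find_cycle(deps, "perl"))  # ['perl', 'perl-modules', 'perl']
```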

Thanks to the tireless effort by Bill Allombert, the number of circular dependencies left in Debian is dropping, and perhaps it will reach zero one day. :)

Today's testing also exposed a bug in update-notifier, and different behaviour between apt-get and aptitude, the latter possibly caused by some circular dependency. I reported both to the BTS to try to get someone to look at them.

Tags: debian, english, nuug.
Debian Edu roaming workstation - at the university of Oslo
3rd August 2010

The new roaming workstation profile in Debian Edu/Squeeze is fairly similar to the laptop setup I am working on using Ubuntu for the University of Oslo, and just for the heck of it, I tested today how hard it would be to integrate that profile into the university infrastructure. In this case, that means the university LDAP server, the Active Directory Kerberos server and SMB mounting from the Netapp file servers.

I was pleasantly surprised that only three files needed to be changed (/etc/sssd/sssd.conf, /etc/ldap.conf and /etc/mklocaluser.d/20-debian-edu-config) and one file had to be added (/usr/share/perl5/Debian/Edu_Local.pm) to get the client working. Most of the changes were to get the client to use the university LDAP server for NSS and the Kerberos server for PAM, but one was to change a hard coded DNS domain name in the mklocaluser hook from .intern to .uio.no.

This testing was so encouraging, that I went ahead and adjusted the Debian Edu scripts and setup in subversion to centralise the roaming workstation setup a bit more and avoid the hardcoded DNS domain name, so that when I test this tomorrow, I expect to get away with modifying only /etc/sssd/sssd.conf and /etc/ldap.conf to get it to use the university servers.

My goal is to get the clients to have no hardcoded settings and fetch all their initial setup during installation and first boot, to allow them to be inserted also into environments where the default setup in Debian Edu has been changed or as with the university, where the environment is different but provides the protocols Debian Edu uses.

Tags: debian edu, english, nuug.
Autodetecting Client setup for roaming workstations in Debian Edu
7th August 2010

A few days ago, I tried to install a roaming workstation profile from Debian Edu/Squeeze while on the university network here at the University of Oslo, and noticed how much had to change to get it operational using the university infrastructure. It was fairly easy, but it occurred to me that Debian Edu would improve a lot if the client could connect without any changes at all, and thus configure itself during installation and first boot to use the infrastructure around it. Now I am a huge step further along that road.

With our current squeeze-test packages, I can select the roaming workstation profile and get a working laptop connecting to the university LDAP server for user and group information and to our Active Directory servers for Kerberos authentication. All this without any configuration at all during installation. My user's home directory got a bookmark in the KDE menu to mount it via SMB, with the correct URL. In short, openldap and sssd are correctly configured. In addition to this, the client looks for http://wpad/wpad.dat to configure a web proxy, and when it fails to find it, no proxy settings are stored in /etc/environment and /etc/apt/apt.conf. Iceweasel and KDE are configured to look for the same wpad configuration, and thus also use no proxy when on the university network. If the machine is moved to a network with such a wpad setup, it will automatically use it when DHCP gives it an IP address.

The LDAP server is located using DNS, by first looking for the DNS entry ldap.$domain. If this does not exist, the client looks for the _ldap._tcp.$domain SRV records and uses the first one as the LDAP server. Next, it connects to the LDAP server and searches all namingContexts entries for posixAccount or posixGroup objects, and picks the first one as the LDAP base. For Kerberos, a similar algorithm is used to locate the server, and the realm is the uppercase version of $domain.
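The lookup order can be sketched as a small function. The names are made up for illustration, and the resolvers are passed in as callbacks so the logic can be shown without network access; a real client would issue actual DNS queries, and SRV selection would normally also honour priority and weight:

```python
# Sketch of the server location order described above: try the host
# name ldap.$domain first, then fall back to _ldap._tcp.$domain SRV
# records.  Illustration only, not the actual Debian Edu code.

def locate_ldap_server(domain, resolve_host, resolve_srv):
    host = resolve_host("ldap." + domain)
    if host:
        return host
    srv = resolve_srv("_ldap._tcp." + domain)
    if srv:
        return srv[0]  # this sketch simply takes the first SRV record
    return None

# Example with fake resolvers where only the SRV record exists:
hosts = {}
srvs = {"_ldap._tcp.example.org": ["ldap1.example.org"]}
server = locate_ldap_server("example.org", hosts.get, srvs.get)
print(server)  # ldap1.example.org
```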

So, what is not working, you might ask. SMB mounting of my home directory does not work. No idea why, but I suspect the incorrect Kerberos settings in /etc/krb5.conf and /etc/samba/smb.conf might be the cause. These are not properly configured during installation, and had to be hand-edited to get the correct Kerberos realm and server, but SMB mounting still does not work. :(

With this automatic configuration in place, I expect a Debian Edu roaming profile installation to be able to automatically detect and connect to any site using LDAP and Kerberos for the NSS directory and PAM authentication. It should also work out of the box in an Active Directory environment providing posixAccount and posixGroup objects with UID and GID values.

If you want to help out with implementing these things for Debian Edu, please contact us on debian-edu@lists.debian.org.

Tags: debian edu, english, nuug.
Testing if a file system can be used for home directories...
8th August 2010

A few years ago, I was involved in a project planning to use Windows file servers as home directory servers for Debian Edu/Skolelinux machines. This was thought to be no problem, as the access would be through the SMB network file system protocol, and we knew other sites used SMB with unix and samba as the file server to mount home directories without any problems. But, after months of struggling, we had to conclude that our goal was impossible.

The reason is simply that while SMB can be used for home directories when the file server is Samba running on Unix, this only works because Samba has some extensions and because the underlying file system is a Unix file system. When using a Windows file server, the underlying file system does not have POSIX semantics, and several programs will fail if the user's home directory, where they want to store their configuration, lacks POSIX semantics.

As part of this work, I wrote a small C program I want to share with you all, to replicate a few of the problematic applications (like OpenOffice.org and GCompris) and see if the file system was working as it should. If you find yourself in spooky file system land, it might help you find your way out again. This is the fs-test.c source:

/*
 * Some tests to check the file system semantics.  Used to verify that
 * CIFS from a windows server do not work properly as a linux home
 * directory.
 * License: GPL v2 or later
 * 
 * needs libsqlite3-dev and build-essential installed
 * compile with: gcc -Wall -lsqlite3 -DTEST_SQLITE fs-test.c -o fs-test
*/

#define _FILE_OFFSET_BITS 64
#define _LARGEFILE_SOURCE 1
#define _LARGEFILE64_SOURCE 1

#define _GNU_SOURCE /* for asprintf() */

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#ifdef TEST_SQLITE
/*
 * Test sqlite open, as done by gcompris require the libsqlite3-dev
 * package and linking with -lsqlite3.  A more low level test is
 * below.
 * See also <URL: http://www.sqlite.org/faq.html#q5 >.
 */
#include <sqlite3.h>
#define CREATE_TABLE_USERS                                              \
  "CREATE TABLE users (user_id INT UNIQUE, login TEXT, lastname TEXT, firstname TEXT, birthdate TEXT, class_id INT ); "
int test_sqlite_open(void) {
  char *zErrMsg;
  char *name = "testsqlite.db";
  sqlite3 *db=NULL;
  unlink(name);
  int rc = sqlite3_open(name, &db);
  if( rc ){
    printf("error: sqlite open of %s failed: %s\n", name, sqlite3_errmsg(db));
    sqlite3_close(db);
    return -1;
  }

  /* create tables */
  rc = sqlite3_exec(db,CREATE_TABLE_USERS, NULL,  0, &zErrMsg);
  if( rc != SQLITE_OK ){
    printf("error: sqlite table create failed: %s\n", zErrMsg);
    sqlite3_close(db);
    return -1;
  }
  printf("info: sqlite worked\n");
  sqlite3_close(db);
  return 0;
}
#endif /* TEST_SQLITE */

/*
 * Demonstrate locking issue found in gcompris using sqlite3.  This
 * work with ext3, but not with cifs server on Windows 2003.  This is
 * done in the sqlite3 library.
 * See also
 * <URL:http://www.cygwin.com/ml/cygwin/2001-08/msg00854.html> and the
 * POSIX specification
 * <URL:http://www.opengroup.org/onlinepubs/009695399/functions/fcntl.html>.
 */
int test_gcompris_locking(void) {
  struct flock fl;
  char *name = "testsqlite.db";
  unlink(name);
  int fd = open(name, O_RDWR|O_CREAT|O_LARGEFILE, 0644);
  printf("info: testing fcntl locking\n");

  fl.l_whence = SEEK_SET;
  fl.l_pid    = getpid();
  printf("  Read-locking 1 byte from 1073741824");
  fl.l_start  = 1073741824;
  fl.l_len    = 1;
  fl.l_type   = F_RDLCK;
  if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");

  printf("  Read-locking 510 byte from 1073741826");
  fl.l_start  = 1073741826;
  fl.l_len    = 510;
  fl.l_type   = F_RDLCK;
  if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");

  printf("  Unlocking 1 byte from 1073741824");
  fl.l_start  = 1073741824;
  fl.l_len    = 1;
  fl.l_type   = F_UNLCK;
  if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");

  printf("  Write-locking 1 byte from 1073741824");
  fl.l_start  = 1073741824;
  fl.l_len    = 1;
  fl.l_type   = F_WRLCK;
  if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");

  printf("  Write-locking 510 byte from 1073741826");
  fl.l_start  = 1073741826;
  fl.l_len    = 510;
  if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");

  printf("  Unlocking 2 byte from 1073741824");
  fl.l_start  = 1073741824;
  fl.l_len    = 2;
  fl.l_type   = F_UNLCK;
  if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");

  close(fd);
  return 0;
}

/*
 * Test if permissions of freshly created directories allow entries
 * below them.  This was a problem with OpenOffice.org and gcompris.
 * Mounting with option 'sync' seem to solve this problem while
 * slowing down file operations.
 */
int test_subdirectory_creation(void) {
#define LEVELS 5
  char *path = strdup("test");
  char *dirs[LEVELS];
  int level;
  printf("info: testing subdirectory creation\n");
  for (level = 0; level < LEVELS; level++) {
    char *newpath = NULL;
    if (-1 == mkdir(path, 0777)) {
      printf("  error: Unable to create directory '%s': %s\n",
	     path, strerror(errno));
      break;
    }
    asprintf(&newpath, "%s/%s", path, "test");
    free(path);
    path = newpath;
  }
  return 0;
}

/*
 * Test if symlinks can be created.  This was a problem detected with
 * KDE.
 */
int test_symlinks(void) {
  printf("info: testing symlink creation\n");
  unlink("symlink");
  if (-1 == symlink("file", "symlink"))
    printf("  error: Unable to create symlink\n");
  return 0;
}

int main(int argc, char **argv) {
  printf("Testing POSIX/Unix semantics on file system\n");
  test_symlinks();
  test_subdirectory_creation();
#ifdef TEST_SQLITE
  test_sqlite_open();
#endif /* TEST_SQLITE */
  test_gcompris_locking();
  return 0;
}

When everything is working, it should print something like this:

Testing POSIX/Unix semantics on file system
info: testing symlink creation
info: testing subdirectory creation
info: sqlite worked
info: testing fcntl locking
  Read-locking 1 byte from 1073741824
  Read-locking 510 byte from 1073741826
  Unlocking 1 byte from 1073741824
  Write-locking 1 byte from 1073741824
  Write-locking 510 byte from 1073741826
  Unlocking 2 byte from 1073741824

I do not remember the exact details of the problems we saw, but one of them was with locking, where, if I remember correctly, POSIX allows a read-only lock to be upgraded to a read-write lock without unlocking the read-only lock first (while Windows does not). Another was a bug in the CIFS/SMB client implementation in the Linux kernel, where directory meta information would be wrong for a fraction of a second, making OpenOffice.org fail to create its deep directory tree because it was not allowed to create files in its freshly created directory.
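The same lock-upgrade pattern can be checked quickly from Python, on a file system where POSIX locking works. This is a rough sketch mirroring the fcntl calls in fs-test.c, not a replacement for it:

```python
# Quick check of POSIX lock upgrade: take a read lock and then a write
# lock on the same byte range without unlocking in between.  On a
# POSIX file system both calls succeed; on the problematic CIFS setups
# described above the upgrade may fail.
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
upgraded = False
try:
    # Read-lock 510 bytes at offset 1073741826, like fs-test.c does.
    fcntl.lockf(fd, fcntl.LOCK_SH, 510, 1073741826)
    # Upgrade straight to a write lock, without unlocking first.
    fcntl.lockf(fd, fcntl.LOCK_EX, 510, 1073741826)
    upgraded = True
finally:
    os.close(fd)
    os.unlink(path)

print("lock upgrade ok" if upgraded else "lock upgrade failed")
```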

Anyway, here is a nice tool for your tool box, may you never need it. :)

Update 2010-08-27: Michael Gebetsroither reported that he found the program so useful that he created a GIT repository for it at http://github.com/gebi/fs-test.

Tags: debian edu, english, nuug.
No hardcoded config on Debian Edu clients
9th August 2010

As reported earlier, the last few days I have looked at how Debian Edu clients are configured, and tried to get rid of all hardcoded configuration settings on the clients. I believe the work to be mostly done, and the clients seem to work just fine with dynamically generated configuration.

What is the point, you might ask? The point is to allow a Debian Edu desktop to integrate into an existing network infrastructure without any manual configuration.

This is what happens when installing a Debian Edu client here at the University of Oslo using PXE. With the PXE installation, I am asked for language (Norwegian Bokmål), locality (Norway) and keyboard layout (no-latin1), Debian Edu profile (Roaming Workstation), if I accept to reformat the hard drive (yes), if I want to submit info to popcon.debian.org (no) and root password (secret). After answering these questions, the installer goes ahead and does its thing, and after around 50 minutes it is done. I press enter to finish the installation, and the machine reboots into KDE. When the machine is ready and kdm asks for login information, I enter my university username and password, am told by kdm that a local home directory has been created and that I must log in again, and finally log in with the same username and password to the KDE 4.4 desktop. At no point during this process did it ask for university specific settings, and all the required configuration was dynamically detected using information fetched via DHCP and DNS. The roaming workstation is now ready for use.

How was this done, you might wonder? First of all, here is the list of things that need to be configured on the client to get it working properly out of the box:

(Hm, did I forget anything? Let me know if I did.)

The points marked (*) are not required to be able to use the machine, but are needed to provide central storage and to allow system administrators to track their machines. Since yesterday, everything but the sitesummary collector URL is dynamically discovered at boot and installation time in the svn version of Debian Edu.

The IP and DNS setup is fetched during boot using DHCP as usual. When a DHCP update arrives, the proxy setup is updated by looking for http://wpad/wpad.dat and using the content of this WPAD file to configure the http and ftp proxy in /etc/environment and /etc/apt/apt.conf. I decided to update the proxy setup using a DHCP hook to ensure that the client stops using the Debian Edu proxy when it is moved outside the Debian Edu network, and instead uses any local proxy present on the new network when it moves around.

The DNS names of the LDAP, Kerberos and syslog servers and related configuration are generated using DNS information at boot. First the installer looks for a host named ldap in the current DNS domain. If not found, it looks for _ldap._tcp SRV records in DNS instead. If an LDAP server is found, its root DSE entry is requested and the attributes namingContexts and defaultNamingContext are used to determine which LDAP base to use for NSS. If there are several namingContexts attributes and the defaultNamingContext is present, that LDAP subtree is used as the base. If defaultNamingContext is missing, the subtrees listed as namingContexts are searched in sequence for any object with class posixAccount or posixGroup, and the first one with such an object is used as the LDAP base. For Kerberos, a similar search is done by first looking for a host named kerberos, and then for the _kerberos._tcp SRV record. I've been unable to find a way to look up the Kerberos realm, so for this the upper case version of the current DNS domain is used.
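The base selection logic can be sketched as a pure function. The helper names are made up for illustration, and has_posix_objects stands in for the LDAP search for posixAccount or posixGroup objects below a candidate base; this is a sketch of the algorithm described above, not the actual Debian Edu code:

```python
# Sketch of the LDAP base selection: prefer defaultNamingContext from
# the root DSE, otherwise probe the namingContexts subtrees in order
# for posixAccount/posixGroup objects.  Illustration only.

def choose_ldap_base(naming_contexts, default_naming_context,
                     has_posix_objects):
    # Prefer defaultNamingContext when the root DSE provides it.
    if default_naming_context:
        return default_naming_context
    # Otherwise use the first namingContexts subtree that actually
    # contains posixAccount or posixGroup objects.
    for base in naming_contexts:
        if has_posix_objects(base):
            return base
    return None

contexts = ["cn=config", "dc=skole,dc=skolelinux,dc=no"]
base = choose_ldap_base(contexts, None,
                        lambda b: b.startswith("dc=skole"))
print(base)  # dc=skole,dc=skolelinux,dc=no
```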

For the syslog server, the hosts syslog and loghost are searched for, and the _syslog._udp SRV record is consulted if no such host is found. This algorithm works for both Debian Edu and the University of Oslo. A similar strategy would work for locating the sitesummary server, but it has not been implemented yet. I decided to fetch and save these settings during installation, to make sure that moving to a different network does not change the set of users allowed to log in nor the passwords required to log in. Usernames and passwords will be cached by sssd when the user logs in on the Debian Edu network, and will not change as the laptop moves around. For a non-roaming machine there is no caching, but given that it is supposed to stay in place, that should not matter much. Perhaps we should switch those to use sssd too?

The user's SMB mount point for the network home directory is located when the user logs in for the first time. The LDAP server is consulted to look for the user's LDAP object and the sambaHomePath attribute is used if found. If it isn't found, the home directory path fetched from NSS is used instead. Assuming the path is of the form /site/server/directory/username, the second part is looked up in DNS and used to generate an SMB URL of the form smb://server.domain/username. This algorithm works for both Debian Edu and the University of Oslo. Perhaps there are better attributes to use or a better algorithm that works for more sites, but this will do for now. :)
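
The path rewrite can be sketched like this (path_to_smb_url is a hypothetical helper; the DNS lookup of the server name is left out and the domain is passed in instead):

```shell
#!/bin/sh
# Hypothetical sketch: map /site/server/directory/username to
# smb://server.domain/username.  The DNS lookup of "server" is
# skipped here; the DNS domain is passed in as a second argument.
path_to_smb_url() {
    path="$1" domain="$2"
    server=$(echo "$path" | cut -d/ -f3)   # second path component
    user=$(basename "$path")
    echo "smb://$server.$domain/$user"
}
path_to_smb_url /site/files01/home/jdoe skole.example.org
# prints smb://files01.skole.example.org/jdoe
```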

This work should make it easier to integrate the Debian Edu clients into any LDAP/Kerberos infrastructure, and make the current setup even more flexible than before. I suspect it will also work for thin client servers, allowing one to easily set up LTSP and hook it into an existing network infrastructure, but I have not had time to test this yet.

If you want to help out with implementing these things for Debian Edu, please contact us on debian-edu@lists.debian.org.

Update 2010-08-09: Simon Farnsworth gave me a heads-up on how to detect Kerberos realm from DNS, by looking for _kerberos TXT entries before falling back to the upper case DNS domain name. Will have to implement it for Debian Edu. :)

Tags: debian edu, english, nuug.
Rob Weir: How to Crush Dissent
15th August 2010

I found that the notes from Rob Weir on how to crush dissent match my own thoughts on the matter quite well. Highly recommended for those wondering which road our society should go down. In my view we have been heading the wrong way for a long time.

Tags: english, lenker, nuug, personvern, sikkerhet.
Broken umask handling with sshfs
26th August 2010

My file system semantics program presented a few days ago is very useful for verifying that a file system can work as a unix home directory, and today I had to extend it a bit. I'm looking into alternatives for home directory access here at the University of Oslo, and one of the options is sshfs. My friend Finn-Arne mentioned a while back that they had used sshfs with Debian Edu, but stopped because of problems. I asked today what the problems were, and he mentioned that sshfs failed to handle umask properly. To detect the problem I wrote this addition to my fs testing script:

/* Headers needed for this snippet; O_LARGEFILE needs _GNU_SOURCE on Linux. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

mode_t touch_get_mode(const char *name, mode_t mode) {
  mode_t retval = 0;
  int fd = open(name, O_RDWR|O_CREAT|O_LARGEFILE, mode);
  if (-1 != fd) {
    unlink(name);
    struct stat statbuf;
    if (-1 != fstat(fd, &statbuf)) {
      retval = statbuf.st_mode & 0x1ff; /* 0x1ff == 0777, the permission bits */
    }
    close(fd);
  }
  return retval;
}

/* Try to detect problem discovered using sshfs */
int test_umask(void) {
  printf("info: testing umask effect on file creation\n");

  mode_t orig_umask = umask(000);
  mode_t newmode;
  if (0666 != (newmode = touch_get_mode("foobar", 0666))) {
    printf("  error: Wrong file mode %o when creating using mode 666 and umask 000\n",
           newmode);
  }
  umask(007);
  if (0660 != (newmode = touch_get_mode("foobar", 0666))) {
    printf("  error: Wrong file mode %o when creating using mode 666 and umask 007\n",
           newmode);
  }

  umask (orig_umask);
  return 0;
}

int main(int argc, char **argv) {
  [...]
  test_umask();
  return 0;
}

Sure enough. On NFS to a Netapp, I get this result:

Testing POSIX/Unix sematics on file system
info: testing symlink creation
info: testing subdirectory creation
info: testing fcntl locking
  Read-locking 1 byte from 1073741824
  Read-locking 510 byte from 1073741826
  Unlocking 1 byte from 1073741824
  Write-locking 1 byte from 1073741824
  Write-locking 510 byte from 1073741826
  Unlocking 2 byte from 1073741824
info: testing umask effect on file creation

When mounting the same directory using sshfs, I get this result:

Testing POSIX/Unix sematics on file system
info: testing symlink creation
info: testing subdirectory creation
info: testing fcntl locking
  Read-locking 1 byte from 1073741824
  Read-locking 510 byte from 1073741826
  Unlocking 1 byte from 1073741824
  Write-locking 1 byte from 1073741824
  Write-locking 510 byte from 1073741826
  Unlocking 2 byte from 1073741824
info: testing umask effect on file creation
  error: Wrong file mode 644 when creating using mode 666 and umask 000
  error: Wrong file mode 640 when creating using mode 666 and umask 007

So, I can conclude that sshfs is better than SMB to a Netapp or a Windows server, but not good enough to be used as a home directory.

Update 2010-08-26: Reported the issue in BTS report #594498

Update 2010-08-27: Michael Gebetsroither reported that he found the script so useful that he created a git repository for it at http://github.com/gebi/fs-test.

Tags: debian edu, english, nuug.
Broken hard link handling with sshfs
30th August 2010

Just got an email from Tobias Gruetzmacher as a followup on my previous post about sshfs. He reported another problem with sshfs: it fails to handle hard links properly. A simple way to spot this is to look at the . and .. entries in the directory tree. These should have a link count >1, but on sshfs the count is 1. I just tested to see what happens when trying to create a hard link, and this fails as well:

% ln foo bar
ln: creating hard link `bar' => `foo': Function not implemented
%

I have not yet found time to implement a test for this in my file system test code, but believe working hard links are useful to avoid surprising unix programs. Not as useful as working file locking and symlinks, which are required to get a working desktop, but useful nevertheless. :)
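
In the meantime, a quick shell check along the same lines might look like this (a sketch, not part of the fs-test program):

```shell
#!/bin/sh
# Sketch: check whether hard links work in a given directory.
check_hardlinks() {
    ( cd "$1" || exit 1
      tmp=$(mktemp linktest.XXXXXX)
      if ln "$tmp" "$tmp.link" 2>/dev/null; then
          # On a working file system the link count is now 2.
          echo "hard links: ok, link count is $(stat -c %h "$tmp")"
      else
          echo "hard links: not supported"
      fi
      rm -f "$tmp" "$tmp.link" )
}
check_hardlinks .
# on a normal local file system this prints: hard links: ok, link count is 2
```

Run it with the sshfs mount point as argument to reproduce the "Function not implemented" failure reported above.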

The latest version of the file system test code is available via git from http://github.com/gebi/fs-test

Tags: debian edu, english, nuug.
My first perl GUI application - controlling a Spykee robot
1st September 2010

This evening I made my first Perl GUI application. The last few days I have worked on a Perl module for controlling my recently acquired Spykee robots, and the module is now complete enough that it can at least be used to drive the robot around. It was time to figure out how to use it to create a GUI for driving the robot. I picked PerlQt as I have had positive experiences with the Qt API before, and spent a few minutes browsing the web for examples. Using Qt Designer seemed like a short cut, so I ended up designing the GUI in Qt Designer and compiling it into a perl program using the puic program from libqt-perl. Nothing fancy yet, but it has buttons to connect and drive around.

The perl module I have written provides an object oriented API for controlling the robot. Here is a small example of how to use it:

use Spykee;
my %robot;
Spykee::discover(sub {$robot{$_[0]} = $_[1]});
my $host = (keys %robot)[0];
my $spykee = Spykee->new();
$spykee->contact($host, "admin", "admin");
$spykee->left();
sleep 2;
$spykee->right();
sleep 2;
$spykee->forward();
sleep 2;
$spykee->back();
sleep 2;
$spykee->stop();

Thanks to the release of the source of the robot firmware, I could peek into the implementation at the other end to figure out how to implement the protocol used by the robot. I've implemented several of the commands the robot understands, but am still missing the camera support needed to control the robot remotely. First I want to implement support for uploading new firmware and configuring the wireless network, to make it possible to bootstrap a Spykee robot without the producer's Windows and MacOSX software (I only have Linux, so I had to ask a friend to come over to get the robot testing going. :)

Will release the source to the public soon, but need to figure out where to make it available first. I will add a link to the NUUG wiki for those that want to check back later to find it.

Tags: english, nuug, robot.
Some notes on Flash in Debian and Debian Edu
4th September 2010

In the Debian popularity-contest numbers, the adobe-flashplugin package is the second most popular package that is missing in Debian. The sixth most popular is flashplayer-mozilla. This is a clear indication that working flash is important for Debian users. Around 10 percent of the users submitting data to popcon.debian.org have this package installed.

In the report written by Lars Risan in August 2008 («Skolelinux i bruk – Rapport for Hurum kommune, Universitetet i Agder og stiftelsen SLX Debian Labs»), one of the most important problems schools experienced with Debian Edu/Skolelinux was the lack of working Flash. A lot of educational web sites require Flash to work, and lacking working Flash support in the web browser and the problems with installing it was perceived as a good reason to stay with Windows.

I once saw a funny and sad comment in a web forum, where Linux was said to be the retarded cousin that did not really understand everything you told him but could work fairly well. This was a comment regarding the problems Linux has with proprietary formats and non-standard web pages. It is sad because it exposes a fairly common understanding of whose fault it is when web pages that only work in, for example, Internet Explorer 6 fail to work in Firefox, and funny because it explains very well how annoying it is for users when Linux distributions do not work with the documents they receive or the web pages they want to visit.

This is part of the reason why I believe it is important for Debian and Debian Edu to have a well working Flash implementation in the distribution, to get at least popular sites such as YouTube and Google Video working out of the box. For Squeeze, Debian has the chance to include the latest version of Gnash that will make this happen, as the new release 0.8.8 was published a few weeks ago and is resting in unstable. The new version works with more sites than version 0.8.7. The Gnash maintainers have asked for a freeze exception, but the release team has not had time to reply yet. I hope they agree with me that Flash is important for Debian desktop users, and thus accept the new package into Squeeze.

Tags: debian, debian edu, english, multimedia, video, web.
Terms of use for video produced by a Canon IXUS 130 digital camera
9th September 2010

A few days ago I had the mixed pleasure of buying a new digital camera, a Canon IXUS 130. It was instructive and very disturbing to be able to verify that this camera producer too has the nerve to specify how I can and cannot use the videos produced with the camera. Even though I was aware of the issue, the options with new cameras are limited, and I ended up buying the camera anyway. What is the problem, you might ask? It is software patents, MPEG-4, H.264 and the MPEG-LA that are the problem, and our right to record our experiences without asking for permission that is at risk.

On page 27 of the Danish instruction manual, this section is written:

This product is licensed under AT&T patents for the MPEG-4 standard and may be used for encoding MPEG-4 compliant video and/or decoding MPEG-4 compliant video that was encoded only (1) for a personal and non-commercial purpose or (2) by a video provider licensed under the AT&T patents to provide MPEG-4 compliant video.

No license is granted or implied for any other use for MPEG-4 standard.

In short, the camera producer has chosen technology (MPEG-4/H.264) that I am only allowed to use for personal and non-commercial purposes, unless I ask for permission from the organisations holding the knowledge monopoly (patent) on the technology used.

This issue has been brewing for a while, and I recommend you read "Why Our Civilization's Video Art and Culture is Threatened by the MPEG-LA" by Eugenia Loli-Queru and "H.264 Is Not The Sort Of Free That Matters" by Simon Phipps to learn more about the issue. The solution is to support the free and open standards for video, like Ogg Theora, and avoid MPEG-4 and H.264 if you can.

Tags: digistan, english, fildeling, multimedia, nuug, opphavsrett, personvern, standard, video, web.
Links for 2010-10-03
3rd October 2010

Tags: english, lenker, nuug.
First version of a Perl library to control the Spykee robot
9th October 2010

This summer I got the chance to buy cheap Spykee robots, and since then I have worked on getting Linux software in place to control them. The firmware for the robot is available from the producer, and using that source it was trivial to figure out the protocol specification. I've started on a perl library to control the robot, and made some demo programs using this library to control it.

The library is quite functional already, capable of controlling the driving, fetching video, and uploading and playing MP3s. There are a few less important features too.

A few weeks ago I ran out of time to spend on this project, and I never got around to releasing the current source. Today I decided it was time to do something about it, and uploaded the source to my Debian package store at people.skolelinux.org.

Because it was simpler for me, I made a Debian package and published both the source and the deb. If you have a Spykee robot, grab the source or binary package:

If you are interested in helping out with developing this library, please let me know.

Tags: english, nuug, robot.
Pledge for funding to the Gnash project to get AVM2 support
19th October 2010

The Gnash project is the most promising solution for a Free Software Flash implementation. It has done great so far, but there is still a long way to go, and recently its funding has dried up. I believe AVM2 support in Gnash is vital to the continued progress of the project, as more and more sites show up with AVM2 flash files.

To try to get funding for developing such support, I have started a pledge with the following text:

"I will pay 100$ to the Gnash project to develop AVM2 support but only if 10 other people will do the same."

- Petter Reinholdtsen, free software developer

Deadline to sign up by: 24th December 2010

The Gnash project needs to get support for the new Flash file format AVM2 to work with a lot of sites using Flash on the web. Gnash already works with a lot of Flash sites using the old AVM1 format, but more and more sites are using the AVM2 format these days. The project web page is available from http://www.getgnash.org/ . Gnash is a free software implementation of Adobe Flash, allowing those of us who do not accept the terms of the Adobe Flash license to get access to Flash sites.

The project needs funding to get developers to put aside enough time to develop the AVM2 support, and this pledge is my way of trying to get this to happen.

The project accepts donations via the OpenMediaNow foundation, http://www.openmedianow.org/?q=node/32 .

I hope you will support this effort too. I hope more than 10 people will participate to make this happen. The more money the project gets, the more features it can develop using these funds. :)

Tags: english, multimedia, nuug, video, web.
Software updates 2010-10-24
24th October 2010

Some updates.

My gnash pledge to raise money for the project is going well. The lower limit of 10 signers was reached in 24 hours, and so far 13 people have signed it. More signers and more funding are most welcome, and I am really curious how far we can get before the deadline of December 24 is reached. :)

On the #gnash IRC channel on irc.freenode.net, I was just tipped about what appears to be a great code coverage tool capable of generating code coverage statistics without any changes to the source code. It is called kcov, and is invoked as kcov &lt;directory&gt; &lt;binary&gt;. It is missing in Debian, but the git source built just fine on Squeeze after I installed libelf-dev, libdwarf-dev, pkg-config and libglib2.0-dev. It failed to build on Lenny, but I suspect that is solvable. I hope kcov makes it into Debian soon.

Finally found time to wrap up the release notes for a new alpha release of Debian Edu, and just published the second alpha test release of the Squeeze based Debian Edu / Skolelinux release. Give it a try if you need a complete Linux solution for your school, including a central infrastructure server, workstations, thin client servers and diskless workstations. A nice touch added yesterday is RDP support on the thin client servers, allowing Windows clients to get a Linux desktop on request.

Tags: debian, debian edu, english, multimedia.
Making room on the Debian Edu/Squeeze DVD
7th November 2010

Prioritising packages for the Debian Edu / Skolelinux DVD, which is supposed to provide a school with all the services and user applications needed on the pupils' computer network, has always been hard. Even schools without Internet connections should be able to get Debian Edu working using this DVD.

The job became a lot harder when apt and aptitude started installing recommended packages by default. We want the same set of packages to be installed when using the DVD and the netinst CD, and that means all recommended packages need to be on the DVD. I created a patch for debian-cd in BTS report #601203 to do this, and since this change was applied to the Debian Edu DVD build, we have been seriously short on space.

A few days ago we decided to drop blender, wxmaxima and kicad from the default installation to save space on the DVD, believing that those needing these applications are few and can get them from the Debian archive.

Yesterday I had a look at the source packages to see which packages were using the most space. A few large packages are well known: openoffice.org, openclipart and fluid-soundfont. But I also discovered that lilypond used 106 MiB and fglrx-driver 53 MiB. The lilypond package is pulled in as a dependency of rosegarden, and when looking a bit closer I discovered that 99 of the 106 MiB came from the documentation package, which is recommended by the binary package. I decided to drop this documentation package from our DVD, as most of our users will use the GUI front-ends and do not need the lilypond documentation. Similarly, I dropped the non-free fglrx-driver package, which might be installed by d-i when its hardware is detected, as the free X driver should work.
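
Such a size survey can be done by sorting the Size fields in the archive's Packages index. A small sketch using a made-up sample index (the real numbers will differ):

```shell
#!/bin/sh
# Sketch: rank packages by their .deb size from an apt Packages index.
# A tiny hand-made sample is used here; point the function at a real
# Packages file from the mirror instead.
rank_by_size() {
    awk '/^Package:/ { pkg = $2 } /^Size:/ { print $2, pkg }' "$1" \
        | sort -rn | head -n 3
}

cat > /tmp/Packages.sample <<'EOF'
Package: lilypond-doc
Size: 104000000

Package: fglrx-driver
Size: 53000000

Package: rosegarden
Size: 9000000
EOF
rank_by_size /tmp/Packages.sample
# prints the three sample packages, largest first
```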

With this change, we finally got space for the LXDE and Gnome desktop packages as well as the language specific packages, making the DVD more useful again.

Tags: debian edu, english, nuug.
Debian in 3D
9th November 2010

3D printing is just great. I just came across this Debian logo in 3D linked in from the thingiverse blog.

Tags: 3d-printer, debian, english.
Gnash buildbot slave and Debian kfreebsd
20th November 2010

Answering the call from the Gnash project for buildbot slaves to test the current source, I have set up a virtual KVM machine on the Debian Edu/Skolelinux virtualization host to test the git source on Debian/Squeeze. I hope this can help the developers in getting new releases out more often.

As the developers want less mainstream build platforms tested too, I have considered setting up a Debian/kfreebsd machine as well. I have also considered using the kfreebsd architecture in Debian as a file server in NUUG to get access to the 5 TB ZFS volume we currently use to store DV video. Because of this, I finally got around to doing a test installation of Debian/Squeeze with kfreebsd. Installation went fairly smoothly, though I noticed some visual glitches in the cdebconf dialogs (black cursor left on the screen at random locations). Have not gotten very far with the testing. Noticed cfdisk did not work, but fdisk did, so it was not a fatal problem. Have to spend some more time on it to see if it is useful as a file server for NUUG. Will try to find time to set up a gnash buildbot slave on the Debian Edu/Skolelinux host this weekend.

Tags: debian, debian edu, english, nuug.
Lenny->Squeeze upgrades, apt vs aptitude with the Gnome and KDE desktop
20th November 2010

I'm still running upgrade testing of the Lenny Gnome and KDE desktops, but have not had time to spend on reporting the status. Here is a short update based on a test I ran on 2010-11-18.

I still do not know what a correct migration should look like, so I report any differences between apt and aptitude and hope someone else can see if anything should be changed.
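
The comparison itself boils down to diffing the sorted package lists from the two upgraded systems, for example using comm (the sample lists here are made up; real lists can be produced with dpkg-query -W -f '${Package}\n' | sort):

```shell
#!/bin/sh
# Sketch: compare the package lists from an apt-get upgrade and an
# aptitude upgrade using comm.  The two sample lists are made up;
# comm needs its inputs sorted.
printf 'bash\ngnome\ntotem\n' > /tmp/apt-get.list
printf 'bash\ngnome\n'        > /tmp/aptitude.list

echo "Installed using apt-get, missing with aptitude:"
comm -23 /tmp/apt-get.list /tmp/aptitude.list
echo "Installed using aptitude, missing with apt-get:"
comm -13 /tmp/apt-get.list /tmp/aptitude.list
```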

This is for Gnome:

Installed using apt-get, missing with aptitude

apache2.2-bin aptdaemon at-spi baobab binfmt-support browser-plugin-gnash cheese-common cli-common cpp-4.3 cups-pk-helper dmz-cursor-theme empathy empathy-common finger freedesktop-sound-theme freeglut3 gconf-defaults-service gdm-themes gedit-plugins geoclue geoclue-hostip geoclue-localnet geoclue-manual geoclue-yahoo gnash gnash-common gnome gnome-backgrounds gnome-cards-data gnome-codec-install gnome-core gnome-desktop-environment gnome-disk-utility gnome-screenshot gnome-search-tool gnome-session-canberra gnome-spell gnome-system-log gnome-themes-extras gnome-themes-more gnome-user-share gs-common gstreamer0.10-fluendo-mp3 gstreamer0.10-tools gtk2-engines gtk2-engines-pixbuf gtk2-engines-smooth hal-info hamster-applet libapache2-mod-dnssd libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libart2.0-cil libatspi1.0-0 libboost-date-time1.42.0 libboost-python1.42.0 libboost-thread1.42.0 libchamplain-0.4-0 libchamplain-gtk-0.4-0 libcheese-gtk18 libclutter-gtk-0.10-0 libcryptui0 libcupsys2 libdiscid0 libeel2-data libelf1 libepc-1.0-2 libepc-common libepc-ui-1.0-2 libfreerdp-plugins-standard libfreerdp0 libgail-common libgconf2.0-cil libgdata-common libgdata7 libgdl-1-common libgdu-gtk0 libgee2 libgeoclue0 libgexiv2-0 libgif4 libglade2.0-cil libglib2.0-cil libgmime2.4-cil libgnome-vfs2.0-cil libgnome2.24-cil libgnomepanel2.24-cil libgnomeprint2.2-data libgnomeprintui2.2-common libgnomevfs2-bin libgpod-common libgpod4 libgtk2.0-cil libgtkglext1 libgtksourceview-common libgtksourceview2.0-common libmono-addins-gui0.2-cil libmono-addins0.2-cil libmono-cairo2.0-cil libmono-corlib2.0-cil libmono-i18n-west2.0-cil libmono-posix2.0-cil libmono-security2.0-cil libmono-sharpzip2.84-cil libmono-system2.0-cil libmtp8 libmusicbrainz3-6 libndesk-dbus-glib1.0-cil libndesk-dbus1.0-cil libopal3.6.8 libpolkit-gtk-1-0 libpt-1.10.10-plugins-alsa libpt-1.10.10-plugins-v4l libpt2.6.7 libpython2.6 librpm1 librpmio1 libsdl1.2debian libservlet2.4-java libsrtp0 libssh-4 
libtelepathy-farsight0 libtelepathy-glib0 libtidy-0.99-0 libxalan2-java libxerces2-java media-player-info mesa-utils mono-2.0-gac mono-gac mono-runtime nautilus-sendto nautilus-sendto-empathy openoffice.org-writer2latex openssl-blacklist p7zip p7zip-full pkg-config python-4suite-xml python-aptdaemon python-aptdaemon-gtk python-axiom python-beautifulsoup python-bugbuddy python-clientform python-coherence python-configobj python-crypto python-cupshelpers python-cupsutils python-eggtrayicon python-elementtree python-epsilon python-evolution python-feedparser python-gdata python-gdbm python-gst0.10 python-gtkglext1 python-gtkmozembed python-gtksourceview2 python-httplib2 python-louie python-mako python-markupsafe python-mechanize python-nevow python-notify python-opengl python-openssl python-pam python-pkg-resources python-pyasn1 python-pysqlite2 python-rdflib python-serial python-tagpy python-twisted-bin python-twisted-conch python-twisted-core python-twisted-web python-utidylib python-webkit python-xdg python-zope.interface remmina remmina-plugin-data remmina-plugin-rdp remmina-plugin-vnc rhythmbox-plugin-cdrecorder rhythmbox-plugins rpm-common rpm2cpio seahorse-plugins shotwell software-center svgalibg1 system-config-printer-udev telepathy-gabble telepathy-mission-control-5 telepathy-salut tomboy totem totem-coherence totem-mozilla totem-plugins transmission-common xdg-user-dirs xdg-user-dirs-gtk xserver-xephyr zip

Installed using apt-get, removed with aptitude

arj bluez-utils cheese dhcdbd djvulibre-desktop ekiga eog epiphany-extensions epiphany-gecko evolution-exchange fast-user-switch-applet file-roller gcalctool gconf-editor gdm gedit gedit-common gnome-app-install gnome-games gnome-games-data gnome-nettool gnome-system-tools gnome-themes gnome-utils gnome-vfs-obexftp gnome-volume-manager gnuchess gucharmap guile-1.8-libs hal libavahi-compat-libdnssd1 libavahi-core5 libavahi-ui0 libbind9-50 libbluetooth2 libcamel1.2-11 libcdio7 libcucul0 libcurl3 libdirectfb-1.0-0 libdmx1 libdvdread3 libedata-cal1.2-6 libedataserver1.2-9 libeel2-2.20 libepc-1.0-1 libepc-ui-1.0-1 libexchange-storage1.2-3 libfaad0 libgadu3 libgalago3 libgd2-noxpm libgda3-3 libgda3-common libggz2 libggzcore9 libggzmod4 libgksu1.2-0 libgksuui1.0-1 libgmyth0 libgnome-desktop-2 libgnome-pilot2 libgnomecups1.0-1 libgnomeprint2.2-0 libgnomeprintui2.2-0 libgpod3 libgraphviz4 libgtk-vnc-1.0-0 libgtkhtml2-0 libgtksourceview1.0-0 libgtksourceview2.0-0 libgucharmap6 libhesiod0 libicu38 libisccc50 libisccfg50 libiw29 libjaxp1.3-java-gcj libkpathsea4 liblircclient0 libltdl3 liblwres50 libmagick++10 libmagick10 libmalaga7 libmozjs1d libmpfr1ldbl libmtp7 libmysqlclient15off libnautilus-burn4 libneon27 libnm-glib0 libnm-util0 libopal-2.2 libosp5 libparted1.8-10 libpisock9 libpisync1 libpoppler-glib3 libpoppler3 libpt-1.10.10 libraw1394-8 libsdl1.2debian-alsa libsensors3 libsexy2 libsmbios2 libsoup2.2-8 libspeexdsp1 libssh2-1 libsuitesparse-3.1.0 libsvga1 libswfdec-0.6-90 libtalloc1 libtotem-plparser10 libtrackerclient0 libvoikko1 libxalan2-java-gcj libxerces2-java-gcj libxklavier12 libxtrap6 libxxf86misc1 libzephyr3 mysql-common rhythmbox seahorse sound-juicer swfdec-gnome system-config-printer totem-common totem-gstreamer transmission-gtk vinagre vino w3c-dtd-xhtml wodim

Installed using aptitude, missing with apt-get

gstreamer0.10-gnomevfs

Installed using aptitude, removed with apt-get

[nothing]

This is for KDE:

Installed using apt-get, missing with aptitude

autopoint bomber bovo cantor cantor-backend-kalgebra cpp-4.3 dcoprss edict espeak espeak-data eyesapplet fifteenapplet finger gettext ghostscript-x git gnome-audio gnugo granatier gs-common gstreamer0.10-pulseaudio indi kaddressbook-plugins kalgebra kalzium-data kanjidic kapman kate-plugins kblocks kbreakout kbstate kde-icons-mono kdeaccessibility kdeaddons-kfile-plugins kdeadmin-kfile-plugins kdeartwork-misc kdeartwork-theme-window kdeedu kdeedu-data kdeedu-kvtml-data kdegames kdegames-card-data kdegames-mahjongg-data kdegraphics-kfile-plugins kdelirc kdemultimedia-kfile-plugins kdenetwork-kfile-plugins kdepim-kfile-plugins kdepim-kio-plugins kdessh kdetoys kdewebdev kdiamond kdnssd kfilereplace kfourinline kgeography-data kigo killbots kiriki klettres-data kmoon kmrml knewsticker-scripts kollision kpf krosspython ksirk ksmserver ksquares kstars-data ksudoku kubrick kweather libasound2-plugins libboost-python1.42.0 libcfitsio3 libconvert-binhex-perl libcrypt-ssleay-perl libdb4.6++ libdjvulibre-text libdotconf1.0 liberror-perl libespeak1 libfinance-quote-perl libgail-common libgsl0ldbl libhtml-parser-perl libhtml-tableextract-perl libhtml-tagset-perl libhtml-tree-perl libio-stringy-perl libkdeedu4 libkdegames5 libkiten4 libkpathsea5 libkrossui4 libmailtools-perl libmime-tools-perl libnews-nntpclient-perl libopenbabel3 libportaudio2 libpulse-browse0 libservlet2.4-java libspeechd2 libtiff-tools libtimedate-perl libunistring0 liburi-perl libwww-perl libxalan2-java libxerces2-java lirc luatex marble networkstatus noatun-plugins openoffice.org-writer2latex palapeli palapeli-data parley parley-data poster psutils pulseaudio pulseaudio-esound-compat pulseaudio-module-x11 pulseaudio-utils quanta-data rocs rsync speech-dispatcher step svgalibg1 texlive-binaries texlive-luatex ttf-sazanami-gothic

Installed using apt-get, removed with aptitude

amor artsbuilder atlantik atlantikdesigner blinken bluez-utils cvs dhcdbd djvulibre-desktop imlib-base imlib11 kalzium kanagram kandy kasteroids katomic kbackgammon kbattleship kblackbox kbounce kbruch kcron kdat kdemultimedia-kappfinder-data kdeprint kdict kdvi kedit keduca kenolaba kfax kfaxview kfouleggs kgeography kghostview kgoldrunner khangman khexedit kiconedit kig kimagemapeditor kitchensync kiten kjumpingcube klatin klettres klickety klines klinkstatus kmag kmahjongg kmailcvt kmenuedit kmid kmilo kmines kmousetool kmouth kmplot knetwalk kodo kolf kommander konquest kooka kpager kpat kpdf kpercentage kpilot kpoker kpovmodeler krec kregexpeditor kreversi ksame ksayit kshisen ksig ksim ksirc ksirtet ksmiletris ksnake ksokoban kspaceduel kstars ksvg ksysv kteatime ktip ktnef ktouch ktron kttsd ktuberling kturtle ktux kuickshow kverbos kview kviewshell kvoctrain kwifimanager kwin kwin4 kwordquiz kworldclock kxsldbg libakode2 libarts1-akode libarts1-audiofile libarts1-mpeglib libarts1-xine libavahi-compat-libdnssd1 libavahi-core5 libavc1394-0 libbind9-50 libbluetooth2 libboost-python1.34.1 libcucul0 libcurl3 libcvsservice0 libdirectfb-1.0-0 libdjvulibre21 libdvdread3 libfaad0 libfreebob0 libgd2-noxpm libgraphviz4 libgsmme1c2a libgtkhtml2-0 libicu38 libiec61883-0 libindex0 libisccc50 libisccfg50 libiw29 libjaxp1.3-java-gcj libk3b3 libkcal2b libkcddb1 libkdeedu3 libkdegames1 libkdepim1a libkgantt0 libkleopatra1 libkmime2 libkpathsea4 libkpimexchange1 libkpimidentities1 libkscan1 libksieve0 libktnef1 liblockdev1 libltdl3 liblwres50 libmagick10 libmimelib1c2a libmodplug0c2 libmozjs1d libmpcdec3 libmpfr1ldbl libneon27 libnm-util0 libopensync0 libpisock9 libpoppler-glib3 libpoppler-qt2 libpoppler3 libraw1394-8 librss1 libsensors3 libsmbios2 libssh2-1 libsuitesparse-3.1.0 libswfdec-0.6-90 libtalloc1 libxalan2-java-gcj libxerces2-java-gcj libxtrap6 lskat mpeglib network-manager-kde noatun pmount tex-common texlive-base texlive-common texlive-doc-base 
texlive-fonts-recommended tidy ttf-dustin ttf-kochi-gothic ttf-sjfonts

Installed using aptitude, missing with apt-get

dolphin kde-core kde-plasma-desktop kde-standard kde-window-manager kdeartwork kdebase kdebase-apps kdebase-workspace kdebase-workspace-bin kdebase-workspace-data kdeutils kscreensaver kscreensaver-xsavers libgle3 libkonq5 libkonq5-templates libnetpbm10 netpbm plasma-widget-folderview plasma-widget-networkmanagement xscreensaver-data-extra xscreensaver-gl xscreensaver-gl-extra xscreensaver-screensaver-bsod

Installed using aptitude, removed with apt-get

kdebase-bin konq-plugins konqueror

Tags: debian, debian edu, english.
Migrating Xen virtual machines using LVM to KVM using disk images
22nd November 2010

Most of the computers in use by the Debian Edu/Skolelinux project are virtual machines. They have been Xen machines running on a fairly old IBM eServer xSeries 345, and we wanted to migrate them to KVM on a newer Dell PowerEdge 2950 host machine. This was a bit harder than it could have been, because we had set up the Xen virtual machines to get their virtual partitions from LVM, which as far as I know is not supported by KVM. So to migrate, we had to convert several LVM logical volumes to partitions on a virtual disk file.

I found a nice recipe to do this, and wrote the following script to do the migration. It uses qemu-img from the qemu package to make the disk image, parted to partition it, losetup and kpartx to present the disk image partitions as devices, and dd to copy the data. I NFS mounted the new server's storage area on the old server to do the migration.

#!/bin/sh

# Based on
# http://searchnetworking.techtarget.com.au/articles/35011-Six-steps-for-migrating-Xen-virtual-machines-to-KVM

set -e
set -x

if [ -z "$1" ] ; then
    echo "Usage: $0 <hostname>"
    exit 1
else
    host="$1"
fi

if [ ! -e /dev/vg_data/$host-disk ] ; then
    echo "error: unable to find LVM volume for $host"
    exit 1
fi

# Partitions need to be a bit bigger than the LVM LVs.  Not sure why.
disksize=$( lvs --units m | grep $host-disk | awk '{sum = sum + $4} END { print int(sum * 1.05) }')
swapsize=$( lvs --units m | grep $host-swap | awk '{sum = sum + $4} END { print int(sum * 1.05) }')
totalsize=$(( ( $disksize + $swapsize ) ))

img=$host.img
#dd if=/dev/zero of=$img bs=1M count=$(( $disksize + $swapsize ))
qemu-img create $img ${totalsize}M

parted $img mklabel msdos
parted $img mkpart primary linux-swap 0 $disksize
parted $img mkpart primary ext2 $disksize $totalsize
parted $img set 1 boot on

modprobe dm-mod
losetup /dev/loop0 $img
kpartx -a /dev/loop0

dd if=/dev/vg_data/$host-disk of=/dev/mapper/loop0p1 bs=1M
fsck.ext3 -f /dev/mapper/loop0p1 || true
mkswap /dev/mapper/loop0p2

kpartx -d /dev/loop0
losetup -d /dev/loop0

The script is perhaps so simple that it is not copyrightable, but if it is, it is licensed under GPL v2 or later, at your discretion.
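The size calculation at the top of the script can be tried on its own. Here is a small sketch feeding the same awk expression some canned output in the shape produced by lvs --units m; the volume names and sizes below are made up for illustration:

```shell
# Canned output in the shape of "lvs --units m"; names and sizes
# are made up for illustration.
lvs_output="  tjener-disk vg_data -wi-ao 10000.00m
  tjener-swap vg_data -wi-ao 1024.00m"

# Same awk expression as in the script: sum the sizes (awk reads the
# numeric prefix of "10000.00m") and add 5% headroom.
disksize=$(printf '%s\n' "$lvs_output" | grep tjener-disk |
    awk '{sum = sum + $4} END { print int(sum * 1.05) }')
swapsize=$(printf '%s\n' "$lvs_output" | grep tjener-swap |
    awk '{sum = sum + $4} END { print int(sum * 1.05) }')
totalsize=$(( disksize + swapsize ))
echo "disk=${disksize}M swap=${swapsize}M total=${totalsize}M"
```

With these made-up numbers the result is a 10500M disk partition and a 1075M swap partition, which is what the parted calls then carve out of the image.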

After doing this, I booted a Debian CD in rescue mode in KVM with the new disk image attached, installed grub-pc and linux-image-686 and set up grub to boot from the disk image. After this, the KVM machines seem to work just fine.

Tags: debian, debian edu, english.
Lenny->Squeeze upgrades of the Gnome and KDE desktop, now with apt-get autoremove
22nd November 2010

Michael Biebl suggested to me on IRC that I change my automated upgrade testing of the Lenny Gnome and KDE desktops to run apt-get autoremove when using apt-get. This seemed like a very good idea, so I adjusted my test scripts and can now present the updated results from today:

This is for Gnome:

Installed using apt-get, missing with aptitude

apache2.2-bin aptdaemon baobab binfmt-support browser-plugin-gnash cheese-common cli-common cups-pk-helper dmz-cursor-theme empathy empathy-common freedesktop-sound-theme freeglut3 gconf-defaults-service gdm-themes gedit-plugins geoclue geoclue-hostip geoclue-localnet geoclue-manual geoclue-yahoo gnash gnash-common gnome gnome-backgrounds gnome-cards-data gnome-codec-install gnome-core gnome-desktop-environment gnome-disk-utility gnome-screenshot gnome-search-tool gnome-session-canberra gnome-system-log gnome-themes-extras gnome-themes-more gnome-user-share gstreamer0.10-fluendo-mp3 gstreamer0.10-tools gtk2-engines gtk2-engines-pixbuf gtk2-engines-smooth hamster-applet libapache2-mod-dnssd libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libart2.0-cil libboost-date-time1.42.0 libboost-python1.42.0 libboost-thread1.42.0 libchamplain-0.4-0 libchamplain-gtk-0.4-0 libcheese-gtk18 libclutter-gtk-0.10-0 libcryptui0 libdiscid0 libelf1 libepc-1.0-2 libepc-common libepc-ui-1.0-2 libfreerdp-plugins-standard libfreerdp0 libgconf2.0-cil libgdata-common libgdata7 libgdu-gtk0 libgee2 libgeoclue0 libgexiv2-0 libgif4 libglade2.0-cil libglib2.0-cil libgmime2.4-cil libgnome-vfs2.0-cil libgnome2.24-cil libgnomepanel2.24-cil libgpod-common libgpod4 libgtk2.0-cil libgtkglext1 libgtksourceview2.0-common libmono-addins-gui0.2-cil libmono-addins0.2-cil libmono-cairo2.0-cil libmono-corlib2.0-cil libmono-i18n-west2.0-cil libmono-posix2.0-cil libmono-security2.0-cil libmono-sharpzip2.84-cil libmono-system2.0-cil libmtp8 libmusicbrainz3-6 libndesk-dbus-glib1.0-cil libndesk-dbus1.0-cil libopal3.6.8 libpolkit-gtk-1-0 libpt2.6.7 libpython2.6 librpm1 librpmio1 libsdl1.2debian libsrtp0 libssh-4 libtelepathy-farsight0 libtelepathy-glib0 libtidy-0.99-0 media-player-info mesa-utils mono-2.0-gac mono-gac mono-runtime nautilus-sendto nautilus-sendto-empathy p7zip-full pkg-config python-aptdaemon python-aptdaemon-gtk python-axiom python-beautifulsoup python-bugbuddy python-clientform 
python-coherence python-configobj python-crypto python-cupshelpers python-elementtree python-epsilon python-evolution python-feedparser python-gdata python-gdbm python-gst0.10 python-gtkglext1 python-gtksourceview2 python-httplib2 python-louie python-mako python-markupsafe python-mechanize python-nevow python-notify python-opengl python-openssl python-pam python-pkg-resources python-pyasn1 python-pysqlite2 python-rdflib python-serial python-tagpy python-twisted-bin python-twisted-conch python-twisted-core python-twisted-web python-utidylib python-webkit python-xdg python-zope.interface remmina remmina-plugin-data remmina-plugin-rdp remmina-plugin-vnc rhythmbox-plugin-cdrecorder rhythmbox-plugins rpm-common rpm2cpio seahorse-plugins shotwell software-center system-config-printer-udev telepathy-gabble telepathy-mission-control-5 telepathy-salut tomboy totem totem-coherence totem-mozilla totem-plugins transmission-common xdg-user-dirs xdg-user-dirs-gtk xserver-xephyr

Installed using apt-get, removed with aptitude

cheese ekiga eog epiphany-extensions evolution-exchange fast-user-switch-applet file-roller gcalctool gconf-editor gdm gedit gedit-common gnome-games gnome-games-data gnome-nettool gnome-system-tools gnome-themes gnuchess gucharmap guile-1.8-libs libavahi-ui0 libdmx1 libgalago3 libgtk-vnc-1.0-0 libgtksourceview2.0-0 liblircclient0 libsdl1.2debian-alsa libspeexdsp1 libsvga1 rhythmbox seahorse sound-juicer system-config-printer totem-common transmission-gtk vinagre vino

Installed using aptitude, missing with apt-get

gstreamer0.10-gnomevfs

Installed using aptitude, removed with apt-get

[nothing]

This is for KDE:

Installed using apt-get, missing with aptitude

ksmserver

Installed using apt-get, removed with aptitude

kwin network-manager-kde

Installed using aptitude, missing with apt-get

arts dolphin freespacenotifier google-gadgets-gst google-gadgets-xul kappfinder kcalc kcharselect kde-core kde-plasma-desktop kde-standard kde-window-manager kdeartwork kdeartwork-emoticons kdeartwork-style kdeartwork-theme-icon kdebase kdebase-apps kdebase-workspace kdebase-workspace-bin kdebase-workspace-data kdeeject kdelibs kdeplasma-addons kdeutils kdewallpapers kdf kfloppy kgpg khelpcenter4 kinfocenter konq-plugins-l10n konqueror-nsplugins kscreensaver kscreensaver-xsavers ktimer kwrite libgle3 libkde4-ruby1.8 libkonq5 libkonq5-templates libnetpbm10 libplasma-ruby libplasma-ruby1.8 libqt4-ruby1.8 marble-data marble-plugins netpbm nuvola-icon-theme plasma-dataengines-workspace plasma-desktop plasma-desktopthemes-artwork plasma-runners-addons plasma-scriptengine-googlegadgets plasma-scriptengine-python plasma-scriptengine-qedje plasma-scriptengine-ruby plasma-scriptengine-webkit plasma-scriptengines plasma-wallpapers-addons plasma-widget-folderview plasma-widget-networkmanagement ruby sweeper update-notifier-kde xscreensaver-data-extra xscreensaver-gl xscreensaver-gl-extra xscreensaver-screensaver-bsod

Installed using aptitude, removed with apt-get

ark google-gadgets-common google-gadgets-qt htdig kate kdebase-bin kdebase-data kdepasswd kfind klipper konq-plugins konqueror ksysguard ksysguardd libarchive1 libcln6 libeet1 libeina-svn-06 libggadget-1.0-0b libggadget-qt-1.0-0b libgps19 libkdecorations4 libkephal4 libkonq4 libkonqsidebarplugin4a libkscreensaver5 libksgrd4 libksignalplotter4 libkunitconversion4 libkwineffects1a libmarblewidget4 libntrack-qt4-1 libntrack0 libplasma-geolocation-interface4 libplasmaclock4a libplasmagenericshell4 libprocesscore4a libprocessui4a libqalculate5 libqedje0a libqtruby4shared2 libqzion0a libruby1.8 libscim8c2a libsmokekdecore4-3 libsmokekdeui4-3 libsmokekfile3 libsmokekhtml3 libsmokekio3 libsmokeknewstuff2-3 libsmokeknewstuff3-3 libsmokekparts3 libsmokektexteditor3 libsmokekutils3 libsmokenepomuk3 libsmokephonon3 libsmokeplasma3 libsmokeqtcore4-3 libsmokeqtdbus4-3 libsmokeqtgui4-3 libsmokeqtnetwork4-3 libsmokeqtopengl4-3 libsmokeqtscript4-3 libsmokeqtsql4-3 libsmokeqtsvg4-3 libsmokeqttest4-3 libsmokeqtuitools4-3 libsmokeqtwebkit4-3 libsmokeqtxml4-3 libsmokesolid3 libsmokesoprano3 libtaskmanager4a libtidy-0.99-0 libweather-ion4a libxklavier16 libxxf86misc1 okteta oxygencursors plasma-dataengines-addons plasma-scriptengine-superkaramba plasma-widget-lancelot plasma-widgets-addons plasma-widgets-workspace polkit-kde-1 ruby1.8 systemsettings update-notifier-common

Running apt-get autoremove made the results using apt-get and aptitude a bit more similar, but there are still quite a lot of differences. I have no idea which packages should be installed after the upgrade, but hope those who do can have a look.
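The comparison behind these lists boils down to diffing two sorted package lists with comm(1). A minimal sketch with made-up package names; in the real test runs the lists would come from something like dpkg-query -W -f '${Package}\n' | sort after each upgrade:

```shell
# Two sorted lists of installed packages, one per upgrade method.
# The package names here are made up for illustration.
printf 'bar\nfoo\n' > pkgs-apt-get.txt
printf 'baz\nfoo\n' > pkgs-aptitude.txt

# Installed using apt-get, missing with aptitude:
only_apt_get=$(comm -23 pkgs-apt-get.txt pkgs-aptitude.txt)
# Installed using aptitude, missing with apt-get:
only_aptitude=$(comm -13 pkgs-apt-get.txt pkgs-aptitude.txt)

echo "only apt-get:  $only_apt_get"
echo "only aptitude: $only_aptitude"
```

With the dummy lists above, "bar" shows up as apt-get-only and "baz" as aptitude-only, mirroring the structure of the lists in this post.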

Tags: debian, debian edu, english.
Why isn't Debian Edu using VLC?
27th November 2010

In the latest issue of Linux Journal, the readers' choices were presented, and the winner among the multimedia players was VLC. Personally, I like VLC, and it is my player of choice when I first try to play a video file or stream. Only if VLC fails will I drag out gmplayer to see if it can do better. The reason is mostly the failure mode and trust. When VLC fails, it normally pops up an error message reporting the problem. When mplayer fails, it normally segfaults or just hangs. The latter failure mode drains my trust in the program.

But even if VLC is my player of choice, we have chosen to use mplayer in Debian Edu/Skolelinux. The reason is simple. We need a good browser plugin to play web videos seamlessly, and the VLC browser plugin is not very good. For example, it lacks in-line control buttons, so there is no way for the user to pause the video. Also, when I last tested the browser plugins available in Debian, the VLC plugin failed on several video pages where the mplayer based plugins worked. If the browser plugin for VLC were as good as the gecko-mediaplayer package (which uses mplayer), we would switch.

While VLC is a good player, its user interface is slightly annoying. The most annoying feature is its inconsistent use of keyboard shortcuts. When the player is in full screen mode, its shortcuts are different from when it is playing the video in a window. For example, space only works as pause in full screen mode. I wish it had consistent shortcuts and that space would work in window mode as well. Another nice shortcut in gmplayer is [enter] to restart the current video. It is very handy when playing a short video from the web and wanting to restart it when new people arrive to have a look at what is going on.

Tags: debian, debian edu, english, multimedia, video, web.
Debian Edu development gathering and General Assembly for FRiSK
29th November 2010

On Friday, the first Debian Edu / Skolelinux development gathering in a long time takes place here in Oslo, Norway. I really look forward to seeing all the good people working on the Squeeze release. The gathering is open for everyone interested in learning more about Debian Edu / Skolelinux.

On Saturday, the Norwegian member organization taking care of organizing these development gatherings, Fri Programvare i Skolen, will hold its General Assembly for 2010. Membership is open to all, and currently there are 388 people registered as members. Last year 32 members cast their vote in the memberdb based election system. I hope more people find time to vote this year.

Tags: debian edu, english, nuug.
Student group continues the work on my Reprap 3D printer
9th December 2010

A few days ago, I was introduced to some students in the robot student association Robotica Osloensis at the University of Oslo where I work, who planned to get their own 3D printer. They wanted to learn from me, based on my work in the area. After a short lunch meeting with them, I offered to lend them my reprap kit, as I never had time to complete the build and this seemed unlikely to change any time soon. I look forward to seeing how this goes. This Monday their volunteer driver picked up my kit and drove it to their lab, and tomorrow I am told the last exam is over, so they can start work on getting the 3D printer operational.

The robotics group has already built several robots on their own, and seems capable of getting the reprap operational. I really look forward to being able to print all the cool 3D designs published on Thingiverse. I even have some 3D scans that were made during Dagen@IFI, when one of the groups at the computer science department at the university demonstrated their very cool 3D scanner.

Tags: 3d-printer, english, reprap.
Now accepting bitcoins - anonymous and distributed p2p crypto-money
10th December 2010

With this week's lawless governmental attacks on Wikileaks and free speech, it has become obvious that PayPal, Visa and Mastercard can not be trusted to handle money transactions. A blog post from Simon Phipps on bitcoin reminded me about a project that a friend of mine mentioned earlier. I decided to follow Simon's example and get involved with BitCoin. I got some help from my friend to get it all running, and he even handed me some bitcoins to get started. I even donated a few bitcoins to Simon for helping me remember BitCoin.

So, what is BitCoin, you probably wonder? It is a digital crypto-currency, decentralised and handled using peer-to-peer networks. It allows anonymous transactions and prevents central control over the transactions, making it impossible for governments and companies alike to block donations and other transactions. The source is free software, and while the key dependency wxWidgets 2.9 for the graphical user interface is missing in Debian, the command line client builds just fine. Hopefully Jonas will get the package into Debian soon.

Bitcoins can be converted to other currencies, like USD and EUR. There are companies accepting bitcoins when selling services and goods, and there are even currency "stock" markets where the exchange rate is decided. There are not many users so far, but the concept seems promising. If you want to get started and lack a friend with any bitcoins to spare, you can even get some for free (0.05 bitcoin at the time of writing). Use BitcoinWatch to keep an eye on the current exchange rates.

As an experiment, I have decided to set up bitcoind on one of my machines. If you want to support my activity, please send Bitcoin donations to the address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b. Thank you!

Tags: bitcoin, debian, english, personvern, sikkerhet.
Some thoughts on BitCoins
11th December 2010

As I continue to explore BitCoin, I've started to wonder what properties the system has, and how it will be affected by laws and regulations here in Norway. Here are some random notes.

One interesting thing to note is that since the transactions are verified using a peer-to-peer network, all details about a transaction are known to everyone. This means that if a BitCoin address has been published, like I did with mine in my initial post about BitCoin, it is possible for everyone to see how many BitCoins have been transferred to that address. There is even a web service to look at the details of all transactions. There I can see that my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b has received 16.06 BitCoins, the 1LfdGnGuWkpSJgbQySxxCWhv8MHqvwst3 address of Simon Phipps has received 181.97 BitCoins and the address 1MCwBbhNGp5hRm5rC1Aims2YFRe2SXPYKt of EFF has received 2447.38 BitCoins so far. Thank you to each and every one of you that donated bitcoins to support my activity. The fact that anyone can see how much money was transferred to a given address makes it more obvious why the BitCoin community recommends generating and handing out a new address for each transaction. I'm told there is no way to track which addresses belong to a given person or organisation without the person or organisation revealing it themselves, as Simon, EFF and I have done.

In Norway, as in most other countries, there are laws and regulations limiting how much money one can transfer across the border without declaring it. There are money laundering, tax and accounting laws and regulations I would expect to apply to the use of BitCoin. If the Skolelinux foundation (SLX Debian Labs) were to accept donations in BitCoin in addition to normal bank transfers like EFF is doing, how should this be accounted for? Given that it is impossible to know if money came across the border or not, should everything or nothing be declared? What exchange rate should be used when calculating taxes? Would receivers have to pay income tax if the foundation were to pay Skolelinux contributors in BitCoin? I have no idea, but it would be interesting to know.

For a currency to be useful and successful, it must be trusted and accepted by a lot of users. It must be possible to get easy access to the currency (as a wage or through currency exchanges), and it must be easy to spend. At the moment BitCoin seems fairly easy to get access to, but there are very few places to spend it. I am not really a regular user of any of the vendor types currently accepting BitCoin, so I wonder when my kind of shop will start accepting BitCoins. I would like to buy electronics, travel and subway tickets, not herbs and books. :) The currency is young, and this will improve over time if it becomes popular, but I suspect regular banks will start to lobby to get BitCoin declared illegal if it becomes popular. I'm sure they will claim it is helping fund terrorism and money laundering (which probably would be true, as with any currency in existence), but I believe those problems should be solved elsewhere and not by blaming currencies.

The process of creating new BitCoins is called mining, and it is a CPU intensive process that depends on a bit of luck as well (as one is competing against all the other miners currently spending CPU cycles to see which one gets the next lump of cash). The "winner" gets 50 BitCoins when this happens. Yesterday I came across the obvious way to join forces to increase one's chances of getting at least some coins: coordinating the work on mining BitCoins across several machines and people, and sharing the result if one is lucky and gets the 50 BitCoins. Check out BitCoin Pool if this sounds interesting. I have not had time to set up a machine to participate there yet, but I have seen that mining on my own for a few days has not yielded any BitCoins yet.

Update 2010-12-15: Found an interesting criticism of bitcoin. Not quite sure how valid it is, but thought it was interesting to read. The arguments presented seem to be equally valid for gold, which was used as a currency for many years.

Tags: bitcoin, debian, english, personvern, sikkerhet.
How to test if a laptop is working with Linux
22nd December 2010

The last few days I have spent at work here at the University of Oslo testing whether the new batch of computers will work with Linux. Every year for the last few years the university has organized a shared bid for a few thousand computers, and this year HP won the bid. Two different desktops and five different laptops are on the list this year. We in the UNIX group want to know which of these computers work well with RHEL and Ubuntu, the two Linux distributions we currently handle at the university.

My test method is simple, and I share it here to get feedback and perhaps inspire others to test hardware as well. To test, I PXE install the OS version of choice, log in as my normal user, run a few applications and plug in selected pieces of hardware. When something fails, I make a note about it in the test matrix and move on. If I have some spare time I try to report the bug to the OS vendor, but as I only have the machines for a short time, I rarely have time to do this for all the problems I find.
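Much of this is manual, but the pass/fail bookkeeping for the test matrix can be sketched as a tiny shell helper. The two stand-in commands below are hypothetical placeholders; the real checks would be things like playing a sound file or suspending the machine:

```shell
# Print a pass/fail line for one named test, based on the exit
# status of the command used to check it.
run_test() {
    name="$1"; shift
    if "$@" >/dev/null 2>&1; then
        echo "$name: OK"
    else
        echo "$name: FAIL"
    fi
}

# Stand-ins for real checks such as 'aplay test.wav' or a suspend cycle.
run_test network true
run_test sound false
```

The output lines can then be pasted directly into the wiki test matrix, one row per test.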

Anyway, to get to the point of this post. Here are the simple tests I perform on a new model.

By now I suspect you are really curious what the test results are for the HP machines I am testing. I'm not done yet, so I will report the test results later. For now I can report that the HP 8100 Elite works fine, that hibernation fails with the HP EliteBook 8440p on Ubuntu Lucid, and that audio fails on RHEL6. Ubuntu Maverick worked with the 8440p. As you can see, I have most machines left to test. One interesting observation is that Ubuntu Lucid has almost twice the framerate of RHEL6 with glxgears. No idea why.

Tags: debian, debian edu, english.
Officeshots still going strong
25th December 2010

Half a year ago I wrote a bit about OfficeShots, a web service to allow anyone to test how ODF documents are handled by the different programs reading and writing the ODF format.

I just had a look at the service, and it seems to be going strong. It is very interesting to see the results reported in the gallery, showing how different office implementations handle different ODF features. Sad to see that KOffice was not doing very well, and happy to see that LibreOffice has been tested already (but sadly is not listed as an option for OfficeShots users yet). I am glad to see that the ODF community has such a great test tool available.

Tags: english, standard.
The reply from Edgar Villanueva to Microsoft in Peru
25th December 2010

A few days ago an article in the Norwegian Computerworld magazine reported how the proprietary software industry successfully lobbied to remove the focus on free software from version 2.0 of the European Interoperability Framework. Nothing very surprising there, given earlier reports on how Microsoft and others have stacked the committees in this work. But I find this very sad. The definition of an open standard from version 1 was very good, and something I believe should be used also in the future, alongside the definition from Digistan. Version 2 has removed the open standard definition from its content.

Anyway, the news reminded me of the great reply sent by Dr. Edgar Villanueva, a congressman in Peru at the time, to Microsoft in response to Microsoft's attack on his proposal regarding the use of free software in the public sector in Peru. As the text is no longer available from a few of the URLs where it used to be, I copy it here from my source to ensure it remains available in the future. Some background information about that story is available in an article from Linux Journal in 2002.

Lima, 8th of April, 2002
To: Señor JUAN ALBERTO GONZÁLEZ
General Manager of Microsoft Perú

Dear Sir:

First of all, I thank you for your letter of March 25, 2002 in which you state the official position of Microsoft relative to Bill Number 1609, Free Software in Public Administration, which is indubitably inspired by the desire for Peru to find a suitable place in the global technological context. In the same spirit, and convinced that we will find the best solutions through an exchange of clear and open ideas, I will take this opportunity to reply to the commentaries included in your letter.

While acknowledging that opinions such as yours constitute a significant contribution, it would have been even more worthwhile for me if, rather than formulating objections of a general nature (which we will analyze in detail later) you had gathered solid arguments for the advantages that proprietary software could bring to the Peruvian State, and to its citizens in general, since this would have allowed a more enlightening exchange in respect of each of our positions.

With the aim of creating an orderly debate, we will assume that what you call "open source software" is what the Bill defines as "free software", since there exists software for which the source code is distributed together with the program, but which does not fall within the definition established by the Bill; and that what you call "commercial software" is what the Bill defines as "proprietary" or "unfree", given that there exists free software which is sold in the market for a price like any other good or service.

It is also necessary to make it clear that the aim of the Bill we are discussing is not directly related to the amount of direct savings that can by made by using free software in state institutions. That is in any case a marginal aggregate value, but in no way is it the chief focus of the Bill. The basic principles which inspire the Bill are linked to the basic guarantees of a state of law, such as:

  • Free access to public information by the citizen.
  • Permanence of public data.
  • Security of the State and citizens.

To guarantee the free access of citizens to public information, it is indispensable that the encoding of data is not tied to a single provider. The use of standard and open formats gives a guarantee of this free access, if necessary through the creation of compatible free software.

To guarantee the permanence of public data, it is necessary that the usability and maintenance of the software does not depend on the goodwill of the suppliers, or on the monopoly conditions imposed by them. For this reason the State needs systems the development of which can be guaranteed due to the availability of the source code.

To guarantee national security or the security of the State, it is indispensable to be able to rely on systems without elements which allow control from a distance or the undesired transmission of information to third parties. Systems with source code freely accessible to the public are required to allow their inspection by the State itself, by the citizens, and by a large number of independent experts throughout the world. Our proposal brings further security, since the knowledge of the source code will eliminate the growing number of programs with *spy code*.

In the same way, our proposal strengthens the security of the citizens, both in their role as legitimate owners of information managed by the state, and in their role as consumers. In this second case, by allowing the growth of a widespread availability of free software not containing *spy code* able to put at risk privacy and individual freedoms.

In this sense, the Bill is limited to establishing the conditions under which the state bodies will obtain software in the future, that is, in a way compatible with these basic principles.

From reading the Bill it will be clear that once passed:

  • the law does not forbid the production of proprietary software
  • the law does not forbid the sale of proprietary software
  • the law does not specify which concrete software to use
  • the law does not dictate the supplier from whom software will be bought
  • the law does not limit the terms under which a software product can be licensed.
  • What the Bill does express clearly, is that, for software to be acceptable for the state it is not enough that it is technically capable of fulfilling a task, but that further the contractual conditions must satisfy a series of requirements regarding the license, without which the State cannot guarantee the citizen adequate processing of his data, watching over its integrity, confidentiality, and accessibility throughout time, as these are very critical aspects for its normal functioning.

    We agree, Mr. Gonzalez, that information and communication technology have a significant impact on the quality of life of the citizens (whether it be positive or negative). We surely also agree that the basic values I have pointed out above are fundamental in a democratic state like Peru. So we are very interested to know of any other way of guaranteeing these principles, other than through the use of free software in the terms defined by the Bill.

    As for the observations you have made, we will now go on to analyze them in detail:

    Firstly, you point out that: "1. The bill makes it compulsory for all public bodies to use only free software, that is to say open source software, which breaches the principles of equality before the law, that of non-discrimination and the right of free private enterprise, freedom of industry and of contract, protected by the constitution."

    This understanding is in error. The Bill in no way affects the rights you list; it limits itself entirely to establishing conditions for the use of software on the part of state institutions, without in any way meddling in private sector transactions. It is a well established principle that the State does not enjoy the wide spectrum of contractual freedom of the private sector, as it is limited in its actions precisely by the requirement for transparency of public acts; and in this sense, the preservation of the greater common interest must prevail when legislating on the matter.

    The Bill protects equality under the law, since no natural or legal person is excluded from the right of offering these goods to the State under the conditions defined in the Bill and without more limitations than those established by the Law of State Contracts and Purchasing (T.U.O. by Supreme Decree No. 012-2001-PCM).

    The Bill does not introduce any discrimination whatever, since it only establishes *how* the goods have to be provided (which is a state power) and not *who* has to provide them (which would effectively be discriminatory, if restrictions based on national origin, race religion, ideology, sexual preference etc. were imposed). On the contrary, the Bill is decidedly antidiscriminatory. This is so because by defining with no room for doubt the conditions for the provision of software, it prevents state bodies from using software which has a license including discriminatory conditions.

    It should be obvious from the preceding two paragraphs that the Bill does not harm free private enterprise, since the latter can always choose under what conditions it will produce software; some of these will be acceptable to the State, and others will not be since they contradict the guarantee of the basic principles listed above. This free initiative is of course compatible with the freedom of industry and freedom of contract (in the limited form in which the State can exercise the latter). Any private subject can produce software under the conditions which the State requires, or can refrain from doing so. Nobody is forced to adopt a model of production, but if they wish to provide software to the State, they must provide the mechanisms which guarantee the basic principles, and which are those described in the Bill.

    By way of an example: nothing in the text of the Bill would prevent your company offering the State bodies an office "suite", under the conditions defined in the Bill and setting the price that you consider satisfactory. If you did not, it would not be due to restrictions imposed by the law, but to business decisions relative to the method of commercializing your products, decisions with which the State is not involved.

    To continue; you note that:" 2. The bill, by making the use of open source software compulsory, would establish discriminatory and non competitive practices in the contracting and purchasing by public bodies..."

    This statement is just a reiteration of the previous one, and so the response can be found above. However, let us concern ourselves for a moment with your comment regarding "non-competitive ... practices."

    Of course, in defining any kind of purchase, the buyer sets conditions which relate to the proposed use of the good or service. From the start, this excludes certain manufacturers from the possibility of competing, but does not exclude them "a priori", but rather based on a series of principles determined by the autonomous will of the purchaser, and so the process takes place in conformance with the law. And in the Bill it is established that *no one* is excluded from competing as far as he guarantees the fulfillment of the basic principles.

    Furthermore, the Bill *stimulates* competition, since it tends to generate a supply of software with better conditions of usability, and to better existing work, in a model of continuous improvement.

    On the other hand, the central aspect of competivity is the chance to provide better choices to the consumer. Now, it is impossible to ignore the fact that marketing does not play a neutral role when the product is offered on the market (since accepting the opposite would lead one to suppose that firms' expenses in marketing lack any sense), and that therefore a significant expense under this heading can influence the decisions of the purchaser. This influence of marketing is in large measure reduced by the bill that we are backing, since the choice within the framework proposed is based on the *technical merits* of the product and not on the effort put into commercialization by the producer; in this sense, competitiveness is increased, since the smallest software producer can compete on equal terms with the most powerful corporations.

    It is necessary to stress that there is no position more anti-competitive than that of the big software producers, which frequently abuse their dominant position, since in innumerable cases they propose as a solution to problems raised by users: "update your software to the new version" (at the user's expense, naturally); furthermore, it is common to find arbitrary cessation of technical help for products, which, in the provider's judgment alone, are "old"; and so, to receive any kind of technical assistance, the user finds himself forced to migrate to new versions (with non-trivial costs, especially as changes in hardware platform are often involved). And as the whole infrastructure is based on proprietary data formats, the user stays "trapped" in the need to continue using products from the same supplier, or to make the huge effort to change to another environment (probably also proprietary).

    You add: "3. So, by compelling the State to favor a business model based entirely on open source, the bill would only discourage the local and international manufacturing companies, which are the ones which really undertake important expenditures, create a significant number of direct and indirect jobs, as well as contributing to the GNP, as opposed to a model of open source software which tends to have an ever weaker economic impact, since it mainly creates jobs in the service sector."

    I do not agree with your statement. Partly because of what you yourself point out in paragraph 6 of your letter, regarding the relative weight of services in the context of software use. This contradiction alone would invalidate your position. The service model, adopted by a large number of companies in the software industry, is much larger in economic terms than the licensing of programs, and it tends to grow.

    On the other hand, the private sector of the economy has the widest possible freedom to choose the economic model which best suits its interests, even if this freedom of choice is often obscured subliminally by the disproportionate expenditure on marketing by the producers of proprietary software.

    In addition, a reading of your opinion would lead to the conclusion that the State market is crucial and essential for the proprietary software industry, to such a point that the choice made by the State in this bill would completely eliminate the market for these firms. If that is true, we can deduce that the State must be subsidizing the proprietary software industry. In the unlikely event that this were true, the State would have the right to apply the subsidies in the area it considered of greatest social value; it is undeniable, in this improbable hypothesis, that if the State decided to subsidize software, it would have to do so choosing the free over the proprietary, considering its social effect and the rational use of taxpayers' money.

    In respect of the jobs generated by proprietary software in countries like ours, these mainly concern technical tasks of little aggregate value; at the local level, the technicians who provide support for proprietary software produced by transnational companies do not have the possibility of fixing bugs, not necessarily for lack of technical capability or of talent, but because they do not have access to the source code to fix it. With free software one creates more technically qualified employment and a framework of free competition where success is tied only to the ability to offer good technical support and quality of service; one stimulates the market; and one increases the shared fund of knowledge, opening up alternatives to generate services of greater total value and a higher quality level, to the benefit of all involved: producers, service organizations, and consumers.

    It is a common phenomenon in developing countries that local software industries obtain the majority of their takings in the service sector, or in the creation of "ad hoc" software. Therefore, any negative impact that the application of the Bill might have in this sector will be more than compensated by a growth in demand for services (as long as these are carried out to high quality standards). If the transnational software companies decide not to compete under these new rules of the game, it is likely that they will undergo some decrease in takings in terms of payment for licenses; however, considering that these firms continue to allege that much of the software used by the State has been illegally copied, one can see that the impact will not be very serious. Certainly, in any case their fortune will be determined by market laws, changes in which cannot be avoided; many firms traditionally associated with proprietary software have already set out on the road (supported by copious expense) of providing services associated with free software, which shows that the models are not mutually exclusive.

    With this bill the State is deciding that it needs to preserve certain fundamental values. And it is deciding this based on its sovereign power, without affecting any of the constitutional guarantees. If these values could be guaranteed without having to choose a particular economic model, the effects of the law would be even more beneficial. In any case, it should be clear that the State does not choose an economic model; if it happens that there only exists one economic model capable of providing software which provides the basic guarantee of these principles, this is because of historical circumstances, not because of an arbitrary choice of a given model.

    Your letter continues: "4. The bill imposes the use of open source software without considering the dangers that this can bring from the point of view of security, guarantee, and possible violation of the intellectual property rights of third parties."

    Alluding in an abstract way to "the dangers this can bring", without specifically mentioning a single one of these supposed dangers, shows at the least some lack of knowledge of the topic. So, allow me to enlighten you on these points.

    On security:

    National security has already been mentioned in general terms in the initial discussion of the basic principles of the bill. In more specific terms, relative to the security of the software itself, it is well known that all software (whether proprietary or free) contains errors or "bugs" (in programmers' slang). But it is also well known that the bugs in free software are fewer, and are fixed much more quickly, than in proprietary software. It is not in vain that numerous public bodies responsible for the IT security of state systems in developed countries require the use of free software for the same conditions of security and efficiency.

    What is impossible to prove is that proprietary software is more secure than free, without the public and open inspection of the scientific community and users in general. This demonstration is impossible because the model of proprietary software itself prevents this analysis, so that any guarantee of security is based only on promises of good intentions (biased, by any reckoning) made by the producer itself, or its contractors.

    It should be remembered that in many cases, the licensing conditions include Non-Disclosure clauses which prevent the user from publicly revealing security flaws found in the licensed proprietary product.

    In respect of the guarantee:

    As you know perfectly well, or could find out by reading the "End User License Agreement" of the products you license, in the great majority of cases the guarantees are limited to replacement of the storage medium in case of defects, but in no case is compensation given for direct or indirect damages, loss of profits, and so on. If, as a result of a security bug in one of your products, not fixed in time by yourselves, an attacker managed to compromise crucial State systems, what guarantees, reparations, and compensation would your company make in accordance with your licensing conditions? The guarantees of proprietary software, inasmuch as programs are delivered "AS IS", that is, in the state in which they are, with no additional responsibility of the provider in respect of function, in no way differ from those normal with free software.

    On Intellectual Property:

    Questions of intellectual property fall outside the scope of this bill, since they are covered by other, specific laws. The model of free software in no way implies ignorance of these laws, and in fact the great majority of free software is covered by copyright. In reality, the inclusion of this question in your observations shows your confusion in respect of the legal framework in which free software is developed. The inclusion of the intellectual property of others in works claimed as one's own is not a practice that has been noted in the free software community; whereas, unfortunately, it has been in the area of proprietary software. As an example, take the ruling of the Commercial Court of Nanterre, France, which on 27th September 2001 ordered Microsoft Corp. to pay a penalty of 3 million francs in damages and interest, for violation of intellectual property (piracy, to use the unfortunate term that your firm commonly uses in its publicity).

    You go on to say that: "The bill uses the concept of open source software incorrectly, since it does not necessarily imply that the software is free or of zero cost, and so arrives at mistaken conclusions regarding State savings, with no cost-benefit analysis to validate its position."

    This observation is wrong; in principle, freedom and lack of cost are orthogonal concepts: there is software which is proprietary and charged for (for example, MS Office), software which is proprietary and free of charge (MS Internet Explorer), software which is free and charged for (GNU/Linux distributions such as Red Hat and SuSE), software which is free and not charged for (Apache, Open Office, Mozilla), and even software which can be licensed in a range of combinations (MySQL).

    Certainly free software is not necessarily free of charge. And the text of the bill does not state that it has to be so, as you will have noted after reading it. The definitions included in the Bill state clearly *what* should be considered free software, at no point referring to freedom from charges. Although the possibility of savings in payments for proprietary software licenses is mentioned, the foundations of the bill clearly refer to the fundamental guarantees to be preserved and to the stimulus to local technological development. Given that a democratic State must support these principles, it has no other choice than to use software with publicly available source code, and to exchange information only in standard formats.

    If the State does not use software with these characteristics, it will be weakening basic republican principles. Luckily, free software also implies lower total costs; however, even given the hypothesis (easily disproved) that it was more expensive than proprietary software, the simple existence of an effective free software tool for a particular IT function would oblige the State to use it; not by command of this Bill, but because of the basic principles we enumerated at the start, and which arise from the very essence of the lawful democratic State.

    You continue: "6. It is wrong to think that Open Source Software is free of charge. Research by the Gartner Group (an important investigator of the technological market recognized at world level) has shown that the cost of purchase of software (operating system and applications) is only 8% of the total cost which firms and institutions take on for a rational and truly beneficial use of the technology. The other 92% consists of: installation costs, enabling, support, maintenance, administration, and down-time."

    This argument repeats that already given in paragraph 5 and partly contradicts paragraph 3. For the sake of brevity we refer to the comments on those paragraphs. However, allow me to point out that your conclusion is logically false: even if according to Gartner Group the cost of software is on average only 8% of the total cost of use, this does not in any way deny the existence of software which is free of charge, that is, with a licensing cost of zero.

    In addition, in this paragraph you correctly point out that the service components and losses due to down-time make up the largest part of the total cost of software use, which, as you will note, contradicts your statement regarding the small value of services suggested in paragraph 3. Now, the use of free software contributes significantly to reducing the remaining life-cycle costs. This reduction in the costs of installation, support, etc. can be noted in several areas: in the first place, the competitive service model of free software, support and maintenance for which can be freely contracted out to a range of suppliers competing on the grounds of quality and low cost. This is true for installation, enabling, and support, and in large part for maintenance. In the second place, due to the reproductive characteristics of the model, maintenance carried out for an application is easily replicable, without incurring large costs (that is, without paying more than once for the same thing), since modifications, if one wishes, can be incorporated in the common fund of knowledge. Thirdly, the huge costs caused by non-functioning software ("blue screens of death", malicious code such as viruses, worms, and trojans, exceptions, general protection faults, and other well-known problems) are reduced considerably by using more stable software; and it is well known that one of the most notable virtues of free software is its stability.

    You further state that: "7. One of the arguments behind the bill is the supposed freedom from costs of open-source software, compared with the costs of commercial software, without taking into account the fact that there exist types of volume licensing which can be highly advantageous for the State, as has happened in other countries."

    I have already pointed out that what is in question is not the cost of the software but the principles of freedom of information, accessibility, and security. These arguments have been covered extensively in the preceding paragraphs to which I would refer you.

    On the other hand, there certainly exist types of volume licensing (although unfortunately proprietary software does not satisfy the basic principles). But as you correctly pointed out in the immediately preceding paragraph of your letter, they only manage to reduce the impact of a component which makes up no more than 8% of the total.

    You continue: "8. In addition, the alternative adopted by the bill (I) is clearly more expensive, due to the high costs of software migration, and (II) puts at risk compatibility and interoperability of the IT platforms within the State, and between the State and the private sector, given the hundreds of versions of open source software on the market."

    Let us analyze your statement in two parts. Your first argument, that migration implies high costs, is in reality an argument in favor of the Bill. Because the more time goes by, the more difficult migration to another technology will become; and at the same time, the security risks associated with proprietary software will continue to increase. In this way, the use of proprietary systems and formats will make the State ever more dependent on specific suppliers. Once a policy of using free software has been established (which certainly does imply some cost), then on the contrary migration from one system to another becomes very simple, since all data is stored in open formats. On the other hand, migration to an open software context implies no more costs than migration between two different proprietary software contexts, which invalidates your argument completely.

    The second argument refers to "problems in interoperability of the IT platforms within the State, and between the State and the private sector". This statement implies a certain lack of knowledge of the way in which free software is built, which does not maximize the dependence of the user on a particular platform, as normally happens in the realm of proprietary software. Even when there are multiple free software distributions, and numerous programs which can be used for the same function, interoperability is guaranteed as much by the use of standard formats, as required by the bill, as by the possibility of creating interoperable software given the availability of the source code.

    You then say that: "9. The majority of open source code does not offer adequate levels of service nor the guarantee from recognized manufacturers of high productivity on the part of the users, which has led various public organizations to retract their decision to go with an open source software solution and to use commercial software in its place."

    This observation is without foundation. In respect of the guarantee, your argument was rebutted in the response to paragraph 4. In respect of support services, it is possible to use free software without them (just as also happens with proprietary software), but anyone who does need them can obtain support separately, whether from local firms or from international corporations, again just as in the case of proprietary software.

    On the other hand, it would contribute greatly to our analysis if you could inform us about free software projects *established* in public bodies which have already been abandoned in favor of proprietary software. We know of a good number of cases where the opposite has taken place, but do not know of any where what you describe has taken place.

    You continue by observing that: "10. The bill discourages the creativity of the Peruvian software industry, which invoices 40 million US$/year, exports 4 million US$ (10th in ranking among non-traditional exports, more than handicrafts) and is a source of highly qualified employment. With a law that encourages the use of open source, software programmers lose their intellectual property rights and their main source of payment."

    It is clear enough that nobody is forced to commercialize their code as free software. The only thing to take into account is that if it is not free software, it cannot be sold to the public sector. This is not in any case the main market for the national software industry. We covered above the questions concerning the Bill's influence on the generation of employment which is highly technically qualified and better placed to compete, so it seems unnecessary to insist on this point.

    What follows in your statement is incorrect. On the one hand, no author of free software loses his intellectual property rights, unless he expressly wishes to place his work in the public domain. The free software movement has always been very respectful of intellectual property, and has generated widespread public recognition of its authors. Names like those of Richard Stallman, Linus Torvalds, Guido van Rossum, Larry Wall, Miguel de Icaza, Andrew Tridgell, Theo de Raadt, Andrea Arcangeli, Bruce Perens, Darren Reed, Alan Cox, Eric Raymond, and many others, are recognized world-wide for their contributions to the development of software that is used today by millions of people throughout the world. On the other hand, to say that the rewards from authors' rights make up the main source of payment of Peruvian programmers is in any case a guess, in particular since there is no proof to this effect, nor a demonstration of how the use of free software by the State would influence these payments.

    You go on to say that: "11. Open source software, since it can be distributed without charge, does not allow the generation of income for its developers through exports. In this way, the multiplier effect of the sale of software to other countries is weakened, and so in turn is the growth of the industry, while Government rules ought on the contrary to stimulate local industry."

    This statement shows once again complete ignorance of the mechanisms of and market for free software. It tries to claim that the market of sale of non-exclusive rights for use (sale of licenses) is the only possible one for the software industry, when you yourself pointed out several paragraphs above that it is not even the most important one. The incentives that the bill offers for the growth of a supply of better qualified professionals, together with the increase in experience that working on a large scale with free software within the State will bring for Peruvian technicians, will place them in a highly competitive position to offer their services abroad.

    You then state that: "12. In the Forum, the use of open source software in education was discussed, without mentioning the complete collapse of this initiative in a country like Mexico, where precisely the State employees who founded the project now state that open source software did not make it possible to offer a learning experience to pupils in the schools, did not take into account the capability at a national level to give adequate support to the platform, and that the software did not and does not allow for the levels of platform integration that now exist in schools."

    In fact Mexico has gone into reverse with the Red Escolar (Schools Network) project. This is due precisely to the fact that the driving forces behind the Mexican project used license costs as their main argument, instead of the other reasons specified in our project, which are far more essential. Because of this conceptual mistake, and as a result of the lack of effective support from the SEP (Secretary of State for Public Education), the assumption was made that to introduce free software in schools it would be enough to drop their software budget and send them a CD-ROM with GNU/Linux instead. Of course this failed, and it couldn't have been otherwise, just as school laboratories fail when they use proprietary software and have no budget for implementation and maintenance. That's exactly why our bill is not limited to making the use of free software mandatory, but recognizes the need to create a viable migration plan, in which the State undertakes the technical transition in an orderly way in order to then enjoy the advantages of free software.

    You end with a rhetorical question: "13. If open source software satisfies all the requirements of State bodies, why do you need a law to adopt it? Shouldn't it be the market which decides freely which products give most benefits or value?"

    We agree that in the private sector of the economy, it must be the market that decides which products to use, and no state interference is permissible there. However, in the case of the public sector, the reasoning is not the same: as we have already established, the state archives, handles, and transmits information which does not belong to it, but which is entrusted to it by citizens, who have no alternative under the rule of law. As a counterpart to this legal requirement, the State must take extreme measures to safeguard the integrity, confidentiality, and accessibility of this information. The use of proprietary software raises serious doubts as to whether these requirements can be fulfilled, lacks conclusive evidence in this respect, and so is not suitable for use in the public sector.

    The need for a law is based, firstly, on the realization of the fundamental principles listed above in the specific area of software; secondly, on the fact that the State is not an ideal homogeneous entity, but made up of multiple bodies with varying degrees of autonomy in decision making. Given that it is inappropriate to use proprietary software, the fact of establishing these rules in law will prevent the personal discretion of any state employee from putting at risk the information which belongs to citizens. And above all, because it constitutes an up-to-date reaffirmation in relation to the means of management and communication of information used today, it is based on the republican principle of openness to the public.

    In conformance with this universally accepted principle, the citizen has the right to know all information held by the State and not covered by well-founded declarations of secrecy based on law. Now, software deals with information and is itself information. Information in a special form, capable of being interpreted by a machine in order to execute actions, but crucial information all the same, because the citizen has a legitimate right to know, for example, how his vote is computed or his taxes calculated. And for that he must have free access to the source code, and be able to test, to his satisfaction, the programs used for electoral computations or the calculation of his taxes.

    I wish you the greatest respect, and would like to repeat that my office will always be open for you to expound your point of view to whatever level of detail you consider suitable.

    Cordially,
    DR. EDGAR DAVID VILLANUEVA NUÑEZ
    Congressman of the Republic of Perú.

    Tags: digistan, english, standard.
    Is Ogg Theora a free and open standard?
    25th December 2010

    The Digistan definition of a free and open standard reads like this:

    The Digital Standards Organization defines free and open standard as follows:

    1. A free and open standard is immune to vendor capture at all stages in its life-cycle. Immunity from vendor capture makes it possible to freely use, improve upon, trust, and extend a standard over time.
    2. The standard is adopted and will be maintained by a not-for-profit organisation, and its ongoing development occurs on the basis of an open decision-making procedure available to all interested parties.
    3. The standard has been published and the standard specification document is available freely. It must be permissible to all to copy, distribute, and use it freely.
    4. The patents possibly present on (parts of) the standard are made irrevocably available on a royalty-free basis.
    5. There are no constraints on the re-use of the standard.

    The economic outcome of a free and open standard, which can be measured, is that it enables perfect competition between suppliers of products based on the standard.

    For a while now I have tried to figure out whether Ogg Theora is a free and open standard according to this definition. Here is a short writeup of what I have been able to gather so far. I brought up the topic on the Xiph advocacy mailing list in July 2009, for those that want to see some background information. According to Ivo Emanuel Gonçalves and Monty Montgomery on that list, the Ogg Theora specification fulfils the Digistan definition.

    Free from vendor capture?

    As far as I can see, there is no single vendor that can control the Ogg Theora specification. It could be argued that the Xiph foundation is such a vendor, but given that it is a non-profit foundation with the expressed goal of making free and open protocols and standards available, it is not obvious that this is a real risk. One issue with the Xiph foundation is that its inner workings (such as the board member list, or who controls the foundation) are not easily available on the web. I have been unable to find out who is on the foundation board, and have not seen any accounting information documenting how money is handled nor where it is spent in the foundation. It is thus not obvious to an external observer who controls the Xiph foundation, and for all I know it is possible for a single vendor to take control over the specification. But it seems unlikely.

    Maintained by open not-for-profit organisation?

    Assuming that the Xiph foundation is the organisation its web pages claim it to be, this point is fulfilled. If the Xiph foundation is controlled by a single vendor, it is not, but I have not found any documentation indicating this.

    According to a report prepared by Audun Vaaler and Børre Ludvigsen for the Norwegian government, the Xiph foundation is a non-commercial organisation and the development process is open, transparent, and non-discriminatory. Until proven otherwise, I believe it makes most sense to assume the report is correct.

    Specification freely available?

    The specification for the Ogg container format and both the Vorbis and Theora codecs are available on the web. These are the terms in the Vorbis and Theora specifications:

    Anyone may freely use and distribute the Ogg and [Vorbis/Theora] specifications, whether in private, public, or corporate capacity. However, the Xiph.Org Foundation and the Ogg project reserve the right to set the Ogg [Vorbis/Theora] specification and certify specification compliance.

    The Ogg container format is specified in IETF RFC 3533, and these are its terms:

    This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

    The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.
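    As an aside, the Ogg page format that RFC 3533 specifies is simple enough to inspect by hand, which is itself a point in favour of an openly published specification. The following Python sketch (not from the original post; the field layout is taken from the RFC) parses the fixed 27-byte Ogg page header:

    ```python
    import struct

    def parse_ogg_page_header(data):
        """Parse the fixed 27-byte Ogg page header defined in RFC 3533."""
        if data[0:4] != b"OggS":
            raise ValueError("missing Ogg capture pattern")
        # version (1), header type (1), granule position (8, signed),
        # stream serial (4), page sequence (4), CRC (4), segment count (1)
        (version, header_type, granule_pos, serial, sequence,
         crc, n_segments) = struct.unpack_from("<BBqIIIB", data, 4)
        segment_table = list(data[27:27 + n_segments])
        return {
            "version": version,
            "continued": bool(header_type & 0x01),  # packet continued from previous page
            "bos": bool(header_type & 0x02),        # first page of a logical stream
            "eos": bool(header_type & 0x04),        # last page of a logical stream
            "granule_position": granule_pos,
            "serial_number": serial,
            "page_sequence": sequence,
            "crc_checksum": crc,
            "segment_table": segment_table,
        }
    ```

    Pointing such a parser at the first 27 + n bytes of any .ogg file shows the "OggS" capture pattern and the beginning-of-stream flag on the first page, exactly as the freely available specification describes.
    
    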

    All these terms seem to allow unlimited distribution and use, so this requirement seems to be fulfilled. There might be a problem with the missing permission to distribute modified versions of the text, and thus to reuse it in other specifications. I am not quite sure if that is a requirement in the Digistan definition.

    Royalty-free?

    There are no known patent claims requiring royalties for the Ogg Theora format. MPEG-LA and Steve Jobs of Apple claim to know about some patent claims (submarine patents) against the Theora format, but no one else seems to believe them. Both Opera Software and the Mozilla Foundation have looked into this and decided to implement Ogg Theora support in their browsers without paying any royalties. For now, the claims from MPEG-LA and Steve Jobs seem more like FUD to scare people into using the H.264 codec than like any real problem with Ogg Theora.

    No constraints on re-use?

    I am not aware of any constraints on re-use.

    Conclusion

    3 of 5 requirements seem obviously fulfilled, and the remaining 2 depend on the governing structure of the Xiph foundation. Given the background report used by the Norwegian government, I believe it is safe to assume the last two requirements are fulfilled too, but it would be nice if the Xiph foundation web site made it easier to verify this.

    It would be nice to see similar analyses of other specifications, to check whether they are free and open standards.

    Tags: digistan, english, standard, video.
    The many definitions of an open standard
    27th December 2010

    One of the reasons I like the Digistan definition of "Free and Open Standard" is that it is a new term, and thus the meaning of the term has been decided by Digistan. The term "Open Standard" has become so misunderstood that it is no longer very useful when talking about standards. One ends up discussing which definition is the best one, and within such a frame the only ones gaining are the proponents of de-facto standards and proprietary solutions.

    But to give us an idea about the diversity of definitions of open standards, here are a few that I know about. This list is not complete, but can be a starting point for those that want to do a complete survey. More definitions are available on the wikipedia page.

    First off is my favourite, the definition from the European Interoperability Framework version 1.0. It is really sad to notice that BSA and others have succeeded in getting it removed from version 2.0 of the framework by stacking the committee drafting the new version with their own people. Anyway, the definition is still available, and it includes the key properties needed to make sure everyone can use a specification on equal terms.

    The following are the minimal characteristics that a specification and its attendant documents must have in order to be considered an open standard:

    • The standard is adopted and will be maintained by a not-for-profit organisation, and its ongoing development occurs on the basis of an open decision-making procedure available to all interested parties (consensus or majority decision etc.).
    • The standard has been published and the standard specification document is available either freely or at a nominal charge. It must be permissible to all to copy, distribute and use it for no fee or at a nominal fee.
    • The intellectual property - i.e. patents possibly present - of (parts of) the standard is made irrevocably available on a royalty- free basis.
    • There are no constraints on the re-use of the standard.

    Another one originates from my friends over at DKUUG, who coined and gathered support for this definition in 2004. It even made it into the Danish parliament as their definition of an open standard. Another one, from a different part of the Danish government, is available from the Wikipedia page.

    An open standard satisfies the following requirements (translated from Danish):

    1. Well documented, with the complete specification publicly available.
    2. Freely implementable without economic, political or legal limitations on implementation and use.
    3. Standardised and maintained in an open forum (a so-called "standardisation organisation") through an open process.

    Then there is the definition from Free Software Foundation Europe.

    An Open Standard refers to a format or protocol that is

    1. subject to full public assessment and use without constraints in a manner equally available to all parties;
    2. without any components or extensions that have dependencies on formats or protocols that do not meet the definition of an Open Standard themselves;
    3. free from legal or technical clauses that limit its utilisation by any party or in any business model;
    4. managed and further developed independently of any single vendor in a process open to the equal participation of competitors and third parties;
    5. available in multiple complete implementations by competing vendors, or as a complete implementation equally available to all parties.

    A long time ago, SUN Microsystems, now bought by Oracle, created its Open Standards Checklist with a fairly detailed description.

    Creation and Management of an Open Standard

    • Its development and management process must be collaborative and democratic:
      • Participation must be accessible to all those who wish to participate and can meet fair and reasonable criteria imposed by the organization under which it is developed and managed.
      • The processes must be documented and, through a known method, can be changed through input from all participants.
      • The process must be based on formal and binding commitments for the disclosure and licensing of intellectual property rights.
      • Development and management should strive for consensus, and an appeals process must be clearly outlined.
      • The standard specification must be open to extensive public review at least once in its life-cycle, with comments duly discussed and acted upon, if required.

    Use and Licensing of an Open Standard

    • The standard must describe an interface, not an implementation, and the industry must be capable of creating multiple, competing implementations to the interface described in the standard without undue or restrictive constraints. Interfaces include APIs, protocols, schemas, data formats and their encoding.
    • The standard must not contain any proprietary "hooks" that create technical or economic barriers.
    • Faithful implementations of the standard must interoperate. Interoperability means the ability of a computer program to communicate and exchange information with other computer programs and mutually to use the information which has been exchanged. This includes the ability to use, convert, or exchange file formats, protocols, schemas, interface information or conventions, so as to permit the computer program to work with other computer programs and users in all the ways in which they are intended to function.
    • It must be permissible for anyone to copy, distribute and read the standard for a nominal fee, or even no fee. If there is a fee, it must be low enough to not preclude widespread use.
    • It must be possible for anyone to obtain free (no royalties or fees; also known as "royalty free"), worldwide, non-exclusive and perpetual licenses to all essential patent claims to make, use and sell products based on the standard. The only exceptions are terminations per the reciprocity and defensive suspension terms outlined below. Essential patent claims include pending, unpublished patents, published patents, and patent applications. The license is only for the exact scope of the standard in question.
      • May be conditioned only on reciprocal licenses to any of licensees' patent claims essential to practice that standard (also known as a reciprocity clause)
      • May be terminated as to any licensee who sues the licensor or any other licensee for infringement of patent claims essential to practice that standard (also known as a "defensive suspension" clause)
      • The same licensing terms are available to every potential licensor
    • The licensing terms of an open standards must not preclude implementations of that standard under open source licensing terms or restricted licensing terms

    It is said that one of the nice things about standards is that there are so many of them. As you can see, the same holds true for open standard definitions. Most of the definitions have a lot in common, and it is not really controversial what properties an open standard should have, but the diversity of definitions has made it possible for those that want to avoid a level playing field and real competition to downplay the significance of open standards. I hope we can turn this tide by focusing on the advantages of Free and Open Standards.

    Tags: digistan, english, standard.
    What standards are Free and Open as defined by Digistan?
    30th December 2010

    After trying to compare Ogg Theora to the Digistan definition of a free and open standard, I concluded that this needs to be done for more standards, and started on a framework for doing it. As a start, I want to get the status of all the standards in the Norwegian reference directory, which includes UTF-8, HTML, PDF, ODF, JPEG, PNG, SVG and others. But to be able to complete this in a reasonable time frame, I will need help.

    If you want to help out with this work, please visit the wiki pages I have set up for this, and let me know that you want to help out. The IRC channel #nuug on irc.freenode.net is a good place to coordinate this for now, as it is the IRC channel for the NUUG association where I have created the framework (I am the leader of the Norwegian Unix User Group).

    The framework is still forming, and a lot is left to do. Do not be scared by the sketchy form of the current pages. :)

    Tags: digistan, english, standard.
    Chrome plan to drop H.264 support for HTML5 <video>
    12th January 2011

    Today I discovered via digi.no that the Chrome developers, in a surprising announcement, yesterday announced plans to drop H.264 support for HTML5 <video> in the browser. The argument used is that H.264 is not a "completely open" codec technology. If you believe H.264 is free for everyone to use, I recommend having a look at the essay "H.264 – Not The Kind Of Free That Matters". It is not free of cost for creators of video tools, nor for those of us who want to publish on the Internet, and the terms provided by MPEG-LA exclude free software projects from licensing the patents needed for H.264. Some background information on the Google announcement is available from OSnews. A good read. :)

    Personally, I believe it is great that Google is taking a stand to promote equal terms for everyone when it comes to video publishing on the Internet. This can only be done by publishing using free and open standards, which is only possible if the web browsers provide support for these free and open standards. At the moment there seem to be two camps in the web browser world when it comes to video support. Some browsers support H.264, and others support Ogg Theora and WebM (Dirac is not really an option yet), forcing those of us who want to publish video on the Internet and cannot accept the terms of use presented by MPEG-LA for H.264 to miss some potential viewers. Wikipedia keeps an updated summary of the current browser support.

    Not surprisingly, several people would prefer Google to keep promoting H.264, and John Gruber presents the mindset of these people quite well. His rhetorical questions provoked a reply from Thom Holwerda with another set of questions presenting the issues with H.264. Both are worth a read.

    Some argue that if Google is dropping H.264 because it isn't free, they should also drop support for the Adobe Flash plugin. This argument was covered by Simon Phipps in today's blog post, which I find puts the issue in context. To me it makes perfect sense to drop native H.264 support for HTML5 in the browser while still allowing plugins.

    I suspect the reason this announcement makes so many people protest is that all the users and promoters of H.264 suddenly get an uneasy feeling that they might be backing the wrong horse. A lot of TV broadcasters have been moving to H.264 over the last few years, and a lot of money has been invested in hardware based on the belief that they could use the same video format for both broadcasting and web publishing. Suddenly this belief is shaken.

    An interesting question is why Google is doing this. While the presented argument might be true enough, I believe Google would only present it if the change makes sense from a business perspective. One reason might be that they are currently negotiating with MPEG-LA over royalties or usage terms, and that giving MPEG-LA the feeling H.264 could be dropped completely from Chrome, YouTube and Google Video would improve the negotiation position of Google. Another reason might be that Google wants to save money by not having to pay the video tax to MPEG-LA at all, and thus wants to move to a video format not requiring royalties. A third reason might be that the Chrome development team simply wants to get rid of the H.264 difference between Chrome and Chromium, to get more help with the development of Chrome. I guess time will tell.

    Update 2011-01-15: The Google Chrome team provided more background and information on the move in a blog post yesterday.

    Tags: english, standard, video.
    The video format most supported in web browsers?
    16th January 2011

    The video format struggle on the web continues, and the three contenders seem to be Ogg Theora, H.264 and WebM. Most video sites seem to use H.264, while others use Ogg Theora. Interestingly enough, the comments I see give me the feeling that a lot of people believe H.264 is the most supported video format in browsers, but according to the Wikipedia article on HTML5 video, this is not true. Check out the nice table of supported formats in different browsers there. The format supported by most browsers is Ogg Theora, supported by released versions of Mozilla Firefox, Google Chrome, Chromium, Opera, Konqueror, Epiphany, Origyn Web Browser and the BOLT browser, while not supported by Internet Explorer nor Safari. The runner-up is WebM, supported by released versions of Google Chrome, Chromium, Opera and Origyn Web Browser, and by test versions of Mozilla Firefox. H.264 is supported by released versions of Safari, Origyn Web Browser and the BOLT browser, and by the test version of Internet Explorer. Those wanting Ogg Theora support in Internet Explorer and Safari can install plugins to get it.

    To me, the simple conclusion from this is that to reach most users without any extra software installed, one uses Ogg Theora with the HTML5 video tag. Of course, to reach all those without a browser handling HTML5, one needs fallback mechanisms. In NUUG, we provide a first fallback to a plugin capable of playing MPEG1 video, and for those without such support a second fallback to the Cortado Java applet playing Ogg Theora. This seems to work quite well, as can be seen in an example from last week.
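
    Such a fallback chain can be sketched in HTML roughly like this. The file names, dimensions and the cortado.jar path are made up for illustration, and the applet class name is the one I remember Cortado using; browsers without HTML5 <video> support render the content inside the tag, so the plugin object and the applet act as successive fallbacks:

    ```html
    <video controls>
      <source src="meeting.ogv" type="video/ogg">
      <!-- first fallback: browser plugin playing an MPEG1 copy -->
      <object type="video/mpeg" data="meeting.mpg" width="640" height="480">
        <!-- second fallback: Cortado java applet playing the Ogg Theora file -->
        <applet code="com.fluendo.player.Cortado.class"
                archive="cortado.jar" width="640" height="480">
          <param name="url" value="meeting.ogv">
        </applet>
      </object>
    </video>
    ```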

    The reason Ogg Theora is the most supported format, and H.264 the least supported, is simple. Implementing and using H.264 requires royalty payments to MPEG-LA, and the terms of use from MPEG-LA are incompatible with free software licensing. If you believed H.264 was without royalties and license terms, check out "H.264 – Not The Kind Of Free That Matters" by Simon Phipps.

    An incomplete list of sites providing video in Ogg Theora is available from the Xiph.org wiki, if you want to have a look. I'm not aware of a similar list for WebM or H.264.

    Update 2011-01-16 09:40: A question from Tollef on IRC made me realise that I failed to make it clear enough that this text is about the <video> tag support in browsers, and not the video support provided by external plugins like the Flash plugins.

    Tags: english, nuug, standard, video.
    Which module is loaded for a given PCI and USB device?
    23rd January 2011

    In the discover-data package in Debian, there is a script to report useful information about the running hardware for use when people report missing information. One part of this script that I find very useful when debugging hardware problems is the part mapping loaded kernel modules to the PCI devices they claim. It allows me to quickly see if the kernel module I expect is driving the hardware I am struggling with. To see the output, make sure discover-data is installed and run /usr/share/bug/discover-data 3>&1. The relevant output on one of my machines looks like this:

    loaded modules:
    10de:03eb i2c_nforce2
    10de:03f1 ohci_hcd
    10de:03f2 ehci_hcd
    10de:03f0 snd_hda_intel
    10de:03ec pata_amd
    10de:03f6 sata_nv
    1022:1103 k8temp
    109e:036e bttv
    109e:0878 snd_bt87x
    11ab:4364 sky2
    

    The code in question looks like this, slightly modified for readability and to drop the output to file descriptor 3:

    if [ -d /sys/bus/pci/devices/ ] ; then
        echo loaded pci modules:
        (
            cd /sys/bus/pci/devices/
            for address in * ; do
                # Only devices claimed by a kernel module have this symlink
                if [ -d "$address/driver/module" ] ; then
                    # Resolve the symlink to get the module name
                    module=`cd $address/driver/module ; pwd -P | xargs basename`
                    if grep -q "^$module " /proc/modules ; then
                        # Strip the PCI domain prefix before calling lspci
                        address=$(echo $address |sed s/0000://)
                        id=`lspci -n -s $address | tail -n 1 | awk '{print $3}'`
                        echo "$id $module"
                    fi
                fi
            done
        )
        echo
    fi
    

    Similar code could be used to extract USB device module mappings:

    if [ -d /sys/bus/usb/devices/ ] ; then
        echo loaded usb modules:
        (
            cd /sys/bus/usb/devices/
            for address in * ; do
                if [ -d "$address/driver/module" ] ; then
                    module=`cd $address/driver/module ; pwd -P | xargs basename`
                    if grep -q "^$module " /proc/modules ; then
                        # USB drivers usually bind to interfaces (like
                        # 1-1:1.0), while the idVendor/idProduct files live
                        # in the parent device directory (1-1), so strip the
                        # interface suffix.  Reading the ids from sysfs also
                        # avoids lsusb, which expects a bus:devnum pair
                        # rather than the sysfs address.
                        device=${address%%:*}
                        if [ -f "$device/idVendor" ] ; then
                            id=$(cat $device/idVendor):$(cat $device/idProduct)
                            echo "$id $module"
                        fi
                    fi
                fi
            done
        )
        echo
    fi
    

    This might perhaps be something to include in other tools as well.
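
    For tools that prefer not to shell out to lspci, the same mapping can be done by reading sysfs directly. Here is a rough Python sketch of the idea; the function name is mine, and the vendor and device ids come from the sysfs attribute files instead of being parsed out of lspci output:

    ```python
    import os

    def pci_module_map(sysfs="/sys/bus/pci/devices"):
        """Return (vendor:device, module) pairs for PCI devices whose
        driver is provided by a loaded kernel module."""
        result = []
        if not os.path.isdir(sysfs):
            return result
        for address in sorted(os.listdir(sysfs)):
            # Devices claimed by a kernel module have a driver/module symlink
            link = os.path.join(sysfs, address, "driver", "module")
            if not os.path.islink(link):
                continue
            module = os.path.basename(os.path.realpath(link))
            # The vendor and device files contain values like '0x10de'
            with open(os.path.join(sysfs, address, "vendor")) as f:
                vendor = f.read().strip().replace("0x", "")
            with open(os.path.join(sysfs, address, "device")) as f:
                device = f.read().strip().replace("0x", "")
            result.append(("%s:%s" % (vendor, device), module))
        return result

    for pci_id, module in pci_module_map():
        print(pci_id, module)
    ```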

    Tags: debian, english.
    Using NVD and CPE to track CVEs in locally maintained software
    28th January 2011

    The last few days I have looked at ways to track open security issues here at my work at the University of Oslo. My idea is that it should be possible to use the information about security issues available on the Internet, and check our locally maintained/distributed software against it. This should allow us to verify that no known security issues are forgotten. The CVE database listing vulnerabilities seems like a great central point, and by using the package lists from Debian mapped to CVEs provided by the testing security team, I believed it should be possible to figure out which security holes were present in our free software collection.

    After reading up on the topic, it became obvious that the first building block is to be able to name software packages in a unique and consistent way across data sources. I considered several ways to do this, for example coming up with my own naming scheme, like using URLs to project home pages or URLs to the Freshmeat entries, or using some existing naming scheme. And it seems I am not the first one to come across this problem, as MITRE already proposed and implemented a solution. Enter the Common Platform Enumeration dictionary, a vocabulary for referring to software, hardware and other platform components. The CPE ids are mapped to CVEs in the National Vulnerability Database, allowing me to look up known security issues for any CPE name. With this in place, all I need to do is to locate the CPE id for the software packages we use at the university. This is fairly trivial (I google for 'cve cpe $package' and check the NVD entry if a CVE for the package exists).

    To give an example: the GNU gzip source package has the CPE name cpe:/a:gnu:gzip. If the old version 1.3.3 is the version to check, one can look up cpe:/a:gnu:gzip:1.3.3 in NVD and get a list of 6 security holes with public CVE entries. The most recent one is CVE-2010-0001, and at the bottom of the NVD page for this vulnerability the complete list of affected versions is provided.

    The NVD database of CVEs is also available as an XML dump, allowing for offline processing of issues. Using this dump, I've written a small script taking a list of CPEs as input and listing all CVEs affecting the packages represented by these CPEs. One gives it CPEs with version numbers as specified above, and gets a list of open security issues out.
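
    The following Python sketch shows the idea behind such a script, run against a tiny inline sample instead of the real NVD XML dump. The element names mirror my reading of the nvdcve-2.0 feed (entries with an id attribute and a vulnerable-software-list of product elements carrying CPE names) and may need adjusting against the real dump; XML namespaces are handled by matching on the tail of the tag name:

    ```python
    import xml.etree.ElementTree as ET

    # Minimal stand-in for the real NVD dump (namespaces omitted)
    SAMPLE = """<nvd>
      <entry id="CVE-2010-0001">
        <vulnerable-software-list>
          <product>cpe:/a:gnu:gzip:1.3.3</product>
          <product>cpe:/a:gnu:gzip:1.3.5</product>
        </vulnerable-software-list>
      </entry>
      <entry id="CVE-2009-9999">
        <vulnerable-software-list>
          <product>cpe:/a:example:other:1.0</product>
        </vulnerable-software-list>
      </entry>
    </nvd>"""

    def cves_for_cpes(xml_text, cpes):
        """Map each CPE to the CVE ids listing it as vulnerable."""
        hits = {cpe: [] for cpe in cpes}
        root = ET.fromstring(xml_text)
        for entry in root.iter():
            # Tag suffix match works with and without namespaces
            if not entry.tag.endswith("entry"):
                continue
            cve = entry.get("id")
            products = [e.text for e in entry.iter()
                        if e.tag.endswith("product")]
            for cpe in cpes:
                if cpe in products:
                    hits[cpe].append(cve)
        return hits

    print(cves_for_cpes(SAMPLE, ["cpe:/a:gnu:gzip:1.3.3"]))
    # prints {'cpe:/a:gnu:gzip:1.3.3': ['CVE-2010-0001']}
    ```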

    Of course, for this approach to be useful, the quality of the NVD information needs to be high. For that to happen, I believe as many parties as possible need to use and contribute to the NVD database. I notice RHEL is providing a map from CVE to CPE, indicating that they are using the CPE information. I'm not aware of Debian and Ubuntu doing the same.

    To get an idea about the quality for free software, I spent some time making it possible to compare the CVE database from Debian with the CVE database in NVD. The result looks fairly good, but there are some inconsistencies in NVD (the same software package having several CPEs), and some inaccuracies (NVD not mentioning buggy packages that Debian believes are affected by a CVE). I hope to find time to improve the quality of NVD, but that requires being able to get in touch with someone maintaining it. So far my three emails with questions and corrections have not seen any reply, but I hope contact can be established soon.

    An interesting application for CPEs is cross-platform package mapping. It would be useful to know which packages in, for example, RHEL, OpenSuSE and Mandriva are missing from Debian and Ubuntu, and this would be trivial if all Linux distributions provided CPE entries for their packages.

    Tags: debian, english, sikkerhet.
    A Norwegian FixMyStreet have kept me busy the last few weeks
    3rd April 2011

    Here is a small update for my English readers. Most of my blog posts have been in Norwegian the last few weeks, so here is a short update in English.

    The kids still keep me too busy to get much free software work done, but I did manage to organise a project to get a Norwegian port of the British service FixMyStreet up and running, and it has been running for a month now. The entire project has been organised by me and two others. Around Christmas we gathered sponsors to fund the development work. In January I drafted a contract with mySociety on what to develop, and in February the development took place. Most of it involved converting the source to use GPS coordinates instead of British easting/northing, and the resulting code should be a lot easier to get running in any country by now. The Norwegian FiksGataMi is using OpenStreetMap as the map source and as the source for administrative borders in Norway, and support for this had to be added/fixed.

    The Norwegian version went live on March 3rd, and we spent the weekend polishing the system before we announced it on March 7th. The system is running on a KVM instance of Debian/Squeeze, and has seen almost 3000 problem reports in a few weeks. Soon we hope to announce the Android and iPhone versions, making it even easier to report problems with the public infrastructure.

    Perhaps something to consider for those of you in countries without such service?

    Tags: debian, english, fiksgatami, kart.
    Gnash enters Google Summer of Code 2011
    6th April 2011

    The Gnash project is still the most promising solution for a Free Software Flash implementation. A few days ago the project announced that it will participate in Google Summer of Code. I hope many students apply, and that some of them succeed in getting AVM2 support into Gnash.

    Tags: english, multimedia, video, web.
    Initial notes on adding Open311 server API on FixMyStreet
    29th April 2011

    The last few days I have spent some time trying to add support for the Open311 API to the Norwegian FixMyStreet service. Earlier I believed Open311 would be a useful API for submitting reports to the municipalities, but when I noticed that the New Zealand version of FixMyStreet had implemented Open311 on the server side, it occurred to me that this was a nice way to allow the public, press and municipalities to do data mining directly in the FixMyStreet service. Thus I went to work implementing the Open311 specification for FixMyStreet. The implementation is not yet ready, but I am starting to get a draft limping along. In the process, I have discovered a few issues with the Open311 specification.

    One obvious missing feature is the lack of natural language handling in the specification. The specification seems to assume all reports will be written in English, and does not provide a way for the receiving end to specify which languages are understood there. To be able to use the same client and submit to several Open311 receivers, it would be useful to know which language to use when writing reports. I believe the specification should be extended to allow the receivers of problem reports to specify which languages they accept, and the submitter to specify which language the report is written in. The language of a text can also be guessed automatically using statistical methods, but for multilingual persons like myself, it is useful to know which language to use when writing a problem report. I suspect some lang=nb,nn kind of attribute would solve it.
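
    To illustrate the suggestion, here is a hypothetical sketch of how a service discovery response could advertise accepted languages. The accepted_languages element is my invention and not part of the Open311 specification; the service_code and service_name elements are the ones used in the Open311 service list:

    ```xml
    <service>
      <service_code>road_fault</service_code>
      <service_name>Hull i veien</service_name>
      <!-- hypothetical extension: languages the receiver accepts -->
      <accepted_languages>nb,nn</accepted_languages>
    </service>
    ```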

    A key part of the Open311 API is the list of services provided, which is similar to the categories used by FixMyStreet. One issue I ran into is the need to specify both a name and a unique identifier for each category. The specification does not state that the identifier should be numeric, but all example implementations have used numbers here. In FixMyStreet, there is no number associated with each category. As the specification does not forbid it, I will use the name as the unique identifier for now and see how Open311 clients handle it.

    The report format in Open311 and the report format in FixMyStreet differ in a key part. FixMyStreet has a title and a description, while Open311 only has a description and lacks the title. I'm not quite sure how to best handle this yet. When asking for a FixMyStreet report in Open311 format, I just merge title and description into the Open311 description, but this is not going to work if the Open311 API should be used for submitting new reports to FixMyStreet.

    The search feature in Open311 is missing a way to ask for problems near a geographic location. I believe this is important if one is to use Open311 as the query language for mobile units. The specification should be extended to handle this, probably using some new lat=, lon= and range= options.
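
    A hypothetical request using such options could look like the following. The parameter names are the ones suggested above and not part of the current specification, with the range given in metres:

    ```
    GET /requests.xml?lat=59.9321&lon=10.7166&range=500
    ```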

    The final challenge I see is that the FixMyStreet code handles several administrations in one interface, while the Open311 API seems to assume only one administration. For FixMyStreet, this means a report can be sent to several administrations, and the categories available depend on the location of the problem. I am not quite sure how to best handle this. I've noticed SeeClickFix added latitude and longitude options to the services request, but that does not solve the problem of what to return when no location is specified. I will have to investigate this a bit more.

    My distaste for web forums has kept me from bringing these issues up with the Open311 developer group. I really wish they had an email list available via Gmane to use for discussions instead of only a forum. Oh, well. That will probably resolve itself, one way or another. I've also tried visiting the IRC channel #open311 on FreeNode, but no-one seems to reply to my questions there. This makes me wonder if I just fail to understand how the Open311 community works. It sure does not work like the free software project communities I am used to.

    Tags: english, fiksgatami, open311.
    Experimental Open311 API for the mySociety fixmystreet system
    30th April 2011

    Today, the first draft implementation of an Open311 API for the Norwegian service FiksGataMi started to work. It is only available on the developer server for now, and I have not tested it using any existing Open311 client (I lack the platforms needed to run the clients I have found so far), but it is able to query the database and extract a list of open and closed requests within a given category and reported to a given municipality. I believe that is a good start towards creating a useful service for those that want to do data mining on the requests submitted so far.

    Where is it? Visit http://fiksgatami-dev.nuug.no/open311.cgi/v2/ to have a look. Please send feedback to the fiksgatami (at) nuug.no mailing list.

    Tags: english, fiksgatami, open311.
    Free Software vs. proprietary software...
    20th June 2011

    Reading the thingiverse blog, I came across two highlights of interesting parts of the Autodesk and Microsoft Kinect End User License Agreements (EULAs), which illustrates quite well why I stay away from software with EULAs. Whenever I take the time to read their content, the terms are simply unacceptable.

    Tags: english, opphavsrett.
    Perl modules used by FixMyStreet which are missing in Debian/Squeeze
    26th July 2011

    The Norwegian FiksGataMi site is built on Debian/Squeeze. This platform was chosen because I am most familiar with Debian (being a Debian Developer for around 10 years), and because it is the latest stable Debian release, which should get security support for a few years.

    The web service is written in Perl, and depends on some Perl modules that are missing in Debian at the moment. It would be great if these modules were added to the Debian archive, allowing anyone to set up their own FixMyStreet clone in their own country using only Debian packages. The list of modules missing in Debian/Squeeze isn't very long, and I hope the Perl group will find time to package the 12 modules Catalyst::Plugin::SmartURI, Catalyst::Plugin::Unicode::Encoding, Catalyst::View::TT, Devel::Hide, Sort::Key, Statistics::Distributions, Template::Plugin::Comma, Template::Plugin::DateTime::Format, Term::Size::Any, Term::Size::Perl, URI::SmartURI and Web::Scraper to make the maintenance of FixMyStreet easier in the future.

    Thanks to the great tools in Debian, getting the missing modules installed on my server was a simple matter of calling 'cpan2deb Module::Name' and 'dpkg -i' to install the resulting package. But this leaves me with the responsibility of tracking security problems, which I really do not have time for.

    Tags: debian, english, fiksgatami.
    What is missing in the Debian desktop, or why my parents use Kubuntu
    29th July 2011

    While at Debconf11, I have several times during discussions mentioned the issues I believe should be improved in Debian for its desktop to be useful for more people. The use case for this is my parents, who are currently running Kubuntu, which solves these issues.

    I suspect these four missing features are not very hard to implement. After all, they are present in Ubuntu, so if we wanted to do this in Debian we would have a source.

    1. Simple GUI based upgrade of packages. When there are new packages available for upgrade, an icon in the KDE status bar indicates this, and clicking on it will activate the simple upgrade tool to handle it. I have no problem guiding both of my parents through the process over the phone. If a kernel reboot is required, this too is indicated by the status bar and the upgrade tool. Last time I checked, nothing with the same features was working in KDE in Debian.
    2. Simple handling of missing Firefox browser plugins. When the browser encounters a MIME type it does not currently have a handler for, it will ask the user whether the system should search for a package that would add support for this MIME type. If the user says yes, the APT sources will be searched for packages advertising the MIME type in their control files (visible in the Packages file in the APT archive). If one or more packages are found, adding support for the missing MIME type is a simple click of the mouse. If the package requires the user to accept some non-free license, this is explained to the user. The entire process makes it clearer to the user why something does not work in the browser, and makes it more likely that the user will blame the web page authors and not the browser for any missing features.
    3. Simple handling of missing multimedia codec/format handlers. When the media player encounters a format or codec it does not support, a dialog pops up asking the user if the system should search for a package that would add support for it. This happens with things like MP3, Windows Media or H.264. The selection and installation procedure is very similar to the Firefox browser plugin handling. This is, as far as I know, implemented using a GStreamer hook. The end result is that the user easily gets access to the codecs present in the available APT archives, while it is explained why a given format is unsupported by Ubuntu out of the box.
    4. Better browser handling of some MIME types. When displaying a text/plain file in my Debian browser, it will propose to start emacs to show it. If I remember correctly, when doing the same in Kubuntu it shows the file as a text file in the browser. At least I know Opera will show text files within the browser. I much prefer the latter behaviour.

    There are other nice features as well, like the simplified suite upgrader, but given that I am the one mostly doing the dist-upgrade, it does not matter much.

    I really hope we could get these features in place for the next Debian release. It would require the coordinated effort of several maintainers, but would make the end user experience a lot better.

    Tags: debian, english, multimedia, web.
    What should start from /etc/rcS.d/ in Debian? - almost nothing
    30th July 2011

    In the Debian boot system, several packages include scripts that are started from /etc/rcS.d/. In fact, there are a few more of them than makes sense, and this causes problems. What kind of problems, you might ask. There are at least two. The first is that it is not possible to recover a machine after switching to runlevel 1. One needs to actually reboot to get the machine back to the expected state. The other is that a single user boot will sometimes run into problems because some subsystems are activated before the root login is presented, causing trouble when trying to recover a machine from a problem in that subsystem. A minor additional point is that moving more scripts out of rcS.d/ and into the other rc#.d/ directories increases the number of scripts that can run in parallel during boot, and thus decreases the boot time.

    So, which scripts should start from rcS.d/? In short, only the scripts that _have_ to execute before the root login prompt is presented during a single user boot should go there. Everything else should go into the numeric runlevels. This means things like lm-sensors, fuse and x11-common should not run from rcS.d/, but from the numeric runlevels. Today in Debian, there are around 115 init.d scripts started from rcS.d/, and most of them should be moved out. Does your package have one of them? Please help us make single user mode and runlevel 1 better by moving it.

    Scripts setting up the screen, keyboard, system partitions etc. should still be started from rcS.d/, but there is for example no need to have the network enabled before the single user login prompt is presented.

    As always, things are not as easy to fix as they sound. To keep Debian systems working while scripts migrate, and during upgrades, the scripts need to be moved from rcS.d/ to rc2.d/ in reverse dependency order, i.e. the scripts that nothing in rcS.d/ depends on can be moved first, and the next ones can only be moved once their dependencies have been moved. This migration must be done sequentially, while ensuring that the package system upgrades packages in the right order to keep the system state correct. It will require some coordination for network related packages, but most of the packages with scripts that should migrate have nothing in rcS.d/ depending on them. Some packages have already been updated, like sudo, while others are still left to do. I wish I had time to work on this myself, but real life constraints make it unlikely that I will find time to push this forward.
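    The "nothing in rcS.d/ depends on it" condition can be approximated by inspecting the LSB headers of the scripts. Here is a rough shell heuristic of my own (not an official Debian tool; real headers also list virtual facilities like $remote_fs, which this ignores):

```shell
#!/bin/sh
# Heuristic sketch: list scripts in an rcS.d-style directory that no
# sibling script names in its Required-Start: LSB header.  These are
# the first candidates for migration to the numeric runlevels.
list_migration_candidates() {
    dir=$1
    for link in "$dir"/S*; do
        [ -e "$link" ] || continue
        name=$(basename "$link" | sed 's/^S[0-9]*//')
        # Keep $name only if no script other than itself requires it.
        if ! grep -l "^# Required-Start:.*$name" "$dir"/S* 2>/dev/null |
                grep -qv "/S[0-9]*$name\$"; then
            echo "$name"
        fi
    done
}
# Example: list_migration_candidates /etc/rcS.d
```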

    Tags: bootsystem, debian, english.
    How is booting into runlevel 1 different from single user boots?
    4th August 2011

    Wouter Verhelst has some interesting comments and opinions on my blog post on the need to clean up /etc/rcS.d/ in Debian and my blog post about the default KDE desktop in Debian. I only have time to address one small piece of his comment now, and thought it best to address the misunderstanding he brings forward:

    Currently, a system admin has four options: [...] boot to a single-user system (by adding 'single' to the kernel command line; this runs rcS and rc1 scripts)

    This makes me believe Wouter thinks booting into single user mode and booting into runlevel 1 are the same. I am not surprised he believes this, because it would make sense and is quite a sensible thing to believe. But because the boot system in Debian is slightly broken, runlevel 1 does not work properly and is not the same as single user mode. I'll try to explain what is actually happening, though it is a bit hard to explain.

    Single user mode is defined like this in /etc/inittab: "~~:S:wait:/sbin/sulogin". This means the only thing executed in single user mode is sulogin. Single user mode is a boot state "between" the runlevels, and when booting into single user mode, only the scripts in /etc/rcS.d/ are executed before the init process enters the single user state. When switching to runlevel 1, the system does not in fact end up in runlevel 1, but passes through it and ends up in single user mode (see /etc/rc1.d/S03single, which runs "init -t1 S" to switch to single user mode at the end of runlevel 1). It is confusing that the 'S' (single user) init mode is not the mode enabled by /etc/rcS.d/ (which is more like the initial boot mode).

    This summary might make it clearer. When booting for the first time into single user mode, the following commands are executed: "/etc/init.d/rc S; /sbin/sulogin". When booting into runlevel 1, the following commands are executed: "/etc/init.d/rc S; /etc/init.d/rc 1; /sbin/sulogin". A problem shows up when trying to continue after visiting single user mode. Not all services are started again as they should be, causing the machine to end up in an unpredictable state. This is why Debian admins recommend rebooting after visiting single user mode.

    A similar problem with runlevel 1 is caused by the number of scripts executed from /etc/rcS.d/. When switching from, say, runlevel 2 to runlevel 1, the services started from /etc/rcS.d/ are not properly stopped when passing through the scripts in /etc/rc1.d/, and not started again when switching away from runlevel 1 to runlevels 2-5. I believe the problem is best fixed by moving out of /etc/rcS.d/ all the scripts that are not required to get a functioning single user mode during boot.

    I have spent several years investigating the Debian boot system, and discovered this problem a few years ago. I suspect it originates from when sysvinit was introduced into Debian, a long time ago.

    Tags: bootsystem, debian, english.
    Ripping problematic DVDs using dvdbackup and genisoimage
    17th September 2011

    For convenience, I want to store copies of all my DVDs on my file server. It allows me to save shelf space while still having my movie collection easily available. It also makes it possible to let the kids watch their favourite DVDs without wearing down the physical copies. I prefer to store the DVDs as ISOs to keep the DVD menu and subtitle options intact, and it ensures that the entire film is a single file on disk. As this is for personal use, the ripping is perfectly legal here in Norway.

    Normally I rip the DVDs using dd like this:

    #!/bin/sh
    # apt-get install lsdvd
    title=$(lsdvd 2>/dev/null | awk '/Disc Title: / {print $3}')
    dd if=/dev/dvd of="/storage/dvds/$title.iso" bs=1M
    

    But some DVDs give an input/output error when I read them, and I have been looking for a better alternative. I have no idea why these I/O errors occur, but suspect my DVD drive, the Linux kernel driver, or something fishy with the DVDs in question. Or perhaps all three.

    Anyway, I believe I found a solution today using dvdbackup and genisoimage. This script gave me a working ISO for a problematic movie by first extracting the DVD file system and then repacking it as an ISO.

    #!/bin/sh
    # apt-get install lsdvd dvdbackup genisoimage
    set -e
    tmpdir=/storage/dvds/
    title=$(lsdvd 2>/dev/null | awk '/Disc Title: / {print $3}')
    dvdbackup -i /dev/dvd -M -o "$tmpdir" -n "$title"
    genisoimage -dvd-video -o "$tmpdir/$title.iso" "$tmpdir/$title"
    rm -rf "$tmpdir/$title"
    

    Anyone know of a better way available in Debian/Squeeze?

    Update 2011-09-18: I got a tip from Konstantin Khomoutov about the readom program from the wodim package. It is specially written to read optical media, and is called like this: readom dev=/dev/dvd f=image.iso. It managed to read 6 GB of the problematic Cars DVD before it failed, and failed right away on a Timmy Time DVD.
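    A small wrapper could also pick a reader automatically depending on what is installed. A sketch (the tool order and the plain dd fallback are my own choices; the readom invocation syntax is from the tip above):

```shell
#!/bin/sh
# Sketch: pick the first available optical media reader, preferring
# readom from the wodim package, and fall back to plain dd.
pick_dvd_reader() {
    for tool in readom dd; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "$tool"
            return 0
        fi
    done
    return 1
}

rip_dvd() {
    device=$1 image=$2
    case $(pick_dvd_reader) in
        readom) readom dev="$device" f="$image" ;;
        dd)     dd if="$device" of="$image" bs=1M ;;
    esac
}
# Example: rip_dvd /dev/dvd /storage/dvds/title.iso
```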

    Next, I got a tip from Bastian Blank about his program python-dvdvideo, which seems to be just what I am looking for. I tested it with my problematic Timmy Time DVD, and it succeeded in creating an ISO image. The git source built and installed just fine on Squeeze, so I guess this will be my tool of choice in the future.

    Tags: english, opphavsrett, video.
    Free e-book kiosk for the public libraries?
    7th October 2011

    Here in Norway the public libraries are debating with the publishing houses how to handle electronic books. Surprisingly, the libraries seem willing to accept digital restriction mechanisms (DRM) on books and to rent e-books with artificial scarcity from the publishing houses. Time limited renting (2-3 years) is one proposed model, and only allowing X borrowers per book is another. Personally I find it amazing that libraries are even considering such models.

    Anyway, while reading parts of this debate, it occurred to me that someone should present a more sensible approach to the libraries, to allow their borrowers to get used to a better model. The idea is simple:

    Create a computer system for the libraries, either in the form of a Live DVD or an installable distribution, that provides a simple kiosk solution for handing out free e-books. As a start, the books distributed by Project Gutenberg (about 36,000 books), Project Runeberg (1,149 books) and The Internet Archive (3,033,748 books) could be included, but any book where the copyright has expired, or with a free licence, could be distributed.

    The computer system would make it easy to:

    In addition to such a kiosk solution, there should probably be a web site as well, to allow people easy access to these books without visiting the library. The site would be the distribution point for the kiosk systems, which would connect regularly to fetch any new books available.

    Is anyone working on a system like this? I guess it would fit any library in the world, and not just the Norwegian public libraries. :)

    Tags: english, opphavsrett.
    Automatically upgrading server firmware on Dell PowerEdge
    21st November 2011

    At work we have heaps of servers. I believe the total count is around 1000 at the moment. To be able to get help from the vendors when something goes wrong, we want to keep the firmware on the servers up to date. If the firmware is not the latest and greatest, the vendors typically refuse to start debugging any problem until the firmware is upgraded. So before every reboot, we want to upgrade the firmware, and we would really like everyone handling servers at the university to do this themselves when they plan to reboot a machine. For that to happen, we in the unix server admin group need to provide the tools to do so.

    To make firmware upgrading easier, I am working on a script to fetch and install the latest firmware for the servers we have. Most of our hardware is from Dell and HP, so I have focused on those servers so far. This blog post is about the Dell part.

    On the Dell FTP site I was lucky enough to find an XML file with firmware information for all 11th generation servers, listing which firmware should be used on a given model and where on the FTP site to find it. Using a simple Perl XML parser, I can download the shell scripts Dell provides to do firmware upgrades from within Linux, and reboot once all the firmware is primed and ready to be activated on the first reboot.

    This is the Dell related fragment of the Perl code I am working on. Is anyone working on similar tools for upgrading the firmware of all servers at a site? Please get in touch and let's share resources.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Temp qw(tempdir);
    BEGIN {
        # Install needed RHEL packages if missing
        my %rhelmodules = (
            'XML::Simple' => 'perl-XML-Simple',
            );
        for my $module (keys %rhelmodules) {
            eval "use $module;";
            if ($@) {
                my $pkg = $rhelmodules{$module};
                system("yum install -y $pkg");
                eval "use $module;";
            }
        }
    }
    my $errorsto = 'pere@hungry.com';
    
    upgrade_dell();
    
    exit 0;
    
    sub run_firmware_script {
        my ($opts, $script) = @_;
        unless ($script) {
            print STDERR "fail: missing script name\n";
            exit 1
        }
        print STDERR "Running $script\n\n";
    
        if (0 == system("sh $script $opts")) { # FIXME correct exit code handling
            print STDERR "success: firmware script ran successfully\n";
        } else {
            print STDERR "fail: firmware script returned error\n";
        }
    }
    
    sub run_firmware_scripts {
        my ($opts, @dirs) = @_;
        # Run firmware packages
        for my $dir (@dirs) {
            print STDERR "info: Running scripts in $dir\n";
            opendir(my $dh, $dir) or die "Unable to open directory $dir: $!";
            while (my $s = readdir $dh) {
            next if $s =~ m/^\.\.?$/;
                run_firmware_script($opts, "$dir/$s");
            }
            closedir $dh;
        }
    }
    
    sub download {
        my $url = shift;
        print STDERR "info: Downloading $url\n";
        system("wget --quiet \"$url\"");
    }
    
    sub upgrade_dell {
        my @dirs;
        my $product = `dmidecode -s system-product-name`;
        chomp $product;
    
        if ($product =~ m/PowerEdge/) {
    
            # on RHEL, these packages are needed by the firmware upgrade scripts
            system('yum install -y compat-libstdc++-33.i686 libstdc++.i686 libxml2.i686 procmail');
    
            my $tmpdir = tempdir(
                CLEANUP => 1
                );
            chdir($tmpdir);
            fetch_dell_fw('catalog/Catalog.xml.gz');
            system('gunzip Catalog.xml.gz');
            my @paths = fetch_dell_fw_list('Catalog.xml');
            # -q is quiet, disabling interactivity and reducing console output
            my $fwopts = "-q";
            if (@paths) {
                for my $url (@paths) {
                    fetch_dell_fw($url);
                }
                run_firmware_scripts($fwopts, $tmpdir);
            } else {
                print STDERR "error: Found no firmware bundle for Dell model '$product'.\n";
                print STDERR "error: Please report to $errorsto.\n";
            }
            chdir('/');
        } else {
            print STDERR "error: Unsupported Dell model '$product'.\n";
            print STDERR "error: Please report to $errorsto.\n";
        }
    }
    
    sub fetch_dell_fw {
        my $path = shift;
        my $url = "ftp://ftp.us.dell.com/$path";
        download($url);
    }
    
    # Using ftp://ftp.us.dell.com/catalog/Catalog.xml.gz, figure out which
    # firmware packages to download from Dell.  Only work for Linux
    # machines and 11th generation Dell servers.
    sub fetch_dell_fw_list {
        my $filename = shift;
    
        my $product = `dmidecode -s system-product-name`;
        chomp $product;
        my ($mybrand, $mymodel) = split(/\s+/, $product);
    
        print STDERR "Finding firmware bundles for $mybrand $mymodel\n";
    
        my $xml = XMLin($filename);
        my @paths;
        for my $bundle (@{$xml->{SoftwareBundle}}) {
            my $brand = $bundle->{TargetSystems}->{Brand}->{Display}->{content};
            my $model = $bundle->{TargetSystems}->{Brand}->{Model}->{Display}->{content};
            my $oscode;
            if ("ARRAY" eq ref $bundle->{TargetOSes}->{OperatingSystem}) {
                $oscode = $bundle->{TargetOSes}->{OperatingSystem}[0]->{osCode};
            } else {
                $oscode = $bundle->{TargetOSes}->{OperatingSystem}->{osCode};
            }
            if ($mybrand eq $brand && $mymodel eq $model && "LIN" eq $oscode)
            {
                @paths = map { $_->{path} } @{$bundle->{Contents}->{Package}};
            }
        }
        for my $component (@{$xml->{SoftwareComponent}}) {
            my $componenttype = $component->{ComponentType}->{value};
    
            # Drop application packages, only firmware and BIOS
            next if 'APAC' eq $componenttype;
    
            my $cpath = $component->{path};
            for my $path (@paths) {
                if ($cpath =~ m%/$path$%) {
                    push(@paths, $cpath);
                }
            }
        }
        return @paths;
    }
    

    The code has only been tested on Red Hat Enterprise Linux, but I suspect it could work on other platforms with some tweaking. Does anyone know if an index like Catalog.xml is available from HP for their servers? At the moment I maintain a similar list manually, and it is quickly getting outdated.

    Tags: debian, english.
    Fixing a hanging Debian installer for Debian Edu
    3rd January 2012

    During Christmas, I have been working on getting the next version of Debian Edu / Skolelinux ready for release. The initial problem I looked at was particularly interesting.

    The installer would hang at the end, while doing its post-installation configuration, and whatever I did to try to find the cause and fix it always worked while I tested it, but never when I integrated it into the installer and ran the installation from scratch. I would try to restart processes, close file descriptors, remove or create files, and the installer would always unblock and wrap up its tasks.

    Eventually the cause was found. The kernel was simply running out of entropy, causing the Kerberos setup to hang waiting for more. Pressing keys added entropy to the kernel, and thus all my attempts to fix the problem worked, not because of what I was typing, but because I was typing.

    The fix I implemented was to add a background process that watches the level of entropy in the kernel (by checking /proc/sys/kernel/random/entropy_avail), and if it drops too low, the installer flushes the kernel file buffers and runs 'find /' to generate some disk IO. Disk IO generates entropy in the kernel, and is one of the few things that can be initiated from within the system to generate entropy.
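    The watcher can be sketched like this (the threshold value and polling interval are my assumptions; the actual Debian Edu hook may differ in the details):

```shell
#!/bin/sh
# Sketch of the entropy watcher: poll the kernel entropy estimate, and
# generate disk IO when it drops too low.
entropy_file=/proc/sys/kernel/random/entropy_avail
threshold=200   # assumed value; pick what works for the installer

low_entropy() {
    [ "$(cat "$entropy_file")" -lt "$threshold" ]
}

watch_entropy() {
    while true; do
        if low_entropy; then
            sync                           # flush kernel file buffers
            find / > /dev/null 2>&1        # disk IO feeds the entropy pool
        fi
        sleep 5
    done
}
# Run in the background during installation: watch_entropy &
```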

    The fix is in beta1 of the Debian Edu/Squeeze version, and we welcome more testers and developers. We plan to release beta2 this weekend.

    Tags: debian edu, english.
    Second beta version of Debian Edu / Skolelinux based on Squeeze
    7th January 2012

    I am happy to announce that today we managed to wrap up and publish the second beta version of Debian Edu / Skolelinux. If you want to test an LDAP backed Kerberos server with out of the box PXE configuration for running diskless machines and installing new machines, check it out. If you need a software solution for your school, check it out too. The full announcement is available on the project announcement list.

    Tags: debian edu, english.
    Changing the default Iceweasel start page in Debian Edu/Squeeze
    10th January 2012

    In the Squeeze version of Debian Edu / Skolelinux soon to be released, users of the system will get their default browser start page set from LDAP, allowing the system administrator to point all users to the school web page by updating one setting in LDAP. In addition to setting the default start page when a machine boots, users are shown the same page as a welcome page when they log in for the first time.

    The LDAP object dc=skole,dc=skolelinux,dc=no has an attribute labeledURI, with "http://www/ LDAP for Debian Edu/Skolelinux" as the default content. By changing this value to another URL, all users get to see the page behind the new URL.

    An easy way to update it is using the ldapvi tool. It can be called as "ldapvi -ZD '(cn=admin)'" to update LDAP with the new setting.
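    For a scripted change, the same update can be sketched with ldapmodify and a small LDIF fragment. The bind DN and the StartTLS/simple-auth options are assumptions on my part, and the URL is a placeholder; adjust both for the local setup:

```shell
# Sketch: replace the default start page attribute non-interactively.
# The bind DN and the -Z/-x/-D/-W options are assumptions; adjust them
# to match the local LDAP setup.
ldapmodify -Z -x -D cn=admin,ou=ldap-access,dc=skole,dc=skolelinux,dc=no -W <<'EOF'
dn: dc=skole,dc=skolelinux,dc=no
changetype: modify
replace: labeledURI
labeledURI: http://www.example.org/ New school start page
EOF
```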

    We have written the code to adjust the default start page and show the welcome page, and I wonder if there is an easier way to do this from within Iceweasel instead.

    Tags: debian edu, english, web.
    Setting up a new school with Debian Edu/Squeeze
    25th January 2012

    The next version of Debian Edu / Skolelinux will include a new tool sitesummary2ldapdhcp, which can be used to quickly set up all the computers in a school without much manual labour. Here is a short summary on how to use it to set up a new school.

    First, install a combined Main Server and Thin Client Server as the central server on the network. Next, PXE boot all the client machines as thin clients and wait 5 minutes after the last client has booted, to allow the clients to report their existence to the central server. When this is done, log on to the central server and run sitesummary2ldapdhcp in a console to use the collected information to generate system objects in LDAP. The output will look similar to this:

    % sitesummary2ldapdhcp 
    info: Updating machine tjener.intern [10.0.2.2] id ether-00:01:02:03:04:05.
    info: Create GOsa machine for auto-mac-00-01-02-03-04-06 [10.0.16.20] id ether-00:01:02:03:04:06.
    
    Enter password if you want to activate these changes, and ^c to abort.
    
    Connecting to LDAP as cn=admin,ou=ldap-access,dc=skole,dc=skolelinux,dc=no
    enter password: *******
    % 
    

    After providing the LDAP administrative password (the same as the root password set during installation), the LDAP database is populated with system objects for each PXE booted machine, with automatically generated names. The final step in setting up the school is to log into GOsa, the web based user, group and system administration system, to change system names, add systems to the correct host groups and finally enable DHCP and DNS for the systems. All clients that should be used as diskless workstations should be added to the workstation-hosts group. After this is done, all computers can be booted again via PXE and get their assigned names and group based configuration automatically.

    We plan to release beta3 with the updated version of this feature enabled this weekend. You might want to give it a try.

    Tags: debian edu, english, sitesummary.

    RSS Feed

    Created by Chronicle v4.4