<?xml version="1.0" encoding="utf-8"?>
<rss version='2.0' xmlns:lj='http://www.livejournal.org/rss/lj/1.0/'>
<title>Petter Reinholdtsen - Entries tagged english</title>
<description>Entries tagged english</description>
<title>The sorry state of multimedia browser plugins in Debian</title>
<link>../../The_sorry_state_of_multimedia_browser_plugins_in_Debian.html</link>
<guid isPermaLink="true">../../The_sorry_state_of_multimedia_browser_plugins_in_Debian.html</guid>
<pubDate>Tue, 25 Nov 2008 00:10:00 +0100</pubDate>
<p>Recently I have spent some time evaluating the multimedia browser
plugins available in Debian Lenny, to see which one we should use by
default in Debian Edu. We need an embedded video playing plugin with
control buttons to pause or stop the video, capable of streaming
all the multimedia content available on the web. The test results and
notes are available on
<a href="http://wiki.debian.org/DebianEdu/BrowserMultimedia">the
Debian wiki</a>. I was surprised how few of the plugins are able to
fill this need. My personal video player favorite, VLC, has a really
bad plugin which fails on a lot of the test pages. A lot of the MIME
types I would expect to work with any free software player (like
video/ogg) just do not work, and simple formats like
audio/x-mpegurl (m3u playlists) are not supported by the
totem and vlc plugins. I hope the situation will improve soon. No
wonder sites use the proprietary Adobe Flash to play video.</p>
<p>For Lenny, we seem to end up with the mplayer plugin. It seems to
be the only one fitting our needs. :/</p>
<title>Devcamp brought us closer to the Lenny based Debian Edu release</title>
<link>../../Devcamp_brought_us_closer_to_the_Lenny_based_Debian_Edu_release.html</link>
<guid isPermaLink="true">../../Devcamp_brought_us_closer_to_the_Lenny_based_Debian_Edu_release.html</guid>
<pubDate>Sun, 7 Dec 2008 12:00:00 +0100</pubDate>
<p>This weekend we had a small developer gathering for Debian Edu in
Oslo. Most of Saturday was used for the general assembly of the
member organization, but the rest of the weekend I used to tune the
LTSP installation. LTSP now works out of the box on the 10-network.
The Acer Aspire One proved to be a very nice thin client, with screen,
mouse and keyboard in a small box. I was working on getting the
diskless workstation setup configured out of the box, but did not
finish it before the weekend was up.</p>

<p>I did not find time to look at the 4 VGA cards in one box we got from
the Brazilian group, so that will have to wait for the next
development gathering. I would love to have the Debian Edu installer
automatically detect and configure a multiseat setup when it finds one
of these cards.</p>
<title>Software video mixer on a USB stick</title>
<link>../../Software_video_mixer_on_a_USB_stick.html</link>
<guid isPermaLink="true">../../Software_video_mixer_on_a_USB_stick.html</guid>
<pubDate>Sun, 28 Dec 2008 15:40:00 +0100</pubDate>
<p>The <a href="http://www.nuug.no/">Norwegian Unix User Group</a> is
recording our monthly presentation on video, and recently we have
worked on improving the quality of the recordings by mixing the slides
directly with the video stream. For this, we use the
<a href="http://dvswitch.alioth.debian.org/">dvswitch</a> package from
the Debian video team. As this requires one computer per video
source, and NUUG does not have enough laptops available, we need to
borrow laptops. And to avoid having to install extra software on
these borrowed laptops, I have wrapped up all the programs needed on a
bootable USB stick. The software required is dvswitch with associated
source, sink and mixer applications and
<a href="http://www.kinodv.org/">dvgrab</a>. To allow this setup to
work without any configuration, I've patched dvswitch to use
<a href="http://www.avahi.org/">avahi</a> to connect the various parts
together. And to allow us to use laptops without firewire plugs, I
upgraded dvgrab to the one from Debian/unstable to get one that works
with USB sources. We have not yet tested this setup in a production
setting, but I hope it will work properly, and allow us to set up a
video mixer in a very short time frame. We will need it for
<a href="http://www.goopen.no/">Go Open 2009</a>.</p>
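<p>To give an idea of the setup, this is roughly how the pieces are
started when everything is on one network. The host name and port are
made up, and the exact option names may differ between dvswitch
versions, so treat it as a sketch of the idea rather than a tested
recipe:</p>

<blockquote><pre># on the mixer laptop, start the mixer and listen for sources
dvswitch --host=0.0.0.0 --port=2000

# on each camera laptop, feed the DV stream to the mixer
dvsource-firewire --host=mixer.local --port=2000

# on the laptop recording the result, store the mixed stream to disk
dvsink-files --host=mixer.local --port=2000 recording.dv</pre></blockquote>

<p>With the avahi patch, the idea is that the host and port arguments
should not be needed at all, as the parts find each other
automatically.</p>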
<p><a href="http://www.nuug.no/pub/video/bin/usbstick-dvswitch.img.gz">The
USB image</a> is for a 1 GB memory stick, but can be used on any
larger stick as well.</p>
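<p>To put the image on a stick, something like this should do.
Replace /dev/sdX with the device of the USB stick, and double check
it, as dd will overwrite whatever is there:</p>

<blockquote><pre>wget http://www.nuug.no/pub/video/bin/usbstick-dvswitch.img.gz
zcat usbstick-dvswitch.img.gz | dd of=/dev/sdX bs=1M</pre></blockquote>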
<title>When web browser developers make a video player...</title>
<link>../../When_web_browser_developers_make_a_video_player___.html</link>
<guid isPermaLink="true">../../When_web_browser_developers_make_a_video_player___.html</guid>
<pubDate>Sat, 17 Jan 2009 18:50:00 +0100</pubDate>
<p>As part of the work we do in <a href="http://www.nuug.no">NUUG</a>
to publish video recordings of our monthly presentations, we provide a
page with embedded video for easy access to the recording. Putting a
good set of HTML tags together to get working embedded video in all
browsers and across all operating systems is not easy. I hope this
will become easier when the &lt;video&gt; tag is implemented in all
browsers, but I am not sure. We provide the recordings in several
formats, MPEG1, Ogg Theora, H.264 and Quicktime, and want the
browser/media plugin to pick one it supports and use it to play the
recording, using whatever embed mechanism the browser understands.
There are at least four different tags to use for this, the new HTML5
&lt;video&gt; tag, the &lt;object&gt; tag, the &lt;embed&gt; tag and
the &lt;applet&gt; tag. All of these take a lot of options, and
finding the best options is a major challenge.</p>
<p>I just tested the experimental Opera browser available from <a
href="http://labs.opera.com">labs.opera.com</a>, to see how it handled
a &lt;video&gt; tag with a few video sources and no extra attributes.
I was not very impressed. The browser starts by fetching a picture
from the video stream. Not sure if it is the first frame, but it is
definitely very early in the recording. So far, so good. Next,
instead of streaming the 76 MiB video file, it starts to download all
of it, but does not start to play the video. This means I have to wait
for several minutes for the download to finish. When the download
is done, the playing of the video does not start! Waiting for the
download, but not getting to see the video? Some testing later, I
discovered that I have to add the controls="true" attribute to be able
to get a play button to press to start the video. Adding
autoplay="true" did not help. I sure hope this is a misfeature of the
test version of Opera, and that future implementations of the
&lt;video&gt; tag will stream recordings by default, or at least start
playing when the download is done.</p>
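<p>For reference, this is roughly the markup I have been experimenting
with. The file names are made up, and I am not yet sure which
combination of attributes will behave best across browsers:</p>

<blockquote><pre>&lt;video controls="true" autoplay="true"&gt;
  &lt;source src="recording.ogv" type="video/ogg"&gt;
  &lt;source src="recording.mp4" type="video/mp4"&gt;
  Your browser does not handle the &lt;video&gt; tag.
&lt;/video&gt;</pre></blockquote>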
<p>The test page I used (since changed to add more attributes) is
<a href="http://www.nuug.no/aktiviteter/20090113-foredrag-om-foredrag/">available
from the NUUG site</a>. I will have to test it with the new Firefox.</p>

<p>In the test process, I discovered a missing feature. I was unable
to find a way to get the URL of the playing video out of Opera, so I
am not quite sure it picked the Ogg Theora version of the video. I
sure hope it was using the announced Ogg Theora support. :)</p>
<title>Using bar codes at a computing center</title>
<link>../../Using_bar_codes_at_a_computing_center.html</link>
<guid isPermaLink="true">../../Using_bar_codes_at_a_computing_center.html</guid>
<pubDate>Fri, 20 Feb 2009 08:50:00 +0100</pubDate>
<p>At work at the University of Oslo, we have several hundred computers
in our computing center. This gives us a challenge in tracking the
location and cabling of the computers when they are added, moved and
removed. Sometimes the location register is not updated when a
computer is inserted or moved, and we then have to search the room for
the "missing" computer.</p>

<p>In the last issue of Linux Journal, I came across a project,
<a href="http://www.libdmtx.org/">libdmtx</a>, to write and read bar
code blocks as defined in
<a href="http://en.wikipedia.org/wiki/Data_Matrix">The Data Matrix
Standard</a>. These are bar codes that can be read with a normal
digital camera, for example the one on a cell phone, and several such
bar codes can be read by libdmtx from one picture. The bar code
standard allows up to 2 KiB to be written in the tag. There is another
project with
<a href="http://www.terryburton.co.uk/barcodewriter/">a bar code
writer written in PostScript</a> capable of creating such bar codes,
but this was the first time I found a tool to read these bar codes.</p>

<p>It occurred to me that this could be used to tag and track the
machines in our computing center. If both racks and computers are
tagged this way, we can use a picture of the rack and all its
computers to detect the rack location of any computer in that rack.
If we do this regularly for the entire room, we will find all
locations, and can detect movements and removals.</p>

<p>I decided to test if this would work in practice, and picked a
random rack and tagged all the machines with their names. Next, I
took pictures with my digital camera, and gave the dmtxread program
these JPEG pictures to see how many tags it could read. This worked
fairly well. If the pictures were well focused and not taken from the
side, all tags in the image could be read. Because of limited space
between the racks, I was unable to get a good picture of the entire
rack, but could without problem read all tags from a picture covering
about half the rack. I had to limit the search time used by dmtxread
to 60000 ms to make sure it terminated in a reasonable time frame.</p>
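<p>The commands involved are roughly like these. The machine name is
made up, and the -m option is the one I believe dmtxread uses for the
millisecond limit, so check the manual page if it does not match:</p>

<blockquote><pre># create a Data Matrix tag with the machine name, to print and glue on
echo -n "tjener-42.example.org" | dmtxwrite -o tjener-42.png

# read all tags found in a photo of the rack, giving up after 60 seconds
dmtxread -m 60000 rack-photo.jpg</pre></blockquote>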
<p>My conclusion is that this could work, and we should probably look
at adjusting our computer tagging procedures to use bar codes for
easier automatic tracking of computers.</p>
<title>Checking server hardware support status for Dell, HP and IBM servers</title>
<link>../../Checking_server_hardware_support_status_for_Dell__HP_and_IBM_servers.html</link>
<guid isPermaLink="true">../../Checking_server_hardware_support_status_for_Dell__HP_and_IBM_servers.html</guid>
<pubDate>Sat, 28 Feb 2009 23:50:00 +0100</pubDate>
<p>At work, we have a few hundred Linux servers, and with that amount
of hardware it is important to keep track of when the hardware support
contract expires for each server. We have a machine (and service)
register, which until recently did not contain much useful information
besides the machine room location and contact information for the
system owner of each machine. To make it easier for us to track
support contract status, I've recently spent time on extending the
machine register to include information about when the support
contract expires, and to tag machines with expired contracts to make
it easy to get a list of such machines. I extended a Perl script
already being used to import information about machines into the
register, to also do some screen scraping off the sites of Dell, HP
and IBM (the majority of our machines are from these vendors), and
automatically check the support status for the relevant machines.
This makes the support status information easily available and I hope
it will make it easier for the computer owner to know when to get new
hardware or renew the support contract. The result of this work
documented that 27% of the machines in the registry are without a
support contract, and made it very easy to find them. 27% might seem
like a lot, but I see it more as the case of us using machines a bit
longer than the 3 years a normal support contract lasts, to have test
machines and a platform for less important services. After all, the
machines without a contract are working fine at the moment and the
lack of contract is only a problem if any of them break down. When
that happens, we can either fix it using spare parts from other
machines or move the service to another old machine.</p>
<p>I believe the code for screen scraping the Dell site was originally
written by Trond Hasle Amundsen, and later adjusted by me and Morten
Werner Forsbring. The HP scraping was written by me after reading a
nice article in ;login: about how to use WWW::Mechanize, and the IBM
scraping was written by me based on the Dell code. I know the HTML
parsing could be done using nice libraries, but I did not want to
introduce more dependencies. This is the current incarnation:</p>
<blockquote><pre>sub get_support_info {
    my ($machine, $model, $serial, $productnumber) = @_;

    if ( $model =~ m/^Dell / ) {
        # fetch website from Dell support
        my $url = "http://support.euro.dell.com/support/topics/topic.aspx/emea/shared/support/my_systems_info/no/details?c=no&amp;cs=nodhs1&amp;l=no&amp;s=dhs&amp;ServiceTag=$serial";
        my $webpage = get($url);
        return undef unless ($webpage);

        my @lines = split(/\n/, $webpage);
        foreach my $line (@lines) {
            next unless ($line =~ m/Beskrivelse/);
            $line =~ s/&lt;[^>]+?>/;/gm;
            $line =~ s/^.+?;(Beskrivelse;)/$1/;

            my @f = split(/\;/, $line);
            my $lastend = "";
            while ($f[3] eq "DELL") {
                my ($type, $startstr, $endstr, $days) = @f[0, 5, 7, 10];

                my $start = POSIX::strftime("%Y-%m-%d",
                                            localtime(str2time($startstr)));
                my $end = POSIX::strftime("%Y-%m-%d",
                                          localtime(str2time($endstr)));
                $str .= "$type $start -> $end ";

                $lastend = $end if ($end gt $lastend);

            my $today = POSIX::strftime("%Y-%m-%d", localtime(time));
            tag_machine_unsupported($machine)
                if ($lastend lt $today);
    } elsif ( $model =~ m/^HP / ) {
        my $mech = WWW::Mechanize->new();
        'http://www1.itrc.hp.com/service/ewarranty/warrantyInput.do';
        'BODServiceID' => 'NA',
        'RegisteredPurchaseDate' => '',
        'country' => 'NO',
        'productNumber' => $productnumber,
        'serialNumber1' => $serial,
        $mech->submit_form( form_number => 2,
                            fields => $fields );
        # Next step is screen scraping
        my $content = $mech->content();

        $content =~ s/&lt;[^>]+?>/;/gm;
        $content =~ s/\s+/ /gm;
        $content =~ s/;\s*;/;;/gm;
        $content =~ s/;[\s;]+/;/gm;

        my $today = POSIX::strftime("%Y-%m-%d", localtime(time));

        while ($content =~ m/;Warranty Type;/) {
            my ($type, $status, $startstr, $stopstr) = $content =~
                m/;Warranty Type;([^;]+);.+?;Status;(\w+);Start Date;([^;]+);End Date;([^;]+);/;
            $content =~ s/^.+?;Warranty Type;//;
            my $start = POSIX::strftime("%Y-%m-%d",
                                        localtime(str2time($startstr)));
            my $end = POSIX::strftime("%Y-%m-%d",
                                      localtime(str2time($stopstr)));

            $str .= "$type ($status) $start -> $end ";

            tag_machine_unsupported($machine)
    } elsif ( $model =~ m/^IBM / ) {
        # This code ignores extended support contracts.
        my ($producttype) = $model =~ m/.*-\[(.{4}).+\]-/;
        if ($producttype &amp;&amp; $serial) {
            get("http://www-947.ibm.com/systems/support/supportsite.wss/warranty?action=warranty&amp;brandind=5000008&amp;Submit=Submit&amp;type=$producttype&amp;serial=$serial");

            $content =~ s/&lt;[^>]+?>/;/gm;
            $content =~ s/\s+/ /gm;
            $content =~ s/;\s*;/;;/gm;
            $content =~ s/;[\s;]+/;/gm;

            $content =~ s/^.+?;Warranty status;//;
            my ($status, $end) = $content =~ m/;Warranty status;([^;]+)\s*;Expiration date;(\S+) ;/;

            $str .= "($status) -> $end ";

            my $today = POSIX::strftime("%Y-%m-%d", localtime(time));
            tag_machine_unsupported($machine)</pre></blockquote>
<p>Here are some examples of how to use the function, using fake
serial numbers. The information passed in as arguments is fetched
from dmidecode.</p>
<blockquote><pre>print get_support_info("hp.host", "HP ProLiant BL460c G1", "1234567890",
                       "447707-B21");
print get_support_info("dell.host", "Dell Inc. PowerEdge 2950", "1234567");
print get_support_info("ibm.host", "IBM eserver xSeries 345 -[867061X]-",
                       "1234567");</pre></blockquote>
<p>I would recommend this approach for tracking support contracts for
everyone with more than a few computers to administer. :)</p>
<p>Update 2009-03-06: The IBM page does not include extended support
contracts, so it is useless in that case. The original Dell code did
not handle extended support contracts either, but has been updated to
handle them.</p>
<title>Time for new LDAP schemas replacing RFC 2307?</title>
<link>../../Time_for_new__LDAP_schemas_replacing_RFC_2307_.html</link>
<guid isPermaLink="true">../../Time_for_new__LDAP_schemas_replacing_RFC_2307_.html</guid>
<pubDate>Sun, 29 Mar 2009 20:30:00 +0200</pubDate>
<p>The state of standardized LDAP schemas on Linux is far from
optimal. There is RFC 2307, documenting one way to store NIS maps in
LDAP, and a modified version of this normally called RFC 2307bis, with
some modifications to be compatible with Active Directory. The RFC
specification handles the content of a lot of system databases, but
does not handle DNS zones and DHCP configuration.</p>

<p>In <a href="http://www.skolelinux.org/">Debian Edu/Skolelinux</a>,
we would like to store information about users, SMB clients/hosts,
filegroups, netgroups (users and hosts), DHCP and DNS configuration,
and LTSP configuration in LDAP. These objects have a lot in common,
but with the current LDAP schemas it is not possible to have one
object per entity. For example, one needs to have at least three LDAP
objects for a given computer: one with the SMB related stuff, one with
DNS information and another with DHCP information. The schemas
provided for DNS and DHCP are impossible to combine into one LDAP
object. In addition, it is impossible to implement quick queries for
netgroup membership, because of the way NIS triples are implemented.
It just does not scale. I believe it is time for a few RFC
specifications to clean up this mess.</p>
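<p>To illustrate the netgroup problem: with the RFC 2307 schema,
membership is stored as nisNetgroupTriple strings, so answering the
question "which netgroups is this host a member of" means matching
inside the triple strings of every netgroup object. A rough sketch,
where the base DN, group name and host name are made up:</p>

<blockquote><pre># list the triples of one netgroup
ldapsearch -x -b ou=netgroup,dc=skole,dc=example '(cn=server-hosts)' nisNetgroupTriple

# finding every netgroup a given host is part of requires substring
# matching inside the triples, which does not scale on the server side
ldapsearch -x -b ou=netgroup,dc=skole,dc=example '(nisNetgroupTriple=*tjener*)' cn</pre></blockquote>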
<p>I would like to have one LDAP object representing each computer in
the network, and this object can then keep the SMB (i.e. host key),
DHCP (MAC address/name) and DNS (name/IP address) settings in one
place. It needs to be efficiently stored to make sure it scales well.</p>

<p>I would also like to have a quick way to map from a user or
computer to the netgroups this user or computer is a member of.</p>

<p>Active Directory has done a better job than Unix heads like myself
in this regard, and the Unix side needs to catch up. Time to start a
new IETF working group?</p>
<title>Returning from Skolelinux developer gathering</title>
<link>../../Returning_from_Skolelinux_developer_gathering.html</link>
<guid isPermaLink="true">../../Returning_from_Skolelinux_developer_gathering.html</guid>
<pubDate>Sun, 29 Mar 2009 21:00:00 +0200</pubDate>
<p>I'm sitting on the train going home from this weekend's Debian
Edu/Skolelinux development gathering. I got a bit done tuning the
desktop, and looked into the dynamic service location protocol
implementation avahi. It looks like it could be useful for us. Almost
30 people participated, and I believe it was a great environment to
get to know the Skolelinux system. Walter Bender, involved in the
development of the Sugar educational platform, presented his stuff and
also helped me improve my OLPC installation. He also showed me that
his Turtle Art application can be used in standalone mode, and we
agreed that I would help getting it packaged for Debian. As a
standalone application it would be great for Debian Edu. We also
tried to get the video conferencing working with two OLPCs, but that
proved to be too hard for us. The application seems to need more work
before it is ready for me. I look forward to getting home and
relaxing.</p>
<title>Standardize on protocols and formats, not vendors and applications</title>
<link>../../Standardize_on_protocols_and_formats__not_vendors_and_applications.html</link>
<guid isPermaLink="true">../../Standardize_on_protocols_and_formats__not_vendors_and_applications.html</guid>
<pubDate>Mon, 30 Mar 2009 11:50:00 +0200</pubDate>
<p>Where I work at the University of Oslo, one decision stands out as a
very good one for forming a long lived computer infrastructure. It is
the simple one, lost by many in today's computer industry: Standardize
on open network protocols and open exchange/storage formats, not
applications. Applications come and go, while protocols and files tend
to stay, and thus one wants to make it easy to change application and
vendor, while avoiding conversion costs and locking users to a specific
platform or application.</p>

<p>This approach makes it possible to replace the client applications
independently of the server applications. One can even allow users to
use several different applications as long as they handle the selected
protocol and format. In the normal case, only one client application
is recommended and users only get help if they choose to use this
application, but those that want to deviate from the easy path are not
blocked from doing so.</p>

<p>It also allows us to replace the server side without forcing the
users to replace their applications, and thus allows us to select the
best server implementation at any moment, when scale and resource
requirements change.</p>

<p>I strongly recommend standardizing on open network protocols and
open formats, but I would never recommend standardizing on a single
application that does not use open network protocols or open formats.</p>
<title>Recording video from cron using VLC</title>
<link>../../Recording_video_from_cron_using_VLC.html</link>
<guid isPermaLink="true">../../Recording_video_from_cron_using_VLC.html</guid>
<pubDate>Sun, 5 Apr 2009 10:00:00 +0200</pubDate>
<p>One thing I have wanted to figure out for a long time is how to
run vlc from cron to record video streams from the net. The
task is trivial with mplayer, but I do not really trust the security
of mplayer (it crashes too often on strange input), and thus prefer
vlc. I finally found a way to do it today. I spent an hour or so
searching the web for recipes and reading the documentation. The
hardest part was to get rid of the GUI window, but after finding the
dummy interface, the command line finally presented itself:</p>
<blockquote><pre>URL=http://www.ping.uio.no/video/rms-oslo_2009.ogg
SAVEFILE=rms-oslo_2009.ogg
DISPLAY= vlc -q $URL \
  --sout="#duplicate{dst=std{access=file,url='$SAVEFILE'},dst=nodisplay}" \
  --intf=dummy</pre></blockquote>
<p>The command streams the URL and stores it in SAVEFILE by
duplicating the output stream to "nodisplay" and the file, using the
dummy interface. The dummy interface and the nodisplay output make
sure no X interface is needed.</p>

<p>The cron job then needs to start this job with the appropriate URL
and file name to save, sleep for the duration wanted, and then kill
the vlc process with SIGTERM. Here is a complete script
<tt>vlc-record</tt> to use from <tt>at</tt> or <tt>cron</tt>:</p>
<blockquote><pre>#!/bin/sh
URL="$1"
SAVEFILE="$2"
DURATION="$3"
DISPLAY= vlc -q "$URL" \
  --sout="#duplicate{dst=std{access=file,url='$SAVEFILE'},dst=nodisplay}" \
  --intf=dummy &lt; /dev/null > /dev/null 2>&amp;1 &amp;
pid=$!
sleep "$DURATION"
kill $pid
wait $pid</pre></blockquote>
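<p>A one-off recording can then be scheduled with at, or a regular one
from crontab. The file name and duration here are just examples:</p>

<blockquote><pre>echo "vlc-record http://www.ping.uio.no/video/rms-oslo_2009.ogg /tmp/rms.ogg 7200" \
  | at 19:45</pre></blockquote>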
<title>No patch is not better than a useless patch</title>
<link>../../No_patch_is_not_better_than_a_useless_patch.html</link>
<guid isPermaLink="true">../../No_patch_is_not_better_than_a_useless_patch.html</guid>
<pubDate>Tue, 28 Apr 2009 09:30:00 +0200</pubDate>
<p>Julien Blache
<a href="http://blog.technologeek.org/2009/04/12/214">claims that no
patch is better than a useless patch</a>. I completely disagree, as a
patch allows one to discuss a concrete and proposed solution, and also
proves that the issue at hand is important enough for someone to spend
time on fixing it. Having no patch provides none of these positive
properties.</p>
<title>Two projects that have improved the quality of free software a lot</title>
<link>../../Two_projects_that_have_improved_the_quality_of_free_software_a_lot.html</link>
<guid isPermaLink="true">../../Two_projects_that_have_improved_the_quality_of_free_software_a_lot.html</guid>
<pubDate>Sat, 2 May 2009 15:00:00 +0200</pubDate>
<p>There are two software projects that have had a huge influence on
the quality of free software, and I wanted to mention both in case
someone does not yet know them.</p>

<p>The first one is <a href="http://valgrind.org/">valgrind</a>, a
tool to detect and expose errors in the memory handling of programs.
It is easy to use; all one needs to do is to run 'valgrind program',
and it will report any problems on stdout. It is even better if the
program includes debug information. With debug information, it is able
to report the source file name and line number where the problem
occurs. It can report things like 'reading past memory block in file
X line N, the memory block was allocated in file Y, line M', and
'using uninitialised value in control logic'. This tool has made it
trivial to investigate reproducible crash bugs in programs, and has
reduced the number of this kind of bugs in free software a lot.</p>
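<p>A typical session looks something like this, with a made-up program
name. The -g flag is what makes it possible for valgrind to report
file names and line numbers:</p>

<blockquote><pre>gcc -g -o myprog myprog.c
valgrind --leak-check=full ./myprog</pre></blockquote>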
<p>The second one is
<a href="http://en.wikipedia.org/wiki/Coverity">Coverity</a> which is
a source code checker. It is able to process the source of a program
and find problems in the logic without running the program. It
started out as the Stanford Checker and became well known when it was
used to find bugs in the Linux kernel. It is now a commercial tool
and the company behind it is running
<a href="http://www.scan.coverity.com/">a community service</a> for the
free software community, where a lot of free software projects get
their source checked for free. Several thousand defects have been
found and fixed so far. It can find errors like 'lock L taken in file
X line N is never released if exiting in line M', or 'the code in file
Y lines O to P can never be executed'. The projects included in the
community service project have managed to get rid of a lot of
reliability problems thanks to Coverity.</p>

<p>I believe tools like this, that are able to automatically find
errors in the source, are vital to improve the quality of software and
make sure we can get rid of the crashing and failing software we are
surrounded by today.</p>
<title>Debian boots quicker and quicker</title>
<link>../../Debian_boots_quicker_and_quicker.html</link>
<guid isPermaLink="true">../../Debian_boots_quicker_and_quicker.html</guid>
<pubDate>Wed, 24 Jun 2009 21:40:00 +0200</pubDate>
<p>I spent Monday and Tuesday this week in London with a lot of the
people involved in the boot system on Debian and Ubuntu, to see if we
could find more ways to speed up the boot system. This was an Ubuntu
<a href="https://wiki.ubuntu.com/FoundationsTeam/BootPerformance/DebianUbuntuSprint">developer
gathering</a>. It was quite productive. We also discussed the future
of boot systems, and ways to handle the increasing number of boot
issues introduced by the Linux kernel becoming more and more
asynchronous and event based. The Ubuntu approach using udev and
upstart might be a good way forward. Time will show.</p>
<p>Anyway, there are a few ways at the moment to speed up the boot
process in Debian. All of these should be applied to get a quick
boot:</p>

<ul>

<li>Use dash as /bin/sh.</li>

<li>Disable the init.d/hwclock*.sh scripts and make sure the hardware
clock is in UTC.</li>

<li>Install and activate the insserv package to enable
<a href="http://wiki.debian.org/LSBInitScripts/DependencyBasedBoot">dependency
based boot sequencing</a>, and enable concurrent booting.</li>

</ul>
<p>These points are based on the Google summer of code work done by
<a href="http://initscripts-ng.alioth.debian.org/soc2006-bootsystem/">Carlos
Villegas</a>.</p>
<p>Support for makefile-style concurrency during boot was uploaded to
unstable yesterday. When we tested it, we were able to cut 6 seconds
from the boot sequence. It depends on very correct dependency
declarations in all init.d scripts, so I expect us to find edge cases
where the dependencies in some scripts are slightly wrong when we
start using this.</p>
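<p>If I remember the pieces correctly, enabling this on an unstable
machine is roughly a matter of the following. The CONCURRENCY setting
is the one I believe startpar uses, so verify against the sysv-rc and
insserv documentation before trusting it:</p>

<blockquote><pre># enable dependency based boot sequencing
aptitude install insserv
insserv

# ask the boot scripts to run independent init.d scripts concurrently
echo 'CONCURRENCY=makefile' >> /etc/default/rcS</pre></blockquote>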
<p>On our IRC channel for this effort, #pkg-sysvinit, a new idea was
introduced by Raphael Geissert today, one that could affect the
startup speed as well. Instead of starting some scripts concurrently
from rcS.d/ and another set of scripts from rc2.d/, it would be
possible to run all of them in the same process. A quick way to test
this would be to enable insserv and run 'mv /etc/rc2.d/S* /etc/rcS.d/;
insserv'. Will need to test if that works. :)</p>
<title>Taking over sysvinit development</title>
<link>../../Taking_over_sysvinit_development.html</link>
<guid isPermaLink="true">../../Taking_over_sysvinit_development.html</guid>
<pubDate>Wed, 22 Jul 2009 23:00:00 +0200</pubDate>
<p>After several years of frustration with the lack of activity from
the existing sysvinit upstream developer, I decided a few weeks ago to
take over the package and become the new upstream. The number of
patches to track for the Debian package was becoming a burden, and the
lack of synchronization between the distributions made it hard to keep
the package up to date.</p>

<p>On the new sysvinit team are the SuSe maintainer Dr. Werner Fink,
and my Debian co-maintainer Kel Modderman. About 10 days ago, I made
a new upstream tarball with version number 2.87dsf (for Debian, SuSe
and Fedora), based on the patches currently in use in these
distributions. We Debian maintainers plan to move to this tarball as
the new upstream as soon as we find time to do the merge. Since the
new tarball was created, we agreed with Werner at SuSe to make a new
upstream project at
<a href="http://savannah.nongnu.org/">Savannah</a>, and continue
development there. The project is registered and currently waiting
for approval by the Savannah administrators, and as soon as it is
approved, we will import the old versions from svn and continue
working on the future release.</p>

<p>It is a bit ironic that this is done now, when some of the involved
distributions are moving to upstart as a sysvinit replacement.</p>
<title>Debian has switched to dependency based boot sequencing</title>
<link>../../Debian_has_switched_to_dependency_based_boot_sequencing.html</link>
<guid isPermaLink="true">../../Debian_has_switched_to_dependency_based_boot_sequencing.html</guid>
<pubDate>Mon, 27 Jul 2009 23:50:00 +0200</pubDate>
<p>Since this evening, with the upload of sysvinit version 2.87dsf-2,
and the upload of insserv version 1.12.0-10 yesterday, Debian unstable
has been migrated to using dependency based boot sequencing. This
concludes work I and others have been doing for the last three days.
It feels great to see this finally part of the default Debian
installation. Now we just need to weed out the last few problems that
are bound to show up, to get everything ready for Squeeze.</p>

<p>The next step is migrating /sbin/init from sysvinit to upstart, and
fixing the more fundamental problem of handling the event based,
non-predictable kernel in the early boot.</p>
<title>ISO still hope to fix OOXML</title>
<link>../../ISO_still_hope_to_fix_OOXML.html</link>
<guid isPermaLink="true">../../ISO_still_hope_to_fix_OOXML.html</guid>
<pubDate>Sat, 8 Aug 2009 14:00:00 +0200</pubDate>
<p>According to <a
href="http://twerner.blogspot.com/2009/08/defects-of-office-open-xml.html">a
blog post from Torsten Werner</a>, the current defect report for ISO
29500 (ISO OOXML) is 809 pages. His interesting point is that the
defect report is 71 pages longer than the full ODF 1.1 specification.
Personally, I find it more interesting that ISO still believes ISO
OOXML can be fixed in ISO. I believe it is broken beyond repair, and I
completely lack any trust in ISO being able to get anywhere close to
solving the problems. I was part of the Norwegian committee involved
in the OOXML fast track process, and was not impressed with Standard
Norway and ISO in how they handled it.</p>

<p>These days I focus on ODF instead, which seems like a specification
with a future ahead of it. We are working in NUUG to organise an ODF
seminar this autumn.</p>
<title>Relative popularity of document formats (MS Office vs. ODF)</title>
<link>../../Relative_popularity_of_document_formats__MS_Office_vs__ODF_.html</link>
<guid isPermaLink="true">../../Relative_popularity_of_document_formats__MS_Office_vs__ODF_.html</guid>
<pubDate>Wed, 12 Aug 2009 15:50:00 +0200</pubDate>
<p>Just for fun, I did a search right now on Google for a few ODF and
MS Office based file formats (not to be mistaken for ISO or ECMA
OOXML), to get an idea of their relative usage. I searched using
'filetype:odt' and equivalent terms, and got these results:</p>
<table>
<tr><th>Type</th><th>ODF</th><th>MS Office</th></tr>
<tr><td>Text</td>         <td>odt: 282000</td> <td>docx: 308000</td></tr>
<tr><td>Presentation</td> <td>odp: 75600</td>  <td>pptx: 183000</td></tr>
<tr><td>Spreadsheet</td>  <td>ods: 26500</td>  <td>xlsx: 145000</td></tr>
</table>
<p>Next, I added a 'site:no' limit to get the numbers for Norway, and
got these numbers:</p>
<table>
<tr><th>Type</th><th>ODF</th><th>MS Office</th></tr>
<tr><td>Text</td>         <td>odt: 2480</td> <td>docx: 4460</td></tr>
<tr><td>Presentation</td> <td>odp: 299</td>  <td>pptx: 741</td></tr>
<tr><td>Spreadsheet</td>  <td>ods: 187</td>  <td>xlsx: 372</td></tr>
</table>
<p>I wonder how these numbers change over time.</p>
<p>I am aware of Google returning different results and numbers based
on where the search is done, so I guess these numbers will differ if
the search is conducted in another country. Because of this, I did the
same search from a machine in California, USA, a few minutes after the
search done from a machine here in Norway.</p>
<table>
<tr><th>Type</th><th>ODF</th><th>MS Office</th></tr>
<tr><td>Text</td>         <td>odt: 129000</td> <td>docx: 308000</td></tr>
<tr><td>Presentation</td> <td>odp: 44200</td>  <td>pptx: 93900</td></tr>
<tr><td>Spreadsheet</td>  <td>ods: 26500</td>  <td>xlsx: 82400</td></tr>
</table>
<p>And with 'site:no':</p>
<table>
<tr><th>Type</th><th>ODF</th><th>MS Office</th></tr>
<tr><td>Text</td>         <td>odt: 2480</td> <td>docx: 3410</td></tr>
<tr><td>Presentation</td> <td>odp: 175</td>  <td>pptx: 604</td></tr>
<tr><td>Spreadsheet</td>  <td>ods: 186</td>  <td>xlsx: 296</td></tr>
</table>
<p>Interesting difference, not sure what to conclude from these
numbers.</p>
<title>Automatic Munin and Nagios configuration</title>
<link>../../Automatic_Munin_and_Nagios_configuration.html</link>
<guid isPermaLink="true">../../Automatic_Munin_and_Nagios_configuration.html</guid>
<pubDate>Wed, 27 Jan 2010 15:15:00 +0100</pubDate>
<p>One of the new features in the next Debian/Lenny based release of
Debian Edu/Skolelinux, which is scheduled for release in the next few
days, is automatic configuration of the service monitoring system
Nagios. The previous release had automatic configuration of trend
analysis using Munin, and this Lenny based release takes that a step
further.</p>
<p>When installing a Debian Edu Main-server, it is automatically
configured as a Munin and Nagios server. In addition, it is
configured to be a server for the
<a href="http://wiki.debian.org/DebianEdu/HowTo/SiteSummary">SiteSummary
system</a> I have written for use in Debian Edu. The SiteSummary
system is inspired by a system used by the University of Oslo where I
work. In short, the system provides a centralised collector of
information about the computers on the network, and a client on each
computer submitting information to this collector. This allows for
automatic information on which packages are installed on each machine,
which kernel the machines are using, what kind of configuration the
packages got, etc. This also allows us to automatically generate Munin
and Nagios configuration.</p>
<p>All computers reporting to the sitesummary collector with the
munin-node package installed are automatically enabled as Munin
clients, and graphs from the statistics collected from each machine
show up automatically on http://www/munin/ on the Main-server.</p>

<p>All non-laptop computers reporting to the sitesummary collector are
automatically monitored for network presence (ping and any network
services detected). In addition, all computers (also laptops) with
the nagios-nrpe-server package installed and configured the way
sitesummary would configure it, are monitored for full disks, software
RAID status, free swap and other checks that need to run locally on
the machine.</p>
<p>The result is that the administrator on a school using Debian Edu
based on Lenny will be able to check the health of his installation
with one look at the Nagios settings, without having to spend any time
keeping the Nagios configuration up-to-date.</p>
<p>The only configuration one needs to do to get Nagios up and running
is to set the password used to get access via HTTP. The system
administrator needs to run "<tt>htpasswd /etc/nagios3/htpasswd.users
nagiosadmin</tt>" to create a nagiosadmin user and set a password for
it to be able to log into the Nagios web pages. After that,
everything is taken care of.</p>
<title>Debian Edu / Skolelinux based on Lenny released, work continues</title>
<link>../../Debian_Edu___Skolelinux_based_on_Lenny_released__work_continues.html</link>
<guid isPermaLink="true">../../Debian_Edu___Skolelinux_based_on_Lenny_released__work_continues.html</guid>
<pubDate>Thu, 11 Feb 2010 17:15:00 +0100</pubDate>
<p>On Tuesday, the Debian/Lenny based version of
<a href="http://www.skolelinux.org/">Skolelinux</a> was finally
shipped. This was a major leap forward for the project, and I am very
pleased that we finally got the release wrapped up. Work on the first
point release starts immediately, as we plan to get that one out a
month after the major release, to include all fixes for bugs we found
and fixed too late in the release process to include last Tuesday.</p>

<p>Perhaps it is even time for some partying?</p>
<p>After this first point release, my plan is to focus again on the
next major release, based on Squeeze. We will try to get as many of
the fixes we need into the official Debian packages before the freeze,
and have just a few weeks or months to make it happen.</p>
<title>After 6 years of waiting, the Xreset.d feature is implemented</title>
<link>../../After_6_years_of_waiting__the_Xreset_d_feature_is_implemented.html</link>
<guid isPermaLink="true">../../After_6_years_of_waiting__the_Xreset_d_feature_is_implemented.html</guid>
<pubDate>Sat, 6 Mar 2010 18:15:00 +0100</pubDate>
<p>6 years ago, as part of the Debian Edu development I am involved
in, I asked for a hook in the kdm and gdm setup to run scripts as root
when the user logs out. A bug was submitted against the xfree86-common
package in 2004 (<a href="http://bugs.debian.org/230422">#230422</a>),
and revisited every time Debian Edu was working on a new release.
Today, this finally paid off.</p>

<p>The framework for this feature was today committed to the git
repository for the xorg package, and the git repository for xdm has
been updated to use this framework. Next on my agenda is to make sure
kdm and gdm also add code to use this framework.</p>

<p>In Debian Edu, we want the ability to run commands as root when the
user logs out, to get rid of runaway processes and do general cleanup
after a user. With this framework in place, we finally can do that in
a generic way that works with all display managers using this
framework. My goal is to get all display managers in Debian to use it,
similar to how they use the Xsession.d framework today.</p>
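<p>As an example of what this allows, a cleanup script dropped into
the new directory could look something like the sketch below. The path
and the use of $USER are assumptions on my side, mirroring the
Xsession.d conventions, so check the final framework before copying
it:</p>

<blockquote><pre>#!/bin/sh
# hypothetical /etc/X11/Xreset.d/50debian-edu-cleanup
# run as root when a user logs out: get rid of runaway user processes
if [ -n "$USER" ] &amp;&amp; [ "$USER" != root ]; then
    pkill -u "$USER" || true
fi</pre></blockquote>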
<title>Kerberos for Debian Edu/Squeeze?</title>
<link>../../Kerberos_for_Debian_Edu_Squeeze_.html</link>
<guid isPermaLink="true">../../Kerberos_for_Debian_Edu_Squeeze_.html</guid>
<pubDate>Wed, 14 Apr 2010 17:20:00 +0200</pubDate>
<p><a href="http://www.nuug.no/aktiviteter/20100413-kerberos/">Yesterday's
NUUG presentation</a> about Kerberos was inspiring, and reminded me
about the need to start using Kerberos in Skolelinux. Setting up a
Kerberos server seems to be straightforward, and if we get this in
place a long time before the Squeeze version of Debian freezes, we
have a chance to migrate Skolelinux away from NFSv3 for the home
directories, and over to an architecture where the infrastructure does
not have to trust IP addresses and machines, and instead can trust
users and cryptographic keys.</p>
<p>A challenge will be integration and administration. Is there a
Kerberos implementation for Debian where one can control the
administration access in Kerberos using LDAP groups? Without it, the
school administration will have to maintain access control using flat
files on the main server, which gives a huge potential for errors.</p>
<p>A related question I would like answered is how well Kerberos and
pam-ccreds (offline password check) work together. Anyone know?</p>

<p>The next step will be to use Kerberos for access control in Lwat
and Nagios. I have no idea how much work that will be to implement.
We would also need to document how to integrate with Windows AD, as
such a shared network will require two Kerberos realms that need to
cooperate to work properly.</p>
<p>I believe a good start would be to start using Kerberos on the
skolelinux.no machines, and this way get ourselves experience with
configuration and integration. A natural starting point would be
setting up ldap.skolelinux.no as the Kerberos server, and migrating
the rest of the machines from PAM via LDAP to PAM via Kerberos one at
a time.</p>

<p>If you would like to contribute to get this working in Skolelinux,
I recommend you see the video recording from yesterday's NUUG
presentation, and start using Kerberos at home. The video should show
up in a few days.</p>
<title>Great book: "Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future"</title>
<link>../../Great_book___Content__Selected_Essays_on_Technology__Creativity__Copyright__and_the_Future_of_the_Future_.html</link>
<guid isPermaLink="true">../../Great_book___Content__Selected_Essays_on_Technology__Creativity__Copyright__and_the_Future_of_the_Future_.html</guid>
<pubDate>Mon, 19 Apr 2010 17:10:00 +0200</pubDate>
<p>The last few weeks I have had the pleasure of reading a
thought-provoking collection of essays by Cory Doctorow, on topics
touching copyright, virtual worlds, the future of man when the
conscious mind can be duplicated into a computer, and many more. The
book titled "Content: Selected Essays on Technology, Creativity,
Copyright, and the Future of the Future" is available with few
restrictions on the web, for example from
<a href="http://craphound.com/content/">his own site</a>. I read the
copy from <a href="http://www.feedbooks.com/book/2883">feedbooks</a>
using <a href="http://www.fbreader.org/">fbreader</a> and my N810. I
strongly recommend this book.</p>