-<p>At work, we have a few hundred Linux servers, and with that amount
-of hardware it is important to keep track of when the hardware support
-contract expires for each server. We have a machine (and service)
-register, which until recently did not contain much useful information
-besides the machine room location and the contact information for the
-system owner of each machine. To make it easier for us to track
-support contract status, I've recently spent time extending the
-machine register to include information about when the support
-contracts expire, and to tag machines with expired contracts to make
-it easy to get a list of such machines. I extended a Perl script
-already being used to import information about machines into the
-register, to also do some screen scraping off the sites of Dell, HP
-and IBM (the majority of our machines are from these vendors), and
-automatically check the support status for the relevant machines.
-This makes the support status information easily available, and I
-hope it will make it easier for the computer owner to know when to
-get new hardware or renew the support contract. The result of this
-work documented that 27% of the machines in the register are without
-a support contract, and made it very easy to find them. 27% might
-seem like a lot, but I see it more as a case of us using machines a
-bit longer than the 3 years a normal support contract lasts, to have
-test machines and a platform for less important services. After all,
-the machines without a contract are working fine at the moment, and
-the lack of a contract is only a problem if any of them breaks down.
-When that happens, we can either fix it using spare parts from other
-machines or move the service to another old
-machine.</p>
-
-<p>I believe the code for screen scraping the Dell site was originally
-written by Trond Hasle Amundsen, and later adjusted by me and Morten
-Werner Forsbring. The HP scraping was written by me after reading a
-nice article in ;login: about how to use WWW::Mechanize, and the IBM
-scraping was written by me based on the Dell code. I know the HTML
-parsing could be done using nice libraries, but I did not want to
-introduce more dependencies. This is the current incarnation:</p>
-
-<pre>
-use LWP::Simple;
-use POSIX;
-use WWW::Mechanize;
-use Date::Parse;
-[...]
-sub get_support_info {
-    my ($machine, $model, $serial, $productnumber) = @_;
-    my $str;
-
-    if ( $model =~ m/^Dell / ) {
-        # fetch website from Dell support
-        my $url = "http://support.euro.dell.com/support/topics/topic.aspx/emea/shared/support/my_systems_info/no/details?c=no&amp;cs=nodhs1&amp;l=no&amp;s=dhs&amp;ServiceTag=$serial";
-        my $webpage = get($url);
-        return undef unless ($webpage);
-
-        my $daysleft = -1;
-        my @lines = split(/\n/, $webpage);
-        foreach my $line (@lines) {
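-            # Only the line containing "Beskrivelse" (Norwegian for
-            # "description") holds the support contract table.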
-            next unless ($line =~ m/Beskrivelse/);
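-            # Replace the HTML tags with ";" and strip everything
-            # before the table, leaving a ;-separated list of fields.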
-            $line =~ s/&lt;[^>]+?>/;/gm;
-            $line =~ s/^.+?;(Beskrivelse;)/$1/;
-
-            my @f = split(/\;/, $line);
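-            # Skip the first 13 fields, presumably the table headers
-            # on the page layout at the time this was written.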
-            @f = @f[13 .. $#f];
-            my $lastend = "";
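-            # Each contract row starts with the vendor name "DELL";
-            # walk the rows and record the latest end date.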
-            while ($f[3] eq "DELL") {
-                my ($type, $startstr, $endstr, $days) = @f[0, 5, 7, 10];
-
-                my $start = POSIX::strftime("%Y-%m-%d",
-                                            localtime(str2time($startstr)));
-                my $end = POSIX::strftime("%Y-%m-%d",
-                                          localtime(str2time($endstr)));
-                $str .= "$type $start -> $end ";
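-                # Advance to the next contract row, 14 fields further on.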
-                @f = @f[14 .. $#f];
-                $lastend = $end if ($end gt $lastend);
-            }
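-            # Tag the machine as unsupported if the last contract
-            # ended before today.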
-            my $today = POSIX::strftime("%Y-%m-%d", localtime(time));
-            tag_machine_unsupported($machine)
-                if ($lastend lt $today);
-        }
-    } elsif ( $model =~ m/^HP / ) {
-        my $mech = WWW::Mechanize->new();
-        my $url =
-            'http://www1.itrc.hp.com/service/ewarranty/warrantyInput.do';
-        $mech->get($url);
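-        # Fill in the warranty lookup form with the serial and
-        # product number for this machine.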
-        my $fields = {
-            'BODServiceID' => 'NA',
-            'RegisteredPurchaseDate' => '',
-            'country' => 'NO',
-            'productNumber' => $productnumber,
-            'serialNumber1' => $serial,
-        };
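-        # Submit the second form on the page, which appears to be
-        # the warranty lookup form.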
-        $mech->submit_form( form_number => 2,
-                            fields => $fields );
-        # Next step is screen scraping
-        my $content = $mech->content();
+<p><a href="http://www.aftenposten.no/kul_und/litteratur/article3042382.ece">Aftenposten
+reports</a> that
+<a href="http://www.nb.no/aktuelt/50_000_norske_boeker_gratis_tilgjengelig_paa_nett_helt_lovlig">the
+National Library and Kopinor have made an agreement</a> that allows
+older books to be made digitally available from the National Library,
+in return for Kopinor receiving 56 øre for each page published. The
+selection is a bit odd: the 1790s, 1890s and 1990s. I find it absurd
+if Kopinor is to be paid for the publication of books that are no
+longer protected by copyright. I assume here that more than 90 years
+have passed since the authors of the books published 1790-1799 died,
+so these books have entered the public domain and anyone may copy as
+much as they want from them without violating copyright law. Kopinor
+has no say in the use of such works. I hope I have misunderstood.
+<a href="http://www.nb.no/aktuelt/no_er_vi_i_gang_med_aa_digitalisere_samlingane_vaare_og_formidle_digitalt">A
+statement from the National Library in 2007</a> suggests that texts
+in the public domain do not require an agreement with Kopinor.</p>
+
+<p>Another problem is that the books are only published as image
+files, which means that search engines will not find them when one
+searches for fragments of their text. There is thus a risk that they
+end up sitting there in such a way that people who use Google never
+find them.</p>
+
+<p>I would rather have seen the National Library make good on its
+April Fools' joke, and publish the books that enter the public domain
+as they do so.</p>