X-Git-Url: http://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/638b8150d70f9aea7ddc8327de8210f721aeade8..f8d3af1589dece3d5ee766752359559f7a52bcc0:/blog/index.rss

diff --git a/blog/index.rss b/blog/index.rss
index ee5d01ca6c..1d9e0d03bc 100644
--- a/blog/index.rss
+++ b/blog/index.rss
@@ -6,6 +6,120 @@ http://people.skolelinux.org/pere/blog/
+
+ H, Ap, Frp and Venstre go for DNA collection from the entire population
+ http://people.skolelinux.org/pere/blog/H__Ap__Frp_og_Venstre_g_r_for_DNA_innsamling_av_hele_befolkingen.html
+ http://people.skolelinux.org/pere/blog/H__Ap__Frp_og_Venstre_g_r_for_DNA_innsamling_av_hele_befolkingen.html
+ Wed, 14 Mar 2018 14:15:00 +0100
+ <p>Yesterday brought yet another argument for staying away from the
+Norwegian health service. A majority in the Storting, consisting of
+Høyre, Arbeiderpartiet, Fremskrittspartiet and Venstre, announced that
+they want to collect and store DNA from the entire population of
+Norway for all time. The change concerns the blood samples collected
+from newborns in Norway, so it will take a while before the whole
+population is covered, but that is where we end up given enough time.
+Today there is almost one hundred percent participation in the
+screening performed shortly after birth, based on the blood sample in
+question, to detect certain congenital diseases. Today the sample is
+stored for up to six years.
+<a href="https://www.stortinget.no/no/Saker-og-publikasjoner/Publikasjoner/Innstillinger/Stortinget/2017-2018/inns-201718-182l/?all=true">The
+majority recommendation from the Storting</a> is to remove the time
+limit, in the belief that storage without a time limit will not affect
+participation in the screening.</p>
+
+<p>Datatilsynet, the Norwegian Data Protection Authority, has not
+exactly applauded the proposal:</p>
+
+<p><blockquote>
+
+ <p>«Datatilsynet believes the proposal does not sufficiently make
+ visible the ethical and privacy challenges that must be discussed
+ before establishing a national biobank with blood samples from the
+ entire population.»</p>
+
+</blockquote></p>
+
+<p>There are several stories of collected biological material being
+used for purposes other than those it was collected for, and the story
+of <a href="https://www.aftenposten.no/norge/i/Ql0WR/Na-ma-Folkehelsa-slette-uskyldiges-DNA-info">the
+Norwegian Institute of Public Health storing collected biological
+material and DNA information on behalf of the police (Kripos) in
+violation of the law</a> shows that one cannot rely on laws and
+intentions to protect those affected against misuse of such private
+and personal information.</p>
+
+<p>It is worth noting that, after a law change a while back, the
+collected blood samples can be used for research without consent from
+the person concerned (or the parents, in the case of children), unless
+a form has been submitted opting out of research without consent.
The opt-out form is available from
+<a href="https://www.fhi.no/arkiv/publikasjoner/for-pasienter-skjema-for-reservasjo/">the
+web pages of the Norwegian Institute of Public Health</a>, and
+regardless of this case I warmly recommend that everyone submit the
+form, to document how many people do not find it acceptable to drop
+the consent requirement.</p>
+
+<p>In addition, one should demand the destruction of all biological
+material collected about oneself, to reduce the possible negative
+consequences the day the material goes astray or is used without
+consent, but as far as I know no system for this exists today.</p>
+
+
+
+ First rough draft Norwegian and Spanish edition of the book Made with Creative Commons
+ http://people.skolelinux.org/pere/blog/First_rough_draft_Norwegian_and_Spanish_edition_of_the_book_Made_with_Creative_Commons.html
+ http://people.skolelinux.org/pere/blog/First_rough_draft_Norwegian_and_Spanish_edition_of_the_book_Made_with_Creative_Commons.html
+ Tue, 13 Mar 2018 13:00:00 +0100
+ <p>I am working on publishing yet another book related to Creative
+Commons. This time it is a book filled with interviews and stories
+from people around the globe who make a living using Creative
+Commons.</p>
+
+<p>Yesterday, after many months of hard work by several volunteer
+translators, the first draft of a Norwegian Bokmål edition of the book
+<a href="https://madewith.cc">Made with Creative Commons from 2017</a>
+was complete. The Spanish translation is also complete, while the
+Dutch, Polish, German and Ukrainian editions need a lot of work. Get
+in touch if you want to help make those happen, or would like to
+translate the book into your mother tongue.</p>
+
+<p>The whole book project started when
+<a href="http://gwolf.org/node/4102">Gunnar Wolf announced</a> that he
+was going to make a Spanish edition of the book.
I noticed, and
+offered some input on how to make a book, based on my experience with
+translating the
+<a href="https://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Free
+Culture</a> and
+<a href="https://debian-handbook.info/get/#norwegian">The Debian
+Administrator's Handbook</a> books into Norwegian Bokmål. To make a
+long story short, we ended up working on a Bokmål edition, and the
+first rough translation is now complete, thanks to the hard work of
+Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first
+round of proof reading is almost done, and only the second and third
+rounds remain. We will also need to translate the 14 figures and
+create a book cover. Once that is done, we will publish the book on
+paper, as well as in PDF, ePub and possibly Mobi formats.</p>
+
+<p>The book itself originates as a manuscript on Google Docs, is
+downloaded from there as ODT, and is converted to Markdown using
+pandoc. The Markdown is modified by a script before it is converted
+to DocBook using pandoc. The DocBook is modified again using a script
+before it is used to create a Gettext POT file for the translators.
+Each translated PO file is then combined with the DocBook file
+mentioned earlier to create a translated DocBook file, which is
+finally passed to dblatex to create the final PDF. The end result is
+a set of editions of the manuscript: one English, and one for each of
+the translations.</p>
+
+<p>The translation is conducted using
+<a href="https://hosted.weblate.org/projects/madewithcc/translation/">the
+Weblate web-based translation system</a>. Please have a look there,
+and get in touch if you would like to help out with the proof
+reading.
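The ODT → Markdown → DocBook → POT/PO → PDF pipeline described above can be sketched roughly as follows. Only pandoc, Gettext and dblatex are named in the post; the choice of po4a for the POT/PO round trip, the fixup script names and the file names are my assumptions, not details taken from the project.

```python
# Sketch of the book build pipeline, recorded as argv-style commands.
# File names, the fixup scripts and the po4a steps are hypothetical;
# only pandoc and dblatex are named in the text above.

def pipeline_commands(lang="nb"):
    """Return the pipeline as an ordered list of commands."""
    return [
        # ODT exported from Google Docs -> Markdown
        ["pandoc", "-f", "odt", "-t", "markdown", "-o", "mwcc.md", "mwcc.odt"],
        ["./fixup-markdown", "mwcc.md"],       # hypothetical cleanup script
        # Markdown -> DocBook
        ["pandoc", "-f", "markdown", "-t", "docbook", "-o", "mwcc.xml", "mwcc.md"],
        ["./fixup-docbook", "mwcc.xml"],       # hypothetical cleanup script
        # DocBook -> Gettext POT for the translators (one way to do it)
        ["po4a-gettextize", "-f", "docbook", "-m", "mwcc.xml", "-p", "mwcc.pot"],
        # translated PO + original DocBook -> translated DocBook
        ["po4a-translate", "-f", "docbook", "-m", "mwcc.xml",
         "-p", lang + ".po", "-l", "mwcc." + lang + ".xml"],
        # translated DocBook -> final PDF
        ["dblatex", "mwcc." + lang + ".xml"],
    ]
```

Each entry could be handed to subprocess.run() in order; keeping the steps as data rather than running them directly makes the sequence easy to inspect and reuse for each language edition.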
:)</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+
+

 Debian used in the subway info screens in Oslo, Norway
 http://people.skolelinux.org/pere/blog/Debian_used_in_the_subway_info_screens_in_Oslo__Norway.html
@@ -800,123 +914,5 @@ illegally.</p>
-
- Cura, the nice 3D print slicer, is now in Debian Unstable
- http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html
- http://people.skolelinux.org/pere/blog/Cura__the_nice_3D_print_slicer__is_now_in_Debian_Unstable.html
- Sun, 17 Dec 2017 07:00:00 +0100
- <p>After several months of working and waiting, I am happy to report
-that the nice and user-friendly 3D printer slicer software Cura just
-entered Debian Unstable. It consists of six packages:
-<a href="https://tracker.debian.org/pkg/cura">cura</a>,
-<a href="https://tracker.debian.org/pkg/cura-engine">cura-engine</a>,
-<a href="https://tracker.debian.org/pkg/libarcus">libarcus</a>,
-<a href="https://tracker.debian.org/pkg/fdm-materials">fdm-materials</a>,
-<a href="https://tracker.debian.org/pkg/libsavitar">libsavitar</a> and
-<a href="https://tracker.debian.org/pkg/uranium">uranium</a>. The last
-two, uranium and cura, entered Unstable yesterday. This should make
-it easier for Debian users to print on at least the Ultimaker class of
-3D printers. My nearest 3D printer is an Ultimaker 2+, so it will
-make life easier for at least me. :)</p>
-
-<p>The work to make this happen was done by Gregor Riepl, and I was
-happy to assist him by sponsoring the packages. With the introduction
-of Cura, Debian is up to three 3D printer slicers at your service:
-Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D
-printer, give it a go.
:)</p>
-
-<p>The 3D printer software is maintained by the 3D printer Debian
-team, flocking together on the
-<a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/3dprinter-general">3dprinter-general</a>
-mailing list and the
-<a href="irc://irc.debian.org/#debian-3dprinting">#debian-3dprinting</a>
-IRC channel.</p>
-
-<p>The next step for Cura in Debian is to update the cura package to
-version 3.0.3, and then to update the entire set of packages to
-version 3.1.0, which showed up in the last few days.</p>
-
-
-
- Idea for finding all public domain movies in the USA
- http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html
- http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html
- Wed, 13 Dec 2017 10:15:00 +0100
- <p>While looking at
-<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
-of the copyright renewal entries for movies published in the USA</a>,
-an idea occurred to me. The number of renewals per year is so small
-that it should be fairly quick to transcribe them all and add
-references to the corresponding IMDB title IDs. This would give the
-(presumably) complete list of movies published 28 years earlier that
-did _not_ enter the public domain in the transcribed year. By
-fetching the list of USA movies published 28 years earlier and
-subtracting the movies with renewals, we should be left with the
-movies registered in IMDB that are now in the public domain. For the
-year 1955 (the one I have looked at the most), the total number of
-pages to transcribe is 21. For the 28 years from 1950 to 1978, it
-should be in the range of 500-600 pages.
It is just a few days of work, and spread among a
-small group of people it should be doable in a few weeks of spare
-time.</p>
-
-<p>A typical copyright renewal entry looks like this (the first one
-listed for 1955):</p>
-
-<p><blockquote>
- ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
- Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH);
- 10Jun55; R151558.
-</blockquote></p>
-
-<p>The movie title as well as the registration and renewal dates are
-easy enough to locate with a program (split on the first comma and
-look for DDmmmYY). The rest of the text is not required to find the
-movie in IMDB, but is useful to confirm that the correct movie has
-been found. I am not quite sure what the L and R numbers mean, but I
-suspect they are reference numbers into the archive of the US
-Copyright Office.</p>
-
-<p>Tracking down the equivalent IMDB title ID will probably be a
-manual task, but given the year it is fairly easy to search for the
-movie title using for example
-<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
-Using this search, I find that the equivalent IMDB title ID for the
-first renewal entry from 1955 is
-<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p>
-
-<p>I suspect the best way to do this would be to make a specialised
-web service that makes it easy for contributors to transcribe entries
-and track down IMDB title IDs. In the web service, once an entry is
-transcribed, the title and year could be extracted from the text and
-a search conducted in IMDB, letting the user pick the equivalent IMDB
-title ID right away. By spreading the work among volunteers, it would
-also be possible to have at least two people transcribe the same
-entries, in order to discover any typos introduced. But I will need
-help to make this happen, as I lack the spare time to do all of this
-on my own. If you would like to help, please get in touch.
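The parsing rule sketched above (title before the first comma, dates in DDmmmYY form), together with the subtraction idea, might look like this. The regular expression and the stand-in 1927 title list are my own illustration, not project code.

```python
import re

def parse_renewal(entry):
    """Title is everything before the first comma; dates appear in
    DDmmmYY form (day, three-letter month, two-digit year)."""
    title = entry.split(",", 1)[0].strip()
    dates = ["".join(m)
             for m in re.findall(r"\b(\d{1,2})([A-Z][a-z]{2})(\d{2})\b", entry)]
    return title, dates

# The renewal entry quoted above, joined into one string.
entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")
title, dates = parse_renewal(entry)
# title == "ADAM AND EVIL"; dates == ["17Aug27", "10Jun55"]
# (registration and renewal dates; the L/R numbers are ignored, and the
# word-boundary anchors keep them from matching as dates)

# Movies published 28 years before the renewal year, minus the renewed
# titles, leaves the public domain candidates. The published-title set
# here is a stand-in, not real 1927 data.
published_1927 = {"ADAM AND EVIL", "SOME OTHER 1927 MOVIE"}
renewed = {parse_renewal(e)[0] for e in [entry]}
public_domain_candidates = published_1927 - renewed
```

A transcription web service could run something like parse_renewal over each submitted entry to prefill the title and year for the IMDB search.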
Perhaps you can
-draft a web service for crowdsourcing the task?</p>
-
-<p>Note that Project Gutenberg already has some
-<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed
-copies of the US Copyright Office renewal protocols</a>, but I have
-not been able to find any film renewals there, so I suspect they only
-have copies of renewals for written works. I have not found any
-transcribed versions of movie renewals anywhere so far. Perhaps they
-exist somewhere?</p>
-
-<p>I would love to figure out methods for finding all the public
-domain works in other countries too, but that is a lot harder. At
-least for Norway and Great Britain, such work involves tracking down
-the people who took part in making a movie and figuring out when they
-died. It is hard enough to figure out who was part of making a movie,
-and I do not know how to automate such a procedure without a registry
-of every person involved in making movies and their year of death.</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-
-