X-Git-Url: https://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/2d109c7ac958437eb3ee3b2f9fcfe7e38b820fcb..c9cd2ef81ddebb5affcd3449627617c73f4cbced:/blog/index.rss

diff --git a/blog/index.rss b/blog/index.rss
index ee5d01ca6c..ae2847a8ad 100644
--- a/blog/index.rss
+++ b/blog/index.rss
@@ -6,6 +6,63 @@ http://people.skolelinux.org/pere/blog/
+
+ First rough draft Norwegian and Spanish edition of the book Made with Creative Commons
+ http://people.skolelinux.org/pere/blog/First_rough_draft_Norwegian_and_Spanish_edition_of_the_book_Made_with_Creative_Commons.html
+ http://people.skolelinux.org/pere/blog/First_rough_draft_Norwegian_and_Spanish_edition_of_the_book_Made_with_Creative_Commons.html
+ Tue, 13 Mar 2018 13:00:00 +0100
+ <p>I am working on publishing yet another book related to Creative
+Commons. This time it is a book filled with interviews and stories
+from people around the globe who make a living using Creative
+Commons.</p>
+
+<p>Yesterday, after many months of hard work by several volunteer
+translators, the first draft of a Norwegian Bokmål edition of the book
+<a href="https://madewith.cc">Made with Creative Commons from 2017</a>
+was complete. The Spanish translation is also complete, while the
+Dutch, Polish, German and Ukrainian editions still need a lot of work.
+Get in touch if you want to help make those happen, or would like to
+translate the book into your mother tongue.</p>
+
+<p>The whole book project started when
+<a href="http://gwolf.org/node/4102">Gunnar Wolf announced</a> that he
+was going to make a Spanish edition of the book. I noticed, and
+offered some input on how to make a book, based on my experience with
+translating the
+<a href="https://www.lulu.com/shop/lawrence-lessig/fri-kultur/paperback/product-22441576.html">Free
+Culture</a> and
+<a href="https://debian-handbook.info/get/#norwegian">The Debian
+Administrator's Handbook</a> books to Norwegian Bokmål.
+To make a long story short, we ended up working on a Bokmål edition,
+and now the first rough translation is complete, thanks to the hard
+work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The
+first round of proofreading is almost done, and only the second and
+third rounds remain. We will also need to translate the 14 figures
+and create a book cover. Once that is done we will publish the book
+on paper, as well as in PDF, ePub and possibly Mobi formats.</p>
+
+<p>The book itself originates as a manuscript on Google Docs, is
+downloaded as ODT from there and converted to Markdown using pandoc.
+The Markdown is modified by a script before it is converted to DocBook
+using pandoc. The DocBook is modified again using a script before it
+is used to create a Gettext POT file for the translators. The
+translated PO file is then combined with the earlier mentioned DocBook
+file to create a translated DocBook file, which finally is given to
+dblatex to create the final PDF. The end result is a set of editions
+of the manuscript, one in English and one for each of the
+translations.</p>
+
+<p>The translation is conducted using
+<a href="https://hosted.weblate.org/projects/madewithcc/translation/">the
+Weblate web-based translation system</a>. Please have a look there
+and get in touch if you would like to help out with the
+proofreading.
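The conversion chain described above can be sketched as a small driver script. The tool names (pandoc, po4a-gettextize, po4a-translate, dblatex) come from the post itself, but the file names, the fix-up steps and the exact flags below are illustrative guesses; the real scripts used for the book are not shown here.

```python
from typing import List

def pipeline_commands(lang: str) -> List[List[str]]:
    """Return the command sequence turning the ODT manuscript into a
    translated PDF for the given language code (e.g. "nb")."""
    return [
        # Google Docs manuscript, downloaded as ODT, converted to Markdown
        ["pandoc", "-f", "odt", "-t", "markdown", "mwcc.odt", "-o", "mwcc.md"],
        # (a site-specific script modifies the Markdown here)
        # Markdown converted to DocBook
        ["pandoc", "-f", "markdown", "-t", "docbook", "-s", "mwcc.md",
         "-o", "mwcc.xml"],
        # (another script modifies the DocBook here)
        # DocBook used to create a Gettext POT file for the translators
        ["po4a-gettextize", "-f", "docbook", "-m", "mwcc.xml",
         "-p", "mwcc.pot"],
        # translated PO combined with the DocBook to get a translated DocBook
        ["po4a-translate", "-f", "docbook", "-m", "mwcc.xml",
         "-p", lang + ".po", "-l", "mwcc." + lang + ".xml", "-k", "0"],
        # translated DocBook handed to dblatex for the final PDF
        ["dblatex", "mwcc." + lang + ".xml", "-o", "mwcc." + lang + ".pdf"],
    ]

if __name__ == "__main__":
    for cmd in pipeline_commands("nb"):
        print(" ".join(cmd))
```

Running the same sequence once per PO file yields one edition of the manuscript per translation, matching the English original.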
+:)</p>
+
+<p>As usual, if you use Bitcoin and want to show your support of my
+activities, please send Bitcoin donations to my address
+<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
+
+
+
 Debian used in the subway info screens in Oslo, Norway
 http://people.skolelinux.org/pere/blog/Debian_used_in_the_subway_info_screens_in_Oslo__Norway.html
@@ -838,85 +895,5 @@ version 3.0.3 and then update the entire set of packages to version
-
- Idea for finding all public domain movies in the USA
- http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html
- http://people.skolelinux.org/pere/blog/Idea_for_finding_all_public_domain_movies_in_the_USA.html
- Wed, 13 Dec 2017 10:15:00 +0100
- <p>While looking at
-<a href="http://onlinebooks.library.upenn.edu/cce/">the scanned copies
-of the copyright renewal entries for movies published in the USA</a>,
-an idea occurred to me. The number of renewals per year is so small
-that it should be fairly quick to transcribe them all and add
-references to the corresponding IMDB title IDs. This would give the
-(presumably) complete list of movies published 28 years earlier that
-did _not_ enter the public domain for the transcribed year. By
-fetching the list of USA movies published 28 years earlier and
-subtracting the movies with renewals, we should be left with movies
-registered in IMDB that are now in the public domain. For the year
-1955 (which is the one I have looked at the most), the total number
-of pages to transcribe is 21. For the 28 years from 1950 to 1978, it
-should be in the range of 500-600 pages. It is just a few days of
-work, and spread among a small group of people it should be doable in
-a few weeks of spare time.</p>
-
-<p>A typical copyright renewal entry looks like this (the first one
-listed for 1955):</p>
-
-<p><blockquote>
-  ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer
-  Distribution Corp. (c) 17Aug27; L24293.
-  Loew's Incorporated (PWH); 10Jun55; R151558.
-</blockquote></p>
-
-<p>The movie title as well as the registration and renewal dates are
-easy enough to locate by a program (split on the first comma and look
-for DDmmmYY). The rest of the text is not required to find the movie
-in IMDB, but is useful to confirm the correct movie is found. I am
-not quite sure what the L and R numbers mean, but suspect they are
-reference numbers into the archive of the US Copyright Office.</p>
-
-<p>Tracking down the equivalent IMDB title ID is probably going to be
-a manual task, but given the year it is fairly easy to search for the
-movie title using for example
-<a href="http://www.imdb.com/find?q=adam+and+evil+1927&s=all">http://www.imdb.com/find?q=adam+and+evil+1927&s=all</a>.
-Using this search, I find that the equivalent IMDB title ID for the
-first renewal entry from 1955 is
-<a href="http://www.imdb.com/title/tt0017588/">http://www.imdb.com/title/tt0017588/</a>.</p>
-
-<p>I suspect the best way to do this would be to make a specialised
-web service making it easy for contributors to transcribe entries and
-track down IMDB title IDs. In the web service, once an entry is
-transcribed, the title and year could be extracted from the text and
-a search in IMDB conducted, letting the user pick the equivalent IMDB
-title ID right away. By spreading the work among volunteers, it
-would also be possible to have at least two people transcribe the
-same entries, making it possible to discover any typos introduced.
-But I will need help to make this happen, as I lack the spare time to
-do all of this on my own. If you would like to help, please get in
-touch.
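As a rough illustration, the split-on-the-first-comma and DDmmmYY heuristics mentioned above could be sketched in Python like this. The function name and the exact regular expression are my own assumptions; real protocol entries will surely need more special cases.

```python
import re

# Dates in the renewal entries use the DDmmmYY form, e.g. 17Aug27 or
# 10Jun55; the title runs up to the first comma of the entry.
DATE_RE = re.compile(r"\b(\d{1,2}[A-Z][a-z]{2}\d{2})\b")

def parse_renewal(entry: str):
    """Return (title, dates) extracted from one transcribed entry."""
    title = entry.split(",", 1)[0].strip()
    dates = DATE_RE.findall(entry)
    return title, dates

entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated "
         "(PWH); 10Jun55; R151558.")
title, dates = parse_renewal(entry)
print(title)  # ADAM AND EVIL
print(dates)  # ['17Aug27', '10Jun55']
```

The title and the registration year ("adam and evil" and 1927 here) are then enough to run the IMDB search described above, while the renewal date confirms which protocol year the entry belongs to.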
-Perhaps you can draft a web service for crowdsourcing the task?</p>
-
-<p>Note, Project Gutenberg already has some
-<a href="http://www.gutenberg.org/ebooks/search/?query=copyright+office+renewals">transcribed
-copies of the US Copyright Office renewal protocols</a>, but I have
-not been able to find any film renewals there, so I suspect they only
-have copies of renewals for written works. I have not been able to
-find any transcribed versions of movie renewals so far. Perhaps they
-exist somewhere?</p>
-
-<p>I would love to figure out methods for finding all the public
-domain works in other countries too, but it is a lot harder. At
-least for Norway and Great Britain, such work involves tracking down
-the people involved in making the movie and figuring out when they
-died. It is hard enough to figure out who was part of making a
-movie, and I do not know how to automate such a procedure without a
-registry of every person involved in making movies and their year of
-death.</p>
-
-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-
-