This weekend a hair-raising proposal arrived from the Lysne II committee, appointed by the Norwegian Ministry of Defence. The committee was asked to evaluate the wish list of the Norwegian Intelligence Service (e-tjenesten), and has proposed that the service be allowed to tap all Internet traffic crossing Norway's borders. Few are aware that this means the intelligence service gets access to email sent to most of the political parties in the Norwegian Parliament. The governing party Høyre (@hoyre.no), the supporting parties Venstre (@venstre.no) and Kristelig Folkeparti (@krf.no), as well as Sosialistisk Venstreparti (@sv.no) and Miljøpartiet De Grønne (@mdg.no), have all chosen to receive their email via foreign services. This means that if the proposal is passed, the content of email sent to anyone with such an address will be made available to the intelligence service. Venstre, Sosialistisk Venstreparti and Miljøpartiet De Grønne receive their email at Google, Kristelig Folkeparti receives its email at Microsoft, and Høyre receives its email at Comendo, with reception in Denmark and Ireland. Only Arbeiderpartiet and Fremskrittspartiet have chosen to receive their email in Norway, at Intility AS and Telecomputing AS respectively.
The consequence is that email in and out of these political organisations, to and from party members and party officials, will be made available to the intelligence service for analysis and sorting. I suspect the knowledge gathered this way would be useful to anyone wanting to know which arguments resonate with the public when trying to influence the members of Parliament.
Using MX lookups in DNS for each email domain, whois lookups on the resulting IP addresses, and traceroute to see whether the traffic passes through other countries, anyone can confirm that email sent to the parties mentioned will be made available to the Norwegian Intelligence Service if the proposal is passed. The handy web service ipinfo.io can also give an idea of where in the world an IP address belongs.

On the positive side, the proposal should motivate even more people to take steps to use Tor and encrypted communication tools to talk to their loved ones, to make sure their privacy is protected. I myself use, among other things, FreedomBox and Signal for this. Neither is perfect, but they already work quite well and raise the cost for those who want to invade my privacy.
By the way, the whistleblower Edward Snowden should be granted political asylum in Norway.
A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and to try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and identified a few more. The code used to extract information from the various data sources is available in a git repository, currently hosted on github.
So far I have identified 3186 unique IMDB title IDs. To gain a better understanding of the structure of the data set, I created a histogram of the year associated with each movie (typically the release year). It is interesting to notice where the peaks and dips in the graph are located, and I wonder why they are placed there. I suspect World War II caused the dip around 1940, but what caused the peak around 2010?
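Such a histogram is straightforward to build from the collected entries. Here is a minimal sketch, assuming a hypothetical list of entries where each movie carries a "year" field (the actual field names in the JSON files in the repository may differ):

```python
# Sketch: build a release-year histogram from a list of movie entries.
# The "year" field name and the sample data are assumptions for
# illustration, not the real repository format.
from collections import Counter

def year_histogram(entries):
    """Count movies per release year, skipping entries without a year."""
    return Counter(e["year"] for e in entries if e.get("year"))

sample = [{"year": 1940}, {"year": 2010}, {"year": 2010}, {"title": "no year"}]
print(year_histogram(sample))  # Counter({2010: 2, 1940: 1})
```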
I've so far identified ten sources of IMDB title IDs for movies in the public domain or with a free license. These are the statistics reported when running 'make stats' in the git repository:
  249 entries (   6 unique) with and  288 without IMDB title ID in free-movies-archive-org-butter.json
 2301 entries ( 540 unique) with and    0 without IMDB title ID in free-movies-archive-org-wikidata.json
  830 entries (  29 unique) with and    0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
 2109 entries ( 377 unique) with and    0 without IMDB title ID in free-movies-imdb-pd.json
  291 entries ( 122 unique) with and    0 without IMDB title ID in free-movies-letterboxd-pd.json
  144 entries ( 135 unique) with and    0 without IMDB title ID in free-movies-manual.json
  350 entries (   1 unique) with and  801 without IMDB title ID in free-movies-publicdomainmovies.json
    4 entries (   0 unique) with and  124 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries ( 119 unique) with and  118 without IMDB title ID in free-movies-publicdomaintorrents.json
    8 entries (   8 unique) with and  196 without IMDB title ID in free-movies-vodo.json
 3186 unique IMDB title IDs in total
The entries without an IMDB title ID are candidates for increasing the data set, but might equally well be duplicates of entries already listed with an IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID. I've seen examples of all these situations when peeking at the entries without an IMDB title ID. Based on these data sources, the lower bound for the number of movies listed in IMDB that are legal to distribute on the Internet is somewhere between 3186 and 4713.
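The arithmetic behind that range can be spelled out, using the per-source counts from the 'make stats' output:

```python
# The bounds quoted above: 3186 unique IMDB title IDs, plus up to 1527
# entries listed without an IMDB title ID (the "without" counts from the
# 'make stats' output), which may or may not be distinct IMDB movies.
without_id = [288, 0, 0, 0, 0, 0, 801, 124, 118, 196]
unique_with_id = 3186

low = unique_with_id                     # if every no-ID entry is a duplicate or not in IMDB
high = unique_with_id + sum(without_id)  # if every no-ID entry is a distinct IMDB movie
print(low, high)  # 3186 4713
```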
It would greatly improve the accuracy of this measurement if the various sources added IMDB title IDs to their metadata. I have tried to reach the people behind the various sources to ask if they are interested in doing this, without any replies so far. Perhaps you can help me get in touch with the people behind VODO, Public Domain Torrents, Public Domain Movies and Public Domain Review, to try to convince them to add more metadata to their movie entries?
Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes a link to both IMDB and The Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as Wikidata is updated.
In April we started to work on a Norwegian Bokmål edition of the "open access" book on how to set up and administrate a Debian system. Today I am happy to report that the first draft is now publicly available. You can find it on the Get the Debian Administrator's Handbook page (under Other languages). The first eight chapters have a first draft translation, and we are working on proofreading the content. If you want to help out, please start contributing via the hosted weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors. A good way to contribute is to proofread the text and update weblate if you find errors.
Our goal is still to make the Norwegian book available on paper as well as in electronic form.
If you care about how fault tolerant your storage is, you might find these articles and papers interesting. They have shaped how I think when designing a storage system.
- USENIX ;login: Redundancy Does Not Imply Fault Tolerance. Analysis of Distributed Storage Reactions to Single Errors and Corruptions by Aishwarya Ganesan, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau
- ZDNet Why RAID 5 stops working in 2009 by Robin Harris
- ZDNet Why RAID 6 stops working in 2019 by Robin Harris
- USENIX FAST'07 Failure Trends in a Large Disk Drive Population by Eduardo Pinheiro, Wolf-Dietrich Weber and Luiz André Barroso
- USENIX ;login: Data Integrity. Finding Truth in a World of Guesses and Lies by Doug Hughes
- USENIX FAST'08 An Analysis of Data Corruption in the Storage Stack by L. N. Bairavasundaram, G. R. Goodson, B. Schroeder, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau
- USENIX FAST'07 Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you? by B. Schroeder and G. A. Gibson
- USENIX ;login: Are Disks the Dominant Contributor for Storage Failures? A Comprehensive Study of Storage Subsystem Failure Characteristics by Weihang Jiang, Chongfeng Hu, Yuanyuan Zhou, and Arkady Kanevsky
- SIGMETRICS 2007 An analysis of latent sector errors in disk drives by L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler
Several of these research papers are based on data collected from hundreds of thousands or millions of disks, and their findings are eye-opening. The short story: simply do not implicitly trust RAID or redundant storage systems. Details matter. And unfortunately there are few options on Linux addressing all the identified issues. Both ZFS and Btrfs are doing a fairly good job, but have legal and practical issues of their own. I wonder how cluster file systems like Ceph do in this regard. After all, there is an old saying: you know you have a distributed system when the crash of a computer you have never heard of stops you from getting any work done. The same holds true if fault tolerance does not work.
Just remember, in the end it does not matter how redundant or how fault tolerant your storage is, if you do not continuously monitor its status to detect and replace failed disks.
This summer, I read a great article, "coz: This Is the Profiler You're Looking For", in USENIX ;login: about how to profile multi-threaded programs. It presented a system for profiling software by running experiments in the running program, testing how run time performance is affected by "speeding up" parts of the code to various degrees compared to a normal run. It does this by slowing down parallel threads while the "sped up" code is running and measuring how this affects processing time. The processing time is measured using probes inserted into the code, either as progress counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It can also measure unmodified code, by measuring the complete program runtime and running the program several times instead.
The project and presentation were so inspiring that I would like to get the system into Debian. I created a WNPP request for it and contacted upstream to try to make the system ready for Debian by sending patches. The build process needs to be changed a bit to avoid running 'git clone' to get dependencies, and to include in the source package the JavaScript web page used to visualize the collected profiling information. But I expect that should work out fairly soon.
The way the system works is fairly simple. To run a coz experiment on a binary with debug symbols available, start the program like this:
coz run --- program-to-run
This will create a text file, profile.coz, with the instrumentation information. To show what parts of the code affect the performance the most, use a web browser and either point it to http://plasma-umass.github.io/coz/ or use the copy from git (in the gh-pages branch). Check out this web site to have a look at several example profiling runs and get an idea of what the end result of a profile run looks like. To make the profiling more useful, include <coz.h> and insert COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the code, rebuild and run the profiler. This allows coz to do more targeted experiments.
A video published by ACM presenting the Coz profiler is available from Youtube. There is also a paper from the 25th Symposium on Operating Systems Principles available, titled "Coz: finding code that counts with causal profiling".
The source code for Coz is available from github. It will only build with clang, because it uses a C++ feature missing in GCC, but I've submitted a patch to solve this and hope it will be included in the upstream source soon.
Please get in touch if you, like me, would like to see this piece of software in Debian. I would very much like some help with the packaging effort, as I lack in-depth knowledge of how to package C++ libraries.
I was surprised today to learn that a friend in academia did not know there are easily available web services for writing LaTeX documents as a team. I thought it was common knowledge, but to make sure at least my readers are aware of it, I would like to mention these useful services for writing LaTeX documents. Some of them even provide a WYSIWYG editor to ease writing even further.
There are two commercial services available, ShareLaTeX and Overleaf. They are very easy to use: just start a new document, select which publisher to write for (i.e. which LaTeX style to use), and start writing. Note, these two have announced their intention to join forces, so soon there will only be one joint service. I've used both for different documents, and they work just fine. ShareLaTeX is free software, while the latter is not. According to an announcement from Overleaf, they plan to keep the ShareLaTeX code base maintained as free software.
But these two are not the only alternatives. Fidus Writer is another free software solution, with the source available on github. I have not used it myself. Several others can be found on the nice alternativeTo web service.

If you like Google Docs or Etherpad, but would like to write documents in LaTeX, you should check out these services. You can even host your own, if you want to. :)
As my regular readers probably remember, last year I published French and Norwegian translations of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig. A bit less known is the fact that, due to the way I created the translations, using docbook and po4a, I also recreated the English original. And because I had already created a new PDF edition, I published it too. The revenue from the books is sent to the Creative Commons Corporation. In other words, I do not earn any money from this project; I just earn the warm fuzzy feeling that the text is available to a wider audience and more people can learn why the Creative Commons is needed.
Today, just for fun, I had a look at the sales numbers over at Lulu.com, which takes care of payment, printing and shipping. Much to my surprise, the English edition is selling better than both the French and Norwegian editions, despite the fact that it has been available in English since it was first published. In total, 24 paper books were sold for USD $19.99 each between 2016-01-01 and 2016-07-31:
| Title / language | Quantity |
|---|---|
| Culture Libre / French | 3 |
| Fri kultur / Norwegian | 7 |
| Free Culture / English | 14 |
The books are available both from Lulu.com and from large book stores like Amazon and Barnes&Noble. Most of the revenue, around $10 per book, is sent to the Creative Commons project when the book is sold directly by Lulu.com. The other channels give less revenue. The summary from Lulu tells me 10 books were sold via the Amazon channel, 10 via Ingram (what is this?) and 4 directly by Lulu. And Lulu.com tells me that the revenue sent so far this year is USD $101.42. I have no idea what kind of sales numbers to expect, so I do not know if that is a good amount of sales for a 10-year-old book or not. But it makes me happy that the buyers find the book, and I hope they enjoy reading it as much as I did.
The ebook edition is available for free from Github.
If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.
Recently, I needed to automatically check the copyright status of a set of The Internet Movie Database (IMDB) entries, to figure out which of the movies they refer to can be freely distributed on the Internet. This proved to be harder than it sounds. IMDB certainly lists movies without any copyright protection, where the copyright protection has expired, or where the movie is licensed using a permissive license like one from Creative Commons. These are mixed in with copyright protected movies, and there seems to be no way to separate these classes of movies using the information in IMDB.
First I tried to look up entries manually in IMDB, Wikipedia and The Internet Archive, to get a feel for how to do this. It is hard to know for sure using these sources, but it should be possible to be reasonably confident that a movie is "out of copyright" with a few hours of work per movie. As I needed to check almost 20,000 entries, this approach was not sustainable. I simply can not work around the clock for about 6 years to check this data set.
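The back-of-the-envelope estimate goes roughly like this (the 2.5 hours per movie is my assumption, standing in for "a few hours"):

```python
# Rough check of the "around the clock for about 6 years" claim:
# almost 20,000 entries at an assumed 2.5 hours of checking each.
entries = 20_000
hours_per_movie = 2.5  # assumption; the text only says "a few hours"
years = entries * hours_per_movie / (24 * 365)
print(round(years, 1))  # 5.7
```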
I asked the people behind The Internet Archive if they could introduce a new metadata field in their metadata XML for the IMDB ID, but was told that they leave it completely to the uploaders to update the metadata. Some of the metadata entries had IMDB links in the description, but I found no way to download all the metadata files in bulk to locate those, so I put that approach aside.
In the process I noticed that several Wikipedia articles about movies had links to both IMDB and The Internet Archive, and it occurred to me that I could use the Wikidata RDF data set to locate entries with both, to at least get a lower bound on the number of movies on The Internet Archive with an IMDB ID. This is useful based on the assumption that movies distributed by The Internet Archive can be legally distributed on the Internet. With some help from the RDF community (thank you DanC), I was able to come up with this query to pass to the SPARQL interface on Wikidata:
SELECT ?work ?imdb ?ia ?when ?label
WHERE
{
  ?work wdt:P31/wdt:P279* wd:Q11424.
  ?work wdt:P345 ?imdb.
  ?work wdt:P724 ?ia.
  OPTIONAL {
        ?work wdt:P577 ?when.
        ?work rdfs:label ?label.
        FILTER(LANG(?label) = "en").
  }
}
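For the curious, here is a rough sketch of how such a query could be submitted from Python using only the standard library, and how the JSON result could be unpacked. The endpoint URL is the public Wikidata Query Service; the helper names and the User-Agent string are my own, and the network call obviously requires Internet access:

```python
# Sketch: run a SPARQL query against the Wikidata Query Service and
# pull out (IMDB ID, Internet Archive ID) pairs. Helper names are
# illustrative, not from the repository.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://query.wikidata.org/sparql"

def run_query(sparql):
    """Send a SPARQL query, return the result bindings (needs network)."""
    url = ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": sparql, "format": "json"})
    req = urllib.request.Request(url, headers={"User-Agent": "free-movie-check"})
    with urllib.request.urlopen(req) as response:
        return json.load(response)["results"]["bindings"]

def extract_ids(bindings):
    """Pull (IMDB ID, Internet Archive ID) pairs out of the JSON bindings."""
    return [(b["imdb"]["value"], b["ia"]["value"]) for b in bindings]

# Offline example of the JSON shape the endpoint returns:
sample = [{"imdb": {"value": "tt0036823"}, "ia": {"value": "FightingLady"}}]
print(extract_ids(sample))  # [('tt0036823', 'FightingLady')]
```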
If I understand the query right, for every film entry anywhere in Wikipedia, it will return the IMDB ID and The Internet Archive ID, and when the movie was released and its English title, if either or both of the latter two are available. At the moment the result set contains 2338 entries. Of course, it depends on volunteers including both correct IMDB and The Internet Archive IDs in the Wikipedia articles for the movies. It should be noted that the result will include duplicates if the movie has entries in several languages. There are some bogus entries, either because The Internet Archive ID contains a typo or because the movie is not available from The Internet Archive. I did not verify the IMDB IDs, as I am unsure how to do that automatically.
I wrote a small python script to extract the data set from Wikidata and check if the XML metadata for each movie is available from The Internet Archive, and after around 1.5 hours it produced a list of 2097 free movies and their IMDB IDs. In total, 171 entries in Wikidata lack the referred Internet Archive entry. I assume the 70 "disappearing" entries (i.e. 2338-2097-171) are duplicate entries.
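The bookkeeping for those numbers is simple enough to write down:

```python
# Accounting for the Wikidata result set: 2338 results in total,
# 2097 with a working Internet Archive entry, 171 with a broken one;
# the remainder should be duplicate entries across languages.
total, free, broken = 2338, 2097, 171
duplicates = total - free - broken
print(duplicates)  # 70
```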
This is not too bad, given that The Internet Archive reports that it contains 5331 feature films at the moment, but it also means that more than 3000 movies are missing on Wikipedia, or are missing the pair of references on Wikipedia.
I was curious about the distribution by release year, and made a little graph to show how the number of free movies is spread over the years:
[Graph: number of free movies by release year]
I expect the relative distribution of the remaining 3000 movies to +be similar.
If you want to help, and want to ensure Wikipedia can be used to cross reference The Internet Archive and The Internet Movie Database, please make sure entries like this are listed under the "External links" heading on the Wikipedia article for the movie:
* {{Internet Archive film|id=FightingLady}}
* {{IMDb title|id=0036823|title=The Fighting Lady}}
Please verify the links on the final page, to make sure you did not +introduce a typo.
Here is the complete list, if you want to correct the 171 identified Wikipedia entries with broken links to The Internet Archive: Q1140317, Q458656, Q458656, Q470560, Q743340, Q822580, Q480696, Q128761, Q1307059, Q1335091, Q1537166, Q1438334, Q1479751, Q1497200, Q1498122, Q865973, Q834269, Q841781, Q841781, Q1548193, Q499031, Q1564769, Q1585239, Q1585569, Q1624236, Q4796595, Q4853469, Q4873046, Q915016, Q4660396, Q4677708, Q4738449, Q4756096, Q4766785, Q880357, Q882066, Q882066, Q204191, Q204191, Q1194170, Q940014, Q946863, Q172837, Q573077, Q1219005, Q1219599, Q1643798, Q1656352, Q1659549, Q1660007, Q1698154, Q1737980, Q1877284, Q1199354, Q1199354, Q1199451, Q1211871, Q1212179, Q1238382, Q4906454, Q320219, Q1148649, Q645094, Q5050350, Q5166548, Q2677926, Q2698139, Q2707305, Q2740725, Q2024780, Q2117418, Q2138984, Q1127992, Q1058087, Q1070484, Q1080080, Q1090813, Q1251918, Q1254110, Q1257070, Q1257079, Q1197410, Q1198423, Q706951, Q723239, Q2079261, Q1171364, Q617858, Q5166611, Q5166611, Q324513, Q374172, Q7533269, Q970386, Q976849, Q7458614, Q5347416, Q5460005, Q5463392, Q3038555, Q5288458, Q2346516, Q5183645, Q5185497, Q5216127, Q5223127, Q5261159, Q1300759, Q5521241, Q7733434, Q7736264, Q7737032, Q7882671, Q7719427, Q7719444, Q7722575, Q2629763, Q2640346, Q2649671, Q7703851, Q7747041, Q6544949, Q6672759, Q2445896, Q12124891, Q3127044, Q2511262, Q2517672, Q2543165, Q426628, Q426628, Q12126890, Q13359969, Q13359969, Q2294295, Q2294295, Q2559509, Q2559912, Q7760469, Q6703974, Q4744, Q7766962, Q7768516, Q7769205, Q7769988, Q2946945, Q3212086, Q3212086, Q18218448, Q18218448, Q18218448, Q6909175, Q7405709, Q7416149, Q7239952, Q7317332, Q7783674, Q7783704, Q7857590, Q3372526, Q3372642, Q3372816, Q3372909, Q7959649, Q7977485, Q7992684, Q3817966, Q3821852, Q3420907, Q3429733, Q774474
Many years ago I read a classic text that made such an impression on me that I still remember it, years later, and use its arguments all the time. The text was "The Relativity of Wrong", which Isaac Asimov published in Skeptical Inquirer in 1989. It provides some perspective on how scientific results are communicated. I have long wanted to share it with people who are not fluent in English, like children and some of my older relatives, and have missed having it available in Norwegian. Two weeks ago I pulled myself together and contacted Asbjørn Dyrendal of the Skepsis association to ask whether they were interested in publishing a Norwegian edition on their blog, and when he was positive I contacted Skeptical Inquirer and asked whether it was OK with them. Within a few days we heard back from Barry Karr at The Skeptical Inquirer, who had checked with and received an OK from Robyn Asimov, representing the heirs of the Asimov family, and we got started on the translation.
The result, "Relativt feil" ("Relatively Wrong"), was published on the Skepsis blog a few minutes ago. I warmly recommend reading the text and sharing it with your friends.
To handle the translation and ensure the original and the translation stayed in sync, we used git, po4a, GNU make and Transifex. It all worked splendidly and made it easy to share the texts and work together on polishing the wording. If hosted.weblate.org had let me create new projects myself, instead of having to contact the administrator there, I would have used Weblate instead.
I find it fascinating how many of the people being locked inside the proposed border wall between the USA and Mexico support the idea. The proposal to keep Mexicans out reminds me of the propaganda twist from the East German government, which called the Berlin Wall the "Antifascist Bulwark" after erecting it, claiming that the wall was put up to keep enemies from creeping into East Germany, while it was obvious to the people locked inside that it was erected to keep them from escaping.
Do the people in the USA supporting this wall really believe it is a one way wall, only keeping people on the outside from getting in, while not keeping people on the inside from getting out?
Did you know there is a TV channel broadcasting talks from DebConf 16 across an entire country? Or that there is a TV channel broadcasting talks by or about Linus Torvalds, Tor, OpenID, Common Lisp, Civic Tech, EFF founder John Barlow, how to make 3D printer electronics, and many more fascinating topics? It works using only free software (all of it available from Github), and is administrated using a web browser and a web API.
The TV channel is the Norwegian open channel Frikanalen, and I am involved via the NUUG member association in running and developing the software for the channel. The channel is organised as a member organisation where its members can upload and broadcast what they want (think of it as Youtube for national broadcast television). Individuals can broadcast too. The time slots are handled on a first come, first served basis. Because the channel has almost no viewers and very few active members, we can experiment with TV technology without too much flak when we make mistakes. And thanks to the few active members, most of the slots on the schedule are free. I see this as an opportunity to spread knowledge about technology and free software, and have a script I run regularly to fill up all the open slots for the next few days with technology related video. The end result is a channel I like to describe as Techno TV - filled with interesting talks and presentations.
It is available on channel 50 on the Norwegian national digital TV network (RiksTV). It is also available as a multicast stream on Uninett. And finally, it is available as a WebM unicast stream from Frikanalen and NUUG. Check it out. :)
At my nearby maker space, Sonen, I heard the story that it was easier to generate gcode files for their 3D printers (Ultimaker 2+) on Windows and MacOS X than on Linux, because the software involved had to be manually compiled and set up on Linux, while premade packages worked out of the box on Windows and MacOS X. I found this annoying, as the software involved, Cura, is free software and should be trivial to get up and running on Linux if someone took the time to package it for the relevant distributions. I even found a request from 2013 for adding it to Debian, which had seen some activity over the years but never resulted in the software showing up in Debian. So a few days ago I offered my help to try to improve the situation.
Now I am very happy to see that all the packages required by a working Cura in Debian have been uploaded into Debian and are waiting in the NEW queue for the ftpmasters to have a look. You can track the progress on the status page for the 3D printer team.
The uploaded packages are a bit behind upstream, and were uploaded now to get slots in the NEW queue while we work on updating the packages to the latest upstream version.
On a related note, two competitors to Cura, which I found harder to use and was unable to configure correctly for the Ultimaker 2+ in the short time I spent on them, are already in Debian. If you are looking for 3D printer "slicers" and want something already available in Debian, check out slic3r and slic3r-prusa. The latter is a fork of the former.
Yesterday, I tried to unlock a HTC Desire HD phone, and it proved to be a slight challenge. Here is the recipe if I ever need to do it again. It all started with me wanting to try the recipe for setting up a hardened Android installation from the Tor project blog on a device I had access to. It is an old mobile phone with a broken microphone. The initial idea had been to just install CyanogenMod on it, but I did not quite find time to start on it until a few days ago.
The unlock process is supposed to be simple: (1) boot into the boot loader (press volume down and power at the same time), (2) select 'fastboot' before (3) connecting the device via USB to a Linux machine, (4) request the device identifier token by running 'fastboot oem get_identifier_token', and (5) request the device unlocking key using the HTC developer web site and unlock the phone using the key file emailed to you.
Unfortunately, this only works if you have hboot version 2.00.0029 or newer, and the device I was working on had 2.00.0027. This apparently can be easily fixed by downloading a Windows program and running it on your Windows machine, if you accept the terms Microsoft requires you to accept to use Windows - which I do not. So I had to come up with a different approach. I got a lot of help from AndyCap on #nuug, and would not have been able to get this working without him.
First I needed to extract the hboot firmware from the Windows binary for the HTC Desire HD, downloaded as 'the RUU' from HTC. For this there is a github project named unruu, using libunshield. The unshield tool did not recognise the file format, but unruu worked and extracted rom.zip, containing the new hboot firmware and a text file describing which devices it would work for.
Next, I needed to get the new firmware into the device. For this I followed some instructions available from HTC1Guru.com, and ran these commands as root on a Linux machine with Debian testing:
adb reboot-bootloader
fastboot oem rebootRUU
fastboot flash zip rom.zip
fastboot flash zip rom.zip
fastboot reboot
The flash command apparently needs to be run twice to take effect, as the first run is just preparation and the second one does the flashing. The adb command is just to get to the boot loader menu, so turning the device on while holding volume down and the power button should work too.
With the new hboot version in place I could start following the instructions on the HTC developer web site. I got the device token like this:
fastboot oem get_identifier_token 2>&1 | sed 's/(bootloader) //'
And once I got the unlock code via email, I could use it like this:
fastboot flash unlocktoken Unlock_code.bin
And with that final step in place, the phone was unlocked and I could start stuffing software of my own choosing into the device. So far I have only inserted a replacement recovery image, to wipe the phone before I start. We will see what happens next. Perhaps I should install Debian on it. :)
For a while now, I have wanted to test the Signal app, as it is said to provide end-to-end encrypted communication and several of my friends and family are already using it. As I by choice do not own a mobile phone, this proved to be harder than expected. I also wanted to have the source of the client and know that it was the code used on my machine. But yesterday I managed to get it working. I used the Github source, compared it to the source of the Signal Chrome app available from the Chrome web store, applied patches to use the production Signal servers, started the app and asked for the hidden "register without a smart phone" form. Here is the recipe for how I did it.
First, I fetched the Signal desktop source from Github, using
-git clone https://github.com/WhisperSystems/Signal-Desktop.git -- -
Next, I patched the source to use the production servers, to be able to talk to other Signal users:
cat <<EOF | patch -p0
diff -ur ./js/background.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js
--- ./js/background.js	2016-06-29 13:43:15.630344628 +0200
+++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js	2016-06-29 14:06:29.530300934 +0200
@@ -47,8 +47,8 @@
         });
     });
 
-    var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
-    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
+    var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org:4433';
+    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
     var messageReceiver;
     window.getSocketStatus = function() {
         if (messageReceiver) {
diff -ur ./js/expire.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js
--- ./js/expire.js	2016-06-29 13:43:15.630344628 +0200
+++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js	2016-06-29 14:06:29.530300934 +0200
@@ -1,6 +1,6 @@
 ;(function() {
     'use strict';
-    var BUILD_EXPIRATION = 0;
+    var BUILD_EXPIRATION = 1474492690000;
 
     window.extension = window.extension || {};
 
EOF
The first part is changing the servers, and the second is updating an expiration timestamp. This timestamp needs to be updated regularly. It is set 90 days in the future by the build process (Gruntfile.js). The value is milliseconds since 1970 (seconds since the epoch times 1000), as far as I can tell.
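If you need to refresh the timestamp by hand later, a one-liner like this computes a fresh value 90 days ahead. This is my own sketch, not part of the Signal build system:

```shell
# Print a BUILD_EXPIRATION value 90 days in the future, in milliseconds
# since the Unix epoch (matching what the Gruntfile.js build step produces).
echo $(( ($(date +%s) + 90 * 24 * 60 * 60) * 1000 ))
```

Paste the resulting number into js/expire.js when the old value runs out.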
Based on a tip and good help from the #nuug IRC channel, I wrote a script to launch Signal in Chromium.
#!/bin/sh
cd $(dirname $0)
mkdir -p userdata
exec chromium \
  --proxy-server="socks://localhost:9050" \
  --user-data-dir=`pwd`/userdata --load-and-launch-app=`pwd`
The script starts the app and configures Chromium to use the Tor SOCKS5 proxy, to make sure those controlling the Signal servers (today Amazon and Whisper Systems), as well as those listening on the lines, will have a harder time locating my laptop based on the Signal connections if they use the source IP address.
When the script starts, one needs to follow the instructions under "Standalone Registration" in the CONTRIBUTING.md file in the git repository. I right-clicked on the Signal window to bring up the Chromium debugging tool, visited the 'Console' tab and wrote 'extension.install("standalone")' at the console prompt to get the registration form. Then I entered my land line phone number and pressed 'Call'. Five seconds later the phone rang and a robot voice repeated the verification code three times. After entering the number into the verification code field in the form, I could start using Signal from my laptop.
As far as I can tell, the Signal app will leak who is talking to whom, and thus who knows whom, to those controlling the central server, but such leakage is hard to avoid with a centrally controlled server setup. It is something to keep in mind when using Signal - the content of your chats is harder to intercept, but the metadata exposing your contact network is available to people you do not know. So it is better than many options, but not great. And sadly the usage is connected to my land line, thus allowing those controlling the server to associate it with my home and person. I would prefer it if only those I knew could tell who I was on Signal. There are options avoiding such information leakage, but most of my friends are not using them, so I am stuck with Signal for now.
Every mobile phone announces its existence over radio to the nearby mobile cell towers. And this radio chatter is available to anyone with a radio receiver capable of receiving it. Details about the mobile phones are of course collected with very good accuracy by the phone companies, but that is not the topic of this blog post. The mobile phone radio chatter makes it possible to figure out when a cell phone is nearby, as it includes the SIM card ID (IMSI). By paying attention over time, one can see when a phone arrives and when it leaves an area. I believe it would be nice to make this information more available to the general public, to make more people aware of how their phones are announcing their whereabouts to anyone that cares to listen.
I am very happy to report that we managed to get something visualizing this information up and running for Oslo Skaperfestival 2017 (Oslo Makers Festival), taking place today and tomorrow at Deichmanske library. The solution is based on the simple recipe for listening to GSM chatter I posted a few days ago, and will show up at the stand of Åpen Sone from the Computer Science department of the University of Oslo. The presentation will show the nearby mobile phones (aka IMSIs) as dots in a web browser graph, with lines to the dot representing the mobile base station each is talking to. It was working in the lab yesterday, and was moved into place this morning.
We set up a fairly powerful desktop machine running Debian Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers connected, and visualize the visible cell phone towers using an English version of Hopglass. A fairly powerful machine is needed, as the grgsm_livemon_headless processes from gr-gsm converting the radio signal to data packets are quite CPU intensive.
The frequencies to listen to are identified using a slightly patched scan-and-livemon (to set the --args values for each receiver), and the Hopglass data is generated using the patches in my meshviewer-output branch. For some reason we could not get more than four SDRs working. There is also a geographical map trying to show the location of the base stations, but their coordinates seem to be hardcoded to some random location in Germany. The code should be replaced with code to look up the location in a text file, an SQLite database or one of the online databases mentioned in the github issue for the topic.
If this sounds interesting, visit the stand at the festival!
When I set out a few weeks ago to figure out which multimedia player in Debian claimed to support the most file formats / MIME types, I was a bit surprised by how varied the sets of MIME types the various players claimed support for were. The range was from 55 to 130 MIME types. I suspect most media formats are supported by all players, but this is not really reflected in the MimeTypes values in their desktop files. There are probably also some bogus MIME types listed, but it is hard to identify which ones they are.
Anyway, in the meantime I got in touch with upstream for some of the players, suggesting they add more MIME types to their desktop files, and decided to spend some time myself improving the situation for my favorite media player VLC. The fixes for VLC entered Debian unstable yesterday. The complete list of MIME types can be seen on the Multimedia player MIME type support status Debian wiki page.
The new "best" multimedia player in Debian? It is VLC, followed by totem, parole, kplayer, gnome-mpv, mpv, smplayer, mplayer-gui and kmplayer. I am sure some of the other players' desktop files support several of the formats currently listed as working only with vlc, totem and parole.
A sad observation is that only 14 MIME types are listed as supported by all the tested multimedia players in Debian in their desktop files: audio/mpeg, audio/vnd.rn-realaudio, audio/x-mpegurl, audio/x-ms-wma, audio/x-scpls, audio/x-wav, video/mp4, video/mpeg, video/quicktime, video/vnd.rn-realvideo, video/x-matroska, video/x-ms-asf, video/x-ms-wmv and video/x-msvideo. Personally I find it sad that video/ogg and video/webm are not supported by all the media players in Debian. As far as I can tell, all of them can handle both formats.
A little more than a month ago I wrote about how to observe the SIM card ID (aka IMSI number) of mobile phones talking to nearby mobile phone base stations using Debian GNU/Linux and a cheap USB software defined radio, and thus being able to pinpoint the location of people and equipment (like cars and trains) with an accuracy of a few kilometers. Since then we have worked to make the procedure even simpler, and it is now possible to do this without any manual frequency tuning and without building your own packages.
The gr-gsm package is now included in Debian testing and unstable, and the IMSI-catcher code no longer requires root access to fetch and decode the GSM data collected using gr-gsm.
Here is an updated recipe, using packages built by Debian and a git clone of two python scripts:

- Start with a Debian machine running the Buster version (aka testing).

- Run 'apt install gr-gsm python-numpy python-scipy python-scapy' as root to install the required packages.

- Fetch the code decoding GSM packets using 'git clone https://github.com/Oros42/IMSI-catcher.git'.

- Insert a USB software defined radio supported by GNU Radio.

- Enter the IMSI-catcher directory and run 'python scan-and-livemon' to locate the frequency of nearby base stations and start listening for GSM packets on one of them.

- Enter the IMSI-catcher directory and run 'python simple_IMSI-catcher.py' to display the collected information.
Note, due to a bug somewhere, the scan-and-livemon program (actually its underlying program grgsm_scanner) does not work with the HackRF radio. It does work with RTL 8232 and other similar USB radio receivers you can get very cheaply (for example from ebay), so for now the solution is to scan using the RTL radio and only use HackRF for fetching GSM data.
As far as I can tell, a cell phone only shows up on one of the frequencies at a time, so if you are going to track and count every cell phone around you, you need to listen to all the frequencies used. To listen to several frequencies, use the --numrecv argument to scan-and-livemon to use several receivers. Further, I am not sure if phones using 3G or 4G will show up as talking GSM to base stations, so this approach might not see all phones around you. I typically see 0-400 IMSI numbers an hour when looking around where I live.
I've tried to run the scanner on a Raspberry Pi 2 and 3 running Debian Buster, but the grgsm_livemon_headless process seems to be too CPU intensive to keep up. When GNU Radio prints 'O' to stdout, I am told it is caused by a buffer overflow between the radio and GNU Radio, caused by the program being unable to read the GSM data fast enough. If you see a stream of 'O's in the terminal where you started scan-and-livemon, you need to give the process more CPU power. Perhaps someone is able to optimize the code to a point where it becomes possible to set up RPi3-based GSM sniffers? I tried using Raspbian instead of Debian, but there seems to be something wrong with GNU Radio on Raspbian, causing glibc to abort().
Many years ago, when koffice was fresh and had few users, I decided to test its presentation tool when making the slides for a talk I was giving for NUUG on Japhar, a free Java virtual machine. I wrote the first draft of the slides, saved the result and went to bed the day before I would give the talk. The next day I took a plane to the location where the meeting was to take place, and on the plane I started up koffice again to polish the talk a bit, only to discover that kpresenter refused to load its own data file. I cursed a bit and started making the slides again from memory, to have something to present when I arrived. I tested that the saved files could be loaded, and the day seemed to be rescued. I continued to polish the slides until I suddenly discovered that the saved file could no longer be loaded into kpresenter. In the end I had to rewrite the slides three times, condensing the content until the talk became shorter and shorter. After the talk I was able to pinpoint the problem – kpresenter wrote inline images in a way it itself could not understand. Eventually that bug was fixed and kpresenter ended up being a great program for making slides. The point I'm trying to make is that we expect a program to be able to load its own data files, and it is embarrassing to its developers if it can't.
Did you ever experience a program failing to load its own data files from the desktop file browser? It is not an uncommon problem. A while back I discovered that the screencast recorder gtk-recordmydesktop would save an Ogg Theora video file the KDE file browser would refuse to open. No video player claimed to understand such a file. I tracked down the cause to file --mime-type returning the application/ogg MIME type, which no video player I had installed listed as a MIME type they would understand. I asked for file to change its behaviour and use the MIME type video/ogg instead. I also asked several video players to add video/ogg to their desktop files, to give the file browser an idea what to do about Ogg Theora files. After a while, the desktop file browsers in Debian started to handle the output from gtk-recordmydesktop properly.
But history repeats itself. A few days ago I tested the music system Rosegarden again, and I discovered that the KDE and xfce file browsers did not know what to do with the Rosegarden project files (*.rg). I've reported the rosegarden problem to the BTS and a fix is committed to git and will be included in the next upload. To increase the chance of me remembering how to fix the problem the next time some program fails to load its files from the file browser, here are some notes on how to fix it.
The file browsers in Debian in general operate on MIME types. There are two sources for the MIME type of a given file: the output from file --mime-type mentioned above, and the content of the shared MIME type registry (under /usr/share/mime/). The file MIME type is mapped to programs supporting the MIME type, and this information is collected from the desktop files available in /usr/share/applications/. If there is one desktop file claiming support for the MIME type of the file, it is activated when asking to open a given file. If there are more, one can normally select which one to use by right-clicking on the file and selecting the wanted one using 'Open with' or similar. In general this works well. But it depends on each program picking a good MIME type (preferably a MIME type registered with IANA), on file and/or the shared MIME registry recognizing the file, and on the desktop file listing the MIME type in its list of supported MIME types.
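To see this machinery from the command line, one can read the MimeType= line of a desktop file directly, the same list the file browser's cache generator consumes. The desktop file below is a made-up illustration (not shipped by any real Debian package):

```shell
# Create a hypothetical desktop file to illustrate the MimeType= format.
cat > /tmp/example.desktop <<EOF
[Desktop Entry]
Name=Example Player
Exec=example-player %U
MimeType=video/ogg;audio/ogg;
EOF

# List the MIME types the desktop file claims to support, one per line.
grep '^MimeType=' /tmp/example.desktop | cut -d= -f2 | tr ';' '\n' | grep .
```

On a real system, the same grep against /usr/share/applications/*.desktop shows which installed programs claim a given MIME type.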
The /usr/share/mime/packages/rosegarden.xml entry for the Shared MIME database looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="audio/x-rosegarden">
    <sub-class-of type="application/x-gzip"/>
    <comment>Rosegarden project file</comment>
    <glob pattern="*.rg"/>
  </mime-type>
</mime-info>
This states that audio/x-rosegarden is a kind of application/x-gzip (it is a gzipped XML file). Note, it is much better to use an official MIME type registered with IANA than to make up one's own unofficial ones like the x-rosegarden type used by rosegarden.
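Since a .rg file is just gzip-compressed XML, this is easy to verify by hand. A small sketch, where the XML payload is made up for illustration rather than being a real Rosegarden document:

```shell
# Build a stand-in .rg file: gzip-compressed XML, the same container
# format Rosegarden uses for its project files.
printf '<?xml version="1.0"?><rosegarden-data/>' | gzip > /tmp/example.rg

# Peek inside without starting Rosegarden.
zcat /tmp/example.rg
```

This gzip layering is also why the sub-class-of element above points at application/x-gzip: a tool that only knows the parent type can still decompress the file.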
The desktop file of the rosegarden program failed to list audio/x-rosegarden in its list of supported MIME types, causing the file browsers to have no idea what to do with *.rg files:
% grep Mime /usr/share/applications/rosegarden.desktop
MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;
X-KDE-NativeMimeType=audio/x-rosegarden-composition
%
The fix was to add "audio/x-rosegarden;" at the end of the MimeType= line.
If you run into a file which fails to open in the correct program when selected from the file browser, please check the output from file --mime-type for the file, ensure the file ending and MIME type are registered somewhere under /usr/share/mime/, and check that some desktop file under /usr/share/applications/ claims support for this MIME type. If not, please report a bug to have it fixed. :)
A few days ago Jon Wessel-Aas published a blog post on "The conclusion about data retention that the EU Commission did not want us to see". It is an interesting review of the EU Court of Justice's view on dragnet surveillance of the population, which is clear that it is in conflict with EU law.
The election campaign is in full swing in Norway, and in a few days the deadline for casting a vote expires. One thing is certain: Høyre and Arbeiderpartiet will not get my vote this time either. I have not forgotten that they forced through the law that was to require all data and telecommunications providers to surveil all their customers. A law that was passed, and never repealed.
It is clear from the discussion around borderless digital surveillance (or "Digital Border Defense" as it is called in Orwellian newspeak) that neither Høyre nor Arbeiderpartiet have any principled objections to surveilling the entire population, and the discussion so far suggests that several of the other parties do not have any either. Many of those who voted for the Data Retention Directive in the Storting (64 from Arbeiderpartiet, 25 from Høyre) are still active and still arguing for erasing more of the citizens' privacy.
When the authorities demonstrate their distrust of the people, I believe the people themselves should put some effort into protecting their privacy, by using end-to-end encrypted communication with their loved ones, and by limiting how much private information is shared with outsiders. After all, there is nothing to suggest that the authorities are going to protect our privacy. There are many options. Personally I am somewhat fond of Ring, which is based on p2p technology without central control, is free software, and supports messaging, voice and video. The system is available out of the box from Debian and Ubuntu, and there are packages for Android, MacOSX and Windows. So far there are few Ring users, so I also use Signal as a browser extension.