- <title>Where did that package go? &mdash; geolocated IP traceroute</title>
- <link>http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</link>
- <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Where_did_that_package_go___mdash__geolocated_IP_traceroute.html</guid>
- <pubDate>Mon, 9 Jan 2017 12:20:00 +0100</pubDate>
- <description><p>Did you ever wonder where the web traffic really flows
-to reach the web servers, and who owns the network equipment it is
-flowing through? It is possible to get a glimpse of this using
-traceroute, but it is hard to find all the details. Many years ago, I
-wrote a system to map the Norwegian Internet (trying to figure out if
-our plans for a network game service would get low enough latency, and
-who we needed to talk to about setting up game servers close to the
-users). Back then I used traceroute output from many locations (I
-asked my friends to run a script and send me their traceroute output)
-to create the graph and the map. The output from traceroute typically
-looks like this:
-
-<p><pre>
-traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
- 1 uio-gw10.uio.no (129.240.202.1) 0.447 ms 0.486 ms 0.621 ms
- 2 uio-gw8.uio.no (129.240.24.229) 0.467 ms 0.578 ms 0.675 ms
- 3 oslo-gw1.uninett.no (128.39.65.17) 0.385 ms 0.373 ms 0.358 ms
- 4 te3-1-2.br1.fn3.as2116.net (193.156.90.3) 1.174 ms 1.172 ms 1.153 ms
- 5 he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48) 3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234) 2.857 ms
- 6 ae1.ar8.oslosda310.as2116.net (195.0.242.39) 0.662 ms 0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23) 0.622 ms
- 7 89.191.10.146 (89.191.10.146) 0.931 ms 0.917 ms 0.955 ms
- 8 * * *
- 9 * * *
-[...]
-</pre></p>
-
-<p>This shows the DNS names and IP addresses of (at least some of) the
-network equipment involved in getting the data traffic from me to the
-www.stortinget.no server, and how long it took in milliseconds for a
-packet to reach the equipment and return to me. Three packets are
-sent, and sometimes they do not follow the same path. This is shown
-for hop 5, where three different IP addresses replied to the
-traceroute request.</p>
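Output like this is easy to post-process with standard tools. Here is a small sketch (not from the original post; the file name trace.txt and the use of awk are my own choices) that lists the distinct IP addresses answering at each hop:

```shell
# Sample input, taken from the trace shown above.
cat > trace.txt <<'EOF'
traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (129.240.202.1)  0.447 ms  0.486 ms  0.621 ms
 5  he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48)  3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.857 ms
EOF

# traceroute prints each responding hop as "name (a.b.c.d)"; collect
# the distinct parenthesized addresses on every hop line.
awk '/^ *[0-9]+ /{
    out = ""
    for (i = 2; i <= NF; i++)
        if ($i ~ /^\([0-9.]+\)$/) {
            ip = substr($i, 2, length($i) - 2)
            if (index(out, " " ip) == 0) out = out " " ip
        }
    print $1 ":" out
}' trace.txt > hops.txt
cat hops.txt
```

For the sample above this prints one line per hop, with hop 5 listing both routers that replied.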
-
-<p>There are many ways to measure trace routes. Other good traceroute
-implementations I use are traceroute (using ICMP packets), mtr (can do
-ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP and TCP
-traceroute and a lot of other capabilities). All of them are easily
-available in <a href="https://www.debian.org/">Debian</a>.</p>
-
-<p>This time around, I wanted to know the geographic location of the
-different route points, to visualize how visiting a web page spreads
-information about the visit to a lot of servers around the globe. The
-background is that a web site today often asks the browser to fetch
-the parts required to display the content (for example HTML, JSON,
-fonts, JavaScript, CSS and video) from many servers. This leaks
-information about the visit to those controlling these servers and to
-anyone able to peek at the data traffic passing by (like your ISP, the
-ISP's backbone provider, FRA, GCHQ, NSA and others).</p>
-
-<p>Let's pick an example, the Norwegian parliament web site
-www.stortinget.no. It is read daily by all members of parliament and
-their staff, as well as political journalists, activists and many
-other citizens of Norway. A visit to the www.stortinget.no web site
-will ask your browser to contact 8 other servers: ajax.googleapis.com,
-insights.hotjar.com, script.hotjar.com, static.hotjar.com,
-stats.g.doubleclick.net, www.google-analytics.com,
-www.googletagmanager.com and www.netigate.se. I extracted this by
-asking <a href="http://phantomjs.org/">PhantomJS</a> to visit the
-Stortinget web page and tell me all the URLs PhantomJS downloaded to
-render the page (in HAR format using
-<a href="https://github.com/ariya/phantomjs/blob/master/examples/netsniff.js">their
-netsniff example</a>; I am very grateful to Gorm for showing me how to
-do this). My goal is to visualize network traces to all the IP
-addresses behind these DNS names, to show where visitors' personal
-information is spread when visiting the page.</p>
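The extraction step can be sketched with standard text tools (this is not the author's actual command; the file page.har and its content are a tiny hand-made stand-in for a real netsniff capture, which uses the same "url" fields):

```shell
# Minimal stand-in for a HAR capture.
cat > page.har <<'EOF'
{"log": {"entries": [
  {"request": {"url": "https://ajax.googleapis.com/ajax/libs/jquery.js"}},
  {"request": {"url": "https://static.hotjar.com/c/hotjar.js"}},
  {"request": {"url": "https://ajax.googleapis.com/ajax/libs/other.js"}}
]}}
EOF

# Pull out every "url" value, strip the scheme and the path,
# and keep the distinct host names contacted.
grep -o '"url": *"[^"]*"' page.har \
    | sed -E -e 's|.*"(https?://[^"]*)"|\1|' \
             -e 's|https?://||' -e 's|[/:].*||' \
    | sort -u > hosts.txt
cat hosts.txt
```

A proper JSON parser (for example jq or Python's json module) is more robust for real HAR files, but this shows the idea.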
-
-<p align="center"><a href="www.stortinget.no-geoip.kml"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geoip-small.png" alt="map of combined traces for URLs used by www.stortinget.no using GeoIP"/></a></p>
-
-<p>When I had a look around for options, I could not find any good
-free software tools to do this, and decided I needed my own traceroute
-wrapper outputting KML based on locations looked up using GeoIP. KML
-is easy to work with and easy to generate, and is understood by
-several of the GIS tools I have available. I got good help from my
-NUUG colleague Anders Einar with this, and the result can be seen in
-<a href="https://github.com/petterreinholdtsen/kmltraceroute">my
-kmltraceroute git repository</a>. Unfortunately, the quality of the
-free GeoIP databases I could find (and the for-pay databases my
-friends had access to) is not up to the task. The IP addresses of
-central Internet infrastructure are typically placed near the
-controlling company's main office, and not where the router is really
-located, as you can see from <a href="www.stortinget.no-geoip.kml">the
-KML file I created</a> using the GeoLite City dataset from MaxMind.</p>
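To illustrate why KML is easy to generate (this is a minimal sketch of the idea only, not the actual kmltraceroute code; the input file hops.txt and its coordinates are invented), a list of geolocated hops can be turned into a drawable KML path like this:

```shell
# One "latitude longitude" pair per geolocated hop (made-up values).
cat > hops.txt <<'EOF'
59.94 10.72
59.33 18.06
EOF

{
    printf '<?xml version="1.0" encoding="UTF-8"?>\n'
    printf '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
    printf '<Placemark><LineString><coordinates>\n'
    # KML coordinates are "longitude,latitude", so swap the columns.
    awk '{ print $2 "," $1 }' hops.txt
    printf '</coordinates></LineString></Placemark>\n'
    printf '</Document></kml>\n'
} > trace.kml
cat trace.kml
```

The resulting trace.kml can be opened directly in Marble, Google Earth and similar GIS tools.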
-
-<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy-small.png" alt="scapy traceroute graph for URLs used by www.stortinget.no"/></a></p>
-
-<p>I also had a look at the visual traceroute graph created by
-<a href="http://www.secdev.org/projects/scapy/">the scapy project</a>,
-showing IP network ownership (aka AS owner) for the IP addresses in
-question.
-<a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-scapy.svg">The
-graph displays a lot of useful information about the traceroute in SVG
-format</a>, and gives a good indication of who controls the network
-equipment involved, but it does not include geolocation. The graph
-makes it possible to see that the information is made available at
-least to UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon,
-Telia, Level 3 Communications and NetDNA.</p>
-
-<p align="center"><a href="https://geotraceroute.com/index.php?node=4&host=www.stortinget.no"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-small.png" alt="example geotraceroute view for www.stortinget.no"/></a></p>
-
-<p>In the process, I came across the
-<a href="https://geotraceroute.com/">web service GeoTraceroute</a> by
-Salim Gasmi. Its methodology of combining guesses based on DNS names
-and various location databases, and finally using latency times to
-rule out candidate locations, seemed to do a very good job of guessing
-the correct geolocation. But it could only do one trace at a time, did
-not have a sensor in Norway, and did not make the geolocations easily
-available for postprocessing. So I contacted the developer and asked
-if he would be willing to share the code (he declined until he has had
-time to clean it up), but he was interested in providing the
-geolocations in a machine-readable format, and willing to set up a
-sensor in Norway. So since yesterday, it is possible to run traces
-from Norway in this service, thanks to a sensor node set up by
-<a href="https://www.nuug.no/">the NUUG association</a>, and to get
-the trace in KML format for further processing.</p>
-
-<p align="center"><a href="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.kml"><img
-src="http://people.skolelinux.org/pere/blog/images/2017-01-09-www.stortinget.no-geotraceroute-kml-join.png" alt="map of combined traces for URLs used by www.stortinget.no using geotraceroute"/></a></p>
-
-<p>Here we can see that a lot of traffic passes through Sweden on its
-way to Denmark, Germany, the Netherlands and Ireland. Plenty of places
-where the Snowden revelations confirmed the traffic is read by various
-actors without your best interest as their top priority.</p>
-
-<p>Combining KML files is trivial using a text editor, so I could loop
-over all the hosts behind the URLs imported by www.stortinget.no, ask
-GeoTraceroute for the KML file of each, and create a combined KML file
-with all the traces. (Unfortunately, only one of the IP addresses
-behind each DNS name is traced this time. To get them all, one would
-have to request traces from GeoTraceroute using IP addresses instead
-of DNS names.) That might be the next step in this project.</p>
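The text-editor merge can be sketched as a small script (file names and contents here are made up; it assumes each Placemark tag starts and ends on its own line, which matches how such files are typically formatted):

```shell
# Two stand-in per-host KML files.
cat > trace-a.kml <<'EOF'
<kml><Document>
<Placemark>
<name>host-a</name>
</Placemark>
</Document></kml>
EOF
cat > trace-b.kml <<'EOF'
<kml><Document>
<Placemark>
<name>host-b</name>
</Placemark>
</Document></kml>
EOF

# Keep a single KML header and footer, and concatenate the
# Placemark blocks from every per-host file in between.
{
    printf '<kml><Document>\n'
    for f in trace-a.kml trace-b.kml; do
        sed -n '/<Placemark>/,/<\/Placemark>/p' "$f"
    done
    printf '</Document></kml>\n'
} > combined.kml
cat combined.kml
```

Real KML files carry an XML declaration and namespace on the kml element; the same cut-and-paste approach applies.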
-
-<p>Armed with these tools, I find it a lot easier to figure out where
-the IP traffic moves and who controls the boxes involved in moving it.
-And every time the link crosses, for example, the Swedish border, we
-can be sure the Swedish Signals Intelligence service (FRA) is
-listening, as GCHQ does in Britain and the NSA in the USA and on
-cables around the globe. (Hm, what should we tell them? :) Keep that
-in mind if you ever send anything unencrypted over the Internet.</p>
-
-<p>PS: KML files are drawn using
-<a href="http://ivanrublev.me/kml/">the KML viewer from Ivan
-Rublev</a>, as it was less cluttered than the local Linux application
-Marble. There are heaps of other options too.</p>
+ <title>Debian APT upgrade without enough free space on the disk...</title>
+ <link>http://people.skolelinux.org/pere/blog/Debian_APT_upgrade_without_enough_free_space_on_the_disk___.html</link>
+ <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Debian_APT_upgrade_without_enough_free_space_on_the_disk___.html</guid>
+ <pubDate>Sun, 8 Jul 2018 12:10:00 +0200</pubDate>
+ <description><p>Quite regularly, I let my Debian Sid/Unstable chroot
+stay untouched for a while, and when I need to update it there is not
+enough free space on the disk for apt to do a normal 'apt upgrade'. I
+would normally resolve the issue by doing 'apt install
+&lt;somepackages&gt;' to upgrade only some of the packages in one
+batch, until the size of the packages to download falls below the
+amount of free space available. Today, I had about 500 packages to
+upgrade, and after a while I got tired of trying to install chunks of
+packages manually. I concluded that I did not have the spare hours
+required to complete the task, and decided to see if I could automate
+it. I came up with this small script, which I call 'apt-in-chunks':</p>
+
+<p><blockquote><pre>
+#!/bin/sh
+#
+# Upgrade packages when the disk is too full to upgrade every
+# upgradable package in one lump. Fetching packages to upgrade using
+# apt, and then installing using dpkg, to avoid changing the package
+# flag for manual/automatic.
+
+set -e
+
+ignore() {
+ if [ "$1" ]; then
+ grep -v "$1"
+ else
+ cat
+ fi
+}
+
+for p in $(apt list --upgradable | ignore "$@" |cut -d/ -f1 | grep -v '^Listing...'); do
+ echo "Upgrading $p"
+ apt clean
+ apt install --download-only -y $p
+ for f in /var/cache/apt/archives/*.deb; do
+ if [ -e "$f" ]; then
+ dpkg -i /var/cache/apt/archives/*.deb
+ break
+ fi
+ done
+done
+</pre></blockquote></p>
+
+<p>The script will extract the list of packages to upgrade, try to
+download the packages needed to upgrade one package at a time, and
+install the downloaded packages using dpkg. The idea is to upgrade
+packages without changing the APT mark for the package (i.e. the one
+recording whether the package was manually requested or pulled in as a
+dependency). To use it, simply run it as root from the command line.
+If it fails, try 'apt install -f' to clean up the mess and run the
+script again. This might happen if the new packages conflict with one
+of the old packages in a way dpkg is unable to resolve, while apt
+can.</p>
+
+<p>It takes one option, a package to ignore in the list of packages to
+upgrade. The option to ignore a package is there to be able to skip
+packages that are simply too large to unpack. Today this was 'ghc',
+but I have run into other large packages causing similar problems
+earlier (like TeX).</p>
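The filtering behind that option is done by the script's ignore() helper, which can be exercised on its own (the sample package lines below are made up):

```shell
# The ignore() helper from apt-in-chunks: with an argument it drops
# matching lines, with no argument it is a plain pass-through.
ignore() {
	if [ "$1" ]; then
		grep -v "$1"
	else
		cat
	fi
}

# With a pattern: the ghc line is filtered out.
printf 'ghc/unstable\nbash/unstable\n' | ignore ghc > filtered.txt
# Without a pattern: every line passes through unchanged.
printf 'ghc/unstable\nbash/unstable\n' | ignore > all.txt
cat filtered.txt
```

Because the helper receives the script's "$@", running 'apt-in-chunks ghc' skips every upgradable package whose list line matches ghc.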
+
+<p>Update 2018-07-08: Thanks to Paul Wise, I am aware of two
+alternative ways to handle this. The "unattended-upgrades
+--minimal-upgrade-steps" option will try to calculate upgrade sets for
+each package to upgrade, and then upgrade them in order, smallest set
+first. It might be a better option than the script above. Also,
+"aptitude upgrade" can upgrade single packages, thus avoiding the need
+for "dpkg -i" in the script above.</p>