X-Git-Url: http://pere.pagekite.me/gitweb/homepage.git/blobdiff_plain/c5af2986515029a6622d31176fe79265be465897..53798080ccfe7c0d9ad356a83ded5b219b186c9b:/blog/index.rss

diff --git a/blog/index.rss b/blog/index.rss
index 0465482870..0a657f587a 100644
--- a/blog/index.rss
+++ b/blog/index.rss
@@ -6,6 +6,115 @@ http://people.skolelinux.org/pere/blog/
+
+ Detecting NFS hangs on Linux without hanging yourself...
+ http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html
+ http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html
+ Thu, 9 Mar 2017 15:20:00 +0100
+ <p>Over the years, administering thousands of NFS-mounting Linux
+computers at a time, I often needed a way to detect if a machine
+was experiencing an NFS hang. If you try to use <tt>df</tt> or look at a
+file or directory affected by the hang, the process (and possibly the
+shell) will hang too. So you want to be able to detect this without
+risking the detection process getting stuck too. It has not been
+obvious how to do this. When the hang has lasted a while, it is
+possible to find messages like these in dmesg:</p>
+
+<p><blockquote>
+nfs: server nfsserver not responding, still trying
+<br>nfs: server nfsserver OK
+</blockquote></p>
+
+<p>It is hard to know if the hang is still going on, and it is hard to
+be sure looking in dmesg is going to work. If there are lots of other
+messages in dmesg, the lines might have rotated out of sight before
+they are noticed.</p>
+
+<p>While reading through the NFS client implementation in the Linux
+kernel code, I came across some statistics that seem to give a way to
+detect it. The om_timeouts sunrpc value in the kernel will increase
+every time the above log entry is inserted into dmesg. And after
+digging a bit further, I discovered that this value shows up in
+/proc/self/mountstats on Linux.</p>
+
+<p>The mountstats content seems to be shared between files using the
+same file system context, so it is enough to check one of the
+mountstats files to get the state of the mount point for the machine.
+I assume this will not show lazily umounted NFS mount points, nor NFS
+mount points in a different process context (i.e. with a different
+file system view), but that does not worry me.</p>
+
+<p>The content for an NFS mount point looks similar to this:</p>
+
+<p><blockquote><pre>
+[...]
+device /dev/mapper/Debian-var mounted on /var with fstype ext3
+device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
+ opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
+ age: 7863311
+ caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
+ sec: flavor=1,pseudoflavor=1
+ events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
+ bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
+ RPC iostats version: 1.0 p/v: 100003/3 (nfs)
+ xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
+ per-op statistics
+ NULL: 0 0 0 0 0 0 0 0
+ GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
+ SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
+ LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
+ ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
+ READLINK: 125 125 0 20472 18620 0 1112 1118
+ READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
+ WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
+ CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
+ MKDIR: 3680 3680 0 773980 993920 26 23990 24245
+ SYMLINK: 903 903 0 233428 245488 6 5865 5917
+ MKNOD: 80 80 0 20148 21760 0 299 304
+ REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
+ RMDIR: 3367 3367 0 645112 484848 22 5782 6002
+ RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
+ LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
+ READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
+ READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
+ FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
+ FSINFO: 2 2 0 232 328 0 1 1
+ PATHCONF: 1 1 0 116 140 0 0 0
+ COMMIT: 0 0 0 0 0 0 0 0
+
+device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
+[...]
+</pre></blockquote></p>
+
+<p>The key number to look at is the third number in the per-op list.
+It is the number of NFS timeouts experienced per file system
+operation. Here there are 22 write timeouts and 5 access timeouts. If
+these numbers are increasing, I believe the machine is experiencing an
+NFS hang. Unfortunately the timeout counters do not start to increase
+right away. The NFS operations need to time out first, and this can
+take a while. The exact timeout value depends on the setup. For
+example, the defaults for TCP and UDP mount points are quite
+different, and the timeout value is affected by the soft, hard, timeo
+and retrans NFS mount options.</p>
+
+<p>The only way I have found to get the timeout count on Debian and
+Red Hat Enterprise Linux is to peek in /proc/, as sketched below. But
+according to
+<a href="http://docs.oracle.com/cd/E19253-01/816-4555/netmonitor-12/index.html">Solaris
+10 System Administration Guide: Network Services</a>, the 'nfsstat -c'
+command can be used to get these timeout values. This does not seem
+to work on Linux, as far as I can tell.</p>
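+
+<p>As a rough illustration of the /proc/ approach, here is a small
+Python sketch that reads /proc/self/mountstats and reports the
+per-operation timeout counters for each NFS mount point. Treat it as
+a sketch, not a finished tool: it assumes the field layout shown
+above, with the timeout count as the third number after the operation
+name, and that layout might differ between kernel versions. Reading
+the file only touches local kernel counters, so the check itself
+should not hang even when the NFS server is unreachable. Run it twice
+a little while apart; if the counters increase between runs, the
+mount point is most likely experiencing an NFS hang:</p>
+
+<p><blockquote><pre>
+#!/usr/bin/env python3
+# Sketch: report NFS per-operation timeout counts from /proc/self/mountstats.
+
+def nfs_timeouts(path='/proc/self/mountstats'):
+    """Return {mount point: {operation: timeout count}} for NFS mounts."""
+    result = {}
+    mountpoint = None
+    in_per_op = False
+    with open(path) as statsfile:
+        for line in statsfile:
+            words = line.split()
+            if not words:
+                continue
+            if words[0] == 'device' and len(words) >= 8:
+                # device server:/export mounted on /mnt with fstype nfs statvers=1.1
+                if words[7].startswith('nfs'):
+                    mountpoint = words[4]
+                    result[mountpoint] = {}
+                else:
+                    mountpoint = None
+                in_per_op = False
+            elif mountpoint and line.strip() == 'per-op statistics':
+                in_per_op = True
+            elif mountpoint and in_per_op and words[0].endswith(':') and len(words) > 3:
+                # OP: operations transmissions timeouts bytes-sent bytes-received ...
+                result[mountpoint][words[0].rstrip(':')] = int(words[3])
+    return result
+
+if __name__ == '__main__':
+    for mountpoint, ops in sorted(nfs_timeouts().items()):
+        print('%s: %d timeouts in total' % (mountpoint, sum(ops.values())))
+        for op, count in sorted(ops.items()):
+            if count:
+                print('  %s: %d' % (op, count))
+</pre></blockquote></p>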
+
+<p>I
+<a href="http://bugs.debian.org/857043">asked Debian about this</a>,
+but have not seen any replies yet.</p>
+
+<p>Is there a better way to figure out if a Linux NFS client is
+experiencing NFS hangs? Is there a way to detect which processes are
+affected? Is there a way to get the NFS mount going again quickly once
+the network problem causing the NFS hang has been cleared? I would
+very much welcome some clues, as we regularly run into NFS hangs.</p>
+
+
+
 How does it feel to be wiretapped, when you should be doing the wiretapping...
 http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html
 http://people.skolelinux.org/pere/blog/How_does_it_feel_to_be_wiretapped__when_you_should_be_doing_the_wiretapping___.html
@@ -22,9 +131,9 @@ alongside the senators, judges and the rest of the people in USA.</p>

 <p>Next, the Federal Bureau of Investigation asks the Department of
 Justice to go public rejecting the claims that Donald Trump was
 wiretapped illegally. I fail to see the relevance, given that I am
-sure the surveillance industry in USA according to themselves believe
-they have all the legal backing they need to conduct mass surveillance
-on the entire world.</p>
+sure the surveillance industry in the USA believe they have all the
+legal backing they need to conduct mass surveillance on the entire
+world.</p>

 <p>There is even the director of the FBI stating that he never saw an
 order requesting wiretapping of Donald Trump. That is not very
@@ -38,6 +147,10 @@ claim that 'the FBI denies any wiretapping', while the reality is that
 'the FBI denies any illegal wiretapping'. There is a fundamental and
 important difference, and it makes me sad that the journalists are
 unable to grasp it.</p>
+
+<p><strong>Update 2017-03-13:</strong> Looks like
+<a href="https://theintercept.com/2017/03/13/rand-paul-is-right-nsa-routinely-monitors-americans-communications-without-warrants/">The
+Intercept reports that US Senator Rand Paul confirms what I state above</a>.</p>
@@ -549,78 +662,6 @@ unencrypted over the Internet.</p>
 Rublev</a>, as it was less cluttered than the local Linux application
 Marble. There are heaps of other options too.</p>

-<p>As usual, if you use Bitcoin and want to show your support of my
-activities, please send Bitcoin donations to my address
-<b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>
-
-
-
-
- Introducing ical-archiver to split out old iCalendar entries
- http://people.skolelinux.org/pere/blog/Introducing_ical_archiver_to_split_out_old_iCalendar_entries.html
- http://people.skolelinux.org/pere/blog/Introducing_ical_archiver_to_split_out_old_iCalendar_entries.html
- Wed, 4 Jan 2017 12:20:00 +0100
- <p>Do you have a large <a href="https://icalendar.org/">iCalendar</a>
-file with lots of old entries, and would like to archive them to save
-space and resources? At least those of us using KOrganizer know that
-turning on and off an event set become slower and slower the more
-entries are in the set. While working on migrating our calendars to a
-<a href="http://radicale.org/">Radicale CalDAV server</a> on our
-<a href="https://freedomboxfoundation.org/">Freedombox server</a/>, my
-loved one wondered if I could find a way to split up the calendar file
-she had in KOrganizer, and I set out to write a tool. I spent a few
-days writing and polishing the system, and it is now ready for general
-consumption. The
-<a href="https://github.com/petterreinholdtsen/ical-archiver">code for
-ical-archiver</a> is publicly available from a git repository on
-github.
-The system is written in Python and depend on
-<a href="http://eventable.github.io/vobject/">the vobject Python
-module</a>.</p>
-
-<p>To use it, locate the iCalendar file you want to operate on and
-give it as an argument to the ical-archiver script. This will
-generate a set of new files, one file per component type per year for
-all components expiring more than two years in the past. The vevent,
-vtodo and vjournal entries are handled by the script. The remaining
-entries are stored in a 'remaining' file.</p>
-
-<p>This is what a test run can look like:
-
-<p><pre>
-% ical-archiver t/2004-2016.ics
-Found 3612 vevents
-Found 6 vtodos
-Found 2 vjournals
-Writing t/2004-2016.ics-subset-vevent-2004.ics
-Writing t/2004-2016.ics-subset-vevent-2005.ics
-Writing t/2004-2016.ics-subset-vevent-2006.ics
-Writing t/2004-2016.ics-subset-vevent-2007.ics
-Writing t/2004-2016.ics-subset-vevent-2008.ics
-Writing t/2004-2016.ics-subset-vevent-2009.ics
-Writing t/2004-2016.ics-subset-vevent-2010.ics
-Writing t/2004-2016.ics-subset-vevent-2011.ics
-Writing t/2004-2016.ics-subset-vevent-2012.ics
-Writing t/2004-2016.ics-subset-vevent-2013.ics
-Writing t/2004-2016.ics-subset-vevent-2014.ics
-Writing t/2004-2016.ics-subset-vjournal-2007.ics
-Writing t/2004-2016.ics-subset-vjournal-2011.ics
-Writing t/2004-2016.ics-subset-vtodo-2012.ics
-Writing t/2004-2016.ics-remaining.ics
-%
-</pre></p>
-
-<p>As you can see, the original file is untouched and new files are
-written with names derived from the original file. If you are happy
-with their content, the *-remaining.ics file can replace the original
-the the others can be archived or imported as historical calendar
-collections.</p>
-
-<p>The script should probably be improved a bit. The error handling
-when discovering broken entries is not good, and I am not sure yet if
-it make sense to split different entry types into separate files or
-not. The program is thus likely to change. If you find it
-interesting, please get in touch. :)</p>
-
 <p>As usual, if you use Bitcoin and want to show your support of my
 activities, please send Bitcoin donations to my address
 <b><a href="bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&label=PetterReinholdtsenBlog">15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>