<?xml version="1.0" encoding="utf-8"?>
<rss version='2.0' xmlns:lj='http://www.livejournal.org/rss/lj/1.0/'>
<title>Petter Reinholdtsen - Entries tagged sysadmin</title>
<description>Entries tagged sysadmin</description>
<link>http://people.skolelinux.org/pere/blog/</link>
<title>Some notes on fault tolerant storage systems</title>
<link>http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</link>
<guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Some_notes_on_fault_tolerant_storage_systems.html</guid>
<pubDate>Wed, 1 Nov 2017 15:35:00 +0100</pubDate>
<description><p>If you care about how fault tolerant your storage is, you might
find these articles and papers interesting. They have shaped how I
think when designing a storage system.</p>
<li>USENIX ;login: <a
href="https://www.usenix.org/publications/login/summer2017/ganesan">Redundancy
Does Not Imply Fault Tolerance: Analysis of Distributed Storage
Reactions to Single Errors and Corruptions</a> by Aishwarya Ganesan,
Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi
H. Arpaci-Dusseau</li>
<a href="http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/">Why
RAID 5 stops working in 2009</a> by Robin Harris</li>
<a href="http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/">Why
RAID 6 stops working in 2019</a> by Robin Harris</li>
<li>USENIX FAST '07
<a href="http://research.google.com/archive/disk_failures.pdf">Failure
Trends in a Large Disk Drive Population</a> by Eduardo Pinheiro,
Wolf-Dietrich Weber and Luiz André Barroso</li>
<li>USENIX ;login: <a
href="https://www.usenix.org/system/files/login/articles/hughes12-04.pdf">Data
Integrity: Finding Truth in a World of Guesses and Lies</a> by Doug
<li>USENIX FAST '08
<a href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
Analysis of Data Corruption in the Storage Stack</a> -
L. N. Bairavasundaram, G. R. Goodson, B. Schroeder, A. C.
Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>
<li>USENIX FAST '07 <a
href="https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder_html/">Disk
failures in the real world: what does an MTTF of 1,000,000 hours mean
to you?</a> by B. Schroeder and G. A. Gibson.</li>
<li>USENIX ;login: <a
href="https://www.usenix.org/events/fast08/tech/full_papers/jiang/jiang_html/">Are
Disks the Dominant Contributor for Storage Failures? A Comprehensive
Study of Storage Subsystem Failure Characteristics</a> by Weihang
Jiang, Chongfeng Hu, Yuanyuan Zhou, and Arkady Kanevsky</li>
<li>SIGMETRICS 2007
<a href="http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf">An
analysis of latent sector errors in disk drives</a> -
L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler</li>
<p>Several of these research papers are based on data collected from
hundreds of thousands or millions of disks, and their findings are
eye-opening. The short story is: do not implicitly trust RAID or
redundant storage systems. Details matter. And unfortunately there
are few options on Linux addressing all the identified issues. Both
ZFS and Btrfs are doing a fairly good job, but have legal and
practical issues of their own. I wonder how cluster file systems like
Ceph do in this regard.</p>
<p>Just remember: in the end it does not matter how redundant or how
fault tolerant your storage is, if you do not continuously monitor its
status to detect and replace failed disks.</p>
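<p>One way to do such monitoring on Linux, assuming the storage is
Linux software RAID managed by mdadm, is to watch /proc/mdstat for
arrays with missing members. The following is only a minimal sketch of
that idea, not something taken from the papers above; a production
setup would rather rely on mdadm --monitor, SMART checks and the
regular monitoring system:</p>

<p><blockquote><pre>
#!/usr/bin/env python
# Sketch: report Linux software RAID (md) arrays with missing or failed
# members by looking for "_" in the status brackets of /proc/mdstat,
# e.g. "[2/1] [U_]". Assumes mdadm-managed arrays only.
import re
import sys

def degraded_md_arrays(path='/proc/mdstat'):
    degraded = []
    current = None
    for line in open(path):
        m = re.match(r'^(md\S*)\s*:', line)
        if m:
            current = m.group(1)
        # A healthy array shows "[UU]"; an underscore marks a missing disk.
        if current and re.search(r'\[U*_[U_]*\]', line):
            degraded.append(current)
    return degraded

if __name__ == '__main__':
    failed = degraded_md_arrays()
    if failed:
        print('Degraded arrays: %s' % ', '.join(failed))
        sys.exit(1)
    print('All md arrays have all their members')
</pre></blockquote></p>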
<title>Detecting NFS hangs on Linux without hanging yourself...</title>
<link>http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</link>
<guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Detecting_NFS_hangs_on_Linux_without_hanging_yourself___.html</guid>
<pubDate>Thu, 9 Mar 2017 15:20:00 +0100</pubDate>
<description><p>Over the years, administrating thousands of NFS-mounting Linux
computers at a time, I have often needed a way to detect if a machine
was experiencing an NFS hang. If you try to use <tt>df</tt> or look at a
file or directory affected by the hang, the process (and possibly the
shell) will hang too. So you want to be able to detect this without
risking the detection process getting stuck too. It has not been
obvious how to do this. When the hang has lasted a while, it is
possible to find messages like these in dmesg:</p>
<p><blockquote>
nfs: server nfsserver not responding, still trying
<br>nfs: server nfsserver OK
</blockquote></p>
<p>It is hard to know if the hang is still going on, and it is hard to
be sure looking in dmesg is going to work. If there are lots of other
messages in dmesg, the lines might have rotated out of sight before they
are noticed.</p>
<p>While reading through the NFS client implementation in the Linux
kernel code, I came across some statistics that seem to give a way to
detect it. The om_timeouts sunrpc value in the kernel will increase
every time the above log entry is inserted into dmesg. And after
digging a bit further, I discovered that this value shows up in
/proc/self/mountstats on Linux.</p>
<p>The mountstats content seems to be shared between processes using
the same file system context, so it is enough to check one of the
mountstats files to get the state of the mount points for the machine.
I assume this will not show lazily umounted NFS points, nor NFS mount
points in a different process context (i.e. with a different file
system view), but that does not worry me.</p>
<p>The content for an NFS mount point looks similar to this:</p>
<p><blockquote><pre>
device /dev/mapper/Debian-var mounted on /var with fstype ext3
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
sec: flavor=1,pseudoflavor=1
events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
RPC iostats version: 1.0 p/v: 100003/3 (nfs)
xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
NULL: 0 0 0 0 0 0 0 0
GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
READLINK: 125 125 0 20472 18620 0 1112 1118
READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
MKDIR: 3680 3680 0 773980 993920 26 23990 24245
SYMLINK: 903 903 0 233428 245488 6 5865 5917
MKNOD: 80 80 0 20148 21760 0 299 304
REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
RMDIR: 3367 3367 0 645112 484848 22 5782 6002
RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
FSINFO: 2 2 0 232 328 0 1 1
PATHCONF: 1 1 0 116 140 0 0 0
COMMIT: 0 0 0 0 0 0 0 0
device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
</pre></blockquote></p>
<p>The key number to look at is the third number in the per-op list.
It is the number of NFS timeouts experienced per file system
operation. Here there are 22 write timeouts and 5 access timeouts. If
these numbers are increasing, I believe the machine is experiencing an
NFS hang. Unfortunately the timeout value does not start to increase
right away. The NFS operations need to time out first, and this can
take a while. The exact timeout value depends on the setup. For
example, the defaults for TCP and UDP mount points are quite different,
and the timeout value is affected by the soft, hard, timeo and retrans
NFS mount options.</p>
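<p>To make this concrete, here is a small sketch, not from the text
above, of how the per-op timeout counters could be summed per NFS
mount point with Python, based on the field positions visible in the
example output. For the mount above it would report 22 + 5 = 27
timeouts:</p>

<p><blockquote><pre>
#!/usr/bin/env python
# Sketch: sum the third per-op field (the timeout counter) for every
# NFS mount listed in /proc/self/mountstats. If the totals keep
# growing between runs, the mount point is probably hanging.
import re

def nfs_timeouts(path='/proc/self/mountstats'):
    timeouts = {}
    mountpoint = None
    for line in open(path):
        m = re.match(r'device \S+ mounted on (\S+) with fstype nfs', line)
        if m:
            mountpoint = m.group(1)
            timeouts[mountpoint] = 0
            continue
        if line.startswith('device '):
            mountpoint = None  # some other file system type
            continue
        # Per-op lines look like "WRITE: 8479010 8494376 22 ...", where
        # the third number is the timeout count for that operation.
        m = re.match(r'\s*([A-Z]+):((?:\s+\d+)+)\s*$', line)
        if mountpoint and m:
            fields = m.group(2).split()
            if len(fields) >= 3:
                timeouts[mountpoint] += int(fields[2])
    return timeouts

if __name__ == '__main__':
    for mount, count in sorted(nfs_timeouts().items()):
        print('%s: %d timeouts' % (mount, count))
</pre></blockquote></p>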
<p>The only way I have been able to get working on Debian and Red Hat
Enterprise Linux for getting the timeout count is to peek in /proc/.
According to the
<a href="http://docs.oracle.com/cd/E19253-01/816-4555/netmonitor-12/index.html">Solaris
10 System Administration Guide: Network Services</a>, the 'nfsstat -c'
command can be used to get these timeout values. But this does not work
on Linux, as far as I can tell. I
<a href="http://bugs.debian.org/857043">asked Debian about this</a>,
but have not seen any replies yet.</p>
<p>Is there a better way to figure out if a Linux NFS client is
experiencing NFS hangs? Is there a way to detect which processes are
affected? Is there a way to get the NFS mount going quickly once the
network problem causing the NFS hang has been cleared? I would very
much welcome some clues, as we regularly run into NFS hangs.</p>
<title>Debian Jessie, PXE and automatic firmware installation</title>
<link>http://people.skolelinux.org/pere/blog/Debian_Jessie__PXE_and_automatic_firmware_installation.html</link>
<guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Debian_Jessie__PXE_and_automatic_firmware_installation.html</guid>
<pubDate>Fri, 17 Oct 2014 14:10:00 +0200</pubDate>
<description><p>When PXE installing laptops with Debian, I often run into the
problem that the WiFi card requires some firmware to work properly.
And it has been a pain to fix this using preseeding in Debian.
Normally something more is needed. But thanks to
<a href="https://packages.qa.debian.org/i/isenkram.html">my isenkram
package</a> and its recent tasksel extension, it has now become easy
to do this using simple preseeding.</p>
<p>The isenkram-cli package provides tasksel tasks which will install
firmware for the hardware found in the machine (actually, requested by
the kernel modules for the hardware). (It can also install user space
programs supporting the hardware detected, but that is not the focus
of this story.)</p>
<p>To get this working in the default installation, two preseeding
values are needed. First, the isenkram-cli package must be installed
into the target chroot (aka the hard drive) before tasksel is executed
in the pkgsel step of the debian-installer system. This is done by
preseeding the base-installer/includes debconf value to include the
isenkram-cli package. The package name is then passed to debootstrap
for installation. With the isenkram-cli package in place, tasksel
will automatically use the isenkram tasks to detect hardware specific
packages for the machine being installed and install them, because
isenkram-cli contains tasksel tasks.</p>
<p>Second, one needs to enable the non-free APT repository, because
most firmware unfortunately is non-free. This is done by preseeding
the apt-mirror-setup step. This is unfortunate, but for a lot of
hardware it is the only option in Debian.</p>
<p>The end result is two lines needed in your preseeding file to get
firmware installed automatically by the installer:</p>
<p><blockquote><pre>
base-installer base-installer/includes string isenkram-cli
apt-mirror-setup apt-setup/non-free boolean true
</pre></blockquote></p>
<p>The current version of isenkram-cli in testing/jessie will install
both firmware and user space packages when using this method. That
version also does not work very well, so use version 0.15 or later.
Installing both firmware and user space packages might give you a bit
more than you want, so I decided to split the tasksel task in two, one
for firmware and one for user space programs. The firmware task is
enabled by default, while the one for user space programs is not.
This split is implemented in the package currently in unstable.</p>
<p>If you decide to give this a go, please let me know (via email) how
this recipe works for you. :)</p>
<p>So, I bet you are wondering, how can this work? First and
foremost, it works because tasksel is modular, and driven by whatever
files it finds in /usr/lib/tasksel/ and /usr/share/tasksel/. So the
isenkram-cli package places two files for tasksel to find. First there
is the task description file (/usr/share/tasksel/descs/isenkram.desc):</p>
<p><blockquote><pre>
Task: isenkram-packages
Description: Hardware specific packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific packages are
 proposed.
Test-new-install: show show
Packages: for-current-hardware

Task: isenkram-firmware
Description: Hardware specific firmware packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific firmware
 packages are proposed.
Test-new-install: mark show
Packages: for-current-hardware-firmware
</pre></blockquote></p>
<p>The key parts are the Test-new-install line, which indicates how the
task should be handled, and the Packages line referring to a script in
/usr/lib/tasksel/packages/. The scripts use other scripts to get a
list of packages to install. The for-current-hardware-firmware script
looks like this to list relevant firmware for the machine:</p>
<p><blockquote><pre>
isenkram-autoinstall-firmware -l
</pre></blockquote></p>
<p>With those two pieces in place, the firmware is installed by
tasksel during the normal d-i run. :)</p>
<p>If you want to test what tasksel will install when isenkram-cli is
installed, run <tt>DEBIAN_PRIORITY=critical tasksel --test
--new-install</tt> to get the list of packages that tasksel would
install.</p>
<p><a href="https://wiki.debian.org/DebianEdu/">Debian Edu</a> will be
piloting this feature, as isenkram is now used there to
install firmware, replacing the earlier scripts.</p>
<title>Scripting the Cerebrum/bofhd user administration system using XML-RPC</title>
<link>http://people.skolelinux.org/pere/blog/Scripting_the_Cerebrum_bofhd_user_administration_system_using_XML_RPC.html</link>
<guid isPermaLink="true">http://people.skolelinux.org/pere/blog/Scripting_the_Cerebrum_bofhd_user_administration_system_using_XML_RPC.html</guid>
<pubDate>Thu, 6 Dec 2012 10:30:00 +0100</pubDate>
<description><p>Where I work at the <a href="http://www.uio.no/">University of
Oslo</a>, we use the
<a href="http://sourceforge.net/projects/cerebrum/">Cerebrum user
administration system</a> to maintain users, groups, DNS, DHCP, etc.
I've known since the system was written that the server is providing
an <a href="http://en.wikipedia.org/wiki/XML-RPC">XML-RPC</a> API, but
I have never spent time trying to figure out how to use it, as we
always use the bofh command line client at work. Until today. I want
to script the updating of DNS and DHCP to make it easier to set up
virtual machines. Here are a few notes on how to use it with
Python.</p>
<p>I started by looking at the source of the Java
<a href="http://cerebrum.svn.sourceforge.net/viewvc/cerebrum/trunk/cerebrum/clients/jbofh/">bofh
client</a>, to figure out how it connected to the API server. I also
googled for Python examples on how to use XML-RPC, and found
<a href="http://tldp.org/HOWTO/XML-RPC-HOWTO/xmlrpc-howto-python.html">a
simple example in</a> the XML-RPC howto.</p>
<p>This simple example code shows how to connect, get the list of
commands (as a JSON dump), and how to get the information about the
user currently logged in:</p>
<blockquote><pre>
#!/usr/bin/env python
import getpass
import xmlrpclib

server_url = 'https://cerebrum-uio.uio.no:8000';
username = getpass.getuser()
password = getpass.getpass()
server = xmlrpclib.Server(server_url);
#print server.get_commands(sessionid)
sessionid = server.login(username, password)
print server.run_command(sessionid, "user_info", username)
result = server.logout(sessionid)
</pre></blockquote>
<p>Armed with this knowledge I can now move forward and script the DNS
and DHCP updates I wanted to do.</p>
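<p>Building on that example, a natural next step is a small wrapper
that keeps the session id around so several bofhd commands can be run
in one login. This is just a sketch using the calls shown above; the
actual command names for DNS and DHCP updates depend on the bofhd
setup and are not shown here, so "user_info" is used as a stand-in:</p>

<blockquote><pre>
#!/usr/bin/env python
# Sketch of a thin session wrapper around the Cerebrum/bofhd XML-RPC
# API, using only the login/run_command/logout calls from the example
# above.
import getpass
import xmlrpclib

class BofhdSession:
    def __init__(self, url, username, password):
        self.server = xmlrpclib.Server(url)
        self.sessionid = self.server.login(username, password)

    def run(self, command, *args):
        # run_command takes the session id, the command name and its arguments
        return self.server.run_command(self.sessionid, command, *args)

    def close(self):
        self.server.logout(self.sessionid)

if __name__ == '__main__':
    username = getpass.getuser()
    session = BofhdSession('https://cerebrum-uio.uio.no:8000',
                           username, getpass.getpass())
    try:
        print session.run("user_info", username)
    finally:
        session.close()
</pre></blockquote>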