From: Petter Reinholdtsen
Date: Wed, 1 Nov 2017 14:31:35 +0000 (+0100)
Subject: Wrap up post.
X-Git-Url: https://pere.pagekite.me/gitweb/homepage.git/commitdiff_plain/0e01e53026e5778894455d3b537730565a612288?ds=sidebyside;hp=a7ec90711e555e7df6cb6b899d00a873a72c6ff4

Wrap up post.
---

diff --git a/blog/data/2017-11-01-storage-fault-tolerance.txt b/blog/data/2017-11-01-storage-fault-tolerance.txt
index 065c2f380b..2b138231b0 100644
--- a/blog/data/2017-11-01-storage-fault-tolerance.txt
+++ b/blog/data/2017-11-01-storage-fault-tolerance.txt
@@ -1,6 +1,6 @@
 Title: Some notes on fault tolerant storage systems
 Tags: english, sysadmin, raid
-Date: 2017-11-01 15:30
+Date: 2017-11-01 15:35

 If you care about how fault tolerant your storage is, you might find
 these articles and papers interesting.  They have formed how I
@@ -15,7 +15,6 @@
 Reactions to Single Errors and Corruptions by Aishwarya Ganesan,
 Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi
 H. Arpaci-Dusseau
-

   • ZDNet Why RAID 5 stops working in 2009 by Robin Harris
   •
@@ -66,3 +65,7 @@
 are few options on Linux addressing all the identified issues.  Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own.  I wonder how next generation cluster
 file systems like Ceph do in this regard.

+
+Just remember, in the end, it does not matter how redundant or how
+fault tolerant your storage is, if you do not continuously monitor its
+status to detect and replace failed disks.
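The monitoring advice the commit adds can be illustrated with a small sketch. On Linux, md RAID reports array health in /proc/mdstat: a member device marked "(F)" has failed, and a status field like "[2/1]" or "[U_]" shows an array running with a missing member. The function below is a hypothetical example (not part of the post or any existing tool) that scans such output for arrays needing attention:

```python
import re

def find_degraded_arrays(mdstat_text):
    """Scan /proc/mdstat-style text and report arrays with failed or
    missing members.  "(F)" after a device marks a failed member; a
    status field like [2/1] means only one of two members is active."""
    problems = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r'^(md\d+)\s*:', line)
        if m:
            current = m.group(1)
            if '(F)' in line:
                problems.append((current, 'failed member'))
        elif current and '[' in line:
            # Status like [2/1]: fewer active members than configured.
            m = re.search(r'\[(\d+)/(\d+)\]', line)
            if m and int(m.group(2)) < int(m.group(1)):
                problems.append((current, 'degraded'))
            current = None
    return problems

# Sample /proc/mdstat content: md0 has a failed member, md1 is healthy.
sample = """\
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0](F)
      1953381376 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdd1[1] sdc1[0]
      976630336 blocks super 1.2 [2/2] [UU]
"""
print(find_degraded_arrays(sample))
# → [('md0', 'failed member'), ('md0', 'degraded')]
```

In practice such a check would read /proc/mdstat directly and run from cron or a monitoring system; mdadm's own --monitor mode provides the same service out of the box.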