From: Petter Reinholdtsen
Date: Wed, 1 Nov 2017 14:41:54 +0000 (+0100)
Subject: Generated.
X-Git-Url: https://pere.pagekite.me/gitweb/homepage.git/commitdiff_plain/9ffde319673a9def1725be16daf858412e54ec5c?ds=sidebyside

Generated.
---

diff --git a/blog/Some_notes_on_fault_tolerant_storage_systems.html b/blog/Some_notes_on_fault_tolerant_storage_systems.html
index 36e934abf1..5d39fa2b2a 100644
--- a/blog/Some_notes_on_fault_tolerant_storage_systems.html
+++ b/blog/Some_notes_on_fault_tolerant_storage_systems.html
@@ -55,7 +55,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -84,10 +84,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its

diff --git a/blog/index.html b/blog/index.html
index 7cee2eb902..2c5eead6fe 100644
--- a/blog/index.html
+++ b/blog/index.html
@@ -55,7 +55,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -84,10 +84,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its

diff --git a/blog/index.rss b/blog/index.rss
index 6f8847277d..ec8decc975 100644
--- a/blog/index.rss
+++ b/blog/index.rss
@@ -44,7 +44,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -73,10 +73,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its

diff --git a/blog/tags/english/english.rss b/blog/tags/english/english.rss
index 3b7c927594..a6a5a45601 100644
--- a/blog/tags/english/english.rss
+++ b/blog/tags/english/english.rss
@@ -44,7 +44,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -73,10 +73,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its

diff --git a/blog/tags/english/index.html b/blog/tags/english/index.html
index 42713416dd..4a61691103 100644
--- a/blog/tags/english/index.html
+++ b/blog/tags/english/index.html
@@ -61,7 +61,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -90,10 +90,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its

diff --git a/blog/tags/raid/index.html b/blog/tags/raid/index.html
index 4e20582cdd..5fecee26c5 100644
--- a/blog/tags/raid/index.html
+++ b/blog/tags/raid/index.html
@@ -61,7 +61,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -90,10 +90,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its

diff --git a/blog/tags/raid/raid.rss b/blog/tags/raid/raid.rss
index 6e9b628118..74aa6fcac2 100644
--- a/blog/tags/raid/raid.rss
+++ b/blog/tags/raid/raid.rss
@@ -44,7 +44,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -73,10 +73,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its

diff --git a/blog/tags/sysadmin/index.html b/blog/tags/sysadmin/index.html
index 349a787407..3c20b200f3 100644
--- a/blog/tags/sysadmin/index.html
+++ b/blog/tags/sysadmin/index.html
@@ -61,7 +61,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -90,10 +90,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its

diff --git a/blog/tags/sysadmin/sysadmin.rss b/blog/tags/sysadmin/sysadmin.rss
index 33f3ec11ed..1508371d7c 100644
--- a/blog/tags/sysadmin/sysadmin.rss
+++ b/blog/tags/sysadmin/sysadmin.rss
@@ -44,7 +44,7 @@ Hughes</li>

 <li>USENIX FAST'08 <a
 href="https://www.usenix.org/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/">An
-cAnalysis of Data Corruption in the Storage Stack</a> by
+Analysis of Data Corruption in the Storage Stack</a> by
 L. N. Bairavasundaram, G. R. Goodson, B. Schroeder,
 A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau</li>

@@ -73,10 +73,10 @@ redundant storage systems. Details matter. And unfortunately there
 are few options on Linux addressing all the identified issues. Both
 ZFS and Btrfs are doing a fairly good job, but have legal and
 practical issues on their own. I wonder how cluster file systems like
-Ceph do in this regard. After, all the old saying, you know you have
-a distributed system when the crash of a compyter you have never heard
-of stops you from getting any work done. The same holds true if fault
-tolerance do not work.</p>
+Ceph do in this regard. After all, there is an old saying: you know
+you have a distributed system when the crash of a computer you have
+never heard of stops you from getting any work done. The same holds
+true if fault tolerance does not work.</p>

 <p>Just remember, in the end, it does not matter how redundant, or how
 fault tolerant your storage is, if you do not continuously monitor its
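
The closing advice in the patched text above, that no amount of redundancy
helps unless the storage is continuously monitored for failed disks, is
easy to act on. Below is a minimal sketch of such a check, assuming Linux
software RAID (md): it parses /proc/mdstat, whose "[2/2] [UU]" style
status bitmap is standard, and flags any array with a hole in the bitmap.
The notify_admin() function is a hypothetical placeholder to wire into
whatever alerting is already in place.

#!/usr/bin/env python3
# Minimal sketch: flag degraded Linux software RAID (md) arrays by
# parsing /proc/mdstat.  A healthy two-disk mirror reports a status
# bitmap like "[2/2] [UU]", while a degraded one reports "[2/1] [U_]",
# so any "_" in the bitmap means a missing or failed member.
import re
import sys

def degraded_arrays(mdstat_text):
    """Return the names of md arrays whose status bitmap shows a hole."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        start = re.match(r'^(md\d+)\s*:', line)
        if start:
            # Remember which array the following status lines belong to.
            current = start.group(1)
            continue
        if current:
            bitmap = re.search(r'\[([U_]+)\]', line)
            if bitmap:
                if '_' in bitmap.group(1):
                    degraded.append(current)
                current = None
    return degraded

def notify_admin(arrays):
    # Hypothetical placeholder: hook this into mail, Nagios, Prometheus
    # or whatever alerting already exists.
    print('DEGRADED md arrays: %s' % ', '.join(arrays), file=sys.stderr)

if __name__ == '__main__':
    with open('/proc/mdstat') as f:
        bad = degraded_arrays(f.read())
    if bad:
        notify_admin(bad)
        sys.exit(1)
    print('all md arrays healthy')

Run it from cron or a systemd timer; the non-zero exit status on degraded
arrays makes it simple to chain into existing monitoring.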