Thanks, your explanation is very clear. I am planning to install a second 
hard disk in each node of my cluster.

-----Original Message-----
   >From: "Robert Latham"<[EMAIL PROTECTED]>
   >Sent: 2006-6-7 23:25:38
   >To: "Eric Zhang"<[EMAIL PROTECTED]>
   >Cc: "[email protected]"<[email protected]>
   >Subject: Re: [Pvfs2-users] Pvfs2 failover policy
   >On Sun, Jun 04, 2006 at 10:18:22PM +0800, Eric Zhang wrote:
   >> PVFS runs smoothly and everything is OK. But I want to know what
   >> will happen if any disk is damaged. I mean, if one of these disks
   >> fails, will all data be lost? How does PVFS2 deal with this situation? 
   >
   >PVFS2 deals with this situation the same way a RAID-0 array would:
   >there would be a loss of data, and you'd have to restore from
   >backups.  The common solution is to deploy RAID 1 on each PVFS2
   >storage node; then PVFS2 can sustain one disk failure per storage
   >node.
   >
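[Editor's note: a minimal sketch of the per-node RAID 1 setup suggested above, using Linux software RAID (mdadm). The device names /dev/sdb1 and /dev/sdc1 and the mount point /pvfs2-storage are illustrative assumptions, not PVFS2 defaults.]

```shell
# Mirror two disks into a single md device (hypothetical device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Put a filesystem on the mirror and mount it where the PVFS2
# storage space will live (example path)
mkfs.ext3 /dev/md0
mount /dev/md0 /pvfs2-storage

# Check array health and rebuild progress
cat /proc/mdstat
```

With this in place, either member disk on a storage node can fail without data loss, at the cost of halving that node's usable capacity.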
   >> have read the "pvfs-ha.pdf", but that kind of solution assumes my
   >> cluster nodes have redundant disks, which they don't. Does PVFS
   >> support a redundant storage policy? Just like RAID 1: when data
   >> arrives, we write it to two nodes and at the same time we write
   >> another copy of the data to two other nodes.  Thanks, any
   >> suggestions will be appreciated.
   >
   >If you don't want to pay for additional hardware and you don't want
   >to pay for enough storage to back up PVFS2, then you'll have to treat
   >PVFS2 as it was intended: as a fast scratch space for applications.
   >Commonly, data is staged onto PVFS2 before running an I/O-intensive
   >application, then shipped off to storage which is presumably
   >backed up.  
   >
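[Editor's note: a sketch of the staging workflow described above. The PVFS2 mount point /mnt/pvfs2 and the data paths are illustrative assumptions.]

```shell
# Stage input data from backed-up storage onto PVFS2 scratch space
cp -r /home/user/dataset /mnt/pvfs2/scratch/

# ... run the I/O-intensive application against /mnt/pvfs2/scratch ...

# Ship results back to backed-up storage, then free the scratch space
cp -r /mnt/pvfs2/scratch/results /home/user/
rm -rf /mnt/pvfs2/scratch
```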
   >Software-based redundancy is a lot harder to solve at the file-system
   >layer than it is at the device layer.  Specifically, it's a real
   >challenge to in effect write two copies of data without cutting
   >overall write performance in half. Several research efforts are
   >ongoing to deliver software redundancy with high performance, but
   >these efforts are still in early stages. 
   >
   >I hope this explanation is clear.  There is definitely a lot of demand
   >for software-based redundancy, and we're working on it. 
   >
   >==rob
   >
   >-- 
   >Rob Latham
   >Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
   >Argonne National Labs, IL USA                B29D F333 664A 4280 315B
   >

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
