Well, if a disk in the stripe dies and it doesn't have a mirror, the
whole metadevice goes into maintenance. At that point your file system
is inconsistent and you have major issues: the write that was broken up
across the stripe has failed.

If it was a temporary write error, the write gets retried a certain
number of times. If it succeeds within those retries, everything is
okay. Otherwise, you're back to the maintenance situation.
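
If you want to see what state things are in, metastat will show you.
A rough sketch, assuming SVM; the metadevice and slice names here
(d10, c0t3d0s0) are just placeholders:

  # Show the state of a metadevice (d10 is a placeholder name)
  metastat d10

  # On a mirror or RAID-5 metadevice, a component that failed on a
  # transient error can be re-enabled once the disk is healthy again:
  metareplace -e d10 c0t3d0s0

  # A bare stripe has no redundant copy to resync from, so recovery
  # there really means restoring the file system from backup.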

The key risk with striping alone is that without mirroring or RAID-5
you end up with corruption: in a four-disk stripe, 25% of your storage
disappears when one disk fails, because every fourth chunk of the
metadevice lived on that disk. Any volume manager configured the same
way would behave the same.
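
To put a number on that, stripe chunks rotate round-robin across the
disks, so you can work out which disk any offset lands on. A throwaway
illustration (the 32k interlace and the offset are made-up values, not
anything from your setup):

  # Which disk (0-3) does a byte offset land on in a 4-wide stripe?
  offset=98304            # example: 96k into the metadevice
  interlace=32768         # assume a 32k interlace for illustration
  width=4                 # 4-disk stripe
  echo $(( (offset / interlace) % width ))   # prints 3: the 4th disk

Every fourth chunk maps to the failed disk, which is why the data left
on disks 1, 2 and 3 doesn't buy you a usable file system.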

If you had your mirrors striped or your stripes mirrored, you would
have the redundancy to ride out a disk failure. If you were doing
RAID-5, you'd have parity to reconstruct the data from the bad disk,
but you'd still need a spare disk to rebuild the component onto.
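
For what it's worth, here's a rough sketch of what a mirrored-stripe
(RAID 1+0) layout with a hot spare looks like in SVM. The metadevice
names, slices, and interlace are made up; adjust for your own disks:

  # Two stripes, each across two disks, 64k interlace
  metainit d11 1 2 c1t0d0s0 c1t1d0s0 -i 64k
  metainit d12 1 2 c2t0d0s0 c2t1d0s0 -i 64k

  # One-way mirror on the first stripe, then attach the second
  # stripe as the other submirror (this kicks off a resync)
  metainit d10 -m d11
  metattach d10 d12

  # Hot spare pool with one spare slice, associated with both
  # submirrors so a failed component gets swapped in automatically
  metainit hsp001 c3t0d0s0
  metaparam -h hsp001 d11
  metaparam -h hsp001 d12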

I hope that makes sense.


--- Atul Vidwansa <[EMAIL PROTECTED]> wrote:

> Even if the metadevice goes into maintenance and I recover the failed
> disk somehow, what will be the status of the failed write? Will disks
> 1, 2 and 3 have the data stripe or not?
> 
> Regards,
> -Atul
> 
> On 4/3/07, Octave Orgeron <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > Without looking at the code, but speaking from experience the write
> > would be retried. If the block is bad, it will get marked as bad and
> > the write will happen elsewhere on the disk. If the disk is bad,
> > meaning it's not replying or we're getting too many I/O errors, the
> > disk will go into maintenance mode in SVM. However, because you have
> > a stripe setup, you better have mirrors and a hot spare disk.
> > Otherwise, you're toast! Meaning, the entire metadevice will go into
> > maintenance mode because you're missing an element that doesn't have
> > a mirror. So while it's cheap to stripe, it's unwise to do so without
> > mirroring and having a hot spare setup.
> >
> > Octave
> >
> > --- Atul Vidwansa <[EMAIL PROTECTED]> wrote:
> >
> > > Hi,
> > >     If SVM or Shared QFS is striping data across multiple disks
> > > and one of the disks replies with an I/O error, how is that error
> > > handled? In case the error is persistent, will SVM or Shared QFS
> > > roll back that transaction?
> > >     For example, let's say I am striping across 4 disks using SVM.
> > > While writing a full stripe across 4 disks, disks 1, 2 and 3 report
> > > success whereas disk 4 reports an I/O error. If no mirror is
> > > specified for this diskset, how is this error handled? Will SVM
> > > revert back to the old copy of data on disks 1, 2 and 3?
> > >
> > > Regards,
> > > -Atul
> > >
> > > PS: can someone point me to the code in SVM in OpenSolaris where
> > > such a condition is handled?


*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
Octave J. Orgeron
Solaris Systems Engineer
http://www.opensolaris.org/os/community/sysadmin/
http://unixconsole.blogspot.com
[EMAIL PROTECTED]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*


 