On Sun July 27 2008 17:16, Василишин Андрей wrote:
> 
> [EMAIL PROTECTED] writes:
> > Hello Andrey,
> >
> >   
> >> I did what I wanted :)  The first thing I saw was that sendfile on nginx 
> >> doesn't work.
> >>     
> >
> > I guess you forgot to apply splice-2.6.23.patch and to enable
> > CONFIG_AUFS_SPLICE_PATCH (if your kernel is linux-2.6.23 or later).
> >   
> Thanks for the patch :)
> > (from aufs README)
> > - splice-2.6.23.patch
> >   For linux-2.6.23 and later.
> >   When you use splice(2) (sendfile(2) in linux-2.6.23 and later), or
> >   loopback-mount an fs-image file in aufs, this patch is
> >   required. Aufs doesn't support splice(2) in linux-2.6.22 and
> >   earlier.
> >   If unionfs patch v2.3 or later is applied to your kernel, then you
> >   don't need this patch.
> >
> >
> > I thought that RAID5 can recover data after ONE drive failure. Was what
> > you experienced a bug in software RAID?
> >   
> 
> It was software RAID. When I did a reshape from 8 disks to 10, my server 
> rebooted, and in dmesg I saw that 4 of the 10 disks had failed.
> 
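For reference, the patch step described in the quoted README might look roughly 
like this (a sketch only; the patch file name comes from the aufs README above, 
and the kernel tree path is a placeholder):

```shell
# From inside your kernel source tree (path is illustrative):
cd /usr/src/linux-2.6.23
patch -p1 < /path/to/aufs/splice-2.6.23.patch

# Then enable CONFIG_AUFS_SPLICE_PATCH in the aufs build configuration
# (e.g. via 'make menuconfig' for an in-tree build) and rebuild.
```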

From the earlier posts, it sounds like something is getting overlooked here. ;)

RAID does not have to be run with physical devices as the underlying storage;
it can, for instance, be run on top of other RAID devices.

Starting at the machine, do:
Two (or three) machine buses;
One controller card per bus (total of two (or three) controller cards);
Split the storage devices into two (or three) groups;
Do not share cabling or power among the groups;

Now define RAID1 drive arrays, using one drive from each of the two (or three) 
groups;
That gives you (with either 8 or 9 drives) - 
either four 2-for-1 RAID1 arrays (8 drives) or three 3-for-1 RAID1 arrays (9 
drives);
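With mdadm, the 3-for-1 case might be sketched like this (device names are 
hypothetical; each array takes one member from each of the three controller 
groups):

```shell
# One partition from each controller group per mirror (names illustrative):
mdadm --create /dev/md10 --level=1 --raid-devices=3 /dev/sda1 /dev/sdd1 /dev/sdg1
mdadm --create /dev/md11 --level=1 --raid-devices=3 /dev/sdb1 /dev/sde1 /dev/sdh1
mdadm --create /dev/md12 --level=1 --raid-devices=3 /dev/sdc1 /dev/sdf1 /dev/sdi1
```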

Now you have a choice -
You can join those four (or three) arrays with your choice of another type of 
RAID (the /dev/mdxx arrays are the underlying storage for this level, not the 
/dev/sdxx devices);
Or:
You can join them (the /dev/mdxx arrays) with LVM2 and specify how you want 
the storage used.
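The LVM2 route might be sketched as follows (the volume group and logical 
volume names, the size, and the filesystem choice are all made up for 
illustration):

```shell
# The RAID1 arrays, not the raw disks, become the physical volumes:
pvcreate /dev/md10 /dev/md11 /dev/md12
vgcreate vg_mirror /dev/md10 /dev/md11 /dev/md12

# Carve out storage from the pool and put a filesystem on it:
lvcreate -L 400G -n lv_data vg_mirror
mkfs.ext3 /dev/vg_mirror/lv_data
```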

With 2-for-1 RAID1 - you will need a way to be notified, and to manually 
intervene, on failure of a drive.
With 3-for-1 RAID1 - the system will put the 'spare' on-line automatically.
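For the notification part, mdadm's own monitor mode can mail you on failure 
events (the address is a placeholder):

```shell
# Watch all arrays from /etc/mdadm.conf and send mail on events
# such as Fail or DegradedArray:
mdadm --monitor --scan --daemonise --mail=admin@example.com
```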
That gives you (as seen from the LVM2 device) 3*418G storage (three groups) -
each of which requires a triple-failure before you lose anything;
and requires a double-failure before you have to manually intervene;
a single-failure only takes you down (on 1/3 of the storage) from 3-for-1 to 
2-for-1 RAID1.

Loss of a machine bus, interface controller, or drive cable is a single-point 
failure that
can take all three groups down from 3-for-1 to 2-for-1 RAID1.

Use quality equipment, and you can run that for years without data loss.

No aufs involved anywhere.  All of that runs at the device mapper level of the 
system.
