Just a comment: it sure would be nice if LVM supported a RAID-5 or raidz
style RAID within the volume manager, so we didn't have to put LVM on top
of md*.  It sure would also be nice to have native ZFS on Linux. :(  For
these reasons I'm considering Nexenta and BackupPC, as well as a faster
network stack in the Solaris kernel, or FreeBSD, because FreeBSD rocks!
You can put a BackupPC/FreeBSD/ZFS install on a 128MB CF card and use all
your hard disk space for ZFS raidz volumes!

On Wed, Feb 27, 2008 at 3:35 PM, David Rees <[EMAIL PROTECTED]> wrote:

> On Wed, Feb 27, 2008 at 2:54 AM, Tomasz Chmielewski <[EMAIL PROTECTED]>
> wrote:
> >  Stripe size is 64k.
> >  Also, the filesystem was created with just "mkfs.ext3 -j /dev/sdX", so
> >  without the stride option (or other useful options, like online
> >  resizing, which is enabled by default only in recent releases of
> >  e2fsprogs).
> >
> >  On the other hand, using "stride" is a bit unclear to me.
> >
> >  Although you can somehow calculate it if you place your fs directly
> >  on a RAID array:
> >
> >    stride=stripe-size
> >        Configure the filesystem for a RAID array with stripe-size
> >        filesystem blocks per stripe.
> >
> >  It is a bit harder if you have LVM on your RAID, I guess.
>
> Right - I don't know if LVM "offsets" the filesystem layout any. If
> you ever moved the array or reshaped the array using LVM, then you'd
> lose the performance benefits of having an optimal stride setting.
>
> IIRC, xfs automagically uses the correct stride if the RAID is a local
> array.
>
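
For what it's worth, here's a minimal sketch of the stride arithmetic for
the setup described above, assuming ext3's default 4kB block size (and
reusing the /dev/sdX placeholder from earlier in the thread):

  # stride = stripe size / filesystem block size
  # 64kB stripe / 4kB block = 16 filesystem blocks per stripe
  mkfs.ext3 -j -E stride=16 /dev/sdX
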
> >  But as I looked at dumpe2fs output (and HDD LEDs blinking), everything
> >  is rather scattered among all disks.
>
> Which doesn't tell us much. The point of the stride setting is to
> avoid splitting small writes across multiple stripes.
>
> For example, if a small write spans two stripes, you have to
> read/write two 64kB stripes in your case. By aligning writes, you
> avoid this and only have to read/write a single 64kB stripe. It's
> pretty easy to see how this might affect performance.
>
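
To make the alignment point concrete, a quick back-of-the-envelope check
(the offsets are made up for illustration): an 8kB write starting at byte
offset 60kB spans stripes 0 and 1, while the same write at an aligned
offset stays inside a single 64kB stripe:

  # first and last stripe touched by an 8kB write at offset 60kB,
  # with 64kB stripes
  echo $(( 60*1024 / (64*1024) )) $(( (60*1024 + 8*1024 - 1) / (64*1024) ))
  # prints "0 1" -- two stripes, so two read-modify-write cycles
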
> >  Hey, I just disabled the internal bitmap in RAID-5 and it seems things
> >  are much faster now - this is "iostat sda -d 10" output without the
> >  internal bitmap.
>
> Doh, I forgot about RAID-5 bitmaps. I did a quick search and it
> appears that bitmaps can really kill performance. A bitmap does
> prevent a full resync after a crash, but I don't think it's worth it
> in your case.
>
> It might be worth posting back to the thread on LKML (and cc
> linux-raid) to see if there are any known workarounds if you want to
> try to keep bitmaps enabled.
>
> -Dave
>
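On the bitmap question: one workaround I believe works (untested here,
and /dev/md0 is just a placeholder) is to keep the internal bitmap but
give it a much larger chunk size, so it gets updated far less often.
The bitmap can be added and removed on a live array, so it's easy to
benchmark both ways:

  # drop the internal bitmap entirely
  mdadm --grow --bitmap=none /dev/md0
  # or re-add it with a much larger chunk (value in kB, so 64MB here)
  mdadm --grow --bitmap=internal --bitmap-chunk=65536 /dev/md0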