Neil Brown wrote:
2.6.12 does support reducing the number of drives in a raid1, but it
will only remove drives from the end of the list. e.g. if the
state was
58604992 blocks [3/2] [UU_]
then it would work. But as it is
58604992 blocks [3/2] [_UU]
it won't. You could fai
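The sentence is cut off above, but it presumably continues with failing a drive. A hedged sketch of the usual mdadm sequence for shrinking a raid1, with device names invented purely for illustration:

```shell
# Sketch only: /dev/sdc1 is a made-up member device, and whether
# --grow can shrink an array whose empty slot is not at the end
# depends on the kernel (Neil notes 2.6.12 only drops devices
# from the end of the list).

# Mark a member faulty, then pull it out of the array:
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1

# Ask md to reduce the member count of the array:
mdadm --grow /dev/md0 --raid-devices=2

# Verify the new state:
cat /proc/mdstat
```

These are destructive operations on a live array, so the exact order and slot layout should be checked against /proc/mdstat first.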
Iustin Pop wrote:
> Maybe it's because md doesn't support barriers whereas the disks
> support them? In this case some filesystems, for example XFS, will work
> faster on raid1 because they can't force a flush to disk using
> barriers.
It's an ext3 partition, so I guess that doesn't apply?
I t
On Sat, Sep 15, 2007 at 02:18:19PM +0200, Goswin von Brederlow wrote:
> Shouldn't it be the other way around? With a barrier the filesystem
> can enforce an order on the data written and can then continue writing
> data to the cache. More data is queued up for write. Without barriers
> the filesyst
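Goswin's point can be illustrated with a toy model (none of these names are kernel APIs): a journalling filesystem must order its commit record after the journal data. With barriers it expresses that ordering inside the queue and keeps submitting; without them it has to flush and wait before every commit.

```python
# Toy model of write ordering, not a model of the real block layer.

def writes_with_barriers(transactions):
    """Queue everything; ordering is carried by barrier markers."""
    queue, stalls = [], 0
    for data, commit in transactions:
        queue.append(data)
        queue.append("BARRIER")   # device must not reorder across this
        queue.append(commit)      # safe to queue immediately
    return queue, stalls          # the filesystem never blocked

def writes_without_barriers(transactions):
    """No barriers: flush and wait before each commit record."""
    queue, stalls = [], 0
    for data, commit in transactions:
        queue.append(data)
        queue = []                # flush: wait for the device to drain
        stalls += 1               # the filesystem blocked here
        queue.append(commit)
    return queue, stalls

txs = [("data%d" % i, "commit%d" % i) for i in range(3)]
_, s1 = writes_with_barriers(txs)
_, s2 = writes_without_barriers(txs)
print(s1, s2)  # 0 stalls with barriers, one per transaction without
```

In this sketch the barrier path accumulates no stalls, which is the "other way around" Goswin is arguing: barriers should help throughput, not hurt it.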
Iustin Pop <[EMAIL PROTECTED]> writes:
> On Sat, Sep 15, 2007 at 12:28:07AM -0500, Jordan Russell wrote:
>> (Kernel: 2.6.18, x86_64)
>>
>> Is it normal for an MD RAID1 partition with 1 active disk to perform
>> differently from a non-RAID partition?
>>
>> md0 : active raid1 sda2[0]
>> 8193024 blocks [2/1] [U_]
On Sat, Sep 15, 2007 at 12:28:07AM -0500, Jordan Russell wrote:
> (Kernel: 2.6.18, x86_64)
>
> Is it normal for an MD RAID1 partition with 1 active disk to perform
> differently from a non-RAID partition?
>
> md0 : active raid1 sda2[0]
> 8193024 blocks [2/1] [U_]
>
> I'm building a search
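The `[2/1] [U_]` counters quoted above encode total versus active members. A small sketch (regex and function name are my own, not part of any md tool) that spots a degraded array from such a status line:

```python
import re

def degraded(mdstat_line):
    """Return (total, active) if the array is missing members, else None.

    Matches the "[total/active] [U_...]" portion of an mdstat status
    line, e.g. "8193024 blocks [2/1] [U_]".
    """
    m = re.search(r"\[(\d+)/(\d+)\] \[([U_]+)\]", mdstat_line)
    if not m:
        return None
    total, active = int(m.group(1)), int(m.group(2))
    return (total, active) if active < total else None

print(degraded("8193024 blocks [2/1] [U_]"))      # (2, 1): degraded
print(degraded("58604992 blocks [3/3] [UUU]"))    # None: healthy
```

For Jordan's array this reports a two-device raid1 running on a single disk, which is exactly the configuration whose performance is being questioned in the thread.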