Neil Brown wrote:
On Monday November 27, [EMAIL PROTECTED] wrote:
using reiserfs over raid5 with 5 disks. This is unnecessarily suboptimal; parity
writes should only be 20% of the disk bandwidth. Comments?
Is there a known reason why reiserfs over raid5 is way worse than
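A quick sanity check of that 20% figure (my own arithmetic, not quoted from the
thread): for full-stripe writes on an n-disk RAID-5, one parity block is written
for every n-1 data blocks, so

    \text{parity fraction} = \frac{1}{n} = \frac{1}{5} = 20\%

Small partial-stripe writes instead need a read-modify-write of both data and
parity (four I/Os per logical write), which is one plausible reason a filesystem
whose writes do not fill whole stripes fares much worse.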
On Tue, Nov 28, 2000 at 11:46:10AM -0700, Hans Reiser wrote:
We need vm to push the FS which pushes raid, [..]
This just happens.
[..] and the FS needs to know about
or even contain the stripe cache.
That's not a blkdev layer issue but a FS-LVM/RAID issue.
What's suboptimal at the moment
On Tuesday November 28, [EMAIL PROTECTED] wrote:
Hi,
I'm forwarding the message to you guys because I got no answer from Ingo.
Thanks.
I would suggest always CCing to [EMAIL PROTECTED]. I have
taken the liberty of CCing this reply there.
-- Forwarded message --
Date:
snip
If I understood correctly, bh->b_rsector is used to check whether the sector
number of the request being processed falls inside the resync range.
If it does, the request sleeps waiting for the resync daemon. Otherwise, it can
send the operation to the lower-level block device(s).
The
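A minimal sketch of the check being described, as I read it; this is an
illustration, not the actual md.c code, and resync_start, resync_end and
wait_for_resync() are invented names standing in for whatever md really uses:

#include <linux/fs.h>   /* struct buffer_head */

/* Illustration only -- not the real md code. */
static void md_submit(int rw, struct buffer_head *bh)
{
	unsigned long sector = bh->b_rsector;   /* sector within the array */

	if (sector >= resync_start && sector < resync_end) {
		/* The request overlaps the window the resync daemon is
		 * currently rebuilding: sleep until it has moved past. */
		wait_for_resync(sector);
	}

	/* Outside the window (or after waiting), hand the request down
	 * to the lower-level block device(s). */
	generic_make_request(rw, bh);
}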
Neil Brown wrote:
On Tuesday November 28, [EMAIL PROTECTED] wrote:
Hi Neil!
I did a raidsetfaulty - raidhotremove - raidhotadd sequence and hit an Oops.
I refer to a posting of yours about raid5 where you suggested this to a guy
and so I thought this should work, but I always get an
On Tuesday November 28, [EMAIL PROTECTED] wrote:
snip
If I understood correctly, bh->b_rsector is used to check whether the sector
number of the request being processed falls inside the resync range.
If it does, the request sleeps waiting for the resync daemon. Otherwise, it can
send
After a reboot... (with the push-button)
www-neu:/root # cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hda1[0]
51264 blocks [2/1] [U_]
md1 : active raid1 hdc2[1] hda2[0]
9970560 blocks [2/2] [UU]
[] resync = 21.0%
Friedrich Lobenstock wrote:
My guess is that hdc1 was the "current drive" (->last_used) when you
did that setfaulty, and still was when you did the raidhotadd, and trying
to rebuild from that drive causes the Oops.
If I am right, then you should get this result about half the time
that
On Tuesday November 28, [EMAIL PROTECTED] wrote:
On Tue, Nov 28, 2000 at 10:50:06AM +1100, Neil Brown wrote:
However, there is only one "unplug-all-devices"(*) call in the API
that a reader or writer can make. It is not possible to unplug a
particular device, or better still, to
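For reference, the 2.4-era "unplug all devices" idiom I believe is being
referred to is the tq_disk task-queue flush below; there is no per-device
variant, which is exactly the limitation being pointed out (sketch for
illustration only):

#include <linux/blkdev.h>   /* tq_disk */
#include <linux/tqueue.h>   /* run_task_queue() */

/* Flush the plugged request queues of *all* block devices.  There is no
 * call that unplugs only the one device a reader is actually waiting on. */
static void unplug_everything(void)
{
	run_task_queue(&tq_disk);
}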
Linus,
A couple of versions of this patch went into Alan's tree, but weren't
quite right. This one is minimal, but works.
The problem is that, with the tidying-up of xor.o, it now auto-initialises
itself instead of being called by raid.o, and so it needs to be linked
*before* md.o, as the
Linus,
I sent this patch to Alan a little while ago, but after ac4, so I
don't know if it went into his tree.
There is a bit of code at the front of raid5_sync_request which
calculates which block is the parity block for a given stripe.
However, to convert from a block number (1K units)
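As a sketch of the kind of calculation being described (my own illustration of
the common left-asymmetric layout, not the actual raid5.c code; the 1K-block to
512-byte-sector conversion is the factor of two):

/* Sketch only: pick the parity disk for a stripe under a RAID-5
 * "left asymmetric" layout.  'block' is in 1K units, sectors are 512
 * bytes, hence the multiplication by two. */
static int parity_disk(unsigned long block, int raid_disks, int chunk_kb)
{
	unsigned long sector = block * 2;                /* 1K blocks -> sectors */
	unsigned long chunk  = sector / (chunk_kb * 2);  /* chunk index on the array */
	unsigned long stripe = chunk / (raid_disks - 1); /* stripe number */

	/* Parity rotates down the disks, one position per stripe. */
	return (raid_disks - 1) - (int)(stripe % raid_disks);
}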
Linus,
There are currently two ways to get md/raid devices configured at boot
time.
AUTODETECT_RAID finds bits of raid arrays from partition types and
automagically connects them together
MD_BOOT allows bits of raid arrays to be explicitly described on the
boot line.
Currently,
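To make the contrast concrete, here is roughly what the two mechanisms look
like in practice; the md= syntax below is from memory of Documentation/md.txt,
so treat it as illustrative:

# MD_BOOT: describe the array explicitly on the kernel command line
# (persistent-superblock form: md=<md device number>,dev0,dev1,...)
md=0,/dev/hda1,/dev/hdc1

# AUTODETECT_RAID: no boot parameter at all; instead each component
# partition is given type 0xfd ("Linux raid autodetect") and the kernel
# assembles the arrays from their persistent superblocks at boot.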
Linus,
md currently has two #defines which give a limit to the number of
devices that can be in a given raid array:
MAX_REAL (==12) dates back to the time before we had persistent
superblocks, and mostly affects raid0
MD_SB_DISKS (==27) is a characteristic of the newer persistent
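Spelled out with the values quoted above (the comments are my own annotation):

/* The two limits as quoted in the mail. */
#define MAX_REAL     12   /* pre-persistent-superblock limit, mostly affects raid0 */
#define MD_SB_DISKS  27   /* limit set by the persistent superblock format */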
Linus,
This is a resend of a patch that probably got lost a week or so ago.
(It is also more grammatically correct.)
If md.c has two raid arrays that need to be resynced, and they share
a physical device, then the two resyncs are serialised. However, the
message printed says something