Neil Brown wrote:
> On Sunday April 30, [EMAIL PROTECTED] wrote:
> > NeilBrown <[EMAIL PROTECTED]> wrote:
> > >
> > > When an md array has been idle (no writes) for 20msecs it is marked as
> > > 'clean'. This delay turns out to be too short for some real
> > > workloads. So increase it to 200msec (the time to update the metadata
> > > should be a tiny fraction of that) and make it sysfs-configurable.
Neil Brown <[EMAIL PROTECTED]> wrote:
>
> On Sunday April 30, [EMAIL PROTECTED] wrote:
> > NeilBrown <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > > When an md array has been idle (no writes) for 20msecs it is marked as
> > > 'clean'. This delay turns out to be too short for some real
> > > workloads. So increase it to 200msec (the time to update the metadata
> > > should be a tiny fraction of that) and make it sysfs-configurable.
On Sunday April 30, [EMAIL PROTECTED] wrote:
> NeilBrown <[EMAIL PROTECTED]> wrote:
> >
> >
> > When an md array has been idle (no writes) for 20msecs it is marked as
> > 'clean'. This delay turns out to be too short for some real
> > workloads. So increase it to 200msec (the time to update the metadata
> > should be a tiny fraction of that) and make it sysfs-configurable.
NeilBrown <[EMAIL PROTECTED]> wrote:
>
>
> When an md array has been idle (no writes) for 20msecs it is marked as
> 'clean'. This delay turns out to be too short for some real
> workloads. So increase it to 200msec (the time to update the metadata
> should be a tiny fraction of that) and make it sysfs-configurable.
NeilBrown wrote:
There is a lot of commonality between raid5.c and raid6main.c. This
patch merges both into one module called raid456. This saves a lot
of code, and paves the way for online raid5->raid6 migrations.
There is still duplication, e.g. between handle_stripe5 and
handle_stripe6.
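A rough sketch of the kind of consolidation this describes (illustrative names only, not the actual raid456 code): one stripe handler dispatching on the array's level instead of two near-identical raid5/raid6 copies.

/* Sketch only: a single stripe handler parameterized by RAID level,
 * rather than separate handle_stripe5/handle_stripe6 copies. */
struct stripe_head;                     /* opaque for this illustration */
struct r56_conf { int level; };         /* hypothetical per-array state */

static void handle_parity_p(struct stripe_head *sh)  { /* RAID5: single XOR parity */ }
static void handle_parity_pq(struct stripe_head *sh) { /* RAID6: P and Q syndromes */ }

static void handle_stripe(struct r56_conf *conf, struct stripe_head *sh)
{
    /* request handling, cache management, etc. shared by both levels ... */
    if (conf->level == 6)
        handle_parity_pq(sh);   /* extra syndrome for double-failure tolerance */
    else
        handle_parity_p(sh);
}

A single dispatch point like this is also what makes an online raid5 to raid6 migration plausible: the level can change while the shared machinery stays loaded.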
The "industry standard" DDF format allows for a stripe/offset layout
where data is duplicated on different stripes. e.g.
A B C D
D A B C
E F G H
H E F G
(columns are drives, rows are stripes, LETTERS are chunks of data).
This is similar to raid10's 'far' mode, but not quite the same.
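To make the diagram concrete, the following small program (the drive/stripe numbering is my own, not the kernel's raid10 code) computes where the two copies of each chunk land: the second copy sits on the very next stripe, shifted one drive to the right, whereas 'far' mode keeps the second copies in a separate zone much further down each drive.

#include <stdio.h>

/* Illustration of the stripe/offset layout in the diagram above.
 * Chunk k's first copy goes in its natural column; the second copy goes
 * on the following stripe, rotated one drive to the right. */
static void locate_chunk(int k, int ndrives, int copy, int *drive, int *stripe)
{
    int col   = k % ndrives;    /* natural column of chunk k */
    int group = k / ndrives;    /* which pair of stripes it occupies */

    *stripe = 2 * group + copy;             /* copy 0, then copy 1 right below */
    *drive  = (col + copy) % ndrives;       /* second copy shifted by one drive */
}

int main(void)
{
    /* Reproduce the 4-drive diagram: chunk 0 is 'A', 1 is 'B', and so on. */
    for (int k = 0; k < 8; k++)
        for (int copy = 0; copy < 2; copy++) {
            int drive, stripe;
            locate_chunk(k, 4, copy, &drive, &stripe);
            printf("chunk %c copy %d -> drive %d, stripe %d\n",
                   'A' + k, copy, drive, stripe);
        }
    return 0;
}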
For a while we have had checkpointing of resync.
The version-1 superblock allows recovery to be checkpointed
as well, and this patch implements that.
Due to early carelessness we need to add a feature flag
to signal that the recovery_offset field is in use, otherwise
older kernels would assume that a partially recovered device is in fact fully recovered.
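The compatibility issue is that the new on-disk field only means something when the writer actually set it, so a reader has to check a feature bit before trusting it. A minimal sketch of that idea follows; the field and flag names are simplified stand-ins modelled on the version-1 superblock, not its exact definitions.

#include <stdint.h>

#define FEATURE_RECOVERY_OFFSET (1u << 0)   /* assumed flag name for this sketch */

struct sb_v1 {
    uint32_t feature_map;       /* bits describing which optional fields are in use */
    uint64_t recovery_offset;   /* sectors already recovered on this device */
    uint64_t dev_sectors;       /* total data sectors on this device */
};

/* How many sectors may safely be treated as already recovered. */
static uint64_t recovered_sectors(const struct sb_v1 *sb)
{
    if (sb->feature_map & FEATURE_RECOVERY_OFFSET)
        return sb->recovery_offset;     /* resume recovery from the checkpoint */

    /* No feature bit: the superblock carries no checkpoint, so the device is
     * treated as fully in-sync.  A kernel that wrote recovery_offset without
     * setting the bit would mislead older readers in exactly this way, which
     * is why the flag is needed. */
    return sb->dev_sectors;
}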
From: Adrian Bunk <[EMAIL PROTECTED]>
This patch makes the needlessly global md_print_devices() static.
Signed-off-by: Adrian Bunk <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/md.c | 7 +--
./include/linux/raid/md.h | 4 --
The largest chunk size the code can support without substantial
surgery is 2^30 bytes, so make that the limit instead of an arbitrary
4Meg.
Some day, the 'chunksize' should change to a sector-shift
instead of a byte-count. Then no limit would be needed.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
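A chunk size kept as a byte count in 32-bit arithmetic runs out of headroom around 2^30, which is presumably where the limit above comes from; a sector-shift (log2 of the chunk size in 512-byte sectors) fits in one byte and imposes no such ceiling. A quick illustration, with made-up names rather than md's own fields:

#include <stdint.h>

/* Byte-count form: a 32-bit field, so anything past 2^30 leaves little or no
 * room for intermediate arithmetic. */
static uint32_t chunk_bytes = 1U << 30;         /* 1 GiB, the new upper limit */

/* Sector-shift form: one byte covers every power-of-two size imaginable. */
static uint8_t chunk_sector_shift = 21;         /* 2^21 sectors * 512 B = 1 GiB */

static inline uint64_t chunk_size_bytes(uint8_t shift)
{
    return (uint64_t)512 << shift;              /* widened first, so no overflow */
}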
This warning was slightly useful back in 2.2 days, but is more of
an annoyance now. It makes it awkward to add new ioctls (not that we
are likely to do that in the current climate, but it is possible).
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/md.c |5 -
A recent change made this goto unnecessary, so reformat the
code to make it clearer what is happening.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid1.c | 34 +-
1 file changed, 17 insertions(+), 17 deletions(-)
diff ./driv
... as raid5 sync_request is WAY too big.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c | 244 ++-
1 file changed, 127 insertions(+), 117 deletions(-)
diff ./drivers/md/raid5.c~current~ ./drivers/md/raid5
When an md array has been idle (no writes) for 20msecs it is marked as
'clean'. This delay turns out to be too short for some real
workloads. So increase it to 200msec (the time to update the metadata
should be a tiny fraction of that) and make it sysfs-configurable.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
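The mechanism being tuned is essentially a 'safe mode' timer: each write marks the array dirty, and only once the array has been idle for the full delay is the superblock rewritten as clean. Below is a stripped-down userspace model of that pattern using the new 200msec default; it is an illustration, not md's actual code.

#include <stdbool.h>
#include <stdint.h>

struct array_state {
    bool     dirty;                 /* superblock currently says "dirty" */
    uint64_t last_write_ms;         /* timestamp of the most recent write */
    uint64_t safemode_delay_ms;     /* 20 proved too short; 200 is the new
                                       default, and it becomes sysfs-tunable */
};

static void on_write(struct array_state *a, uint64_t now_ms)
{
    a->dirty = true;                /* must be marked dirty before data lands */
    a->last_write_ms = now_ms;
}

static void on_tick(struct array_state *a, uint64_t now_ms)
{
    if (a->dirty && now_ms - a->last_write_ms >= a->safemode_delay_ms) {
        /* Idle long enough: rewrite the superblock as clean, so an unclean
         * shutdown from here on does not force a full resync. */
        a->dirty = false;
    }
}

With a 20msec window almost every pause between writes triggered another metadata update; 200msec keeps the exposure small while the metadata write itself remains a tiny fraction of the delay.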
On Monday May 1, [EMAIL PROTECTED] wrote:
> Hey folks.
>
> There's no point in using LVM on a raid5 setup if all you intend to do
> in the future is resize the filesystem on it, is there? The new raid5
> resizing code takes care of providing the extra space and then as long
> as the, say, ext3 filesystem
On Monday May 1, [EMAIL PROTECTED] wrote:
> Hello,
>
> Suppose a read action on a disk which is a member of a raid5 (or raid1 or any
> other raid where there's data redundancy) fails.
> What happens next is that the entire disk is marked as "failed" and a raid5
> rebuild is initiated.
>
> However,
On Mon, May 01, 2006 at 01:17:42AM +0200, Dick Snippe wrote:
> Suppose a read action on a disk which is a member of a raid5 (or raid1 or any
> other raid where there's data redundancy) fails.
> What happens next is that the entire disk is marked as "failed" and a raid5
> rebuild is initiated.
>
> However,
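The redundancy the posters point to does allow a gentler response in principle: in raid5 every stripe's data and parity XOR to zero, so a block that fails to read can be rebuilt from the corresponding blocks of all the surviving members. A minimal sketch of that arithmetic only, not of md's actual error-handling path:

#include <stddef.h>
#include <stdint.h>

/* Rebuild one missing block from the other blocks of its stripe (data and
 * parity together), using the RAID5 identity D0 ^ D1 ^ ... ^ P = 0. */
static void reconstruct_block(uint8_t *missing,
                              const uint8_t *const *others,
                              size_t nothers, size_t blocksize)
{
    for (size_t i = 0; i < blocksize; i++) {
        uint8_t x = 0;
        for (size_t j = 0; j < nothers; j++)
            x ^= others[j][i];
        missing[i] = x;             /* XOR of all survivors recreates the block */
    }
}

Whether the driver should then rewrite the reconstructed block in place, instead of failing the whole device and starting a rebuild, is exactly the policy question this thread raises.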