This patch changes the disk to be read for layout far > 1 to always be
the disk with the lowest block address.
Thus the chunks to be read will always be (for a fully functioning array)
from the first band of stripes, and the raid will then work as a raid0
consisting of the first band of stripes.
David Greaves wrote:
Jan Engelhardt wrote:
Feel free to argue that the manpage is clear on this - but as we know, not
everyone reads the manpages in depth...
That is indeed suboptimal (but I would not care since I know the
implications of an SB at the front)
Neil cares even
Jan Engelhardt wrote:
Feel free to argue that the manpage is clear on this - but as we know, not
everyone reads the manpages in depth...
That is indeed suboptimal (but I would not care since I know the
implications of an SB at the front)
Neil cares even less and probably doesn't even need
Jan Engelhardt wrote:
On Jan 29 2008 18:08, Bill Davidsen wrote:
IIRC there was a discussion a while back on renaming mdadm options
(google Time to deprecate old RAID formats?) and the superblocks
to emphasise the location and data structure. Would it be good to
introduce the new names at
On Feb 10 2008 10:34, David Greaves wrote:
Jan Engelhardt wrote:
On Jan 29 2008 18:08, Bill Davidsen wrote:
IIRC there was a discussion a while back on renaming mdadm options
(google Time to deprecate old RAID formats?) and the superblocks
to emphasise the location and data structure. Would
change that
ushers in broader benefit.
I acknowledge that I am only talking semantics - OTOH I think semantics can be a
very important aspect of communication.
David
PS I would love to send a patch to mdadm in - I am currently being heavily
nagged to sort out our house electrics and get lunch. It may
On Feb 10 2008 12:27, David Greaves wrote:
I do not see anything wrong by specifying the SB location as a metadata
version. Why should not location be an element of the raid type?
It's fine the way it is IMHO. (Just the default is not :)
There was quite a discussion about it.
For me the
).
What this patch does is just check the size of available memory,
and assign the appropriate, safe value to the initial max_nr_stripes:
either the hard-coded NR_STRIPES value, if it's safe in the sense that
we'll have some free memory available when using a stripe cache of that size,
or the calculated
On Jan 29 2008 18:08, Bill Davidsen wrote:
IIRC there was a discussion a while back on renaming mdadm options
(google Time to deprecate old RAID formats?) and the superblocks
to emphasise the location and data structure. Would it be good to
introduce the new names at the same time as
Bill Davidsen wrote:
David Greaves wrote:
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options
(google Time
to deprecate old RAID formats?) and the superblocks to emphasise the
location
and
Tim Southerwood wrote:
David Greaves wrote:
IIRC Doug Ledford did some digging wrt lilo + grub and found that 1.1 and 1.2
wouldn't work with them. I'd have to review the thread though...
David
-
For what it's worth, that was my finding too. -e 0.9+1.0 are fine with
GRUB, but 1.1 and 1.2
* The only raid level providing unfettered access to the underlying
filesystem is RAID1 with a superblock at its end, and it has been common
wisdom for years that you need a RAID1 boot partition in order to boot
anything at all.
Ah. This shines light on my problem...
The problem is that
This makes 1.0 the default sb type for new arrays.
Signed-off-by: Jan Engelhardt [EMAIL PROTECTED]
---
Create.c |6 --
super0.c |4 +---
super1.c |2 +-
3 files changed, 2 insertions(+), 10 deletions(-)
Index: mdadm-2.6.4/Create.c
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options (google Time
to deprecate old RAID formats?) and the superblocks to emphasise the location
and data structure. Would it be good to introduce the new
Peter Rabbitson wrote:
David Greaves wrote:
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options
(google Time
to deprecate old RAID formats?) and the superblocks to emphasise the
location
and data
David Greaves wrote:
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options (google Time
to deprecate old RAID formats?) and the superblocks to emphasise the location
and data structure. Would it be good to
. Would it be good to
introduce the new names at the same time as changing the default
format/on-disk-location?
The -e 1.0/1.1/1.2 is sufficient for me, I would not need
--metadata 1 --metadata-layout XXX.
So renaming options should definitely be a separate patch.
-
To unsubscribe from this list
David Greaves wrote:
Peter Rabbitson wrote:
David Greaves wrote:
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options
(google Time
to deprecate old RAID formats?) and the superblocks to emphasise the
Hi
I have made some patches to hdparm to report min/max transfer rates,
and min/avg/max access times. Enjoy!
http://std.dkuug.dk/keld/hdparm-7.7-ks.tar.gz
Best regards
keld
I have been running Konstantin's patch to add raid1 load balancing
since last November. I follow Linus' git version of the kernel + this
patch and haven't noticed any drawback.
Maybe it would be a good idea to apply it, maybe with a FIXME which
reminds people that a more elaborate solution could
Signed-off-by: Jan Engelhardt [EMAIL PROTECTED]
---
drivers/md/md.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index cef9ebd..6295b90 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5033,7 +5033,7 @@ static int
(This should be merged with fix-occasional-deadlock-in-raid5.patch)
As we don't call stripe_handle in make_request any more, we need to
clear STRIPE_DELAYED (previously done by stripe_handle) to ensure
that we test if the stripe still needs to be delayed or not.
Signed-off-by: Neil Brown
Finish ITERATE_ to for_each conversion.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |8
./include/linux/raid/md_k.h | 14 --
2 files changed, 8 insertions(+), 14 deletions(-)
diff .prev/drivers/md/md.c
Currently, a given device is claimed by a particular array so
that it cannot be used by other arrays.
This is not ideal for DDF and other metadata schemes which have
their own partitioning concept.
So for externally managed metadata, just claim the device for
md in general, require that offset
-attributes-of-component-devices.patch
The third is a replacement for
md-change-iterate_rdev_generic-to-rdev_for_each_list-and-remove-iterate_rdev_pending.patch
which conflicts with the above change.
The last is a fix for
md-fix-an-occasional-deadlock-in-raid5.patch
which makes me a lot
If you try to start an array for which the number of raid disks is
listed as zero, md will currently try to read metadata off any devices
that have been given. This was done because the value of raid_disks
is used to signal whether array details have been provided by
userspace (raid_disks > 0) or
On Tuesday January 15, [EMAIL PROTECTED] wrote:
On Wed, 16 Jan 2008 00:09:31 -0700 Dan Williams [EMAIL PROTECTED] wrote:
heheh.
it's really easy to reproduce the hang without the patch -- i could
hang the box in under 20 min on 2.6.22+ w/XFS and raid5 on 7x750GB.
i'll try
On Mon, 14 Jan 2008, NeilBrown wrote:
raid5's 'make_request' function calls generic_make_request on
underlying devices and if we run out of stripe heads, it could end up
waiting for one of those requests to complete.
This is bad as recursive calls to generic_make_request go on a queue
and
On Tue, 15 Jan 2008 21:01:17 -0800 (PST) dean gaudet [EMAIL PROTECTED] wrote:
On Mon, 14 Jan 2008, NeilBrown wrote:
raid5's 'make_request' function calls generic_make_request on
underlying devices and if we run out of stripe heads, it could end up
waiting for one of those requests to
shouldn't be
a big problem. While the fix is fairly simple, it could have some
unexpected consequences, so I'd rather go for the next cycle.
food fight!
heheh.
it's really easy to reproduce the hang without the patch -- i could
hang the box in under 20 min on 2.6.22+ w/XFS and raid5
heheh.
it's really easy to reproduce the hang without the patch -- i could
hang the box in under 20 min on 2.6.22+ w/XFS and raid5 on 7x750GB.
i'll try with ext3... Dan's experiences suggest it won't happen with ext3
(or is even more rare), which would explain why this is overall a
rare
On Wed, 16 Jan 2008 00:09:31 -0700 Dan Williams [EMAIL PROTECTED] wrote:
heheh.
it's really easy to reproduce the hang without the patch -- i could
hang the box in under 20 min on 2.6.22+ w/XFS and raid5 on 7x750GB.
i'll try with ext3... Dan's experiences suggest it won't happen
On Mon, Jan 14, 2008 at 05:28:44PM +1100, Neil Brown wrote:
On Monday January 14, [EMAIL PROTECTED] wrote:
Thanks. I'll see what I can come up with.
How about this, against current -mm
On both the read and write path for an rdev attribute, we
call mddev_lock, first checking that
On Mon, Jan 14, 2008 at 12:59:39PM +, Al Viro wrote:
I really don't like the entire scheme, to be honest. BTW, what happens
if you try to add the same device to the same array after having it kicked
out? If that comes before your delayed kobject_del(), the things will
get nasty since
. While the fix is fairly simple, it could
have some unexpected consequences, so I'd rather go for the next cycle.
The second patch fixes a bug which only affect -mm at the moment but
will probably affect 2.6.25 unless fixed.
The rest are cleanups with no functional change (I hope).
Thanks
As this is more consistent with kernel style.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2008-01-14
as this is more in line with common practice in the kernel.
Also swap the args around to be more like list_for_each.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/bitmap.c |4 +-
./drivers/md/faulty.c |2 -
./drivers/md/linear.c |2 -
Finish ITERATE_ to for_each conversion.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |8
./include/linux/raid/md_k.h | 14 --
2 files changed, 8 insertions(+), 14 deletions(-)
diff .prev/drivers/md/md.c
On Monday January 14, [EMAIL PROTECTED] wrote:
On Mon, Jan 14, 2008 at 12:45:31PM +1100, NeilBrown wrote:
Due to possible deadlock issues we need to use a schedule work to
kobject_del an 'rdev' object from a different thread.
A recent change means that kobject_add no longer gets a
On Monday January 14, [EMAIL PROTECTED] wrote:
On Mon, Jan 14, 2008 at 02:21:45PM +1100, Neil Brown wrote:
Maybe it isn't there any more
Once upon a time, when I
echo remove /sys/block/mdX/md/dev-YYY/state
Egads. And just what will protect you from parallel callers
of
On Monday January 14, [EMAIL PROTECTED] wrote:
Thanks. I'll see what I can come up with.
How about this, against current -mm
On both the read and write path for an rdev attribute, we
call mddev_lock, first checking that mddev is not NULL.
Once we get the lock, we check again.
If rdev->mddev
This patch fixes a fairly serious bug in md/raid5 in 2.6.23 and 24-rc.
It would be great if it could get into 23.13 and 24.final.
Thanks.
NeilBrown
### Comments for Changeset
We currently do not wait for the block from the missing device
to be computed from parity before copying data to the new
On Thu, 2008-01-03 at 15:46 -0700, NeilBrown wrote:
This patch fixes a fairly serious bug in md/raid5 in 2.6.23 and 24-rc.
It would be great if it could get into 23.13 and 24.final.
Thanks.
NeilBrown
### Comments for Changeset
We currently do not wait for the block from the missing device
On Thu, 2008-01-03 at 16:00 -0700, Williams, Dan J wrote:
On Thu, 2008-01-03 at 15:46 -0700, NeilBrown wrote:
This patch fixes a fairly serious bug in md/raid5 in 2.6.23 and
24-rc.
It would be great if it could get into 23.13 and 24.final.
Thanks.
NeilBrown
### Comments for Changeset
for then Andrew commits this and make sure
the right patch goes in...
Thanks,
NeilBrown
stripe_cache_size (currently raid5 only)
As far as I have understood, it applies to raid6, too.
Kind regards,
Thiemo Nagel
On Sun, 30 Dec 2007, Thiemo Nagel wrote:
stripe_cache_size (currently raid5 only)
As far as I have understood, it applies to raid6, too.
good point... and raid4.
here's an updated patch.
-dean
Signed-off-by: dean gaudet [EMAIL PROTECTED]
Index: linux/Documentation/md.txt
On Sun, 30 Dec 2007, dean gaudet wrote:
On Sun, 30 Dec 2007, Thiemo Nagel wrote:
stripe_cache_size (currently raid5 only)
As far as I have understood, it applies to raid6, too.
good point... and raid4.
here's an updated patch.
and once again with a typo fix. oops.
-dean
Document the amount of memory used by the stripe cache and the fact that
it's tied down and unavailable for other purposes (right?). thanks to Dan
Williams for the formula.
-dean
Signed-off-by: dean gaudet [EMAIL PROTECTED]
Index: linux/Documentation/md.txt
Yuri Tikhonov wrote:
This patch implements support for the asynchronous computation of RAID-6
syndromes.
It provides an API to compute RAID-6 syndromes asynchronously in a format
conforming to async_tx interfaces. The async_pxor and async_pqxor_zero_sum
functions are very similar to async_xor
On Fri, 14 Dec 2007 17:26:08 +1100 NeilBrown [EMAIL PROTECTED] wrote:
+ if (strncmp(buf, "external:", 9) == 0) {
+ int namelen = len-9;
+ if (namelen >= sizeof(mddev->metadata_type))
+ namelen = sizeof(mddev->metadata_type)-1;
+
On Fri, 14 Dec 2007 17:26:28 +1100 NeilBrown [EMAIL PROTECTED] wrote:
+ mddev_unlock(rdev->mddev);
+ ITERATE_MDDEV(mddev, tmp) {
+ mdk_rdev_t *rdev2;
+
+ mddev_lock(mddev);
+ ITERATE_RDEV(mddev, rdev2, tmp2)
, but it might (will) change??
In that case can we have the patch as it stands and when the path to
block devices in /sys changes, the ioctl can be changed at the same
time to match?
Or are you saying that as the kernel is today, some block devices
appear under /devices/..., in which case could you please
On Dec 14, 2007 7:26 AM, NeilBrown [EMAIL PROTECTED] wrote:
Given an fd on a block device, returns a string like
/block/sda/sda1
which can be used to find related information in /sys.
Ideally we should have an ioctl that works on char devices as well,
but that seems far from
Following are 7 md related patches are suitable for the next -mm
and maybe for 2.6.25.
They move towards giving user-space programs more fine control of an
array so that we can add support for more complex metadata formats
(e.g. DDF) without bothering the kernel with such things.
The last patch
- Add a state flag 'external' to indicate that the metadata is managed
externally (by user-space) so important changes need to be
left to user-space to handle.
Alternatives are non-persistent ('none'), where there is no stable metadata -
after the array is stopped there is no record of
When a device fails, we must not allow any further writes to the array
until the device failure has been recorded in array metadata.
When metadata is managed externally, this requires some synchronisation...
Allow/require userspace to explicitly remove failed devices
from active service in the
This allows userspace to control resync/reshape progress and
synchronise it with other activities, such as shared access in a SAN,
or backing up critical sections during a tricky reshape.
Writing a number of sectors (which must be a multiple of the chunk
size if such is meaningful) causes a
Currently, a given device is claimed by a particular array so
that it cannot be used by other arrays.
This is not ideal for DDF and other metadata schemes which have
their own partitioning concept.
So for externally managed metadata, just claim the device for
md in general, require that offset
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2007-12-14 16:09:01.0 +1100
+++ ./drivers/md/md.c 2007-12-14
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2007-12-14 16:09:03.0 +1100
+++ ./drivers/md/md.c 2007-12-14
Given an fd on a block device, returns a string like
/block/sda/sda1
which can be used to find related information in /sys.
Ideally we should have an ioctl that works on char devices as well,
but that seems far from trivial, so it seems reasonable to have
this until the later can be
(the same as the
bitmap update time) does not noticeably affect resync performance.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
Hi Neil,
You forgot to export bitmap_cond_end_sync. Please see the attached patch.
regards,
Mike
diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
index f31ea4f
Hello Neil,
On Friday 07 December 2007 03:10:37 Neil Brown wrote:
On Thursday December 6, [EMAIL PROTECTED] wrote:
Hello,
here is the second version of the patch. With this version also on
setting /sys/block/*/md/sync_force_parallel the sync_thread is woken up.
Though, I still don't
Hello,
here is the second version of the patch. With this version also on
setting /sys/block/*/md/sync_force_parallel the sync_thread is woken up.
Though, I still don't understand why md_wakeup_thread() is not working.
Signed-off-by: Bernd Schubert [EMAIL PROTECTED]
Index: linux-2.6.22
On Thursday December 6, [EMAIL PROTECTED] wrote:
Hello,
here is the second version of the patch. With this version also on
setting /sys/block/*/md/sync_force_parallel the sync_thread is woken up.
Though, I still don't understand why md_wakeup_thread() is not working.
Could give a little
From: H. Peter Anvin [EMAIL PROTECTED]
Make both mktables.c and its output CodingStyle compliant. Update the
copyright notice.
Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/mktables.c | 43
Following 3 patches for md provide some code tidyup and a small
functionality improvement.
They do not need to go into 2.6.24 but are definitely appropriate 25-rc1.
(Patches made against 2.6.24-rc3-mm2)
Thanks,
NeilBrown
[PATCH 001 of 3] md: raid6: Fix mktable.c
[PATCH 002 of 3] md: raid6
From: H. Peter Anvin [EMAIL PROTECTED]
Date: Fri, 26 Oct 2007 11:22:42 -0700
Clean up the coding style in raid6test/test.c. Break it apart into
subfunctions to make the code more readable.
Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat
Currently an md array with a write-intent bitmap does not update
that bitmap to reflect successful partial resync. Rather, the entire
bitmap is updated when the resync completes.
This is because there is no guarantee that resync requests will
complete in order, and tracking each request
The raid_run_ops routine uses the asynchronous offload api and
the stripe_operations member of a stripe_head to carry out xor+pqxor+copy
operations asynchronously, outside the lock.
The operations performed by RAID-6 are the same as in the RAID-5 case
except for no support of STRIPE_OP_PREXOR
This patch implements support for the asynchronous computation of RAID-6
syndromes.
It provides an API to compute RAID-6 syndromes asynchronously in a format
conforming to async_tx interfaces. The async_pxor and async_pqxor_zero_sum
functions are very similar to async_xor functions but make use
This patch adds support for asynchronous RAID-6 recovery operations.
An asynchronous implementation using async_tx API is provided to compute
two missing data blocks (async_r6_dd_recov) and to compute one missing data
block and one missing parity_block (async_r6_dp_recov).
In general
Scheduling and processing the asynchronous computations.
handle_stripe will compute a block when a backing disk has failed. Since both
RAID-5/6 use the same ops_complete_compute() we should set the second
computation target in RAID-5 to (-1) [no target].
Signed-off-by: Yuri Tikhonov [EMAIL
Some clean-up of the replaced or already unnecessary functions.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED]
--
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 9b6336f..1d45887 100644
--- a/drivers/md/raid5.c
+++
We utilize get_stripe_work() to find new work to run. This function is shared
with RAID-5. The only RAID-5 specific operation there is PREXOR. Then we call
raid_run_ops() to process the requests pending.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail Cherkashin [EMAIL
Support for expanding RAID-6 stripes asynchronously.
By setting STRIPE_OP_POSTXOR without setting STRIPE_OP_BIODRAIN the
completion path in handle stripe can differentiate expand operations
from normal write operations.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail
I/O submission requests were already handled outside of the stripe lock in
handle_stripe. Now that handle_stripe is only tasked with finding work,
this logic belongs in raid5_run_ops
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED]
--
diff
().
This patch introduces one more RAID-5/6 shared function, it is
handle_completed_postxor_requests(), to be called when either handle_stripe5()
or handle_stripe6() discover the completeness of a post-xor operation for the
stripe.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail
corresponds
to the correct parity, non-zero - to non-correct.
This patch also removes the spare page for the RAID-6 Q-parity check since it went
into async_pqxor() [this is needed for the synchronous CPU case only; if the check
operation is being performed by DMA - there is no need for spares].
Signed-off
When a read bio is attached to the stripe and the corresponding block is
marked as R5_UPTODATE, then a biofill operation is scheduled to copy
the data from the stripe cache to the bio buffer.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED]
--
Hello all,
Here is a patch which allows skipping the intermediate data copying between the bio
requested to write and the disk cache in sh if the full-stripe write
operation is
on the way.
This improves the performance of write operations for some dedicated cases
when big chunks of data
On Friday November 23, [EMAIL PROTECTED] wrote:
Hello all,
Here is a patch which allows skipping the intermediate data copying between the
bio
requested to write and the disk cache in sh if the full-stripe write
operation is
on the way.
This improves the performance of write
also pick up def6ae26 md: fix
misapplied patch in raid5.c or I can resend the original raid5: fix
clearing of biofill operations.
The other patch for -stable raid5: fix unending write sequence is
currently in -mm.
Hm, I've attached the two patches that I have right now
On Tuesday November 13, [EMAIL PROTECTED] wrote:
raid5-fix-unending-write-sequence.patch is in -mm and I believe is
waiting on an Acked-by from Neil?
It seems to have just been sent on to Linus, so it probably will go in
without:
Acked-By: NeilBrown [EMAIL PROTECTED]
I'm beginning to
On Mon, Oct 22, 2007 at 05:15:27PM +1000, NeilBrown wrote:
It appears that a couple of bugs slipped in to md for 2.6.23.
These two patches fix them and are appropriate for 2.6.23.y as well
as 2.6.24-rcX
Thanks,
NeilBrown
[PATCH 001 of 2] md: Fix an unsigned compare to allow creation
[PATCH 001 of 2] md: Fix an unsigned compare to allow creation of bitmaps
with v1.0 metadata.
[PATCH 002 of 2] md: raid5: fix clearing of biofill operations
I don't see these patches in 2.6.24-rcX, are they there under some other
subject?
Oh nevermind, I found them, sorry for the noise
for 2.6.23.y as well
as 2.6.24-rcX
Thanks,
NeilBrown
[PATCH 001 of 2] md: Fix an unsigned compare to allow creation of
bitmaps with v1.0 metadata.
[PATCH 002 of 2] md: raid5: fix clearing of biofill operations
I don't see these patches in 2.6.24-rcX, are they there under
in to md for 2.6.23.
These two patches fix them and are appropriate for 2.6.23.y as well
as 2.6.24-rcX
Thanks,
NeilBrown
[PATCH 001 of 2] md: Fix an unsigned compare to allow creation of
bitmaps with v1.0 metadata.
[PATCH 002 of 2] md: raid5: fix clearing of biofill
On Nov 13, 2007 8:43 PM, Greg KH [EMAIL PROTECTED] wrote:
Careful, it looks like you cherry picked commit 4ae3f847 md: raid5:
fix clearing of biofill operations which ended up misapplied in
Linus' tree. You should either also pick up def6ae26 md: fix
misapplied patch in raid5.c or I can
Konstantin == Konstantin Sharlaimov
[EMAIL PROTECTED] writes:
Konstantin This patch adds RAID1 read balancing to device mapper. A
Konstantin read operation that is close (in terms of sectors) to a
Konstantin previous read or write goes to the same mirror.
I am currently running it on top
chunk size. So read/write within a 16k chunk will be the same disk
but the next 16k are a different disk and near doesn't apply
anymore.
Currently there is no way to turn this feature off (this is only a
request for comments patch), but I'm planning to make it configurable
via sysfs and module
Rik van Riel wrote:
On Thu, 08 Nov 2007 17:28:37 +0100
Goswin von Brederlow [EMAIL PROTECTED] wrote:
Maybe you need more parameter:
Generally a bad idea, unless you can come up with sane defaults (which
do not need tuning 99% of the time) or you can derive these parameters
a
request for comments patch), but I'm planning to make it configurable
via sysfs and module parameters.
Thanks for suggestion for the near definition. What do you think about
adding the chunk_size parameter (with the default value of 1 chunk = 1
sector). Setting it to 32 will make all reads within 16k
On Thu, 08 Nov 2007 17:28:37 +0100
Goswin von Brederlow [EMAIL PROTECTED] wrote:
Maybe you need more parameter:
Generally a bad idea, unless you can come up with sane defaults (which
do not need tuning 99% of the time) or you can derive these parameters
automatically from the RAID configuration
was sampled 'set' when it
should not have been. This patch cleans up cases where the code looks at
sh->ops.pending when it should be looking at the consistent stack-based
snapshot of the operations flags.
Report from Joël:
Resync done. Patch fix this bug.
Signed-off-by: Dan Williams [EMAIL
Rik van Riel [EMAIL PROTECTED] writes:
On Thu, 08 Nov 2007 17:28:37 +0100
Goswin von Brederlow [EMAIL PROTECTED] wrote:
Maybe you need more parameter:
Generally a bad idea, unless you can come up with sane defaults (which
do not need tuning 99% of the time) or you can derive these
Konstantin Sharlaimov [EMAIL PROTECTED] writes:
This patch adds RAID1 read balancing to device mapper. A read operation
that is close (in terms of sectors) to a previous read or write goes to
the same mirror.
I wonder if there shouldn't be a way to turn this off (or if there
already is one
This patch adds RAID1 read balancing to device mapper. A read operation
that is close (in terms of sectors) to a previous read or write goes to
the same mirror.
Signed-off-by: Konstantin Sharlaimov [EMAIL PROTECTED]
---
Please give it a try, it works for me, yet my results might be system
On Sat, 03 Nov 2007 20:08:42 +1000
Konstantin Sharlaimov [EMAIL PROTECTED] wrote:
This patch adds RAID1 read balancing to device mapper. A read
operation that is close (in terms of sectors) to a previous read or
write goes to the same mirror.
Signed-off-by: Konstantin Sharlaimov [EMAIL