patch for raid10,f1 to operate like raid0

2008-02-12 Thread Keld Jørn Simonsen
This patch changes the disk to be read for the 'far' layout to always be the disk with the lowest block address. Thus, for a fully functioning array, the chunks read will always come from the first band of stripes, and the array will then perform like a raid0 consisting of that first band of stripes.
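The read-device choice described above can be sketched as follows; the function name and the modulo mapping are illustrative assumptions, not code from the patch:

```c
#include <assert.h>

/* Hypothetical sketch: in the raid10 "far" layout, the copy of chunk n
 * with the lowest block address lives in the first band, on device
 * (n % raid_disks). Always choosing that copy makes reads stripe
 * across all member disks, like a raid0 over the first band. */
static int read_device_far(int chunk, int raid_disks)
{
    return chunk % raid_disks;
}
```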

Re: [PATCH] Use new sb type

2008-02-11 Thread Bill Davidsen
David Greaves wrote: Jan Engelhardt wrote: Feel free to argue that the manpage is clear on this - but as we know, not everyone reads the manpages in depth... That is indeed suboptimal (but I would not care since I know the implications of an SB at the front) Neil cares even

Re: [PATCH] Use new sb type

2008-02-11 Thread David Greaves
Jan Engelhardt wrote: Feel free to argue that the manpage is clear on this - but as we know, not everyone reads the manpages in depth... That is indeed suboptimal (but I would not care since I know the implications of an SB at the front) Neil cares even less and probably doesn't even need

Re: [PATCH] Use new sb type

2008-02-10 Thread David Greaves
Jan Engelhardt wrote: On Jan 29 2008 18:08, Bill Davidsen wrote: IIRC there was a discussion a while back on renaming mdadm options (google Time to deprecate old RAID formats?) and the superblocks to emphasise the location and data structure. Would it be good to introduce the new names at

Re: [PATCH] Use new sb type

2008-02-10 Thread Jan Engelhardt
On Feb 10 2008 10:34, David Greaves wrote: Jan Engelhardt wrote: On Jan 29 2008 18:08, Bill Davidsen wrote: IIRC there was a discussion a while back on renaming mdadm options (google Time to deprecate old RAID formats?) and the superblocks to emphasise the location and data structure. Would

Re: [PATCH] Use new sb type

2008-02-10 Thread David Greaves
change that ushers in broader benefit. I acknowledge that I am only talking semantics - OTOH I think semantics can be a very important aspect of communication. David PS I would love to send a patch in to mdadm - I am currently being heavily nagged to sort out our house electrics and get lunch. It may

Re: [PATCH] Use new sb type

2008-02-10 Thread Jan Engelhardt
On Feb 10 2008 12:27, David Greaves wrote: I do not see anything wrong by specifying the SB location as a metadata version. Why should not location be an element of the raid type? It's fine the way it is IMHO. (Just the default is not :) There was quite a discussion about it. For me the

[PATCH] Dynamic RAID Stripe Cache Size

2008-02-07 Thread Yuri Tikhonov
). What this patch does is simply check the size of available memory and assign an appropriate, safe value to the initial max_nr_stripes: either the hard-coded NR_STRIPES value, if it is safe in the sense that we'll still have some free memory available when using a stripe cache of that size, or the calculated
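The sizing logic described in the preview can be sketched like this; NR_STRIPES is the hard-coded default the mail refers to, while the memory budget ratio and per-stripe cost are illustrative assumptions, not the patch's actual arithmetic:

```c
#include <assert.h>

#define NR_STRIPES 256  /* hard-coded raid5 default mentioned in the mail */

/* Hypothetical sketch: cap the initial stripe cache so that allocating
 * it still leaves most free memory untouched. Each stripe costs roughly
 * one page per member disk; the 1/16-of-free-memory budget is an
 * illustrative choice. */
static int initial_max_nr_stripes(unsigned long free_pages, int disks)
{
    unsigned long budget = free_pages / 16 / (unsigned long)disks;
    return budget >= NR_STRIPES ? NR_STRIPES : (int)budget;
}
```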

Re: [PATCH] Use new sb type

2008-02-07 Thread Jan Engelhardt
On Jan 29 2008 18:08, Bill Davidsen wrote: IIRC there was a discussion a while back on renaming mdadm options (google Time to deprecate old RAID formats?) and the superblocks to emphasise the location and data structure. Would it be good to introduce the new names at the same time as

Re: [PATCH] Use new sb type

2008-01-30 Thread David Greaves
Bill Davidsen wrote: David Greaves wrote: Jan Engelhardt wrote: This makes 1.0 the default sb type for new arrays. IIRC there was a discussion a while back on renaming mdadm options (google Time to deprecate old RAID formats?) and the superblocks to emphasise the location and

Re: [PATCH] Use new sb type

2008-01-29 Thread Peter Rabbitson
Tim Southerwood wrote: David Greaves wrote: IIRC Doug Ledford did some digging wrt lilo + grub and found that 1.1 and 1.2 wouldn't work with them. I'd have to review the thread though... David - For what it's worth, that was my finding too. -e 0.9+1.0 are fine with GRUB, but 1.1 and 1.2

Yes, but please provide the clue (was Re: [PATCH] Use new sb type)

2008-01-29 Thread Moshe Yudkowsky
* The only raid level providing unfettered access to the underlying filesystem is RAID1 with a superblock at its end, and it has been common wisdom for years that you need a RAID1 boot partition in order to boot anything at all. Ah. This shines light on my problem... The problem is that

[PATCH] Use new sb type

2008-01-28 Thread Jan Engelhardt
This makes 1.0 the default sb type for new arrays. Signed-off-by: Jan Engelhardt [EMAIL PROTECTED] --- Create.c |6 -- super0.c |4 +--- super1.c |2 +- 3 files changed, 2 insertions(+), 10 deletions(-) Index: mdadm-2.6.4/Create.c

Re: [PATCH] Use new sb type

2008-01-28 Thread David Greaves
Jan Engelhardt wrote: This makes 1.0 the default sb type for new arrays. IIRC there was a discussion a while back on renaming mdadm options (google Time to deprecate old RAID formats?) and the superblocks to emphasise the location and data structure. Would it be good to introduce the new

Re: [PATCH] Use new sb type

2008-01-28 Thread David Greaves
Peter Rabbitson wrote: David Greaves wrote: Jan Engelhardt wrote: This makes 1.0 the default sb type for new arrays. IIRC there was a discussion a while back on renaming mdadm options (google Time to deprecate old RAID formats?) and the superblocks to emphasise the location and data

Re: [PATCH] Use new sb type

2008-01-28 Thread Peter Rabbitson
David Greaves wrote: Jan Engelhardt wrote: This makes 1.0 the default sb type for new arrays. IIRC there was a discussion a while back on renaming mdadm options (google Time to deprecate old RAID formats?) and the superblocks to emphasise the location and data structure. Would it be good to

Re: [PATCH] Use new sb type

2008-01-28 Thread Jan Engelhardt
. Would it be good to introduce the new names at the same time as changing the default format/on-disk-location? The -e 1.0/1.1/1.2 is sufficient for me, I would not need --metadata 1 --metadata-layout XXX. So renaming options should definitely be a separate patch. - To unsubscribe from this list

Re: [PATCH] Use new sb type

2008-01-28 Thread Tim Southerwood
David Greaves wrote: Peter Rabbitson wrote: David Greaves wrote: Jan Engelhardt wrote: This makes 1.0 the default sb type for new arrays. IIRC there was a discussion a while back on renaming mdadm options (google Time to deprecate old RAID formats?) and the superblocks to emphasise the

hdparm patch with min/max transfer rate, and min/avg/max access times

2008-01-25 Thread Keld Jørn Simonsen
Hi I have made some patches to hdparm to report min/max transfer rates, and min/avg/max access times. Enjoy! http://std.dkuug.dk/keld/hdparm-7.7-ks.tar.gz Best regards keld

[PATCH reposted] raid1 load balancing

2008-01-24 Thread Samuel Tardieu
I have been running Konstantin's patch to add raid1 load balancing since last November. I follow Linus' git version of the kernel + this patch and haven't noticed any drawback. Maybe it would be a good idea to apply it, maybe with a FIXME which reminds people that a more elaborate solution could

[PATCH] md: constify function pointer tables

2008-01-22 Thread Jan Engelhardt
Signed-off-by: Jan Engelhardt [EMAIL PROTECTED] --- drivers/md/md.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/drivers/md/md.c b/drivers/md/md.c index cef9ebd..6295b90 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -5033,7 +5033,7 @@ static int

[PATCH 004 of 4] md: Fix an occasional deadlock in raid5 - FIX

2008-01-18 Thread NeilBrown
(This should be merged with fix-occasional-deadlock-in-raid5.patch) As we don't call stripe_handle in make_request any more, we need to clear STRIPE_DELAYED (previously done by stripe_handle) to ensure that we test whether the stripe still needs to be delayed or not. Signed-off-by: Neil Brown

[PATCH 003 of 4] md: Change ITERATE_RDEV_GENERIC to rdev_for_each_list, and remove ITERATE_RDEV_PENDING.

2008-01-18 Thread NeilBrown
Finish ITERATE_ to for_each conversion. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c |8 ./include/linux/raid/md_k.h | 14 -- 2 files changed, 8 insertions(+), 14 deletions(-) diff .prev/drivers/md/md.c

[PATCH 002 of 4] md: Allow devices to be shared between md arrays.

2008-01-18 Thread NeilBrown
Currently, a given device is claimed by a particular array so that it cannot be used by other arrays. This is not ideal for DDF and other metadata schemes which have their own partitioning concept. So for externally managed metadata, just claim the device for md in general, require that offset

[PATCH 000 of 4] md: assorted md patched - please read carefully.

2008-01-18 Thread NeilBrown
-attributes-of-component-devices.patch The third is a replacement for md-change-iterate_rdev_generic-to-rdev_for_each_list-and-remove-iterate_rdev_pending.patch which conflicts with the above change. The last is a fix for md-fix-an-occasional-deadlock-in-raid5.patch which makes me a lot

[PATCH 001 of 4] md: Set and test the -persistent flag for md devices more consistently.

2008-01-18 Thread NeilBrown
If you try to start an array for which the number of raid disks is listed as zero, md will currently try to read metadata off any devices that have been given. This was done because the value of raid_disks is used to signal whether array details have been provided by userspace (raid_disks 0) or

Re: [PATCH 001 of 6] md: Fix an occasional deadlock in raid5

2008-01-16 Thread Neil Brown
On Tuesday January 15, [EMAIL PROTECTED] wrote: On Wed, 16 Jan 2008 00:09:31 -0700 Dan Williams [EMAIL PROTECTED] wrote: heheh. it's really easy to reproduce the hang without the patch -- i could hang the box in under 20 min on 2.6.22+ w/XFS and raid5 on 7x750GB. i'll try

Re: [PATCH 001 of 6] md: Fix an occasional deadlock in raid5

2008-01-15 Thread dean gaudet
On Mon, 14 Jan 2008, NeilBrown wrote: raid5's 'make_request' function calls generic_make_request on underlying devices and if we run out of stripe heads, it could end up waiting for one of those requests to complete. This is bad as recursive calls to generic_make_request go on a queue and

Re: [PATCH 001 of 6] md: Fix an occasional deadlock in raid5

2008-01-15 Thread Andrew Morton
On Tue, 15 Jan 2008 21:01:17 -0800 (PST) dean gaudet [EMAIL PROTECTED] wrote: On Mon, 14 Jan 2008, NeilBrown wrote: raid5's 'make_request' function calls generic_make_request on underlying devices and if we run out of stripe heads, it could end up waiting for one of those requests to

Re: [PATCH 001 of 6] md: Fix an occasional deadlock in raid5

2008-01-15 Thread dean gaudet
shouldn't be a big problem. While the fix is fairly simple, it could have some unexpected consequences, so I'd rather go for the next cycle. food fight! heheh. it's really easy to reproduce the hang without the patch -- i could hang the box in under 20 min on 2.6.22+ w/XFS and raid5

Re: [PATCH 001 of 6] md: Fix an occasional deadlock in raid5

2008-01-15 Thread Dan Williams
heheh. it's really easy to reproduce the hang without the patch -- i could hang the box in under 20 min on 2.6.22+ w/XFS and raid5 on 7x750GB. i'll try with ext3... Dan's experiences suggest it won't happen with ext3 (or is even more rare), which would explain why this has is overall a rare

Re: [PATCH 001 of 6] md: Fix an occasional deadlock in raid5

2008-01-15 Thread Andrew Morton
On Wed, 16 Jan 2008 00:09:31 -0700 Dan Williams [EMAIL PROTECTED] wrote: heheh. it's really easy to reproduce the hang without the patch -- i could hang the box in under 20 min on 2.6.22+ w/XFS and raid5 on 7x750GB. i'll try with ext3... Dan's experiences suggest it won't happen

Re: [PATCH 002 of 6] md: Fix use-after-free bug when dropping an rdev from an md array.

2008-01-14 Thread Al Viro
On Mon, Jan 14, 2008 at 05:28:44PM +1100, Neil Brown wrote: On Monday January 14, [EMAIL PROTECTED] wrote: Thanks. I'll see what I can come up with. How about this, against current -mm On both the read and write path for an rdev attribute, we call mddev_lock, first checking that

Re: [PATCH 002 of 6] md: Fix use-after-free bug when dropping an rdev from an md array.

2008-01-14 Thread Al Viro
On Mon, Jan 14, 2008 at 12:59:39PM +, Al Viro wrote: I really don't like the entire scheme, to be honest. BTW, what happens if you try to add the same device to the same array after having it kicked out? If that comes before your delayed kobject_del(), the things will get nasty since

[PATCH 000 of 6] md: various fixes for md

2008-01-13 Thread NeilBrown
. While the fix is fairly simple, it could have some unexpected consequences, so I'd rather go for the next cycle. The second patch fixes a bug which only affect -mm at the moment but will probably affect 2.6.25 unless fixed. The rest are cleanups with no functional change (I hope). Thanks

[PATCH 004 of 6] md: Change ITERATE_MDDEV to for_each_mddev

2008-01-13 Thread NeilBrown
As this is more consistent with kernel style. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff .prev/drivers/md/md.c ./drivers/md/md.c --- .prev/drivers/md/md.c 2008-01-14

[PATCH 005 of 6] md: Change ITERATE_RDEV to rdev_for_each

2008-01-13 Thread NeilBrown
as this is more in line with common practice in the kernel. Also swap the args around to be more like list_for_each. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/bitmap.c |4 +- ./drivers/md/faulty.c |2 - ./drivers/md/linear.c |2 -

[PATCH 006 of 6] md: Change ITERATE_RDEV_GENERIC to rdev_for_each_list, and remove ITERATE_RDEV_PENDING.

2008-01-13 Thread NeilBrown
Finish ITERATE_ to for_each conversion. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c |8 ./include/linux/raid/md_k.h | 14 -- 2 files changed, 8 insertions(+), 14 deletions(-) diff .prev/drivers/md/md.c

Re: [PATCH 002 of 6] md: Fix use-after-free bug when dropping an rdev from an md array.

2008-01-13 Thread Neil Brown
On Monday January 14, [EMAIL PROTECTED] wrote: On Mon, Jan 14, 2008 at 12:45:31PM +1100, NeilBrown wrote: Due to possible deadlock issues we need to use a schedule work to kobject_del an 'rdev' object from a different thread. A recent change means that kobject_add no longer gets a

Re: [PATCH 002 of 6] md: Fix use-after-free bug when dropping an rdev from an md array.

2008-01-13 Thread Neil Brown
On Monday January 14, [EMAIL PROTECTED] wrote: On Mon, Jan 14, 2008 at 02:21:45PM +1100, Neil Brown wrote: Maybe it isn't there any more Once upon a time, when I echo remove /sys/block/mdX/md/dev-YYY/state Egads. And just what will protect you from parallel callers of

Re: [PATCH 002 of 6] md: Fix use-after-free bug when dropping an rdev from an md array.

2008-01-13 Thread Neil Brown
On Monday January 14, [EMAIL PROTECTED] wrote: Thanks. I'll see what I can come up with. How about this, against current -mm On both the read and write path for an rdev attribute, we call mddev_lock, first checking that mddev is not NULL. Once we get the lock, we check again. If rdev->mddev

[PATCH] md: Fix data corruption when a degraded raid5 array is reshaped.

2008-01-03 Thread NeilBrown
This patch fixes a fairly serious bug in md/raid5 in 2.6.23 and 24-rc. It would be great if it could get into 23.13 and 24.final. Thanks. NeilBrown ### Comments for Changeset We currently do not wait for the block from the missing device to be computed from parity before copying data to the new

Re: [PATCH] md: Fix data corruption when a degraded raid5 array is reshaped.

2008-01-03 Thread Dan Williams
On Thu, 2008-01-03 at 15:46 -0700, NeilBrown wrote: This patch fixes a fairly serious bug in md/raid5 in 2.6.23 and 24-rc. It would be great if it could get into 23.13 and 24.final. Thanks. NeilBrown ### Comments for Changeset We currently do not wait for the block from the missing device

Re: [PATCH] md: Fix data corruption when a degraded raid5 array is reshaped.

2008-01-03 Thread Dan Williams
On Thu, 2008-01-03 at 16:00 -0700, Williams, Dan J wrote: On Thu, 2008-01-03 at 15:46 -0700, NeilBrown wrote: This patch fixes a fairly serious bug in md/raid5 in 2.6.23 and 24-rc. It would be great if it could get into 23.13 and 24.final. Thanks. NeilBrown ### Comments for Changeset

Re: [PATCH] md: Fix data corruption when a degraded raid5 array is reshaped.

2008-01-03 Thread Neil Brown
for then Andrew commits this and make sure the right patch goes in... Thanks, NeilBrown

Re: [patch] improve stripe_cache_size documentation

2007-12-30 Thread Thiemo Nagel
stripe_cache_size (currently raid5 only) As far as I have understood, it applies to raid6, too. Kind regards, Thiemo Nagel

Re: [patch] improve stripe_cache_size documentation

2007-12-30 Thread dean gaudet
On Sun, 30 Dec 2007, Thiemo Nagel wrote: stripe_cache_size (currently raid5 only) As far as I have understood, it applies to raid6, too. good point... and raid4. here's an updated patch. -dean Signed-off-by: dean gaudet [EMAIL PROTECTED] Index: linux/Documentation/md.txt

Re: [patch] improve stripe_cache_size documentation

2007-12-30 Thread dean gaudet
On Sun, 30 Dec 2007, dean gaudet wrote: On Sun, 30 Dec 2007, Thiemo Nagel wrote: stripe_cache_size (currently raid5 only) As far as I have understood, it applies to raid6, too. good point... and raid4. here's an updated patch. and once again with a typo fix. oops. -dean

[patch] improve stripe_cache_size documentation

2007-12-29 Thread dean gaudet
Document the amount of memory used by the stripe cache and the fact that it's tied down and unavailable for other purposes (right?). thanks to Dan Williams for the formula. -dean Signed-off-by: dean gaudet [EMAIL PROTECTED] Index: linux/Documentation/md.txt
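The formula credited to Dan Williams (as later recorded in Documentation/md.txt) multiplies the cache size by the number of member disks and the system page size:

```c
#include <assert.h>

/* memory_consumed = system_page_size * nr_disks * stripe_cache_size -
 * the amount of memory pinned by the raid4/5/6 stripe cache, which is
 * what this documentation patch spells out. */
static unsigned long long stripe_cache_bytes(unsigned long stripe_cache_size,
                                             unsigned nr_disks,
                                             unsigned long page_size)
{
    return (unsigned long long)stripe_cache_size * nr_disks * page_size;
}
```

For example, the default stripe_cache_size of 256 on a 4-disk array with 4 KiB pages pins 4 MiB.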

Re: [md-raid6-accel PATCH 01/12] async_tx: PQXOR implementation

2007-12-27 Thread H. Peter Anvin
Yuri Tikhonov wrote: This patch implements support for the asynchronous computation of RAID-6 syndromes. It provides an API to compute RAID-6 syndromes asynchronously in a format conforming to async_tx interfaces. The async_pxor and async_pqxor_zero_sum functions are very similar to async_xor

Re: [PATCH 001 of 7] md: Support 'external' metadata for md arrays.

2007-12-25 Thread Andrew Morton
On Fri, 14 Dec 2007 17:26:08 +1100 NeilBrown [EMAIL PROTECTED] wrote: + if (strncmp(buf, "external:", 9) == 0) { + int namelen = len-9; + if (namelen >= sizeof(mddev->metadata_type)) + namelen = sizeof(mddev->metadata_type)-1; +
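The quoted diff hunk, cleaned up, has roughly this shape. This is a simplified standalone version for illustration; the kernel code writes into mddev->metadata_type rather than a caller-supplied buffer:

```c
#include <assert.h>
#include <string.h>

/* Simplified standalone version of the quoted hunk: copy the name
 * following the "external:" prefix, clamped to the destination
 * buffer size, and NUL-terminate it. */
static void set_metadata_type(const char *buf, size_t len,
                              char *out, size_t outsz)
{
    if (len >= 9 && strncmp(buf, "external:", 9) == 0) {
        size_t namelen = len - 9;
        if (namelen >= outsz)
            namelen = outsz - 1;
        memcpy(out, buf + 9, namelen);
        out[namelen] = '\0';
    }
}
```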

Re: [PATCH 004 of 7] md: Allow devices to be shared between md arrays.

2007-12-25 Thread Andrew Morton
On Fri, 14 Dec 2007 17:26:28 +1100 NeilBrown [EMAIL PROTECTED] wrote: + mddev_unlock(rdev->mddev); + ITERATE_MDDEV(mddev, tmp) { + mdk_rdev_t *rdev2; + + mddev_lock(mddev); + ITERATE_RDEV(mddev, rdev2, tmp2)

Re: [PATCH 007 of 7] md: Get name for block device in sysfs

2007-12-16 Thread Neil Brown
, but it might (will) change?? In that case can we have the patch as it stands and when the path to block devices in /sys changes, the ioctl can be changed at the same time to match? Or are you saying that as the kernel is today, some block devices appear under /devices/..., in which case could you please

Re: [PATCH 007 of 7] md: Get name for block device in sysfs

2007-12-15 Thread Kay Sievers
On Dec 14, 2007 7:26 AM, NeilBrown [EMAIL PROTECTED] wrote: Given an fd on a block device, returns a string like /block/sda/sda1 which can be used to find related information in /sys. Ideally we should have an ioctl that works on char devices as well, but that seems far from

[PATCH 000 of 7] md: Introduction EXPLAIN PATCH SET HERE

2007-12-13 Thread NeilBrown
Following are 7 md related patches are suitable for the next -mm and maybe for 2.6.25. They move towards giving user-space programs more fine control of an array so that we can add support for more complex metadata formats (e.g. DDF) without bothering the kernel with such things. The last patch

[PATCH 001 of 7] md: Support 'external' metadata for md arrays.

2007-12-13 Thread NeilBrown
- Add a state flag 'external' to indicate that the metadata is managed externally (by user-space), so important changes need to be left to user-space to handle. The alternatives are non-persistent ('none'), where there is no stable metadata - after the array is stopped there is no record of

[PATCH 002 of 7] md: Give userspace control over removing failed devices when external metadata in use

2007-12-13 Thread NeilBrown
When a device fails, we must not allow any further writes to the array until the device failure has been recorded in array metadata. When metadata is managed externally, this requires some synchronisation... Allow/require userspace to explicitly remove failed devices from active service in the

[PATCH 003 of 7] md: Allow a maximum extent to be set for resyncing.

2007-12-13 Thread NeilBrown
This allows userspace to control resync/reshape progress and synchronise it with other activities, such as shared access in a SAN, or backing up critical sections during a tricky reshape. Writing a number of sectors (which must be a multiple of the chunk size if such is meaningful) causes a
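The stated constraint (the written sector count must be a multiple of the chunk size, where a chunk size is meaningful) is simple to sketch; the function name and the zero-means-no-chunks convention are illustrative assumptions:

```c
#include <assert.h>

/* Illustrative validity check for the interface described above:
 * a resync limit must be a whole number of chunks when the array
 * has a meaningful chunk size (chunk_sectors == 0 here stands for
 * "chunk size is not meaningful for this level"). */
static int resync_max_valid(unsigned long long sectors,
                            unsigned chunk_sectors)
{
    return chunk_sectors == 0 || sectors % chunk_sectors == 0;
}
```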

[PATCH 004 of 7] md: Allow devices to be shared between md arrays.

2007-12-13 Thread NeilBrown
Currently, a given device is claimed by a particular array so that it cannot be used by other arrays. This is not ideal for DDF and other metadata schemes which have their own partitioning concept. So for externally managed metadata, just claim the device for md in general, require that offset

[PATCH 005 of 7] md: Lock address when changing attributes of component devices.

2007-12-13 Thread NeilBrown
Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c |8 +++- 1 file changed, 7 insertions(+), 1 deletion(-) diff .prev/drivers/md/md.c ./drivers/md/md.c --- .prev/drivers/md/md.c 2007-12-14 16:09:01.0 +1100 +++ ./drivers/md/md.c 2007-12-14

[PATCH 006 of 7] md: Allow an md array to appear with 0 drives if it has external metadata.

2007-12-13 Thread NeilBrown
Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c |7 --- 1 file changed, 4 insertions(+), 3 deletions(-) diff .prev/drivers/md/md.c ./drivers/md/md.c --- .prev/drivers/md/md.c 2007-12-14 16:09:03.0 +1100 +++ ./drivers/md/md.c 2007-12-14

[PATCH 007 of 7] md: Get name for block device in sysfs

2007-12-13 Thread NeilBrown
Given an fd on a block device, returns a string like /block/sda/sda1 which can be used to find related information in /sys. Ideally we should have an ioctl that works on char devices as well, but that seems far from trivial, so it seems reasonable to have this until the later can be

Re: [PATCH 003 of 3] md: Update md bitmap during resync.

2007-12-10 Thread Mike Snitzer
(the same as the bitmap update time) does not noticeably affect resync performance. Signed-off-by: Neil Brown [EMAIL PROTECTED] Hi Neil, You forgot to export bitmap_cond_end_sync. Please see the attached patch. regards, Mike diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c index f31ea4f

Re: [PATCH] (2nd try) force parallel resync

2007-12-07 Thread Bernd Schubert
Hello Neil, On Friday 07 December 2007 03:10:37 Neil Brown wrote: On Thursday December 6, [EMAIL PROTECTED] wrote: Hello, here is the second version of the patch. With this version also on setting /sys/block/*/md/sync_force_parallel the sync_thread is woken up. Though, I still don't

[PATCH] (2nd try) force parallel resync

2007-12-06 Thread Bernd Schubert
Hello, here is the second version of the patch. With this version, the sync_thread is also woken up on setting /sys/block/*/md/sync_force_parallel. Though, I still don't understand why md_wakeup_thread() is not working. Signed-off-by: Bernd Schubert [EMAIL PROTECTED] Index: linux-2.6.22

Re: [PATCH] (2nd try) force parallel resync

2007-12-06 Thread Neil Brown
On Thursday December 6, [EMAIL PROTECTED] wrote: Hello, here is the second version of the patch. With this version also on setting /sys/block/*/md/sync_force_parallel the sync_thread is woken up. Though, I still don't understand why md_wakeup_thread() is not working. Could give a little

[PATCH 001 of 3] md: raid6: Fix mktable.c

2007-12-06 Thread NeilBrown
From: H. Peter Anvin [EMAIL PROTECTED] Make both mktables.c and its output CodingStyle compliant. Update the copyright notice. Signed-off-by: H. Peter Anvin [EMAIL PROTECTED] Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/mktables.c | 43

[PATCH 000 of 3] md: a few little patches

2007-12-06 Thread NeilBrown
Following 3 patches for md provide some code tidyup and a small functionality improvement. They do not need to go into 2.6.24 but are definitely appropriate 25-rc1. (Patches made against 2.6.24-rc3-mm2) Thanks, NeilBrown [PATCH 001 of 3] md: raid6: Fix mktable.c [PATCH 002 of 3] md: raid6

[PATCH 002 of 3] md: raid6: clean up the style of raid6test/test.c

2007-12-06 Thread NeilBrown
From: H. Peter Anvin [EMAIL PROTECTED] Date: Fri, 26 Oct 2007 11:22:42 -0700 Clean up the coding style in raid6test/test.c. Break it apart into subfunctions to make the code more readable. Signed-off-by: H. Peter Anvin [EMAIL PROTECTED] Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat

[PATCH 003 of 3] md: Update md bitmap during resync.

2007-12-06 Thread NeilBrown
Currently an md array with a write-intent bitmap does not update that bitmap to reflect successful partial resync. Rather, the entire bitmap is updated when the resync completes. This is because there is no guarantee that resync requests will complete in order, and tracking each request

[md-raid6-accel PATCH 03/12] md: run stripe operations outside the lock

2007-12-04 Thread Yuri Tikhonov
The raid_run_ops routine uses the asynchronous offload api and the stripe_operations member of a stripe_head to carry out xor+pqxor+copy operations asynchronously, outside the lock. The operations performed by RAID-6 are the same as in the RAID-5 case, except that there is no support for STRIPE_OP_PREXOR

[md-raid6-accel PATCH 01/12] async_tx: PQXOR implementation

2007-12-04 Thread Yuri Tikhonov
This patch implements support for the asynchronous computation of RAID-6 syndromes. It provides an API to compute RAID-6 syndromes asynchronously in a format conforming to async_tx interfaces. The async_pxor and async_pqxor_zero_sum functions are very similar to async_xor functions but make use

[md-raid6-accel PATCH 02/12] async_tx: RAID-6 recovery implementation

2007-12-04 Thread Yuri Tikhonov
This patch adds support for asynchronous RAID-6 recovery operations. An asynchronous implementation using async_tx API is provided to compute two missing data blocks (async_r6_dd_recov) and to compute one missing data block and one missing parity_block (async_r6_dp_recov). In general

[md-raid6-accel PATCH 06/12] md: req/comp logic for async compute operations

2007-12-04 Thread Yuri Tikhonov
Scheduling and processing the asynchronous computations. handle_stripe will compute a block when a backing disk has failed. Since both RAID-5/6 use the same ops_complete_compute() we should set the second computation target in RAID-5 to (-1) [no target]. Signed-off-by: Yuri Tikhonov [EMAIL

[md-raid6-accel PATCH 11/12] md: remove unused functions

2007-12-04 Thread Yuri Tikhonov
Some clean-up of the replaced or already unnecessary functions. Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED] Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED] -- diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 9b6336f..1d45887 100644 --- a/drivers/md/raid5.c +++

[md-raid6-accel PATCH 04/12] md: common handle_stripe6() infrastructure

2007-12-04 Thread Yuri Tikhonov
We utilize get_stripe_work() to find new work to run. This function is shared with RAID-5. The only RAID-5 specific operation there is PREXOR. Then we call raid_run_ops() to process the requests pending. Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED] Signed-off-by: Mikhail Cherkashin [EMAIL

[md-raid6-accel PATCH 09/12] md: req/comp logic for async expand operations

2007-12-04 Thread Yuri Tikhonov
Support for expanding RAID-6 stripes asynchronously. By setting STRIPE_OP_POSTXOR without setting STRIPE_OP_BIODRAIN the completion path in handle stripe can differentiate expand operations from normal write operations. Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED] Signed-off-by: Mikhail

[md-raid6-accel PATCH 10/12] md: req/comp logic for async I/O operations

2007-12-04 Thread Yuri Tikhonov
I/O submission requests were already handled outside of the stripe lock in handle_stripe. Now that handle_stripe is only tasked with finding work, this logic belongs in raid5_run_ops Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED] Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED] -- diff

[md-raid6-accel PATCH 05/12] md: req/comp logic for async write operations

2007-12-04 Thread Yuri Tikhonov
(). This patch introduces one more RAID-5/6 shared function, it is handle_completed_postxor_requests(), to be called when either handle_stripe5() or handle_stripe6() discover the completeness of a post-xor operation for the stripe. Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED] Signed-off-by: Mikhail

[md-raid6-accel PATCH 07/12] md: req/comp logic for async check operations

2007-12-04 Thread Yuri Tikhonov
corresponds to the correct parity, non-zero to incorrect parity. This patch also removes the spare page for the RAID-6 Q-parity check, since it has gone into async_pqxor() [this is needed for the synchronous CPU case only; if the check operation is being performed by DMA there is no need for spares]. Signed-off

[md-raid6-accel PATCH 08/12] md: req/comp logic for async read operations

2007-12-04 Thread Yuri Tikhonov
When a read bio is attached to the stripe and the corresponding block is marked as R5_UPTODATE, then a biofill operation is scheduled to copy the data from the stripe cache to the bio buffer. Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED] Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED] --

[PATCH] Skip bio copy in full-stripe write ops

2007-11-23 Thread Yuri Tikhonov
Hello all, Here is a patch which makes it possible to skip the intermediate data copying between the bio requested to write and the disk cache in sh when a full-stripe write operation is under way. This improves the performance of write operations for some dedicated cases when big chunks of data
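Detecting that a write covers a full stripe, so the copy can be skipped, can be sketched roughly like this; the function and its parameters are illustrative assumptions, not the patch's actual test:

```c
#include <assert.h>

/* Hypothetical sketch: a bio covers the full stripe when its byte range
 * spans every data disk's chunk of that stripe. Only then can the
 * intermediate copy into the stripe cache be skipped and parity be
 * computed directly from the bio pages. */
static int is_full_stripe_write(unsigned long long bio_start,
                                unsigned long long bio_len,
                                unsigned long long stripe_start,
                                unsigned chunk_bytes, int data_disks)
{
    unsigned long long stripe_bytes =
        (unsigned long long)chunk_bytes * data_disks;
    return bio_start <= stripe_start &&
           bio_start + bio_len >= stripe_start + stripe_bytes;
}
```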

Re: [PATCH] Skip bio copy in full-stripe write ops

2007-11-23 Thread Neil Brown
On Friday November 23, [EMAIL PROTECTED] wrote: Hello all, Here is a patch which allows to skip intermediate data copying between the bio requested to write and the disk cache in sh if the full-stripe write operation is on the way. This improves the performance of write

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-14 Thread Greg KH
also pick up def6ae26 md: fix misapplied patch in raid5.c or I can resend the original raid5: fix clearing of biofill operations. The other patch for -stable raid5: fix unending write sequence is currently in -mm. Hm, I've attached the two patches that I have right now

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-14 Thread Neil Brown
On Tuesday November 13, [EMAIL PROTECTED] wrote: raid5-fix-unending-write-sequence.patch is in -mm and I believe is waiting on an Acked-by from Neil? It seems to have just been sent on to Linus, so it probably will go in without: Acked-By: NeilBrown [EMAIL PROTECTED] I'm beginning to

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-13 Thread Greg KH
On Mon, Oct 22, 2007 at 05:15:27PM +1000, NeilBrown wrote: It appears that a couple of bugs slipped in to md for 2.6.23. These two patches fix them and are appropriate for 2.6.23.y as well as 2.6.24-rcX Thanks, NeilBrown [PATCH 001 of 2] md: Fix an unsigned compare to allow creation

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-13 Thread Greg KH
[PATCH 001 of 2] md: Fix an unsigned compare to allow creation of bitmaps with v1.0 metadata. [PATCH 002 of 2] md: raid5: fix clearing of biofill operations I don't see these patches in 2.6.24-rcX, are they there under some other subject? Oh nevermind, I found them, sorry for the noise
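The bug class behind "Fix an unsigned compare" is the classic C pitfall where comparing a signed value against an unsigned one promotes the signed side, so a negative value sails past a bounds check (or, as here, a valid value is wrongly rejected). A minimal model of the pitfall, simulated with a 32-bit mask since Python integers are unbounded; `unsigned32` is an invented helper, not anything from the md code:

```python
def unsigned32(x: int) -> int:
    """Model C's conversion of a value to a 32-bit unsigned integer."""
    return x & 0xFFFFFFFF

offset, limit = -1, 100

# Correct signed check: a negative offset is rejected.
assert not (0 <= offset < limit)

# Unsigned-promoted compare: -1 becomes 4294967295, which is > limit,
# so the check gives the opposite answer from the one intended.
assert unsigned32(offset) > limit
```

The fix in such cases is simply to perform the comparison in a signed type (or validate the sign first) before any conversion takes place.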

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-13 Thread Dan Williams
for 2.6.23.y as well as 2.6.24-rcX Thanks, NeilBrown [PATCH 001 of 2] md: Fix an unsigned compare to allow creation of bitmaps with v1.0 metadata. [PATCH 002 of 2] md: raid5: fix clearing of biofill operations I don't see these patches in 2.6.24-rcX, are they there under

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-13 Thread Greg KH
in to md for 2.6.23. These two patches fix them and are appropriate for 2.6.23.y as well as 2.6.24-rcX Thanks, NeilBrown [PATCH 001 of 2] md: Fix an unsigned compare to allow creation of bitmaps with v1.0 metadata. [PATCH 002 of 2] md: raid5: fix clearing of biofill

Re: [stable] [PATCH 000 of 2] md: Fixes for md in 2.6.23

2007-11-13 Thread Dan Williams
On Nov 13, 2007 8:43 PM, Greg KH [EMAIL PROTECTED] wrote: Careful, it looks like you cherry picked commit 4ae3f847 md: raid5: fix clearing of biofill operations which ended up misapplied in Linus' tree, You should either also pick up def6ae26 md: fix misapplied patch in raid5.c or I can

Re: [RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-11 Thread Samuel Tardieu
Konstantin == Konstantin Sharlaimov [EMAIL PROTECTED] writes: Konstantin This patch adds RAID1 read balancing to device mapper. A Konstantin read operation that is close (in terms of sectors) to a Konstantin previous read or write goes to the same mirror. I am currently running it on top

Re: Was: [RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-08 Thread Konstantin Sharlaimov
chunk size. So read/write within a 16k chunk will be the same disk but the next 16k are a different disk and near doesn't apply anymore. Currently there is no way to turn this feature off (this is only a request for comments patch), but I'm planning to make it configurable via sysfs and module

Re: Was: [RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-08 Thread Bill Davidsen
Rik van Riel wrote: On Thu, 08 Nov 2007 17:28:37 +0100 Goswin von Brederlow [EMAIL PROTECTED] wrote: Maybe you need more parameter: Generally a bad idea, unless you can come up with sane defaults (which do not need tuning 99% of the time) or you can derive these parameters

Re: Was: [RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-08 Thread Bill Davidsen
a request for comments patch), but I'm planning to make it configurable via sysfs and module parameters. Thanks for suggestion for the near definition. What do you think about adding the chunk_size parameter (with the default value of 1 chunk = 1 sector). Setting it to 32 will make all reads within 16k
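The chunk granularity being discussed reduces to a sector-to-chunk mapping. A sketch, assuming 512-byte sectors as usual: with `chunk_size` = 32 sectors, a chunk spans 32 × 512 B = 16 KiB, so two requests go to the same mirror exactly when their sectors map to the same chunk index. The function name is hypothetical.

```python
def chunk_index(sector: int, chunk_size: int = 32) -> int:
    """Map a sector number to its chunk index.

    chunk_size is in 512-byte sectors; the default of 32 gives the
    16 KiB chunks mentioned in the thread. Requests whose sectors share
    a chunk index would be routed to the same mirror.
    """
    return sector // chunk_size

assert chunk_index(0) == chunk_index(31)    # same 16 KiB chunk, same disk
assert chunk_index(31) != chunk_index(32)   # next chunk: possibly another disk
```

With the default of 1 sector per chunk mentioned in the message, the mapping degenerates to the identity and "near" only ever matches exactly adjacent sectors.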

Re: Was: [RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-08 Thread Rik van Riel
On Thu, 08 Nov 2007 17:28:37 +0100 Goswin von Brederlow [EMAIL PROTECTED] wrote: Maybe you need more parameter: Generally a bad idea, unless you can come up with sane defaults (which do not need tuning 99% of the time) or you can derive these parameters automatically from the RAID configuration

[PATCH] raid5: fix unending write sequence

2007-11-08 Thread Dan Williams
was sampled 'set' when it should not have been. This patch cleans up cases where the code looks at sh->ops.pending when it should be looking at the consistent stack-based snapshot of the operations flags. Report from Joël: Resync done. Patch fix this bug. Signed-off-by: Dan Williams [EMAIL


Re: Was: [RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-08 Thread Goswin von Brederlow
Rik van Riel [EMAIL PROTECTED] writes: On Thu, 08 Nov 2007 17:28:37 +0100 Goswin von Brederlow [EMAIL PROTECTED] wrote: Maybe you need more parameter: Generally a bad idea, unless you can come up with sane defaults (which do not need tuning 99% of the time) or you can derive these

Was: [RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-07 Thread Goswin von Brederlow
Konstantin Sharlaimov [EMAIL PROTECTED] writes: This patch adds RAID1 read balancing to device mapper. A read operation that is close (in terms of sectors) to a previous read or write goes to the same mirror. I wonder if there shouldn't be a way to turn this off (or if there already is one

[RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-03 Thread Konstantin Sharlaimov
This patch adds RAID1 read balancing to device mapper. A read operation that is close (in terms of sectors) to a previous read or write goes to the same mirror. Signed-off-by: Konstantin Sharlaimov [EMAIL PROTECTED] --- Please give it a try, it works for me, yet my results might be system
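The balancing rule stated above — a read close in sectors to a previous request goes to the same mirror — can be modeled in miniature. This is a simplified illustration, not the device-mapper implementation; `Mirror` and `choose_mirror` are invented names:

```python
class Mirror:
    """Toy mirror leg tracking the sector of its last completed request."""
    def __init__(self, name: str):
        self.name = name
        self.head = 0  # last serviced sector

def choose_mirror(mirrors, sector: int) -> "Mirror":
    """Send the request to the mirror whose last position is nearest,
    so a read near a previous read or write stays on the same leg."""
    best = min(mirrors, key=lambda m: abs(m.head - sector))
    best.head = sector  # remember position for the next decision
    return best

legs = [Mirror("a"), Mirror("b")]
legs[1].head = 1000
assert choose_mirror(legs, 996).name == "b"  # near b's last position
assert choose_mirror(legs, 8).name == "a"    # far from b, near a
```

The payoff is the one the thread discusses: sequential readers keep hitting the same disk and avoid forcing every mirror's head to seek, at the cost of the tuning questions (chunk size, defaults) raised in the replies.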

Re: [RFC PATCH 2.6.23.1] md: add dm-raid1 read balancing

2007-11-03 Thread Rik van Riel
On Sat, 03 Nov 2007 20:08:42 +1000 Konstantin Sharlaimov [EMAIL PROTECTED] wrote: This patch adds RAID1 read balancing to device mapper. A read operation that is close (in terms of sectors) to a previous read or write goes to the same mirror. Signed-off-by: Konstantin Sharlaimov [EMAIL
