raid=noautodetect is apparently ignored?

2007-06-26 Thread Ian Dall
When I try to disable auto detection with kernel boot parameters, it goes ahead and auto-assembles and runs anyway. The md= parameters seem to be noticed, but don't seem to have any other effect (beyond producing a dmesg entry). Here is the result: $ dmesg | egrep 'raid|md:' Kernel
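
For reference, a minimal sketch of the boot parameters being discussed; the device names and partition numbers below are placeholders, not taken from Ian's report:

    # kernel command line: disable autodetection and describe md0 explicitly
    #   raid=noautodetect md=0,/dev/sdb1,/dev/sdc1
    # after booting, check what the kernel actually did with them:
    $ dmesg | egrep 'raid|md:'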

Re: raid=noautodetect is apparently ignored?

2007-06-26 Thread Neil Brown
On Tuesday June 26, [EMAIL PROTECTED] wrote: When I try to disable auto detection with kernel boot parameters, it goes ahead and auto-assembles and runs anyway. The md= parameters seem to be noticed, but don't seem to have any other effect (beyond producing a dmesg entry). Odd. Maybe you

Re: raid=noautodetect is apparently ignored?

2007-06-26 Thread Ian Dall
On Tue, 2007-06-26 at 16:38 +1000, Neil Brown wrote: On Tuesday June 26, [EMAIL PROTECTED] wrote: When I try to disable auto detection with kernel boot parameters, it goes ahead and auto-assembles and runs anyway. The md= parameters seem to be noticed, but don't seem to have any other

if (stripe_cache_size <= chunk_size) { BUG() }

2007-06-26 Thread Justin Piszcz
If you set the stripe_cache_size less than or equal to the chunk size of the SW RAID5 array, processes will hang in D-state indefinitely until you raise the stripe_cache_size above the chunk size. Tested with 2.6.22-rc6 and a 128 KiB RAID5 chunk size; when I set it to 256 KiB, no problems.
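
For reference, a minimal sketch of how the two values are inspected and changed through sysfs; /dev/md3 and the value 256 are illustrative only. Note that stripe_cache_size is counted in cache entries (pages) per device, while the chunk size is reported in KiB:

    $ mdadm --detail /dev/md3 | grep -i chunk           # chunk size of the array
    $ cat /sys/block/md3/md/stripe_cache_size           # current cache size, in pages per device
    $ echo 256 > /sys/block/md3/md/stripe_cache_size    # (as root) raise it past the reported hang point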

Re: if (stripe_cache_size <= chunk_size) { BUG() }

2007-06-26 Thread Justin Piszcz
There is some kind of bug here: I also tried 256 KiB, and it ran two bonnie++ tests OK, but on the third, BANG, bonnie++ is now in D-state. Pretty nasty bug. On Tue, 26 Jun 2007, Justin Piszcz wrote: If you set the stripe_cache_size less than or equal to the chunk size of the SW RAID5

Re: stripe_cache_size and performance

2007-06-26 Thread Jon Nelson
On Mon, 25 Jun 2007, Justin Piszcz wrote: Neil has a patch for the bad speed. What does the patch do? In the meantime, do this (or better to set it to 30, for instance): # Set minimum and maximum raid rebuild speed to 60MB/s. echo Setting minimum and maximum resync speed to 60 MiB/s...
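
The speed limits being set here live in /proc/sys/dev/raid/ and take values in KiB/s, so 60 MB/s corresponds to roughly 60000; a minimal sketch of the commands such a script wraps (not a quote of the actual script):

    echo 60000 > /proc/sys/dev/raid/speed_limit_min
    echo 60000 > /proc/sys/dev/raid/speed_limit_max
    # equivalently, via sysctl:
    sysctl -w dev.raid.speed_limit_min=60000
    sysctl -w dev.raid.speed_limit_max=60000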

Re: stripe_cache_size and performance

2007-06-26 Thread Justin Piszcz
On Tue, 26 Jun 2007, Jon Nelson wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: Neil has a patch for the bad speed. What does the patch do? In the mean time, do this (or better to set it to 30, for instance): # Set minimum and maximum raid rebuild speed to 60MB/s. echo Setting minimum

Re: stripe_cache_size and performance

2007-06-26 Thread Jon Nelson
On Tue, 26 Jun 2007, Justin Piszcz wrote: On Tue, 26 Jun 2007, Jon Nelson wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: Neil has a patch for the bad speed. What does the patch do? In the mean time, do this (or better to set it to 30, for instance): # Set minimum

Re: stripe_cache_size and performance

2007-06-26 Thread Justin Piszcz
On Tue, 26 Jun 2007, Jon Nelson wrote: On Tue, 26 Jun 2007, Justin Piszcz wrote: On Tue, 26 Jun 2007, Jon Nelson wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: Neil has a patch for the bad speed. What does the patch do? In the mean time, do this (or better to set it to 30, for

Re: stripe_cache_size and performance

2007-06-26 Thread Jon Nelson
On Tue, 26 Jun 2007, Justin Piszcz wrote: On Tue, 26 Jun 2007, Jon Nelson wrote: On Tue, 26 Jun 2007, Justin Piszcz wrote: On Tue, 26 Jun 2007, Jon Nelson wrote: On Mon, 25 Jun 2007, Justin Piszcz wrote: Neil has a patch for the bad speed. What does the

Re: stripe_cache_size and performance

2007-06-26 Thread Justin Piszcz
I repeat: what does the patch do (or is this no longer applicable)? The patch was for a problem where, if your stripe_cache_size was above a certain value, the rebuild would run at only 1-3 MB/s. You can always force the speed with the min parameter; forcing it, you should get good speed, faster than 1-3 MB/s anyway :)

Linux Software RAID is really RAID?

2007-06-26 Thread Johny Mail list
Hello list, I have a little question about software RAID on Linux. I have installed software RAID on all my Dell SC1425 servers, believing that md RAID was a robust driver. Recently I ran some tests on a server to see whether it handles a hard drive power failure correctly, so I powered
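
For context, a minimal sketch of how the array's state is usually checked after such a pull-a-drive test; /dev/md0 is a placeholder, not Johny's actual device:

    $ cat /proc/mdstat                              # a failed member shows up as (F), e.g. [U_]
    $ mdadm --detail /dev/md0 | egrep 'State|Failed'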

deliberately degrading RAID1 to a single disk, then back again

2007-06-26 Thread Maurice Hilarius
Good day all. Scenario: a pair of identical disks. Partitions on Disk 0: /boot (non-RAIDed), swap, / (rest of disk). Disk 1: /boot1 (a placeholder taking the same space as /boot on disk 0, non-RAIDed), swap, / (rest of disk). I created RAID1 over / on both disks, making /dev/md0. From time to time I want to

Re: deliberately degrading RAID1 to a single disk, then back again

2007-06-26 Thread Justin Piszcz
mdadm /dev/md0 --fail /dev/sda1 On Tue, 26 Jun 2007, Maurice Hilarius wrote: Good day all. Scenario: a pair of identical disks. Partitions on Disk 0: /boot (non-RAIDed), swap, / (rest of disk). Disk 1: /boot1 (a placeholder taking the same space as /boot on disk 0, non-RAIDed), swap, / (rest of disk). I
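
The reply above shows the first step; a minimal sketch of the full cycle for Maurice's scenario, assuming /dev/sda1 is the RAID1 member to be detached:

    mdadm /dev/md0 --fail /dev/sda1      # mark the member faulty
    mdadm /dev/md0 --remove /dev/sda1    # pull it out of the array
    # ... run on the single remaining disk for a while ...
    mdadm /dev/md0 --add /dev/sda1       # re-add it later
    cat /proc/mdstat                     # watch the resync onto the re-added disk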

Re: Linux Software RAID is really RAID?

2007-06-26 Thread Brad Campbell
Johny Mail list wrote: Hello list, I have a little question about software RAID on Linux. I have installed software RAID on all my Dell SC1425 servers, believing that md RAID was a robust driver. Recently I ran some tests on a server to see whether it handles a hard drive power failure

Re: raid=noautodetect is apparently ignored?

2007-06-26 Thread Bill Davidsen
Ian Dall wrote: On Tue, 2007-06-26 at 16:38 +1000, Neil Brown wrote: On Tuesday June 26, [EMAIL PROTECTED] wrote: When I try to disable auto detection with kernel boot parameters, it goes ahead and auto-assembles and runs anyway. The md= parameters seem to be noticed, but don't seem

mdadm usage: creating arrays with helpful names?

2007-06-26 Thread Richard Michael
How do I create an array with a helpful name, i.e. /dev/md/storage? The mdadm man page hints at this in the discussion of the --auto option in the ASSEMBLE MODE section, but doesn't clearly indicate how it's done. Must I create the device nodes by hand first using MAKEDEV? Thanks.
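
One way this is commonly done is to ask mdadm to create the node itself at creation time; the level and member devices below are illustrative, not from Richard's setup:

    mdadm --create /dev/md/storage --auto=yes --level=1 --raid-devices=2 \
          /dev/sdb1 /dev/sdc1
    # --auto=yes tells mdadm to create the /dev/md/storage device node itself,
    # so no separate MAKEDEV/mknod step is needed.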

[md-accel PATCH 01/19] dmaengine: refactor dmaengine around dma_async_tx_descriptor

2007-06-26 Thread Dan Williams
The current dmaengine interface defines multiple routines per operation, i.e. dma_async_memcpy_buf_to_buf, dma_async_memcpy_buf_to_page, etc. Adding more operation types (xor, crc, etc.) to this model would result in an unmanageable number of method permutations. Are we really going to add

[md-accel PATCH 08/19] md: common infrastructure for running operations with raid5_run_ops

2007-06-26 Thread Dan Williams
All the handle_stripe operations that are to be transitioned to use raid5_run_ops need a method to coherently gather work under the stripe lock and hand that work off to raid5_run_ops. The 'get_stripe_work' routine runs under the lock to read all the bits in sh->ops.pending that do not have the

[md-accel PATCH 10/19] md: handle_stripe5 - add request/completion logic for async compute ops

2007-06-26 Thread Dan Williams
handle_stripe will compute a block when a backing disk has failed, or when it determines it can save a disk read by computing the block from all the other up-to-date blocks. Previously a block would be computed under the lock and subsequent logic in handle_stripe could use the newly up-to-date

[md-accel PATCH 11/19] md: handle_stripe5 - add request/completion logic for async check ops

2007-06-26 Thread Dan Williams
Check operations are scheduled when the array is being resynced or an explicit 'check/repair' command was sent to the array. Previously check operations would destroy the parity block in the cache such that even if parity turned out to be correct the parity block would be marked !R5_UPTODATE at

[md-accel PATCH 12/19] md: handle_stripe5 - add request/completion logic for async read ops

2007-06-26 Thread Dan Williams
When a read bio is attached to the stripe and the corresponding block is marked R5_UPTODATE, then a read (biofill) operation is scheduled to copy the data from the stripe cache to the bio buffer. handle_stripe flags the blocks to be operated on with the R5_Wantfill flag. If new read requests

[md-accel PATCH 13/19] md: handle_stripe5 - add request/completion logic for async expand ops

2007-06-26 Thread Dan Williams
When a stripe is being expanded bulk copying takes place to move the data from the old stripe to the new. Since raid5_run_ops only operates on one stripe at a time these bulk copies are handled in-line under the stripe lock. In the dma offload case we poll for the completion of the operation.

[md-accel PATCH 15/19] md: remove raid5 compute_block and compute_parity5

2007-06-26 Thread Dan Williams
replaced by raid5_run_ops Signed-off-by: Dan Williams [EMAIL PROTECTED] Acked-By: NeilBrown [EMAIL PROTECTED] --- drivers/md/raid5.c | 124 1 files changed, 0 insertions(+), 124 deletions(-) diff --git a/drivers/md/raid5.c

[md-accel PATCH 05/19] raid5: refactor handle_stripe5 and handle_stripe6 (v2)

2007-06-26 Thread Dan Williams
handle_stripe5 and handle_stripe6 have very deep logic paths handling the various states of a stripe_head. By introducing the 'stripe_head_state' and 'r6_state' objects, large portions of the logic can be moved to sub-routines. 'struct stripe_head_state' consumes all of the automatic variables

[md-accel PATCH 17/19] iop13xx: surface the iop13xx adma units to the iop-adma driver

2007-06-26 Thread Dan Williams
Adds the platform device definitions and the architecture specific support routines (i.e. register initialization and descriptor formats) for the iop-adma driver. Changelog: * added 'descriptor pool size' to the platform data * add base support for buffer sizes larger than 16MB (hw max) * build

[md-accel PATCH 18/19] iop3xx: surface the iop3xx DMA and AAU units to the iop-adma driver

2007-06-26 Thread Dan Williams
Adds the platform device definitions and the architecture specific support routines (i.e. register initialization and descriptor formats) for the iop-adma driver. Changelog: * add support for 1k zero sum buffer sizes * added dma/aau platform devices to iq80321 and iq80332 setup * fixed the

[md-accel PATCH 19/19] ARM: Add drivers/dma to arch/arm/Kconfig

2007-06-26 Thread Dan Williams
Cc: Russell King [EMAIL PROTECTED] Signed-off-by: Dan Williams [EMAIL PROTECTED] --- arch/arm/Kconfig |2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 50d9f3e..0cb2d4f 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@

Re: [md-accel PATCH 00/19] md raid acceleration and the async_tx api

2007-06-26 Thread Mr. James W. Laferriere
Hello Dan, On Tue, 26 Jun 2007, Dan Williams wrote: Greetings, Per Andrew's suggestion this is the md raid5 acceleration patch set updated with more thorough changelogs to lower the barrier to entry for reviewers. To get started with the code I would suggest the following order:

Re: [md-accel PATCH 00/19] md raid acceleration and the async_tx api

2007-06-26 Thread Dan Williams
On 6/26/07, Mr. James W. Laferriere [EMAIL PROTECTED] wrote: Hello Dan, On Tue, 26 Jun 2007, Dan Williams wrote: Greetings, Per Andrew's suggestion this is the md raid5 acceleration patch set updated with more thorough changelogs to lower the barrier to entry for reviewers. To