When I try to disable auto-detection with kernel boot parameters, it
goes ahead and auto-assembles and runs the arrays anyway. The md=
parameters seem to be noticed, but don't seem to have any effect beyond
producing a line in dmesg.
Here is the result:
$ dmesg | egrep 'raid|md:'
Kernel
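For reference, a minimal sketch of the boot parameters in question, assuming a GRUB-style kernel line; the root device and array members below are placeholders:
kernel /vmlinuz root=/dev/sda3 raid=noautodetect md=0,/dev/sda1,/dev/sdb1
raid=noautodetect is what suppresses the 0xfd partition-type autodetect scan; md=0,dev0,dev1,... asks the kernel to assemble /dev/md0 from the listed devices.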
On Tuesday June 26, [EMAIL PROTECTED] wrote:
When I try to disable auto-detection with kernel boot parameters, it
goes ahead and auto-assembles and runs the arrays anyway. The md=
parameters seem to be noticed, but don't seem to have any effect beyond
producing a line in dmesg.
Odd.
Maybe you
On Tue, 2007-06-26 at 16:38 +1000, Neil Brown wrote:
On Tuesday June 26, [EMAIL PROTECTED] wrote:
When I try to disable auto-detection with kernel boot parameters, it
goes ahead and auto-assembles and runs the arrays anyway. The md=
parameters seem to be noticed, but don't seem to have any other
If you set stripe_cache_size to a value less than or equal to the chunk
size of the software RAID5 array, processes will hang in D-state
indefinitely until you raise stripe_cache_size above the chunk size.
Tested with 2.6.22-rc6 and a 128 KiB RAID5 chunk size; when I set it to
256 KiB, no problems.
There is some kind of bug here: I also tried with 256 KiB, and it ran two
tests (bonnie++) OK, but then on the third, BANG, bonnie++ is now in
D-state. Pretty nasty bug there.
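For anyone reproducing this, a minimal sketch of adjusting the cache at runtime, where md0 is a placeholder for the affected array (note the value is a count of cache entries, not KiB):
# echo 512 > /sys/block/md0/md/stripe_cache_size
# cat /sys/block/md0/md/stripe_cache_size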
On Tue, 26 Jun 2007, Justin Piszcz wrote:
If you set stripe_cache_size to a value less than or equal to the chunk size of the
software RAID5
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30, for instance):
# Set minimum and maximum raid rebuild speed to 60 MB/s.
echo Setting minimum and maximum resync speed to 60 MiB/s...
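The script is cut off above; a plausible completion, assuming the standard /proc tunables (values are in KiB/s, so 60000 is roughly 60 MiB/s):
echo 60000 > /proc/sys/dev/raid/speed_limit_min
echo 60000 > /proc/sys/dev/raid/speed_limit_max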
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30, for instance):
# Set minimum and maximum raid rebuild speed to 60 MB/s.
echo Setting minimum
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30, for instance):
# Set minimum
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30, for
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the
I repeat: what does the patch do (or is this no longer applicable)?
This was for a bug where, if your stripe_cache_size was above a certain
value, the array would rebuild at 1-3 MB/s. You can always force a higher
speed with the minimum-speed parameter; forcing it, you should get good
speed, faster than 1-3 MB/s anyway :)
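A minimal sketch of forcing the minimum on a single array via sysfs, where md0 is a placeholder (the value is in KiB/s, so 30000 matches the 30 MB/s suggested above):
# echo 30000 > /sys/block/md0/md/sync_speed_min
# cat /proc/mdstat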
Hello list,
I have a little question about software RAID on Linux.
I have installed software RAID on all my Dell SC1425 servers, believing
that md RAID was a robust driver.
Recently I ran some tests on a server to check whether the RAID handles a
hard drive power failure correctly, so I power
Good day all.
Scenario:
Pair of identical disks.
partitions:
Disk 0:
/boot - NON-RAIDed
swap
/ - rest of disk
Disk 1:
/boot1 - placeholder taking the same space as /boot on disk 0 - NON-RAIDed
swap
/ - rest of disk
I created RAID1 over / on both disks, making /dev/md0.
From time to time I want to:
mdadm /dev/md0 --fail /dev/sda1
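A minimal sketch of the full cycle this usually implies, using the names from the scenario above (run as root):
# mdadm /dev/md0 --fail /dev/sda1
# mdadm /dev/md0 --remove /dev/sda1
Then, after swapping or testing the disk, re-add it and let it resync:
# mdadm /dev/md0 --add /dev/sda1
# cat /proc/mdstat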
On Tue, 26 Jun 2007, Maurice Hilarius wrote:
Good day all.
Scenario:
Pair of identical disks.
partitions:
Disk 0:
/boot - NON-RAIDed
swap
/ - rest of disk
Disk 1:
/boot1 - placeholder taking the same space as /boot on disk 0 - NON-RAIDed
swap
/ - rest of disk
I
Johny Mail list wrote:
Hello list,
I have a little question about software RAID on Linux.
I have installed software RAID on all my Dell SC1425 servers, believing
that md RAID was a robust driver.
Recently I ran some tests on a server to check whether the RAID handles a
hard drive power failure
Ian Dall wrote:
On Tue, 2007-06-26 at 16:38 +1000, Neil Brown wrote:
On Tuesday June 26, [EMAIL PROTECTED] wrote:
When I try to disable auto-detection with kernel boot parameters, it
goes ahead and auto-assembles and runs the arrays anyway. The md=
parameters seem to be noticed, but don't seem
How do I create an array with a helpful name, e.g. /dev/md/storage?
The mdadm man page hints at this in the discussion of the --auto option
in the ASSEMBLE MODE section, but doesn't clearly indicate how it's done.
Must I create the device nodes by hand first using MAKEDEV?
Thanks.
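For what it's worth, a minimal sketch of doing this at create time with --auto, so mdadm makes the node itself; the level, device count, and member partitions here are placeholders:
# mdadm --create /dev/md/storage --auto=md --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
A matching ARRAY /dev/md/storage line in /etc/mdadm.conf then lets mdadm --assemble --auto recreate the node at boot without MAKEDEV.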
The current dmaengine interface defines multiple routines per operation,
e.g. dma_async_memcpy_buf_to_buf, dma_async_memcpy_buf_to_page, etc. Adding
more operation types (xor, crc, etc.) to this model would result in an
unmanageable number of method permutations.
Are we really going to add
All the handle_stripe operations that are to be transitioned to use
raid5_run_ops need a method to coherently gather work under the stripe
lock and hand that work off to raid5_run_ops. The 'get_stripe_work' routine
runs under the lock to read all the bits in sh->ops.pending that do not
have the
handle_stripe will compute a block when a backing disk has failed, or when
it determines it can save a disk read by computing the block from all the
other up-to-date blocks.
Previously a block would be computed under the lock and subsequent logic in
handle_stripe could use the newly up-to-date
Check operations are scheduled when the array is being resynced or when an
explicit 'check/repair' command is sent to the array. Previously, check
operations would destroy the parity block in the cache, such that even if
parity turned out to be correct the parity block would be marked
!R5_UPTODATE at
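For context, such a check is kicked off from userspace roughly like this, where md0 is a placeholder for the array:
# echo check > /sys/block/md0/md/sync_action
# cat /sys/block/md0/md/mismatch_cnt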
When a read bio is attached to the stripe and the corresponding block is
marked R5_UPTODATE, then a read (biofill) operation is scheduled to copy
the data from the stripe cache to the bio buffer. handle_stripe flags the
blocks to be operated on with the R5_Wantfill flag. If new read requests
When a stripe is being expanded bulk copying takes place to move the data
from the old stripe to the new. Since raid5_run_ops only operates on one
stripe at a time these bulk copies are handled in-line under the stripe
lock. In the dma offload case we poll for the completion of the operation.
replaced by raid5_run_ops
Signed-off-by: Dan Williams [EMAIL PROTECTED]
Acked-By: NeilBrown [EMAIL PROTECTED]
---
drivers/md/raid5.c | 124
1 files changed, 0 insertions(+), 124 deletions(-)
diff --git a/drivers/md/raid5.c
handle_stripe5 and handle_stripe6 have very deep logic paths handling the
various states of a stripe_head. By introducing the 'stripe_head_state'
and 'r6_state' objects, large portions of the logic can be moved to
sub-routines.
'struct stripe_head_state' consumes all of the automatic variables
Adds the platform device definitions and the architecture specific
support routines (i.e. register initialization and descriptor formats) for the
iop-adma driver.
Changelog:
* added 'descriptor pool size' to the platform data
* add base support for buffer sizes larger than 16MB (hw max)
* build
Adds the platform device definitions and the architecture specific support
routines (i.e. register initialization and descriptor formats) for the
iop-adma driver.
Changelog:
* add support for 1k zero sum buffer sizes
* added dma/aau platform devices to iq80321 and iq80332 setup
* fixed the
Cc: Russell King [EMAIL PROTECTED]
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---
arch/arm/Kconfig | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 50d9f3e..0cb2d4f 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@
Hello Dan,
On Tue, 26 Jun 2007, Dan Williams wrote:
Greetings,
Per Andrew's suggestion this is the md raid5 acceleration patch set
updated with more thorough changelogs to lower the barrier to entry for
reviewers. To get started with the code I would suggest the following
order:
On 6/26/07, Mr. James W. Laferriere [EMAIL PROTECTED] wrote:
Hello Dan,
On Tue, 26 Jun 2007, Dan Williams wrote:
Greetings,
Per Andrew's suggestion this is the md raid5 acceleration patch set
updated with more thorough changelogs to lower the barrier to entry for
reviewers. To