Hi Dan,
[ Minor thing ... ]
On 6/27/07, Dan Williams <[EMAIL PROTECTED]> wrote:
The async_tx api tries to use a dma engine for an operation, but will fall
back to an optimized software routine otherwise. Xor support is
implemented using the raid5 xor routines. For organizational purposes this
On 6/26/07, Mr. James W. Laferriere <[EMAIL PROTECTED]> wrote:
Hello Dan,
On Tue, 26 Jun 2007, Dan Williams wrote:
> Greetings,
>
> Per Andrew's suggestion this is the md raid5 acceleration patch set
> updated with more thorough changelogs to lower the barrier to entry for
> reviewers.
Hello Dan,
On Tue, 26 Jun 2007, Dan Williams wrote:
Greetings,
Per Andrew's suggestion this is the md raid5 acceleration patch set
updated with more thorough changelogs to lower the barrier to entry for
reviewers. To get started with the code I would suggest the following
order:
[md-a
Cc: Russell King <[EMAIL PROTECTED]>
Signed-off-by: Dan Williams <[EMAIL PROTECTED]>
---
arch/arm/Kconfig | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 50d9f3e..0cb2d4f 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1
Adds the platform device definitions and the architecture specific
support routines (i.e. register initialization and descriptor formats) for the
iop-adma driver.
Changelog:
* added 'descriptor pool size' to the platform data
* add base support for buffer sizes larger than 16MB (hw max)
* build er
Adds the platform device definitions and the architecture specific support
routines (i.e. register initialization and descriptor formats) for the
iop-adma driver.
Changelog:
* add support for > 1k zero sum buffer sizes
* added dma/aau platform devices to iq80321 and iq80332 setup
* fixed the calcu
platform_device defines the
capabilities of the channels
20070626: Callbacks are run in a tasklet. Given the recent discussion on
LKML about killing tasklets in favor of workqueues, I did a quick conversion
of the driver. RAID5 resync performance dropped from 50 MB/s to 30 MB/s, so
the tasklet
I/O submission requests were already handled outside of the stripe lock in
handle_stripe. Now that handle_stripe is only tasked with finding work,
this logic belongs in raid5_run_ops.
Signed-off-by: Dan Williams <[EMAIL PROTECTED]>
Acked-By: NeilBrown <[EMAIL PROTECTED]>
---
drivers/md/raid5.c
replaced by raid5_run_ops
Signed-off-by: Dan Williams <[EMAIL PROTECTED]>
Acked-By: NeilBrown <[EMAIL PROTECTED]>
---
drivers/md/raid5.c | 124
1 files changed, 0 insertions(+), 124 deletions(-)
diff --git a/drivers/md/raid5.c b/drivers/md/r
When a read bio is attached to the stripe and the corresponding block is
marked R5_UPTODATE, then a read (biofill) operation is scheduled to copy
the data from the stripe cache to the bio buffer. handle_stripe flags the
blocks to be operated on with the R5_Wantfill flag. If new read requests
arri
When a stripe is being expanded, bulk copying takes place to move the data
from the old stripe to the new. Since raid5_run_ops only operates on one
stripe at a time, these bulk copies are handled in-line under the stripe
lock. In the dma offload case we poll for the completion of the operation.
Af
handle_stripe will compute a block when a backing disk has failed, or when
it determines it can save a disk read by computing the block from all the
other up-to-date blocks.
Previously a block would be computed under the lock and subsequent logic in
handle_stripe could use the newly up-to-date blo
Check operations are scheduled when the array is being resynced or an
explicit 'check/repair' command was sent to the array. Previously check
operations would destroy the parity block in the cache such that even if
parity turned out to be correct the parity block would be marked
!R5_UPTODATE at th
All the handle_stripe operations that are to be transitioned to use
raid5_run_ops need a method to coherently gather work under the stripe-lock
and hand that work off to raid5_run_ops. The 'get_stripe_work' routine
runs under the lock to read all the bits in sh->ops.pending that do not
have the co
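As a rough illustration of that shape (the field names 'pending' and 'ack'
are assumptions taken from the description above; this is a sketch of the
lock/handoff structure, not the actual patch):

	static unsigned long get_stripe_work(struct stripe_head *sh)
	{
		/* called with sh->lock held: collect the operations that
		 * have been requested but not yet handed off, and mark them
		 * as taken so a later pass does not schedule them twice */
		unsigned long pending = sh->ops.pending & ~sh->ops.ack;

		sh->ops.ack |= pending;
		return pending;
	}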
After handle_stripe5 decides whether it wants to perform a
read-modify-write, or a reconstruct write it calls
handle_write_operations5. A read-modify-write operation will perform an
xor subtraction of the blocks marked with the R5_Wantprexor flag, copy the
new data into the stripe (biodrain) and p
When the raid acceleration work was proposed, Neil laid out the following
attack plan:
1/ move the xor and copy operations outside spin_lock(&sh->lock)
2/ find/implement an asynchronous offload api
The raid5_run_ops routine uses the asynchronous offload api (async_tx) and
the stripe_operations me
handle_stripe5 and handle_stripe6 have very deep logic paths handling the
various states of a stripe_head. By introducing the 'stripe_head_state'
and 'r6_state' objects, large portions of the logic can be moved to
sub-routines.
'struct stripe_head_state' consumes all of the automatic variables th
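A hedged sketch of what such a state object might look like (the excerpt
above is truncated, so these member names are illustrative guesses based on
the description, not the full definition):

	struct stripe_head_state {
		/* counters and flags that used to live as automatic
		 * variables inside handle_stripe5/handle_stripe6 */
		int syncing, expanding, expanded;
		int locked, uptodate, to_read, to_write, failed, written;
		int failed_num;
	};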
Replaces PRINTK with pr_debug, and kills the RAID5_DEBUG definition in
favor of the global DEBUG definition. To get local debug messages just add
'#define DEBUG' to the top of the file.
Signed-off-by: Dan Williams <[EMAIL PROTECTED]>
---
drivers/md/raid5.c | 116 ++-
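For readers unfamiliar with the convention, a minimal sketch of how that
works (an illustrative file, not raid5.c itself): pr_debug() only expands to
a real printk when DEBUG is defined before the headers are included.

	#define DEBUG		/* enables pr_debug() in this file */
	#include <linux/kernel.h>

	static void example(sector_t sector)
	{
		pr_debug("handling sector %llu\n",
			 (unsigned long long)sector);
	}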
The async_tx api provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies. It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations. Code
that is wri
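As a hedged sketch of what "inter-transactional dependencies" means for a
client (the argument order and flag handling here are assumptions based on
this description, not a verbatim copy of the api):

	#include <linux/async_tx.h>

	static void copy_then_xor(struct page *dest, struct page *src,
				  struct page **xor_srcs, int src_cnt,
				  size_t len)
	{
		struct dma_async_tx_descriptor *tx;

		/* stage 1: copy (runs on an engine if one is available,
		 * otherwise falls back to memcpy) */
		tx = async_memcpy(dest, src, 0, 0, len, 0, NULL, NULL, NULL);

		/* stage 2: the xor is chained on the copy via the depend_tx
		 * argument, so it only starts once the copy has completed */
		async_xor(dest, xor_srcs, 0, src_cnt, len, 0, tx, NULL, NULL);
	}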
The async_tx api tries to use a dma engine for an operation, but will fall
back to an optimized software routine otherwise. Xor support is
implemented using the raid5 xor routines. For organizational purposes this
routine is moved to a common area.
The following fixes are also made:
* rename xor
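The fall-back logic described above roughly takes the following shape (the
helper names are hypothetical, made up for illustration; only the structure,
try an engine first and otherwise run the synchronous xor routine, is taken
from the text):

	static void xor_fallback_sketch(struct dma_chan *chan,
					struct page *dest, struct page **srcs,
					int src_cnt, size_t len)
	{
		if (chan) {
			/* hardware path: build and submit a dma xor
			 * descriptor on the channel (hypothetical helper) */
			submit_hw_xor(chan, dest, srcs, src_cnt, len);
		} else {
			/* software path: the relocated raid5 xor routine
			 * (hypothetical wrapper around it) */
			run_sw_xor(dest, srcs, src_cnt, len);
		}
	}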
The current dmaengine interface defines multiple routines per operation,
e.g. dma_async_memcpy_buf_to_buf, dma_async_memcpy_buf_to_page, etc. Adding
more operation types (xor, crc, etc) to this model would result in an
unmanageable number of method permutations.
Are we really going to add
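To make the combinatorial problem concrete, a sketch of where the current
model leads (the two memcpy entry points are the ones named above; the xor
names in the comment are hypothetical extrapolations, which is exactly the
point):

	static void permutation_problem(struct dma_chan *chan, void *dest,
					void *src, struct page *page,
					unsigned int offset, size_t len)
	{
		/* today: one exported routine per (operation, buffer type)
		 * pair */
		dma_async_memcpy_buf_to_buf(chan, dest, src, len);
		dma_async_memcpy_buf_to_page(chan, page, offset, src, len);

		/* adding xor, crc, etc. the same way would mean yet another
		 * family per operation (dma_async_xor_buf_to_buf,
		 * dma_async_xor_pg_to_pg, ...), hence the refactor around a
		 * single transaction descriptor that is prepared per
		 * operation type and submitted the same way */
	}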
The current implementation assumes that a channel will only be used by one
client at a time. In order to enable channel sharing the dmaengine core is
changed to a model where clients subscribe to channel-available-events.
Instead of tracking how many channels a client wants and how many it has
rec
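A hedged sketch of the subscription model being described (the callback and
event names are assumptions for illustration; the point is that a client
reacts to channels appearing and disappearing rather than requesting a fixed
count up front):

	static void my_dma_event(struct dma_client *client,
				 struct dma_chan *chan, enum dma_event event)
	{
		switch (event) {
		case DMA_RESOURCE_ADDED:
			/* a channel became available: start using it */
			break;
		case DMA_RESOURCE_REMOVED:
			/* the channel is going away: stop queueing work */
			break;
		default:
			break;
		}
	}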
Greetings,
Per Andrew's suggestion this is the md raid5 acceleration patch set
updated with more thorough changelogs to lower the barrier to entry for
reviewers. To get started with the code I would suggest the following
order:
[md-accel PATCH 01/19] dmaengine: refactor dmaengine around
dma_asyn
How do I create an array with a helpful name, e.g. "/dev/md/storage"?
The mdadm man page hints at this in the discussion of the --auto option
in the ASSEMBLE MODE section, but doesn't clearly indicate how it's done.
Must I create the device nodes by hand first using MAKEDEV?
Thanks.
Ian Dall wrote:
On Tue, 2007-06-26 at 16:38 +1000, Neil Brown wrote:
On Tuesday June 26, [EMAIL PROTECTED] wrote:
When I try to disable auto-detection with kernel boot parameters, it
goes ahead and auto-assembles and runs anyway. The md= parameters seem
to be noticed, but don't seem t
Johny Mail list wrote:
Hello list,
I have a little question about software RAID on Linux.
I have installed software RAID on all my Dell SC1425 servers, believing
that md RAID was a solid driver.
Recently I ran some tests on a server to check whether the RAID handles a
hard drive power failure
mdadm /dev/md0 --fail /dev/sda1
On Tue, 26 Jun 2007, Maurice Hilarius wrote:
Good day all.
Scenario:
Pair of identical disks.
partitions:
Disk 0:
/boot - NON-RAIDed
swap
/ - rest of disk
Disk 1
/boot1 - placeholder to take same space as /boot on disk0 - NON-RAIDed
swap
/ - rest of disk
I
Good day all.
Scenario:
Pair of identical disks.
partitions:
Disk 0:
/boot - NON-RAIDed
swap
/ - rest of disk
Disk 1
/boot1 - placeholder to take same space as /boot on disk0 - NON-RAIDed
swap
/ - rest of disk
I created RAID1 over / on both disks, made /dev/md0
From time to time I want to "
Hello list,
I have a little question about software RAID on Linux.
I have installed software RAID on all my Dell SC1425 servers, believing
that md RAID was a solid driver.
Recently I ran some tests on a server to check whether the RAID handles a
hard drive power failure properly, so I powered up
I repeat: what does the patch do (or is this no longer applicable)?
It addressed a problem where, if your stripe_cache_size was above a
certain value, the rebuild would run at 1-3 MB/s. You can always force a
higher speed with the min parameter. Forcing it, you should get good
speed, certainly faster than 1-3 MB/s anyway :)
Jus
On Tue, 26 Jun 2007, Justin Piszcz wrote:
>
>
> On Tue, 26 Jun 2007, Jon Nelson wrote:
>
> > On Tue, 26 Jun 2007, Justin Piszcz wrote:
> >
> > >
> > >
> > > On Tue, 26 Jun 2007, Jon Nelson wrote:
> > >
> > > > On Mon, 25 Jun 2007, Justin Piszcz wrote:
> > > >
> > > > > Neil has a patch for the
Ha! I think I have almost figured this out, and it has to do with initrd.
I went back to the 2.6.17 kernel, which I happened to have the source tree
for, and put in some printks.
First, if there is a ramdisk, the kernel doesn't auto-detect
raids /regardless/ of whether raid=noautodetect is specified or
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30, for in
On Tue, 26 Jun 2007, Justin Piszcz wrote:
>
>
> On Tue, 26 Jun 2007, Jon Nelson wrote:
>
> > On Mon, 25 Jun 2007, Justin Piszcz wrote:
> >
> > > Neil has a patch for the bad speed.
> >
> > What does the patch do?
> >
> > > In the meantime, do this (or better, set it to 30, for instance):
> >
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30, for instance):
# Set minimum and maximum raid rebuild speed to 60MB/s.
echo "Setting minimum
On Mon, 25 Jun 2007, Justin Piszcz wrote:
> Neil has a patch for the bad speed.
What does the patch do?
> In the meantime, do this (or better, set it to 30, for instance):
>
> # Set minimum and maximum raid rebuild speed to 60MB/s.
> echo "Setting minimum and maximum resync speed to 60 MiB/s
There is some kind of bug here; I also tried with 256 KiB. It ran two
tests (bonnie++) OK, but then on the third, BANG, bonnie++ is now stuck in
D-state. A pretty nasty bug.
On Tue, 26 Jun 2007, Justin Piszcz wrote:
If you set the stripe_cache_size less than or equal to the chunk size of the
SW RAID5 arr
If you set the stripe_cache_size less than or equal to the chunk size of
the SW RAID5 array, processes will hang in D-state indefinitely until
you change the stripe_cache_size to a value greater than the chunk size.
Tested with 2.6.22-rc6 and a 128 KiB RAID5 chunk size; when I set it to
256 KiB, no problems. 12
On Tue, 2007-06-26 at 16:38 +1000, Neil Brown wrote:
> On Tuesday June 26, [EMAIL PROTECTED] wrote:
> > When I try to disable auto-detection with kernel boot parameters, it
> > goes ahead and auto-assembles and runs anyway. The md= parameters seem
> > to be noticed, but don't seem to have any oth