If the bitmap size is less than one page, including the super_block and the
bitmap, and the inode's i_blkbits is also small, the read_page call used to
read the sb_page may return an error.
For example, if the device is 12800 chunks, its bitmap file size is
about 1.6KB including the bitmap
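As a rough check of that figure (an added illustration, assuming the common 4KB page and a 256-byte bitmap superblock): one bit per chunk means 12800 chunks need only 1600 bytes of bitmap, so the whole file stays well inside a single page.

#include <stdio.h>

/* Rough size estimate for the 12800-chunk example: one bit per chunk,
 * plus an assumed 256-byte bitmap superblock in front. */
int main(void)
{
    unsigned long chunks = 12800;
    unsigned long bitmap_bytes = (chunks + 7) / 8;    /* 1600 bytes */
    unsigned long total = bitmap_bytes + 256;         /* with the superblock */

    printf("bitmap %lu bytes, total %lu bytes, fits in a 4096-byte page: %s\n",
           bitmap_bytes, total, total <= 4096 ? "yes" : "no");
    return 0;
}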
Hi,
Yesterday I tried to increase the value of stripe_cache_size to see if I can
get better performance or not. I increased the value from 2048 to something
like 16384. After I did that, the raid5 froze. Any process reading / writing to
it got stuck in D state. I tried to change it back to 2048,
On Sun, 21 Jan 2007, Greg KH wrote:
On Sun, Jan 21, 2007 at 12:29:51PM -0500, Justin Piszcz wrote:
On Sun, 21 Jan 2007, Justin Piszcz wrote:
Good luck,
Jurriaan
--
What does ELF stand for (with respect to Linux)?
ELF is the first rock group that
On Mon, 22 Jan 2007, kyle wrote:
Hi,
Yesterday I tried to increase the value of stripe_cache_size to see if I can
get better performance or not. I increased the value from 2048 to something
like 16384. After I did that, the raid5 froze. Any process reading / writing to
it got stuck in D state.
On 1/21/07, Liang Yang [EMAIL PROTECTED] wrote:
Dan,
Thanks for your reply. Still get two questions left.
Suppose I have a MD-RAID5 array which consists of 8 disks.
1. Do we need to consider the chunk size of the RAID array when we set the
value of stripe_cache_size? For example, if the chunk
On Mon, 22 Jan 2007, kyle wrote:
Hi,
Yesterday I tried to increase the value of stripe_cache_size to see if I can
get better performance or not. I increased the value from 2048 to something
like 16384. After I did that, the raid5 froze. Any process reading / writing to
it got stuck in D state. I
On Mon, 22 Jan 2007, kyle wrote:
On Mon, 22 Jan 2007, kyle wrote:
Hi,
Yesterday I tried to increase the value of stripe_cache_size to see if I can
get better performance or not. I increased the value from 2048 to something
like 16384. After I did that, the raid5 froze.
Justin Piszcz wrote:
Yes, I noticed this bug too, if you change it too many times or change it
at the 'wrong' time, it hangs up when you echo a number >
/proc/stripe_cache_size.
Basically don't run it more than once and don't run it at the 'wrong' time
and it works. Not sure where the bug
On Mon, 22 Jan 2007, Steve Cousins wrote:
Justin Piszcz wrote:
Yes, I noticed this bug too, if you change it too many times or change it at
the 'wrong' time, it hangs up when you echo a number > /proc/stripe_cache_size.
Basically don't run it more than once and don't run it at the
On Mon, 22 Jan 2007, Steve Cousins wrote:
Justin Piszcz wrote:
Yes, I noticed this bug too, if you change it too many times or change it at
the 'wrong' time, it hangs up when you echo a number > /proc/stripe_cache_size.
Basically don't run it more than once and don't run it at the
Justin Piszcz wrote:
Yes, I noticed this bug too, if you change it too many times or change it
at the 'wrong' time, it hangs up when you echo a number >
/proc/stripe_cache_size.
Basically don't run it more than once and don't run it at the 'wrong'
time and it works. Not sure where the bug
Yes, I noticed this bug too, if you change it too many times or change it
at the 'wrong' time, it hangs up when you echo a number >
/proc/stripe_cache_size.
Basically don't run it more than once and don't run it at the 'wrong' time
and it works. Not sure where the bug lies, but yeah I've
Do we need to consider the chunk size when we adjust the value of
stripe_cache_size for the MD-RAID5 array?
Liang
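One way to put numbers on this (an added illustration, assuming the usual raid5 accounting of one page-sized buffer per member device for every cached stripe): stripe_cache_size counts cached stripes, so the chunk size does not enter the memory calculation directly, but jumping from 2048 to 16384 on an 8-disk array does.

#include <stdio.h>

/* Back-of-the-envelope stripe cache memory use, assuming one 4096-byte
 * buffer per member device for every cached stripe. */
int main(void)
{
    unsigned long page = 4096, disks = 8;
    unsigned long sizes[] = { 2048, 16384 };

    for (int i = 0; i < 2; i++)
        printf("stripe_cache_size=%lu -> about %lu MiB\n",
               sizes[i], sizes[i] * disks * page / (1024 * 1024));
    return 0;
}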
- Original Message -
From: Justin Piszcz [EMAIL PROTECTED]
To: kyle [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org; linux-kernel@vger.kernel.org
Sent: Monday, January
On Sun 2007-01-21 14:27:34, Justin Piszcz wrote:
Why does copying 18GB on a 74GB raptor raid1 cause the kernel to invoke
the OOM killer and kill all of my processes?
Doing this on a single disk 2.6.19.2 is OK, no issues. However, this
happens every time!
Anything to try? Any other
On Mon, 22 Jan 2007, Pavel Machek wrote:
On Sun 2007-01-21 14:27:34, Justin Piszcz wrote:
Why does copying 18GB on a 74GB raptor raid1 cause the kernel to invoke
the OOM killer and kill all of my processes?
Doing this on a single disk 2.6.19.2 is OK, no issues. However, this
On Monday January 22, [EMAIL PROTECTED] wrote:
Hi,
Yesterday I tried to increase the value of stripe_cache_size to see if I can
get better performance or not. I increased the value from 2048 to something
like 16384. After I did that, the raid5 froze. Any process reading / writing to
it
On Tuesday January 23, [EMAIL PROTECTED] wrote:
This patch will almost certainly fix the problem, though I would like
to completely understand it first
Of course, that patch didn't compile. The GFP_IO should have been
GFP_NOIO.
As below.
NeilBrown
--
Avoid
On Thursday January 18, [EMAIL PROTECTED] wrote:
Hi all,
I've hit the following bug while unmounting an xfs partition
----------- [cut here ] --------- [please bite here ] ---------
Kernel BUG at drivers/md/md.c:5035
Kernel : stock-kernel 2.6.18.6, x86_64
Setup : xfs on raid5, on 5
On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz [EMAIL PROTECTED]
wrote:
Why does copying 18GB on a 74GB raptor raid1 cause the kernel to invoke
the OOM killer and kill all of my processes?
What's that? Software raid or hardware raid? If the latter, which driver?
Doing this
What's that? Software raid or hardware raid? If the latter, which
driver?
Software RAID (md)
On Mon, 22 Jan 2007, Andrew Morton wrote:
On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz [EMAIL PROTECTED]
wrote:
Why does copying 18GB on a 74GB raptor raid1 cause the kernel to
Justin Piszcz wrote:
My .config is attached, please let me know if any other information is
needed and please CC (lkml) as I am not on the list, thanks!
Running Kernel 2.6.19.2 on an MD RAID5 volume. Copying files over Samba to
the RAID5 running XFS.
Any idea what happened here?
On Monday January 22, [EMAIL PROTECTED] wrote:
Justin Piszcz wrote:
My .config is attached, please let me know if any other information is
needed and please CC (lkml) as I am not on the list, thanks!
Running Kernel 2.6.19.2 on an MD RAID5 volume. Copying files over Samba to
the RAID5
On Monday January 22, [EMAIL PROTECTED] wrote:
If the bitmap size is less than one page, including the super_block and the
bitmap, and the inode's i_blkbits is also small, the read_page call used to
read the sb_page may return an error.
For example, if the device is 12800 chunks, its
On Mon 2007-01-22 13:48:44, Justin Piszcz wrote:
On Mon, 22 Jan 2007, Pavel Machek wrote:
On Sun 2007-01-21 14:27:34, Justin Piszcz wrote:
Why does copying 18GB on a 74GB raptor raid1 cause the kernel to invoke
the OOM killer and kill all of my processes?
Doing this
Following are 4 patches suitable for inclusion in 2.6.20.
Thanks,
NeilBrown
[PATCH 001 of 4] md: Update email address and status for MD in MAINTAINERS.
[PATCH 002 of 4] md: Make 'repair' actually work for raid1.
[PATCH 003 of 4] md: Make sure the events count in an md array never returns to
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./MAINTAINERS | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff .prev/MAINTAINERS ./MAINTAINERS
--- .prev/MAINTAINERS 2007-01-23 11:14:14.0 +1100
+++ ./MAINTAINERS 2007-01-23 11:23:03.0
In most cases we check the size of the bitmap file before
reading data from it. However when reading the superblock,
we always read the first PAGE_SIZE bytes, which might not
always be appropriate. So limit that read to the size of the
file if appropriate.
Also, we get the count of available
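A small userspace sketch of the clamp described above (illustrative only, not the kernel code): when the superblock page is filled from a bitmap file smaller than a page, read only as much as the file actually holds instead of a full PAGE_SIZE.

#include <stdio.h>
#include <string.h>

#define DEMO_PAGE_SIZE 4096UL

/* Fill a page-sized buffer from the bitmap file, but never request more
 * than the file contains, so a sub-page file is not treated as an error. */
static size_t read_sb_page_demo(FILE *f, unsigned long file_size,
                                unsigned char *page)
{
    size_t want = DEMO_PAGE_SIZE;

    if (file_size && file_size < DEMO_PAGE_SIZE)
        want = file_size;               /* clamp to the size of the file */

    memset(page, 0, DEMO_PAGE_SIZE);
    return fread(page, 1, want, f);
}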
When 'repair' finds a block that is different on the various
parts of the mirror, it is meant to write a chosen good version
to the others. However it currently writes out the original data
to each. The memcpy to make all the data the same is missing.
Signed-off-by: Neil Brown [EMAIL
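A minimal userspace sketch of the missing step (illustrative, not the actual raid1 code): once a good copy is chosen, its data has to be copied over the buffers of the other copies before they are written back, otherwise each disk is simply rewritten with the data it already had.

#include <string.h>

#define COPIES     2
#define BLOCK_SIZE 4096

/* Propagate the chosen good copy into the other copies' buffers before
 * those buffers are written back to their disks. */
static void repair_block(unsigned char bufs[COPIES][BLOCK_SIZE], int good)
{
    for (int i = 0; i < COPIES; i++) {
        if (i == good)
            continue;
        memcpy(bufs[i], bufs[good], BLOCK_SIZE);  /* the missing memcpy */
    }
}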
Now that we sometimes step the array events count backwards
(when transitioning dirty-clean where nothing else interesting
has happened - so that we don't need to write to spares all the time),
it is possible for the event count to return to zero, which is
potentially confusing and triggers and
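A one-function sketch of the guard the subject line asks for, under the assumption that the simplest rule is to refuse the backward step whenever it would land on zero (the real md change may arrange this differently):

/* Step the event count back for an uninteresting dirty->clean transition,
 * but never let it return to zero. */
static unsigned long long step_events_back(unsigned long long events)
{
    return events > 1 ? events - 1 : events;
}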
Andrew Morton wrote:
On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz [EMAIL PROTECTED]
wrote:
Why does copying 18GB on a 74GB raptor raid1 cause the kernel to invoke
the OOM killer and kill all of my processes?
What's that? Software raid or hardware raid? If the latter, which
On Tue, 23 Jan 2007 11:37:09 +1100
Donald Douwsma [EMAIL PROTECTED] wrote:
Andrew Morton wrote:
On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz [EMAIL PROTECTED]
wrote:
Why does copying 18GB on a 74GB raptor raid1 cause the kernel to invoke
the OOM killer and kill all of my
I think your patch is not enough to solve the read_page error
completely. I think in bitmap_init_from_disk we also need to check
that 'count' never exceeds the size of the file before calling the
read_page function. What do you think about it?
Thanks for your reply.
2007/1/23, Neil Brown [EMAIL
On 1/22/07, Neil Brown [EMAIL PROTECTED] wrote:
On Monday January 22, [EMAIL PROTECTED] wrote:
Justin Piszcz wrote:
My .config is attached, please let me know if any other information is
needed and please CC (lkml) as I am not on the list, thanks!
Running Kernel 2.6.19.2 on an MD RAID5
From: Dan Williams [EMAIL PROTECTED]
* introduce struct dma_async_tx_descriptor as a common field for all dmaengine
software descriptors
* convert the device_memcpy_* methods into separate prep, set src/dest, and
submit stages
* support capabilities beyond memcpy (xor, memset, xor zero sum,
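To make the prep / set src,dest / submit split concrete, here is a tiny conceptual sketch; the names below are invented for illustration and are not the dmaengine interface added by the patch.

#include <string.h>

/* Invented stand-in for a software descriptor that is built up in stages
 * and does nothing until it is submitted. */
struct demo_descriptor {
    void       *dest;
    const void *src;
    size_t      len;
};

static void demo_prep(struct demo_descriptor *d, size_t len)
{
    memset(d, 0, sizeof(*d));
    d->len = len;                       /* stage 1: describe the operation */
}

static void demo_set_src_dest(struct demo_descriptor *d,
                              const void *src, void *dest)
{
    d->src = src;                       /* stage 2: attach the addresses */
    d->dest = dest;
}

static void demo_submit(struct demo_descriptor *d)
{
    memcpy(d->dest, d->src, d->len);    /* stage 3: run it (synchronously here) */
}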
From: Dan Williams [EMAIL PROTECTED]
Use raid5_run_ops to carry out the memory copies for a raid5 read request.
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---
drivers/md/raid5.c | 40 +++-
1 files changed, 15 insertions(+), 25 deletions(-)
diff --git
From: Dan Williams [EMAIL PROTECTED]
Prepare the raid5 implementation to use async_tx for running stripe
operations:
* biofill (copy data into request buffers to satisfy a read request)
* compute block (generate a missing block in the cache from the other
blocks)
* prexor (subtract existing data
From: Dan Williams [EMAIL PROTECTED]
Each stripe has three flag variables to reflect the state of operations
(pending, ack, and complete).
-pending: set to request servicing in raid5_run_ops
-ack: set to reflect that raid5_run_ops has seen this request
-complete: set when the operation is
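A compact illustration of that three-stage bookkeeping (the names and layout are assumptions for this sketch, not the real raid5 structures): each operation type owns one bit, and the same bit is set in pending, ack, and complete as the request is made, picked up, and finished.

/* One bit per operation type, moved through three stages. */
enum {
    DEMO_OP_BIOFILL = 1 << 0,
    DEMO_OP_COMPUTE = 1 << 1,
    DEMO_OP_CHECK   = 1 << 2,
};

struct demo_stripe_ops {
    unsigned long pending;   /* requested: waiting to be serviced */
    unsigned long ack;       /* seen: the run-ops path has picked it up */
    unsigned long complete;  /* done: the operation has finished */
};

static void demo_request(struct demo_stripe_ops *ops, unsigned long op)
{
    ops->pending |= op;
}

static void demo_service(struct demo_stripe_ops *ops, unsigned long op)
{
    ops->ack |= op;          /* mark as seen before doing the work */
    /* ... the work itself would run here ... */
    ops->complete |= op;     /* and as complete afterwards */
}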
From: Dan Williams [EMAIL PROTECTED]
handle_stripe sets STRIPE_OP_CHECK to request a check operation in
raid5_run_ops. If raid5_run_ops is able to perform the check with a
dma engine the parity will be preserved in memory removing the need to
re-read it from disk, as is necessary in the
From: Dan Williams [EMAIL PROTECTED]
replaced by raid5_run_ops
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---
drivers/md/raid5.c | 124
1 files changed, 0 insertions(+), 124 deletions(-)
diff --git a/drivers/md/raid5.c
From: Dan Williams [EMAIL PROTECTED]
handle_stripe now only updates the state of stripes. All execution of
operations is moved to raid5_run_ops.
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---
drivers/md/raid5.c | 68
1 files changed,
From: Dan Williams [EMAIL PROTECTED]
The parity calculation for an expansion operation is the same as the
calculation performed at the end of a write with the caveat that all blocks
in the stripe are scheduled to be written. An expansion operation is
identified as a stripe with the POSTXOR flag
From: Dan Williams [EMAIL PROTECTED]
async_tx is an api to describe a series of bulk memory
transfers/transforms. When possible these transactions are carried out by
asynchronous dma engines. The api handles inter-transaction dependencies
and hides dma channel management from the client. When
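A rough picture of the dependency handling mentioned above (purely illustrative; these identifiers are not the async_tx functions themselves): a transaction carries a pointer to the one it depends on and is only started once that one has completed.

#include <stdbool.h>

struct demo_tx {
    bool            done;
    struct demo_tx *depends_on;
    void          (*run)(struct demo_tx *tx);
};

/* Issue a transaction only when its dependency, if any, has finished. */
static void demo_issue(struct demo_tx *tx)
{
    if (tx->depends_on && !tx->depends_on->done)
        return;              /* dependency pending: defer to a later issue */
    tx->run(tx);
    tx->done = true;
}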
From: Dan Williams [EMAIL PROTECTED]
handle_stripe sets STRIPE_OP_COMPUTE_BLK to request servicing from
raid5_run_ops. It also sets a flag for the block being computed to let
other parts of handle_stripe submit dependent operations. raid5_run_ops
guarantees that the compute operation completes
From: Dan Williams [EMAIL PROTECTED]
handle_stripe sets STRIPE_OP_PREXOR, STRIPE_OP_BIODRAIN, STRIPE_OP_POSTXOR
to request a write to the stripe cache. raid5_run_ops is triggered to run
and executes the request outside the stripe lock.
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---