If/when you experience the hang again please get a trace of all
processes with:
echo t > /proc/sysrq-trigger
Of particular interest is the mke2fs trace; as well as any md threads.
OK, I've played around a bit. I didn't get those long hangs which I
described in my initial mail, but smaller
Maurice Hilarius wrote:
Hi to all.
I wonder if somebody would care to help me to solve a problem?
I have some servers.
They are running CentOS 5.
This OS has a limitation where the maximum filesystem size is 8TB.
Each server currently has an AMCC/3ware 16-port SATA controller. Total
of 16
Dean S. Messing wrote:
Also (as I asked) what is the downside? From what I have read, random
access reads will take a hit. Is this correct?
Thanks very much for your help!
Dean
Besides bonnie++, you should probably check iozone. It will allow you to test
very specific settings quite
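As a concrete sketch of such an iozone sweep: the loop below only prints the invocations one might run (it does not execute them, since a real run needs iozone installed and a mounted test filesystem). The `-i` (test selection: 0=write, 1=read, 2=random), `-r` (record size), and `-s` (file size) flags are standard iozone options; the path `/mnt/raid/testfile` is an assumption.

```shell
# Print a hypothetical iozone sweep over several record sizes.
# 2g file size should exceed RAM caching effects on most test boxes.
for rs in 4k 64k 1m; do
  echo "iozone -i 0 -i 1 -i 2 -r $rs -s 2g -f /mnt/raid/testfile"
done
```

Running the printed commands on the actual array, and comparing the random-read numbers across record sizes, is one way to answer the "random access reads will take a hit" question empirically.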
Bill Davidsen wrote:
: Dean S. Messing wrote:
snip
: Do you want to tune it to work well now, or work well in the final
: configuration? There is no magic tuning that is best for every use; if
: there were, it would be locked in and you couldn't change it.
I want it to work well in the
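One tuning step that does carry over to the final configuration is aligning the filesystem to the array geometry. As a sketch, assuming a 3-disk RAID 0 with a 256 KiB chunk and a 4 KiB filesystem block size (all three values are assumptions, not taken from the thread): stride is the chunk size in filesystem blocks, and stripe-width is stride times the number of data disks. The `-E stride=` and `stripe-width=` extended options exist in newer e2fsprogs.

```shell
# Assumed geometry: 3-disk RAID 0, 256 KiB chunk, 4 KiB fs blocks.
chunk_kib=256
block_kib=4
data_disks=3
stride=$((chunk_kib / block_kib))       # fs blocks per chunk: 64
stripe_width=$((stride * data_disks))   # fs blocks per full stripe: 192
# Print (not run) the mke2fs invocation these values suggest:
echo "mke2fs -b 4096 -E stride=$stride,stripe-width=$stripe_width /dev/md0"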
Michal Soltys writes:
: Dean S. Messing wrote:
:
: Also (as I asked) what is the downside? From what I have read, random
: access reads will take a hit. Is this correct?
:
: Thanks very much for your help!
:
: Dean
:
:
: Besides bonnie++ you should probably check iozone. It will
Dean S. Messing wrote:
[]
[] That's what
attracted me to RAID 0 --- which seems to have no downside EXCEPT
safety :-).
So I'm not sure I'll ever figure out the right tuning. I'm at the
point of abandoning RAID entirely and just putting the three disks
together as a big LV and being done
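For what the "big LV" alternative would look like, the loop below prints (rather than runs, since these commands modify disks) a minimal LVM sequence joining three disks into one linear volume. The device names `/dev/sd[bcd]` and the `bigvg`/`biglv` names are assumptions; `pvcreate`, `vgcreate`, and `lvcreate -l 100%FREE` are standard LVM2 commands.

```shell
# Print a hypothetical "one big linear LV" setup across three disks.
for cmd in \
  "pvcreate /dev/sdb /dev/sdc /dev/sdd" \
  "vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd" \
  "lvcreate -l 100%FREE -n biglv bigvg"; do
  echo "$cmd"
done
```

A linear LV gives the combined capacity without striping, so no chunk-size tuning, but also none of RAID 0's bandwidth gain, and a single disk failure still loses the volume.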
Michael Tokarev writes:
: Dean S. Messing wrote:
: []
: [] That's what
: attracted me to RAID 0 --- which seems to have no downside EXCEPT
: safety :-).
:
: So I'm not sure I'll ever figure out the right tuning. I'm at the
: point of abandoning RAID entirely and just putting the three
Fix a couple bugs and provide documentation for the async_tx api.
Neil, please 'ack' patch #3.
git://lost.foo-projects.org/~dwillia2/git/iop async-tx-fixes-for-linus
Dan Williams (3):
async_tx: usage documentation and developer notes
async_tx: fix dma_wait_for_async_tx
raid5:
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---
Documentation/crypto/async-tx-api.txt | 217 +
1 files changed, 217 insertions(+), 0 deletions(-)
diff --git a/Documentation/crypto/async-tx-api.txt b/Documentation/crypto/async-tx-api.txt
new file mode 100644
Fix dma_wait_for_async_tx to not loop forever in the case where a
dependency chain is longer than two entries. This condition will not
happen with current in-kernel drivers, but fix it for future drivers.
Found-by: Saeed Bishara [EMAIL PROTECTED]
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---
ops_complete_biofill tried to avoid calling handle_stripe since all the
state necessary to return read completions is available. However the
process of determining whether more read requests are pending requires
locking the stripe (to block add_stripe_bio from updating dev->toread).
On Thu, 20 Sep 2007 18:27:40 -0700 Dan Williams wrote:
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---
Hi Dan,
Looks pretty good and informative. Thanks.
(nits below :)
Documentation/crypto/async-tx-api.txt | 217 +
1 files changed, 217