On Tue, 12 Jun 2012 17:02:21 +0200
Edgar Fuß e...@math.uni-bonn.de wrote:
Any comments on the results?
Really no comments?
Parity re-build:
stripe size:   32      8       128
rebuild time:  6min    ~15min  5min
My questions:
Why does parity re-build take longer with smaller stripes? Is it
really done one stripe at a time?
So a parity rebuild works by reading all the data and the
existing parity, computing the new parity, and then comparing the
existing parity with the new parity. If they match, it's on to the
next stripe. If they differ, the new parity is written out.
Oops.
What's the point of not simply
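A minimal sketch of the verify-then-write rebuild pass described above,
using in-memory stand-ins for the components (hypothetical code, not
RAIDframe's actual implementation):

    #include <stdio.h>
    #include <string.h>

    #define NDATA   4    /* data components (4+1 level 5) */
    #define UNITSZ  16   /* bytes per stripe unit; tiny for the demo */
    #define NSTRIPE 2

    /* in-memory stand-ins for the components; index NDATA is parity */
    static unsigned char disk[NDATA + 1][NSTRIPE * UNITSZ];

    static int
    rebuild_stripe(int stripe)
    {
            unsigned char newp[UNITSZ];
            unsigned char *oldp = &disk[NDATA][stripe * UNITSZ];
            int d, i;

            memset(newp, 0, UNITSZ);
            for (d = 0; d < NDATA; d++)             /* read all data units */
                    for (i = 0; i < UNITSZ; i++)    /* recompute parity */
                            newp[i] ^= disk[d][stripe * UNITSZ + i];
            if (memcmp(oldp, newp, UNITSZ) == 0)    /* compare with existing */
                    return 0;                       /* match: next stripe */
            memcpy(oldp, newp, UNITSZ);             /* mismatch: write it out */
            return 1;
    }

    int
    main(void)
    {
            int s;

            disk[1][3] = 0xff;      /* make stripe 0 inconsistent */
            for (s = 0; s < NSTRIPE; s++)
                    printf("stripe %d: %s\n", s,
                        rebuild_stripe(s) ? "parity rewritten" : "clean");
            return 0;
    }

The point of the comparison is that a clean stripe costs only reads; a
write is issued only where the parity actually disagrees.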
On Tue, 12 Jun 2012 18:34:52 +0200
Edgar Fuß e...@math.uni-bonn.de wrote:
So a parity rebuild works by reading all the data and the
existing parity, computing the new parity, and then comparing the
existing parity with the new parity. If they match, it's on to the
next stripe. If they
On Tue, Jun 12, 2012 at 10:40:47AM -0600, Greg Oster wrote:
On Tue, 12 Jun 2012 18:34:52 +0200
Edgar Fuß e...@math.uni-bonn.de wrote:
So a parity rebuild works by reading all the data and the
existing parity, computing the new parity, and then comparing the
existing parity with the
On Tue, 12 Jun 2012 13:20:27 -0400
Thor Lancelot Simon t...@panix.com wrote:
On Tue, Jun 12, 2012 at 10:40:47AM -0600, Greg Oster wrote:
On Tue, 12 Jun 2012 18:34:52 +0200
Edgar Fuß e...@math.uni-bonn.de wrote:
So a parity rebuild works by reading all the data and the
existing
t...@panix.com (Thor Lancelot Simon) writes:
Are writes to the underlying disk really typically slower?
The drive's read-ahead mechanism is usually better than our
write buffering. Even when the drive has its write-cache
enabled, there is a difference.
Also, with just writing you will never know
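One rough way to measure the read side of such claims (a sketch:
/dev/rraid0d, the 64k transfer size, and the 1 GiB total are all
assumptions; a matching write test against a raw device would be
destructive, so it is omitted here):

    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int
    main(void)
    {
            static char buf[65536];         /* 64k per transfer */
            struct timespec t0, t1;
            double secs;
            long total = 0;
            int i, fd;

            fd = open("/dev/rraid0d", O_RDONLY);    /* assumed device name */
            if (fd == -1) {
                    perror("open");
                    return 1;
            }
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (i = 0; i < 16384; i++) {           /* 1 GiB sequential read */
                    ssize_t r = read(fd, buf, sizeof buf);
                    if (r <= 0)
                            break;
                    total += r;
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%ld bytes in %.2f s (%.1f MB/s)\n",
                total, secs, total / secs / 1e6);
            close(fd);
            return 0;
    }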
In general it won't access just one filesystem block.
It will try to readahead 64KB
Oh, so this declustering seems to make matters even more
complicated^Winteresting.
Staying with my example of a 16K fsbsize FFS on a 4+1 disc Level 5
RAIDframe with a stripe size of 4*16k=64k:
Suppose a process
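For reference, the layout arithmetic for that example, assuming 16k
stripe units on 4 data components plus one parity component
(illustrative only; RAIDframe's real level 5 mapping also rotates the
parity unit across components):

    #include <stdio.h>

    #define FSBSIZE 16384L          /* file system block size */
    #define UNITSZ  16384L          /* stripe unit size per component */
    #define NDATA   4L              /* data components in a 4+1 set */

    int
    main(void)
    {
            long fsblock, byte, unit;

            for (fsblock = 0; fsblock < 8; fsblock++) {
                    byte = fsblock * FSBSIZE;
                    unit = byte / UNITSZ;           /* stripe unit index */
                    printf("fs block %ld -> stripe %ld, data disk %ld\n",
                        fsblock, unit / NDATA, unit % NDATA);
            }
            return 0;
    }

On this layout, a 64KB readahead starting at fs block 0 covers stripe
units 0 through 3, i.e. one unit on each of the four data components.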
In practice, this is why I often layer a ccd with a huge (and prime)
stripe size over RAIDframe.
Sorry, I don't get how this would improve matters.
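One guess at the intuition, offered as an assumption rather than an
explanation given in this thread: an interleave that shares no common
factor with the typical access stride keeps block boundaries from
always landing at the same offset within a stripe. A toy illustration:

    #include <stdio.h>

    int
    main(void)
    {
            const long stride = 16384;      /* a 16k fs block access stride */
            const long ileave[2] = { 65536, 65537 };  /* power of two vs. prime */
            long i;
            int k;

            for (k = 0; k < 2; k++) {
                    printf("interleave %ld, offsets within it:", ileave[k]);
                    for (i = 0; i < 8; i++)
                            printf(" %ld", (i * stride) % ileave[k]);
                    printf("\n");
            }
            return 0;
    }

With 65536 the offsets repeat the same four alignments forever; with
the prime 65537 they drift, so successive accesses do not keep hitting
the same boundary pattern.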
On Fri, 11 May 2012 12:48:08 +0200
Edgar Fuß e...@math.uni-bonn.de wrote:
Edgar is describing the desideratum for a minimum-latency
application.
Yes, I'm looking for minimum latency.
I've logged the current file server's disc busyness and the only
time they really are busy is during the
Thanks a lot for your detailed answers.
Yes. Absolutely.
Fine.
As you can see, all of those span all 4 discs.
Yes, that was perfectly clear to me. What I wasn't sure of was that the
whole stack of subsystems involved would really be able to make use of that.
Thanks for confirming it actually
On Fri, 11 May 2012, Edgar Fuß wrote:
EF I have one process doing something largely resulting in meta-data
EF reads (i.e. traversing a very large directory tree). Will the kernel
EF only issue sequential reads or will it be able to parallelise, e.g.
EF reading indirect blocks?
GO I don't
On Fri, 11 May 2012 17:05:24 +0200
Edgar Fuß e...@math.uni-bonn.de wrote:
Thanks a lot for your detailed answers.
Yes. Absolutely.
Fine.
As you can see, all of those span all 4 discs.
Yes, that was perfectly clear to me. What I wasn't sure of was that
the whole stack of subsystems
Does that help?
Yes, thanks!
Yet another question: Suppose I have 4k fsbsize and a stripe size such that
16k go to one disc (i.e. 64k stripes with my 4+1 RAID 5 example).
Will RAIDframe ever deal with less than 16k? I.e.:
A. I read one 4k fs block. Will RAIDframe read 4k or 16k from the disc?
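The arithmetic behind question A, for concreteness (illustrative only;
whether RAIDframe then issues the disc I/O at 4k or at the full 16k
stripe unit is exactly what is being asked):

    #include <stdio.h>

    int
    main(void)
    {
            const long fsb = 4096, unit = 16384, ndata = 4;
            const long blkno = 13;          /* an arbitrary 4k fs block */
            long byte = blkno * fsb;
            long u = byte / unit;           /* stripe unit index */

            printf("4k block %ld: stripe %ld, data disk %ld, "
                "offset %ld within the 16k unit\n",
                blkno, u / ndata, u % ndata, byte % unit);
            return 0;
    }

Four consecutive 4k blocks fall inside one 16k stripe unit, so they
all map to the same component.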
Does anyone have some real-world experience with RAIDframe (Level 5)
performance vs. stripe size?
My impression would be that, with a not too large number of components (4+1,
in my case), chances are rather low to spread simultaneous accesses to
different physical discs, so the best choice seems
On Thu, 10 May 2012 17:46:38 +0200
Edgar Fuß e...@math.uni-bonn.de wrote:
Does anyone have some real-world experience with RAIDframe (Level 5)
performance vs. stripe size?
My impression would be that, with a not too large number of components
(4+1, in my case), chances are rather low to spread
I don't know whether I'm getting this right.
In my understanding, the benefit of a large stripe size lies in
parallelisation: Suppose the stripe size is such that a file system block
fits on a single disc, i.e. stripe size = (file system block size)*(number
of effective discs). Then, if one
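With the numbers used elsewhere in this thread (16k fsbsize on a 4+1
level 5 set, so 4 effective discs), that works out to:
stripe size = 16k * 4 = 64k, i.e. one 16k stripe unit per data component.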
On Thu, 10 May 2012 18:59:42 +0200
Edgar Fuß e...@math.uni-bonn.de wrote:
I don't know whether I'm getting this right.
In my understanding, the benefit of a large stripe size lies in
parallelisation:
Correct.
Suppose the stripe size is such that a file system
block fits on a single disc,
On Thu, 10 May 2012 13:23:24 -0400
Thor Lancelot Simon t...@panix.com wrote:
On Thu, May 10, 2012 at 11:15:09AM -0600, Greg Oster wrote:
What you're typically looking for in the parallelization is that a
given IO will span all of the components. In that way, if you have
n
That's not
On Thu, May 10, 2012 at 11:47:36AM -0600, Greg Oster wrote:
On Thu, 10 May 2012 13:23:24 -0400
Thor Lancelot Simon t...@panix.com wrote:
On Thu, May 10, 2012 at 11:15:09AM -0600, Greg Oster wrote:
What you're typically looking for in the parallelization is that a
given IO will span
On Thu, 10 May 2012 14:06:11 -0400
Thor Lancelot Simon t...@panix.com wrote:
On Thu, May 10, 2012 at 11:47:36AM -0600, Greg Oster wrote:
On Thu, 10 May 2012 13:23:24 -0400
Thor Lancelot Simon t...@panix.com wrote:
On Thu, May 10, 2012 at 11:15:09AM -0600, Greg Oster wrote: