A bottleneck in gstripe?

2007-02-17 Thread Kirk Strauser
I built a gstripe volume with 4 drives and a 128 KB stripe size.  When
running one particular application, gstat reports that stripe/stripe1 is
99% busy, although its four member drives are each running at less than
30%.  Am I misinterpreting the numbers (is the total perhaps the sum of
the drives?), or is there some giant overhead that I'm missing?
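
For reference, a gstat invocation along these lines shows the stripe
device and its member disks side by side (the da0-da3 names below are
assumptions; substitute the actual providers):

    # Refresh once per second; filter to the stripe and its members.
    # da0-da3 are placeholders for the real disks.
    gstat -I 1s -f 'stripe/stripe1|da[0-3]'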

I've set kern.geom.stripe.fast=1, and kern.geom.stripe.fast_failed stays
at 0.  I don't have enough experience with geom_stripe to even know where
to go from here.  Is the stripe size likely to make much of a difference
when the heaviest load comes from PostgreSQL receiving massive imports?
This is a production system and I don't get to experiment with it as much
as I'd like, so any pointers to experiments likely to make a difference
would be most welcome.
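
For anyone reproducing this, the setup described above would look roughly
like the following sketch (disk names are assumptions):

    # Create a 4-drive stripe with a 128 KB stripe size (131072 bytes).
    # da0-da3 are placeholders for the real disks.
    gstripe label -s 131072 stripe1 /dev/da0 /dev/da1 /dev/da2 /dev/da3

    # Enable "fast" mode, then confirm it isn't falling back to the
    # slow path by watching the failure counter.
    sysctl kern.geom.stripe.fast=1
    sysctl kern.geom.stripe.fast_failed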
-- 
Kirk Strauser


Re: A bottleneck in gstripe?

2007-02-17 Thread Ivan Voras
Kirk Strauser wrote:
> I built a gstripe volume with 4 drives and a 128 KB stripe size.  When
> running one particular application, gstat reports that stripe/stripe1 is
> 99% busy, although its four member drives are each running at less than
> 30%.  Am I misinterpreting the numbers (is the total perhaps the sum of
> the drives?), or is there some giant overhead that I'm missing?

Never mind the %busy number; it's an approximation of an approximation.
How is your real-world performance?  For example, measure it with
ports/benchmarks/bonnie++.
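
As a sketch, a run along these lines measures throughput on the striped
file system (the mount point is an assumption; size the test file at
roughly twice RAM so caching doesn't skew the results):

    # /mnt/stripe1 is a placeholder for the stripe's mount point.
    # -s is the test file size in MB; -u drops privileges when run as root.
    bonnie++ -d /mnt/stripe1 -s 8192 -u nobody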

> I've set kern.geom.stripe.fast=1, and kern.geom.stripe.fast_failed stays
> at 0.  I don't have enough experience with geom_stripe to even know where
> to go from here.  Is the stripe size likely to make much of a difference
> when the heaviest load comes from PostgreSQL receiving massive imports?

Not much, as the data is first written to the WAL, which is appended
sequentially and goes at full file system speed (no fsyncs).
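
If the import path itself needs tuning, the usual knobs in a PostgreSQL
8.x-era postgresql.conf are the checkpoint and WAL buffer settings; the
values below are illustrative assumptions, not recommendations:

    # postgresql.conf (8.x era) - illustrative values only
    checkpoint_segments = 32  # fewer checkpoint pauses during bulk loads (default 3)
    wal_buffers = 64          # 64 x 8 kB WAL buffers (default 8)
    fsync = on                # keep enabled on a production system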



