Tom Lane <[EMAIL PROTECTED]> writes:

> I think this is a pipe dream.  Variation in where the data gets laid
> down on your disk drive would alone create more than that kind of delta.
> I'm frankly amazed you could get repeatability within 2-3%.

I think the reason he gets good repeatability is that he's looking at the
aggregate results for a whole test run, not individual queries. In principle
you could just run the whole test multiple times; the more runs you average
over, the lower the variation in the measured total run time.
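
(The standard error of the mean falls off as sigma/sqrt(n), so averaging n
complete runs tightens the estimate by a factor of sqrt(n). A rough Python
sketch, with made-up run times purely for illustration:

    import math
    import statistics

    # Hypothetical total run times in seconds, one per complete test run.
    total_times = [412.3, 405.1, 418.9, 409.7, 411.0]

    mean = statistics.mean(total_times)
    stderr = statistics.stdev(total_times) / math.sqrt(len(total_times))
    print(f"mean total: {mean:.1f}s +/- {stderr:.1f}s over {len(total_times)} runs")

The more runs you feed it, the smaller the reported standard error gets.)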

Actually, the variation in run time is itself a useful statistic, both for
postgres and for the kernel. It might be useful to do multiple complete runs
and track the average and standard deviation of the time required for each
step.
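
Something along these lines would do it (a sketch with invented numbers;
runs[i][j] here is just a hypothetical time for step j during complete run i):

    import statistics

    # runs[i][j] = time in seconds for step j during complete run i
    # (hypothetical numbers, purely for illustration)
    runs = [
        [1.2, 30.5, 4.4],
        [1.3, 29.8, 4.9],
        [1.1, 31.2, 4.2],
    ]

    for step, samples in enumerate(zip(*runs)):
        print(f"step {step}: mean {statistics.mean(samples):.2f}s, "
              f"stdev {statistics.stdev(samples):.2f}s")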

A higher standard deviation means individual queries can't be relied on to
finish in a predictable time, which is a problem for some workloads. For the
kernel it could point to latency issues, or to an overly aggressive swapper or
buffer cache.

-- 
greg

