Tom,
Honestly, you've got me. It was either a comment from Tom Lane or Josh
that the OS is caching the results (I may not be using the right terms
here), so I thought that if the database was dropped and recreated, I
would see less of a skew (or variation) in the results. Anyone care to comment?
There was some interesting oscillation behavior in both versions of
postgres that occurred with 25 clients and 1000 transactions at a
scaling factor of 100. This was repeatable with the distribution
version of pgbench run iteratively from the command line. I'm not sure
how to explain this.
Interesting. I should've included standard deviation in my pgbench
iteration patch. Maybe I'll go back and do that.
I was seeing oscillation across the majority of iterations in the 25
clients/1000 transaction runs on both database versions.
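A standard deviation across iterations, as suggested above, can be computed with a one-liner once the per-run tps figures are collected. A minimal sketch (the three tps values below are made up for illustration; population standard deviation is used):

```shell
# Compute mean and population standard deviation of per-iteration tps figures.
# The values 400 500 600 are hypothetical placeholders for real pgbench output.
out=$(printf '%s\n' 400 500 600 |
awk '{ s += $1; ss += $1 * $1; n++ }
     END { m = s / n; printf "mean=%.1f sd=%.1f", m, sqrt(ss / n - m * m) }')
echo "$out"
```

A large standard deviation relative to the mean is exactly the kind of skew being discussed in this thread.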
I've got my box specs and configuration files
Tom,
Just a quick thought: after each run/sample of pgbench, I drop the
database and recreate it. When I don't, my results become more skewed.
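The drop-and-recreate workflow between samples can be sketched as a small loop. This is a hedged sketch: the database name "bench" is hypothetical, `RUN=echo` makes it a dry run that only prints the commands (set `RUN=` to actually execute, which requires a running PostgreSQL server), and the scaling factor and client/transaction counts match the runs discussed in this thread:

```shell
# Dry-run sketch of one benchmarking scenario: drop/recreate the database
# between pgbench samples. "bench" is a hypothetical database name.
RUN=echo
for i in 1 2 3; do                   # use more iterations for real runs
  $RUN dropdb bench
  $RUN createdb bench
  $RUN pgbench -i -s 100 bench       # reinitialize at scaling factor 100
  $RUN pgbench -c 25 -t 1000 bench   # 25 clients, 1000 transactions each
done
```

Reinitializing with `pgbench -i` each time keeps every sample starting from the same on-disk state, which is one way to reduce the caching-related variation mentioned above.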
Steve Poe
Thomas F. O'Connell wrote:
Interesting. I should've included standard deviation in my pgbench
iteration patch. Maybe I'll go back and do that.
Steve,
Per your and Tom's recommendations, I significantly increased the
number of transactions used for testing. See my last post.
The database will have pretty heavy mixed use, i.e., both reads and
writes.
I performed 32 iterations per scenario this go-round.
I'll look into OSDB for further
I'm in the fortunate position of having a newly built database server
that's pre-production. I'm about to run it through the wringer with some
simulations of business data and logic, but I wanted to post the
results of some preliminary pgbench benchmarking.
http://www.sitening.com/pgbench.html
Tom,
People's opinions on pgbench may vary, so take what I say with a grain
of salt. Here are my thoughts:
1) Test with no less than 200 transactions per client. I've heard that
with fewer than this, your results will vary too much with the
direction the wind is blowing. A high enough value will