Tom,
Honestly, you've got me. It was either a comment from Tom Lane or Josh
that the OS is caching the results (I may not be using the right terms
here), so I thought if the database is dropped and recreated, I would
see less of a skew (or variation) in the results. Does someone wish to comment?
Steve
Considering the default vacuuming behavior, why would this be?
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Apr 25, 2005, at 12:18 PM, Steve Poe wrote:
Tom,
Just a quick thought: after each run/sample of pgbench, I drop the
database and recreate it. When I don't my results become more skewed.
Steve Poe
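Steve's reset-between-runs procedure could be sketched roughly as below. The database name `bench` and the exact flag values are assumptions for illustration, not taken from the thread; adjust them for your setup.

```shell
# Sketch: fully reset the pgbench database between samples, so state and
# caching left over from the previous run does not skew the next one.
# Requires a running PostgreSQL server you can create databases on.
dropdb bench                 # discard all state from the previous sample
createdb bench
pgbench -i -s 100 bench      # -i rebuilds the pgbench tables at scaling factor 100
pgbench -c 25 -t 1000 bench  # 25 clients, 1000 transactions per client
```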
Thomas F. O'Connell wrote:
Interesting. I should've included standard deviation in my pgbench
iteration patch. Maybe I'll go back and do that.
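A standard deviation across iterations can also be computed outside pgbench. This is a minimal sketch; the five tps figures are made-up placeholders, standing in for the tps column collected from repeated runs.

```shell
# Mean and population standard deviation of one tps value per iteration.
# The numbers below are placeholders for illustration; in practice, pipe
# in the tps values gathered from repeated pgbench runs.
printf '%s\n' 512.3 498.7 505.1 520.9 491.4 |
awk '{ s += $1; ss += $1 * $1; n++ }
     END { m = s / n; printf "mean=%.1f stddev=%.1f\n", m, sqrt(ss / n - m * m) }'
```

With the placeholder values above, this prints `mean=505.7 stddev=10.3`.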
I was seeing oscillation across the majority of iterations in the 25
clients/1000 transaction runs on both database versions.
I've got my box specs and configuration files posted.
> There was some interesting oscillation behavior in both versions of
> postgres that occurred with 25 clients and 1000 transactions at a
> scaling factor of 100. This was repeatable with the distribution
> version of pgbench run iteratively from the command line. I'm not sure
> how to explain this.
Steve,
Per your and Tom's recommendations, I significantly increased the
number of transactions used for testing. See my last post.
The database will have pretty heavy mixed use, i.e., both reads and
writes.
I performed 32 iterations per scenario this go-round.
I'll look into OSDB for further benchmarking.
Okay. I updated my benchmark page with new numbers, which are the
result of extensive pgbench usage over this past week. In fact, I
modified pgbench (for both of the latest versions of postgres) to be
able to accept multiple iterations as an argument and report the
results of each iteration as well.
Tom,
People's opinions on pgbench may vary, so take what I say with a grain
of salt. Here are my thoughts:
1) Test with no less than 200 transactions per client. I've heard that
with less than this, your results will vary too much with the direction
of the wind blowing. A high enough value will help
"Thomas F.O'Connell" <[EMAIL PROTECTED]> writes:
> http://www.sitening.com/pgbench.html
You need to run *many* more transactions than that to get pgbench
numbers that aren't mostly noise. In my experience 1000 transactions
per client is a rock-bottom minimum to get repeatable numbers; 1 per
i
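Run iteratively from the command line, a sample at Tom's suggested minimum might look like the sketch below. The 32-iteration count and the database name `bench` are assumptions carried over from the discussion, not prescribed values.

```shell
# Sketch: repeat a 25-client run at 1000 transactions per client and keep
# only the tps summary line from each iteration. Assumes an initialized
# pgbench database named "bench" on a running PostgreSQL server.
for i in $(seq 1 32); do
    pgbench -c 25 -t 1000 bench | grep 'tps ='
done
```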
I'm in the fortunate position of having a newly built database server
that's pre-production. I'm about to run it through the wringer with some
simulations of business data and logic, but I wanted to post the
results of some preliminary pgbench marking.
http://www.sitening.com/pgbench.html
To me,