On 09/05/2014 06:38 PM, Jan Wieck wrote:
On 09/05/2014 10:12 AM, Fabien COELHO wrote:
Note that despite pg's appalling latency performance, it may stay well over
the 90% limit, or even at 100%: when things are going well, a lot of
transactions complete in about a millisecond, while when things are going
badly transactions take a long time (although possibly still under or around
1s), *but* very few transactions get through, so the throughput is very
small. The fact that during 15 seconds only 30 transactions are processed is
a detail that does not show up in the metric.
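
To make the weighting problem concrete, a rough back-of-the-envelope sketch
(the numbers below are made up for illustration, not taken from an actual run):

    # Hypothetical numbers, purely to illustrate the weighting problem:
    # 45 s of "good" time at ~500 tps with ~2 ms latency, followed by a
    # 15 s stall in which only 30 transactions complete, each taking ~1 s.
    good_txns = 45 * 500        # 22500 fast transactions, all under the limit
    stalled_txns = 30           # slow transactions during the 15 s stall

    under_limit = good_txns                 # every fast transaction qualifies
    total = good_txns + stalled_txns

    print("fraction under limit: %.1f%%" % (100.0 * under_limit / total))
    # -> 99.9%, even though the system was effectively unusable for 15 s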

Yeah, it makes much more sense to measure the latency from the "scheduled" start time than from the actual start time.
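
The difference, roughly, is something like the following (a simplified sketch
of the idea, not the actual pgbench code; run_throttled_transaction and
run_transaction are made-up names):

    import time

    def run_throttled_transaction(scheduled_time, run_transaction):
        """Run one rate-limited transaction and return both latency flavors."""
        now = time.time()
        if now < scheduled_time:
            # Ahead of schedule: sleep until the scheduled start.
            time.sleep(scheduled_time - now)

        start = time.time()
        run_transaction()                     # the actual SQL work
        end = time.time()

        txn_latency = end - start             # time spent in the transaction itself
        sched_latency = end - scheduled_time  # also counts time spent lagging
                                              # behind the schedule
        return txn_latency, sched_latency

Under --rate, one slow patch pushes every later transaction behind its
schedule, so the schedule-based number keeps growing while the per-transaction
number can still look perfectly fine.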

I haven't used the real pgbench for a long time. I will have to look at
your patch and see what the current version actually does and doesn't do.

What I have been using is a Python version of pgbench that I wrote for
myself when I started learning that language. That one records both
values: the DB transaction latency and the client response time (the time
from the request being entered into the queue until transaction commit).
Looking at those results, it is possible to have an utterly failing
run, with <60% of client response times within 2 seconds, while all
the DB transactions are still in milliseconds.
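
That kind of split is easy to sketch (a minimal illustration of the idea, not
Jan's actual tool; execute_transaction() below is just a placeholder for the
real BEGIN ... COMMIT):

    import queue
    import threading
    import time

    request_queue = queue.Queue()

    def execute_transaction():
        # Placeholder for the real database transaction.
        time.sleep(0.002)

    def worker():
        while True:
            enqueued_at = request_queue.get()
            if enqueued_at is None:                     # shutdown marker
                break
            start = time.time()
            execute_transaction()
            committed_at = time.time()

            db_latency = committed_at - start           # stays in milliseconds
            response_time = committed_at - enqueued_at  # grows as the queue backs up
            print("db %.1f ms, client %.1f ms"
                  % (db_latency * 1e3, response_time * 1e3))

    # Feed requests faster than the single worker can drain them, so the
    # client response time keeps climbing while the DB latency does not.
    t = threading.Thread(target=worker)
    t.start()
    for _ in range(100):
        request_queue.put(time.time())
        time.sleep(0.001)
    request_queue.put(None)
    t.join()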

I think we have to reconsider what we're reporting in 9.4, when --rate is enabled, even though it's already very late in the release cycle. It's a bad idea to change the definition of latency between 9.4 and 9.5, so let's get it right in 9.4.

As said, I'll have to take a look at it. Since I am on vacation next
week, getting ready for my first day at EnterpriseDB, this may actually
happen.

Oh, congrats! :-)

- Heikki


