For multi-user tests I would not worry as much about batching the
updates per commit. It would be better if we can demonstrate that
throughput increases as more users are added. If we are using 6% of
the cpu, I could see it taking more than 17 threads running
unblocked to drive the group commit log fast enough to use up all
the cpu.
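
As a rough illustration of that kind of scaling run (this is not the
actual test client; the connection URL, table and column names below
are invented), a minimal multi-threaded JDBC driver might look like:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Random;
    import java.util.concurrent.atomic.AtomicLong;

    public class MultiUserLoadSketch {
        public static void main(String[] args) throws Exception {
            final int threads = Integer.parseInt(args[0]);  // e.g. 1, 2, 4, 8, 17, ...
            final AtomicLong commits = new AtomicLong();

            for (int t = 0; t < threads; t++) {
                new Thread(() -> {
                    try (Connection conn =
                             DriverManager.getConnection("jdbc:derby:testdb")) {
                        conn.setAutoCommit(false);
                        PreparedStatement ps = conn.prepareStatement(
                                "UPDATE TESTTAB SET PAYLOAD = ? WHERE ID = ?");
                        Random rnd = new Random();
                        while (true) {
                            ps.setString(1, "x");
                            ps.setInt(2, rnd.nextInt(100000));
                            ps.executeUpdate();
                            // Each commit forces a log sync; with group commit,
                            // concurrent threads can share a single sync.
                            conn.commit();
                            commits.incrementAndGet();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();
            }

            // Throughput should rise as threads are added, until either the
            // log device or the CPUs saturate.
            long last = 0;
            while (true) {
                Thread.sleep(1000);
                long now = commits.get();
                System.out.println("commits/sec: " + (now - last));
                last = now;
            }
        }
    }

Plotting commits/sec against the thread count would show whether group
commit keeps throughput climbing before the cpu is used up.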

Olav Sandstaa wrote:
Mike Matrigali <[EMAIL PROTECTED]> wrote:

Ok, I was hoping that for single-user testing it wouldn't
be a big change. The single-user commit-per-update case is
a problem when comparing Derby to other databases which don't
do real transaction guarantees. It would be great if
someone reading the Derby web site would pick the
1000-rows-per-commit single-user case to look at first.


I agree that for single-user testing it should not be a big change. I
might give it a try and see what results I get.
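
For reference, a minimal sketch of what committing every 1000 rows
could look like in a single-user JDBC client (the connection URL,
table and column names are invented, not the ones used by the test):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedCommitSketch {
        public static void main(String[] args) throws Exception {
            // Embedded Derby connection; the database name is a placeholder.
            Connection conn =
                    DriverManager.getConnection("jdbc:derby:testdb;create=true");
            conn.setAutoCommit(false);  // commit explicitly instead of per statement

            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO TESTTAB (ID, PAYLOAD) VALUES (?, ?)");

            final int rowsPerCommit = 1000;
            for (int i = 0; i < 100000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "some payload");
                ps.executeUpdate();
                // One commit (one log sync) per 1000 rows instead of per row,
                // so the run measures the insert work rather than commit overhead.
                if ((i + 1) % rowsPerCommit == 0) {
                    conn.commit();
                }
            }
            conn.commit();
            ps.close();
            conn.close();
        }
    }
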
I just looked at the insert case, and on the following
page it looks to me like the single-user case is taking
about 6% user time and 2% system time. Am I reading
the %cpu graph correctly? From the description
I think this is a 2-processor machine. With 2 processors,
will it be possible to register 200% of cpu or just 100%
of cpu? (I have seen both possibilities on multiprocessor
machines, depending on the tool.)


You are right about the interpretation of the CPU graphs: the second CPU graph
shows the amount of CPU that the java process (client code and Derby embedded)
uses in user and system time. It is a 2-CPU machine, and the CPU scale goes to
100%.

What surprises me a bit when looking at the CPU graph is that in the single-user
case, even with the write cache on the disks enabled, we are only able to
utilize about 6-7 percent of the CPU. I will look into it, but I guess it is
the disk where the log is written that limits the throughput.
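
As a side note, on a recent JVM the per-process figure can also be
cross-checked from inside the process itself; a small sketch (the
one-second sampling interval is arbitrary):

    import java.lang.management.ManagementFactory;

    public class ProcessCpuSampler {
        public static void main(String[] args) throws InterruptedException {
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();
            while (true) {
                // getProcessCpuLoad() returns a value in [0.0, 1.0] across all
                // CPUs, so 0.06-0.07 matches the 6-7% seen for the java process.
                double load = os.getProcessCpuLoad();
                System.out.printf("java process cpu: %.1f%%%n", load * 100);
                Thread.sleep(1000);
            }
        }
    }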

..olav



Olav Sandstaa wrote:

Mike Matrigali <[EMAIL PROTECTED]> wrote:


Thanks for the info, anything is better than nothing.
Any chance to measure something like 1000 records per commit?
With one record per commit for the update operations you are
not really measuring the work to do the operation, just the
overhead of commit -- at least for the single-user case --
assuming your machine is set up to let Derby do real disk
syncs (no write cache enabled).


The write cache on the disks is enabled in order to make this test CPU
bound rather than disk bound, also for insert, update and delete load. I
agree that by having only one insert/update/delete operation per
transaction/commit we include a lot of overhead for the commit. The
intention is not to measure throughput, but to identify regressions,
and even if the commit takes 50 percent (just guessing) of the CPU
cost of doing an update transaction, it should still be possible to
identify whether there are changes in the update operation itself that
influence the CPU usage/throughput.

Unfortunately I will have to make major changes to the test client if
it is to do 1000 updates per commit. All clients work on the same
table and perform their operation on a random record. With multiple
updates per transaction this would lead to a lot of deadlocks. I think
it would be better to write a new load client than to try to tweak the
one I run right now.
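
If a new multi-row load client is written, one common way to avoid
most of those deadlocks would be to sort the randomly chosen keys and
update the rows in ascending key order, so that all transactions
acquire their row locks in the same order. A minimal sketch (the
table, column and key range are invented):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.Random;
    import java.util.TreeSet;

    public class OrderedMultiRowUpdateSketch {
        // Updates 'rowsPerTx' random rows in one transaction. The connection
        // is expected to have auto-commit disabled. Because the keys are
        // visited in ascending order, concurrent transactions using the same
        // pattern may block on each other but should not form a deadlock
        // cycle on the updated rows' locks.
        static void runOneTransaction(Connection conn, int rowsPerTx, int keyRange)
                throws Exception {
            Random rnd = new Random();
            TreeSet<Integer> keys = new TreeSet<>();
            while (keys.size() < rowsPerTx) {
                keys.add(rnd.nextInt(keyRange));
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE TESTTAB SET PAYLOAD = ? WHERE ID = ?")) {
                for (int key : keys) {  // TreeSet iterates in ascending key order
                    ps.setString(1, "x");
                    ps.setInt(2, key);
                    ps.executeUpdate();
                }
            }
            conn.commit();
        }
    }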

I am also running some tests where the write cache on the disks is
disabled (as it should be), but I have not included the results on
the web page yet (mostly due to the much higher variation in the test
results).

..olav






