vodarus vodarus wrote:
[ snip ]
I could also get down to these times by using a HashMap to store the
intermediate totals in 'calculateTotalCommon'. This does of course
use more memory and might cause trouble if you don't know the number
of clients in your table (i.e. whether you need 25 thousand or 100
million entries in the map).
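For reference, the accumulation could look roughly like this. This is a sketch only; 'calculateTotalCommon' itself is not shown in this thread, so the class name, method name, and the shape of the input (parallel client-id and amount arrays standing in for a result set) are all made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class TotalsSketch {
    // Hypothetical stand-in for the accumulation step inside
    // calculateTotalCommon: clientIds/amounts represent whatever
    // the real result set would return.
    static Map<Integer, Long> accumulate(int[] clientIds, long[] amounts) {
        Map<Integer, Long> totals = new HashMap<Integer, Long>();
        for (int i = 0; i < clientIds.length; i++) {
            // add this row's amount to the running total for the client
            Long prev = totals.get(clientIds[i]);
            totals.put(clientIds[i],
                    prev == null ? amounts[i] : prev + amounts[i]);
        }
        return totals;
    }

    public static void main(String[] args) {
        Map<Integer, Long> t =
                accumulate(new int[] {1, 2, 1}, new long[] {10, 20, 5});
        System.out.println(t.get(1) + " " + t.get(2)); // prints "15 20"
    }
}
```

The map needs one entry per distinct client, which is where the 25 thousand vs. 100 million concern comes from.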
That is also not applicable, because the amount of data is 10-100 times
larger than RAM.
Note that you only need to keep the intermediate results in memory, and
you don't have to calculate everything before you write data back into
the database.
In this case, it is the number of unique clients that matters.
I agree it is still a non-optimal solution though.
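One way to keep the map bounded, along the lines suggested above, is to flush it whenever it grows past a threshold instead of holding all clients at once. Below is a sketch where a second in-memory map stands in for the database table; in a real implementation the flush would have to be an additive UPDATE (or INSERT with later aggregation), since a client's total may arrive in several pieces. The threshold and all names are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class ChunkedTotals {
    static final int FLUSH_THRESHOLD = 2; // tiny, for illustration only

    // Stand-in for the real write-back (e.g. a batched additive UPDATE).
    static void flush(Map<Integer, Long> totals, Map<Integer, Long> db) {
        for (Map.Entry<Integer, Long> e : totals.entrySet()) {
            Long prev = db.get(e.getKey());
            db.put(e.getKey(),
                    prev == null ? e.getValue() : prev + e.getValue());
        }
        totals.clear(); // memory is bounded by FLUSH_THRESHOLD entries
    }

    static Map<Integer, Long> process(int[] clients, long[] amounts) {
        Map<Integer, Long> totals = new HashMap<Integer, Long>();
        Map<Integer, Long> db = new HashMap<Integer, Long>();
        for (int i = 0; i < clients.length; i++) {
            Long prev = totals.get(clients[i]);
            totals.put(clients[i],
                    prev == null ? amounts[i] : prev + amounts[i]);
            if (totals.size() >= FLUSH_THRESHOLD) {
                flush(totals, db);
            }
        }
        flush(totals, db); // write out whatever remains
        return db;
    }
}
```

This trades extra write traffic for a fixed memory ceiling, so it is indeed still non-optimal when the same client keeps reappearing.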
It seems what happens is that the log buffer fills up. By
increasing the log buffer, I was able to get a little better
performance. As always with tuning, it's about balance and
tradeoffs. If your IO system is really good, maybe you can run with
a big log buffer and get better performance. However, the effect you
see from this also depends on how often you have commits (then the
complete buffer is flushed anyway, at least in Derby).
So, in short, experiment with the following, using either the
"insert into..." query or your client code modified to somehow store
the totals in memory:
a) Log buffer size
b) Page cache size (and JVM heap)
c) Page size
One of my attempts looked like this:
java -Xmx512M -Dderby.storage.pageSize=32768 \
     -Dderby.storage.logBufferSize=524288 \
     -Dderby.storage.pageCacheSize=2500 -cp .:${JDB10413} derbytest.FatTest
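The same settings can also go in a derby.properties file in the Derby system directory instead of on the command line (a sketch with the same values as above; check the Tuning Guide for your Derby version):

```properties
# derby.properties (in the Derby system directory)
derby.storage.pageSize=32768
derby.storage.pageCacheSize=2500
derby.storage.logBufferSize=524288
```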
This shows a time of 9.5 sec. The best is Oracle with 1.5 sec.
Also, do these parameters work with an existing database, or do you need
to recreate the database for them to take effect?
Except for the pageSize option, the settings take effect when you start
Derby.
Note that what I posted was just an example. You might still have to
experiment to find the best fit for your environment / data.
Using your original test code I haven't been able to get lower than
around 5 seconds (best), the average being somewhere around 6 seconds.
I could get 9.5 sec at best. What other improvements can you make to the
application?
Maybe that's down to your hardware, operating system or JVM version?
I was using Derby 10.4.1.3 on Solaris 10 and Java SE 6 on a dual CPU
(AMD 2.4 GHz) machine.
But if 6 seconds is still too much, and you can't use another algorithm,
I don't think I have anything more that can help you.
As always, you have to do your own tests on your own system to see
if it is good enough for your use :)
:)
Often there are other things to consider besides performance, for
instance installation and ease of use.
:) Yes, but the performance issue is very important, more important than
installation.
Because with Oracle, the analysis will take about 100 (algorithms) * 100
(the data is 100 times more than in the experiment) * 1.5 sec = 4.17
hours each, so nearly 6 data parts a day per server.
With Java: the analysis takes 100 * 100 * 9.5 sec = 26.4 hours each, so
less than ONE data part a day per server. The company would therefore
need to buy and run 6 times more servers than it uses now. Another
disadvantage is that Java does not integrate SQL into the language, so
SQL can only be validated at runtime, not at compile time (unlike
PL/SQL).
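For what it's worth, the arithmetic behind these estimates can be double-checked in a few lines (the 100 * 100 factors and the per-run times are the figures quoted above; this verifies only the multiplication, not the assumptions):

```java
public class EstimateCheck {
    public static void main(String[] args) {
        // 100 algorithms * 100x the experimental data volume * per-run time
        double oracleHours = 100 * 100 * 1.5 / 3600.0; // about 4.17 hours
        double javaHours = 100 * 100 * 9.5 / 3600.0;   // about 26.4 hours
        System.out.printf("Oracle: %.2f h -> %.1f parts/day%n",
                oracleHours, 24 / oracleHours);
        System.out.printf("Derby:  %.2f h -> %.1f parts/day%n",
                javaHours, 24 / javaHours);
    }
}
```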
I don't think it is correct to blame this on Java. There are other
database products using Java, and for all I know they might be faster
for the specific case we are discussing.
Also, I'm not sure how comparable the Java stored procedure code and the
PL/SQL are.
Could you post the latter?
Does anyone know if (or rather how well maybe?) Oracle is capable of
optimizing the PL/SQL?
regards,
--
Kristian
Does anyone have any ideas on other possible tunings?