Templexp Tan wrote:
Hello,
On 10/18/07, *Kristian Waagan* <[EMAIL PROTECTED]> wrote:
Templexp Tan wrote:
> Is there any optimization for BATCH queries in Derby?
>
> How about the speed of inserts?
> Which one is more suitable for batch queries, Statement or
> PreparedStatement?
>
> It seems a little bit slow to insert a bunch of data into the DB.
> It takes about 7ms in ORACLE (100 queries in a batch),
> but it takes 20ms in DERBY (100 queries in a batch),
> and the worst case is that it gets slower and slower once I have
> more than 30000 rows of data in it.
> There is one index on the table. I don't think it should cost too
> much.
>
> any idea to speed up?
Hello,
Questions like these are generally hard to answer without knowing the
full story. Also, I don't know Oracle.
A few thoughts:
* Are you using Derby embedded or client/server?
It is a client/server based DB.
* Is the load program singlethreaded or multithreaded?
Probably multithreaded, but it might not all be in a single program.
* Are the data and the log written to separate devices? (set with
logDevice in the connection URL upon creation)
They should be on the same device. Do you mean the operation log, or the
database record log? Is there any documentation about the log, or about
how its performance could be optimized?
I mean the transaction log, which I believe you have called database
record log. I *do not* mean the file "derby.log".
Keeping the transaction log on a separate dedicated device makes writing
this sequential log faster, as the disk would not have to reposition the
heads all the time. This pays off the most when you have a highly loaded
database where pages have to be written to disk often (because the page
cache fills up).
You have to specify this when you create the database, something like:
"jdbc:derby://dbhost:1527/myDb;...;create=true;logDevice=/path/to/db/txLog;"
However, if Oracle is faster when writing the data/log to the same
device, Derby must be slower for another reason.
* Do you run with autocommit off in both databases? If yes, how often
do you commit?
Autocommit is off; we commit manually, only once after executing the batch.
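One thing to try is committing per batch rather than once for the whole
load; with a single transaction the row locks accumulate over all
30000+ rows. A rough sketch of that pattern (the table and column names
below are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchLoad {
    // Inserts the rows in batches of 100, committing after each batch.
    // MYTABLE(ID INT, DATA VARCHAR(4000)) is a made-up example table.
    static void load(Connection conn, String[] rows) throws SQLException {
        conn.setAutoCommit(false);
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO MYTABLE (ID, DATA) VALUES (?, ?)");
        try {
            for (int i = 0; i < rows.length; i++) {
                ps.setInt(1, i);
                ps.setString(2, rows[i]);
                ps.addBatch();
                if ((i + 1) % 100 == 0) {
                    ps.executeBatch();
                    conn.commit();
                }
            }
            ps.executeBatch(); // flush any last partial batch
            conn.commit();
        } finally {
            ps.close();
        }
    }
}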
* How big (approx) are your rows?
It is less than about 4K per row.
Do you know what page size Oracle is using?
You could always experiment with setting the size to either 8K, 16K or
32K. The default is 4K.
* Have you adjusted the page cache size?
It is 10000.
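Both the page size and the page cache size are plain Derby properties.
Since you run client/server they have to be set on the server side,
for instance in derby.properties in the server's derby.system.home.
Something like:

# derby.properties, read when the Derby engine boots.
# The page size applies only to tables created afterwards.
derby.storage.pageSize=32768
derby.storage.pageCacheSize=10000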
* What about the Java heap size and garbage collection?
512MB for the Java heap, and I haven't optimized anything about the GC.
Any help on this?
Nothing except for the usual: monitor under your specific load and tune
from there... Sorry, but there really is no "cure-all" on this
one. Remember that if you increase the page size, you might also have to
either reduce the page cache size or increase the heap size;
4K * 10000 ~ 40 MB
32K * 10000 ~ 320 MB
512 MB of heap should be enough, but confirming that GC doesn't take too
much time is a good thing anyway. A few quick hints: use visualgc,
specify -Xloggc:/tmp/specj-gclog.txt (and maybe -verbose:gc and/or
-XX:+PrintGCDetails) or use jconsole.
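If it is easier to check from inside the load program itself, the
standard java.lang.management API (available since Java 5) exposes the
same counters; a minimal sketch:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCheck {
    public static void main(String[] args) {
        // Prints cumulative collection counts and time spent, per collector.
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", timeMs=" + gc.getCollectionTime());
        }
    }
}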
Then start reading up on tuning-docs ;)
Sorry I can't be of more help, but diagnosing such problems is really
hard. It might even turn out that Oracle is that much faster, but to
tell for sure you must ensure both databases are configured properly.
regards,
--
Kristian
* Java version?
Java 6 Update 2
* Derby version?
10.3.1.4
That's what I can think of right now. Maybe someone else chimes in with
suggestions too. If you post the code, it might be easier to see if
anything can be done to speed up Derby.
regards,
--
Kristian
Thanks for your help.
Regards,
Temple