I have tens of thousands of rows to add to a table, possibly hundreds of
thousands, and I'm wondering whether there's anything else I can do to
speed this up. The table could end up holding a couple of million rows.
This is what I've done so far (a rough code sketch follows the list):
* Using a PreparedStatement that gets reused with each insert.
* Set locking level to TABLE for that table.
* Turned off autocommit.
* Set the connection's isolation level to READ_COMMITTED.
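
Roughly, the code looks like this. The database, table, and column
names are placeholders, I'm locking the table with Derby's LOCK TABLE
statement, and since I'm on Java 6 there's no try-with-resources:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BulkInsert {
    public static void main(String[] args) throws Exception {
        // Derby 10.2 on JRE 1.6: load the embedded driver explicitly.
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        // "myDb" is a placeholder; the database and table already exist.
        Connection conn = DriverManager.getConnection("jdbc:derby:myDb");

        conn.setAutoCommit(false);  // no commit per row
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

        // Table-level locking for the duration of this transaction.
        Statement lock = conn.createStatement();
        lock.execute("LOCK TABLE myTable IN EXCLUSIVE MODE");
        lock.close();

        // One PreparedStatement, prepared once and reused for every row.
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO myTable (keyA, keyB, payload) VALUES (?, ?, ?)");
        byte[] payload = new byte[1024];  // stand-in for the FOR BIT DATA column
        for (int i = 0; i < 100000; i++) {
            ps.setLong(1, i);
            ps.setLong(2, (long) i * 2);
            ps.setBytes(3, payload);
            ps.executeUpdate();
        }
        conn.commit();  // single commit at the very end
        ps.close();
        conn.close();
    }
}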
In addition to that, I'm also setting these system parameters, though
not necessarily to improve insert performance:
* derby.system.durability=test
* derby.storage.pageSize=32768
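
For reference, both are set in derby.properties in the Derby system
directory (they could equally be passed as -D arguments on the JVM
command line):

# derby.properties
derby.system.durability=test
derby.storage.pageSize=32768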
The table has one odd feature: the last column is a VARCHAR(32672) FOR
BIT DATA. I've tried declaring it with a smaller length, but it didn't
seem to make any real difference.
The primary key is an auto-generated INT, and there is a second index
on two BIGINT columns. Something I found interesting is that the
inserts seem to go about 2x faster with that two-column index in place
than with just the primary-key index.
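
The schema is essentially this (the names are made up, but the shapes
are right):

CREATE TABLE myTable (
    id      INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    keyA    BIGINT,
    keyB    BIGINT,
    payload VARCHAR(32672) FOR BIT DATA
);

CREATE INDEX myTable_keys ON myTable (keyA, keyB);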
I'm running:
* Derby 10.2.2
* JRE 1.6.0_07
* Windows XP SP2
Is there anything else I can do to speed up row inserts?
Thanks,
Brian