Hi Brian,
In a previous response, Peter Ondruška noted that you could use bulk
import to speed up your inserts if your source data lived in a properly
formatted file.
Even if your source data does not live in a file, you can still get
bulk-import speed (and the benefits of your generated key) by using table
functions, provided that you upgrade to Derby 10.4. If you wrap your
source data in a table function, then you can bulk-insert your data as
follows:
insert into MY_TABLE( NON_GEN_COL1, ... NON_GEN_COL_N )
select * from table( MY_TABLE_FUNCTION() ) s
For more information on table functions, see the white paper at
http://developers.sun.com/javadb/reference/whitepapers/index.jsp
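In case a concrete skeleton is useful, here is a rough sketch. All of the
names are placeholders and the JDBC URL for the source data is made up;
the essential points are that a table function is just a public static
method returning a java.sql.ResultSet, and that you declare its row shape
to Derby once with CREATE FUNCTION:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class MyTableFunction {

    // Derby calls this public static method when the query runs and then
    // drains the returned ResultSet row by row. How you build that ResultSet
    // depends on where your source data actually lives; this placeholder just
    // pulls the rows from some other JDBC source.
    public static ResultSet read() throws SQLException {
        Connection source = DriverManager.getConnection( "jdbc:hypothetical:sourceOfRows" );
        return source.createStatement().executeQuery(
            "SELECT NON_GEN_COL1, NON_GEN_COL2 FROM SOURCE_ROWS" );
    }

    // One-time registration: declare the row shape and point Derby at the
    // static method above. The class only needs to be on the classpath of
    // the JVM that runs Derby.
    public static void register( Connection derbyConn ) throws SQLException {
        Statement s = derbyConn.createStatement();
        s.execute(
            "CREATE FUNCTION MY_TABLE_FUNCTION() " +
            "RETURNS TABLE ( NON_GEN_COL1 BIGINT, NON_GEN_COL2 BIGINT ) " +
            "LANGUAGE JAVA PARAMETER STYLE DERBY_JDBC_RESULT_SET " +
            "READS SQL DATA " +
            "EXTERNAL NAME 'MyTableFunction.read'" );
        s.close();
    }
}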
Hope this helps,
-Rick
publicayers wrote:
I have tens of thousands of rows to add to a table, possibly hundreds of
thousands, and I'm wondering if there's anything else I can do to speed
this up. The table could end up having a couple of million rows.
This is what I've done so far:
* Using a PreparedStatement that gets reused with each insert.
* Set locking level to TABLE for that table.
* Turned off autocommit.
* Set the connection's transaction isolation to READ_COMMITTED.
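In code, the loop looks roughly like this (table and column names are
simplified, and the LOCK TABLE statement is just one way of getting
table-level locking):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class InsertLoop {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
        conn.setAutoCommit(false);
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

        // Take one table-level lock up front instead of per-row locks;
        // it is released when the transaction commits.
        Statement lock = conn.createStatement();
        lock.execute("LOCK TABLE MY_TABLE IN EXCLUSIVE MODE");
        lock.close();

        // One PreparedStatement, reused for every insert.
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO MY_TABLE (KEY_A, KEY_B, PAYLOAD) VALUES (?, ?, ?)");
        for (int i = 0; i < 100000; i++) {
            ps.setLong(1, i);               // first BIGINT key column
            ps.setLong(2, i * 2L);          // second BIGINT key column
            ps.setBytes(3, new byte[100]);  // the VARCHAR ... FOR BIT DATA column
            ps.executeUpdate();
        }
        ps.close();

        conn.commit();
        conn.close();
    }
}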
In addition to that, I'm also setting these system parameters, though not
necessarily to improve insert performance:
* derby.system.durability=test
* derby.storage.pageSize=32768
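(For reference, that just means something like the following before the
engine boots, or the equivalent two lines in derby.properties in the Derby
system directory.)

public class DerbyTuning {
    public static void main(String[] args) throws Exception {
        // Durability is read when the engine boots; pageSize applies to
        // tables and indexes created after it is set.
        System.setProperty("derby.system.durability", "test");
        System.setProperty("derby.storage.pageSize", "32768");
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
    }
}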
The table has one odd feature: The last column is a VARCHAR(32672) FOR
BIT DATA. I've tried setting the length to something smaller, but it
didn't really seem to matter.
The primary key is an auto-generated int, and there is a separate
two-column index on two BIGINT columns. Something I found interesting is
that the inserts seem to go 2x faster with the two-column index in place
than with just the primary-key index.
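With the names changed, the DDL looks roughly like this (shown here as it
is issued through JDBC):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateSchema {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB;create=true");
        Statement s = conn.createStatement();
        // Auto-generated int primary key, two BIGINT key columns, and the
        // large VARCHAR ... FOR BIT DATA column last.
        s.executeUpdate(
            "CREATE TABLE MY_TABLE (" +
            " ID INT GENERATED ALWAYS AS IDENTITY," +
            " KEY_A BIGINT," +
            " KEY_B BIGINT," +
            " PAYLOAD VARCHAR(32672) FOR BIT DATA," +
            " PRIMARY KEY (ID))");
        // The extra two-column index mentioned above.
        s.executeUpdate("CREATE INDEX MY_TABLE_KEYS ON MY_TABLE (KEY_A, KEY_B)");
        s.close();
        conn.close();
    }
}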
I'm running:
* Derby 10.2.2
* JRE 1.6.0_07
* Windows XP SP2
Is there anything else I can do to speed up row inserts?
Thanks,
Brian