I've been working with two simple tests: an INSERT and a SELECT.
(i) The INSERT test: 1000 jdbc calls of
"INSERT INTO " + tableName +
" ( int0, int1, int2, blob0, " +
" short0, blob1 ) " +
" VALUES ( ?, ?, ?, ?, ?, ? ) "int0 has a unique index; int1 and int2 have non-unique indexes.
Each INSERT has different values.
The 'blob0's are ~600 bytes each, the 'blob1's are zero length byte[].
The blobs are loaded using ByteArrayInputStreams and ps.setBinaryStream(). The same PreparedStatement is used in each iteration; ps.clearParameters() is called after each INSERT.
This takes ~134 seconds on Derby and ~5 seconds on MySQL.
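For reference, here is roughly what that loop looks like, plus the one change that is commonly suggested first for Derby: turning autocommit off and committing every N rows, since in autocommit mode each INSERT forces a synchronous log write. This is a sketch, not my actual test code; the table name "perftest", the column values, and the batch size are placeholders:

```java
import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InsertTest {
    // Same statement shape as described above; "perftest" stands in
    // for the real table name.
    static final String SQL =
        "INSERT INTO perftest ( int0, int1, int2, blob0, short0, blob1 )"
        + " VALUES ( ?, ?, ?, ?, ?, ? )";

    // One PreparedStatement reused for all rows; commits every
    // batchSize rows instead of relying on autocommit, which syncs
    // the transaction log once per INSERT.
    static void insertRows(Connection conn, int rows, int batchSize)
            throws SQLException {
        conn.setAutoCommit(false);
        byte[] blob0 = new byte[600];  // ~600-byte payload
        byte[] blob1 = new byte[0];    // zero-length blob
        PreparedStatement ps = conn.prepareStatement(SQL);
        try {
            for (int i = 0; i < rows; i++) {
                ps.setInt(1, i);         // unique-indexed column
                ps.setInt(2, i % 10);    // non-unique index (placeholder values)
                ps.setInt(3, i % 100);   // non-unique index (placeholder values)
                ps.setBinaryStream(4, new ByteArrayInputStream(blob0),
                                   blob0.length);
                ps.setShort(5, (short) 0);
                ps.setBinaryStream(6, new ByteArrayInputStream(blob1), 0);
                ps.executeUpdate();
                ps.clearParameters();
                if ((i + 1) % batchSize == 0) {
                    conn.commit();
                }
            }
            conn.commit();
        } finally {
            ps.close();
        }
    }

    public static void main(String[] args) {
        // Dry run without a database: just show the statement text.
        System.out.println(SQL);
    }
}
```
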
(ii) The SELECT test: 1000 jdbc calls, reading back the data entered in the INSERT test, using
"SELECT blob0, int2, blob1 " + " FROM " + tableName +
" int1 = ? AND int2 = ? "
As in the INSERT test, the same PreparedStatement is used for each iteration; ps.clearParameters() is called after
each SELECT. Note that the two columns used as the select criteria (int1 and int2) are indexed.
This takes ~105 seconds on Derby and ~1 second on MySQL.
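The SELECT loop, sketched the same way (again "perftest" and the parameter values are placeholders; note the explicit WHERE keyword):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SelectTest {
    static final String SQL =
        "SELECT blob0, int2, blob1 FROM perftest"
        + " WHERE int1 = ? AND int2 = ?";

    // Reads rows back, reusing one PreparedStatement across iterations.
    static int readRows(Connection conn, int iterations) throws SQLException {
        int rowsSeen = 0;
        PreparedStatement ps = conn.prepareStatement(SQL);
        try {
            for (int i = 0; i < iterations; i++) {
                ps.setInt(1, i % 10);    // int1 criterion (indexed)
                ps.setInt(2, i % 100);   // int2 criterion (indexed)
                ResultSet rs = ps.executeQuery();
                try {
                    while (rs.next()) {
                        rs.getBytes("blob0");  // force the blob to be read
                        rowsSeen++;
                    }
                } finally {
                    rs.close();
                }
                ps.clearParameters();
            }
        } finally {
            ps.close();
        }
        return rowsSeen;
    }

    public static void main(String[] args) {
        // Dry run without a database: just show the statement text.
        System.out.println(SQL);
    }
}
```

On Derby it should also be possible to confirm the indexes are actually being used by turning on runtime statistics (SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS) before running the query.
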
FYI, I ran these tests with Cloudscape 10.0 under SuSE 9.0 (I haven't had time to re-run them under Derby 10.0.2.1).
A couple of questions/issues:
Re logging: Something I read in the Derby documentation (or perhaps in the mailing list archive) indicated that logging may be expensive. Is there any way to disable logging completely?
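As far as I can tell, the transaction (write-ahead) log cannot be disabled outright; it is what provides crash recovery. Two things that may be worth trying, with names taken from the Derby documentation and paths as placeholders: the log can be placed on a faster device when the database is created, via the logDevice connection attribute (e.g. jdbc:derby:mydb;create=true;logDevice=/fast/disk/dblog), and releases after 10.0 add a test-only property that stops the engine from syncing log writes:

```properties
# derby.properties -- unsupported, test-only mode: do not sync
# log writes to disk. Added after the 10.0 release, so check
# whether your version supports it before relying on it.
derby.system.durability=test
```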
Re record vs table locking: "Tuning Derby" indicates that record level locking can add a lot of overhead and implied that there's a way to force table locking, but it wasn't clear to me how to do this.
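From my reading of "Tuning Derby", the database-wide switch is the derby.storage.rowLocking property, which is read when the database is booted; whether it helps this particular workload is something that would have to be measured:

```properties
# derby.properties -- disable row-level locking; Derby then takes
# table-level locks instead, reducing lock-manager overhead.
derby.storage.rowLocking=false
```

Alternatively, a single transaction can take a table lock explicitly with LOCK TABLE tableName IN EXCLUSIVE MODE.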
Any comments/suggestions re tweaking performance would be appreciated.
Thanks
