ganest <[email protected]> writes:

> select count(*) from big inner join bigref on big.id=bigref.bigid and
> big.name like '0ff%';
>
> The result is 258, and it takes more than 20 seconds to execute. Using
> MySQL with the same configuration, the result is produced in
> milliseconds.
>
> It seems that the indexes are taken into account and the query plan is
> correct.
>
> Any idea about what is going wrong would be appreciated.
Hi George,

The query plan shows that the optimizer picks a nested loop join
strategy. This means that for every qualifying row in the outer table
(big), it opens a new scan on the inner table (bigref). In this case it
opens 2407 index scans on the inner table, which may take some time,
especially if the scans need to go to disk (not unlikely with such a
large database and such a small page cache). (The PS below has a sketch
of how to capture the plan and scan counts from ij.)

A hash join, which is currently the only alternative join strategy in
Derby, isn't likely to be more efficient: there is no restriction that
can be used to limit the number of rows read from bigref, so the entire
bigref table would have to be read in that case, I think.

A couple of comments on the tuning (sketches in the PS below):

- derby.storage.pageSize must be set prior to creating the tables and
  indexes; otherwise it has no effect.

- derby.storage.pageCacheSize is a system-wide property, so setting it
  with SYSCS_SET_DATABASE_PROPERTY has no effect. Set it as a JVM
  system property or in derby.properties instead.

-- 
Knut Anders
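
PS: In case it helps, this is roughly how to capture the plan and the
scan counts from ij. The table and column names are taken from your
query; everything else is just an illustration:

  -- enable runtime statistics for this connection, run the query,
  -- then fetch the statistics text for the last statement executed
  CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1);
  select count(*) from big inner join bigref
      on big.id=bigref.bigid and big.name like '0ff%';
  VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS();
  CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(0);

The output should say which join strategy was chosen and how many times
the scan on bigref was opened.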
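
For the page size, the property only applies to tables and indexes
created after it is set, so the order of the statements matters. A
minimal sketch; the column definitions are guesses, and 32768 is just
an example value (it is the largest page size Derby supports):

  -- only objects created after this call get the new page size;
  -- existing tables/indexes keep the size they were created with
  CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(
      'derby.storage.pageSize', '32768');
  CREATE TABLE big (id INT PRIMARY KEY, name VARCHAR(100));
  CREATE INDEX big_name_idx ON big(name);
  CREATE TABLE bigref (id INT PRIMARY KEY,
                       bigid INT REFERENCES big(id));
  CREATE INDEX bigref_bigid_idx ON bigref(bigid);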
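
And for the page cache, the property is read when the engine boots, so
set it either on the JVM command line (-Dderby.storage.pageCacheSize=4000)
or in derby.properties in the system directory (derby.system.home).
The value is a number of pages, not bytes:

  # in derby.properties; the value is the number of pages kept in
  # the cache. 4000 is only an example; the default is 1000.
  derby.storage.pageCacheSize=4000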
