Gerald Khin (JIRA) wrote:
> [ http://nagoya.apache.org/jira/browse/DERBY-106?page=comments#action_56877 ]
>
> Gerald Khin commented on DERBY-106:
> -----------------------------------
>
> The system property derby.language.maxMemoryPerTable is the system property I
> asked for. Setting it to 0 works like a charm and turns the hash join strategy
> off. So I'm happy and the bug can be closed. Perhaps this system property
> should be mentioned somewhere in the derby tuning manual.
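For readers following along, here is a minimal sketch of how that workaround might be applied. It assumes the property is picked up as a JVM system property before the embedded engine boots (the class and database names below are just placeholders, not anything from the report):

    // Hypothetical sketch: set derby.language.maxMemoryPerTable to 0 before
    // booting Derby, which (per the comment above) turns off the hash join
    // strategy so the optimizer falls back to other join methods.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DisableHashJoins {
        public static void main(String[] args) throws Exception {
            // Must be set before the Derby system starts up.
            System.setProperty("derby.language.maxMemoryPerTable", "0");

            // Placeholder database URL; requires derby.jar on the classpath.
            Connection conn =
                DriverManager.getConnection("jdbc:derby:sampleDb;create=true");

            // ... run the queries that previously ran out of memory ...

            conn.close();
        }
    }

The same effect can presumably be had by passing -Dderby.language.maxMemoryPerTable=0 on the java command line; I have not verified whether derby.properties is also honored for this property.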
I don't think this bug should be closed. Most probably the out-of-memory error occurs because the whole hash table is kept in memory; the current Derby hash table implementation has no logic to spill hash table entries to disk when a lot of memory is required. Although the maxMemoryPerTable flag is a good workaround, it would be better to fix the optimizer to NOT choose a hash join when the memory requirements cannot be estimated accurately.

-suresh.
