http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5429
--- Comment #4 from Derik Barseghian <[email protected]> 2011-07-07 16:22:18 PDT ---

On the issue of max data file size (hsqldb.cache_file_scale=1): we are currently limited to 2GB, and I've reached this limit with one of my stores. The likely scenario is that mid-execution you start getting exceptions like:

java.io.IOException: S100 Data file size limit is reached in statement...

The run in the WRM will continue to show "running..." because the error can't be recorded. Subsequent run-row deletes take an extremely long time and don't seem to work; there's a lot of activity with the log file, but the rows don't disappear until you restart Kepler (which also takes an extremely long time).

To change the max data file size, certain conditions must be met; see: http://hsqldb.org/doc/guide/ch04.html

Changing the size to 4GB (the max for FAT32) probably isn't even desirable until we upgrade hsql and can leverage the SET FILES BACKUP INCREMENT feature.
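For reference, the procedure the HSQLDB guide describes amounts to roughly the following. This is a sketch based on the HSQLDB 1.8 docs linked above; verify the exact property name and allowed values against the HSQLDB version Kepler actually ships.

-- Sketch, assuming HSQLDB 1.8.x (the version of the guide linked above).
-- The database must be shut down cleanly first, so the .data file can be
-- rebuilt at the new scale:
SHUTDOWN SCRIPT;

-- Then, with the database offline, edit the <dbname>.properties file and set:
--   hsqldb.cache_file_scale=8
-- (1 is the 2GB default; 8 raises the .data file limit to 8GB.)
-- On the next open, HSQLDB recreates the .data file at the new scale.

Note that 8GB already exceeds the 4GB FAT32 file-size ceiling mentioned above, which is part of why raising the scale isn't attractive before the hsql upgrade.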
