Hey all, I'm working on a quick-and-dirty app that unfortunately has to run on a machine with all of a gig of RAM, against a decent-sized dataset (2M rows). I really want the 'fetchSize' attribute of the select element to work so my RowHandler runs lean and mean. The app basically issues one query up front that pulls back a bunch of rows, and my RowHandler does the work for each row.
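
Roughly, the calling side looks like the sketch below. Class and statement names are made up for the example, and the real app wires things up through Spring, but the shape is the same; the select behind the (hypothetical) "getAllRows" id is the one with fetchSize="100" in the sqlmap.

    import java.io.Reader;

    import com.ibatis.common.resources.Resources;
    import com.ibatis.sqlmap.client.SqlMapClient;
    import com.ibatis.sqlmap.client.SqlMapClientBuilder;
    import com.ibatis.sqlmap.client.event.RowHandler;

    public class BulkProcessor {
        public static void main(String[] args) throws Exception {
            // build the client from the usual sqlmap config
            Reader reader = Resources.getResourceAsReader("sqlmap-config.xml");
            SqlMapClient sqlMap = SqlMapClientBuilder.buildSqlMapClient(reader);

            // the select behind "getAllRows" has fetchSize="100" in the sqlmap and
            // brings back ~1M rows; the RowHandler should see them one at a time
            sqlMap.queryWithRowHandler("getAllRows", new RowHandler() {
                public void handleRow(Object row) {
                    // per-row work happens here
                }
            });
        }
    }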
What I'm seeing is the JVM memory usage climb to about 500M at startup, level off after a bit, and then my RowHandler gets invoked on a per-row basis. This is with fetchSize set to 100 on the select in the sqlmap (the select brings back about 1M rows).

I'm using postgres' JDBC driver, and their docs say the default behavior is to fetch all the rows unless you set a fetch size. Even if you set the fetch size, the driver will silently fall back to grabbing everything if certain conditions are not met. The two conditions I'm not sure about are these (from the postgres docs):

* The Connection must not be in autocommit mode. The backend closes cursors at the end of transactions, so in autocommit mode the backend will have closed the cursor before anything can be fetched from it.

* The Statement must be created with a ResultSet type of ResultSet.TYPE_FORWARD_ONLY. This is the default, so no code will need to be rewritten to take advantage of this, but it also means that you cannot scroll backwards or otherwise jump around in the ResultSet.

I am using Spring's transaction management in this standalone main() app, but not for selects, and I'm not sure what that means in terms of autocommit mode for iBATIS selects. I also don't know what ResultSet type iBATIS requests when it creates the Statement.

Any thoughts or sage advice (or condolences) before I go jump into the code? This is on version 2.3.0.667 of iBATIS and version 8.1-405 of the postgres JDBC driver, BTW.

Thanks,
Chris
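
P.S. To frame what I'm planning to try first: wrapping the streaming select in an explicit iBATIS transaction, the idea being that the select then runs on a connection that isn't in autocommit mode. Untested sketch, reusing the same client and made-up statement name as above:

    // untested sketch -- same sqlMap client and made-up "getAllRows" id as above
    void runInsideExplicitTransaction(SqlMapClient sqlMap) throws java.sql.SQLException {
        // select now runs inside a transaction, so the connection
        // shouldn't be in autocommit mode while the cursor is open
        sqlMap.startTransaction();
        try {
            sqlMap.queryWithRowHandler("getAllRows", new RowHandler() {
                public void handleRow(Object row) {
                    // per-row work
                }
            });
            sqlMap.commitTransaction();
        } finally {
            sqlMap.endTransaction(); // always releases the connection
        }
    }

That still leaves the question of what ResultSet type iBATIS asks for when it creates the Statement, which is why I'm asking before diving in.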