Hi,

I am developing a thick client which uses an embedded H2 database as
a kind of local cache.  My tool retrieves data from a remote server,
stores it in memory tables in an embedded H2 database, and then uses
those tables to present the data in various visualizations.  This has
been working very well for me for data sets up to a certain size, but
now I am trying to deal with tables that are too large to keep in
memory.
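For context, the in-memory setup is just the usual H2 URL plus explicit
memory tables; the database and table names below are placeholders, not
my real schema:

```sql
-- Connection URL for the in-memory case (illustrative):
--   jdbc:h2:mem:cache;DB_CLOSE_DELAY=-1
-- DB_CLOSE_DELAY=-1 keeps the database alive between connections.

-- Tables are created explicitly as MEMORY tables:
CREATE MEMORY TABLE myTestTable (
    id      INT PRIMARY KEY,
    column5 VARCHAR(255)
    -- ... plus ~50 more columns in practice
);
```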

To deal with these cases, I have made my system smart enough to
switch over to using H2 in file mode when needed.  Unfortunately,
while this mostly works, I am running into performance issues when I
then query this data with an "order by" in the query.  Sorting a
smallish data set of 50,000 rows with a query like:

 select * from myTestTable order by column5 limit 100;

takes 3-4 seconds.
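In case it helps anyone reproduce this, the plan H2 chooses can be
inspected with EXPLAIN; my assumption is that without an index on
column5 it has to scan and sort the whole table before the LIMIT kicks
in:

```sql
-- Show the plan H2 picks for the slow query (table name illustrative)
EXPLAIN SELECT * FROM myTestTable ORDER BY column5 LIMIT 100;
```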

Since my tables can have a lot of different columns, 50 or more, any
of which I may want to sort on, indexing all of them did not seem
like a wise idea.  Additionally, setting MAX_MEMORY_ROWS to something
larger than my result set helps after the first query, but if I am
going to do that I might as well go back to in-memory tables.  I also
tried setting CACHE_SIZE to something fairly large, but this did not
really help performance much either.
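For completeness, this is the kind of tuning I tried; the values here
are just examples of what I experimented with, not recommendations:

```sql
-- Max rows a result can keep in memory before spilling to disk
SET MAX_MEMORY_ROWS 100000;

-- Page cache size in KB; can also be set on the connection URL,
-- e.g. jdbc:h2:/path/to/cache;CACHE_SIZE=262144
SET CACHE_SIZE 262144;
```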

Does anyone have any other suggestions on ways that I can either
change my queries or else tune H2 to improve performance?

Thanks.

-- Mark

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/h2-database?hl=en.