I wanted to add one last post to this thread; after this it can probably be 
closed out.

Longer term, I think the right way to go is to add a way either to force the 
indices on temporary tables to disk or to limit how much of the index is 
kept in memory.  In our case the team did not want to run what would 
effectively be a fork of H2 to fix the issue, with no clear direction on how 
the issue would be handled in the future.  To work around the problem we 
have refactored our queries to use inner joins as surrogate filters.  This 
actually reduced the runtime of the queries and appears to avoid creating 
the temporary table entirely.  The downside is that everyone is now avoiding 
WHERE clauses and instead simulating the filtering that WHERE would provide 
with inner joins.  I'm adding this information primarily to give anyone 
reading this a possible workaround, NOT as a recommendation that we all go 
out and do this.

I hope this issue can be resolved in a future release of H2.  We enjoy 
working with the database, and it's a shame to now worry about using 
features as basic as filtering because of hidden memory faults.

On Monday, April 24, 2017 at 11:28:15 AM UTC-5, Noel Grandin wrote:
>
> Unfortunately there is no easy way to change an index in flight; it gets 
> set as memory- or disk-based at creation time.
>
> If you are unable to change your query, you could also try patching H2 and 
> running a custom build with the indexes set to disk-based.
>
> On 24 April 2017 at 18:22, James Hurley <[email protected]> wrote:
>
>> Hi Noel,
>>
>> I appreciate you taking the time to look into this issue; I'll let the 
>> team know that there is not an immediate solution and we can discuss the 
>> suggestion you've given.
>>
>> On a different note, would it make sense to add a setting that gives the 
>> database a threshold for when to force the indices for a temporary table to 
>> disk?  I'm thinking of something similar to the way MAX_MEMORY_ROWS 
>> works, so that some resource-usage tuning could be done when working with 
>> fairly large tables.  Understandably this may hurt performance once the 
>> threshold is reached and the index is persisted, but that is better than a 
>> sudden (and unexpected) heap memory issue.  Such a setting would also give 
>> people a way to manage the performance-versus-resource trade-off if they 
>> find themselves in a similar situation to ours.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/h2-database.
For more options, visit https://groups.google.com/d/optout.
