Re: [h2] Group BY on "large" tables from file-system causes Out of Memory Error

2020-04-21 Thread Noel Grandin
TBH, retrieving super large result sets is not something we optimise for.

If you really need that, you can try turning on the LAZY_FETCH feature.
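
A rough, untested sketch of what that could look like via the LAZY_QUERY_EXECUTION setting (the database URL, table and column names below are just placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LazyFetchSketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder database URL and credentials; adjust to your setup.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./testdb", "sa", "");
             Statement st = conn.createStatement()) {
            // Ask H2 to produce rows lazily instead of materializing the whole result.
            st.execute("SET LAZY_QUERY_EXECUTION 1");
            try (ResultSet rs = st.executeQuery("SELECT id, val FROM big_table")) {
                while (rs.next()) {
                    // Process one row at a time; only the current row/batch is kept in memory.
                    long id = rs.getLong("id");
                }
            }
        }
    }
}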



Re: [h2] Group BY on "large" tables from file-system causes Out of Memory Error

2020-04-21 Thread Evgenij Ryazanov
H2 doesn't need a lot of memory for plain queries without aggregate and 
window functions; large results are stored on disk automatically. But 
queries with aggregate or window functions currently need to load the whole 
result into memory; the only exception is the previously mentioned optimization 
for group-sorted queries in the presence of a compatible index.
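
If you can add an index whose leading column matches the GROUP BY column, the group-sorted path can be used. A rough sketch (the table and column names are only placeholders, and whether the optimizer actually picks the group-sorted plan still depends on the query):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class GroupSortedSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./testdb", "sa", "");
             Statement st = conn.createStatement()) {
            // An index on the GROUP BY column lets H2 read rows in group order and
            // aggregate one group at a time, instead of keeping every group in memory.
            st.execute("CREATE INDEX IF NOT EXISTS idx_big_table_category ON big_table(category)");
            try (ResultSet rs = st.executeQuery(
                    "SELECT category, COUNT(*) FROM big_table GROUP BY category")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                }
            }
        }
    }
}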



Re: [h2] Group BY on "large" tables from file-system causes Out of Memory Error

2020-04-21 Thread MacMahon McCallister


On Tuesday, 21 April 2020 13:12:01 UTC+3, MacMahon McCallister wrote:
>
>
>
> On Tuesday, 21 April 2020 11:18:02 UTC+3, Noel Grandin wrote:
>>
>> Which version is this ?
>>
>> And what happens when you remove the dangerous options? (LOG and UNDO_LOG)
>>
>
> Version: 1.4.200.
> Nothing happens if I remove the options. I actually tried fiddling with 
> the options earlier, but it always halts on 5M rows.
>
> On Tuesday, 21 April 2020 11:30:20 UTC+3, Evgenij Ryazanov wrote:
>>
>> If you don't have an index on GROUP BY column, you need a lot of memory 
>> for such queries in H2.
>>
>>  
> This kind of makes sense, but still not in this test scenario. How 
> come the previous test cases (up to 1M rows, without an index) run fine, even 
> with memory as low as -Xmx256m:
> Executing with size: 1000
> Processed 1000, time 30 ms
> Executing with size: 10000
> Processed 10000, time 50 ms
> Executing with size: 100000
> Processed 100000, time 241 ms
> Executing with size: 1000000
> Processed 1000000, time 1925 ms
>
>
To answer my own question: the -Xmx setting didn't apply properly, and 
therefore (as suggested earlier) the H2 operation ran out of memory.
With a heap setting of -Xmx1024m, the unindexed query on a table with 5M rows 
executes within 10 seconds.
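
(In case it helps someone else: a quick check like the sketch below, just printing Runtime.getRuntime().maxMemory(), would have shown me right away that the -Xmx value had not been applied.)

public class HeapCheck {
    public static void main(String[] args) {
        // Prints the maximum heap the JVM actually got; run with
        // java -Xmx1024m HeapCheck and expect roughly 1024 MiB here.
        long maxMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMiB + " MiB");
    }
}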

But a follow-up question: for these un-indexed GROUP BY scenarios, does 
H2 have to read the whole result set into memory?
And besides indexing the table (which I most probably cannot do), are there any 
other optimizations to consider?




Re: [h2] Group BY on "large" tables from file-system causes Out of Memory Error

2020-04-21 Thread MacMahon McCallister


On Tuesday, 21 April 2020 11:18:02 UTC+3, Noel Grandin wrote:
>
> Which version is this ?
>
> And what happens when you remove the dangerous options? (LOG and UNDO_LOG)
>

Version: 1.4.200.
Nothing happens if I remove the options. I actually tried fiddling with the 
options earlier, but it always halts on 5M rows.

On Tuesday, 21 April 2020 11:30:20 UTC+3, Evgenij Ryazanov wrote:
>
> If you don't have an index on GROUP BY column, you need a lot of memory 
> for such queries in H2.
>
>  
This kind of makes sense, but still not in this test scenario. How come 
the previous test cases (up to 1M rows, without an index) run fine, even with 
memory as low as -Xmx256m:
Executing with size: 1000
Processed 1000, time 30 ms
Executing with size: 10000
Processed 10000, time 50 ms
Executing with size: 100000
Processed 100000, time 241 ms
Executing with size: 1000000
Processed 1000000, time 1925 ms




Re: [h2] Group BY on "large" tables from file-system causes Out of Memory Error

2020-04-21 Thread Noel Grandin
Which version is this?

And what happens when you remove the dangerous options? (LOG and UNDO_LOG)
