It's an INSERT INTO ... SELECT. We made "meta" tables to allow doing other
selects on top of them.

Or do you mean I can do a lazy select and then batch insert?
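For reference, the "LIMIT 10000 and keep going while the insert count equals 10000" idea discussed further down in the thread can be sketched standalone. The executor hook below is a hypothetical stand-in; with Ignite each page would be a single `INSERT INTO dst SELECT ... FROM src LIMIT ? OFFSET ?` run via SqlFieldsQuery:

```java
/**
 * Standalone sketch of the LIMIT/OFFSET paging loop from this thread.
 * BatchExecutor is a hypothetical stand-in for running one
 * "INSERT INTO ... SELECT ... LIMIT ? OFFSET ?" statement; this sketch
 * assumes the source table is not mutated while the loop runs.
 */
public class BatchedInsertSelect {
    interface BatchExecutor {
        // Runs one page and returns the number of rows inserted.
        long runBatch(long limit, long offset);
    }

    // Keep issuing pages until a page inserts fewer rows than the limit.
    static long copyInBatches(BatchExecutor exec, long batchSize) {
        long total = 0;
        long offset = 0;
        while (true) {
            long inserted = exec.runBatch(batchSize, offset);
            total += inserted;
            offset += inserted;
            if (inserted < batchSize) {
                return total; // short (or empty) page: source exhausted
            }
        }
    }

    public static void main(String[] args) {
        // Simulate a 25 000-row source table with a 10 000-row batch size.
        long sourceRows = 25_000;
        BatchExecutor fake = (limit, offset) ->
            Math.max(0, Math.min(limit, sourceRows - offset));
        System.out.println(copyInBatches(fake, 10_000)); // prints 25000
    }
}
```

The loop terminates on the first page that inserts fewer rows than the batch size, so no extra "count remaining rows" query is needed.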

On Thu, May 30, 2019 at 18:15, Ilya Kasnacheev <[email protected]> wrote:

> Hello!
>
> I think it would make better sense to mark already-updated entries and
> update in batches until no unmarked entries are left.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Thu, May 30, 2019 at 19:14, yann Blazart <[email protected]> wrote:
>
>> Hmmm. Can I use LIMIT and OFFSET?
>>
>> For example, doing LIMIT 10000 and continuing while the insert count = 10000?
>>
>>
>>
>> On Thu, May 30, 2019 at 17:57, Ilya Kasnacheev <[email protected]> wrote:
>>
>>> Hello!
>>>
>>> I'm afraid you will have to split this query into smaller ones. Ignite
>>> doesn't really have a lazy INSERT ... SELECT, so the result set will have
>>> to be held in heap for some time.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> On Thu, May 30, 2019 at 18:36, yann Blazart <[email protected]> wrote:
>>>
>>>> Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap.
>>>>
>>>> We store lots of data in some partitioned tables, then we execute some
>>>> "INSERT INTO ... SELECT ... JOIN ..." statements using SqlFieldsQuery
>>>> (or SqlFieldsQueryEx).
>>>>
>>>> With tables of 5,000,000 rows, we ran into an OOM error, even with lazy
>>>> set to true and skipOnReduceTable.
>>>>
>>>> How can we handle this, please?
>>>>
>>>> Regards.
>>>>
>>>
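Ilya's mark-and-batch suggestion above can likewise be sketched. The `copied` flag and the table layout are assumptions for illustration; with Ignite each pass would be one `INSERT INTO dst SELECT ... FROM src WHERE copied = FALSE LIMIT ?` followed by an `UPDATE src SET copied = TRUE ...` over the same keys:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of mark-and-batch copying: copy only unmarked source rows,
 * mark what was copied, and repeat until nothing unmarked remains.
 * Row and the in-memory lists are stand-ins for the real tables.
 */
public class MarkAndBatchCopy {
    static final class Row {
        final int id;
        boolean copied; // the hypothetical marker column
        Row(int id) { this.id = id; }
    }

    // One pass: copy up to batchSize unmarked rows and mark them.
    static int copyOneBatch(List<Row> src, List<Integer> dst, int batchSize) {
        int n = 0;
        for (Row r : src) {
            if (!r.copied && n < batchSize) {
                dst.add(r.id);
                r.copied = true;
                n++;
            }
        }
        return n;
    }

    // Loop until a batch copies nothing: no unmarked rows remain.
    static int copyAll(List<Row> src, List<Integer> dst, int batchSize) {
        int total = 0, n;
        while ((n = copyOneBatch(src, dst, batchSize)) > 0) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Row> src = new ArrayList<>();
        for (int i = 0; i < 23; i++) src.add(new Row(i));
        List<Integer> dst = new ArrayList<>();
        System.out.println(copyAll(src, dst, 10)); // prints 23
    }
}
```

Unlike LIMIT/OFFSET paging, this stays correct even if rows are added or removed between batches, at the cost of an extra marker column (or marker table) and the UPDATE per pass.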
