Have a look at the Ignite data streamer API; it is designed for data
streaming and loading large chunks of data:
https://apacheignite.readme.io/docs/streaming--cep
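
For illustration, a minimal streamer sketch; the cache name, key/value types,
and config path are placeholders, not a definitive implementation:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerLoad {
    public static void main(String[] args) {
        // Start a node; the config file path is a placeholder.
        try (Ignite ignite = Ignition.start("ignite-client-config.xml")) {
            // The streamer buffers entries and routes them to the right
            // data nodes in batches, which is far cheaper than per-entry puts.
            try (IgniteDataStreamer<Long, String> streamer =
                     ignite.dataStreamer("myCache")) { // cache name is a placeholder
                streamer.allowOverwrite(true); // needed if keys may already exist
                for (long i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "value-" + i);
            } // close() flushes any remaining buffered entries
        }
    }
}
```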

Or switch the SQL engine to streaming mode for the duration of that INSERT:
https://apacheignite-sql.readme.io/docs/set
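
A config-style sketch of that mode over the JDBC thin driver; table and
column names are illustrative, and SET STREAMING applies per connection:

```sql
SET STREAMING ON;

-- While streaming is on, updates are buffered and streamed to the
-- data nodes instead of being executed one by one.
INSERT INTO new_table (id, val)
SELECT id, val FROM big_table;

-- Turning streaming off flushes any remaining buffered data.
SET STREAMING OFF;
```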

Let me know if any of the above works. We can document this.

-
Denis


On Tue, Jun 4, 2019 at 10:50 AM [email protected] <
[email protected]> wrote:

> Hello, finally I did the following trick.
>
> I broadcast my select request to each node, executing the select with the
> lazy and collocated flags, then I use a QueryCursor and create BinaryObjects
> that I insert into the cache in batches of 5000.
>
> Which seems to be good enough.
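>
> That approach can be sketched roughly as follows (cache names, the query,
> the target type, and the keyFor() helper are all placeholders, not the
> actual code):
>
> ```java
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.binary.BinaryObject;
> import org.apache.ignite.binary.BinaryObjectBuilder;
> import org.apache.ignite.cache.query.QueryCursor;
> import org.apache.ignite.cache.query.SqlFieldsQuery;
>
> public class BroadcastCopy {
>     public static void run(Ignite ignite) {
>         // Run the same closure on every node; each one scans its own partitions.
>         ignite.compute().broadcast(() -> {
>             Ignite local = Ignition.localIgnite();
>             SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, val FROM SRC") // placeholder query
>                 .setLazy(true)        // stream rows instead of materializing the result set
>                 .setCollocated(true)
>                 .setLocal(true);      // restrict the select to this node's data
>             IgniteCache<BinaryObject, BinaryObject> dst =
>                 local.cache("TARGET").withKeepBinary();
>             Map<BinaryObject, BinaryObject> batch = new HashMap<>();
>             try (QueryCursor<List<?>> cur = local.cache("SOURCE").query(qry)) {
>                 for (List<?> row : cur) {
>                     BinaryObjectBuilder b = local.binary().builder("TargetType");
>                     b.setField("id", row.get(0));
>                     b.setField("val", row.get(1));
>                     batch.put(keyFor(row), b.build()); // keyFor(): hypothetical key builder
>                     if (batch.size() == 5_000) { dst.putAll(batch); batch.clear(); }
>                 }
>             }
>             if (!batch.isEmpty()) dst.putAll(batch); // flush the last partial batch
>         });
>     }
>
>     private static BinaryObject keyFor(List<?> row) {
>         throw new UnsupportedOperationException("placeholder");
>     }
> }
> ```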
>
> Thanks for your help.
>
> On May 30, 2019 at 18:18, yann Blazart <[email protected]> wrote:
>
> It's an INSERT INTO ... SELECT. We made "meta" tables to allow doing other
> selects.
>
> Or do you mean I can do a lazy select and then a batch insert?
>
> On Thu, May 30, 2019 at 18:15, Ilya Kasnacheev <[email protected]>
> wrote:
>
>> Hello!
>>
>> I think it would make more sense to mark already-updated entries and
>> update in batches until no unmarked entries are left.
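>>
>> For example (an illustrative schema: a boolean `done` column is assumed to
>> exist on the source table, and single-threaded execution is assumed so both
>> statements see the same 10 000 rows):
>>
>> ```sql
>> -- One iteration; repeat until the INSERT reports 0 rows.
>> INSERT INTO dst (id, val)
>> SELECT id, val FROM src WHERE done = FALSE ORDER BY id LIMIT 10000;
>>
>> UPDATE src SET done = TRUE WHERE id IN (
>>     SELECT id FROM src WHERE done = FALSE ORDER BY id LIMIT 10000);
>> ```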
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Thu, May 30, 2019 at 19:14, yann Blazart <[email protected]> wrote:
>>
>>> Hmmm. Can I use LIMIT and OFFSET?
>>>
>>> Doing LIMIT 10000, for example, and continuing while the insert count = 10000?
>>>
>>>
>>>
>>> On Thu, May 30, 2019 at 17:57, Ilya Kasnacheev <[email protected]>
>>> wrote:
>>>
>>>> Hello!
>>>>
>>>> I'm afraid you will have to split this query into smaller ones. Ignite
>>>> doesn't really have a lazy INSERT ... SELECT, so the result set will have to
>>>> be held in heap for some time.
>>>>
>>>> Regards,
>>>> --
>>>> Ilya Kasnacheev
>>>>
>>>>
>>>> On Thu, May 30, 2019 at 18:36, yann Blazart <[email protected]> wrote:
>>>>
>>>>> Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap.
>>>>>
>>>>> We store lots of data in some partitioned tables, then we execute some
>>>>> "INSERT INTO ... SELECT ... JOIN ..." statements using SqlFieldsQuery (or
>>>>> SqlFieldsQueryEx).
>>>>>
>>>>> With tables of 5,000,000 rows, we ran into an OOM error, even with lazy
>>>>> set to true and skipReducerOnUpdate.
>>>>>
>>>>> How can we handle this please ?
>>>>>
>>>>> Regards.
>>>>>
>>>>
