[
https://issues.apache.org/jira/browse/IGNITE-27272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ivan Zlenko updated IGNITE-27272:
---------------------------------
Description:
If the batch size for the data streamer is too large and we risk running out of
memory, it is a good idea to prevent this batch from being inserted until we
have a mechanism in place that will automatically split such batches into
smaller ones.
The maximum available size for one batch could be calculated from the available
memory and the table schema into which the batch should be inserted.
Otherwise, there is a potential issue where the cluster could become
unresponsive after such a batch is sent.
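A minimal sketch of the intended guard, assuming the limit is derived from available memory and a per-row size estimate taken from the table schema; the class, method, and field names below are illustrative only and are not part of the Ignite API:
{code:java}
import java.util.List;

// Illustrative sketch, not the actual Ignite implementation: reject a batch
// whose estimated footprint exceeds a memory budget instead of streaming it.
final class BatchSizeGuard {
    private final long maxBatchBytes;      // budget derived from available memory (assumption)
    private final long estimatedRowBytes;  // average row size derived from the table schema (assumption)

    BatchSizeGuard(long maxBatchBytes, long estimatedRowBytes) {
        this.maxBatchBytes = maxBatchBytes;
        this.estimatedRowBytes = estimatedRowBytes;
    }

    /** Fails fast if the batch would likely exceed the memory budget. */
    <T> void checkBatch(List<T> batch) {
        long estimatedBytes = (long) batch.size() * estimatedRowBytes;
        if (estimatedBytes > maxBatchBytes) {
            throw new IllegalArgumentException(
                "Batch of " + batch.size() + " rows (~" + estimatedBytes
                + " bytes) exceeds the limit of " + maxBatchBytes
                + " bytes; split it into smaller batches before streaming.");
        }
    }
}
{code}
Failing fast is only an interim measure to keep the cluster responsive; the longer-term mechanism described above would split oversized batches automatically instead of rejecting them.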
was:
If the batch size for the data streamer is too big and we risk running out of
memory, it is a good idea to prevent this batch from being inserted until we
have a mechanism in place that will automatically split such batches into
smaller ones.
Otherwise, there is a potential issue where the cluster could become
unresponsive after such a batch is sent.
> Block too large batches from being inserted using data streamer
> ---------------------------------------------------------------
>
> Key: IGNITE-27272
> URL: https://issues.apache.org/jira/browse/IGNITE-27272
> Project: Ignite
> Issue Type: Improvement
> Reporter: Ivan Zlenko
> Priority: Major
> Labels: ignite-3
>
> If the batch size for the data streamer is too large and we risk running out
> of memory, it is a good idea to prevent this batch from being inserted until
> we have a mechanism in place that will automatically split such batches into
> smaller ones.
> The maximum available size for one batch could be calculated from the
> available memory and the table schema into which the batch should be inserted.
> Otherwise, there is a potential issue where the cluster could become
> unresponsive after such a batch is sent.