[
https://issues.apache.org/jira/browse/BEAM-8352?focusedWorklogId=326426&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-326426
]
ASF GitHub Bot logged work on BEAM-8352:
----------------------------------------
Author: ASF GitHub Bot
Created on: 10/Oct/19 16:28
Start Date: 10/Oct/19 16:28
Worklog Time Spent: 10m
Work Description: aromanenko-dev commented on issue #9745: [BEAM-8352]
Add "withMaxCapacityPerShard()" to KinesisIO.Read
URL: https://github.com/apache/beam/pull/9745#issuecomment-540666549
@lukecwik Regarding your questions:
1. The default value was (and remains) 10K records per shard, and it worked well
for streams with a small number of shards per worker (we only hit this issue
after two years of KinesisIO usage by different users). So, in general, we can
conclude that users don't need to configure it manually and that this option is
only needed in corner cases.
2. Yes, this is an area for improvement. Though it's not very clear under
which conditions a user would need to change this option dynamically at
runtime.
> you can assume each is the maximum size of 1MiB
I mean the maximum amount of available memory that can be allocated for this
queue. Actually, this leads to the more complicated question of how to implement
backpressure effectively, so that the consumer/producer pair works well in terms
of consumed memory and overall performance.
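As a usage illustration, here is a minimal sketch of how the option proposed in
PR #9745 could be applied to a KinesisIO read. The method name
withMaxCapacityPerShard() is taken from the PR title; the integer argument, the
stream name, and the credentials shown are assumptions for the example, not the
final API.
```java
// Hedged sketch: withMaxCapacityPerShard() comes from the PR title; the rest of
// the read configuration below is illustrative only.
import com.amazonaws.regions.Regions;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kinesis.KinesisIO;
import org.apache.beam.sdk.io.kinesis.KinesisRecord;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

public class KinesisReadWithCappedQueue {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    PCollection<KinesisRecord> records =
        p.apply(
            KinesisIO.read()
                .withStreamName("my-stream") // hypothetical stream name
                .withInitialPositionInStream(InitialPositionInStream.LATEST)
                .withAWSClientsProvider("accessKey", "secretKey", Regions.EU_WEST_1)
                // Lower the per-shard capacity of the background records queue
                // from the default 10_000 to bound worst-case memory usage.
                .withMaxCapacityPerShard(1_000));

    p.run().waitUntilFinish();
  }
}
```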
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 326426)
Time Spent: 1h 20m (was: 1h 10m)
> Reading records in background may lead to OOM errors
> ----------------------------------------------------
>
> Key: BEAM-8352
> URL: https://issues.apache.org/jira/browse/BEAM-8352
> Project: Beam
> Issue Type: Bug
> Components: io-java-kinesis
> Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 2.8.0, 2.9.0,
> 2.10.0, 2.11.0, 2.12.0, 2.13.0, 2.14.0, 2.15.0
> Reporter: Mateusz Juraszek
> Assignee: Alexey Romanenko
> Priority: Major
> Fix For: 2.17.0
>
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> We have faced a problem with OOM errors in our Dataflow job containing
> Kinesis sources. After investigation, it turned out that the issue was caused
> by the Kinesis sources consuming more records than the pipeline could handle
> in time.
> Looking into the Kinesis connector's code, the internal queue (recordsQueue)
> into which records are put in the background is set up per
> ShardReadersPool (created for each source, i.e. each Kinesis stream). The size
> of the queue is set to `queueCapacityPerShard * number of shards`, so the more
> shards there are, the bigger the queue. There is no way to limit the maximum
> capacity of the queue (queueCapacityPerShard is also not configurable; it is
> set to DEFAULT_CAPACITY_PER_SHARD=10_000). Additionally, there is no
> differentiation by record size, so the amount of data placed into the queue
> might grow to the point where an OOM error is thrown.
> It would be great to have the ability to limit the number of records being
> read in the background to some sensible value. As a first step, a simple
> solution would be to allow configuring the maximum queue size for a source
> when creating the KinesisIO read.
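For illustration only, the following sketch (not the actual connector code)
shows how the capacity rule described above scales with the shard count, and
why, at the ~1 MiB maximum record size mentioned in the discussion, the total
can exceed a worker's memory; the shard count used is a made-up example value.
```java
// Illustrative sketch of the sizing rule described in the issue; not Beam code.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RecordsQueueSizing {
  // Matches the DEFAULT_CAPACITY_PER_SHARD value quoted in the description.
  static final int DEFAULT_CAPACITY_PER_SHARD = 10_000;

  public static void main(String[] args) {
    int numShards = 64; // hypothetical stream with 64 shards
    int capacity = DEFAULT_CAPACITY_PER_SHARD * numShards; // 640_000 records

    // A bounded queue this large can still hold far more data than a worker has:
    // at the ~1 MiB maximum Kinesis record size, 640_000 records is ~625 GiB.
    BlockingQueue<byte[]> recordsQueue = new ArrayBlockingQueue<>(capacity);
    long worstCaseBytes = (long) capacity * 1024 * 1024;
    System.out.printf("capacity=%d records, worst case ~%d GiB%n",
        capacity, worstCaseBytes >> 30);
  }
}
```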
--
This message was sent by Atlassian Jira
(v8.3.4#803005)