Hi,

Yes, it is only related to **batch** jobs, but not necessarily only to DataSet 
API jobs. If you are using, for example, the Blink SQL/Table API to process 
bounded data streams (tables), it could also be visible/affect you there. If 
not, I would suggest starting a new user mailing list thread and posting the 
details (what you are running, job manager/task manager logs, …).

Piotrek

> On 2 Dec 2019, at 10:51, Victor Wong <jiashen...@gmail.com> wrote:
> 
> Hi,
> 
> We encountered a similar issue: the task manager kept being killed by YARN.
> 
> - Flink 1.9.1
> - Heap usage is low.
> 
> But our job is a **streaming** job, so I want to ask whether this issue is 
> only related to **batch** jobs or not. Thanks!
> 
> Best,
> Victor
> 
> 
> yingjie <yjclove...@gmail.com> wrote on Thu, Nov 28, 2019, at 11:43 AM:
> Piotr is right; it depends on the size of the data you are reading and the
> memory pressure. The memory occupied by mmapped regions can be reclaimed and
> reused by other processes if memory pressure is high; that is, other
> processes or services on the same node won't be affected, because the OS
> will reclaim the mmapped pages when needed. But currently you can't assume
> the memory usage is bounded: it will use more memory as long as there is
> free space and there is more new data to read.
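
The reclamation behavior described above can be demonstrated outside of Flink. 
A minimal sketch (Python, assuming Linux and Python 3.8+ for `mmap.madvise`; 
the file name and sizes are illustrative only):

```python
import mmap
import os
import tempfile

# Write four pages of data to a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * (4096 * 4))
    path = f.name

fd = os.open(path, os.O_RDWR)
mm = mmap.mmap(fd, 0)          # map the whole file; pages fault in lazily

assert mm[:4] == b"xxxx"       # touching the mapping pulls pages into memory

# Hint the kernel that these pages may be dropped. Under real memory
# pressure the OS does the same thing on its own for clean mmapped pages,
# which is why other processes on the node are not starved.
if hasattr(mmap, "MADV_DONTNEED"):   # Linux-only constant, Python 3.8+
    mm.madvise(mmap.MADV_DONTNEED)

assert mm[:4] == b"xxxx"       # a later access simply re-faults from the file

mm.close()
os.close(fd)
os.unlink(path)
```

The key point is the last assertion: dropping the pages loses no data, because 
they are backed by the file and are transparently read back on the next access.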
> 
> 
> 
> --
> Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
> 
> 
> -- 
> 
> Best,
> Victor
