Thanks for clarifying; that helps me identify the root cause. The
problem comes from my own code and is not related to Flink. The
problem is now solved. Thank you again for the advice!
On 16 June 2016 at 23:49, Till Rohrmann wrote:
Hi Yan Chou Chen,
Flink does not instantiate a mapper for each record. Instead, it
creates as many mapper instances as the parallelism you've defined. Each of
these mappers is deployed to a slot on a TaskManager. When a mapper is deployed,
and before it receives any records, its open method is called once.
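To make this lifecycle concrete, here is a minimal plain-Scala simulation of the behavior described above (this is not the Flink API; the `Mapper` class, the round-robin routing, and the parallelism value are all hypothetical illustrations): one mapper instance is created per parallel slot, `open()` runs once per instance before any records arrive, and `map()` then runs once per record. The total number of `open()` calls equals the parallelism, not the record count.

```scala
// Simulation of the operator lifecycle (not Flink API):
// one mapper instance per parallel slot, open() called once per instance.
class Mapper {
  var openCalls = 0
  def open(): Unit = openCalls += 1   // called once when the instance is deployed
  def map(x: Int): Int = x * 2        // called once for every record
}

object LifecycleDemo {
  // Returns (total open() calls across all instances, mapped output).
  def run(parallelism: Int, records: Seq[Int]): (Int, Seq[Int]) = {
    val mappers = Seq.fill(parallelism)(new Mapper) // one instance per slot
    mappers.foreach(_.open())                       // open before any record
    val out = records.zipWithIndex.map { case (r, i) =>
      mappers(i % parallelism).map(r)               // route records to instances
    }
    (mappers.map(_.openCalls).sum, out)
  }
}
```

Running `LifecycleDemo.run(4, 1 to 100)` gives a total `open()` count of 4, no matter how many records flow through, mirroring the one-open-per-parallel-instance behavior described above.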
A quick question. When running a streaming job that executes
DataStream.map(MapFunction), after data is read from Kafka, is a
MapFunction instance created per item, or based on the parallelism?
For instance, for the following code snippet
val env = StreamExecutionEnvironment.getExecutionEnvironment
val