Re: stream of large objects

2019-02-12 Thread Aggarwal, Ajay
Hi Ajay, when repartitioning the stream the events need to be transferred between TaskManagers (processes/nodes). Just passing a reference there won't work. If it is serialization you are worried about …
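The snippet is cut off here, but a common way to address the serialization cost itself is to register an efficient serializer for the aggregated type and, where safe, enable object reuse. A minimal sketch, assuming a LargeMessage POJO and a hypothetical LargeMessageSerializer (neither is spelled out in the thread):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Register a custom Kryo serializer so the large type is encoded compactly whenever
// Flink has to serialize it (LargeMessageSerializer is hypothetical, not from the thread).
env.getConfig().registerTypeWithKryoSerializer(LargeMessage.class, LargeMessageSerializer.class);

// Within chained operators, object reuse avoids defensive copies; only enable this
// if the user functions do not hold on to input records after returning.
env.getConfig().enableObjectReuse();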

Re: stream of large objects

2019-02-12 Thread Konstantin Knauf
… does not seem efficient. …

Re: stream of large objects

2019-02-11 Thread Aggarwal, Ajay
… in a Keyed context, so sharing all of these across all downstream tasks does not seem efficient. …

Re: stream of large objects

2019-02-10 Thread Chesnay Schepler
y" , "user@flink.apache.org" *Subject: *Re: stream of large objects Whether a LargeMessage is serialized depends on how the job is structured. For example, if you were to only apply map/filter functions after the aggregation it is likely they wouldn't be serialized. If you w

Re: stream of large objects

2019-02-08 Thread Aggarwal, Ajay
… Whether a LargeMessage is serialized depends on how the job is structured. For example, if you were to only apply map/filter functions after the aggregation it is likely they wouldn't be serialized. …

Re: stream of large objects

2019-02-08 Thread Chesnay Schepler
Whether a LargeMessage is serialized depends on how the job is structured. For example, if you were to only apply map/filter functions after the aggregation it is likely they wouldn't be serialized. If you were to apply another keyBy they will be serialized again. When you say "small size" …
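To make this concrete, here is a minimal sketch of the two cases. The accessor and function names are illustrative, not taken from the thread; the general behaviour is that chained operators pass object references inside one task, while a keyBy forces a network shuffle and therefore serialization.

// Assume `aggregated` is the DataStream<LargeMessage> produced by the windowed aggregation.

// Case 1: map/filter directly after the aggregation. These operators can be chained
// into the same task, so each LargeMessage is handed over as a Java object reference
// and is typically not serialized.
aggregated
    .map(msg -> msg)                         // e.g. enrich or annotate the message in place
    .filter(msg -> msg.getParts() != null);  // getParts() is a hypothetical accessor

// Case 2: another keyBy repartitions the stream across TaskManagers, so every
// LargeMessage is serialized again and sent to the subtask that owns its new key.
aggregated
    .keyBy(msg -> msg.getRegion())           // getRegion() is a hypothetical accessor
    .process(new MyKeyedProcessing());       // hypothetical KeyedProcessFunction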

stream of large objects

2019-02-07 Thread Aggarwal, Ajay
In my use case my source stream contains small messages, but as part of Flink processing I will be aggregating them into large messages, and further processing will happen on these large messages. The structure of this large message will be something like this: class LargeMessage { …
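The class definition in the original message is cut off; the following is only a rough sketch of what such an aggregation pipeline might look like. All field and method names are assumptions for illustration, not taken from the thread.

// Hypothetical shape of the aggregated message (the original definition is truncated).
public class LargeMessage {
    public String key;                                     // the key the small messages share
    public List<SmallMessage> parts = new ArrayList<>();   // the aggregated payload
}

// Small messages keyed and collected into one LargeMessage per key and window.
DataStream<LargeMessage> large = smallMessages
    .keyBy(m -> m.getKey())                  // getKey() is a hypothetical accessor
    .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
    .aggregate(new AggregateFunction<SmallMessage, LargeMessage, LargeMessage>() {
        @Override public LargeMessage createAccumulator() { return new LargeMessage(); }
        @Override public LargeMessage add(SmallMessage m, LargeMessage acc) {
            acc.key = m.getKey();
            acc.parts.add(m);
            return acc;
        }
        @Override public LargeMessage getResult(LargeMessage acc) { return acc; }
        @Override public LargeMessage merge(LargeMessage a, LargeMessage b) {
            a.parts.addAll(b.parts);
            return a;
        }
    });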