Thanks zhangminglei,

Does setting env.setParallelism(80) mean I have created 80 Kafka
consumers? And if so, can I change env.setParallelism(80) to any number,
i.e. number of partitions = env.setParallelism, or do I need to restart
my job each time I set a new parallelism? I want to write
partition-specific data transformation logic.
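To make the parallelism-vs-partitions relationship concrete, here is a small self-contained sketch of a round-robin assignment (partition i goes to subtask i % parallelism). This is illustrative only and not Flink's exact internal assignment algorithm; the point is that the 1:1 consumer-per-partition mapping only holds when parallelism equals the partition count:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionAssignmentDemo {
    // Hypothetical round-robin assignment: partition p -> subtask p % parallelism.
    // (Illustrative only; Flink's real assignment logic differs in detail.)
    static List<List<Integer>> assign(int numPartitions, int parallelism) {
        List<List<Integer>> subtasks = new ArrayList<>();
        for (int i = 0; i < parallelism; i++) {
            subtasks.add(new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            subtasks.get(p % parallelism).add(p);
        }
        return subtasks;
    }

    public static void main(String[] args) {
        // 80 partitions, parallelism 80: exactly one partition per consumer subtask.
        assert assign(80, 80).stream().allMatch(l -> l.size() == 1);
        // 80 partitions, parallelism 40: each subtask reads two partitions.
        assert assign(80, 40).stream().allMatch(l -> l.size() == 2);
        // 80 partitions, parallelism 100: 20 subtasks sit idle with no partition.
        assert assign(80, 100).stream().filter(List::isEmpty).count() == 20;
        System.out.println("ok");
    }
}
```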

In short I want to create N flink kafka consumers for N number of
partitions.
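For reference, a minimal job-wiring sketch of what I mean (the topic name, broker address, and group id below are placeholders, not my real config), setting the parallelism on the source operator itself rather than on the whole environment:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class OplogJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "oplog-consumer");          // placeholder

        FlinkKafkaConsumer011<String> source =
                new FlinkKafkaConsumer011<>("oplog-topic", new SimpleStringSchema(), props);

        env.addSource(source)
           // Match the topic's partition count: one consumer subtask per partition.
           .setParallelism(80)
           // Keeping the same parallelism downstream avoids a rebalance,
           // so per-partition order is preserved through this operator.
           .map(record -> record)
           .setParallelism(80);

        env.execute("oplog-streaming");
    }
}
```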

-----------------------------------------------
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com <sac...@iprogrammer.com>
------------------------------------------------

On Mon, Jun 25, 2018 at 3:25 PM, zhangminglei <18717838...@163.com> wrote:

> Hi, Amol
>
> As @Sihua said. Also in my case, if the kafka partition is 80. I will also
> set the job source operator parallelism to 80 as well.
>
> Cheers
> Minglei
>
> On Jun 25, 2018, at 5:39 PM, sihua zhou <summerle...@163.com> wrote:
>
> Hi Amol,
>
> I think if you set the parallelism of the source node equal to the number
> of partitions of the Kafka topic, you could have one Kafka consumer per
> partition in your job. But if the number of partitions of the Kafka topic
> is dynamic, the 1:1 relationship might break. I think maybe @Gordon (CC)
> could give you more useful information.
>
> Best, Sihua
>
>
>
> On 06/25/2018 17:19, Amol S - iProgrammer <am...@iprogrammer.com> wrote:
>
> I have asked the same kind of question on Stack Overflow as well.
>
> Please answer it ASAP.
>
> https://stackoverflow.com/questions/51020018/partition-
> specific-flink-kafka-consumer
>
>
> On Mon, Jun 25, 2018 at 2:09 PM, Amol S - iProgrammer <
> am...@iprogrammer.com> wrote:
>
>
> Hello,
>
> I wrote a streaming program using Kafka and Flink to stream the MongoDB
> oplog. I need to maintain the order of streaming within different Kafka
> partitions. As global ordering of records is not possible across all
> partitions, I need N consumers for N different partitions. Is it possible to
> consume data from N different partitions with N Flink Kafka consumers?
>
> Please suggest.
>
>
>
>
