[garbled thread subject; only "mq" is legible]

2020-02-20 Posted by claylin
Hi, event time / kafka / tcp (?? tcp Recv-Q) org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-108,

Re: How does Flink manage the kafka offset

2020-02-20 Posted by Jin Yi
Hi Benchao, Thanks a lot! Eleanore On Thu, Feb 20, 2020 at 4:30 PM Benchao Li wrote: > Hi Jin, > > See below inline replies: > > My understanding is, upon startup, Flink Job Manager will contact kafka to >> get the offset for each partition for this consume group, and distribute >> the task to

Flink job fails because its AMRMToken expires

2020-02-20 Posted by faaron zheng
Hi all, a question about a Flink job that runs normally for a while and then fails because its AMRMToken becomes invalid. Environment: Flink 1.7.2 with Kerberos authentication, Hadoop 3.1.1. The JM log shows checkpoints completing normally, then suddenly reports the error in the attachment. There is a related community issue, FLINK-12623, but it is said to be tied to the Hadoop version. Besides that, is there anything else that could cause this problem?

Re: Re: Exception when submitting a job with run -m yarn-cluster on flink-1.10.0

2020-02-20 Posted by tison
A common problem: Flink no longer bundles Hadoop, so you need to set HADOOP_CLASSPATH. Best, tison. amenhub wrote on Tue, Feb 18, 2020 at 11:51 AM: > hi, Weihua > > As you said, I want to submit jobs to the cluster via Flink on YARN's run mode, but when I run ./bin/flink run -m > yarn-cluster ../examples/batch/WordCount.jar, I still get the same error, >
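tison's suggestion above amounts to a setup fragment like the following (paths are illustrative; it assumes the `hadoop` CLI is on the PATH of the machine submitting the job):

```shell
# Flink no longer ships Hadoop jars, so expose the cluster's Hadoop
# classpath to the Flink client before submitting to YARN.
export HADOOP_CLASSPATH=$(hadoop classpath)

# Then submit as before.
./bin/flink run -m yarn-cluster ./examples/batch/WordCount.jar
```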

Re: How does Flink manage the kafka offset

2020-02-20 Posted by Benchao Li
Hi Jin, See below inline replies: My understanding is, upon startup, Flink Job Manager will contact kafka to > get the offset for each partition for this consume group, and distribute > the task to task managers, and it does not use kafka to manage the consumer > group. Generally, yes. If you
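Benchao's point about offset precedence can be sketched as a toy model (plain Java with made-up names, not Flink's actual classes): when a job restores from a checkpoint or savepoint, the offsets in Flink's own state win, and the group offsets committed to Kafka are only the fallback for a fresh start.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Toy model of where a restarting Kafka source gets its starting
// offsets (partition -> offset). Not Flink code; names are invented.
class OffsetResolver {
    static Map<Integer, Long> resolveStartOffsets(
            Optional<Map<Integer, Long>> checkpointedOffsets,
            Map<Integer, Long> kafkaGroupOffsets) {
        // A restored checkpoint takes priority over the offsets that
        // were committed back to Kafka for the consumer group.
        return checkpointedOffsets.orElse(kafkaGroupOffsets);
    }

    public static void main(String[] args) {
        Map<Integer, Long> kafka = new HashMap<>();
        kafka.put(0, 100L); // offset committed to Kafka for partition 0
        Map<Integer, Long> ckpt = new HashMap<>();
        ckpt.put(0, 250L);  // offset stored in Flink's checkpoint

        // Restoring from a checkpoint: Flink's own state wins.
        System.out.println(resolveStartOffsets(Optional.of(ckpt), kafka).get(0)); // 250
        // Fresh start: fall back to the Kafka group offsets.
        System.out.println(resolveStartOffsets(Optional.empty(), kafka).get(0));  // 100
    }
}
```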

How does Flink manage the kafka offset

2020-02-20 Posted by Jin Yi
Hi there, We are running an Apache Beam application with Flink as the runner. We use the KafkaIO connector to read from topics: https://beam.apache.org/releases/javadoc/2.19.0/ and we have the following configuration, which enables auto commit of offset, no checkpointing is enabled, and it is
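The setup described above (offsets committed by the Kafka client itself rather than by a Flink checkpoint) boils down to consumer properties along these lines; the group id and values are placeholders, not taken from the original message:

```java
import java.util.Properties;

// Kafka consumer properties for client-side (auto) offset commits.
// All values here are illustrative placeholders.
class AutoCommitProps {
    static Properties build() {
        Properties props = new Properties();
        props.setProperty("group.id", "beam-flink-app");      // placeholder group id
        props.setProperty("enable.auto.commit", "true");      // Kafka client commits offsets
        props.setProperty("auto.commit.interval.ms", "5000"); // commit cadence
        props.setProperty("auto.offset.reset", "latest");     // start point when no offsets exist
        return props;
    }

    public static void main(String[] args) {
        System.out.println(AutoCommitProps.build().getProperty("enable.auto.commit"));
    }
}
```

With this configuration, a restart that does not restore Flink state resumes from whatever the Kafka broker last recorded for the group.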

Does the JIRA issue FLINK-14091 also exist in Flink 1.7?

2020-02-20 Posted by tao wang
https://issues.apache.org/jira/browse/FLINK-14091 . We have hit the same problem with FLINK 1.7 in our production environment, causing checkpoint failures.

Re: [ANNOUNCE] Apache Flink Python API(PyFlink) 1.9.2 released

2020-02-20 Posted by Xingbo Huang
Thanks a lot for the release. Great work, Jincheng! Also thanks to the participants who contributed to this release. Best, Xingbo Till Rohrmann wrote on Tue, Feb 18, 2020 at 11:40 PM: > Thanks for updating the 1.9.2 release wrt Flink's Python API Jincheng! > > Cheers, > Till > > On Thu, Feb 13, 2020 at 12:25 PM

Re: Physical memory overrun with the Flink RocksDB state backend

2020-02-20 Posted by Yu Li
I suggest upgrading to 1.10.0, which limits the RocksDB backend's memory usage by default; see the official documentation for details [1]. Best Regards, Yu [1] https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/state/state_backends.html#memory-management On Thu, 20 Feb 2020 at 17:42, chanamper wrote: > A question: I am using flink >
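For reference, the 1.10 memory controls Yu mentions map onto flink-conf.yaml settings roughly like the fragment below (values are illustrative, not recommendations; see the linked documentation for your deployment):

```yaml
# On by default in Flink 1.10: charge RocksDB's native memory
# (block cache, memtables, etc.) against the task manager's
# managed memory budget instead of letting it grow unbounded.
state.backend.rocksdb.memory.managed: true

# Explicit size of the managed memory pool per task manager.
taskmanager.memory.managed.size: 2gb
```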

Physical memory overrun with the Flink RocksDB state backend

2020-02-20 Posted by chanamper
A question: I am on Flink 1.8 with the RocksDB state backend. After the job has been running for a while, containers run over their physical memory limit. Each container has 10 GB, yet heap usage is only about 1 GB. How should I analyze the memory usage in this situation?

Exception when joining a temporal table with Flink SQL (Flink-1.10.0)

2020-02-20 Posted by amenhub
Hi all: Flink-1.10.0 supports temporal joins on a processing-time attribute. When I submit a Flink job with the following SQL: [ SELECT m.name, m.age, m.score FROM mysql_out AS m JOIN kafka_out FOR SYSTEM_TIME AS OF m.update_time AS k ON m.name = k.name ] the following exception occurs: [ Caused by: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There
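One plausible cause of the CannotPlanException in the question: in Flink 1.10, `FOR SYSTEM_TIME AS OF` only plans when the join key is a processing-time attribute of the probe side and the build side is a lookup-capable table source, and `m.update_time` here is likely a plain timestamp rather than a processing-time attribute. A hedged sketch of the shape that does plan, keeping the question's table names and assuming a hypothetical `proctime` column declared as `proctime AS PROCTIME()` in the probe table's DDL, plus a lookup-capable source on the build side:

```sql
-- Sketch only: `proctime` is a hypothetical processing-time column,
-- not a field from the original question.
SELECT m.name, m.age, m.score
FROM mysql_out AS m
JOIN kafka_out FOR SYSTEM_TIME AS OF m.proctime AS k
ON m.name = k.name;
```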