Re: [ANNOUNCE] Donation of Flink CDC into Apache Flink has Completed

2024-03-20 post by weijie guo
Congratulations! Well done.


Best regards,

Weijie


Feng Jin wrote on Thu, Mar 21, 2024 at 11:40:

> Congratulations!
>
>
> Best,
> Feng
>
>
> On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:
>
> > Congratulations!
> >
> > Best,
> > Ron
> >
> > Jark Wu wrote on Thu, Mar 21, 2024 at 10:46:
> >
> > > Congratulations and welcome!
> > >
> > > Best,
> > > Jark
> > >
> > > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Rui
> > > >
> > > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan wrote:
> > > >
> > > > > Congratulations!
> > > > >
> > > > > Best,
> > > > > Hang
> > > > >
> > > > > Lincoln Lee wrote on Thu, Mar 21, 2024 at 09:54:
> > > > >
> > > > >>
> > > > >> Congrats, thanks for the great work!
> > > > >>
> > > > >>
> > > > >> Best,
> > > > >> Lincoln Lee
> > > > >>
> > > > >>
> > > > >>> Peter Huang wrote on Wed, Mar 20, 2024 at 22:48:
> > > > >>
> > > > >>> Congratulations
> > > > >>>
> > > > >>>
> > > > >>> Best Regards
> > > > >>> Peter Huang
> > > > >>>
> > > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang wrote:
> > > > >>>
> > > > 
> > > >  Congratulations
> > > > 
> > > > 
> > > > 
> > > >  Best,
> > > >  Huajie Wang
> > > > 
> > > > 
> > > > 
> > > >  Leonard Xu wrote on Wed, Mar 20, 2024 at 21:36:
> > > > 
> > > > > Hi devs and users,
> > > > >
> > > > > We are thrilled to announce that the donation of Flink CDC as a
> > > > > sub-project of Apache Flink has been completed. We invite you to
> > > > > explore the new resources available:
> > > > >
> > > > > - GitHub Repository: https://github.com/apache/flink-cdc
> > > > > - Flink CDC Documentation:
> > > > > https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > > > >
> > > > > After the Flink community accepted this donation [1], we completed
> > > > > the software copyright signing, code repository migration, code
> > > > > cleanup, website migration, CI migration, GitHub issues migration,
> > > > > and more.
> > > > > I am particularly grateful to Hang Ruan, Zhongqaing Gong,
> > > > > Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024, and other
> > > > > contributors for their contributions and help during this process!
> > > > >
> > > > >
> > > > > For all previous contributors: the contribution process has changed
> > > > > slightly to align with the main Flink project. To report bugs or
> > > > > suggest new features, please open tickets in Apache Jira
> > > > > (https://issues.apache.org/jira). Note that we will no longer accept
> > > > > GitHub issues for these purposes.
> > > > >
> > > > >
> > > > > Welcome to explore the new repository and documentation. Your
> > > > > feedback and contributions are invaluable as we continue to improve
> > > > > Flink CDC.
> > > > >
> > > > > Thank you all for your support, and happy exploring of Flink CDC!
> > > > >
> > > > > Best,
> > > > > Leonard
> > > > > [1]
> > > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > > > >
> > > > >
> > > >
> > >
> >
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 post by weijie guo
Congratulations!

Thanks release managers and all the contributors involved.

Best regards,

Weijie


Leonard Xu wrote on Mon, Mar 18, 2024 at 16:45:

> Congratulations, thanks release managers and all involved for the great
> work!
>
>
> Best,
> Leonard
>
> > On Mar 18, 2024, at 4:32 PM, Jingsong Li wrote:
> >
> > Congratulations!
> >
> > On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
> >>
> >> Congratulations, thanks for the great work!
> >>
> >> Best,
> >> Rui
> >>
> >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee wrote:
> >>>
> >>> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.19.0, which is the first release for the Apache Flink 1.19
> series.
> >>>
> >>> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> >>>
> >>> The release is available for download at:
> >>> https://flink.apache.org/downloads.html
> >>>
> >>> Please check out the release blog post for an overview of the
> improvements for this release:
> >>>
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> >>>
> >>> The full release notes are available in Jira:
> >>>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> >>>
> >>> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
> >>>
> >>>
> >>> Best,
> >>> Yun, Jing, Martijn and Lincoln
>
>


Re: Computing aggregates over batch data with the DataStream API

2023-07-25 post by weijie guo
Hi:

In batch mode, the reduce operation should by default emit only the last
record per key. Aggregations are a bit trickier; you can use a GlobalWindow
plus a custom Trigger as a workaround.
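
A minimal sketch of that workaround, assuming the DataStream API in batch
execution mode (the class and trigger names below are made up for
illustration): the custom trigger registers a timer for Long.MAX_VALUE,
which is the final watermark of a bounded input, so each key fires exactly
once, at end of input.

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;

public class FinalResultOnly {

    // Fires exactly once per key, when the bounded input is exhausted.
    public static class EndOfInputTrigger extends Trigger<Object, GlobalWindow> {
        @Override
        public TriggerResult onElement(Object element, long timestamp,
                                       GlobalWindow window, TriggerContext ctx) {
            // The final watermark of a bounded stream is Long.MAX_VALUE;
            // register a timer for it so we fire only at end of input.
            ctx.registerEventTimeTimer(Long.MAX_VALUE);
            return TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onEventTime(long time, GlobalWindow window, TriggerContext ctx) {
            return time == Long.MAX_VALUE ? TriggerResult.FIRE : TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onProcessingTime(long time, GlobalWindow window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }

        @Override
        public void clear(GlobalWindow window, TriggerContext ctx) {
            ctx.deleteEventTimeTimer(Long.MAX_VALUE);
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        env.fromElements(Tuple2.of("a", 1.0), Tuple2.of("a", 3.0), Tuple2.of("b", 2.0))
                .keyBy(t -> t.f0)
                .window(GlobalWindows.create())
                .trigger(new EndOfInputTrigger())
                // Summing here for brevity; an average would carry (sum, count)
                // through an AggregateFunction in the same way.
                .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1))
                .print(); // one record per key, emitted only at end of input

        env.execute("final-result-only");
    }
}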

Best regards,

Weijie


Liu Join wrote on Wed, Jul 26, 2023 at 09:10:

> For example: I am using the DataStream API to compute the average over
> batch data, i.e. a bounded stream. How can I emit only the final average
> instead of the intermediate values?
>


Re: If the DataSet API is completely removed, how do I implement partitioning and sorting with DataStream?

2023-07-12 post by weijie guo
Hi,
First of all, the intermediate data of a batch shuffle is always spilled to
disk. Second, for the sort operation itself, the approach given above
behaves the same as DataSet: neither spills to disk.

Best regards,

Weijie


jinzhuguang wrote on Wed, Jul 12, 2023 at 17:28:

> If my data volume is very large and does not fit in memory, will Flink in
> batch mode behave like a traditional batch system such as Hive, performing
> shuffles and spilling intermediate data to disk?
>
> > On Jul 12, 2023, at 17:05, weijie guo wrote:
> >
> >
> > Hi, for the DataSet APIs that do full (non-keyed) aggregation/sorting
> > (e.g., sortPartition/mapPartition), DataStream currently provides no
> > direct equivalent, but the same functionality can be achieved by
> > combining existing DataStream APIs.
> > Taking mapPartition as an example, it can be done in three steps:
> > 1. dataStream.map(record -> (subtaskIndex, record)): tag each record
> > with the index of the subtask that processes it.
> > 2. dataStream.assignTimestampsAndWatermarks: assign the same timestamp
> > to every record.
> > 3. dataStream.window(TumblingEventTimeWindows.of(Time.seconds(1))).apply(mapPartition
> > udf): open a fixed event-time window, collect the full data, and process
> > it with the same logic as mapPartition.
> >
> > The link below contains the related example code. Before the DataSet API
> > is removed, we will consider adding utility methods for steps one and
> > two to the DataStream API:
> >
> > https://netcut.cn/p/dc693599e9031cd7
>
>


Re: If the DataSet API is completely removed, how do I implement partitioning and sorting with DataStream?

2023-07-12 post by weijie guo
Hi, for the DataSet APIs that do full (non-keyed) aggregation/sorting
(e.g., sortPartition/mapPartition), DataStream currently provides no direct
equivalent, but the same functionality can be achieved by combining
existing DataStream APIs.
Taking mapPartition as an example, it can be done in three steps:
1. dataStream.map(record -> (subtaskIndex, record)): tag each record with
the index of the subtask that processes it.
2. dataStream.assignTimestampsAndWatermarks: assign the same timestamp to
every record.
3. dataStream.window(TumblingEventTimeWindows.of(Time.seconds(1))).apply(mapPartition
udf): open a fixed event-time window, collect the full data, and process it
with the same logic as mapPartition.

The link below contains the related example code (a sketch of the three
steps also follows it). Before the DataSet API is removed, we will consider
adding utility methods for steps one and two to the DataStream API:

https://netcut.cn/p/dc693599e9031cd7
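
A minimal sketch of those three steps, assuming a recent DataStream API;
the String record type and the pass-through partition logic are
placeholders for your own:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class MapPartitionOnDataStream {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c", "d")
            // Step 1: tag each record with the index of the processing subtask.
            .map(new RichMapFunction<String, Tuple2<Integer, String>>() {
                @Override
                public Tuple2<Integer, String> map(String record) {
                    return Tuple2.of(getRuntimeContext().getIndexOfThisSubtask(), record);
                }
            })
            // Step 2: give every record the same timestamp so all of them
            // fall into a single window.
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple2<Integer, String>>forMonotonousTimestamps()
                    .withTimestampAssigner((record, ts) -> 0L))
            // Group by the original subtask index, i.e. by "partition".
            .keyBy(t -> t.f0)
            // Step 3: one tumbling window collects the full partition...
            .window(TumblingEventTimeWindows.of(Time.seconds(1)))
            // ...and the window function plays the role of the mapPartition udf.
            .apply(new WindowFunction<Tuple2<Integer, String>, String, Integer, TimeWindow>() {
                @Override
                public void apply(Integer subtask, TimeWindow window,
                                  Iterable<Tuple2<Integer, String>> records,
                                  Collector<String> out) {
                    for (Tuple2<Integer, String> r : records) {
                        out.collect(r.f1); // pass-through; real logic goes here
                    }
                }
            })
            .print();

        env.execute("map-partition-on-datastream");
    }
}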


Re: [ANNOUNCE] Apache Flink 1.16.2 released

2023-05-28 post by weijie guo
Hi Jing,

Thank you for asking about the release process. It has to be said that
the entire process went smoothly. We have very comprehensive
documentation [1] to guide the work, thanks to the contributions of
previous release managers and the community.

Regarding obstacles, I hit only one minor problem: we used an older
twine (1.12.0) to deploy the Python artifacts to PyPI, and its compatible
dependencies (such as urllib3) are also older. When I tried twine upload,
it could not work as expected because the version of urllib3 installed on
my machine was newer. To solve this, I had to downgrade some of the
dependencies. I added a notice to the cwiki page [1] to prevent future
release managers from encountering the same problem. This seems to be a
known issue (see the comments in [2]) that has been resolved in newer
versions of twine. I wonder if we can upgrade the version of twine? Does
anyone remember why we pinned such an old version (1.12.0)?

Best regards,

Weijie


[1]
https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release

[2] https://github.com/pypa/twine/issues/997


Jing Ge wrote on Sat, May 27, 2023 at 00:15:

> Hi Weijie,
>
> Thanks again for your effort. I was wondering if there were any obstacles
> you had to overcome while releasing 1.16.2 and 1.17.1 that could lead us to
> any improvement wrt the release process and management?
>
> Best regards,
> Jing
>
> On Fri, May 26, 2023 at 4:41 PM Martijn Visser wrote:
>
> > Thank you Weijie and those who helped with testing!
> >
> > On Fri, May 26, 2023 at 1:06 PM weijie guo wrote:
> >
> > > The Apache Flink community is very happy to announce the release of
> > > Apache Flink 1.16.2, which is the second bugfix release for the Apache
> > > Flink 1.16 series.
> > >
> > >
> > >
> > > Apache Flink® is an open-source stream processing framework for
> > > distributed, high-performing, always-available, and accurate data
> > > streaming applications.
> > >
> > >
> > >
> > > The release is available for download at:
> > >
> > > https://flink.apache.org/downloads.html
> > >
> > >
> > >
> > > Please check out the release blog post for an overview of the
> > > improvements for this bugfix release:
> > >
> > > https://flink.apache.org/news/2023/05/25/release-1.16.2.html
> > >
> > >
> > >
> > > The full release notes are available in Jira:
> > >
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352765
> > >
> > >
> > >
> > > We would like to thank all contributors of the Apache Flink community
> > > who made this release possible!
> > >
> > >
> > >
> > > Feel free to reach out to the release managers (or respond to this
> > > thread) with feedback on the release process. Our goal is to
> > > constantly improve the release process. Feedback on what could be
> > > improved or things that didn't go so well are appreciated.
> > >
> > >
> > >
> > > Regards,
> > >
> > > Release Manager
> > >
> >
>


[ANNOUNCE] Apache Flink 1.17.1 released

2023-05-26 post by weijie guo
The Apache Flink community is very happy to announce the release of
Apache Flink 1.17.1, which is the first bugfix release for the Apache
Flink 1.17 series.



Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.



The release is available for download at:

https://flink.apache.org/downloads.html



Please check out the release blog post for an overview of the
improvements for this bugfix release:

https://flink.apache.org/news/2023/05/25/release-1.17.1.html



The full release notes are available in Jira:

https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352886



We would like to thank all contributors of the Apache Flink community
who made this release possible!



Feel free to reach out to the release managers (or respond to this
thread) with feedback on the release process. Our goal is to
constantly improve the release process. Feedback on what could be
improved or things that didn't go so well are appreciated.



Regards,

Release Manager


Re: Unsubscribe

2023-02-21 post by weijie guo
To unsubscribe, please send an email to user-zh-unsubscr...@flink.apache.org

Best regards,

Weijie


宋品如 wrote on Wed, Feb 22, 2023 at 11:37:

> Unsubscribe
>
>
>
>
>
>
>
>
>
>
> --
>
> Wishing you smooth work and a happy life!
> From: 宋品如
> Role: Big Data Developer


Re: Unsubscribe

2023-02-21 post by weijie guo
To unsubscribe, please send an email to user-zh-unsubscr...@flink.apache.org

Best regards,

Weijie


646208563 wrote on Wed, Feb 22, 2023 at 11:39:

> Unsubscribe


Re: Disable the chain of the Sink operator

2023-02-16 post by weijie guo
Hi wu,

I don't think directly changing the chaining strategy is a good choice.
Operator chaining usually gives better performance and resource
utilization. If we changed the chaining policy between Writer and
Committer, users could no longer chain them together, which is not a good
starting point.
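
For reference, a minimal user-side sketch of the workaround discussed in
this thread (stream and mySink are placeholders; whether the per-sink
disableChaining() also separates Writer from Committer may depend on the
Flink version, so verify the thread names in your setup):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Option 1 (as described below): disable chaining for the whole job.
// This also splits [Sink: Writer -> Sink: Committer] into two threads.
env.disableOperatorChaining();

// Option 2: disable chaining only around the sink, keeping other chains intact.
stream.sinkTo(mySink)
      .disableChaining();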

Best regards,

Weijie


wudi <676366...@qq.com> wrote on Thu, Feb 16, 2023 at 19:29:

> Thank you for your reply.
>
> Currently, in a custom sink connector, the Flink task combines the Writer
> and Committer into one thread, with a thread name like *[Sink:
> Writer -> Sink: Committer (1/1)#0]*.
> As a result, when the *Committer.commit()* method is very slow, it blocks
> the *SinkWriter.write()* method from receiving upstream data.
>
> The client can use the *env.disableOperatorChaining()* method to split
> the thread into two threads: *[Sink: Writer (1/1)#0]* and *[Sink:
> Committer (1/1)#0]*. Then the Committer.commit() method no longer blocks
> the SinkWriter.write() method.
>
> If the chaining policy could be disabled inside the custom sink connector,
> the client would be spared from setting and disabling the chain. Or is
> there a better way to keep *Committer.commit()* from blocking
> *SinkWriter.write()*?
>
> Looking forward for your reply.
> Thanks && Regards,
> di.wu
>
> On Feb 16, 2023, at 6:54 PM, Shammon FY wrote:
>
> Hi
>
> Do you mean you want to disable the `chain` in your custom sink connector?
> Can you give an example of what you want?
>
> On Wed, Feb 15, 2023 at 10:42 PM di wu <676366...@qq.com> wrote:
>
> Hello
>
> The current sink operator is split into two operations, Writer and
> Committer. By default, they are chained together and executed on the
> same thread.
> So sometimes, when the committer is very slow, it blocks the data
> writer, causing backpressure.
>
> At present, Flink SQL can work around this by disabling chaining globally,
> and DataStream can partially disable chaining through the disableChaining
> method, but both have to be set by the user.
>
> Can the chaining strategy be changed in a custom sink connector to
> separate the Writer and Committer?
>
> Thanks && Regards,
> di.wu
>
>
>
>


Re: Unsubscribe

2023-02-07 post by weijie guo
Hi,

You need to send the email to user-zh-unsubscr...@flink.apache.org
rather than user-zh@flink.apache.org.


Best regards,

Weijie


wujunxi <462329...@qq.com.invalid> wrote on Tue, Feb 7, 2023 at 16:52:

> Unsubscribe


Re: Consuming from a message queue and writing to HDFS with Flink

2023-02-02 post by weijie guo
Hi, you can use FileSink, which is based on the new sink API.
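
A minimal sketch of a row-format FileSink writing strings to HDFS, assuming
Flink 1.15+; the HDFS path and the rolling thresholds are placeholders:

import java.time.Duration;
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.configuration.MemorySize;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

FileSink<String> sink = FileSink
    .forRowFormat(new Path("hdfs://namenode:8020/logs"),
                  new SimpleStringEncoder<String>("UTF-8"))
    .withRollingPolicy(
        DefaultRollingPolicy.builder()
            .withRolloverInterval(Duration.ofMinutes(15))   // roll every 15 min at most
            .withInactivityInterval(Duration.ofMinutes(5))  // ...or after 5 min of inactivity
            .withMaxPartSize(MemorySize.ofMebiBytes(128))   // ...or at 128 MiB per part file
            .build())
    .build();

// stream: a DataStream<String> carrying the consumed log records.
stream.sinkTo(sink);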

Best regards,

Weijie


Howie Yang wrote on Thu, Feb 2, 2023 at 16:28:

> Hey,
>
>
> Recently I've wanted to write consumed logs into HDFS. Most of the
> connector material I found still uses BucketingSink, which seems to be an
> old-version API. What is the latest officially recommended approach here?
>
>
>
>
>
>
>
>
>
>
> --
>
> Best,
> Howie


Re: [ANNOUNCE] Apache Flink 1.16.1 released

2023-02-01 post by weijie guo
Thanks to Martijn for managing the release, and to all the people involved.


Best regards,

Weijie


Konstantin Knauf wrote on Thu, Feb 2, 2023 at 06:40:

> Great. Thanks, Martijn for managing the release.
>
> On Wed, Feb 1, 2023 at 20:26, Martijn Visser <
> martijnvis...@apache.org> wrote:
>
> > The Apache Flink community is very happy to announce the release of
> Apache
> > Flink 1.16.1, which is the first bugfix release for the Apache Flink 1.16
> > series.
> >
> > Apache Flink® is an open-source stream processing framework for
> > distributed, high-performing, always-available, and accurate data
> streaming
> > applications.
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > Please check out the release blog post for an overview of the
> improvements
> > for this bugfix release:
> > https://flink.apache.org/news/2023/01/30/release-1.16.1.html
> >
> > The full release notes are available in Jira:
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352344
> >
> > We would like to thank all contributors of the Apache Flink community who
> > made this release possible!
> >
> > Feel free to reach out to the release managers (or respond to this
> thread)
> > with feedback on the release process. Our goal is to constantly improve
> the
> > release process. Feedback on what could be improved or things that didn't
> > go so well are appreciated.
> >
> > Best regards,
> >
> > Martijn Visser
> >
>
>
> --
> https://twitter.com/snntrable
> https://github.com/knaufk
>


Re: How to optimize Flink SQL and handle backpressure

2023-01-31 post by weijie guo
It's best to first find the bottleneck operator that slows downstream
processing and increase its parallelism appropriately. If that doesn't
help, check the jstack output; the logic may need to be adjusted.

Best regards,

Weijie


ssmq <374060...@qq.com.invalid> wrote on Tue, Jan 31, 2023 at 17:22:

> You can test whether backpressure still occurs when you don't write to
> ClickHouse. If the write is not the bottleneck, then optimize your
> processing logic.
>
>
> From: lxk
> Sent: Jan 31, 2023 15:16
> To: user-zh@flink.apache.org
> Subject: How to optimize Flink SQL and handle backpressure
>
> Flink version: 1.16.0
> We are using Flink SQL to join multiple streams and write the result into
> ClickHouse.
> The code is as follows:
> select \
> header.id as id, \
> LAST_VALUE(header.order_status), \
> LAST_VALUE(header.customer_id), \
> LAST_VALUE(header.shop_id), \
> LAST_VALUE(header.parent_order_id), \
> LAST_VALUE(header.order_at), \
> LAST_VALUE(header.pay_at), \
> LAST_VALUE(header.channel_id), \
> LAST_VALUE(header.root_order_id), \
> LAST_VALUE(header.last_updated_at), \
> item.id as item_id, \
> LAST_VALUE(item.order_id) as order_id, \
> LAST_VALUE(item.row_num), \
> LAST_VALUE(item.goods_id), \
> LAST_VALUE(item.s_sku_code), \
> LAST_VALUE(item.qty), \
> LAST_VALUE(item.p_paid_sub_amt), \
> LAST_VALUE(item.p_sp_sub_amt), \
> LAST_VALUE(item.bom_type), \
> LAST_VALUE(item.last_updated_at) as item_last_updated_at, \
> LAST_VALUE(item.display_qty), \
> LAST_VALUE(delivery.del_type), \
> LAST_VALUE(delivery.time_slot_type), \
> LAST_VALUE(delivery.time_slot_date), \
> LAST_VALUE(delivery.time_slot_time_from), \
> LAST_VALUE(delivery.time_slot_time_to), \
> LAST_VALUE(delivery.sku_delivery_type), \
> LAST_VALUE(delivery.last_updated_at) as del_last_updated_at, \
> LAST_VALUE(promotion.id) as promo_id, \
> LAST_VALUE(promotion.order_item_id), \
> LAST_VALUE(promotion.p_promo_amt), \
> LAST_VALUE(promotion.promotion_category), \
> LAST_VALUE(promotion.promo_type), \
> LAST_VALUE(promotion.promo_sub_type), \
> LAST_VALUE(promotion.last_updated_at) as promo_last_updated_at, \
> LAST_VALUE(promotion.promotion_cost) \
> from \
>   item \
>   join \
>   header  \
>   on item.order_id = header.id \
>   left join \
>   delivery \
>   on item.order_id = delivery.order_id \
>   left join \
>   promotion \
>   on item.id =promotion.order_item_id \
>   group by header.id,item.id
> In the Flink Web UI the job shows severe backpressure and dies from time
> to time:
> https://pic.imgdb.cn/item/63d8bebbface21e9ef3c92fe.jpg
>
> We referred to an article from JD.com,
> https://flink-learning.org.cn/article/detail/1e86b8b38faaeefd5ed7f70858aa40bc
> and adjusted the relevant parameters, but found that some features are
> already optimized in Flink 1.16, and adding these parameters did not
> improve the job at all.
>
> conf.setString("table.exec.mini-batch.enabled", "true");
> conf.setString("table.exec.mini-batch.allow-latency", "15 s");
> conf.setString("table.exec.mini-batch.size", "5000");
> conf.setString("table.exec.state.ttl", "86400 s");
> conf.setString("table.exec.disabled-operators", "NestedLoopJoin");
> conf.setString("table.optimizer.join.broadcast-threshold", "-1");
> conf.setString("table.optimizer.multiple-input-enabled", "true");
> conf.setString("table.exec.shuffle-mode", "POINTWISE_EDGES_PIPELINED");
> conf.setString("taskmanager.network.sort-shuffle.min-parallelism", "8");
> I'd like to ask: how should backpressure be handled for Flink SQL, and
> what other optimization techniques are available?
>
>
>
>


Re: Unsubscribe

2023-01-30 post by weijie guo
Hello,

To unsubscribe, please send an email to user-zh-unsubscr...@flink.apache.org

Best regards,

Weijie


唐凯 wrote on Thu, Jan 19, 2023 at 15:54:

> Unsubscribe
>
>
>
>
> 唐凯
> mrdon...@foxmail.com
>
>
>
> 


Re: Job runs fine locally but fails when submitted to the cluster - the image is broken, so pasting the error text; sorry for the trouble

2023-01-30 post by weijie guo
Can you reach 127.0.0.1:33271 (does a ping/connect go through)?

Best regards,

Weijie


yidan zhao wrote on Thu, Jan 12, 2023 at 17:48:

> From the error "Could not connect to BlobServer at address
> localhost/127.0.0.1:33271", your local configuration may be wrong. What
> deployment mode is the cluster running in, and is the configuration
> correct for it?
>
> WD.Z wrote on Tue, Jan 10, 2023 at 10:56:
> >
> >
> The job fails when I click submit in the web UI; it looks like the error
> occurs when submitting from the JM to the TM. The server firewall is off,
> resources are sufficient, and Hadoop is not installed yet, but the cluster
> starts in standalone mode; per the docs Hadoop shouldn't be needed?
> The Caused by chain in the error is as follows:
> >
> >
> > 2023-01-10 09:46:14,627 INFO
> org.apache.flink.client.deployment.application.executors.EmbeddedExecutor
> [] - Job e343bc906ea6889d34d9472d40d4f8ff is submitted.
> > 2023-01-10 09:46:14,627 INFO
> org.apache.flink.client.deployment.application.executors.EmbeddedExecutor
> [] - Submitting Job with JobId=e343bc906ea6889d34d9472d40d4f8ff.
> > 2023-01-10 09:46:14,629 WARN
> org.apache.flink.client.deployment.application.DetachedApplicationRunner []
> - Could not execute application:
> > org.apache.flink.client.program.ProgramInvocationException: The main
> method caused an error: org.apache.flink.util.FlinkException: Failed to
> execute job 'Flink Streaming Job'.
> >
> >
> > Caused by: java.lang.RuntimeException:
> org.apache.flink.util.FlinkException: Failed to execute job 'Flink
> Streaming Job'.
> >
> >
> > Caused by: org.apache.flink.util.FlinkException: Failed to execute job
> 'Flink Streaming Job'.
> >
> >
> > Caused by: org.apache.flink.util.FlinkException: Failed to execute job
> 'Flink Streaming Job'.
> >
> >
> > Caused by: org.apache.flink.util.FlinkException: Could not upload job
> files.
> >
> >
> > Caused by: java.io.IOException: Could not connect to BlobServer at
> address localhost/127.0.0.1:33271
> >
> >
> > Caused by: java.net.ConnectException: 拒绝连接 (Connection refused)
>


Re: Questions about Flink restart strategies

2022-12-09 post by weijie guo
Hi,

1. In Flink, the JobMaster (JM) monitors the status of every Task. If a
Task fails for some reason, the JM triggers failover and decides which
tasks should be restarted. If the JM itself goes down, Flink supports a
high-availability (HA) configuration: by persisting some information to an
external system, a standby JM can correctly take over the job.
2. The failover process correctly handles both a single Task failure and a
TaskManager failure, and the flow is essentially the same: a TaskManager
failure can be treated as the failure of all the Tasks scheduled onto it.

Best regards,

Weijie


李义 wrote on Fri, Dec 9, 2022 at 15:28:

> Hi, our team is evaluating Flink-related technology and has some
> confusion about the failure restart strategy.
> Task Failure Recovery | Apache Flink
>
> 1. What mechanism triggers a restart after a failure? I searched a lot of
> material, and it all only mentions how the restart strategy is configured,
> but who triggers it? Surely a crashed process can't restart itself?
> 2. Does the failure restart operate at the Task level, or on the
> TaskManager service?
>
> Thanks to the Flink developers for their work, and keep it up!
>


Re: How to synchronously wait for the execution result in batch mode

2022-11-09 post by weijie guo
Which deployment mode (per-job/session/application)? And are you using a
detach option such as -d or -yd?
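
For context, a minimal sketch of the usual ways to block until a job
finishes, assuming an attached (non-detached) submission; the table names
are placeholders:

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// DataStream API: env.execute() blocks until the job finishes when the
// client stays attached (no -d / -yd) and throws if the job fails.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// ... build the pipeline ...
JobExecutionResult result = env.execute("batch-job");

// Table API / SQL: make INSERT statements synchronous, as suggested below.
TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
tEnv.getConfig().getConfiguration().setString("table.dml-sync", "true");
tEnv.executeSql("INSERT INTO sink_table SELECT * FROM source_table");
// Alternatively, without table.dml-sync:
// tEnv.executeSql("INSERT INTO sink_table SELECT * FROM source_table").await();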

Best regards,

Weijie


唐世伟 wrote on Wed, Nov 9, 2022 at 10:00:

> Thanks for the reply. That should only be usable with the Table API or
> SQL; it probably doesn't work with the Stream API, right?
>
>
> > On Nov 8, 2022, at 8:10 PM, yuxia wrote:
> >
> > set table.dml-sync = true
> > Would that work?
> >
> > Best regards,
> > Yuxia
> >
> > ----- Original Message -----
> > From: "唐世伟"
> > To: "user-zh"
> > Sent: Tuesday, Nov 8, 2022, 8:06:18 PM
> > Subject: How to synchronously wait for the execution result in batch mode
> >
> > We submit a batch-mode job to a YARN cluster via ./bin/flink run. The
> command only reports whether the job was submitted successfully and
> returns the applicationId; whether the job itself succeeded is not
> returned. When we schedule Flink batch jobs from an offline scheduling
> platform, we cannot capture the execution result. Is there any way to
> synchronously wait for the result?
>
>


Re: Seeking optimization directions for LocalTransportException

2022-11-01 post by weijie guo
1. For all-to-all edges, lowering the parallelism in your example will
certainly reduce the number of connections a lot. Slot sharing only puts
the A and B subtasks with the same index into one slot; A still has to
establish connections to the other subtasks.
2. I meant the job jar; each TM downloads it only once.

Best regards,

Weijie


yidan zhao wrote on Mon, Oct 31, 2022 at 19:54:

> Hmm, for question 1, I'm mainly wondering whether such a complex
> connection topology increases the probability of exceptions like "Sending
> the partition request to '...' failed".
> For question 2, does the jar download you mention refer to the job jar or
> the Flink jar? The Flink jar isn't needed, since mine is a standalone
> cluster. For the job jar, another question comes up: if one TM is assigned
> 120*10 = 1200 tasks, surely the job jar isn't distributed that many times?
>
> weijie guo wrote on Mon, Oct 31, 2022 at 12:54:
> >
> > Hi, which Flink version are you using?
> > In 1.15 there is connection reuse between TMs; by default one physical
> > TCP connection is established between each pair of TMs.
> > With higher parallelism, since each of your TMs has only one slot, more
> > TMs get started. Many factors can slow down all tasks reaching RUNNING:
> > some TM containers sit on slow nodes, jar downloads take long, state
> > restore is slow, and so on.
> > restore慢等
> >
> > Best regards,
> >
> > Weijie
> >
> >
> yidan zhao wrote on Sun, Oct 30, 2022 at 11:36:
> >
> > > As the subject says, my production cluster frequently reports
> > > org.apache.flink.runtime.io.network.netty.exception.LocalTransportException.
> > > The specific exception causes are listed below, from most to least
> > > frequent.
> > > (1)Sending the partition request to '...' failed;
> > > org.apache.flink.runtime.io
> > > .network.netty.exception.LocalTransportException:
> > > Sending the partition request to '/10.35.215.18:2065 (#0)' failed.
> > > at org.apache.flink.runtime.io
> > >
> .network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:145)
> > > at org.apache.flink.runtime.io
> > >
> .network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:133)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1017)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:878)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1071)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> > > at java.lang.Thread.run(Thread.java:748)
> > > Caused by:
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.StacklessClosedChannelException
> > > at
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.write(Object,
> > > ChannelPromise)(Unknown Source)
> > > Caused by:
> > >
> org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException:
> > > writeAddress(..) failed: Connection timed out
> > >
> > > (2)readAddress(..) failed: Connection timed out
> > > org.apache.flink.runtime.io
> > > .network.netty.exception.LocalTransportException:
> > > readAddress(..) failed: Connection timed 

Re: About the busy, idle, and backpressure metrics

2022-10-30 post by weijie guo
Could you provide thread dumps for some of the parallel subtasks of A and B?

Best regards,

Weijie


yidan zhao wrote on Sun, Oct 30, 2022 at 17:26:

> I'm currently seeing some strange behavior, e.g. with A => B:
> A is backpressured, yet all B subtasks are idle with busy at 0. What could
> be the reason for this?
>


Re: Seeking optimization directions for LocalTransportException

2022-10-30 post by weijie guo
Hi, which Flink version are you using?
In 1.15 there is connection reuse between TMs; by default one physical TCP
connection is established between each pair of TMs.
With higher parallelism, since each of your TMs has only one slot, more TMs
get started. Many factors can slow down all tasks reaching RUNNING: some TM
containers sit on slow nodes, jar downloads take long, state restore is
slow, and so on.

Best regards,

Weijie


yidan zhao wrote on Sun, Oct 30, 2022 at 11:36:

> As the subject says, my production cluster frequently reports
> org.apache.flink.runtime.io.network.netty.exception.LocalTransportException.
> The specific exception causes are listed below, from most to least
> frequent.
> (1)Sending the partition request to '...' failed;
> org.apache.flink.runtime.io
> .network.netty.exception.LocalTransportException:
> Sending the partition request to '/10.35.215.18:2065 (#0)' failed.
> at org.apache.flink.runtime.io
> .network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:145)
> at org.apache.flink.runtime.io
> .network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:133)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1017)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:878)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1071)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
> at
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
> at
> org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at java.lang.Thread.run(Thread.java:748)
> Caused by:
> org.apache.flink.shaded.netty4.io.netty.channel.StacklessClosedChannelException
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.write(Object,
> ChannelPromise)(Unknown Source)
> Caused by:
> org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException:
> writeAddress(..) failed: Connection timed out
>
> (2)readAddress(..) failed: Connection timed out
> org.apache.flink.runtime.io
> .network.netty.exception.LocalTransportException:
> readAddress(..) failed: Connection timed out (connection to
> '10.35.109.149/10.35.109.149:2094')
> at org.apache.flink.runtime.io
> .network.netty.CreditBasedPartitionRequestClientHandler.exceptionCaught(CreditBasedPartitionRequestClientHandler.java:168)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907)
> at
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.handleReadException(AbstractEpollStreamChannel.java:728)
> at
> 

Re: Re: How to tell which sources in a pipeline are bounded or unbounded

2022-10-28 post by weijie guo
Then there is no way to get it in the Table API. I'm a bit puzzled: why do
you need to determine the execution mode automatically rather than setting
it according to your job's actual situation?
If you expect to run the job in batch mode and some sources are unbounded,
I'd say the choice of sources is itself unreasonable and the code should be
changed.
Also, the streaming and batch execution modes differ in many ways, e.g.
whether the sum operator emits multiple results or one result per key;
these are things to weigh when choosing the execution mode. Even if
automatic inference were supported, letting the system infer it could
produce a lot of unexpected behavior.

Best regards,

Weijie


junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 17:51:

>
> I'm on Flink 1.14.5:
>
> EnvironmentSettings.fromConfiguration(ReadableConfig configuration) {
> final Builder builder = new Builder();
> switch (configuration.get(RUNTIME_MODE)) {
> case STREAMING:
> builder.inStreamingMode();
> break;
> case BATCH:
> builder.inBatchMode();
> break;
> case AUTOMATIC:
> default:
> throw new TableException(
> String.format(
> "Unsupported mode '%s' for '%s'. "
> + "Only an explicit BATCH or STREAMING
> mode is supported in Table API.",
> configuration.get(RUNTIME_MODE),
> RUNTIME_MODE.key()));
> }
> The switch above restricts it: AUTOMATIC is not supported.
>
>
> From: TonyChen
> Sent: 2022-10-28 17:13
> To: user-zh
> Subject: Re: How to tell which sources in a pipeline are bounded or unbounded
> Bump the patch version; 1.14.3 already has AUTOMATIC
>
>
> Best,
> TonyChen
>
> > On Oct 28, 2022, at 17:09, junjie.m...@goupwith.com wrote:
> >
> > hi, weijie:
> > I'm using Flink 1.14, which does not support setting
> > execution.runtime-mode=AUTOMATIC; it reports the following error:
> > org.apache.flink.table.api.TableException: Unsupported mode 'AUTOMATIC'
> for 'execution.runtime-mode'. Only an explicit BATCH or STREAMING mode is
> supported in Table API.
> >
> > Has a later version added support for execution.runtime-mode=AUTOMATIC?
> >
> >
> > From: weijie guo
> > Sent: 2022-10-28 16:38
> > To: user-zh
> > Subject: Re: Re: How to tell which sources in a pipeline are bounded or unbounded
> > For this requirement you can set execution.runtime-mode to AUTOMATIC;
> > that mode does exactly what you want: if all sources are bounded, the
> > batch execution mode is used, otherwise streaming.
> >
> > Best regards,
> >
> > Weijie
> >
> >
> > junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:56:
> >
> >> hi, Weijie:
> >>
> >>
> Because I haven't found a way to determine whether a pipeline is bounded
> or unbounded; in particular, for Table, execution.runtime-mode=BATCH or
> STREAMING must be specified manually.
> >>
> >>
> I want to derive the whole pipeline's boundedness from all of its sources:
> if every ScanSource is bounded, the pipeline must be bounded; otherwise it
> is unbounded.
> >>
> >>
> >> From: weijie guo
> >> Sent: 2022-10-28 15:44
> >> To: user-zh
> >> Subject: Re: Re: How to tell which sources in a pipeline are bounded or unbounded
> >> Hi, junjie:
> >>
> >> I'd first like to understand your goal: why do you need to determine a
> >> Source's boundedness in the Table API, and how would that information
> >> help your scenario?
> >>
> >> Best regards,
> >>
> >> Weijie
> >>
> >>
> >> junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:36:
> >>
> >>> public static DynamicTableSource
> FactoryUtil.createTableSource(@Nullable
> >>> Catalog catalog,ObjectIdentifier objectIdentifier, ResolvedCatalogTable
> >>> catalogTable, ReadableConfig configuration, ClassLoader classLoader,
> >>> boolean isTemporary) can produce one, but it takes too many
> >>> parameters, and I don't know how to obtain them.
> >>>
> >>> From: junjie.m...@goupwith.com
> >>> Sent: 2022-10-28 15:33
> >>> To: user-zh
> >>> Subject: Re: Re: How to tell which sources in a pipeline are bounded or unbounded
> >>> The question is how to get hold of a ScanTableSource; so far I haven't
> >>> found where to obtain one.
> >>>
> >>>
> >>> From: TonyChen
> >>> Sent: 2022-10-28 15:21
> >>> To: user-zh
> >>> Subject: Re: How to tell which sources in a pipeline are bounded or unbounded
> >>> Maybe take a look at this:
> >>>
> >>>
> >>
> https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/ScanTableSource.java#L94
> >>> Best,
> >>> TonyChen
> >>>> On Oct 28, 2022, at 15:12, junjie.m...@goupwith.com wrote:
> >>>>
> >>>> Hi all:
> >>>>   A question: how can I tell whether a Table is bounded or unbounded,
> >>>> or get the source table information and its boundedness through the
> >>>> TableEnv?
> >>>>
> >>>
> >>
>
>


Re: Re: How to tell which sources in a pipeline are bounded or unbounded

2022-10-28 post by weijie guo
For this requirement you can set execution.runtime-mode to AUTOMATIC;
that mode does exactly what you want: if all sources are bounded, the batch
execution mode is used, otherwise streaming.
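
A minimal sketch of that setting in the DataStream API (as the follow-up
thread above notes, the Table API in 1.14 rejects AUTOMATIC):

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Picks BATCH if every source is bounded, STREAMING otherwise.
env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);

// Equivalent CLI form:
//   bin/flink run -Dexecution.runtime-mode=AUTOMATIC ...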

Best regards,

Weijie


junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:56:

> hi, Weijie:
>
> Because I haven't found a way to determine whether a pipeline is bounded
> or unbounded; in particular, for Table, execution.runtime-mode=BATCH or
> STREAMING must be specified manually.
>
> I want to derive the whole pipeline's boundedness from all of its sources:
> if every ScanSource is bounded, the pipeline must be bounded; otherwise it
> is unbounded.
>
>
> From: weijie guo
> Sent: 2022-10-28 15:44
> To: user-zh
> Subject: Re: Re: How to tell which sources in a pipeline are bounded or unbounded
> Hi, junjie:
>
> I'd first like to understand your goal: why do you need to determine a
> Source's boundedness in the Table API, and how would that information help
> your scenario?
>
> Best regards,
>
> Weijie
>
>
> junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:36:
>
> > public static DynamicTableSource FactoryUtil.createTableSource(@Nullable
> > Catalog catalog,ObjectIdentifier objectIdentifier, ResolvedCatalogTable
> > catalogTable, ReadableConfig configuration, ClassLoader classLoader,
> > boolean isTemporary) can produce one, but it takes too many parameters,
> > and I don't know how to obtain them.
> >
> > From: junjie.m...@goupwith.com
> > Sent: 2022-10-28 15:33
> > To: user-zh
> > Subject: Re: Re: How to tell which sources in a pipeline are bounded or unbounded
> > The question is how to get hold of a ScanTableSource; so far I haven't
> > found where to obtain one.
> >
> >
> > From: TonyChen
> > Sent: 2022-10-28 15:21
> > To: user-zh
> > Subject: Re: How to tell which sources in a pipeline are bounded or unbounded
> > Maybe take a look at this:
> >
> >
> https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/ScanTableSource.java#L94
> > Best,
> > TonyChen
> > > On Oct 28, 2022, at 15:12, junjie.m...@goupwith.com wrote:
> > >
> > > Hi all:
> > > A question: how can I tell whether a Table is bounded or unbounded, or
> > > get the source table information and its boundedness through the
> > > TableEnv?
> > >
> >
>


Re: Re: How to tell which sources in a pipeline are bounded or unbounded

2022-10-28 post by weijie guo
Hi, junjie:

I'd first like to understand your goal: why do you need to determine a
Source's boundedness in the Table API, and how would that information help
your scenario?

Best regards,

Weijie


junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:36:

> public static DynamicTableSource FactoryUtil.createTableSource(@Nullable
> Catalog catalog,ObjectIdentifier objectIdentifier, ResolvedCatalogTable
> catalogTable, ReadableConfig configuration, ClassLoader classLoader,
> boolean isTemporary) can produce one, but it takes too many parameters,
> and I don't know how to obtain them.
>
> From: junjie.m...@goupwith.com
> Sent: 2022-10-28 15:33
> To: user-zh
> Subject: Re: Re: How to tell which sources in a pipeline are bounded or unbounded
> The question is how to get hold of a ScanTableSource; so far I haven't
> found where to obtain one.
>
>
> From: TonyChen
> Sent: 2022-10-28 15:21
> To: user-zh
> Subject: Re: How to tell which sources in a pipeline are bounded or unbounded
> Maybe take a look at this:
>
> https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/ScanTableSource.java#L94
> Best,
> TonyChen
> > On Oct 28, 2022, at 15:12, junjie.m...@goupwith.com wrote:
> >
> > Hi all:
> > A question: how can I tell whether a Table is bounded or unbounded, or
> > get the source table information and its boundedness through the
> > TableEnv?
> >
>


Re: flink remote shuffle example fails to run

2021-12-01 post by weijie guo
Hi, did you submit the job in Flink standalone mode? Also, is the Flink you
are using the Apache Flink 1.14.0 for Scala 2.12 downloaded from the
official site?
casel.chen wrote on Thu, Dec 2, 2021 at 8:12 AM:

> Following the guide at https://github.com/flink-extended/flink-remote-shuffle,
> I tried running the flink remote shuffle service with a batch job and got
> the error below. I use Scala 2.12 locally, so I built flink-remote-shuffle
> with: mvn clean install -DskipTests -Dscala.binary.version=2.12
>
> The missing class is from flink-streaming-java_2.12-1.14.0.jar; putting
> that jar in $FLINK_HOME/lib did not help either. The local Flink version
> is 1.14.0.
>
>
>
> java.lang.NoClassDefFoundError:
> org/apache/flink/streaming/api/graph/GlobalStreamExchangeMode
>
> at
> com.alibaba.flink.shuffle.examples.BatchJobDemo.main(BatchJobDemo.java:51)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:498)
>
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
>
> at
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
>
> at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
>
> at
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
>
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
>
> at
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
>
> at
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
>
> at
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
>
> Caused by: java.lang.ClassNotFoundException:
> org.apache.flink.streaming.api.graph.GlobalStreamExchangeMode
>
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>
> at
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:64)
>
> at
> org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:65)
>
> at
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:48)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>
> ... 14 more


Re: [ANNOUNCE] Apache Flink 1.11.2 released

2020-09-18 post by Weijie Guo 2
Good job! Many thanks to @ZhuZhu for driving this, and thanks to everyone
who contributed to the release!

best,
Weijie
Zhu Zhu-2 wrote
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.11.2, which is the second bugfix release for the Apache Flink 1.11
> series.
> 
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data
> streaming
> applications.
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> Please check out the release blog post for an overview of the improvements
> for this bugfix release:
> https://flink.apache.org/news/2020/09/17/release-1.11.2.html
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348575
> 
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
> 
> Thanks,
> Zhu





--
Sent from: http://apache-flink.147419.n8.nabble.com/