Re: Help needed (PyFlink)

2021-01-29 Post by Shuiqiang Chen
Hi,
You can check the logs of the task manager that runs the source task to see whether the consumer successfully fetched the Kafka partition metadata and whether Kerberos authentication succeeded.

瞿叶奇 <389243...@qq.com> wrote on Sat, Jan 30, 2021 at 3:14 PM:

> Hello, consuming works without any problem; the data can be consumed normally.
>
>
>
>
> ------ Original message ------
> *From:* "user-zh" ;
> *Sent:* Saturday, Jan 30, 2021 3:08 PM
> *To:* "user-zh";
> *Subject:* Re: Help needed (PyFlink)
>
> First check whether the Kafka topic can be consumed from the command line.
>
> Make sure command-line consumption works before using Flink.
>
>
> On 2021-01-30 14:25:57, "瞿叶奇" <389243...@qq.com> wrote:
>
> Hello, I want to use Flink to consume from Kafka and write to a local CSV file. The problem is that both the Flink and Kafka clusters use Kerberos authentication, and I am using PyFlink. The program runs and reports no errors, but it does not consume any data: the CSV file stays empty even though its modification time keeps updating. I suspect the Kafka connection is the problem and hope you can help clear this up.
> 1) Producing data to Kafka:
>
> 2) The PyFlink program:
>
>
> #!/usr/bin/python3.7
> # -*- coding: UTF-8 -*-
> from pyflink.datastream import StreamExecutionEnvironment, CheckpointingMode
> from pyflink.table import StreamTableEnvironment, TableConfig, DataTypes, CsvTableSink, WriteMode, SqlDialect
> from pyflink.table.descriptors import FileSystem, OldCsv, Schema, Kafka, Json
>
> s_env = StreamExecutionEnvironment.get_execution_environment()
> s_env.set_parallelism(1)
> s_env.enable_checkpointing(3000)
> st_env = StreamTableEnvironment.create(s_env, TableConfig())
> st_env.use_catalog("default_catalog")
> st_env.use_database("default_database")
> st_env.connect(
>     Kafka()
>         .version("universal")
>         .topic("qyq13")
>         .start_from_earliest()
>         .property("zookeeper.connect", "192.168.0.120:24002,192.168.0.238:24002,192.168.0.6:24002")
>         .property("security.protocol", "SASL_PLAINTEXT")
>         .property("sasl.kerberos.service.name", "kafka")
>         .property("kerberos.domain.name", "hadoop.hadoop.com")
>         .property("bootstrap.servers", "192.168.0.151:21007,192.168.0.29:21007,192.168.0.93:21007")
> ).with_format(
>     Json()
>         .fail_on_missing_field(False)
>         .schema(DataTypes.ROW([DataTypes.FIELD("id", DataTypes.BIGINT()),
>                                DataTypes.FIELD("name", DataTypes.STRING())]))
> ).with_schema(
>     Schema()
>         .field("id", DataTypes.BIGINT())
>         .field("name", DataTypes.STRING())
> ).register_table_source("sourceKafka")
> fieldNames = ["id", "name"]
> fieldTypes = [DataTypes.BIGINT(), DataTypes.STRING()]
> csvSink = CsvTableSink(fieldNames, fieldTypes, "/tmp/result2021.csv", ",", 1, WriteMode.OVERWRITE)
> st_env.register_table_sink("csvTableSink", csvSink)
> resultQuery = st_env.sql_query("select id, name from sourceKafka")
> resultQuery.insert_into("csvTableSink")  # insert_into returns None; no reassignment needed
> st_env.execute("pyflink-kafka-v2")
> 3)pyflink-shell.sh local
>
> 4) Result
> While the program runs in pyflink-shell local, start the producer to generate data and inspect the result file:
>
>
> The file is indeed being updated (its modification time changes), but it stays empty. On the one hand, I hope PyFlink can add a way to print results to the console; on the other hand, I hope you can give an example of connecting to Kerberos-authenticated Kafka. I am an engineer on the new-architecture rebuild of the Shaanxi State Grid electricity consumption information collection system, and we plan to use Flink to test Kafka-to-HDFS data transfer. An example would help us complete the test.
>
>
>
>
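On the Kerberos question: PyFlink does not take keytab settings in the job code; the usual route is Flink's security options in flink-conf.yaml. A minimal sketch, with the keytab path and principal as placeholders to adapt to the real cluster:

# flink-conf.yaml -- Kerberos settings for the Kafka client (placeholder values)
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/user.keytab
security.kerberos.login.principal: flink-user
# Hand the credentials to the JAAS contexts ZooKeeper and Kafka clients use:
security.kerberos.login.contexts: Client,KafkaClient

On printing to the console: Flink 1.11+ already offers TableResult.print(); a sketch assuming the sourceKafka table registered in the program above:

# Print rows to stdout for smoke-testing; this blocks on an unbounded source.
st_env.execute_sql("SELECT id, name FROM sourceKafka").print()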


Re: Help needed (PyFlink)

2021-01-29 Post by 瞿叶奇
Hello, consuming works without any problem; the data can be consumed normally.




------ Original message ------
From:
   "user-zh"

Re: Help needed (PyFlink)

2021-01-29 Post by Appleyuchi
First check whether the Kafka topic can be consumed from the command line.

Make sure command-line consumption works before using Flink.
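That command-line check can be done with the console consumer shipped with Kafka, pointed at the same SASL/Kerberos settings the job uses. A sketch; the client.properties contents are assumptions, not taken from this thread:

kafka-console-consumer.sh \
  --bootstrap-server 192.168.0.151:21007 \
  --topic qyq13 \
  --from-beginning \
  --consumer.config client.properties

# client.properties (assumed values):
# security.protocol=SASL_PLAINTEXT
# sasl.kerberos.service.name=kafka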

On 2021-01-30 14:25:57, "瞿叶奇" <389243...@qq.com> wrote:

Hello, I want to use Flink to consume from Kafka and write to a local CSV file. The problem is that both the Flink and Kafka clusters use Kerberos authentication, and I am using PyFlink. The program runs and reports no errors, but it does not consume any data: the CSV file stays empty even though its modification time keeps updating. I suspect the Kafka connection is the problem and hope you can help clear this up.
1) Producing data to Kafka:

2) The PyFlink program:


#!/usr/bin/python3.7
# -*- coding: UTF-8 -*-
from pyflink.datastream import StreamExecutionEnvironment, CheckpointingMode
from pyflink.table import StreamTableEnvironment, TableConfig, DataTypes, CsvTableSink, WriteMode, SqlDialect
from pyflink.table.descriptors import FileSystem, OldCsv, Schema, Kafka, Json

s_env = StreamExecutionEnvironment.get_execution_environment()
s_env.set_parallelism(1)
s_env.enable_checkpointing(3000)
st_env = StreamTableEnvironment.create(s_env, TableConfig())
st_env.use_catalog("default_catalog")
st_env.use_database("default_database")
st_env.connect(
    Kafka()
        .version("universal")
        .topic("qyq13")
        .start_from_earliest()
        .property("zookeeper.connect", "192.168.0.120:24002,192.168.0.238:24002,192.168.0.6:24002")
        .property("security.protocol", "SASL_PLAINTEXT")
        .property("sasl.kerberos.service.name", "kafka")
        .property("kerberos.domain.name", "hadoop.hadoop.com")
        .property("bootstrap.servers", "192.168.0.151:21007,192.168.0.29:21007,192.168.0.93:21007")
).with_format(
    Json()
        .fail_on_missing_field(False)
        .schema(DataTypes.ROW([DataTypes.FIELD("id", DataTypes.BIGINT()),
                               DataTypes.FIELD("name", DataTypes.STRING())]))
).with_schema(
    Schema()
        .field("id", DataTypes.BIGINT())
        .field("name", DataTypes.STRING())
).register_table_source("sourceKafka")
fieldNames = ["id", "name"]
fieldTypes = [DataTypes.BIGINT(), DataTypes.STRING()]
csvSink = CsvTableSink(fieldNames, fieldTypes, "/tmp/result2021.csv", ",", 1, WriteMode.OVERWRITE)
st_env.register_table_sink("csvTableSink", csvSink)
resultQuery = st_env.sql_query("select id, name from sourceKafka")
resultQuery.insert_into("csvTableSink")  # insert_into returns None; no reassignment needed
st_env.execute("pyflink-kafka-v2")
3)pyflink-shell.sh local

4) Result
While the program runs in pyflink-shell local, start the producer to generate data and inspect the result file:

The file is indeed being updated (its modification time changes), but it stays empty. On the one hand, I hope PyFlink can add a way to print results to the console; on the other hand, I hope someone can give an example of connecting to Kerberos-authenticated Kafka. I am an engineer on the new-architecture rebuild of the Shaanxi State Grid electricity consumption information collection system, and we plan to use Flink to test Kafka-to-HDFS data transfer. An example would help us complete the test.





Help needed (PyFlink)

2021-01-29 Post by 瞿叶奇
Hello, I want to use Flink to consume from Kafka and write to a local CSV file. The problem is that both the Flink and Kafka clusters use Kerberos authentication, and I am using PyFlink. The program runs and reports no errors, but it does not consume any data: the CSV file stays empty even though its modification time keeps updating. I suspect the Kafka connection is the problem and hope someone can clear this up.
1) Producing data to Kafka:

2) The PyFlink program:


#!/usr/bin/python3.7
# -*- coding: UTF-8 -*-
from pyflink.datastream import StreamExecutionEnvironment, CheckpointingMode
from pyflink.table import StreamTableEnvironment, TableConfig, DataTypes, CsvTableSink, WriteMode, SqlDialect
from pyflink.table.descriptors import FileSystem, OldCsv, Schema, Kafka, Json

s_env = StreamExecutionEnvironment.get_execution_environment()
s_env.set_parallelism(1)
s_env.enable_checkpointing(3000)
st_env = StreamTableEnvironment.create(s_env, TableConfig())
st_env.use_catalog("default_catalog")
st_env.use_database("default_database")
st_env.connect(
    Kafka()
        .version("universal")
        .topic("qyq13")
        .start_from_earliest()
        .property("zookeeper.connect", "192.168.0.120:24002,192.168.0.238:24002,192.168.0.6:24002")
        .property("security.protocol", "SASL_PLAINTEXT")
        .property("sasl.kerberos.service.name", "kafka")
        .property("kerberos.domain.name", "hadoop.hadoop.com")
        .property("bootstrap.servers", "192.168.0.151:21007,192.168.0.29:21007,192.168.0.93:21007")
).with_format(
    Json()
        .fail_on_missing_field(False)
        .schema(DataTypes.ROW([DataTypes.FIELD("id", DataTypes.BIGINT()),
                               DataTypes.FIELD("name", DataTypes.STRING())]))
).with_schema(
    Schema()
        .field("id", DataTypes.BIGINT())
        .field("name", DataTypes.STRING())
).register_table_source("sourceKafka")
fieldNames = ["id", "name"]
fieldTypes = [DataTypes.BIGINT(), DataTypes.STRING()]
csvSink = CsvTableSink(fieldNames, fieldTypes, "/tmp/result2021.csv", ",", 1, WriteMode.OVERWRITE)
st_env.register_table_sink("csvTableSink", csvSink)
resultQuery = st_env.sql_query("select id, name from sourceKafka")
resultQuery.insert_into("csvTableSink")  # insert_into returns None; no reassignment needed
st_env.execute("pyflink-kafka-v2")

3) pyflink-shell.sh local


4) Result
While the program runs in pyflink-shell local, start the producer to generate data and inspect the result file:

The file is indeed being updated (its modification time changes), but it stays empty. On the one hand, I hope PyFlink can add a way to print results to the console; on the other hand, I hope someone can give an example of connecting to Kerberos-authenticated Kafka. I am an engineer on the new-architecture rebuild of the Shaanxi State Grid electricity consumption information collection system, and we plan to use Flink to test Kafka-to-HDFS data transfer. An example would help us complete the test.

Re: Flink 1.11 job hit error "Job leader lost leadership" or "ResourceManager leader changed to new address null"

2021-01-29 文章 Xintong Song
There was indeed a ZK version upgrade between 1.9 and 1.11, but I'm not
aware of any similar issue reported since the upgrade.
I would suggest the following:
- Turn on DEBUG logging and see if there are any valuable details
- Maybe try asking in the Apache ZooKeeper community, see if this is a
known issue.

Thank you~
Xintong Song



On Sat, Jan 30, 2021 at 6:47 AM Lu Niu  wrote:

> Hi, Xintong
>
> Thanks for replying. Could it relate to the zk version? We are a platform
> team at Pinterest in the middle of migrating from 1.9.1 to 1.11. Both 1.9
> and 1.11 jobs talk to the same ZK for JM HA. This problem only surfaced
> in 1.11 jobs. That's why we think it is related to the version upgrade.
>
> Best
> Lu
>
> On Thu, Jan 28, 2021 at 7:56 PM Xintong Song 
> wrote:
>
>> The ZK client side uses a 15s connection timeout and a 60s session timeout
>> in Flink. There's nothing similar to a heartbeat interval configured, which
>> I assume is up to ZK's internal implementation. These things have not
>> changed in Flink since at least 2017.
>>
>> If both ZK client and server complain about timeout, and there's no gc
>> issue spotted, I would consider a network instability.
>>
>> Thank you~
>>
>> Xintong Song
>>
>>
>>
>> On Fri, Jan 29, 2021 at 3:15 AM Lu Niu  wrote:
>>
>>> After checking the log I found the root cause is zk client timeout on TM:
>>> ```
>>> 2021-01-25 14:01:49,600 WARN
>>> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn - Client
>>> session timed out, have not heard from server in 40020ms for sessionid
>>> 0x404f9ca531a5d6f
>>> 2021-01-25 14:01:49,610 INFO
>>> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn - Client
>>> session timed out, have not heard from server in 40020ms for sessionid
>>> 0x404f9ca531a5d6f, closing socket connection and attempting reconnect
>>> 2021-01-25 14:01:49,711 INFO
>>> org.apache.flink.shaded.curator4.org.apache.curator.framework.state.ConnectionStateManager
>>> - State change: SUSPENDED
>>> 2021-01-25 14:01:49,711 WARN
>>> org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService -
>>> Connection to ZooKeeper suspended. Can no longer retrieve the leader from
>>> ZooKeeper.
>>> 2021-01-25 14:01:49,712 INFO
>>> org.apache.flink.runtime.taskexecutor.TaskExecutor - JobManager for job
>>> 27ac39342913d29baac4cde13062c4a4 with leader id
>>> b5af099c17a05fcf15e7bbfcb74e49ea lost leadership.
>>> 2021-01-25 14:01:49,712 WARN
>>> org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService -
>>> Connection to ZooKeeper suspended. Can no longer retrieve the leader from
>>> ZooKeeper.
>>> 2021-01-25 14:01:49,712 INFO
>>> org.apache.flink.runtime.taskexecutor.TaskExecutor - Close JobManager
>>> connection for job 27ac39342913d29baac4cde13062c4a4.
>>> 2021-01-25 14:01:49,712 INFO org.apache.flink.runtime.taskmanager.Task -
>>> Attempting to fail task externally Sink:
>>> USER_EVENTS-spo_derived_event-SINK-SINKS.kafka (156/360)
>>> (d5b5887e639874cb70d7fef939b957b7).
>>> 2021-01-25 14:01:49,712 WARN org.apache.flink.runtime.taskmanager.Task -
>>> Sink: USER_EVENTS-spo_derived_event-SINK-SINKS.kafka (156/360)
>>> (d5b5887e639874cb70d7fef939b957b7) switched from RUNNING to FAILED.
>>> org.apache.flink.util.FlinkException: JobManager responsible for
>>> 27ac39342913d29baac4cde13062c4a4 lost the leadership.
>>> ```
>>>
>>> I checked the TM GC log; there are no GC issues. The ZooKeeper server log
>>> also shows a client timeout. How frequently does the zk client sync with
>>> the server side in flink? The log says the client didn't heartbeat to the
>>> server for 40s. Any help? Thanks!
>>>
>>> Best
>>> Lu
>>>
>>>
>>> On Thu, Dec 17, 2020 at 6:10 PM Xintong Song 
>>> wrote:
>>>
 I'm not aware of any significant changes to the HA components between
 1.9/1.11.
 Would you mind sharing the complete jobmanager/taskmanager logs?

 Thank you~

 Xintong Song



 On Fri, Dec 18, 2020 at 8:53 AM Lu Niu  wrote:

> Hi, Xintong
>
> Thanks for replying and your suggestion. I did check the ZK side but
> there is nothing interesting. The error message actually shows that only
> one TM thought JM lost leadership while others ran fine. Also, this
> happened only after we migrated from 1.9 to 1.11. Is it possible this is
> introduced by 1.11?
>
> Best
> Lu
>
> On Wed, Dec 16, 2020 at 5:56 PM Xintong Song 
> wrote:
>
>> Hi Lu,
>>
>> I assume you are using ZooKeeper as the HA service?
>>
>> A common cause of unexpected leadership lost is the instability of HA
>> service. E.g., if ZK does not receive heartbeat from Flink RM for a
>> certain period of time, it will revoke the leadership and notify
>> other components. You can look into the ZooKeeper logs checking why RM's
>> leadership is revoked.
>>
>> Thank you~
>>
>> Xintong Song
>>
>>
>>
>> On Thu, Dec 17, 2020 at 8:42 AM Lu Niu  wro
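The two timeouts discussed above map to Flink configuration keys and can be raised when the network between TMs and ZooKeeper is occasionally slow. A sketch of the relevant flink-conf.yaml entries, shown with their default values in milliseconds:

# ZooKeeper HA client timeouts (defaults shown)
high-availability.zookeeper.client.session-timeout: 60000
high-availability.zookeeper.client.connection-timeout: 15000

Raising the session timeout only papers over instability; a 40s silence like the one in the log above still points at network stalls or a struggling ZK server.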

FOR SYSTEM_TIME AS OF 维表关联 报错

2021-01-29 Post by 阿华田


Hi all, running a dimension-table (lookup) join from the Flink SQL client fails.
SQL statement:
insert into sink_a select a.user_id, b.user_name from source_a as a left join source_b FOR SYSTEM_TIME AS OF a.proc_time b on a.user_id = b.user_id;
Error message:
[ERROR] Could not execute SQL statement. Reason:
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE].
Missing conversion is FlinkLogicalJoin[convention: LOGICAL -> STREAM_PHYSICAL]
There is 1 empty subset: rel#280:Subset#14.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], the relevant part of the original plan is as follows
阿华田
a15733178...@163.com
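For reference, a lookup join of this shape typically plans only when the probe side carries a processing-time attribute and the build side is a lookup-capable connector (for example JDBC). A hedged sketch; the table layouts match the query above, but all connector options are invented for illustration:

CREATE TABLE source_a (
  user_id BIGINT,
  proc_time AS PROCTIME()  -- processing-time attribute the join requires
) WITH (
  'connector' = 'kafka',
  'topic' = 'source_a',
  'properties.bootstrap.servers' = 'broker:9092',
  'format' = 'json'
);

CREATE TABLE source_b (
  user_id BIGINT,
  user_name STRING
) WITH (
  'connector' = 'jdbc',  -- a connector that supports lookups
  'url' = 'jdbc:mysql://db-host:3306/dim',
  'table-name' = 'user_dim'
);

INSERT INTO sink_a
SELECT a.user_id, b.user_name
FROM source_a AS a
LEFT JOIN source_b FOR SYSTEM_TIME AS OF a.proc_time AS b
ON a.user_id = b.user_id;

Note the AS before the alias b. If proc_time is a plain TIMESTAMP column rather than PROCTIME(), or source_b is a scan-only source, the planner typically fails with this kind of CannotPlanException.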



Re: Job submission commands: ./bin/flink run-application -t yarn-application ... vs ./bin/flink run -m yarn-cluster ...

2021-01-29 Post by Yapor
-t was introduced in Flink 1.12. Once the submission mode is specified with -t, YARN-related parameters have to be set via -D, e.g. -Dyarn.application.name.

On 2021-01-29 12:52:41, "lp" <973182...@qq.com> wrote:
>As the subject says: ./flink --help shows two similar job-submission commands, and both seem to ship the JobManager to a YARN node for execution. I don't understand the difference between them, and the official site doesn't explain it either. Could someone clarify?
>
>
>
>--
>Sent from: http://apache-flink.147419.n8.nabble.com/


Re: [ANNOUNCE] Apache Flink 1.10.3 released

2021-01-29 文章 Dian Fu
Thanks Xintong for driving this release!

Regards,
Dian

> On Jan 29, 2021, at 5:24 PM, Till Rohrmann wrote:
> 
> Thanks Xintong for being our release manager. Well done!
> 
> Cheers,
> Till
> 
> On Fri, Jan 29, 2021 at 9:50 AM Yang Wang wrote:
> Thanks Xintong for driving this release.
> 
> Best,
> Yang
> 
> Yu Li <car...@gmail.com> wrote on Fri, Jan 29, 2021 at 3:52 PM:
> Thanks Xintong for being our release manager and everyone else who made the 
> release possible!
> 
> Best Regards,
> Yu
> 
> 
> On Fri, 29 Jan 2021 at 15:05, Xintong Song wrote:
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.10.3, which is the third bugfix release for the Apache Flink 1.10
> series.
> 
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html 
> 
> 
> Please check out the release blog post for an overview of the improvements
> for this bugfix release:
> https://flink.apache.org/news/2021/01/29/release-1.10.3.html 
> 
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348668
>  
> 
> 
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
> 
> Regards,
> Xintong Song



Re: Why doesn't the window reduceFunction support RichFunction?

2021-01-29 Post by likai
Hi, if you need to keep state inside reduce, can you use aggregate()? Looking at the code: a window stores its data in window state, and reduce generates a reduce-specific state store that is placed in the window. Whether or not the window function is user-defined, the state inside it cannot be customized. When an aggregation is present, the aggregate function is simply applied to the window's state, and that state holds the aggregation result. You can think of window-plus-aggregation as a single operator.


likai
1137591...@qq.com



> On Jan 29, 2021, at 12:49 PM, 赵一旦 wrote:
> 
> windowFunc



?????? ????????????

2021-01-29 Post by ???????L





Kafka has 3 partitions; yes, Flink's parallelism is 3.


------ Original message ------
From:
   "user-zh"

http://apache-flink.147419.n8.nabble.com/

Re: No watermark generated

2021-01-29 Post by Jessica.J.Wang
Check whether the WatermarkAssigner node has data flowing into it.



--
Sent from: http://apache-flink.147419.n8.nabble.com/
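A common cause when the assigner sits behind Kafka: if any parallel source instance or partition receives no data, the operator watermark (the minimum over all inputs) never advances. If this is a Table/SQL job, one knob for that case is table.exec.source.idle-timeout; a sketch assuming Flink 1.11+ and a PyFlink table environment named st_env:

# Mark a source subtask idle after 1 minute without records, so it stops
# holding back the watermark.
st_env.get_config().get_configuration().set_string(
    "table.exec.source.idle-timeout", "1 min")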


Re: [ANNOUNCE] Apache Flink 1.10.3 released

2021-01-29 文章 Till Rohrmann
Thanks Xintong for being our release manager. Well done!

Cheers,
Till

On Fri, Jan 29, 2021 at 9:50 AM Yang Wang  wrote:

> Thanks Xintong for driving this release.
>
> Best,
> Yang
>
> Yu Li wrote on Fri, Jan 29, 2021 at 3:52 PM:
>
>> Thanks Xintong for being our release manager and everyone else who made
>> the release possible!
>>
>> Best Regards,
>> Yu
>>
>>
>> On Fri, 29 Jan 2021 at 15:05, Xintong Song  wrote:
>>
>>> The Apache Flink community is very happy to announce the release of
>>> Apache
>>> Flink 1.10.3, which is the third bugfix release for the Apache Flink 1.10
>>> series.
>>>
>>> Apache Flink® is an open-source stream processing framework for
>>> distributed, high-performing, always-available, and accurate data
>>> streaming
>>> applications.
>>>
>>> The release is available for download at:
>>> https://flink.apache.org/downloads.html
>>>
>>> Please check out the release blog post for an overview of the
>>> improvements
>>> for this bugfix release:
>>> https://flink.apache.org/news/2021/01/29/release-1.10.3.html
>>>
>>> The full release notes are available in Jira:
>>>
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348668
>>>
>>> We would like to thank all contributors of the Apache Flink community who
>>> made this release possible!
>>>
>>> Regards,
>>> Xintong Song
>>>
>>


Re: On monitoring Flink job load: the task-load metric

2021-01-29 Post by hailongwang
Hi,
In `MailboxProcessor#runMailboxLoop`, measure separately the share of time spent in the default mailbox actions (processing business data) and in the event mailbox actions (the synchronous phase of checkpoints, timers, etc.); call these t1 and t2, each in [0,1]. In theory t1 + t2 + the idle share = 1, so from t1 and t2 you can judge whether the single task thread's CPU is saturated.


Best,
Hailong

On 2021-01-29 12:25:56, "1305332" <1305...@163.com> wrote:
>
>
>Hi, everyone:
>A document from Didi mentions:
>
>
> "We have always wanted to measure a task's load precisely, but backpressure metrics only give a rough sense of whether a task has enough resources. With the new Mailbox threading model, all mutually exclusive operations run in the TaskThread, so by measuring how long that thread is occupied we can compute the task's load percentage precisely. In the future this metric could drive resource recommendations and keep task load at a healthy level."
> Regarding measuring the thread's occupied time, how exactly should this be done?
>
>
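As a toy illustration of the idea (plain Python, not Flink internals; all names here are made up): time the two kinds of work in a single-threaded loop and report the shares, which should satisfy t1 + t2 + idle ≈ 1.

import time

def run_loop(duration_s=1.0):
    t_data = t_event = 0.0            # time in "default" vs "event" actions
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        t0 = time.monotonic()
        sum(range(1000))              # stand-in for processing one record
        t_data += time.monotonic() - t0
        t0 = time.monotonic()
        _ = []                        # stand-in for a timer/checkpoint action
        t_event += time.monotonic() - t0
        time.sleep(0.001)             # idle: waiting for input
    total = time.monotonic() - start
    t1, t2 = t_data / total, t_event / total
    print(f"t1={t1:.2f}  t2={t2:.2f}  idle={1 - t1 - t2:.2f}")

run_loop()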


Re: Job submission commands: ./bin/flink run-application -t yarn-application ... vs ./bin/flink run -m yarn-cluster ...

2021-01-29 Post by Yang Wang
Both -m yarn-cluster and -t yarn-per-job can be used to submit per-job applications to a YARN cluster; they are just backed by different CLI implementations. The former, FlinkYarnSessionCli, is the old way; the latter is a more generic mechanism introduced in 1.10 that behaves consistently across K8s, standalone, and so on.

Another difference: -m yarn-cluster supports CLI options such as -yq and -ynm, while -t yarn-per-job can only be configured via -D.


Best,
Yang

lp <973182...@qq.com> wrote on Fri, Jan 29, 2021 at 3:00 PM:

> Put differently: are these two submission modes in 1.11 and 1.12 the same thing, just with a changed command?
>
> Excerpts from the official docs:
>
> flink-1.11:
> Run a single Flink job on YARN
>
> Example:
> ./bin/flink run -m yarn-cluster ./examples/batch/WordCount.jar
>
> --
> flink-1.12:
> Per-Job Cluster Mode
>
> Example:
> ./bin/flink run -t yarn-per-job --detached
> ./examples/streaming/TopSpeedWindowing.jar
>
>
>
> --
> Sent from: http://apache-flink.147419.n8.nabble.com/
>
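Side by side, the two styles look like this (the example jar ships with Flink; the application name is a placeholder):

# Old-style CLI (FlinkYarnSessionCli); YARN-specific flags such as -ynm work:
./bin/flink run -m yarn-cluster -ynm myApp ./examples/streaming/TopSpeedWindowing.jar

# Generic CLI (1.10+); the same setting goes through -D:
./bin/flink run -t yarn-per-job -Dyarn.application.name=myApp ./examples/streaming/TopSpeedWindowing.jar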


????????????

2021-01-29 Post by ???????L
EnvironmentSettings settings = EnvironmentSettings.newInstance().inStreamingMode().useBlinkPlanner().build();
StreamExecutionEnvironment executionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
executionEnvironment.setParallelism(3);
Map

Re: [ANNOUNCE] Apache Flink 1.10.3 released

2021-01-29 文章 Yang Wang
Thanks Xintong for driving this release.

Best,
Yang

> Yu Li wrote on Fri, Jan 29, 2021 at 3:52 PM:

> Thanks Xintong for being our release manager and everyone else who made
> the release possible!
>
> Best Regards,
> Yu
>
>
> On Fri, 29 Jan 2021 at 15:05, Xintong Song  wrote:
>
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink 1.10.3, which is the third bugfix release for the Apache Flink 1.10
>> series.
>>
>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data
>> streaming
>> applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of the improvements
>> for this bugfix release:
>> https://flink.apache.org/news/2021/01/29/release-1.10.3.html
>>
>> The full release notes are available in Jira:
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348668
>>
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>>
>> Regards,
>> Xintong Song
>>
>