Re: When should we use flink-json instead of Jackson directly?

2022-10-28 Thread Yaroslav Tkachenko
Hey Vishal,

I guess you're using the DataStream API? In this case, you have more
control over data serialization, so it makes sense to use custom
serialization logic.

IMO, flink-json (as well as flink-avro, flink-csv, etc.) is really helpful
when using Table API / SQL API, because it contains the formats that can be
used when defining new tables.
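To make that concrete, here is a hedged sketch of a Table API DDL that uses the
flink-json format. The table name, topic, and schema are made up for
illustration; the connector/format option names are the documented ones:

```sql
-- Illustrative only: the table, topic, and fields are hypothetical.
-- 'format' = 'json' is provided by flink-json; the planner, not user
-- code, instantiates the underlying Jackson-based (de)serializer.
CREATE TABLE orders (
  order_id STRING,
  amount   DOUBLE,
  ts       TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  'json.ignore-parse-errors' = 'true'  -- skip malformed records instead of failing
);
```

With the DataStream API, by contrast, a custom DeserializationSchema wrapping
your own ObjectMapper gives you full control, as noted above.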

On Fri, Oct 28, 2022 at 9:46 AM Vishal Surana  wrote:

> I've been using Jackson to deserialize JSON messages into Scala classes
> and Java POJOs. The object mapper is heavily customized for our use cases.
> It seems that flink-json internally uses Jackson as well and allows for
> injecting our own mappers. Would there be any benefit of using flink-json
> instead?
>
> --
> Thanks,
> Vishal
>


When should we use flink-json instead of Jackson directly?

2022-10-28 Thread Vishal Surana
I've been using Jackson to deserialize JSON messages into Scala classes and
Java POJOs. The object mapper is heavily customized for our use cases. It
seems that flink-json internally uses Jackson as well and allows for
injecting our own mappers. Would there be any benefit of using flink-json
instead?

-- 
Thanks,
Vishal


Re: [ANNOUNCE] Apache Flink 1.16.0 released

2022-10-28 Thread Jing Ge
Congrats!

On Fri, Oct 28, 2022 at 1:22 PM 任庆盛  wrote:

> Congratulations and a big thanks to Chesnay, Martijn, Godfrey and Xingbo
> for the awesome work for 1.16!
>
> Best regards,
> Qingsheng Ren
>
> > On Oct 28, 2022, at 14:46, Xingbo Huang  wrote:
> >
> > The Apache Flink community is very happy to announce the release of
> Apache
> > Flink 1.16.0, which is the first release for the Apache Flink 1.16
> series.
> >
> > Apache Flink® is an open-source stream processing framework for
> > distributed, high-performing, always-available, and accurate data
> streaming
> > applications.
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > Please check out the release blog post for an overview of the
> > improvements for this release:
> > https://flink.apache.org/news/2022/10/28/1.16-announcement.html
> >
> > The full release notes are available in Jira:
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351275
> >
> > We would like to thank all contributors of the Apache Flink community
> > who made this release possible!
> >
> > Regards,
> > Chesnay, Martijn, Godfrey & Xingbo
>


Re: [ANNOUNCE] Apache Flink 1.16.0 released

2022-10-28 Thread 任庆盛
Congratulations and a big thanks to Chesnay, Martijn, Godfrey and Xingbo for 
the awesome work for 1.16! 

Best regards,
Qingsheng Ren

> On Oct 28, 2022, at 14:46, Xingbo Huang  wrote:
> 
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.16.0, which is the first release for the Apache Flink 1.16 series.
> 
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> Please check out the release blog post for an overview of the
> improvements for this release:
> https://flink.apache.org/news/2022/10/28/1.16-announcement.html
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351275
> 
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
> 
> Regards,
> Chesnay, Martijn, Godfrey & Xingbo



No data consumed from Kafka when upsert-kafka is used as a source

2022-10-28 Thread 左岩
When upsert-kafka is used as a source, no data is consumed from Kafka.
I created an upsert-kafka connector table via Flink SQL to read a topic, but no
data comes out, whereas a plain kafka connector consuming the same topic can
read it. Both read the very same topic. The code is as follows:
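For comparison, here is a hedged sketch of an upsert-kafka source table (the
table name, topic, and fields are hypothetical). One frequent difference from
the plain kafka connector worth checking: upsert-kafka requires a declared
PRIMARY KEY and separate key/value formats, and it interprets records as
upserts by key (a null value is a delete), so the topic's key layout must
match the declaration:

```sql
-- Hypothetical table; upsert-kafka needs a PRIMARY KEY (NOT ENFORCED)
-- and key/value formats instead of a single 'format' option.
CREATE TABLE user_scores (
  user_id STRING,
  score   BIGINT,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'user-scores',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);
```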

Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded

2022-10-28 Thread weijie guo
Then there is no way to get it from the Table API. I'm a bit puzzled, though:
why do you need to determine the execution mode automatically, rather than
setting it according to the actual characteristics of your job?
If you expect to run a job in batch mode and some of its sources are unbounded,
I'd say the choice of sources is itself unreasonable and the code should be changed.
Also, the streaming and batch execution modes differ in many ways (for example,
whether the sum operator emits multiple results per key or a single one), and
that is something you need to weigh when choosing an execution mode. Even if
automatic inference were supported, letting the system infer the mode could
produce a lot of unexpected behavior.
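For reference, the explicit setting under discussion is a one-line
configuration (a sketch; as noted in this thread, the Table API accepts only
BATCH or STREAMING here):

```sql
-- SQL client / TableConfig: the mode must be explicit for Table API jobs.
SET 'execution.runtime-mode' = 'batch';  -- or 'streaming'
```

In the DataStream API the equivalent is
`env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC)`, which selects batch
execution when every source is bounded.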

Best regards,

Weijie


junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 17:51:

>
> I'm on Flink 1.14.5.
>
> EnvironmentSettings.fromConfiguration(ReadableConfig configuration) {
> final Builder builder = new Builder();
> switch (configuration.get(RUNTIME_MODE)) {
> case STREAMING:
> builder.inStreamingMode();
> break;
> case BATCH:
> builder.inBatchMode();
> break;
> case AUTOMATIC:
> default:
> throw new TableException(
> String.format(
> "Unsupported mode '%s' for '%s'. "
> + "Only an explicit BATCH or STREAMING
> mode is supported in Table API.",
> configuration.get(RUNTIME_MODE),
> RUNTIME_MODE.key()));
> }
> This restricts it: AUTOMATIC is not supported.
>
>
> From: TonyChen
> Sent: 2022-10-28 17:13
> To: user-zh
> Subject: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> Bump the patch version; AUTOMATIC is already available in 1.14.3.
>
>
> Best,
> TonyChen
>
> > On Oct 28, 2022, at 17:09, junjie.m...@goupwith.com wrote:
> >
> > hi, weijie:
> > I'm using Flink 1.14, which does not support setting execution.runtime-mode=AUTOMATIC; it fails with the following error:
> > org.apache.flink.table.api.TableException: Unsupported mode 'AUTOMATIC'
> > for 'execution.runtime-mode'. Only an explicit BATCH or STREAMING mode is
> > supported in Table API.
> >
> > Is execution.runtime-mode=AUTOMATIC already supported in a later version?
> >
> >
> > From: weijie guo
> > Sent: 2022-10-28 16:38
> > To: user-zh
> > Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> > For this requirement you can set execution.runtime-mode to AUTOMATIC.
> > That mode is exactly what you're asking for: if all sources are bounded, the batch execution mode is used; otherwise, streaming.
> >
> > Best regards,
> >
> > Weijie
> >
> >
> junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:56:
> >
> >> hi, Weijie:
> >>
> >> Since I haven't found a way to tell whether a pipeline is bounded or
> >> unbounded, especially for the Table API, where execution.runtime-mode
> >> has to be set manually to BATCH or STREAMING.
> >>
> >> I want to derive whether the whole pipeline is bounded from the
> >> boundedness of all its sources: if every source is bounded, the
> >> pipeline must be bounded; otherwise it is unbounded.
> >>
> >>
> >> From: weijie guo
> >> Sent: 2022-10-28 15:44
> >> To: user-zh
> >> Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> >> Hi, junjie:
> >>
> >> First, I'd like to understand your goal: why do you need to determine a
> >> source's boundedness in the Table API, and how would that information
> >> help your use case?
> >>
> >> Best regards,
> >>
> >> Weijie
> >>
> >>
> >> junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:36:
> >>
> >>> public static DynamicTableSource FactoryUtil.createTableSource(@Nullable
> >>> Catalog catalog, ObjectIdentifier objectIdentifier, ResolvedCatalogTable
> >>> catalogTable, ReadableConfig configuration, ClassLoader classLoader,
> >>> boolean isTemporary) can produce one, but it takes too many parameters,
> >>> and I don't know how to obtain them.
> >>>
> >>> From: junjie.m...@goupwith.com
> >>> Sent: 2022-10-28 15:33
> >>> To: user-zh
> >>> Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> >>> The question is how to get hold of a ScanTableSource; so far I haven't
> >>> found where one can be obtained.
> >>>
> >>>
> >>> From: TonyChen
> >>> Sent: 2022-10-28 15:21
> >>> To: user-zh
> >>> Subject: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> >>> Maybe take a look at this:
> >>>
> >>>
> >>
> https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/ScanTableSource.java#L94
> >>> Best,
> >>> TonyChen
> On Oct 28, 2022, at 15:12, junjie.m...@goupwith.com wrote:
>
> Hi everyone:
>   How can I tell whether a Table is bounded or unbounded, or get the source
>   table information and its boundedness through the TableEnv?
> 
> >>>
> >>
>
>


Re: How to tell which sources in a pipeline are bounded and which are unbounded

2022-10-28 Thread TonyChen
Bump the patch version; AUTOMATIC is already available in 1.14.3.


Best,
TonyChen

> On Oct 28, 2022, at 17:09, junjie.m...@goupwith.com wrote:
>
> hi, weijie:
> I'm using Flink 1.14, which does not support setting execution.runtime-mode=AUTOMATIC; it fails with the following error:
> org.apache.flink.table.api.TableException: Unsupported mode 'AUTOMATIC' for
> 'execution.runtime-mode'. Only an explicit BATCH or STREAMING mode is
> supported in Table API.
>
> Is execution.runtime-mode=AUTOMATIC already supported in a later version?
> 
> 
> From: weijie guo
> Sent: 2022-10-28 16:38
> To: user-zh
> Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> For this requirement you can set execution.runtime-mode to AUTOMATIC.
> That mode is exactly what you're asking for: if all sources are bounded, the batch execution mode is used; otherwise, streaming.
> 
> Best regards,
> 
> Weijie
> 
> 
junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:56:
> 
>> hi, Weijie:
>>
>> Since I haven't found a way to tell whether a pipeline is bounded or unbounded, especially for the Table API, where execution.runtime-mode has to be set manually to BATCH or STREAMING.
>>
>> I want to derive whether the whole pipeline is bounded from the boundedness of all its sources: if every source is bounded, the pipeline must be bounded; otherwise it is unbounded.
>> 
>> 
>> From: weijie guo
>> Sent: 2022-10-28 15:44
>> To: user-zh
>> Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
>> Hi, junjie:
>>
>> First, I'd like to understand your goal: why do you need to determine a source's boundedness in the Table API, and how would that information help your use case?
>> 
>> Best regards,
>> 
>> Weijie
>> 
>> 
>> junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:36:
>> 
>>> public static DynamicTableSource FactoryUtil.createTableSource(@Nullable
>>> Catalog catalog, ObjectIdentifier objectIdentifier, ResolvedCatalogTable
>>> catalogTable, ReadableConfig configuration, ClassLoader classLoader,
>>> boolean isTemporary) can produce one, but it takes too many parameters,
>>> and I don't know how to obtain them.
>>> 
>>> From: junjie.m...@goupwith.com
>>> Sent: 2022-10-28 15:33
>>> To: user-zh
>>> Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
>>> The question is how to get hold of a ScanTableSource; so far I haven't found where one can be obtained.
>>> 
>>> 
>>> From: TonyChen
>>> Sent: 2022-10-28 15:21
>>> To: user-zh
>>> Subject: Re: How to tell which sources in a pipeline are bounded and which are unbounded
>>> Maybe take a look at this:
>>> 
>>> 
>> https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/ScanTableSource.java#L94
>>> Best,
>>> TonyChen
>>>> On Oct 28, 2022, at 15:12, junjie.m...@goupwith.com wrote:
>>>>
>>>> Hi everyone:
>>>>   How can I tell whether a Table is bounded or unbounded, or get the source table information and its boundedness through the TableEnv?
 
>>> 
>> 



Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded

2022-10-28 Thread weijie guo
For this requirement you can set execution.runtime-mode to AUTOMATIC.
That mode is exactly what you're asking for: if all sources are bounded, the batch execution mode is used; otherwise, streaming.

Best regards,

Weijie


junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:56:

> hi, Weijie:
>
> Since I haven't found a way to tell whether a pipeline is bounded or unbounded, especially for the Table API, where execution.runtime-mode has to be set manually to BATCH or STREAMING.
>
> I want to derive whether the whole pipeline is bounded from the boundedness of all its sources: if every source is bounded, the pipeline must be bounded; otherwise it is unbounded.
>
>
> From: weijie guo
> Sent: 2022-10-28 15:44
> To: user-zh
> Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> Hi, junjie:
>
> First, I'd like to understand your goal: why do you need to determine a source's boundedness in the Table API, and how would that information help your use case?
>
> Best regards,
>
> Weijie
>
>
> junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:36:
>
> > public static DynamicTableSource FactoryUtil.createTableSource(@Nullable
> > Catalog catalog, ObjectIdentifier objectIdentifier, ResolvedCatalogTable
> > catalogTable, ReadableConfig configuration, ClassLoader classLoader,
> > boolean isTemporary) can produce one, but it takes too many parameters,
> > and I don't know how to obtain them.
> >
> > From: junjie.m...@goupwith.com
> > Sent: 2022-10-28 15:33
> > To: user-zh
> > Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> > The question is how to get hold of a ScanTableSource; so far I haven't found where one can be obtained.
> >
> >
> > From: TonyChen
> > Sent: 2022-10-28 15:21
> > To: user-zh
> > Subject: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> > Maybe take a look at this:
> >
> >
> https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/ScanTableSource.java#L94
> > Best,
> > TonyChen
> > > On Oct 28, 2022, at 15:12, junjie.m...@goupwith.com wrote:
> > >
> > > Hi everyone:
> > >   How can I tell whether a Table is bounded or unbounded, or get the source table information and its boundedness through the TableEnv?
> > >
> >
>


Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded

2022-10-28 Thread weijie guo
Hi, junjie:

First, I'd like to understand your goal: why do you need to determine a source's boundedness in the Table API, and how would that information help your use case?

Best regards,

Weijie


junjie.m...@goupwith.com wrote on Fri, Oct 28, 2022 at 15:36:

> public static DynamicTableSource FactoryUtil.createTableSource(@Nullable
> Catalog catalog, ObjectIdentifier objectIdentifier, ResolvedCatalogTable
> catalogTable, ReadableConfig configuration, ClassLoader classLoader,
> boolean isTemporary) can produce one, but it takes too many parameters,
> and I don't know how to obtain them.
>
> From: junjie.m...@goupwith.com
> Sent: 2022-10-28 15:33
> To: user-zh
> Subject: Re: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> The question is how to get hold of a ScanTableSource; so far I haven't found where one can be obtained.
>
>
> From: TonyChen
> Sent: 2022-10-28 15:21
> To: user-zh
> Subject: Re: How to tell which sources in a pipeline are bounded and which are unbounded
> Maybe take a look at this:
>
> https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/ScanTableSource.java#L94
> Best,
> TonyChen
> > On Oct 28, 2022, at 15:12, junjie.m...@goupwith.com wrote:
> >
> > Hi everyone:
> >   How can I tell whether a Table is bounded or unbounded, or get the source table information and its boundedness through the TableEnv?
> >
>


Re: How to tell which sources in a pipeline are bounded and which are unbounded

2022-10-28 Thread TonyChen
Maybe take a look at this:

https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/ScanTableSource.java#L94


Best,
TonyChen

> On Oct 28, 2022, at 15:12, junjie.m...@goupwith.com wrote:
>
> Hi everyone:
>   How can I tell whether a Table is bounded or unbounded, or get the source table information and its boundedness through the TableEnv?
> 



Performing left join between two streams

2022-10-28 Thread Surendra Lalwani via user
Hi Team,

Is it possible in Flink to perform a left outer join between two streams, as
is possible in Spark? I checked the interval join, and it only supports inner
joins as of now.
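For context, a left outer join is expressible at the Table/SQL layer; a
minimal sketch with hypothetical table names (a regular streaming left join
keeps state for both inputs, so configuring a state TTL is usually advisable):

```sql
-- Hypothetical tables. Rows from `orders` without a matching payment are
-- emitted with NULLs on the right side, like a classic LEFT OUTER JOIN.
SELECT o.order_id, o.amount, p.status
FROM orders AS o
LEFT JOIN payments AS p
  ON o.order_id = p.order_id;
```

At the DataStream level, a similar effect is often built with coGroup or a
KeyedCoProcessFunction with timers, since the built-in interval join emits
only matched pairs.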

Thanks and Regards ,
Surendra Lalwani

-- 

IMPORTANT NOTICE: This e-mail, including any attachments, may contain 
confidential information and is intended only for the addressee(s) named 
above. If you are not the intended recipient(s), you should not 
disseminate, distribute, or copy this e-mail. Please notify the sender by 
reply e-mail immediately if you have received this e-mail in error and 
permanently delete all copies of the original message from your system. 
E-mail transmission cannot be guaranteed to be secure as it could be 
intercepted, corrupted, lost, destroyed, arrive late or incomplete, or 
contain viruses. Company accepts no liability for any damage or loss of 
confidential information caused by this email or due to any virus 
transmitted by this email or otherwise.


[ANNOUNCE] Apache Flink 1.16.0 released

2022-10-28 Thread Xingbo Huang
The Apache Flink community is very happy to announce the release of Apache
Flink 1.16.0, which is the first release for the Apache Flink 1.16 series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the
improvements for this release:
https://flink.apache.org/news/2022/10/28/1.16-announcement.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351275

We would like to thank all contributors of the Apache Flink community
who made this release possible!

Regards,
Chesnay, Martijn, Godfrey & Xingbo



Re: Broadcast state and job restarts

2022-10-28 Thread Zakelly Lan
Hi Alexis,

Broadcast state is a type of operator state; it is included
in savepoints and checkpoints, so it won't be lost.
Please refer to
https://stackoverflow.com/questions/62509773/flink-broadcast-state-rocksdb-state-backend/62510423#62510423
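A minimal sketch of the broadcast-state pattern in question (the Rule/Event
classes and the two input streams are hypothetical, and this is not a complete
job; the point is that the state written via getBroadcastState is snapshotted
in checkpoints/savepoints as described above):

```java
// Sketch only: assumes hypothetical Rule/Event classes and existing
// DataStreams `events` (keyed input) and `ruleUpdates` (control input).
MapStateDescriptor<String, Rule> ruleDesc =
    new MapStateDescriptor<>("rules", String.class, Rule.class);

BroadcastStream<Rule> broadcastRules = ruleUpdates.broadcast(ruleDesc);

events.keyBy(e -> e.key)
      .connect(broadcastRules)
      .process(new KeyedBroadcastProcessFunction<String, Event, Rule, String>() {
          @Override
          public void processElement(Event e, ReadOnlyContext ctx,
                                     Collector<String> out) throws Exception {
              // Read-only view of the broadcast state on the data path.
              Rule rule = ctx.getBroadcastState(ruleDesc).get("active");
              if (rule != null) {
                  out.collect(rule.apply(e));
              }
          }

          @Override
          public void processBroadcastElement(Rule r, Context ctx,
                                              Collector<String> out) throws Exception {
              // Updates here are checkpointed and restored with the job.
              ctx.getBroadcastState(ruleDesc).put("active", r);
          }
      });
```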

Best,
Zakelly

On Fri, Oct 28, 2022 at 4:41 AM Alexis Sarda-Espinosa
 wrote:
>
> Hello,
>
> The documentation for broadcast state specifies that it is always kept in 
> memory. My assumptions based on this statement are:
>
> 1. If a job restarts in the same Flink cluster (i.e. using a restart 
> strategy), the tasks' attempt number increases and the broadcast state is 
> restored since it's not lost from memory.
> 2. If the whole Flink cluster is restarted with a savepoint, broadcast state 
> will not be restored and I need to write my application with this in mind.
>
> Are these correct?
>
> Regards,
> Alexis.
>