[ANNOUNCE] Apache Flink 1.18.1 released

2024-01-19 Thread Jing Ge
The Apache Flink community is very happy to announce the release of Apache
Flink 1.18.1, which is the first bugfix release for the Apache Flink 1.18
series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements
for this bugfix release:
https://flink.apache.org/2024/01/19/apache-flink-1.18.1-release-announcement/

Please note: Users who have state compression enabled should not migrate to 1.18.1
(nor 1.18.0) due to a critical bug that could lead to data loss. Please
refer to FLINK-34063 for more information.

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353640

We would like to thank all contributors of the Apache Flink community who
made this release possible! Special thanks to @Qingsheng Ren, @Leonard Xu,
@Xintong Song, @Matthias Pohl, and @Martijn Visser for their support during
this release.

A Jira task series based on the Flink release wiki has been created for the
1.18.1 release. Tasks that need to be done by the PMC have been created
separately, which makes it convenient for the release manager to reach out
to the PMC for those tasks. Any future patch release could consider cloning
it and following the standard release process.
https://issues.apache.org/jira/browse/FLINK-33824

Feel free to reach out to the release managers (or respond to this thread)
with feedback on the release process. Our goal is to constantly improve it,
so feedback on what could be improved or what didn't go so well is
appreciated.

Regards,
Jing


RE: Re: RE: Missing binlog file issue

2024-01-19 Thread Jiabao Sun
Hi,

Does the log contain any GTID information?
Could you run SHOW VARIABLES LIKE 'gtid_mode'; to confirm whether GTID is enabled?
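In case it helps, here is a small self-contained JDBC sketch of that check;
the host and credentials are placeholders, not taken from your setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GtidModeCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection settings for the replica you want to inspect.
        try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://replica2:3306/", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW VARIABLES LIKE 'gtid_mode'")) {
            if (rs.next()) {
                // Expect Value = ON if GTID-based positioning is possible.
                System.out.println(rs.getString("Variable_name")
                        + " = " + rs.getString("Value"));
            }
        }
    }
}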

Best,
Jiabao


On 2024/01/19 09:36:38 wyk wrote:
> Sorry, the specific error and code are as follows:
> 
> The error:
> Caused by: java.lang.IllegalStateException: The connector is trying to read 
> binlog starting at 
> Struct{version=1.5.4.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1705645599953,db=,server_id=0,file=mysql_bin.007132,pos=729790304,row=0},
>  but this is no longer available on the server. Reconfigure the connector to 
> use a snapshot when needed.
> at 
> com.ververica.cdc.connectors.mysql.debezium.task.context.StatefulTaskContext.loadStartingOffsetState(StatefulTaskContext.java:179)
> at 
> com.ververica.cdc.connectors.mysql.debezium.task.context.StatefulTaskContext.configure(StatefulTaskContext.java:112)
> at 
> com.ververica.cdc.connectors.mysql.debezium.reader.BinlogSplitReader.submitSplit(BinlogSplitReader.java:93)
> at 
> com.ververica.cdc.connectors.mysql.debezium.reader.BinlogSplitReader.submitSplit(BinlogSplitReader.java:65)
> at 
> com.ververica.cdc.connectors.mysql.source.reader.MySqlSplitReader.checkSplitOrStartNext(MySqlSplitReader.java:170)
> at 
> com.ververica.cdc.connectors.mysql.source.reader.MySqlSplitReader.fetch(MySqlSplitReader.java:75)
> at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
> ... 6 more
> 
> The code:
> if (!isBinlogAvailable(mySqlOffsetContext)) {
>     throw new IllegalStateException(
>             "The connector is trying to read binlog starting at "
>                     + mySqlOffsetContext.getSourceInfo()
>                     + ", but this is no longer "
>                     + "available on the server. Reconfigure the connector to "
>                     + "use a snapshot when needed.");
> }
> 
> On 2024-01-19 17:33:03, "Jiabao Sun" wrote:
> >Hi,
> >
>Your images didn't come through. Could you post a link to an image host, or paste the code directly?
> >
> >Best,
> >Jiabao
> >
> >
> >On 2024/01/19 09:16:55 wyk wrote:
> >> Hi all,
> >> I have a question about a missing binlog file and would like to ask for your advice. The details are as follows:
> >> 
> >> Problem description:
> >> Scenario: our company's MySQL has two replicas, replica 1 and replica 2.
> >> 1. Replica 1 needs to be decommissioned, so the job has to be migrated to replica 2.
> >> 2. I took a savepoint of the job as usual, changed the connection information to replica 2, and restarted from the savepoint. It then failed with an error saying the binlog file does not exist; the error screenshot is Figure 1 below.
> >> 3. Following the error I located the corresponding code (Figure 2 below). It is a piece of logic that checks whether the binlog file exists. My understanding is that when starting from a GTID we do not need to touch the binlog file, so I commented this code out. The job then started from the savepoint normally and data ingestion works fine.
> >> 
> >> Question: is this check for the existence of the binlog file actually needed, or should it be changed to check whether the GTID still exists? Looking forward to your reply, thanks.
> >> 
> >> Note: the GTIDs of replica 1 and replica 2 are kept consistent.
> >> 
> >> Figure 1: (screenshot not included)
> >> 
> >> Figure 2: (screenshot not included)
> 

RE: Re: Python flink statefun

2024-01-19 Thread Jiabao Sun
Hi Alex,

I think that logic is in IngressWebServer[1] and EgressWebServer[2].

Best,
Jiabao


[1] 
https://github.com/apache/flink-statefun-playground/blob/5b52061784626c8685ab33e172e4471840ce5ee1/playground-internal/statefun-playground-entrypoint/src/main/java/org/apache/flink/statefun/playground/internal/io/flink/IngressWebServer.java#L18
[2] 
https://github.com/apache/flink-statefun-playground/blob/5b52061784626c8685ab33e172e4471840ce5ee1/playground-internal/statefun-playground-entrypoint/src/main/java/org/apache/flink/statefun/playground/internal/io/flink/EgressWebServer.java#L30
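As a rough, hypothetical sketch of what such an entrypoint does (this is not
the playground code; see the two links above for the real implementation):
one embedded HTTP server binds the ingress port from module.yaml (8090) and
turns each PUT into a message for the addressed function, while a second
server on the egress port (8091) answers GETs with whatever the functions
emitted.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class EntrypointSketch {
    public static void main(String[] args) throws Exception {
        // Ingress side: what the curl PUT from the README talks to.
        HttpServer ingress = HttpServer.create(new InetSocketAddress(8090), 0);
        ingress.createContext("/", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            // The real entrypoint maps the URL path (namespace/type/id) and body
            // to a message for the addressed function; here we only log it.
            System.out.println("PUT " + exchange.getRequestURI()
                    + " (" + body.length + " bytes)");
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        ingress.start();

        // Egress side: what a GET on port 8091 would read back.
        HttpServer egress = HttpServer.create(new InetSocketAddress(8091), 0);
        egress.createContext("/", exchange -> {
            byte[] reply = "no messages yet\n".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            exchange.getResponseBody().write(reply);
            exchange.close();
        });
        egress.start();
    }
}

That would also explain the no-docker case: running functions.py alone only
starts the function endpoint on port 8000, so there is presumably nothing
listening on 8090/8091 until this entrypoint (normally started by
docker-compose) is running as well.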

On 2024/01/19 09:50:21 Alexandre LANGUILLAT wrote:
> Thanks Sun, I now use the 3.2 version and it works as described in the
> README tutorial! I don't see in the code where the port redirection is
> handled though, e.g. 8090 for PUT and 8091 for GET (they are in the
> module.yaml but I don't see where it's handled in Python).
> 
> Bests,
> 
> Alex
> 
> On Fri, Jan 19, 2024 at 02:44, Jiabao Sun wrote:
> 
> > Hi Alexandre,
> >
> > I couldn't find the image apache/flink-statefun-playground:3.3.0-1.0 in
> > Docker Hub.
> > You can temporarily use the release-3.2 version.
> >
> > Hi Martijn, did we forget to push it to the Docker registry?
> >
> > Best,
> > Jiabao
> >
> > [1] https://hub.docker.com/r/apache/flink-statefun-playground/tags
> >
> > On 2024/01/18 17:09:20 Alexandre LANGUILLAT wrote:
> > > Hi,
> > >
> > > I am trying to run the example provided here:
> > >
> > https://github.com/apache/flink-statefun-playground/tree/release-3.3/python/greeter
> > >
> > > 1 - Following the README, with Docker (which I installed):
> > >
> > > "docker-compose build" works well. But "docker-compose up" returns an
> > error:
> > >
> > > [image: image.png]
> > >
> > > 2 - Without docker, having a virtual env with apache-flink-statefun and
> > > aiohttp installed, I ran "python functions.py", but the server runs on
> > > port 8000 according to the script, and I don't know how the curl (or
> > > Postman) request would work, since it calls port 8090...:
> > >
> > > curl -X PUT -H "Content-Type: application/vnd.example/GreetRequest" -d
> > > '{"name": "Bob"}' localhost:8090/example/person/Bob
> > >
> > >
> > > I wonder what I have to configure additionally? I would actually be keen
> > > to run it without docker, to understand how it works under the hood.
> > >
> > > Bests
> > >
> > > --
> > > Alexandre
> > >
> 
> 
> 
> -- 
> Alexandre Languillat
> 

Re: Python flink statefun

2024-01-19 Thread Alexandre LANGUILLAT
Thanks Sun, I now use the 3.2 version and it works as described in the
README tutorial! I don't see in the code where the port redirection is
handled though, e.g. 8090 for PUT and 8091 for GET (they are in the
module.yaml but I don't see where it's handled in Python).

Bests,

Alex

On Fri, Jan 19, 2024 at 02:44, Jiabao Sun wrote:

> Hi Alexandre,
>
> I couldn't find the image apache/flink-statefun-playground:3.3.0-1.0 in
> Docker Hub.
> You can temporarily use the release-3.2 version.
>
> Hi Martijn, did we forget to push it to the Docker registry?
>
> Best,
> Jiabao
>
> [1] https://hub.docker.com/r/apache/flink-statefun-playground/tags
>
> On 2024/01/18 17:09:20 Alexandre LANGUILLAT wrote:
> > Hi,
> >
> > I am trying to run the example provided here:
> >
> https://github.com/apache/flink-statefun-playground/tree/release-3.3/python/greeter
> >
> > 1 - Following the README, with Docker (which I installed):
> >
> > "docker-compose build" works well. But "docker-compose up" returns an
> error:
> >
> > [image: image.png]
> >
> > 2 - Without docker, having a virtual env with apache-flink-statefun and
> > aiohttp installed, I ran "python functions.py", but the server runs on
> > port 8000 according to the script, and I don't know how the curl (or
> > Postman) request would work, since it calls port 8090...:
> >
> > curl -X PUT -H "Content-Type: application/vnd.example/GreetRequest" -d
> > '{"name": "Bob"}' localhost:8090/example/person/Bob
> >
> >
> > I wonder what I have to configure additionally? I would actually be keen
> > to run it without docker, to understand how it works under the hood.
> >
> > Bests
> >
> > --
> > Alexandre
> >



-- 
Alexandre Languillat


Re: RE: Missing binlog file issue

2024-01-19 Thread wyk
Sorry, the specific error and code are as follows:

The error:
Caused by: java.lang.IllegalStateException: The connector is trying to read 
binlog starting at 
Struct{version=1.5.4.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1705645599953,db=,server_id=0,file=mysql_bin.007132,pos=729790304,row=0},
 but this is no longer available on the server. Reconfigure the connector to 
use a snapshot when needed.
at 
com.ververica.cdc.connectors.mysql.debezium.task.context.StatefulTaskContext.loadStartingOffsetState(StatefulTaskContext.java:179)
at 
com.ververica.cdc.connectors.mysql.debezium.task.context.StatefulTaskContext.configure(StatefulTaskContext.java:112)
at 
com.ververica.cdc.connectors.mysql.debezium.reader.BinlogSplitReader.submitSplit(BinlogSplitReader.java:93)
at 
com.ververica.cdc.connectors.mysql.debezium.reader.BinlogSplitReader.submitSplit(BinlogSplitReader.java:65)
at 
com.ververica.cdc.connectors.mysql.source.reader.MySqlSplitReader.checkSplitOrStartNext(MySqlSplitReader.java:170)
at 
com.ververica.cdc.connectors.mysql.source.reader.MySqlSplitReader.fetch(MySqlSplitReader.java:75)
at 
org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
... 6 more




The code:
if (!isBinlogAvailable(mySqlOffsetContext)) {
    throw new IllegalStateException(
            "The connector is trying to read binlog starting at "
                    + mySqlOffsetContext.getSourceInfo()
                    + ", but this is no longer "
                    + "available on the server. Reconfigure the connector to "
                    + "use a snapshot when needed.");
}
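To make the question in this thread concrete (whether the check should look at
GTIDs instead of binlog file names), here is a minimal, self-contained sketch
of that alternative. Every name in it is hypothetical; it is not the actual
flink-cdc/Debezium code, and real GTID sets are interval ranges rather than the
literal string sets used here.

import java.util.Set;

public class StartingOffsetCheckSketch {

    /** Hypothetical stand-in for the offset restored from the savepoint. */
    static class StartingOffset {
        final String binlogFile;   // e.g. "mysql_bin.007132"
        final Set<String> gtidSet; // empty if no GTIDs were recorded

        StartingOffset(String binlogFile, Set<String> gtidSet) {
            this.binlogFile = binlogFile;
            this.gtidSet = gtidSet;
        }
    }

    /**
     * If the offset carries a GTID set, resume as long as the server still knows
     * those transactions, regardless of the new replica's binlog file names.
     * Otherwise fall back to the existing file-name check.
     */
    static boolean isStartingOffsetAvailable(StartingOffset offset,
                                             Set<String> serverExecutedGtids,
                                             Set<String> serverBinlogFiles) {
        if (!offset.gtidSet.isEmpty()) {
            return serverExecutedGtids.containsAll(offset.gtidSet);
        }
        return serverBinlogFiles.contains(offset.binlogFile);
    }

    public static void main(String[] args) {
        StartingOffset fromSavepoint =
                new StartingOffset("mysql_bin.007132", Set.of("uuid:1-100"));
        // Replica 2 has different binlog files but shares the GTID history.
        boolean ok = isStartingOffsetAvailable(
                fromSavepoint,
                Set.of("uuid:1-100", "uuid:101-200"),
                Set.of("mysql_bin.000001"));
        System.out.println("can resume: " + ok); // prints: can resume: true
    }
}

The point is only that when both replicas share GTID history, availability can
be decided by whether the server still knows the recorded GTIDs rather than by
whether a particular binlog file name still exists; whether the connector
should do that is exactly the question raised in this thread.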

On 2024-01-19 17:33:03, "Jiabao Sun" wrote:
>Hi,
>
>Your images didn't come through. Could you post a link to an image host, or paste the code directly?
>
>Best,
>Jiabao
>
>
>On 2024/01/19 09:16:55 wyk wrote:
>> Hi all,
>> I have a question about a missing binlog file and would like to ask for your advice. The details are as follows:
>> 
>> Problem description:
>> Scenario: our company's MySQL has two replicas, replica 1 and replica 2.
>> 1. Replica 1 needs to be decommissioned, so the job has to be migrated to replica 2.
>> 2. I took a savepoint of the job as usual, changed the connection information to replica 2, and restarted from the savepoint. It then failed with an error saying the binlog file does not exist; the error screenshot is Figure 1 below.
>> 3. Following the error I located the corresponding code (Figure 2 below). It is a piece of logic that checks whether the binlog file exists. My understanding is that when starting from a GTID we do not need to touch the binlog file, so I commented this code out. The job then started from the savepoint normally and data ingestion works fine.
>> 
>> Question: is this check for the existence of the binlog file actually needed, or should it be changed to check whether the GTID still exists? Looking forward to your reply, thanks.
>> 
>> Note: the GTIDs of replica 1 and replica 2 are kept consistent.
>> 
>> Figure 1: (screenshot not included)
>> 
>> Figure 2: (screenshot not included)
>

RE: Missing binlog file issue

2024-01-19 Thread Jiabao Sun
Hi,

Your images didn't come through. Could you post a link to an image host, or paste the code directly?

Best,
Jiabao


On 2024/01/19 09:16:55 wyk wrote:
> Hi all,
> I have a question about a missing binlog file and would like to ask for your advice. The details are as follows:
> 
> Problem description:
> Scenario: our company's MySQL has two replicas, replica 1 and replica 2.
> 1. Replica 1 needs to be decommissioned, so the job has to be migrated to replica 2.
> 2. I took a savepoint of the job as usual, changed the connection information to replica 2, and restarted from the savepoint. It then failed with an error saying the binlog file does not exist; the error screenshot is Figure 1 below.
> 3. Following the error I located the corresponding code (Figure 2 below). It is a piece of logic that checks whether the binlog file exists. My understanding is that when starting from a GTID we do not need to touch the binlog file, so I commented this code out. The job then started from the savepoint normally and data ingestion works fine.
> 
> Question: is this check for the existence of the binlog file actually needed, or should it be changed to check whether the GTID still exists? Looking forward to your reply, thanks.
> 
> Note: the GTIDs of replica 1 and replica 2 are kept consistent.
> 
> Figure 1: (screenshot not included)
> 
> Figure 2: (screenshot not included)

Missing binlog file issue

2024-01-19 Thread wyk


Hi all,
I have a question about a missing binlog file and would like to ask for your advice. The details are as follows:

Problem description:
Scenario: our company's MySQL has two replicas, replica 1 and replica 2.
1. Replica 1 needs to be decommissioned, so the job has to be migrated to replica 2.
2. I took a savepoint of the job as usual, changed the connection information to replica 2, and restarted from the savepoint. It then failed with an error saying the binlog file does not exist; the error screenshot is Figure 1 in the attachment.
3. Following the error I located the corresponding code (Figure 2 in the attachment). It is a piece of logic that checks whether the binlog file exists. My understanding is that when starting from a GTID we do not need to touch the binlog file, so I commented this code out. The job then started from the savepoint normally and data ingestion works fine.

Question: is this check for the existence of the binlog file actually needed, or should it be changed to check whether the GTID still exists? Looking forward to your reply, thanks.

Note: the GTIDs of replica 1 and replica 2 are kept consistent.

Missing binlog file issue

2024-01-19 Thread wyk


Hi all,
I have a question about a missing binlog file and would like to ask for your advice. The details are as follows:

Problem description:
Scenario: our company's MySQL has two replicas, replica 1 and replica 2.
1. Replica 1 needs to be decommissioned, so the job has to be migrated to replica 2.
2. I took a savepoint of the job as usual, changed the connection information to replica 2, and restarted from the savepoint. It then failed with an error saying the binlog file does not exist; the error screenshot is Figure 1 below.
3. Following the error I located the corresponding code (Figure 2 below). It is a piece of logic that checks whether the binlog file exists. My understanding is that when starting from a GTID we do not need to touch the binlog file, so I commented this code out. The job then started from the savepoint normally and data ingestion works fine.

Question: is this check for the existence of the binlog file actually needed, or should it be changed to check whether the GTID still exists? Looking forward to your reply, thanks.

Note: the GTIDs of replica 1 and replica 2 are kept consistent.

Figure 1: (screenshot not included)

Figure 2: (screenshot not included)