Re: [PR] [FLINK-34902][table] Fix IndexOutOfBoundsException for VALUES [flink]

2024-04-26 Thread via GitHub


reswqa commented on PR #24724:
URL: https://github.com/apache/flink/pull/24724#issuecomment-2078864754

   @twalthr, would you mind taking a look at this? Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35012) ChangelogNormalizeRestoreTest.testRestore failure

2024-04-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841108#comment-17841108
 ] 

Weijie Guo commented on FLINK-35012:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173=logs=f2c100be-250b-5e85-7bbe-176f68fcddc5=05efd11e-5400-54a4-0d27-a4663be008a9=11604

> ChangelogNormalizeRestoreTest.testRestore failure
> -
>
> Key: FLINK-35012
> URL: https://issues.apache.org/jira/browse/FLINK-35012
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58716=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=11921
> {code}
> Apr 03 22:57:43 22:57:43.159 [ERROR] Failures: 
> Apr 03 22:57:43 22:57:43.160 [ERROR]   
> ChangelogNormalizeRestoreTest>RestoreTestBase.testRestore:337 
> Apr 03 22:57:43 Expecting actual:
> Apr 03 22:57:43   ["+I[two, 2, b]",
> Apr 03 22:57:43 "+I[one, 1, a]",
> Apr 03 22:57:43 "+I[three, 3, c]",
> Apr 03 22:57:43 "-U[one, 1, a]",
> Apr 03 22:57:43 "+U[one, 1, aa]",
> Apr 03 22:57:43 "-U[three, 3, c]",
> Apr 03 22:57:43 "+U[three, 3, cc]",
> Apr 03 22:57:43 "-D[two, 2, b]",
> Apr 03 22:57:43 "+I[four, 4, d]",
> Apr 03 22:57:43 "+I[five, 5, e]",
> Apr 03 22:57:43 "-U[four, 4, d]",
> Apr 03 22:57:43 "+U[four, 4, dd]"]
> Apr 03 22:57:43 to contain exactly in any order:
> Apr 03 22:57:43   ["+I[one, 1, a]",
> Apr 03 22:57:43 "+I[two, 2, b]",
> Apr 03 22:57:43 "-U[one, 1, a]",
> Apr 03 22:57:43 "+U[one, 1, aa]",
> Apr 03 22:57:43 "+I[three, 3, c]",
> Apr 03 22:57:43 "-D[two, 2, b]",
> Apr 03 22:57:43 "-U[three, 3, c]",
> Apr 03 22:57:43 "+U[three, 3, cc]",
> Apr 03 22:57:43 "+I[four, 4, d]",
> Apr 03 22:57:43 "+I[five, 5, e]",
> Apr 03 22:57:43 "-U[four, 4, d]",
> Apr 03 22:57:43 "+U[four, 4, dd]",
> Apr 03 22:57:43 "+I[six, 6, f]",
> Apr 03 22:57:43 "-D[six, 6, f]"]
> Apr 03 22:57:43 but could not find the following elements:
> Apr 03 22:57:43   ["+I[six, 6, f]", "-D[six, 6, f]"]
> Apr 03 22:57:43 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35245) Add metrics for flink-connector-tidb-cdc

2024-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35245:
---
Labels: pull-request-available  (was: )

> Add metrics for flink-connector-tidb-cdc
> 
>
> Key: FLINK-35245
> URL: https://issues.apache.org/jira/browse/FLINK-35245
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Priority: Major
>  Labels: pull-request-available
>
> [https://github.com/apache/flink-cdc/issues/985] was closed but never 
> resolved, so this new issue is created to track it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35173) Debezium for Mysql connector Custom Time Serializer

2024-04-26 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren updated FLINK-35173:
--
Affects Version/s: cdc-3.1.0
   (was: 3.1.0)

> Debezium for Mysql connector Custom Time Serializer 
> 
>
> Key: FLINK-35173
> URL: https://issues.apache.org/jira/browse/FLINK-35173
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: ZhengYu Chen
>Assignee: ZhengYu Chen
>Priority: Major
>  Labels: CDC, pull-request-available
> Fix For: cdc-3.1.0
>
>
> Currently, Flink CDC produces wrong values for time types (DateTime, Time, 
> Date, Timestamp) when the MySQL connector is used with 
> JsonDebeziumDeserializationSchema for deserialization. The root cause is that 
> the timestamps returned by the underlying Debezium layer are in UTC (such as 
> io.debezium.time.Timestamp). The community already has some 
> [PR|https://github.com/apache/flink-cdc/pull/1366/files#diff-e129e9fae3eea0bb32f0019debb4932413c91088d6dae656e2ecb63913badae4],
>  but they do not work.
> This issue provides a solution based on Debezium's custom converter interface 
> (https://debezium.io/documentation/reference/1.9/development/converters.html):
> users can choose to convert the above four time types into STRING according 
> to a specified time format, so that the JSON can be converted correctly when 
> using the Flink DataStream API.
> When the user enables this converter, it is configured through properties. A 
> DataStream use case:
> {code:java}
> Properties debeziumProperties = new Properties();
> debeziumProperties.setProperty("converters", "datetime");
> debeziumProperties.setProperty("datetime.database.type", DataBaseType.MYSQL.getType());
> debeziumProperties.setProperty("datetime.type", "cn.xxx.sources.cdc.MysqlDebeziumConverter");
> debeziumProperties.setProperty("datetime.format.date", "yyyy-MM-dd");
> debeziumProperties.setProperty("datetime.format.time", "HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.datetime", "yyyy-MM-dd HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp", "yyyy-MM-dd HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp.zone", "UTC+8");
> MySqlSourceBuilder<String> builder = MySqlSource.<String>builder()
>         .hostname(url[0])
>         .port(Integer.parseInt(url[1]))
>         .databaseList(table.getDatabase())
>         .tableList(getTablePattern(table))
>         .username(table.getUserName())
>         .password(table.getPassword())
>         .debeziumProperties(debeziumProperties);
> {code}
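> 
> A minimal sketch of what such a converter might look like, assuming 
> Debezium's CustomConverter SPI; the class and package names above are the 
> user's own, and only the DATE type is handled here for brevity:
> {code:java}
> import io.debezium.spi.converter.CustomConverter;
> import io.debezium.spi.converter.RelationalColumn;
> import org.apache.kafka.connect.data.SchemaBuilder;
> 
> import java.time.LocalDate;
> import java.time.format.DateTimeFormatter;
> import java.util.Properties;
> 
> public class MysqlDebeziumConverter
>         implements CustomConverter<SchemaBuilder, RelationalColumn> {
> 
>     private DateTimeFormatter dateFormatter;
> 
>     @Override
>     public void configure(Properties props) {
>         // Debezium strips the "datetime." converter prefix before passing
>         // properties here, so the key is "format.date".
>         dateFormatter = DateTimeFormatter.ofPattern(
>                 props.getProperty("format.date", "yyyy-MM-dd"));
>     }
> 
>     @Override
>     public void converterFor(
>             RelationalColumn column, ConverterRegistration<SchemaBuilder> registration) {
>         if ("DATE".equalsIgnoreCase(column.typeName())) {
>             // Emit the column as STRING instead of Debezium's epoch-based value.
>             registration.register(SchemaBuilder.string().optional(), value -> {
>                 if (value == null) {
>                     return null;
>                 }
>                 if (value instanceof LocalDate) {
>                     return dateFormatter.format((LocalDate) value);
>                 }
>                 return value.toString();
>             });
>         }
>     }
> }
> {code}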
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35192) operator oom

2024-04-26 Thread Biao Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biao Geng updated FLINK-35192:
--
Attachment: screenshot-3.png

> operator oom
> 
>
> Key: FLINK-35192
> URL: https://issues.apache.org/jira/browse/FLINK-35192
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.6.1
> Environment: jdk: openjdk11
> operator version: 1.6.1
>Reporter: chenyuzhi
>Priority: Major
> Attachments: image-2024-04-22-15-47-49-455.png, 
> image-2024-04-22-15-52-51-600.png, image-2024-04-22-15-58-23-269.png, 
> image-2024-04-22-15-58-42-850.png, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png
>
>
> The Kubernetes operator Docker process was killed by the kernel due to out of 
> memory (at 2024-04-03 18:16):
>  !image-2024-04-22-15-47-49-455.png! 
> Metrics:
> The pod memory (RSS) has been increasing slowly over the past 7 days:
>  !screenshot-1.png! 
> However, the JVM memory metrics of the operator show no obvious anomaly:
>  !image-2024-04-22-15-58-23-269.png! 
>  !image-2024-04-22-15-58-42-850.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35189) Introduce test-filesystem Catalog based on FileSystem Connector to support materialized table

2024-04-26 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841128#comment-17841128
 ] 

dalongliu commented on FLINK-35189:
---

Merged in master: 714d1cb2e0bd0df03393492dc87cbd800af63e1b

> Introduce test-filesystem Catalog based on FileSystem Connector to support 
> materialized table
> -
>
> Key: FLINK-35189
> URL: https://issues.apache.org/jira/browse/FLINK-35189
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem, Table SQL / API, Tests
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35195) Support the execution of create materialized table in continuous refresh mode

2024-04-26 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-35195:
-

Assignee: dalongliu

> Support the execution of create materialized table in continuous refresh mode
> -
>
> Key: FLINK-35195
> URL: https://issues.apache.org/jira/browse/FLINK-35195
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Gateway
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Major
> Fix For: 1.20.0
>
>
> In continuous refresh mode, support creating a materialized table and its 
> background refresh job:
> {code:SQL}
> CREATE MATERIALIZED TABLE [catalog_name.][db_name.]table_name
> [ ([ <table_constraint> ]) ]
> [COMMENT table_comment]
> [PARTITIONED BY (partition_column_name1, partition_column_name2, ...)]
> [WITH (key1=val1, key2=val2, ...)]
> FRESHNESS = INTERVAL '<num>' { SECOND | MINUTE | HOUR | DAY }
> [REFRESH_MODE = { CONTINUOUS | FULL }]
> AS <select_statement>
>
> <table_constraint>:
>   [CONSTRAINT constraint_name] PRIMARY KEY (column_name, ...) NOT ENFORCED
> {code}
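> 
> A hypothetical concrete instance of the grammar above (catalog, table and 
> column names are made up for illustration):
> {code:SQL}
> CREATE MATERIALIZED TABLE my_catalog.my_db.dws_orders
> COMMENT 'hourly aggregated orders'
> PARTITIONED BY (ds)
> WITH ('format' = 'json')
> FRESHNESS = INTERVAL '1' HOUR
> REFRESH_MODE = CONTINUOUS
> AS SELECT ds, COUNT(*) AS order_cnt FROM orders GROUP BY ds;
> {code}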



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35192) operator oom

2024-04-26 Thread Biao Geng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841129#comment-17841129
 ] 

Biao Geng commented on FLINK-35192:
---

 !screenshot-3.png! 
According to the Flink Kubernetes operator's code, deleteOnExit() is called when 
creating config files or pod template files. This could lead to a memory leak if 
the operator pod runs for a long time. Since the operator's FlinkConfigManager 
implementation already cleans up these temp files/dirs, maybe we can safely 
remove the deleteOnExit() usage? cc [~gyfora]

Also, from the attached yaml it looks like a custom operator image 
(gdc-flink-kubernetes-operator:1.6.1-GDC1.0.2) is used. [~stupid_pig], would you 
mind checking whether your code calls methods like deleteOnExit() in your 
customized changes to the operator?
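
For reference, a minimal standalone sketch (not taken from the operator 
codebase) of why deleteOnExit() can accumulate memory in a long-running JVM:
{code:java}
import java.io.File;
import java.io.IOException;

public class DeleteOnExitLeakDemo {
    public static void main(String[] args) throws IOException {
        for (int i = 0; i < 1_000_000; i++) {
            File tmp = File.createTempFile("flink-conf-", ".tmp");
            // Each call appends the path to the JVM-internal DeleteOnExitHook
            // set, which is only drained at JVM shutdown.
            tmp.deleteOnExit();
            // Deleting the file now does NOT remove the hook entry, so the
            // path strings keep accumulating on the heap for the lifetime of
            // the process.
            tmp.delete();
        }
    }
}
{code}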

> operator oom
> 
>
> Key: FLINK-35192
> URL: https://issues.apache.org/jira/browse/FLINK-35192
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.6.1
> Environment: jdk: openjdk11
> operator version: 1.6.1
>Reporter: chenyuzhi
>Priority: Major
> Attachments: image-2024-04-22-15-47-49-455.png, 
> image-2024-04-22-15-52-51-600.png, image-2024-04-22-15-58-23-269.png, 
> image-2024-04-22-15-58-42-850.png, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png
>
>
> The Kubernetes operator Docker process was killed by the kernel due to out of 
> memory (at 2024-04-03 18:16):
>  !image-2024-04-22-15-47-49-455.png! 
> Metrics:
> The pod memory (RSS) has been increasing slowly over the past 7 days:
>  !screenshot-1.png! 
> However, the JVM memory metrics of the operator show no obvious anomaly:
>  !image-2024-04-22-15-58-23-269.png! 
>  !image-2024-04-22-15-58-42-850.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34694) Delete num of associations for streaming outer join

2024-04-26 Thread Roman Boyko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837205#comment-17837205
 ] 

Roman Boyko edited comment on FLINK-34694 at 4/26/24 10:05 AM:
---

Hi [~xu_shuai_]!

I prepared and executed all Nexmark queries that use a streaming join (q4, q9 
and q20). Because all of them use INNER JOIN (while this optimization applies 
only to outer joins), I created a copy of each with FULL OUTER JOIN.

BEFORE optimization:

!image-2024-04-26-16-55-19-800.png!

AFTER optimization:

!image-2024-04-26-16-55-56-994.png!

As you can see, for all INNER JOIN queries the result remains almost the same 
(the small difference is most probably measurement error). But all FULL OUTER 
JOIN benchmarks got faster, especially q20_outer, which was more than 3 times 
better. The reason for such a huge difference can be seen in the flame graphs:

BEFORE optimization:

!image-2024-04-15-19-15-23-010.png!

AFTER optimization:

!image-2024-04-15-19-14-41-909.png!

Because state.update operations dominate in the before-optimization case, the 
RocksDB CompactionJob is invoked more often and consumes most of the CPU time.

Overall the performance boost is 6.75 / 5.15 = 1.31 (31%).


was (Author: rovboyko):
Hi [~xu_shuai_]!

I prepared and executed all Nexmark queries that use a streaming join (q4, q9 
and q20). Because all of them use INNER JOIN (while this optimization applies 
only to outer joins), I created a copy of each with FULL OUTER JOIN.

BEFORE optimization:

!image-2024-04-26-16-55-19-800.png!

AFTER optimization:

!image-2024-04-26-16-55-56-994.png!

As you can see, for all INNER JOIN queries the result remains almost the same 
(the small difference is most probably measurement error). But all FULL OUTER 
JOIN benchmarks got faster, especially q20_outer, which was more than 3 times 
better. The reason for such a huge difference can be seen in the flame graphs:

BEFORE optimization:

!image-2024-04-15-19-15-23-010.png!

AFTER optimization:

!image-2024-04-15-19-14-41-909.png!

Because state.update operations dominate in the before-optimization case, the 
RocksDB CompactionJob is invoked more often and consumes most of the CPU time.

Overall the performance boost is 6.75 / 5.15 = 1.31 (30%).

> Delete num of associations for streaming outer join
> ---
>
> Key: FLINK-34694
> URL: https://issues.apache.org/jira/browse/FLINK-34694
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Roman Boyko
>Priority: Major
> Attachments: image-2024-03-15-19-51-29-282.png, 
> image-2024-03-15-19-52-24-391.png, image-2024-04-15-15-45-51-027.png, 
> image-2024-04-15-15-46-17-671.png, image-2024-04-15-19-14-14-735.png, 
> image-2024-04-15-19-14-41-909.png, image-2024-04-15-19-15-23-010.png, 
> image-2024-04-26-16-55-19-800.png, image-2024-04-26-16-55-56-994.png
>
>
> Currently, in StreamingJoinOperator (non-window), in the case of OUTER JOIN 
> the OuterJoinRecordStateView is used to store an additional field - the 
> number of associations for every record. This leads to storing additional 
> Tuple2 and Integer data for every record in the outer state.
> This functionality is used only for sending:
>  * -D[nullPaddingRecord] in case of the first Accumulate record
>  * +I[nullPaddingRecord] in case of the last Revoke record
> The overhead of storing the additional data and updating the association 
> counter can be avoided by checking the input state for these events.
>  
> The proposed solution can be found here - 
> [https://github.com/rovboyko/flink/commit/1ca2f5bdfc2d44b99d180abb6a4dda123e49d423]
>  
> According to the Nexmark q20 test (changed to OUTER JOIN), it could increase 
> performance by up to 20%:
>  * Before:
> !image-2024-03-15-19-52-24-391.png!
>  * After:
> !image-2024-03-15-19-51-29-282.png!
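> 
> A rough sketch of the idea (simplified, with assumed names - not the actual 
> operator code): instead of persisting an associations counter per record, 
> probe the other side's state when deciding whether to emit or retract the 
> null-padded row:
> {code:java}
> // Sketch: derive the association count on demand instead of storing it.
> boolean hasNoAssociations(RowData input, JoinRecordStateView otherSideState)
>         throws Exception {
>     // Scan the matching records on the other side; if none satisfy the join
>     // condition, the incoming record is currently unmatched, so the
>     // null-padded row must be emitted (or retracted on revoke).
>     for (RowData otherRow : otherSideState.getRecords()) {
>         if (joinCondition.apply(input, otherRow)) {
>             return false;
>         }
>     }
>     return true;
> }
> {code}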



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34694) Delete num of associations for streaming outer join

2024-04-26 Thread Roman Boyko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837205#comment-17837205
 ] 

Roman Boyko edited comment on FLINK-34694 at 4/26/24 10:04 AM:
---

Hi [~xu_shuai_]!

I prepared and executed all Nexmark queries that use a streaming join (q4, q9 
and q20). Because all of them use INNER JOIN (while this optimization applies 
only to outer joins), I created a copy of each with FULL OUTER JOIN.

BEFORE optimization:

!image-2024-04-26-16-55-19-800.png!

AFTER optimization:

!image-2024-04-26-16-55-56-994.png!

As you can see, for all INNER JOIN queries the result remains almost the same 
(the small difference is most probably measurement error). But all FULL OUTER 
JOIN benchmarks got faster, especially q20_outer, which was more than 3 times 
better. The reason for such a huge difference can be seen in the flame graphs:

BEFORE optimization:

!image-2024-04-15-19-15-23-010.png!

AFTER optimization:

!image-2024-04-15-19-14-41-909.png!

Because state.update operations dominate in the before-optimization case, the 
RocksDB CompactionJob is invoked more often and consumes most of the CPU time.

Overall the performance boost is 6.75 / 5.15 = 1.31 (30%).


was (Author: rovboyko):
Hi [~xu_shuai_]!

I prepared and executed all Nexmark queries that use a streaming join (q4, q9 
and q20). Because all of them use INNER JOIN (while this optimization applies 
only to outer joins), I created a copy of each with FULL OUTER JOIN.

BEFORE optimization:

!image-2024-04-26-16-55-19-800.png!

AFTER optimization:

!image-2024-04-26-16-55-56-994.png!

As you can see, for all INNER JOIN queries the result remains almost the same 
(the small difference is most probably measurement error). But all FULL OUTER 
JOIN benchmarks got faster, especially q20_outer, which was more than 3 times 
better. The reason for such a huge difference can be seen in the flame graphs:

BEFORE optimization:

!image-2024-04-15-19-15-23-010.png!

AFTER optimization:

!image-2024-04-15-19-14-41-909.png!

Because state.update operations dominate in the before-optimization case, the 
RocksDB CompactionJob is invoked more often and consumes most of the CPU time.

> Delete num of associations for streaming outer join
> ---
>
> Key: FLINK-34694
> URL: https://issues.apache.org/jira/browse/FLINK-34694
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Roman Boyko
>Priority: Major
> Attachments: image-2024-03-15-19-51-29-282.png, 
> image-2024-03-15-19-52-24-391.png, image-2024-04-15-15-45-51-027.png, 
> image-2024-04-15-15-46-17-671.png, image-2024-04-15-19-14-14-735.png, 
> image-2024-04-15-19-14-41-909.png, image-2024-04-15-19-15-23-010.png, 
> image-2024-04-26-16-55-19-800.png, image-2024-04-26-16-55-56-994.png
>
>
> Currently, in StreamingJoinOperator (non-window), in the case of OUTER JOIN 
> the OuterJoinRecordStateView is used to store an additional field - the 
> number of associations for every record. This leads to storing additional 
> Tuple2 and Integer data for every record in the outer state.
> This functionality is used only for sending:
>  * -D[nullPaddingRecord] in case of the first Accumulate record
>  * +I[nullPaddingRecord] in case of the last Revoke record
> The overhead of storing the additional data and updating the association 
> counter can be avoided by checking the input state for these events.
>  
> The proposed solution can be found here - 
> [https://github.com/rovboyko/flink/commit/1ca2f5bdfc2d44b99d180abb6a4dda123e49d423]
>  
> According to the Nexmark q20 test (changed to OUTER JOIN), it could increase 
> performance by up to 20%:
>  * Before:
> !image-2024-03-15-19-52-24-391.png!
>  * After:
> !image-2024-03-15-19-51-29-282.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [cdc-base] Support `Scan Newly Added Tables` feature [flink-cdc]

2024-04-26 Thread via GitHub


Jiabao-Sun closed pull request #1838: [cdc-base] Support `Scan Newly Added 
Tables` feature
URL: https://github.com/apache/flink-cdc/pull/1838


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35246) SqlClientSSLTest.testGatewayMode failed in AZP

2024-04-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35246:
---
Fix Version/s: 1.20.0

> SqlClientSSLTest.testGatewayMode failed in AZP
> --
>
> Key: FLINK-35246
> URL: https://issues.apache.org/jira/browse/FLINK-35246
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35246) SqlClientSSLTest.testGatewayMode failed in AZP

2024-04-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35246:
---
Description: 
{code:java}
Apr 26 01:51:10 java.lang.IllegalArgumentException: The given host:port 
('localhost/:36112') doesn't contain a valid port
Apr 26 01:51:10 at 
org.apache.flink.util.NetUtils.validateHostPortString(NetUtils.java:120)
Apr 26 01:51:10 at 
org.apache.flink.util.NetUtils.getCorrectHostnamePort(NetUtils.java:81)
Apr 26 01:51:10 at 
org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayAddress(CliOptionsParser.java:325)
Apr 26 01:51:10 at 
org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayModeClient(CliOptionsParser.java:296)
Apr 26 01:51:10 at 
org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:207)
Apr 26 01:51:10 at 
org.apache.flink.table.client.SqlClientTestBase.runSqlClient(SqlClientTestBase.java:111)
Apr 26 01:51:10 at 
org.apache.flink.table.client.SqlClientSSLTest.testGatewayMode(SqlClientSSLTest.java:74)
Apr 26 01:51:10 at 
java.base/java.lang.reflect.Method.invoke(Method.java:580)
Apr 26 01:51:10 at 
java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
Apr 26 01:51:10 at 
java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
Apr 26 01:51:10 at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
Apr 26 01:51:10 at 
java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
Apr 26 01:51:10 at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
Apr 26 01:51:10 at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)

{code}


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173=logs=26b84117-e436-5720-913e-3e280ce55cae=77cc7e77-39a0-5007-6d65-4137ac13a471=12418

> SqlClientSSLTest.testGatewayMode failed in AZP
> --
>
> Key: FLINK-35246
> URL: https://issues.apache.org/jira/browse/FLINK-35246
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
> Fix For: 1.20.0
>
>
> {code:java}
> Apr 26 01:51:10 java.lang.IllegalArgumentException: The given host:port 
> ('localhost/:36112') doesn't contain a valid port
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.validateHostPortString(NetUtils.java:120)
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.getCorrectHostnamePort(NetUtils.java:81)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayAddress(CliOptionsParser.java:325)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayModeClient(CliOptionsParser.java:296)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:207)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientTestBase.runSqlClient(SqlClientTestBase.java:111)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientSSLTest.testGatewayMode(SqlClientSSLTest.java:74)
> Apr 26 01:51:10   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173=logs=26b84117-e436-5720-913e-3e280ce55cae=77cc7e77-39a0-5007-6d65-4137ac13a471=12418



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [tidb] Add metrics for tidb connector [flink-cdc]

2024-04-26 Thread via GitHub


xieyi888 commented on PR #1974:
URL: https://github.com/apache/flink-cdc/pull/1974#issuecomment-2078942188

   > add metrics: currentFetchEventTimeLag, currentEmitEventTimeLag, 
sourceIdleTime for TiKVRichParallelSourceFunction
   
   > Thanks @xieyi888 for the great work! Before this PR could be merged, could 
you please rebase it with the latest `master` branch?
   
   Thanks a lot for pushing this PR.
   As it was created before version 2.3.0 (2022-11-10), I have created a new 
issue and PR to address it:
   https://issues.apache.org/jira/browse/FLINK-35245
   https://github.com/apache/flink-cdc/pull/3266
   
   Please take a look.
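
   For context, a minimal sketch of how such FLIP-33 style metrics are 
typically registered via Flink's MetricGroup. The class and field names are 
assumptions for illustration, not the PR's actual code:

       import org.apache.flink.metrics.Gauge;
       import org.apache.flink.metrics.MetricGroup;

       public class SourceMetricsSketch {
           private volatile long fetchTime;        // wall clock when a record was fetched
           private volatile long emitTime;         // wall clock when a record was emitted
           private volatile long eventTimestamp;   // event time of the last record
           private volatile long lastRecordTime = System.currentTimeMillis();

           // Call once, e.g. from the source function's open() method.
           public void register(MetricGroup metricGroup) {
               metricGroup.gauge("currentFetchEventTimeLag",
                       (Gauge<Long>) () -> fetchTime - eventTimestamp);
               metricGroup.gauge("currentEmitEventTimeLag",
                       (Gauge<Long>) () -> emitTime - eventTimestamp);
               metricGroup.gauge("sourceIdleTime",
                       (Gauge<Long>) () -> System.currentTimeMillis() - lastRecordTime);
           }
       }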


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35244) Move package for flink-connector-tidb-cdc test

2024-04-26 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35244:
--

Assignee: Xie Yi

> Move package for flink-connector-tidb-cdc test
> --
>
> Key: FLINK-35244
> URL: https://issues.apache.org/jira/browse/FLINK-35244
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Assignee: Xie Yi
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-04-26-16-19-39-297.png
>
>
> Test cases for flink-connector-tidb-cdc should be under the
> *org.apache.flink.cdc.connectors.tidb* package
> instead of *org.apache.flink.cdc.connectors*.
> !image-2024-04-26-16-19-39-297.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35246][test] Fix incorrect address construction in SqlClientSSLTest [flink]

2024-04-26 Thread via GitHub


flinkbot commented on PR #24727:
URL: https://github.com/apache/flink/pull/24727#issuecomment-2078971258

   
   ## CI report:
   
   * a327b1e5ee7948bfd7f5c699222be5575e72f6f6 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32877][Filesystem]add HTTP options to gcs-cloud-storage client [flink]

2024-04-26 Thread via GitHub


dannycranmer closed pull request #23226: [FLINK-32877][Filesystem]add HTTP 
options to gcs-cloud-storage client
URL: https://github.com/apache/flink/pull/23226


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32877][Filesystem]add HTTP options to gcs-cloud-storage client [flink]

2024-04-26 Thread via GitHub


dannycranmer commented on PR #23226:
URL: https://github.com/apache/flink/pull/23226#issuecomment-2078971443

   Merged in https://github.com/apache/flink/pull/24673


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-32877) Support for HTTP connect and timeout options while writes in GCS connector

2024-04-26 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer resolved FLINK-32877.
---
Resolution: Done

> Support for HTTP connect and timeout options while writes in GCS connector
> --
>
> Key: FLINK-32877
> URL: https://issues.apache.org/jira/browse/FLINK-32877
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.3, 1.16.2, 1.17.1, 1.19.0, 1.18.1
>Reporter: Jayadeep Jayaraman
>Assignee: Ravi Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The current GCS connector uses the GCS Java storage library and bypasses the 
> Hadoop GCS connector, which supports multiple HTTP options. There are 
> situations where GCS takes longer to respond to a PUT operation than the 
> default timeout allows.
> This change will allow users to customize the connect and read timeouts based 
> on their application.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34694) Delete num of associations for streaming outer join

2024-04-26 Thread Roman Boyko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837205#comment-17837205
 ] 

Roman Boyko edited comment on FLINK-34694 at 4/26/24 10:01 AM:
---

Hi [~xu_shuai_]!

I prepared and executed all Nexmark queries that use a streaming join (q4, q9 
and q20). Because all of them use INNER JOIN (while this optimization applies 
only to outer joins), I created a copy of each with FULL OUTER JOIN.

BEFORE optimization:

!image-2024-04-26-16-55-19-800.png!

AFTER optimization:

!image-2024-04-26-16-55-56-994.png!

As you can see, for all INNER JOIN queries the result remains almost the same 
(the small difference is most probably measurement error). But all FULL OUTER 
JOIN benchmarks got faster, especially q20_outer, which was more than 3 times 
better. The reason for such a huge difference can be seen in the flame graphs:

BEFORE optimization:

!image-2024-04-15-19-15-23-010.png!

AFTER optimization:

!image-2024-04-15-19-14-41-909.png!

Because state.update operations dominate in the before-optimization case, the 
RocksDB CompactionJob is invoked more often and consumes most of the CPU time.


was (Author: rovboyko):
Hi [~xu_shuai_]!

I prepared and executed all Nexmark queries that use a streaming join (q4, q7, 
q9 and q20). Because all of them use INNER JOIN (while this optimization 
applies only to outer joins), I created a copy of each with FULL OUTER JOIN.

BEFORE optimization:

!image-2024-04-15-15-45-51-027.png!

AFTER optimization:

!image-2024-04-15-15-46-17-671.png!

As you can see, for all queries except q20_outer the result remains almost the 
same (the small difference is most probably measurement error). But for 
q20_outer the performance is more than 2 times better (I repeated the test 
several times). The reason for such a huge difference can be seen in the flame 
graphs:

BEFORE optimization:

!image-2024-04-15-19-15-23-010.png!

AFTER optimization:

!image-2024-04-15-19-14-41-909.png!

Because state.update operations dominate in the before-optimization case, the 
RocksDB CompactionJob is invoked more often and consumes most of the CPU time.

There is no such performance boost for q4, q7 and q9 because:
 * q7 translates to an interval join
 * q4 and q9 are transformed to an inner join by FlinkFilterJoinRule (maybe 
this is a bug, I will check later)

> Delete num of associations for streaming outer join
> ---
>
> Key: FLINK-34694
> URL: https://issues.apache.org/jira/browse/FLINK-34694
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Roman Boyko
>Priority: Major
> Attachments: image-2024-03-15-19-51-29-282.png, 
> image-2024-03-15-19-52-24-391.png, image-2024-04-15-15-45-51-027.png, 
> image-2024-04-15-15-46-17-671.png, image-2024-04-15-19-14-14-735.png, 
> image-2024-04-15-19-14-41-909.png, image-2024-04-15-19-15-23-010.png, 
> image-2024-04-26-16-55-19-800.png, image-2024-04-26-16-55-56-994.png
>
>
> Currently, in StreamingJoinOperator (non-window), in the case of OUTER JOIN 
> the OuterJoinRecordStateView is used to store an additional field - the 
> number of associations for every record. This leads to storing additional 
> Tuple2 and Integer data for every record in the outer state.
> This functionality is used only for sending:
>  * -D[nullPaddingRecord] in case of the first Accumulate record
>  * +I[nullPaddingRecord] in case of the last Revoke record
> The overhead of storing the additional data and updating the association 
> counter can be avoided by checking the input state for these events.
>  
> The proposed solution can be found here - 
> [https://github.com/rovboyko/flink/commit/1ca2f5bdfc2d44b99d180abb6a4dda123e49d423]
>  
> According to the Nexmark q20 test (changed to OUTER JOIN), it could increase 
> performance by up to 20%:
>  * Before:
> !image-2024-03-15-19-52-24-391.png!
>  * After:
> !image-2024-03-15-19-51-29-282.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35244) Move package for flink-connector-tidb-cdc test

2024-04-26 Thread Xie Yi (Jira)
Xie Yi created FLINK-35244:
--

 Summary: Move package for flink-connector-tidb-cdc test
 Key: FLINK-35244
 URL: https://issues.apache.org/jira/browse/FLINK-35244
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: Xie Yi
 Attachments: image-2024-04-26-16-19-39-297.png

Test cases for flink-connector-tidb-cdc should be under the
*org.apache.flink.cdc.connectors.tidb* package
instead of *org.apache.flink.cdc.connectors*.
!image-2024-04-26-16-19-39-297.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP3.1][FLINK-35173][cdc][mysql] Debezium custom time serializer for MySQL connector (#3240) [flink-cdc]

2024-04-26 Thread via GitHub


PatrickRen merged PR #3264:
URL: https://github.com/apache/flink-cdc/pull/3264


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-35246) SqlClientSSLTest.testGatewayMode failed in AZP

2024-04-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1784#comment-1784
 ] 

Weijie Guo edited comment on FLINK-35246 at 4/26/24 8:59 AM:
-

{code:java}
InetSocketAddress.createUnresolved(
                SQL_GATEWAY_REST_ENDPOINT_EXTENSION.getTargetAddress(),
                SQL_GATEWAY_REST_ENDPOINT_EXTENSION.getTargetPort())
        .toString()
{code}

Constructing the host:port string this way fails on Java 17 because the 
toString() representation of InetSocketAddress is not guaranteed to return 
something of the form host:port.


was (Author: weijie guo):

{code:java}
InetSocketAddress.createUnresolved(
                SQL_GATEWAY_REST_ENDPOINT_EXTENSION.getTargetAddress(),
                SQL_GATEWAY_REST_ENDPOINT_EXTENSION.getTargetPort())
        .toString()
{code}

this construction of InetSocketAddress fails on Java 17 because the toString() 
representation is not guaranteed to return something of the form host:port.

> SqlClientSSLTest.testGatewayMode failed in AZP
> --
>
> Key: FLINK-35246
> URL: https://issues.apache.org/jira/browse/FLINK-35246
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
> Fix For: 1.20.0
>
>
> {code:java}
> Apr 26 01:51:10 java.lang.IllegalArgumentException: The given host:port 
> ('localhost/:36112') doesn't contain a valid port
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.validateHostPortString(NetUtils.java:120)
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.getCorrectHostnamePort(NetUtils.java:81)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayAddress(CliOptionsParser.java:325)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayModeClient(CliOptionsParser.java:296)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:207)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientTestBase.runSqlClient(SqlClientTestBase.java:111)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientSSLTest.testGatewayMode(SqlClientSSLTest.java:74)
> Apr 26 01:51:10   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173=logs=26b84117-e436-5720-913e-3e280ce55cae=77cc7e77-39a0-5007-6d65-4137ac13a471=12418



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-3.1][FLINK-35235][pipeline-connector][kafka] Fix missing dependencies in the uber jar of Kafka pipeline sink. [flink-cdc]

2024-04-26 Thread via GitHub


Jiabao-Sun merged PR #3263:
URL: https://github.com/apache/flink-cdc/pull/3263


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32877) Support for HTTP connect and timeout options while writes in GCS connector

2024-04-26 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-32877:
--
Affects Version/s: 1.18.1
   1.19.0

> Support for HTTP connect and timeout options while writes in GCS connector
> --
>
> Key: FLINK-32877
> URL: https://issues.apache.org/jira/browse/FLINK-32877
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.3, 1.16.2, 1.17.1, 1.19.0, 1.18.1
>Reporter: Jayadeep Jayaraman
>Assignee: Ravi Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The current GCS connector uses the GCS Java storage library and bypasses the 
> Hadoop GCS connector, which supports multiple HTTP options. There are 
> situations where GCS takes longer to respond to a PUT operation than the 
> default timeout allows.
> This change will allow users to customize the connect and read timeouts based 
> on their application.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32877) Support for HTTP connect and timeout options while writes in GCS connector

2024-04-26 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-32877:
--
Fix Version/s: 1.20.0

> Support for HTTP connect and timeout options while writes in GCS connector
> --
>
> Key: FLINK-32877
> URL: https://issues.apache.org/jira/browse/FLINK-32877
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.3, 1.16.2, 1.17.1
>Reporter: Jayadeep Jayaraman
>Assignee: Ravi Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The current GCS connector uses the GCS Java storage library and bypasses the 
> Hadoop GCS connector, which supports multiple HTTP options. There are 
> situations where GCS takes longer to respond to a PUT operation than the 
> default timeout allows.
> This change will allow users to customize the connect and read timeouts based 
> on their application.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35194) Support describe job syntax and execution

2024-04-26 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-35194:
-

Assignee: xuyang

> Support describe job syntax and execution
> -
>
> Key: FLINK-35194
> URL: https://issues.apache.org/jira/browse/FLINK-35194
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Assignee: xuyang
>Priority: Major
> Fix For: 1.20.0
>
>
> {code:java}
> { DESCRIBE | DESC } JOB 'xxx'
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35119][cdc-runtime] Change DataChangeEvent serialization and deserialization [flink-cdc]

2024-04-26 Thread via GitHub


yuxiqian commented on code in PR #3226:
URL: https://github.com/apache/flink-cdc/pull/3226#discussion_r1580840112


##
flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/serializer/event/DataChangeEventSerializer.java:
##
@@ -76,28 +72,18 @@ public DataChangeEvent deserialize(DataInputView source) 
throws IOException {
 OperationType op = opSerializer.deserialize(source);
 TableId tableId = tableIdSerializer.deserialize(source);
 
+RecordData before = recordDataSerializer.deserialize(source);
+RecordData after = recordDataSerializer.deserialize(source);
+Map<String, String> meta = metaSerializer.deserialize(source);

Review Comment:
   Is it safe to deserialize here if `op` isn't valid? Maybe we should 
validate `op` in advance.
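
   A sketch of the suggested guard (names follow the diff above; the exact 
OperationType constants are assumptions, so treat this as illustrative only):

       OperationType op = opSerializer.deserialize(source);
       switch (op) {
           case INSERT:
           case UPDATE:
           case REPLACE:
           case DELETE:
               break; // known operation, safe to read the payload
           default:
               throw new IOException("Unexpected operation type: " + op);
       }
       TableId tableId = tableIdSerializer.deserialize(source);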



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35119][cdc-runtime] Change DataChangeEvent serialization and deserialization [flink-cdc]

2024-04-26 Thread via GitHub


yuxiqian commented on code in PR #3226:
URL: https://github.com/apache/flink-cdc/pull/3226#discussion_r1580840112


##
flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/serializer/event/DataChangeEventSerializer.java:
##
@@ -76,28 +72,18 @@ public DataChangeEvent deserialize(DataInputView source) 
throws IOException {
 OperationType op = opSerializer.deserialize(source);
 TableId tableId = tableIdSerializer.deserialize(source);
 
+RecordData before = recordDataSerializer.deserialize(source);
+RecordData after = recordDataSerializer.deserialize(source);
+Map<String, String> meta = metaSerializer.deserialize(source);

Review Comment:
   Is it safe to deserialize here if `op` isn't any of the known operations? 
Maybe we should validate `op` in advance.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Kafka part required for Flink Apicurio Avro support. Prototype for review [flink-connector-kafka]

2024-04-26 Thread via GitHub


boring-cyborg[bot] commented on PR #99:
URL: 
https://github.com/apache/flink-connector-kafka/pull/99#issuecomment-2079138385

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35026][runtime][config] Introduce async execution configurations [flink]

2024-04-26 Thread via GitHub


fredia merged PR #24667:
URL: https://github.com/apache/flink/pull/24667


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35176][Connector/JDBC] Support property authentication connection for JDBC catalog & dynamic table [flink-connector-jdbc]

2024-04-26 Thread via GitHub


RocMarshal commented on code in PR #116:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/116#discussion_r1580440708


##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/JdbcCatalog.java:
##
@@ -29,18 +29,22 @@
 import org.apache.flink.table.catalog.exceptions.TableNotExistException;
 
 import java.util.List;
+import java.util.Properties;
+
+import static 
org.apache.flink.connector.jdbc.JdbcConnectionOptions.getBriefAuthProperties;
 
 /** Catalogs for relational databases via JDBC. */
 @PublicEvolving
 public class JdbcCatalog extends AbstractJdbcCatalog {
 
 private final AbstractJdbcCatalog internal;
 
+@Deprecated
 /**
  * Creates a JdbcCatalog.

Review Comment:
   Anchor: A



##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/JdbcCatalog.java:
##
@@ -77,17 +81,42 @@ public JdbcCatalog(
 String pwd,
 String baseUrl,
 String compatibleMode) {
-super(userClassLoader, catalogName, defaultDatabase, username, pwd, 
baseUrl);
+this(
+userClassLoader,
+catalogName,
+defaultDatabase,
+baseUrl,
+compatibleMode,
+getBriefAuthProperties(username, pwd));
+}
+
+/**
+ * Creates a JdbcCatalog.

Review Comment:
   @caicancai Thanks for the comment.
   I typed it in based on the style at 'Anchor: A'.
   Maybe I got the wrong meaning of your comment. Would you mind clarifying in 
more detail? Many thanks. :)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35246) SqlClientSSLTest.testGatewayMode failed in AZP

2024-04-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1784#comment-1784
 ] 

Weijie Guo commented on FLINK-35246:



{code:java}
InetSocketAddress.createUnresolved(
                SQL_GATEWAY_REST_ENDPOINT_EXTENSION.getTargetAddress(),
                SQL_GATEWAY_REST_ENDPOINT_EXTENSION.getTargetPort())
        .toString()
{code}

This construction of InetSocketAddress fails on Java 17 because the toString() 
representation is not guaranteed to return something of the form host:port.
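
A small self-contained illustration of the behavior; the printed format is 
JDK-dependent, and the explicit host:port construction sketched below is one 
way to avoid relying on toString():
{code:java}
import java.net.InetSocketAddress;

public class HostPortDemo {
    public static void main(String[] args) {
        InetSocketAddress addr =
                InetSocketAddress.createUnresolved("localhost", 36112);

        // On JDK 14+ the toString() of an unresolved address adds an infix
        // after the host name, so the result is no longer a plain
        // "host:port" string and NetUtils' validation rejects it.
        System.out.println(addr);

        // Building the string explicitly sidesteps the problem:
        String hostPort = addr.getHostString() + ":" + addr.getPort();
        System.out.println(hostPort); // localhost:36112
    }
}
{code}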

> SqlClientSSLTest.testGatewayMode failed in AZP
> --
>
> Key: FLINK-35246
> URL: https://issues.apache.org/jira/browse/FLINK-35246
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
> Fix For: 1.20.0
>
>
> {code:java}
> Apr 26 01:51:10 java.lang.IllegalArgumentException: The given host:port 
> ('localhost/:36112') doesn't contain a valid port
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.validateHostPortString(NetUtils.java:120)
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.getCorrectHostnamePort(NetUtils.java:81)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayAddress(CliOptionsParser.java:325)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayModeClient(CliOptionsParser.java:296)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:207)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientTestBase.runSqlClient(SqlClientTestBase.java:111)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientSSLTest.testGatewayMode(SqlClientSSLTest.java:74)
> Apr 26 01:51:10   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173=logs=26b84117-e436-5720-913e-3e280ce55cae=77cc7e77-39a0-5007-6d65-4137ac13a471=12418



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35246) SqlClientSSLTest.testGatewayMode failed in AZP

2024-04-26 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35246:
--

 Summary: SqlClientSSLTest.testGatewayMode failed in AZP
 Key: FLINK-35246
 URL: https://issues.apache.org/jira/browse/FLINK-35246
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35246) SqlClientSSLTest.testGatewayMode failed in AZP

2024-04-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-35246:
--

Assignee: Weijie Guo

> SqlClientSSLTest.testGatewayMode failed in AZP
> --
>
> Key: FLINK-35246
> URL: https://issues.apache.org/jira/browse/FLINK-35246
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35246) SqlClientSSLTest.testGatewayMode failed in AZP

2024-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35246:
---
Labels: pull-request-available  (was: )

> SqlClientSSLTest.testGatewayMode failed in AZP
> --
>
> Key: FLINK-35246
> URL: https://issues.apache.org/jira/browse/FLINK-35246
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> {code:java}
> Apr 26 01:51:10 java.lang.IllegalArgumentException: The given host:port 
> ('localhost/:36112') doesn't contain a valid port
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.validateHostPortString(NetUtils.java:120)
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.getCorrectHostnamePort(NetUtils.java:81)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayAddress(CliOptionsParser.java:325)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayModeClient(CliOptionsParser.java:296)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:207)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientTestBase.runSqlClient(SqlClientTestBase.java:111)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientSSLTest.testGatewayMode(SqlClientSSLTest.java:74)
> Apr 26 01:51:10   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173=logs=26b84117-e436-5720-913e-3e280ce55cae=77cc7e77-39a0-5007-6d65-4137ac13a471=12418



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35235) Fix missing dependencies in the uber jar

2024-04-26 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35235.

  Assignee: LvYanquan
Resolution: Fixed

Resolved via

* cdc master: ec643c9dd7365261f3cee620d4d6bd5d042917e0
* cdc release-3.1: b96ea11cc7df6c3d57a155573f29c18bf9d787ae

> Fix missing dependencies in the uber jar
> 
>
> Key: FLINK-35235
> URL: https://issues.apache.org/jira/browse/FLINK-35235
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: LvYanquan
>Assignee: LvYanquan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0
>
> Attachments: image-2024-04-25-15-17-20-987.png, 
> image-2024-04-25-15-17-34-717.png
>
>
> Some Kafka classes were not included in the fat jar.
> !image-2024-04-25-15-17-34-717.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-31223] sql-client.sh fails to start with ssl enabled [flink]

2024-04-26 Thread via GitHub


reswqa commented on PR #22026:
URL: https://github.com/apache/flink/pull/22026#issuecomment-2078960338

   @davidradl It makes sense to backport this, as we should treat it as a 
bugfix: the SQL client previously supported SSL, so this is a kind of 
regression.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35197) Support the execution of suspend, resume materialized table in continuous refresh mode

2024-04-26 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-35197:
-

Assignee: Feng Jin

> Support the execution of suspend, resume materialized table in continuous 
> refresh mode
> --
>
> Key: FLINK-35197
> URL: https://issues.apache.org/jira/browse/FLINK-35197
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Gateway
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Assignee: Feng Jin
>Priority: Major
> Fix For: 1.20.0
>
>
> In continuous refresh mode, support suspend, resume the background refresh 
> job of materialized table.
> {code:SQL}
> // suspend
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name SUSPEND
> // resume
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name RESUME
> [WITH('key1' = 'val1', 'key2' = 'val2')]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35194) Support describe job syntax and execution

2024-04-26 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841122#comment-17841122
 ] 

dalongliu commented on FLINK-35194:
---

Yeah, assigned to you.

> Support describe job syntax and execution
> -
>
> Key: FLINK-35194
> URL: https://issues.apache.org/jira/browse/FLINK-35194
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Assignee: xuyang
>Priority: Major
> Fix For: 1.20.0
>
>
> {code:java}
> { DESCRIBE | DESC } JOB 'xxx'
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35189][test-utils]Introduce test-filesystem Catalog based on FileSystem Connector to support materialized table [flink]

2024-04-26 Thread via GitHub


lsyldliu merged PR #24712:
URL: https://github.com/apache/flink/pull/24712


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35026) Introduce async execution configurations

2024-04-26 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841182#comment-17841182
 ] 

Yanfei Lei commented on FLINK-35026:


Merged into master via 713c30f..3ff2ba4 

> Introduce async execution configurations
> 
>
> Key: FLINK-35026
> URL: https://issues.apache.org/jira/browse/FLINK-35026
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration, Runtime / Task
>Reporter: Yanfei Lei
>Assignee: Yanfei Lei
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35026) Introduce async execution configurations

2024-04-26 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei resolved FLINK-35026.

Resolution: Resolved

> Introduce async execution configurations
> 
>
> Key: FLINK-35026
> URL: https://issues.apache.org/jira/browse/FLINK-35026
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration, Runtime / Task
>Reporter: Yanfei Lei
>Assignee: Yanfei Lei
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [fix] repair a snapshot-split bug: [flink-cdc]

2024-04-26 Thread via GitHub


yuxiqian commented on PR #2968:
URL: https://github.com/apache/flink-cdc/pull/2968#issuecomment-2078771360

   Hi @AidenPerce, could you please rebase this PR onto the latest `master` branch 
before it can be merged? Renaming packages like `com.ververica.cdc` to 
`org.apache.flink.cdc` might be necessary.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-3.1][minor][cdc][docs] Add user guide about providing extra jar package in quickstart docs [flink-cdc]

2024-04-26 Thread via GitHub


Jiabao-Sun merged PR #3251:
URL: https://github.com/apache/flink-cdc/pull/3251


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35173) Debezium for Mysql connector Custom Time Serializer

2024-04-26 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841109#comment-17841109
 ] 

Qingsheng Ren commented on FLINK-35173:
---

flink-cdc release-3.1: 90511b3a65f5a3646f70cfca73e54df363e2d119

flink-cdc master: 6232d84052422aa88299f28074a8437e91db2988

> Debezium for Mysql connector Custom Time Serializer 
> 
>
> Key: FLINK-35173
> URL: https://issues.apache.org/jira/browse/FLINK-35173
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: ZhengYu Chen
>Priority: Major
>  Labels: CDC, pull-request-available
> Fix For: 3.1.0
>
>
> Currently, Flink CDC Time encounters time type errors (including DateTime, 
> Time, Date, TimeStamp) when using MySQL Connector 
> (JsonDebeziumDeserializationSchema) as deserialization, and the converted 
> time is wrong. The essential reason is that the timestamp returned by the 
> bottom layer of Debezium is UTC (such as io.debezium.time.Timestamp). The 
> community has already had some 
> [PRs|https://github.com/apache/flink-cdc/pull/1366/files#diff-e129e9fae3eea0bb32f0019debb4932413c91088d6dae656e2ecb63913badae4],
>  but they do not work.
> This ticket provides a solution based on Debezium's custom 
> converter interface 
> (https://debezium.io/documentation/reference/1.9/development/converters.html):
> users can choose to convert the above four time types into STRING according 
> to the specified time format, ensuring that they can correctly convert JSON 
> when using the Flink DataStream API.
> When the user enables this converter, it needs to be configured via 
> properties. Here is a DataStream use case:
> {code:java}
> Properties debeziumProperties = new Properties();
> debeziumProperties.setProperty("converters", "datetime");
> debeziumProperties.setProperty("datetime.database.type", 
> DataBaseType.MYSQL.getType());
> debeziumProperties.setProperty("datetime.type", 
> "cn.xxx.sources.cdc.MysqlDebeziumConverter");
> debeziumProperties.setProperty("datetime.format.date", "yyyy-MM-dd");
> debeziumProperties.setProperty("datetime.format.time", "HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.datetime", "yyyy-MM-dd 
> HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp", "yyyy-MM-dd 
> HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp.zone", "UTC+8");
> MySqlSourceBuilder builder = MySqlSource.builder()
>         .hostname(url[0])
>         .port(Integer.parseInt(url[1]))
>         .databaseList(table.getDatabase())
>         .tableList(getTablePattern(table))
>         .username(table.getUserName())
>         .password(table.getPassword())
>         .debeziumProperties(debeziumProperties); {code}
>  
>  
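For context, converters of the kind described above implement Debezium's
io.debezium.spi.converter.CustomConverter SPI. The skeleton below is an
illustrative sketch of such a converter for DATE columns, not the ticket's
actual cn.xxx class; it assumes Debezium strips the converter-name prefix
("datetime.") from the properties before calling configure():

{code:java}
import io.debezium.spi.converter.CustomConverter;
import io.debezium.spi.converter.RelationalColumn;
import org.apache.kafka.connect.data.SchemaBuilder;

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Properties;

/** Illustrative sketch; not the ticket's actual converter class. */
public class MysqlDateToStringConverter
        implements CustomConverter<SchemaBuilder, RelationalColumn> {

    private DateTimeFormatter dateFormatter;

    @Override
    public void configure(Properties props) {
        // Receives "datetime.format.date" with the "datetime." prefix stripped.
        dateFormatter =
                DateTimeFormatter.ofPattern(props.getProperty("format.date", "yyyy-MM-dd"));
    }

    @Override
    public void converterFor(
            RelationalColumn column, ConverterRegistration<SchemaBuilder> registration) {
        if ("DATE".equalsIgnoreCase(column.typeName())) {
            registration.register(
                    SchemaBuilder.string().optional(),
                    value -> {
                        if (value == null) {
                            return null;
                        }
                        if (value instanceof LocalDate) {
                            return dateFormatter.format((LocalDate) value);
                        }
                        // Fallback: the connector may hand over other temporal types.
                        return value.toString();
                    });
        }
    }
}
{code}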



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35173) Debezium for Mysql connector Custom Time Serializer

2024-04-26 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35173:
-

Assignee: ZhengYu Chen

> Debezium for Mysql connector Custom Time Serializer 
> 
>
> Key: FLINK-35173
> URL: https://issues.apache.org/jira/browse/FLINK-35173
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: ZhengYu Chen
>Assignee: ZhengYu Chen
>Priority: Major
>  Labels: CDC, pull-request-available
> Fix For: 3.1.0
>
>
> Currently, Flink CDC Time encounters time type errors (including DateTime, 
> Time, Date, TimeStamp) when using MySQL Connector 
> (JsonDebeziumDeserializationSchema) as deserialization, and the converted 
> time is wrong. The essential reason is that the timestamp returned by the 
> bottom layer of Debezium is UTC (such as io.debezium.time.Timestamp). The 
> community has already had some 
> [PRs|https://github.com/apache/flink-cdc/pull/1366/files#diff-e129e9fae3eea0bb32f0019debb4932413c91088d6dae656e2ecb63913badae4],
>  but they do not work.
> This ticket provides a solution based on Debezium's custom 
> converter interface 
> (https://debezium.io/documentation/reference/1.9/development/converters.html):
> users can choose to convert the above four time types into STRING according 
> to the specified time format, ensuring that they can correctly convert JSON 
> when using the Flink DataStream API.
> When the user enables this converter, it needs to be configured via 
> properties. Here is a DataStream use case:
> {code:java}
> Properties debeziumProperties = new Properties();
> debeziumProperties.setProperty("converters", "datetime");
> debeziumProperties.setProperty("datetime.database.type", 
> DataBaseType.MYSQL.getType());
> debeziumProperties.setProperty("datetime.type", 
> "cn.xxx.sources.cdc.MysqlDebeziumConverter");
> debeziumProperties.setProperty("datetime.format.date", "yyyy-MM-dd");
> debeziumProperties.setProperty("datetime.format.time", "HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.datetime", "yyyy-MM-dd 
> HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp", "yyyy-MM-dd 
> HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp.zone", "UTC+8");
> MySqlSourceBuilder builder = MySqlSource.builder()
>         .hostname(url[0])
>         .port(Integer.parseInt(url[1]))
>         .databaseList(table.getDatabase())
>         .tableList(getTablePattern(table))
>         .username(table.getUserName())
>         .password(table.getPassword())
>         .debeziumProperties(debeziumProperties); {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35173) Debezium for Mysql connector Custom Time Serializer

2024-04-26 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren resolved FLINK-35173.
---
Resolution: Fixed

> Debezium for Mysql connector Custom Time Serializer 
> 
>
> Key: FLINK-35173
> URL: https://issues.apache.org/jira/browse/FLINK-35173
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: ZhengYu Chen
>Assignee: ZhengYu Chen
>Priority: Major
>  Labels: CDC, pull-request-available
> Fix For: 3.1.0
>
>
> Currently, Flink CDC Time encounters time type errors (including DateTime, 
> Time, Date, TimeStamp) when using MySQL Connector 
> (JsonDebeziumDeserializationSchema) as deserialization, and the converted 
> time is wrong. The essential reason is that the timestamp returned by the 
> bottom layer of Debezium is UTC (such as io.debezium.time.Timestamp). The 
> community has already had some 
> [PRs|https://github.com/apache/flink-cdc/pull/1366/files#diff-e129e9fae3eea0bb32f0019debb4932413c91088d6dae656e2ecb63913badae4],
>  but they do not work.
> This ticket provides a solution based on Debezium's custom 
> converter interface 
> (https://debezium.io/documentation/reference/1.9/development/converters.html):
> users can choose to convert the above four time types into STRING according 
> to the specified time format, ensuring that they can correctly convert JSON 
> when using the Flink DataStream API.
> When the user enables this converter, it needs to be configured via 
> properties. Here is a DataStream use case:
> {code:java}
> Properties debeziumProperties = new Properties();
> debeziumProperties.setProperty("converters", "datetime");
> debeziumProperties.setProperty("datetime.database.type", 
> DataBaseType.MYSQL.getType());
> debeziumProperties.setProperty("datetime.type", 
> "cn.xxx.sources.cdc.MysqlDebeziumConverter");
> debeziumProperties.setProperty("datetime.format.date", "yyyy-MM-dd");
> debeziumProperties.setProperty("datetime.format.time", "HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.datetime", "yyyy-MM-dd 
> HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp", "yyyy-MM-dd 
> HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp.zone", "UTC+8");
> MySqlSourceBuilder builder = MySqlSource.builder()
>         .hostname(url[0])
>         .port(Integer.parseInt(url[1]))
>         .databaseList(table.getDatabase())
>         .tableList(getTablePattern(table))
>         .username(table.getUserName())
>         .password(table.getPassword())
>         .debeziumProperties(debeziumProperties); {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35173) Debezium for Mysql connector Custom Time Serializer

2024-04-26 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren updated FLINK-35173:
--
Fix Version/s: cdc-3.1.0
   (was: 3.1.0)

> Debezium for Mysql connector Custom Time Serializer 
> 
>
> Key: FLINK-35173
> URL: https://issues.apache.org/jira/browse/FLINK-35173
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: ZhengYu Chen
>Assignee: ZhengYu Chen
>Priority: Major
>  Labels: CDC, pull-request-available
> Fix For: cdc-3.1.0
>
>
> Currently, Flink CDC Time encounters time type errors (including DateTime, 
> Time, Date, TimeStamp) when using MySQL Connector 
> (JsonDebeziumDeserializationSchema) as deserialization, and the converted 
> time is wrong. The essential reason is that the timestamp returned by the 
> bottom layer of Debezium is UTC (such as io.debezium.time.Timestamp). The 
> community has already had some 
> [PRs|https://github.com/apache/flink-cdc/pull/1366/files#diff-e129e9fae3eea0bb32f0019debb4932413c91088d6dae656e2ecb63913badae4],
>  but they do not work.
> This ticket provides a solution based on Debezium's custom 
> converter interface 
> (https://debezium.io/documentation/reference/1.9/development/converters.html):
> users can choose to convert the above four time types into STRING according 
> to the specified time format, ensuring that they can correctly convert JSON 
> when using the Flink DataStream API.
> When the user enables this converter, it needs to be configured via 
> properties. Here is a DataStream use case:
> {code:java}
> Properties debeziumProperties = new Properties();
> debeziumProperties.setProperty("converters", "datetime");
> debeziumProperties.setProperty("datetime.database.type", 
> DataBaseType.MYSQL.getType());
> debeziumProperties.setProperty("datetime.type", 
> "cn.xxx.sources.cdc.MysqlDebeziumConverter");
> debeziumProperties.setProperty("datetime.format.date", "yyyy-MM-dd");
> debeziumProperties.setProperty("datetime.format.time", "HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.datetime", "yyyy-MM-dd 
> HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp", "yyyy-MM-dd 
> HH:mm:ss");
> debeziumProperties.setProperty("datetime.format.timestamp.zone", "UTC+8");
> MySqlSourceBuilder builder = MySqlSource.builder()
>         .hostname(url[0])
>         .port(Integer.parseInt(url[1]))
>         .databaseList(table.getDatabase())
>         .tableList(getTablePattern(table))
>         .username(table.getUserName())
>         .password(table.getPassword())
>         .debeziumProperties(debeziumProperties); {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-18476) PythonEnvUtilsTest#testStartPythonProcess fails

2024-04-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841105#comment-17841105
 ] 

Weijie Guo commented on FLINK-18476:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347&l=22849

> PythonEnvUtilsTest#testStartPythonProcess fails
> ---
>
> Key: FLINK-18476
> URL: https://issues.apache.org/jira/browse/FLINK-18476
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0, 1.15.3, 1.18.0, 1.19.0, 1.20.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> The 
> {{org.apache.flink.client.python.PythonEnvUtilsTest#testStartPythonProcess}} 
> failed in my local environment as it assumes the environment has 
> {{/usr/bin/python}}. 
> I don't know exactly how I got python in Ubuntu 20.04, but I only have an 
> alias for {{python = python3}}. Therefore the test fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32084][checkpoint] Migrate current file merging of channel state snapshot into the unify file merging framework [flink]

2024-04-26 Thread via GitHub


fredia commented on PR #24653:
URL: https://github.com/apache/flink/pull/24653#issuecomment-2078953534

   @Zakelly @1996fanrui would you please take a look? Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35194) Support describe job syntax and execution

2024-04-26 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841113#comment-17841113
 ] 

xuyang commented on FLINK-35194:


Hi, can I take this jira?

> Support describe job syntax and execution
> -
>
> Key: FLINK-35194
> URL: https://issues.apache.org/jira/browse/FLINK-35194
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.20.0
>
>
> {code:java}
> { DESCRIBE | DESC } JOB 'xxx'
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32877][Filesystem][Rebased] Add HTTP options to java-storage client [flink]

2024-04-26 Thread via GitHub


dannycranmer commented on PR #24673:
URL: https://github.com/apache/flink/pull/24673#issuecomment-2078965258

   LGTM, I will fix the commit message on merge


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-35189) Introduce test-filesystem Catalog based on FileSystem Connector to support materialized table

2024-04-26 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu resolved FLINK-35189.
---
Resolution: Fixed

> Introduce test-filesystem Catalog based on FileSystem Connector to support 
> materialized table
> -
>
> Key: FLINK-35189
> URL: https://issues.apache.org/jira/browse/FLINK-35189
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem, Table SQL / API, Tests
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [minor][docs] Add user guide about providing extra jar package in quickstart docs [flink-cdc]

2024-04-26 Thread via GitHub


Jiabao-Sun merged PR #3250:
URL: https://github.com/apache/flink-cdc/pull/3250


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35217] Add missing fsync to #closeForCommit in some subclasses of RecoverableFsDataOutputStream. [flink]

2024-04-26 Thread via GitHub


rkhachatryan commented on PR #24722:
URL: https://github.com/apache/flink/pull/24722#issuecomment-2078909771

   Thanks for the fix, I think it should solve the problem. However, 
   1. Is there a way to test it?
   2. Should `GSRecoverableFsDataOutputStream` and `FSDataOutputStreamWrapper` 
also be fixed?
   
   I suspect that there might be more bugs like this. So if we come up with 
some generic solution for (1), then its value will be more than just testing 
this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35244) Move package for flink-connector-tidb-cdc test

2024-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35244:
---
Labels: pull-request-available  (was: )

> Move package for flink-connector-tidb-cdc test
> --
>
> Key: FLINK-35244
> URL: https://issues.apache.org/jira/browse/FLINK-35244
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-04-26-16-19-39-297.png
>
>
> test case for flink-connector-tidb-cdc should under
> *org.apache.flink.cdc.connectors.tidb* package
> instead of *org.apache.flink.cdc.connectors*
> !image-2024-04-26-16-19-39-297.png!
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32843][JUnit5 Migration] Migrate the jobmaster package of flink-runtime module to JUnit5 [flink]

2024-04-26 Thread via GitHub


Jiabao-Sun commented on code in PR #24723:
URL: https://github.com/apache/flink/pull/24723#discussion_r1580689343


##
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterQueryableStateTest.java:
##
@@ -96,17 +89,17 @@ public class JobMasterQueryableStateTest extends TestLogger 
{
 JOB_GRAPH.setJobType(JobType.STREAMING);
 }
 
-@BeforeClass
+@BeforeAll
 public static void setupClass() {
 rpcService = new TestingRpcService();
 }
 
-@After
+@AfterEach

Review Comment:
   minor: public void teardown() can be package default.



##
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterQueryableStateTest.java:
##
@@ -96,17 +89,17 @@ public class JobMasterQueryableStateTest extends TestLogger 
{
 JOB_GRAPH.setJobType(JobType.STREAMING);
 }
 
-@BeforeClass
+@BeforeAll
 public static void setupClass() {
 rpcService = new TestingRpcService();
 }
 
-@After
+@AfterEach
 public void teardown() throws Exception {
 rpcService.clearGateways();
 }
 
-@AfterClass
+@AfterAll

Review Comment:
   minor: public static void teardownClass() can be package default.
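   Both suggestions amount to dropping the access modifiers; JUnit 5 allows 
lifecycle methods to be package-private. A minimal sketch of the suggested 
shape (the class name is illustrative):

   ```java
   import org.junit.jupiter.api.AfterAll;
   import org.junit.jupiter.api.AfterEach;

   class LifecycleVisibilityExample {

       @AfterEach
       void teardown() {
           // per-test cleanup; no 'public' needed in JUnit 5
       }

       @AfterAll
       static void teardownClass() {
           // one-time cleanup; must stay static, but not public
       }
   }
   ```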



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35119][cdc-runtime] Change DataChangeEvent serialization and deserialization [flink-cdc]

2024-04-26 Thread via GitHub


Jiabao-Sun commented on PR #3226:
URL: https://github.com/apache/flink-cdc/pull/3226#issuecomment-2078993303

   Thanks @zhongqishang for this fix.
   Hi @yuxiqian, do you have time to review this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-27146] [Filesystem] Migrate to Junit5 [flink]

2024-04-26 Thread via GitHub


kottmann commented on PR #22789:
URL: https://github.com/apache/flink/pull/22789#issuecomment-2079230716

   Sorry for the delay on my end; I added a new commit to address all your 
comments above.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31223] sql-client.sh fails to start with ssl enabled [flink]

2024-04-26 Thread via GitHub


davidradl commented on PR #22026:
URL: https://github.com/apache/flink/pull/22026#issuecomment-2078935154

   > Thanks for the review! merging...
   
   @reswqa thanks for doing this :-)  We are looking for this to be backported 
to 1.18 and 1.19. Are you OK to do this and I can review, or would you like me 
to backport and you review?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35041) IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration failed

2024-04-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841107#comment-17841107
 ] 

Weijie Guo commented on FLINK-35041:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=be5a4b15-4b23-56b1-7582-795f58a645a2&l=9556



> IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration failed
> --
>
> Key: FLINK-35041
> URL: https://issues.apache.org/jira/browse/FLINK-35041
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Feifan Wang
>Priority: Blocker
>
> {code:java}
> Apr 08 03:22:45 03:22:45.450 [ERROR] 
> org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration
>  -- Time elapsed: 0.034 s <<< FAILURE!
> Apr 08 03:22:45 org.opentest4j.AssertionFailedError: 
> Apr 08 03:22:45 
> Apr 08 03:22:45 expected: false
> Apr 08 03:22:45  but was: true
> Apr 08 03:22:45   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 08 03:22:45   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 08 03:22:45   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 08 03:22:45   at 
> org.apache.flink.runtime.state.DiscardRecordedStateObject.verifyDiscard(DiscardRecordedStateObject.java:34)
> Apr 08 03:22:45   at 
> org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration(IncrementalRemoteKeyedStateHandleTest.java:211)
> Apr 08 03:22:45   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 08 03:22:45   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Apr 08 03:22:45   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Apr 08 03:22:45   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Apr 08 03:22:45   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Apr 08 03:22:45   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> {code}
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58782&view=logs&j=77a9d8e1-d610-59b3-fc2a-4766541e0e33&t=125e07e7-8de0-5c6c-a541-a567415af3ef&l=9238]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35217] Add missing fsync to #closeForCommit in some subclasses of RecoverableFsDataOutputStream. [flink]

2024-04-26 Thread via GitHub


StefanRRichter commented on PR #24722:
URL: https://github.com/apache/flink/pull/24722#issuecomment-2078937897

   Thanks for the review. Regarding the two points:
   1. I don't know how we can test this without OS-level tools, but I'm open to 
ideas.
   2. I don't think `GSRecoverableFsDataOutputStream` requires any fix because 
the commit method internally calls the same methods as the persist method. 
I also don't see the connection to `FSDataOutputStreamWrapper` and what you 
think should be fixed there; maybe you can give more detail?
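   For readers following the fsync discussion, here is a minimal sketch of what 
"adding the missing fsync to closeForCommit" means, assuming a 
FileOutputStream-backed local stream; the class and method are illustrative, 
not Flink's actual RecoverableFsDataOutputStream subclasses:

   ```java
   import java.io.FileOutputStream;
   import java.io.IOException;

   /** Illustrative sketch; not Flink's actual stream classes. */
   final class FsyncOnCommitStream {

       private final FileOutputStream out;

       FsyncOnCommitStream(FileOutputStream out) {
           this.out = out;
       }

       void closeForCommit() throws IOException {
           out.flush();        // push user-space buffers down to the OS
           out.getFD().sync(); // the missing step: force OS buffers onto the device
           out.close();        // only now is the file safe to publish/commit
       }
   }
   ```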


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35245) Add metrics for flink-connector-tidb-cdc

2024-04-26 Thread Xie Yi (Jira)
Xie Yi created FLINK-35245:
--

 Summary: Add metrics for flink-connector-tidb-cdc
 Key: FLINK-35245
 URL: https://issues.apache.org/jira/browse/FLINK-35245
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: Xie Yi


[https://github.com/apache/flink-cdc/issues/985] was closed but never actually 
resolved.

This new issue is created to track it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35235][pipeline-connector][kafka] Fix missing dependencies in the uber jar of Kafka pipeline sink. [flink-cdc]

2024-04-26 Thread via GitHub


Jiabao-Sun merged PR #3262:
URL: https://github.com/apache/flink-cdc/pull/3262


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32877][Filesystem][Rebased] Add HTTP options to java-storage client [flink]

2024-04-26 Thread via GitHub


dannycranmer commented on PR #24673:
URL: https://github.com/apache/flink/pull/24673#issuecomment-2078968511

   Apologies, I did not fix the commit message.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32877][Filesystem][Rebased] Add HTTP options to java-storage client [flink]

2024-04-26 Thread via GitHub


dannycranmer merged PR #24673:
URL: https://github.com/apache/flink/pull/24673


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35240) Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record

2024-04-26 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841117#comment-17841117
 ] 

Alexander Fedulov commented on FLINK-35240:
---

I don't think touching any of the flush-specific properties should be 
necessary. You can see in the FlameGraph that flush calls are due to close 
being called and, as [~robyoung] mentioned, this is what 
JsonGenerator.Feature#AUTO_CLOSE_TARGET is there for. 
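For reference, the two Jackson knobs discussed in this thread are set as below; 
this is a sketch against Jackson's public API, not a patch to Flink's 
CsvBulkWriter:

{code:java}
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.dataformat.csv.CsvMapper;

/** A sketch against Jackson's public API; not a patch to CsvBulkWriter. */
public final class CsvMapperSettings {

    public static CsvMapper configure() {
        CsvMapper mapper = new CsvMapper();
        // The ticket's proposal: stop ObjectWriter#writeValue from flushing
        // the generator after every written value.
        mapper.disable(SerializationFeature.FLUSH_AFTER_WRITE_VALUE);
        // The feature mentioned above: with AUTO_CLOSE_TARGET disabled,
        // closing a per-record generator no longer closes (and thereby
        // flushes) the underlying output stream.
        mapper.getFactory().disable(JsonGenerator.Feature.AUTO_CLOSE_TARGET);
        return mapper;
    }
}
{code}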

> Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record
> -
>
> Key: FLINK-35240
> URL: https://issues.apache.org/jira/browse/FLINK-35240
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Zhongqiang Gong
>Priority: Minor
> Attachments: image-2024-04-26-00-23-29-975.png, screenshot-1.png
>
>
> *Reproduce:*
> * According to user email: 
> https://lists.apache.org/thread/9j5z8hv4vjkd54dkzqy1ryyvm0l5rxhc
> *  !image-2024-04-26-00-23-29-975.png! 
> *Analysis:*
> * `org.apache.flink.formats.csv.CsvBulkWriter#addElement` will flush per 
> record.
> *Solution:*
> * I think maybe we can disable `FLUSH_AFTER_WRITE_VALUE` to avoid flushing when 
> a record is added.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35119][cdc-runtime] Change DataChangeEvent serialization and deserialization [flink-cdc]

2024-04-26 Thread via GitHub


yuxiqian commented on PR #3226:
URL: https://github.com/apache/flink-cdc/pull/3226#issuecomment-2079004056

   @Jiabao-Sun Sure, I'll take it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35240) Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record

2024-04-26 Thread Zhongqiang Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841063#comment-17841063
 ] 

Zhongqiang Gong commented on FLINK-35240:
-

Welcome to this discussion, [~robyoung]; you understood what I wanted to 
express.

[~afedulov] [~robyoung] I have tried to solve this issue locally: 
https://github.com/GOODBOY008/flink/commit/4f78be92b5bdebcf92a1e32736434517ccc6f561

> Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record
> -
>
> Key: FLINK-35240
> URL: https://issues.apache.org/jira/browse/FLINK-35240
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Zhongqiang Gong
>Priority: Minor
> Attachments: image-2024-04-26-00-23-29-975.png, screenshot-1.png
>
>
> *Reproduce:*
> * According to user email: 
> https://lists.apache.org/thread/9j5z8hv4vjkd54dkzqy1ryyvm0l5rxhc
> *  !image-2024-04-26-00-23-29-975.png! 
> *Analysis:*
> * `org.apache.flink.formats.csv.CsvBulkWriter#addElement` will flush per 
> record.
> *Solution:*
> * I think maybe we can disable `FLUSH_AFTER_WRITE_VALUE` to avoid flushing when 
> a record is added.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32622][table-planner] Optimize mini-batch assignment [flink]

2024-04-26 Thread via GitHub


jeyhunkarimov commented on PR #23470:
URL: https://github.com/apache/flink/pull/23470#issuecomment-2078737997

   Hi @xuyangzhong, thanks for the review. I addressed your comments; please 
let me know if you agree.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35240) Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record

2024-04-26 Thread Zhongqiang Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhongqiang Gong updated FLINK-35240:

Attachment: screenshot-1.png

> Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record
> -
>
> Key: FLINK-35240
> URL: https://issues.apache.org/jira/browse/FLINK-35240
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Zhongqiang Gong
>Priority: Minor
> Attachments: image-2024-04-26-00-23-29-975.png, screenshot-1.png
>
>
> *Reproduce:*
> * According to user email: 
> https://lists.apache.org/thread/9j5z8hv4vjkd54dkzqy1ryyvm0l5rxhc
> *  !image-2024-04-26-00-23-29-975.png! 
> *Analysis:*
> * `org.apache.flink.formats.csv.CsvBulkWriter#addElement` will flush per 
> record.
> *Solution:*
> * I think maybe we can disable `FLUSH_AFTER_WRITE_VALUE` to avoid flushing when 
> a record is added.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35072) Doris pipeline sink does not support applying AlterColumnTypeEvent

2024-04-26 Thread yux (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yux updated FLINK-35072:

Fix Version/s: cdc-3.2.0

> Doris pipeline sink does not support applying AlterColumnTypeEvent
> --
>
> Key: FLINK-35072
> URL: https://issues.apache.org/jira/browse/FLINK-35072
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> According to [Doris 
> documentation|https://doris.apache.org/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN/],
>  altering column types dynamically is supported (via ALTER TABLE ... MODIFY 
> COLUMN statement) when lossless conversion is available. However, the Doris 
> pipeline connector currently has no support for AlterColumnTypeEvent and 
> raises a RuntimeException every time.
> It would be convenient for users if they could sync compatible type 
> conversions, and this could be easily implemented by extending Doris' 
> SchemaChangeManager helper class.
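For illustration, the DDL such an extension would need to emit looks like the 
following; the class and method names are a sketch, not Doris' actual 
SchemaChangeManager API:

{code:java}
/** Sketch only; not Doris' actual SchemaChangeManager API. */
public final class AlterColumnTypeDdl {

    /** e.g. modifyColumn("app_db", "orders", "price", "DECIMAL(18, 2)"). */
    public static String modifyColumn(
            String database, String table, String column, String newType) {
        return String.format(
                "ALTER TABLE `%s`.`%s` MODIFY COLUMN `%s` %s",
                database, table, column, newType);
    }
}
{code}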



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35120) Add Doris Pipeline connector integration test cases

2024-04-26 Thread yux (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yux updated FLINK-35120:

Fix Version/s: cdc-3.2.0

> Add Doris Pipeline connector integration test cases
> ---
>
> Key: FLINK-35120
> URL: https://issues.apache.org/jira/browse/FLINK-35120
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> Currently, the Flink CDC Doris pipeline connector has very limited test cases 
> (which only cover row conversion). Adding an ITCase testing its data 
> pipeline and metadata applier should help improve the connector's reliability.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35092) Add integrated test for Doris / Starrocks sink pipeline connector

2024-04-26 Thread yux (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yux updated FLINK-35092:

Fix Version/s: cdc-3.2.0

> Add integrated test for Doris / Starrocks sink pipeline connector
> -
>
> Key: FLINK-35092
> URL: https://issues.apache.org/jira/browse/FLINK-35092
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> Currently, no integration tests are being applied to the Doris pipeline connector 
> (there's only one DorisRowConverterTest case for now). Adding ITCases would 
> improve the Doris connector's code quality and reliability.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35090) Doris sink fails to create table when database does not exist

2024-04-26 Thread yux (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yux updated FLINK-35090:

Fix Version/s: cdc-3.2.0

> Doris sink fails to create table when database does not exist
> -
>
> Key: FLINK-35090
> URL: https://issues.apache.org/jira/browse/FLINK-35090
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> Currently, the Doris sink connector doesn't support creating databases 
> automatically. When a user specifies a sink namespace with a non-existing 
> database in the YAML config, the Doris connector will crash.
> Expected behaviour: the Doris sink connector should create both the database 
> and the table automatically.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35239) 1.19 docs show outdated warning

2024-04-26 Thread Ufuk Celebi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi resolved FLINK-35239.
-
Resolution: Fixed

> 1.19 docs show outdated warning
> ---
>
> Key: FLINK-35239
> URL: https://issues.apache.org/jira/browse/FLINK-35239
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: Screenshot 2024-04-25 at 15.01.57.png
>
>
> The docs for 1.19 are currently marked as outdated although it's the 
> currently stable release.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35173] Debezium for Mysql connector Custom Time Serializer [flink-cdc]

2024-04-26 Thread via GitHub


PatrickRen merged PR #3240:
URL: https://github.com/apache/flink-cdc/pull/3240


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35243) Carry pre-schema payload with SchemaChangeEvents

2024-04-26 Thread yux (Jira)
yux created FLINK-35243:
---

 Summary: Carry pre-schema payload with SchemaChangeEvents
 Key: FLINK-35243
 URL: https://issues.apache.org/jira/browse/FLINK-35243
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: yux
 Fix For: cdc-3.2.0


Currently, Flink CDC 3.x SchemaChangeEvent provides no information about the 
previous schema state before applying changes.

Most pipeline sources can't provide pre-schema info (because it's not recorded 
in the binlog / oplog / ...), but some pipeline sinks require it to perform 
validation checks and apply schema changes. This ticket suggests adding 
framework-level support for filling the pre-schema payload into any 
SchemaChangeEvent that requires such info.
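As an illustration of the proposal, a schema change event enriched with its 
pre-change state could look like the sketch below; these are hypothetical 
classes, not Flink CDC's real event types:

{code:java}
import java.util.Map;

/** Hypothetical sketch; not Flink CDC's real event classes. */
public final class AlterColumnTypeEventSketch {

    private final String tableId;
    private final Map<String, String> newTypes; // column name -> new type
    // The payload this ticket proposes: the types before the change, filled
    // in by the framework when available, so sinks can validate transitions.
    private final Map<String, String> oldTypes;

    public AlterColumnTypeEventSketch(
            String tableId, Map<String, String> newTypes, Map<String, String> oldTypes) {
        this.tableId = tableId;
        this.newTypes = newTypes;
        this.oldTypes = oldTypes;
    }

    /** A sink can compare old and new types to check a conversion is lossless. */
    public Map<String, String> getOldTypes() {
        return oldTypes;
    }
}
{code}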



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35039][rest] Use PUT method supported by YARN web proxy instead of POST [flink]

2024-04-26 Thread via GitHub


yeezychao commented on PR #24689:
URL: https://github.com/apache/flink/pull/24689#issuecomment-2078737494

   @Myasuka PTAL


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35242) Add fine-grained schema evolution strategy

2024-04-26 Thread yux (Jira)
yux created FLINK-35242:
---

 Summary: Add fine-grained schema evolution strategy
 Key: FLINK-35242
 URL: https://issues.apache.org/jira/browse/FLINK-35242
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: yux
 Fix For: cdc-3.2.0


Currently, Flink CDC allows three schema evolution strategies: evolving the 
schema, ignoring the change, or throwing an exception. However, such a 
configuration doesn't cover all use cases, and users want more fine-grained 
strategy configuration.

This ticket suggests adding one more strategy, "try_evolve" or 
"evolve_when_available". It's basically like the "evolving" option, but doesn't 
throw an exception if the operation fails, which provides more flexibility.

Also, this ticket suggests allowing users to configure a per-schema-event-type 
strategy, so users could evolve some types of events (like renaming a column) 
and reject some dangerous events (like truncating a table or removing a column).
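A sketch of how per-event-type strategies could be modeled; the names are 
illustrative, not an actual Flink CDC API:

{code:java}
import java.util.Map;

/** Illustrative only; not an actual Flink CDC API. */
enum SchemaEvolutionBehavior {
    EVOLVE,      // apply the change; fail the pipeline if the sink rejects it
    TRY_EVOLVE,  // apply the change; log and continue if the sink rejects it
    IGNORE,      // silently drop the event
    EXCEPTION    // fail immediately
}

final class PerEventTypeStrategy {

    private final Map<String, SchemaEvolutionBehavior> byEventType;
    private final SchemaEvolutionBehavior defaultBehavior;

    PerEventTypeStrategy(
            Map<String, SchemaEvolutionBehavior> byEventType,
            SchemaEvolutionBehavior defaultBehavior) {
        this.byEventType = byEventType;
        this.defaultBehavior = defaultBehavior;
    }

    // e.g. forEvent("rename.column") -> EVOLVE,
    //      forEvent("truncate.table") -> EXCEPTION
    SchemaEvolutionBehavior forEvent(String eventType) {
        return byEventType.getOrDefault(eventType, defaultBehavior);
    }
}
{code}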



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35240][Connectors][format]Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record for csv format [flink]

2024-04-26 Thread via GitHub


flinkbot commented on PR #24730:
URL: https://github.com/apache/flink/pull/24730#issuecomment-2079743542

   
   ## CI report:
   
   * 418a88849ef1c9f850cc80b5f691cfe033ac7c09 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-33210] Cleanup the lineage interface comments [flink]

2024-04-26 Thread via GitHub


HuangZhenQiu opened a new pull request, #24731:
URL: https://github.com/apache/flink/pull/24731

   ## What is the purpose of the change
   Format the class comments of lineage interface classes
   
   
   ## Brief change log
 - Remove the unneeded empty line of class comments 
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35240) Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record

2024-04-26 Thread Zhongqiang Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841301#comment-17841301
 ] 

Zhongqiang Gong commented on FLINK-35240:
-

[~afedulov] I opened a pr to patch this issue. Would like help me review? Thank 
you~ :)

> Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record
> -
>
> Key: FLINK-35240
> URL: https://issues.apache.org/jira/browse/FLINK-35240
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-26-00-23-29-975.png, 
> image-2024-04-26-17-16-07-925.png, image-2024-04-26-17-16-20-647.png, 
> image-2024-04-26-17-16-30-293.png, screenshot-1.png
>
>
> *Reproduce:*
> * According to user email: 
> https://lists.apache.org/thread/9j5z8hv4vjkd54dkzqy1ryyvm0l5rxhc
> *  !image-2024-04-26-00-23-29-975.png! 
> *Analysis:*
> * `org.apache.flink.formats.csv.CsvBulkWriter#addElement` will flush per 
> record.
> *Solution:*
> * I think maybe we can disable `FLUSH_AFTER_WRITE_VALUE` to avoid flushing when 
> a record is added.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35240) Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record

2024-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35240:
---
Labels: pull-request-available  (was: )

> Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record
> -
>
> Key: FLINK-35240
> URL: https://issues.apache.org/jira/browse/FLINK-35240
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-26-00-23-29-975.png, 
> image-2024-04-26-17-16-07-925.png, image-2024-04-26-17-16-20-647.png, 
> image-2024-04-26-17-16-30-293.png, screenshot-1.png
>
>
> *Reproduce:*
> * According to user email: 
> https://lists.apache.org/thread/9j5z8hv4vjkd54dkzqy1ryyvm0l5rxhc
> *  !image-2024-04-26-00-23-29-975.png! 
> *Analysis:*
> * `org.apache.flink.formats.csv.CsvBulkWriter#addElement` will flush per 
> record.
> *Solution:*
> * I think maybe we can disable `FLUSH_AFTER_WRITE_VALUE` to avoid flushing when 
> a record is added.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35182] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for Flink Pulsar connector [flink-connector-pulsar]

2024-04-26 Thread via GitHub


GOODBOY008 commented on code in PR #90:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/90#discussion_r1581286005


##
flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/testutils/SampleData.java:
##
@@ -49,7 +49,7 @@ public Foo() {}
 
 @Override
 public String toString() {
-return "" + i + "," + f + "," + (bar == null ? "null" : 
bar.toString());
+return i + "," + f + "," + (bar == null ? "null" : bar.toString());

Review Comment:
   Because CI failed, I just modified the code to trigger CI. That change has 
now been reverted.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33210] Cleanup the lineage interface comments [flink]

2024-04-26 Thread via GitHub


flinkbot commented on PR #24731:
URL: https://github.com/apache/flink/pull/24731#issuecomment-2079751600

   
   ## CI report:
   
   * 93207bb3788bdcd5f6cf3657c2feb5d92ddb5871 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35228][Connectors/Kafka] Fix: DynamicKafkaSource does not read re-added topic for the same cluster [flink-connector-kafka]

2024-04-26 Thread via GitHub


mas-chen commented on code in PR #97:
URL: 
https://github.com/apache/flink-connector-kafka/pull/97#discussion_r1581372979


##
flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/dynamic/source/enumerator/DynamicKafkaSourceEnumeratorTest.java:
##
@@ -464,6 +466,87 @@ public void testSnapshotState() throws Throwable {
 }
 }
 
+@Test
+public void testSnapshotStateMigration() throws Throwable {

Review Comment:
   nit: 
   ```suggestion
   public void testEnumeratorStateDoesNotContainStaleTopicPartitions() 
throws Throwable {
   ```
   
   or something similar



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33211][table] support flink table lineage [flink]

2024-04-26 Thread via GitHub


HuangZhenQiu commented on code in PR #24618:
URL: https://github.com/apache/flink/pull/24618#discussion_r1581321727


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/transformations/PhysicalTransformation.java:
##
@@ -34,6 +35,7 @@
 public abstract class PhysicalTransformation extends Transformation {
 
 private boolean supportsConcurrentExecutionAttempts = true;
+private LineageVertex lineageVertex;

Review Comment:
   Makes sense. Added a LineagedTransformation class for this purpose.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35247) Upgrade spotless apply to `2.41.1` in flink-connector-parent to work with Java 21

2024-04-26 Thread Mason Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mason Chen updated FLINK-35247:
---
Description: 
Spotless apply version from flink-connector-parent does not work with Java 21

Issue found here: [https://github.com/apache/flink-connector-kafka/pull/98]

This is already fixed by spotless apply: 
[https://github.com/diffplug/spotless/pull/1920]

but also requires an upgrade to a later `google-java-format`

  was:
Spotless apply version from flink-connector-parent does not work with Java 21

Tested here: [https://github.com/apache/flink-connector-kafka/pull/98]

This is already fixed by spotless apply: 
https://github.com/diffplug/spotless/pull/1920


> Upgrade spotless apply to `2.41.1` in flink-connector-parent to work with 
> Java 21
> -
>
> Key: FLINK-35247
> URL: https://issues.apache.org/jira/browse/FLINK-35247
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI, Connectors / Common
>Affects Versions: connector-parent-1.1.0
>Reporter: Mason Chen
>Priority: Major
>
> Spotless apply version from flink-connector-parent does not work with Java 21
> Issue found here: [https://github.com/apache/flink-connector-kafka/pull/98]
> This is already fixed by spotless apply: 
> [https://github.com/diffplug/spotless/pull/1920]
> but also requires an upgrade to a later `google-java-format`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-31223] sql-client.sh fails to start with ssl enabled [flink]

2024-04-26 Thread via GitHub


davidradl commented on PR #22026:
URL: https://github.com/apache/flink/pull/22026#issuecomment-2079304678

   > @davidradl Makes sense to backport this, as we should treat it as a 
bugfix: the SQL client previously supported SSL, so this is a kind of 
regression.
   > 
   > If you want, just go ahead.
   
   Ok will do


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35182] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for Flink Pulsar connector [flink-connector-pulsar]

2024-04-26 Thread via GitHub


syhily commented on code in PR #90:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/90#discussion_r1581009843


##
flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/testutils/SampleData.java:
##
@@ -49,7 +49,7 @@ public Foo() {}
 
 @Override
 public String toString() {
-return "" + i + "," + f + "," + (bar == null ? "null" : 
bar.toString());
+return i + "," + f + "," + (bar == null ? "null" : bar.toString());

Review Comment:
   I prefer to do only one thing in one PR. This refactor shouldn't be included 
in the version bump.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35246) SqlClientSSLTest.testGatewayMode failed in AZP

2024-04-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841227#comment-17841227
 ] 

Weijie Guo commented on FLINK-35246:


master via 4e6dbe2d1a225a0d0e48fd0997c1f11317402e42.

> SqlClientSSLTest.testGatewayMode failed in AZP
> --
>
> Key: FLINK-35246
> URL: https://issues.apache.org/jira/browse/FLINK-35246
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> {code:java}
> Apr 26 01:51:10 java.lang.IllegalArgumentException: The given host:port 
> ('localhost/:36112') doesn't contain a valid port
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.validateHostPortString(NetUtils.java:120)
> Apr 26 01:51:10   at 
> org.apache.flink.util.NetUtils.getCorrectHostnamePort(NetUtils.java:81)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayAddress(CliOptionsParser.java:325)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.cli.CliOptionsParser.parseGatewayModeClient(CliOptionsParser.java:296)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:207)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientTestBase.runSqlClient(SqlClientTestBase.java:111)
> Apr 26 01:51:10   at 
> org.apache.flink.table.client.SqlClientSSLTest.testGatewayMode(SqlClientSSLTest.java:74)
> Apr 26 01:51:10   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
> Apr 26 01:51:10   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59173=logs=26b84117-e436-5720-913e-3e280ce55cae=77cc7e77-39a0-5007-6d65-4137ac13a471=12418
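For reference, the parse failure in the trace can be reproduced in isolation 
with Flink's own utility (a minimal sketch; NetUtils is the real class from 
the stack trace above):

{code:java}
import org.apache.flink.util.NetUtils;

// "localhost/:36112" carries a stray slash, so host:port validation fails
// with the IllegalArgumentException shown in the trace.
NetUtils.getCorrectHostnamePort("localhost/:36112");
{code}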



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35240) Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record

2024-04-26 Thread Zhongqiang Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841238#comment-17841238
 ] 

Zhongqiang Gong commented on FLINK-35240:
-

Based on the logic of `_writer.close()`, we can disable FLUSH_PASSED_TO_STREAM 
too, so that we can control flushing in CsvBulkWriter.
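A minimal sketch of the combined idea, assuming Jackson's CSV backend (both 
feature names are real Jackson flags; wiring them into CsvBulkWriter is 
illustrative):

{code:java}
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.dataformat.csv.CsvMapper;

CsvMapper mapper = new CsvMapper();
// Don't flush after every ObjectWriter#writeValue call, i.e. per record.
mapper.disable(SerializationFeature.FLUSH_AFTER_WRITE_VALUE);
// Don't let JsonGenerator#flush() cascade to the underlying stream; the bulk
// writer can then flush the output stream itself, e.g. on checkpoint.
mapper.getFactory().disable(JsonGenerator.Feature.FLUSH_PASSED_TO_STREAM);
{code}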

> Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record
> -
>
> Key: FLINK-35240
> URL: https://issues.apache.org/jira/browse/FLINK-35240
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Zhongqiang Gong
>Priority: Minor
> Attachments: image-2024-04-26-00-23-29-975.png, screenshot-1.png
>
>
> *Reproduce:*
> * According to user email: 
> https://lists.apache.org/thread/9j5z8hv4vjkd54dkzqy1ryyvm0l5rxhc
> *  !image-2024-04-26-00-23-29-975.png! 
> *Analysis:*
> * `org.apache.flink.formats.csv.CsvBulkWriter#addElement` will flush per 
> record.
> *Solution:*
> * I think maybe we can disable `FLUSH_AFTER_WRITE_VALUE` to avoid flushing 
> when a record is added.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35228][Connectors/Kafka] Fix: DynamicKafkaSource does not read re-added topic for the same cluster [flink-connector-kafka]

2024-04-26 Thread via GitHub


IgnasD commented on PR #97:
URL: 
https://github.com/apache/flink-connector-kafka/pull/97#issuecomment-2079488271

   Changed the title as requested. Also, I've added filtering to 
`unassignedInitialPartitions` as suggested and covered it with a test case.
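   Roughly, the filtering amounts to the following (a sketch under assumed 
names; `fetchActivePartitions` is hypothetical and the real change lives in 
the PR's enumerator logic):

   ```java
   // Keep only partitions whose topics still exist on the cluster, so a topic
   // that was removed and later re-added is discovered fresh instead of being
   // masked by stale unassigned state.
   Set<TopicPartition> activePartitions = fetchActivePartitions(clusterMetadata);
   unassignedInitialPartitions.retainAll(activePartitions);
   ```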


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35022][Connector/DynamoDB] Add TypeInformed DDB Element Converter [flink-connector-aws]

2024-04-26 Thread via GitHub


vahmed-hamdy commented on code in PR #136:
URL: 
https://github.com/apache/flink-connector-aws/pull/136#discussion_r1581114799


##
flink-connector-aws/flink-connector-dynamodb/src/main/java/org/apache/flink/connector/dynamodb/sink/DynamoDbTypeInformedElementConverter.java:
##
@@ -0,0 +1,380 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.dynamodb.sink;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.typeinfo.BasicArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.NumericTypeInfo;
+import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.CompositeType;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.sink2.SinkWriter;
+import org.apache.flink.api.java.tuple.Tuple;
+import org.apache.flink.api.java.typeutils.ObjectArrayTypeInfo;
+import org.apache.flink.api.java.typeutils.PojoTypeInfo;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.api.java.typeutils.TupleTypeInfo;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import 
org.apache.flink.connector.dynamodb.table.converter.ArrayAttributeConverterProvider;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.FlinkRuntimeException;
+import org.apache.flink.util.Preconditions;
+
+import software.amazon.awssdk.core.SdkBytes;
+import software.amazon.awssdk.enhanced.dynamodb.AttributeConverter;
+import software.amazon.awssdk.enhanced.dynamodb.AttributeConverterProvider;
+import software.amazon.awssdk.enhanced.dynamodb.AttributeValueType;
+import software.amazon.awssdk.enhanced.dynamodb.EnhancedType;
+import software.amazon.awssdk.enhanced.dynamodb.TableSchema;
+import 
software.amazon.awssdk.enhanced.dynamodb.internal.mapper.BeanAttributeGetter;
+import software.amazon.awssdk.enhanced.dynamodb.mapper.StaticTableSchema;
+import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
+
+import java.beans.BeanInfo;
+import java.beans.IntrospectionException;
+import java.beans.Introspector;
+import java.beans.PropertyDescriptor;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.function.Function;
+
+/**
+ * An {@link ElementConverter} that converts an element to a {@link
+ * DynamoDbWriteRequest} using the provided {@link TypeInformation}.
+ */
+@PublicEvolving
+public class DynamoDbTypeInformedElementConverter<inputT>
+implements ElementConverter<inputT, DynamoDbWriteRequest> {
+private final CompositeType<inputT> typeInfo;
+private final boolean ignoreNulls;
+private TableSchema tableSchema;
+
+/**
+ * Creates a {@link DynamoDbTypeInformedElementConverter} that converts an 
element to a {@link
+ * DynamoDbWriteRequest} using the provided {@link CompositeType}. Usage: 
{@code new
+ * 
DynamoDbTypeInformedElementConverter<>(TypeInformation.of(MyPojoClass.class))}
+ *
+ * @param typeInfo The {@link CompositeType} that provides the type 
information for the element.
+ */
+public DynamoDbTypeInformedElementConverter(CompositeType<inputT> typeInfo) {
+this(typeInfo, true);
+}
+
+public DynamoDbTypeInformedElementConverter(
+CompositeType<inputT> typeInfo, boolean ignoreNulls) {
+this.typeInfo = typeInfo;
+this.ignoreNulls = ignoreNulls;
+}
+
+@Override
+public void open(Sink.InitContext context) {
+try {
+tableSchema = createTableSchema(typeInfo);
+} catch (IntrospectionException | IllegalStateException | 
IllegalArgumentException e) {
+throw new FlinkRuntimeException("Failed to extract DynamoDb table 
schema", e);
+}
+}
+
+@Override
+public DynamoDbWriteRequest apply(inputT input, SinkWriter.Context 
context) {
+Preconditions.checkNotNull(tableSchema, "TableSchema is not 
initialized");
+try {
+return DynamoDbWriteRequest.builder()
+.setType(DynamoDbWriteRequestType.PUT)
+  
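For context, a hedged usage sketch of the converter (MyPojo and the sink 
wiring are illustrative assumptions, not taken from this PR):

```java
// PojoTypeInfo is a CompositeType, so the cast below holds for a POJO class.
DynamoDbTypeInformedElementConverter<MyPojo> converter =
        new DynamoDbTypeInformedElementConverter<>(
                (CompositeType<MyPojo>) TypeInformation.of(MyPojo.class));
// The builder calls below are assumptions based on the existing DynamoDb sink.
DynamoDbSink<MyPojo> sink =
        DynamoDbSink.<MyPojo>builder()
                .setTableName("my-table") // hypothetical table name
                .setElementConverter(converter)
                .build();
```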

[jira] [Commented] (FLINK-35232) Support for retry settings on GCS connector

2024-04-26 Thread Oleksandr Nitavskyi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841267#comment-17841267
 ] 

Oleksandr Nitavskyi commented on FLINK-35232:
-

[~galenwarren] thanks. We have reduced the number of methods to the bare 
minimum, as reflected in the description: 
 * 
[maxAttempts|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getMaxAttempts__]
 * 
[initialRpcTimeout|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getInitialRpcTimeout__]
 * 
[rpcTimeoutMultiplier|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getRpcTimeoutMultiplier__]
 * 
[maxRpcTimeout|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getMaxRpcTimeout__]
 * 
[totalTimeout|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getTotalTimeout__]

Thus Flink users will be able to align the total timeout with the checkpoint 
timeout, so the job does its best before giving up on committing the data.
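Concretely, those knobs map onto gax's RetrySettings builder like this (the 
builder methods are real gax API; the values are illustrative placeholders, 
not recommended defaults):

{code:java}
import com.google.api.gax.retrying.RetrySettings;
import org.threeten.bp.Duration;

RetrySettings retrySettings =
    RetrySettings.newBuilder()
        .setMaxAttempts(5)
        .setInitialRpcTimeout(Duration.ofSeconds(20))
        .setRpcTimeoutMultiplier(1.5)
        .setMaxRpcTimeout(Duration.ofMinutes(2))
        // keep total retry time within the job's checkpoint timeout
        .setTotalTimeout(Duration.ofMinutes(10))
        .build();
{code}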

> Support for retry settings on GCS connector
> ---
>
> Key: FLINK-35232
> URL: https://issues.apache.org/jira/browse/FLINK-35232
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.3, 1.16.2, 1.17.1, 1.19.0, 1.18.1
>Reporter: Vikas M
>Assignee: Ravi Singh
>Priority: Major
>
> https://issues.apache.org/jira/browse/FLINK-32877 is tracking the ability to 
> specify transport options in the GCS connector. While setting the params it 
> enables reduced read timeouts, we still see 503 errors leading to Flink job 
> restarts.
> Thus, in this ticket, we want to specify additional retry settings as noted 
> in 
> [https://cloud.google.com/storage/docs/retry-strategy#customize-retries.|https://cloud.google.com/storage/docs/retry-strategy#customize-retries]
> We need 
> [these|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#methods]
>  methods available for Flink users so that they can customize their 
> deployment. In particular, the following settings seem to be the minimum 
> required to align the GCS timeouts with the job's checkpoint config:
>  * 
> [maxAttempts|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getMaxAttempts__]
>  * 
> [initialRpcTimeout|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getInitialRpcTimeout__]
>  * 
> [rpcTimeoutMultiplier|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getRpcTimeoutMultiplier__]
>  * 
> [maxRpcTimeout|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getMaxRpcTimeout__]
>  * 
> [totalTimeout|https://cloud.google.com/java/docs/reference/gax/latest/com.google.api.gax.retrying.RetrySettings#com_google_api_gax_retrying_RetrySettings_getTotalTimeout__]
>  
> All of the config options should be optional, and defaults should be used 
> for any configs that are not provided.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-31223) sql-client.sh fails to start with ssl enabled

2024-04-26 Thread david radley (Jira)


[ https://issues.apache.org/jira/browse/FLINK-31223 ]


david radley deleted comment on FLINK-31223:
--

was (Author: JIRAUSER300523):
[~Weijie Guo] I have created pr [https://github.com/apache/flink/pull/24729] 
for the 1.18 backport

> sql-client.sh fails to start with ssl enabled
> -
>
> Key: FLINK-31223
> URL: https://issues.apache.org/jira/browse/FLINK-31223
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.17.0
>Reporter: macdoor615
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.20.0
>
>
> *Version:* 1.17-SNAPSHOT *Commit:* c66ef25 
> 1. ssl disabled 
> sql-client.sh works properly
> 2. ssl enabled
> The web UI can be accessed at [https://url|https://url/]
> The task can be submitted correctly through sql-gateway. I can confirm that 
> sql-gateway exposes the http protocol, not https.
> But sql-client.sh fails to start with the following exceptions. It seems that 
> sql-client.sh expects the https protocol.
>  
> {code:java}
> 2023-02-25 14:43:19,317 INFO  org.apache.flink.configuration.Configuration    
>              [] - Config uses fallback configuration key 'rest.port' instead 
> of key 'rest.bind-port'
> 2023-02-25 14:43:19,343 INFO  
> org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint   [] - Starting 
> rest endpoint.
> 2023-02-25 14:43:19,713 INFO  
> org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint   [] - Rest 
> endpoint listening at localhost:44922
> 2023-02-25 14:43:19,715 INFO  org.apache.flink.table.client.SqlClient         
>              [] - Start embedded gateway on port 44922
> 2023-02-25 14:43:20,040 INFO  
> org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint   [] - Shutting 
> down rest endpoint.
> 2023-02-25 14:43:20,088 INFO  
> org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint   [] - Shut down 
> complete.
> 2023-02-25 14:43:20,089 ERROR org.apache.flink.table.client.SqlClient         
>              [] - SQL Client must stop.
> org.apache.flink.table.client.SqlClientException: Failed to create the 
> executor.
>         at 
> org.apache.flink.table.client.gateway.ExecutorImpl.(ExecutorImpl.java:170)
>  ~[flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at 
> org.apache.flink.table.client.gateway.ExecutorImpl.(ExecutorImpl.java:113)
>  ~[flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at 
> org.apache.flink.table.client.gateway.Executor.create(Executor.java:34) 
> ~[flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at org.apache.flink.table.client.SqlClient.start(SqlClient.java:110) 
> ~[flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:228) 
> [flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at org.apache.flink.table.client.SqlClient.main(SqlClient.java:179) 
> [flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
> Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: 
> Failed to get response.
>         at 
> org.apache.flink.table.client.gateway.ExecutorImpl.getResponse(ExecutorImpl.java:427)
>  ~[flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at 
> org.apache.flink.table.client.gateway.ExecutorImpl.getResponse(ExecutorImpl.java:416)
>  ~[flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at 
> org.apache.flink.table.client.gateway.ExecutorImpl.negotiateVersion(ExecutorImpl.java:447)
>  ~[flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at 
> org.apache.flink.table.client.gateway.ExecutorImpl.(ExecutorImpl.java:132)
>  ~[flink-sql-client-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         ... 5 more
> Caused by: 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.DecoderException: 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.NotSslRecordException: 
> not an SSL/TLS record: 
> 485454502f312e3120343034204e6f7420466f756e640d0a636f6e74656e742d747970653a206170706c69636174696f6e2f6a736f6e3b20636861727365743d5554462d380d0a6163636573732d636f6e74726f6c2d616c6c6f772d6f726967696e3a202a0d0a636f6e74656e742d6c656e6774683a2033380d0a0d0a7b226572726f7273223a5b224e6f7420666f756e643a202f6261642d72657175657374225d7d
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>         

Re: [PR] [FLINK-31223][sqlgateway] Introduce getFlinkConfigurationOptions to g… [flink]

2024-04-26 Thread via GitHub


davidradl closed pull request #24729: [FLINK-31223][sqlgateway] Introduce 
getFlinkConfigurationOptions to g…
URL: https://github.com/apache/flink/pull/24729


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31223] sql-client.sh fails to start with ssl enabled [flink]

2024-04-26 Thread via GitHub


davidradl commented on PR #22026:
URL: https://github.com/apache/flink/pull/22026#issuecomment-2079731646

   > > @davidradl Makes sense to back-port this, as we should treat it as a 
bugfix: the SQL client previously supported SSL, so this is a kind of 
regression.
   > > If you want, just go ahead.
   > 
   > Ok will do
   
   @reswqa Hi, I have had a quick look at the backport and it is not 
straightforward. I forgot to ask for the commits to be squashed; the first two 
commits cherry-pick cleanly, but the third, with the 10 files, does not. It 
makes changes to files that are not present in 1.18, for example 
flink-table/flink-sql-gateway-api/src/main/java/org/apache/flink/table/gateway/api/endpoint/SqlGatewayEndpointFactory.java.
 I think more files need to be backported. Could you advise on what else I 
need to do for the backport, unless you want to take over? Kind regards, 
David.  


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


