[GitHub] [flink] XComp commented on a diff in pull request #22601: [FLINK-31781][runtime] Introduces the contender ID to the LeaderElectionService interface

2023-06-15 Thread via GitHub


XComp commented on code in PR #22601:
URL: https://github.com/apache/flink/pull/22601#discussion_r1230500087


##
flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/AbstractHaServices.java:
##
@@ -247,7 +247,7 @@ private LeaderElection createLeaderElection(String leaderName) throws Exception
 leaderElectionService.startLeaderElectionBackend();
 
 closeableRegistry.registerCloseable(leaderElectionService);
-return leaderElectionService.createLeaderElection();
+return leaderElectionService.createLeaderElection(leaderName);

Review Comment:
   The `contenderID` that's passed in through `createLeaderElection` is saved 
in the `DefaultLeaderElection` and forwarded in each call to the 
`DefaultLeaderElectionService`. For now, we don't forward it to the driver; 
instead, the driver gets it through the `MultipleComponentLeaderElectionDriverAdapter` 
(which obtains the value through 
[AbstractHaServices:246](https://github.com/XComp/flink/blob/f4c8cdb8f370de26f55759e7ea4b6f5461c7a806/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/AbstractHaServices.java#L246)).
   
   In FLINK-31783 (PR #22661), we will remove the adapter functionality and 
integrate the `MultipleComponentLeaderElectionDriver` into the 
`DefaultLeaderElectionService`. I replaced the variable usage with a static 
string to make it clearer that the value isn't used for now.
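   The wiring described above can be sketched as follows. This is a minimal illustration with simplified stand-ins for the Flink classes discussed in the review, not their actual definitions:

```java
public class ContenderIdSketch {

    interface LeaderElection {
        String contenderID();
    }

    // Simplified stand-in for DefaultLeaderElection: it stores the contender ID
    // and hands it back, mirroring how the real class forwards the ID on each
    // call to the service.
    static final class DefaultLeaderElection implements LeaderElection {
        private final String contenderID;

        DefaultLeaderElection(String contenderID) {
            this.contenderID = contenderID;
        }

        @Override
        public String contenderID() {
            return contenderID;
        }
    }

    // Simplified stand-in for DefaultLeaderElectionService: the contender ID is
    // captured in the created LeaderElection but, as the review notes, not yet
    // passed on to the driver.
    static final class DefaultLeaderElectionService {
        LeaderElection createLeaderElection(String contenderID) {
            return new DefaultLeaderElection(contenderID);
        }
    }

    public static void main(String[] args) {
        LeaderElection election =
                new DefaultLeaderElectionService().createLeaderElection("resource-manager");
        System.out.println(election.contenderID());
    }
}
```

   The contender ID string ("resource-manager" here) is only an example value.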



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp merged pull request #22404: [FLINK-31797][runtime] Moves LeaderElection creation into HighAvailabilityServices

2023-06-15 Thread via GitHub


XComp merged PR #22404:
URL: https://github.com/apache/flink/pull/22404





[jira] [Resolved] (FLINK-31797) Move LeaderElectionService out of LeaderContender

2023-06-15 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-31797.
---
Resolution: Fixed

master: 55630de18baf6345072113f587006dac94f61c78

> Move LeaderElectionService out of LeaderContender
> -
>
> Key: FLINK-31797
> URL: https://issues.apache.org/jira/browse/FLINK-31797
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-jdbc] WenDing-Y commented on pull request #49: [FLINK-32068] connector jdbc support clickhouse

2023-06-15 Thread via GitHub


WenDing-Y commented on PR #49:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/49#issuecomment-1592422341

   What other tasks are needed for this PR to be merged, @MartijnVisser?





[jira] [Updated] (FLINK-32205) Flink Rest Client should support connecting to the server using URLs.

2023-06-15 Thread Weihua Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weihua Hu updated FLINK-32205:
--
Description: 
Currently, the Flink client can only connect to the server via address:port, 
which is configured by rest.address and rest.port.

But in some scenarios, the Flink server runs behind a proxy, for example when 
running on Kubernetes and exposing services through an ingress. The URL to 
access the Flink server can then be: http://\{proxy address}/\{some prefix path 
to identify flink clusters}/\{flink request path}

In FLINK-32030, the SQL Client gateway accepts URLs via the '--endpoint' option.

IMO, we should introduce an option, such as "rest.url-prefix", to make the 
Flink client work with URLs.

  was:
Currently, Flink Client can only connect to the server via the address:port, 
which is configured by the rest.address and rest.port.

But in some other scenarios. Flink Server is run behind a proxy. Such as 
running on Kubernetes and exposing services through ingress. The URL to access 
the Flink server can be: http://\{proxy address}/\{some prefix path to identify 
flink clusters}/\{flink request path}

In FLINK-32030, the SQL Client gateway accepts URLs by using the '--endpoint'.

IMO, we should introduce an option, such as "rest.endpoint", to make the Flink 
client work with URLs.


> Flink Rest Client should support connecting to the server using URLs.
> -
>
> Key: FLINK-32205
> URL: https://issues.apache.org/jira/browse/FLINK-32205
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Affects Versions: 1.17.0
>Reporter: Weihua Hu
>Priority: Major
>
> Currently, the Flink client can only connect to the server via address:port, 
> which is configured by the rest.address and rest.port.
> But in some scenarios, the Flink server runs behind a proxy, for example when 
> running on Kubernetes and exposing services through an ingress. The URL to 
> access the Flink server can then be: http://\{proxy address}/\{some prefix 
> path to identify flink clusters}/\{flink request path}
> In FLINK-32030, the SQL Client gateway accepts URLs via the '--endpoint' option.
> IMO, we should introduce an option, such as "rest.url-prefix", to make the 
> Flink client work with URLs.
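The proposed prefix handling can be illustrated with a small sketch. Note that "rest.url-prefix" is only the option name suggested in this issue, and `buildRestUrl` is a hypothetical helper, not part of Flink's REST client:

```java
public class RestUrlSketch {

    // Builds the base REST URL from rest.address/rest.port plus an optional
    // path prefix (the value a "rest.url-prefix"-style option would carry when
    // the server sits behind a proxy or ingress).
    static String buildRestUrl(String address, int port, String prefix) {
        String base = "http://" + address + ":" + port;
        if (prefix == null || prefix.isEmpty()) {
            return base;
        }
        // Normalize so both "/clusters/a" and "clusters/a" work.
        return base + (prefix.startsWith("/") ? prefix : "/" + prefix);
    }

    public static void main(String[] args) {
        // Without a prefix, behavior matches today's address:port scheme.
        System.out.println(buildRestUrl("localhost", 8081, null));
        // With a prefix, requests are routed through the proxy path.
        System.out.println(buildRestUrl("proxy.example.com", 80, "/clusters/flink-a"));
    }
}
```

The address, port, and prefix values above are placeholders for illustration only.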





[GitHub] [flink] FangYongs commented on a diff in pull request #22758: [FLINK-31549][jdbc-driver] Add jdbc driver docs

2023-06-15 Thread via GitHub


FangYongs commented on code in PR #22758:
URL: https://github.com/apache/flink/pull/22758#discussion_r1230560310


##
flink-table/flink-sql-jdbc-driver/src/main/java/org/apache/flink/table/jdbc/BaseConnection.java:
##
@@ -102,12 +101,6 @@ public int getTransactionIsolation() throws SQLException {
 "FlinkConnection#getTransactionIsolation is not supported yet.");
 }
 
-@Override
-public SQLWarning getWarnings() throws SQLException {

Review Comment:
   DONE






[jira] [Created] (FLINK-32343) Fix exception for jdbc tools

2023-06-15 Thread Fang Yong (Jira)
Fang Yong created FLINK-32343:
-

 Summary: Fix exception for jdbc tools
 Key: FLINK-32343
 URL: https://issues.apache.org/jira/browse/FLINK-32343
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / JDBC
Affects Versions: 1.18.0
Reporter: Fang Yong


Fix exception for jdbc tools





[jira] [Created] (FLINK-32342) SQL Server container behaves unexpected while testing with several surefire forks

2023-06-15 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-32342:
---

 Summary: SQL Server container behaves unexpected while testing 
with several surefire forks
 Key: FLINK-32342
 URL: https://issues.apache.org/jira/browse/FLINK-32342
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / JDBC
Affects Versions: jdbc-3.1.0
Reporter: Sergey Nuyanzin


By default, the connector inherits {{flink.forkCountITCase == 2}} from Flink.
It looks like the SQL Server container has issues starting in several surefire forks.
Based on 
[https://github.com/MartijnVisser/flink-connector-jdbc/actions/runs/5265349453/jobs/9517854060],
the SQL Server container hangs while starting:
{noformat}
"main" #1 prio=5 os_prio=0 cpu=1965.96ms elapsed=2568.93s tid=0x7f84a0027000 nid=0x1c82 runnable [0x7f84a41fc000]
   java.lang.Thread.State: RUNNABLE
	at java.net.SocketInputStream.socketRead0(java.base@11.0.19/Native Method)
	at java.net.SocketInputStream.socketRead(java.base@11.0.19/SocketInputStream.java:115)
	at java.net.SocketInputStream.read(java.base@11.0.19/SocketInputStream.java:168)
	at java.net.SocketInputStream.read(java.base@11.0.19/SocketInputStream.java:140)
	at com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream.readInternal(IOBuffer.java:1192)
	- locked <0x930e38f0> (a com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream)
	at com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream.read(IOBuffer.java:1179)
	at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:2307)
	- locked <0x930e38f0> (a com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.Prelogin(SQLServerConnection.java:3391)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:3200)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:2833)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:2671)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:1640)
	at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:936)
	at org.testcontainers.containers.JdbcDatabaseContainer.createConnection(JdbcDatabaseContainer.java:253)
	at org.testcontainers.containers.JdbcDatabaseContainer.createConnection(JdbcDatabaseContainer.java:218)
	at org.testcontainers.containers.JdbcDatabaseContainer.waitUntilContainerStarted(JdbcDatabaseContainer.java:158)
	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:490)
	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:344)
	at org.testcontainers.containers.GenericContainer$$Lambda$532/0x0001003d1440.call(Unknown Source)
	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:334)
	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
	at org.apache.flink.connector.jdbc.testutils.databases.sqlserver.SqlServerDatabase$SqlServerContainer.start(SqlServerDatabase.java:81)
	at org.apache.flink.connector.jdbc.testutils.databases.sqlserver.SqlServerDatabase.startDatabase(SqlServerDatabase.java:52)
	at org.apache.flink.connector.jdbc.testutils.DatabaseExtension.beforeAll(DatabaseExtension.java:122)
...
{noformat}


As a workaround, setting {{flink.forkCountITCase == 1}} solves the issue.
However, we need to find a better way to allow running the tests with several forks.





[jira] [Created] (FLINK-32344) MongoDB connector support unbounded streaming read via ChangeStream feature

2023-06-15 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-32344:
--

 Summary: MongoDB connector support unbounded streaming read via 
ChangeStream feature
 Key: FLINK-32344
 URL: https://issues.apache.org/jira/browse/FLINK-32344
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / MongoDB
Affects Versions: mongodb-1.0.1
Reporter: Jiabao Sun








[jira] [Comment Edited] (FLINK-32318) [flink-operator] missing s3 plugin in folder plugins

2023-06-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-32318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732909#comment-17732909
 ] 

Luís Costa edited comment on FLINK-32318 at 6/15/23 8:03 AM:
-

Hi [~nateab] 

Yes, already did that
{code:java}
kubectl exec -it flink-operator-8778bd969-2kxj5 -n flinkoperator /bin/bash

flink@flink-operator-8778bd969-2kxj5:/flink-kubernetes-operator$ ls -lrth /opt/flink/plugins/s3/
total 121M
-rw-r--r-- 1 flink flink 92M Jun 14 17:19 flink-s3-fs-presto-1.16.2.jar
-rw-r--r-- 1 flink flink 30M Jun 14 17:19 flink-s3-fs-hadoop-1.16.2.jar{code}


was (Author: JIRAUSER286937):
Hi [~nateab] 

Yes, already did that
{code:java}
flink@flink-operator-8778bd969-2kxj5:/flink-kubernetes-operator$ ls -lrth 
/opt/flink/plugins/s3/
total 121M
-rw-r--r-- 1 flink flink 92M Jun 14 17:19 flink-s3-fs-presto-1.16.2.jar
-rw-r--r-- 1 flink flink 30M Jun 14 17:19 flink-s3-fs-hadoop-1.16.2.jar {code}

> [flink-operator] missing s3 plugin in folder plugins
> 
>
> Key: FLINK-32318
> URL: https://issues.apache.org/jira/browse/FLINK-32318
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.5.0
>Reporter: Luís Costa
>Priority: Minor
>
> Greetings,
> I'm trying to configure [Flink's Kubernetes HA 
> services|https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/ha/kubernetes_ha/]
>  for flink operator jobs, but got an error regarding s3 plugin: _"Could not 
> find a file system implementation for scheme 's3'. The scheme is directly 
> supported by Flink through the following plugin(s): flink-s3-fs-hadoop, 
> flink-s3-fs-presto"_
> {code:java}
> 2023-06-12 10:05:16,981 INFO  akka.remote.Remoting [] - Starting remoting
> 2023-06-12 10:05:17,194 INFO  akka.remote.Remoting [] - Remoting started; listening on addresses :[akka.tcp://flink@10.4.125.209:6123]
> 2023-06-12 10:05:17,377 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils [] - Actor system started at akka.tcp://flink@10.4.125.209:6123
> 2023-06-12 10:05:18,175 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting KubernetesApplicationClusterEntrypoint down with application status FAILED. Diagnostics org.apache.flink.util.FlinkException: Could not create the ha services from the instantiated HighAvailabilityServicesFactory org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.
>   at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:299)
>   at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:285)
>   at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:145)
>   at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:439)
>   at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:382)
>   at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:282)
>   at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:232)
>   at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>   at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:229)
>   at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:729)
>   at org.apache.flink.kubernetes.entrypoint.KubernetesApplicationClusterEntrypoint.main(KubernetesApplicationClusterEntrypoint.java:86)
> Caused by: java.io.IOException: Could not create FileSystem for highly available storage path (s3://td-infra-stg-us-east-1-s3-flinkoperator/flink-data/ha/flink-basic-example-xpto)
>   at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:102)
>   at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:86)
>   at org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.createHAServices(KubernetesHaServicesFactory.java:41)
>   at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:296)
>   ... 10 more
> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: 
> Could not find a file system implementation for scheme 's3'. The scheme 
[GitHub] [flink-connector-aws] MartijnVisser closed pull request #80: [FLINK-31923] Run nightly builds against multiple branches and Flink versions

2023-06-15 Thread via GitHub


MartijnVisser closed pull request #80: [FLINK-31923] Run nightly builds against 
multiple branches and Flink versions
URL: https://github.com/apache/flink-connector-aws/pull/80





[GitHub] [flink-connector-opensearch] boring-cyborg[bot] commented on pull request #27: [FLINK-31923] Run nightly builds against multiple branches and Flink versions

2023-06-15 Thread via GitHub


boring-cyborg[bot] commented on PR #27:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/27#issuecomment-1592621331

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[jira] [Created] (FLINK-32346) JdbcNumericBetweenParametersProvider Sharding key boundaries large storage long integer overflow, use BigDecimal instead Long

2023-06-15 Thread zhilinli (Jira)
zhilinli created FLINK-32346:


 Summary: JdbcNumericBetweenParametersProvider  Sharding key 
boundaries large storage long integer overflow, use BigDecimal instead Long
 Key: FLINK-32346
 URL: https://issues.apache.org/jira/browse/FLINK-32346
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Reporter: zhilinli
 Attachments: image-2023-06-15-16-42-16-773.png

The sharding key boundaries are stored as Long values, which can overflow for 
large keys. Use BigDecimal instead of Long so that wide types such as 
DecimalType(30,0), which cannot fit into a LONG, are also supported. This can 
be assigned to me; I'd like to complete it.

!image-2023-06-15-16-45-44-721.png!
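The overflow described above can be demonstrated with a small sketch; `naiveBatchSize` and `safeBatchSize` are hypothetical helpers for illustration, not the actual JdbcNumericBetweenParametersProvider code:

```java
import java.math.BigInteger;

public class ShardBoundaries {

    // Long arithmetic: (max - min) already overflows when the boundaries span
    // most of the long range, yielding a nonsensical (negative) batch size.
    static long naiveBatchSize(long min, long max, int batches) {
        return (max - min + 1) / batches;
    }

    // BigInteger (or BigDecimal with scale 0) arithmetic cannot overflow, so
    // boundaries coming from wide types such as DecimalType(30,0) stay correct.
    static BigInteger safeBatchSize(BigInteger min, BigInteger max, int batches) {
        return max.subtract(min).add(BigInteger.ONE).divide(BigInteger.valueOf(batches));
    }

    public static void main(String[] args) {
        long min = Long.MIN_VALUE / 2;
        long max = Long.MAX_VALUE / 2 + 10;
        System.out.println(naiveBatchSize(min, max, 4)); // negative: long overflow
        System.out.println(safeBatchSize(
                BigInteger.valueOf(min), BigInteger.valueOf(max), 4)); // correct positive size
    }
}
```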





[jira] [Updated] (FLINK-32346) JdbcNumericBetweenParametersProvider Sharding key boundaries large storage long integer overflow, use BigDecimal instead Long

2023-06-15 Thread zhilinli (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhilinli updated FLINK-32346:
-
Description: 
The sharding key boundaries are stored as Long values, which can overflow for 
large keys. Use BigDecimal instead of Long so that wide types such as 
DecimalType(30,0), which cannot fit into a LONG, are also supported. This can 
be assigned to me; I'd like to complete it.

!image-2023-06-15-16-46-13-188.png!

  was:
The sharding key boundaries are stored as Long values, which can overflow for 
large keys. Use BigDecimal instead of Long so that wide types such as 
DecimalType(30,0), which cannot fit into a LONG, are also supported. This can 
be assigned to me; I'd like to complete it.

!image-2023-06-15-16-45-44-721.png!


> JdbcNumericBetweenParametersProvider  Sharding key boundaries large storage 
> long integer overflow, use BigDecimal instead Long
> --
>
> Key: FLINK-32346
> URL: https://issues.apache.org/jira/browse/FLINK-32346
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: zhilinli
>Priority: Major
> Attachments: image-2023-06-15-16-42-16-773.png, 
> image-2023-06-15-16-46-13-188.png
>
>
> The sharding key boundaries are stored as Long values, which can overflow for 
> large keys. Use BigDecimal instead of Long so that wide types such as 
> DecimalType(30,0), which cannot fit into a LONG, are also supported. This can 
> be assigned to me; I'd like to complete it.
> !image-2023-06-15-16-46-13-188.png!





[jira] [Updated] (FLINK-32346) JdbcNumericBetweenParametersProvider Sharding key boundaries large storage long integer overflow, use BigDecimal instead Long

2023-06-15 Thread zhilinli (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhilinli updated FLINK-32346:
-
Attachment: image-2023-06-15-16-46-13-188.png

> JdbcNumericBetweenParametersProvider  Sharding key boundaries large storage 
> long integer overflow, use BigDecimal instead Long
> --
>
> Key: FLINK-32346
> URL: https://issues.apache.org/jira/browse/FLINK-32346
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: zhilinli
>Priority: Major
> Attachments: image-2023-06-15-16-42-16-773.png, 
> image-2023-06-15-16-46-13-188.png
>
>
> The sharding key boundaries are stored as Long values, which can overflow for 
> large keys. Use BigDecimal instead of Long so that wide types such as 
> DecimalType(30,0), which cannot fit into a LONG, are also supported. This can 
> be assigned to me; I'd like to complete it.
> !image-2023-06-15-16-45-44-721.png!





[jira] [Updated] (FLINK-32346) JdbcNumericBetweenParametersProvider Sharding key boundaries large storage long integer overflow, use BigDecimal instead Long

2023-06-15 Thread zhilinli (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhilinli updated FLINK-32346:
-
Description: 
*JdbcNumericBetweenParametersProvider.class*

The sharding key boundaries are stored as Long values, which can overflow for 
large keys. Use BigDecimal instead of Long so that wide types such as 
DecimalType(30,0), which cannot fit into a LONG, are also supported. This can 
be assigned to me; I'd like to complete it.

 

  was:
The sharding key boundaries are stored as Long values, which can overflow for 
large keys. Use BigDecimal instead of Long so that wide types such as 
DecimalType(30,0), which cannot fit into a LONG, are also supported. This can 
be assigned to me; I'd like to complete it.

!image-2023-06-15-16-46-13-188.png!


> JdbcNumericBetweenParametersProvider  Sharding key boundaries large storage 
> long integer overflow, use BigDecimal instead Long
> --
>
> Key: FLINK-32346
> URL: https://issues.apache.org/jira/browse/FLINK-32346
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: zhilinli
>Priority: Major
> Attachments: image-2023-06-15-16-42-16-773.png, 
> image-2023-06-15-16-46-13-188.png
>
>
> *JdbcNumericBetweenParametersProvider.class*
> The sharding key boundaries are stored as Long values, which can overflow for 
> large keys. Use BigDecimal instead of Long so that wide types such as 
> DecimalType(30,0), which cannot fit into a LONG, are also supported. This can 
> be assigned to me; I'd like to complete it.
>  





[GitHub] [flink] StefanRRichter opened a new pull request, #22788: [FLINK-32345][state] Improve parallel download of RocksDB incremental state.

2023-06-15 Thread via GitHub


StefanRRichter opened a new pull request, #22788:
URL: https://github.com/apache/flink/pull/22788

   ## What is the purpose of the change
   
   This commit improves RocksDBStateDownloader to support parallelized state 
download across multiple state types and across multiple state handles. This 
can improve our download times for scale-in.
   
   
   ## Brief change log
   - Introduced `StateHandleDownloadSpec` to combine incremental remote handles 
with their download paths.
   - Modified `RocksDBStateDownloader` to accept a list of 
`StateHandleDownloadSpec` that can combine files from multiple handles and 
sub-states (shared, private).
   - Modified `RocksDBIncrementalRestoreOperation` to first assemble a complete 
list of `StateHandleDownloadSpec` before invoking the download process.
   - Adapted and added unit tests in `RocksDBStateDownloaderTest`.
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
   - `RocksDBStateDownloaderTest`. I added more tests there.
   - `EmbeddedRocksDBStateBackendTest`
   -  `EmbeddedRocksDBStateBackendMigrationTest`
   - `RescalingBenchmarkTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[GitHub] [flink] FangYongs commented on a diff in pull request #22583: [FLINK-31673][jdbc-driver] Add e2e test for flink jdbc driver

2023-06-15 Thread via GitHub


FangYongs commented on code in PR #22583:
URL: https://github.com/apache/flink/pull/22583#discussion_r1230700575


##
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/SqlGateway.java:
##
@@ -168,7 +168,6 @@ public ShutdownThread(SqlGateway gateway) {
 @Override
 public void run() {
 // Shutdown the gateway
-System.out.println("\nShutting down the Flink SqlGateway...");

Review Comment:
   @libenchao Yes, it would cause the e2e test to fail, because the e2e test framework 
checks the `.out` log file and confirms that it is empty. I have confirmed 
with @fsk119 that the `System.out.println` here is meaningless, so we can 
remove it directly.






[jira] [Closed] (FLINK-31673) Add E2E tests for flink jdbc driver

2023-06-15 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li closed FLINK-31673.
--
Fix Version/s: 1.18.0
   Resolution: Fixed

Implemented via 
[https://github.com/apache/flink/commit/089e9edcb6f250fbd18ae4cb0f285d47c629deb4]
 (master)

[~zjureel] Thanks for your PR!

> Add E2E tests for flink jdbc driver
> ---
>
> Key: FLINK-31673
> URL: https://issues.apache.org/jira/browse/FLINK-31673
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / JDBC, Tests
>Reporter: Benchao Li
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Since the jdbc driver will be used by third-party projects, and we've introduced 
> a bundled jar in flink-sql-jdbc-driver-bundle, we'd better have e2e tests 
> to verify and ensure it works fine (in case of dependency management issues).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-elasticsearch] MartijnVisser merged pull request #68: [FLINK-31923] Run nightly builds against multiple branches and Flink versions

2023-06-15 Thread via GitHub


MartijnVisser merged PR #68:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/68





[jira] [Closed] (FLINK-26041) AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure hang on azure

2023-06-15 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang closed FLINK-26041.
--
Resolution: Cannot Reproduce

> AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure 
> hang on azure
> -
>
> Key: FLINK-26041
> URL: https://issues.apache.org/jira/browse/FLINK-26041
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
>
> {code:java}
> Feb 08 13:04:58 "main" #1 prio=5 os_prio=0 tid=0x7fdcf000b800 nid=0x47bd waiting on condition [0x7fdcf697b000]
> Feb 08 13:04:58    java.lang.Thread.State: WAITING (parking)
> Feb 08 13:04:58   at sun.misc.Unsafe.park(Native Method)
> Feb 08 13:04:58   - parking to wait for  <0x8f644330> (a java.util.concurrent.CompletableFuture$Signaller)
> Feb 08 13:04:58   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Feb 08 13:04:58   at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Feb 08 13:04:58   at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Feb 08 13:04:58   at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Feb 08 13:04:58   at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 08 13:04:58   at org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:36)
> Feb 08 13:04:58   at org.apache.flink.test.recovery.AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure(AbstractTaskManagerProcessFailureRecoveryTest.java:209)
> Feb 08 13:04:58   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Feb 08 13:04:58   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 08 13:04:58   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 08 13:04:58   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 08 13:04:58   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 08 13:04:58   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 08 13:04:58   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 08 13:04:58   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 08 13:04:58   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 08 13:04:58   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 08 13:04:58   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 08 13:04:58   at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 08 13:04:58   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 08 13:04:58   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 08 13:04:58   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 08 13:04:58   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 08 13:04:58   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 08 13:04:58   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 08 13:04:58   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 08 13:04:58   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Feb 08 13:04:58   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Feb 08 13:04:58   at org.junit.runners.Suite.runChild(Suite.java:128)
> Feb 08 13:04:58   at org.junit.runners.Suite.runChild(Suite.java:27)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30901=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=14617



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26041) AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure hang on azure

2023-06-15 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732964#comment-17732964
 ] 

Lijie Wang commented on FLINK-26041:


Closing this ticket as it cannot be reproduced.

> AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure 
> hang on azure
> -
>
> Key: FLINK-26041
> URL: https://issues.apache.org/jira/browse/FLINK-26041
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
>
> {code:java}
> Feb 08 13:04:58 "main" #1 prio=5 os_prio=0 tid=0x7fdcf000b800 nid=0x47bd 
> waiting on condition [0x7fdcf697b000]
> Feb 08 13:04:58java.lang.Thread.State: WAITING (parking)
> Feb 08 13:04:58   at sun.misc.Unsafe.park(Native Method)
> Feb 08 13:04:58   - parking to wait for  <0x8f644330> (a 
> java.util.concurrent.CompletableFuture$Signaller)
> Feb 08 13:04:58   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Feb 08 13:04:58   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 08 13:04:58   at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:36)
> Feb 08 13:04:58   at 
> org.apache.flink.test.recovery.AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure(AbstractTaskManagerProcessFailureRecoveryTest.java:209)
> Feb 08 13:04:58   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 08 13:04:58   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 08 13:04:58   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 08 13:04:58   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 08 13:04:58   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 08 13:04:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 08 13:04:58   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 08 13:04:58   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 08 13:04:58   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Feb 08 13:04:58   at org.junit.runners.Suite.runChild(Suite.java:128)
> Feb 08 13:04:58   at org.junit.runners.Suite.runChild(Suite.java:27)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30901=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=14617
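The thread dump above shows the test parked in a no-timeout `CompletableFuture.get()` reached through `AutoCloseableAsync.close`, so a stuck TaskManager shutdown hangs the whole build. A minimal sketch (not Flink's actual implementation) of bounding such a shutdown wait so it fails fast instead of hanging:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedClose {

    /** Waits for the termination future, returning false on timeout instead of parking forever. */
    public static boolean closeWithTimeout(CompletableFuture<Void> terminationFuture, long timeoutMillis) {
        try {
            terminationFuture.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            // The caller can now dump threads and fail the test with context instead of hanging.
            return false;
        } catch (Exception e) {
            throw new IllegalStateException("close failed", e);
        }
    }

    public static void main(String[] args) {
        // A future that never completes simulates the stuck shutdown from the stack trace.
        System.out.println(closeWithTimeout(new CompletableFuture<>(), 100) ? "closed" : "timed out");
        // prints "timed out"
    }
}
```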





[GitHub] [flink-connector-mongodb] MartijnVisser merged pull request #10: [FLINK-31923] Run nightly builds against multiple branches and Flink versions

2023-06-15 Thread via GitHub


MartijnVisser merged PR #10:
URL: https://github.com/apache/flink-connector-mongodb/pull/10


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32348) MongoDB tests are flaky and time out

2023-06-15 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732967#comment-17732967
 ] 

Martijn Visser commented on FLINK-32348:


A full run with thread dumps can be found at 
https://github.com/apache/flink-connector-mongodb/actions/runs/5276796512/jobs/9543998611?pr=10#step:15:48

> MongoDB tests are flaky and time out
> 
>
> Key: FLINK-32348
> URL: https://issues.apache.org/jira/browse/FLINK-32348
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / MongoDB
>Reporter: Martijn Visser
>Priority: Critical
>  Labels: test-stability
>
> https://github.com/apache/flink-connector-mongodb/actions/runs/5232649632/jobs/9447519651#step:13:39307





[jira] [Commented] (FLINK-19830) Properly implements processing-time temporal table join

2023-06-15 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732976#comment-17732976
 ] 

Jark Wu commented on FLINK-19830:
-

Hi [~rmetzger], sorry for the late reply. Yes, your understanding of the 
Processing Time Temporal Join is correct. But this is not the same limitation 
as with the Event Time Temporal Join, because there the probe side waits for 
the build side based on the watermark, so no wrong unmatched records are 
emitted for the event-time join. 

Currently, we have plans to support some kind of custom event mechanism for 
this feature in the near future. I hope to be back with more information 
soon! 

> Properly implements processing-time temporal table join
> ---
>
> Key: FLINK-19830
> URL: https://issues.apache.org/jira/browse/FLINK-19830
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> The existing TemporalProcessTimeJoinOperator already supports temporal 
> table joins.
>  However, the semantics of this implementation are problematic: the join 
> processing for the left stream doesn't wait for a complete snapshot of the 
> temporal table, which may mislead users in production environments.
> Under processing-time temporal join semantics, obtaining a complete 
> snapshot of the temporal table may require introducing a new mechanism in Flink SQL in 
> the future.
> **Background** : 
>  * The reason we turn off the switch[1] for the `FOR SYSTEM_TIME AS OF` 
> syntax for *temporal table joins* is solely the semantic consideration above.
>  * The reason we keep the *temporal table function* enabled is that it has been 
> around for a long time; although it has the same semantic problem, we 
> still support it for compatibility.
> [1] 
> [https://github.com/apache/flink/blob/4fe9f525a92319acc1e3434bebed601306f7a16f/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecTemporalJoin.java#L257]





[jira] [Created] (FLINK-32349) Support atomic for CREATE TABLE AS SELECT(CTAS) statement

2023-06-15 Thread tartarus (Jira)
tartarus created FLINK-32349:


 Summary: Support atomic for CREATE TABLE AS SELECT(CTAS) statement
 Key: FLINK-32349
 URL: https://issues.apache.org/jira/browse/FLINK-32349
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: tartarus
 Fix For: 1.18.0


For detailed information, see FLIP-305

https://cwiki.apache.org/confluence/display/FLINK/FLIP-305%3A+Support+atomic+for+CREATE+TABLE+AS+SELECT%28CTAS%29+statement





[jira] [Updated] (FLINK-32325) SqlServerDynamicTableSourceITCase is flaky

2023-06-15 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-32325:
---
Fix Version/s: jdbc-3.1.1

> SqlServerDynamicTableSourceITCase is flaky
> --
>
> Key: FLINK-32325
> URL: https://issues.apache.org/jira/browse/FLINK-32325
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: jdbc-3.2.0, jdbc-3.1.1
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Critical
>  Labels: pull-request-available
> Fix For: jdbc-3.1.1
>
>
> {code:java}
> [INFO] Running 
> org.apache.flink.connector.jdbc.databases.sqlserver.table.SqlServerDynamicTableSourceITCase
> [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.533 
> s - in org.apache.flink.connector.jdbc.JdbcITCase
> [INFO] Running 
> org.apache.flink.connector.jdbc.databases.sqlserver.table.SqlServerTableSourceITCase
> Jun 13, 2023 8:49:50 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 39249b3b-40c2-4f71-9598-20abe5e93d2d Prelogin error: host localhost port 
> 32783 Unexpected end of prelogin response after 0 bytes read
> Jun 13, 2023 8:49:50 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 2c0e4870-7284-4022-b97d-7f441fc834dd Prelogin error: host localhost port 
> 32783 Unexpected end of prelogin response after 0 bytes read
> Jun 13, 2023 8:49:50 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 34e9eff4-445c-477e-8975-d23180897ff8 Prelogin error: host localhost port 
> 32783 Unexpected end of prelogin response after 0 bytes read
> Jun 13, 2023 8:49:50 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 0d4fe549-66e7-4354-b7c5-ed7ee66527d2 Prelogin error: host localhost port 
> 32783 Unexpected end of prelogin response after 0 bytes read
> Jun 13, 2023 8:49:51 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 4e98d176-2f1f-4dec-af3e-798ecb536c39 Prelogin error: host localhost port 
> 32783 Unexpected end of prelogin response after 0 bytes read
> Jun 13, 2023 8:49:52 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 32ba6716-772b-42f3-b27c-9ec1593adcd7 Prelogin error: host localhost port 
> 32783 Error reading prelogin response: Connection reset 
> ClientConnectionId:32ba6716-772b-42f3-b27c-9ec1593adcd7
> Jun 13, 2023 8:49:53 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> fe5e363f-fade-48b8-beb1-9f2e3a524282 Prelogin error: host localhost port 
> 32783 Unexpected end of prelogin response after 0 bytes read
> Jun 13, 2023 8:49:54 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> b454a476-5f05-4cd7-bb43-f62e1f7e030e Prelogin error: host localhost port 
> 32783 Unexpected end of prelogin response after 0 bytes read
> Jun 13, 2023 8:49:55 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 50282ce3-1fdc-4fa5-8467-4cd4867a8395 Prelogin error: host localhost port 
> 32783 Unexpected end of prelogin response after 0 bytes read
> Jun 13, 2023 8:49:56 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 837d01bc-7b0c-4532-88d2-1d91671d74f3 Prelogin error: host localhost port 
> 32783 Error reading prelogin response: Connection reset 
> ClientConnectionId:837d01bc-7b0c-4532-88d2-1d91671d74f3
> Jun 13, 2023 8:49:57 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 43ed7181-5b5d-46e3-b7d2-f3cd2decb043 Prelogin error: host localhost port 
> 32783 Error reading prelogin response: Connection reset 
> ClientConnectionId:43ed7181-5b5d-46e3-b7d2-f3cd2decb043
> Jun 13, 2023 8:49:58 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> f5a54844-ef86-4675-9b39-ede75733686b Prelogin error: host localhost port 
> 32783 Error reading prelogin response: Connection reset 
> ClientConnectionId:f5a54844-ef86-4675-9b39-ede75733686b
> Jun 13, 2023 8:49:59 AM com.microsoft.sqlserver.jdbc.SQLServerConnection 
> Prelogin
> WARNING: ConnectionID:1 ClientConnectionId: 
> 82da197b-0c48-4cb1-9a0b-e5dbfa27c616 Prelogin error: host localhost port 
> 32783 Error reading prelogin response: Connection reset 
> ClientConnectionId:82da197b-0c48-4cb1-9a0b-e5dbfa27c616
> Jun 13, 2023 8:50:00 AM 
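The repeated prelogin "Connection reset" warnings above are typical of a client connecting before the container's SQL endpoint is ready. A generic, hypothetical retry-until-ready helper (not part of the connector's actual code) sketches the usual mitigation:

```java
/** Generic retry-until-ready helper; a sketch, not part of the connector's code. */
public class Retry {

    @FunctionalInterface
    public interface Action<T> {
        T run() throws Exception;
    }

    /** Retries the action, sleeping between attempts, until it succeeds or attempts run out. */
    public static <T> T withRetries(Action<T> action, int maxAttempts, long sleepMillis) {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return action.run();
            } catch (Exception e) {
                last = e; // e.g. a "Prelogin error ... Connection reset" while the container boots
                try {
                    Thread.sleep(sleepMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw new RuntimeException("giving up after " + maxAttempts + " attempts", last);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("Connection reset"); // fails twice, then succeeds
            }
            return "connected";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // prints "connected after 3 attempts"
    }
}
```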

[jira] [Created] (FLINK-32350) FLIP-311: Support Call Stored Procedure

2023-06-15 Thread luoyuxia (Jira)
luoyuxia created FLINK-32350:


 Summary: FLIP-311: Support Call Stored Procedure
 Key: FLINK-32350
 URL: https://issues.apache.org/jira/browse/FLINK-32350
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: luoyuxia
Assignee: luoyuxia


Umbrella issue for 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-311%3A+Support+Call+Stored+Procedure





[GitHub] [flink] reswqa opened a new pull request, #22790: [hotfix][state] Fix incorrect comments of channel input state redistribution.

2023-06-15 Thread via GitHub


reswqa opened a new pull request, #22790:
URL: https://github.com/apache/flink/pull/22790

   
   
   
   
   ## What is the purpose of the change
   
   *Fix incorrect comments of channel input state redistribution.*
   
   
   ## Brief change log
   
 - *Fix incorrect comments of channel input state redistribution.*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   





[jira] [Created] (FLINK-32352) Support converting call procedure statement to corresponding operation

2023-06-15 Thread luoyuxia (Jira)
luoyuxia created FLINK-32352:


 Summary: Support converting call procedure statement to corresponding operation
 Key: FLINK-32352
 URL: https://issues.apache.org/jira/browse/FLINK-32352
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: luoyuxia








[jira] [Created] (FLINK-32354) Support to execute the call procedure operation

2023-06-15 Thread luoyuxia (Jira)
luoyuxia created FLINK-32354:


 Summary: Support to execute the call procedure operation
 Key: FLINK-32354
 URL: https://issues.apache.org/jira/browse/FLINK-32354
 Project: Flink
  Issue Type: Sub-task
Reporter: luoyuxia








[jira] [Commented] (FLINK-32357) Elasticsearch v3.0 won't compile when testing against Flink 1.17.1

2023-06-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733022#comment-17733022
 ] 

Sergey Nuyanzin commented on FLINK-32357:
-

I remember something like that, I will have a look

> Elasticsearch v3.0 won't compile when testing against Flink 1.17.1
> --
>
> Key: FLINK-32357
> URL: https://issues.apache.org/jira/browse/FLINK-32357
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Priority: Major
>
> {code:java}
> [INFO] 
> 
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-elasticsearch-base: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
>  -> [Help 1]
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159





[GitHub] [flink-kubernetes-operator] gaborgsomogyi commented on a diff in pull request #614: [FLINK-32057] Support 1.18 rescale api for applying parallelism overrides

2023-06-15 Thread via GitHub


gaborgsomogyi commented on code in PR #614:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/614#discussion_r1230837995


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/NativeFlinkService.java:
##
@@ -132,4 +165,153 @@ protected void deleteClusterInternal(
 deleteHAData(namespace, clusterId, conf);
 }
 }
+
+@Override
+public boolean scale(FlinkResourceContext ctx) throws Exception {
+var resource = ctx.getResource();
+var spec = resource.getSpec();
+
+var observeConfig = ctx.getObserveConfig();
+
+if (spec.getJob() == null
+|| !observeConfig.get(
+
KubernetesOperatorConfigOptions.JOB_UPGRADE_INPLACE_SCALING_ENABLED)) {
+return false;
+}
+
+if 
(!observeConfig.get(FLINK_VERSION).isNewerVersionThan(FlinkVersion.v1_17)) {
+LOG.debug("In-place rescaling is only available starting from 
Flink 1.18");
+return false;
+}
+
+if (!observeConfig
+.get(JobManagerOptions.SCHEDULER)
+.equals(JobManagerOptions.SchedulerType.Adaptive)) {
+LOG.debug("In-place rescaling is only available with the adaptive 
scheduler");
+return false;
+}
+
+var status = resource.getStatus();
+if (ReconciliationUtils.isJobInTerminalState(status)
+|| 
JobStatus.RECONCILING.name().equals(status.getJobStatus().getState())) {
+LOG.info("Job in terminal or reconciling state cannot be scaled 
in-place");

Review Comment:
   +1 to having either such a log or a comment. One can follow the logic better with 
such explanations.



##
examples/autoscaling/Dockerfile:
##
@@ -16,5 +16,5 @@
 # limitations under the License.
 

 
-FROM flink:1.17
+FROM ghcr.io/apache/flink-docker:1.18-SNAPSHOT-scala_2.12-java11-debian

Review Comment:
   I know there is no 1.18 release yet, but using a snapshot can make tests fail 
randomly.



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/deployment/AbstractFlinkDeploymentObserver.java:
##
@@ -233,42 +233,41 @@ private void onMissingDeployment(FlinkDeployment 
deployment) {
 }
 
 @Override
-protected void updateStatusToDeployedIfAlreadyUpgraded(
-FlinkResourceContext ctx) {
+protected boolean 
checkIfAlreadyUpgraded(FlinkResourceContext ctx) {
 var flinkDep = ctx.getResource();
 var status = flinkDep.getStatus();
+
+// We are performing a full upgrade
 Optional depOpt = 
ctx.getJosdkContext().getSecondaryResource(Deployment.class);
-depOpt.ifPresent(
-deployment -> {
-Map annotations = 
deployment.getMetadata().getAnnotations();
-if (annotations == null) {
-return;
-}
-Long deployedGeneration =
-
Optional.ofNullable(annotations.get(FlinkUtils.CR_GENERATION_LABEL))
-.map(Long::valueOf)
-.orElse(-1L);
-
-Long upgradeTargetGeneration =
-
ReconciliationUtils.getUpgradeTargetGeneration(flinkDep);
-
-if (deployedGeneration.equals(upgradeTargetGeneration)) {
-logger.info("Pending upgrade is already deployed, 
updating status.");
-
ReconciliationUtils.updateStatusForAlreadyUpgraded(flinkDep);
-if (flinkDep.getSpec().getJob() != null) {
-status.getJobStatus()
-.setState(
-
org.apache.flink.api.common.JobStatus.RECONCILING
-.name());
-}
-
status.setJobManagerDeploymentStatus(JobManagerDeploymentStatus.DEPLOYING);
-} else {
-logger.warn(
-"Running deployment generation {} doesn't 
match upgrade target generation {}.",
-deployedGeneration,
-upgradeTargetGeneration);
-}
-});
+
+if (!depOpt.isPresent()) {
+return false;

Review Comment:
   In other places with `return false/true` there is an explanation of what happens. 
Maybe we can add something to all of them. This applies not just here but to all 
boolean exits.



##
flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/TestingFlinkService.java:
##
@@ -547,14 +553,30 @@ public Map getClusterInfo(Configuration 
conf) {
 }
 
 @Override
-

[GitHub] [flink] Tartarus0zm commented on pull request #22344: [FLINK-31721][core] Move JobStatusHook to flink-core module

2023-06-15 Thread via GitHub


Tartarus0zm commented on PR #22344:
URL: https://github.com/apache/flink/pull/22344#issuecomment-1592937653

   @gaoyunhaii hello, FLIP-305 has been accepted; please take a look at this PR





[GitHub] [flink] XComp commented on pull request #22601: [FLINK-31781][runtime] Introduces the contender ID to the LeaderElectionService interface

2023-06-15 Thread via GitHub


XComp commented on PR #22601:
URL: https://github.com/apache/flink/pull/22601#issuecomment-1592971299

   Thanks. I squashed the commits for a final CI run (even though I realized 
after the push that I could have just used the squash & merge in this case 
:facepalm: ).





[GitHub] [flink] XComp commented on pull request #22642: [FLINK-32177][runtime] Refactors MultipleComponentLeaderElectionDriver.Listener.notifyAllKnownLeaderInformation signature

2023-06-15 Thread via GitHub


XComp commented on PR #22642:
URL: https://github.com/apache/flink/pull/22642#issuecomment-1592975146

   I rebased onto the most recent version of FLINK-31781 after it was approved, to 
fix the conflicts.
   
   > The LeaderInformationWithComponentId is more expressive though ("What is 
that String key?"). Maybe we should wrap the map instead, we aren't doing a lot 
with the map anyway.
   
   I added a commit that introduces `LeaderInformationRegister`, wrapping the 
`Map` from `contenderID` to `LeaderInformation`.
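As an illustration of the wrapper described in the comment above, here is a minimal sketch of what a contenderID-keyed register could look like. The class shape, method names, and the simplified `String` leader value are assumptions for illustration, not the actual Flink API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/** Illustrative contenderID-keyed register; names and the String leader value are assumptions. */
public final class LeaderInformationRegister {
    // contenderID -> leader information (in Flink, the value would be a LeaderInformation record)
    private final Map<String, String> leaderInfoPerContender;

    public LeaderInformationRegister(Map<String, String> leaderInfoPerContender) {
        this.leaderInfoPerContender = new HashMap<>(leaderInfoPerContender);
    }

    /** Looks up the leader information registered for the given contender, if any. */
    public Optional<String> forContenderID(String contenderID) {
        return Optional.ofNullable(leaderInfoPerContender.get(contenderID));
    }

    /** Returns a new register with the entry for the given contender replaced. */
    public LeaderInformationRegister merge(String contenderID, String leaderAddress) {
        Map<String, String> copy = new HashMap<>(leaderInfoPerContender);
        copy.put(contenderID, leaderAddress);
        return new LeaderInformationRegister(copy);
    }

    public static void main(String[] args) {
        LeaderInformationRegister register =
                new LeaderInformationRegister(new HashMap<>())
                        .merge("resource_manager", "some-leader-address");
        System.out.println(register.forContenderID("resource_manager").isPresent()); // prints "true"
    }
}
```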





[GitHub] [flink] flinkbot commented on pull request #22792: [FLINK-25000][build] Java 17 uses Scala 2.12.15

2023-06-15 Thread via GitHub


flinkbot commented on PR #22792:
URL: https://github.com/apache/flink/pull/22792#issuecomment-1593041863

   
   ## CI report:
   
   * baea61f3e55e790e63f5e84d02a58c9fc5174291 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] XComp commented on pull request #22656: [FLINK-32180][runtime] Moves error handling from DefaultMultipleComponentLeaderElectionService into the MultipleComponentLeaderElectionDriver i

2023-06-15 Thread via GitHub


XComp commented on PR #22656:
URL: https://github.com/apache/flink/pull/22656#issuecomment-1593052042

   Rebased to the most-recent version of FLINK-31782 (to fix the conflicts).





[jira] [Commented] (FLINK-32347) Exceptions from the CompletedCheckpointStore are not registered by the CheckpointFailureManager

2023-06-15 Thread Stefan Richter (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733059#comment-17733059
 ] 

Stefan Richter commented on FLINK-32347:


Hey, I've already opened a PR. The issue was still unassigned, so I thought I 
could still work on it.

> Exceptions from the CompletedCheckpointStore are not registered by the 
> CheckpointFailureManager 
> 
>
> Key: FLINK-32347
> URL: https://issues.apache.org/jira/browse/FLINK-32347
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.3, 1.16.2, 1.17.1
>Reporter: Tigran Manasyan
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
>
> Currently, if an error occurs while saving a completed checkpoint in the 
> {_}CompletedCheckpointStore{_}, the _CheckpointCoordinator_ doesn't call the 
> _CheckpointFailureManager_ to handle the error. As a result, errors from the 
> _CompletedCheckpointStore_ don't increase the failed 
> checkpoint count, and the 
> _'execution.checkpointing.tolerable-failed-checkpoints'_ option does not 
> limit the number of errors of this kind in any way.
> A possible solution may be to move the notification of the 
> _CheckpointFailureManager_ about a successful checkpoint to after the 
> completed checkpoint is stored in the _CompletedCheckpointStore_, and to provide the 
> exception to the _CheckpointFailureManager_ in the 
> {_}CheckpointCoordinator#{_}{_}[addCompletedCheckpointToStoreAndSubsumeOldest()|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1440]{_}
>  method.
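To illustrate the accounting the ticket refers to, here is a deliberately simplified sketch of continuous-failure counting against a tolerable limit. The class and method names are illustrative, not Flink's actual `CheckpointFailureManager` API; the ticket's point is that store exceptions must reach this counter at all:

```java
/**
 * Simplified sketch of continuous-failure accounting against a tolerable limit;
 * not Flink's actual CheckpointFailureManager API.
 */
public class FailureCounter {
    private final int tolerableFailures;
    private int continuousFailures;

    public FailureCounter(int tolerableFailures) {
        this.tolerableFailures = tolerableFailures;
    }

    /**
     * Must be reached for every failure, including exceptions thrown while adding
     * a checkpoint to the completed checkpoint store; returns true when the job should fail.
     */
    public boolean onCheckpointFailure() {
        continuousFailures++;
        return continuousFailures > tolerableFailures;
    }

    /** A successful checkpoint resets the continuous-failure count. */
    public void onCheckpointSuccess() {
        continuousFailures = 0;
    }

    public static void main(String[] args) {
        FailureCounter counter = new FailureCounter(2);
        counter.onCheckpointFailure();                     // 1st failure: tolerated
        counter.onCheckpointFailure();                     // 2nd failure: tolerated
        System.out.println(counter.onCheckpointFailure()); // prints "true": limit exceeded
    }
}
```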





[GitHub] [flink] flinkbot commented on pull request #22793: [FLINK-32347][checkpoint] Exceptions from the CompletedCheckpointStore are not registered by the CheckpointFailureManager.

2023-06-15 Thread via GitHub


flinkbot commented on PR #22793:
URL: https://github.com/apache/flink/pull/22793#issuecomment-1593052850

   
   ## CI report:
   
   * a5ce6c4cd7ffdaef79381f91bb1fcc8f5a1ac83e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] zentol commented on a diff in pull request #22794: [FLINK-25002][build] Add java 17 add-opens/add-exports JVM arguments

2023-06-15 Thread via GitHub


zentol commented on code in PR #22794:
URL: https://github.com/apache/flink/pull/22794#discussion_r1231022699


##
flink-end-to-end-tests/flink-queryable-state-test/pom.xml:
##
@@ -107,6 +107,9 @@



org.apache.flink.streaming.tests.queryablestate.QsStateClient
+   

+   
java.base/java.util
+   


Review Comment:
   This is a handy alternative in case you run a self-contained jar; you can 
encode the opens/exports in the jar's manifest.
   Unfortunately this only works if you use the `-jar` command; we can't 
use it for the distribution :cry: 
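For reference, the manifest route mentioned here uses the `Add-Opens`/`Add-Exports` main manifest attributes, which the `java -jar` launcher honors since JDK 9 but which are ignored for jars that are merely on the class path. A sketch reusing the main class and package from the pom snippet above:

```
Manifest-Version: 1.0
Main-Class: org.apache.flink.streaming.tests.queryablestate.QsStateClient
Add-Opens: java.base/java.util
```

The `Add-Opens` value is a space-separated list of `module/package` entries, mirroring repeated `--add-opens` command-line flags.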






[GitHub] [flink] zentol commented on a diff in pull request #22794: [FLINK-25002][build] Add java 17 add-opens/add-exports JVM arguments

2023-06-15 Thread via GitHub


zentol commented on code in PR #22794:
URL: https://github.com/apache/flink/pull/22794#discussion_r1231023922


##
pom.xml:
##
@@ -196,6 +196,12 @@ under the License.
  */

 
+   

Review Comment:
   > [-]{2}add
   
   This looks as weird as it does because you can't have double-dashes in XML 
comments :facepalm: 






[GitHub] [flink] flinkbot commented on pull request #22789: [FLINK-27146] [Filesystem] Migrate to Junit5 and Assertj

2023-06-15 Thread via GitHub


flinkbot commented on PR #22789:
URL: https://github.com/apache/flink/pull/22789#issuecomment-1592845381

   
   ## CI report:
   
   * 980d7e4495fe2aa0406674f860e16112f91b32d3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-32342) SQL Server container behaves unexpected while testing with several surefire forks

2023-06-15 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-32342:
---
Affects Version/s: jdbc-3.1.1

> SQL Server container behaves unexpected while testing with several surefire 
> forks
> -
>
> Key: FLINK-32342
> URL: https://issues.apache.org/jira/browse/FLINK-32342
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Affects Versions: jdbc-3.1.0, jdbc-3.1.1
>Reporter: Sergey Nuyanzin
>Priority: Major
>
> By default it inherits {{flink.forkCountITCase == 2}} from Flink.
> It looks like the SQL Server container has issues starting in several 
> surefire forks.
> Based on 
> [https://github.com/MartijnVisser/flink-connector-jdbc/actions/runs/5265349453/jobs/9517854060]
> the SQL Server container hangs during startup
> {noformat}
> "main" #1 prio=5 os_prio=0 cpu=1965.96ms elapsed=2568.93s 
> tid=0x7f84a0027000 nid=0x1c82 runnable  [0x7f84a41fc000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.SocketInputStream.socketRead0(java.base@11.0.19/Native 
> Method)
>   at 
> java.net.SocketInputStream.socketRead(java.base@11.0.19/SocketInputStream.java:115)
>   at 
> java.net.SocketInputStream.read(java.base@11.0.19/SocketInputStream.java:168)
>   at 
> java.net.SocketInputStream.read(java.base@11.0.19/SocketInputStream.java:140)
>   at 
> com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream.readInternal(IOBuffer.java:1192)
>   - locked <0x930e38f0> (a 
> com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream)
>   at 
> com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream.read(IOBuffer.java:1179)
>   at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:2307)
>   - locked <0x930e38f0> (a 
> com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream)
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.Prelogin(SQLServerConnection.java:3391)
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:3200)
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:2833)
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:2671)
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:1640)
>   at 
> com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:936)
>   at 
> org.testcontainers.containers.JdbcDatabaseContainer.createConnection(JdbcDatabaseContainer.java:253)
>   at 
> org.testcontainers.containers.JdbcDatabaseContainer.createConnection(JdbcDatabaseContainer.java:218)
>   at 
> org.testcontainers.containers.JdbcDatabaseContainer.waitUntilContainerStarted(JdbcDatabaseContainer.java:158)
>   at 
> org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:490)
>   at 
> org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:344)
>   at 
> org.testcontainers.containers.GenericContainer$$Lambda$532/0x0001003d1440.call(Unknown
>  Source)
>   at 
> org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
>   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:334)
>   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
>   at 
> org.apache.flink.connector.jdbc.testutils.databases.sqlserver.SqlServerDatabase$SqlServerContainer.start(SqlServerDatabase.java:81)
>   at 
> org.apache.flink.connector.jdbc.testutils.databases.sqlserver.SqlServerDatabase.startDatabase(SqlServerDatabase.java:52)
>   at 
> org.apache.flink.connector.jdbc.testutils.DatabaseExtension.beforeAll(DatabaseExtension.java:122)
> ...
> {noformat}
> As a workaround, setting {{flink.forkCountITCase == 1}} solves the issue.
> However, we need to find a better way to allow running tests with several forks.
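A sketch of the workaround in build form (the property name comes from the report; where exactly it is set in the connector build is an assumption):

```xml
<!-- pom.xml sketch: run IT cases in a single surefire fork -->
<properties>
  <flink.forkCountITCase>1</flink.forkCountITCase>
</properties>
```

Or ad hoc on the command line: `mvn verify -Dflink.forkCountITCase=1`.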



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31923) Connector weekly runs are only testing main branches instead of all supported branches

2023-06-15 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-31923:
---
Fix Version/s: elasticsearch-4.0.0
   cassandra-4.0.0
   aws-connector-4.2.0
   pulsar-4.0.1
   opensearch-1.1.0
   jdbc-3.2.0
   gcp-pubsub-3.0.2
   rabbitmq-3.0.2
   mongodb-1.1.0

> Connector weekly runs are only testing main branches instead of all supported 
> branches
> --
>
> Key: FLINK-31923
> URL: https://issues.apache.org/jira/browse/FLINK-31923
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Connectors / Common
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: elasticsearch-4.0.0, cassandra-4.0.0, 
> aws-connector-4.2.0, pulsar-4.0.1, opensearch-1.1.0, jdbc-3.2.0, 
> gcp-pubsub-3.0.2, rabbitmq-3.0.2, mongodb-1.1.0
>
>
> We have a weekly scheduled build for connectors. That's only triggered for 
> the {{main}} branches, because that's how the Github Actions {{schedule}} 
> works, per 
> https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule
> We can resolve that by having the Github Action flow checkout multiple 
> branches as a matrix to run these weekly tests.
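A rough sketch of such a matrix (workflow name, cron expression, and branch names are placeholders, not the actual connector setup):

```yaml
name: Weekly
on:
  schedule:
    - cron: "0 0 * * 0"   # every Sunday; placeholder schedule
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        branch: [main, v3.0, v3.1]   # placeholder branch names
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ matrix.branch }}
      # ... run the connector test suite against the checked-out branch
```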





[jira] [Assigned] (FLINK-32351) Introduce base interfaces for call procedure

2023-06-15 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia reassigned FLINK-32351:


Assignee: luoyuxia

> Introduce base interfaces for call procedure
> 
>
> Key: FLINK-32351
> URL: https://issues.apache.org/jira/browse/FLINK-32351
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>






[jira] [Commented] (FLINK-32357) Elasticsearch v3.0 won't compile when testing against Flink 1.17.1

2023-06-15 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733019#comment-17733019
 ] 

Martijn Visser commented on FLINK-32357:


[~snuyanzin] [~reta] Haven't we seen this issue before with Opensearch as well? 

> Elasticsearch v3.0 won't compile when testing against Flink 1.17.1
> --
>
> Key: FLINK-32357
> URL: https://issues.apache.org/jira/browse/FLINK-32357
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Priority: Major
>
> {code:java}
> [INFO] 
> 
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-elasticsearch-base: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
>  -> [Help 1]
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159





[jira] [Updated] (FLINK-32357) Elasticsearch v3.0 won't compile when testing against Flink 1.17.1

2023-06-15 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-32357:
---
Description: 
{code:java}
[INFO] 
Error:  Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) on 
project flink-connector-elasticsearch-base: Execution default-test of goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' failed 
to discover tests: 
com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
 -> [Help 1]
{code}

https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159



  was:
{code:java|
[INFO] 
Error:  Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) on 
project flink-connector-elasticsearch-base: Execution default-test of goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' failed 
to discover tests: 
com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
 -> [Help 1]
{code}

https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159




> Elasticsearch v3.0 won't compile when testing against Flink 1.17.1
> --
>
> Key: FLINK-32357
> URL: https://issues.apache.org/jira/browse/FLINK-32357
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Priority: Major
>
> {code:java}
> [INFO] 
> 
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-elasticsearch-base: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
>  -> [Help 1]
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159





[jira] [Created] (FLINK-32357) Elasticsearch v3.0 won't compile when testing against Flink 1.17.1

2023-06-15 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-32357:
--

 Summary: Elasticsearch v3.0 won't compile when testing against 
Flink 1.17.1
 Key: FLINK-32357
 URL: https://issues.apache.org/jira/browse/FLINK-32357
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Reporter: Martijn Visser


{code:java|
[INFO] 
Error:  Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) on 
project flink-connector-elasticsearch-base: Execution default-test of goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' failed 
to discover tests: 
com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
 -> [Help 1]
{code}

https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159







[jira] [Assigned] (FLINK-32347) Exceptions from the CompletedCheckpointStore are not registered by the CheckpointFailureManager

2023-06-15 Thread Stefan Richter (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Richter reassigned FLINK-32347:
--

Assignee: Stefan Richter

> Exceptions from the CompletedCheckpointStore are not registered by the 
> CheckpointFailureManager 
> 
>
> Key: FLINK-32347
> URL: https://issues.apache.org/jira/browse/FLINK-32347
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.3, 1.16.2, 1.17.1
>Reporter: Tigran Manasyan
>Assignee: Stefan Richter
>Priority: Major
>
> Currently if an error occurs while saving a completed checkpoint in the 
> {_}CompletedCheckpointStore{_}, _CheckpointCoordinator_ doesn't call 
> _CheckpointFailureManager_ to handle the error. As a result, errors from 
> _CompletedCheckpointStore_ don't increase the failed 
> checkpoints count and 
> _'execution.checkpointing.tolerable-failed-checkpoints'_ option does not 
> limit the number of errors of this kind in any way.
> Possible solution may be to move the notification of 
> _CheckpointFailureManager_ about successful checkpoint after storing 
> completed checkpoint in the _CompletedCheckpointStore_ and providing the 
> exception to the _CheckpointFailureManager_ in the 
> {_}CheckpointCoordinator#{_}{_}[addCompletedCheckpointToStoreAndSubsumeOldest()|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1440]{_}
>  method.





[jira] [Updated] (FLINK-32205) Support Flink client to access REST API through K8s Ingress

2023-06-15 Thread Weihua Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weihua Hu updated FLINK-32205:
--
Description: 
Currently, Flink Client can only connect to the server through the 
address:port, which is configured by the rest.address and rest.port.

But when running Flink on Kubernetes and exposing services through Ingress, the 
URL to access the Flink server should be: http://\{proxy address}/\{some prefix 
path to identify flink clusters}/\{flink real path}

I'd like to introduce an option named "rest.url-prefix" to support adding a 
prefix to URLs in RestClient.

  was:
Currently, Flink Client can only connect to the server via the address:port, 
which is configured by the rest.address and rest.port.

But in some other scenarios. Flink Server is run behind a proxy. Such as 
running on Kubernetes and exposing services through ingress. The URL to access 
the Flink server can be: http://\{proxy address}/\{some prefix path to identify 
flink clusters}/\{flink request path}

In FLINK-32030, the SQL Client gateway accepts URLs by using the '--endpoint'.

IMO, we should introduce an option, such as "rest.url-prefix", to make the 
Flink client work with URLs.


> Support Flink client to access REST API through K8s Ingress
> ---
>
> Key: FLINK-32205
> URL: https://issues.apache.org/jira/browse/FLINK-32205
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Affects Versions: 1.17.0
>Reporter: Weihua Hu
>Assignee: Weihua Hu
>Priority: Major
>
> Currently, Flink Client can only connect to the server through the 
> address:port, which is configured by the rest.address and rest.port.
> But when running Flink on Kubernetes and exposing services through Ingress, 
> the URL to access the Flink server should be: http://\{proxy address}/\{some 
> prefix path to identify flink clusters}/\{flink real path}
> I'd like to introduce an option named "rest.url-prefix" to support adding a 
> prefix to URLs in RestClient.





[jira] [Comment Edited] (FLINK-32316) Duplicated announceCombinedWatermark task maybe scheduled if jobmanager failover

2023-06-15 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732993#comment-17732993
 ] 

Rui Fan edited comment on FLINK-32316 at 6/15/23 11:56 AM:
---

Hi [~cailiuyang], thanks for the report.

As I understand, you reported 2 bugs. Could you create a new Jira for "subtask 
25 is not ready yet to receive events"?

And would you like to fix these bugs?


was (Author: fanrui):
Hi [~cailiuyang] , thanks for the report.

As I understand, you reported 2 bugs. Could you create a new Jira for "subtask 
25 is not ready yet to receive events"?

> Duplicated announceCombinedWatermark task maybe scheduled if jobmanager 
> failover
> 
>
> Key: FLINK-32316
> URL: https://issues.apache.org/jira/browse/FLINK-32316
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Cai Liuyang
>Priority: Major
>
> When we tried the SourceAlignment feature, we found that a duplicated 
> announceCombinedWatermark task will be scheduled after the JobManager fails 
> over and automatically recovers the job from a checkpoint.
> The reason, I think, is that we should schedule the announceCombinedWatermark 
> task in the SourceCoordinator::start function, not in the SourceCoordinator 
> constructor (see  
> [https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinator.java#L149]
>  ), because when the JobManager encounters a failover and automatically 
> recovers the job, it creates the SourceCoordinator twice:
>  * The first one: when the JobMaster is created, it creates the 
> DefaultExecutionGraph, which initializes the first sourceCoordinator but does 
> not start it.
>  * The second one: the JobMaster calls the 
> restoreLatestCheckpointedStateInternal method, which resets the old 
> sourceCoordinator and initializes a new one. Because the first 
> sourceCoordinator was never started (a SourceCoordinator is started before 
> SchedulerBase::startScheduling), the first SourceCoordinator will not be 
> fully closed.
>  
> We also found another problem: announceCombinedWatermark may throw an 
> exception (like "subtask 25 is not ready yet to receive events" when the 
> subtask is under failover), which causes the periodic task to never run again 
> (ThreadPoolExecutor will not reschedule a periodic task that throws an 
> exception). I think we should increase the robustness of the 
> announceCombinedWatermark function to cover this case (see 
> [https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinator.java#L199]
>  )
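The periodic-task pitfall described in that last paragraph is standard `ScheduledExecutorService` behavior and can be reproduced outside Flink. A minimal, self-contained sketch (not Flink code; names are illustrative) showing why wrapping the task body in a try/catch keeps the schedule alive:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PeriodicTaskDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService exec = Executors.newScheduledThreadPool(2);
        AtomicInteger unguarded = new AtomicInteger();
        AtomicInteger guarded = new AtomicInteger();

        // Unguarded task: the first uncaught exception silently cancels
        // all further executions of this periodic task.
        exec.scheduleAtFixedRate(() -> {
            unguarded.incrementAndGet();
            throw new RuntimeException("subtask not ready yet");
        }, 0, 10, TimeUnit.MILLISECONDS);

        // Guarded task: catching the exception inside the task body keeps
        // the executor rescheduling it on every period.
        exec.scheduleAtFixedRate(() -> {
            try {
                guarded.incrementAndGet();
                throw new RuntimeException("subtask not ready yet");
            } catch (RuntimeException e) {
                // log and carry on; the periodic schedule stays alive
            }
        }, 0, 10, TimeUnit.MILLISECONDS);

        Thread.sleep(300);
        exec.shutdownNow();
        System.out.println("unguarded runs: " + unguarded.get());
        System.out.println("guarded kept running: " + (guarded.get() > 1));
    }
}
```

Running this prints `unguarded runs: 1` and `guarded kept running: true`: once a fixed-rate task throws, `scheduleAtFixedRate` suppresses all subsequent runs, while the guarded variant keeps counting.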





[jira] [Resolved] (FLINK-32311) ZooKeeperLeaderElectionTest.testZooKeeperReelectionWithReplacement and DefaultLeaderElectionService.onGrantLeadership fell into dead lock

2023-06-15 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-32311.
---
Fix Version/s: 1.18.0
   Resolution: Fixed

master: fdfff096a3a513979fe7676ab2e3ab1100468494

> ZooKeeperLeaderElectionTest.testZooKeeperReelectionWithReplacement and 
> DefaultLeaderElectionService.onGrantLeadership fell into dead lock
> -
>
> Key: FLINK-32311
> URL: https://issues.apache.org/jira/browse/FLINK-32311
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.18.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49750=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8]
>  
> there are 2 threads one locked {{0xe3a8a1e8}} and waiting for 
> {{0xe3a89c18}}
> {noformat}
> 2023-06-08T01:18:54.5609123Z Jun 08 01:18:54 
> "ForkJoinPool-50-worker-25-EventThread" #956 daemon prio=5 os_prio=0 
> tid=0x7f9374253800 nid=0x6a4e waiting for monitor entry 
> [0x7f94b63e1000]
> 2023-06-08T01:18:54.5609820Z Jun 08 01:18:54java.lang.Thread.State: 
> BLOCKED (on object monitor)
> 2023-06-08T01:18:54.5610557Z Jun 08 01:18:54  at 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService.runInLeaderEventThread(DefaultLeaderElectionService.java:425)
> 2023-06-08T01:18:54.5611459Z Jun 08 01:18:54  - waiting to lock 
> <0xe3a89c18> (a java.lang.Object)
> 2023-06-08T01:18:54.5612198Z Jun 08 01:18:54  at 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService.onGrantLeadership(DefaultLeaderElectionService.java:300)
> 2023-06-08T01:18:54.5613110Z Jun 08 01:18:54  at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver.isLeader(ZooKeeperLeaderElectionDriver.java:153)
> 2023-06-08T01:18:54.5614070Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.leader.LeaderLatch$$Lambda$1649/586959400.accept(Unknown
>  Source)
> 2023-06-08T01:18:54.5615014Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.listen.MappingListenerManager.lambda$forEach$0(MappingListenerManager.java:92)
> 2023-06-08T01:18:54.5616259Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.listen.MappingListenerManager$$Lambda$1640/1393625763.run(Unknown
>  Source)
> 2023-06-08T01:18:54.5617137Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.listen.MappingListenerManager$$Lambda$1633/2012730699.execute(Unknown
>  Source)
> 2023-06-08T01:18:54.5618047Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.listen.MappingListenerManager.forEach(MappingListenerManager.java:89)
> 2023-06-08T01:18:54.5618994Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.listen.StandardListenerManager.forEach(StandardListenerManager.java:89)
> 2023-06-08T01:18:54.5620071Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.leader.LeaderLatch.setLeadership(LeaderLatch.java:711)
> 2023-06-08T01:18:54.5621198Z Jun 08 01:18:54  - locked <0xe3a8a1e8> 
> (a 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.leader.LeaderLatch)
> 2023-06-08T01:18:54.5622072Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.leader.LeaderLatch.checkLeadership(LeaderLatch.java:597)
> 2023-06-08T01:18:54.5622991Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.leader.LeaderLatch.access$600(LeaderLatch.java:64)
> 2023-06-08T01:18:54.5623988Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.leader.LeaderLatch$7.processResult(LeaderLatch.java:648)
> 2023-06-08T01:18:54.5624965Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.CuratorFrameworkImpl.sendToBackgroundCallback(CuratorFrameworkImpl.java:926)
> 2023-06-08T01:18:54.5626218Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:683)
> 2023-06-08T01:18:54.5627369Z Jun 08 01:18:54  at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
> 2023-06-08T01:18:54.5628353Z Jun 08 01:18:54  at 
> 

[GitHub] [flink] XComp merged pull request #22769: [FLINK-32311][runtime] Moves driver cleanup out of lock monitoring to prevent deadlocks

2023-06-15 Thread via GitHub


XComp merged PR #22769:
URL: https://github.com/apache/flink/pull/22769





[GitHub] [flink-shaded] MartijnVisser commented on a diff in pull request #119: Update Flink Shaded dependencies to the latest versions

2023-06-15 Thread via GitHub


MartijnVisser commented on code in PR #119:
URL: https://github.com/apache/flink-shaded/pull/119#discussion_r1230931941


##
flink-shaded-jackson-parent/flink-shaded-jackson-2/src/main/resources/META-INF/NOTICE:
##
@@ -6,11 +6,11 @@ The Apache Software Foundation (http://www.apache.org/).
 
 This project bundles the following dependencies under the Apache Software 
License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)
 
-- com.fasterxml.jackson.core:jackson-annotations:2.13.4
-- com.fasterxml.jackson.core:jackson-core:2.13.4
-- com.fasterxml.jackson.core:jackson-databind:2.13.4.2
-- com.fasterxml.jackson.dataformat:jackson-dataformat-csv:2.13.4
-- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.13.4
-- com.fasterxml.jackson.datatype:jackson-datatype-jdk8:2.13.4
-- com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.13.4
-- org.yaml:snakeyaml:1.31
+- com.fasterxml.jackson.core:jackson-annotations:2.14.2
+- com.fasterxml.jackson.core:jackson-core:2.14.2
+- com.fasterxml.jackson.core:jackson-databind:2.14.2
+- com.fasterxml.jackson.dataformat:jackson-dataformat-csv:2.14.2
+- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.14.2
+- com.fasterxml.jackson.datatype:jackson-datatype-jdk8:2.14.2
+- com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.14.2
+- org.yaml:snakeyaml:1.33

Review Comment:
   I first want to complete the upgrade of Flink to the newly released Flink 
Shaded before tackling another Flink shaded update.






[jira] [Commented] (FLINK-32357) Elasticsearch v3.0 won't compile when testing against Flink 1.17.1

2023-06-15 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733046#comment-17733046
 ] 

Andriy Redko commented on FLINK-32357:
--

Yes, we certainly had it!

> Elasticsearch v3.0 won't compile when testing against Flink 1.17.1
> --
>
> Key: FLINK-32357
> URL: https://issues.apache.org/jira/browse/FLINK-32357
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> {code:java}
> [INFO] 
> 
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-elasticsearch-base: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
>  -> [Help 1]
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159





[GitHub] [flink] zentol commented on pull request #22780: [FLINK-32338][build] Add FailsOnJava17 annotation

2023-06-15 Thread via GitHub


zentol commented on PR #22780:
URL: https://github.com/apache/flink/pull/22780#issuecomment-1592959070

   yes





[GitHub] [flink] zentol opened a new pull request, #22792: [FLINK-25000][build] Java 17 uses Scala 2.12.15

2023-06-15 Thread via GitHub


zentol opened a new pull request, #22792:
URL: https://github.com/apache/flink/pull/22792

   Based on #22780.
   
   Since Scala 2.12.7 doesn't compile, we will run the build with a later 
version of Scala when running on Java 17. This breaks various compatibility 
tests, which are now disabled in Java 17 builds, and I don't see a way around 
that.
   
   Note that compiled 2.12.7 Scala code does appear to run on Java 17.





[jira] [Updated] (FLINK-32347) Exceptions from the CompletedCheckpointStore are not registered by the CheckpointFailureManager

2023-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32347:
---
Labels: pull-request-available  (was: )

> Exceptions from the CompletedCheckpointStore are not registered by the 
> CheckpointFailureManager 
> 
>
> Key: FLINK-32347
> URL: https://issues.apache.org/jira/browse/FLINK-32347
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.3, 1.16.2, 1.17.1
>Reporter: Tigran Manasyan
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
>
> Currently if an error occurs while saving a completed checkpoint in the 
> {_}CompletedCheckpointStore{_}, _CheckpointCoordinator_ doesn't call 
> _CheckpointFailureManager_ to handle the error. As a result, errors from 
> _CompletedCheckpointStore_ don't increase the failed 
> checkpoints count and 
> _'execution.checkpointing.tolerable-failed-checkpoints'_ option does not 
> limit the number of errors of this kind in any way.
> Possible solution may be to move the notification of 
> _CheckpointFailureManager_ about successful checkpoint after storing 
> completed checkpoint in the _CompletedCheckpointStore_ and providing the 
> exception to the _CheckpointFailureManager_ in the 
> {_}CheckpointCoordinator#{_}{_}[addCompletedCheckpointToStoreAndSubsumeOldest()|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1440]{_}
>  method.





[jira] [Updated] (FLINK-25002) Setup required --add-opens/--add-exports

2023-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25002:
---
Labels: pull-request-available  (was: )

> Setup required --add-opens/--add-exports
> 
>
> Key: FLINK-25002
> URL: https://issues.apache.org/jira/browse/FLINK-25002
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Build System / CI, Documentation, Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> Java 17 actually enforces the encapsulation of the JDK (as opposed to Java 11, 
> which just printed warnings), requiring us to explicitly open/export any 
> package that we access illegally.
> The following is a list of opens/exports that I needed to get most tests to 
> pass, also with some comments which component needed them. Overall the 
> ClosureCleaner and FieldSerializer result in the most offenses, as they try 
> to access private fields.
> These properties need to be set _for all JVMs in which we run Flink_, 
> including surefire forks, other tests processes 
> (TestJvmProcess/TestProcessBuilder/Yarn) and the distribution.
> This needs some thought on how we can share this list across poms (surefire), 
> code (test processes / yarn) and the configuration (distribution).
> {code:xml}
> --add-exports java.base/sun.net.util=ALL-UNNAMED
> --add-exports java.rmi/sun.rmi.registry=ALL-UNNAMED
> --add-exports jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED
> --add-exports jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED
> --add-exports jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED
> --add-exports jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED
> --add-exports jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED
> --add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED
> --add-opens java.base/java.lang=ALL-UNNAMED
> --add-opens java.base/java.lang.invoke=ALL-UNNAMED
> --add-opens java.base/java.math=ALL-UNNAMED
> --add-opens java.base/java.net=ALL-UNNAMED
> --add-opens java.base/java.io=ALL-UNNAMED
> --add-opens java.base/java.lang.reflect=ALL-UNNAMED
> --add-opens java.sql/java.sql=ALL-UNNAMED
> --add-opens java.base/java.nio=ALL-UNNAMED
> --add-opens java.base/java.text=ALL-UNNAMED
> --add-opens java.base/java.time=ALL-UNNAMED
> --add-opens java.base/java.util=ALL-UNNAMED
> --add-opens java.base/java.util.concurrent=ALL-UNNAMED
> --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED
> --add-opens java.base/java.util.concurrent.locks=ALL-UNNAMED
> --add-opens java.base/java.util.stream=ALL-UNNAMED
> --add-opens java.base/sun.util.calendar=ALL-UNNAMED
> 
> {code}
> Additionally, the following JVM arguments must be supplied when running Maven:
> {code}
> export MAVEN_OPTS="\
> --add-exports jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED \
> --add-exports jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED \
> --add-exports jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED \
> --add-exports jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED \
> --add-exports jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED \
> --add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED"
> {code}





[GitHub] [flink] zentol opened a new pull request, #22794: [FLINK-25002][build] Add java 17 add-opens/add-exports JVM arguments

2023-06-15 Thread via GitHub


zentol opened a new pull request, #22794:
URL: https://github.com/apache/flink/pull/22794

   This PR sets up all required add-opens/add-exports statements for Maven, 
surefire, test-internal JVMs and flink-dist.
   
   If a module requires such a statement for surefire/testing then it must 
define a list of statements in the `surefire.module.config` property of the 
module's pom. This property is set on the test JVMs via surefire, but is also 
exposed as a system property to the test itself such that utils like the 
`TestJvmProcess` can forward them.
   I documented a hint for each statement as to why it is required.
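   As a rough illustration of the forwarding mechanism described above — the 
`surefire.module.config` property name comes from this PR description, but the 
helper below is a hypothetical sketch, not Flink's actual `TestJvmProcess` code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical helper showing how flags from the surefire.module.config
// system property could be appended to a child JVM's argument list.
public class JvmArgsForwarding {
    static List<String> jvmArgsWithModuleConfig(List<String> baseArgs) {
        String moduleConfig = System.getProperty("surefire.module.config", "").trim();
        List<String> args = new ArrayList<>(baseArgs);
        if (!moduleConfig.isEmpty()) {
            // The property holds whitespace-separated --add-opens/--add-exports flags.
            args.addAll(Arrays.asList(moduleConfig.split("\\s+")));
        }
        return args;
    }

    public static void main(String[] args) {
        System.setProperty("surefire.module.config",
                "--add-opens java.base/java.util=ALL-UNNAMED");
        System.out.println(jvmArgsWithModuleConfig(List.of("-Xmx1g")));
        // prints [-Xmx1g, --add-opens, java.base/java.util=ALL-UNNAMED]
    }
}
```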
   
   flink-dist is configured separately via `flink-conf.yaml`.
   
   The required statements were purely determined by running tests and checking 
what fails.
   "Completeness" was validated on my personal 
[CI](https://dev.azure.com/chesnay/flink/_build/results?buildId=3606=results);
 we may not be covering all tests since some only run on the apache branches. 
We'll just amend missing ones in the future if necessary.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31923) Connector weekly runs are only testing main branches instead of all supported branches

2023-06-15 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733002#comment-17733002
 ] 

Martijn Visser commented on FLINK-31923:


Fixed in:

apache/flink-connector-gcp-pubsub@main: 01e1b6fa0d830aea734201f63c1ee58874d5fa23
apache/flink-connector-rabbitmq@main: 6b70965d331c5f0f94bf08a7defcb7ecf62dbc5c
apache/flink-connector-jdbc@main: bd371d64be644a36b8000ed06c9afa9928cb8fc4
apache/flink-connector-pulsar@main: f463d3f707c8824bc61cce29c0efcdd4de94257e
apache/flink-connector-mongodb@main: fe65806824ae181831b3440f04cbd13bee9af95d
apache/flink-connector-opensearch@main: aa2d57ee7af212757815378ae43eec8536fcde1a
apache/flink-connector-cassandra@main: ac6cf71fb3ed73f2ca1f6414d6e22abc9c756529
apache/flink-connector-elasticsearch@main: be0f30428fec7871644509cd431e088f1d39f390

Todo:
apache/flink-connector-aws@main: 

> Connector weekly runs are only testing main branches instead of all supported 
> branches
> --
>
> Key: FLINK-31923
> URL: https://issues.apache.org/jira/browse/FLINK-31923
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Connectors / Common
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> We have a weekly scheduled build for connectors. That's only triggered for 
> the {{main}} branches, because that's how the Github Actions {{schedule}} 
> works, per 
> https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule
> We can resolve that by having the Github Action flow checkout multiple 
> branches as a matrix to run these weekly tests.





[GitHub] [flink] liuyongvs commented on pull request #22143: [FLINK-31377][table] Fix array_contains ArrayData.ElementGetter shoul…

2023-06-15 Thread via GitHub


liuyongvs commented on PR #22143:
URL: https://github.com/apache/flink/pull/22143#issuecomment-1592862067

   hi @snuyanzin do you have time to have a look again?





[jira] [Commented] (FLINK-21883) Introduce cooldown period into adaptive scheduler

2023-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733007#comment-17733007
 ] 

Robert Metzger commented on FLINK-21883:


There's a FLIP for this Jira: 
https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=255072164#content/view/255072164

> Introduce cooldown period into adaptive scheduler
> -
>
> Key: FLINK-21883
> URL: https://issues.apache.org/jira/browse/FLINK-21883
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Robert Metzger
>Assignee: Etienne Chauchot
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> reactive
>
> This is a follow up to reactive mode, introduced in FLINK-10407.
> Introduce a cooldown timeout, during which no further scaling actions are 
> performed, after a scaling action.
> Without such a cooldown timeout, it can happen with unfortunate timing, that 
> we are rescaling the job very frequently, because TaskManagers are not all 
> connecting at the same time.
> With the current implementation (1.13), this only applies to scaling up, but 
> this can also apply to scaling down with autoscaling support.
> With this implemented, users can define a cooldown timeout of say 5 minutes: 
> If taskmanagers are now slowly connecting one after another, we will only 
> rescale every 5 minutes.
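A minimal sketch of such a cooldown gate, under the assumption that rescale
attempts arrive as timestamped events; the names are illustrative, not the
AdaptiveScheduler's actual API:

```java
import java.time.Duration;

// Gate that allows at most one rescale per cooldown window.
public class RescaleCooldown {
    private final long cooldownMillis;
    private long lastRescaleMillis = Long.MIN_VALUE / 2; // allow the first rescale

    RescaleCooldown(Duration cooldown) {
        this.cooldownMillis = cooldown.toMillis();
    }

    /** Returns true and records the rescale if the cooldown has elapsed. */
    synchronized boolean tryRescale(long nowMillis) {
        if (nowMillis - lastRescaleMillis < cooldownMillis) {
            return false; // still cooling down, ignore this resource change
        }
        lastRescaleMillis = nowMillis;
        return true;
    }

    public static void main(String[] args) {
        RescaleCooldown gate = new RescaleCooldown(Duration.ofMinutes(5));
        System.out.println(gate.tryRescale(0));       // true  (first rescale)
        System.out.println(gate.tryRescale(60_000));  // false (TM joined 1 min later)
        System.out.println(gate.tryRescale(300_000)); // true  (5 min elapsed)
    }
}
```

With such a gate, TaskManagers that connect one after another within the window
trigger only one rescale per cooldown period instead of one per connection.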





[jira] [Closed] (FLINK-32343) Fix exception for jdbc tools

2023-06-15 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li closed FLINK-32343.
--
Fix Version/s: 1.18.0
 Assignee: Fang Yong
   Resolution: Fixed

Fixed via 
[https://github.com/apache/flink/commit/a521f5a98a7ae759631acce953d4717f14381fc4]
 (master)

[~zjureel] Thanks for the PR!

> Fix exception for jdbc tools
> 
>
> Key: FLINK-32343
> URL: https://issues.apache.org/jira/browse/FLINK-32343
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / JDBC
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Fix exception for jdbc tools





[jira] [Commented] (FLINK-32136) Pyflink gateway server launch fails when purelib != platlib

2023-06-15 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733017#comment-17733017
 ] 

Dian Fu commented on FLINK-32136:
-

[~wash] Good catch! This seems like a critical problem. Would you like to open 
a PR to fix this issue?

> Pyflink gateway server launch fails when purelib != platlib
> ---
>
> Key: FLINK-32136
> URL: https://issues.apache.org/jira/browse/FLINK-32136
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.3
>Reporter: William Ashley
>Priority: Major
>
> On distros where python's {{purelib}} is different than {{platlib}} (e.g. 
> Amazon Linux 2, but from my research it's all of the Redhat-based ones), you 
> wind up with components of packages being installed across two different 
> locations (e.g. {{/usr/local/lib/python3.7/site-packages/pyflink}} and 
> {{{}/usr/local/lib64/python3.7/site-packages/pyflink{}}}).
> {{_find_flink_home}} 
> [handles|https://github.com/apache/flink/blob/06688f345f6793a8964ec2175f44cda13c33/flink-python/pyflink/find_flink_home.py#L58C63-L60]
>  this, and in flink releases <= 1.13.2 its setting of the {{FLINK_LIB_DIR}} 
> environment variable was the one being used. However, from 1.13.3, a 
> refactoring of {{launch_gateway_server_process}} 
> ([1.13.2,|https://github.com/apache/flink/blob/release-1.13.2/flink-python/pyflink/pyflink_gateway_server.py#L200]
>  
> [1.13.3|https://github.com/apache/flink/blob/release-1.13.3/flink-python/pyflink/pyflink_gateway_server.py#L280])
>  re-ordered some method calls. {{{}prepare_environment_variable{}}}'s 
> [non-awareness|https://github.com/apache/flink/blob/release-1.13.3/flink-python/pyflink/pyflink_gateway_server.py#L94C67-L95]
>  of multiple homes and its setting of {{FLINK_LIB_DIR}} is now the one that 
> matters, and it points to the incorrect location.
> I've confirmed this problem on Amazon Linux 2 and 2023. The problem does not 
> exist on, for example, Ubuntu 20 and 22 (for which {{platlib}} == 
> {{{}purelib{}}}).
> Repro steps on Amazon Linux 2
> {quote}{{yum -y install python3 java-11}}
> {{pip3 install apache-flink==1.13.3}}
> {{python3 -c 'from pyflink.table import EnvironmentSettings ; 
> EnvironmentSettings.new_instance()'}}
> {quote}
> The resulting error is
> {quote}{{The flink-python jar is not found in the opt folder of the 
> FLINK_HOME: /usr/local/lib64/python3.7/site-packages/pyflink}}
> {{Error: Could not find or load main class 
> org.apache.flink.client.python.PythonGatewayServer}}
> {{Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.client.python.PythonGatewayServer}}
> {{Traceback (most recent call last):}}
> {{  File "<string>", line 1, in <module>}}
> {{  File 
> "/usr/local/lib64/python3.7/site-packages/pyflink/table/environment_settings.py",
>  line 214, in new_instance}}
> {{    return EnvironmentSettings.Builder()}}
> {{  File 
> "/usr/local/lib64/python3.7/site-packages/pyflink/table/environment_settings.py",
>  line 48, in __init__}}
> {{    gateway = get_gateway()}}
> {{  File "/usr/local/lib64/python3.7/site-packages/pyflink/java_gateway.py", 
> line 62, in get_gateway}}
> {{    _gateway = launch_gateway()}}
> {{  File "/usr/local/lib64/python3.7/site-packages/pyflink/java_gateway.py", 
> line 112, in launch_gateway}}
> {{    raise Exception("Java gateway process exited before sending its port 
> number")}}
> {{Exception: Java gateway process exited before sending its port number}}
> {quote}
> The flink home under /lib64/ does not contain the jar, but it is in the /lib/ 
> location
> {quote}{{bash-4.2# find /usr/local/lib64/python3.7/site-packages/pyflink 
> -name "flink-python*.jar"}}
> {{bash-4.2# find /usr/local/lib/python3.7/site-packages/pyflink -name 
> "flink-python*.jar"}}
> {{/usr/local/lib/python3.7/site-packages/pyflink/opt/flink-python_2.11-1.13.3.jar}}
> {quote}
>  
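The purelib/platlib split at the root of this report can be checked directly
with the standard library; the `candidate_flink_homes` helper below is a
hypothetical illustration of probing both locations, not pyflink's actual
lookup code:

```python
import os
import sysconfig

paths = sysconfig.get_paths()
purelib, platlib = paths["purelib"], paths["platlib"]

def candidate_flink_homes():
    # Probe both locations; dict.fromkeys de-duplicates while keeping order,
    # so distros where purelib == platlib yield at most one candidate.
    for base in dict.fromkeys([purelib, platlib]):
        home = os.path.join(base, "pyflink")
        if os.path.isdir(home):
            yield home

# On Debian/Ubuntu purelib == platlib; on Red Hat-based distros they differ
# (lib vs lib64), which is what splits the pyflink installation in two.
print("split layout" if purelib != platlib else "single layout")
```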





[jira] [Created] (FLINK-32351) Introduce base interfaces for call procedure

2023-06-15 Thread luoyuxia (Jira)
luoyuxia created FLINK-32351:


 Summary: Introduce base interfaces for call procedure
 Key: FLINK-32351
 URL: https://issues.apache.org/jira/browse/FLINK-32351
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: luoyuxia








[GitHub] [flink] flinkbot commented on pull request #22790: [hotfix][state] Fix incorrect comments of channel input state redistribution.

2023-06-15 Thread via GitHub


flinkbot commented on PR #22790:
URL: https://github.com/apache/flink/pull/22790#issuecomment-1592888738

   
   ## CI report:
   
   * fbab9c3cd3e147cb887fa92acea26b090cad1bb2 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Created] (FLINK-32353) Make Cassandra connector compatible with Flink 1.18

2023-06-15 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-32353:
--

 Summary: Make Cassandra connector compatible with Flink 1.18
 Key: FLINK-32353
 URL: https://issues.apache.org/jira/browse/FLINK-32353
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Cassandra
Reporter: Martijn Visser


The current Cassandra connector in {{main}} fails when testing against Flink 
1.18-SNAPSHOT

{code:java}
Error:  Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.1 s 
<<< FAILURE! - in org.apache.flink.architecture.rules.ITCaseRules
Error:  ITCaseRules.ITCASE_USE_MINICLUSTER  Time elapsed: 0.025 s  <<< FAILURE!
java.lang.AssertionError: 
Architecture Violation [Priority: MEDIUM] - Rule 'ITCASE tests should use a 
MiniCluster resource or extension' was violated (1 times):
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase does 
not satisfy: only one of the following predicates match:
* reside in a package 'org.apache.flink.runtime.*' and contain any fields that 
are static, final, and of type InternalMiniClusterExtension and annotated with 
@RegisterExtension
* reside outside of package 'org.apache.flink.runtime.*' and contain any fields 
that are static, final, and of type MiniClusterExtension and annotated with 
@RegisterExtension or are , and of type MiniClusterTestEnvironment and 
annotated with @TestEnv
* reside in a package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class InternalMiniClusterExtension
* reside outside of package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class MiniClusterExtension
 or contain any fields that are public, static, and of type 
MiniClusterWithClientResource and final and annotated with @ClassRule or 
contain any fields that is of type MiniClusterWithClientResource and public and 
final and not static and annotated with @Rule
{code}

https://github.com/apache/flink-connector-cassandra/actions/runs/5276835802/jobs/9544092571#step:13:811





[jira] [Updated] (FLINK-32354) Support to execute the call procedure operation

2023-06-15 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-32354:
-
Component/s: Table SQL / Runtime

> Support to execute the call procedure operation
> ---
>
> Key: FLINK-32354
> URL: https://issues.apache.org/jira/browse/FLINK-32354
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: luoyuxia
>Priority: Major
>






[jira] [Created] (FLINK-32355) Support to list procedure

2023-06-15 Thread luoyuxia (Jira)
luoyuxia created FLINK-32355:


 Summary: Support to list procedure
 Key: FLINK-32355
 URL: https://issues.apache.org/jira/browse/FLINK-32355
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: luoyuxia








[jira] [Updated] (FLINK-32352) Support to convert call procedure statement to corresponding operation

2023-06-15 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-32352:
-
Summary: Support to convert call procedure statement to corresponding 
operation  (was: Support convert call procedure statement to correpsonding 
operation)

> Support to convert call procedure statement to corresponding operation
> --
>
> Key: FLINK-32352
> URL: https://issues.apache.org/jira/browse/FLINK-32352
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: luoyuxia
>Priority: Major
>






[jira] [Assigned] (FLINK-32357) Elasticsearch v3.0 won't compile when testing against Flink 1.17.1

2023-06-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-32357:
---

Assignee: Sergey Nuyanzin

> Elasticsearch v3.0 won't compile when testing against Flink 1.17.1
> --
>
> Key: FLINK-32357
> URL: https://issues.apache.org/jira/browse/FLINK-32357
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> {code:java}
> [INFO] 
> 
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-elasticsearch-base: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
>  -> [Help 1]
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159





[GitHub] [flink] XComp commented on pull request #22601: [FLINK-31781][runtime] Introduces the contender ID to the LeaderElectionService interface

2023-06-15 Thread via GitHub


XComp commented on PR #22601:
URL: https://github.com/apache/flink/pull/22601#issuecomment-1592916372

   Rebased the PR after merging FLINK-31797 to master, fixing the conflict.





[GitHub] [flink] dianfu closed pull request #22782: [FLINK-32034][python] Updated GET_SITE_PACKAGES_SCRIPT to remove distutils dependency

2023-06-15 Thread via GitHub


dianfu closed pull request #22782: [FLINK-32034][python] Updated 
GET_SITE_PACKAGES_SCRIPT to remove distutils dependency
URL: https://github.com/apache/flink/pull/22782





[jira] [Commented] (FLINK-32347) Exceptions from the CompletedCheckpointStore are not registered by the CheckpointFailureManager

2023-06-15 Thread Tigran Manasyan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733034#comment-17733034
 ] 

Tigran Manasyan commented on FLINK-32347:
-

Hi, [~srichter] ! I've already fixed this issue in our local flink fork, so I 
can share it if you haven't started working on the task yet.

> Exceptions from the CompletedCheckpointStore are not registered by the 
> CheckpointFailureManager 
> 
>
> Key: FLINK-32347
> URL: https://issues.apache.org/jira/browse/FLINK-32347
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.3, 1.16.2, 1.17.1
>Reporter: Tigran Manasyan
>Assignee: Stefan Richter
>Priority: Major
>
> Currently if an error occurs while saving a completed checkpoint in the 
> {_}CompletedCheckpointStore{_}, _CheckpointCoordinator_ doesn't call 
> _CheckpointFailureManager_ to handle the error. As a result, errors from 
> _CompletedCheckpointStore_ don't increase the failed checkpoint count, and the 
> _'execution.checkpointing.tolerable-failed-checkpoints'_ option does not 
> limit the number of errors of this kind in any way.
> A possible solution may be to move the notification of 
> _CheckpointFailureManager_ about a successful checkpoint to after the completed 
> checkpoint has been stored in the _CompletedCheckpointStore_, and to provide the 
> exception to the _CheckpointFailureManager_ in the 
> {_}CheckpointCoordinator#{_}{_}[addCompletedCheckpointToStoreAndSubsumeOldest()|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1440]{_}
>  method.





[jira] [Closed] (FLINK-32034) Python's DistUtils is deprecated as of 3.10

2023-06-15 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-32034.
---
Fix Version/s: 1.18.0
   1.17.2
 Assignee: Colten Pilgreen
   Resolution: Fixed

Fixed in:
- master via 6ee1912e949cf290e5e1e620b1f4bae6552b428b
- release-1.17 via dd1dde377f775c43fcd95eed1bae91bf5fcfee2e

> Python's DistUtils is deprecated as of 3.10
> ---
>
> Key: FLINK-32034
> URL: https://issues.apache.org/jira/browse/FLINK-32034
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.0
> Environment: Kubernetes
> Java 11
> Python 3.10.9
>Reporter: Colten Pilgreen
>Assignee: Colten Pilgreen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.17.2
>
> Attachments: get_site_packages_path_script.py, 
> get_site_packages_path_script_shortened.py
>
>
> I recently went through an upgrade from 1.13 to 1.17, and along with that I 
> upgraded the Python version on our Flink session server. Almost everything 
> that is part of our workflow works, except for Python dependency management.
> After doing some digging, I found the reason is due to the DeprecationWarning 
> that is printed when trying to get the site packages path. The script is 
> GET_SITE_PACKAGES_PATH_SCRIPT and it is executed in the getSitePackagesPath 
> method in the PythonEnvironmentManagerUtils class. The issue is that the 
> DeprecationWarning is included in the PYTHONPATH environment variable, which 
> is passed to the beam runner. The deprecation warning breaks Python's ability 
> to find the site packages due to characters that are not allowed in 
> filesystem paths.
>  
> Example of the PYTHONPATH environment variable:
> PYTHONPATH == :1: DeprecationWarning: The distutils package is 
> deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 
> 632 for potential 
> alternatives:/tmp/python-dist-c63e1464-925c-4289-bb71-c6f50e83186f/python-requirements/lib/python3.10/site-packages
> HADOOP_CONF_DIR == /opt/flink/conf





[GitHub] [flink-shaded] MartijnVisser commented on a diff in pull request #119: Update Flink Shaded dependencies to the latest versions

2023-06-15 Thread via GitHub


MartijnVisser commented on code in PR #119:
URL: https://github.com/apache/flink-shaded/pull/119#discussion_r1230931941


##
flink-shaded-jackson-parent/flink-shaded-jackson-2/src/main/resources/META-INF/NOTICE:
##
@@ -6,11 +6,11 @@ The Apache Software Foundation (http://www.apache.org/).
 
 This project bundles the following dependencies under the Apache Software 
License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)
 
-- com.fasterxml.jackson.core:jackson-annotations:2.13.4
-- com.fasterxml.jackson.core:jackson-core:2.13.4
-- com.fasterxml.jackson.core:jackson-databind:2.13.4.2
-- com.fasterxml.jackson.dataformat:jackson-dataformat-csv:2.13.4
-- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.13.4
-- com.fasterxml.jackson.datatype:jackson-datatype-jdk8:2.13.4
-- com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.13.4
-- org.yaml:snakeyaml:1.31
+- com.fasterxml.jackson.core:jackson-annotations:2.14.2
+- com.fasterxml.jackson.core:jackson-core:2.14.2
+- com.fasterxml.jackson.core:jackson-databind:2.14.2
+- com.fasterxml.jackson.dataformat:jackson-dataformat-csv:2.14.2
+- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.14.2
+- com.fasterxml.jackson.datatype:jackson-datatype-jdk8:2.14.2
+- com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.14.2
+- org.yaml:snakeyaml:1.33

Review Comment:
   I first want to complete the upgrade of Flink to the new dependencies before 
tackling another Flink shaded update.






[jira] [Created] (FLINK-32358) CI may unintentionally use fallback akka loader

2023-06-15 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-32358:


 Summary: CI may unintentionally use fallback akka loader
 Key: FLINK-32358
 URL: https://issues.apache.org/jira/browse/FLINK-32358
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System / CI
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.18.0


We have a fallback akka loader, for developer convenience in the IDE, that is on 
the classpath of most modules. Depending on the order of jars on the classpath, 
it can happen that the fallback loader appears first, which we don't want 
because it slows down the build and creates noisy logs.

We can add a simple prioritization scheme to the rpc system loading to remedy 
that.
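The prioritization scheme could look roughly like the following — 
`RpcSystemLoader`/`getPriority` are illustrative names for the idea, not 
necessarily Flink's final API:

```java
import java.util.Comparator;
import java.util.List;

// Sketch: every discovered loader reports a priority and the highest one
// wins, so classpath order no longer determines which loader is used.
public class RpcSystemLoading {
    interface RpcSystemLoader {
        int getPriority(); // higher value = preferred
    }

    static RpcSystemLoader selectLoader(List<RpcSystemLoader> discovered) {
        return discovered.stream()
                .max(Comparator.comparingInt(RpcSystemLoader::getPriority))
                .orElseThrow(() -> new IllegalStateException("no RpcSystemLoader found"));
    }

    public static void main(String[] args) {
        RpcSystemLoader fallback = () -> Integer.MIN_VALUE; // fallback akka loader
        RpcSystemLoader primary = () -> 0;                  // main akka loader
        // Even when the fallback appears first in discovery order, the main
        // loader is selected:
        System.out.println(selectLoader(List.of(fallback, primary)) == primary); // true
    }
}
```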





[GitHub] [flink-connector-jdbc] eskabetxe commented on a diff in pull request #49: [FLINK-32068] connector jdbc support clickhouse

2023-06-15 Thread via GitHub


eskabetxe commented on code in PR #49:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/49#discussion_r1230973150


##
flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/testutils/tables/TableBuilder.java:
##
@@ -51,4 +52,8 @@ private static TableField createField(
 String name, TableField.DbType dbType, DataType dataType, boolean 
pkField) {
 return new TableField(name, dataType, dbType, pkField);
 }
+
+public static ClickhouseTableRow ckTableRow(String name, TableField... 
fields) {

Review Comment:
   ck? do you mean ch?



##
flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/testutils/tables/TableBase.java:
##
@@ -291,4 +291,12 @@ protected <T> T getNullable(ResultSet rs, FunctionWithException<ResultSet, T, SQLException> supplier)
 protected <T> T getNullable(ResultSet rs, T value) throws SQLException {
 return rs.wasNull() ? null : value;
 }
+
+public TableField[] getFields() {

Review Comment:
   this is not used, could be removed






[GitHub] [flink] zentol opened a new pull request, #22791: [FLINK-32358][rpc] Add rpc system loading priority

2023-06-15 Thread via GitHub


zentol opened a new pull request, #22791:
URL: https://github.com/apache/flink/pull/22791

   This PR ensures that, if the main and fallback akka rpc system loaders are 
both on the classpath, the main loader gets priority. This avoids cases on CI 
where the fallback loader was used because it appeared first on the classpath.





[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #614: [FLINK-32057] Support 1.18 rescale api for applying parallelism overrides

2023-06-15 Thread via GitHub


gyfora commented on code in PR #614:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/614#discussion_r1230988179


##
examples/autoscaling/Dockerfile:
##
@@ -16,5 +16,5 @@
 # limitations under the License.
 

 
-FROM flink:1.17
+FROM ghcr.io/apache/flink-docker:1.18-SNAPSHOT-scala_2.12-java11-debian

Review Comment:
   This is not part of the test but the e2e uses the same snapshot image. I 
don't really have a better idea on how to test this with 1.18. It makes sense 
to catch any problems early even at the expense of some test failures.






[GitHub] [flink] StefanRRichter opened a new pull request, #22793: [FLINK-32347][checkpoint] Exceptions from the CompletedCheckpointStore are not registered by the CheckpointFailureManager.

2023-06-15 Thread via GitHub


StefanRRichter opened a new pull request, #22793:
URL: https://github.com/apache/flink/pull/22793

   
   
   ## What is the purpose of the change
   
   Currently if an error occurs while saving a completed checkpoint in the 
CompletedCheckpointStore, CheckpointCoordinator doesn't call 
CheckpointFailureManager to handle the error. As a result, errors from 
CompletedCheckpointStore don't increase the failed checkpoint count, and the 
'execution.checkpointing.tolerable-failed-checkpoints' option does not limit 
the number of errors of this kind in any way.
   
   This PR corrects the behavior as follows:
   - Only report checkpointSuccess AFTER the completed checkpoint is stored in 
the `CompletedCheckpointStore`.
   - Call handleCheckpointException for exceptions thrown while storing the 
completed checkpoint. 
   
   
   ## Brief change log
   
   - Moved the call to `checkpointSuccess` to after the checkpoint is stored.
   - Replaced the stats tracking with proper reporting to the error manager for 
checkpoint storage exceptions.
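   The reordering described above can be sketched as follows. This is a hedged, 
self-contained illustration only — all names (`Store`, `complete`, the event 
strings) are hypothetical stand-ins, not Flink's actual classes: the point is 
simply that the checkpoint is persisted first, and success or failure is 
reported only based on the outcome of storing it.

```java
import java.util.ArrayList;
import java.util.List;

public class StoreThenReportSketch {

    /** Minimal stand-in for the completed-checkpoint store (hypothetical). */
    interface Store {
        void store(String checkpointId) throws Exception;
    }

    /**
     * Completes a checkpoint: persist it first, and only report success
     * afterwards; storage failures are reported to the failure handler
     * instead of being silently tracked in stats.
     */
    static String complete(String checkpointId, Store store, List<String> events) {
        try {
            store.store(checkpointId);             // 1) store the completed checkpoint first
        } catch (Exception e) {
            events.add("failure:" + checkpointId); // 2) counted as a failed checkpoint
            return "failed";
        }
        events.add("success:" + checkpointId);     // 3) success reported only after storing
        return "completed";
    }

    public static void main(String[] args) {
        List<String> events = new ArrayList<>();
        System.out.println(complete("cp-1", id -> { }, events));
        System.out.println(complete("cp-2", id -> { throw new Exception("store failed"); }, events));
        System.out.println(events);
    }
}
```

   With this ordering, a storage exception can never follow an already-reported 
success, which is what allows the tolerable-failed-checkpoints limit to apply.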
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   - Added  
`CheckpointCoordinatorTest::testExceptionInStoringCompletedCheckpointIsReportedToFailureManager`
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[GitHub] [flink] XComp commented on pull request #22656: [FLINK-32180][runtime] Moves error handling from DefaultMultipleComponentLeaderElectionService into the MultipleComponentLeaderElectionDriver i

2023-06-15 Thread via GitHub


XComp commented on PR #22656:
URL: https://github.com/apache/flink/pull/22656#issuecomment-1593046291

   > Why wouldn't the service be able to forward errors to the contender? Is it 
because the driver is doing stuff internally that the service doesn't even know 
about?
   
   The k8s implementation forwards errors that were caught by the watcher to 
the contender independently of the calls the service sends to the driver.
   
   > How will the driver get access to a contender-specific failure handler?
   
   The driver got access to the error handler during instantiation. I followed 
that approach for the `DefaultMultipleComponentLeaderElectionService`.
   
   > I'm wondering if rather there should be a way for the driver to report 
errors to the service, which forwards it to the contender.
   
   I could do that. Adding an `onError` method to the 
`MultipleComponentLeaderElectionService` through the `Listener` interface feels 
like the more consistent way of passing on the error.
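
   That forwarding path can be sketched minimally as below. The interface and 
class names here are hypothetical illustrations, not Flink's actual API: the 
driver only holds a `Listener`, while the service implements it and forwards 
any reported error on to its contender.

```java
public class ErrorForwardingSketch {

    /** Callback through which a driver reports fatal errors (hypothetical name). */
    interface Listener {
        void onError(Throwable t);
    }

    /** The contender that ultimately has to handle the error (hypothetical name). */
    interface Contender {
        void handleError(Throwable t);
    }

    /** The service implements the listener and forwards errors to its contender. */
    static class Service implements Listener {
        private final Contender contender;

        Service(Contender contender) {
            this.contender = contender;
        }

        @Override
        public void onError(Throwable t) {
            // One consistent forwarding path: driver -> service -> contender.
            contender.handleError(t);
        }
    }

    public static void main(String[] args) {
        StringBuilder seen = new StringBuilder();
        Listener driverSideHandle = new Service(t -> seen.append(t.getMessage()));
        // e.g. an error caught by a watcher, reported through the listener:
        driverSideHandle.onError(new RuntimeException("watcher failed"));
        System.out.println(seen);
    }
}
```

   The design benefit is that the driver never needs direct access to a 
contender-specific failure handler; it only knows the listener it was given.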





[jira] [Updated] (FLINK-32180) Move error handling into MultipleComponentLeaderElectionDriverFactory

2023-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32180:
---
Labels: pull-request-available  (was: )

> Move error handling into MultipleComponentLeaderElectionDriverFactory
> -
>
> Key: FLINK-32180
> URL: https://issues.apache.org/jira/browse/FLINK-32180
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> {{LeaderElectionDriverFactory}} allows passing the error handling which can 
> then be used to pass in an error handler that  forwards any error to the 
> contender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #22794: [FLINK-25002][build] Add java 17 add-opens/add-exports JVM arguments

2023-06-15 Thread via GitHub


flinkbot commented on PR #22794:
URL: https://github.com/apache/flink/pull/22794#issuecomment-1593081332

   
   ## CI report:
   
   * ad4624f02ec3bb8b3dab9d121abd88d4027f42e8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-32136) Pyflink gateway server launch fails when purelib != platlib

2023-06-15 Thread William Ashley (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733064#comment-17733064
 ] 

William Ashley commented on FLINK-32136:


I doubt I'll have the time anytime soon to study the code enough to make such a 
change.

> Pyflink gateway server launch fails when purelib != platlib
> ---
>
> Key: FLINK-32136
> URL: https://issues.apache.org/jira/browse/FLINK-32136
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.3
>Reporter: William Ashley
>Priority: Major
>
> On distros where python's {{purelib}} is different than {{platlib}} (e.g. 
> Amazon Linux 2, but from my research it's all of the Redhat-based ones), you 
> wind up with components of packages being installed across two different 
> locations (e.g. {{/usr/local/lib/python3.7/site-packages/pyflink}} and 
> {{{}/usr/local/lib64/python3.7/site-packages/pyflink{}}}).
> {{_find_flink_home}} 
> [handles|https://github.com/apache/flink/blob/06688f345f6793a8964ec2175f44cda13c33/flink-python/pyflink/find_flink_home.py#L58C63-L60]
>  this, and in flink releases <= 1.13.2 its setting of the {{FLINK_LIB_DIR}} 
> environment variable was the one being used. However, from 1.13.3, a 
> refactoring of {{launch_gateway_server_process}} 
> ([1.13.2,|https://github.com/apache/flink/blob/release-1.13.2/flink-python/pyflink/pyflink_gateway_server.py#L200]
>  
> [1.13.3|https://github.com/apache/flink/blob/release-1.13.3/flink-python/pyflink/pyflink_gateway_server.py#L280])
>  re-ordered some method calls. {{{}prepare_environment_variable{}}}'s 
> [non-awareness|https://github.com/apache/flink/blob/release-1.13.3/flink-python/pyflink/pyflink_gateway_server.py#L94C67-L95]
>  of multiple homes and setting of {{FLINK_LIB_DIR}} now is the one that 
> matters, and it is the incorrect location.
> I've confirmed this problem on Amazon Linux 2 and 2023. The problem does not 
> exist on, for example, Ubuntu 20 and 22 (for which {{platlib}} == 
> {{{}purelib{}}}).
> Repro steps on Amazon Linux 2
> {quote}{{yum -y install python3 java-11}}
> {{pip3 install apache-flink==1.13.3}}
> {{python3 -c 'from pyflink.table import EnvironmentSettings ; 
> EnvironmentSettings.new_instance()'}}
> {quote}
> The resulting error is
> {quote}{{The flink-python jar is not found in the opt folder of the 
> FLINK_HOME: /usr/local/lib64/python3.7/site-packages/pyflink}}
> {{Error: Could not find or load main class 
> org.apache.flink.client.python.PythonGatewayServer}}
> {{Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.client.python.PythonGatewayServer}}
> {{Traceback (most recent call last):}}
> {{  File "", line 1, in }}
> {{  File 
> "/usr/local/lib64/python3.7/site-packages/pyflink/table/environment_settings.py",
>  line 214, in new_instance}}
> {{    return EnvironmentSettings.Builder()}}
> {{  File 
> "/usr/local/lib64/python3.7/site-packages/pyflink/table/environment_settings.py",
>  line 48, in __init__}}
> {{    gateway = get_gateway()}}
> {{  File "/usr/local/lib64/python3.7/site-packages/pyflink/java_gateway.py", 
> line 62, in get_gateway}}
> {{    _gateway = launch_gateway()}}
> {{  File "/usr/local/lib64/python3.7/site-packages/pyflink/java_gateway.py", 
> line 112, in launch_gateway}}
> {{    raise Exception("Java gateway process exited before sending its port 
> number")}}
> {{Exception: Java gateway process exited before sending its port number}}
> {quote}
> The flink home under /lib64/ does not contain the jar, but it is in the /lib/ 
> location
> {quote}{{bash-4.2# find /usr/local/lib64/python3.7/site-packages/pyflink 
> -name "flink-python*.jar"}}
> {{bash-4.2# find /usr/local/lib/python3.7/site-packages/pyflink -name 
> "flink-python*.jar"}}
> {{/usr/local/lib/python3.7/site-packages/pyflink/opt/flink-python_2.11-1.13.3.jar}}
> {quote}
>  





[GitHub] [flink] kottmann opened a new pull request, #22789: [FLINK-27146] [Filesystem] Migrate to Junit5 and Assertj

2023-06-15 Thread via GitHub


kottmann opened a new pull request, #22789:
URL: https://github.com/apache/flink/pull/22789

   
   ## What is the purpose of the change
   
   This migrates the Filesystem test code to use Junit5 and Assertj. 
   
   ## Brief change log
   
   - Migrated the test code to use Junit5 according to the migration 
documentation
   - Also migrated to Assertj (this is in a separate commit and could be 
dropped or resubmitted in a separate PR)
   
   ## Verifying this change
   
   This change will not have any impact on non-test code since only test code 
was changed. The change can potentially break tests, but the majority of 
changes are simple and don't touch the logic.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable 
   





[jira] [Commented] (FLINK-28460) Flink SQL supports atomic CREATE TABLE AS SELECT(CTAS)

2023-06-15 Thread tartarus (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732998#comment-17732998
 ] 

tartarus commented on FLINK-28460:
--

Resolved in FLIP-305, for details see 
https://issues.apache.org/jira/browse/FLINK-32349

> Flink SQL supports atomic CREATE TABLE AS SELECT(CTAS)
> --
>
> Key: FLINK-28460
> URL: https://issues.apache.org/jira/browse/FLINK-28460
> Project: Flink
>  Issue Type: Sub-task
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>
> Enable support for atomic CTAS in stream and batch mode via option.
> Actively delete the created target table when the job fails or is 
> cancelled; this requires that the Catalog can be serialized directly via 
> Java serialization.





[jira] [Updated] (FLINK-27146) [JUnit5 Migration] Module: flink-filesystems

2023-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27146:
---
Labels: pull-request-available  (was: )

> [JUnit5 Migration] Module: flink-filesystems
> 
>
> Key: FLINK-27146
> URL: https://issues.apache.org/jira/browse/FLINK-27146
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems, Tests
>Reporter: Chesnay Schepler
>Assignee: Jörn Kottmann
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Closed] (FLINK-32340) NPE in K8s operator which breaks current and subsequent deployments

2023-06-15 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-32340.
--
Resolution: Duplicate

This is fixed already in FLINK-32111

> NPE in K8s operator which breaks current and subsequent deployments
> ---
>
> Key: FLINK-32340
> URL: https://issues.apache.org/jira/browse/FLINK-32340
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.5.0
>Reporter: Sergii Nazarov
>Priority: Critical
>
> Prerequisites:
>  * Deployment via Apache Flink Kubernetes operator with version 1.5.0
>  * Deployment using FlinkDeployment spec
>  * Upgrade mode - savepoint
>  * Configuration property 
> "kubernetes.operator.job.upgrade.last-state-fallback.enabled" is true
>  * Flink version 1.15.4
>  
> Steps to reproduce:
>  # Deploy an app
>  # You can wait till the app creates a checkpoint (it doesn't change anything 
> even if "kubernetes.operator.job.upgrade.last-state-fallback.enabled" is true)
>  # Deploy a new version of the app with an error that causes throwing an 
> exception from the main method of the app
> Exception which causes operator NPE
> {code:none}
> o.a.f.k.o.o.JobStatusObserver [ERROR][flink-apps/myApp] Job 
> 0d78a62fe581b047510e28f26393a7ce failed with error: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Consumer does not exist
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
>   at 
> org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap.runApplicationEntryPoint(ApplicationDispatcherBootstrap.java:291)
>   at 
> org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap.lambda$runApplicationAsync$2(ApplicationDispatcherBootstrap.java:244)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
>   at 
> org.apache.flink.runtime.concurrent.akka.ActorSystemScheduledExecutorAdapter$ScheduledFutureTask.run(ActorSystemScheduledExecutorAdapter.java:171)
>   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$withContextClassLoader$0(ClassLoadingUtils.java:41)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
>   at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
>   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown 
> Source)
>   at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
>   at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
>   at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown 
> Source)
> {code}
> NPE in K8s operator
> {code:none}
> o.a.f.k.o.l.AuditUtils [INFO ][flink-apps/myApp] >>> Event  
> | Info| JOBSTATUSCHANGED | Job status changed from RECONCILING to FAILED
> o.a.f.k.o.o.SavepointObserver [ERROR][flink-apps/myApp] Could 
> not observe latest savepoint information.
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.service.CheckpointHistoryWrapper.getInProgressCheckpoint(CheckpointHistoryWrapper.java:60)
>   at 
> org.apache.flink.kubernetes.operator.service.AbstractFlinkService.getCheckpointInfo(AbstractFlinkService.java:564)
>   at 
> org.apache.flink.kubernetes.operator.service.AbstractFlinkService.getLastCheckpoint(AbstractFlinkService.java:520)
>   at 
> org.apache.flink.kubernetes.operator.observer.SavepointObserver.observeLatestSavepoint(SavepointObserver.java:209)
>   at 
> org.apache.flink.kubernetes.operator.observer.SavepointObserver.observeSavepointStatus(SavepointObserver.java:73)
>   at 
> org.apache.flink.kubernetes.operator.observer.deployment.ApplicationObserver.observeFlinkCluster(ApplicationObserver.java:61)
>   at 
> org.apache.flink.kubernetes.operator.observer.deployment.AbstractFlinkDeploymentObserver.observeInternal(AbstractFlinkDeploymentObserver.java:73)
>   at 
> org.apache.flink.kubernetes.operator.observer.AbstractFlinkResourceObserver.observe(AbstractFlinkResourceObserver.java:53)
>   at 
> 

[jira] [Assigned] (FLINK-32354) Support to execute the call procedure operation

2023-06-15 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia reassigned FLINK-32354:


Assignee: luoyuxia

> Support to execute the call procedure operation
> ---
>
> Key: FLINK-32354
> URL: https://issues.apache.org/jira/browse/FLINK-32354
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>






[jira] [Assigned] (FLINK-32352) Support to convert call procedure statement to corresponding operation

2023-06-15 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia reassigned FLINK-32352:


Assignee: luoyuxia

> Support to convert call procedure statement to corresponding operation
> --
>
> Key: FLINK-32352
> URL: https://issues.apache.org/jira/browse/FLINK-32352
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>






[jira] [Created] (FLINK-32356) Add document for calling procedure

2023-06-15 Thread luoyuxia (Jira)
luoyuxia created FLINK-32356:


 Summary: Add document for calling procedure
 Key: FLINK-32356
 URL: https://issues.apache.org/jira/browse/FLINK-32356
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: luoyuxia








[jira] [Commented] (FLINK-32353) Make Cassandra connector compatible with Flink 1.18

2023-06-15 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733018#comment-17733018
 ] 

Martijn Visser commented on FLINK-32353:


[~echauchot] Want to take a look?

> Make Cassandra connector compatible with Flink 1.18
> ---
>
> Key: FLINK-32353
> URL: https://issues.apache.org/jira/browse/FLINK-32353
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Cassandra
>Reporter: Martijn Visser
>Priority: Major
>
> The current Cassandra connector in {{main}} fails when testing against Flink 
> 1.18-SNAPSHOT
> {code:java}
> Error:  Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.1 s 
> <<< FAILURE! - in org.apache.flink.architecture.rules.ITCaseRules
> Error:  ITCaseRules.ITCASE_USE_MINICLUSTER  Time elapsed: 0.025 s  <<< 
> FAILURE!
> java.lang.AssertionError: 
> Architecture Violation [Priority: MEDIUM] - Rule 'ITCASE tests should use a 
> MiniCluster resource or extension' was violated (1 times):
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase does 
> not satisfy: only one of the following predicates match:
> * reside in a package 'org.apache.flink.runtime.*' and contain any fields 
> that are static, final, and of type InternalMiniClusterExtension and 
> annotated with @RegisterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and contain any 
> fields that are static, final, and of type MiniClusterExtension and annotated 
> with @RegisterExtension or are , and of type MiniClusterTestEnvironment and 
> annotated with @TestEnv
> * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
> @ExtendWith with class InternalMiniClusterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and is annotated 
> with @ExtendWith with class MiniClusterExtension
>  or contain any fields that are public, static, and of type 
> MiniClusterWithClientResource and final and annotated with @ClassRule or 
> contain any fields that is of type MiniClusterWithClientResource and public 
> and final and not static and annotated with @Rule
> {code}
> https://github.com/apache/flink-connector-cassandra/actions/runs/5276835802/jobs/9544092571#step:13:811





[GitHub] [flink-docker] zentol commented on pull request #158: [ADD-SYMLINKS] Add symlinks for FLINK_VERSION to FLINK_RELEASE jars

2023-06-15 Thread via GitHub


zentol commented on PR #158:
URL: https://github.com/apache/flink-docker/pull/158#issuecomment-1592912361

   > But then I might ask why flink:1.16 is even an option for a base image if 
we still have to worry about patch version changes?
   
   That's an interesting line of thought. To be honest I believe the answer to 
that question is that we just inherited that and didn't think much about it.





[GitHub] [flink-connector-jdbc] MartijnVisser commented on pull request #42: [FLINK-31820] Support data source sub-database and sub-table

2023-06-15 Thread via GitHub


MartijnVisser commented on PR #42:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/42#issuecomment-1592942865

   No consensus yet in ticket, closing PR





[GitHub] [flink-connector-jdbc] MartijnVisser closed pull request #42: [FLINK-31820] Support data source sub-database and sub-table

2023-06-15 Thread via GitHub


MartijnVisser closed pull request #42: [FLINK-31820] Support data source 
sub-database and sub-table
URL: https://github.com/apache/flink-connector-jdbc/pull/42





[GitHub] [flink-connector-jdbc] MartijnVisser commented on pull request #49: [FLINK-32068] connector jdbc support clickhouse

2023-06-15 Thread via GitHub


MartijnVisser commented on PR #49:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/49#issuecomment-1592940408

   > What other tasks are needed for this PR to merge
   
   Please mark review items as resolved if you have actually resolved them. 
Right now, it looks like there are still open items. 





[GitHub] [flink-connector-jdbc] MartijnVisser closed pull request #44: Fix duplicated data on retrying

2023-06-15 Thread via GitHub


MartijnVisser closed pull request #44: Fix duplicated data on retrying
URL: https://github.com/apache/flink-connector-jdbc/pull/44




