[jira] [Commented] (FLINK-35358) Breaking change when loading artifacts

2024-05-16 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846911#comment-17846911
 ] 

Ferenc Csaky commented on FLINK-35358:
--

PRs are opened against master and 1.19; the CI runs were successful for both.

> Breaking change when loading artifacts
> --
>
> Key: FLINK-35358
> URL: https://issues.apache.org/jira/browse/FLINK-35358
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, flink-docker
>Affects Versions: 1.19.0
>Reporter: Rasmus Thygesen
>Priority: Not a Priority
>  Labels: pull-request-available
> Fix For: 1.19.1
>
>
> We have been using the following code snippet in our Dockerfiles for running 
> a Flink job in application mode
>  
> {code:java}
> FROM flink:1.18.1-scala_2.12-java17
> COPY --from=build /app/target/my-job*.jar 
> /opt/flink/usrlib/artifacts/my-job.jar
> USER flink {code}
>  
> This has been working since at least around Flink 1.14, but the 1.19 update 
> has broken our Dockerfiles. The fix is to move the JAR file one directory 
> level up, so the code snippet becomes
>  
> {code:java}
> FROM flink:1.19.0-scala_2.12-java17
> COPY --from=build /app/target/my-job*.jar /opt/flink/usrlib/my-job.jar
> USER flink  {code}
>  
> We have not spent too much time looking into what the cause is, but we get 
> the stack trace
>  
> {code:java}
> myjob-jobmanager-1   | org.apache.flink.util.FlinkException: Could not load 
> the provided entrypoint class.
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.DefaultPackagedProgramRetriever.getPackagedProgram(DefaultPackagedProgramRetriever.java:230)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.getPackagedProgram(StandaloneApplicationClusterEntryPoint.java:149)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.lambda$main$0(StandaloneApplicationClusterEntryPoint.java:90)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:89)
>  [flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   | Caused by: 
> org.apache.flink.client.program.ProgramInvocationException: The program's 
> entry point class 'my.company.job.MyJob' was not found in the jar file.
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.loadMainClass(PackagedProgram.java:481)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:153)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:65)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram$Builder.build(PackagedProgram.java:691)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.DefaultPackagedProgramRetriever.getPackagedProgram(DefaultPackagedProgramRetriever.java:228)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     ... 4 more
> myjob-jobmanager-1   | Caused by: java.lang.ClassNotFoundException: 
> my.company.job.MyJob
> myjob-jobmanager-1   |     at java.net.URLClassLoader.findClass(Unknown 
> Source) ~[?:?]
> myjob-jobmanager-1   |     at java.lang.ClassLoader.loadClass(Unknown Source) 
> ~[?:?]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:67)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:74)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:51)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at java.lang.ClassLoader.loadClass(Unknown Source) 
> ~[?:?]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:197)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at java.lang.Class.forName0(Native Method) ~[?:?]
> myjob-jobmanager-1   |     at java.lang.Class.forName(Unknown Source) ~[?:?]
> myjob-jobmanager-1   |     at 
> 

[jira] [Comment Edited] (FLINK-35358) Breaking change when loading artifacts

2024-05-15 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846647#comment-17846647
 ] 

Ferenc Csaky edited comment on FLINK-35358 at 5/15/24 2:51 PM:
---

This change was unintentional. Restricting JARs to be put only at the root level 
of {{usrlib}} seems an unnecessary limitation. I think your analysis is on point, 
but AFAIK whatever we do, this can only be fixed in a Flink patch version; 
[~martijnvisser] correct me if I am wrong.

The fix is easy; I can open a PR with the fix and add unit tests to cover 
this case by EOD today. So feel free to assign this to me.


was (Author: ferenc-csaky):
This change was unintentional. Restricting JARs to be put only 1 level under 
{{usrlib}} seems an unnecessary limitation. I think your analysis is on point, 
but AFAIK whatever we do, this can only be fixed in a Flink patch version; 
[~martijnvisser] correct me if I am wrong.

The fix is easy; I can open a PR with the fix and add unit tests to cover 
this case by EOD today. So feel free to assign this to me.

> Breaking change when loading artifacts
> --
>
> Key: FLINK-35358
> URL: https://issues.apache.org/jira/browse/FLINK-35358
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission, flink-docker
>Affects Versions: 1.19.0
>Reporter: Rasmus Thygesen
>Priority: Not a Priority
>  Labels: pull-request-available
>
> We have been using the following code snippet in our Dockerfiles for running 
> a Flink job in application mode
>  
> {code:java}
> FROM flink:1.18.1-scala_2.12-java17
> COPY --from=build /app/target/my-job*.jar 
> /opt/flink/usrlib/artifacts/my-job.jar
> USER flink {code}
>  
> This has been working since at least around Flink 1.14, but the 1.19 update 
> has broken our Dockerfiles. The fix is to move the JAR file one directory 
> level up, so the code snippet becomes
>  
> {code:java}
> FROM flink:1.19.0-scala_2.12-java17
> COPY --from=build /app/target/my-job*.jar /opt/flink/usrlib/my-job.jar
> USER flink  {code}
>  
> We have not spent too much time looking into what the cause is, but we get 
> the stack trace
>  
> {code:java}
> myjob-jobmanager-1   | org.apache.flink.util.FlinkException: Could not load 
> the provided entrypoint class.
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.DefaultPackagedProgramRetriever.getPackagedProgram(DefaultPackagedProgramRetriever.java:230)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.getPackagedProgram(StandaloneApplicationClusterEntryPoint.java:149)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.lambda$main$0(StandaloneApplicationClusterEntryPoint.java:90)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:89)
>  [flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   | Caused by: 
> org.apache.flink.client.program.ProgramInvocationException: The program's 
> entry point class 'my.company.job.MyJob' was not found in the jar file.
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.loadMainClass(PackagedProgram.java:481)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:153)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:65)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram$Builder.build(PackagedProgram.java:691)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.DefaultPackagedProgramRetriever.getPackagedProgram(DefaultPackagedProgramRetriever.java:228)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     ... 4 more
> myjob-jobmanager-1   | Caused by: java.lang.ClassNotFoundException: 
> my.company.job.MyJob
> myjob-jobmanager-1   |     at java.net.URLClassLoader.findClass(Unknown 
> Source) ~[?:?]
> myjob-jobmanager-1   |     at java.lang.ClassLoader.loadClass(Unknown Source) 
> ~[?:?]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:67)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> 

[jira] [Commented] (FLINK-35358) Breaking change when loading artifacts

2024-05-15 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846647#comment-17846647
 ] 

Ferenc Csaky commented on FLINK-35358:
--

This change was unintentional. Restricting JARs to be put only 1 level under 
{{usrlib}} seems an unnecessary limitation. I think your analysis is on point, 
but AFAIK whatever we do, this can only be fixed in a Flink patch version; 
[~martijnvisser] correct me if I am wrong.

The fix is easy; I can open a PR with the fix and add unit tests to cover 
this case by EOD today. So feel free to assign this to me.

> Breaking change when loading artifacts
> --
>
> Key: FLINK-35358
> URL: https://issues.apache.org/jira/browse/FLINK-35358
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission, flink-docker
>Affects Versions: 1.19.0
>Reporter: Rasmus Thygesen
>Priority: Not a Priority
>
> We have been using the following code snippet in our Dockerfiles for running 
> a Flink job in application mode
>  
> {code:java}
> FROM flink:1.18.1-scala_2.12-java17
> COPY --from=build /app/target/my-job*.jar 
> /opt/flink/usrlib/artifacts/my-job.jar
> USER flink {code}
>  
> This has been working since at least around Flink 1.14, but the 1.19 update 
> has broken our Dockerfiles. The fix is to move the JAR file one directory 
> level up, so the code snippet becomes
>  
> {code:java}
> FROM flink:1.19.0-scala_2.12-java17
> COPY --from=build /app/target/my-job*.jar /opt/flink/usrlib/my-job.jar
> USER flink  {code}
>  
> We have not spent too much time looking into what the cause is, but we get 
> the stack trace
>  
> {code:java}
> myjob-jobmanager-1   | org.apache.flink.util.FlinkException: Could not load 
> the provided entrypoint class.
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.DefaultPackagedProgramRetriever.getPackagedProgram(DefaultPackagedProgramRetriever.java:230)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.getPackagedProgram(StandaloneApplicationClusterEntryPoint.java:149)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.lambda$main$0(StandaloneApplicationClusterEntryPoint.java:90)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:89)
>  [flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   | Caused by: 
> org.apache.flink.client.program.ProgramInvocationException: The program's 
> entry point class 'my.company.job.MyJob' was not found in the jar file.
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.loadMainClass(PackagedProgram.java:481)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:153)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram.<init>(PackagedProgram.java:65)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.PackagedProgram$Builder.build(PackagedProgram.java:691)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.client.program.DefaultPackagedProgramRetriever.getPackagedProgram(DefaultPackagedProgramRetriever.java:228)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     ... 4 more
> myjob-jobmanager-1   | Caused by: java.lang.ClassNotFoundException: 
> my.company.job.MyJob
> myjob-jobmanager-1   |     at java.net.URLClassLoader.findClass(Unknown 
> Source) ~[?:?]
> myjob-jobmanager-1   |     at java.lang.ClassLoader.loadClass(Unknown Source) 
> ~[?:?]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:67)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:74)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:51)
>  ~[flink-dist-1.19.0.jar:1.19.0]
> myjob-jobmanager-1   |     at java.lang.ClassLoader.loadClass(Unknown Source) 
> ~[?:?]
> myjob-jobmanager-1   |     at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:197)

[jira] [Updated] (FLINK-34931) Update Kudu DataStream connector to use Sink V2

2024-05-14 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-34931:
-
Summary: Update Kudu DataStream connector to use Sink V2  (was: Update Kudu 
connector DataStream Sink implementation)

> Update Kudu DataStream connector to use Sink V2
> ---
>
> Key: FLINK-34931
> URL: https://issues.apache.org/jira/browse/FLINK-34931
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kudu
>Reporter: Ferenc Csaky
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> Update the DataSource API classes to use the current interfaces.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35136) Release flink-connector-hbase vX.X.X for Flink 1.19

2024-04-17 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838101#comment-17838101
 ] 

Ferenc Csaky commented on FLINK-35136:
--

Currently we have a v3.0.0 HBase connector release that supports Flink 1.16 
and 1.17. There were some added features, but since then HBase 1.x support [got 
removed|https://github.com/apache/flink-connector-hbase/commit/9cbc109b368e26770891e31750c370d295d629e9]
 from the connector, so my understanding is that the new version should be v4.0.0, 
supporting Flink 1.18 and 1.19. WDYT?

I can take care of the necessary changes and create a PR, feel free to assign 
the ticket to me.

> Release flink-connector-hbase vX.X.X for Flink 1.19
> ---
>
> Key: FLINK-35136
> URL: https://issues.apache.org/jira/browse/FLINK-35136
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-hbase



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35114) Remove old Table API implementations

2024-04-15 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-35114:
-
Component/s: Connectors / Kudu

> Remove old Table API implementations
> 
>
> Key: FLINK-35114
> URL: https://issues.apache.org/jira/browse/FLINK-35114
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kudu
>Reporter: Ferenc Csaky
>Priority: Major
>
> At the moment, the connector has both the old Table sink/source/catalog 
> implementations and the matching Dynamic... implementations.
> Going forward, the deprecated old implementations should be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34928) FLIP-439: Externalize Kudu Connector from Bahir

2024-04-15 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-34928:
-
Component/s: Connectors / Kudu

> FLIP-439: Externalize Kudu Connector from Bahir
> ---
>
> Key: FLINK-34928
> URL: https://issues.apache.org/jira/browse/FLINK-34928
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kudu
>Reporter: Ferenc Csaky
>Priority: Major
>
> Umbrella issue for: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-439%3A+Externalize+Kudu+Connector+from+Bahir



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35114) Remove old Table API implementations

2024-04-15 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-35114:


 Summary: Remove old Table API implementations
 Key: FLINK-35114
 URL: https://issues.apache.org/jira/browse/FLINK-35114
 Project: Flink
  Issue Type: Sub-task
Reporter: Ferenc Csaky


At the moment, the connector has both the old Table sink/source/catalog 
implementations and the matching Dynamic... implementations.

Going forward, the deprecated old implementations should be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34931) Update Kudu connector DataStream Sink implementation

2024-04-15 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-34931:
-
Summary: Update Kudu connector DataStream Sink implementation  (was: Update 
Kudu connector DataStream Source/Sink implementation)

> Update Kudu connector DataStream Sink implementation
> 
>
> Key: FLINK-34931
> URL: https://issues.apache.org/jira/browse/FLINK-34931
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kudu
>Reporter: Ferenc Csaky
>Priority: Major
>
> Update the DataSource API classes to use the current interfaces.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34751) RestClusterClient APIs doesn't work with running Flink application on YARN

2024-03-26 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830877#comment-17830877
 ] 

Ferenc Csaky commented on FLINK-34751:
--

Hi! Would you be able to share how to reproduce the issue? My understanding 
from the description is that {{flink list}} fails to list secured YARN AM 
deployments. Is that correct?

> RestClusterClient APIs doesn't work with running Flink application on YARN
> --
>
> Key: FLINK-34751
> URL: https://issues.apache.org/jira/browse/FLINK-34751
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>
> Apache YARN uses web proxy in Resource Manager to expose the endpoints 
> available through the AM process (in this case RestServerEndpoint that run as 
> part of AM). Note: this is in the context of running Flink cluster in YARN 
> application mode.
> For example, in the case of RestClusterClient#listJobs:
> Standalone {{listJobs}} makes the request as 
> {{https://<host>:<port>/v1/jobs/overview}}
> On YARN, the same request has to be proxied as 
> {{https://<rm-host>:<port>/proxy/<app-id>/v1/jobs/overview?proxyapproved=true}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34931) Update Kudu connector DataStream Source/Sink implementation

2024-03-25 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-34931:


 Summary: Update Kudu connector DataStream Source/Sink 
implementation
 Key: FLINK-34931
 URL: https://issues.apache.org/jira/browse/FLINK-34931
 Project: Flink
  Issue Type: Sub-task
Reporter: Ferenc Csaky


Update the DataSource API classes to use the current interfaces.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34930) Move existing Kudu connector code from Bahir repo to dedicated repo

2024-03-25 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-34930:
-
Description: 
Move the existing Kudu connector code from the Bahir [1] repository to the 
dedicated connector repo.

Code should be moved only with the necessary changes (bump version, change 
groupId, integrate into the common connector CI), and we should state explicitly 
that the code was forked from the Bahir repo.

> Move existing Kudu connector code from Bahir repo to dedicated repo
> ---
>
> Key: FLINK-34930
> URL: https://issues.apache.org/jira/browse/FLINK-34930
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Ferenc Csaky
>Priority: Major
>
> Move the existing Kudu connector code from the Bahir [1] repository to the 
> dedicated connector repo.
> Code should be moved only with the necessary changes (bump version, change 
> groupId, integrate into the common connector CI), and we should state 
> explicitly that the code was forked from the Bahir repo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34930) Move existing Kudu connector code from Bahir repo to dedicated repo

2024-03-25 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-34930:


 Summary: Move existing Kudu connector code from Bahir repo to 
dedicated repo
 Key: FLINK-34930
 URL: https://issues.apache.org/jira/browse/FLINK-34930
 Project: Flink
  Issue Type: Sub-task
Reporter: Ferenc Csaky






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34929) Create "flink-connector-kudu" repository

2024-03-25 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-34929:


 Summary: Create "flink-connector-kudu" repository
 Key: FLINK-34929
 URL: https://issues.apache.org/jira/browse/FLINK-34929
 Project: Flink
  Issue Type: Sub-task
Reporter: Ferenc Csaky


We should create a "flink-connector-kudu" repository under the "apache" GitHub 
organization.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34928) FLIP-439: Externalize Kudu Connector from Bahir

2024-03-25 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-34928:


 Summary: FLIP-439: Externalize Kudu Connector from Bahir
 Key: FLINK-34928
 URL: https://issues.apache.org/jira/browse/FLINK-34928
 Project: Flink
  Issue Type: Improvement
Reporter: Ferenc Csaky


Umbrella issue for: 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-439%3A+Externalize+Kudu+Connector+from+Bahir



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34478) NoSuchMethod error for "flink cancel $jobId" via Command Line

2024-03-05 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823647#comment-17823647
 ] 

Ferenc Csaky commented on FLINK-34478:
--

Just tried this with the {{TopSpeedWindowing.jar}} streaming example; it worked 
fine for me on an M1 MacBook. Are you trying with a custom JAR? On what system?

> NoSuchMethod error for "flink cancel $jobId" via Command Line
> -
>
> Key: FLINK-34478
> URL: https://issues.apache.org/jira/browse/FLINK-34478
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.18.1
>Reporter: Liu Yi
>Priority: Major
>
> On 1.18.1 standalone mode (launched by "\{flink}/bin/start-cluster.sh"), I 
> hit "java.lang.NoSuchMethodError: 'boolean 
> org.apache.commons.cli.CommandLine.hasOption(org.apache.commons.cli.Option)'" 
> when trying to cancel a job submitted via the UI by executing the command 
> line "{*}\{flink}/bin/flink cancel _$jobId_{*}". Clicking the "Cancel Job" 
> link in the UI cancels the job just fine, and the "flink run" command line 
> also works fine.
> Has anyone seen same/similar behavior?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34580) Job run via REST erases "pipeline.classpaths" config

2024-03-05 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-34580:


 Summary: Job run via REST erases "pipeline.classpaths" config
 Key: FLINK-34580
 URL: https://issues.apache.org/jira/browse/FLINK-34580
 Project: Flink
  Issue Type: Bug
  Components: Runtime / REST
Affects Versions: 1.18.1, 1.17.2, 1.19.0
Reporter: Ferenc Csaky
 Fix For: 1.20.0


The 
[{{JarHandlerContext#applyToConfiguration}}|https://github.com/apache/flink/blob/e0b6c121eaf7aeb2974a45d199e452b022f07d29/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/utils/JarHandlerUtils.java#L134]
 creates a {{PackagedProgram}} and then overwrites the {{pipeline.jars}} and 
{{pipeline.classpaths}} values according to that newly created 
{{PackagedProgram}}.

However, that [{{PackagedProgram}} 
init|https://github.com/apache/flink/blob/e0b6c121eaf7aeb2974a45d199e452b022f07d29/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/utils/JarHandlerUtils.java#L185]
 does not set {{classpaths}} at all, so it will always overwrite the effective 
configuration with an empty value, even if it had something previously.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34506) Do not copy "file://" schemed artifact in standalone application modes

2024-02-23 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-34506:


 Summary: Do not copy "file://" schemed artifact in standalone 
application modes
 Key: FLINK-34506
 URL: https://issues.apache.org/jira/browse/FLINK-34506
 Project: Flink
  Issue Type: Bug
  Components: Client / Job Submission
Affects Versions: 1.19.0
Reporter: Ferenc Csaky


In standalone application mode, if an artifact is passed via a path without a 
scheme prefix, the file will be copied to `user.artifacts.base-dir`, although it 
should not be, as it is accessible locally.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34388) Release Testing: Verify FLINK-28915 Support artifact fetching in Standalone and native K8s application mode

2024-02-22 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819967#comment-17819967
 ] 

Ferenc Csaky commented on FLINK-34388:
--

Thank you [~Yu Chen] for the thorough testing!

{quote}By the way, it also works with an absolute path(without `local://`), but 
it will copy the file to the `user.artifacts.base-dir`. I'm not sure whether 
it's expected (In my opinion, it was a local file, maybe we don't need to copy 
that cc Ferenc Csaky ){quote}

This is a good point: with the current implementation, a simple absolute path 
without a prefix is handled as remote, because there is a strict check for the 
{{local}} scheme, so it ignores {{file}}, which will be present when a path 
without a prefix is passed. I agree with your opinion and would consider this a 
bug. I would not consider this a major thing that should block the 1.19 
release, but it would be an easy fix, so I can prepare a PR today.

> Release Testing: Verify FLINK-28915 Support artifact fetching in Standalone 
> and native K8s application mode
> ---
>
> Key: FLINK-34388
> URL: https://issues.apache.org/jira/browse/FLINK-34388
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Affects Versions: 1.19.0
>Reporter: Ferenc Csaky
>Assignee: Yu Chen
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This ticket covers testing FLINK-28915. More details and the added docs are 
> accessible on the [PR|https://github.com/apache/flink/pull/24065]
> Test 1: Pass {{local://}} job jar in standalone mode, check the artifacts are 
> not actually copied.
> Test 2: Pass multiple artifacts in standalone mode.
> Test 3: Pass a non-local job jar in native k8s mode. [1]
> Test 4: Pass additional remote artifacts in native k8s mode.
> Available config options: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#artifact-fetching
> [1] Custom docker image build instructions: 
> https://github.com/apache/flink-docker/tree/dev-master
> Note: The docker build instructions also contain a web server example that 
> can be used to serve HTTP artifacts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-21 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819354#comment-17819354
 ] 

Ferenc Csaky commented on FLINK-34399:
--

Executed the following test scenarios:
 # Simple datagen table:
{code:sql}
CREATE TABLE IF NOT EXISTS `datagen_table` (
  `col_str` STRING,
  `col_int` INT,
  `col_ts` TIMESTAMP(3),
  WATERMARK FOR `col_ts` AS col_ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'datagen'
);
{code}
{{asSerializableString()}} result:
{code:sql}
SELECT `col_str`, `col_int`, `col_ts` FROM 
`default_catalog`.`default_database`.`datagen_table`
{code}

 # Aggregate view:
{code:sql}
CREATE TABLE IF NOT EXISTS `txn_gen` (
  `id` INT,
  `amount` INT,
  `timestamp` TIMESTAMP(3),
  WATERMARK FOR `timestamp` AS `timestamp` - INTERVAL '1' SECOND
) WITH (
  'connector' = 'datagen',
  'fields.id.max' = '5',
  'fields.id.min' = '1',
  'rows-per-second' = '1'
);

CREATE VIEW IF NOT EXISTS `aggr_five_sec` AS SELECT
  `id`,
  COUNT(`id`) AS `txn_count`,
  TUMBLE_ROWTIME(`timestamp`, INTERVAL '5' SECOND) AS `w_row_time`
FROM `txn_gen`
GROUP BY `id`, TUMBLE(`timestamp`, INTERVAL '5' SECOND)
{code}
{{asSerializableString()}} result:
{code:sql}
SELECT `id`, `txn_count`, `w_row_time` FROM 
`default_catalog`.`default_database`.`aggr_five_sec`
{code}

 # Join view:
{code:sql}
CREATE TEMPORARY TABLE IF NOT EXISTS `location_updates` (
  `character_id` INT,
  `location` STRING,
  `proctime` AS PROCTIME()
)
WITH (
  'connector' = 'faker', 
  'fields.character_id.expression' = '#{number.numberBetween ''0'',''100''}',
  'fields.location.expression' = '#{harry_potter.location}'
);

CREATE TEMPORARY TABLE IF NOT EXISTS `characters` (
  `character_id` INT,
  `name` STRING
)
WITH (
  'connector' = 'faker', 
  'fields.character_id.expression' = '#{number.numberBetween ''0'',''100''}',
  'fields.name.expression' = '#{harry_potter.characters}'
);

CREATE TEMPORARY VIEW IF NOT EXISTS `joined` AS SELECT 
  c.character_id,
  l.location,
  c.name
FROM location_updates AS l
JOIN characters FOR SYSTEM_TIME AS OF proctime AS c
ON l.character_id = c.character_id;
{code}
{{asSerializableString()}} result:
{code:sql}
SELECT `character_id`, `location`, `name` FROM 
`default_catalog`.`default_database`.`joined`
{code}

Job execution went fine, and all tests gave the expected results. I also checked 
the related PRs for this feature; it is very well covered with unit tests, so I 
think it looks good and works as desired.

> Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable
> -
>
> Key: FLINK-34399
> URL: https://issues.apache.org/jira/browse/FLINK-34399
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: release-testing
>
> Test suggestions:
> 1. Write a few Table API programs.
> 2. Call Table.getQueryOperation#asSerializableString, manually verify the 
> produced SQL query
> 3. Check the produced SQL query is runnable and produces the same results as 
> the Table API program:
> {code}
> Table table = tEnv.from("a") ...
> String sqlQuery = table.getQueryOperation().asSerializableString();
> //verify the sqlQuery is runnable
> tEnv.sqlQuery(sqlQuery).execute().collect()
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-20 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818745#comment-17818745
 ] 

Ferenc Csaky commented on FLINK-34399:
--

Thank you [~lincoln.86xy], I'll proceed with it in the next 1-2 days.

> Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable
> -
>
> Key: FLINK-34399
> URL: https://issues.apache.org/jira/browse/FLINK-34399
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: release-testing
>
> Test suggestions:
> 1. Write a few Table API programs.
> 2. Call Table.getQueryOperation#asSerializableString, manually verify the 
> produced SQL query
> 3. Check the produced SQL query is runnable and produces the same results as 
> the Table API program:
> {code}
> Table table = tEnv.from("a") ...
> String sqlQuery = table.getQueryOperation().asSerializableString();
> //verify the sqlQuery is runnable
> tEnv.sqlQuery(sqlQuery).execute().collect()
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-13 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816940#comment-17816940
 ] 

Ferenc Csaky commented on FLINK-34399:
--

Hi! I can take this task, would someone be able to help and assign it to me? 
Thanks!

> Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable
> -
>
> Key: FLINK-34399
> URL: https://issues.apache.org/jira/browse/FLINK-34399
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: release-testing
>
> Test suggestions:
> 1. Write a few Table API programs.
> 2. Call Table.getQueryOperation#asSerializableString, manually verify the 
> produced SQL query
> 3. Check the produced SQL query is runnable and produces the same results as 
> the Table API program:
> {code}
> Table table = tEnv.from("a") ...
> String sqlQuery = table.getQueryOperation().asSerializableString();
> //verify the sqlQuery is runnable
> tEnv.sqlQuery(sqlQuery).execute().collect()
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34411) "Wordcount on Docker test (custom fs plugin)" timed out with some strange issue while setting the test up

2024-02-08 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815746#comment-17815746
 ] 

Ferenc Csaky commented on FLINK-34411:
--

The current [dev-1.19|https://github.com/apache/flink-docker/tree/dev-1.19] 
branch in the flink-docker repo was cloned from the {{master}} branch, hence 
that script does not exist. I also bumped into this today.

> "Wordcount on Docker test (custom fs plugin)" timed out with some strange 
> issue while setting the test up
> -
>
> Key: FLINK-34411
> URL: https://issues.apache.org/jira/browse/FLINK-34411
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57380=logs=bea52777-eaf8-5663-8482-18fbc3630e81=43ba8ce7-ebbf-57cd-9163-444305d74117=5802
> {code}
> Feb 07 15:22:39 
> ==
> Feb 07 15:22:39 Running 'Wordcount on Docker test (custom fs plugin)'
> Feb 07 15:22:39 
> ==
> Feb 07 15:22:39 TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
> Feb 07 15:22:40 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Feb 07 15:22:40 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Feb 07 15:22:41 Docker version 24.0.7, build afdd53b
> Feb 07 15:22:44 docker-compose version 1.29.2, build 5becea4c
> Feb 07 15:22:44 Starting fileserver for Flink distribution
> Feb 07 15:22:44 ~/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin 
> ~/work/1/s
> Feb 07 15:23:07 ~/work/1/s
> Feb 07 15:23:07 
> ~/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
>  ~/work/1/s
> Feb 07 15:23:07 Preparing Dockeriles
> Feb 07 15:23:07 Executing command: git clone 
> https://github.com/apache/flink-docker.git --branch dev-1.19 --single-branch
> Cloning into 'flink-docker'...
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: 
> line 65: ./add-custom.sh: No such file or directory
> Feb 07 15:23:07 Building images
> ERROR: unable to prepare context: path "dev/test_docker_embedded_job-ubuntu" 
> not found
> Feb 07 15:23:09 ~/work/1/s
> Feb 07 15:23:09 Command: build_image test_docker_embedded_job failed. 
> Retrying...
> Feb 07 15:23:14 Starting fileserver for Flink distribution
> Feb 07 15:23:14 ~/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin 
> ~/work/1/s
> Feb 07 15:23:36 ~/work/1/s
> Feb 07 15:23:36 
> ~/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
>  ~/work/1/s
> Feb 07 15:23:36 Preparing Dockeriles
> Feb 07 15:23:36 Executing command: git clone 
> https://github.com/apache/flink-docker.git --branch dev-1.19 --single-branch
> fatal: destination path 'flink-docker' already exists and is not an empty 
> directory.
> Feb 07 15:23:36 Retry 1/5 exited 128, retrying in 1 seconds...
> Traceback (most recent call last):
>   File 
> "/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/python3_fileserver.py",
>  line 26, in <module>
> httpd = socketserver.TCPServer(("", ), handler)
>   File "/usr/lib/python3.8/socketserver.py", line 452, in __init__
> self.server_bind()
>   File "/usr/lib/python3.8/socketserver.py", line 466, in server_bind
> self.socket.bind(self.server_address)
> OSError: [Errno 98] Address already in use
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34413) Drop support for HBase v1

2024-02-08 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815744#comment-17815744
 ] 

Ferenc Csaky commented on FLINK-34413:
--

Thanks for creating this ticket, feel free to assign it to me, I will cover 
this.

> Drop support for HBase v1
> -
>
> Key: FLINK-34413
> URL: https://issues.apache.org/jira/browse/FLINK-34413
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / HBase
>Reporter: Martijn Visser
>Priority: Major
>
> As discussed in 
> https://lists.apache.org/thread/6663052dmfnqm8wvqoxx9k8jwcshg1zq 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34388) Release Testing: Verify FLINK-28915 Support artifact fetching in Standalone and native K8s application mode

2024-02-06 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-34388:
-
Description: 
This ticket covers testing FLINK-28915. More details and the added docs are 
accessible on the [PR|https://github.com/apache/flink/pull/24065]

Test 1: Pass {{local://}} job jar in standalone mode, check the artifacts are 
not actually copied.
Test 2: Pass multiple artifacts in standalone mode.
Test 3: Pass a non-local job jar in native k8s mode. [1]
Test 4: Pass additional remote artifacts in native k8s mode.

Available config options: 
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#artifact-fetching

[1] Custom docker image build instructions: 
https://github.com/apache/flink-docker/tree/dev-master

Note: The docker build instructions also contain a web server example that can 
be used to serve HTTP artifacts.

  was:
This ticket covers testing three related features: FLINK-33695, FLINK-33735 and 
FLINK-33696.

Instructions:
#  Configure Flink to use 
[Slf4jTraceReporter|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/trace_reporters/#slf4j]
 with *INFO* level logging enabled (can be to console or to a file, doesn't 
matter).
# Start a streaming job with enabled checkpointing.
# Let it run for a couple of checkpoints.
# Verify presence of a single *JobInitialization* [1] trace logged just after 
job start up.
# Verify presence of a couple of *Checkpoint* [1] traces logged after each 
successful or failed checkpoint.

[1] 
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/traces/#checkpointing-and-initialization


> Release Testing: Verify FLINK-28915 Support artifact fetching in Standalone 
> and native K8s application mode
> ---
>
> Key: FLINK-34388
> URL: https://issues.apache.org/jira/browse/FLINK-34388
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Affects Versions: 1.19.0
>Reporter: Ferenc Csaky
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> This ticket covers testing FLINK-28915. More details and the added docs are 
> accessible on the [PR|https://github.com/apache/flink/pull/24065]
> Test 1: Pass {{local://}} job jar in standalone mode, check the artifacts are 
> not actually copied.
> Test 2: Pass multiple artifacts in standalone mode.
> Test 3: Pass a non-local job jar in native k8s mode. [1]
> Test 4: Pass additional remote artifacts in native k8s mode.
> Available config options: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#artifact-fetching
> [1] Custom docker image build instructions: 
> https://github.com/apache/flink-docker/tree/dev-master
> Note: The docker build instructions also contain a web server example that 
> can be used to serve HTTP artifacts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34388) Release Testing: Verify FLINK-28915 Support artifact fetching in Standalone and native K8s application mode

2024-02-06 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-34388:


 Summary: Release Testing: Verify FLINK-28915 Support artifact 
fetching in Standalone and native K8s application mode
 Key: FLINK-34388
 URL: https://issues.apache.org/jira/browse/FLINK-34388
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Metrics
Affects Versions: 1.19.0
Reporter: Ferenc Csaky
 Fix For: 1.19.0


This ticket covers testing three related features: FLINK-33695, FLINK-33735 and 
FLINK-33696.

Instructions:
#  Configure Flink to use 
[Slf4jTraceReporter|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/trace_reporters/#slf4j]
 with *INFO* level logging enabled (can be to console or to a file, doesn't 
matter).
# Start a streaming job with enabled checkpointing.
# Let it run for a couple of checkpoints.
# Verify presence of a single *JobInitialization* [1] trace logged just after 
job start up.
# Verify presence of a couple of *Checkpoint* [1] traces logged after each 
successful or failed checkpoint.

[1] 
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/traces/#checkpointing-and-initialization



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32315) Support local file upload in K8s mode

2024-01-24 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810338#comment-17810338
 ] 

Ferenc Csaky commented on FLINK-32315:
--

Hi! I wrapped up the work on FLINK-28915 and also added support for any kind of 
dependency, not just the job JAR, so step 2 is completely covered by that 
work. I would like to take this one as well and make the additional dependency 
management story on K8s complete.

> Support local file upload in K8s mode
> -
>
> Key: FLINK-32315
> URL: https://issues.apache.org/jira/browse/FLINK-32315
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission, Deployment / Kubernetes
>Reporter: Paul Lin
>Priority: Major
>
> Currently, Flink assumes all resources are locally accessible in the pods, 
> which requires users to prepare the resources by mounting storages, 
> downloading resources with init containers, or rebuilding images for each 
> execution.
> We could make things much easier by introducing a built-in file distribution 
> mechanism based on Flink-supported filesystems. It's implemented in two steps:
>  
> 1. KubernetesClusterDescriptor uploads all local resources to remote storage 
> via the Flink filesystem (skipped if the resources are already remote).
> 2. KubernetesApplicationClusterEntrypoint and KubernetesTaskExecutorRunner 
> download the resources and put them in the classpath during startup.
>  
> The 2nd step is mostly done by 
> [FLINK-28915|https://issues.apache.org/jira/browse/FLINK-28915], thus this 
> issue is focused on the upload part.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33542) Update HBase connector tests to JUnit5

2023-11-14 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-33542:


 Summary: Update HBase connector tests to JUnit5
 Key: FLINK-33542
 URL: https://issues.apache.org/jira/browse/FLINK-33542
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / HBase
Reporter: Ferenc Csaky






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33440) Bump flink version on flink-connectors-hbase

2023-11-02 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-33440:


 Summary: Bump flink version on flink-connectors-hbase
 Key: FLINK-33440
 URL: https://issues.apache.org/jira/browse/FLINK-33440
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / HBase
Reporter: Ferenc Csaky


Follow-up the 1.18 release in the connector repo as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33401) Kafka connector has broken version

2023-11-02 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782197#comment-17782197
 ] 

Ferenc Csaky commented on FLINK-33401:
--

In my opinion, some restructuring of the documentation would be required, because 
we are still highlighting DataStream connectors as "bundled", which they are not 
anymore: 
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/overview/

> Kafka connector has broken version
> --
>
> Key: FLINK-33401
> URL: https://issues.apache.org/jira/browse/FLINK-33401
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Pavel Khokhlov
>Priority: Major
>  Labels: pull-request-available
>
> Trying to run Flink 1.18 with Kafka Connector
> but official documentation has a bug  
> [https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/kafka/]
> {noformat}
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-connector-kafka</artifactId>
>     <version>-1.18</version>
> </dependency>
> {noformat}
> Basically version *-1.18* doesn't exist.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33423) Resolve the problem that YarnClusterClientFactory cannot load yarn configurations

2023-11-02 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782109#comment-17782109
 ] 

Ferenc Csaky commented on FLINK-33423:
--

What the ticket states is correct: currently, {{yarn-site.xml}} is not loaded 
automatically. At the moment only Hive loads it on its own to do some things 
according to the given YARN conf.

I am wondering why this is necessary, though? YARN-side configuration specifics 
should not be required to deploy jobs. There is a way to pass YARN-specific 
configuration in the {{flink-conf.yaml}} via {{flink.yarn.<key>}}. More 
details in the 
[docs|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#flink-yarn-%3Ckey%3E].

I think {{HadoopUtils#getHadoopConfiguration}} should definitely not load 
{{yarn-site.xml}}, because that is a Hadoop utility, not a YARN-specific one, 
and it is used by other components.

> Resolve the problem that YarnClusterClientFactory cannot load yarn 
> configurations
> -
>
> Key: FLINK-33423
> URL: https://issues.apache.org/jira/browse/FLINK-33423
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.17.1
>Reporter: zhengzhili
>Priority: Major
> Attachments: flinktest.png, 微信图片_20231101151644.png, 
> 微信图片_20231101152359.png, 微信图片_20231101152404.png, 微信截图_20231101152725.png
>
>
> The YarnClusterClientFactory#getClusterDescriptor method is unable to load the 
> configuration for YARN. The reason is that it calls 
> HadoopUtils.getHadoopConfiguration, and this method only loads the HDFS 
> configuration.
> The call chain looks like this: 
> YarnClusterClientFactory#getClusterDescriptor --> 
> Utils#getYarnAndHadoopConfiguration --> HadoopUtils#getHadoopConfiguration --> 
> HadoopUtils#addHadoopConfIfFound. 
> However, the HadoopUtils#addHadoopConfIfFound method does not load YARN 
> configuration information.
> First, YarnClusterClientFactory#getClusterDescriptor calls 
> Utils.getYarnAndHadoopConfiguration:
> {code:java}
> private YarnClusterDescriptor getClusterDescriptor(Configuration configuration) {
>     final YarnClient yarnClient = YarnClient.createYarnClient();
>     final YarnConfiguration yarnConfiguration =
>             Utils.getYarnAndHadoopConfiguration(configuration);
>     yarnClient.init(yarnConfiguration);
>     yarnClient.start();
>     return new YarnClusterDescriptor(
>             configuration,
>             yarnConfiguration,
>             yarnClient,
>             YarnClientYarnClusterInformationRetriever.create(yarnClient),
>             false);
> }
> {code}
> Utils#getYarnAndHadoopConfiguration then calls 
> HadoopUtils#getHadoopConfiguration, which only loads the Hadoop configuration 
> and is unable to load the configuration for YARN:
> {code:java}
> public static YarnConfiguration getYarnAndHadoopConfiguration(
>         org.apache.flink.configuration.Configuration flinkConfig) {
>     final YarnConfiguration yarnConfig = getYarnConfiguration(flinkConfig);
>     yarnConfig.addResource(HadoopUtils.getHadoopConfiguration(flinkConfig));
>     return yarnConfig;
> }
> {code}
> In HadoopUtils#getHadoopConfiguration, approach 3 loads the configuration 
> files through HadoopUtils#addHadoopConfIfFound:
> {code:java}
> public static Configuration getHadoopConfiguration(...) {
>     ...
>     // Approach 3: HADOOP_CONF_DIR environment variable
>     String hadoopConfDir = System.getenv("HADOOP_CONF_DIR");
>     if (hadoopConfDir != null) {
>         LOG.debug("Searching Hadoop configuration files in HADOOP_CONF_DIR: {}",
>                 hadoopConfDir);
>         foundHadoopConfiguration =
>                 addHadoopConfIfFound(result, hadoopConfDir) || foundHadoopConfiguration;
>     }
>     ...
> }
> {code}
> Finally, the HadoopUtils#addHadoopConfIfFound method loads only the core-site 
> and hdfs-site configuration, but not the yarn-site configuration:
> {code:java}
> private static boolean addHadoopConfIfFound(
>         Configuration configuration, String possibleHadoopConfPath) {
>     boolean foundHadoopConfiguration = false;
>     if (new File(possibleHadoopConfPath).exists()) {
>         ...
>         if (new File(possibleHadoopConfPath + "/hdfs-site.xml").exists()) {
>             configuration.addResource(
>                     new org.apache.hadoop.fs.Path(
>                             possibleHadoopConfPath + "/hdfs-site.xml"));
>             LOG.debug("Adding " + possibleHadoopConfPath
>                     + "/hdfs-site.xml to hadoop configuration");
>             foundHadoopConfiguration = true;
>         }
>     }
>     return foundHadoopConfiguration;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33353) SQL fails because "TimestampType.kind" is not serialized

2023-11-02 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782089#comment-17782089
 ] 

Ferenc Csaky commented on FLINK-33353:
--

[~twalthr] any thoughts?

> SQL fails because "TimestampType.kind" is not serialized 
> -
>
> Key: FLINK-33353
> URL: https://issues.apache.org/jira/browse/FLINK-33353
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: Ferenc Csaky
>Priority: Major
>
> We have a custom persistent catalog store, which stores tables, views, etc. in 
> a DB. In our application, it is required to utilize the serialized formats of 
> entities, but the same applies to Hive, as it functions as a persistent 
> catalog.
> Take the following example SQL:
> {code:sql}
> CREATE TABLE IF NOT EXISTS `txn_gen` (
>   `txn_id` INT,
>   `amount` INT,
>   `ts` TIMESTAMP(3),
>WATERMARK FOR `ts` AS `ts` - INTERVAL '1' SECOND
> ) WITH (
>   'connector' = 'datagen',
>   'fields.txn_id.min' = '1',
>   'fields.txn_id.max' = '5',
>   'rows-per-second' = '1'
> );
> CREATE VIEW IF NOT EXISTS aggr_ten_sec AS
>   SELECT txn_id,
>  TUMBLE_ROWTIME(`ts`, INTERVAL '10' SECOND) AS w_row_time,
>  COUNT(txn_id) AS txn_count
> FROM txn_gen
> GROUP BY txn_id, TUMBLE(`ts`, INTERVAL '10' SECOND);
> SELECT txn_id,
>SUM(txn_count),
>TUMBLE_START(w_row_time, INTERVAL '20' SECOND) AS total_txn_count
>   FROM aggr_ten_sec
>   GROUP BY txn_id, TUMBLE(w_row_time, INTERVAL '20' SECOND);
> {code}
> This will work without any problems when we simply execute it in a 
> {{TableEnvironment}}, but it fails with the below error when we try to 
> execute the query based on the serialized table metadata.
> {code}
> org.apache.flink.table.api.TableException: Window aggregate can only be 
> defined over a time attribute column, but TIMESTAMP(3) encountered.
> {code}
> If there is a view which would require the use of ROWTIME, it will be lost, 
> and we cannot recreate the same query from the serialized entities.
> Currently in 
> [TimestampType|https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/TimestampType.java]
>  the "kind" field is deliberately annotated as {{@Internal}} and is not 
> serialized, although this breaks the functionality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33353) SQL fails because "TimestampType.kind" is not serialized

2023-10-25 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-33353:
-
Description: 
We have a custom persistent catalog store, which stores tables, views, etc. in a 
DB. In our application, it is required to utilize the serialized formats of 
entities, but the same applies to Hive, as it functions as a persistent 
catalog.

Take the following example SQL:

{code:sql}
CREATE TABLE IF NOT EXISTS `txn_gen` (
  `txn_id` INT,
  `amount` INT,
  `ts` TIMESTAMP(3),
   WATERMARK FOR `ts` AS `ts` - INTERVAL '1' SECOND
) WITH (
  'connector' = 'datagen',
  'fields.txn_id.min' = '1',
  'fields.txn_id.max' = '5',
  'rows-per-second' = '1'
);

CREATE VIEW IF NOT EXISTS aggr_ten_sec AS
  SELECT txn_id,
 TUMBLE_ROWTIME(`ts`, INTERVAL '10' SECOND) AS w_row_time,
 COUNT(txn_id) AS txn_count
FROM txn_gen
GROUP BY txn_id, TUMBLE(`ts`, INTERVAL '10' SECOND);

SELECT txn_id,
   SUM(txn_count),
   TUMBLE_START(w_row_time, INTERVAL '20' SECOND) AS total_txn_count
  FROM aggr_ten_sec
  GROUP BY txn_id, TUMBLE(w_row_time, INTERVAL '20' SECOND);
{code}

This will work without any problems when we simply execute it in a 
{{TableEnvironment}}, but it fails with the below error when we try to execute 
the query based on the serialized table metadata.
{code}
org.apache.flink.table.api.TableException: Window aggregate can only be defined 
over a time attribute column, but TIMESTAMP(3) encountered.
{code}

If there is a view that requires ROWTIME, it will be lost and we cannot 
recreate the same query from the serialized entities.

Currently in 
[TimestampType|https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/TimestampType.java]
 the "kind" field is deliberately annotated as {{@Internal}} and is not 
serialized, which breaks this functionality.


  was:
We have a custom persistent catalog store, which stores tables, views, etc. in a 
DB. In our application, it is required to utilize the serialized formats of 
entities; the same applies to Hive, as it also functions as a persistent 
catalog.

Take the following example SQL:

{code:sql}
CREATE TABLE IF NOT EXISTS `txn_gen` (
  `txn_id` INT,
  `amount` INT,
  `ts` TIMESTAMP(3),
   WATERMARK FOR `ts` AS `ts` - INTERVAL '1' SECOND
) WITH (
  'connector' = 'datagen',
  'fields.txn_id.min' = '1',
  'fields.txn_id.max' = '5',
  'rows-per-second' = '1'
);

CREATE VIEW IF NOT EXISTS aggr_ten_sec AS
  SELECT txn_id,
 TUMBLE_ROWTIME(`ts`, INTERVAL '10' SECOND) AS w_row_time,
 COUNT(txn_id) AS txn_count
FROM txn_gen
GROUP BY txn_id, TUMBLE(`ts`, INTERVAL '10' SECOND);

SELECT txn_id,
   SUM(txn_count),
   TUMBLE_START(w_row_time, INTERVAL '20' SECOND) AS total_txn_count
  FROM aggr_ten_sec
  GROUP BY txn_id, TUMBLE(w_row_time, INTERVAL '20' SECOND);
{code}

This will work without any problems when we simply execute it in a 
{{TableEnvironment}}, but it fails with the below error when we try to execute 
the query based on the serialized table metadata.
{code}
org.apache.flink.table.api.TableException: Window aggregate can only be defined 
over a time attribute column, but TIMESTAMP(3) encountered.
{code}

If there is a view that requires ROWTIME, it will be lost and we cannot 
recreate the same query from the serialized entities.

Currently in {{TimestampType}} the "kind" field is deliberately annotated as 
{{@Internal}} and is not serialized, which breaks this functionality.



> SQL fails because "TimestampType.kind" is not serialized 
> -
>
> Key: FLINK-33353
> URL: https://issues.apache.org/jira/browse/FLINK-33353
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: Ferenc Csaky
>Priority: Major
>
> We have a custom persistent catalog store, which stores tables, views, etc. in 
> a DB. In our application, it is required to utilize the serialized formats of 
> entities; the same applies to Hive, as it also functions as a persistent 
> catalog.
> Take the following example SQL:
> {code:sql}
> CREATE TABLE IF NOT EXISTS `txn_gen` (
>   `txn_id` INT,
>   `amount` INT,
>   `ts` TIMESTAMP(3),
>WATERMARK FOR `ts` AS `ts` - INTERVAL '1' SECOND
> ) WITH (
>   'connector' = 'datagen',
>   'fields.txn_id.min' = '1',
>   'fields.txn_id.max' = '5',
>   'rows-per-second' = '1'
> );
> CREATE VIEW IF NOT EXISTS aggr_ten_sec AS
>   SELECT txn_id,
>  TUMBLE_ROWTIME(`ts`, INTERVAL '10' SECOND) AS w_row_time,
>  COUNT(txn_id) AS txn_count
> FROM txn_gen
> GROUP BY txn_id, TUMBLE(`ts`, INTERVAL '10' SECOND);
> SELECT txn_id,
>SUM(txn_count),
>TUMBLE_START(w_row_time, INTERVAL 

[jira] [Commented] (FLINK-33353) SQL fails because "TimestampType.kind" is not serialized

2023-10-25 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17779489#comment-17779489
 ] 

Ferenc Csaky commented on FLINK-33353:
--

In a nutshell, we are utilizing the (deprecated) 
[DescriptorProperties|https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/descriptors/DescriptorProperties.java]
 to extract the metadata from {{CatalogBaseTable}} and store that as part of a 
custom DB record.
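
For context, a minimal sketch of that flow (the calls use the deprecated API 
mentioned above; the wrapper class and the {{"schema"}} key prefix are 
assumptions, not taken from the ticket):
{code:java}
import java.util.Map;

import org.apache.flink.table.catalog.CatalogBaseTable;
import org.apache.flink.table.descriptors.DescriptorProperties;

public class CatalogTableSerializer {
    /** Flattens table metadata into string properties for the custom DB record. */
    public static Map<String, String> serialize(CatalogBaseTable table) {
        DescriptorProperties props = new DescriptorProperties(true);
        // The schema keys carry the column types, e.g. "schema.2.data-type",
        // which is where the timestamp kind gets lost.
        props.putTableSchema("schema", table.getSchema());
        props.putProperties(table.getOptions());
        return props.asMap(); // persisted as part of the DB record
    }
}
{code}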

> SQL fails because "TimestampType.kind" is not serialized 
> -
>
> Key: FLINK-33353
> URL: https://issues.apache.org/jira/browse/FLINK-33353
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: Ferenc Csaky
>Priority: Major
>
> We have a custom persistent catalog store, which stores tables, views, etc. in 
> a DB. In our application, it is required to utilize the serialized formats of 
> entities; the same applies to Hive, as it also functions as a persistent 
> catalog.
> Take the following example SQL:
> {code:sql}
> CREATE TABLE IF NOT EXISTS `txn_gen` (
>   `txn_id` INT,
>   `amount` INT,
>   `ts` TIMESTAMP(3),
>WATERMARK FOR `ts` AS `ts` - INTERVAL '1' SECOND
> ) WITH (
>   'connector' = 'datagen',
>   'fields.txn_id.min' = '1',
>   'fields.txn_id.max' = '5',
>   'rows-per-second' = '1'
> );
> CREATE VIEW IF NOT EXISTS aggr_ten_sec AS
>   SELECT txn_id,
>  TUMBLE_ROWTIME(`ts`, INTERVAL '10' SECOND) AS w_row_time,
>  COUNT(txn_id) AS txn_count
> FROM txn_gen
> GROUP BY txn_id, TUMBLE(`ts`, INTERVAL '10' SECOND);
> SELECT txn_id,
>SUM(txn_count),
>TUMBLE_START(w_row_time, INTERVAL '20' SECOND) AS total_txn_count
>   FROM aggr_ten_sec
>   GROUP BY txn_id, TUMBLE(w_row_time, INTERVAL '20' SECOND);
> {code}
> This will work without any problems when we simply execute it in a 
> {{TableEnvironment}}, but it fails with the below error when we try to 
> execute the query based on the serialized table metadata.
> {code}
> org.apache.flink.table.api.TableException: Window aggregate can only be 
> defined over a time attribute column, but TIMESTAMP(3) encountered.
> {code}
> If there is a view that requires ROWTIME, it will be lost and we cannot 
> recreate the same query from the serialized entities.
> Currently in {{TimestampType}} the "kind" field is deliberately annotated as 
> {{@Internal}} and is not serialized, which breaks this functionality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33353) SQL fails because "TimestampType.kind" is not serialized

2023-10-24 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-33353:


 Summary: SQL fails because "TimestampType.kind" is not serialized 
 Key: FLINK-33353
 URL: https://issues.apache.org/jira/browse/FLINK-33353
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.18.0
Reporter: Ferenc Csaky


We have a custom persistent catalog store, which stores tables, views, etc. in a 
DB. In our application, it is required to utilize the serialized formats of 
entities; the same applies to Hive, as it also functions as a persistent 
catalog.

Take the following example SQL:

{code:sql}
CREATE TABLE IF NOT EXISTS `txn_gen` (
  `txn_id` INT,
  `amount` INT,
  `ts` TIMESTAMP(3),
   WATERMARK FOR `ts` AS `ts` - INTERVAL '1' SECOND
) WITH (
  'connector' = 'datagen',
  'fields.txn_id.min' = '1',
  'fields.txn_id.max' = '5',
  'rows-per-second' = '1'
);

CREATE VIEW IF NOT EXISTS aggr_ten_sec AS
  SELECT txn_id,
 TUMBLE_ROWTIME(`ts`, INTERVAL '10' SECOND) AS w_row_time,
 COUNT(txn_id) AS txn_count
FROM txn_gen
GROUP BY txn_id, TUMBLE(`ts`, INTERVAL '10' SECOND);

SELECT txn_id,
   SUM(txn_count),
   TUMBLE_START(w_row_time, INTERVAL '20' SECOND) AS total_txn_count
  FROM aggr_ten_sec
  GROUP BY txn_id, TUMBLE(w_row_time, INTERVAL '20' SECOND);
{code}

This will work without any problems when we simply execute it in a 
{{TableEnvironment}}, but it fails with the below error when we try to execute 
the query based on the serialized table metadata.
{code}
org.apache.flink.table.api.TableException: Window aggregate can only be defined 
over a time attribute column, but TIMESTAMP(3) encountered.
{code}

If there is a view that requires ROWTIME, it will be lost and we cannot 
recreate the same query from the serialized entities.

Currently in {{TimestampType}} the "kind" field is deliberately annotated as 
{{@Internal}} and is not serialized, which breaks this functionality.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-20955) Refactor HBase Source in accordance with FLIP-27

2023-10-23 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778779#comment-17778779
 ] 

Ferenc Csaky commented on FLINK-20955:
--

Hey, thanks for looking into this! I did not know about this in-progress 
ticket; I thought about doing this once the externalized connector is released. 
So sure, I can commit to reviewing, and even help out if you think you need a 
hand, so feel free to ping me about it.

> Refactor HBase Source in accordance with FLIP-27
> 
>
> Key: FLINK-20955
> URL: https://issues.apache.org/jira/browse/FLINK-20955
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Reporter: Moritz Manner
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>
> The HBase connector source implementation should be updated in accordance 
> with [FLIP-27: Refactor Source 
> Interface|https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface].
> One source should map to one table in HBase. Users can specify which 
> column[families] to watch; each change in one of the columns triggers a 
> record with change type, table, column family, column, value, and timestamp.
> h3. Idea
> The new Flink HBase Source makes use of the internal [replication mechanism 
> of HBase|https://hbase.apache.org/book.html#_cluster_replication]. The Source 
> is registering at the HBase cluster and will receive all WAL edits written in 
> HBase. From those WAL edits the Source can create the DataStream. 
> h3. Split
> We're still not 100% sure which information a Split should contain. We have 
> the following possibilities: 
>  # There is only one Split per Source and the Split contains all the 
> necessary information to connect with HBase. The SourceReader which processes 
> the Split will receive all WAL edits for all tables and filters the relevant 
> edits. 
>  # There are multiple Splits per Source, each Split representing one HBase 
> Region to read from. This assumes that it is possible to only receive WAL 
> edits from a specific HBase Region and not receive all WAL edits. This would 
> be preferable as it allows parallel processing of multiple regions, but we 
> still need to figure out how this is possible.
> In both cases the Split will contain information about the HBase instance and 
> table. 
> h3. Split Enumerator
> Depending on which Split we'll decide on, the split enumerator will connect 
> to HBase and get all relevant regions or just create one Split.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33226) Forbid to drop current database

2023-10-10 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17773591#comment-17773591
 ] 

Ferenc Csaky commented on FLINK-33226:
--

Hi! I bumped into this a while back and I think this makes sense. Since it is a 
fairly small change, I opened a PR.
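
For reference, a minimal Flink SQL sequence that the change should reject, 
mirroring the PG behavior quoted below (the database name is hypothetical):
{code:sql}
USE CATALOG default_catalog;
USE my_db;
-- Expected to fail after this change: my_db is the session's current database.
DROP DATABASE my_db;
{code}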

> Forbid to drop current database
> ---
>
> Key: FLINK-33226
> URL: https://issues.apache.org/jira/browse/FLINK-33226
> Project: Flink
>  Issue Type: Improvement
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
>
> PG or MySql both doesn't support to drop the current database. PG throws the 
> following exception.
> {code:java}
> test=# drop database
> test-# test;
> ERROR:  cannot drop the currently open database
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32796) Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-09-06 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762327#comment-17762327
 ] 

Ferenc Csaky commented on FLINK-32796:
--

Added PRs against master: [https://github.com/apache/flink/pull/23366], 
[https://github.com/apache/flink/pull/23367]

> Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and 
> persistence of catalog configurations
> ---
>
> Key: FLINK-32796
> URL: https://issues.apache.org/jira/browse/FLINK-32796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: check_after_restart.png, create_catalog.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32796) Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-09-06 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762310#comment-17762310
 ] 

Ferenc Csaky commented on FLINK-32796:
--

[~Sergey Nuyanzin] Correct! [~hackergin] I missed your last response on the doc 
PR, so I just addressed that, can you pls. check it again? Thanks!

> Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and 
> persistence of catalog configurations
> ---
>
> Key: FLINK-32796
> URL: https://issues.apache.org/jira/browse/FLINK-32796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: check_after_restart.png, create_catalog.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32987) BlobClientSslTest>BlobClientTest.testSocketTimeout expected SocketTimeoutException but identified SSLException

2023-08-30 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760563#comment-17760563
 ] 

Ferenc Csaky commented on FLINK-32987:
--

Definitely, pls. assign it to me, I'll take care of it. Thanks for the detailed 
description!

> BlobClientSslTest>BlobClientTest.testSocketTimeout expected 
> SocketTimeoutException but identified SSLException
> --
>
> Key: FLINK-32987
> URL: https://issues.apache.org/jira/browse/FLINK-32987
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=8692
> {code}
> Aug 29 03:28:11 03:28:11.280 [ERROR]   
> BlobClientSslTest>BlobClientTest.testSocketTimeout:512 
> Aug 29 03:28:11 Expecting a throwable with cause being an instance of:
> Aug 29 03:28:11   java.net.SocketTimeoutException
> Aug 29 03:28:11 but was an instance of:
> Aug 29 03:28:11   javax.net.ssl.SSLException
> Aug 29 03:28:11 Throwable that failed the check:
> Aug 29 03:28:11 
> Aug 29 03:28:11 java.io.IOException: GET operation failed: Read timed out
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClient.getInternal(BlobClient.java:231)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.lambda$testSocketTimeout$2(BlobClientTest.java:510)
> Aug 29 03:28:11   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Aug 29 03:28:11   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.catchThrowable(Assertions.java:1366)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.assertThatThrownBy(Assertions.java:1210)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.testSocketTimeout(BlobClientTest.java:508)
> Aug 29 03:28:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 29 03:28:11   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 29 03:28:11   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 29 03:28:11   at java.lang.reflect.Method.invoke(Method.java:498)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32999) Remove HBase connector from master branch

2023-08-30 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760400#comment-17760400
 ] 

Ferenc Csaky commented on FLINK-32999:
--

Thanks for creating this! Feel free to assign this to me, I'd be happy to help 
wrap up the HBase externalization process.

> Remove HBase connector from master branch
> -
>
> Key: FLINK-32999
> URL: https://issues.apache.org/jira/browse/FLINK-32999
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Reporter: Sergey Nuyanzin
>Priority: Major
>
> The connector was externalized at FLINK-30061
> Once it is released it would make sense to remove it from master branch



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32796) Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-08-25 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759086#comment-17759086
 ] 

Ferenc Csaky commented on FLINK-32796:
--

[~hackergin] I created 2 PRs linked to this issue; if you have some time, can 
you check them and see if you think they are okay? Thanks!

> Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and 
> persistence of catalog configurations
> ---
>
> Key: FLINK-32796
> URL: https://issues.apache.org/jira/browse/FLINK-32796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: check_after_restart.png, create_catalog.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32796) Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-08-24 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17758706#comment-17758706
 ] 

Ferenc Csaky commented on FLINK-32796:
--

I think the documentation can be improved; I plan to file a PR with suggestions 
tomorrow.

Another point worth noting IMO (I plan to add this to the doc as well): the 
current FILE catalog store impl expects the given catalog store path folder to 
exist and does not create it on its own. Hence, if a non-existing (or 
inaccessible) path is passed, the SQL client will fail to start and the SQL 
gateway will return an error on session create. The reason for that is the 
external FS support: the file catalog store can be placed on any supported 
distributed FS (hdfs, s3, etc.), and in those cases creating the catalog store 
root might fail.

Maybe it makes sense to not fail in all cases, but at least try to create the 
dirs for the given path if they do not exist when the catalog store is opened, 
and only fail if that cannot be done. I added the external file system support, 
so I can open a PR with this improvement tomorrow.
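
A minimal sketch of the proposed behavior, using Flink's {{FileSystem}} API (the 
surrounding class and field are hypothetical):
{code:java}
import java.io.IOException;

import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.table.catalog.exceptions.CatalogException;

class FileCatalogStoreOpenSketch {
    private final Path storePath = new Path("hdfs:///flink/catalog-store");

    void open() {
        try {
            FileSystem fs = storePath.getFileSystem();
            // Try to create the store root instead of failing outright...
            if (!fs.exists(storePath) && !fs.mkdirs(storePath)) {
                // ...and only fail if it cannot be created.
                throw new CatalogException("Could not create catalog store path: " + storePath);
            }
        } catch (IOException e) {
            throw new CatalogException("Failed to open catalog store: " + storePath, e);
        }
    }
}
{code}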

> Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and 
> persistence of catalog configurations
> ---
>
> Key: FLINK-32796
> URL: https://issues.apache.org/jira/browse/FLINK-32796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Ferenc Csaky
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: check_after_restart.png, create_catalog.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32796) Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-08-24 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17758699#comment-17758699
 ] 

Ferenc Csaky commented on FLINK-32796:
--

Added config to {{flink-conf.yaml}}:
{code:yaml}
table.catalog-store.kind: file
table.catalog-store.file.path: file:///tmp/test_catalog_store
{code}
Dummy catalog create via SQL client:
!create_catalog.png|width=642,height=1108!

Catalog store folder check:
{code:sh}
/tmp/test_catalog_store
% cat test.yaml
default-database: "dummy"
type: "generic_in_memory"
{code}
Verify persisted catalog in a new SQL client session:
!check_after_restart.png|width=574,height=1083!
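
For reference, the catalog in the screenshots was presumably created along 
these lines (reconstructed from the persisted {{test.yaml}} above, so the exact 
statement is an assumption):
{code:sql}
-- Persisted by the file catalog store as /tmp/test_catalog_store/test.yaml:
CREATE CATALOG test WITH (
  'type' = 'generic_in_memory',
  'default-database' = 'dummy'
);
SHOW CATALOGS;
{code}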

> Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and 
> persistence of catalog configurations
> ---
>
> Key: FLINK-32796
> URL: https://issues.apache.org/jira/browse/FLINK-32796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Ferenc Csaky
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: check_after_restart.png, create_catalog.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32796) Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-08-24 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-32796:
-
Attachment: check_after_restart.png

> Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and 
> persistence of catalog configurations
> ---
>
> Key: FLINK-32796
> URL: https://issues.apache.org/jira/browse/FLINK-32796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Ferenc Csaky
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: check_after_restart.png, create_catalog.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32796) Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-08-24 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-32796:
-
Attachment: create_catalog.png

> Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and 
> persistence of catalog configurations
> ---
>
> Key: FLINK-32796
> URL: https://issues.apache.org/jira/browse/FLINK-32796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Ferenc Csaky
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: create_catalog.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32796) Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-08-23 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757979#comment-17757979
 ] 

Ferenc Csaky commented on FLINK-32796:
--

[~renqs] I volunteer to test this feature, feel free to assign it to me.

Relevant docs: 
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-store

> Release Testing: Verify FLIP-295: Support lazy initialization of catalogs and 
> persistence of catalog configurations
> ---
>
> Key: FLINK-32796
> URL: https://issues.apache.org/jira/browse/FLINK-32796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32835) [JUnit5 Migration] The accumulators, blob and blocklist packages of flink-runtime module

2023-08-16 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755029#comment-17755029
 ] 

Ferenc Csaky commented on FLINK-32835:
--

[~fanrui] if you have time, can you check on the PR? Thanks in advance!

> [JUnit5 Migration] The accumulators, blob and blocklist packages of 
> flink-runtime module
> 
>
> Key: FLINK-32835
> URL: https://issues.apache.org/jira/browse/FLINK-32835
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Rui Fan
>Assignee: Ferenc Csaky
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32835) [JUnit5 Migration] The accumulators, blob and blocklist packages of flink-runtime module

2023-08-14 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17754110#comment-17754110
 ] 

Ferenc Csaky commented on FLINK-32835:
--

FYI: the Jira title contains the "blocklist" package, as I just listed A-B in 
the other jira when we defined the scope for this one, although those tests are 
already JUnit5, so the PR only includes the "accumulators" and "blob" unit tests.

> [JUnit5 Migration] The accumulators, blob and blocklist packages of 
> flink-runtime module
> 
>
> Key: FLINK-32835
> URL: https://issues.apache.org/jira/browse/FLINK-32835
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Rui Fan
>Assignee: Ferenc Csaky
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25540) [JUnit5 Migration] Module: flink-runtime

2023-08-11 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17753203#comment-17753203
 ] 

Ferenc Csaky commented on FLINK-25540:
--

Thanks for the administration [~fanrui]!

> [JUnit5 Migration] Module: flink-runtime
> 
>
> Key: FLINK-25540
> URL: https://issues.apache.org/jira/browse/FLINK-25540
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25540) [JUnit5 Migration] Module: flink-runtime

2023-08-11 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17753062#comment-17753062
 ] 

Ferenc Csaky commented on FLINK-25540:
--

Thanks for the quick response folks, I agree that in one chunk it would be 
basically impossible to review; I like the idea to split by packages and size. 
Yesterday I already migrated some of them for starters, so I can take on 
packages A-B (accumulators, blob, blocklist) if that's okay. Would it be okay 
to create the other new subtasks under FLINK-25325?

WDYT?

> [JUnit5 Migration] Module: flink-runtime
> 
>
> Key: FLINK-25540
> URL: https://issues.apache.org/jira/browse/FLINK-25540
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: RocMarshal
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25540) [JUnit5 Migration] Module: flink-runtime

2023-08-10 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17752799#comment-17752799
 ] 

Ferenc Csaky commented on FLINK-25540:
--

I just bumped into this when I was working on another task. [~RocMarshal] do 
you still plan to do this? I can take care of it in the next couple of days if 
not.

> [JUnit5 Migration] Module: flink-runtime
> 
>
> Key: FLINK-25540
> URL: https://issues.apache.org/jira/browse/FLINK-25540
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: RocMarshal
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32811) Add port range support for taskmanager.data.bind-port

2023-08-08 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-32811:


 Summary: Add port range support for taskmanager.data.bind-port
 Key: FLINK-32811
 URL: https://issues.apache.org/jira/browse/FLINK-32811
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration, Runtime / Coordination
Reporter: Ferenc Csaky
 Fix For: 1.19.0


Adding this feature could be helpful for installations in restrictive network 
setups. "Port range" support is already available for some other port config 
options anyway.

Right now, it is possible to specify {{taskmanager.data.port}} and 
{{taskmanager.data.bind-port}} to support NAT-like setups, although 
{{taskmanager.data.port}} is not bound to anything itself, so supporting a port 
range there is not an option, according to my understanding.

Still, supporting a port range only for {{taskmanager.data.bind-port}} can be 
helpful for anyone who does not require NAT capability, because if 
{{taskmanager.data.bind-port}} is set and {{taskmanager.data.port}} is set to 
*0*, then the bound port will be used everywhere.

This change should keep the already possible setups working as is.
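
A sketch of the intended configuration (the range syntax mirrors existing 
options such as {{taskmanager.rpc.port}}; accepting a range here is exactly 
what this ticket proposes, so it is not valid yet):
{code:yaml}
# Bind the TM data server to a free port within the range...
taskmanager.data.bind-port: 50100-50200
# ...and, with the port set to 0, advertise the actually bound port everywhere:
taskmanager.data.port: 0
{code}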



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29281) Replace Akka by gRPC-based RPC implementation

2023-07-26 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747530#comment-17747530
 ] 

Ferenc Csaky commented on FLINK-29281:
--

[~chesnay] does this annul the Akka Artery migration (FLINK-28372) at this 
point? The migration to Pekko, an Akka 2.6 fork, is now done, which I guess does 
not change the semantics of the Artery migration, but if this is within reach, 
putting more effort into Artery to update/complete the existing draft may not 
be worth it.

> Replace Akka by gRPC-based RPC implementation
> -
>
> Key: FLINK-29281
> URL: https://issues.apache.org/jira/browse/FLINK-29281
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / RPC
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>
> Following the license change I propose to eventually replace Akka.
> Based on LEGAL-619 an exemption is not feasible, and while a fork _may_ be 
> created it's long-term future is up in the air and I'd be uncomfortable with 
> relying on it.
> I've been experimenting with a new RPC implementation based on gRPC and so 
> far I'm quite optimistic. It's also based on Netty while not requiring as 
> much of a tight coupling as Akka did.
> This would also allow us to sidestep migrating our current Akka setup from 
> Netty 3 (which is affected by several CVEs) to Akka Artery, both saving work 
> and not introducing an entirely different network stack to the project.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32660) Support external file systems in FileCatalogStore

2023-07-24 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17746430#comment-17746430
 ] 

Ferenc Csaky commented on FLINK-32660:
--

Hi [~Leonard],

can you pls. assign this one to me? I opened the PR with the changes. For now, 
I applied 1.18, but IDK if that's feasible.

Thanks,
Ferenc

> Support external file systems in FileCatalogStore
> -
>
> Key: FLINK-32660
> URL: https://issues.apache.org/jira/browse/FLINK-32660
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32433) Add built-in FileCatalogStore

2023-07-24 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17746419#comment-17746419
 ] 

Ferenc Csaky commented on FLINK-32433:
--

Sounds good to me.

> Add built-in FileCatalogStore 
> --
>
> Key: FLINK-32433
> URL: https://issues.apache.org/jira/browse/FLINK-32433
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32660) Support external file systems in FileCatalogStore

2023-07-24 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-32660:


 Summary: Support external file systems in FileCatalogStore
 Key: FLINK-32660
 URL: https://issues.apache.org/jira/browse/FLINK-32660
 Project: Flink
  Issue Type: Sub-task
Reporter: Ferenc Csaky
 Fix For: 1.18.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32433) Add built-in FileCatalogStore

2023-07-24 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17746407#comment-17746407
 ] 

Ferenc Csaky commented on FLINK-32433:
--

Hi [~hackergin], [~leonard],

I see that this is already merged, although [~hackergin] and I agreed that I 
would implement this part. I opened a PR with my changes anyway, because I 
implemented it to also support external filesystems.

I think local-FS-only support is inferior in a production environment compared 
to a distributed FS, e.g. HDFS or some cloud storage like S3. In my opinion, it 
would be worth supporting that with the built-in {{FileCatalogStore}}. WDYT?
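
For illustration, with external filesystem support the store root could live on 
a distributed FS (the paths are hypothetical):
{code:yaml}
table.catalog-store.kind: file
# Any Flink-supported filesystem scheme would work, not just file://
table.catalog-store.file.path: hdfs:///flink/catalog-store
# or, e.g.: s3://my-bucket/flink/catalog-store
{code}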

> Add built-in FileCatalogStore 
> --
>
> Key: FLINK-32433
> URL: https://issues.apache.org/jira/browse/FLINK-32433
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Feng Jin
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32571) Prebuild HBase testing docker image

2023-07-18 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17744225#comment-17744225
 ] 

Ferenc Csaky commented on FLINK-32571:
--

Once HBase 3.x is released and requires Hadoop 3.x, it can become a problem. 
But IDK when that will actually happen; it has been in alpha for 2 years now.

> Prebuild HBase testing docker image
> ---
>
> Key: FLINK-32571
> URL: https://issues.apache.org/jira/browse/FLINK-32571
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / HBase
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: hbase-3.0.0
>
>
> For testing we currently build an HBase docker image on-demand during 
> testing. We can improve reliability and testing times by building this image 
> ahead of time, as the only parameter is the HBase version.
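
A sketch of the idea (the base image, HBase version, and start command are 
assumptions, not the ticket's design):
{code}
# Build once per HBase version and push to a registry, instead of building
# the image during each test run:
FROM eclipse-temurin:8-jre-focal
ARG HBASE_VERSION=2.4.17
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && curl -fsSL "https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz" \
       | tar -xz -C /opt \
    && ln -s /opt/hbase-${HBASE_VERSION} /opt/hbase \
    && rm -rf /var/lib/apt/lists/*
ENV HBASE_HOME=/opt/hbase PATH="/opt/hbase/bin:${PATH}"
# Standalone mode is enough for connector tests:
CMD ["hbase", "master", "start"]
{code}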



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30349) Sync missing HBase e2e tests to external repo

2023-07-12 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742482#comment-17742482
 ] 

Ferenc Csaky commented on FLINK-30349:
--

Cherry-picked the commit for fixing the tests to 1.17 from the Flink repo: 
https://github.com/apache/flink-connector-hbase/pull/14

> Sync missing HBase e2e tests to external repo
> -
>
> Key: FLINK-30349
> URL: https://issues.apache.org/jira/browse/FLINK-30349
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Affects Versions: hbase-3.0.0
>Reporter: Martijn Visser
>Assignee: Ferenc Csaky
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: hbase-3.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30349) Sync missing HBase e2e tests to external repo

2023-07-12 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742471#comment-17742471
 ] 

Ferenc Csaky commented on FLINK-30349:
--

Correct, I think. Although the E2E tests are failing ATM. Currently, as far as I 
can see from the logs, the tests are executed multiple times (which is weird, 
might be related to the parametrization) because the records do not show up in 
HBase. I did some refactoring, but the [CI tests on my 
fork|https://github.com/ferenc-csaky/flink-connector-hbase/actions/runs/5520614808/jobs/10067489222]
 now fail on downloading the {{flink-base}} docker img for one of the tests, 
complaining about a missing docker login, which is also weird. For the record, 
on my machine both the current implementation and my changes ran successfully.

And as far as I can see from the newer commits, some of the non-E2E IT cases 
also fail for Flink 1.17; I'll take a look at that.

> Sync missing HBase e2e tests to external repo
> -
>
> Key: FLINK-30349
> URL: https://issues.apache.org/jira/browse/FLINK-30349
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Affects Versions: hbase-3.0.0
>Reporter: Martijn Visser
>Assignee: Ferenc Csaky
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: hbase-3.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32427) FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-07-03 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17739618#comment-17739618
 ] 

Ferenc Csaky commented on FLINK-32427:
--

Hi [~hackergin],

Really happy to see this feature coming in 1.18. We are also looking forward 
to using the SQL gateway in our product, and this task removes one of the 
blockers, so if I can help out with any of the tickets to move things forward, 
pls. reach out.

> FLIP-295: Support lazy initialization of catalogs and persistence of catalog 
> configurations
> ---
>
> Key: FLINK-32427
> URL: https://issues.apache.org/jira/browse/FLINK-32427
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Feng Jin
>Priority: Major
>
> Umbrella issue for 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-295%3A+Support+lazy+initialization+of+catalogs+and+persistence+of+catalog+configurations



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32391) YARNSessionFIFOITCase.checkForProhibitedLogContents fails on AZP

2023-06-27 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17737624#comment-17737624
 ] 

Ferenc Csaky commented on FLINK-32391:
--

Hey! I can take a look at this, feel free to assign it to me.

> YARNSessionFIFOITCase.checkForProhibitedLogContents fails on AZP
> 
>
> Key: FLINK-32391
> URL: https://issues.apache.org/jira/browse/FLINK-32391
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Konstantin Knauf
>Priority: Critical
>  Labels: test-stability
>
> This build failed 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50217=logs=d871f0ce-7328-5d00-023b-e7391f5801c8=77cbea27-feb9-5cf5-53f7-3267f9f9c6b6=27971
> as 
> {noformat}
> Jun 20 01:30:32 01:30:32.994 [ERROR] Tests run: 2, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 79.015 s <<< FAILURE! - in 
> org.apache.flink.yarn.YARNSessionFIFOSecuredITCase
> Jun 20 01:30:32 01:30:32.994 [ERROR] 
> org.apache.flink.yarn.YARNSessionFIFOSecuredITCase.testDetachedMode  Time 
> elapsed: 17.586 s  <<< FAILURE!
> Jun 20 01:30:32 java.lang.AssertionError: 
> Jun 20 01:30:32 Found a file 
> /__w/2/s/flink-yarn-tests/target/test/data/flink-yarn-tests-fifo-secured/yarn-23119131678/flink-yarn-tests-fifo-secured-logDir-nm-1_0/application_1687224557882_0002/container_1687224557882_0002_01_02/taskmanager.log
>  with a prohibited string (one of [Exception, Started 
> SelectChannelConnector@0.0.0.0:8081]). Excerpts:
> Jun 20 01:30:32 [
> Jun 20 01:30:32 2023-06-20 01:29:57,749 INFO  
> org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Stopping 
> Akka RPC service.
> Jun 20 01:30:32 2023-06-20 01:29:57,767 WARN  akka.actor.CoordinatedShutdown  
>  [] - Could not addJvmShutdownHook, due to: 
> Shutdown in progress
> Jun 20 01:30:32 2023-06-20 01:29:57,767 INFO  akka.actor.CoordinatedShutdown  
>  [] - Running CoordinatedShutdown with reason 
> [ActorSystemTerminateReason]
> Jun 20 01:30:32 2023-06-20 01:29:57,768 WARN  akka.actor.CoordinatedShutdown  
>  [] - Could not addJvmShutdownHook, due to: 
> Shutdown in progress
> Jun 20 01:30:32 2023-06-20 01:29:57,768 INFO  akka.actor.CoordinatedShutdown  
>  [] - Running CoordinatedShutdown with reason 
> [ActorSystemTerminateReason]
> Jun 20 01:30:32 2023-06-20 01:29:57,781 INFO  
> akka.remote.RemoteActorRefProvider$RemotingTerminator[] - Shutting 
> down remote daemon.
> Jun 20 01:30:32 2023-06-20 01:29:57,781 INFO  
> akka.remote.RemoteActorRefProvider$RemotingTerminator[] - Shutting 
> down remote daemon.
> Jun 20 01:30:32 2023-06-20 01:29:57,782 INFO  
> akka.remote.RemoteActorRefProvider$RemotingTerminator[] - Remote 
> daemon shut down; proceeding with flushing remote transports.
> Jun 20 01:30:32 2023-06-20 01:29:57,782 INFO  
> akka.remote.RemoteActorRefProvider$RemotingTerminator[] - Remote 
> daemon shut down; proceeding with flushing remote transports.
> Jun 20 01:30:32 2023-06-20 01:29:57,788 WARN  
> akka.remote.transport.netty.NettyTransport   [] - Remote 
> connection to [264d5b384bcc/192.168.224.2:42920] failed with 
> java.net.SocketException: Connection reset
> Jun 20 01:30:32 2023-06-20 01:29:57,788 WARN  
> akka.remote.transport.netty.NettyTransport   [] - Remote 
> connection to [264d5b384bcc/192.168.224.2:42920] failed with 
> java.net.SocketException: Connection reset
> Jun 20 01:30:32 ]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32174) Update Cloudera product and link in doc page

2023-05-24 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-32174:


 Summary: Update Cloudera product and link in doc page
 Key: FLINK-32174
 URL: https://issues.apache.org/jira/browse/FLINK-32174
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Ferenc Csaky






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31609) Fatal error in ResourceManager caused YARNSessionFIFOSecuredITCase.testDetachedMode to fail

2023-05-08 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720484#comment-17720484
 ] 

Ferenc Csaky commented on FLINK-31609:
--

According to the logs, the fatal exception in the {{ResourceManager}} does not 
happen anymore (cause of the changes made in FLINK-30908). According to 
[~xtsong]'s analysis on FLINK-30908, the AMRM heartbeat interruption can happen 
anyways, so the ex about the interrupt is written to the logs, hence the 
failure, because the string


java.io.InterruptedIOException: Interrupted waiting to send RPC request to 
server
is not whitelisted. If we are okay whitelisting that exactly, it should fix the 
tests.
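
A sketch of what whitelisting it could look like (assuming the YARN test base 
keeps a prohibited-string whitelist; the class and field names are hypothetical):
{code:java}
class YarnTestWhitelistSketch {
    // Hypothetical: extend the whitelist so this benign shutdown-time
    // interruption no longer fails the run.
    static final String[] WHITELISTED_STRINGS = {
        "java.io.InterruptedIOException: Interrupted waiting to send RPC request to server",
    };
}
{code}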

> Fatal error in ResourceManager caused 
> YARNSessionFIFOSecuredITCase.testDetachedMode to fail
> ---
>
> Key: FLINK-31609
> URL: https://issues.apache.org/jira/browse/FLINK-31609
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Ferenc Csaky
>Priority: Critical
>  Labels: test-stability
>
> This looks like FLINK-30908. I created a follow-up ticket because we reached 
> a new minor version.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47547=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461
> {code}
> Mar 24 09:32:29 2023-03-24 09:31:50,001 ERROR 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl [] - 
> Exception on heartbeat
> Mar 24 09:32:29 java.io.InterruptedIOException: Interrupted waiting to send 
> RPC request to server
> Mar 24 09:32:29 java.io.InterruptedIOException: Interrupted waiting to send 
> RPC request to server
> Mar 24 09:32:29   at org.apache.hadoop.ipc.Client.call(Client.java:1461) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at org.apache.hadoop.ipc.Client.call(Client.java:1403) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at com.sun.proxy.$Proxy33.allocate(Unknown Source) 
> ~[?:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>  ~[hadoop-yarn-common-2.10.2.jar:?]
> Mar 24 09:32:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_292]
> Mar 24 09:32:29   at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at com.sun.proxy.$Proxy34.allocate(Unknown Source) 
> ~[?:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.allocate(AMRMClientImpl.java:297)
>  ~[hadoop-yarn-client-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$HeartbeatThread.run(AMRMClientAsyncImpl.java:274)
>  [hadoop-yarn-client-2.10.2.jar:?]
> Mar 24 09:32:29 Caused by: java.lang.InterruptedException
> Mar 24 09:32:29   at 
> java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404) ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1177) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at org.apache.hadoop.ipc.Client.call(Client.java:1456) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   ... 17 more
> {code}



--

[jira] [Comment Edited] (FLINK-31609) Fatal error in ResourceManager caused YARNSessionFIFOSecuredITCase.testDetachedMode to fail

2023-05-08 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720484#comment-17720484
 ] 

Ferenc Csaky edited comment on FLINK-31609 at 5/8/23 12:07 PM:
---

According to the logs, the fatal exception in the {{ResourceManager}} does not 
happen anymore (because of the changes made in FLINK-30908). According to 
[~xtsong]'s analysis on FLINK-30908, the AMRM heartbeat interruption can happen 
anyway, so the exception about the interrupt is written to the logs, hence the 
failure, because the string
{code}
java.io.InterruptedIOException: Interrupted waiting to send RPC request to 
server {code}
is not whitelisted. If we are okay with whitelisting exactly that, it should fix 
the tests.


was (Author: ferenc-csaky):
According to the logs, the fatal exception in the {{ResourceManager}} does not 
happen anymore (cause of the changes made in FLINK-30908). According to 
[~xtsong]'s analysis on FLINK-30908, the AMRM heartbeat interruption can happen 
anyways, so the ex about the interrupt is written to the logs, hence the 
failure, because the string


java.io.InterruptedIOException: Interrupted waiting to send RPC request to 
server
is not whitelisted. If we are okay whitelisting that exactly, it should fix the 
tests.

> Fatal error in ResourceManager caused 
> YARNSessionFIFOSecuredITCase.testDetachedMode to fail
> ---
>
> Key: FLINK-31609
> URL: https://issues.apache.org/jira/browse/FLINK-31609
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Ferenc Csaky
>Priority: Critical
>  Labels: test-stability
>
> This looks like FLINK-30908. I created a follow-up ticket because we reached 
> a new minor version.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47547=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461
> {code}
> Mar 24 09:32:29 2023-03-24 09:31:50,001 ERROR 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl [] - 
> Exception on heartbeat
> Mar 24 09:32:29 java.io.InterruptedIOException: Interrupted waiting to send 
> RPC request to server
> Mar 24 09:32:29 java.io.InterruptedIOException: Interrupted waiting to send 
> RPC request to server
> Mar 24 09:32:29   at org.apache.hadoop.ipc.Client.call(Client.java:1461) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at org.apache.hadoop.ipc.Client.call(Client.java:1403) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at com.sun.proxy.$Proxy33.allocate(Unknown Source) 
> ~[?:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>  ~[hadoop-yarn-common-2.10.2.jar:?]
> Mar 24 09:32:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_292]
> Mar 24 09:32:29   at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at com.sun.proxy.$Proxy34.allocate(Unknown Source) 
> ~[?:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.allocate(AMRMClientImpl.java:297)
>  ~[hadoop-yarn-client-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$HeartbeatThread.run(AMRMClientAsyncImpl.java:274)
>  

[jira] [Commented] (FLINK-31609) Fatal error in ResourceManager caused YARNSessionFIFOSecuredITCase.testDetachedMode to fail

2023-04-18 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713566#comment-17713566
 ] 

Ferenc Csaky commented on FLINK-31609:
--

Yeah, we can give it a check in the next 1-2 weeks, feel free to assign it to 
me.

> Fatal error in ResourceManager caused 
> YARNSessionFIFOSecuredITCase.testDetachedMode to fail
> ---
>
> Key: FLINK-31609
> URL: https://issues.apache.org/jira/browse/FLINK-31609
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> This looks like FLINK-30908. I created a follow-up ticket because we reached 
> a new minor version.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47547=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461
> {code}
> Mar 24 09:32:29 2023-03-24 09:31:50,001 ERROR 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl [] - 
> Exception on heartbeat
> Mar 24 09:32:29 java.io.InterruptedIOException: Interrupted waiting to send 
> RPC request to server
> Mar 24 09:32:29 java.io.InterruptedIOException: Interrupted waiting to send 
> RPC request to server
> Mar 24 09:32:29   at org.apache.hadoop.ipc.Client.call(Client.java:1461) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at org.apache.hadoop.ipc.Client.call(Client.java:1403) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at com.sun.proxy.$Proxy33.allocate(Unknown Source) 
> ~[?:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>  ~[hadoop-yarn-common-2.10.2.jar:?]
> Mar 24 09:32:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_292]
> Mar 24 09:32:29   at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
>  ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at com.sun.proxy.$Proxy34.allocate(Unknown Source) 
> ~[?:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.allocate(AMRMClientImpl.java:297)
>  ~[hadoop-yarn-client-2.10.2.jar:?]
> Mar 24 09:32:29   at 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$HeartbeatThread.run(AMRMClientAsyncImpl.java:274)
>  [hadoop-yarn-client-2.10.2.jar:?]
> Mar 24 09:32:29 Caused by: java.lang.InterruptedException
> Mar 24 09:32:29   at 
> java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404) ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:1.8.0_292]
> Mar 24 09:32:29   at 
> org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1177) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   at org.apache.hadoop.ipc.Client.call(Client.java:1456) 
> ~[hadoop-common-2.10.2.jar:?]
> Mar 24 09:32:29   ... 17 more
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31085) Add schema option to confluent registry avro formats

2023-02-15 Thread Ferenc Csaky (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Csaky updated FLINK-31085:
-
Component/s: Formats (JSON, Avro, Parquet, ORC, SequenceFile)

> Add schema option to confluent registry avro formats
> 
>
> Key: FLINK-31085
> URL: https://issues.apache.org/jira/browse/FLINK-31085
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Ferenc Csaky
>Priority: Major
> Fix For: 1.17.0
>
>
> When using the {{avro-confluent}} and {{debezium-avro-confluent}} formats with 
> schemas already defined in the Confluent Schema Registry, serialization 
> fails, because Flink uses the default name `record` when converting row types 
> to an Avro schema. So if the predefined schema has a different name, the 
> serialization schema will be incompatible with the registered schema due to 
> the name mismatch. See 
> [this|https://lists.apache.org/thread/5xppmnqjqwfzxqo4gvd3lzz8wzs566zp] 
> thread for steps to reproduce the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31085) Add schema option to confluent registry avro formats

2023-02-15 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-31085:


 Summary: Add schema option to confluent registry avro formats
 Key: FLINK-31085
 URL: https://issues.apache.org/jira/browse/FLINK-31085
 Project: Flink
  Issue Type: Improvement
Reporter: Ferenc Csaky
 Fix For: 1.17.0


When using the {{avro-confluent}} and {{debezium-avro-confluent}} formats with 
schemas already defined in the Confluent Schema Registry, serialization fails, 
because Flink uses the default name `record` when converting row types to an 
Avro schema. So if the predefined schema has a different name, the serialization 
schema will be incompatible with the registered schema due to the name mismatch. 
See [this|https://lists.apache.org/thread/5xppmnqjqwfzxqo4gvd3lzz8wzs566zp] 
thread for steps to reproduce the issue.
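
To make the name clash concrete, below is a minimal, hypothetical sketch (plain Avro {{SchemaBuilder}} API; record and field names invented for illustration) of the two schemas that end up being compared:

{code:java}
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

// What Flink derives from the row type: the record name defaults to "record".
Schema derived = SchemaBuilder.record("record")
        .fields()
        .requiredString("name")
        .requiredInt("age")
        .endRecord();

// What was registered up front, e.g. by the producing application.
Schema registered = SchemaBuilder.record("User")
        .namespace("com.example")
        .fields()
        .requiredString("name")
        .requiredInt("age")
        .endRecord();

// Same fields, different full names, so the registry's compatibility check
// (and Avro schema resolution) treats them as distinct, incompatible types.
System.out.println(derived.getFullName());    // record
System.out.println(registered.getFullName()); // com.example.User
{code}

An explicit schema option would let the user supply the registered schema (or at least its full name), so the serialization schema matches what is already in the registry.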



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25343) HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure

2023-01-25 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17680596#comment-17680596
 ] 

Ferenc Csaky commented on FLINK-25343:
--

Yep, that's a fair point too. I'll try to do some investigation by the end of 
the week. I think the root cause will be some weird synergy in how some tests 
start services, where sometimes something gets stuck and won't be executed 
properly... so I was not eager to start debugging by trial and error. :)

> HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure
> -
>
> Key: FLINK-25343
> URL: https://issues.apache.org/jira/browse/FLINK-25343
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.14.3, 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Ferenc Csaky
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
>
> {code:java}
> Dec 15 16:53:00 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> Dec 15 16:53:01 Running org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:53:05 Formatting using clusterid: testClusterID
> Dec 15 16:54:20 java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
> Dec 15 16:54:20 Thread[HFileArchiver-8,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-9,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-10,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-11,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-12,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-13,5,PEWorkerGroup]
> Dec 15 16:54:20 Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 79.639 sec <<< FAILURE! - in 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:54:20 
> testTableSourceSinkWithDDL(org.apache.flink.connector.hbase2.HBaseConnectorITCase)
>   Time elapsed: 6.843 sec  <<< FAILURE!
> Dec 15 16:54:20 java.lang.AssertionError: expected:<8> but was:<3>
> Dec 15 16:54:20   at org.junit.Assert.fail(Assert.java:89)
> Dec 15 16:54:20   at org.junit.Assert.failNotEquals(Assert.java:835)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:120)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:146)
> Dec 15 16:54:20   at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:371)
> Dec 15 16:54:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 15 16:54:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 15 16:54:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 15 16:54:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28204=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=12375



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25343) HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure

2023-01-24 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17680261#comment-17680261
 ] 

Ferenc Csaky commented on FLINK-25343:
--

Hi! No, I was not able to spend time on this one. Although now that the 
connector will be externalized, I'm wondering whether this is worth more time or 
not. My hunch is that this is an environmental problem that occurs pretty rarely.

> HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure
> -
>
> Key: FLINK-25343
> URL: https://issues.apache.org/jira/browse/FLINK-25343
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.14.3, 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Ferenc Csaky
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
>
> {code:java}
> Dec 15 16:53:00 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> Dec 15 16:53:01 Running org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:53:05 Formatting using clusterid: testClusterID
> Dec 15 16:54:20 java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
> Dec 15 16:54:20 Thread[HFileArchiver-8,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-9,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-10,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-11,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-12,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-13,5,PEWorkerGroup]
> Dec 15 16:54:20 Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 79.639 sec <<< FAILURE! - in 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:54:20 
> testTableSourceSinkWithDDL(org.apache.flink.connector.hbase2.HBaseConnectorITCase)
>   Time elapsed: 6.843 sec  <<< FAILURE!
> Dec 15 16:54:20 java.lang.AssertionError: expected:<8> but was:<3>
> Dec 15 16:54:20   at org.junit.Assert.fail(Assert.java:89)
> Dec 15 16:54:20   at org.junit.Assert.failNotEquals(Assert.java:835)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:120)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:146)
> Dec 15 16:54:20   at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:371)
> Dec 15 16:54:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 15 16:54:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 15 16:54:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 15 16:54:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28204=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=12375



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25343) HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure

2022-11-21 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636714#comment-17636714
 ] 

Ferenc Csaky commented on FLINK-25343:
--

Okay, this new run showed the actual difference:
{code:java}
expected:<[
+I[1, 10, Hello-1, 100, 1.01, false, Welt-1, 2019-08-18T19:00, 2019-08-18, 
19:00, 12345678.0001],
+I[2, 20, Hello-2, 200, 2.02, true, Welt-2, 2019-08-18T19:01, 2019-08-18, 
19:01, 12345678.0002],
+I[3, 30, Hello-3, 300, 3.03, false, Welt-3, 2019-08-18T19:02, 2019-08-18, 
19:02, 12345678.0003],
+I[4, 40, null, 400, 4.04, true, Welt-4, 2019-08-18T19:03, 2019-08-18, 19:03, 
12345678.0004],
+I[5, 50, Hello-5, 500, 5.05, false, Welt-5, 2019-08-19T19:10, 2019-08-19, 
19:10, 12345678.0005],
+I[6, 60, Hello-6, 600, 6.06, true, Welt-6, 2019-08-19T19:20, 2019-08-19, 
19:20, 12345678.0006],
+I[7, 70, Hello-7, 700, 7.07, false, Welt-7, 2019-08-19T19:30, 2019-08-19, 
19:30, 12345678.0007],
+I[8, 80, null, 800, 8.08, true, Welt-8, 2019-08-19T19:40, 2019-08-19, 19:40, 
12345678.0008]]>

 but was:<[
 +I[4, 40, null, 400, 4.04, true, Welt-4, 2019-08-18T19:03, 2019-08-18, 19:03, 
12345678.0004],
 +I[5, 50, Hello-5, 500, 5.05, false, Welt-5, 2019-08-19T19:10, 2019-08-19, 
19:10, 12345678.0005],
 +I[6, 60, Hello-6, 600, 6.06, true, Welt-6, 2019-08-19T19:20, 2019-08-19, 
19:20, 12345678.0006],
 +I[7, 70, Hello-7, 700, 7.07, false, Welt-7, 2019-08-19T19:30, 2019-08-19, 
19:30, 12345678.0007],
 +I[8, 80, null, 800, 8.08, true, Welt-8, 2019-08-19T19:40, 2019-08-19, 19:40, 
12345678.0008]]> {code}
So the first 3 records were missing. I think I'll have some time in the next 2 
days to do some digging.

> HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure
> -
>
> Key: FLINK-25343
> URL: https://issues.apache.org/jira/browse/FLINK-25343
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.14.3, 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Ferenc Csaky
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
>
> {code:java}
> Dec 15 16:53:00 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> Dec 15 16:53:01 Running org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:53:05 Formatting using clusterid: testClusterID
> Dec 15 16:54:20 java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
> Dec 15 16:54:20 Thread[HFileArchiver-8,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-9,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-10,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-11,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-12,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-13,5,PEWorkerGroup]
> Dec 15 16:54:20 Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 79.639 sec <<< FAILURE! - in 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:54:20 
> testTableSourceSinkWithDDL(org.apache.flink.connector.hbase2.HBaseConnectorITCase)
>   Time elapsed: 6.843 sec  <<< FAILURE!
> Dec 15 16:54:20 java.lang.AssertionError: expected:<8> but was:<3>
> Dec 15 16:54:20   at org.junit.Assert.fail(Assert.java:89)
> Dec 15 16:54:20   at org.junit.Assert.failNotEquals(Assert.java:835)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:120)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:146)
> Dec 15 16:54:20   at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:371)
> Dec 15 16:54:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 15 16:54:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 15 16:54:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 15 16:54:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Dec 15 16:54:20   at 
> 

[jira] [Commented] (FLINK-30062) Move existing HBase connector code from Flink repo to dedicated HBase repo

2022-11-21 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636651#comment-17636651
 ] 

Ferenc Csaky commented on FLINK-30062:
--

[~martijnvisser] can you assign this one to me? Thanks!

> Move existing HBase connector code from Flink repo to dedicated HBase repo
> --
>
> Key: FLINK-30062
> URL: https://issues.apache.org/jira/browse/FLINK-30062
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Reporter: Martijn Visser
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29707) Fix possible comparator violation for "flink list"

2022-10-20 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17621182#comment-17621182
 ] 

Ferenc Csaky commented on FLINK-29707:
--

Just opened a PR with the proposed fix.
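
For context, the {{(int)}} cast in the comparator quoted below overflows once two start times differ by more than {{Integer.MAX_VALUE}} milliseconds, which flips the comparison sign and breaks the comparator contract. A sketch of the overflow-safe variant the fix presumably boils down to (assuming the element type is Flink's {{JobStatusMessage}}):

{code:java}
import java.util.Comparator;

import org.apache.flink.runtime.client.JobStatusMessage;

// Long.compare never overflows, so sgn(compare(x, y)) == -sgn(compare(y, x))
// and transitivity hold for any pair of start times.
Comparator<JobStatusMessage> startTimeComparator =
        (o1, o2) -> Long.compare(o1.getStartTime(), o2.getStartTime());

// Equivalent, using the Comparator factory method:
Comparator<JobStatusMessage> byStartTime =
        Comparator.comparingLong(JobStatusMessage::getStartTime);
{code}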

> Fix possible comparator violation for "flink list"
> --
>
> Key: FLINK-29707
> URL: https://issues.apache.org/jira/browse/FLINK-29707
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.16.0
>Reporter: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> For the {{list}} CLI option, the code that prints the jobs defines a 
> {{startTimeComparator}}, which orders the jobs this way:
> {code:java}
> Comparator<JobStatusMessage> startTimeComparator =
>         (o1, o2) -> (int) (o1.getStartTime() - o2.getStartTime());
> {code}
> In some rare situations this can lead to the following:
> {code:java}
> 2022-10-19 09:58:11,690 ERROR org.apache.flink.client.cli.CliFrontend 
>  [] - Error while running the command.
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at java.util.TimSort.mergeLo(TimSort.java:777) ~[?:1.8.0_312]
>   at java.util.TimSort.mergeAt(TimSort.java:514) ~[?:1.8.0_312]
>   at java.util.TimSort.mergeForceCollapse(TimSort.java:457) ~[?:1.8.0_312]
>   at java.util.TimSort.sort(TimSort.java:254) ~[?:1.8.0_312]
>   at java.util.Arrays.sort(Arrays.java:1512) ~[?:1.8.0_312]
>   at java.util.ArrayList.sort(ArrayList.java:1464) ~[?:1.8.0_312]
>   at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:392) 
> ~[?:1.8.0_312]
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258) 
> ~[?:1.8.0_312]
>   at java.util.stream.Sink$ChainedReference.end(Sink.java:258) 
> ~[?:1.8.0_312]
>   at 
> java.util.stream.SortedOps$SizedRefSortingSink.end(SortedOps.java:363) 
> ~[?:1.8.0_312]
>   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:483) 
> ~[?:1.8.0_312]
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) 
> ~[?:1.8.0_312]
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) 
> ~[?:1.8.0_312]
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
>  ~[?:1.8.0_312]
>   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
> ~[?:1.8.0_312]
>   at 
> java.util.stream.ReferencePipeline.forEachOrdered(ReferencePipeline.java:490) 
> ~[?:1.8.0_312]
>   at 
> org.apache.flink.client.cli.CliFrontend.printJobStatusMessages(CliFrontend.java:574)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29707) Fix possible comparator violation for "flink list"

2022-10-20 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-29707:


 Summary: Fix possible comparator violation for "flink list"
 Key: FLINK-29707
 URL: https://issues.apache.org/jira/browse/FLINK-29707
 Project: Flink
  Issue Type: Bug
  Components: Command Line Client
Affects Versions: 1.16.0
Reporter: Ferenc Csaky


For the {{list}} CLI option, the code that prints the jobs defines a 
{{startTimeComparator}}, which orders the jobs this way:
{code:java}
Comparator<JobStatusMessage> startTimeComparator =
        (o1, o2) -> (int) (o1.getStartTime() - o2.getStartTime());
{code}
In some rare situations this can lead to the following:
{code:java}
2022-10-19 09:58:11,690 ERROR org.apache.flink.client.cli.CliFrontend   
   [] - Error while running the command.
java.lang.IllegalArgumentException: Comparison method violates its general 
contract!
at java.util.TimSort.mergeLo(TimSort.java:777) ~[?:1.8.0_312]
at java.util.TimSort.mergeAt(TimSort.java:514) ~[?:1.8.0_312]
at java.util.TimSort.mergeForceCollapse(TimSort.java:457) ~[?:1.8.0_312]
at java.util.TimSort.sort(TimSort.java:254) ~[?:1.8.0_312]
at java.util.Arrays.sort(Arrays.java:1512) ~[?:1.8.0_312]
at java.util.ArrayList.sort(ArrayList.java:1464) ~[?:1.8.0_312]
at java.util.stream.SortedOps$RefSortingSink.end(SortedOps.java:392) 
~[?:1.8.0_312]
at java.util.stream.Sink$ChainedReference.end(Sink.java:258) 
~[?:1.8.0_312]
at java.util.stream.Sink$ChainedReference.end(Sink.java:258) 
~[?:1.8.0_312]
at 
java.util.stream.SortedOps$SizedRefSortingSink.end(SortedOps.java:363) 
~[?:1.8.0_312]
at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:483) 
~[?:1.8.0_312]
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) 
~[?:1.8.0_312]
at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) 
~[?:1.8.0_312]
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
 ~[?:1.8.0_312]
at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
~[?:1.8.0_312]
at 
java.util.stream.ReferencePipeline.forEachOrdered(ReferencePipeline.java:490) 
~[?:1.8.0_312]
at 
org.apache.flink.client.cli.CliFrontend.printJobStatusMessages(CliFrontend.java:574)
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25343) HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure

2022-10-17 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618873#comment-17618873
 ] 

Ferenc Csaky commented on FLINK-25343:
--

I just updated my open PR, which I suggest merging; the next time this fails, 
it will tell more about the actual records, which will make it easier to know 
where to look.
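
To illustrate the idea, a hypothetical sketch of such a diagnostic assertion (JUnit 4; the helper name is invented): comparing the full, sorted row strings instead of only the row count makes a failing run print exactly which records are missing.

{code:java}
import static org.junit.Assert.assertEquals;

import java.util.List;
import java.util.stream.Collectors;

// Instead of assertEquals(8, results.size()), compare the complete contents;
// the AssertionError message then shows the expected and the actual records.
static void assertRowsEqual(List<String> expected, List<String> actual) {
    assertEquals(
            expected.stream().sorted().collect(Collectors.joining("\n")),
            actual.stream().sorted().collect(Collectors.joining("\n")));
}
{code}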

> HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure
> -
>
> Key: FLINK-25343
> URL: https://issues.apache.org/jira/browse/FLINK-25343
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.14.3, 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Ferenc Csaky
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
>
> {code:java}
> Dec 15 16:53:00 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> Dec 15 16:53:01 Running org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:53:05 Formatting using clusterid: testClusterID
> Dec 15 16:54:20 java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
> Dec 15 16:54:20 Thread[HFileArchiver-8,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-9,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-10,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-11,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-12,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-13,5,PEWorkerGroup]
> Dec 15 16:54:20 Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 79.639 sec <<< FAILURE! - in 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:54:20 
> testTableSourceSinkWithDDL(org.apache.flink.connector.hbase2.HBaseConnectorITCase)
>   Time elapsed: 6.843 sec  <<< FAILURE!
> Dec 15 16:54:20 java.lang.AssertionError: expected:<8> but was:<3>
> Dec 15 16:54:20   at org.junit.Assert.fail(Assert.java:89)
> Dec 15 16:54:20   at org.junit.Assert.failNotEquals(Assert.java:835)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:120)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:146)
> Dec 15 16:54:20   at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:371)
> Dec 15 16:54:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 15 16:54:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 15 16:54:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 15 16:54:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28204=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=12375



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25343) HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure

2022-10-12 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616627#comment-17616627
 ] 

Ferenc Csaky commented on FLINK-25343:
--

Has it been checked before what the query actually returns when the test fails? 
I just created a PR which does that. I do not intend to merge it in this form, 
but triggering a faulty run with it will probably not happen soon.

> HBaseConnectorITCase.testTableSourceSinkWithDDL fail on azure
> -
>
> Key: FLINK-25343
> URL: https://issues.apache.org/jira/browse/FLINK-25343
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.14.3, 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Ferenc Csaky
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
>
> {code:java}
> Dec 15 16:53:00 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> Dec 15 16:53:01 Running org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:53:05 Formatting using clusterid: testClusterID
> Dec 15 16:54:20 java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
> Dec 15 16:54:20 Thread[HFileArchiver-8,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-9,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-10,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-11,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-12,5,PEWorkerGroup]
> Dec 15 16:54:20 Thread[HFileArchiver-13,5,PEWorkerGroup]
> Dec 15 16:54:20 Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 79.639 sec <<< FAILURE! - in 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase
> Dec 15 16:54:20 
> testTableSourceSinkWithDDL(org.apache.flink.connector.hbase2.HBaseConnectorITCase)
>   Time elapsed: 6.843 sec  <<< FAILURE!
> Dec 15 16:54:20 java.lang.AssertionError: expected:<8> but was:<3>
> Dec 15 16:54:20   at org.junit.Assert.fail(Assert.java:89)
> Dec 15 16:54:20   at org.junit.Assert.failNotEquals(Assert.java:835)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:120)
> Dec 15 16:54:20   at org.junit.Assert.assertEquals(Assert.java:146)
> Dec 15 16:54:20   at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:371)
> Dec 15 16:54:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 15 16:54:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 15 16:54:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 15 16:54:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 15 16:54:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 15 16:54:20   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Dec 15 16:54:20   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Dec 15 16:54:20   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28204=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=12375



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28910) CDC From Mysql To Hbase Bugs

2022-09-30 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17611505#comment-17611505
 ] 

Ferenc Csaky commented on FLINK-28910:
--

My HBase knowledge is a bit rusty, so bear with me if any of my assumptions are 
not correct. As I understand it, if we move towards atomic table operations, we 
would lose the currently leveraged buffering functionality: it is possible to 
define multiple mutations for one specific row with {{RowMutations}}, but that's 
it, the operations will be sent right away, which will probably affect 
performance.

The PR will do the job, I think, but would it be possible to handle the action 
itself more smartly? I'm wondering whether it would make sense to omit the 
delete op in specific cases.
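
For reference, a minimal, hypothetical sketch of the atomic per-row alternative (standard HBase client API; table, family, and qualifier names invented): {{RowMutations}} applies the delete and the put for one row atomically and in order via {{Table.mutateRow}}, at the cost of bypassing the {{BufferedMutator}} batching.

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RowMutations;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: apply the delete+put pair produced for one row atomically and in
// order, instead of handing both to a BufferedMutator whose flush order is
// not guaranteed. Executes immediately, so buffering/batching is lost.
void applyUpdate(Table table, byte[] rowKey) throws IOException {
    RowMutations mutations = new RowMutations(rowKey);
    mutations.add(new Delete(rowKey));

    Put put = new Put(rowKey);
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("new-value"));
    mutations.add(put);

    table.mutateRow(mutations); // atomic for this single row
}
{code}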

> CDC From Mysql To Hbase Bugs
> 
>
> Key: FLINK-28910
> URL: https://issues.apache.org/jira/browse/FLINK-28910
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Reporter: TE
>Priority: Major
>  Labels: pull-request-available, stale-blocker
>
> I use Flink for CDC from MySQL to HBase.
> The problem I encountered is that the MySQL record is updated (not deleted), 
> but the record in HBase is sometimes deleted.
> I tried to analyze the problem and found the reason as follows:
> The update action of MySQL is decomposed into delete + insert by Flink.
> The HBase connector uses a mutator to handle this set of actions.
> However, if the order of these actions is not actively set, the mutator does 
> not guarantee the order of execution.
> Therefore, when a MySQL update is triggered, it is possible that HBase 
> actually performed the actions in the order put + delete, resulting in the 
> data being deleted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28910) CDC From Mysql To Hbase Bugs

2022-09-28 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610452#comment-17610452
 ] 

Ferenc Csaky commented on FLINK-28910:
--

Sure, I'll take a look.

> CDC From Mysql To Hbase Bugs
> 
>
> Key: FLINK-28910
> URL: https://issues.apache.org/jira/browse/FLINK-28910
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Reporter: TE
>Priority: Major
>  Labels: pull-request-available, stale-blocker
>
> I use Flink for CDC from MySQL to HBase.
> The problem I encountered is that the MySQL record is updated (not deleted), 
> but the record in HBase is sometimes deleted.
> I tried to analyze the problem and found the reason as follows:
> The update action of MySQL is decomposed into delete + insert by Flink.
> The HBase connector uses a mutator to handle this set of actions.
> However, if the order of these actions is not actively set, the mutator does 
> not guarantee the order of execution.
> Therefore, when a MySQL update is triggered, it is possible that HBase 
> actually performed the actions in the order put + delete, resulting in the 
> data being deleted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28199) Failures on YARNHighAvailabilityITCase.testClusterClientRetrieval and YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint

2022-06-23 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558171#comment-17558171
 ] 

Ferenc Csaky commented on FLINK-28199:
--

I would say keep it open for now; it is possible it will happen again.

> Failures on YARNHighAvailabilityITCase.testClusterClientRetrieval and 
> YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint
> -
>
> Key: FLINK-28199
> URL: https://issues.apache.org/jira/browse/FLINK-28199
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Jun 22 08:57:50 [ERROR] Errors: 
> Jun 22 08:57:50 [ERROR]   
> YARNHighAvailabilityITCase.testClusterClientRetrieval » Timeout 
> testClusterCli...
> Jun 22 08:57:50 [ERROR]   
> YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint:156->YarnTestBase.runTest:288->lambda$testKillYarnSessionClusterEntrypoint$0:182->waitForJobTermination:325
>  » Execution
> Jun 22 08:57:50 [INFO] 
> Jun 22 08:57:50 [ERROR] Tests run: 27, Failures: 0, Errors: 2, Skipped: 0
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37037=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=29523



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-28199) Failures on YARNHighAvailabilityITCase.testClusterClientRetrieval and YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint

2022-06-22 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557498#comment-17557498
 ] 

Ferenc Csaky commented on FLINK-28199:
--

Hm, this is weird. As far as I can see, this has happened only once since the 
fix for FLINK-27667 got merged, so it may be unrelated.

testKillYarnSessionClusterEntrypoint() timed out after 60s:
{code:java}
java.lang.AssertionError: There is at least one application on the cluster that 
is not finished.[App application_1655885546027_0002 is in state RUNNING.]
Jun 22 08:44:47 at 
org.apache.flink.yarn.YarnTestBase$CleanupYarnApplication.close(YarnTestBase.java:325)
Jun 22 08:44:47 at 
org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:289)
Jun 22 08:44:47 ... 60 more {code}
testClusterClientRetrieval() timed out after 30 minutes, cause:
{code:java}
java.lang.AssertionError: There is at least one application on the cluster that 
is not finished.[App application_1655885546027_0003 is in state ACCEPTED., App 
application_1655885546027_0002 is in state RUNNING.]
Jun 22 08:44:47 at 
org.apache.flink.yarn.YarnTestBase$CleanupYarnApplication.close(YarnTestBase.java:325)
Jun 22 08:44:47 at 
org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:289)
Jun 22 08:44:47 ... 60 more {code}
It failed to shut down the 0002 app.

> Failures on YARNHighAvailabilityITCase.testClusterClientRetrieval and 
> YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint
> -
>
> Key: FLINK-28199
> URL: https://issues.apache.org/jira/browse/FLINK-28199
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Jun 22 08:57:50 [ERROR] Errors: 
> Jun 22 08:57:50 [ERROR]   
> YARNHighAvailabilityITCase.testClusterClientRetrieval » Timeout 
> testClusterCli...
> Jun 22 08:57:50 [ERROR]   
> YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint:156->YarnTestBase.runTest:288->lambda$testKillYarnSessionClusterEntrypoint$0:182->waitForJobTermination:325
>  » Execution
> Jun 22 08:57:50 [INFO] 
> Jun 22 08:57:50 [ERROR] Tests run: 27, Failures: 0, Errors: 2, Skipped: 0
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37037=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=29523



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27667) YARNHighAvailabilityITCase fails with "Failed to delete temp directory /tmp/junit1681"

2022-06-21 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556949#comment-17556949
 ] 

Ferenc Csaky commented on FLINK-27667:
--

[~martijnvisser] [~bgeng777] FYI: I ended up incorporating those changes as 
well in the #20012 PR.

> YARNHighAvailabilityITCase fails with "Failed to delete temp directory 
> /tmp/junit1681"
> --
>
> Key: FLINK-27667
> URL: https://issues.apache.org/jira/browse/FLINK-27667
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Assignee: Ferenc Csaky
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35733=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=29208
>  
> {code:bash}
> May 17 08:36:22 [INFO] Results: 
> May 17 08:36:22 [INFO] 
> May 17 08:36:22 [ERROR] Errors: 
> May 17 08:36:22 [ERROR] YARNHighAvailabilityITCase » IO Failed to delete temp 
> directory /tmp/junit1681... 
> May 17 08:36:22 [INFO] 
> May 17 08:36:22 [ERROR] Tests run: 28, Failures: 0, Errors: 1, Skipped: 0 
> May 17 08:36:22 [INFO] 
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27441) Scrollbar is missing for particular UI elements (Accumulators, Backpressure, Watermarks)

2022-04-28 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529443#comment-17529443
 ] 

Ferenc Csaky commented on FLINK-27441:
--

Since this is a trivial fix, I opened a PR for it: 
https://github.com/apache/flink/pull/19606

> Scrollbar is missing for particular UI elements (Accumulators, Backpressure, 
> Watermarks)
> 
>
> Key: FLINK-27441
> URL: https://issues.apache.org/jira/browse/FLINK-27441
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Ferenc Csaky
>Priority: Minor
>  Labels: pull-request-available
>
> The Angular version bump introduced a bug where {{nzScroll}} does not 
> support percentages in CSS calc, so the scrollbar is invisible. There is 
> an easy workaround, which the linked Angular discussion covers.
> Angular issue: https://github.com/NG-ZORRO/ng-zorro-antd/issues/3090



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27441) Scrollbar is missing for particular UI elements (Accumulators, Backpressure, Watermarks)

2022-04-28 Thread Ferenc Csaky (Jira)
Ferenc Csaky created FLINK-27441:


 Summary: Scrollbar is missing for particular UI elements 
(Accumulators, Backpressure, Watermarks)
 Key: FLINK-27441
 URL: https://issues.apache.org/jira/browse/FLINK-27441
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Affects Versions: 1.14.3, 1.15.0
Reporter: Ferenc Csaky


The Angular version bump introduced a bug where {{nzScroll}} does not support 
percentages in CSS calc, so the scrollbar is invisible. There is an easy 
workaround, which the linked Angular discussion covers.

Angular issue: https://github.com/NG-ZORRO/ng-zorro-antd/issues/3090



--
This message was sent by Atlassian Jira
(v8.20.7#820007)