[jira] [Commented] (FLINK-20952) Changelog json formats should support inherit options from JSON format

2021-02-11 Thread Harvey Yue (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283542#comment-17283542
 ] 

Harvey Yue commented on FLINK-20952:


Hi, I'm working on this feature. Currently I add all JSON attributes (including 
the Debezium, Maxwell, and Canal ones) as well as the builder class to 
JsonOptions. This avoids having DebeziumJsonOptions, CanalJsonOptions, and 
MaxwellJsonOptions inherit from JsonOptions: when a new attribute is added to 
JsonOptions, there is no need to touch the builder class in each subclass.

If this solution is OK, I will post a PR.
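To illustrate the proposal, here is a minimal, hypothetical sketch (the class and option names are illustrative, not the actual Flink `JsonOptions` API): all options and the single builder live in one class, so the changelog formats reuse them without per-format builder subclasses.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one options holder shared by json, debezium-json,
// canal-json and maxwell-json, so a new option is added in exactly one place.
public class JsonOptionsSketch {
    private final Map<String, String> options;

    private JsonOptionsSketch(Map<String, String> options) {
        this.options = options;
    }

    public String get(String key) {
        return options.get(key);
    }

    // Single builder; changelog formats call it directly instead of
    // subclassing JsonOptions and duplicating the builder in each subclass.
    public static class Builder {
        private final Map<String, String> options = new HashMap<>();

        public Builder set(String key, String value) {
            options.put(key, value);
            return this;
        }

        public JsonOptionsSketch build() {
            return new JsonOptionsSketch(options);
        }
    }

    public static void main(String[] args) {
        // A changelog format reuses the plain JSON options as-is.
        JsonOptionsSketch debeziumOptions = new Builder()
                .set("ignore-parse-errors", "true")
                .set("timestamp-format.standard", "ISO-8601")
                .build();
        System.out.println(debeziumOptions.get("ignore-parse-errors"));
    }
}
```

With this shape, adding a new option touches only the shared class; the three changelog formats pick it up automatically.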

> Changelog json formats should support inherit options from JSON format
> --
>
> Key: FLINK-20952
> URL: https://issues.apache.org/jira/browse/FLINK-20952
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.13.0
>
>
> Recently, we introduced several config options for the JSON format, e.g. in 
> FLINK-20861. This reveals a potential problem: adding a small config option 
> to the json format may require touching the debezium-json, canal-json, and 
> maxwell-json formats. This is verbose and error-prone. We need an abstraction 
> mechanism to support reusable options.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rmetzger commented on a change in pull request #14847: [FLINK-21030][runtime] Add global failover in case of a stop-with-savepoint failure

2021-02-11 Thread GitBox


rmetzger commented on a change in pull request #14847:
URL: https://github.com/apache/flink/pull/14847#discussion_r575033594



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java
##
@@ -908,38 +909,56 @@ public void reportCheckpointMetrics(
 // will be restarted by the CheckpointCoordinatorDeActivator.
 checkpointCoordinator.stopCheckpointScheduler();
 
+final CompletableFuture> executionGraphTerminationFuture =
+        FutureUtils.combineAll(
+                StreamSupport.stream(
+                        executionGraph.getAllExecutionVertices().spliterator(),

Review comment:
   I don't think we have access to the pipelined regions in SchedulerBase 
(they are in the SchedulingStrategy of the DefaultScheduler). Secondly, the 
Regions don't have termination futures.
   
   Disclaimer: My experience with this part of the Flink code base is based on 
reading the code over the last few minutes.
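For context, the pattern the diff is building, one future that completes when every execution vertex has terminated, can be sketched with the standard library alone (illustrative names; Flink's `FutureUtils.combineAll` plays roughly the role `CompletableFuture.allOf` plays here):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative sketch: combine many per-vertex termination futures into one
// future that completes only when every vertex has terminated.
public class CombineTerminationFutures {
    public static CompletableFuture<Void> combineAll(
            List<CompletableFuture<String>> terminationFutures) {
        return CompletableFuture.allOf(
                terminationFutures.toArray(new CompletableFuture[0]));
    }

    public static void main(String[] args) {
        CompletableFuture<String> vertex1 = new CompletableFuture<>();
        CompletableFuture<String> vertex2 = new CompletableFuture<>();
        CompletableFuture<Void> all = combineAll(List.of(vertex1, vertex2));

        vertex1.complete("FINISHED");
        System.out.println(all.isDone());  // false: vertex2 still running
        vertex2.complete("FINISHED");
        System.out.println(all.isDone());  // true: every vertex terminated
    }
}
```

This is what combining per-vertex termination futures (rather than per-region ones, which do not exist) buys: a single point to hook global failover handling.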





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14743: [FLINK-21366][doc] mentions Maxwell as CDC tool in Kafka connector documentation

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14743:
URL: https://github.com/apache/flink/pull/14743#issuecomment-766412125


   
   ## CI report:
   
   * d1095358d7e5c1982fb362e9b6641cc84a70647c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12428)
 
   * f3d0ce7680adcb60b6f31cdc169447d68d2b80f2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13271)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-16947) ArtifactResolutionException: Could not transfer artifact. Entry [...] has not been leased from this pool

2021-02-11 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283538#comment-17283538
 ] 

Dawid Wysakowicz commented on FLINK-16947:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13266&view=logs&j=3e60b793-4158-5027-ac6d-4cdc51dffe1e&t=d5ed4970-7667-5f7e-2ece-62e410f74748

> ArtifactResolutionException: Could not transfer artifact.  Entry [...] has 
> not been leased from this pool
> -
>
> Key: FLINK-16947
> URL: https://issues.apache.org/jira/browse/FLINK-16947
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Piotr Nowojski
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6982&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> Build of flink-metrics-availability-test failed with:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-metrics-availability-test: Unable to generate classpath: 
> org.apache.maven.artifact.resolver.ArtifactResolutionException: Could not 
> transfer artifact org.apache.maven.surefire:surefire-grouper:jar:2.22.1 
> from/to google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/): Entry 
> [id:13][route:{s}->https://maven-central-eu.storage-download.googleapis.com:443][state:null]
>  has not been leased from this pool
> [ERROR] org.apache.maven.surefire:surefire-grouper:jar:2.22.1
> [ERROR] 
> [ERROR] from the specified remote repositories:
> [ERROR] google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/, 
> releases=true, snapshots=false),
> [ERROR] apache.snapshots (https://repository.apache.org/snapshots, 
> releases=false, snapshots=true)
> [ERROR] Path to dependency:
> [ERROR] 1) dummy:dummy:jar:1.0
> [ERROR] 2) org.apache.maven.surefire:surefire-junit47:jar:2.22.1
> [ERROR] 3) org.apache.maven.surefire:common-junit48:jar:2.22.1
> [ERROR] 4) org.apache.maven.surefire:surefire-grouper:jar:2.22.1
> [ERROR] -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-metrics-availability-test
> {noformat}





[jira] [Commented] (FLINK-16947) ArtifactResolutionException: Could not transfer artifact. Entry [...] has not been leased from this pool

2021-02-11 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283537#comment-17283537
 ] 

Dawid Wysakowicz commented on FLINK-16947:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13264&view=logs&j=946871de-358d-5815-3994-8175615bc253&t=e0240c62-4570-5d1c-51af-dd63d2093da1






[jira] [Commented] (FLINK-16947) ArtifactResolutionException: Could not transfer artifact. Entry [...] has not been leased from this pool

2021-02-11 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283536#comment-17283536
 ] 

Dawid Wysakowicz commented on FLINK-16947:
--

Thanks for the update [~rmetzger] and the efforts!






[jira] [Updated] (FLINK-21366) Kafka connector documentation should mentions Maxwell as CDC mechanism

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21366:
---
Labels: pull-request-available  (was: )

> Kafka connector documentation should mentions Maxwell as CDC mechanism
> --
>
> Key: FLINK-21366
> URL: https://issues.apache.org/jira/browse/FLINK-21366
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Svend Vanderveken
>Priority: Minor
>  Labels: pull-request-available
>
> The current [Kafka connector changelog section of the 
> documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/kafka.html#changelog-source]
>  mentions Debezium and Canal CDC tools but not the recently added Maxwell 
> format.
> The PR linked to this ticket edits the text to add it.





[GitHub] [flink] flinkbot edited a comment on pull request #14743: [FLINK-21366][doc] mentions Maxwell as CDC tool in Kafka connector documentation

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14743:
URL: https://github.com/apache/flink/pull/14743#issuecomment-766412125


   
   ## CI report:
   
   * d1095358d7e5c1982fb362e9b6641cc84a70647c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12428)
 
   * f3d0ce7680adcb60b6f31cdc169447d68d2b80f2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-21366) Kafka connector documentation should mentions Maxwell as CDC mechanism

2021-02-11 Thread Svend Vanderveken (Jira)
Svend Vanderveken created FLINK-21366:
-

 Summary: Kafka connector documentation should mentions Maxwell as 
CDC mechanism
 Key: FLINK-21366
 URL: https://issues.apache.org/jira/browse/FLINK-21366
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Svend Vanderveken


The current [Kafka connector changelog section of the 
documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/kafka.html#changelog-source]
 mentions Debezium and Canal CDC tools but not the recently added Maxwell 
format.

The PR linked to this ticket edits the text to add it.





[jira] [Comment Edited] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283507#comment-17283507
 ] 

Partha Pradeep Mishra edited comment on FLINK-20376 at 2/12/21, 5:28 AM:
-

[~pnowojski] I used your above method to calculate the hash of all the operators 
for which I have specified a uid manually. I have also attached the DataStream 
APIs (our code snippet, so you can see the different operator uids specified 
manually).

Operator IDs present in our code and their respective hashes:

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

As you can see, the green highlighted ones do not come from any of our operators.

The yellow highlighted ones are the operator ids that are not present in the 
metafile.

Also, out of the 10 hashes I have generated, only 7 are present in the metafile. 
What happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?
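A quick way to make the mismatch above precise is a set difference between the locally computed hashes and the metafile keys. The sketch below is a diagnostic illustration (not Flink code) using only the hash strings quoted in this comment; uid `a`'s hash is truncated in the comment and is therefore omitted:

```java
import java.util.Set;
import java.util.TreeSet;

// Illustrative diagnostic: which generated operator-id hashes are missing from
// the metafile, and which metafile entries match none of our operators.
public class OperatorIdDiff {
    // Elements of a that are not in b, in sorted order.
    static Set<String> minus(Set<String> a, Set<String> b) {
        Set<String> result = new TreeSet<>(a);
        result.removeAll(b);
        return result;
    }

    public static void main(String[] args) {
        // Hashes computed from our uids b..i and z (uid a's hash is truncated
        // in the comment, so it is left out here).
        Set<String> generated = Set.of(
                "eed1d3b157a9987ae9944e541e132efa", "d7741f4a6cdf388e747557749a0f0d21",
                "76f74784cdf272cbdd1c37d471a532a0", "94e9d5a34992b6c51461d69a7ca2eb56",
                "afa3664e2d13439221e8d041382a4dc1", "da9aa6f89ab75dbc6233a02db1b171fd",
                "2345cb61bbb2fcd603d786389726830c", "936222da3bb558848f700bb01edb34c0",
                "bdf3ca0e5e6bde27f22d80457a5a19a9");
        // Keys found in the savepoint metafile.
        Set<String> metafile = Set.of(
                "d7741f4a6cdf388e747557749a0f0d21", "94e9d5a34992b6c51461d69a7ca2eb56",
                "647a0a5ff84846c52775ce89f51a5edc", "2345cb61bbb2fcd603d786389726830c",
                "76f74784cdf272cbdd1c37d471a532a0", "bdf3ca0e5e6bde27f22d80457a5a19a9",
                "da9aa6f89ab75dbc6233a02db1b171fd", "43b792ffaf5a610180059cb432d4a71d",
                "afa3664e2d13439221e8d041382a4dc1");

        // "Yellow": our uid hashes with no state in the metafile.
        System.out.println("missing from metafile: " + minus(generated, metafile));
        // "Green": metafile state that matches none of our uids.
        System.out.println("unknown in metafile:   " + minus(metafile, generated));
    }
}
```

Running this reproduces the observation in the comment: `936222...` and `eed1d3...` have no state in the metafile, while `647a0a...` and `43b792...` belong to no manually specified uid.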


was (Author: partha mishra):
[~pnowojski] I used ur above method to calculate the hash of all the operators 
for which i have specified uid manually. I have also attached the DataStream 
APIs (our code snippet to make you understand the different operator uid 
specified manually) 

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

The green highlighted ones are not from our any operator as you can see.

The yellow highlighted one are the operator id which is not present in the 
metafile.

Also out of the 10 I have generated only 7 are present in the metafile, what 
happened to

[jira] [Comment Edited] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283507#comment-17283507
 ] 

Partha Pradeep Mishra edited comment on FLINK-20376 at 2/12/21, 5:25 AM:
-

[~pnowojski] I used your above method to calculate the hash of all the operators 
for which I have specified a uid manually. I have also attached the DataStream 
APIs (our code snippet, so you can see the different operator uids specified 
manually).

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

As you can see, the green highlighted ones do not come from any of our operators.

The yellow highlighted ones are the operator ids that are not present in the 
metafile.

Also, out of the 10 hashes I have generated, only 7 are present in the metafile. 
What happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?


was (Author: partha mishra):
[~pnowojski] I used ur above method to calculate the hash of all the operators 
for which i have specified uid manually. I have also attached the DataStream 
APIs (our code snippet to make you understand the different operator uid 
specified manually) 

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

The green highlighted ones are not from our any operator as you can see.

The yellow highlighted one are the operator id which is not present in the 
metafile.

Also out of the 10 I have generated only 7 are present in the metafile, what 
happened to the remaining 3. i.e. `897859f665855a890e51483ab5e6`,

[jira] [Comment Edited] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283507#comment-17283507
 ] 

Partha Pradeep Mishra edited comment on FLINK-20376 at 2/12/21, 5:24 AM:
-

[~pnowojski] I used your above method to calculate the hash of all the operators 
for which I have specified a uid manually. I have also attached the DataStream 
APIs (our code snippet, so you can see the different operator uids specified 
manually).

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

As you can see, the green highlighted ones do not come from any of our operators.

The yellow highlighted ones are the operator ids that are not present in the 
metafile.

Also, out of the 10 hashes I have generated, only 7 are present in the metafile. 
What happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?


was (Author: partha mishra):
[~pnowojski] I used ur above method to calculate the hash of all the operators 
for which i have specified uid manually. I have also attached the DataStream 
APIs (our code snippet to make you understand the different operator uid 
specified manually) 

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

The highlighted one is not from our any operator as you can see.

Also out of the 10 I have generated only 7 are present in the metafile, what 
happened to the remaining 3. i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`,  '936222da3bb558848f700bb01edb34c0'.

> Error in restorin

[jira] [Comment Edited] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283507#comment-17283507
 ] 

Partha Pradeep Mishra edited comment on FLINK-20376 at 2/12/21, 5:22 AM:
-

[~pnowojski] I used your above method to calculate the hash of all the operators 
for which I have specified a uid manually. I have also attached the DataStream 
APIs (our code snippet, so you can see the different operator uids specified 
manually).

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

The highlighted one is not from any of our operators, as you can see.

Also, out of the 10 hashes I generated, only 7 are present in the metafile. What 
happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?
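(Editor's note: the check described in this comment boils down to a set comparison between the operator IDs you expect, one per manually assigned uid, and the IDs actually recorded in the savepoint metafile. A minimal sketch of that comparison in Python; the `expected`/`in_metafile` names and the uid-to-hash pairing shown in the comments are taken from this thread for illustration only.)

```python
# Compare the operator IDs we expect (derived from our uids) against
# the operator IDs actually present in the savepoint metafile.
expected = {
    "897859f665855a890e51483ab5e6",      # uid "a" (as reported in the thread)
    "eed1d3b157a9987ae9944e541e132efa",  # uid "b"
    "d7741f4a6cdf388e747557749a0f0d21",  # uid "c"
}
in_metafile = {
    "d7741f4a6cdf388e747557749a0f0d21",
    "647a0a5ff84846c52775ce89f51a5edc",  # unknown ID, not from any of our uids
}

missing = expected - in_metafile   # expected but absent from the savepoint
unknown = in_metafile - expected   # present in the savepoint but unexplained

print(sorted(missing))
print(sorted(unknown))
```

Any non-empty `missing` or `unknown` set is exactly the kind of mismatch reported above.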


was (Author: partha mishra):
[~pnowojski] I used your method above to calculate the hash of all the operators 
for which I have specified a uid manually. I have also attached the DataStream 
APIs and the different operator uids specified.

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

The highlighted one is not from any of our operators, as you can see.

Also, out of the 10 hashes I generated, only 7 are present in the metafile. What 
happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -

[jira] [Comment Edited] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283507#comment-17283507
 ] 

Partha Pradeep Mishra edited comment on FLINK-20376 at 2/12/21, 5:20 AM:
-

[~pnowojski] I used your method above to calculate the hash of all the operators 
for which I have specified a uid manually. I have also attached the DataStream 
APIs and the different operator uids specified.

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
 {color:#ffab00} Operator UID : b Hashed 
:eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

The highlighted one is not from any of our operators, as you can see.

Also, out of the 10 hashes I generated, only 7 are present in the metafile. What 
happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?


was (Author: partha mishra):
[~pnowojski] I used your method above to calculate the hash of all the operators 
for which I have specified a uid manually.

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
{color:#ffab00} Operator UID : b Hashed :eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

The highlighted one is not from any of our operators, as you can see.

Also, out of the 10 hashes I generated, only 7 are present in the metafile. What 
happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> P

[jira] [Commented] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283508#comment-17283508
 ] 

Partha Pradeep Mishra commented on FLINK-20376:
---

!image-2021-02-12-10-50-26-411.png!

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Partha Pradeep Mishra
>Priority: Major
> Attachments: MetaData.zip, image-2020-12-10-15-04-39-624.png, 
> image-2020-12-10-15-06-48-013.png, image-2020-12-10-15-09-13-527.png, 
> image-2021-01-18-14-42-49-814.png, image-2021-02-11-14-37-31-793.png, 
> image-2021-02-12-10-50-26-411.png
>
>
> We tried to save checkpoints for one of the flink job (1.9 version) and then 
> import/restore the checkpoints in the newer flink version (1.11.2). The 
> import/resume operation failed with the below error. Please note that both 
> the jobs(i.e. one running in 1.9 and other in 1.11.2) are same binary with no 
> code difference or introduction of new operators. Still we got the below 
> issue.
> _Cannot map checkpoint/savepoint state for operator 
> fbb4ef531e002f8fb3a2052db255adf5 to the new program, because the operator is 
> not available in the new program._
> *Complete Stack Trace :*
> {"errors":["org.apache.flink.runtime.rest.handler.RestHandlerException: Could 
> not execute application.\n\tat 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)\n\tat
>  
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)\n\tat
>  
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)\n\tat
>  
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)\n\tat
>  
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)\n\tat
>  
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>  java.lang.Thread.run(Thread.java:748)\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute 
> application.\n\tat 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)\n\tat
>  
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\n\t...
>  7 more\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.\n\tat 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)\n\tat
>  
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)\n\tat
>  
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\n\t...
>  7 more\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed to execute job 
> 'ST1_100Services-preprod-Tumbling-ProcessedBased'.\n\tat 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)\n\tat
>  
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)\n\tat
>  
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)\n\tat
>  
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\n\t...
>  10 more\nCaused by: org.apache.flink.util.FlinkException: Failed to execute 
> job 'ST1_100Services-preprod-Tumbling-ProcessedBased'.\n\tat 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1821)\n\tat
>  
> org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:128)\n\tat
>  
> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamC

[jira] [Updated] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Partha Pradeep Mishra updated FLINK-20376:
--
Attachment: image-2021-02-12-10-50-26-411.png

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Partha Pradeep Mishra
>Priority: Major
> Attachments: MetaData.zip, image-2020-12-10-15-04-39-624.png, 
> image-2020-12-10-15-06-48-013.png, image-2020-12-10-15-09-13-527.png, 
> image-2021-01-18-14-42-49-814.png, image-2021-02-11-14-37-31-793.png, 
> image-2021-02-12-10-50-26-411.png

[jira] [Commented] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283507#comment-17283507
 ] 

Partha Pradeep Mishra commented on FLINK-20376:
---

[~pnowojski] I used your method above to calculate the hash of all the operators 
for which I have specified a uid manually.

Operator UID : a Hashed :897859f665855a890e51483ab5e6
Operator UID : b Hashed :eed1d3b157a9987ae9944e541e132efa
Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0
Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
 

The highlighted one is not from any of our operators, as you can see.

Also, out of the 10 hashes I generated, only 7 are present in the metafile. What 
happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?
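(Editor's note: for context on where these hex IDs come from, Flink derives an operator's ID deterministically from its uid string — the state processor API's `OperatorIDGenerator` hashes the uid with 128-bit Murmur3 — so the same uid should always map to the same ID across job versions. A rough sketch of the idea follows; MD5 is substituted only because it is in the Python standard library, so these IDs will NOT match Flink's real ones.)

```python
import hashlib

def illustrative_operator_id(uid: str) -> str:
    """Deterministically map a uid string to a 128-bit hex ID.

    NOTE: Flink itself uses 128-bit Murmur3 (OperatorIDGenerator /
    StreamGraphHasherV2); MD5 is used here only because it is in the
    standard library, so these IDs will not match Flink's real ones.
    """
    return hashlib.md5(uid.encode("utf-8")).hexdigest()

# The same uid always yields the same 128-bit ID, which is what lets a
# savepoint taken with .uid("c") be matched back to operator "c" on restore.
assert illustrative_operator_id("c") == illustrative_operator_id("c")
print(illustrative_operator_id("c"))
```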

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Partha Pradeep Mishra
>Priority: Major
> Attachments: MetaData.zip, image-2020-12-10-15-04-39-624.png, 
> image-2020-12-10-15-06-48-013.png, image-2020-12-10-15-09-13-527.png, 
> image-2021-01-18-14-42-49-814.png, image-2021-02-11-14-37-31-793.png

[jira] [Comment Edited] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283507#comment-17283507
 ] 

Partha Pradeep Mishra edited comment on FLINK-20376 at 2/12/21, 5:18 AM:
-

[~pnowojski] I used your method above to calculate the hash of all the operators 
for which I have specified a uid manually.

{color:#ffab00}Operator UID : a Hashed :897859f665855a890e51483ab5e6{color}
{color:#ffab00} Operator UID : b Hashed :eed1d3b157a9987ae9944e541e132efa{color}
 Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
 Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
 Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
 Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
 Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
 Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
 {color:#ffab00}Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0{color}
 Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
 
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
  

The highlighted one is not from any of our operators, as you can see.

Also, out of the 10 hashes I generated, only 7 are present in the metafile. What 
happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?


was (Author: partha mishra):
[~pnowojski] I used your method above to calculate the hash of all the operators 
for which I have specified a uid manually.

Operator UID : a Hashed :897859f665855a890e51483ab5e6
Operator UID : b Hashed :eed1d3b157a9987ae9944e541e132efa
Operator UID : c Hashed :d7741f4a6cdf388e747557749a0f0d21
Operator UID : d Hashed :76f74784cdf272cbdd1c37d471a532a0
Operator UID : e Hashed :94e9d5a34992b6c51461d69a7ca2eb56
Operator UID : f Hashed :afa3664e2d13439221e8d041382a4dc1
Operator UID : g Hashed :da9aa6f89ab75dbc6233a02db1b171fd
Operator UID : h Hashed :2345cb61bbb2fcd603d786389726830c
Operator UID : z Hashed :936222da3bb558848f700bb01edb34c0
Operator UID : i Hashed :bdf3ca0e5e6bde27f22d80457a5a19a9

 

But the metafile contains the below hashed value.
{d7741f4a6cdf388e747557749a0f0d21=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5af97850,
 
94e9d5a34992b6c51461d69a7ca2eb56=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@5ef60048,
 
{color:#00875a}647a0a5ff84846c52775ce89f51a5edc=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@1d548a08,{color}
 
2345cb61bbb2fcd603d786389726830c=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@16aa0a0a,
 
76f74784cdf272cbdd1c37d471a532a0=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@780cb77,
 
bdf3ca0e5e6bde27f22d80457a5a19a9=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@691a7f8f,
 
da9aa6f89ab75dbc6233a02db1b171fd=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@50a7bc6e,
 
{color:#00875a}43b792ffaf5a610180059cb432d4a71d=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@161b062a,{color}
 
afa3664e2d13439221e8d041382a4dc1=org.apache.flink.state.api.runtime.metadata.OperatorStateSpec@17c1bced}
 

The highlighted one is not from any of our operators, as you can see.

Also, out of the 10 hashes I generated, only 7 are present in the metafile. What 
happened to the remaining 3, i.e. `897859f665855a890e51483ab5e6`, 
`eed1d3b157a9987ae9944e541e132efa`, `936222da3bb558848f700bb01edb34c0`?

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Partha Pradeep Mishra
>

[jira] [Issue Comment Deleted] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Partha Pradeep Mishra updated FLINK-20376:
--
Comment: was deleted

(was: !image-2021-01-18-14-42-49-814.png!)

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Partha Pradeep Mishra
>Priority: Major
> Attachments: MetaData.zip, image-2020-12-10-15-04-39-624.png, 
> image-2020-12-10-15-06-48-013.png, image-2020-12-10-15-09-13-527.png, 
> image-2021-01-18-14-42-49-814.png, image-2021-02-11-14-37-31-793.png

[GitHub] [flink] vthinkxie commented on pull request #14848: [FLINK-21268]Fix scrolling issue in Firefox

2021-02-11 Thread GitBox


vthinkxie commented on pull request #14848:
URL: https://github.com/apache/flink/pull/14848#issuecomment-777976114


   Hi @lovelock,
   `@ant-design/icons-angular` is a dependency of ng-zorro-antd and should be 
installed automatically.
   Can you check your npm version by running `npm version`?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283503#comment-17283503
 ] 

Partha Pradeep Mishra edited comment on FLINK-20376 at 2/12/21, 5:04 AM:
-

[~pnowojski] Yes, `43b792ffaf5a610180059cb432d4a71d` is not from any of the 
operators visible in the job graphs.

1. I have used the State Processor API's removeOperator(uid) to remove all the 
uids I manually specified on the various operators of our DataStream 
application, but I was not able to remove these two operators: 
`43b792ffaf5a610180059cb432d4a71d` and  
 `647a0a5ff84846c52775ce89f51a5edc`. So I assume they are not generated from our 
operators. I have attached the code snippet below; it gives a clear picture 
of all the operators and their respective uid settings.
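For reference, a removal pass with the State Processor API (flink-state-processor-api) looks roughly like the sketch below. This is a minimal, unverified sketch assuming Flink 1.11; the savepoint paths and uid strings are placeholders, not values from this job. Note that removeOperator() takes the operator uid string assigned in the program (Flink hashes it into the operator ID), which is why operators known only by their hash cannot be addressed this way:

```java
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

public class DropOperatorsFromSavepoint {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Load the existing (1.9) savepoint; the path is a placeholder.
        ExistingSavepoint savepoint =
                Savepoint.load(env, "hdfs:///savepoints/savepoint-old", new MemoryStateBackend());

        // removeOperator() expects the uid set via uid(...) in the program,
        // not the hashed operator ID reported in the restore error.
        savepoint
                .removeOperator("my-window-operator-uid")
                .removeOperator("my-sink-operator-uid")
                .write("hdfs:///savepoints/savepoint-cleaned");

        // write(...) only adds a sink; the batch job still has to run.
        env.execute("remove-operators-from-savepoint");
    }
}
```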


was (Author: partha mishra):
[~pnowojski] Yes,  `43b792ffaf5a610180059cb432d4a71d` is not from any of the 
operators visible in the job graphs.

1. I have used state processor APIs removeOperator(uid) to remove all the uids 
which I have manually specified in various operators of our DataStream 
application but I was not able to remove these two operators 
`43b792ffaf5a610180059cb432d4a71d` and  
`647a0a5ff84846c52775ce89f51a5edc`. So, I assumed its not generated from our 
operators.

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Partha Pradeep Mishra
>Priority: Major
> Attachments: MetaData.zip, image-2020-12-10-15-04-39-624.png, 
> image-2020-12-10-15-06-48-013.png, image-2020-12-10-15-09-13-527.png, 
> image-2021-01-18-14-42-49-814.png, image-2021-02-11-14-37-31-793.png
>
>
> We tried to save checkpoints for one of the Flink jobs (version 1.9) and then 
> import/restore the checkpoints in the newer Flink version (1.11.2). The 
> import/resume operation failed with the error below. Please note that both 
> jobs (i.e. the one running on 1.9 and the one on 1.11.2) are the same binary, 
> with no code differences and no new operators introduced. Still we hit the 
> issue below.
> _Cannot map checkpoint/savepoint state for operator 
> fbb4ef531e002f8fb3a2052db255adf5 to the new program, because the operator is 
> not available in the new program._

[jira] [Commented] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283504#comment-17283504
 ] 

Partha Pradeep Mishra commented on FLINK-20376:
---

!image-2021-01-18-14-42-49-814.png!

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Partha Pradeep Mishra
>Priority: Major
> Attachments: MetaData.zip, image-2020-12-10-15-04-39-624.png, 
> image-2020-12-10-15-06-48-013.png, image-2020-12-10-15-09-13-527.png, 
> image-2021-01-18-14-42-49-814.png, image-2021-02-11-14-37-31-793.png
>
>
> We tried to save checkpoints for one of the Flink jobs (version 1.9) and then 
> import/restore the checkpoints in the newer Flink version (1.11.2). The 
> import/resume operation failed with the error below. Please note that both 
> jobs (i.e. the one running on 1.9 and the one on 1.11.2) are the same binary, 
> with no code differences and no new operators introduced. Still we hit the 
> issue below.
> _Cannot map checkpoint/savepoint state for operator 
> fbb4ef531e002f8fb3a2052db255adf5 to the new program, because the operator is 
> not available in the new program._

[jira] [Commented] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2021-02-11 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283503#comment-17283503
 ] 

Partha Pradeep Mishra commented on FLINK-20376:
---

[~pnowojski] Yes, `43b792ffaf5a610180059cb432d4a71d` is not from any of the 
operators visible in the job graphs.

1. I have used the State Processor API's removeOperator(uid) to remove all the 
uids I manually specified on the various operators of our DataStream 
application, but I was not able to remove these two operators: 
`43b792ffaf5a610180059cb432d4a71d` and  
`647a0a5ff84846c52775ce89f51a5edc`. So I assume they are not generated from our 
operators.

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Partha Pradeep Mishra
>Priority: Major
> Attachments: MetaData.zip, image-2020-12-10-15-04-39-624.png, 
> image-2020-12-10-15-06-48-013.png, image-2020-12-10-15-09-13-527.png, 
> image-2021-01-18-14-42-49-814.png, image-2021-02-11-14-37-31-793.png
>
>
> We tried to save checkpoints for one of the Flink jobs (version 1.9) and then 
> import/restore the checkpoints in the newer Flink version (1.11.2). The 
> import/resume operation failed with the error below. Please note that both 
> jobs (i.e. the one running on 1.9 and the one on 1.11.2) are the same binary, 
> with no code differences and no new operators introduced. Still we hit the 
> issue below.
> _Cannot map checkpoint/savepoint state for operator 
> fbb4ef531e002f8fb3a2052db255adf5 to the new program, because the operator is 
> not available in the new program._

[GitHub] [flink] flinkbot edited a comment on pull request #14839: [FLINK-21353][state] Add DFS-based StateChangelog

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14839:
URL: https://github.com/apache/flink/pull/14839#issuecomment-772060196


   
   ## CI report:
   
   * 190ba65a7779f7d389b8a0d74004b15afb34e5a6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13270)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14839: [FLINK-21353][state] Add DFS-based StateChangelog

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14839:
URL: https://github.com/apache/flink/pull/14839#issuecomment-772060196


   
   ## CI report:
   
   * 813a44dd88fc0357316be02c53119c7def6f3eac Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13218)
 
   * 5fb0c3ec5ea91de6195d8219a2a3fc27ba56806c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13269)
 
   * 190ba65a7779f7d389b8a0d74004b15afb34e5a6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13270)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13912: [FLINK-19466][FLINK-19467][runtime / state backends] Add Flip-142 public interfaces and methods

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #13912:
URL: https://github.com/apache/flink/pull/13912#issuecomment-721398037


   
   ## CI report:
   
   * 7705bf347fbf193d02630a58bbb75128c60025ed Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13268)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14839: [FLINK-21353][state] Add DFS-based StateChangelog

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14839:
URL: https://github.com/apache/flink/pull/14839#issuecomment-772060196


   
   ## CI report:
   
   * 813a44dd88fc0357316be02c53119c7def6f3eac Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13218)
 
   * 5fb0c3ec5ea91de6195d8219a2a3fc27ba56806c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13269)
 
   * 190ba65a7779f7d389b8a0d74004b15afb34e5a6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13912: [FLINK-19466][FLINK-19467][runtime / state backends] Add Flip-142 public interfaces and methods

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #13912:
URL: https://github.com/apache/flink/pull/13912#issuecomment-721398037


   
   ## CI report:
   
   * dfd5c341320f3ccb7291c5833ae52a3d3f1632f1 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12716)
 
   * 1e1d99c8221213984c9c47f94b7bdf80ed3340fb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13267)
 
   * 7705bf347fbf193d02630a58bbb75128c60025ed Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13268)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14839: [FLINK-21353][state] Add DFS-based StateChangelog

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14839:
URL: https://github.com/apache/flink/pull/14839#issuecomment-772060196


   
   ## CI report:
   
   * 813a44dd88fc0357316be02c53119c7def6f3eac Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13218)
 
   * 5fb0c3ec5ea91de6195d8219a2a3fc27ba56806c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13912: [FLINK-19466][FLINK-19467][runtime / state backends] Add Flip-142 public interfaces and methods

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #13912:
URL: https://github.com/apache/flink/pull/13912#issuecomment-721398037


   
   ## CI report:
   
   * dfd5c341320f3ccb7291c5833ae52a3d3f1632f1 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12716)
 
   * 1e1d99c8221213984c9c47f94b7bdf80ed3340fb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13267)
 
   * 7705bf347fbf193d02630a58bbb75128c60025ed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14919: [FLINK-21338][test] Relax ITCase naming constraints

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14919:
URL: https://github.com/apache/flink/pull/14919#issuecomment-776686704


   
   ## CI report:
   
   * 7a95b8eb6d45c5e3c38f0c6cbed995b6d81a640b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13262)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14929: [FLINK-21364][connector] piggyback finishedSplitIds in RequestSplitEv…

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14929:
URL: https://github.com/apache/flink/pull/14929#issuecomment-53257


   
   ## CI report:
   
   * fbac76fb3634855a72878de1336c85f09562e3b6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13263)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13912: [FLINK-19466][FLINK-19467][runtime / state backends] Add Flip-142 public interfaces and methods

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #13912:
URL: https://github.com/apache/flink/pull/13912#issuecomment-721398037


   
   ## CI report:
   
   * dfd5c341320f3ccb7291c5833ae52a3d3f1632f1 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12716)
 
   * 1e1d99c8221213984c9c47f94b7bdf80ed3340fb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13267)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13912: [FLINK-19466][FLINK-19467][runtime / state backends] Add Flip-142 public interfaces and methods

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #13912:
URL: https://github.com/apache/flink/pull/13912#issuecomment-721398037


   
   ## CI report:
   
   * dfd5c341320f3ccb7291c5833ae52a3d3f1632f1 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12716)
 
   * 1e1d99c8221213984c9c47f94b7bdf80ed3340fb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sjwiesman commented on pull request #13912: [FLINK-19466][FLINK-19467][runtime / state backends] Add Flip-142 public interfaces and methods

2021-02-11 Thread GitBox


sjwiesman commented on pull request #13912:
URL: https://github.com/apache/flink/pull/13912#issuecomment-777852596


   @rkhachatryan I believe I have addressed all your comments. I'm signing off 
tonight before seeing whether the e2e tests pass, but if anything fails I will 
fix it first thing in the morning. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14927: [FLINK-21339][tests] Enable and fix ExceptionUtilsITCase

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14927:
URL: https://github.com/apache/flink/pull/14927#issuecomment-777653410


   
   ## CI report:
   
   * 061b9d38187a7fb933b0b07c146c96d9e9ee1c17 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13260)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14928: [FLINK-21360][coordination] Make resource timeout configurable

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14928:
URL: https://github.com/apache/flink/pull/14928#issuecomment-14828


   
   ## CI report:
   
   * f2be4ac1ab7e9faeb7d1aac40fe003903edff497 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13259)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14929: [FLINK-21364][connector] piggyback finishedSplitIds in RequestSplitEv…

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14929:
URL: https://github.com/apache/flink/pull/14929#issuecomment-53257


   
   ## CI report:
   
   * fbac76fb3634855a72878de1336c85f09562e3b6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13263)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14919: [FLINK-21338][test] Relax ITCase naming constraints

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14919:
URL: https://github.com/apache/flink/pull/14919#issuecomment-776686704


   
   ## CI report:
   
   * 65a33645d217d8756e40a5c630680e30058f64cb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13209)
 
   * 7a95b8eb6d45c5e3c38f0c6cbed995b6d81a640b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13262)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14910: [FLINK-21259] Add Failing state for DeclarativeScheduler

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14910:
URL: https://github.com/apache/flink/pull/14910#issuecomment-775986669


   
   ## CI report:
   
   * e7130748c1cabb0a2e7fc436070da0c1d1c8450a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13248)
 
   
   




[GitHub] [flink] LinyuYao1021 commented on a change in pull request #14737: [FLINK-19667] Add AWS Glue Schema Registry integration

2021-02-11 Thread GitBox


LinyuYao1021 commented on a change in pull request #14737:
URL: https://github.com/apache/flink/pull/14737#discussion_r574800498



##
File path: flink-formats/flink-avro-glue-schema-registry/pom.xml
##
@@ -0,0 +1,99 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd">
+	<parent>
+		<artifactId>flink-formats</artifactId>
+		<groupId>org.apache.flink</groupId>
+		<version>1.13-SNAPSHOT</version>
+		<relativePath>..</relativePath>
+	</parent>
+	<modelVersion>4.0.0</modelVersion>
+
+	<artifactId>flink-avro-glue-schema-registry</artifactId>
+	<name>Flink : Formats : Avro AWS Glue Schema Registry</name>
+	<packaging>jar</packaging>
+
+	<properties>
+		1.0.0
+		5.6.2
+		true
+	</properties>
+
+	<dependencies>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-core</artifactId>
+			<version>${project.version}</version>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-avro</artifactId>
+			<version>${project.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-clients_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+		</dependency>

Review comment:
   I mean testing how a customer would use this module.









[GitHub] [flink] flinkbot commented on pull request #14929: [FLINK-21364][connector] piggyback finishedSplitIds in RequestSplitEv…

2021-02-11 Thread GitBox


flinkbot commented on pull request #14929:
URL: https://github.com/apache/flink/pull/14929#issuecomment-53257


   
   ## CI report:
   
   * fbac76fb3634855a72878de1336c85f09562e3b6 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14919: [FLINK-21338][test] Relax ITCase naming constraints

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14919:
URL: https://github.com/apache/flink/pull/14919#issuecomment-776686704


   
   ## CI report:
   
   * 65a33645d217d8756e40a5c630680e30058f64cb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13209)
 
   * 7a95b8eb6d45c5e3c38f0c6cbed995b6d81a640b UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14927: [FLINK-21339][tests] Enable and fix ExceptionUtilsITCase

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14927:
URL: https://github.com/apache/flink/pull/14927#issuecomment-777653410


   
   ## CI report:
   
   * 61a5d30ea74d58a16124fb9e38587322aa0a1720 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13252)
 
   * 061b9d38187a7fb933b0b07c146c96d9e9ee1c17 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13260)
 
   
   




[jira] [Updated] (FLINK-21315) Support to set operator names in state processor API

2021-02-11 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman updated FLINK-21315:
-
Fix Version/s: 1.13.0

> Support to set operator names in state processor API
> 
>
> Key: FLINK-21315
> URL: https://issues.apache.org/jira/browse/FLINK-21315
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jun Qin
>Assignee: Jun Qin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Currently, it is not possible to set a user-friendly operator name when using 
> state processor API. For example, when you use `readKeyedState()`, the 
> operator name shows on the Flink UI is: 
> {{DataSource (at readKeyedState(ExistingSavepoint.java:282) 
> (org.apache.flink.state.api.input.KeyedStateInputFormat))}}
> The same long name is shown on Grafana when Flink metrics are displayed.  
> This Jira aims to provide users an option to set operator names in state 
> processor API. 
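The naming gap described above could be closed with a fluent naming hook on the source. A minimal sketch follows; the class and method names here are hypothetical illustrations, not the actual state processor API:

```java
// Hypothetical sketch only -- NOT the real state processor API surface.
// It shows the kind of optional naming hook the ticket asks for: a readable
// operator name for the UI and metrics, falling back to the auto-generated one.
public class NamedOperator {
    private final String defaultName; // e.g. the long auto-generated name
    private String name;              // user-supplied override, if any

    public NamedOperator(String defaultName) {
        this.defaultName = defaultName;
    }

    // Fluent setter, mirroring DataStream-style name(...) methods.
    public NamedOperator name(String name) {
        this.name = name;
        return this;
    }

    public String displayName() {
        return name != null ? name : defaultName;
    }

    public static void main(String[] args) {
        NamedOperator op = new NamedOperator(
                "DataSource (at readKeyedState(ExistingSavepoint.java:282))");
        // With an explicit name, the UI/metrics label becomes readable.
        System.out.println(op.name("my-keyed-state-source").displayName());
    }
}
```

A fluent setter keeps the change backwards compatible: sources that never call `name(...)` keep today's auto-generated label.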



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21315) Support to set operator names in state processor API

2021-02-11 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283325#comment-17283325
 ] 

Seth Wiesman commented on FLINK-21315:
--

fixed in master: ae5bea39491860ccdff3316877ef28e64f466f64

> Support to set operator names in state processor API
> 
>
> Key: FLINK-21315
> URL: https://issues.apache.org/jira/browse/FLINK-21315
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jun Qin
>Assignee: Jun Qin
>Priority: Major
>  Labels: pull-request-available
>
> Currently, it is not possible to set a user-friendly operator name when using 
> state processor API. For example, when you use `readKeyedState()`, the 
> operator name shows on the Flink UI is: 
> {{DataSource (at readKeyedState(ExistingSavepoint.java:282) 
> (org.apache.flink.state.api.input.KeyedStateInputFormat))}}
> The same long name is shown on Grafana when Flink metrics are displayed.  
> This Jira aims to provide users an option to set operator names in state 
> processor API. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-21315) Support to set operator names in state processor API

2021-02-11 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman resolved FLINK-21315.
--
Resolution: Fixed

> Support to set operator names in state processor API
> 
>
> Key: FLINK-21315
> URL: https://issues.apache.org/jira/browse/FLINK-21315
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jun Qin
>Assignee: Jun Qin
>Priority: Major
>  Labels: pull-request-available
>
> Currently, it is not possible to set a user-friendly operator name when using 
> state processor API. For example, when you use `readKeyedState()`, the 
> operator name shows on the Flink UI is: 
> {{DataSource (at readKeyedState(ExistingSavepoint.java:282) 
> (org.apache.flink.state.api.input.KeyedStateInputFormat))}}
> The same long name is shown on Grafana when Flink metrics are displayed.  
> This Jira aims to provide users an option to set operator names in state 
> processor API. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sjwiesman closed pull request #14907: [FLINK-21315][state-processor-api]allow users to set DataSource operator names

2021-02-11 Thread GitBox


sjwiesman closed pull request #14907:
URL: https://github.com/apache/flink/pull/14907


   







[GitHub] [flink] flinkbot commented on pull request #14929: [FLINK-21364][connector] piggyback finishedSplitIds in RequestSplitEv…

2021-02-11 Thread GitBox


flinkbot commented on pull request #14929:
URL: https://github.com/apache/flink/pull/14929#issuecomment-44320


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit fbac76fb3634855a72878de1336c85f09562e3b6 (Thu Feb 11 
19:44:50 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-21364).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-21364) piggyback finishedSplitIds in RequestSplitEvent

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21364:
---
Labels: pull-request-available  (was: )

> piggyback finishedSplitIds in RequestSplitEvent
> ---
>
> Key: FLINK-21364
> URL: https://issues.apache.org/jira/browse/FLINK-21364
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.12.1
>Reporter: Steven Zhen Wu
>Priority: Major
>  Labels: pull-request-available
>
> For some split assignment strategy, the enumerator/assigner needs to track 
> the completed splits to advance watermark for event time alignment or rough 
> ordering. Right now, `RequestSplitEvent` for FLIP-27 source doesn't support 
> pass-along of the `finishedSplitIds` info and hence we have to create our own 
> custom source event type for Iceberg source. 
> Here is the proposal of add such optional info to `RequestSplitEvent`.
> {code}
> public RequestSplitEvent(
> @Nullable String hostName, 
> @Nullable Collection<String> finishedSplitIds)
> {code}
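A simplified stand-in illustrates how the proposed optional field could sit next to the existing host-name-only constructor. This is a sketch only; the class shape is assumed and is not the real `RequestSplitEvent`:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

// Simplified stand-in for the proposed change, NOT Flink's real class:
// it only illustrates carrying an optional finishedSplitIds collection.
public class RequestSplitEventSketch {
    private final String hostName;                      // may be null
    private final Collection<String> finishedSplitIds;  // proposed addition

    public RequestSplitEventSketch(String hostName,
                                   Collection<String> finishedSplitIds) {
        this.hostName = hostName;
        this.finishedSplitIds = finishedSplitIds;
    }

    // Existing behaviour preserved: no finished splits reported.
    public RequestSplitEventSketch(String hostName) {
        this(hostName, Collections.emptyList());
    }

    public String hostName() { return hostName; }
    public Collection<String> finishedSplitIds() { return finishedSplitIds; }

    public static void main(String[] args) {
        RequestSplitEventSketch e = new RequestSplitEventSketch(
                "host-1", Arrays.asList("split-0", "split-3"));
        // An enumerator could advance its watermark once splits are reported done.
        System.out.println(e.finishedSplitIds().size()); // prints 2
    }
}
```

Keeping the old single-argument constructor means existing readers that only report a host name need no changes.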



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] stevenzwu opened a new pull request #14929: [FLINK-21364][connector] piggyback finishedSplitIds in RequestSplitEv…

2021-02-11 Thread GitBox


stevenzwu opened a new pull request #14929:
URL: https://github.com/apache/flink/pull/14929


   …ent for FLIP-27 source
   
   ## What is the purpose of the change
   
   For some split assignment strategy, the enumerator/assigner needs to track 
the completed splits to advance watermark for event time alignment or rough 
ordering. Right now, `RequestSplitEvent` for FLIP-27 source doesn't support 
pass-along of the `finishedSplitIds` info and hence we have to create our own 
custom source event type for Iceberg source.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[GitHub] [flink] flinkbot edited a comment on pull request #14927: [FLINK-21339][tests] Enable and fix ExceptionUtilsITCase

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14927:
URL: https://github.com/apache/flink/pull/14927#issuecomment-777653410


   
   ## CI report:
   
   * 61a5d30ea74d58a16124fb9e38587322aa0a1720 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13252)
 
   * 061b9d38187a7fb933b0b07c146c96d9e9ee1c17 UNKNOWN
   
   




[GitHub] [flink] zentol commented on pull request #14848: [FLINK-21268]Fix scrolling issue in Firefox

2021-02-11 Thread GitBox


zentol commented on pull request #14848:
URL: https://github.com/apache/flink/pull/14848#issuecomment-36286


   @vthinkxie Could you take a look?







[GitHub] [flink] flinkbot edited a comment on pull request #14928: [FLINK-21360][coordination] Make resource timeout configurable

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14928:
URL: https://github.com/apache/flink/pull/14928#issuecomment-14828


   
   ## CI report:
   
   * f2be4ac1ab7e9faeb7d1aac40fe003903edff497 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13259)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14847: [FLINK-21030][runtime] Add global failover in case of a stop-with-savepoint failure

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14847:
URL: https://github.com/apache/flink/pull/14847#issuecomment-772387941


   
   ## CI report:
   
   * 9a2ea20ce0803e48edfc3ab7bcc02078b7410fbf UNKNOWN
   * 42dbd4fa164e86f7637935e89eb6af720085b3de Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13245)
 
   
   




[GitHub] [flink] flinkbot commented on pull request #14928: [FLINK-21360][coordination] Make resource timeout configurable

2021-02-11 Thread GitBox


flinkbot commented on pull request #14928:
URL: https://github.com/apache/flink/pull/14928#issuecomment-14828


   
   ## CI report:
   
   * f2be4ac1ab7e9faeb7d1aac40fe003903edff497 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14925: [FLINK-21206] Write savepoints in unified format from HeapStateBackend

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14925:
URL: https://github.com/apache/flink/pull/14925#issuecomment-777491311


   
   ## CI report:
   
   * 8c19b00edc4416f67507f34c5fd51c8fe5424f76 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13238)
 
   
   




[GitHub] [flink] flinkbot commented on pull request #14928: [FLINK-21360][coordination] Make resource timeout configurable

2021-02-11 Thread GitBox


flinkbot commented on pull request #14928:
URL: https://github.com/apache/flink/pull/14928#issuecomment-06506


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f2be4ac1ab7e9faeb7d1aac40fe003903edff497 (Thu Feb 11 
18:43:44 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-21360) Add resourceTimeout configuration

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21360:
---
Labels: pull-request-available  (was: )

> Add resourceTimeout configuration
> -
>
> Key: FLINK-21360
> URL: https://issues.apache.org/jira/browse/FLINK-21360
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> resourceTimeout is currently a hardcoded value. Make it configurable.
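Lifting a hardcoded timeout into configuration usually means a keyed option with the old value as its default. A minimal, Flink-independent sketch follows; the key name and default here are assumptions, not the actual option introduced by this ticket:

```java
import java.time.Duration;
import java.util.Map;

// Sketch of making a hardcoded timeout configurable. The key name and
// default value are ASSUMED for illustration, not Flink's real option.
public class ResourceTimeoutConfig {
    static final String KEY = "scheduler.resource-timeout"; // assumed key
    static final Duration DEFAULT = Duration.ofSeconds(10); // assumed default

    // Reads an ISO-8601 duration (e.g. "PT30S") from the config map,
    // falling back to the previously hardcoded value.
    static Duration resourceTimeout(Map<String, String> conf) {
        String v = conf.get(KEY);
        return v == null ? DEFAULT : Duration.parse(v);
    }

    public static void main(String[] args) {
        System.out.println(resourceTimeout(Map.of(KEY, "PT30S"))); // PT30S
        System.out.println(resourceTimeout(Map.of()));             // PT10S
    }
}
```

The important property is that an unset key behaves exactly like the old hardcoded value, so existing deployments are unaffected.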



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol opened a new pull request #14928: [FLINK-21360][coordination] Make resource timeout configurable

2021-02-11 Thread GitBox


zentol opened a new pull request #14928:
URL: https://github.com/apache/flink/pull/14928


   Based on #14921.







[GitHub] [flink] flinkbot edited a comment on pull request #14927: [FLINK-21339][tests] Enable and fix ExceptionUtilsITCase

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14927:
URL: https://github.com/apache/flink/pull/14927#issuecomment-777653410


   
   ## CI report:
   
   * 61a5d30ea74d58a16124fb9e38587322aa0a1720 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13252)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14847: [FLINK-21030][runtime] Add global failover in case of a stop-with-savepoint failure

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14847:
URL: https://github.com/apache/flink/pull/14847#issuecomment-772387941


   
   ## CI report:
   
   * 9a2ea20ce0803e48edfc3ab7bcc02078b7410fbf UNKNOWN
   * 1e5c54f362a2c38363325b757e361c86f28e8128 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13235)
 
   * 42dbd4fa164e86f7637935e89eb6af720085b3de Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13245)
 
   
   




[GitHub] [flink] tillrohrmann commented on pull request #14902: [FLINK-21138][queryableState] - User ClassLoader in KvStateServerHandler

2021-02-11 Thread GitBox


tillrohrmann commented on pull request #14902:
URL: https://github.com/apache/flink/pull/14902#issuecomment-01464


   Manually merged via aeeb6171adcd7e305e8c0c64a13ade64170fa799







[GitHub] [flink] tillrohrmann closed pull request #14902: [FLINK-21138][queryableState] - User ClassLoader in KvStateServerHandler

2021-02-11 Thread GitBox


tillrohrmann closed pull request #14902:
URL: https://github.com/apache/flink/pull/14902


   







[jira] [Closed] (FLINK-21274) At per-job mode, during the exit of the JobManager process, if ioExecutor exits at the end, the System.exit() method will not be executed.

2021-02-11 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-21274.
-
Resolution: Fixed

Fixed via

1.13.0:

e6585365c931042a94408e0a58b0316b40a270e5
cf451aa73f050b69031366e1dda7bc0a3e0f9f81

1.12.2:

a7f898ab3f6bc887c590abf6e2c6eab9a89d1d12
a7f3b369229328ddc717776fca767ae4428df53a

1.11.4:

b4e3b498ec47deb383b83de87ac00b7707b26314
9946891bb3d6568ae29af6d8c23ccb39bfbfba22

> At per-job mode, during the exit of the JobManager process, if ioExecutor 
> exits at the end, the System.exit() method will not be executed.
> --
>
> Key: FLINK-21274
> URL: https://issues.apache.org/jira/browse/FLINK-21274
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.3, 1.10.1, 1.11.0, 1.12.0
>Reporter: Jichao Wang
>Assignee: Jichao Wang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.4, 1.12.2, 1.13.0
>
> Attachments: 1.png, 2.png, Add wait 5 seconds in 
> org.apache.flink.runtime.history.FsJobArchivist#archiveJob.log, Not add wait 
> 5 seconds.log, application_1612404624605_0010-JobManager.log
>
>
> h2. =Latest issue description(2021.02.07)==
> I want to try to describe the issue in a more concise way:
> *My issue only appears in per-job mode,*
> In JsonResponseHistoryServerArchivist#archiveExecutionGraph, submit the 
> archive task to ioExecutor for execution. At the same time, 
> ClusterEntrypoint#stopClusterServices shuts down multiple thread pools in parallel 
> (for example, commonRpcService, metricRegistry, 
> MetricRegistryImpl#executor(in metricRegistry.shutdown())). Think about it, 
> assuming that the archiving process takes 10 seconds to execute, then 
> ExecutorUtils.nonBlockingShutdown will wait 10 seconds before exiting. However, 
> through testing, it was found that the JobManager process exited immediately 
> after commonRpcService and metricRegistry exited. At this time, 
> ExecutorUtils.nonBlockingShutdown is still waiting for the end of the 
> archiving process, so the archiving process will not be completely executed.
> *There are two specific reproduction methods:*
> *Method one:*
> Modify the org.apache.flink.runtime.history.FsJobArchivist#archiveJob method 
> to wait 5 seconds before actually writing to HDFS (simulating a slow write 
> speed scenario).
> {code:java}
> public static Path archiveJob(Path rootPath, JobID jobId, 
> Collection<ArchivedJson> jsonToArchive)
> throws IOException {
> try {
> FileSystem fs = rootPath.getFileSystem();
> Path path = new Path(rootPath, jobId.toString());
> OutputStream out = fs.create(path, FileSystem.WriteMode.NO_OVERWRITE);
> try {
> LOG.info("===Wait 5 seconds..");
> Thread.sleep(5000);
> } catch (InterruptedException e) {
> e.printStackTrace();
> }
> try (JsonGenerator gen = jacksonFactory.createGenerator(out, 
> JsonEncoding.UTF8)) {
> ...  // Part of the code is omitted here
> } catch (Exception e) {
> fs.delete(path, false);
> throw e;
> }
> LOG.info("Job {} has been archived at {}.", jobId, path);
> return path;
> } catch (IOException e) {
> LOG.error("Failed to archive job.", e);
> throw e;
> }
> }
> {code}
> The above modification will cause the archive to fail.
> *Method two:*
> In ClusterEntrypoint#stopClusterServices, before 
> ExecutorUtils.nonBlockingShutdown is called, submit a task that waits 10 
> seconds to ioExecutor.
> {code:java}
> ioExecutor.execute(new Runnable() {
> @Override
> public void run() {
> try {
> LOG.info("===ioExecutor before sleep");
> Thread.sleep(10000);
> LOG.info("===ioExecutor after sleep");
> } catch (InterruptedException e) {
> e.printStackTrace();
> }
> }
> });
> terminationFutures.add(ExecutorUtils.nonBlockingShutdown(shutdownTimeout, 
> TimeUnit.MILLISECONDS, ioExecutor));
> {code}
> According to the above modification, ===ioExecutor before sleep will be 
> printed, but ===ioExecutor after sleep will not be printed.
> *The root cause of the above issue is that all user threads (in the Akka 
> ActorSystem) have exited during the wait, so the daemon threads (in 
> ioExecutor) cannot run to completion.*
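This user-thread/daemon-thread interaction can be reproduced outside Flink. A minimal, self-contained sketch (class and helper names here are illustrative, not Flink code):

```java
public class DaemonExitDemo {

    // Helper mirroring how an I/O executor typically creates its workers:
    // daemon threads do not keep the JVM alive.
    static Thread newDaemonThread(Runnable task) {
        Thread t = new Thread(task);
        t.setDaemon(true);
        return t;
    }

    public static void main(String[] args) {
        Thread archiver = newDaemonThread(() -> {
            try {
                Thread.sleep(5_000); // stands in for the slow HDFS write
                System.out.println("archive finished");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        archiver.start();
        // main() (the last user thread) returns immediately; the only thread
        // left is a daemon, so the JVM exits and the write is cut short.
    }
}
```

Running this prints nothing from the archiver thread, mirroring how the archiving task in ioExecutor is terminated mid-write once the last user thread exits.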
>  
> {color:#de350b} *If you already understand my issue, you can skip the 
> following old version of the issue description, and browse the comment area 
> directly*{color}
>  
>  
>  
>  
>  
> h2. Older issue description (2021.02.04)

[GitHub] [flink] tillrohrmann closed pull request #14915: [BP-1.11][FLINK-21274][runtime] Change the ClusterEntrypoint.runClusterEntrypoint to wait on the result of clusterEntrypoint.getTerminationFut

2021-02-11 Thread GitBox


tillrohrmann closed pull request #14915:
URL: https://github.com/apache/flink/pull/14915


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] tillrohrmann commented on pull request #14915: [BP-1.11][FLINK-21274][runtime] Change the ClusterEntrypoint.runClusterEntrypoint to wait on the result of clusterEntrypoint.getTerminat

2021-02-11 Thread GitBox


tillrohrmann commented on pull request #14915:
URL: https://github.com/apache/flink/pull/14915#issuecomment-00722


   Manually merged.







[GitHub] [flink] rmetzger commented on a change in pull request #14737: [FLINK-19667] Add AWS Glue Schema Registry integration

2021-02-11 Thread GitBox


rmetzger commented on a change in pull request #14737:
URL: https://github.com/apache/flink/pull/14737#discussion_r574733754



##
File path: 
flink-formats/flink-avro-glue-schema-registry/src/test/java/org/apache/flink/formats/avro/glue/schema/registry/GlueSchemaRegistryAvroSchemaCoderTest.java
##
@@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.avro.glue.schema.registry;
+
+import 
com.amazonaws.services.schemaregistry.caching.AWSSchemaRegistrySerializerCache;
+import com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient;
+import 
com.amazonaws.services.schemaregistry.common.configs.GlueSchemaRegistryConfiguration;
+import 
com.amazonaws.services.schemaregistry.exception.AWSSchemaRegistryException;
+import 
com.amazonaws.services.schemaregistry.serializers.GlueSchemaRegistrySerializationFacade;
+import com.amazonaws.services.schemaregistry.utils.AWSSchemaRegistryConstants;
+import org.apache.avro.Schema;
+import org.hamcrest.Matchers;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.EnumSource;
+import org.mockito.Mock;
+import org.mockito.junit.jupiter.MockitoExtension;
+import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
+import software.amazon.awssdk.services.glue.model.EntityNotFoundException;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.lang.reflect.Field;
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.UUID;
+
+import static org.hamcrest.CoreMatchers.equalTo;
+import static org.hamcrest.CoreMatchers.notNullValue;
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyMap;
+import static org.mockito.ArgumentMatchers.anyString;
+import static org.mockito.Mockito.doCallRealMethod;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.when;

Review comment:
   For example, getting rid of the mocking in 
`testReadSchema_withValidParams_succeeds()` is actually fairly easy:
   
   ```diff
   --- 
a/flink-formats/flink-avro-glue-schema-registry/src/test/java/org/apache/flink/formats/avro/glue/schema/registry/GlueSchemaRegistryAvroSchemaCoderTest.java
   +++ 
b/flink-formats/flink-avro-glue-schema-registry/src/test/java/org/apache/flink/formats/avro/glue/schema/registry/GlueSchemaRegistryAvroSchemaCoderTest.java
   @@ -21,6 +21,7 @@ package org.apache.flink.formats.avro.glue.schema.registry;
import 
com.amazonaws.services.schemaregistry.caching.AWSSchemaRegistrySerializerCache;
import com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient;
import 
com.amazonaws.services.schemaregistry.common.configs.GlueSchemaRegistryConfiguration;
   +import com.amazonaws.services.schemaregistry.deserializers.AWSDeserializer;
import 
com.amazonaws.services.schemaregistry.exception.AWSSchemaRegistryException;
import 
com.amazonaws.services.schemaregistry.serializers.GlueSchemaRegistrySerializationFacade;
import 
com.amazonaws.services.schemaregistry.utils.AWSSchemaRegistryConstants;
   @@ -64,7 +65,8 @@ public class GlueSchemaRegistryAvroSchemaCoderTest {
@Mock private AWSSchemaRegistryClient mockClient;
@Mock private AwsCredentialsProvider mockCred;
@Mock private GlueSchemaRegistryConfiguration mockConfigs;
   -@Mock private GlueSchemaRegistryInputStreamDeserializer 
mockInputStreamDeserializer;
   +private GlueSchemaRegistryInputStreamDeserializer 
mockInputStreamDeserializer =
   +new MockGlueSchemaRegistryInputStreamDeserializer();
   
private static Schema userSchema;
private static User userDefinedPojo;
   @@ -83,6 +85,18 @@ public class GlueSchemaRegistryAvroSchemaCoderTest {
8, 116, 101, 115, 116, 0, 20, 0, 12, 118,

[GitHub] [flink] tillrohrmann closed pull request #14914: [BP-1.12][FLINK-21274][runtime] Change the ClusterEntrypoint.runClusterEntrypoint to wait on the result of clusterEntrypoint.getTerminationFut

2021-02-11 Thread GitBox


tillrohrmann closed pull request #14914:
URL: https://github.com/apache/flink/pull/14914


   







[GitHub] [flink] tillrohrmann commented on pull request #14914: [BP-1.12][FLINK-21274][runtime] Change the ClusterEntrypoint.runClusterEntrypoint to wait on the result of clusterEntrypoint.getTerminat

2021-02-11 Thread GitBox


tillrohrmann commented on pull request #14914:
URL: https://github.com/apache/flink/pull/14914#issuecomment-777681802


   Manually merged.







[GitHub] [flink] tillrohrmann closed pull request #14906: [FLINK-21274][runtime] Change the ClusterEntrypoint.runClusterEntrypoint to wait on the result of clusterEntrypoint.getTerminationFuture().get

2021-02-11 Thread GitBox


tillrohrmann closed pull request #14906:
URL: https://github.com/apache/flink/pull/14906


   







[jira] [Assigned] (FLINK-20580) Missing null value handling for SerializedValue's getByteArray()

2021-02-11 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-20580:
-

Assignee: Kezhu Wang

> Missing null value handling for SerializedValue's getByteArray() 
> -
>
> Key: FLINK-20580
> URL: https://issues.apache.org/jira/browse/FLINK-20580
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.13.0
>Reporter: Matthias
>Assignee: Kezhu Wang
>Priority: Minor
>  Labels: pull-request-available, starter
>
> {{SerializedValue}} allows wrapping {{null}} values. Because of this, 
> {{SerializedValue.getByteArray()}} might return {{null}}, which is not 
> properly handled in various locations (it's probably best to use the 
> IDE's "Find usages" to identify them). The most recent findings 
> (for now) are listed in the comments.
> We should add null handling in these cases and add tests for these cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-21138) KvStateServerHandler is not invoked with user code classloader

2021-02-11 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-21138.
-
Fix Version/s: 1.12.2
   1.11.4
   Resolution: Fixed

Fixed via 

1.13.0:

8d9ffcdb2d05f1d9931a4ed4bb7fd9de6e770551
1e6d1d24ed03eb6c7a5fb1d67fea47cf69552e6c
9c04e64be9eb12d07bc4c97572c658fc6ddca97d
aeeb6171adcd7e305e8c0c64a13ade64170fa799

1.12.2: 

86bdfff24112fcf2793edb82f27ea5a5c248cc5a
98da61b8e6dbd54b09da864e9f7af33eba9e0c40
dad7f57e61d53c076ea9b69a61a71d9ac6edf094
6cc60de8cd5415ff40d3eae6467c756a54506973

1.11.4:

d69d134590f46fea74ac1f7c664435393625533f
5f6a804d11bcdf930a6a44ea48dd7e8f663954a3
95281a7313d107e60a8103db3104a19cda5f9ece
30351d6a3d3f1a9949bf5f01e63293bf4a657424 

> KvStateServerHandler is not invoked with user code classloader
> --
>
> Key: FLINK-21138
> URL: https://issues.apache.org/jira/browse/FLINK-21138
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Queryable State
>Affects Versions: 1.11.2
>Reporter: Maciej Prochniak
>Assignee: Maciej Prochniak
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.4, 1.12.2, 1.13.0
>
> Attachments: TestJob.java, stacktrace
>
>
> When using e.g. custom Kryo serializers, the user code classloader has to 
> be set as the context classloader during invocation of methods such as 
> TypeSerializer.duplicate().
> KvStateServerHandler does not do this, which leads to exceptions like 
> ClassNotFoundException etc.
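The usual fix pattern for this class of bug is to swap the context classloader in a try/finally around the sensitive call. A hedged sketch (the helper name and shape are ours, not necessarily what the Flink patch does):

```java
import java.util.function.Supplier;

public final class ClassLoaderUtil {

    // Runs an action with the given (user-code) classloader installed as the
    // thread's context classloader, restoring the previous one afterwards.
    public static <T> T runWithContextClassLoader(ClassLoader userCodeClassLoader, Supplier<T> action) {
        final Thread current = Thread.currentThread();
        final ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(userCodeClassLoader);
        try {
            return action.get(); // e.g. () -> serializer.duplicate()
        } finally {
            current.setContextClassLoader(previous);
        }
    }
}
```

The try/finally is essential: the handler thread is pooled, so a leaked context classloader would affect unrelated requests.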





[GitHub] [flink] flinkbot edited a comment on pull request #14927: [FLINK-21339][tests] Enable and fix ExceptionUtilsITCase

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14927:
URL: https://github.com/apache/flink/pull/14927#issuecomment-777653410


   
   ## CI report:
   
   * 61a5d30ea74d58a16124fb9e38587322aa0a1720 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13252)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14913: [FLINK-21344] Support switching from/to rocks db with heap timers

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14913:
URL: https://github.com/apache/flink/pull/14913#issuecomment-776225670


   
   ## CI report:
   
   * 341578b2d03693afbd8179683c6f2696c229 UNKNOWN
   * 8919ba2b495f83126a54a110bc278547d506a131 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13250)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20580) Missing null value handling for SerializedValue's getByteArray()

2021-02-11 Thread Kezhu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283227#comment-17283227
 ] 

Kezhu Wang commented on FLINK-20580:


[~trohrmann] Yeah, I could give it a try.

> Missing null value handling for SerializedValue's getByteArray() 
> -
>
> Key: FLINK-20580
> URL: https://issues.apache.org/jira/browse/FLINK-20580
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.13.0
>Reporter: Matthias
>Priority: Minor
>  Labels: pull-request-available, starter
>
> {{SerializedValue}} allows wrapping {{null}} values. Because of this, 
> {{SerializedValue.getByteArray()}} might return {{null}}, which is not 
> properly handled in various locations (it's probably best to use the 
> IDE's "Find usages" to identify them). The most recent findings 
> (for now) are listed in the comments.
> We should add null handling in these cases and add tests for these cases.





[jira] [Commented] (FLINK-20580) Missing null value handling for SerializedValue's getByteArray()

2021-02-11 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283220#comment-17283220
 ] 

Till Rohrmann commented on FLINK-20580:
---

I think this is the better solution [~kezhuw]. Do you wanna take a stab at this 
problem?

> Missing null value handling for SerializedValue's getByteArray() 
> -
>
> Key: FLINK-20580
> URL: https://issues.apache.org/jira/browse/FLINK-20580
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.13.0
>Reporter: Matthias
>Priority: Minor
>  Labels: pull-request-available, starter
>
> {{SerializedValue}} allows wrapping {{null}} values. Because of this, 
> {{SerializedValue.getByteArray()}} might return {{null}}, which is not 
> properly handled in various locations (it's probably best to use the 
> IDE's "Find usages" to identify them). The most recent findings 
> (for now) are listed in the comments.
> We should add null handling in these cases and add tests for these cases.





[GitHub] [flink] flinkbot commented on pull request #14927: [FLINK-21339][tests] Enable and fix ExceptionUtilsITCase

2021-02-11 Thread GitBox


flinkbot commented on pull request #14927:
URL: https://github.com/apache/flink/pull/14927#issuecomment-777653410


   
   ## CI report:
   
   * 61a5d30ea74d58a16124fb9e38587322aa0a1720 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14927: [FLINK-21339][tests] Enable and fix ExceptionUtilsITCase

2021-02-11 Thread GitBox


flinkbot commented on pull request #14927:
URL: https://github.com/apache/flink/pull/14927#issuecomment-777646925


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 61a5d30ea74d58a16124fb9e38587322aa0a1720 (Thu Feb 11 
17:07:02 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-21339) ExceptionUtilsITCases is not run and fails

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21339:
---
Labels: pull-request-available  (was: )

> ExceptionUtilsITCases is not run and fails
> --
>
> Key: FLINK-21339
> URL: https://issues.apache.org/jira/browse/FLINK-21339
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> {code:java}
> [ERROR] 
> testIsDirectOutOfMemoryError(org.apache.flink.runtime.util.ExceptionUtilsITCases)
>   Time elapsed: 0.773 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: is ""
>  but: was "Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError"
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCases.run(ExceptionUtilsITCases.java:92)
>   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCases.testIsDirectOutOfMemoryError(ExceptionUtilsITCases.java:58)
> {code}
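A straightforward way to make such an assertion robust is to strip the JVM's JAVA_TOOL_OPTIONS banner from the captured output before comparing. A sketch of the idea (class and method names are illustrative; the actual fix in the PR may differ):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public final class OutputFilter {

    // The JVM prints "Picked up JAVA_TOOL_OPTIONS: ..." whenever that
    // environment variable is set; it is noise as far as the test's own
    // output is concerned, so drop those lines before asserting.
    public static String stripToolOptionsBanner(String processOutput) {
        return Arrays.stream(processOutput.split("\n"))
                .filter(line -> !line.startsWith("Picked up JAVA_TOOL_OPTIONS:"))
                .collect(Collectors.joining("\n"));
    }
}
```

The test can then assert that the filtered output, rather than the raw output, is empty, which passes both locally and on CI machines that set JAVA_TOOL_OPTIONS.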





[GitHub] [flink] zentol opened a new pull request #14927: [FLINK-21339][tests] Enable and fix ExceptionUtilsITCase

2021-02-11 Thread GitBox


zentol opened a new pull request #14927:
URL: https://github.com/apache/flink/pull/14927


   Renames the test so that it is picked up by surefire, and allows the test to 
pass on CI by accounting for the possibility of JAVA_TOOL_OPTIONS being set.







[jira] [Assigned] (FLINK-21339) ExceptionUtilsITCases is not run and fails

2021-02-11 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-21339:


Assignee: Chesnay Schepler

> ExceptionUtilsITCases is not run and fails
> --
>
> Key: FLINK-21339
> URL: https://issues.apache.org/jira/browse/FLINK-21339
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.13.0
>
>
> {code:java}
> [ERROR] 
> testIsDirectOutOfMemoryError(org.apache.flink.runtime.util.ExceptionUtilsITCases)
>   Time elapsed: 0.773 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: is ""
>  but: was "Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError"
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCases.run(ExceptionUtilsITCases.java:92)
>   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCases.testIsDirectOutOfMemoryError(ExceptionUtilsITCases.java:58)
> {code}





[jira] [Created] (FLINK-21365) Visibility issue in FutureUtils.ResultConjunctFuture.handleCompletedFuture

2021-02-11 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-21365:
-

 Summary: Visibility issue in 
FutureUtils.ResultConjunctFuture.handleCompletedFuture 
 Key: FLINK-21365
 URL: https://issues.apache.org/jira/browse/FLINK-21365
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.12.1
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan
 Fix For: 1.12.2, 1.13.0


* FutureUtils.ResultConjunctFuture.handleCompletedFuture can update the 
*results* array from multiple threads
* The array is declared as volatile, but this only makes the reference 
volatile, not its contents
* There are no other guards
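One standard way to close this kind of per-element visibility gap is `java.util.concurrent.atomic.AtomicReferenceArray`, whose element reads and writes have volatile semantics. A sketch of the idea (illustrative only, not the actual Flink patch):

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class ResultCollector<T> {

    // Unlike a plain `volatile T[]` (where only the reference is volatile),
    // element writes here are individually visible across the threads
    // completing the futures.
    private final AtomicReferenceArray<T> results;

    public ResultCollector(int size) {
        this.results = new AtomicReferenceArray<>(size);
    }

    public void complete(int index, T value) {
        results.set(index, value); // volatile write of the element
    }

    public T get(int index) {
        return results.get(index); // volatile read of the element
    }
}
```

An alternative is to keep the plain array and publish it through another synchronization point (e.g. the future-completion count), but the atomic array makes the guarantee explicit.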






[GitHub] [flink] flinkbot edited a comment on pull request #14913: [FLINK-21344] Support switching from/to rocks db with heap timers

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14913:
URL: https://github.com/apache/flink/pull/14913#issuecomment-776225670


   
   ## CI report:
   
   * 07288a4dde5a2fc8e5ea44c94b73ddfc93a17f90 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13187)
 
   * 341578b2d03693afbd8179683c6f2696c229 UNKNOWN
   * 8919ba2b495f83126a54a110bc278547d506a131 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13250)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-21227) Fixed: Upgrade Version com.google.protobuf:protoc:3.5.1:exe to 3.7.0 for (power)ppc64le support

2021-02-11 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-21227:
-
Priority: Major  (was: Blocker)

> Fixed: Upgrade Version com.google.protobuf:protoc:3.5.1:exe to 3.7.0 for 
> (power)ppc64le support
> ---
>
> Key: FLINK-21227
> URL: https://issues.apache.org/jira/browse/FLINK-21227
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Bivas
>Priority: Major
>
> com.google.protobuf:*protoc:3.5.1:exe* was not supported on Power. Later 
> versions added multi-arch support, including Power (ppc64le). Using 
> *protoc:3.7.0:exe*, we were able to build, and the E2E tests passed successfully.
> https://github.com/bivasda1/flink/blob/master/flink-formats/flink-parquet/pom.xml#L253





[jira] [Updated] (FLINK-21227) Upgrade Protobuf 3.7.0 for (power)ppc64le support

2021-02-11 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-21227:
-
Summary: Upgrade Protobuf 3.7.0 for (power)ppc64le support  (was: Fixed: 
Upgrade Version com.google.protobuf:protoc:3.5.1:exe to 3.7.0 for 
(power)ppc64le support)

> Upgrade Protobuf 3.7.0 for (power)ppc64le support
> -
>
> Key: FLINK-21227
> URL: https://issues.apache.org/jira/browse/FLINK-21227
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Bivas
>Priority: Major
>
> com.google.protobuf:*protoc:3.5.1:exe* was not supported on Power. Later 
> versions added multi-arch support, including Power (ppc64le). Using 
> *protoc:3.7.0:exe*, we were able to build, and the E2E tests passed successfully.
> https://github.com/bivasda1/flink/blob/master/flink-formats/flink-parquet/pom.xml#L253





[jira] [Updated] (FLINK-21364) piggyback finishedSplitIds in RequestSplitEvent

2021-02-11 Thread Steven Zhen Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Zhen Wu updated FLINK-21364:
---
Description: 
For some split assignment strategy, the enumerator/assigner needs to track the 
completed splits to advance watermark for event time alignment or rough 
ordering. Right now, `RequestSplitEvent` for FLIP-27 source doesn't support 
pass-along of the `finishedSplitIds` info and hence we have to create our own 
custom source event type for Iceberg source. 

Here is the proposal to add such optional info to `RequestSplitEvent`.
{code}
public RequestSplitEvent(
    @Nullable String hostName, 
    @Nullable Collection<String> finishedSplitIds)
{code}

  was:
For some split assignment strategy, the enumerator/assigner needs to track the 
completed splits to advance watermark for event time alignment or rough 
ordering. Right now, `RequestSplitEvent` for FLIP-27 source doesn't support 
pass-along of the `finishedSplitIds` info and hence we have to create our own 
custom source event type for Iceberg source. 

Here is the proposal to add such optional info to `RequestSplitEvent`.
```
public RequestSplitEvent(
    @Nullable String hostName, 
    @Nullable Collection<String> finishedSplitIds)
```


> piggyback finishedSplitIds in RequestSplitEvent
> ---
>
> Key: FLINK-21364
> URL: https://issues.apache.org/jira/browse/FLINK-21364
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.12.1
>Reporter: Steven Zhen Wu
>Priority: Major
>
> For some split assignment strategy, the enumerator/assigner needs to track 
> the completed splits to advance watermark for event time alignment or rough 
> ordering. Right now, `RequestSplitEvent` for FLIP-27 source doesn't support 
> pass-along of the `finishedSplitIds` info and hence we have to create our own 
> custom source event type for Iceberg source. 
> Here is the proposal to add such optional info to `RequestSplitEvent`.
> {code}
> public RequestSplitEvent(
>     @Nullable String hostName, 
>     @Nullable Collection<String> finishedSplitIds)
> {code}





[jira] [Commented] (FLINK-21364) piggyback finishedSplitIds in RequestSplitEvent

2021-02-11 Thread Steven Zhen Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283152#comment-17283152
 ] 

Steven Zhen Wu commented on FLINK-21364:


cc [~sewen] [~jqin] [~thomasWeise]

> piggyback finishedSplitIds in RequestSplitEvent
> ---
>
> Key: FLINK-21364
> URL: https://issues.apache.org/jira/browse/FLINK-21364
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.12.1
>Reporter: Steven Zhen Wu
>Priority: Major
>
> For some split assignment strategy, the enumerator/assigner needs to track 
> the completed splits to advance watermark for event time alignment or rough 
> ordering. Right now, `RequestSplitEvent` for FLIP-27 source doesn't support 
> pass-along of the `finishedSplitIds` info and hence we have to create our own 
> custom source event type for Iceberg source. 
> Here is the proposal to add such optional info to `RequestSplitEvent`.
> ```
> public RequestSplitEvent(
>     @Nullable String hostName, 
>     @Nullable Collection<String> finishedSplitIds)
> ```





[jira] [Created] (FLINK-21364) piggyback finishedSplitIds in RequestSplitEvent

2021-02-11 Thread Steven Zhen Wu (Jira)
Steven Zhen Wu created FLINK-21364:
--

 Summary: piggyback finishedSplitIds in RequestSplitEvent
 Key: FLINK-21364
 URL: https://issues.apache.org/jira/browse/FLINK-21364
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Common
Affects Versions: 1.12.1
Reporter: Steven Zhen Wu


For some split assignment strategy, the enumerator/assigner needs to track the 
completed splits to advance watermark for event time alignment or rough 
ordering. Right now, `RequestSplitEvent` for FLIP-27 source doesn't support 
pass-along of the `finishedSplitIds` info and hence we have to create our own 
custom source event type for Iceberg source. 

Here is the proposal to add such optional info to `RequestSplitEvent`.
```
public RequestSplitEvent(
    @Nullable String hostName, 
    @Nullable Collection<String> finishedSplitIds)
```
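For context, the kind of enumerator-side bookkeeping this proposal would enable might look like the following sketch. All names here are illustrative — neither Flink nor Iceberg API — and the watermark policy (min end timestamp over pending splits) is just one plausible choice:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Illustrative assigner-side bookkeeping: when a reader reports the splits
// it has finished, the pending set shrinks and the watermark can advance.
public class FinishedSplitTracker {

    // End timestamp per in-flight split, registered at assignment time.
    private final Map<String, Long> pendingSplitEndTimestamps = new HashMap<>();

    public void registerSplit(String splitId, long endTimestamp) {
        pendingSplitEndTimestamps.put(splitId, endTimestamp);
    }

    // Would be fed from a RequestSplitEvent carrying finishedSplitIds.
    public void onSplitsFinished(Collection<String> finishedSplitIds) {
        finishedSplitIds.forEach(pendingSplitEndTimestamps::remove);
    }

    // Watermark = minimum end timestamp over still-pending splits.
    public long currentWatermark() {
        return pendingSplitEndTimestamps.values().stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(Long.MAX_VALUE);
    }
}
```

Without `finishedSplitIds` on `RequestSplitEvent`, this information has to travel in a custom `SourceEvent`, which is exactly the duplication the issue describes.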





[GitHub] [flink] flinkbot edited a comment on pull request #14913: [FLINK-21344] Support switching from/to rocks db with heap timers

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14913:
URL: https://github.com/apache/flink/pull/14913#issuecomment-776225670


   
   ## CI report:
   
   * 07288a4dde5a2fc8e5ea44c94b73ddfc93a17f90 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13187)
 
   * 341578b2d03693afbd8179683c6f2696c229 UNKNOWN
   * 8919ba2b495f83126a54a110bc278547d506a131 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20580) Missing null value handling for SerializedValue's getByteArray()

2021-02-11 Thread Kezhu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283134#comment-17283134
 ] 

Kezhu Wang commented on FLINK-20580:


[~mapohl] [~trohrmann] Might it be better to return an empty array instead of null?

> Missing null value handling for SerializedValue's getByteArray() 
> -
>
> Key: FLINK-20580
> URL: https://issues.apache.org/jira/browse/FLINK-20580
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.13.0
>Reporter: Matthias
>Priority: Minor
>  Labels: pull-request-available, starter
>
> {{SerializedValue}} allows to wrap {{null}} values. Because of this, 
> {{SerializedValue.getByteArray()}} might return {{null}} which is not 
> properly handled in different locations (it's probably the best to use the 
> IDEs "Find usages" to identify these locations). The most recent findings 
> (for now) are listed in the comments.
> We should add null handling in these cases and add tests for these cases.





[jira] [Closed] (FLINK-21361) FlinkRelMdUniqueKeys matches on AbstractCatalogTable instead of CatalogTable

2021-02-11 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-21361.

Fix Version/s: 1.13.0
   1.12.2
   Resolution: Fixed

Fixed in 1.12.2: a3ec04128dc27b63be13671357c4ffb0f853e749
Fixed in 1.13.0: 3f7db36fe0ac1196fd33db48e4a0ac9729b02012

> FlinkRelMdUniqueKeys matches on AbstractCatalogTable instead of CatalogTable
> 
>
> Key: FLINK-21361
> URL: https://issues.apache.org/jira/browse/FLINK-21361
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Ingo Bürk
>Assignee: Ingo Bürk
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.2, 1.13.0
>
>
> In FlinkRelMdUniqueKeys there's a match on AbstractCatalogTable rather than 
> the underlying interface CatalogTable. This causes exceptions e.g. during 
> temporal table joins when using alternative catalog table implementations.





[jira] [Commented] (FLINK-19763) Missing test MetricUtilsTest.testNonHeapMetricUsageNotStatic

2021-02-11 Thread Kezhu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283125#comment-17283125
 ] 

Kezhu Wang commented on FLINK-19763:


Hi all, I checked the reverted commit and think it fails because direct 
memory does not belong to what JMX calls "non-heap" memory.

Besides this, {{MetricUtilsTest.testMetaspaceMetricUsageNotStatic}} fails at a 
rate of 1/5 in IDEA "Repeat Until Failure".

I plan to define a class loader to redefine/reload an existing class (say, 
{{MetricUtils}}) to solve both. I pushed a [preview 
work|https://github.com/kezhuw/flink/commit/f5676dcfbe1986e78d8a51434305ddeb1d0fd9fb]
 for evaluation.

At first glance, it seems overkill to resort to defining a new class at runtime, 
but it actually performs well (100K runs without failure) and is resistant to 
optimization in my opinion.

[~mapohl] [~chesnay] Could I take over this issue if this approach sounds good 
to you?
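
The core of that reload idea can be sketched as follows (a simplified illustration, not the linked preview commit: each fresh classloader defines its own copy of the class, so static state is re-initialized per load; it assumes the class was loaded from a regular file/jar classpath):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ReloadSketch {

    // Target class whose static state we want re-initialized per load.
    public static class Counter {
        public static int value = 0;
    }

    // Loads Counter through two fresh classloaders; each load yields a distinct
    // Class object with its own copy of the static field.
    public static boolean loadsAreIsolated() throws Exception {
        URL[] cp = {ReloadSketch.class.getProtectionDomain().getCodeSource().getLocation()};
        // Parent skips the application classloader, so each URLClassLoader
        // has to define Counter itself instead of delegating upward.
        ClassLoader parent = ReloadSketch.class.getClassLoader().getParent();
        try (URLClassLoader l1 = new URLClassLoader(cp, parent);
             URLClassLoader l2 = new URLClassLoader(cp, parent)) {
            Class<?> c1 = Class.forName("ReloadSketch$Counter", true, l1);
            Class<?> c2 = Class.forName("ReloadSketch$Counter", true, l2);
            return c1 != c2; // distinct classes, hence distinct static state
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadsAreIsolated());
    }
}
```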

> Missing test MetricUtilsTest.testNonHeapMetricUsageNotStatic
> 
>
> Key: FLINK-19763
> URL: https://issues.apache.org/jira/browse/FLINK-19763
> Project: Flink
>  Issue Type: Test
>  Components: Runtime / Metrics
>Affects Versions: 1.10.2, 1.11.2
>Reporter: Matthias
>Priority: Minor
>  Labels: starter
> Fix For: 1.13.0
>
>
> We have tests for the heap and metaspace to check whether the metric is 
> dynamically generated. The test for the non-heap space is missing. There was 
> a test added in [296107e|https://github.com/apache/flink/commit/296107e] but 
> reverted in [2d86256|https://github.com/apache/flink/commit/2d86256] as it 
> appeared that the test is partially failing.
> We might want to add the test again fixing the issue.





[GitHub] [flink] flinkbot edited a comment on pull request #14913: [FLINK-21344] Support switching from/to rocks db with heap timers

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14913:
URL: https://github.com/apache/flink/pull/14913#issuecomment-776225670


   
   ## CI report:
   
   * 07288a4dde5a2fc8e5ea44c94b73ddfc93a17f90 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13187)
 
   * 341578b2d03693afbd8179683c6f2696c229 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] beyond1920 commented on a change in pull request #14905: [FLINK-19608][table-planner-blink] Support TVF based window aggreagte in planner

2021-02-11 Thread GitBox


beyond1920 commented on a change in pull request #14905:
URL: https://github.com/apache/flink/pull/14905#discussion_r573791008



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/trait/RelWindowProperties.java
##
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.trait;
+
+import org.apache.flink.table.planner.plan.logical.WindowSpec;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeChecks;
+
+import org.apache.calcite.util.ImmutableBitSet;
+
+import javax.annotation.Nullable;
+
+import java.util.Objects;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** It describes the information of window properties of a RelNode. */
+public class RelWindowProperties {
+
+private final ImmutableBitSet windowStartColumns;
+private final ImmutableBitSet windowEndColumns;
+private final ImmutableBitSet windowTimeColumns;

Review comment:
   @wuchong, `windowStartColumns`/`windowEndColumns`/`windowTimeColumns` could 
only be empty or contain one element, right? Is it possible that those 
`ImmutableBitSet`s contain more than one element?









[GitHub] [flink] beyond1920 commented on a change in pull request #14905: [FLINK-19608][table-planner-blink] Support TVF based window aggreagte in planner

2021-02-11 Thread GitBox


beyond1920 commented on a change in pull request #14905:
URL: https://github.com/apache/flink/pull/14905#discussion_r573811344



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/logical/windowingSpecs.scala
##
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.logical
+
+import org.apache.flink.table.types.logical.LogicalType
+import org.apache.flink.table.types.logical.utils.LogicalTypeChecks
+import org.apache.flink.util.TimeUtils.formatWithHighestUnit
+
+import java.time.Duration
+import java.util.Objects
+
+/**
+ * Logical representation of a windowing strategy.
+ */
+sealed trait WindowingStrategy {
+  val window: WindowSpec
+  val timeAttributeType: LogicalType
+  val isRowtime: Boolean = 
LogicalTypeChecks.isRowtimeAttribute(timeAttributeType)
+  def toSummaryString(inputFieldNames: Array[String]): String
+}
+
+case class TimeAttributeWindowingStrategy(
+timeAttribute: Int,
+timeAttributeType: LogicalType,
+window: WindowSpec)
+extends WindowingStrategy {
+  override def toSummaryString(inputFieldNames: Array[String]): String = {
+val windowing = s"time_col=[${inputFieldNames(timeAttribute)}]"
+window.toSummaryString(windowing)
+  }
+}
+
+case class WindowAttachedWindowingStrategy(
+windowStart: Int,
+windowEnd: Int,
+timeAttributeType: LogicalType,
+window: WindowSpec)
+  extends WindowingStrategy {
+  override def toSummaryString(inputFieldNames: Array[String]): String = {
+val windowing = s"win_start=[${inputFieldNames(windowStart)}], " +
+  s"win_end=[${inputFieldNames(windowEnd)}]"
+window.toSummaryString(windowing)
+  }
+}
+
+// ------------------------------------------------------------------------
+// Window specifications
+// ------------------------------------------------------------------------
+
+/**
+ * Logical representation of a window specification.
+ */
+sealed trait WindowSpec {
+
+  def toSummaryString(windowing: String): String
+
+  def hashCode(): Int
+
+  def equals(obj: Any): Boolean
+}
+
+case class TumblingWindowSpec(size: Duration) extends WindowSpec {

Review comment:
   Does this pr plan to support `offset`?

##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/stream/PushWindowTableFunctionIntoWindowAggregateRule.scala
##
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.physical.stream
+
+import org.apache.calcite.plan.RelOptRule.{any, operand}
+import org.apache.calcite.plan.{RelOptRule, RelOptRuleCall}
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.{RelCollations, RelNode}
+import org.apache.calcite.rex._
+import org.apache.calcite.util.ImmutableBitSet
+import org.apache.flink.table.planner.plan.`trait`.FlinkRelDistribution
+import 
org.apache.flink.table.planner.plan.logical.TimeAttributeWindowingStrategy
+import org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery
+import org.apache.flink.table.planner.plan.nodes.FlinkConventions
+import 
org.apache.flink.table.planner.plan.nodes.physical.stream.{StreamPhysicalCalc, 
StreamPhysicalExchange, StreamPhysicalWindowAggregate, 
StreamPhysicalWindowTableFunction}
+impo

[GitHub] [flink] flinkbot edited a comment on pull request #14910: [FLINK-21259] Add Failing state for DeclarativeScheduler

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14910:
URL: https://github.com/apache/flink/pull/14910#issuecomment-775986669


   
   ## CI report:
   
   * e711078acde12b997d01ac2282840d3cd958d7bd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13217)
 
   * e7130748c1cabb0a2e7fc436070da0c1d1c8450a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13248)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14847: [FLINK-21030][runtime] Add global failover in case of a stop-with-savepoint failure

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14847:
URL: https://github.com/apache/flink/pull/14847#issuecomment-772387941


   
   ## CI report:
   
   * 9a2ea20ce0803e48edfc3ab7bcc02078b7410fbf UNKNOWN
   * b79fc6cf9ccc55013142d91b980177912e0a05ac Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13228)
 
   * 1e5c54f362a2c38363325b757e361c86f28e8128 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13235)
 
   * 42dbd4fa164e86f7637935e89eb6af720085b3de Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13245)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14838: [FLINK-19503][state] Add StateChangelog API

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14838:
URL: https://github.com/apache/flink/pull/14838#issuecomment-772060058


   
   ## CI report:
   
   * 38b7950b0c94efc6ed0dfc70828712e3b353ca65 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13231)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14910: [FLINK-21259] Add Failing state for DeclarativeScheduler

2021-02-11 Thread GitBox


flinkbot edited a comment on pull request #14910:
URL: https://github.com/apache/flink/pull/14910#issuecomment-775986669


   
   ## CI report:
   
   * e711078acde12b997d01ac2282840d3cd958d7bd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13217)
 
   * e7130748c1cabb0a2e7fc436070da0c1d1c8450a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






