[jira] [Commented] (FLINK-21143) [runtime] Flink job uses the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-25 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271944#comment-17271944
 ] 

Yang Wang commented on FLINK-21143:
---

Do you mean the config option {{yarn.provided.lib.dirs}} does not take effect?

It would help a lot if you could share your submission command and client logs.

> [runtime] Flink job uses the lib jars instead of the 
> `yarn.provided.lib.dirs` config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> In Flink 1.12.0, I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all the jars to HDFS. However, after I updated the 
> jars in HDFS (not in flink-1.12.0/lib/), newly submitted jobs still used the 
> lib/ jars instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21147) Resuming Savepoint (file, async, no parallelism change) fails with UnknownHostException

2021-01-25 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-21147:
-
Labels: test-stability  (was: )

> Resuming Savepoint (file, async, no parallelism change) fails with 
> UnknownHostException
> ---
>
> Key: FLINK-21147
> URL: https://issues.apache.org/jira/browse/FLINK-21147
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.11.4
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12484=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9
> {code}
> 2021-01-25 21:31:14,388 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: jobmanager.rpc.address, localhost
> 2021-01-25 21:31:14,399 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: jobmanager.rpc.port, 6123
> 2021-01-25 21:31:14,399 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: jobmanager.memory.process.size, 1600m
> 2021-01-25 21:31:14,400 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: taskmanager.memory.process.size, 1728m
> 2021-01-25 21:31:14,400 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: parallelism.default, 1
> 2021-01-25 21:31:14,400 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: jobmanager.execution.failover-strategy, region
> 2021-01-25 21:31:14,400 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: taskmanager.numberOfTaskSlots, 2
> 2021-01-25 21:31:14,400 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: metrics.fetcher.update-interval, 2000
> 2021-01-25 21:31:14,401 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: metrics.reporter.slf4j.factory.class, 
> org.apache.flink.metrics.slf4j.Slf4jReporterFactory
> 2021-01-25 21:31:14,401 INFO  
> org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
> configuration property: metrics.reporter.slf4j.interval, 1 SECONDS
> 2021-01-25 21:31:14,470 INFO  org.apache.flink.core.fs.FileSystem 
>  [] - Hadoop is not in the classpath/dependencies. The extended 
> set of supported File Systems via Hadoop is not available.
> 2021-01-25 21:31:14,535 ERROR org.apache.flink.core.fs.local.LocalFileSystem  
>  [] - Could not resolve local host
> java.net.UnknownHostException: fv-az227-139: fv-az227-139: Name or service 
> not known
>   at java.net.InetAddress.getLocalHost(InetAddress.java:1506) 
> ~[?:1.8.0_275]
>   at 
> org.apache.flink.core.fs.local.LocalFileSystem.(LocalFileSystem.java:95)
>  [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
>   at 
> org.apache.flink.core.fs.local.LocalFileSystem.(LocalFileSystem.java:71)
>  [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
>   at 
> org.apache.flink.core.fs.local.LocalFileSystemFactory.getScheme(LocalFileSystemFactory.java:33)
>  [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
>   at org.apache.flink.core.fs.FileSystem.initialize(FileSystem.java:344) 
> [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
>   at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:374)
>  [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
>   at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:360)
>  [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
>   at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:336)
>  [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
> Caused by: java.net.UnknownHostException: fv-az227-139: Name or service not 
> known
>   at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) 
> ~[?:1.8.0_275]
>   at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) 
> ~[?:1.8.0_275]
>   at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) 
> ~[?:1.8.0_275]
>   at java.net.InetAddress.getLocalHost(InetAddress.java:1501) 
> ~[?:1.8.0_275]
>   ... 7 more
> 2021-01-25 21:31:14,599 INFO  
> org.apache.flink.runtime.security.modules.HadoopModuleFactory [] - Cannot 
> create Hadoop Security Module because Hadoop cannot be found in the Classpath.
> 2021-01-25 

[jira] [Commented] (FLINK-21006) HBaseTablePlanTest tests failed in hadoop 3.1.3 with "java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Obje

2021-01-25 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271942#comment-17271942
 ] 

Dawid Wysakowicz commented on FLINK-21006:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12483=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51

> HBaseTablePlanTest tests failed in hadoop 3.1.3 with 
> "java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V"
> -
>
> Key: FLINK-21006
> URL: https://issues.apache.org/jira/browse/FLINK-21006
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12159=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51]
> {code:java}
> 2021-01-15T22:48:58.1843544Z Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> 2021-01-15T22:48:58.1844358Z  at 
> org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> 2021-01-15T22:48:58.1845035Z  at 
> org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> 2021-01-15T22:48:58.1845805Z  at 
> org.apache.flink.connector.hbase.options.HBaseOptions.getHBaseConfiguration(HBaseOptions.java:157)
> 2021-01-15T22:48:58.1846960Z  at 
> org.apache.flink.connector.hbase1.HBase1DynamicTableFactory.createDynamicTableSource(HBase1DynamicTableFactory.java:73)
> 2021-01-15T22:48:58.1848020Z  at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:119)
> 2021-01-15T22:48:58.1848574Z  ... 49 more
> {code}
> The exception seems to be caused by a conflicting version of Guava on the 
> classpath. 
>  
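A `NoSuchMethodError` like the one above typically means an older Guava jar (lacking the three-argument `checkArgument(boolean, String, Object)` overload) shadowed the expected one on the test classpath. A small diagnostic sketch — the class name `ClassOrigin` is illustrative, not from the Flink codebase — that prints which jar a class was actually loaded from, which helps pin down the conflicting version:

```java
import java.security.CodeSource;

public class ClassOrigin {
    // Report the jar or directory a class was loaded from; JDK core classes
    // loaded by the bootstrap loader have no CodeSource, so flag them.
    public static String originOf(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        return src == null ? "<bootstrap>" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // In the Hadoop 3.1.3 case one would check e.g.
        // originOf(com.google.common.base.Preconditions.class).
        System.out.println(originOf(String.class));
    }
}
```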





[jira] [Created] (FLINK-21148) YARNSessionFIFOSecuredITCase cannot connect to BlobServer

2021-01-25 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-21148:


 Summary: YARNSessionFIFOSecuredITCase cannot connect to BlobServer
 Key: FLINK-21148
 URL: https://issues.apache.org/jira/browse/FLINK-21148
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN, Tests
Affects Versions: 1.13.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12483=logs=f450c1a5-64b1-5955-e215-49cb1ad5ec88=ea63c80c-957f-50d1-8f67-3671c14686b9

{code}
java.io.IOException: Could not connect to BlobServer at address 
29c91476178c/172.21.0.2:44412
java.io.IOException: Could not connect to BlobServer at address 
29c91476178c/172.21.0.2:44412
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:102) 
~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
at 
org.apache.flink.runtime.blob.BlobClient.downloadFromBlobServer(BlobClient.java:137)
 [flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
at 
org.apache.flink.yarn.YarnTestBase.ensureNoProhibitedStringInLogFiles(YarnTestBase.java:538)
at 
org.apache.flink.yarn.YARNSessionFIFOITCase.checkForProhibitedLogContents(YARNSessionFIFOITCase.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)


{code}





[GitHub] [flink] flinkbot edited a comment on pull request #14729: [FLINK-21092][FLINK-21093][FLINK-21094][FLINK-21096][table-planner-blink] Support ExecNode plan serialization/deserialization for `IN

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14729:
URL: https://github.com/apache/flink/pull/14729#issuecomment-765399004


   
   ## CI report:
   
   * b39c8c7d0f68afd25fec0cf79b7fdf9f0d8d37d7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12492)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14733: [FLINK-20968][table-planner-blink] Remove legacy exec nodes

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14733:
URL: https://github.com/apache/flink/pull/14733#issuecomment-765889414


   
   ## CI report:
   
   * de291068835c9b7540c3a0544c6dd589f0c3ada2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12408)
 
   * d1520a7a19f9028db964c2332d22693996299cfe Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12501)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Reopened] (FLINK-21104) UnalignedCheckpointITCase.execute failed with "IllegalStateException"

2021-01-25 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reopened FLINK-21104:
--

It seems this still occurs on 1.12:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12485=logs=219e462f-e75e-506c-3671-5017d866ccf6=4c5dc768-5c82-5ab0-660d-086cb90b76a0

> UnalignedCheckpointITCase.execute failed with "IllegalStateException"
> -
>
> Key: FLINK-21104
> URL: https://issues.apache.org/jira/browse/FLINK-21104
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0, 1.12.2
>Reporter: Huang Xingbo
>Assignee: Arvid Heise
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0, 1.12.2
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12383=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0]
> {code:java}
> 2021-01-22T15:17:34.6711152Z [ERROR] execute[Parallel union, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 3.903 s  <<< ERROR!
> 2021-01-22T15:17:34.6711736Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-01-22T15:17:34.6712204Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2021-01-22T15:17:34.6712779Z  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117)
> 2021-01-22T15:17:34.6713344Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2021-01-22T15:17:34.6713816Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2021-01-22T15:17:34.6714454Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-01-22T15:17:34.6714952Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-01-22T15:17:34.6715472Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:238)
> 2021-01-22T15:17:34.6716026Z  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2021-01-22T15:17:34.6716631Z  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2021-01-22T15:17:34.6717128Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-01-22T15:17:34.6717616Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-01-22T15:17:34.6718105Z  at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1046)
> 2021-01-22T15:17:34.6718596Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:264)
> 2021-01-22T15:17:34.6718973Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:261)
> 2021-01-22T15:17:34.6719364Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
> 2021-01-22T15:17:34.6719748Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
> 2021-01-22T15:17:34.6720155Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2021-01-22T15:17:34.6720641Z  at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
> 2021-01-22T15:17:34.6721236Z  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> 2021-01-22T15:17:34.6721706Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> 2021-01-22T15:17:34.6722205Z  at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
> 2021-01-22T15:17:34.6722663Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
> 2021-01-22T15:17:34.6723214Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
> 2021-01-22T15:17:34.6723723Z  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
> 2021-01-22T15:17:34.6724146Z  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
> 2021-01-22T15:17:34.6724726Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2021-01-22T15:17:34.6725198Z  at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
> 2021-01-22T15:17:34.6725861Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
> 2021-01-22T15:17:34.6726525Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
> 2021-01-22T15:17:34.6727278Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
> 

[jira] [Updated] (FLINK-20431) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134 expected:<10> but was:<1>

2021-01-25 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-20431:
-
Affects Version/s: 1.12.2

> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134
>  expected:<10> but was:<1>
> -
>
> Key: FLINK-20431
> URL: https://issues.apache.org/jira/browse/FLINK-20431
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.12.2
>Reporter: Huang Xingbo
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0, 1.12.2
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10351=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5]
> [ERROR] Failures: 
> [ERROR] 
> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134
>  expected:<10> but was:<1>
>  
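An assertion failure of the form `expected:<10> but was:<1>` in an asynchronous offset-commit test usually indicates the check ran before all commits had completed. A generic retry-until-condition helper, shown only as a common deflaking pattern for such tests, not as the fix the Flink test actually adopts:

```java
import java.util.function.BooleanSupplier;

public class WaitUntil {
    // Poll a condition until it holds or the timeout elapses; return the
    // final evaluation so callers can assert on it.
    public static boolean waitUntil(BooleanSupplier cond, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return cond.getAsBoolean();
    }
}
```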





[jira] [Commented] (FLINK-20431) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134 expected:<10> but was:<1>

2021-01-25 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271937#comment-17271937
 ] 

Dawid Wysakowicz commented on FLINK-20431:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12485=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19

> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134
>  expected:<10> but was:<1>
> -
>
> Key: FLINK-20431
> URL: https://issues.apache.org/jira/browse/FLINK-20431
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0, 1.12.2
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10351=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5]
> [ERROR] Failures: 
> [ERROR] 
> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134
>  expected:<10> but was:<1>
>  





[jira] [Created] (FLINK-21147) Resuming Savepoint (file, async, no parallelism change) fails with UnknownHostException

2021-01-25 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-21147:


 Summary: Resuming Savepoint (file, async, no parallelism change) 
fails with UnknownHostException
 Key: FLINK-21147
 URL: https://issues.apache.org/jira/browse/FLINK-21147
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.11.4
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12484=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9

{code}
2021-01-25 21:31:14,388 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: jobmanager.rpc.address, localhost
2021-01-25 21:31:14,399 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: jobmanager.rpc.port, 6123
2021-01-25 21:31:14,399 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: jobmanager.memory.process.size, 1600m
2021-01-25 21:31:14,400 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: taskmanager.memory.process.size, 1728m
2021-01-25 21:31:14,400 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: parallelism.default, 1
2021-01-25 21:31:14,400 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: jobmanager.execution.failover-strategy, region
2021-01-25 21:31:14,400 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: taskmanager.numberOfTaskSlots, 2
2021-01-25 21:31:14,400 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: metrics.fetcher.update-interval, 2000
2021-01-25 21:31:14,401 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: metrics.reporter.slf4j.factory.class, 
org.apache.flink.metrics.slf4j.Slf4jReporterFactory
2021-01-25 21:31:14,401 INFO  
org.apache.flink.configuration.GlobalConfiguration   [] - Loading 
configuration property: metrics.reporter.slf4j.interval, 1 SECONDS
2021-01-25 21:31:14,470 INFO  org.apache.flink.core.fs.FileSystem   
   [] - Hadoop is not in the classpath/dependencies. The extended set 
of supported File Systems via Hadoop is not available.
2021-01-25 21:31:14,535 ERROR org.apache.flink.core.fs.local.LocalFileSystem
   [] - Could not resolve local host
java.net.UnknownHostException: fv-az227-139: fv-az227-139: Name or service not 
known
at java.net.InetAddress.getLocalHost(InetAddress.java:1506) 
~[?:1.8.0_275]
at 
org.apache.flink.core.fs.local.LocalFileSystem.(LocalFileSystem.java:95) 
[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
at 
org.apache.flink.core.fs.local.LocalFileSystem.(LocalFileSystem.java:71)
 [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
at 
org.apache.flink.core.fs.local.LocalFileSystemFactory.getScheme(LocalFileSystemFactory.java:33)
 [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
at org.apache.flink.core.fs.FileSystem.initialize(FileSystem.java:344) 
[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:374)
 [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:360)
 [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:336)
 [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
Caused by: java.net.UnknownHostException: fv-az227-139: Name or service not 
known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) 
~[?:1.8.0_275]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) 
~[?:1.8.0_275]
at 
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) 
~[?:1.8.0_275]
at java.net.InetAddress.getLocalHost(InetAddress.java:1501) 
~[?:1.8.0_275]
... 7 more
2021-01-25 21:31:14,599 INFO  
org.apache.flink.runtime.security.modules.HadoopModuleFactory [] - Cannot 
create Hadoop Security Module because Hadoop cannot be found in the Classpath.
2021-01-25 21:31:14,603 INFO  
org.apache.flink.runtime.security.modules.JaasModule [] - Jaas file 
will be created as /tmp/jaas-4712731418375480039.conf.
2021-01-25 21:31:14,630 INFO  
org.apache.flink.runtime.security.contexts.HadoopSecurityContextFactory [] - 
Cannot install HadoopSecurityContext because Hadoop cannot be found in the 
Classpath.
2021-01-25 21:31:14,698 INFO  org.apache.flink.configuration.Configuration  
   [] - Config 

[jira] [Commented] (FLINK-15534) YARNSessionCapacitySchedulerITCase#perJobYarnClusterWithParallelism failed due to NPE

2021-01-25 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271933#comment-17271933
 ] 

Dawid Wysakowicz commented on FLINK-15534:
--

For the sake of reporting: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12474=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf

> YARNSessionCapacitySchedulerITCase#perJobYarnClusterWithParallelism failed 
> due to NPE
> -
>
> Key: FLINK-15534
> URL: https://issues.apache.org/jira/browse/FLINK-15534
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Assignee: Yang Wang
>Priority: Blocker
>
> As titled, travis run fails with below error:
> {code}
> 07:29:22.417 [ERROR] 
> perJobYarnClusterWithParallelism(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)
>   Time elapsed: 16.263 s  <<< ERROR!
> java.lang.NullPointerException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:128)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:900)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:660)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:930)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:273)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:507)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
>   at 
> org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase.perJobYarnClusterWithParallelism(YARNSessionCapacitySchedulerITCase.java:405)
> Caused by: org.apache.hadoop.ipc.RemoteException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:128)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:900)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:660)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:930)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:273)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:507)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
>   at 
> org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase.perJobYarnClusterWithParallelism(YARNSessionCapacitySchedulerITCase.java:405)
> {code}
> https://api.travis-ci.org/v3/job/634588108/log.txt





[GitHub] [flink] flinkbot edited a comment on pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14755:
URL: https://github.com/apache/flink/pull/14755#issuecomment-767282148


   
   ## CI report:
   
   * eacd669b14e1be1dee5d589016318b51e5bdd56e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12494)
 
   * 0eb2a54315aab1058d59918ce625391dda994b6a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12496)
 
   * 3f9f835376f1e81b438983af9ce3e263991543aa Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12499)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14733: [FLINK-20968][table-planner-blink] Remove legacy exec nodes

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14733:
URL: https://github.com/apache/flink/pull/14733#issuecomment-765889414


   
   ## CI report:
   
   * de291068835c9b7540c3a0544c6dd589f0c3ada2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12408)
 
   * d1520a7a19f9028db964c2332d22693996299cfe UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-21144) KubernetesResourceManagerDriver#tryResetPodCreationCoolDown causes fatal error

2021-01-25 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-21144:
--
Fix Version/s: (was: 1.13.0)

> KubernetesResourceManagerDriver#tryResetPodCreationCoolDown causes fatal error
> --
>
> Key: FLINK-21144
> URL: https://issues.apache.org/jira/browse/FLINK-21144
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.1
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.12.2
>
>
> {{KubernetesResourceManagerDriver#tryResetPodCreationCoolDown}} calls an 
> unimplemented method, {{RpcEndpoint.MainThreadExecutor#schedule(Callable 
> callable, long delay, TimeUnit unit)}}. This causes a fatal error and makes 
> the JobManager terminate abnormally.





[jira] [Commented] (FLINK-21144) KubernetesResourceManagerDriver#tryResetPodCreationCoolDown causes fatal error

2021-01-25 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271929#comment-17271929
 ] 

Yang Wang commented on FLINK-21144:
---

{{KubernetesResourceManagerDriver#tryResetPodCreationCoolDown}} has already 
been replaced with {{ThresholdMeter}} in FLINK-10868, so we could either 
backport that change to 1.12 or add a quick fix just for the 1.12 branch.

> KubernetesResourceManagerDriver#tryResetPodCreationCoolDown causes fatal error
> --
>
> Key: FLINK-21144
> URL: https://issues.apache.org/jira/browse/FLINK-21144
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.1
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.13.0, 1.12.2
>
>
> {{KubernetesResourceManagerDriver#tryResetPodCreationCoolDown}} calls an 
> unimplemented method, {{RpcEndpoint.MainThreadExecutor#schedule(Callable 
> callable, long delay, TimeUnit unit)}}. This causes a fatal error and makes 
> the JobManager terminate abnormally.





[jira] [Created] (FLINK-21146) 【SQL】Flink SQL Client not support specify the queue to submit the job

2021-01-25 Thread zhisheng (Jira)
zhisheng created FLINK-21146:


 Summary: 【SQL】Flink SQL Client not support specify the queue to 
submit the job
 Key: FLINK-21146
 URL: https://issues.apache.org/jira/browse/FLINK-21146
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / YARN, Table SQL / Client
Affects Versions: 1.12.0
Reporter: zhisheng


In Hive, we can submit a job to a specific YARN queue like this:
{code:java}
set mapreduce.job.queuename=queue1;
{code}

With spark-sql, we can submit a job to a specific YARN queue like this:
{code:java}
spark-sql --queue xxx
{code}

But the Flink SQL Client cannot specify which queue a job is submitted to; it 
always uses the `default` queue, which is not friendly in a production 
environment.
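As a hedged aside (not stated in this thread): Flink's YARN deployment exposes a `yarn.application.queue` option, so starting the cluster that the SQL Client attaches to with that option set should route the jobs onto the desired queue. A configuration sketch, assuming the option is honored by your deployment mode:

```shell
# Start the YARN session the SQL Client will attach to on a specific queue:
./bin/yarn-session.sh -Dyarn.application.queue=queue1 -d

# Equivalent setting in conf/flink-conf.yaml (read by the SQL Client as well):
#   yarn.application.queue: queue1
```

This is a workaround sketch for session mode only; a first-class SQL Client flag, as requested here, would still be the cleaner fix.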





[jira] [Commented] (FLINK-21137) Maven plugin surefire failed due to test failure

2021-01-25 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271926#comment-17271926
 ] 

Matthias commented on FLINK-21137:
--

You could run the test locally using {{mvn -pl flink-tests test 
-Dtest=UnalignedCheckpointITCase}}. Alternatively, you could run the test 
{{org.apache.flink.test.checkpointing.UnalignedCheckpointITCase}} from within 
your IDE. IntelliJ has a feature that reruns a test repeatedly until it fails, 
which might be handy since {{UnalignedCheckpointITCase}} has been flaky 
recently (i.e. it fails only intermittently).

But just rebasing your branch should be enough. Without knowing all the details 
I would suspect that the failure is unrelated to your changes.

> Maven plugin surefire failed due to test failure
> 
>
> Key: FLINK-21137
> URL: https://issues.apache.org/jira/browse/FLINK-21137
> Project: Flink
>  Issue Type: New Feature
>  Components: Build System / CI
>Reporter: Linyu Yao
>Priority: Major
>
> I'm creating a pull request to add integration with AWS Glue Schema Registry, 
> more details of this new feature can be found here 
> https://issues.apache.org/jira/browse/FLINK-19667
> The package builds successfully locally and passes the end-to-end tests, but 
> the tests failed in CI. See 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12419=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0]
> How can I replicate this issue in my local environment and fix it?
> {code:java}
> Starting 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMinMaxParallelism[statebackend
>  type =ROCKSDB_INCREMENTAL].
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 155.879 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel union, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 3.382 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:238)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1046)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> 

[jira] [Updated] (FLINK-21026) Align column list specification with Hive in INSERT statement

2021-01-25 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-21026:
-
Fix Version/s: 1.13.0

> Align column list specification with Hive in INSERT statement
> -
>
> Key: FLINK-21026
> URL: https://issues.apache.org/jira/browse/FLINK-21026
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> HIVE-9481 allows column list specification in INSERT statement. The syntax is:
> {code:java}
> INSERT INTO TABLE table_name 
> [PARTITION (partcol1[=val1], partcol2[=val2] ...)] 
> [(column list)] 
> select_statement FROM from_statement
> {code}
> Meanwhile, Flink introduced a PARTITION syntax in which the PARTITION clause 
> appears after the COLUMN LIST clause. That looks odd, and luckily we don't 
> support the COLUMN LIST clause yet, so this is a good chance to align with 
> Hive.
>  
>   
>   





[jira] [Closed] (FLINK-21026) Align column list specification with Hive in INSERT statement

2021-01-25 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-21026.

Resolution: Fixed

master (release-1.13): 35247bb07ccba43ac537a914b82d84da17aca8fb

> Align column list specification with Hive in INSERT statement
> -
>
> Key: FLINK-21026
> URL: https://issues.apache.org/jira/browse/FLINK-21026
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
>
> HIVE-9481 allows column list specification in INSERT statement. The syntax is:
> {code:java}
> INSERT INTO TABLE table_name 
> [PARTITION (partcol1[=val1], partcol2[=val2] ...)] 
> [(column list)] 
> select_statement FROM from_statement
> {code}
> Meanwhile, Flink introduced a PARTITION syntax in which the PARTITION clause 
> appears after the COLUMN LIST clause. That looks odd, and luckily we don't 
> support the COLUMN LIST clause yet, so this is a good chance to align with 
> Hive.
>  
>   
>   





[GitHub] [flink] JingsongLi merged pull request #14726: [FLINK-21026][flink-sql-parser] Align column list specification synta…

2021-01-25 Thread GitBox


JingsongLi merged pull request #14726:
URL: https://github.com/apache/flink/pull/14726


   







[GitHub] [flink] JingsongLi commented on pull request #14726: [FLINK-21026][flink-sql-parser] Align column list specification synta…

2021-01-25 Thread GitBox


JingsongLi commented on pull request #14726:
URL: https://github.com/apache/flink/pull/14726#issuecomment-767352939


   Thanks @docete for the contribution, looks good to me.







[jira] [Created] (FLINK-21145) Flink Temporal Join Hive optimization

2021-01-25 Thread HideOnBush (Jira)
HideOnBush created FLINK-21145:
--

 Summary: Flink Temporal Join Hive optimization
 Key: FLINK-21145
 URL: https://issues.apache.org/jira/browse/FLINK-21145
 Project: Flink
  Issue Type: Wish
  Components: Connectors / Hive
Affects Versions: 1.12.0
Reporter: HideOnBush


When Flink performs a temporal join against a Hive dimension table, the data of 
the latest partition is loaded into task memory in full, which leads to high 
memory overhead. In practice, the full latest data is not always required. Could 
future versions add an option to filter the dimension table data?
For example, select * from dim /* 'streaming-source.partition.include'='latest' 
condition='fild1=ab' */ would load only the rows of the latest partition that 
satisfy fild1=ab.





[jira] [Comment Edited] (FLINK-21045) Support 'load module' and 'unload module' SQL syntax

2021-01-25 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271910#comment-17271910
 ] 

Nicholas Jiang edited comment on FLINK-21045 at 1/26/21, 7:13 AM:
--

[~qingyue], [~jark], IMO the WITH clause is a better fit for a `CREATE MODULE` 
syntax and doesn't make much sense for `LOAD MODULE`; the `LOAD MODULE` syntax 
only needs to specify the module name.


was (Author: nicholasjiang):
[~qingyue][~jark], IMO, the WITH is more suitable to `CREATE MODULE` syntax, 
doesn't make much sense for `LOAD MODULE`. `LOAD MODULE` syntax only needs to 
specify the module name.

> Support 'load module' and 'unload module' SQL syntax
> 
>
> Key: FLINK-21045
> URL: https://issues.apache.org/jira/browse/FLINK-21045
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Nicholas Jiang
>Assignee: Jane Chan
>Priority: Major
> Fix For: 1.13.0
>
>
> At present, Flink SQL doesn't support the 'load module' and 'unload module' 
> SQL syntax. It is needed for situations where users load and unload 
> user-defined modules through the Table API or the SQL Client.
> SQL syntax has been proposed in FLIP-68: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-68%3A+Extend+Core+Table+System+with+Pluggable+Modules





[jira] [Commented] (FLINK-21144) KubernetesResourceManagerDriver#tryResetPodCreationCoolDown causes fatal error

2021-01-25 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271922#comment-17271922
 ] 

Yang Wang commented on FLINK-21144:
---

cc [~xintongsong]

> KubernetesResourceManagerDriver#tryResetPodCreationCoolDown causes fatal error
> --
>
> Key: FLINK-21144
> URL: https://issues.apache.org/jira/browse/FLINK-21144
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.1
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.13.0, 1.12.2
>
>
> {{KubernetesResourceManagerDriver#tryResetPodCreationCoolDown}} calls an 
> unimplemented method, {{RpcEndpoint.MainThreadExecutor#schedule(Callable 
> callable, long delay, TimeUnit unit)}}. This causes a fatal error and makes 
> the JobManager terminate abnormally.





[jira] [Created] (FLINK-21144) KubernetesResourceManagerDriver#tryResetPodCreationCoolDown causes fatal error

2021-01-25 Thread Yang Wang (Jira)
Yang Wang created FLINK-21144:
-

 Summary: 
KubernetesResourceManagerDriver#tryResetPodCreationCoolDown causes fatal error
 Key: FLINK-21144
 URL: https://issues.apache.org/jira/browse/FLINK-21144
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.12.1
Reporter: Yang Wang
 Fix For: 1.13.0, 1.12.2


{{KubernetesResourceManagerDriver#tryResetPodCreationCoolDown}} calls an 
unimplemented method, {{RpcEndpoint.MainThreadExecutor#schedule(Callable 
callable, long delay, TimeUnit unit)}}. This causes a fatal error and makes the 
JobManager terminate abnormally.
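The failure mode described above can be illustrated with a minimal, self-contained Java sketch. The class and method names below are hypothetical stand-ins for the Flink internals: a default method left unimplemented throws {{UnsupportedOperationException}}, and any delayed-scheduling call path that reaches it surfaces as a fatal error.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

public class CoolDownSketch {

    // Stand-in for RpcEndpoint.MainThreadExecutor, whose schedule(Callable, ...)
    // was reported as not implemented in the affected versions.
    interface MainThreadExecutor {
        default <V> void schedule(Callable<V> task, long delay, TimeUnit unit) {
            throw new UnsupportedOperationException("Not implemented.");
        }
    }

    // Returns true if the scheduling attempt hits the unimplemented method,
    // i.e. the path the bug report flags as a fatal JobManager error.
    static boolean hitsUnimplementedPath() {
        MainThreadExecutor executor = new MainThreadExecutor() {};
        try {
            executor.schedule(() -> null, 10, TimeUnit.SECONDS);
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // prints "fatal path hit: true"
        System.out.println("fatal path hit: " + hitsUnimplementedPath());
    }
}
```

In production the exception is not caught like this; it propagates as a fatal error handler invocation, which is why the JobManager terminates.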





[GitHub] [flink] flinkbot edited a comment on pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14755:
URL: https://github.com/apache/flink/pull/14755#issuecomment-767282148


   
   ## CI report:
   
   * eacd669b14e1be1dee5d589016318b51e5bdd56e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12494)
 
   * 0eb2a54315aab1058d59918ce625391dda994b6a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12496)
 
   * 3f9f835376f1e81b438983af9ce3e263991543aa UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-21142) Flink guava Dependence problem

2021-01-25 Thread YUJIANBO (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271912#comment-17271912
 ] 

YUJIANBO commented on FLINK-21142:
--

[~lirui] That was also my question; I didn't notice your reply earlier. Thank 
you very much. I will try the first method (rebuilding hive-exec) next, but 
regarding my cluster's guava versions: should I replace all guava dependencies 
with the 27.0-jre version?

/usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
/usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
/usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
/usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
/usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar



> Flink guava Dependence problem
> --
>
> Key: FLINK-21142
> URL: https://issues.apache.org/jira/browse/FLINK-21142
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>
> We set up a new Hadoop cluster and use a Flink 1.12.0 build compiled from the 
> release-1.12.0 branch. If I add the Hive jars to flink/lib/, errors are 
> reported.
> *Operating environment:*
>      flink1.12.0 
>      Hadoop 3.3.0
>      hive 3.1.2
> *Flink run official demo shell: /tmp/yjb/buildjar/flink1.12.0/bin/flink run 
> -m yarn-cluster /usr/local/flink1.12.0/examples/streaming/WordCount.jar*
> If I put either *flink-sql-connector-hive-3.1.2_2.11-1.12.0.jar* or 
> *hive-exec-3.1.2.jar* in the lib directory and execute the above shell, an 
> error is reported: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V. 
> *This is clearly a guava dependency conflict.*
> *My cluster guava‘s version:*
>  /usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
>  /usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
>  /usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar
> *Can you give me some advice?*
>  Thank you!





[jira] [Commented] (FLINK-21045) Support 'load module' and 'unload module' SQL syntax

2021-01-25 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271910#comment-17271910
 ] 

Nicholas Jiang commented on FLINK-21045:


[~qingyue], [~jark], IMO the WITH clause is a better fit for a `CREATE MODULE` 
syntax and doesn't make much sense for `LOAD MODULE`; the `LOAD MODULE` syntax 
only needs to specify the module name.

> Support 'load module' and 'unload module' SQL syntax
> 
>
> Key: FLINK-21045
> URL: https://issues.apache.org/jira/browse/FLINK-21045
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Nicholas Jiang
>Assignee: Jane Chan
>Priority: Major
> Fix For: 1.13.0
>
>
> At present, Flink SQL doesn't support the 'load module' and 'unload module' 
> SQL syntax. It is needed for use cases where users load and unload 
> user-defined modules through the Table API or the SQL client.
> SQL syntax has been proposed in FLIP-68: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-68%3A+Extend+Core+Table+System+with+Pluggable+Modules



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14748: [FLINK-20894][Table SQL / API] Introduce SupportsAggregatePushDown interface

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14748:
URL: https://github.com/apache/flink/pull/14748#issuecomment-766689601


   
   ## CI report:
   
   * dd9eb0f167f8a9d43ce5f505cf09a9870c55e146 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12497)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14756: [FLINK-21122][docs] Update checkpoint_monitoring.zh.md

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14756:
URL: https://github.com/apache/flink/pull/14756#issuecomment-767326796


   
   ## CI report:
   
   * 08f7682c4217a85102b02cb4e54be30fa4f628eb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12498)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


dianfu commented on pull request #14755:
URL: https://github.com/apache/flink/pull/14755#issuecomment-767340342


   @WeiZhong94 Thanks a lot for the review. Updated the PR~



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-25 Thread zhisheng (Jira)
zhisheng created FLINK-21143:


 Summary: 【runtime】flink job use the lib jars instead of the 
`yarn.provided.lib.dirs` config jars
 Key: FLINK-21143
 URL: https://issues.apache.org/jira/browse/FLINK-21143
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN, Runtime / Configuration
Affects Versions: 1.12.0
Reporter: zhisheng


In Flink 1.12.0 I used the `yarn.provided.lib.dirs` config to speed up job 
start, so I uploaded all jars to HDFS. But when I update the jars in HDFS (not 
in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars instead of 
the new HDFS jars.
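For reference, a hypothetical submission sketch (paths and jar names below are 
illustrative, not taken from the report, whose actual command is not shown):

```shell
# Hypothetical sketch (paths are examples): submitting with provided lib dirs.
# Jars under yarn.provided.lib.dirs are meant to be shared YARN resources,
# while jars in the client's local flink-1.12.0/lib/ are shipped per job;
# which copy ends up on the classpath is exactly what this issue is about.
submit_cmd="flink run -m yarn-cluster \
  -Dyarn.provided.lib.dirs=hdfs:///flink-1.12.0/provided-lib \
  /usr/local/flink1.12.0/examples/streaming/WordCount.jar"
echo "$submit_cmd"
```

If the local lib/ jars win over updated HDFS copies, comparing the two sets of 
jars (names and timestamps) is a reasonable first check before filing logs.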



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yuruguo commented on a change in pull request #14756: [FLINK-21122][docs] Update checkpoint_monitoring.zh.md

2021-01-25 Thread GitBox


yuruguo commented on a change in pull request #14756:
URL: https://github.com/apache/flink/pull/14756#discussion_r564279419



##
File path: docs/ops/monitoring/checkpoint_monitoring.zh.md
##
@@ -60,6 +60,8 @@ Flink 的 Web 界面提供了`选项卡/标签(tab)`来监视作业的 check
 
 Checkpoint 历史记录保存有关最近触发的 checkpoint 的统计信息,包括当前正在进行的 checkpoint。
 
+注意,对于失败的 checkpoint,指标会被尽最大努力进行更新并且可能不准确。

Review comment:
   Nice, I think your translation is better. Can you take a look, @wuchong? 
Thanks!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] WeiZhong94 commented on a change in pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


WeiZhong94 commented on a change in pull request #14755:
URL: https://github.com/apache/flink/pull/14755#discussion_r564277842



##
File path: 
flink-python/src/test/java/org/apache/flink/client/python/PythonEnvUtilsTest.java
##
@@ -76,10 +79,16 @@ public void testPreparePythonEnvironment() throws 
IOException {
 File relativeFile =
 new File(tmpDirPath + File.separator + "subdir" + 
File.separator + "b.py");
 File schemeFile =
-new File(tmpDirPath + File.separator + "subdir" + 
File.separator + "c.zip");
+new File(tmpDirPath + File.separator + "subdir" + 
File.separator + "c.py");

Review comment:
   We can rename the "c.zip" to "c.egg" so that its expected python path 
would not be the same as "b.py".





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on a change in pull request #14756: [FLINK-21122][docs] Update checkpoint_monitoring.zh.md

2021-01-25 Thread GitBox


RocMarshal commented on a change in pull request #14756:
URL: https://github.com/apache/flink/pull/14756#discussion_r564274181



##
File path: docs/ops/monitoring/checkpoint_monitoring.zh.md
##
@@ -60,6 +60,8 @@ Flink 的 Web 界面提供了`选项卡/标签(tab)`来监视作业的 check
 
 Checkpoint 历史记录保存有关最近触发的 checkpoint 的统计信息,包括当前正在进行的 checkpoint。
 
+注意,对于失败的 checkpoint,指标会被尽最大努力进行更新并且可能不准确。

Review comment:
   ```suggestion
   注意,对于失败的 checkpoint,指标会尽最大努力进行更新,但是结果可能不准确。
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14756: [FLINK-21122][docs] Update checkpoint_monitoring.zh.md

2021-01-25 Thread GitBox


flinkbot commented on pull request #14756:
URL: https://github.com/apache/flink/pull/14756#issuecomment-767326796


   
   ## CI report:
   
   * 08f7682c4217a85102b02cb4e54be30fa4f628eb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21142) Flink guava Dependence problem

2021-01-25 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271899#comment-17271899
 ] 

Rui Li commented on FLINK-21142:


Hi [~YUJIANBO], the same issue has been reported in the [mailing 
list|http://apache-flink.147419.n8.nabble.com/Caused-by-java-lang-NoSuchMethodError-com-google-common-base-Preconditions-checkArgument-ZLjava-langV-td10474.html],
 and you can check out my reply there.
The root cause is that your Hadoop version is incompatible with your Hive 
version. Hive 3.1.2 depends on Hadoop 3.1.0, so choosing a Hadoop version 
<= 3.1.0 would be safer.
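Since several of the listed install roots each ship their own guava, a first 
diagnostic step is simply enumerating every guava jar under them. A minimal 
sketch, using a throwaway directory layout in place of the real /usr/local/... 
paths from the report:

```shell
# Sketch: enumerate guava jars under a set of install roots. A throwaway
# layout stands in for /usr/local/hadoop-3.3.0 and
# /usr/local/apache-hive-3.1.2-bin; on a real cluster, point `find` at the
# actual installation directories instead.
tmp=$(mktemp -d)
mkdir -p "$tmp/hadoop/share/hadoop/common/lib" "$tmp/hive/lib"
touch "$tmp/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar"
touch "$tmp/hive/lib/guava-20.0.jar"
guava_jars=$(find "$tmp" -name 'guava-*.jar' | sort)
echo "$guava_jars"
```

Seeing multiple distinct versions in the output (here 20.0 and 27.0-jre) is 
the classic signature of the NoSuchMethodError above; aligning on a Hadoop 
version whose guava matches Hive's, as suggested, avoids it.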

> Flink guava Dependence problem
> --
>
> Key: FLINK-21142
> URL: https://issues.apache.org/jira/browse/FLINK-21142
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>
> We set up a new Hadoop cluster, and we use Flink 1.12.0 compiled from the 
> release-1.12.0 branch. If I add Hive jars to flink/lib/, it reports errors.
> *Operating environment:*
>      flink1.12.0 
>      Hadoop 3.3.0
>      hive 3.1.2
> *Flink run official demo shell: /tmp/yjb/buildjar/flink1.12.0/bin/flink run 
> -m yarn-cluster /usr/local/flink1.12.0/examples/streaming/WordCount.jar*
> If I put either *flink-sql-connector-hive-3.1.2_2.11-1.12.0.jar* or 
> *hive-exec-3.1.2.jar* in the lib directory and execute the above shell, an 
> error is reported: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V. 
> *This is a guava dependency conflict.*
> *My cluster guava‘s version:*
>  /usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
>  /usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
>  /usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar
> *Can you give me some advice?*
>  Thank you!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14756: [FLINK-21122][docs] Update checkpoint_monitoring.zh.md

2021-01-25 Thread GitBox


flinkbot commented on pull request #14756:
URL: https://github.com/apache/flink/pull/14756#issuecomment-767322667


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 08f7682c4217a85102b02cb4e54be30fa4f628eb (Tue Jan 26 
06:18:50 UTC 2021)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21122) Update checkpoint_monitoring.zh.md

2021-01-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21122:
---
Labels: chinese-translation pull-request-available  (was: 
chinese-translation)

> Update checkpoint_monitoring.zh.md
> --
>
> Key: FLINK-21122
> URL: https://issues.apache.org/jira/browse/FLINK-21122
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Roman Khachatryan
>Assignee: Ruguo Yu
>Priority: Major
>  Labels: chinese-translation, pull-request-available
> Fix For: 1.13.0
>
>
> In FLINK-19462, docs/ops/monitoring/checkpoint_monitoring.md was updated.
> Chinese version has to be synchronized.
> Please see [https://github.com/apache/flink/pull/14635]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yuruguo opened a new pull request #14756: [FLINK-21122][docs] Update checkpoint_monitoring.zh.md

2021-01-25 Thread GitBox


yuruguo opened a new pull request #14756:
URL: https://github.com/apache/flink/pull/14756


   
   
   ## What is the purpose of the change
   
   Update checkpoint_monitoring.zh.md
   
   
   ## Brief change log
   
 - Update docs/ops/monitoring/checkpoint_monitoring.zh.md
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] mohitpali edited a comment on pull request #14737: [FLINK-19667] Add AWS Glue Schema Registry integration

2021-01-25 Thread GitBox


mohitpali edited a comment on pull request #14737:
URL: https://github.com/apache/flink/pull/14737#issuecomment-767211338


   Apologies for the confusion; we have closed the other PR. We had to create 
another PR because two developers were working on it, hence the different 
login. I have included some CI compilation fixes in this PR and rebased.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14748: [FLINK-20894][Table SQL / API] Introduce SupportsAggregatePushDown interface

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14748:
URL: https://github.com/apache/flink/pull/14748#issuecomment-766689601


   
   ## CI report:
   
   * e05f2896e3f72f053214d4204a62102f840b65dd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12449)
 
   * dd9eb0f167f8a9d43ce5f505cf09a9870c55e146 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12497)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on pull request #13797: [FLINK-19465][runtime / statebackends] Add CheckpointStorage interface and wire through runtime

2021-01-25 Thread GitBox


curcur commented on pull request #13797:
URL: https://github.com/apache/flink/pull/13797#issuecomment-767315581


   Oops, I commented before I noticed this PR was closed.
   
   There are a couple of places where "state backend" should be replaced by 
"checkpoint storage", but I guess we can fix them in follow-up PRs.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #13797: [FLINK-19465][runtime / statebackends] Add CheckpointStorage interface and wire through runtime

2021-01-25 Thread GitBox


curcur commented on a change in pull request #13797:
URL: https://github.com/apache/flink/pull/13797#discussion_r563782279



##
File path: 
flink-core/src/main/java/org/apache/flink/configuration/CheckpointingOptions.java
##
@@ -34,6 +34,16 @@
 .noDefaultValue()
 .withDescription("The state backend to be used to store 
and checkpoint state.");
 
+/** The checkpoint storage used to checkpoint state. */
+@Documentation.Section(value = 
Documentation.Sections.COMMON_STATE_BACKENDS, position = 2)
+@Documentation.ExcludeFromDocumentation(
+"Hidden until FileSystemStorage and JobManagerStorage are 
implemented")
+public static final ConfigOption<String> CHECKPOINT_STORAGE =
+ConfigOptions.key("state.checkpoint-storage")
+.stringType()
+.noDefaultValue()
+.withDescription("The state backend to be used to 
checkpoint state.");

Review comment:
   "The state backend to be used to checkpoint state."  ==>
   
   "The checkpoint storage to be used to store checkpoints"

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorage.java
##
@@ -0,0 +1,67 @@
+/*

Review comment:
   Maybe add a "see also" reference of `CheckpointStorage` in the Java doc 
of `StateBackend`? The `StateBackend` Java Doc is the same as before which 
mentions "durable persistent storage".

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/Checkpoints.java
##
@@ -231,15 +233,15 @@ private static void throwNonRestoredStateException(
 // 
 
 public static void disposeSavepoint(
-String pointer, StateBackend stateBackend, ClassLoader classLoader)
+String pointer, CheckpointStorage checkpointStorage, ClassLoader 
classLoader)
 throws IOException, FlinkException {
 
 checkNotNull(pointer, "location");
-checkNotNull(stateBackend, "stateBackend");
+checkNotNull(checkpointStorage, "stateBackend");

Review comment:
   "stateBackend" => "checkpoint storage"

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLoader.java
##
@@ -58,28 +60,35 @@
  * the factory class was not found or the factory could not be 
instantiated
  * @throws IllegalConfigurationException May be thrown by the 
CheckpointStorageFactory when
  * creating / configuring the checkpoint storage in the factory
- * @throws IOException May be thrown by the CheckpointStorageFactory when 
instantiating the
- * checkpoint storage
  */
-public static CheckpointStorage loadCheckpointStorageFromConfig(
+public static Optional fromConfig(
 ReadableConfig config, ClassLoader classLoader, @Nullable Logger 
logger)
-throws IllegalStateException, DynamicCodeLoadingException, 
IOException {
+throws IllegalStateException, DynamicCodeLoadingException {
 
 Preconditions.checkNotNull(config, "config");
 Preconditions.checkNotNull(classLoader, "classLoader");
 
 final String storageName = 
config.get(CheckpointingOptions.CHECKPOINT_STORAGE);
 if (storageName == null) {
-return null;
+if (logger != null) {
+logger.warn(
+"The configuration {} has not be set in the current"
++ " sessions flink-conf.yaml. Falling back to 
a default CheckpointStorage"
++ " type. Users are strongly encouraged 
explicitly set this configuration"

Review comment:
   "Users are strongly encouraged explicitly"  =>
   Users are strongly encouraged to explicitly

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLoader.java
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.state;
+
+import org.apache.flink.configuration.CheckpointingOptions;
+import 

[GitHub] [flink] flinkbot edited a comment on pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14755:
URL: https://github.com/apache/flink/pull/14755#issuecomment-767282148


   
   ## CI report:
   
   * eacd669b14e1be1dee5d589016318b51e5bdd56e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12494)
 
   * 0eb2a54315aab1058d59918ce625391dda994b6a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12496)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14748: [FLINK-20894][Table SQL / API] Introduce SupportsAggregatePushDown interface

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14748:
URL: https://github.com/apache/flink/pull/14748#issuecomment-766689601


   
   ## CI report:
   
   * e05f2896e3f72f053214d4204a62102f840b65dd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12449)
 
   * dd9eb0f167f8a9d43ce5f505cf09a9870c55e146 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14737: [FLINK-19667] Add AWS Glue Schema Registry integration

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14737:
URL: https://github.com/apache/flink/pull/14737#issuecomment-766281483


   
   ## CI report:
   
   * d6f06f07e0895117b345b99533ba7eda672ba765 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12489)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sebastianliu commented on pull request #14748: [FLINK-20894][Table SQL / API] Introduce SupportsAggregatePushDown interface

2021-01-25 Thread GitBox


sebastianliu commented on pull request #14748:
URL: https://github.com/apache/flink/pull/14748#issuecomment-767306427


   Hi @wuchong, I appreciate your time and thanks a lot for your careful 
review. I have resolved all of the comments. Looking forward to another round 
of code review when you have time.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14755:
URL: https://github.com/apache/flink/pull/14755#issuecomment-767282148


   
   ## CI report:
   
   * eacd669b14e1be1dee5d589016318b51e5bdd56e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12494)
 
   * 0eb2a54315aab1058d59918ce625391dda994b6a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21142) Flink guava Dependence problem

2021-01-25 Thread YUJIANBO (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YUJIANBO updated FLINK-21142:
-
Description: 
We set up a new Hadoop cluster, and we use Flink 1.12.0 compiled from the 
release-1.12.0 branch. If I add Hive jars to flink/lib/, it reports errors.

*Operating environment:*
     flink1.12.0 
     Hadoop 3.3.0
     hive 3.1.2

*Flink run official demo shell: /tmp/yjb/buildjar/flink1.12.0/bin/flink run -m 
yarn-cluster /usr/local/flink1.12.0/examples/streaming/WordCount.jar*

If I put either *flink-sql-connector-hive-3.1.2_2.11-1.12.0.jar* or 
*hive-exec-3.1.2.jar* in the lib directory and execute the above shell, an 
error is reported: java.lang.NoSuchMethodError: 
com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V. 
*This is a guava dependency conflict.*

*My cluster guava‘s version:*
 /usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
 /usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
 /usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
 /usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
 /usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar

*Can you give me some advice?*
 Thank you!

  was:
We set up a new Hadoop cluster, and we use the flink1.12.0 compiled by the 
previous release-1.12.0 branch.If I add hive jar to flink/lib/, it will report 
errors.

*Operating environment:*
    flink1.12.0 
    Hadoop 3.3.0
    hive 3.1.2

*Flink run official demo shell: /tmp/yjb/buildjar/flink1.12.0/bin/flink run -m 
yarn-cluster /usr/local/flink1.12.0/examples/streaming/WordCount.jar*

If I put one of the jar *flink-sql-connector-hive-3.1.2_2.11-1.12.0.jar* or 
*hive-exec-3.1.2.jar* in the Lib directory and execute the above command, an 
error will be reported  java.lang.NoSuchMethodError : com.google.common . 
base.Preconditions.checkArgument (ZLjava/lang/String;Ljava/lang/Object;)V. *We 
can see that it's the dependency conflict of guava.*

*My cluster guava‘s version:*
 /usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
 /usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
 /usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
 /usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
 /usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar

*Can you give me some advice?*
 Thank you!


> Flink guava Dependence problem
> --
>
> Key: FLINK-21142
> URL: https://issues.apache.org/jira/browse/FLINK-21142
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>
> We set up a new Hadoop cluster, and we use Flink 1.12.0 compiled from the 
> release-1.12.0 branch. If I add Hive jars to flink/lib/, it reports errors.
> *Operating environment:*
>      flink1.12.0 
>      Hadoop 3.3.0
>      hive 3.1.2
> *Flink run official demo shell: /tmp/yjb/buildjar/flink1.12.0/bin/flink run 
> -m yarn-cluster /usr/local/flink1.12.0/examples/streaming/WordCount.jar*
> If I put either *flink-sql-connector-hive-3.1.2_2.11-1.12.0.jar* or 
> *hive-exec-3.1.2.jar* in the lib directory and execute the above shell, an 
> error is reported: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V. 
> *This is a guava dependency conflict.*
> *My cluster guava‘s version:*
>  /usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
>  /usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
>  /usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar
> *Can you give me some advice?*
>  Thank you!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21142) Flink guava Dependence problem

2021-01-25 Thread YUJIANBO (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YUJIANBO updated FLINK-21142:
-
Labels: guava  (was: )

> Flink guava Dependence problem
> --
>
> Key: FLINK-21142
> URL: https://issues.apache.org/jira/browse/FLINK-21142
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>  Labels: guava
>
> We set up a new Hadoop cluster, and we use Flink 1.12.0 compiled from the 
> release-1.12.0 branch. If I add Hive jars to flink/lib/, it reports errors.
> *Operating environment:*
>     flink1.12.0 
>     Hadoop 3.3.0
>     hive 3.1.2
> *Flink run official demo shell: /tmp/yjb/buildjar/flink1.12.0/bin/flink run 
> -m yarn-cluster /usr/local/flink1.12.0/examples/streaming/WordCount.jar*
> If I put either *flink-sql-connector-hive-3.1.2_2.11-1.12.0.jar* or 
> *hive-exec-3.1.2.jar* in the lib directory and execute the above command, an 
> error is reported: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V. 
> *This is a guava dependency conflict.*
> *My cluster guava‘s version:*
>  /usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
>  /usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
>  /usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar
> *Can you give me some advice?*
>  Thank you!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21142) Flink guava Dependence problem

2021-01-25 Thread YUJIANBO (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YUJIANBO updated FLINK-21142:
-
Labels:   (was: guava)

> Flink guava Dependence problem
> --
>
> Key: FLINK-21142
> URL: https://issues.apache.org/jira/browse/FLINK-21142
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21142) Flink guava Dependence problem

2021-01-25 Thread YUJIANBO (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17271881#comment-17271881
 ] 

YUJIANBO commented on FLINK-21142:
--

[~jark] [~lirui] Hello! Can you give me some advice?

> Flink guava Dependence problem
> --
>
> Key: FLINK-21142
> URL: https://issues.apache.org/jira/browse/FLINK-21142
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21142) Flink guava Dependence problem

2021-01-25 Thread YUJIANBO (Jira)
YUJIANBO created FLINK-21142:


 Summary: Flink guava Dependence problem
 Key: FLINK-21142
 URL: https://issues.apache.org/jira/browse/FLINK-21142
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hadoop Compatibility, Connectors / Hive
Affects Versions: 1.12.0
Reporter: YUJIANBO


We set up a new Hadoop cluster and use Flink 1.12.0 compiled from the
release-1.12.0 branch. If I add the Hive jars to flink/lib/, errors are
reported.

*Operating environment:*
    flink1.12.0 
    Hadoop 3.3.0
    hive 3.1.2

*Command used to run the official Flink demo: /tmp/yjb/buildjar/flink1.12.0/bin/flink run -m
yarn-cluster /usr/local/flink1.12.0/examples/streaming/WordCount.jar*

If I put either *flink-sql-connector-hive-3.1.2_2.11-1.12.0.jar* or
*hive-exec-3.1.2.jar* in the lib directory and execute the above command, the
following error is reported: java.lang.NoSuchMethodError:
com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V.
*This is a Guava dependency conflict.*

*Guava versions on my cluster:*
 /usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
 /usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
 /usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
 /usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
 /usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar

*Can you give me some advice?*
 Thank you!
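One way to confirm which of the jars above actually supplied the conflicting class at runtime is to ask the class loader for its code source. The helper below is a minimal, self-contained sketch (the `ClassOrigin`/`originOf` names are illustrative, not part of Flink or Hadoop); run it with the same classpath as the failing job:

```java
import java.security.CodeSource;

public class ClassOrigin {
    // Returns where a class was loaded from: the jar/directory URL,
    // "bootstrap/JDK" for boot-classpath classes, or "not on classpath".
    public static String originOf(String className) {
        try {
            CodeSource src =
                    Class.forName(className).getProtectionDomain().getCodeSource();
            return src == null ? "bootstrap/JDK" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // On the cluster, query the conflicting class instead, e.g.:
        // originOf("com.google.common.base.Preconditions")
        System.out.println(originOf("java.lang.String"));
    }
}
```

Printing the origin of `com.google.common.base.Preconditions` shows which Guava copy wins on the job's classpath, which narrows down the conflict.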



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14755:
URL: https://github.com/apache/flink/pull/14755#issuecomment-767282148


   
   ## CI report:
   
   * eacd669b14e1be1dee5d589016318b51e5bdd56e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12494)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong merged pull request #14752: [BP-1.12][FLINK-20961][table-planner-blink] Fix NPE when no assigned timestamp defined on DataStream

2021-01-25 Thread GitBox


wuchong merged pull request #14752:
URL: https://github.com/apache/flink/pull/14752


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13912: [FLINK-19466][FLINK-19467][runtime / state backends] Add Flip-142 public interfaces and methods

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #13912:
URL: https://github.com/apache/flink/pull/13912#issuecomment-721398037


   
   ## CI report:
   
   * e76016badce64cb235a2fcf4be60e6a226eed5e1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12493)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14755:
URL: https://github.com/apache/flink/pull/14755#issuecomment-767282148


   
   ## CI report:
   
   * eacd669b14e1be1dee5d589016318b51e5bdd56e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12494)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong closed pull request #14753: [BP-1.11][FLINK-20961][table-planner-blink] Fix NPE when no assigned timestamp defined on DataStream

2021-01-25 Thread GitBox


wuchong closed pull request #14753:
URL: https://github.com/apache/flink/pull/14753


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #14752: [BP-1.12][FLINK-20961][table-planner-blink] Fix NPE when no assigned timestamp defined on DataStream

2021-01-25 Thread GitBox


wuchong commented on pull request #14752:
URL: https://github.com/apache/flink/pull/14752#issuecomment-767284084


   The failed test `UnalignedCheckpointITCase.execute` is not related to this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13912: [FLINK-19466][FLINK-19467][runtime / state backends] Add Flip-142 public interfaces and methods

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #13912:
URL: https://github.com/apache/flink/pull/13912#issuecomment-721398037







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #14749: [FLINK-21123][fs] Bump beanutils to 1.9.4

2021-01-25 Thread GitBox


rmetzger commented on pull request #14749:
URL: https://github.com/apache/flink/pull/14749#issuecomment-766794435







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14740: [FLINK-21067][runtime][checkpoint] Modify the logic of computing which tasks to trigger/ack/commit to support finished tasks

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14740:
URL: https://github.com/apache/flink/pull/14740#issuecomment-766340750







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong closed pull request #14713: [BP-1.12][FLINK-20961][table-planner-blink] Fix NPE when no assigned timestamp defined on DataStream

2021-01-25 Thread GitBox


wuchong closed pull request #14713:
URL: https://github.com/apache/flink/pull/14713


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14750: [FLINK-21104][network] Ensure that converted input channels start in the correct persisting state. [1.12]

2021-01-25 Thread GitBox


flinkbot commented on pull request #14750:
URL: https://github.com/apache/flink/pull/14750#issuecomment-766734277







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14754: [FLINK-21127][runtime][checkpoint] Stores finished status for fully finished operators in checkpoint

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14754:
URL: https://github.com/apache/flink/pull/14754#issuecomment-766965862


   
   ## CI report:
   
   * 37484933592e8b5a4f5cf83425ac0b7b612c7480 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12479)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wenlong88 commented on a change in pull request #14729: [FLINK-21092][FLINK-21093][FLINK-21094][FLINK-21096][table-planner-blink] Support ExecNode plan serialization/deserialization f

2021-01-25 Thread GitBox


wenlong88 commented on a change in pull request #14729:
URL: https://github.com/apache/flink/pull/14729#discussion_r564170827



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecEdge.java
##
@@ -21,39 +21,97 @@
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.util.Preconditions;
 
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonIgnore;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonProperty;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonGenerator;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParser;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonProcessingException;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.DeserializationContext;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.SerializerProvider;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.annotation.JsonDeserialize;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.annotation.JsonSerialize;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.StdDeserializer;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.std.StdSerializer;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Objects;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /** The representation of an edge connecting two {@link ExecNode}. */
 @Internal
+@JsonIgnoreProperties(ignoreUnknown = true)
 public class ExecEdge {
 
 public static final ExecEdge DEFAULT = ExecEdge.builder().build();
 
+public static final String FIELD_NAME_REQUIRED_SHUFFLE = "requiredShuffle";
+public static final String FIELD_NAME_DAM_BEHAVIOR = "damBehavior";
+public static final String FIELD_NAME_PRIORITY = "priority";
+
+@JsonProperty(FIELD_NAME_REQUIRED_SHUFFLE)
+@JsonSerialize(using = RequiredShuffleJsonSerializer.class)
+@JsonDeserialize(using = RequiredShuffleJsonDeserializer.class)

Review comment:
   Why do we need the custom serializer? I think just adding Jackson annotations to
RequiredShuffle may be enough.

##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/CatalogTableSpecBase.java
##
@@ -0,0 +1,254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.serde;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.CatalogTableImpl;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonIgnore;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonProperty;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonGenerator;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParser;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonProcessingException;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.type.TypeReference;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.DeserializationContext;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.SerializerProvider;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.annotation.JsonDeserialize;
+import 

[GitHub] [flink] flinkbot commented on pull request #14754: [FLINK-21127][runtime][checkpoint] Stores finished status for fully finished operators in checkpoint

2021-01-25 Thread GitBox


flinkbot commented on pull request #14754:
URL: https://github.com/apache/flink/pull/14754#issuecomment-766954030







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14591: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14591:
URL: https://github.com/apache/flink/pull/14591#issuecomment-756825239







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #14737: [FLINK-19667] Add AWS Glue Schema Registry integration

2021-01-25 Thread GitBox


rmetzger commented on pull request #14737:
URL: https://github.com/apache/flink/pull/14737#issuecomment-766724268







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14748: [FLINK-20894][Table SQL / API] Introduce SupportsAggregatePushDown interface

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14748:
URL: https://github.com/apache/flink/pull/14748#issuecomment-766689601







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14749: [FLINK-21123][fs] Bump beanutils to 1.9.4

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14749:
URL: https://github.com/apache/flink/pull/14749#issuecomment-766689736







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14717: [FLINK-21020][build] Bump Jackson to 2.12.1

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14717:
URL: https://github.com/apache/flink/pull/14717#issuecomment-764599651







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14734: [FLINK-21066][runtime][checkpoint] Refactor CheckpointCoordinator to compute tasks to trigger/ack/commit dynamically

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14734:
URL: https://github.com/apache/flink/pull/14734#issuecomment-766092219







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on pull request #14749: [FLINK-21123][fs] Bump beanutils to 1.9.4

2021-01-25 Thread GitBox


zentol commented on pull request #14749:
URL: https://github.com/apache/flink/pull/14749#issuecomment-766851391







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] pnowojski commented on a change in pull request #14691: [FLINK-21018] Update checkpoint related documentation for UI

2021-01-25 Thread GitBox


pnowojski commented on a change in pull request #14691:
URL: https://github.com/apache/flink/pull/14691#discussion_r563686026



##
File path: docs/ops/monitoring/checkpoint_monitoring.md
##
@@ -74,7 +74,8 @@ For subtasks there are a couple of more detailed stats 
available.
 - **Sync Duration**: The duration of the synchronous part of the checkpoint. 
This includes snapshotting state of the operators and blocks all other activity 
on the subtask (processing records, firing timers, etc).
 - **Async Duration**: The duration of the asynchronous part of the checkpoint. 
This includes time it took to write the checkpoint on to the selected 
filesystem. For unaligned checkpoints this also includes also the time the 
subtask had to wait for last of the checkpoint barriers to arrive (alignment 
duration) and the time it took to persist the in-flight data.
 - **Alignment Duration**: The time between processing the first and the last 
checkpoint barrier. For aligned checkpoints, during the alignment, the channels 
that have already received checkpoint barrier are blocked from processing more 
data.
-- **Start Delay**: The time it took for the first checkpoint barrier to reach 
this subtasks since the checkpoint barrier has been created.
+- **Start Delay**: The time it took for the first checkpoint barrier to reach 
this subtask since the checkpoint barrier has been created.
+- **Unaligned Checkpoint**: Whether the checkpoint for the subtask is 
completed as an unaligned checkpoint. An aligned checkpoint can switch to an 
unaligned checkpoint if the alignment times out.

Review comment:
   I guess we also need to update `checkpoint_monitoring.zh.md`. Since you
know Chinese, could you do it yourself? If not, the procedure is to create a
dedicated JIRA ticket (described in `docs/README.md`) for someone to pick up.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Myasuka commented on pull request #14744: [FLINK-17511][tests] Rerun RocksDB memory control tests if necessary

2021-01-25 Thread GitBox


Myasuka commented on pull request #14744:
URL: https://github.com/apache/flink/pull/14744#issuecomment-766651035


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #14718: [FLINK-21071][release][docker] Update docker dev branch in snapshot branch

2021-01-25 Thread GitBox


rmetzger commented on pull request #14718:
URL: https://github.com/apache/flink/pull/14718#issuecomment-766795264


   Thx for extending this script! +1 to merge



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14749: [FLINK-21123][fs] Bump beanutils to 1.9.4

2021-01-25 Thread GitBox


flinkbot commented on pull request #14749:
URL: https://github.com/apache/flink/pull/14749#issuecomment-766686218







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14751: [BP-1.12][FLINK-21069][table-planner-blink] Configuration parallelism.default doesn't take effect for TableEnvironment#explainSql

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14751:
URL: https://github.com/apache/flink/pull/14751#issuecomment-766812086







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] aljoscha commented on a change in pull request #14719: [FLINK-21072] Refactor the SnapshotStrategy hierarchy

2021-01-25 Thread GitBox


aljoscha commented on a change in pull request #14719:
URL: https://github.com/apache/flink/pull/14719#discussion_r563860681



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotStrategyRunner.java
##
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.state;
+
+import org.apache.flink.core.fs.CloseableRegistry;
+import org.apache.flink.runtime.checkpoint.CheckpointOptions;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+
+import java.util.concurrent.RunnableFuture;
+
+/**
+ * A class to execute a {@link SnapshotStrategy}. It can execute a strategy 
either synchronously or
+ * asynchronously. It takes care of common logging and resource cleaning.
+ *
+ * @param  type of the snapshot result.
+ */
+public final class SnapshotStrategyRunner {
+/** Flag to tell how the strategy should be executed. */
+public enum ExecutionType {
+SYNCHRONOUS,
+ASYNCHRONOUS
+}
+
+private static final Logger LOG = 
LoggerFactory.getLogger(SnapshotStrategyRunner.class);
+
+private static final String LOG_SYNC_COMPLETED_TEMPLATE =
+"{} ({}, synchronous part) in thread {} took {} ms.";
+private static final String LOG_ASYNC_COMPLETED_TEMPLATE =
+"{} ({}, asynchronous part) in thread {} took {} ms.";
+
+/**
+ * Descriptive name of the snapshot strategy that will appear in the log 
outputs and {@link
+ * #toString()}.
+ */
+@Nonnull private final String description;
+
+@Nonnull private final SnapshotStrategy snapshotStrategy;
+@Nonnull private final CloseableRegistry cancelStreamRegistry;
+
+@Nonnull private final ExecutionType executionType;
+
+public SnapshotStrategyRunner(
+@Nonnull String description,
+@Nonnull SnapshotStrategy snapshotStrategy,
+@Nonnull CloseableRegistry cancelStreamRegistry,
+@Nonnull ExecutionType executionType) {
+this.description = description;
+this.snapshotStrategy = snapshotStrategy;
+this.cancelStreamRegistry = cancelStreamRegistry;
+this.executionType = executionType;
+}
+
+@Nonnull
+public final RunnableFuture> snapshot(
+long checkpointId,
+long timestamp,
+@Nonnull CheckpointStreamFactory streamFactory,
+@Nonnull CheckpointOptions checkpointOptions)
+throws Exception {
+long startTime = System.currentTimeMillis();
+SR snapshotResources = 
snapshotStrategy.syncPrepareResources(checkpointId);
+logCompletedInternal(LOG_SYNC_COMPLETED_TEMPLATE, streamFactory, 
startTime);
+SnapshotStrategy.SnapshotResultSupplier asyncSnapshot =
+snapshotStrategy.asyncSnapshot(
+snapshotResources,
+checkpointId,
+timestamp,
+streamFactory,
+checkpointOptions);
+
+switch (executionType) {
+case SYNCHRONOUS:
+return DoneFuture.of(asyncSnapshot.get(cancelStreamRegistry));

Review comment:
   Is this using the same registry as before? (Say in the 
`HeapSnapshotStrategy`)
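As context for the sync/async switch in the quoted runner: the synchronous branch evaluates the snapshot supplier eagerly and hands back an already-completed future, so callers treat both modes uniformly. A minimal JDK-only sketch of that pattern (names are illustrative, not Flink's actual API):

```java
import java.util.concurrent.FutureTask;
import java.util.concurrent.RunnableFuture;
import java.util.function.Supplier;

public class SyncOrAsync {
    public enum ExecutionType { SYNCHRONOUS, ASYNCHRONOUS }

    // SYNCHRONOUS: run the supplier now and return a completed future.
    // ASYNCHRONOUS: defer evaluation until the caller invokes run().
    public static <T> RunnableFuture<T> snapshot(
            Supplier<T> supplier, ExecutionType type) {
        FutureTask<T> task = new FutureTask<>(supplier::get);
        if (type == ExecutionType.SYNCHRONOUS) {
            task.run(); // completes eagerly on the calling thread
        }
        return task;
    }
}
```

Either way the caller receives a `RunnableFuture`; only the asynchronous variant still needs to be scheduled on another thread.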





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong merged pull request #14751: [BP-1.12][FLINK-21069][table-planner-blink] Configuration parallelism.default doesn't take effect for TableEnvironment#explainSql

2021-01-25 Thread GitBox


wuchong merged pull request #14751:
URL: https://github.com/apache/flink/pull/14751


   







[GitHub] [flink] flinkbot commented on pull request #14755: [FLINK-21140][python] Extract zip file dependencies before adding to PYTHONPATH

2021-01-25 Thread GitBox


flinkbot commented on pull request #14755:
URL: https://github.com/apache/flink/pull/14755#issuecomment-767280247











[GitHub] [flink] rkhachatryan commented on pull request #14635: [FLINK-19462][checkpointing] Update failed checkpoint stats

2021-01-25 Thread GitBox


rkhachatryan commented on pull request #14635:
URL: https://github.com/apache/flink/pull/14635#issuecomment-766707720


   Thanks for the review @pnowojski .
   I've added the space and created a ticket to translate the docs.
   I've also squashed the commits.
   
   > for example AsynCheckpointRunnable fails (throws an exception), I can not 
see any stats for any subtasks that have finished after the failure
   
   As discussed offline, this happens because the failed upstream doesn't send the barrier downstream.







[GitHub] [flink] rmetzger commented on a change in pull request #14678: [FLINK-20833][runtime] Add pluggable failure listener in job manager

2021-01-25 Thread GitBox


rmetzger commented on a change in pull request #14678:
URL: https://github.com/apache/flink/pull/14678#discussion_r563667477



##
File path: docs/deployment/advanced/platform.md
##
@@ -0,0 +1,49 @@
+---
+title: "Customizable Features for Platform Users"
+nav-title: platform
+nav-parent_id: advanced
+nav-pos: 3
+---
+
+Flink provides a set of customizable features for users to extend from the 
default behavior through the plugin framework.
+
+## Customize Failure Listener
+For each of execution exceptions in a flink job, it will be passed to the job 
master. The default failure listener is only
+to record the failure count and emit the metrics numJobFailure for the job. If 
you need an advanced classification on exceptions, 

Review comment:
   ```suggestion
   to record the failure count and emit the metric "numJobFailure" for the job. 
If you need an advanced classification on exceptions, 
   ```

##
File path: docs/deployment/advanced/platform.md
##
@@ -0,0 +1,49 @@
+---
+title: "Customizable Features for Platform Users"
+nav-title: platform
+nav-parent_id: advanced
+nav-pos: 3
+---
+
+Flink provides a set of customizable features for users to extend from the 
default behavior through the plugin framework.
+
+## Customize Failure Listener
+For each of execution exceptions in a flink job, it will be passed to the job 
master. The default failure listener is only
+to record the failure count and emit the metrics numJobFailure for the job. If 
you need an advanced classification on exceptions, 
+you can build a plugin to customize failure listener. For example, it can 
distinguish whether it is a flink runtime error or an 

Review comment:
   ```suggestion
   you can build a plugin to customize the failure listener. For example, it 
can distinguish whether it is a flink runtime error or an 
   ```

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/failurelistener/FailureListenerUtils.java
##
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.failurelistener;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.failurelistener.FailureListener;
+import org.apache.flink.core.failurelistener.FailureListenerFactory;
+import org.apache.flink.core.plugin.PluginManager;
+import org.apache.flink.core.plugin.PluginUtils;
+import org.apache.flink.runtime.metrics.groups.JobManagerJobMetricGroup;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+/** Utils for creating failure listener. */
+public class FailureListenerUtils {
+
+public static List<FailureListener> createFailureListener(

Review comment:
   ```suggestion
   public static List<FailureListener> getFailureListeners(
   ```

##
File path: docs/deployment/advanced/platform.md
##
@@ -0,0 +1,49 @@
+---
+title: "Customizable Features for Platform Users"
+nav-title: platform
+nav-parent_id: advanced
+nav-pos: 3
+---
+
+Flink provides a set of customizable features for users to extend from the 
default behavior through the plugin framework.
+
+## Customize Failure Listener
+For each of execution exceptions in a flink job, it will be passed to the job 
master. The default failure listener is only

Review comment:
   ```suggestion 
   Each execution exception in a Flink job, will be passed to the JobManager. 
The default failure listener is only
   ```

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/ExecutionFailureHandler.java
##
@@ -98,11 +103,25 @@ public FailureHandlingResult 
getGlobalFailureHandlingResult(final Throwable caus
 true);
 }
 
+/** @param failureListener the failure listener to be registered */
+public void registerFailureListener(FailureListener failureListener) {
+failureListeners.add(failureListener);
+}
+
 private FailureHandlingResult handleFailure(
 final Throwable cause,
 final Set verticesToRestart,
 final boolean globalFailure) {
 
+try {
+for (FailureListener listener : failureListeners) {
+

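
The `registerFailureListener`/`handleFailure` pattern in the diff above can be sketched as a minimal stand-alone version. The interface and class names here are hypothetical stand-ins for the PR's classes, and the try/catch guard reflects the diff's intent that listeners must never break failure handling itself:

```java
import java.util.ArrayList;
import java.util.List;

public class FailureListenerSketch {
    // Hypothetical minimal listener interface mirroring the PR's FailureListener idea.
    interface FailureListener {
        void onFailure(Throwable cause, boolean globalFailure);
    }

    static class ExecutionFailureHandler {
        private final List<FailureListener> failureListeners = new ArrayList<>();

        void registerFailureListener(FailureListener listener) {
            failureListeners.add(listener);
        }

        void handleFailure(Throwable cause, boolean globalFailure) {
            // Notify every registered listener; guard against listener exceptions
            // so a misbehaving plugin cannot fail the handler itself.
            for (FailureListener listener : failureListeners) {
                try {
                    listener.onFailure(cause, globalFailure);
                } catch (Exception e) {
                    // Swallow: listeners are observers, not part of failover logic.
                }
            }
        }
    }

    public static void main(String[] args) {
        ExecutionFailureHandler handler = new ExecutionFailureHandler();
        int[] failureCount = {0};
        // A counting listener, like the default numJobFailure metric emitter.
        handler.registerFailureListener((cause, global) -> failureCount[0]++);
        handler.handleFailure(new RuntimeException("boom"), false);
        System.out.println(failureCount[0]); // 1
    }
}
```

In the actual PR, such listeners would be discovered through Flink's plugin framework rather than registered by hand.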
[GitHub] [flink] flinkbot edited a comment on pull request #14750: [FLINK-21104][network] Ensure that converted input channels start in the correct persisting state. [1.12]

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14750:
URL: https://github.com/apache/flink/pull/14750#issuecomment-766739099


   
   ## CI report:
   
   * e813f2f6a5b0e185fb82f378a3af2cd78f2080b0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12456)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] LinyuYao1021 commented on pull request #14737: [FLINK-19667] Add AWS Glue Schema Registry integration

2021-01-25 Thread GitBox


LinyuYao1021 commented on pull request #14737:
URL: https://github.com/apache/flink/pull/14737#issuecomment-766989857











[GitHub] [flink] SteNicholas commented on pull request #14751: [BP-1.12][FLINK-21069][table-planner-blink] Configuration parallelism.default doesn't take effect for TableEnvironment#explainSql

2021-01-25 Thread GitBox


SteNicholas commented on pull request #14751:
URL: https://github.com/apache/flink/pull/14751#issuecomment-767195162


   @wuchong , could you please help to review this backport?







[GitHub] [flink] zentol edited a comment on pull request #14749: [FLINK-21123][fs] Bump beanutils to 1.9.4

2021-01-25 Thread GitBox


zentol edited a comment on pull request #14749:
URL: https://github.com/apache/flink/pull/14749#issuecomment-766851391


   The test failure is unlikely to be related (I can't see how that could 
affect things); I'll re-run the e2e tests to be sure.
   
   The core assumption I have is that the `flink-fs-swift-hadoop` filesystem 
currently works, by virtue of being excluded from bigger changes that the other 
filesystems underwent (like the hadoop3 migration) and not having been touched 
since it was merged, outside of some smaller security fixes.
   
   I'd argue that we should just drop `flink-fs-swift-hadoop` if we don't 
intend to actively maintain it.







[GitHub] [flink] AHeise merged pull request #14750: [FLINK-21104][network] Ensure that converted input channels start in the correct persisting state. [1.12]

2021-01-25 Thread GitBox


AHeise merged pull request #14750:
URL: https://github.com/apache/flink/pull/14750


   







[GitHub] [flink] AHeise merged pull request #14736: [FLINK-21104][network] Ensure that converted input channels start in the correct persisting state.

2021-01-25 Thread GitBox


AHeise merged pull request #14736:
URL: https://github.com/apache/flink/pull/14736


   







[GitHub] [flink] flinkbot commented on pull request #14746: [FLINK-21020][build] Bump Jackson to 2.12.1

2021-01-25 Thread GitBox


flinkbot commented on pull request #14746:
URL: https://github.com/apache/flink/pull/14746#issuecomment-766580728











[GitHub] [flink] flinkbot edited a comment on pull request #14733: [FLINK-20968][table-planner-blink] Remove legacy exec nodes

2021-01-25 Thread GitBox


flinkbot edited a comment on pull request #14733:
URL: https://github.com/apache/flink/pull/14733#issuecomment-765889414











[GitHub] [flink] flinkbot commented on pull request #14747: [FLINK-19436][tests] Properly shutdown cluster in e2e tests

2021-01-25 Thread GitBox


flinkbot commented on pull request #14747:
URL: https://github.com/apache/flink/pull/14747#issuecomment-766625223











  1   2   3   4   5   6   >