[GitHub] [flink] flinkbot edited a comment on pull request #14437: [FLINK-20458][docs] translate gettingStarted.zh.md and correct spelling errors in gettingStarted.md

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14437:
URL: https://github.com/apache/flink/pull/14437#issuecomment-748619719


   
   ## CI report:
   
   * 8192b4f076bc5e5c62d283ef5ff67d2182a128ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11256)
 
   * 28010bc17e55f3b04a68eb4d8d42bc7fe0201191 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11371)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255458#comment-17255458
 ] 

Xintong Song commented on FLINK-20781:
--

Looks like a real and severe bug to me. Tagging it as a release blocker for 1.12.1 
for now.

We can downgrade it later if it turns out not to be a real bug.

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Jiangjie Qin
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.1
>
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> 

[GitHub] [flink] flinkbot commented on pull request #14503: [FLINK-20693][table-planner-blink][python] Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java

2020-12-27 Thread GitBox


flinkbot commented on pull request #14503:
URL: https://github.com/apache/flink/pull/14503#issuecomment-751619779


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2f6a9b8a8d27d5c290ebe1e7a2eec7beb7c77151 (Mon Dec 28 
07:50:21 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20693).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-20781:
-
Priority: Blocker  (was: Major)

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Jiangjie Qin
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.1
>
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> 

[jira] [Updated] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-20781:
-
Fix Version/s: 1.12.1

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.1
>
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> 

[jira] [Commented] (FLINK-17767) Add offset support for TUMBLE, HOP, SESSION group windows

2020-12-27 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255455#comment-17255455
 ] 

hailong wang commented on FLINK-17767:
--

Great! It would be good for this to be included in TVF. I can give a hand if there is 
anything to do.
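
For context, the DataStream API already exposes such an offset on its window assigners; 
the sketch below (class name is illustrative) builds a 1-hour tumbling window shifted by 
15 minutes, which is the kind of alignment this ticket asks to expose for the 
TUMBLE/HOP/SESSION SQL group windows.
{code:java}
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowOffsetSketch {
    public static void main(String[] args) {
        // A 1-hour tumbling window assigner shifted ("aligned") by 15 minutes.
        // The SQL group windows do not expose this offset parameter yet.
        TumblingEventTimeWindows withOffset =
                TumblingEventTimeWindows.of(Time.hours(1), Time.minutes(15));
        System.out.println(withOffset);
    }
}
{code}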

> Add offset support for TUMBLE, HOP, SESSION group windows
> -
>
> Key: FLINK-17767
> URL: https://issues.apache.org/jira/browse/FLINK-17767
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> TUMBLE windows and HOP windows with alignment are not supported yet. We can 
> support them by 
> (, , )



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20693) Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java

2020-12-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20693:
---
Labels: pull-request-available  (was: )

> Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java
> ---
>
> Key: FLINK-20693
> URL: https://issues.apache.org/jira/browse/FLINK-20693
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo opened a new pull request #14503: [FLINK-20693][table-planner-blink][python] Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java

2020-12-27 Thread GitBox


HuangXingBo opened a new pull request #14503:
URL: https://github.com/apache/flink/pull/14503


   ## What is the purpose of the change
   
   *This pull request ports BatchExecPythonCorrelate and 
StreamExecPythonCorrelate to Java*
   
   ## Brief change log
   
 - *Port `BatchExecPythonCorrelate` and `StreamExecPythonCorrelate` to Java*
 - *Add `CommonExecPythonCorrelate.java`*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Original Tests*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] jjiey commented on pull request #14437: [FLINK-20458][docs] translate gettingStarted.zh.md and correct spelling errors in gettingStarted.md

2020-12-27 Thread GitBox


jjiey commented on pull request #14437:
URL: https://github.com/apache/flink/pull/14437#issuecomment-751618750


   > Thanks for your contribution. Sorry for the late response and I left some 
comments.
   
   Thanks @fsk119 for your review comments. I've updated and committed the 
modifications.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14451:
URL: https://github.com/apache/flink/pull/14451#issuecomment-749321360


   
   ## CI report:
   
   * b3976f5cf20bd5a78289cf388c830a1739992bd0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11365)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14437: [FLINK-20458][docs] translate gettingStarted.zh.md and correct spelling errors in gettingStarted.md

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14437:
URL: https://github.com/apache/flink/pull/14437#issuecomment-748619719


   
   ## CI report:
   
   * 8192b4f076bc5e5c62d283ef5ff67d2182a128ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11256)
 
   * 28010bc17e55f3b04a68eb4d8d42bc7fe0201191 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255442#comment-17255442
 ] 

xx chai commented on FLINK-20780:
-

[^ps-flink-process.txt]

the file is the output of ps aux | grep flink-process-id 

You can see from the file that the hadoop classpath is loaded

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png, ps-flink-process.txt
>
>
> flink-sql-client queries against hive fail.
> I've already configured the hadoop classpath
> and my hadoop is CDH
> error:
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 commented on pull request #14281: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes setup

2020-12-27 Thread GitBox


wangyang0918 commented on pull request #14281:
URL: https://github.com/apache/flink/pull/14281#issuecomment-751613713


   @blublinsky Thanks for your comments.
   
   First, I believe parsing the `KubernetesOwnerReference` is not the responsibility 
of `KubernetesJobManagerFactory`. Moving the parsing logic out could make it 
more testable and reusable [1]. Second, we already have a mechanism to parse 
a structured string into a map, which helps a lot when converting it to a 
`KubernetesOwnerReference`. Benefiting from the map, we do not require users 
to always set all the fields (e.g. `apiVersion`, `blockOwnerDeletion`, 
`controller`, `kind`, `name`, `uid`) in a specific order, which makes it easier to 
use. Last, I do not think it will introduce too much code except for the tests.
   
   [1]. 
https://flink.apache.org/contributing/code-style-and-quality-common.html#design-for-testability
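
   A rough, hypothetical sketch of that idea, assuming a simple `key:value,key:value` 
string (the class and example values below are illustrative only, not code from this PR):

```java
import java.util.HashMap;
import java.util.Map;

public class OwnerReferenceParseSketch {
    // Turns "apiVersion:apps/v1,kind:Deployment,name:my-owner,uid:1234" into a map,
    // so fields can be given in any order and optional fields can be omitted.
    public static Map<String, String> parse(String raw) {
        Map<String, String> fields = new HashMap<>();
        for (String pair : raw.split(",")) {
            String[] kv = pair.split(":", 2);
            fields.put(kv[0].trim(), kv[1].trim());
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(parse("apiVersion:apps/v1,kind:Deployment,name:my-owner,uid:1234"));
    }
}
```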



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xx chai updated FLINK-20780:

Attachment: ps-flink-process.txt

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png, ps-flink-process.txt
>
>
> flink-sql-client queries against hive fail.
> I've already configured the hadoop classpath
> and my hadoop is CDH
> error:
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 edited a comment on pull request #14281: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes setup

2020-12-27 Thread GitBox


wangyang0918 edited a comment on pull request #14281:
URL: https://github.com/apache/flink/pull/14281#issuecomment-751185846


   @blublinsky Generally, this PR could work. But I think we could make the 
implementation more graceful.
   * We could introduce a new Kubernetes resource in Flink, 
`KubernetesOwnerReference`, similar to `KubernetesToleration`.
   * `KubernetesOwnerReference` could be built from a map, which is configured 
via `kubernetes.jobmanager.ownerref`. It would be a map-type config option 
(`.mapType()`), similar to `kubernetes.jobmanager.tolerations` (see the sketch below). 
   * `KubernetesOwnerReference` could be tested separately.
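
   A minimal sketch of what such a map-type option could look like, assuming the 
proposed key `kubernetes.jobmanager.ownerref` (this option does not exist in Flink; 
the definition only mirrors the pattern of the existing Kubernetes options):

```java
import java.util.Map;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class OwnerReferenceOptionSketch {
    // Proposed (not yet existing) option: owner-reference fields as key:value pairs,
    // e.g. kubernetes.jobmanager.ownerref: apiVersion:apps/v1,kind:Deployment,name:my-owner,uid:1234
    public static final ConfigOption<Map<String, String>> JOB_MANAGER_OWNER_REFERENCE =
            ConfigOptions.key("kubernetes.jobmanager.ownerref")
                    .mapType()
                    .noDefaultValue()
                    .withDescription(
                            "Owner reference of the JobManager resources, given as comma-separated key:value pairs.");

    public static void main(String[] args) {
        System.out.println(JOB_MANAGER_OWNER_REFERENCE.key());
    }
}
```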



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14495: [hotfix][runtime]fix typo in HistoryServerUtils class

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14495:
URL: https://github.com/apache/flink/pull/14495#issuecomment-751225470


   
   ## CI report:
   
   * 06d95b81212492278e8bc45f78a86e720b739f0d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11331)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11368)
 
   * 798b245caf1f4c1de2f3e14e08941792017bd4da Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11369)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255437#comment-17255437
 ] 

Rui Li commented on FLINK-20780:


Hmm... then I think you need to check the classpath of the JM/TM of your cluster. 
The stack trace indicates the error happens during job graph deserialization. 
So it means the {{JobConf}} class is available to the sql-client but unavailable to 
the JM/TM.
You can try {{ps aux | grep }} to display the launch command of the JM/TM, 
which should contain the classpath information.
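
As a quick cross-check (a hypothetical snippet, not part of Flink), the same probe can 
be done in code, run on the JM/TM host with the same classpath as the launch command:
{code:java}
public class JobConfClasspathCheck {
    public static void main(String[] args) {
        try {
            // Succeeds only if the Hadoop jars containing JobConf are on the classpath.
            Class.forName("org.apache.hadoop.mapred.JobConf");
            System.out.println("org.apache.hadoop.mapred.JobConf is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("org.apache.hadoop.mapred.JobConf is NOT on the classpath");
        }
    }
}
{code}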

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> flink-sql-client queries against hive fail.
> I've already configured the hadoop classpath
> and my hadoop is CDH
> error:
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias reassigned FLINK-20781:


Assignee: Jiangjie Qin

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: test-stability
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> 

[jira] [Updated] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20781:
-
Affects Version/s: 1.12.0

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Priority: Major
>  Labels: test-stability
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=5, 
> backoffTimeMS=100)
>

[jira] [Closed] (FLINK-20540) The baseurl for pg database is incorrect in JdbcCatalog page

2020-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-20540.
---
Fix Version/s: 1.13.0
   Resolution: Fixed

Fixed in master: 8da64d7982de7287766de658f6d6f03037be94e6

> The baseurl for pg database is incorrect in JdbcCatalog page
> 
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Documentation, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with / when instantiating JdbcCatalog.
> But according to the [Flink 
> document|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and code comments, baseUrl should support the format 
> {{"jdbc:postgresql://:"}}
>  
> When I use baseUrl "{{jdbc:postgresql://:}}", the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)\\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\\n\\t...
>  7 more\\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\\n\\t...
>  7 more\\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed connecting to 
> jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\\n\\t...
>  10 more\\nCaused by: org.apache.flink.table.api.ValidationException: Failed 
> connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> scala.App$$anonfun$main$1.apply(App.scala:76)
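
A minimal sketch of the workaround implied by the description above: make sure the 
baseUrl ends with a trailing slash before instantiating the catalog (catalog name and 
credentials below are placeholders):
{code:java}
import org.apache.flink.connector.jdbc.catalog.JdbcCatalog;

public class JdbcCatalogBaseUrlSketch {
    public static void main(String[] args) {
        String baseUrl = "jdbc:postgresql://flink-postgres.cdn-flink:5432";
        // The catalog currently expects a trailing slash on the base JDBC URL.
        if (!baseUrl.endsWith("/")) {
            baseUrl = baseUrl + "/";
        }
        JdbcCatalog catalog = new JdbcCatalog("mypg", "flink", "username", "password", baseUrl);
        System.out.println(catalog.getName());
    }
}
{code}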

[jira] [Updated] (FLINK-20540) JdbcCatalog throws connection exception when baseUrl doesn't end with slash

2020-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20540:

Summary: JdbcCatalog throws connection exception when baseUrl doesn't end 
with slash  (was: The baseurl for pg database is incorrect in JdbcCatalog page)

> JdbcCatalog throws connection exception when baseUrl doesn't end with slash
> ---
>
> Key: FLINK-20540
> URL: https://issues.apache.org/jira/browse/FLINK-20540
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Documentation, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.1
>Reporter: zhangzhao
>Assignee: zhangzhao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  
> {code:java}
> // code placeholder
> import org.apache.flink.connector.jdbc.catalog.JdbcCatalog
> new JdbcCatalog(name, defaultDatabase, username, password, baseUrl){code}
>  
> The baseUrl must end with / when instantiating JdbcCatalog.
> But according to the [Flink 
> document|https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/jdbc.html#usage-of-postgrescatalog]
>  and code comments, baseUrl should support the format 
> {{"jdbc:postgresql://:"}}
>  
> When I use baseUrl "{{jdbc:postgresql://:}}", the error stack is:
> {code:java}
> // code placeholder
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)\\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute application.
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\\n\\t...
>  7 more\\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\\n\\t...
>  7 more\\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed connecting to 
> jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\\n\\t...
>  10 more\\nCaused by: org.apache.flink.table.api.ValidationException: Failed 
> connecting to jdbc:postgresql://flink-postgres.cdn-flink:5432flink via JDBC.
> org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog.open(AbstractJdbcCatalog.java:100)
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:191)
> org.apache.flink.table.api.internal.TableEnvImpl.registerCatalog(TableEnvImpl.scala:267)
> com.upai.jobs.TableBodySentFields.registerCatalog(TableBodySentFields.scala:25)
> com.upai.jobs.FusionGifShow$.run(FusionGifShow.scala:28)
> com.upai.jobs.FlinkTask$.delayedEndpoint$com$upai$jobs$FlinkTask$1(FlinkTask.scala:41)
> com.upai.jobs.FlinkTask$delayedInit$body.apply(FlinkTask.scala:11)
> scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> 

[jira] [Created] (FLINK-20783) Separate the implementation of BatchExec nodes for Join

2020-12-27 Thread Wenlong Lyu (Jira)
Wenlong Lyu created FLINK-20783:
---

 Summary: Separate the implementation of BatchExec nodes for Join
 Key: FLINK-20783
 URL: https://issues.apache.org/jira/browse/FLINK-20783
 Project: Flink
  Issue Type: Sub-task
Reporter: Wenlong Lyu
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #14362: [FLINK-20540][jdbc] Fix the defaultUrl for AbstractJdbcCatalog in JDBC connector

2020-12-27 Thread GitBox


wuchong merged pull request #14362:
URL: https://github.com/apache/flink/pull/14362


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20389) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias closed FLINK-20389.

Resolution: Fixed

I close this issue again after opening FLINK-20781, as the underlying issue 
seems to be a different one, as [Arvid pointed 
out|https://issues.apache.org/jira/browse/FLINK-20389?focusedCommentId=17251651=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17251651].

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20389
> URL: https://issues.apache.org/jira/browse/FLINK-20389
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Arvid Heise
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0, 1.12.1
>
> Attachments: FLINK-20389-failure.log
>
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=118=results]
>  failed due to {{UnalignedCheckpointITCase}} caused by a 
> {{NullPointerException}}:
> {code:java}
> Test execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) failed 
> with:
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> 

[jira] [Created] (FLINK-20782) Separate the implementation of BatchExecRank

2020-12-27 Thread Wenlong Lyu (Jira)
Wenlong Lyu created FLINK-20782:
---

 Summary: Separate the implementation of BatchExecRank
 Key: FLINK-20782
 URL: https://issues.apache.org/jira/browse/FLINK-20782
 Project: Flink
  Issue Type: Sub-task
Reporter: Wenlong Lyu
 Fix For: 1.13.0


Separate the ExecNode and PhysicalNode of Rank in batch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] jjiey commented on a change in pull request #14437: [FLINK-20458][docs] translate gettingStarted.zh.md and correct spelling errors in gettingStarted.md

2020-12-27 Thread GitBox


jjiey commented on a change in pull request #14437:
URL: https://github.com/apache/flink/pull/14437#discussion_r549238266



##
File path: docs/dev/table/sql/gettingStarted.zh.md
##
@@ -129,11 +128,11 @@ FROM employee_information
 GROUP BY dep_id;
  {% endhighlight %} 
 
-Such queries are considered _stateful_. Flink's advanced fault-tolerance 
mechanism will maintain internal state and consistency, so queries always 
return the correct result, even in the face of hardware failure. 
+这样的查询被认为是 _有状态的_。Flink 的高级容错机制将维持内部状态和一致性,因此即使遇到硬件故障,查询也始终返回正确结果。
 
-## Sink Tables
+## Sink 表
 
-When running this query, the SQL client provides output in real-time but in a 
read-only fashion. Storing results - to power a report or dashboard - requires 
writing out to another table. This can be achieved using an `INSERT INTO` 
statement. The table referenced in this clause is known as a sink table. An 
`INSERT INTO` statement will be submitted as a detached query to the Flink 
cluster. 
+当运行此查询时,SQL 客户端实时但是以只读方式提供输出。存储结果(为报表或仪表板提供数据来源)需要写到另一个表。这可以使用 `INSERT INTO` 
语句来实现。本节中引用的表称为 sink 表。`INSERT INTO` 语句将作为一个独立查询被提交到 Flink 集群中。

Review comment:
   “作为报表或仪表板提供数据来源” -> "作为报表或仪表板的数据来源"   I see what you mean; changing "提供" to "的" here should read more smoothly.
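
   As a minimal illustration of the sink-table pattern discussed in this
   paragraph (reusing `employee_information` and `dep_id` from this page; the
   `dept_counts` table, its column types, and the `print` connector are
   illustrative assumptions, not part of this PR):

   {% highlight sql %}
   -- A hypothetical sink table; any writable connector would work here.
   CREATE TABLE dept_counts (
     dep_id BIGINT,
     emp_count BIGINT
   ) WITH (
     'connector' = 'print'
   );

   -- Submitted to the Flink cluster as a detached query.
   INSERT INTO dept_counts
   SELECT dep_id, COUNT(*) AS emp_count
   FROM employee_information
   GROUP BY dep_id;
   {% endhighlight %}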





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255430#comment-17255430
 ] 

Matthias commented on FLINK-20781:
--

FLINK-20389 was linked, as the most recent comments in that issue refer to the 
same failure described here.

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Matthias
>Priority: Major
>  Labels: test-stability
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: 

[jira] [Commented] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255428#comment-17255428
 ] 

Matthias commented on FLINK-20781:
--

FLINK-20492 was linked, as this failure might have been caused by changes 
related to that ticket. [~becket_qin], could you have a look?

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Matthias
>Priority: Major
>  Labels: test-stability
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: 

[jira] [Updated] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20781:
-
Affects Version/s: 1.13.0

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Matthias
>Priority: Major
>  Labels: test-stability
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=5, 
> backoffTimeMS=100)
>   at 

[jira] [Updated] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20781:
-
Labels: test-stability  (was: )

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Reporter: Matthias
>Priority: Major
>  Labels: test-stability
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=5, 
> backoffTimeMS=100)
>   at 
> 

[jira] [Updated] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20781:
-
Description: 
{{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
[this 
build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
{code:java}
[ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
152.186 s <<< FAILURE! - in 
org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
[ERROR] execute[Parallel cogroup, p = 
10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
elapsed: 34.869 s  <<< ERROR!
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at 
org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at 
org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
at akka.dispatch.OnComplete.internal(Future.scala:264)
at akka.dispatch.OnComplete.internal(Future.scala:261)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at 
org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
at 
akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
at 
akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at 
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at 
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=5, 
backoffTimeMS=100)
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:221)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:214)
at 

[GitHub] [flink] flinkbot edited a comment on pull request #14495: [hotfix][runtime]fix typo in HistoryServerUtils class

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14495:
URL: https://github.com/apache/flink/pull/14495#issuecomment-751225470


   
   ## CI report:
   
   * 06d95b81212492278e8bc45f78a86e720b739f0d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11331)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11368)
 
   * 798b245caf1f4c1de2f3e14e08941792017bd4da UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20781:
-
Summary: UnalignedCheckpointITCase failure caused by NullPointerException  
(was: Unal)

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Reporter: Matthias
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20781) Unal

2020-12-27 Thread Matthias (Jira)
Matthias created FLINK-20781:


 Summary: Unal
 Key: FLINK-20781
 URL: https://issues.apache.org/jira/browse/FLINK-20781
 Project: Flink
  Issue Type: Bug
Reporter: Matthias






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20654) Unaligned checkpoint recovery may lead to corrupted data stream

2020-12-27 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255421#comment-17255421
 ] 

Matthias commented on FLINK-20654:
--

[Build 
#20201221.2|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11090=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0=3994]
 contained a test failure of {{UnalignedCheckpointITCase}} due to an 
{{EOFException}}.

> Unaligned checkpoint recovery may lead to corrupted data stream
> ---
>
> Key: FLINK-20654
> URL: https://issues.apache.org/jira/browse/FLINK-20654
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Arvid Heise
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0, 1.12.1
>
>
> Fix of FLINK-20433 shows potential corruption after recovery for all 
> variations of UnalignedCheckpointITCase.
> To reproduce, run UCITCase a couple hundreds times. The issue showed for me 
> in:
> - execute [Parallel union, p = 5]
> - execute [Parallel union, p = 10]
> - execute [Parallel cogroup, p = 5]
> - execute [parallel pipeline with remote channels, p = 5]
> with decreasing frequency.
> The issue manifests as one of the following issues:
> - stream corrupted exception
> - EOF exception
> - assertion failure in NUM_LOST or NUM_OUT_OF_ORDER
> - (for union) ArithmeticException overflow (because the number that should be 
> [0;10] has been mis-deserialized)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14492: [FLINK-20756][python] Add PythonCalcSplitConditionRexFieldRule

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14492:
URL: https://github.com/apache/flink/pull/14492#issuecomment-751194123


   
   ## CI report:
   
   * 5e1ebe24b8fcdacdc2c96a205f84208c25c4f653 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11363)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14495: [hotfix][runtime]fix typo in HistoryServerUtils class

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14495:
URL: https://github.com/apache/flink/pull/14495#issuecomment-751225470


   
   ## CI report:
   
   * 06d95b81212492278e8bc45f78a86e720b739f0d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11331)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11368)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhisheng17 commented on pull request #14495: [hotfix][runtime]fix typo in HistoryServerUtils class

2020-12-27 Thread GitBox


zhisheng17 commented on pull request #14495:
URL: https://github.com/apache/flink/pull/14495#issuecomment-751599949


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20654) Unaligned checkpoint recovery may lead to corrupted data stream

2020-12-27 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255416#comment-17255416
 ] 

Matthias commented on FLINK-20654:
--

[Build 
failed|https://dev.azure.com/mapohl/flink/_build/results?buildId=142=logs=6e55a443-5252-5db5-c632-109baf464772=9df6efca-61d0-513a-97ad-edb76d85786a=8809]
 due to {{IndexOutOfBoundsException}} similar to what is listed in FLINK-20662.

> Unaligned checkpoint recovery may lead to corrupted data stream
> ---
>
> Key: FLINK-20654
> URL: https://issues.apache.org/jira/browse/FLINK-20654
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Arvid Heise
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0, 1.12.1
>
>
> Fix of FLINK-20433 shows potential corruption after recovery for all 
> variations of UnalignedCheckpointITCase.
> To reproduce, run UCITCase a couple hundreds times. The issue showed for me 
> in:
> - execute [Parallel union, p = 5]
> - execute [Parallel union, p = 10]
> - execute [Parallel cogroup, p = 5]
> - execute [parallel pipeline with remote channels, p = 5]
> with decreasing frequency.
> The issue manifests as one of the following issues:
> - stream corrupted exception
> - EOF exception
> - assertion failure in NUM_LOST or NUM_OUT_OF_ORDER
> - (for union) ArithmeticException overflow (because the number that should be 
> [0;10] has been mis-deserialized)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255407#comment-17255407
 ] 

xx chai commented on FLINK-20780:
-

There is only one node in my cluster; it is my test cluster.

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> flink-sql-client query hive is fail.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255406#comment-17255406
 ] 

Rui Li commented on FLINK-20780:


Hi [~chaixiaoxue], have you set the HADOOP_CLASSPATH variable on every node of 
the standalone cluster?

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> flink-sql-client query hive is fail.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20744) org.apache.flink.test.checkpointing.UnalignedCheckpointITCase fails due to java.lang.ArrayIndexOutOfBoundsException

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20744:
-
Labels: test-stability  (was: testability)

> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase fails due to 
> java.lang.ArrayIndexOutOfBoundsException
> ---
>
> Key: FLINK-20744
> URL: https://issues.apache.org/jira/browse/FLINK-20744
> Project: Flink
>  Issue Type: Test
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0
>Reporter: Matthias
>Priority: Major
>  Labels: test-stability
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=140=logs=6e55a443-5252-5db5-c632-109baf464772=9df6efca-61d0-513a-97ad-edb76d85786a=8819]
>  failed due to {{UnalignedCheckpointITCase}} failure:
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 144.454 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 5](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 12.282 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery 

[jira] [Updated] (FLINK-20654) Unaligned checkpoint recovery may lead to corrupted data stream

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20654:
-
Labels: pull-request-available test-stability  (was: pull-request-available)

> Unaligned checkpoint recovery may lead to corrupted data stream
> ---
>
> Key: FLINK-20654
> URL: https://issues.apache.org/jira/browse/FLINK-20654
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Arvid Heise
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0, 1.12.1
>
>
> Fix of FLINK-20433 shows potential corruption after recovery for all 
> variations of UnalignedCheckpointITCase.
> To reproduce, run UCITCase a couple hundreds times. The issue showed for me 
> in:
> - execute [Parallel union, p = 5]
> - execute [Parallel union, p = 10]
> - execute [Parallel cogroup, p = 5]
> - execute [parallel pipeline with remote channels, p = 5]
> with decreasing frequency.
> The issue manifests as one of the following issues:
> - stream corrupted exception
> - EOF exception
> - assertion failure in NUM_LOST or NUM_OUT_OF_ORDER
> - (for union) ArithmeticException overflow (because the number that should be 
> [0;10] has been mis-deserialized)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20770) Incorrect description for config option kubernetes.rest-service.exposed.type

2020-12-27 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20770:
-
Labels: starter  (was: )

> Incorrect description for config option kubernetes.rest-service.exposed.type
> 
>
> Key: FLINK-20770
> URL: https://issues.apache.org/jira/browse/FLINK-20770
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0, 1.11.3
>Reporter: Yang Wang
>Priority: Major
>  Labels: starter
>
> {code:java}
> public static final ConfigOption<ServiceExposedType>
> REST_SERVICE_EXPOSED_TYPE =
>key("kubernetes.rest-service.exposed.type")
>.enumType(ServiceExposedType.class)
>.defaultValue(ServiceExposedType.LoadBalancer)
>.withDescription("The type of the rest service (ClusterIP or NodePort or 
> LoadBalancer). " +
>   "When set to ClusterIP, the rest service will not be created.");
> {code}
> The description of the config option is not correct. We will always create 
> the rest service after refactoring the Kubernetes decorators in FLINK-16194. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255393#comment-17255393
 ] 

xx chai commented on FLINK-20780:
-

export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
export PATH=${HADOOP_HOME}/bin:$PATH
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`

 

When I execute {{hadoop classpath}}, it prints the correct result.

 

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> flink-sql-client query hive is fail.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255390#comment-17255390
 ] 

Jark Wu commented on FLINK-20780:
-

How did you configure the hadoop classpath?

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> flink-sql-client query hive is fail.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] kougazhang commented on pull request #14362: [FLINK-20540][jdbc] Fix the defaultUrl for AbstractJdbcCatalog in JDBC connector

2020-12-27 Thread GitBox


kougazhang commented on pull request #14362:
URL: https://github.com/apache/flink/pull/14362#issuecomment-751585903


   > @kougazhang There're failed tests, the test 
[code](https://github.com/apache/flink/blob/aa34f060a06041b6e84afa54c8ae5507dba0342d/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/catalog/PostgresCatalogTestBase.java#L125)
 also need to update, please update the PR.
   
   Hi, I updated the test code, and it passed the automated tests.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255387#comment-17255387
 ] 

xx chai commented on FLINK-20780:
-

I restart the cluster every time after changing the jars in the lib folder.

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> flink-sql-client query hive is fail.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255386#comment-17255386
 ] 

Rui Li commented on FLINK-20780:


You need to make sure the jars are available to both sql-client and JM/TM. 
Judging from the stack trace, it seems the jars are missing on JM/TM. And for 
standalone mode, you need to restart the cluster after you change jars in the 
lib folder.

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> flink-sql-client query hive is fail.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20583) Support to define the data range of decimal types in datagen table source connector

2020-12-27 Thread tonychen0716 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255384#comment-17255384
 ] 

tonychen0716 commented on FLINK-20583:
--

Any update on this?

> Support to define the data range of decimal types in datagen table source 
> connector
> ---
>
> Key: FLINK-20583
> URL: https://issues.apache.org/jira/browse/FLINK-20583
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.0
> Environment: Flink 1.12.0
>Reporter: tonychen0716
>Priority: Minor
>
>  
> {code:java}
> -- Flink SQL
>  CREATE TABLE datagen
>  (
>  lon DECIMAL(38, 18) COMMENT 'lon',
>  lat DECIMAL(38, 18) COMMENT 'lat'
>  )
>  WITH (
>  'connector' = 'datagen',
>  'rows-per-second' = '10',
>  'fields.lon.kind' = 'random',
>  'fields.lon.min' = '116.40',
>  'fields.lon.max' = '116.41',
>  'fields.lat.kind' = 'random',
>  'fields.lat.min' = '39.894454',
>  'fields.lat.max' = '39.90'
>  );
> CREATE TABLE sink
>  (
>  lon DECIMAL(38, 18),
>  lat DECIMAL(38, 18)
>  )
>  WITH (
>  'connector' = 'print'
>  );
> INSERT INTO sink
>  SELECT lon,
>  lat
>  from datagen;
> {code}
>  
> It give exception:
>  
> {code:java}
> Table options are:
> 'connector'='datagen'
> 'fields.lat.kind'='random'
> 'fields.lat.max'='39.90'
> 'fields.lat.min'='39.894454'
> 'fields.lon.kind'='random'
> 'fields.lon.max'='116.41'
> 'fields.lon.min'='116.40'
> 'rows-per-second'='10'
> Unsupported options found for connector 'datagen'.
> Unsupported options:
> fields.lat.max
> fields.lat.min
> fields.lon.max
> fields.lon.min
> Supported options:
> connector
> fields.lat.kind
> fields.lon.kind
> number-of-rows
> rows-per-second
> INSERT INTO sink
> SELECT lon,
>  lat
> from datagen 
> {code}
>  
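
A possible workaround sketch while decimal ranges are not supported: generate 
the values as DOUBLE (where the {{min}}/{{max}} options are accepted) and cast 
to DECIMAL(38, 18) in the insert. Table and field names follow the example 
above; the rest is an illustrative assumption, not a confirmed datagen feature 
for DECIMAL.

{code:sql}
-- Generate as DOUBLE, where min/max bounds are supported.
CREATE TABLE datagen_raw (
 lon DOUBLE,
 lat DOUBLE
) WITH (
 'connector' = 'datagen',
 'rows-per-second' = '10',
 'fields.lon.kind' = 'random',
 'fields.lon.min' = '116.40',
 'fields.lon.max' = '116.41',
 'fields.lat.kind' = 'random',
 'fields.lat.min' = '39.894454',
 'fields.lat.max' = '39.90'
);

-- Cast back to the DECIMAL precision expected by the sink.
INSERT INTO sink
SELECT CAST(lon AS DECIMAL(38, 18)) AS lon,
       CAST(lat AS DECIMAL(38, 18)) AS lat
FROM datagen_raw;
{code}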



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14464:
URL: https://github.com/apache/flink/pull/14464#issuecomment-749543461


   
   ## CI report:
   
   * 9abb79a2bb6e487d3f1deb0cfc4ecfff74a7d780 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11360)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14362: [FLINK-20540][jdbc] Fix the defaultUrl for AbstractJdbcCatalog in JDBC connector

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14362:
URL: https://github.com/apache/flink/pull/14362#issuecomment-742906429


   
   ## CI report:
   
   * acdafc09aa2916dd50f6e2139a841ed1bfd793db Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11361)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14502: [Flink-20766][table-planner-blink] Separate implementations of sort ExecNode and PhysicalNode.

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14502:
URL: https://github.com/apache/flink/pull/14502#issuecomment-751570047


   
   ## CI report:
   
   * feec24b2df7f31564e217290d7a0d1636fac27b2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11367)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14502: [Flink-20766][table-planner-blink] Separate implementations of sort ExecNode and PhysicalNode.

2020-12-27 Thread GitBox


flinkbot commented on pull request #14502:
URL: https://github.com/apache/flink/pull/14502#issuecomment-751570047


   
   ## CI report:
   
   * feec24b2df7f31564e217290d7a0d1636fac27b2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255375#comment-17255375
 ] 

xx chai commented on FLINK-20780:
-

I have deleted {{flink-connector-hive_2.11-1.12.0.jar}}, 
{{hive-exec-2.1.1-cdh6.3.2.jar}}, and {{hive-exec-2.2.0.jar.bak}}, but when I 
query hive I get the same error.

now lib:

flink-connector-jdbc_2.11-1.12.0.jar
flink-csv-1.12.0.jar
flink-dist_2.11-1.12.0.jar
flink-json-1.12.0.jar
flink-shaded-hadoop-3-uber-3.1.1.7.1.1.0-565-9.0.jar
flink-shaded-zookeeper-3.4.14.jar
flink-sql-connector-hive-2.2.0_2.11-1.12.0.jar
flink-sql-connector-kafka_2.11-1.12.0.jar
flink-table_2.11-1.12.0.jar
flink-table-api-java-bridge_2.11-1.12.0.jar
flink-table-blink_2.11-1.12.0.jar
log4j-1.2-api-2.12.1.jar
log4j-api-2.12.1.jar
log4j-core-2.12.1.jar
log4j-slf4j-impl-2.12.1.jar
mysql-connector-java-8.0.22.jar

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> Querying Hive from flink-sql-client fails.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14502: [Flink-20766][table-planner-blink] Separate implementations of sort ExecNode and PhysicalNode.

2020-12-27 Thread GitBox


flinkbot commented on pull request #14502:
URL: https://github.com/apache/flink/pull/14502#issuecomment-751567825


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit feec24b2df7f31564e217290d7a0d1636fac27b2 (Mon Dec 28 
04:11:52 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255371#comment-17255371
 ] 

Jark Wu commented on FLINK-20780:
-

Please remove {{flink-connector-hive_2.11-1.12.0.jar}}, 
{{hive-exec-2.1.1-cdh6.3.2.jar}}, {{hive-exec-2.2.0.jar.bak}}, because 
{{flink-sql-connector-hive-2.2.0_2.11-1.12.0.jar}} should already contain them.
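As a side note (not part of this ticket, and purely a sketch with an illustrative class name and messages), a quick way to check whether {{org.apache.hadoop.mapred.JobConf}} is visible to the client JVM at all is to run a {{Class.forName}} probe on the same classpath as the SQL Client; if it fails, the {{hadoop classpath}} export is not reaching the client process:

{code:java}
public class HadoopClasspathCheck {
    public static void main(String[] args) {
        try {
            // The class the SQL Client fails to load in the reported stack trace.
            Class.forName("org.apache.hadoop.mapred.JobConf");
            System.out.println("JobConf found: Hadoop classes are on the classpath.");
        } catch (ClassNotFoundException e) {
            System.out.println("JobConf missing: run `export HADOOP_CLASSPATH=$(hadoop classpath)` "
                    + "in the shell that starts the SQL Client, then retry.");
        }
    }
}
{code}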

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> Querying Hive from flink-sql-client fails.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14501: [FLINK-20778] the comment for the type of kafka consumer is wrong at KafkaPartitionSplit

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14501:
URL: https://github.com/apache/flink/pull/14501#issuecomment-751565111


   
   ## CI report:
   
   * b6663b070dc3a6e687d6b68d08ba12366a4ef80e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11366)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wenlong88 opened a new pull request #14502: [Flink-20766][table-planner-blink] Separate implementations of sort ExecNode and PhysicalNode.

2020-12-27 Thread GitBox


wenlong88 opened a new pull request #14502:
URL: https://github.com/apache/flink/pull/14502


   ## What is the purpose of the change
   Separate the implementation of the sort ExecNodes
   
   ## Brief change log
   1. Introduce StreamPhysicalSort, and make StreamExecSort extend only from 
ExecNode
   2. Introduce StreamPhysicalSortLimit, and make StreamExecSortLimit extend only 
from ExecNode
   3. Introduce StreamPhysicalTemporalSort, and make StreamExecTemporalSort 
extend only from ExecNode
   4. Introduce BatchPhysicalSort, and make BatchExecSort extend only from 
ExecNode
   5. Introduce BatchPhysicalSortLimit, and make BatchExecSortLimit extend only 
from ExecNode
   
   ## Verifying this change
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
 - Does this pull request introduce a new feature? (no)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format

2020-12-27 Thread GitBox


wuchong commented on a change in pull request #14464:
URL: https://github.com/apache/flink/pull/14464#discussion_r549207143



##
File path: docs/dev/table/connectors/formats/canal.md
##
@@ -142,6 +142,79 @@ SELECT * FROM topic_products;
 
 
 
+Available Metadata
+--
+
+The following format metadata can be exposed as read-only (`VIRTUAL`) columns 
in a table definition.
+
+Attention Format metadata fields are 
only available if the
+corresponding connector forwards format metadata. Currently, only the Kafka 
connector is able to expose
+metadata fields for its value format.
+
+
+
+
+  Key
+  Data Type
+  Description
+
+
+
+
+  database
+  STRING NULL
+  The originating database. Corresponds to the database 
field in the
+  Canal record if available.
+
+
+  table
+  STRING NULL
+  The originating database table. Corresponds to the 
table field in the
+  Canal record if available.
+
+
+  sql-type
+  MAP<STRING, INT> NULL
+  Map of various sql types. Corresponds to the sqlType 
field in the 
+  Canal record if available.
+
+
+  pk-names
+  ARRAY<STRING> NULL
+  Array of primary key names. Corresponds to the pkNames 
field in the 
+  Canal record if available.
+
+
+  ingestion-timestamp
+  TIMESTAMP(3) WITH LOCAL TIME ZONE NULL
+  The timestamp at which the connector processed the event. 
Corresponds to the ts
+  field in the Canal record.
+
+
+
+
+The following example shows how to access Canal metadata fields in Kafka:
+
+
+
+{% highlight sql %}
+CREATE TABLE KafkaTable (
+  `origin_database` STRING METADATA FROM 'value.database' VIRTUAL,
+  `origin_table` STRING METADATA FROM 'value.table' VIRTUAL,

Review comment:
   Could you list all the metadata columns? I think that would be helpful, 
esp. for the complex types. 
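   For reference, a sketch of what such a full listing could look like, written in the Table API style used by the tests in this PR; the column names, topic, and bootstrap server are illustrative placeholders rather than part of the patch:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CanalMetadataSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // Declares each metadata key from the table above as a read-only (VIRTUAL) column.
        tEnv.executeSql(
                "CREATE TABLE KafkaTable (\n"
                + "  origin_database STRING METADATA FROM 'value.database' VIRTUAL,\n"
                + "  origin_table STRING METADATA FROM 'value.table' VIRTUAL,\n"
                + "  origin_sql_type MAP<STRING, INT> METADATA FROM 'value.sql-type' VIRTUAL,\n"
                + "  origin_pk_names ARRAY<STRING> METADATA FROM 'value.pk-names' VIRTUAL,\n"
                + "  origin_ts TIMESTAMP(3) WITH LOCAL TIME ZONE METADATA FROM 'value.ingestion-timestamp' VIRTUAL,\n"
                + "  user_id BIGINT,\n"
                + "  item_id BIGINT,\n"
                + "  behavior STRING\n"
                + ") WITH (\n"
                + "  'connector' = 'kafka',\n"
                + "  'topic' = 'user_behavior',\n"
                + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                + "  'format' = 'canal-json'\n"
                + ")");
    }
}
```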

##
File path: docs/dev/table/connectors/formats/canal.md
##
@@ -142,6 +142,79 @@ SELECT * FROM topic_products;
 
 
 
+Available Metadata
+--
+
+The following format metadata can be exposed as read-only (`VIRTUAL`) columns 
in a table definition.
+
+Attention Format metadata fields are 
only available if the
+corresponding connector forwards format metadata. Currently, only the Kafka 
connector is able to expose
+metadata fields for its value format.
+
+
+
+
+  Key
+  Data Type
+  Description
+
+
+
+
+  database
+  STRING NULL
+  The originating database. Corresponds to the database 
field in the
+  Canal record if available.
+
+
+  table
+  STRING NULL
+  The originating database table. Corresponds to the 
table field in the
+  Canal record if available.
+
+
+  sql-type
+  MAP<STRING, INT> NULL
+  Map of various sql types. Corresponds to the sqlType 
field in the 
+  Canal record if available.
+
+
+  pk-names
+  ARRAY<STRING> NULL
+  Array of primary key names. Corresponds to the pkNames 
field in the 
+  Canal record if available.
+
+
+  ingestion-timestamp
+  TIMESTAMP(3) WITH LOCAL TIME ZONE NULL
+  The timestamp at which the connector processed the event. 
Corresponds to the ts
+  field in the Canal record.
+
+
+
+
+The following example shows how to access Canal metadata fields in Kafka:
+
+
+
+{% highlight sql %}
+CREATE TABLE KafkaTable (
+  `origin_database` STRING METADATA FROM 'value.database' VIRTUAL,
+  `origin_table` STRING METADATA FROM 'value.table' VIRTUAL,
+  `user_id` BIGINT,
+  `item_id` BIGINT,
+  `behavior` STRING

Review comment:
   I think we don't need to add backquotes around the column names, because 
they are not keywords. 

##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaChangelogTableITCase.java
##
@@ -198,4 +198,130 @@ public void testKafkaDebeziumChangelogSource() throws 
Exception {
tableResult.getJobClient().get().cancel().get(); // stop the job
deleteTestTopic(topic);
}
+
+   @Test
+   public void testKafkaCanalChangelogSource() throws Exception {
+   final String topic = "changelog_canal";
+   createTestTopic(topic, 1, 1);
+
+   // enables MiniBatch processing to verify MiniBatch + FLIP-95, 
see FLINK-18769
+   Configuration tableConf = tEnv.getConfig().getConfiguration();
+   tableConf.setString("table.exec.mini-batch.enabled", "true");
+   tableConf.setString("table.exec.mini-batch.allow-latency", 
"1s");
+   tableConf.setString("table.exec.mini-batch.size", "5000");
+   tableConf.setString("table.optimizer.agg-phase-strategy", 
"TWO_PHASE");
+
+   // -- Write the Canal json into Kafka 
---
+   List lines = 

[jira] [Comment Edited] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255369#comment-17255369
 ] 

xx chai edited comment on FLINK-20780 at 12/28/20, 3:59 AM:


Flink version is 1.12

CDH version is 6.3.2

hadoop version is 3.0.0+cdh6.3.2

flink lib :

flink-connector-hive_2.11-1.12.0.jar
flink-connector-jdbc_2.11-1.12.0.jar
flink-csv-1.12.0.jar
flink-dist_2.11-1.12.0.jar
flink-json-1.12.0.jar
flink-shaded-hadoop-3-uber-3.1.1.7.1.1.0-565-9.0.jar
flink-shaded-zookeeper-3.4.14.jar
flink-sql-connector-hive-2.2.0_2.11-1.12.0.jar
flink-sql-connector-kafka_2.11-1.12.0.jar
flink-table_2.11-1.12.0.jar
flink-table-api-java-bridge_2.11-1.12.0.jar
flink-table-blink_2.11-1.12.0.jar
hive-exec-2.1.1-cdh6.3.2.jar
hive-exec-2.2.0.jar.bak
log4j-1.2-api-2.12.1.jar
log4j-api-2.12.1.jar
log4j-core-2.12.1.jar
log4j-slf4j-impl-2.12.1.jar
mysql-connector-java-8.0.22.jar


was (Author: chaixiaoxue):
Flink version is 1.12

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> Querying Hive from flink-sql-client fails.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255369#comment-17255369
 ] 

xx chai commented on FLINK-20780:
-

Flink version is 1.12

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> Querying Hive from flink-sql-client fails.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255368#comment-17255368
 ] 

Jark Wu commented on FLINK-20780:
-

Could you provide the versions you are using, including Flink, Hive, and Hadoop? 
Besides, could you provide the list of jars under {{flink/lib}}?

cc [~lirui]

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> Querying Hive from flink-sql-client fails.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20780:

Component/s: Connectors / Hive

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> Querying Hive from flink-sql-client fails.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xiaoHoly commented on pull request #14501: [FLINK-20778] the comment for the type of kafka consumer is wrong at KafkaPartitionSplit

2020-12-27 Thread GitBox


xiaoHoly commented on pull request #14501:
URL: https://github.com/apache/flink/pull/14501#issuecomment-751565702


   Thanks for the contribution, LGTM.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14501: [FLINK-20778] the comment for the type of kafka consumer is wrong at KafkaPartitionSplit

2020-12-27 Thread GitBox


flinkbot commented on pull request #14501:
URL: https://github.com/apache/flink/pull/14501#issuecomment-751565111


   
   ## CI report:
   
   * b6663b070dc3a6e687d6b68d08ba12366a4ef80e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14451:
URL: https://github.com/apache/flink/pull/14451#issuecomment-749321360


   
   ## CI report:
   
   * 22b39c8ed7af195647f9eb507e4147346380ccb8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11313)
 
   * b3976f5cf20bd5a78289cf388c830a1739992bd0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11365)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2020-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255367#comment-17255367
 ] 

Jark Wu commented on FLINK-14055:
-

Yes [~hpeter], a FLIP design doc would be helpful. 

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: sprint
> Fix For: 1.13.0
>
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xx chai updated FLINK-20780:

Attachment: 1609127183(1).png

> Flink-sql-client query hive 
> 
>
> Key: FLINK-20780
> URL: https://issues.apache.org/jira/browse/FLINK-20780
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
> Environment: standalone
>Reporter: xx chai
>Priority: Major
> Attachments: 1609127183(1).png
>
>
> Querying Hive from flink-sql-client fails.
> I've already configured hadoop classpath
> and my hadoop is cdh
> error :
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.mapred.JobConf at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20766) Separate the implementation of sort nodes

2020-12-27 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-20766:

Description: 
includes:
StreamExecSort
StreamExecSortLimit
StreamExecTemporalSort
BatchExecSort
BatchExecSortLimit
Summary: Separate the implementation of sort nodes  (was: Separate the 
implementation of stream sort nodes)

> Separate the implementation of sort nodes
> -
>
> Key: FLINK-20766
> URL: https://issues.apache.org/jira/browse/FLINK-20766
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
> Fix For: 1.13.0
>
>
> includes:
> StreamExecSort
> StreamExecSortLimit
> StreamExecTemporalSort
> BatchExecSort
> BatchExecSortLimit



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18726) Support INSERT INTO specific columns

2020-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255366#comment-17255366
 ] 

Jark Wu commented on FLINK-18726:
-

Thanks [~atri], could you prepare a design doc on Google Docs to describe how 
you want to support this and which parts would be touched, so that we can give 
some feedback?

> Support INSERT INTO specific columns
> 
>
> Key: FLINK-18726
> URL: https://issues.apache.org/jira/browse/FLINK-18726
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Caizhi Weng
>Assignee: Atri Sharma
>Priority: Major
>  Labels: sprint
> Fix For: 1.13.0
>
>
> Currently Flink only supports insert into a table without specifying columns, 
> but most database systems support insert into specific columns by
> {code:sql}
> INSERT INTO table_name(column1, column2, ...) ...
> {code}
> The columns not specified will be filled with default values or {{NULL}} if 
> no default value is given when creating the table.
> As Flink currently does not support default values when creating tables, we 
> can fill the unspecified columns with {{NULL}} and throw exceptions if there 
> are columns with {{NOT NULL}} constraints.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20780) Flink-sql-client query hive

2020-12-27 Thread xx chai (Jira)
xx chai created FLINK-20780:
---

 Summary: Flink-sql-client query hive 
 Key: FLINK-20780
 URL: https://issues.apache.org/jira/browse/FLINK-20780
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
 Environment: standalone
Reporter: xx chai


Querying Hive from flink-sql-client fails.

I've already configured hadoop classpath

and my hadoop is cdh

error :

Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.mapred.JobConfCaused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.mapred.JobConf at 
java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 56 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20778) [FLINK-20778] the comment for the type of kafka consumer is wrong at KafkaPartitionSplit

2020-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-20778:
---

Assignee: Jeremy Mei

> [FLINK-20778] the comment for the type of kafka consumer  is wrong at 
> KafkaPartitionSplit
> -
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Assignee: Jeremy Mei
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
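For context, a hypothetical helper (not the actual split reader code; the import path is assumed from the 1.12 Kafka source package layout) showing how the two sentinels are typically acted on once the comments read correctly:

{code:java}
import java.util.Collections;

import org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

final class StartingOffsetSeeker {
    static void seek(KafkaConsumer<byte[], byte[]> consumer, TopicPartition tp, long startingOffset) {
        if (startingOffset == KafkaPartitionSplit.LATEST_OFFSET) {           // -1
            consumer.seekToEnd(Collections.singleton(tp));                   // start from the latest record
        } else if (startingOffset == KafkaPartitionSplit.EARLIEST_OFFSET) {  // -2
            consumer.seekToBeginning(Collections.singleton(tp));             // start from the earliest record
        } else {
            consumer.seek(tp, startingOffset);                               // explicit starting offset
        }
    }
}
{code}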



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14501: [FLINK-20778] the comment for the type of kafka consumer is wrong at KafkaPartitionSplit

2020-12-27 Thread GitBox


flinkbot commented on pull request #14501:
URL: https://github.com/apache/flink/pull/14501#issuecomment-751562731


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b6663b070dc3a6e687d6b68d08ba12366a4ef80e (Mon Dec 28 
03:35:39 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20778).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20778) [FLINK-20778] the comment for the type of kafka consumer is wrong at KafkaPartitionSplit

2020-12-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20778:
---
Labels: pull-request-available  (was: )

> [FLINK-20778] the comment for the type of kafka consumer  is wrong at 
> KafkaPartitionSplit
> -
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14451:
URL: https://github.com/apache/flink/pull/14451#issuecomment-749321360


   
   ## CI report:
   
   * 22b39c8ed7af195647f9eb507e4147346380ccb8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11313)
 
   * b3976f5cf20bd5a78289cf388c830a1739992bd0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] meijies opened a new pull request #14501: [FLINK-20778] the comment for the type of kafka consumer is wrong at KafkaPartitionSplit

2020-12-27 Thread GitBox


meijies opened a new pull request #14501:
URL: https://github.com/apache/flink/pull/14501


   ## What is the purpose of the change
   Fix the comment for the type of kafka consumer at ```KafkaPartitionSplit```.
   
   ## Brief change log
   Swap the comments so each matches the related consumption type in 
```KafkaPartitionSplit```.
   
   ## Verifying this change
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
 - Does this pull request introduce a new feature? (no)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20778) [FLINK-20778] the comment for the type of kafka consumer is wrong at KafkaPartitionSplit

2020-12-27 Thread Jeremy Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Mei updated FLINK-20778:
---
Summary: [FLINK-20778] the comment for the type of kafka consumer  is wrong 
at KafkaPartitionSplit  (was: the comment for kafka split offset type is wrong)

> [FLINK-20778] the comment for the type of kafka consumer  is wrong at 
> KafkaPartitionSplit
> -
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20778) the comment for kafka split offset type is wrong

2020-12-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255360#comment-17255360
 ] 

jiawen xiao commented on FLINK-20778:
-

[~meijies], please wait, Jark can assign it to you. I think you can fix it directly.

> the comment for kafka split offset type is wrong
> 
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20778) the comment for kafka split offset type is wrong

2020-12-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255358#comment-17255358
 ] 

jiawen xiao commented on FLINK-20778:
-

Hi [~jark], cc.

> the comment for kafka split offset type is wrong
> 
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16201) Support JSON_VALUE for blink planner

2020-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16201:
---

Assignee: Forward Xu

> Support JSON_VALUE for blink planner
> 
>
> Key: FLINK-16201
> URL: https://issues.apache.org/jira/browse/FLINK-16201
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
>  Labels: pull-request-available, sprint
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20778) the comment for kafka split offset type is wrong

2020-12-27 Thread Jeremy Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255355#comment-17255355
 ] 

Jeremy Mei commented on FLINK-20778:


[~873925...@qq.com] OK, please assign it to me.

> the comment for kafka split offset type is wrong
> 
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20254) HiveTableSourceITCase.testStreamPartitionReadByCreateTime times out

2020-12-27 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255354#comment-17255354
 ] 

Huang Xingbo commented on FLINK-20254:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11359=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime times out
> ---
>
> Key: FLINK-20254
> URL: https://issues.apache.org/jira/browse/FLINK-20254
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9808=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-11-19T10:34:23.5591765Z [ERROR] Tests run: 18, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 192.243 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> 2020-11-19T10:34:23.5593193Z [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.075 s  <<< ERROR!
> 2020-11-19T10:34:23.5593929Z org.junit.runners.model.TestTimedOutException: 
> test timed out after 12 milliseconds
> 2020-11-19T10:34:23.5594321Z  at java.lang.Thread.sleep(Native Method)
> 2020-11-19T10:34:23.5594777Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:231)
> 2020-11-19T10:34:23.5595378Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:119)
> 2020-11-19T10:34:23.5596001Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:103)
> 2020-11-19T10:34:23.5596610Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
> 2020-11-19T10:34:23.5597218Z  at 
> org.apache.flink.table.planner.sinks.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:115)
> 2020-11-19T10:34:23.5597811Z  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:355)
> 2020-11-19T10:34:23.5598555Z  at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:653)
> 2020-11-19T10:34:23.5599407Z  at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:594)
> 2020-11-19T10:34:23.5599982Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-19T10:34:23.5600393Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-19T10:34:23.5600865Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-19T10:34:23.5601300Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-19T10:34:23.5601713Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-19T10:34:23.5602211Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-19T10:34:23.5602688Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-19T10:34:23.5603181Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-19T10:34:23.5603753Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-11-19T10:34:23.5604308Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-11-19T10:34:23.5604780Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-11-19T10:34:23.5605114Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-19T10:34:23.5605299Z 
> 2020-11-19T10:34:24.4180149Z [INFO] Running 
> org.apache.flink.connectors.hive.TableEnvHiveConnectorITCase
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-20776) Error message is incomplete in JSON format

2020-12-27 Thread xiaozilong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaozilong closed FLINK-20776.
--
Resolution: Not A Problem

> Error message is incomplete in JSON format
> --
>
> Key: FLINK-20776
> URL: https://issues.apache.org/jira/browse/FLINK-20776
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.0
>Reporter: xiaozilong
>Priority: Major
> Attachments: image-2020-12-27-20-25-10-884.png
>
>
> Maybe we should throw `Type information can't be null`? 
> !image-2020-12-27-20-25-10-884.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20776) Error message is incomplete in JSON format

2020-12-27 Thread xiaozilong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255353#comment-17255353
 ] 

xiaozilong commented on FLINK-20776:


ok

> Error message is incomplete in JSON format
> --
>
> Key: FLINK-20776
> URL: https://issues.apache.org/jira/browse/FLINK-20776
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.0
>Reporter: xiaozilong
>Priority: Major
> Attachments: image-2020-12-27-20-25-10-884.png
>
>
> Maybe we should throw `Type information can't be null`? 
> !image-2020-12-27-20-25-10-884.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20756) PythonCalcSplitConditionRule is not working as expected

2020-12-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-20756:
---

Assignee: Huang Xingbo

> PythonCalcSplitConditionRule is not working as expected
> ---
>
> Key: FLINK-20756
> URL: https://issues.apache.org/jira/browse/FLINK-20756
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Wei Zhong
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
>
> Currently if users write SQL like this:
> `SELECT pyFunc5(f0, f1) FROM (SELECT e.f0, e.f1 FROM (SELECT pyFunc5(a) as e 
> FROM MyTable) where e.f0 is NULL)`
> It will be optimized to:
> `FlinkLogicalCalc(select=[pyFunc5(pyFunc5(a)) AS f0])
> +- FlinkLogicalCalc(select=[a], where=[IS NULL(pyFunc5(a).f0)])
>  +- FlinkLogicalLegacyTableSourceScan(table=[[default_catalog, 
> default_database, MyTable, source: [TestTableSource(a, b, c, d)]]], 
> fields=[a, b, c, d])`
> The optimized plan is not runnable; we need to fix this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255350#comment-17255350
 ] 

jiawen xiao commented on FLINK-20777:
-

Hi [~renqs], thanks for raising this question. In which version did you find this?

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set as -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass in this property explicitly. 
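Until the default is aligned with the documentation, a minimal workaround sketch is to set the property explicitly so the behaviour does not depend on the builder's fallback; wiring the {{Properties}} into the Kafka source builder (e.g. via a {{setProperties}}-style method) is an assumption to verify against the Flink version in use:

{code:java}
import java.util.Properties;

public class PartitionDiscoveryWorkaround {
    public static Properties explicitDiscoveryInterval() {
        Properties props = new Properties();
        // 30 seconds, matching the documented default in KafkaSourceOptions.
        props.setProperty("partition.discovery.interval.ms", "30000");
        return props;
    }
}
{code}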



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14492: [FLINK-20756][python] Add PythonCalcSplitConditionRexFieldRule

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14492:
URL: https://github.com/apache/flink/pull/14492#issuecomment-751194123


   
   ## CI report:
   
   * de5551ef03f1bfeb9d6a85aaf877df944059895a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11350)
 
   * 5e1ebe24b8fcdacdc2c96a205f84208c25c4f653 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11363)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20778) the comment for kafka split offset type is wrong

2020-12-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255349#comment-17255349
 ] 

jiawen xiao commented on FLINK-20778:
-

[~meijies], you are right. Could you create a PR to fix it?

> the comment for kafka split offset type is wrong
> 
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20779) Add documentation for row-based operation in Python Table API

2020-12-27 Thread Dian Fu (Jira)
Dian Fu created FLINK-20779:
---

 Summary: Add documentation for row-based operation in Python Table 
API
 Key: FLINK-20779
 URL: https://issues.apache.org/jira/browse/FLINK-20779
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python, Documentation
Reporter: Dian Fu
Assignee: Huang Xingbo
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-20620) Port BatchExecPythonCalc and StreamExecPythonCalc to Java

2020-12-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-20620.
---
  Assignee: Huang Xingbo
Resolution: Fixed

Merged to master via 2848d4d44c0f09a9e49b70609fd5f9be30af60ea

> Port BatchExecPythonCalc and StreamExecPythonCalc to Java
> -
>
> Key: FLINK-20620
> URL: https://issues.apache.org/jira/browse/FLINK-20620
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Table SQL / Planner
>Reporter: godfrey he
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> https://issues.apache.org/jira/browse/FLINK-20610 will separate the 
> implementation of BatchExecCalc and StreamExecCalc, and port BatchExecCalc 
> and StreamExecCalc to Java. 
> {{StreamExecPythonCalc}} extends from {{CommonExecPythonCalc}} and 
> {{CommonExecPythonCalc}} extends from {{CommonPythonBase}}, they are all 
> Scala classes, and involves a lot of code. Java class can't extend Scala 
> interface with default implementation. So I create an issue separately to 
> port them to Java.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu closed pull request #14496: [FLINK-20620][python] Port BatchExecPythonCalc and StreamExecPythonCalc to Java

2020-12-27 Thread GitBox


dianfu closed pull request #14496:
URL: https://github.com/apache/flink/pull/14496


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14492: [FLINK-20756][python] Add PythonCalcSplitConditionRexFieldRule

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14492:
URL: https://github.com/apache/flink/pull/14492#issuecomment-751194123


   
   ## CI report:
   
   * de5551ef03f1bfeb9d6a85aaf877df944059895a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11350)
 
   * 5e1ebe24b8fcdacdc2c96a205f84208c25c4f653 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20712) Support join_lateral/left_outer_join_lateral to accept table function directly in Python Table API

2020-12-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-20712.
---
Resolution: Fixed

Merged to master via 7ea384b950ed53924201604074e0748a1416cdea

> Support join_lateral/left_outer_join_lateral to accept table function 
> directly in Python Table API
> --
>
> Key: FLINK-20712
> URL: https://issues.apache.org/jira/browse/FLINK-20712
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu closed pull request #14500: [FLINK-20712][python] Support join_lateral/left_outer_join_lateral to accept table function directly in Python Table API

2020-12-27 Thread GitBox


dianfu closed pull request #14500:
URL: https://github.com/apache/flink/pull/14500


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20712) Support join_lateral/left_outer_join_lateral to accept table function directly in Python Table API

2020-12-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20712:

Summary: Support join_lateral/left_outer_join_lateral to accept table 
function directly in Python Table API  (was: Add row-based support in 
join_lateral/left_outer_join_lateral in Python Table API)

> Support join_lateral/left_outer_join_lateral to accept table function 
> directly in Python Table API
> --
>
> Key: FLINK-20712
> URL: https://issues.apache.org/jira/browse/FLINK-20712
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14362: [FLINK-20540][jdbc] Fix the defaultUrl for AbstractJdbcCatalog in JDBC connector

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14362:
URL: https://github.com/apache/flink/pull/14362#issuecomment-742906429


   
   ## CI report:
   
   * fee80e39f5a51f96e98b11dc23b056c18681abaa Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11353)
 
   * acdafc09aa2916dd50f6e2139a841ed1bfd793db Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11361)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20775) Missed Docker Images Flink 1.12

2020-12-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-20775.
---
Resolution: Duplicate

> Missed Docker Images Flink 1.12
> ---
>
> Key: FLINK-20775
> URL: https://issues.apache.org/jira/browse/FLINK-20775
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.12.0
>Reporter: Nazar Volynets
>Priority: Blocker
>
> Apache Flink 1.12 has been release more than 2 weeks ago:
> [https://flink.apache.org/news/2020/12/10/release-1.12.0.html]
> but corresponding images have been not exposed into Flink's *official* Docker 
> Hub repo:
>  
> [https://hub.docker.com/_/flink?tab=tags=1=last_updated=1.12]
> Consequently, missed image(s) *blocks* to use Apache Flink 1.12 to spin up 
> Flink in Standalone Per-Job mode within Kubernetes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14362: [FLINK-20540][jdbc] Fix the defaultUrl for AbstractJdbcCatalog in JDBC connector

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14362:
URL: https://github.com/apache/flink/pull/14362#issuecomment-742906429


   
   ## CI report:
   
   * fee80e39f5a51f96e98b11dc23b056c18681abaa Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11353)
 
   * acdafc09aa2916dd50f6e2139a841ed1bfd793db UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format

2020-12-27 Thread GitBox


flinkbot edited a comment on pull request #14464:
URL: https://github.com/apache/flink/pull/14464#issuecomment-749543461


   
   ## CI report:
   
   * 4ded1f32d30bf98ad7ac4753a2831acf2e78a4f5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11356)
 
   * 9abb79a2bb6e487d3f1deb0cfc4ecfff74a7d780 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11360)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



