[jira] [Updated] (FLINK-23685) Translate "Java Lambda Expressions" page into Chinese

2021-08-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23685:
---
Labels: pull-request-available  (was: )

> Translate "Java Lambda Expressions" page into Chinese
> -
>
> Key: FLINK-23685
> URL: https://issues.apache.org/jira/browse/FLINK-23685
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: wuguihu
>Priority: Minor
>  Labels: pull-request-available
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/java_lambdas.html].
> This page might be reworked. (Related to 
> https://issues.apache.org/jira/browse/FLINK-13505)
> The markdown file is located at 
> "docs/content.zh/docs/dev/datastream/java_lambdas.md".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hapihu opened a new pull request #16753: [FLINK-23685][docs-zh] Translate "Java Lambda Expressions" page into …

2021-08-08 Thread GitBox


hapihu opened a new pull request #16753:
URL: https://github.com/apache/flink/pull/16753


   Translate "Java Lambda Expressions" page into Chinese


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Airblader commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-08 Thread GitBox


Airblader commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-894963011


   > I think the patch doesn't need FLINK-22878, because the options of 
catalogTable are a Map, and we just need to convert the map to hiveConfig.
   
   But these "passthrough" options are options for the Hive connector, right? 
I'm not really familiar with the Hive connector itself. We eventually want to 
generate all connector documentation from the `ConfigOption`s since they should 
be the source of truth. If we introduce things that bypass the configuration 
mechanism, they wouldn't be covered by the documentation.
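The "generate documentation from the options" idea can be sketched with a minimal, self-contained model. These are simplified stand-in classes for illustration only, not Flink's real `ConfigOption` API:

```java
import java.util.List;

// Simplified model of "options as the single source of truth for docs":
// every documented option comes from the declared list, so anything that
// bypasses the option mechanism never shows up in the generated docs.
public class ConfigOptionDocs {
    public static final class Option {
        final String key;
        final String type;
        final String description;

        public Option(String key, String type, String description) {
            this.key = key;
            this.type = type;
            this.description = description;
        }
    }

    // Render one markdown table row per declared option, as a doc
    // generator might.
    public static String renderMarkdown(List<Option> options) {
        StringBuilder sb =
                new StringBuilder("| Key | Type | Description |\n|---|---|---|\n");
        for (Option o : options) {
            sb.append("| ").append(o.key).append(" | ").append(o.type)
                    .append(" | ").append(o.description).append(" |\n");
        }
        return sb.toString();
    }
}
```

Passthrough options that never appear in such a declared list would be invisible to this generator, which is the concern raised above.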






[GitHub] [flink] Thesharing commented on a change in pull request #16687: [FLINK-22767][coordination] Optimize the initialization of LocalInputPreferredSlotSharingStrategy

2021-08-08 Thread GitBox


Thesharing commented on a change in pull request #16687:
URL: https://github.com/apache/flink/pull/16687#discussion_r684919002



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/LocalInputPreferredSlotSharingStrategy.java
##
@@ -214,87 +245,87 @@ private ExecutionSlotSharingGroup tryFindAvailableProducerExecutionSlotSharingGr
 
         final ExecutionVertexID executionVertexId = executionVertex.getId();
 
-        for (SchedulingResultPartition partition : executionVertex.getConsumedResults()) {
-            final ExecutionVertexID producerVertexId = partition.getProducer().getId();
-            if (!inSameLogicalSlotSharingGroup(producerVertexId, executionVertexId)) {
-                continue;
-            }
-
-            final ExecutionSlotSharingGroup producerGroup =
-                    executionSlotSharingGroupMap.get(producerVertexId);
-
-            checkState(producerGroup != null);
-            if (isGroupAvailableForVertex(producerGroup, executionVertexId)) {
-                return producerGroup;
+        for (ConsumedPartitionGroup consumedPartitionGroup :
+                executionVertex.getConsumedPartitionGroups()) {
+
+            Iterator<ExecutionSlotSharingGroup> availableGroupIterator =
+                    getAvailableGroupsForConsumedPartitionGroup(
+                                    executionVertexId.getJobVertexId(), consumedPartitionGroup)
+                            .iterator();
+            if (availableGroupIterator.hasNext()) {
+                ExecutionSlotSharingGroup nextAvailableGroup = availableGroupIterator.next();
+                availableGroupIterator.remove();
+                return nextAvailableGroup;
             }
         }
 
         return null;
     }
 
     private boolean inSameLogicalSlotSharingGroup(
-            final ExecutionVertexID executionVertexId1,
-            final ExecutionVertexID executionVertexId2) {
+            final JobVertexID jobVertexId1, final JobVertexID jobVertexId2) {
 
         return Objects.equals(
-                getSlotSharingGroup(executionVertexId1).getSlotSharingGroupId(),
-                getSlotSharingGroup(executionVertexId2).getSlotSharingGroupId());
+                checkNotNull(slotSharingGroupMap.get(jobVertexId1)).getSlotSharingGroupId(),
+                checkNotNull(slotSharingGroupMap.get(jobVertexId2)).getSlotSharingGroupId());
     }
 
     private SlotSharingGroup getSlotSharingGroup(final ExecutionVertexID executionVertexId) {
         // slot sharing group of a vertex would never be null in production
         return checkNotNull(slotSharingGroupMap.get(executionVertexId.getJobVertexId()));
     }
 
-    private boolean isGroupAvailableForVertex(
-            final ExecutionSlotSharingGroup executionSlotSharingGroup,
-            final ExecutionVertexID executionVertexId) {
-
-        final Set<JobVertexID> assignedVertices =
-                assignedJobVerticesForGroups.get(executionSlotSharingGroup);
-        return assignedVertices == null
-                || !assignedVertices.contains(executionVertexId.getJobVertexId());
-    }
-
     private void addVertexToExecutionSlotSharingGroup(
             final SchedulingExecutionVertex vertex, final ExecutionSlotSharingGroup group) {
 
-        group.addVertex(vertex.getId());
-        executionSlotSharingGroupMap.put(vertex.getId(), group);
-        assignedJobVerticesForGroups
-                .computeIfAbsent(group, k -> new HashSet<>())
-                .add(vertex.getId().getJobVertexId());
+        ExecutionVertexID executionVertexId = vertex.getId();
+        group.addVertex(executionVertexId);
+        executionSlotSharingGroupMap.put(executionVertexId, group);
+
+        getAvailableGroupsForJobVertex(executionVertexId.getJobVertexId()).remove(group);
     }
 
     private void findAvailableOrCreateNewExecutionSlotSharingGroupFor(
            final List<SchedulingExecutionVertex> executionVertices) {
 
        for (SchedulingExecutionVertex executionVertex : executionVertices) {
-            final SlotSharingGroup slotSharingGroup =
-                    getSlotSharingGroup(executionVertex.getId());
-            final List<ExecutionSlotSharingGroup> groups =
-                    executionSlotSharingGroups.computeIfAbsent(
-                            slotSharingGroup.getSlotSharingGroupId(), k -> new ArrayList<>());
 
             ExecutionSlotSharingGroup group = null;
-            for (ExecutionSlotSharingGroup executionSlotSharingGroup : groups) {
-                if (isGroupAvailableForVertex(
-                        executionSlotSharingGroup, executionVertex.getId())) {
-                    group = executionSlotSharingGroup;
-                    break;
-                }

[GitHub] [flink] Airblader commented on pull request #16603: [FLINK-23111][runtime-web] Bump angular's and ng-zorro's version to 12

2021-08-08 Thread GitBox


Airblader commented on pull request #16603:
URL: https://github.com/apache/flink/pull/16603#issuecomment-894960676


   @yangjunhan Yes, the NOTICE file @twalthr mentioned needs to be updated as 
well.






[GitHub] [flink] lirui-apache commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-08 Thread GitBox


lirui-apache commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-894959489


   @cuibo01 I guess we have two questions here:
   1. How to get the user name.
   2. Whether user name should be a table-level or session-level setting.
   
   For the 1st question, I noticed Hive supports a pluggable 
`HiveAuthenticationProvider` for this, and by default it uses the current UGI. 
If we really want to follow Hive's behavior, we need to read the 
`HiveAuthenticationProvider` configuration from `HiveConf` and create 
`HiveAuthenticationProvider` instances via reflection. But I think we should do 
that only if there's a strong need for different `HiveAuthenticationProvider` 
implementations. For now, we can just use the UGI.
   
   For the 2nd question, I think the setting should be bound to a `HiveCatalog` 
instance and thus be session-level, which is also in line with Hive itself. 
This avoids the scenario where a single `HiveCatalog` can create tables with 
different user identities.
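The reflection mechanism described above — reading a provider class name from configuration and instantiating it — can be sketched as follows. `AuthenticationProvider` and `StaticProvider` are illustrative stand-ins, not Hive's actual types:

```java
// Sketch of instantiating a pluggable provider via reflection from a
// configured class name. The interface and implementation here are
// hypothetical stand-ins for Hive's HiveAuthenticationProvider plumbing.
public class ProviderLoader {
    public interface AuthenticationProvider {
        String getUserName();
    }

    // Example implementation that a configuration value might point at.
    public static class StaticProvider implements AuthenticationProvider {
        @Override
        public String getUserName() {
            return "flink";
        }
    }

    // Load the class named in the configuration and create an instance
    // through its no-argument constructor.
    public static AuthenticationProvider load(String className) {
        try {
            Class<?> clazz = Class.forName(className);
            return (AuthenticationProvider) clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot instantiate provider: " + className, e);
        }
    }
}
```

Defaulting to the current UGI, as suggested, avoids this reflection machinery entirely until multiple provider implementations are actually needed.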






[GitHub] [flink] flinkbot edited a comment on pull request #16732: [FLINK-21222][python] Support loopback mode to allow Python UDF worker and client reuse the same Python VM

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16732:
URL: https://github.com/apache/flink/pull/16732#issuecomment-893984711


   
   ## CI report:
   
   * 478cd2f5873830700bec057e02d30dcaf3fd35f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21673)
 
   * 98b4a1bff13659aca4807333671acef9f0e1949b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21752)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Created] (FLINK-23685) Translate "Java Lambda Expressions" page into Chinese

2021-08-08 Thread wuguihu (Jira)
wuguihu created FLINK-23685:
---

 Summary: Translate "Java Lambda Expressions" page into Chinese
 Key: FLINK-23685
 URL: https://issues.apache.org/jira/browse/FLINK-23685
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: wuguihu


The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/dev/java_lambdas.html].

This page might be reworked. (Related to 
https://issues.apache.org/jira/browse/FLINK-13505)

The markdown file is located at 
"docs/content.zh/docs/dev/datastream/java_lambdas.md".





[GitHub] [flink] flinkbot edited a comment on pull request #16732: [FLINK-21222][python] Support loopback mode to allow Python UDF worker and client reuse the same Python VM

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16732:
URL: https://github.com/apache/flink/pull/16732#issuecomment-893984711


   
   ## CI report:
   
   * 478cd2f5873830700bec057e02d30dcaf3fd35f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21673)
 
   * 98b4a1bff13659aca4807333671acef9f0e1949b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16699: [FLINK-23420][table-runtime] LinkedListSerializer now checks for null elements in the list

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16699:
URL: https://github.com/apache/flink/pull/16699#issuecomment-892415455


   
   ## CI report:
   
   * fb666804b2cf16e3aefac6dc51026384749f589f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21610)
 
   * 7343f8b1adde98e86eba0f3bcbc647df92f82223 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21751)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16599: [FLINK-23479][table-planner] Fix the unstable test cases about json plan

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16599:
URL: https://github.com/apache/flink/pull/16599#issuecomment-886750468


   
   ## CI report:
   
   * b3e7026169e39a12c7fe8629a358cd3bf66a7153 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21700)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21565)
 
   * 3f2840a8f2462e77ad9ef67947852559c14fc44e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21750)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-23048) GroupWindowITCase.testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane fails due to akka timeout

2021-08-08 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-23048:

Component/s: (was: Table SQL / Runtime)
 Runtime / Coordination

> GroupWindowITCase.testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane
>  fails due to akka timeout
> --
>
> Key: FLINK-23048
> URL: https://issues.apache.org/jira/browse/FLINK-23048
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.12.4
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.6
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19176=logs=56781494-ebb0-5eae-f732-b9c397ec6ede=6568c985-5fcc-5b89-1ebd-0385b8088b14=7957
> {code}
> [ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 48.296 s <<< FAILURE! - in 
> org.apache.flink.table.runtime.stream.table.GroupWindowITCase
> [ERROR] 
> testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane(org.apache.flink.table.runtime.stream.table.GroupWindowITCase)
>   Time elapsed: 40.358 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1061)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: 

[GitHub] [flink] flinkbot edited a comment on pull request #16699: [FLINK-23420][table-runtime] LinkedListSerializer now checks for null elements in the list

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16699:
URL: https://github.com/apache/flink/pull/16699#issuecomment-892415455


   
   ## CI report:
   
   * fb666804b2cf16e3aefac6dc51026384749f589f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21610)
 
   * 7343f8b1adde98e86eba0f3bcbc647df92f82223 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16669: [FLINK-23544][table-planner]Window TVF Supports session window

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16669:
URL: https://github.com/apache/flink/pull/16669#issuecomment-890847741


   
   ## CI report:
   
   * 889dfbb1b28097728eea054580fa4e0e6bb61c22 UNKNOWN
   * 26a9f1ad70ba1a7f1231bdc8cd8b62fef6510472 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21707)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21689)
 
   * 666aa0b83c06a3ce506a89a359287d10e4cd49b8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21749)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16599: [FLINK-23479][table-planner] Fix the unstable test cases about json plan

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16599:
URL: https://github.com/apache/flink/pull/16599#issuecomment-886750468


   
   ## CI report:
   
   * b3e7026169e39a12c7fe8629a358cd3bf66a7153 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21700)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21565)
 
   * 3f2840a8f2462e77ad9ef67947852559c14fc44e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-web] godfreyhe commented on pull request #459: Add release 1.11.4

2021-08-08 Thread GitBox


godfreyhe commented on pull request #459:
URL: https://github.com/apache/flink-web/pull/459#issuecomment-894933141


   @carp84  please take another look, thanks






[GitHub] [flink] cuibo01 commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-08 Thread GitBox


cuibo01 commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-894932918


   > Can you take a look at 
[FLINK-22878](https://issues.apache.org/jira/browse/FLINK-22878)? I think this 
can be implemented with map types now, which would properly use the config 
option stack.
   
   hi @Airblader, I think the patch doesn't need FLINK-22878, because the 
options of catalogTable are a Map, and we just need to convert the map to 
hiveConfig.
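The suggested passthrough — copying the catalog table's string options directly into a Hive configuration — can be sketched with a generic copy helper. `java.util.Properties` stands in here for `HiveConf` (whose `set(String, String)` has the same shape); this is an illustration of the idea, not Flink's actual code:

```java
import java.util.Map;
import java.util.Properties;

// Sketch: pass a catalog table's option map straight through into a
// Hive-style key/value configuration. Properties is a stand-in for HiveConf.
public class OptionPassthrough {
    public static Properties toHiveConf(Map<String, String> tableOptions) {
        Properties conf = new Properties();
        for (Map.Entry<String, String> e : tableOptions.entrySet()) {
            // Every table option is forwarded verbatim; nothing is validated
            // or documented, which is exactly the concern raised in review.
            conf.setProperty(e.getKey(), e.getValue());
        }
        return conf;
    }
}
```

Note that such a blind copy is what the reviewer warns would bypass the `ConfigOption` mechanism and its generated documentation.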






[GitHub] [flink] flinkbot edited a comment on pull request #16669: [FLINK-23544][table-planner]Window TVF Supports session window

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16669:
URL: https://github.com/apache/flink/pull/16669#issuecomment-890847741


   
   ## CI report:
   
   * 889dfbb1b28097728eea054580fa4e0e6bb61c22 UNKNOWN
   * 26a9f1ad70ba1a7f1231bdc8cd8b62fef6510472 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21707)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21689)
 
   * 666aa0b83c06a3ce506a89a359287d10e4cd49b8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Assigned] (FLINK-23512) Check for illegal modifications of JobGraph with partially finished operators

2021-08-08 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao reassigned FLINK-23512:
---

Assignee: Yun Gao

> Check for illegal modifications of JobGraph with partially finished operators
> -
>
> Key: FLINK-23512
> URL: https://issues.apache.org/jira/browse/FLINK-23512
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available
>
> Besides the fully finished operators, we also would like to disallow inserting 
> new operators before the partially finished operators:
>  # If keyed state is used and discarded after the tasks finish in the 
> first run, then if we receive new records targeting these keys, the result 
> would not be right.
>  # Similarly, for normal operator subtasks, if new records are emitted and 
> they rely on the discarded states, the result would also be incorrect. 
> Thus we would first disallow such cases. 





[GitHub] [flink] cuibo01 edited a comment on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-08 Thread GitBox


cuibo01 edited a comment on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-894931284










[GitHub] [flink] cuibo01 commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-08 Thread GitBox


cuibo01 commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-894931284


   @lirui-apache Thanks for your review.
   > Or in what situation would a user want to connect to HMS using one 
identity and create tables with another?
   
   In a secure cluster, this is not allowed.
   
   As I discussed with Hive experts, many of HMS's interfaces are now open, and 
HMS does not check the validity of the request, so it allows some attributes to 
be empty. But when we create a table through a Hive interface (like JDBC), the 
owner attribute must contain data.
   
   So I think that in a secure cluster the owner is the current UGI, and in a 
non-secure cluster we can set the owner.






[jira] [Created] (FLINK-23684) KafkaITCase.testAutoOffsetRetrievalAndCommitToKafka fails with NoSuchElementException

2021-08-08 Thread Xintong Song (Jira)
Xintong Song created FLINK-23684:


 Summary: KafkaITCase.testAutoOffsetRetrievalAndCommitToKafka fails 
with NoSuchElementException
 Key: FLINK-23684
 URL: https://issues.apache.org/jira/browse/FLINK-23684
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.13.2
Reporter: Xintong Song
 Fix For: 1.13.3


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21740=logs=c5612577-f1f7-5977-6ff6-7432788526f7=53f6305f-55e6-561c-8f1e-3a1dde2c77df=6572

{code}
Aug 08 22:24:34 [ERROR] Tests run: 23, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 184 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.kafka.KafkaITCase
Aug 08 22:24:34 [ERROR] 
testAutoOffsetRetrievalAndCommitToKafka(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
  Time elapsed: 30.621 s  <<< ERROR!
Aug 08 22:24:34 java.util.NoSuchElementException
Aug 08 22:24:34 at java.util.ArrayList$Itr.next(ArrayList.java:864)
Aug 08 22:24:34 at 
org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.getOnlyElement(Iterators.java:302)
Aug 08 22:24:34 at 
org.apache.flink.shaded.guava18.com.google.common.collect.Iterables.getOnlyElement(Iterables.java:289)
Aug 08 22:24:34 at 
org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runAutoOffsetRetrievalAndCommitToKafka(KafkaConsumerTestBase.java:374)
Aug 08 22:24:34 at 
org.apache.flink.streaming.connectors.kafka.KafkaITCase.testAutoOffsetRetrievalAndCommitToKafka(KafkaITCase.java:178)
Aug 08 22:24:34 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Aug 08 22:24:34 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Aug 08 22:24:34 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Aug 08 22:24:34 at java.lang.reflect.Method.invoke(Method.java:498)
Aug 08 22:24:34 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Aug 08 22:24:34 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Aug 08 22:24:34 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Aug 08 22:24:34 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Aug 08 22:24:34 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
Aug 08 22:24:34 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
Aug 08 22:24:34 at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
Aug 08 22:24:34 at java.lang.Thread.run(Thread.java:748)
{code}





[GitHub] [flink] gaoyunhaii commented on pull request #16655: [FLINK-23512][runtime][checkpoint] Check for illegal modifications of JobGraph with partially finished operators

2021-08-08 Thread GitBox


gaoyunhaii commented on pull request #16655:
URL: https://github.com/apache/flink/pull/16655#issuecomment-894930752


   Hi @StephanEwen, many thanks for the review!
   
   > You can only change the JobGraph operator chains from a checkpoint if it 
has no partially finished tasks. 
   
   I also agree that keeping a simple rule would be easier to implement and 
maintain, and the rule would work well if there is some difficulty in 
implementing the checks. The main blocker is that it seems we could not easily 
acquire the original job graph to do the comparison?
   
   > The code only works with the generated Operator ID, and ignores the 
user-defined operator IDs. I am not sure this is correct, I think we need to 
register finished state for both generated and user-defined IDs, because on 
that level, we don't know under which ID the operator will communicate its 
state. @gaoyunhaii have you looked into this?
   
   Regarding `OperatorIDPair`, the current names might be misleading: the 
`generatedOperatorID` is based on the uid, or on the natural order if the uid 
is not set (`StreamGraphHasherV2`), and the `userDefinedOperatorID` is based on 
the `uidHash` (`StreamGraphUserHashHasher`). The second one should only be used 
if users forgot to set a customized `uid` on the first run but want to restore 
from the state; in that case they can directly specify the `uidHash`, and it 
will be used as the operator id. Thus, in the current logic the 
`userDefinedOperatorID` is only checked on restore to query the corresponding 
state; when creating a new checkpoint, the `generatedOperatorID` is used 
directly, and when storing the finished status we also follow this policy~
   
   > I think for a good design, the PendingCheckpoint should not need to be 
aware of ExecutionJobVertex and iterate over the status or implement the logic 
to check for finished state.
   
   I also agree with refactoring `CheckpointPlan` and `PendingCheckpoint`, and 
many thanks for the suggestions! I'll try to update the PR accordingly.

   From my side, we can do one of the following things:
   
   >1.  Bump the metadata format version. This is relatively simple.
   > The tests are the most work here. We need to create Snapshots of V3 
metadata that we reload with the latest setup to 
   > ensure backwards compatibility.
   > We can use the trick here, but then we should not do this inline, but have 
two explicit static methods that we use, so that this is explicit:
   > - int encodeSubtaskIndex(int subtaskIndex, boolean isFinished)
   > - SubtaskAndStatus decodeSubtaskAndStatus(int value) (where 
SubtaskAndStatus is like an (int, boolean) tuple.)
   
   For the state metadata format, I still lean toward the second option: we 
first do not upgrade the version and extract the encoding into separate 
methods. We would still be able to consider upgrading the format if other 
requirements come up later~
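   The two static methods suggested in the review could look like the following 
minimal sketch. This is an illustrative assumption, not Flink's actual 
metadata serializer: it packs the "finished" flag into the sign bit (which 
works because subtask indices are non-negative), and splits the suggested 
`decodeSubtaskAndStatus` into two methods to avoid introducing a tuple type.

```java
// Hypothetical helpers for packing a subtask index and a "finished" flag
// into a single int. The sign-bit layout is an assumption; the real
// implementation may choose a different encoding.
public class SubtaskStatusCodec {

    // Non-negative values are unfinished subtask indices; a finished subtask
    // is encoded as -(index + 1) so that index 0 remains representable.
    static int encodeSubtaskIndex(int subtaskIndex, boolean isFinished) {
        return isFinished ? -(subtaskIndex + 1) : subtaskIndex;
    }

    // Recover the subtask index from the encoded value.
    static int decodeSubtaskIndex(int value) {
        return value < 0 ? -value - 1 : value;
    }

    // Recover the "finished" flag from the encoded value.
    static boolean decodeIsFinished(int value) {
        return value < 0;
    }

    public static void main(String[] args) {
        int encoded = encodeSubtaskIndex(7, true);
        System.out.println(decodeSubtaskIndex(encoded) + " " + decodeIsFinished(encoded));
    }
}
```

   Making the encoding explicit in named methods keeps the metadata format 
readable and leaves room to bump the format version later if needed.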
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395733#comment-17395733
 ] 

Xintong Song commented on FLINK-22889:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21740=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=b6c4efed-9c7d-55ea-03a9-9bd7d5b08e4c=13435

> JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure
> 
>
> Key: FLINK-22889
> URL: https://issues.apache.org/jira/browse/FLINK-22889
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Dawid Wysakowicz
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658





[jira] [Created] (FLINK-23683) KafkaSinkITCase hangs on azure

2021-08-08 Thread Xintong Song (Jira)
Xintong Song created FLINK-23683:


 Summary: KafkaSinkITCase hangs on azure
 Key: FLINK-23683
 URL: https://issues.apache.org/jira/browse/FLINK-23683
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21738=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=f7d83ad5-3324-5307-0eb3-819065cdcb38=7886





[jira] [Commented] (FLINK-23678) KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395732#comment-17395732
 ] 

Xintong Song commented on FLINK-23678:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21738=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=6946

> KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure
> --
>
> Key: FLINK-23678
> URL: https://issues.apache.org/jira/browse/FLINK-23678
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21711=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=6946
> {code}
> Aug 07 00:12:18 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 67.431 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase
> Aug 07 00:12:18 [ERROR] 
> testWriteRecordsToKafkaWithExactlyOnceGuarantee(org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase)
>   Time elapsed: 7.001 s  <<< FAILURE!
> Aug 07 00:12:18 java.lang.AssertionError: expected:<407799> but was:<407798>
> Aug 07 00:12:18   at org.junit.Assert.fail(Assert.java:89)
> Aug 07 00:12:18   at org.junit.Assert.failNotEquals(Assert.java:835)
> Aug 07 00:12:18   at org.junit.Assert.assertEquals(Assert.java:647)
> Aug 07 00:12:18   at org.junit.Assert.assertEquals(Assert.java:633)
> Aug 07 00:12:18   at 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase.writeRecordsToKafka(KafkaSinkITCase.java:334)
> Aug 07 00:12:18   at 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee(KafkaSinkITCase.java:173)
> Aug 07 00:12:18   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 07 00:12:18   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 07 00:12:18   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 07 00:12:18   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 07 00:12:18   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 07 00:12:18   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Aug 07 00:12:18   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 07 00:12:18   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 07 00:12:18   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Aug 07 00:12:18   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Aug 07 00:12:18   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Aug 07 00:12:18   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 07 00:12:18   at 
> 

[jira] [Commented] (FLINK-22387) UpsertKafkaTableITCase hangs when setting up kafka

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395731#comment-17395731
 ] 

Xintong Song commented on FLINK-22387:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21739=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518=7545

> UpsertKafkaTableITCase hangs when setting up kafka
> --
>
> Key: FLINK-22387
> URL: https://issues.apache.org/jira/browse/FLINK-22387
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.14.0, 1.12.6, 1.13.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16901=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6932
> {code}
> 2021-04-20T20:01:32.2276988Z Apr 20 20:01:32 "main" #1 prio=5 os_prio=0 
> tid=0x7fe87400b000 nid=0x4028 runnable [0x7fe87df22000]
> 2021-04-20T20:01:32.2277666Z Apr 20 20:01:32java.lang.Thread.State: 
> RUNNABLE
> 2021-04-20T20:01:32.2278338Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okio.Buffer.getByte(Buffer.java:312)
> 2021-04-20T20:01:32.2279325Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:310)
> 2021-04-20T20:01:32.2280656Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:492)
> 2021-04-20T20:01:32.2281603Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471)
> 2021-04-20T20:01:32.2282163Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204)
> 2021-04-20T20:01:32.2282870Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186)
> 2021-04-20T20:01:32.2283494Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511)
> 2021-04-20T20:01:32.2284460Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43)
> 2021-04-20T20:01:32.2285183Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313)
> 2021-04-20T20:01:32.2285756Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476)
> 2021-04-20T20:01:32.2286287Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139)
> 2021-04-20T20:01:32.2286795Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192)
> 2021-04-20T20:01:32.2287270Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.Response.close(Response.java:290)
> 2021-04-20T20:01:32.2287913Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:285)
> 2021-04-20T20:01:32.2288606Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272)
> 2021-04-20T20:01:32.2289295Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$340/2058508175.close(Unknown
>  Source)
> 2021-04-20T20:01:32.2289886Z Apr 20 20:01:32  at 
> com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77)
> 2021-04-20T20:01:32.2290567Z Apr 20 20:01:32  at 
> org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:202)
> 2021-04-20T20:01:32.2291051Z Apr 20 20:01:32  at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:205)
> 2021-04-20T20:01:32.2291879Z Apr 20 20:01:32  - locked <0xe9cd50f8> 
> (a [Ljava.lang.Object;)
> 2021-04-20T20:01:32.2292313Z Apr 20 20:01:32  at 
> org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
> 2021-04-20T20:01:32.2292870Z Apr 20 20:01:32  at 
> org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12)
> 2021-04-20T20:01:32.2293383Z Apr 20 20:01:32  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310)
> 2021-04-20T20:01:32.2293890Z Apr 20 20:01:32  at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1029)
> 2021-04-20T20:01:32.2294578Z Apr 20 20:01:32  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
> 

[jira] [Created] (FLINK-23682) HAQueryableStateFsBackendITCase.testCustomKryoSerializerHandling fails due to AskTimeoutException

2021-08-08 Thread Xintong Song (Jira)
Xintong Song created FLINK-23682:


 Summary: 
HAQueryableStateFsBackendITCase.testCustomKryoSerializerHandling fails due to 
AskTimeoutException
 Key: FLINK-23682
 URL: https://issues.apache.org/jira/browse/FLINK-23682
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.12.5
Reporter: Xintong Song
 Fix For: 1.12.6


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21739=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=60581941-0138-53c0-39fe-86d62be5f407=14674

{code}
[ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 21.647 
s <<< FAILURE! - in 
org.apache.flink.queryablestate.itcases.HAQueryableStateFsBackendITCase
[ERROR] 
testCustomKryoSerializerHandling(org.apache.flink.queryablestate.itcases.HAQueryableStateFsBackendITCase)
  Time elapsed: 10.99 s  <<< ERROR!
java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: 
Invocation of public abstract java.util.concurrent.CompletableFuture 
org.apache.flink.runtime.dispatcher.DispatcherGateway.submitJob(org.apache.flink.runtime.jobgraph.JobGraph,org.apache.flink.api.common.time.Time)
 timed out.
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at 
org.apache.flink.queryablestate.itcases.AbstractQueryableStateTestBase.testCustomKryoSerializerHandling(AbstractQueryableStateTestBase.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: java.util.concurrent.TimeoutException: Invocation of public abstract 
java.util.concurrent.CompletableFuture 
org.apache.flink.runtime.dispatcher.DispatcherGateway.submitJob(org.apache.flink.runtime.jobgraph.JobGraph,org.apache.flink.api.common.time.Time)
 timed out.
at com.sun.proxy.$Proxy35.submitJob(Unknown Source)
at 
org.apache.flink.runtime.minicluster.MiniCluster.lambda$submitJob$14(MiniCluster.java:829)
at 
java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1119)
at 

[jira] [Closed] (FLINK-16069) Creation of TaskDeploymentDescriptor can block main thread for long time

2021-08-08 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-16069.
---
Resolution: Duplicate

> Creation of TaskDeploymentDescriptor can block main thread for long time
> 
>
> Key: FLINK-16069
> URL: https://issues.apache.org/jira/browse/FLINK-16069
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: huweihua
>Priority: Major
> Attachments: FLINK-16069-POC-results, batch.png, streaming.png
>
>
> Deploying tasks can take a long time when we submit a high-parallelism job,
> and Execution#deploy runs in the main thread, so it blocks the JobMaster from
> processing other akka messages, such as heartbeats. The creation of the
> TaskDeploymentDescriptor takes most of the time; we can move the creation into
> a future. For example, for a job [source(8000)->sink(8000)], moving the total
> 16000 tasks from SCHEDULED to DEPLOYING took more than 1 min. This caused the
> TaskManager heartbeat to time out, and the job never succeeded.
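The "put the creation in a future" idea in the issue description can be 
sketched as follows. This is a minimal illustration, not Flink's actual code: 
`buildDescriptor` is a hypothetical stand-in for the expensive 
TaskDeploymentDescriptor creation, offloaded to an I/O executor so the main 
thread stays free to process heartbeats.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: build the heavyweight descriptor on an I/O executor
// instead of the scheduler's main thread.
public class AsyncDeploySketch {

    // Stand-in for the expensive TaskDeploymentDescriptor creation.
    static String buildDescriptor(int subtaskIndex) {
        return "descriptor-" + subtaskIndex;
    }

    // Run the creation off the main thread; the caller chains the actual
    // deployment step onto the returned future.
    static CompletableFuture<String> buildDescriptorAsync(
            int subtaskIndex, ExecutorService ioExecutor) {
        return CompletableFuture.supplyAsync(() -> buildDescriptor(subtaskIndex), ioExecutor);
    }

    public static void main(String[] args) {
        ExecutorService ioExecutor = Executors.newFixedThreadPool(4);
        String descriptor = buildDescriptorAsync(42, ioExecutor).join();
        System.out.println(descriptor);
        ioExecutor.shutdown();
    }
}
```

With 16000 tasks, this keeps the main thread responsive while descriptor 
creation proceeds in parallel on the executor's worker threads.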





[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395729#comment-17395729
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21739=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=21421

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829





[jira] [Commented] (FLINK-16069) Creation of TaskDeploymentDescriptor can block main thread for long time

2021-08-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395728#comment-17395728
 ] 

Zhu Zhu commented on FLINK-16069:
-

Thanks for making the improvements and sharing the results! [~Thesharing]
The attached graph shows the improvement in TaskDeploymentDescriptor creation,
and the table shows the end-to-end deployment improvement.

> Creation of TaskDeploymentDescriptor can block main thread for long time
> 
>
> Key: FLINK-16069
> URL: https://issues.apache.org/jira/browse/FLINK-16069
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: huweihua
>Priority: Major
> Attachments: FLINK-16069-POC-results, batch.png, streaming.png
>
>
> Deploying tasks can take a long time when we submit a high-parallelism job,
> and Execution#deploy runs in the main thread, so it blocks the JobMaster from
> processing other akka messages, such as heartbeats. The creation of the
> TaskDeploymentDescriptor takes most of the time; we can move the creation into
> a future. For example, for a job [source(8000)->sink(8000)], moving the total
> 16000 tasks from SCHEDULED to DEPLOYING took more than 1 min. This caused the
> TaskManager heartbeat to time out, and the job never succeeded.





[jira] [Commented] (FLINK-23602) org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No applicable constructor/method found for actual parameters "org.apache.flink.table.data.DecimalDa

2021-08-08 Thread Yao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395727#comment-17395727
 ] 

Yao Zhang commented on FLINK-23602:
---

Hi [~TsReaper], 

 

I reproduced what you described by printing the SQL explanation, and I guess 
the issue might be that the data type of the second operand (`c1 < 1`, which 
is literally BOOLEAN, but I will explain it later) could not be determined, so 
no automatic data type casting was applied to the first operand.

 

However, I tried to explain the SQL:

SELECT database5_t2.c0 <= (c1 < 1) FROM database5_t2

and then got an error:

Cannot apply '<=' to arguments of type ' <= '. 
Supported form(s): ' <= '

So I suspect that the second operand of the BETWEEN clause is illegal.

 

Can you successfully run this SQL explanation?

 

> org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No 
> applicable constructor/method found for actual parameters 
> "org.apache.flink.table.data.DecimalData
> -
>
> Key: FLINK-23602
> URL: https://issues.apache.org/jira/browse/FLINK-23602
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Assignee: Yao Zhang
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> CREATE TABLE database5_t2 (
>   `c0` DECIMAL , `c1` BIGINT
> ) WITH (
>   'connector' = 'filesystem',
>   'format' = 'testcsv',
>   'path' = '$resultPath33'
> )
> INSERT OVERWRITE database5_t2(c0, c1) VALUES(-120229892, 790169221), 
> (-1070424438, -1787215649)
> SELECT COUNT(CAST ((database5_t2.c0) BETWEEN ((REVERSE(CAST ('1969-12-08' AS 
> STRING  AND
> (('-727278084') IN (CAST (database5_t2.c0 AS STRING), '0.9996987230442536')) 
> AS DOUBLE )) AS ref0
> FROM database5_t2 GROUP BY database5_t2.c1  ORDER BY database5_t2.c1
> {code}
> Running the sql above, will generate the error of this:
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92)
>   at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:482)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at 

[jira] [Updated] (FLINK-22194) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to commit timeout

2021-08-08 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22194:
-
Priority: Major  (was: Minor)

> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to 
> commit timeout
> --
>
> Key: FLINK-22194
> URL: https://issues.apache.org/jira/browse/FLINK-22194
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.12.4
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16308=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=e8fcc430-213e-5cce-59d4-6942acf09121=6535
> {code:java}
> [ERROR] 
> testCommitOffsetsWithoutAliveFetchers(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.123 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:285)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers(KafkaSourceReaderTest.java:129)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> {code}





[jira] [Updated] (FLINK-22194) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to commit timeout

2021-08-08 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22194:
-
Labels: test-stability  (was: auto-deprioritized-critical 
auto-deprioritized-major test-stability)

> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to 
> commit timeout
> --
>
> Key: FLINK-22194
> URL: https://issues.apache.org/jira/browse/FLINK-22194
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.12.4
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16308=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=e8fcc430-213e-5cce-59d4-6942acf09121=6535
> {code:java}
> [ERROR] 
> testCommitOffsetsWithoutAliveFetchers(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.123 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:285)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers(KafkaSourceReaderTest.java:129)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> {code}





[jira] [Commented] (FLINK-22194) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to commit timeout

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395725#comment-17395725
 ] 

Xintong Song commented on FLINK-22194:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21739=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6122

> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to 
> commit timeout
> --
>
> Key: FLINK-22194
> URL: https://issues.apache.org/jira/browse/FLINK-22194
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.12.4
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16308=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=e8fcc430-213e-5cce-59d4-6942acf09121=6535
> {code:java}
> [ERROR] 
> testCommitOffsetsWithoutAliveFetchers(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.123 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:285)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers(KafkaSourceReaderTest.java:129)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> {code}





[jira] [Created] (FLINK-23681) Rename the field name of OperatorIDPair to avoid the confusion

2021-08-08 Thread Yun Gao (Jira)
Yun Gao created FLINK-23681:
---

 Summary: Rename the field name of OperatorIDPair to avoid the 
confusion
 Key: FLINK-23681
 URL: https://issues.apache.org/jira/browse/FLINK-23681
 Project: Flink
  Issue Type: Bug
Reporter: Yun Gao


For OperatorIDPair, the current field names are misleading: generatedOperatorID 
is derived from the uid, or from the natural traversal order when no uid is set 
(StreamGraphHasherV2), while userDefinedOperatorID is based on the uidHash 
(StreamGraphUserHashHasher). The latter is only meant for users who forgot to 
set a custom uid on the first run but still want to restore from state: they can 
specify the uidHash directly, and it is used as the operator id as-is.

 

The current names therefore make it easy to assume both fields are derived from 
the natural order or the uid field, so the fields should be renamed to be 
clearer.
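The rename rationale can be illustrated with a small stand-alone sketch. This is not the actual Flink class, and the names below (uidHashBasedOperatorID in particular) are hypothetical, chosen only to show how the second field's source could be made explicit:

```java
import java.util.Optional;

/**
 * Illustrative sketch only, not Flink's real OperatorIDPair. The first field
 * is derived from the uid (or the StreamGraphHasherV2 traversal order when no
 * uid is set); the second field exists only when the user explicitly supplied
 * a uidHash, so naming it after that source avoids the confusion described
 * in the issue.
 */
final class OperatorIDPairSketch {
    // derived from uid, or from the natural traversal order
    private final String generatedOperatorID;
    // present only when the user explicitly set a uidHash
    private final Optional<String> uidHashBasedOperatorID;

    OperatorIDPairSketch(String generatedOperatorID, String uidHash) {
        this.generatedOperatorID = generatedOperatorID;
        this.uidHashBasedOperatorID = Optional.ofNullable(uidHash);
    }

    String getGeneratedOperatorID() {
        return generatedOperatorID;
    }

    Optional<String> getUidHashBasedOperatorID() {
        return uidHashBasedOperatorID;
    }
}
```

With names like these it is harder to mistake the second field for something derived from the traversal order.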





[jira] [Updated] (FLINK-23681) Rename the field name of OperatorIDPair to avoid the confusion

2021-08-08 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23681:

Affects Version/s: 1.14.0

> Rename the field name of OperatorIDPair to avoid the confusion
> --
>
> Key: FLINK-23681
> URL: https://issues.apache.org/jira/browse/FLINK-23681
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
>
> For OperatorIDPair, the current field names are misleading: generatedOperatorID 
> is derived from the uid, or from the natural traversal order when no uid is 
> set (StreamGraphHasherV2), while userDefinedOperatorID is based on the uidHash 
> (StreamGraphUserHashHasher). The latter is only meant for users who forgot to 
> set a custom uid on the first run but still want to restore from state: they 
> can specify the uidHash directly, and it is used as the operator id as-is.
>  
> The current names therefore make it easy to assume both fields are derived 
> from the natural order or the uid field, so the fields should be renamed to be 
> clearer.





[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395723#comment-17395723
 ] 

Xintong Song commented on FLINK-22889:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21728=logs=c91190b6-40ae-57b2-5999-31b869b0a7c1=41463ccd-0694-5d4d-220d-8f771e7d098b=13900

> JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure
> 
>
> Key: FLINK-22889
> URL: https://issues.apache.org/jira/browse/FLINK-22889
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Dawid Wysakowicz
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658





[jira] [Assigned] (FLINK-23681) Rename the field name of OperatorIDPair to avoid the confusion

2021-08-08 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao reassigned FLINK-23681:
---

Assignee: Yun Gao

> Rename the field name of OperatorIDPair to avoid the confusion
> --
>
> Key: FLINK-23681
> URL: https://issues.apache.org/jira/browse/FLINK-23681
> Project: Flink
>  Issue Type: Bug
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
>
> For OperatorIDPair, the current field names are misleading: generatedOperatorID 
> is derived from the uid, or from the natural traversal order when no uid is 
> set (StreamGraphHasherV2), while userDefinedOperatorID is based on the uidHash 
> (StreamGraphUserHashHasher). The latter is only meant for users who forgot to 
> set a custom uid on the first run but still want to restore from state: they 
> can specify the uidHash directly, and it is used as the operator id as-is.
>  
> The current names therefore make it easy to assume both fields are derived 
> from the natural order or the uid field, so the fields should be renamed to be 
> clearer.





[jira] [Commented] (FLINK-23680) StreamTaskTimerTest.testOpenCloseAndTimestamps fails on azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395722#comment-17395722
 ] 

Xintong Song commented on FLINK-23680:
--

[~akalashnikov], this seems to be related to FLINK-23452. Could you please take 
a look?

> StreamTaskTimerTest.testOpenCloseAndTimestamps fails on azure
> -
>
> Key: FLINK-23680
> URL: https://issues.apache.org/jira/browse/FLINK-23680
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21728=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=747432ad-a576-5911-1e2a-68c6bedc248a=8985
> {code}
> Aug 07 22:58:40 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 2.128 s <<< FAILURE! - in 
> org.apache.flink.streaming.runtime.operators.StreamTaskTimerTest
> Aug 07 22:58:40 [ERROR] 
> testOpenCloseAndTimestamps(org.apache.flink.streaming.runtime.operators.StreamTaskTimerTest)
>   Time elapsed: 0.041 s  <<< FAILURE!
> Aug 07 22:58:40 java.lang.AssertionError: expected:<2> but was:<1>
> Aug 07 22:58:40   at org.junit.Assert.fail(Assert.java:89)
> Aug 07 22:58:40   at org.junit.Assert.failNotEquals(Assert.java:835)
> Aug 07 22:58:40   at org.junit.Assert.assertEquals(Assert.java:647)
> Aug 07 22:58:40   at org.junit.Assert.assertEquals(Assert.java:633)
> Aug 07 22:58:40   at 
> org.apache.flink.streaming.runtime.operators.StreamTaskTimerTest.testOpenCloseAndTimestamps(StreamTaskTimerTest.java:77)
> Aug 07 22:58:40   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Aug 07 22:58:40   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 07 22:58:40   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 07 22:58:40   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Aug 07 22:58:40   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Aug 07 22:58:40   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 07 22:58:40   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Aug 07 22:58:40   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 07 22:58:40   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Aug 07 22:58:40   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Aug 07 22:58:40   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Aug 07 22:58:40   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 07 22:58:40   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Aug 07 22:58:40   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Aug 07 22:58:40   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Aug 07 22:58:40   at org.junit.runners.Suite.runChild(Suite.java:128)
> Aug 07 22:58:40   at org.junit.runners.Suite.runChild(Suite.java:27)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Aug 07 22:58:40   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Aug 07 22:58:40  

[jira] [Created] (FLINK-23680) StreamTaskTimerTest.testOpenCloseAndTimestamps fails on azure

2021-08-08 Thread Xintong Song (Jira)
Xintong Song created FLINK-23680:


 Summary: StreamTaskTimerTest.testOpenCloseAndTimestamps fails on 
azure
 Key: FLINK-23680
 URL: https://issues.apache.org/jira/browse/FLINK-23680
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Task
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21728=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=747432ad-a576-5911-1e2a-68c6bedc248a=8985

{code}
Aug 07 22:58:40 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 2.128 s <<< FAILURE! - in 
org.apache.flink.streaming.runtime.operators.StreamTaskTimerTest
Aug 07 22:58:40 [ERROR] 
testOpenCloseAndTimestamps(org.apache.flink.streaming.runtime.operators.StreamTaskTimerTest)
  Time elapsed: 0.041 s  <<< FAILURE!
Aug 07 22:58:40 java.lang.AssertionError: expected:<2> but was:<1>
Aug 07 22:58:40 at org.junit.Assert.fail(Assert.java:89)
Aug 07 22:58:40 at org.junit.Assert.failNotEquals(Assert.java:835)
Aug 07 22:58:40 at org.junit.Assert.assertEquals(Assert.java:647)
Aug 07 22:58:40 at org.junit.Assert.assertEquals(Assert.java:633)
Aug 07 22:58:40 at 
org.apache.flink.streaming.runtime.operators.StreamTaskTimerTest.testOpenCloseAndTimestamps(StreamTaskTimerTest.java:77)
Aug 07 22:58:40 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Aug 07 22:58:40 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Aug 07 22:58:40 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Aug 07 22:58:40 at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
Aug 07 22:58:40 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Aug 07 22:58:40 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Aug 07 22:58:40 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Aug 07 22:58:40 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Aug 07 22:58:40 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Aug 07 22:58:40 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Aug 07 22:58:40 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Aug 07 22:58:40 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 07 22:58:40 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Aug 07 22:58:40 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Aug 07 22:58:40 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Aug 07 22:58:40 at org.junit.runners.Suite.runChild(Suite.java:128)
Aug 07 22:58:40 at org.junit.runners.Suite.runChild(Suite.java:27)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 07 22:58:40 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Aug 07 22:58:40 at 
org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
Aug 07 22:58:40 at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
Aug 07 22:58:40 at 

[jira] [Updated] (FLINK-23671) Failed to inference type in correlate

2021-08-08 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-23671:
--
Description: 

Please also disable assertions, i.e. run the query *without* the JVM parameter 
-ea .

 !screenshot-1.png! 

{code:java}
CREATE FUNCTION func111 AS 
'org.apache.flink.table.client.gateway.utils.CPDetailOriginMatchV2UDF';

CREATE TABLE side(
  `id2` VARCHAR,
  PRIMARY KEY (`id2`) NOT ENFORCED
) WITH (
  'connector' = 'values'
);

CREATE TABLE main(
  `id` VARCHAR,
  `proctime` as proctime()
) WITH (
  'connector' = 'datagen',
  'number-of-rows' = '10'
);

CREATE TABLE blackhole(
  `id` VARCHAR
) WITH (
  'connector' = 'blackhole'
);

INSERT INTO blackhole
SELECT `id`
FROM main
JOIN side FOR SYSTEM_TIME AS OF main.`proctime` ON main.`id` = side.`id2`
INNER join lateral table(func111(side.`id2`)) as T(`is_match`, `match_bizline`, 
`match_page_id`, `source_type`) ON 1 = 1;

{code}

The implementation of the UDF is as follows.

{code:java}
package org.apache.flink.table.client.gateway.utils;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.DataTypeFactory;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.inference.TypeInference;
import org.apache.flink.types.Row;

import java.util.Optional;

public class CPDetailOriginMatchV2UDF extends TableFunction<Row> {

public void eval(String original) {
collect(null);
}

// is_matched, match_bizline, match_page_id, scene
@Override
public TypeInference getTypeInference(DataTypeFactory typeFactory) {
return TypeInference.newBuilder()
.outputTypeStrategy(
callContext -> {
DataType[] array = new DataType[4];
array[0] = DataTypes.BOOLEAN();
array[1] = DataTypes.STRING();
// page_id is of type Long; can BIGINT support it?
array[2] = DataTypes.BIGINT();
array[3] = DataTypes.STRING();
return Optional.of(DataTypes.ROW(array));
})
.build();
}
}


{code}
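The reported mismatch comes down to nullability: side.`id2` is a PRIMARY KEY column and is therefore derived as STRING NOT NULL, while the argument of eval(String) is inferred as nullable STRING, and the planner's verifyArgumentTypes rejects the call. A minimal stand-alone model of that strict comparison (plain Java, not Flink's real type classes; all names here are illustrative):

```java
/**
 * Illustrative model of the argument check behind the exception in this
 * issue, not Flink's actual type system. A NOT NULL type is treated as a
 * distinct type, so 'STRING NOT NULL' (the PRIMARY KEY column) does not
 * match the nullable 'STRING' inferred for the UDF parameter.
 */
final class LogicalTypeSketch {
    final String root;       // e.g. "STRING"
    final boolean nullable;  // NOT NULL => false

    LogicalTypeSketch(String root, boolean nullable) {
        this.root = root;
        this.nullable = nullable;
    }

    /** Mirrors a strict equality check on root type and nullability. */
    boolean matches(LogicalTypeSketch other) {
        return root.equals(other.root) && nullable == other.nullable;
    }
}
```

Under this model, STRING NOT NULL does not match STRING; in the real planner the analogous strict comparison is what raises the CodeGenException quoted in the exception stack, and declaring explicit input types in the TypeInference builder is one possible direction for making both sides agree.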

The exception stack is as follows.
{code:java}
org.apache.flink.table.planner.codegen.CodeGenException: Mismatch of function's 
argument data type 'STRING NOT NULL' and actual argument type 'STRING'.
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:323)
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:320)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.verifyArgumentTypes(BridgingFunctionGenUtil.scala:320)
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCallWithDataType(BridgingFunctionGenUtil.scala:95)
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCall(BridgingFunctionGenUtil.scala:65)
at 
org.apache.flink.table.planner.codegen.calls.BridgingSqlFunctionCallGen.generate(BridgingSqlFunctionCallGen.scala:73)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:861)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:537)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:157)
at 
org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateOperator(CorrelateCodeGenerator.scala:127)
at 
org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateCorrelateTransformation(CorrelateCodeGenerator.scala:75)
at 
org.apache.flink.table.planner.codegen.CorrelateCodeGenerator.generateCorrelateTransformation(CorrelateCodeGenerator.scala)
at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCorrelate.translateToPlanInternal(CommonExecCorrelate.java:102)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:210)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:289)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.lambda$translateInputToPlan$5(ExecNodeBase.java:244)

{code}


  

[jira] [Updated] (FLINK-23671) Failed to inference type in correlate

2021-08-08 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-23671:
--
Attachment: screenshot-1.png

> Failed to inference type in correlate 
> --
>
> Key: FLINK-23671
> URL: https://issues.apache.org/jira/browse/FLINK-23671
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.2
>Reporter: Shengkai Fang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> {code:java}
> CREATE FUNCTION func111 AS 
> 'org.apache.flink.table.client.gateway.utils.CPDetailOriginMatchV2UDF';
> CREATE TABLE side(
>   `id2` VARCHAR,
>   PRIMARY KEY (`id2`) NOT ENFORCED
> ) WITH (
>   'connector' = 'values'
> );
> CREATE TABLE main(
>   `id` VARCHAR,
>   `proctime` as proctime()
> ) WITH (
>   'connector' = 'datagen',
>   'number-of-rows' = '10'
> );
> CREATE TABLE blackhole(
>   `id` VARCHAR
> ) WITH (
>   'connector' = 'blackhole'
> );
> INSERT INTO blackhole
> SELECT `id`
> FROM main
> JOIN side FOR SYSTEM_TIME AS OF main.`proctime` ON main.`id` = side.`id2`
> INNER join lateral table(func111(side.`id2`)) as T(`is_match`, 
> `match_bizline`, `match_page_id`, `source_type`) ON 1 = 1;
> {code}
> The implementation of the UDF is as follows.
> {code:java}
> package org.apache.flink.table.client.gateway.utils;
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.catalog.DataTypeFactory;
> import org.apache.flink.table.functions.TableFunction;
> import org.apache.flink.table.types.DataType;
> import org.apache.flink.table.types.inference.TypeInference;
> import org.apache.flink.types.Row;
> import java.util.Optional;
> public class CPDetailOriginMatchV2UDF extends TableFunction<Row> {
> public void eval(String original) {
> collect(null);
> }
> // is_matched, match_bizline, match_page_id, scene
> @Override
> public TypeInference getTypeInference(DataTypeFactory typeFactory) {
> return TypeInference.newBuilder()
> .outputTypeStrategy(
> callContext -> {
> DataType[] array = new DataType[4];
> array[0] = DataTypes.BOOLEAN();
> array[1] = DataTypes.STRING();
> // page_id is of type Long; can BIGINT support it?
> array[2] = DataTypes.BIGINT();
> array[3] = DataTypes.STRING();
> return Optional.of(DataTypes.ROW(array));
> })
> .build();
> }
> }
> {code}
> The exception stack is as follows.
> {code:java}
> org.apache.flink.table.planner.codegen.CodeGenException: Mismatch of 
> function's argument data type 'STRING NOT NULL' and actual argument type 
> 'STRING'.
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:323)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:320)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.verifyArgumentTypes(BridgingFunctionGenUtil.scala:320)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCallWithDataType(BridgingFunctionGenUtil.scala:95)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCall(BridgingFunctionGenUtil.scala:65)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingSqlFunctionCallGen.generate(BridgingSqlFunctionCallGen.scala:73)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:861)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:537)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:157)
>   at 
> org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateOperator(CorrelateCodeGenerator.scala:127)
>   at 
> org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateCorrelateTransformation(CorrelateCodeGenerator.scala:75)
>   at 
> org.apache.flink.table.planner.codegen.CorrelateCodeGenerator.generateCorrelateTransformation(CorrelateCodeGenerator.scala)
>   at 
> 

[jira] [Commented] (FLINK-23678) KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395718#comment-17395718
 ] 

Xintong Song commented on FLINK-23678:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21728=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=6946

> KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure
> --
>
> Key: FLINK-23678
> URL: https://issues.apache.org/jira/browse/FLINK-23678
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21711=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=6946
> {code}
> Aug 07 00:12:18 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 67.431 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase
> Aug 07 00:12:18 [ERROR] 
> testWriteRecordsToKafkaWithExactlyOnceGuarantee(org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase)
>   Time elapsed: 7.001 s  <<< FAILURE!
> Aug 07 00:12:18 java.lang.AssertionError: expected:<407799> but was:<407798>
> Aug 07 00:12:18   at org.junit.Assert.fail(Assert.java:89)
> Aug 07 00:12:18   at org.junit.Assert.failNotEquals(Assert.java:835)
> Aug 07 00:12:18   at org.junit.Assert.assertEquals(Assert.java:647)
> Aug 07 00:12:18   at org.junit.Assert.assertEquals(Assert.java:633)
> Aug 07 00:12:18   at 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase.writeRecordsToKafka(KafkaSinkITCase.java:334)
> Aug 07 00:12:18   at 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee(KafkaSinkITCase.java:173)
> Aug 07 00:12:18   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 07 00:12:18   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 07 00:12:18   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 07 00:12:18   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 07 00:12:18   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 07 00:12:18   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Aug 07 00:12:18   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 07 00:12:18   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 07 00:12:18   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Aug 07 00:12:18   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Aug 07 00:12:18   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Aug 07 00:12:18   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 07 00:12:18   at 
> 

[jira] [Created] (FLINK-23679) PyFlink end-to-end test fail due to test script contains errors

2021-08-08 Thread Xintong Song (Jira)
Xintong Song created FLINK-23679:


 Summary: PyFlink end-to-end test fail due to test script contains 
errors
 Key: FLINK-23679
 URL: https://issues.apache.org/jira/browse/FLINK-23679
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21728=logs=4dd4dbdd-1802-5eb7-a518-6acd9d24d0fc=7c4a8fb8--5a77-f518-4176bfae300b=10433



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23671) Failed to inference type in correlate

2021-08-08 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395717#comment-17395717
 ] 

Shengkai Fang commented on FLINK-23671:
---

[~twalthr] I have updated the description.

> Failed to inference type in correlate 
> --
>
> Key: FLINK-23671
> URL: https://issues.apache.org/jira/browse/FLINK-23671
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.2
>Reporter: Shengkai Fang
>Priority: Major
>
> {code:java}
> CREATE FUNCTION func111 AS 
> 'org.apache.flink.table.client.gateway.utils.CPDetailOriginMatchV2UDF';
> CREATE TABLE side(
>   `id2` VARCHAR,
>   PRIMARY KEY (`id2`) NOT ENFORCED
> ) WITH (
>   'connector' = 'values'
> );
> CREATE TABLE main(
>   `id` VARCHAR,
>   `proctime` as proctime()
> ) WITH (
>   'connector' = 'datagen',
>   'number-of-rows' = '10'
> );
> CREATE TABLE blackhole(
>   `id` VARCHAR
> ) WITH (
>   'connector' = 'blackhole'
> );
> INSERT INTO blackhole
> SELECT `id`
> FROM main
> JOIN side FOR SYSTEM_TIME AS OF main.`proctime` ON main.`id` = side.`id2`
> INNER join lateral table(func111(side.`id2`)) as T(`is_match`, 
> `match_bizline`, `match_page_id`, `source_type`) ON 1 = 1;
> {code}
> The implementation of the UDF is as follows:
> {code:java}
> package org.apache.flink.table.client.gateway.utils;
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.catalog.DataTypeFactory;
> import org.apache.flink.table.functions.TableFunction;
> import org.apache.flink.table.types.DataType;
> import org.apache.flink.table.types.inference.TypeInference;
> import org.apache.flink.types.Row;
> import java.util.Optional;
> public class CPDetailOriginMatchV2UDF extends TableFunction<Row> {
> public void eval(String original) {
> collect(null);
> }
> // is_matched, match_bizline, match_page_id, scene
> @Override
> public TypeInference getTypeInference(DataTypeFactory typeFactory) {
> return TypeInference.newBuilder()
> .outputTypeStrategy(
> callContext -> {
> DataType[] array = new DataType[4];
> array[0] = DataTypes.BOOLEAN();
> array[1] = DataTypes.STRING();
> // page_id is a Long; can BIGINT support it?
> array[2] = DataTypes.BIGINT();
> array[3] = DataTypes.STRING();
> return Optional.of(DataTypes.ROW(array));
> })
> .build();
> }
> }
> {code}
> The exception stack is as follows.
> {code:java}
> org.apache.flink.table.planner.codegen.CodeGenException: Mismatch of 
> function's argument data type 'STRING NOT NULL' and actual argument type 
> 'STRING'.
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:323)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:320)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.verifyArgumentTypes(BridgingFunctionGenUtil.scala:320)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCallWithDataType(BridgingFunctionGenUtil.scala:95)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCall(BridgingFunctionGenUtil.scala:65)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingSqlFunctionCallGen.generate(BridgingSqlFunctionCallGen.scala:73)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:861)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:537)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:157)
>   at 
> org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateOperator(CorrelateCodeGenerator.scala:127)
>   at 
> org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateCorrelateTransformation(CorrelateCodeGenerator.scala:75)
>   at 
> org.apache.flink.table.planner.codegen.CorrelateCodeGenerator.generateCorrelateTransformation(CorrelateCodeGenerator.scala)
>  

[jira] [Updated] (FLINK-23671) Failed to inference type in correlate

2021-08-08 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-23671:
--
Description: 
{code:java}
CREATE FUNCTION func111 AS 
'org.apache.flink.table.client.gateway.utils.CPDetailOriginMatchV2UDF';

CREATE TABLE side(
  `id2` VARCHAR,
  PRIMARY KEY (`id2`) NOT ENFORCED
) WITH (
  'connector' = 'values'
);

CREATE TABLE main(
  `id` VARCHAR,
  `proctime` as proctime()
) WITH (
  'connector' = 'datagen',
  'number-of-rows' = '10'
);

CREATE TABLE blackhole(
  `id` VARCHAR
) WITH (
  'connector' = 'blackhole'
);

INSERT INTO blackhole
SELECT `id`
FROM main
JOIN side FOR SYSTEM_TIME AS OF main.`proctime` ON main.`id` = side.`id2`
INNER join lateral table(func111(side.`id2`)) as T(`is_match`, `match_bizline`, 
`match_page_id`, `source_type`) ON 1 = 1;

{code}

The implementation of the UDF is as follows:

{code:java}
package org.apache.flink.table.client.gateway.utils;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.DataTypeFactory;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.inference.TypeInference;
import org.apache.flink.types.Row;

import java.util.Optional;

public class CPDetailOriginMatchV2UDF extends TableFunction<Row> {

public void eval(String original) {
collect(null);
}

// is_matched, match_bizline, match_page_id, scene
@Override
public TypeInference getTypeInference(DataTypeFactory typeFactory) {
return TypeInference.newBuilder()
.outputTypeStrategy(
callContext -> {
DataType[] array = new DataType[4];
array[0] = DataTypes.BOOLEAN();
array[1] = DataTypes.STRING();
// page_id is a Long; can BIGINT support it?
array[2] = DataTypes.BIGINT();
array[3] = DataTypes.STRING();
return Optional.of(DataTypes.ROW(array));
})
.build();
}
}


{code}

The exception stack is as follows.
{code:java}
org.apache.flink.table.planner.codegen.CodeGenException: Mismatch of function's 
argument data type 'STRING NOT NULL' and actual argument type 'STRING'.
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:323)
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:320)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.verifyArgumentTypes(BridgingFunctionGenUtil.scala:320)
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCallWithDataType(BridgingFunctionGenUtil.scala:95)
at 
org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCall(BridgingFunctionGenUtil.scala:65)
at 
org.apache.flink.table.planner.codegen.calls.BridgingSqlFunctionCallGen.generate(BridgingSqlFunctionCallGen.scala:73)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:861)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:537)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:157)
at 
org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateOperator(CorrelateCodeGenerator.scala:127)
at 
org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateCorrelateTransformation(CorrelateCodeGenerator.scala:75)
at 
org.apache.flink.table.planner.codegen.CorrelateCodeGenerator.generateCorrelateTransformation(CorrelateCodeGenerator.scala)
at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCorrelate.translateToPlanInternal(CommonExecCorrelate.java:102)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:210)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:289)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.lambda$translateInputToPlan$5(ExecNodeBase.java:244)

{code}


  was:
{code:java}
CREATE TABLE side(
  `id2` VARCHAR,
  PRIMARY KEY (`id2`) NOT ENFORCED
) WITH (
  

[GitHub] [flink] Myasuka commented on a change in pull request #16371: [FLINK-23198][Documentation] Fix the demo of ConfigurableRocksDBOptionsFactory in Documentation.

2021-08-08 Thread GitBox


Myasuka commented on a change in pull request #16371:
URL: https://github.com/apache/flink/pull/16371#discussion_r684881212



##
File path: docs/content/docs/ops/state/state_backends.md
##
@@ -293,29 +293,38 @@ Below is an example how to define a custom 
ConfigurableOptionsFactory (set class
 ```java
 
 public class MyOptionsFactory implements ConfigurableRocksDBOptionsFactory {
-
 private static final long DEFAULT_SIZE = 256 * 1024 * 1024;  // 256 MB
-private long blockCacheSize = DEFAULT_SIZE;
 
+public static final ConfigOption<MemorySize> BLOCK_CACHE_SIZE = ConfigOptions
+.key("my.custom.rocksdb.block.cache.size")

Review comment:
   I think it's OK to modify like this.
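For readers skimming the thread, the diff above uses the typed-option pattern: a constant key with a declared value type and a default. The sketch below is a minimal plain-Java stand-in for that pattern; the `Option` class and names here are illustrative and are not Flink's actual `ConfigOption`/`ConfigOptions` API.

```java
import java.util.Map;
import java.util.function.Function;

public final class OptionDemo {
    // A typed key with a default value, resolved against a string-keyed config map.
    static final class Option<T> {
        final String key;
        final T defaultValue;
        final Function<String, T> parser;

        Option(String key, T defaultValue, Function<String, T> parser) {
            this.key = key;
            this.defaultValue = defaultValue;
            this.parser = parser;
        }

        // Returns the parsed configured value, or the default when the key is absent.
        T resolve(Map<String, String> config) {
            String raw = config.get(key);
            return raw == null ? defaultValue : parser.apply(raw);
        }
    }

    // Mirrors the diff's 256 MB block-cache default, but as a plain long of bytes.
    static final Option<Long> BLOCK_CACHE_SIZE =
            new Option<>("my.custom.rocksdb.block.cache.size",
                    256L * 1024 * 1024, Long::parseLong);

    public static void main(String[] args) {
        // Falls back to the default when the key is absent.
        System.out.println(BLOCK_CACHE_SIZE.resolve(Map.of()));
        // Uses the configured value when present.
        System.out.println(BLOCK_CACHE_SIZE.resolve(
                Map.of("my.custom.rocksdb.block.cache.size", "1024")));
    }
}
```

The real `ConfigOptions` builder additionally carries type information (e.g. a memory type for `MemorySize`), which is what the review is discussing.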




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16368: ignore

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16368:
URL: https://github.com/apache/flink/pull/16368#issuecomment-873854539


   
   ## CI report:
   
   * d79db15ab05c86dcd82fe88dc7247fd413f5c0bf UNKNOWN
   * adf2f29416eaf6f819cbb2a7944ecec4c1be0eb2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21698)
 
   * a43db750e998ee75179abbd1fe24ff45842be6f4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21746)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-23579) SELECT SHA2(CAST(NULL AS VARBINARY), CAST(NULL AS INT)) AS ref0 can't compile

2021-08-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395716#comment-17395716
 ] 

Jingsong Lee commented on FLINK-23579:
--

master: 0d2b945729df8f0a0149d02ca24633ae52a1ef21

> SELECT SHA2(CAST(NULL AS VARBINARY), CAST(NULL AS INT)) AS ref0 can't compile
> -
>
> Key: FLINK-23579
> URL: https://issues.apache.org/jira/browse/FLINK-23579
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Running the sql of SELECT SHA2(CAST(NULL AS VARBINARY), CAST(NULL AS INT)) AS 
> ref0 will get the error below:
> {code}
> java.lang.RuntimeException: Could not instantiate generated class 
> 'ExpressionReducer$3'
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:75)
>   at 
> org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:108)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:306)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
>   at 
> 
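As context for the fix discussed in this issue, the SQL semantics being targeted are that a NULL argument should propagate to a NULL result rather than break code generation. A null-safe SHA2-style helper can be sketched in plain Java as below; this is illustrative only, not Flink's generated code, and the `sha2` method name and signature are assumptions for the example.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class Sha2Demo {

    /** Returns the lowercase hex digest, or null if either argument is null (SQL semantics). */
    static String sha2(byte[] input, Integer bitLen) throws NoSuchAlgorithmException {
        if (input == null || bitLen == null) {
            return null;  // NULL in -> NULL out, as SQL expects
        }
        // Resolves to standard JDK algorithm names such as "SHA-256".
        MessageDigest md = MessageDigest.getInstance("SHA-" + bitLen);
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(input)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha2(null, null));             // null
        System.out.println(sha2("abc".getBytes(), 256));  // SHA-256 digest of "abc"
    }
}
```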

[jira] [Assigned] (FLINK-23579) SELECT SHA2(CAST(NULL AS VARBINARY), CAST(NULL AS INT)) AS ref0 can't compile

2021-08-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-23579:


Assignee: Caizhi Weng

> SELECT SHA2(CAST(NULL AS VARBINARY), CAST(NULL AS INT)) AS ref0 can't compile
> -
>
> Key: FLINK-23579
> URL: https://issues.apache.org/jira/browse/FLINK-23579
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Running the sql of SELECT SHA2(CAST(NULL AS VARBINARY), CAST(NULL AS INT)) AS 
> ref0 will get the error below:
> {code}
> java.lang.RuntimeException: Could not instantiate generated class 
> 'ExpressionReducer$3'
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:75)
>   at 
> org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:108)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:306)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702)
>   at 
> 

[GitHub] [flink] Myasuka commented on a change in pull request #16371: [FLINK-23198][Documentation] Fix the demo of ConfigurableRocksDBOptionsFactory in Documentation.

2021-08-08 Thread GitBox


Myasuka commented on a change in pull request #16371:
URL: https://github.com/apache/flink/pull/16371#discussion_r684880764



##
File path: docs/content.zh/docs/ops/state/state_backends.md
##
@@ -233,7 +233,7 @@ Flink还提供了两个参数来控制*写路径*(MemTable)和*读路径*(
 
   - `state.backend.rocksdb.memory.write-buffer-ratio`,默认值 `0.5`,即 50% 
的给定内存会分配给写缓冲区使用。
   - `state.backend.rocksdb.memory.high-prio-pool-ratio`,默认值 `0.1`,即 10% 的 
block cache 内存会优先分配给索引及过滤器。
-  
我们强烈建议不要将此值设置为零,以防止索引和过滤器被频繁踢出缓存而导致性能问题。此外,我们默认将L0级的过滤器和索引将被固定到缓存中以提高性能,更多详细信息请参阅
 [RocksDB 
文档](https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-filter-and-compression-dictionary-blocks)。
+
我们强烈建议不要将此值设置为零,以防止索引和过滤器被频繁踢出缓存而导致性能问题。此外,我们默认将L0级的过滤器和索引将被固定到缓存中以提高性能,更多详细信息请参阅
 [RocksDB 
文档](https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-filter-and-compression-dictionary-blocks)。

Review comment:
   Just update your branch which is used in this PR. You can find it in the 
top line: 
   >  yanchenyun wants to merge 1 commit into apache:master from 
yanchenyun:master
   








[GitHub] [flink] JingsongLi merged pull request #16718: [FLINK-23579][table-runtime] Fix compile exception in hash functions with varbinary arguments

2021-08-08 Thread GitBox


JingsongLi merged pull request #16718:
URL: https://github.com/apache/flink/pull/16718


   






[jira] [Commented] (FLINK-23678) KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395711#comment-17395711
 ] 

Xintong Song commented on FLINK-23678:
--

[~fabian.paul], could you help take a look into this?

I'm making this a blocker for now, since the exactly-once consistency seems to 
be broken. Please downgrade if that is not the case.

> KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure
> --
>
> Key: FLINK-23678
> URL: https://issues.apache.org/jira/browse/FLINK-23678
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21711=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=6946
> {code}
> Aug 07 00:12:18 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 67.431 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase
> Aug 07 00:12:18 [ERROR] 
> testWriteRecordsToKafkaWithExactlyOnceGuarantee(org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase)
>   Time elapsed: 7.001 s  <<< FAILURE!
> Aug 07 00:12:18 java.lang.AssertionError: expected:<407799> but was:<407798>
> Aug 07 00:12:18   at org.junit.Assert.fail(Assert.java:89)
> Aug 07 00:12:18   at org.junit.Assert.failNotEquals(Assert.java:835)
> Aug 07 00:12:18   at org.junit.Assert.assertEquals(Assert.java:647)
> Aug 07 00:12:18   at org.junit.Assert.assertEquals(Assert.java:633)
> Aug 07 00:12:18   at 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase.writeRecordsToKafka(KafkaSinkITCase.java:334)
> Aug 07 00:12:18   at 
> org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee(KafkaSinkITCase.java:173)
> Aug 07 00:12:18   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 07 00:12:18   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 07 00:12:18   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 07 00:12:18   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 07 00:12:18   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 07 00:12:18   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Aug 07 00:12:18   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Aug 07 00:12:18   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 07 00:12:18   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 07 00:12:18   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Aug 07 00:12:18   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Aug 07 00:12:18   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Aug 07 00:12:18   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Aug 07 00:12:18   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 07 00:12:18   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 07 00:12:18   at 
> 

[GitHub] [flink] tsreaper commented on a change in pull request #16699: [FLINK-23420][table-runtime] LinkedListSerializer now checks for null elements in the list

2021-08-08 Thread GitBox


tsreaper commented on a change in pull request #16699:
URL: https://github.com/apache/flink/pull/16699#discussion_r684878442



##
File path: 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/typeutils/LinkedListSerializer.java
##
@@ -110,17 +124,40 @@ public int getLength() {
 @Override
 public void serialize(LinkedList list, DataOutputView target) throws 
IOException {
 target.writeInt(list.size());
+if (hasNullMask) {
+MaskUtils.writeMask(getNullMask(list), target);
+}
 for (T element : list) {
-elementSerializer.serialize(element, target);
+if (element != null) {
+elementSerializer.serialize(element, target);
+}
 }
 }
 
+private boolean[] getNullMask(LinkedList list) {
+boolean[] mask = new boolean[list.size()];

Review comment:
   I suppose by "reusing" you mean not creating a new boolean array on every 
call, and only growing it when the length of the list increases.
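
   The reuse being discussed can be sketched outside of Flink. The following is 
a minimal, self-contained illustration, not Flink's actual 
`LinkedListSerializer`: it uses plain `java.io` streams in place of 
`DataOutputView`, writes the mask naively at one byte per flag (Flink's 
`MaskUtils` packs bits), and all class and method names are made up for the 
example:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.LinkedList;

public class NullMaskSketch {
    // Reused buffer, grown only when a longer list arrives; only the
    // first list.size() entries are meaningful on each call.
    private boolean[] reusableMask = new boolean[0];

    public byte[] serialize(LinkedList<String> list) throws IOException {
        if (reusableMask.length < list.size()) {
            reusableMask = new boolean[list.size()];
        }
        int i = 0;
        for (String element : list) {
            reusableMask[i++] = (element == null);
        }
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream target = new DataOutputStream(bytes);
        target.writeInt(list.size());
        for (int j = 0; j < list.size(); j++) {
            target.writeBoolean(reusableMask[j]); // naive mask: 1 byte per flag
        }
        for (String element : list) {
            if (element != null) {
                target.writeUTF(element); // only non-null elements are written
            }
        }
        return bytes.toByteArray();
    }

    public LinkedList<String> deserialize(byte[] data) throws IOException {
        DataInputStream source = new DataInputStream(new ByteArrayInputStream(data));
        int size = source.readInt();
        boolean[] mask = new boolean[size];
        for (int j = 0; j < size; j++) {
            mask[j] = source.readBoolean();
        }
        LinkedList<String> list = new LinkedList<>();
        for (int j = 0; j < size; j++) {
            // null positions come from the mask; the rest from the stream
            list.add(mask[j] ? null : source.readUTF());
        }
        return list;
    }

    public static void main(String[] args) throws IOException {
        NullMaskSketch serializer = new NullMaskSketch();
        LinkedList<String> list = new LinkedList<>(Arrays.asList("a", null, "c"));
        System.out.println(serializer.deserialize(serializer.serialize(list)).equals(list)); // prints "true"
    }
}
```

   Growing the buffer only upward avoids an allocation per call, at the cost of 
holding on to the largest mask seen so far; that is the trade-off behind the 
review suggestion.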




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-23678) KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure

2021-08-08 Thread Xintong Song (Jira)
Xintong Song created FLINK-23678:


 Summary: 
KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure
 Key: FLINK-23678
 URL: https://issues.apache.org/jira/browse/FLINK-23678
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21711=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=6946

{code}
Aug 07 00:12:18 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 67.431 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase
Aug 07 00:12:18 [ERROR] 
testWriteRecordsToKafkaWithExactlyOnceGuarantee(org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase)
  Time elapsed: 7.001 s  <<< FAILURE!
Aug 07 00:12:18 java.lang.AssertionError: expected:<407799> but was:<407798>
Aug 07 00:12:18 at org.junit.Assert.fail(Assert.java:89)
Aug 07 00:12:18 at org.junit.Assert.failNotEquals(Assert.java:835)
Aug 07 00:12:18 at org.junit.Assert.assertEquals(Assert.java:647)
Aug 07 00:12:18 at org.junit.Assert.assertEquals(Assert.java:633)
Aug 07 00:12:18 at 
org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase.writeRecordsToKafka(KafkaSinkITCase.java:334)
Aug 07 00:12:18 at 
org.apache.flink.streaming.connectors.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee(KafkaSinkITCase.java:173)
Aug 07 00:12:18 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Aug 07 00:12:18 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Aug 07 00:12:18 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Aug 07 00:12:18 at java.lang.reflect.Method.invoke(Method.java:498)
Aug 07 00:12:18 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Aug 07 00:12:18 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Aug 07 00:12:18 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Aug 07 00:12:18 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Aug 07 00:12:18 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Aug 07 00:12:18 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Aug 07 00:12:18 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Aug 07 00:12:18 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Aug 07 00:12:18 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Aug 07 00:12:18 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 07 00:12:18 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Aug 07 00:12:18 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Aug 07 00:12:18 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Aug 07 00:12:18 at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
Aug 07 00:12:18 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 07 00:12:18 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Aug 07 00:12:18 at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
Aug 07 00:12:18 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
Aug 07 00:12:18 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
Aug 07 00:12:18 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
Aug 07 00:12:18 at 

[jira] [Comment Edited] (FLINK-23675) flink1.13 cumulate function cannot be used together with some comparison functions

2021-08-08 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395708#comment-17395708
 ] 

Leonard Xu edited comment on FLINK-23675 at 8/9/21, 2:54 AM:
-

Please use English in issue, thanks. Feel free to reopen once you updated the 
issue.

 


was (Author: leonard xu):
Please use English in issue, thanks

> flink1.13 cumulate function cannot be used together with some comparison functions
> 
>
> Key: FLINK-23675
> URL: https://issues.apache.org/jira/browse/FLINK-23675
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation
>Affects Versions: 1.13.1
> Environment: flink 1.13.1
>Reporter: lihangfei
>Priority: Major
>
> select count(clicknum) as num 
> from table(
> cumulate(table KafkaSource, desctiptor(app_date),interval '1'minutes, 
> interval '10' minutes))
> where clicknum <>'-99'
> group by window_start,window_end
>  
> Error reported: the cumulate function is not supported; the not in function does not work either



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16556) TopSpeedWindowing should implement checkpointing for its source

2021-08-08 Thread Liebing Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395709#comment-17395709
 ] 

Liebing Yu commented on FLINK-16556:


I have a question: why are more changes needed to use a parallel source?

As for whether the Random instance needs to be serialized, I think it has no 
effect on the result of the example from a functional point of view. But from 
the perspective of the completeness of the example, I think it is necessary, 
because Random is a stateful field and we should persist its state in a 
stateful operator.
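
The alternative fix mentioned in the ticket, using `Math.abs`, can be sketched 
without any Flink API. This is a hypothetical illustration of the trigger 
condition only, with made-up names, not the actual `TopSpeedWindowing` code: 
after a restore the source's distance counter restarts at 0, so a trigger that 
waits for the distance to *grow* by 50 may stay silent for a long time, while 
an absolute-delta check fires again right away.

```java
public class DeltaTriggerSketch {
    static final double TRIGGER_DELTA = 50.0;

    // Fire when the distance moved at least TRIGGER_DELTA in either direction.
    static boolean shouldFire(double lastFiredDistance, double currentDistance) {
        return Math.abs(currentDistance - lastFiredDistance) >= TRIGGER_DELTA;
    }

    public static void main(String[] args) {
        // Before restore: 120 -> 180 fires (delta 60).
        System.out.println(shouldFire(120, 180)); // prints "true"
        // After restore the counter resets to 0; a monotonic check
        // (current - last >= 50) would see -120 and never fire,
        // while the absolute delta still triggers:
        System.out.println(shouldFire(120, 0));   // prints "true"
    }
}
```

Proper checkpointing of the speeds and distances would remove the problem at 
the source; the `Math.abs` variant merely makes the trigger tolerant of the 
reset.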

> TopSpeedWindowing should implement checkpointing for its source
> ---
>
> Key: FLINK-16556
> URL: https://issues.apache.org/jira/browse/FLINK-16556
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.10.0
>Reporter: Nico Kruber
>Assignee: Liebing Yu
>Priority: Minor
>  Labels: auto-deprioritized-major, starter
>
> {{org.apache.flink.streaming.examples.windowing.TopSpeedWindowing.CarSource}}
>  does not implement checkpointing of its state, namely the current speeds and 
> distances per car. The main problem with this is that the window trigger only 
> fires if the new distance has increased by at least 50 but after restore, it 
> will be reset to 0 and could thus not produce output for a while.
>  
> Either the distance calculation could use {{Math.abs}} or the source needs 
> proper checkpointing. Optionally with allowing the number of cars to 
> increase/decrease.





[jira] [Closed] (FLINK-23675) flink1.13 cumulate function cannot be used together with some comparison functions

2021-08-08 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu closed FLINK-23675.
--
Resolution: Invalid

Please use English in issue, thanks

> flink1.13 cumulate function cannot be used together with some comparison functions
> 
>
> Key: FLINK-23675
> URL: https://issues.apache.org/jira/browse/FLINK-23675
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation
>Affects Versions: 1.13.1
> Environment: flink 1.13.1
>Reporter: lihangfei
>Priority: Major
>
> select count(clicknum) as num 
> from table(
> cumulate(table KafkaSource, desctiptor(app_date),interval '1'minutes, 
> interval '10' minutes))
> where clicknum <>'-99'
> group by window_start,window_end
>  
> Error reported: the cumulate function is not supported; the not in function does not work either





[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395707#comment-17395707
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21711=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=23195

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829





[jira] [Commented] (FLINK-23556) SQLClientSchemaRegistryITCase fails with " Subject ... not found"

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395706#comment-17395706
 ] 

Xintong Song commented on FLINK-23556:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21711=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=cc5499f8-bdde-5157-0d76-b6528ecd808e=25258

> SQLClientSchemaRegistryITCase fails with " Subject ... not found"
> -
>
> Key: FLINK-23556
> URL: https://issues.apache.org/jira/browse/FLINK-23556
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: stale-blocker, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21129=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=cc5499f8-bdde-5157-0d76-b6528ecd808e=25337
> {code}
> Jul 28 23:37:48 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 209.44 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Jul 28 23:37:48 [ERROR] 
> testWriting(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase)  
> Time elapsed: 81.146 s  <<< ERROR!
> Jul 28 23:37:48 
> io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: 
> Subject 'test-user-behavior-d18d4af2-3830-4620-9993-340c13f50cc2-value' not 
> found.; error code: 40401
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292)
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352)
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:769)
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:760)
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getAllVersions(CachedSchemaRegistryClient.java:364)
> Jul 28 23:37:48   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.getAllVersions(SQLClientSchemaRegistryITCase.java:230)
> Jul 28 23:37:48   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testWriting(SQLClientSchemaRegistryITCase.java:195)
> Jul 28 23:37:48   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 28 23:37:48   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 28 23:37:48   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 28 23:37:48   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 28 23:37:48   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 28 23:37:48   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 28 23:37:48   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 28 23:37:48   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 28 23:37:48   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 28 23:37:48   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 28 23:37:48   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jul 28 23:37:48   at java.lang.Thread.run(Thread.java:748)
> Jul 28 23:37:48 
> {code}





[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395705#comment-17395705
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21713=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=21910

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829





[GitHub] [flink] tsreaper commented on a change in pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…

2021-08-08 Thread GitBox


tsreaper commented on a change in pull request #16740:
URL: https://github.com/apache/flink/pull/16740#discussion_r684875945



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/sql/FlinkSqlOperatorTable.java
##
@@ -224,6 +224,15 @@ public void lookupOperatorOverloads(
 OperandTypes.or(OperandTypes.NUMERIC_INTEGER, 
OperandTypes.NUMERIC),
 SqlFunctionCategory.NUMERIC);
 
+public static final SqlFunction TRUNCATE =
+new SqlFunction(
+"TRUNCATE",
+SqlKind.OTHER_FUNCTION,
+FlinkReturnTypes.ROUND_FUNCTION_NULLABLE,

Review comment:
   If you look into this method you'll find that it is calling 
`LogicalTypeMerging#findRoundDecimalType`. In that method there is a line 
stating that
   ```java
   // NOTE: rounding may increase the digits by 1, therefore we need +1 on 
precisions.
   return new DecimalType(false, 1 + precision - scale + round, round);
   ```
   
   However, for the `truncate` function the number of digits will not increase, 
so `FlinkReturnTypes.ROUND_FUNCTION_NULLABLE` is not the best choice to use here.
   
   What I'll suggest is that you create a new method in `LogicalTypeMerging` 
called `findTruncateDecimalType`. This method and `findRoundDecimalType` can 
together reuse some code. Then you might want to create 
`FlinkReturnTypes.TRUNCATE_FUNCTION_NULLABLE` which will also reuse a lot of 
code with `ROUND_FUNCTION_NULLABLE`.
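
   The precision arithmetic at stake can be sketched in isolation. This mimics 
a hypothetical `findTruncateDecimalType` next to the quoted 
`findRoundDecimalType` line; it ignores edge cases (e.g. negative `round`, 
`round >= scale`) and uses a plain `int[] {precision, scale}` pair instead of 
Flink's `DecimalType`, so it only shows the arithmetic difference, not Flink's 
final behavior:

```java
public class TruncateTypeSketch {
    // Rounding can carry a digit (e.g. 9.99 -> 10.0), hence the +1
    // (mirrors the quoted line from findRoundDecimalType).
    static int[] findRoundDecimalType(int precision, int scale, int round) {
        return new int[] {1 + precision - scale + round, round};
    }

    // Truncation never adds digits, so the +1 is dropped.
    static int[] findTruncateDecimalType(int precision, int scale, int round) {
        return new int[] {precision - scale + round, round};
    }

    public static void main(String[] args) {
        // ROUND(DECIMAL(6,3), 2)    -> [6, 2] under this sketch
        // TRUNCATE(DECIMAL(6,3), 2) -> [5, 2] under this sketch
        System.out.println(java.util.Arrays.toString(findRoundDecimalType(6, 3, 2)));
        System.out.println(java.util.Arrays.toString(findTruncateDecimalType(6, 3, 2)));
    }
}
```

   Whether the result type should actually lose that extra digit of precision 
is exactly the question raised here; the sketch just makes the +1 difference 
concrete.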

##
File path: 
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/MathFunctionsITCase.java
##
@@ -115,6 +115,14 @@
 $("f0").round(2),
 "ROUND(f0, 2)",
 new BigDecimal("12345.12"),
-DataTypes.DECIMAL(8, 2).notNull()));
+DataTypes.DECIMAL(8, 2).notNull()),
+TestSpec.forFunction(BuiltInFunctionDefinitions.TRUNCATE)
+.onFieldsWithData(new BigDecimal("123.456"))
+// TRUNCATE(DECIMAL(6, 3) NOT NULL, 2) => DECIMAL(6, 
2) NOT NULL
+.testResult(
+$("f0").truncate(2),
+"TRUNCATE(f0, 2)",
+new BigDecimal("123.45"),
+DataTypes.DECIMAL(6, 2).notNull()));

Review comment:
   This `MathFunctionsITCase`, as stated in the Javadocs, is for 
`BuiltInFunctionDefinitions`. If you follow the usage of 
`BuiltInFunctionDefinitions` you'll see that it is converted to 
`FlinkSqlOperatorTable.TRUNCATE`. For Flink SQL scalar functions we always add 
tests in `ScalarFunctionsTest`. Please add your tests there.
   
   A thorough test for a function should include all data types it supports as 
well as their corresponding null values. The tests in 
`ScalarFunctionsTest#testTruncate` may not be complete, so please complete them 
with all supported data types. See `FlinkSqlOperatorTable.TRUNCATE` to get all 
its supported types.








[jira] [Created] (FLINK-23677) KafkaTableITCase.testKafkaSourceSinkWithKeyAndPartialValue fails due to timeout

2021-08-08 Thread Xintong Song (Jira)
Xintong Song created FLINK-23677:


 Summary: 
KafkaTableITCase.testKafkaSourceSinkWithKeyAndPartialValue fails due to timeout
 Key: FLINK-23677
 URL: https://issues.apache.org/jira/browse/FLINK-23677
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.12.5
Reporter: Xintong Song
 Fix For: 1.12.6


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21712=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=c1d93a6a-ba91-515d-3196-2ee8019fbda7=6185

{code}
[INFO] Running 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase
java.util.concurrent.TimeoutException: The topic metadata failed to propagate 
to Kafka broker.
at 
org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
at 
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:209)
at 
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:111)
at 
org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testKafkaSourceSinkWithKeyAndPartialValue(KafkaTableITCase.java:430)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
[ERROR] Tests run: 48, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 87.046 
s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase
[ERROR] 

[GitHub] [flink] flinkbot edited a comment on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16740:
URL: https://github.com/apache/flink/pull/16740#issuecomment-894176423


   
   ## CI report:
   
   * aceb2a25e00797b20ed5e3356f168ea12df0a8de Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21687)
 
   * c084abda71df7c5e1b0118dcbbd8761f89691526 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21745)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395704#comment-17395704
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21712=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=20845

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829





[GitHub] [flink] JingsongLi commented on a change in pull request #16699: [FLINK-23420][table-runtime] LinkedListSerializer now checks for null elements in the list

2021-08-08 Thread GitBox


JingsongLi commented on a change in pull request #16699:
URL: https://github.com/apache/flink/pull/16699#discussion_r684875870



##
File path: 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/typeutils/LinkedListSerializer.java
##
@@ -110,17 +124,40 @@ public int getLength() {
 @Override
 public void serialize(LinkedList list, DataOutputView target) throws 
IOException {
 target.writeInt(list.size());
+if (hasNullMask) {
+MaskUtils.writeMask(getNullMask(list), target);
+}
 for (T element : list) {
-elementSerializer.serialize(element, target);
+if (element != null) {
+elementSerializer.serialize(element, target);
+}
 }
 }
 
+private boolean[] getNullMask(LinkedList list) {
+boolean[] mask = new boolean[list.size()];

Review comment:
   I think we should reuse this mask.








[jira] [Created] (FLINK-23676) ReduceOnNeighborMethodsITCase.testSumOfOutNeighborsNoValue fails due to AskTimeoutException

2021-08-08 Thread Xintong Song (Jira)
Xintong Song created FLINK-23676:


 Summary: 
ReduceOnNeighborMethodsITCase.testSumOfOutNeighborsNoValue fails due to 
AskTimeoutException
 Key: FLINK-23676
 URL: https://issues.apache.org/jira/browse/FLINK-23676
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.13.2
Reporter: Xintong Song
 Fix For: 1.13.3


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21696=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=7075

{code}
Aug 06 13:54:59 [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 18.547 s <<< FAILURE! - in 
org.apache.flink.graph.scala.test.operations.ReduceOnNeighborMethodsITCase
Aug 06 13:54:59 [ERROR] testSumOfOutNeighborsNoValue[Execution mode = 
CLUSTER](org.apache.flink.graph.scala.test.operations.ReduceOnNeighborMethodsITCase)
  Time elapsed: 11.25 s  <<< ERROR!
Aug 06 13:54:59 org.apache.flink.runtime.client.JobExecutionException: Job 
execution failed.
Aug 06 13:54:59 at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
Aug 06 13:54:59 at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
Aug 06 13:54:59 at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
Aug 06 13:54:59 at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
Aug 06 13:54:59 at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
Aug 06 13:54:59 at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
Aug 06 13:54:59 at 
org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
Aug 06 13:54:59 at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
Aug 06 13:54:59 at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
Aug 06 13:54:59 at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
Aug 06 13:54:59 at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
Aug 06 13:54:59 at 
org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081)
Aug 06 13:54:59 at akka.dispatch.OnComplete.internal(Future.scala:264)
Aug 06 13:54:59 at akka.dispatch.OnComplete.internal(Future.scala:261)
Aug 06 13:54:59 at 
akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
Aug 06 13:54:59 at 
akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
Aug 06 13:54:59 at 
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
Aug 06 13:54:59 at 
org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
Aug 06 13:54:59 at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
Aug 06 13:54:59 at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
Aug 06 13:54:59 at 
akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
Aug 06 13:54:59 at 
akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
Aug 06 13:54:59 at 
akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
Aug 06 13:54:59 at 
scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
Aug 06 13:54:59 at 
scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
Aug 06 13:54:59 at 
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
Aug 06 13:54:59 at 
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
Aug 06 13:54:59 at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
Aug 06 13:54:59 at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
Aug 06 13:54:59 at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
Aug 06 13:54:59 at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
Aug 06 13:54:59 at 
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
Aug 06 13:54:59 at 
akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
Aug 06 13:54:59 at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
Aug 06 13:54:59 at 
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
Aug 06 13:54:59 at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
Aug 06 

[GitHub] [flink] wuchong commented on pull request #16295: [FLINK-11622][chinese-translation,Documentation]Translate the "Command-Line Interface" page into Chinese

2021-08-08 Thread GitBox


wuchong commented on pull request #16295:
URL: https://github.com/apache/flink/pull/16295#issuecomment-894916866


   @tsreaper are you interested in reviewing this?






[jira] [Closed] (FLINK-12438) Translate "Task Lifecycle" page into Chinese

2021-08-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-12438.
---
Fix Version/s: 1.14.0
   Resolution: Fixed

Fixed in master: 3efd5d167a25da2b5d85a9e855acafe3449cea7b

> Translate "Task Lifecycle" page into Chinese
> 
>
> Key: FLINK-12438
> URL: https://issues.apache.org/jira/browse/FLINK-12438
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Armstrong Nova
>Assignee: wuguihu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Translate the internal page 
> "[https://ci.apache.org/projects/flink/flink-docs-master/internals/task_lifecycle.html]"
>  into Chinese.
>  
> The doc located in "flink/docs/internals/task_lifecycle.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #16304: [FLINK-12438][doc-zh]Translate Task Lifecycle document into Chinese

2021-08-08 Thread GitBox


wuchong merged pull request #16304:
URL: https://github.com/apache/flink/pull/16304


   






[GitHub] [flink] lirui-apache commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-08 Thread GitBox


lirui-apache commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-894916366


   Thanks @cuibo01 for working on this. However, I'm not sure whether we need a 
table-level property to set the owner. Hive code uses the current UGI (either 
simple or Kerberized) when we open a `HiveCatalog` instance to connect to HMS. 
Shouldn't we also just use that as the table owner? Or in what situation would 
a user want to connect to HMS using one identity and create tables with another?
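
The identity-based approach suggested here can be sketched in a few lines. This is only an illustration, not HiveCatalog code: the `OWNER_KEY` name and `resolveOwner` helper are hypothetical, and `System.getProperty("user.name")` stands in for Hadoop's `UserGroupInformation.getCurrentUser()`. The idea: prefer an explicitly configured owner if one is present, otherwise fall back to the identity the catalog connected with.

```java
import java.util.HashMap;
import java.util.Map;

class OwnerSketch {
    // Hypothetical option key; named here only for illustration.
    static final String OWNER_KEY = "owner";

    // Prefer an explicitly configured owner; otherwise fall back to the
    // current process identity (standing in for Hadoop's
    // UserGroupInformation.getCurrentUser(), which a real HiveCatalog
    // would consult).
    static String resolveOwner(Map<String, String> tableOptions) {
        String configured = tableOptions.get(OWNER_KEY);
        return configured != null ? configured : System.getProperty("user.name");
    }

    public static void main(String[] args) {
        Map<String, String> opts = new HashMap<>();
        System.out.println("default owner = " + resolveOwner(opts));
        opts.put(OWNER_KEY, "alice");
        System.out.println("explicit owner = " + resolveOwner(opts));
    }
}
```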






[jira] [Commented] (FLINK-13505) Translate "Java Lambda Expressions" page into Chinese

2021-08-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395700#comment-17395700
 ] 

Jark Wu commented on FLINK-13505:
-

This page might be reworked. Feel free to open a new JIRA issue and pull 
request. [~hapihu]

> Translate "Java Lambda Expressions" page into Chinese
> -
>
> Key: FLINK-13505
> URL: https://issues.apache.org/jira/browse/FLINK-13505
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/java_lambdas.html].
> The markdown file is located in " flink/docs/dev/java_lambdas.zh.md"





[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-08-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395698#comment-17395698
 ] 

Xintong Song commented on FLINK-20329:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21688=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=12276

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  

[jira] [Commented] (FLINK-23614) The resulting scale of TRUNCATE(DECIMAL, ...) is not correct

2021-08-08 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395699#comment-17395699
 ] 

Caizhi Weng commented on FLINK-23614:
-

[~paul8263] I believe this class is copied from Calcite. However, as it's not 
intended to replace the original class in Calcite, I think removing unused 
methods is reasonable.

To check whether a method is really unused, it is not enough to see if it is 
called in the Flink code base. Some code generation may use these methods as 
string literals, and we should really check that thoroughly. I think this is a 
good point but it may take some effort. What do you think, [~lzljs3620320]?
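
The string-literal caveat can be made concrete: besides searching for call sites, one can scan string literals for the method name, since generated code may splice the name in as a string. A minimal, hypothetical sketch (the `usedAsStringLiteral` helper and the two sample snippets are illustrative only):

```java
import java.util.List;
import java.util.regex.Pattern;

class UsageCheckSketch {
    // Naive check: does the method name occur inside a double-quoted string
    // literal of the given source text? Call-site search alone misses this,
    // because generated code may splice the name in as a string.
    static boolean usedAsStringLiteral(String source, String methodName) {
        Pattern inLiteral =
                Pattern.compile("\"[^\"]*\\b" + Pattern.quote(methodName) + "\\b[^\"]*\"");
        return inLiteral.matcher(source).find();
    }

    public static void main(String[] args) {
        List<String> samples = List.of(
                "String call = \"SqlFunctionUtils.struncate(\" + operand + \")\";",
                "int x = truncate(y);");
        System.out.println(usedAsStringLiteral(samples.get(0), "struncate")); // true
        System.out.println(usedAsStringLiteral(samples.get(1), "truncate"));  // false
    }
}
```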

> The resulting scale of TRUNCATE(DECIMAL, ...) is not correct
> 
>
> Key: FLINK-23614
> URL: https://issues.apache.org/jira/browse/FLINK-23614
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: Caizhi Weng
>Assignee: Yao Zhang
>Priority: Major
>  Labels: pull-request-available, starter
>
> Run the following SQL
> {code:sql}
> SELECT
>   TRUNCATE(123.456, 2),
>   TRUNCATE(123.456, 0),
>   TRUNCATE(123.456, -2),
>   TRUNCATE(CAST(123.456 AS DOUBLE), 2),
>   TRUNCATE(CAST(123.456 AS DOUBLE), 0),
>   TRUNCATE(CAST(123.456 AS DOUBLE), -2)
> {code}
> The result is
> {code}
> 123.450
> 123.000
> 100.000
> 123.45
> 123.0
> 100.0
> {code}
> It seems that the resulting scale of {{TRUNCATE(DECIMAL, ...)}} is the same 
> as that of the input decimal.





[jira] [Assigned] (FLINK-12403) Translate "How to use logging" page into Chinese

2021-08-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-12403:
---

Assignee: wuguihu

> Translate "How to use logging" page into Chinese
> 
>
> Key: FLINK-12403
> URL: https://issues.apache.org/jira/browse/FLINK-12403
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Armstrong Nova
>Assignee: wuguihu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate 
> "[https://ci.apache.org/projects/flink/flink-docs-master/monitoring/logging.html]"
>  page into Chinese.
> The doc located in "flink/docs/monitoring/logging.md"





[GitHub] [flink] wuchong commented on pull request #16313: [FLINK-12403][doc-zh]Translate 'How to use logging' page into Chinese

2021-08-08 Thread GitBox


wuchong commented on pull request #16313:
URL: https://github.com/apache/flink/pull/16313#issuecomment-894916002


   cc @RocMarshal , could you help to review this? 






[jira] [Closed] (FLINK-23673) [DOCS-zh]In many Chinese pages, the prompt or warning text block does not take effect

2021-08-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-23673.
---
Fix Version/s: 1.14.0
 Assignee: wuguihu
   Resolution: Fixed

Fixed in master: 019111dc09303fd2398789015dcb1678d15f9463

> [DOCS-zh]In many Chinese pages, the prompt or warning text block does not 
> take effect
> -
>
> Key: FLINK-23673
> URL: https://issues.apache.org/jira/browse/FLINK-23673
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: image-20210808005849891.png, image-20210808023456047.png
>
>
> In many Chinese pages, the prompt or warning text block does not take effect.
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/libs/cep/]
>  
>  
>  
> The '\{% info 提示 %}' should be modified as follows:
> {code:java}
> {{< hint info >}}
> {{< /hint >}}
> {code}
>  
> The '\{% warn 注意 %}' should be modified as follows:
> {code:java}
> {{< hint warning >}}
> {{< /hint >}}{code}
>  





[jira] [Updated] (FLINK-23673) In many Chinese pages, the prompt or warning text block does not take effect

2021-08-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-23673:

Summary: In many Chinese pages, the prompt or warning text block does not 
take effect  (was: [DOCS-zh]In many Chinese pages, the prompt or warning text 
block does not take effect)

> In many Chinese pages, the prompt or warning text block does not take effect
> 
>
> Key: FLINK-23673
> URL: https://issues.apache.org/jira/browse/FLINK-23673
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: image-20210808005849891.png, image-20210808023456047.png
>
>
> In many Chinese pages, the prompt or warning text block does not take effect.
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/libs/cep/]
>  
>  
>  
> The '\{% info 提示 %}' should be modified as follows:
> {code:java}
> {{< hint info >}}
> {{< /hint >}}
> {code}
>  
> The '\{% warn 注意 %}' should be modified as follows:
> {code:java}
> {{< hint warning >}}
> {{< /hint >}}{code}
>  





[GitHub] [flink] wuchong merged pull request #16748: [FLINK-23673][docs-zh] Fix the prompt and warning text block for Chin…

2021-08-08 Thread GitBox


wuchong merged pull request #16748:
URL: https://github.com/apache/flink/pull/16748


   






[GitHub] [flink] flinkbot edited a comment on pull request #16368: ignore

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16368:
URL: https://github.com/apache/flink/pull/16368#issuecomment-873854539


   
   ## CI report:
   
   * d79db15ab05c86dcd82fe88dc7247fd413f5c0bf UNKNOWN
   * adf2f29416eaf6f819cbb2a7944ecec4c1be0eb2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21698)
 
   * a43db750e998ee75179abbd1fe24ff45842be6f4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #15304: [FLINK-20731][connector] Introduce Pulsar Source

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #15304:
URL: https://github.com/apache/flink/pull/15304#issuecomment-803430086


   
   ## CI report:
   
   * 04da1862ec1f107c74017283c916229b560d9731 UNKNOWN
   * 4dcee9f07135401160aae3f0d01bd480630f808f UNKNOWN
   * d326abc93378bb3c4b53616e7717c409d9876ade UNKNOWN
   * 60ce908829e752fb48d59eec10866c9c60752638 UNKNOWN
   * a3ef28b4e2a0331c27222064e120905ce0d423fa UNKNOWN
   * af4a703811a23c31420b7122d1eb8d72924a2f90 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21741)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-23190) Make task-slot allocation much more evenly

2021-08-08 Thread loyi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395691#comment-17395691
 ] 

loyi commented on FLINK-23190:
--

[~trohrmann] Hi, have you reached a conclusion about this?

> Make task-slot allocation much more evenly
> --
>
> Key: FLINK-23190
> URL: https://issues.apache.org/jira/browse/FLINK-23190
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.3
>Reporter: loyi
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-07-16-10-34-30-700.png
>
>
> FLINK-12122 only guarantees spreading out tasks across the set of TMs which 
> are registered at the time of scheduling, but our jobs all run in active 
> YARN mode, and jobs with smaller source parallelism often cause 
> load-balancing issues. 
>  
> For this job:
> {code:java}
> //  -ys 4 means 10 taskmanagers
> env.addSource(...).name("A").setParallelism(10).
>  map(...).name("B").setParallelism(30)
>  .map(...).name("C").setParallelism(40)
>  .addSink(...).name("D").setParallelism(20);
> {code}
>  
>  Flink-1.12.3 task allocation: 
> ||operator||tm1 ||tm2||tm3||tm4||tm5||5m6||tm7||tm8||tm9||tm10||
> |A| 
> 1|{color:#de350b}2{color}|{color:#de350b}2{color}|1|1|{color:#de350b}3{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|
> |B|3|3|3|3|3|3|3|3|{color:#de350b}2{color}|{color:#de350b}4{color}|
> |C|4|4|4|4|4|4|4|4|4|4|
> |D|2|2|2|2|2|{color:#de350b}1{color}|{color:#de350b}1{color}|2|2|{color:#de350b}4{color}|
>  
> Suggestions:
> When a TaskManager starts registering slots with the SlotManager, the current 
> processing logic chooses the first pendingSlot that meets its resource 
> requirements. This "random" strategy usually causes uneven task allocation 
> when a source operator's parallelism is significantly below the processing 
> operators'. A simple, feasible idea is to {color:#de350b}partition{color} the 
> current "{color:#de350b}pendingSlots{color}" by their "JobVertexIds" (e.g. let 
> the AllocationID carry the detail), then allocate the slots proportionally to 
> each JobVertexGroup.
>  
> For above case, the 40 pendingSlots could be divided into 4 groups:
> [ABCD]: 10        // A、B、C、D each represent a {color:#de350b}jobVertexId{color}
> [BCD]: 10
> [CD]: 10
> [D]: 10
>  
> Every TaskManager will provide 4 slots at a time, and each group will get 1 
> slot according to its proportion (1/4); the final allocation result is below:
> [ABCD]: deployed on 10 different TaskManagers
> [BCD]: deployed on 10 different TaskManagers
> [CD]: deployed on 10 different TaskManagers
> [D]: deployed on 10 different TaskManagers
>  
> I have implemented a [concept 
> code|https://github.com/saddays/flink/commit/dc82e60a7c7599fbcb58c14f8e3445bc8d07ace1]
>  based on Flink-1.12.3; the patched version has {color:#de350b}fully 
> even{color} task allocation and works well on my workload. Are there other 
> points that have not been considered, or does it conflict with future plans? 
> Sorry for my poor English.
>  
>  
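
The proportional-allocation idea described in the issue can be simulated in a few lines. This is a sketch using the issue's own example numbers, not Flink code: the group tags ("ABCD" etc.) are an assumption of the sketch, since real pending slots carry no such labels.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class SlotSpreadSketch {
    public static void main(String[] args) {
        // Pending slots, keyed by the job-vertex group they would host
        // (the "ABCD"/"BCD"/"CD"/"D" groups from the example above).
        Map<String, Integer> pendingByGroup = new LinkedHashMap<>();
        pendingByGroup.put("ABCD", 10);
        pendingByGroup.put("BCD", 10);
        pendingByGroup.put("CD", 10);
        pendingByGroup.put("D", 10);

        int tmCount = 10;    // -ys 4 with 40 slots => 10 TaskManagers
        int slotsPerTm = 4;

        int total = pendingByGroup.values().stream().mapToInt(Integer::intValue).sum();

        // When one TaskManager offers its 4 slots, give each group a share
        // proportional to its fraction of the pending slots (10/40 -> 1 each).
        Map<String, Integer> perTm = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : pendingByGroup.entrySet()) {
            perTm.put(e.getKey(), slotsPerTm * e.getValue() / total);
        }
        System.out.println(perTm); // {ABCD=1, BCD=1, CD=1, D=1}

        // Repeating this for all 10 TaskManagers spreads every group evenly.
        perTm.forEach((group, n) ->
                System.out.println(group + " -> " + (n * tmCount) + " slots on " + tmCount + " TMs"));
    }
}
```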





[GitHub] [flink-web] JingsongLi merged pull request #455: Add release 1.12.5

2021-08-08 Thread GitBox


JingsongLi merged pull request #455:
URL: https://github.com/apache/flink-web/pull/455


   






[GitHub] [flink-web] JingsongLi commented on pull request #455: Add release 1.12.5

2021-08-08 Thread GitBox


JingsongLi commented on pull request #455:
URL: https://github.com/apache/flink-web/pull/455#issuecomment-894910825


   Thanks all for the review~ merging...






[jira] [Closed] (FLINK-22312) YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents due to the heartbeat exception with Yarn RM

2021-08-08 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-22312.

Resolution: Fixed

Fixed via
- master (1.14): 6e156bdbddc4cbdc0b79d469a7bfdb2e45071f12
- release-1.13: e73092c8597c3b446339f83e8f9232e504985b86
- release-1.12: a43db750e998ee75179abbd1fe24ff45842be6f4

> YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents
>  due to the heartbeat exception with Yarn RM
> 
>
> Key: FLINK-22312
> URL: https://issues.apache.org/jira/browse/FLINK-22312
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Guowei Ma
>Assignee: Xintong Song
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.14.0, 1.13.3, 1.12.5
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16633=logs=fc5181b0-e452-5c8f-68de-1097947f6483=6b04ca5f-0b52-511d-19c9-52bf0d9fbdfa=26614
> {code:java}
> 2021-04-15T22:11:39.5648550Z java.io.InterruptedIOException: Call interrupted
> 2021-04-15T22:11:39.5649145Z  at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483) 
> ~[hadoop-common-2.8.3.jar:?]
> 2021-04-15T22:11:39.5649823Z  at 
> org.apache.hadoop.ipc.Client.call(Client.java:1435) 
> ~[hadoop-common-2.8.3.jar:?]
> 2021-04-15T22:11:39.5650488Z  at 
> org.apache.hadoop.ipc.Client.call(Client.java:1345) 
> ~[hadoop-common-2.8.3.jar:?]
> 2021-04-15T22:11:39.5651387Z  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>  ~[hadoop-common-2.8.3.jar:?]
> 2021-04-15T22:11:39.5652193Z  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>  ~[hadoop-common-2.8.3.jar:?]
> 2021-04-15T22:11:39.5652675Z  at com.sun.proxy.$Proxy32.allocate(Unknown 
> Source) ~[?:?]
> 2021-04-15T22:11:39.5653478Z  at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>  ~[hadoop-yarn-common-2.8.3.jar:?]
> 2021-04-15T22:11:39.5654223Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_275]
> 2021-04-15T22:11:39.5654742Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_275]
> 2021-04-15T22:11:39.5655269Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_275]
> 2021-04-15T22:11:39.5655625Z ]
> 2021-04-15T22:11:39.5655853Z  at org.junit.Assert.fail(Assert.java:88)
> 2021-04-15T22:11:39.5656281Z  at 
> org.apache.flink.yarn.YarnTestBase.ensureNoProhibitedStringInLogFiles(YarnTestBase.java:576)
> 2021-04-15T22:11:39.5656831Z  at 
> org.apache.flink.yarn.YARNSessionFIFOITCase.checkForProhibitedLogContents(YARNSessionFIFOITCase.java:86)
> 2021-04-15T22:11:39.5657360Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-04-15T22:11:39.565Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-04-15T22:11:39.5658252Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-04-15T22:11:39.5658723Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-04-15T22:11:39.5659311Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-04-15T22:11:39.5659780Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-04-15T22:11:39.5660248Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-04-15T22:11:39.5660829Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> 2021-04-15T22:11:39.5661247Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-04-15T22:11:39.5661652Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-04-15T22:11:39.5662006Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-04-15T22:11:39.5662379Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-04-15T22:11:39.5662812Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-04-15T22:11:39.5663260Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-04-15T22:11:39.5663935Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-04-15T22:11:39.5664384Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-04-15T22:11:39.5664849Z  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #16740:
URL: https://github.com/apache/flink/pull/16740#issuecomment-894176423


   
   ## CI report:
   
   * aceb2a25e00797b20ed5e3356f168ea12df0a8de Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21687)
 
   * c084abda71df7c5e1b0118dcbbd8761f89691526 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] xintongsong closed pull request #16733: [FLINK-22312][yarn][test] Fix log whitelist for AMRMClient heartbeat interruption.

2021-08-08 Thread GitBox


xintongsong closed pull request #16733:
URL: https://github.com/apache/flink/pull/16733


   






[GitHub] [flink] flinkbot edited a comment on pull request #15304: [FLINK-20731][connector] Introduce Pulsar Source

2021-08-08 Thread GitBox


flinkbot edited a comment on pull request #15304:
URL: https://github.com/apache/flink/pull/15304#issuecomment-803430086


   
   ## CI report:
   
   * 04da1862ec1f107c74017283c916229b560d9731 UNKNOWN
   * 4dcee9f07135401160aae3f0d01bd480630f808f UNKNOWN
   * d326abc93378bb3c4b53616e7717c409d9876ade UNKNOWN
   * 60ce908829e752fb48d59eec10866c9c60752638 UNKNOWN
   * a3ef28b4e2a0331c27222064e120905ce0d423fa UNKNOWN
   * 3105fda50763b9bc7840629c58462528a63ca014 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21635)
 
   * af4a703811a23c31420b7122d1eb8d72924a2f90 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] yangjunhan commented on pull request #16603: [FLINK-23111][runtime-web] Bump angular's and ng-zorro's version to 12

2021-08-08 Thread GitBox


yangjunhan commented on pull request #16603:
URL: https://github.com/apache/flink/pull/16603#issuecomment-894907539


   Hi, @Airblader @twalthr. How is the review going? Could you kindly 
confirm whether I need to update the `NOTICE` file as well?






[jira] [Created] (FLINK-23675) flink 1.13: the cumulate function cannot be used together with some comparison functions

2021-08-08 Thread lihangfei (Jira)
lihangfei created FLINK-23675:
-

 Summary: flink 1.13: the cumulate function cannot be used together with some comparison functions
 Key: FLINK-23675
 URL: https://issues.apache.org/jira/browse/FLINK-23675
 Project: Flink
  Issue Type: Bug
  Components: chinese-translation
Affects Versions: 1.13.1
 Environment: flink 1.13.1
Reporter: lihangfei


select count(clicknum) as num
from table(
  cumulate(table KafkaSource, descriptor(app_date), interval '1' minutes, interval '10' minutes))
where clicknum <> '-99'
group by window_start, window_end

 

The error reports that the cumulate function is not supported; the not in function does not work either.





[GitHub] [flink] syhily commented on pull request #15304: [FLINK-20731][connector] Introduce Pulsar Source

2021-08-08 Thread GitBox


syhily commented on pull request #15304:
URL: https://github.com/apache/flink/pull/15304#issuecomment-894902769


   @AHeise I have redesigned the `PulsarDeserializationSchema` following your 
review advice. There is still some work to be done. I'll push the 
remaining code tonight, China time (20:00 CST).






[GitHub] [flink] yanchenyun commented on a change in pull request #16371: [FLINK-23198][Documentation] Fix the demo of ConfigurableRocksDBOptionsFactory in Documentation.

2021-08-08 Thread GitBox


yanchenyun commented on a change in pull request #16371:
URL: https://github.com/apache/flink/pull/16371#discussion_r684860617



##
File path: docs/content.zh/docs/ops/state/state_backends.md
##
@@ -283,7 +283,7 @@ Flink还提供了两个参数来控制*写路径*(MemTable)和*读路径*(
   - 通过 `state.backend.rocksdb.options-factory` 选项将工厂实现类的名称设置到`flink-conf.yaml` 
。
   
   - 通过程序设置,例如 `RocksDBStateBackend.setRocksDBOptions(new MyOptionsFactory());` 
。
-  
+

Review comment:
   It is a good comment, and I have the same question as above.








[GitHub] [flink] yanchenyun commented on a change in pull request #16371: [FLINK-23198][Documentation] Fix the demo of ConfigurableRocksDBOptionsFactory in Documentation.

2021-08-08 Thread GitBox


yanchenyun commented on a change in pull request #16371:
URL: https://github.com/apache/flink/pull/16371#discussion_r684859143



##
File path: docs/content.zh/docs/ops/state/state_backends.md
##
@@ -233,7 +233,7 @@ Flink还提供了两个参数来控制*写路径*(MemTable)和*读路径*(
 
   - `state.backend.rocksdb.memory.write-buffer-ratio`,默认值 `0.5`,即 50% 
的给定内存会分配给写缓冲区使用。
   - `state.backend.rocksdb.memory.high-prio-pool-ratio`,默认值 `0.1`,即 10% 的 
block cache 内存会优先分配给索引及过滤器。
-  
我们强烈建议不要将此值设置为零,以防止索引和过滤器被频繁踢出缓存而导致性能问题。此外,我们默认将L0级的过滤器和索引将被固定到缓存中以提高性能,更多详细信息请参阅
 [RocksDB 
文档](https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-filter-and-compression-dictionary-blocks)。
+
我们强烈建议不要将此值设置为零,以防止索引和过滤器被频繁踢出缓存而导致性能问题。此外,我们默认将L0级的过滤器和索引将被固定到缓存中以提高性能,更多详细信息请参阅
 [RocksDB 
文档](https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-filter-and-compression-dictionary-blocks)。

Review comment:
   How do I upload my changes? This is my first PR. Thank you very much!








[jira] [Commented] (FLINK-22227) Job name, Job ID and receiving Dispatcher should be logged by the client

2021-08-08 Thread Shen Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395666#comment-17395666
 ] 

Shen Zhu commented on FLINK-7:
--

Hey Chesnay,

I plan to add logging before this line in 
StreamExecutionEnvironment#executeAsync: 
[https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java#L2091]
 for the job ID (jobClient.getJobID()) and job name (streamGraph.getJobName()).

Do you think it's the correct place? Thanks!

Best Regards,
Shen Zhu 
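
As a rough illustration of the message shape being proposed, the sketch below builds a submission log line from a job name and ID. It is not Flink code; `submissionMessage`, the stand-in values, and the use of `java.util.logging` are all assumptions for the example (`jobName` and `jobId` stand in for `streamGraph.getJobName()` and `jobClient.getJobID()`).

```java
import java.util.logging.Logger;

public class JobSubmissionLog {
    private static final Logger LOG = Logger.getLogger(JobSubmissionLog.class.getName());

    // Builds the proposed message; jobName/jobId are stand-ins for
    // streamGraph.getJobName() and jobClient.getJobID().
    static String submissionMessage(String jobName, String jobId) {
        return String.format("Submitting job '%s' (%s)", jobName, jobId);
    }

    public static void main(String[] args) {
        // Hypothetical values, for illustration only.
        LOG.info(submissionMessage("State machine job", "a1b2c3d4e5f6"));
    }
}
```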

> Job name, Job ID and receiving Dispatcher should be logged by the client
> 
>
> Key: FLINK-7
> URL: https://issues.apache.org/jira/browse/FLINK-7
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Chesnay Schepler
>Priority: Major
>
> Surprisingly we don't log for job submission where we submit them to or what 
> the job ID/name is.





[jira] [Commented] (FLINK-22217) Add quotes around job names

2021-08-08 Thread Shen Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17395665#comment-17395665
 ] 

Shen Zhu commented on FLINK-22217:
--

Hey Chesnay,

I plan to modify this line in JobMaster#startJobExecution: 
[https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L871]. 
Do you think it'll work?

Best Regards,
Shen Zhu
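
As a small sketch of the quoting change under discussion (not the actual JobMaster log statement; `startupMessage` and the sample job name are illustrative):

```java
public class QuotedJobName {
    // Formats the startup log line with quotes delimiting the job name,
    // as requested in the ticket.
    static String startupMessage(String jobName) {
        return String.format("Starting execution of job '%s'", jobName);
    }

    public static void main(String[] args) {
        // Before: Starting execution of job State machine job
        // After, with quotes making the name boundaries unambiguous:
        System.out.println(startupMessage("State machine job"));
    }
}
```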

> Add quotes around job names
> ---
>
> Key: FLINK-22217
> URL: https://issues.apache.org/jira/browse/FLINK-22217
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Priority: Major
>
> Quotes could be neat here:
> {code}
> Starting execution of job State machine job [..]
> {code}





[jira] [Updated] (FLINK-23556) SQLClientSchemaRegistryITCase fails with " Subject ... not found"

2021-08-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-23556:
---
Labels: stale-blocker test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> SQLClientSchemaRegistryITCase fails with " Subject ... not found"
> -
>
> Key: FLINK-23556
> URL: https://issues.apache.org/jira/browse/FLINK-23556
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: stale-blocker, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21129=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=cc5499f8-bdde-5157-0d76-b6528ecd808e=25337
> {code}
> Jul 28 23:37:48 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 209.44 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Jul 28 23:37:48 [ERROR] 
> testWriting(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase)  
> Time elapsed: 81.146 s  <<< ERROR!
> Jul 28 23:37:48 
> io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: 
> Subject 'test-user-behavior-d18d4af2-3830-4620-9993-340c13f50cc2-value' not 
> found.; error code: 40401
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292)
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352)
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:769)
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:760)
> Jul 28 23:37:48   at 
> io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getAllVersions(CachedSchemaRegistryClient.java:364)
> Jul 28 23:37:48   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.getAllVersions(SQLClientSchemaRegistryITCase.java:230)
> Jul 28 23:37:48   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testWriting(SQLClientSchemaRegistryITCase.java:195)
> Jul 28 23:37:48   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 28 23:37:48   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 28 23:37:48   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 28 23:37:48   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 28 23:37:48   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 28 23:37:48   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 28 23:37:48   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 28 23:37:48   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 28 23:37:48   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 28 23:37:48   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 28 23:37:48   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jul 28 23:37:48   at java.lang.Thread.run(Thread.java:748)
> Jul 28 23:37:48 
> {code}





[jira] [Updated] (FLINK-19937) Support sink parallelism option for all connectors

2021-08-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19937:
---
Labels: pull-request-available stale-major  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Support sink parallelism option for all connectors
> --
>
> Key: FLINK-19937
> URL: https://issues.apache.org/jira/browse/FLINK-19937
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Ecosystem
>Reporter: Lsw_aka_laplace
>Priority: Major
>  Labels: pull-request-available, stale-major
>
> Since https://issues.apache.org/jira/browse/FLINK-19727 has been done, the
> SINK_PARALLELISM option and `ParallelismProvider` should be applied to all 
> existing `DynamicTableSink` implementations of connectors in order to give 
> users access to setting their own sink parallelism.
>  
>  
> Update:
> Anybody who works on this issue should reference FLINK-19727~
> `ParallelismProvider` should work with `SinkRuntimeProvider`; in fact, 
> `SinkFunctionProvider` and `OutputFormatProvider` already implement 
> `ParallelismProvider`. And `SINK_PARALLELISM` has already been defined in 
> `FactoryUtil`, so please reuse it.





[jira] [Updated] (FLINK-22769) yarnship do not support symbolic directory

2021-08-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22769:
---
Labels: flink-yarn pull-request-available pull-requests-available 
stale-major  (was: flink-yarn pull-request-available pull-requests-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> yarnship do not support symbolic directory
> --
>
> Key: FLINK-22769
> URL: https://issues.apache.org/jira/browse/FLINK-22769
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.2, 1.13.0, 1.13.1
>Reporter: Adrian Zhong
>Priority: Major
>  Labels: flink-yarn, pull-request-available, 
> pull-requests-available, stale-major
> Attachments: image-2021-05-25-18-35-00-319.png
>
>
> If we pass `-yt` a symbolic directory, we get an exception:
> !image-2021-05-25-18-35-00-319.png|width=666,height=105!
> Please assign this to me; I spent a whole day working on it and have already 
> implemented it, and I'd like to be a contributor.





[jira] [Updated] (FLINK-22085) KafkaSourceLegacyITCase hangs/fails on azure

2021-08-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22085:
---
Labels: pull-request-available stale-assigned test-stability  (was: 
pull-request-available test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> KafkaSourceLegacyITCase hangs/fails on azure
> 
>
> Key: FLINK-22085
> URL: https://issues.apache.org/jira/browse/FLINK-22085
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Dawid Wysakowicz
>Assignee: Yun Gao
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.14.0
>
>
> 1) Observations
> a) The Azure pipeline would occasionally hang without printing any test error 
> information.
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15939=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=8219]
> b) By running the test KafkaSourceLegacyITCase::testBrokerFailure() with INFO 
> level logging, the test would hang with the following error message 
> printed repeatedly:
> {code:java}
> 20451 [New I/O boss #50] ERROR 
> org.apache.flink.networking.NetworkFailureHandler [] - Closing communication 
> channel because of an exception
> java.net.ConnectException: Connection refused: localhost/127.0.0.1:50073
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
> ~[?:1.8.0_151]
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
> ~[?:1.8.0_151]
> at 
> org.apache.flink.shaded.testutils.org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
>  ~[flink-test-utils_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
> at 
> org.apache.flink.shaded.testutils.org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
>  [flink-test-utils_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
> at 
> org.apache.flink.shaded.testutils.org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
>  [flink-test-utils_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
> at 
> org.apache.flink.shaded.testutils.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
>  [flink-test-utils_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
> at 
> org.apache.flink.shaded.testutils.org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
>  [flink-test-utils_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
> at 
> org.apache.flink.shaded.testutils.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
>  [flink-test-utils_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
> at 
> org.apache.flink.shaded.testutils.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>  [flink-test-utils_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_151]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_151]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
> {code}
> *2) Root cause explanations*
> The test would hang because it enters the following loop:
>  - closeOnFlush() is called for a given channel
>  - closeOnFlush() calls channel.write(..)
>  - channel.write() triggers the exceptionCaught(...) callback
>  - closeOnFlush() is called for the same channel again.
> *3) Solution*
> Update closeOnFlush() so that, if a channel is already being closed by this 
> method, a subsequent call on the same channel does not try to write to it 
> again.
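
The re-entrancy guard described in the solution can be sketched as follows. This is a minimal, self-contained illustration with a fake channel, not Flink's actual NetworkFailureHandler code; the names `CloseOnFlushGuard`, `FakeChannel`, and `closing` are hypothetical.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CloseOnFlushGuard {

    /** Fake channel that just counts close() calls; a stand-in for a Netty channel. */
    static class FakeChannel {
        final AtomicInteger closeCalls = new AtomicInteger();
        void close() { closeCalls.incrementAndGet(); }
    }

    // Channels for which a close is already in progress.
    private final Set<FakeChannel> closing =
            Collections.newSetFromMap(new ConcurrentHashMap<>());

    void closeOnFlush(FakeChannel channel) {
        // Re-entrancy guard: if this channel is already being closed, do not
        // touch it again; this breaks the write -> exceptionCaught ->
        // closeOnFlush cycle described above.
        if (!closing.add(channel)) {
            return;
        }
        // Simulate the flush write failing and triggering exceptionCaught(),
        // which in the buggy version re-entered closeOnFlush() forever.
        exceptionCaught(channel);
        channel.close();
    }

    void exceptionCaught(FakeChannel channel) {
        closeOnFlush(channel); // second entry is now a no-op
    }

    public static void main(String[] args) {
        CloseOnFlushGuard handler = new CloseOnFlushGuard();
        FakeChannel ch = new FakeChannel();
        handler.closeOnFlush(ch); // terminates instead of looping
        System.out.println("close() calls: " + ch.closeCalls.get()); // prints 1
    }
}
```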




