[jira] [Created] (FLINK-21659) Running HA per-job cluster (rocks, incremental) end-to-end test fails
Guowei Ma created FLINK-21659:
---------------------------------

             Summary: Running HA per-job cluster (rocks, incremental) end-to-end test fails
                 Key: FLINK-21659
                 URL: https://issues.apache.org/jira/browse/FLINK-21659
             Project: Flink
          Issue Type: Bug
          Components: Runtime / Checkpointing
    Affects Versions: 1.13.0
            Reporter: Guowei Ma

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14232=logs=4dd4dbdd-1802-5eb7-a518-6acd9d24d0fc=8d6b4dd3-4ca1-5611-1743-57a7d76b395a

It seems that the task deployed to TaskManager0 is stuck, which causes the checkpoint to fail.

{code:java}
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException: Invocation of public abstract java.util.concurrent.CompletableFuture org.apache.flink.runtime.taskexecutor.TaskExecutorGateway.submitTask(org.apache.flink.runtime.deployment.TaskDeploymentDescriptor,org.apache.flink.runtime.jobmaster.JobMasterId,org.apache.flink.api.common.time.Time) timed out.
	at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326) ~[?:1.8.0_282]
	at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338) ~[?:1.8.0_282]
	at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:925) ~[?:1.8.0_282]
	at java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:913) ~[?:1.8.0_282]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[?:1.8.0_282]
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) ~[?:1.8.0_282]
	at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:234) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) ~[?:1.8.0_282]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) ~[?:1.8.0_282]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[?:1.8.0_282]
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) ~[?:1.8.0_282]
	at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1064) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.dispatch.OnComplete.internal(Future.scala:263) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.dispatch.OnComplete.internal(Future.scala:261) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:644) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235) ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_282]
Caused by: java.util.concurrent.TimeoutException: Invocation
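If the TaskManager is merely slow rather than dead, one mitigation for such submitTask timeouts is raising the RPC ask timeout. This is only a sketch of the relevant flink-conf.yaml setting (value is an example; the default in this Flink version is 10 s), not a diagnosis of this particular failure:

```yaml
# flink-conf.yaml: give slow TaskManagers more time to acknowledge RPC calls
# such as submitTask (default: 10 s)
akka.ask.timeout: 60 s
```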
[jira] [Created] (FLINK-21658) Align flink-benchmarks with job graph builder
Yun Tang created FLINK-21658:
-----------------------------

             Summary: Align flink-benchmarks with job graph builder
                 Key: FLINK-21658
                 URL: https://issues.apache.org/jira/browse/FLINK-21658
             Project: Flink
          Issue Type: Bug
          Components: Benchmarks
            Reporter: Yun Tang
            Assignee: Yun Tang

After FLINK-21401 was resolved, the previous JobGraph constructor was removed:

{code:java}
public JobGraph(JobVertex... vertices) {
    this(null, vertices);
}
{code}

However, this constructor is still used in flink-benchmarks; we need to adapt that code to the latest Flink master.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
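A sketch of the adaptation the benchmarks likely need, assuming the JobGraphBuilder introduced by FLINK-21401; the exact method names (`newStreamingJobGraphBuilder`, `addJobVertices`) are taken from memory of Flink master and should be double-checked, and the snippet requires flink-runtime on the classpath:

```java
import java.util.Arrays;

import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.JobGraphBuilder;
import org.apache.flink.runtime.jobgraph.JobVertex;

public class BenchmarkJobGraphs {

    // Before (constructor removed by FLINK-21401):
    //   JobGraph jobGraph = new JobGraph(source, sink);
    static JobGraph build(JobVertex source, JobVertex sink) {
        return JobGraphBuilder.newStreamingJobGraphBuilder()
                .addJobVertices(Arrays.asList(source, sink))
                .build();
    }
}
```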
[jira] [Created] (FLINK-21657) flink doc fails to display a picture
zl created FLINK-21657:
-----------------------

             Summary: flink doc fails to display a picture
                 Key: FLINK-21657
                 URL: https://issues.apache.org/jira/browse/FLINK-21657
             Project: Flink
          Issue Type: Bug
          Components: Documentation
            Reporter: zl
         Attachments: image-2021-03-08-15-07-01-698.png

[Once More, With Streaming!|https://ci.apache.org/projects/flink/flink-docs-master/docs/try-flink/table_api/#once-more-with-streaming] fails to display a picture.

!image-2021-03-08-15-07-01-698.png!
[jira] [Created] (FLINK-21656) Add antlr parser for hive dialect
Rui Li created FLINK-21656:
---------------------------

             Summary: Add antlr parser for hive dialect
                 Key: FLINK-21656
                 URL: https://issues.apache.org/jira/browse/FLINK-21656
             Project: Flink
          Issue Type: Sub-task
          Components: Connectors / Hive
            Reporter: Rui Li
             Fix For: 1.13.0
How do I call an algorithm written in C++ in Flink?
My company has provided an algorithm written in C++, which has been packaged into a .so file. I have built a Spring Boot project that uses JNI to call this algorithm. Could you please tell me how to call it from Flink? Do I need to define operators, or chains of operators?
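Not an official recipe, only a sketch of one common approach: load the .so once per TaskManager JVM and call it from an ordinary rich function, keeping the JNI binding behind a small interface so the operator logic stays testable without the native library. All names here (Scorer, libalgo.so, score) are made up for illustration; the JNI-specific lines are commented out because they only compile against a real native library.

```java
import java.io.Serializable;

public class JniInFlinkSketch {

    /** Abstraction over the native call so operators can be tested without the .so. */
    public interface Scorer extends Serializable {
        double score(double[] features);
    }

    /** JNI-backed implementation; the static initializer would run once per JVM. */
    public static final class NativeScorer implements Scorer {
        // static { System.loadLibrary("algo"); }  // resolves libalgo.so on java.library.path
        // private native double nativeScore(double[] features);
        @Override
        public double score(double[] features) {
            throw new UnsupportedOperationException("requires libalgo.so on java.library.path");
        }
    }

    /** Pure-Java stand-in used for tests and local development. */
    public static final class StubScorer implements Scorer {
        @Override
        public double score(double[] features) {
            double sum = 0;
            for (double f : features) sum += f;  // placeholder logic
            return sum;
        }
    }

    // In Flink you do not need a custom operator or operator chain for this; a
    // regular RichMapFunction is enough. Create the scorer in open() so the
    // library is loaded on the TaskManager, not on the client that builds the job:
    //
    //   public class ScoreMapper extends RichMapFunction<double[], Double> {
    //       private transient Scorer scorer;
    //       @Override public void open(Configuration conf) { scorer = new NativeScorer(); }
    //       @Override public Double map(double[] features) { return scorer.score(features); }
    //   }
    //
    // Ship libalgo.so to every TaskManager (e.g. via the container image) and
    // extend java.library.path (or env.java.opts) accordingly.
}
```

The interface also makes it easy to swap the stub in for unit tests of the surrounding pipeline.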
[jira] [Created] (FLINK-21655) Incorrect simplification for coalesce call on a groupingsets' result
lincoln lee created FLINK-21655:
--------------------------------

             Summary: Incorrect simplification for coalesce call on a groupingsets' result
                 Key: FLINK-21655
                 URL: https://issues.apache.org/jira/browse/FLINK-21655
             Project: Flink
          Issue Type: Bug
          Components: Table SQL / Planner
    Affects Versions: 1.12.2, 1.11.3
            Reporter: lincoln lee
             Fix For: 1.13.0

Currently the planner incorrectly simplifies a `coalesce` call on a `GROUPING SETS` result. The following query:

{code:scala}
val sqlQuery =
  """
    |SELECT
    |  a,
    |  b,
    |  coalesce(c, 'empty'),
    |  avg(d)
    |FROM t1
    |GROUP BY GROUPING SETS ((a, b), (a, b, c))
  """.stripMargin
{code}

generates a wrong plan in which the `coalesce` call is lost:

{code}
Calc(select=[a, b, c AS EXPR$2, EXPR$3])
+- HashAggregate(isMerge=[true], groupBy=[a, b, c, $e], select=[a, b, c, $e, Final_AVG(sum$0, count$1) AS EXPR$3])
   +- Exchange(distribution=[hash[a, b, c, $e]])
      +- LocalHashAggregate(groupBy=[a, b, c, $e], select=[a, b, c, $e, Partial_AVG(d) AS (sum$0, count$1)])
         +- Expand(projects=[{a, b, c, d, 0 AS $e}, {a, b, null AS c, d, 1 AS $e}])
            +- TableSourceScan(table=[[default_catalog, default_database, t1]], fields=[a, b, c, d])
{code}
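A minimal plain-Java illustration of why dropping the call is wrong: in the `(a, b)` grouping set, the expanded value of `c` is NULL, so `coalesce(c, 'empty')` must yield `'empty'`, while the simplified plan returns the raw (NULL) column.

```java
public class CoalesceGroupingSets {

    /** SQL COALESCE(value, fallback) for two arguments. */
    public static String coalesce(String value, String fallback) {
        return value != null ? value : fallback;
    }
}
```

For a row from the `(a, b)` grouping set (`c` expanded to NULL), `coalesce(null, "empty")` is `"empty"`; the plan that dropped the call would emit NULL instead.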
[jira] [Created] (FLINK-21654) YARNSessionCapacitySchedulerITCase.testStartYarnSessionClusterInQaTeamQueue fail because of NullPointerException
Guowei Ma created FLINK-21654:
------------------------------

             Summary: YARNSessionCapacitySchedulerITCase.testStartYarnSessionClusterInQaTeamQueue fail because of NullPointerException
                 Key: FLINK-21654
                 URL: https://issues.apache.org/jira/browse/FLINK-21654
             Project: Flink
          Issue Type: Bug
          Components: Runtime / Coordination
    Affects Versions: 1.12.2
            Reporter: Guowei Ma

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14265=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf

{code:java}
2021-03-07T23:00:44.6390668Z [ERROR] testStartYarnSessionClusterInQaTeamQueue(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase) Time elapsed: 7.338 s <<< ERROR!
2021-03-07T23:00:44.6391415Z java.lang.NullPointerException:
2021-03-07T23:00:44.6403594Z java.lang.NullPointerException
2021-03-07T23:00:44.6404575Z 	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:128)
2021-03-07T23:00:44.6405710Z 	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:900)
2021-03-07T23:00:44.6406830Z 	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:660)
2021-03-07T23:00:44.6407970Z 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:930)
2021-03-07T23:00:44.6409075Z 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:273)
2021-03-07T23:00:44.6412848Z 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:507)
2021-03-07T23:00:44.6417313Z 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
2021-03-07T23:00:44.6421872Z 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
2021-03-07T23:00:44.6423676Z 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
2021-03-07T23:00:44.6424387Z 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
2021-03-07T23:00:44.6424997Z 	at java.security.AccessController.doPrivileged(Native Method)
2021-03-07T23:00:44.6425608Z 	at javax.security.auth.Subject.doAs(Subject.java:422)
2021-03-07T23:00:44.6426513Z 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
2021-03-07T23:00:44.6427351Z 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
2021-03-07T23:00:44.6427767Z
2021-03-07T23:00:44.6428196Z 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
2021-03-07T23:00:44.6428975Z 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
2021-03-07T23:00:44.6429888Z 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2021-03-07T23:00:44.6442419Z 	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
2021-03-07T23:00:44.6445364Z 	at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
2021-03-07T23:00:44.6644429Z 	at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
2021-03-07T23:00:44.6658468Z 	at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
2021-03-07T23:00:44.6669171Z 	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplications(ApplicationClientProtocolPBClientImpl.java:291)
2021-03-07T23:00:44.6680027Z 	at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
2021-03-07T23:00:44.6690713Z 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-03-07T23:00:44.6701085Z 	at java.lang.reflect.Method.invoke(Method.java:498)
2021-03-07T23:00:44.6708626Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
2021-03-07T23:00:44.6709488Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
2021-03-07T23:00:44.6710261Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
2021-03-07T23:00:44.6711051Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
2021-03-07T23:00:44.6711864Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
2021-03-07T23:00:44.6729939Z 	at com.sun.proxy.$Proxy111.getApplications(Unknown Source)
2021-03-07T23:00:44.6746044Z 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplications(YarnClientImpl.java:528)
2021-03-07T23:00:44.6747093Z 	at
[jira] [Created] (FLINK-21653) org.apache.flink.streaming.scala.api.TextOutputFormatITCase.testPath failed because of "Not enough resources available for scheduling."
Guowei Ma created FLINK-21653:
------------------------------

             Summary: org.apache.flink.streaming.scala.api.TextOutputFormatITCase.testPath failed because of "Not enough resources available for scheduling."
                 Key: FLINK-21653
                 URL: https://issues.apache.org/jira/browse/FLINK-21653
             Project: Flink
          Issue Type: Bug
          Components: Runtime / Coordination
    Affects Versions: 1.13.0
            Reporter: Guowei Ma

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14263=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7030a106-e977-5851-a05e-535de648c9c9

{code:java}
2021-03-07T22:45:25.9602602Z org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2021-03-07T22:45:25.9603364Z 	at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
2021-03-07T22:45:25.9604575Z 	at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
2021-03-07T22:45:25.9605462Z 	at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
2021-03-07T22:45:25.9606213Z 	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
2021-03-07T22:45:25.9606962Z 	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
2021-03-07T22:45:25.9607735Z 	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
2021-03-07T22:45:25.9608583Z 	at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
2021-03-07T22:45:25.9609568Z 	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
2021-03-07T22:45:25.9610371Z 	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
2021-03-07T22:45:25.9611186Z 	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
2021-03-07T22:45:25.9611975Z 	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
2021-03-07T22:45:25.9612867Z 	at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1066)
2021-03-07T22:45:25.9613555Z 	at akka.dispatch.OnComplete.internal(Future.scala:264)
2021-03-07T22:45:25.9614162Z 	at akka.dispatch.OnComplete.internal(Future.scala:261)
2021-03-07T22:45:25.9614778Z 	at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
2021-03-07T22:45:25.9615402Z 	at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
2021-03-07T22:45:25.9616020Z 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
2021-03-07T22:45:25.9616754Z 	at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
2021-03-07T22:45:25.9617618Z 	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
2021-03-07T22:45:25.9618311Z 	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
2021-03-07T22:45:25.9619177Z 	at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
2021-03-07T22:45:25.9619924Z 	at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
2021-03-07T22:45:25.9620764Z 	at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
2021-03-07T22:45:25.9621532Z 	at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
2021-03-07T22:45:25.9622191Z 	at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
2021-03-07T22:45:25.9622976Z 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
2021-03-07T22:45:25.9623782Z 	at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
2021-03-07T22:45:25.9624906Z 	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
2021-03-07T22:45:25.9625792Z 	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
2021-03-07T22:45:25.9626601Z 	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
2021-03-07T22:45:25.9627367Z 	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
2021-03-07T22:45:25.9628065Z 	at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
2021-03-07T22:45:25.9628728Z 	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
2021-03-07T22:45:25.9629664Z 	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
2021-03-07T22:45:25.9630495Z 	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
2021-03-07T22:45:25.9631224Z 	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
2021-03-07T22:45:25.9631909Z 	at
Unsubscribe
Unsubscribe
[jira] [Created] (FLINK-21652) Elasticsearch6DynamicSinkITCase.testWritingDocumentsFromTableApi failed because of throwing SocketTimeOut Exception during closing stage.
Guowei Ma created FLINK-21652:
------------------------------

             Summary: Elasticsearch6DynamicSinkITCase.testWritingDocumentsFromTableApi failed because of throwing SocketTimeOut Exception during closing stage.
                 Key: FLINK-21652
                 URL: https://issues.apache.org/jira/browse/FLINK-21652
             Project: Flink
          Issue Type: Bug
          Components: Connectors / ElasticSearch
    Affects Versions: 1.13.0
            Reporter: Guowei Ma

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14263=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361

{code:java}
2021-03-07T23:12:11.0702985Z [ERROR] testWritingDocumentsFromTableApi(org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase) Time elapsed: 31.256 s <<< ERROR!
2021-03-07T23:12:11.0704247Z java.util.concurrent.ExecutionException: org.apache.flink.table.api.TableException: Failed to wait job finish
2021-03-07T23:12:11.0705203Z 	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
2021-03-07T23:12:11.0705982Z 	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
2021-03-07T23:12:11.0706791Z 	at org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:123)
2021-03-07T23:12:11.0707583Z 	at org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:86)
2021-03-07T23:12:11.0708850Z 	at org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase.testWritingDocumentsFromTableApi(Elasticsearch6DynamicSinkITCase.java:205)
2021-03-07T23:12:11.0709804Z 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-03-07T23:12:11.0710479Z 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-03-07T23:12:11.0711251Z 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-03-07T23:12:11.0711974Z 	at java.lang.reflect.Method.invoke(Method.java:498)
2021-03-07T23:12:11.0712663Z 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2021-03-07T23:12:11.0713466Z 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2021-03-07T23:12:11.0715464Z 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2021-03-07T23:12:11.0716070Z 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2021-03-07T23:12:11.0716576Z 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2021-03-07T23:12:11.0717278Z 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2021-03-07T23:12:11.0717873Z 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2021-03-07T23:12:11.0718540Z 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2021-03-07T23:12:11.0718940Z 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2021-03-07T23:12:11.0719362Z 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2021-03-07T23:12:11.0719792Z 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2021-03-07T23:12:11.0720207Z 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2021-03-07T23:12:11.0720728Z 	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
2021-03-07T23:12:11.0721353Z 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-03-07T23:12:11.0721940Z 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2021-03-07T23:12:11.0753352Z 	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2021-03-07T23:12:11.0754394Z 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2021-03-07T23:12:11.0755260Z 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2021-03-07T23:12:11.0756131Z 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2021-03-07T23:12:11.0757021Z 	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2021-03-07T23:12:11.0758317Z 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2021-03-07T23:12:11.0759162Z 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2021-03-07T23:12:11.0760036Z 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
2021-03-07T23:12:11.0760719Z Caused by: org.apache.flink.table.api.TableException: Failed to wait job finish
2021-03-07T23:12:11.0761710Z 	at org.apache.flink.table.api.internal.InsertResultIterator.hasNext(InsertResultIterator.java:56)
2021-03-07T23:12:11.0762716Z 	at
[jira] [Created] (FLINK-21651) Migrate module-related tests in LocalExecutorITCase to new integration test framework
Jane Chan created FLINK-21651:
------------------------------

             Summary: Migrate module-related tests in LocalExecutorITCase to new integration test framework
                 Key: FLINK-21651
                 URL: https://issues.apache.org/jira/browse/FLINK-21651
             Project: Flink
          Issue Type: Sub-task
          Components: Table SQL / Client
    Affects Versions: 1.13.0
            Reporter: Jane Chan

Migrate the module-related tests in `LocalExecutorITCase` after FLINK-21614 is merged.
Re: [DISCUSS] Apache Flink Jira Process
Thanks for the updates, Konstantin. The changes look good to me.

Minor:
- typo: The last two `auto-deprioritized-blocker` in rule 1 details should be `auto-deprioritized-critical/major`.

Thank you~

Xintong Song

On Fri, Mar 5, 2021 at 7:33 PM Konstantin Knauf wrote:
> Hi everyone,
>
> Thank you for all the comments so far. As proposed, I have dropped the
> "Trivial" Priority.
>
> I also added another section "Rules in Detail" to the document adding some
> concrete numbers & labels that implement the rules. As a TLDR, here is an
> example of the flow for a "Blocker", that is created and assigned to a
> user, but never receives any updates afterwards.
>
> Day | Status | Priority | Labels
> ----|--------|----------|-------
>   0 | Open   | Blocker  |
>   7 | Open   | Blocker  | stale-assigned
>  14 | Open   | Blocker  | auto-unassigned
>  15 | Open   | Blocker  | auto-unassigned, stale-blocker
>  22 | Open   | Critical | auto-unassigned, auto-deprioritized-blocker
>  29 | Open   | Critical | auto-unassigned, auto-deprioritized-blocker, stale-critical
>  36 | Open   | Major    | auto-unassigned, auto-deprioritized-blocker, auto-deprioritized-critical
>  66 | Open   | Major    | auto-unassigned, auto-deprioritized-blocker, auto-deprioritized-critical, stale-major
>  73 | Open   | Minor    | auto-unassigned, auto-deprioritized-blocker, auto-deprioritized-critical, auto-deprioritized-major
> 263 | Open   | Minor    | auto-unassigned, auto-deprioritized-blocker, auto-deprioritized-critical, auto-deprioritized-major, stale-minor
> 270 | Closed | Minor    | auto-unassigned, auto-deprioritized-blocker, auto-deprioritized-critical, auto-deprioritized-major, auto-closed
>
> I am looking forward to further comments and would otherwise proceed to a
> vote towards the end of next week.
>
> Cheers,
>
> Konstantin
>
>
> On Tue, Mar 2, 2021 at 3:45 PM Robert Metzger wrote:
> > Thanks a lot for the proposal!
> >
> > +1 for doing it!
> > > > On Tue, Mar 2, 2021 at 12:27 PM Khachatryan Roman < > > khachatryan.ro...@gmail.com> wrote: > > > > > Hi Konstantin, > > > > > > I think we should try it out. > > > Even if tickets don't work well it can be a good step towards managing > > > technical debt in some other way, like wiki. > > > > > > Thanks! > > > > > > Regards, > > > Roman > > > > > > > > > On Tue, Mar 2, 2021 at 9:32 AM Dawid Wysakowicz < > dwysakow...@apache.org> > > > wrote: > > > > > > > I'd be fine with dropping the "Trivial" priority in favour of > "starter" > > > > label. > > > > > > > > Best, > > > > > > > > Dawid > > > > > > > > On 01/03/2021 11:53, Konstantin Knauf wrote: > > > > > Hi Dawid, > > > > > > > > > > Thanks for the feedback. Do you think we should simply get rid of > the > > > > > "Trivial" priority then and use the "starter" label more > > aggressively? > > > > > > > > > > Best, > > > > > > > > > > Konstantin > > > > > > > > > > On Mon, Mar 1, 2021 at 11:44 AM Dawid Wysakowicz < > > > dwysakow...@apache.org > > > > > > > > > > wrote: > > > > > > > > > >> Hi Konstantin, > > > > >> > > > > >> I also like the idea. > > > > >> > > > > >> Two comments: > > > > >> > > > > >> * you describe the "Trivial" priority as one that needs to be > > > > >> implemented immediately. First of all it is not used to often, > but I > > > > >> think the way it works now is similar with a "starter" label. > Tasks > > > that > > > > >> are not bugs, are easy to implement and we think they are fine to > be > > > > >> taken by newcomers. Therefore they do not fall in my mind into > > > > >> "immediately" category. > > > > >> > > > > >> * I would still deprioritise test instabilities. I think there > > > shouldn't > > > > >> be a problem with that. We do post links to all failures therefore > > it > > > > >> will automatically priortise the tasks according to failure > > > frequencies. 
> > > > >> > > > > >> Best, > > > > >> > > > > >> Dawid > > > > >> > > > > >> On 01/03/2021 09:38, Konstantin Knauf wrote: > > > > >>> Hi Xintong, > > > > >>> > > > > >>> yes, such labels would make a lot of sense. I added a sentence to > > the > > > > >>> document. > > > > >>> > > > > >>> Thanks, > > > > >>> > > > > >>> Konstantin > > > > >>> > > > > >>> On Mon, Mar 1, 2021 at 8:51 AM Xintong Song < > tonysong...@gmail.com > > > > > > > >> wrote: > > > > Thanks for driving this discussion, Konstantin. > > > > > > > > I like the idea of having a bot reminding > > reporter/assignee/watchers > > > > >> about > > > > inactive tickets and if needed downgrade/close them > automatically. > > > > > > > > My two cents: > > > > We may have labels like "downgraded-by-bot" / "closed-by-bot", > so > > > that > > > > >> it's > > > > easier to filter and review tickets updated by the bot. > > > > We may want to review such tickets (e.g., monthly) in case a > valid > > > > >> ticket > > > > failed to draw the attention of relevant committers and the > > reporter > > > > doesn't know
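Konstantin's example flow for an untouched Blocker can be reproduced by a small simulation; the intervals (7 days from stale label to bot action, and 7/30/190 days until Critical/Major/Minor tickets go stale) are inferred from the day numbers in his table and are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

// Simulation of the proposed escalation ladder for a ticket that never
// receives updates, starting from the first stale-blocker label (day 15,
// after stale-assigned at day 7 and auto-unassigned at day 14).
public class JiraBotLadder {

    public static List<String> timeline() {
        List<String> events = new ArrayList<>();
        String[] priorities = {"blocker", "critical", "major", "minor"};
        int[] staleAfter = {7, 30, 190}; // days in Critical/Major/Minor before "stale-*"
        int day = 15;
        events.add("day " + day + ": stale-" + priorities[0]);
        for (int p = 0; p < staleAfter.length; p++) {
            day += 7; // the bot acts 7 days after the stale label
            events.add("day " + day + ": auto-deprioritized-" + priorities[p]);
            day += staleAfter[p];
            events.add("day " + day + ": stale-" + priorities[p + 1]);
        }
        day += 7; // stale Minor tickets are closed rather than deprioritized
        events.add("day " + day + ": auto-closed");
        return events;
    }
}
```

Running `timeline()` yields the same day numbers as the table: deprioritizations at days 22, 36 and 73, and auto-close at day 270.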
[jira] [Created] (FLINK-21650) On heap state restore, skip key groups from other state backends
Roman Khachatryan created FLINK-21650:
--------------------------------------

             Summary: On heap state restore, skip key groups from other state backends
                 Key: FLINK-21650
                 URL: https://issues.apache.org/jira/browse/FLINK-21650
             Project: Flink
          Issue Type: Sub-task
          Components: Runtime / State Backends
            Reporter: Roman Khachatryan
            Assignee: Roman Khachatryan
             Fix For: 1.13.0

In the new incremental mode, the heap backend wraps KeyGroupsStateHandle into IncrementalRemoteKeyedStateHandle. The latter doesn't compute the intersection because it is not directly aware of the offsets, so it returns the full key range if there is some intersection. On recovery, unused key groups are filtered out by the RocksDB state backend. With this change, the heap state backend does the same.
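A self-contained sketch of the filtering described above; `KeyGroupRange` here is a simplified stand-in for Flink's class of the same name, not the actual implementation. A restored handle may cover more key groups than this subtask owns, and groups outside the task's range must be skipped instead of loaded.

```java
import java.util.ArrayList;
import java.util.List;

public class KeyGroupFilter {

    /** Simplified stand-in for org.apache.flink.runtime.state.KeyGroupRange. */
    public static final class KeyGroupRange {
        public final int start, end; // inclusive bounds
        public KeyGroupRange(int start, int end) { this.start = start; this.end = end; }
        public boolean contains(int keyGroup) { return keyGroup >= start && keyGroup <= end; }
    }

    /** Returns only the key groups of the handle that this subtask owns. */
    public static List<Integer> keyGroupsToRestore(KeyGroupRange handleRange, KeyGroupRange taskRange) {
        List<Integer> toRestore = new ArrayList<>();
        for (int kg = handleRange.start; kg <= handleRange.end; kg++) {
            if (taskRange.contains(kg)) {
                toRestore.add(kg); // restore this group
            }                      // else: skip, it belongs to another subtask
        }
        return toRestore;
    }
}
```

For example, restoring a handle covering key groups 0-9 on a subtask that owns 4-6 should load only groups 4, 5 and 6.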
[jira] [Created] (FLINK-21649) Refactor CopyOnWrite state classes
Roman Khachatryan created FLINK-21649:
--------------------------------------

             Summary: Refactor CopyOnWrite state classes
                 Key: FLINK-21649
                 URL: https://issues.apache.org/jira/browse/FLINK-21649
             Project: Flink
          Issue Type: Sub-task
          Components: Runtime / State Backends
            Reporter: Roman Khachatryan
            Assignee: Roman Khachatryan

Motivation: allow extension by incremental counterparts.
[jira] [Created] (FLINK-21648) FLIP-151: Incremental snapshots for heap-based state backend
Roman Khachatryan created FLINK-21648:
--------------------------------------

             Summary: FLIP-151: Incremental snapshots for heap-based state backend
                 Key: FLINK-21648
                 URL: https://issues.apache.org/jira/browse/FLINK-21648
             Project: Flink
          Issue Type: New Feature
          Components: Runtime / State Backends
            Reporter: Roman Khachatryan
            Assignee: Roman Khachatryan
             Fix For: 1.13.0

Umbrella ticket for https://cwiki.apache.org/confluence/display/FLINK/FLIP-151%3A+Incremental+snapshots+for+heap-based+state+backend
[jira] [Created] (FLINK-21647) 'Run kubernetes session test (default input)' failed on Azure
Jark Wu created FLINK-21647:
----------------------------

             Summary: 'Run kubernetes session test (default input)' failed on Azure
                 Key: FLINK-21647
                 URL: https://issues.apache.org/jira/browse/FLINK-21647
             Project: Flink
          Issue Type: Bug
          Components: Deployment / Kubernetes
    Affects Versions: 1.13.0
            Reporter: Jark Wu

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14236=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=2247
[jira] [Created] (FLINK-21646) Support json serialization/deserialization for the states in StreamExecGroupAggregate
godfrey he created FLINK-21646:
-------------------------------

             Summary: Support json serialization/deserialization for the states in StreamExecGroupAggregate
                 Key: FLINK-21646
                 URL: https://issues.apache.org/jira/browse/FLINK-21646
             Project: Flink
          Issue Type: Sub-task
            Reporter: godfrey he
[jira] [Created] (FLINK-21645) Introduce StateDeclaration
godfrey he created FLINK-21645:
-------------------------------

             Summary: Introduce StateDeclaration
                 Key: FLINK-21645
                 URL: https://issues.apache.org/jira/browse/FLINK-21645
             Project: Flink
          Issue Type: Sub-task
          Components: Table SQL / Runtime
            Reporter: godfrey he
             Fix For: 1.13.0

StateDeclaration is used to let the planner know the state declared by an operator.
[jira] [Created] (FLINK-21644) Resuming Savepoint (rocks, scale up, heap timers) end-to-end test failed
Guowei Ma created FLINK-21644:
------------------------------

             Summary: Resuming Savepoint (rocks, scale up, heap timers) end-to-end test failed
                 Key: FLINK-21644
                 URL: https://issues.apache.org/jira/browse/FLINK-21644
             Project: Flink
          Issue Type: Bug
          Components: Runtime / Checkpointing
    Affects Versions: 1.11.3
            Reporter: Guowei Ma

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14213=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9

The test fails because some operators exit slowly. We can see that the "EventSource -> Timestamps/Watermarks" tasks exit very quickly, but "ArtificalKeyedStateMapper_Kryo_and_Custom_Stateful" takes another 34 s to exit.

{code:java}
2021-03-05 21:41:15,327 INFO org.apache.flink.runtime.taskmanager.Task [] - Source: EventSource -> Timestamps/Watermarks (2/2) (87b0121ed823482e9f5718e99793ee5c) switched from RUNNING to FINISHED.
2021-03-05 21:41:15,327 INFO org.apache.flink.runtime.taskmanager.Task [] - Freeing task resources for Source: EventSource -> Timestamps/Watermarks (2/2) (87b0121ed823482e9f5718e99793ee5c).
2021-03-05 21:41:15,332 INFO org.apache.flink.runtime.taskmanager.Task [] - Source: EventSource -> Timestamps/Watermarks (1/2) (4ede785fe0f1c4c798030ac748da0a95) switched from RUNNING to FINISHED.
2021-03-05 21:41:15,332 INFO org.apache.flink.runtime.taskmanager.Task [] - Freeing task resources for Source: EventSource -> Timestamps/Watermarks (1/2) (4ede785fe0f1c4c798030ac748da0a95).
2021-03-05 21:41:15,336 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Un-registering task and sending final execution state FINISHED to JobManager for task Source: EventSource -> Timestamps/Watermarks (2/2) 87b0121ed823482e9f5718e99793ee5c.
2021-03-05 21:41:15,338 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Un-registering task and sending final execution state FINISHED to JobManager for task Source: EventSource -> Timestamps/Watermarks (1/2) 4ede785fe0f1c4c798030ac748da0a95.
2021-03-05 21:41:16,294 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:17,298 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:18,301 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:19,304 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:20,307 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:21,310 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:22,313 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:23,315 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:24,318 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:25,321 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:26,324 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:27,326 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:28,329 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:29,332 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:30,335 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:31,337 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:32,340 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:33,343 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:34,345 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:35,348 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:36,351 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:37,353 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:38,356 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:39,359 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:40,362 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:41,365 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:42,368 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:43,371 INFO org.apache.flink.metrics.slf4j.Slf4jReporter [] -
2021-03-05 21:41:44,373 INFO