[jira] [Created] (FLINK-20683) TaskSlotTableImplTest.testTryMarkSlotActiveDeactivatesSlotTimeout test failed with "The slot timeout should have been deactivated."
Huang Xingbo created FLINK-20683:

Summary: TaskSlotTableImplTest.testTryMarkSlotActiveDeactivatesSlotTimeout test failed with "The slot timeout should have been deactivated."
Key: FLINK-20683
URL: https://issues.apache.org/jira/browse/FLINK-20683
Project: Flink
Issue Type: Bug
Components: Runtime / Coordination
Affects Versions: 1.12.0, 1.13.0
Reporter: Huang Xingbo

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11071=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=0dbaca5d-7c38-52e6-f4fe-2fb69ccb3ada]

{code:java}
2020-12-19T22:56:49.8133545Z [ERROR] testTryMarkSlotActiveDeactivatesSlotTimeout(org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImplTest)  Time elapsed: 0.082 s  <<< FAILURE!
2020-12-19T22:56:49.8135672Z java.lang.AssertionError: The slot timeout should have been deactivated.
2020-12-19T22:56:49.8136417Z 	at org.junit.Assert.fail(Assert.java:88)
2020-12-19T22:56:49.8137153Z 	at org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImplTest.runDeactivateSlotTimeoutTest(TaskSlotTableImplTest.java:344)
2020-12-19T22:56:49.8138303Z 	at org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImplTest.testTryMarkSlotActiveDeactivatesSlotTimeout(TaskSlotTableImplTest.java:326)
2020-12-19T22:56:49.8139195Z 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-12-19T22:56:49.8139805Z 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-12-19T22:56:49.8167569Z 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-12-19T22:56:49.8168468Z 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
2020-12-19T22:56:49.8169248Z 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-12-19T22:56:49.8169942Z 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-12-19T22:56:49.8170657Z 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-12-19T22:56:49.8171359Z 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-12-19T22:56:49.8171983Z 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-12-19T22:56:49.8172576Z 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-12-19T22:56:49.8173209Z 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-12-19T22:56:49.8173834Z 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-12-19T22:56:49.8174556Z 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-12-19T22:56:49.8174994Z 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-12-19T22:56:49.8175376Z 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-12-19T22:56:49.8175782Z 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-12-19T22:56:49.8176188Z 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-12-19T22:56:49.8176576Z 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-12-19T22:56:49.8176965Z 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-12-19T22:56:49.8177326Z 	at org.junit.runners.Suite.runChild(Suite.java:128)
2020-12-19T22:56:49.8177653Z 	at org.junit.runners.Suite.runChild(Suite.java:27)
2020-12-19T22:56:49.8178104Z 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-12-19T22:56:49.8178706Z 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-12-19T22:56:49.8179223Z 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-12-19T22:56:49.8179781Z 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-12-19T22:56:49.8180169Z 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-12-19T22:56:49.8180565Z 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-12-19T22:56:49.8180963Z 	at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
2020-12-19T22:56:49.8181435Z 	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
2020-12-19T22:56:49.8181964Z 	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
2020-12-19T22:56:49.8182468Z 	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
2020-12-19T22:56:49.8182951Z 	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
2020-12-19T22:56:49.8183446Z 	at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
2020-12-19T22:56:49.8183972Z 	at
{code}
[jira] [Created] (FLINK-20682) Add configuration options related to hadoop
Ruguo Yu created FLINK-20682:

Summary: Add configuration options related to hadoop
Key: FLINK-20682
URL: https://issues.apache.org/jira/browse/FLINK-20682
Project: Flink
Issue Type: Improvement
Components: Deployment / YARN
Affects Versions: 1.12.0
Reporter: Ruguo Yu
Fix For: 1.13.0, 1.12.1

Currently we submit Flink jobs to YARN with the run-application target and need to specify some Hadoop-related configuration, because we use a distributed filesystem similar to Alibaba OSS to store resources. In this case we have to pass special configuration options and set them on the Hadoop Configuration.

To solve this, we can provide configuration options prefixed with "flink.hadoop." (such as -Dflink.hadoop.x.y.z) and copy them into the Hadoop Configuration.

A simple implementation is as follows:

{code:java}
// module: flink-filesystems/flink-hadoop-fs
// class: org.apache.flink.runtime.util.HadoopUtils
public static Configuration getHadoopConfiguration(
        org.apache.flink.configuration.Configuration flinkConfiguration) {
    ..
    // Copy any "flink.hadoop.xxx=yyy" Flink configuration entry to the
    // Hadoop configuration as "xxx=yyy".
    for (String key : flinkConfiguration.keySet()) {
        if (key.startsWith("flink.hadoop.")) {
            result.set(
                    key.substring("flink.hadoop.".length()),
                    flinkConfiguration.getString(key, null));
        }
    }
    return result;
}
{code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
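The prefix-stripping logic proposed above can be sketched in isolation with plain maps standing in for the Flink and Hadoop `Configuration` objects (the class name `PrefixCopy` is hypothetical, not Flink code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposed "flink.hadoop." prefix handling,
// using plain maps instead of the real Configuration classes.
public class PrefixCopy {
    static final String PREFIX = "flink.hadoop.";

    // Copy every "flink.hadoop.xxx" entry into a new map under key "xxx";
    // entries without the prefix are ignored.
    static Map<String, String> copyPrefixed(Map<String, String> flinkConf) {
        Map<String, String> hadoopConf = new HashMap<>();
        for (Map.Entry<String, String> e : flinkConf.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                hadoopConf.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return hadoopConf;
    }

    public static void main(String[] args) {
        Map<String, String> flinkConf = new HashMap<>();
        flinkConf.put("flink.hadoop.fs.defaultFS", "hdfs://nn:8020");
        flinkConf.put("parallelism.default", "4");
        // Only the prefixed entry is forwarded, with the prefix stripped.
        System.out.println(copyPrefixed(flinkConf));
    }
}
```

With `-Dflink.hadoop.fs.defaultFS=hdfs://nn:8020` on the command line, the Hadoop configuration would end up containing `fs.defaultFS=hdfs://nn:8020`.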
[jira] [Created] (FLINK-20681) Support specifying the hdfs path when ship archives or files
Ruguo Yu created FLINK-20681:

Summary: Support specifying the hdfs path when ship archives or files
Key: FLINK-20681
URL: https://issues.apache.org/jira/browse/FLINK-20681
Project: Flink
Issue Type: Improvement
Components: Deployment / YARN
Affects Versions: 1.12.0
Reporter: Ruguo Yu
Fix For: 1.13.0

Currently, our team tries to submit Flink jobs that depend on extra resources with the yarn-application target, using the two options "yarn.ship-archives" and "yarn.ship-files". However, these options only support specifying local resources and shipping them to HDFS. If they also supported remote resources on a distributed filesystem (such as HDFS), we would get the following benefits:
* the client can skip uploading the local resources, which accelerates the job submission process
* YARN can cache the resources on its nodes so that they don't need to be downloaded again for every application
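One way to distinguish the two cases the proposal implies is to look at the URI scheme of each ship entry; a minimal sketch, assuming a hypothetical helper (`ShipPathCheck.isRemotePath` is not Flink API):

```java
import java.net.URI;

// Illustrative sketch: decide whether a ship-files/ship-archives entry is
// already on a remote filesystem and could therefore skip the local upload.
public class ShipPathCheck {
    // A path counts as remote when it carries a URI scheme other than
    // "file", e.g. "hdfs://nn:8020/libs/udf.jar".
    static boolean isRemotePath(String path) {
        String scheme = URI.create(path).getScheme();
        return scheme != null && !"file".equals(scheme);
    }

    public static void main(String[] args) {
        System.out.println(isRemotePath("hdfs://nn:8020/libs/udf.jar")); // remote
        System.out.println(isRemotePath("/tmp/local/udf.jar"));          // local
    }
}
```

A real implementation would also have to register the remote file as a YARN local resource instead of uploading it, but that part depends on the YARN client internals and is left out here.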
[jira] [Created] (FLINK-20680) Fails to call var-arg function with no parameters
Rui Li created FLINK-20680:

Summary: Fails to call var-arg function with no parameters
Key: FLINK-20680
URL: https://issues.apache.org/jira/browse/FLINK-20680
Project: Flink
Issue Type: Bug
Components: Table SQL / API
Reporter: Rui Li
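The issue body gives no reproduction, but the call shape it describes can be illustrated in plain Java, where invoking a var-arg method with zero arguments is legal (`VarArgCall` and `concat` are made-up names for illustration only):

```java
// Plain-Java illustration of a var-arg function invoked with no parameters,
// the call shape that FLINK-20680 reports as failing in the Table API.
public class VarArgCall {
    // A varargs parameter accepts zero or more arguments.
    static String concat(String... parts) {
        return String.join("-", parts);
    }

    public static void main(String[] args) {
        System.out.println(concat());         // zero arguments: empty result
        System.out.println(concat("a", "b")); // two arguments
    }
}
```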