[GitHub] [flink] KurtYoung commented on a change in pull request #15564: [FLINK-22207][connectors/hive]Hive Catalog retrieve Flink Properties …
KurtYoung commented on a change in pull request #15564:
URL: https://github.com/apache/flink/pull/15564#discussion_r612150467

File path: flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/catalog/hive/HiveCatalogTest.java

@@ -78,4 +80,17 @@ public void testCreateHiveTable() {
         prop.keySet().stream()
                 .noneMatch(k -> k.startsWith(CatalogPropertiesUtil.FLINK_PROPERTY_PREFIX)));
 }
+
+@Test
+public void testRetrieveFlinkProperties() throws ClassNotFoundException, NoSuchMethodException, InvocationTargetException, IllegalAccessException {

Review comment: Can you choose another way to test this? Relying on a private method is not good practice.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
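The reviewer's point — don't invoke a private method via reflection in a test — can be sketched outside Flink. The class and method names below are hypothetical stand-ins, not the actual HiveCatalog code: the idea is to assert on the observable result of a public entry point that exercises the private helper, so the test survives internal refactoring.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a catalog class whose private helper we want covered.
class PropertyMapper {
    private static final String FLINK_PREFIX = "flink.";

    // Private helper: keeps only "flink."-prefixed keys, stripping the prefix.
    private Map<String, String> retrieveFlinkProperties(Map<String, String> raw) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : raw.entrySet()) {
            if (e.getKey().startsWith(FLINK_PREFIX)) {
                out.put(e.getKey().substring(FLINK_PREFIX.length()), e.getValue());
            }
        }
        return out;
    }

    // Public entry point that uses the helper; tests should target this.
    public Map<String, String> describe(Map<String, String> raw) {
        return retrieveFlinkProperties(raw);
    }
}

public class Example {
    public static void main(String[] args) {
        Map<String, String> raw = new HashMap<>();
        raw.put("flink.connector", "hive");
        raw.put("hive.metastore", "thrift://host:9083");

        // Assert on observable behavior instead of reflecting into the
        // private method (no ClassNotFoundException/NoSuchMethodException
        // plumbing needed, and no coupling to the method's name).
        Map<String, String> props = new PropertyMapper().describe(raw);
        System.out.println(props.get("connector"));              // hive
        System.out.println(props.containsKey("hive.metastore")); // false
    }
}
```

This keeps the test's contract at the public API boundary, which is what the review comment asks for.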
[jira] [Closed] (FLINK-21748) LocalExecutorITCase.testBatchQueryExecutionMultipleTimes[Planner: old] fails
[ https://issues.apache.org/jira/browse/FLINK-21748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young closed FLINK-21748.
------------------------------
    Resolution: Fixed

fixed: b78ce3b9713c7b2792629d509a3951a96829012d

> LocalExecutorITCase.testBatchQueryExecutionMultipleTimes[Planner: old] fails
> ----------------------------------------------------------------------------
>
>                 Key: FLINK-21748
>                 URL: https://issues.apache.org/jira/browse/FLINK-21748
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Client
>    Affects Versions: 1.13.0
>            Reporter: Dawid Wysakowicz
>            Assignee: Shengkai Fang
>            Priority: Major
>              Labels: pull-request-available, test-stability
>             Fix For: 1.13.0
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14520=logs=b2f046ab-ae17-5406-acdc-240be7e870e4=93e5ae06-d194-513d-ba8d-150ef6da1d7c=8800
> {code}
> [ERROR] testBatchQueryExecutionMultipleTimes[Planner: old](org.apache.flink.table.client.gateway.local.LocalExecutorITCase)  Time elapsed: 0.438 s  <<< ERROR!
> org.apache.flink.table.client.gateway.SqlExecutionException: Error while retrieving result.
>     at org.apache.flink.table.client.gateway.local.result.CollectResultBase$ResultRetrievalThread.run(CollectResultBase.java:79)
> Caused by: java.lang.NullPointerException
>     at org.apache.flink.table.client.gateway.local.result.MaterializedCollectBatchResult.processRecord(MaterializedCollectBatchResult.java:48)
>     at org.apache.flink.table.client.gateway.local.result.CollectResultBase$ResultRetrievalThread.run(CollectResultBase.java:76)
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [flink] KurtYoung merged pull request #15563: [FLINK-21748][sql-client] Fix unstable LocalExecutorITCase.testBatchQ…
KurtYoung merged pull request #15563:
URL: https://github.com/apache/flink/pull/15563
[jira] [Reopened] (FLINK-21879) ActiveResourceManagerTest.testWorkerRegistrationTimeoutNotCountingAllocationTime fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guowei Ma reopened FLINK-21879:
-------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16357=logs=34f41360-6c0d-54d3-11a1-0292a2def1d9=2d56e022-1ace-542f-bf1a-b37dd63243f2=5875

> ActiveResourceManagerTest.testWorkerRegistrationTimeoutNotCountingAllocationTime fails on AZP
> ---------------------------------------------------------------------------------------------
>
>                 Key: FLINK-21879
>                 URL: https://issues.apache.org/jira/browse/FLINK-21879
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.13.0
>            Reporter: Fabian Paul
>            Assignee: Till Rohrmann
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.13.0
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15047=logs=34f41360-6c0d-54d3-11a1-0292a2def1d9=2d56e022-1ace-542f-bf1a-b37dd63243f2=6760
> {code:java}
> [ERROR] testWorkerRegistrationTimeoutNotCountingAllocationTime(org.apache.flink.runtime.resourcemanager.active.ActiveResourceManagerTest)  Time elapsed: 0.388 s  <<< FAILURE!
> java.lang.AssertionError:
> Expected: an instance of org.apache.flink.runtime.registration.RegistrationResponse$Success
>      but: The ResourceManager does not recognize this TaskExecutor.> is a org.apache.flink.runtime.taskexecutor.TaskExecutorRegistrationRejection
>     at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>     at org.junit.Assert.assertThat(Assert.java:956)
>     at org.junit.Assert.assertThat(Assert.java:923)
>     at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManagerTest$13.lambda$new$2(ActiveResourceManagerTest.java:789)
>     at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManagerTest$Context.runTest(ActiveResourceManagerTest.java:857)
>     at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManagerTest$13.(ActiveResourceManagerTest.java:770)
>     at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManagerTest.testWorkerRegistrationTimeoutNotCountingAllocationTime(ActiveResourceManagerTest.java:753)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>     at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>     at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
[jira] [Issue Comment Deleted] (FLINK-21879) ActiveResourceManagerTest.testWorkerRegistrationTimeoutNotCountingAllocationTime fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guowei Ma updated FLINK-21879:
------------------------------
    Comment: was deleted

(was: another case
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16357=logs=34f41360-6c0d-54d3-11a1-0292a2def1d9=2d56e022-1ace-542f-bf1a-b37dd63243f2=6705)
[jira] [Closed] (FLINK-22166) Empty values with sort willl fail
[ https://issues.apache.org/jira/browse/FLINK-22166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jingsong Lee closed FLINK-22166.
--------------------------------
    Resolution: Fixed

master (1.13): 0e82a998afbb1d4be4fbf1a48c2a232ccc53fb9c

> Empty values with sort willl fail
> ---------------------------------
>
>                 Key: FLINK-22166
>                 URL: https://issues.apache.org/jira/browse/FLINK-22166
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Runtime
>            Reporter: Jingsong Lee
>            Assignee: Jingsong Lee
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.13.0
>
> SELECT * FROM (VALUES 1, 2, 3) AS T (a) WHERE a = 1 and a = 2 ORDER BY a
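The reproducer query above filters every row out before the sort (the predicate `a = 1 AND a = 2` is unsatisfiable), so the sort operator receives zero rows. A plain-Java sketch (not Flink's runtime) of the semantics the fix restores — sorting an empty input should yield an empty result, not an error:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class EmptySortExample {
    public static void main(String[] args) {
        // Mirrors: SELECT * FROM (VALUES 1, 2, 3) AS T (a)
        //          WHERE a = 1 AND a = 2 ORDER BY a
        List<Integer> result = Stream.of(1, 2, 3)
                .filter(a -> a == 1 && a == 2) // always false -> empty input
                .sorted()                      // sorting nothing is a no-op
                .collect(Collectors.toList());
        System.out.println(result.isEmpty()); // true
    }
}
```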
[jira] [Commented] (FLINK-22190) no guarantee on Flink exactly_once sink to Kafka
[ https://issues.apache.org/jira/browse/FLINK-22190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319910#comment-17319910 ]

Arvid Heise commented on FLINK-22190:
-------------------------------------

1. You get the byZeroException because you are dividing by 0 in user code ({{/ Random.nextInt(5)}}). That's something that you need to fix on your end.
2. Could you provide example output to show the duplicates? Where does the fail-over happen?

Note that exactly once does not mean deduplication of records or parts thereof. Exactly once ensures that there are no duplicates caused by fail-over/restarts.

> no guarantee on Flink exactly_once sink to Kafka
> ------------------------------------------------
>
>                 Key: FLINK-22190
>                 URL: https://issues.apache.org/jira/browse/FLINK-22190
>             Project: Flink
>          Issue Type: Bug
>          Components: API / DataStream
>    Affects Versions: 1.12.2
>         Environment: *flink: 1.12.2*
>                      *kafka: 2.7.0*
>            Reporter: Spongebob
>            Priority: Major
>
> When I tried to test the function of flink exactly_once sink to kafka, I found it does not run as expected. Here is the pipeline of the flink applications: raw data (flink app0) -> kafka topic1 -> flink app1 -> kafka topic2 -> flink app2; flink tasks may meet a byZeroException at random. Below are the codes:
> {code:java}
> // code placeholder
> // raw data, flink app0:
> class SimpleSource1 extends SourceFunction[String] {
>   var switch = true
>   val students: Array[String] = Array("Tom", "Jerry", "Gory")
>
>   override def run(sourceContext: SourceFunction.SourceContext[String]): Unit = {
>     var i = 0
>     while (switch) {
>       sourceContext.collect(s"${students(Random.nextInt(students.length))},$i")
>       i += 1
>       Thread.sleep(5000)
>     }
>   }
>
>   override def cancel(): Unit = switch = false
> }
>
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> val dataStream = streamEnv.addSource(new SimpleSource1)
> dataStream.addSink(new FlinkKafkaProducer[String]("xfy:9092",
>   "single-partition-topic-2", new SimpleStringSchema()))
> streamEnv.execute("sink kafka")
>
> // flink-app1:
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> streamEnv.enableCheckpointing(1000, CheckpointingMode.EXACTLY_ONCE)
> val prop = new Properties()
> prop.setProperty("bootstrap.servers", "xfy:9092")
> prop.setProperty("group.id", "test")
> val dataStream = streamEnv.addSource(new FlinkKafkaConsumer[String](
>   "single-partition-topic-2",
>   new SimpleStringSchema,
>   prop
> ))
> val resultStream = dataStream.map(x => {
>   val data = x.split(",")
>   (data(0), data(1), data(1).toInt / Random.nextInt(5)).toString()
> })
> resultStream.print().setParallelism(1)
> val propProducer = new Properties()
> propProducer.setProperty("bootstrap.servers", "xfy:9092")
> propProducer.setProperty("transaction.timeout.ms", s"${1000 * 60 * 5}")
> resultStream.addSink(new FlinkKafkaProducer[String](
>   "single-partition-topic",
>   new MyKafkaSerializationSchema("single-partition-topic"),
>   propProducer,
>   Semantic.EXACTLY_ONCE))
> streamEnv.execute("sink kafka")
>
> // flink-app2:
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> val prop = new Properties()
> prop.setProperty("bootstrap.servers", "xfy:9092")
> prop.setProperty("group.id", "test")
> prop.setProperty("isolation_level", "read_committed")
> val dataStream = streamEnv.addSource(new FlinkKafkaConsumer[String](
>   "single-partition-topic",
>   new SimpleStringSchema,
>   prop
> ))
> dataStream.print().setParallelism(1)
> streamEnv.execute("consumer kafka"){code}
>
> flink app1 will print some duplicate numbers, and to my expectation flink app2 will deduplicate them but the fact shows not.
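The failure Arvid points to in item 1 can be reproduced outside Flink: `java.util.Random.nextInt(5)` draws uniformly from 0..4 inclusive, so `x / Random.nextInt(5)` throws `ArithmeticException` intermittently. A minimal, self-contained sketch (names hypothetical) showing the hazard and the usual fix of shifting the range to 1..5. As a separate observation not raised in the thread: the consumer config key in app2 looks like it should be Kafka's `isolation.level` (dot, not underscore) for read-committed reads to take effect — worth double-checking against the Kafka consumer configuration reference.

```java
import java.util.Random;

public class DivByZeroDemo {
    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed for reproducibility
        int zeros = 0;
        for (int i = 0; i < 10_000; i++) {
            // nextInt(5) returns 0, 1, 2, 3 or 4 -- zero is a legal draw,
            // so "x / random.nextInt(5)" fails intermittently.
            if (random.nextInt(5) == 0) {
                zeros++;
            }
        }
        System.out.println(zeros > 0); // true: a zero divisor does occur

        // Safe variant: shift the range to 1..5 so the divisor is never zero.
        int divisor = random.nextInt(5) + 1;
        System.out.println(100 / divisor > 0); // true, and never throws
    }
}
```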
[GitHub] [flink] JingsongLi merged pull request #15571: [FLINK-22166][table] Empty values with sort willl fail
JingsongLi merged pull request #15571:
URL: https://github.com/apache/flink/pull/15571
[jira] [Commented] (FLINK-21879) ActiveResourceManagerTest.testWorkerRegistrationTimeoutNotCountingAllocationTime fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319909#comment-17319909 ]

Guowei Ma commented on FLINK-21879:
-----------------------------------

another case
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16357=logs=34f41360-6c0d-54d3-11a1-0292a2def1d9=2d56e022-1ace-542f-bf1a-b37dd63243f2=6705
[GitHub] [flink] chaozwn commented on pull request #15582: [FLINK-22250] flink sql parser module ParserResource.properties lack createSystemFunctionOnlySupportTemporary describe message.
chaozwn commented on pull request #15582:
URL: https://github.com/apache/flink/pull/15582#issuecomment-818457368

> What's the problem if we don't add it?

Two messages from the original file have already been added to ParserResource.properties, and Calcite itself likewise ships a corresponding CalciteResource.properties file for its own CalciteException.
[jira] [Commented] (FLINK-22100) BatchFileSystemITCaseBase.testPartialDynamicPartition fail because of no output for 900 seconds
[ https://issues.apache.org/jira/browse/FLINK-22100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319906#comment-17319906 ]

Guowei Ma commented on FLINK-22100:
-----------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16357=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=17003

> BatchFileSystemITCaseBase.testPartialDynamicPartition fail because of no output for 900 seconds
> -----------------------------------------------------------------------------------------------
>
>                 Key: FLINK-22100
>                 URL: https://issues.apache.org/jira/browse/FLINK-22100
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Runtime
>    Affects Versions: 1.13.0
>            Reporter: Guowei Ma
>            Assignee: Caizhi Weng
>            Priority: Critical
>              Labels: test-stability
>             Fix For: 1.13.0
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15988=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=16696
> {code:java}
> "main" #1 prio=5 os_prio=0 tid=0x7f844400b800 nid=0x7b96 waiting on condition [0x7f844be37000]
>    java.lang.Thread.State: TIMED_WAITING (sleeping)
>     at java.lang.Thread.sleep(Native Method)
>     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:229)
>     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:111)
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>     at org.apache.flink.table.planner.connectors.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:117)
>     at org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:351)
>     at java.util.Iterator.forEachRemaining(Iterator.java:115)
>     at org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109)
>     at org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
>     at org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
> {code}
[jira] [Commented] (FLINK-20747) ClassCastException when using MAX aggregate function
[ https://issues.apache.org/jira/browse/FLINK-20747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319905#comment-17319905 ]

Jingsong Lee commented on FLINK-20747:
--------------------------------------

I think this is caused by FLINK-13191 and is resolved by FLINK-21070. [~zengjinbo] You can try release-1.12.2; feel free to re-open this Jira if you still have a problem.

> ClassCastException when using MAX aggregate function
> ----------------------------------------------------
>
>                 Key: FLINK-20747
>                 URL: https://issues.apache.org/jira/browse/FLINK-20747
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.12.0
>            Reporter: zengjinbo
>            Assignee: Jingsong Lee
>            Priority: Critical
>             Fix For: 1.13.0
>
>         Attachments: image-2020-12-23-18-04-21-079.png
>
> During the process of upgrading to 1.12.0, I found that Flink SQL 1.11.1 is not compatible:
> java.lang.ClassCastException: java.lang.Integer cannot be cast to org.apache.flink.table.data.StringData
>     at org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$MaxWithRetractAccumulator$Converter.toInternal(Unknown Source)
>     at org.apache.flink.table.data.conversion.StructuredObjectConverter.toInternal(StructuredObjectConverter.java:92)
>     at org.apache.flink.table.data.conversion.StructuredObjectConverter.toInternal(StructuredObjectConverter.java:47)
>     at org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:59)
>     at GroupAggsHandler$875.getAccumulators(Unknown Source)
>     at org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:175)
>     at org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:45)
>     at org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>     at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:193)
>     at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:179)
>     at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:152)
>     at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:67)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:372)
>     at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:186)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:575)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:539)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547)
>
> !image-2020-12-23-18-04-21-079.png!
[jira] [Closed] (FLINK-20747) ClassCastException when using MAX aggregate function
[ https://issues.apache.org/jira/browse/FLINK-20747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jingsong Lee closed FLINK-20747.
--------------------------------
    Resolution: Duplicate
[GitHub] [flink] flinkbot edited a comment on pull request #15582: [FLINK-22250] flink sql parser module ParserResource.properties lack createSystemFunctionOnlySupportTemporary describe message.
flinkbot edited a comment on pull request #15582:
URL: https://github.com/apache/flink/pull/15582#issuecomment-818444729

## CI report:

* a1730e1eca63cb5ae0ef0468a5ff76c9efdb5ccc Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16418)

Bot commands

The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-21025) SQLClientHBaseITCase fails when untarring HBase
[ https://issues.apache.org/jira/browse/FLINK-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319904#comment-17319904 ] Guowei Ma commented on FLINK-21025: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16348=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=27549 > SQLClientHBaseITCase fails when untarring HBase > --- > > Key: FLINK-21025 > URL: https://issues.apache.org/jira/browse/FLINK-21025 > Project: Flink > Issue Type: Bug > Components: Connectors / HBase, Table SQL / Client, Tests >Affects Versions: 1.13.0 >Reporter: Dawid Wysakowicz >Priority: Critical > Labels: test-stability > Fix For: 1.13.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12210=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529 > {code} > [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 908.614 s <<< FAILURE! - in > org.apache.flink.tests.util.hbase.SQLClientHBaseITCase > Jan 19 08:19:36 [ERROR] testHBase[1: > hbase-version:2.2.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) > Time elapsed: 615.099 s <<< ERROR! > Jan 19 08:19:36 java.io.IOException: > Jan 19 08:19:36 Process execution failed due error. 
Error output: > Jan 19 08:19:36 at > org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:133) > Jan 19 08:19:36 at > org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108) > Jan 19 08:19:36 at > org.apache.flink.tests.util.AutoClosableProcess.runBlocking(AutoClosableProcess.java:70) > Jan 19 08:19:36 at > org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.setupHBaseDist(LocalStandaloneHBaseResource.java:86) > Jan 19 08:19:36 at > org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.before(LocalStandaloneHBaseResource.java:76) > Jan 19 08:19:36 at > org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46) > Jan 19 08:19:36 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > Jan 19 08:19:36 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > Jan 19 08:19:36 at org.junit.rules.RunRules.evaluate(RunRules.java:20) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > Jan 19 08:19:36 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > Jan 19 08:19:36 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > Jan 19 08:19:36 at org.junit.runners.Suite.runChild(Suite.java:128) > Jan 19 08:19:36 at org.junit.runners.Suite.runChild(Suite.java:27) > Jan 19 08:19:36 at > 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jan 19 08:19:36 at > org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48) > Jan 19 08:19:36 at org.junit.rules.RunRules.evaluate(RunRules.java:20) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > Jan 19 08:19:36 at org.junit.runners.Suite.runChild(Suite.java:128) > Jan 19 08:19:36 at org.junit.runners.Suite.runChild(Suite.java:27) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jan 19 08:19:36 at >
[GitHub] [flink] flinkbot edited a comment on pull request #15564: [FLINK-22207][connectors/hive]Hive Catalog retrieve Flink Properties …
flinkbot edited a comment on pull request #15564: URL: https://github.com/apache/flink/pull/15564#issuecomment-817625124 ## CI report: * cd5ab564ce3aa46edd059bf1113f5053debc7418 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16417) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-21025) SQLClientHBaseITCase fails when untarring HBase
[ https://issues.apache.org/jira/browse/FLINK-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-21025: -- Affects Version/s: 1.12.2 > SQLClientHBaseITCase fails when untarring HBase > --- > > Key: FLINK-21025 > URL: https://issues.apache.org/jira/browse/FLINK-21025 > Project: Flink > Issue Type: Bug > Components: Connectors / HBase, Table SQL / Client, Tests >Affects Versions: 1.12.2, 1.13.0 >Reporter: Dawid Wysakowicz >Priority: Critical > Labels: test-stability > Fix For: 1.13.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12210=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529 > {code} > [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 908.614 s <<< FAILURE! - in > org.apache.flink.tests.util.hbase.SQLClientHBaseITCase > Jan 19 08:19:36 [ERROR] testHBase[1: > hbase-version:2.2.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) > Time elapsed: 615.099 s <<< ERROR! > Jan 19 08:19:36 java.io.IOException: > Jan 19 08:19:36 Process execution failed due error. 
Error output: > Jan 19 08:19:36 at > org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:133) > Jan 19 08:19:36 at > org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108) > Jan 19 08:19:36 at > org.apache.flink.tests.util.AutoClosableProcess.runBlocking(AutoClosableProcess.java:70) > Jan 19 08:19:36 at > org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.setupHBaseDist(LocalStandaloneHBaseResource.java:86) > Jan 19 08:19:36 at > org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.before(LocalStandaloneHBaseResource.java:76) > Jan 19 08:19:36 at > org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46) > Jan 19 08:19:36 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > Jan 19 08:19:36 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > Jan 19 08:19:36 at org.junit.rules.RunRules.evaluate(RunRules.java:20) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > Jan 19 08:19:36 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > Jan 19 08:19:36 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > Jan 19 08:19:36 at org.junit.runners.Suite.runChild(Suite.java:128) > Jan 19 08:19:36 at org.junit.runners.Suite.runChild(Suite.java:27) > Jan 19 08:19:36 at > 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jan 19 08:19:36 at > org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48) > Jan 19 08:19:36 at org.junit.rules.RunRules.evaluate(RunRules.java:20) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > Jan 19 08:19:36 at org.junit.runners.Suite.runChild(Suite.java:128) > Jan 19 08:19:36 at org.junit.runners.Suite.runChild(Suite.java:27) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jan 19 08:19:36 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > Jan 19 08:19:36 at > org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) > Jan 19
[GitHub] [flink] leonardBang commented on pull request #15578: [FLINK-21431][upsert-kafka] Use testcontainers for UpsertKafkaTableITCase
leonardBang commented on pull request #15578: URL: https://github.com/apache/flink/pull/15578#issuecomment-818454928 cc @wuchong could you help review this PR? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wuchong commented on pull request #15582: [FLINK-22250] flink sql parser module ParserResource.properties lack createSystemFunctionOnlySupportTemporary describe message.
wuchong commented on pull request #15582: URL: https://github.com/apache/flink/pull/15582#issuecomment-818453831 What's the problem if we don't add it? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-22141) Manually test exactly-once JDBC sink
[ https://issues.apache.org/jira/browse/FLINK-22141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319900#comment-17319900 ] Yuan Mei commented on FLINK-22141: -- TODO list: 1. Document the limitations of MySQL XA transaction support; refer to [FLINK-22239] Improve support for JdbcXaSinkFunction > Manually test exactly-once JDBC sink > > > Key: FLINK-22141 > URL: https://issues.apache.org/jira/browse/FLINK-22141 > Project: Flink > Issue Type: Test > Components: Connectors / JDBC >Reporter: Roman Khachatryan >Priority: Blocker > Labels: release-testing > Fix For: 1.13.0 > > > In FLINK-15578, an API and its implementation were added to the JDBC connector to > support exactly-once semantics for sinks. The implementation uses JDBC XA > transactions. > The scope of this task is to make sure: > # The feature is well-documented > # The API is reasonably easy to use > # The implementation works as expected > ## normal case: database is updated on checkpointing > ## failure and recovery case: no duplicates inserted, no records skipped > ## several DBs: postgresql, mssql, oracle (mysql has a known issue: > FLINK-21743) > ## concurrent checkpoints > 1, DoP > 1 > # Logging is meaningful -- This message was sent by Atlassian Jira (v8.3.4#803005)
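For orientation while testing the ticket above: the exactly-once pattern the XA-based sink implements can be sketched as a toy in-memory two-phase commit (plain Java, not the Flink implementation). Records are staged in a per-checkpoint transaction and become visible atomically only when that checkpoint completes, so a failure before commit loses nothing visible, and replay after recovery produces no duplicates.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToyTwoPhaseCommitSink {
    private final List<String> committed = new ArrayList<>();            // the "database"
    private final Map<Long, List<String>> pending = new HashMap<>();     // staged writes per checkpoint
    private long currentCheckpoint = 0;

    void write(String record) {
        pending.computeIfAbsent(currentCheckpoint, k -> new ArrayList<>()).add(record);
    }

    // Phase 1: on snapshot, seal the current transaction and open a new one.
    long snapshot() {
        return currentCheckpoint++;
    }

    // Phase 2: on checkpoint-complete notification, atomically commit the sealed transaction.
    void notifyCheckpointComplete(long checkpointId) {
        List<String> staged = pending.remove(checkpointId);
        if (staged != null) {
            committed.addAll(staged);
        }
    }

    // On failure/restart, staged-but-uncommitted records are rolled back;
    // the source replays them, so nothing is lost and nothing is duplicated.
    void restoreAfterFailure() {
        pending.clear();
    }

    List<String> committedRecords() {
        return committed;
    }

    public static void main(String[] args) {
        ToyTwoPhaseCommitSink sink = new ToyTwoPhaseCommitSink();
        sink.write("a");
        long cp = sink.snapshot();
        sink.write("b");                  // belongs to the next, not-yet-committed checkpoint
        sink.notifyCheckpointComplete(cp);
        sink.restoreAfterFailure();       // "b" was never committed; it will be replayed
        System.out.println(sink.committedRecords()); // prints [a]
    }
}
```

This mirrors the "normal case" and "failure and recovery case" listed in the ticket: the database is updated on checkpointing, and uncommitted work is rolled back and replayed.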
[GitHub] [flink] flinkbot commented on pull request #15582: [FLINK-22250] flink sql parser module ParserResource.properties lack createSystemFunctionOnlySupportTemporary describe message.
flinkbot commented on pull request #15582: URL: https://github.com/apache/flink/pull/15582#issuecomment-818444729 ## CI report: * a1730e1eca63cb5ae0ef0468a5ff76c9efdb5ccc UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #15582: [FLINK-22250] flink sql parser module ParserResource.properties lack createSystemFunctionOnlySupportTemporary describe message.
flinkbot commented on pull request #15582: URL: https://github.com/apache/flink/pull/15582#issuecomment-818439853 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit a1730e1eca63cb5ae0ef0468a5ff76c9efdb5ccc (Tue Apr 13 05:06:15 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-22250).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-22250) flink-sql-parser model Class ParserResource lack ParserResource.properties
[ https://issues.apache.org/jira/browse/FLINK-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-22250: --- Labels: pull-request-available (was: ) > flink-sql-parser model Class ParserResource lack ParserResource.properties > -- > > Key: FLINK-22250 > URL: https://issues.apache.org/jira/browse/FLINK-22250 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Affects Versions: 1.13.0 >Reporter: WeiNan Zhao >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > > flink sql parser module lack a base message in ParserResource.properties. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] chaozwn opened a new pull request #15582: [FLINK-22250] flink sql parser module ParserResource.properties lack createSystemFunctionOnlySupportTemporary describe message.
chaozwn opened a new pull request #15582: URL: https://github.com/apache/flink/pull/15582 …createSystemFunctionOnlySupportTemporary describe message. ## What is the purpose of the change *(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)* ## Brief change log *(for example:)* - *The TaskInfo is stored in the blob store on job creation time as a persistent artifact* - *Deployments RPC transmits only the blob storage reference* - *TaskManagers retrieve the TaskInfo from the blob cache* ## Verifying this change *(Please pick either of the following options)* This change is a trivial rework / code cleanup without any test coverage. *(or)* This change is already covered by existing tests, such as *(please describe tests)*. *(or)* This change added tests and can be verified as follows: *(example:)* - *Added integration tests for end-to-end deployment with large payloads (100MB)* - *Extended integration test for recovery after master (JobManager) failure* - *Added test that validates that TaskInfo is transferred only once across recoveries* - *Manually verified the change by running a 4 node cluser with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no) - The serializers: (yes / no / don't know) - The runtime per-record code paths (performance sensitive): (yes / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know) - The S3 file system connector: (yes / no / don't know) ## 
Documentation - Does this pull request introduce a new feature? (yes / no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15581: [Hotfix][python] Fix variable typo.
flinkbot edited a comment on pull request #15581: URL: https://github.com/apache/flink/pull/15581#issuecomment-818431544 ## CI report: * 7e80f21e80bb9d5663d31cb19cd2811cb19037ec Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16416) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15564: [FLINK-22207][connectors/hive]Hive Catalog retrieve Flink Properties …
flinkbot edited a comment on pull request #15564: URL: https://github.com/apache/flink/pull/15564#issuecomment-817625124 ## CI report: * ad833779d272f0f35cb82b26f559131e2192cfee Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16415) * cd5ab564ce3aa46edd059bf1113f5053debc7418 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-22250) flink-sql-parser model Class ParserResource lack ParserResource.properties
WeiNan Zhao created FLINK-22250: --- Summary: flink-sql-parser model Class ParserResource lack ParserResource.properties Key: FLINK-22250 URL: https://issues.apache.org/jira/browse/FLINK-22250 Project: Flink Issue Type: Improvement Components: Table SQL / API Affects Versions: 1.13.0 Reporter: WeiNan Zhao Fix For: 1.13.0 flink sql parser module lack a base message in ParserResource.properties. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] leonardBang commented on a change in pull request #15495: [FLINK-21305][table-planner-blink] Fix Cumulative and Hopping window should accumulate late events belonging to the cleaned s
leonardBang commented on a change in pull request #15495: URL: https://github.com/apache/flink/pull/15495#discussion_r612122508 ## File path: flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/aggregate/window/processors/SliceSharedWindowAggProcessor.java ## @@ -46,6 +46,7 @@ private final SliceSharedAssigner sliceSharedAssigner; private final WindowIsEmptySupplier emptySupplier; +private final SliceMergeTargetGetter mergeTargetGetter = new SliceMergeTargetGetter(); Review comment: move the initialization to class constructor? ## File path: flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/operators/aggregate/window/SlicingWindowAggOperatorTest.java ## @@ -186,20 +189,20 @@ public void testEventTimeHoppingWindows() throws Exception { ASSERTER.assertOutputEqualsSorted( "Output was not correct.", expectedOutput, testHarness.getOutput()); -// late element, should be dropped +// late element for [1K, 4K), but should be accumulated into [2K, 5K), [3K, 6K) testHarness.processElement(insertRecord("key2", 1, 3500L)); testHarness.processWatermark(new Watermark(4999)); -expectedOutput.add(insertRecord("key2", 2L, 2L, localMills(2000L), localMills(5000L))); +expectedOutput.add(insertRecord("key2", 3L, 3L, localMills(2000L), localMills(5000L))); expectedOutput.add(new Watermark(4999)); ASSERTER.assertOutputEqualsSorted( "Output was not correct.", expectedOutput, testHarness.getOutput()); -// late element, should be dropped -testHarness.processElement(insertRecord("key1", 1, 4999L)); +// totally late element, should be dropped Review comment: nit: we can use `late than the slice` and `late then the window` ## File path: flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/aggregate/window/processors/SliceSharedWindowAggProcessor.java ## @@ -123,6 +124,19 @@ public void merge(@Nullable Long mergeResult, Iterable toBeMerged) throws } } +protected long 
sliceStateMergeTarget(long sliceToMerge) throws Exception { +mergeTargetGetter.mergeTarget = null; +sliceSharedAssigner.mergeSlices(sliceToMerge, mergeTargetGetter); Review comment: I get the point, but this doesn't look very clean. Could we rename `SliceMergeTargetGetter` to `SliceMergeTargetHelper`, since it doesn't expose a `getter` method, and add `setMergeTarget`/`getMergeTarget` methods? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
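As background for the late-element discussion in this review: an element is assigned to every hopping (sliding) window whose range contains its timestamp, so an element can be late for the earliest of those windows while the later ones are still open. A minimal standalone sketch of the standard assignment (not the Flink slicing code; assumes non-negative timestamps):

```java
import java.util.ArrayList;
import java.util.List;

public class HoppingWindowAssignment {
    // Returns the start of every hopping window [start, start + size)
    // that contains the timestamp, latest first.
    static List<Long> assignedWindowStarts(long timestamp, long size, long slide) {
        List<Long> starts = new ArrayList<>();
        long lastStart = timestamp - (timestamp % slide); // largest window start <= timestamp
        for (long start = lastStart; start > timestamp - size; start -= slide) {
            starts.add(start);
        }
        return starts;
    }

    public static void main(String[] args) {
        // The test-case setup discussed above: size=3000, slide=1000.
        // An element at 3500 belongs to [1000,4000), [2000,5000), [3000,6000):
        // after the watermark passes 3999 it is late for [1000,4000),
        // but should still accumulate into the two later windows.
        System.out.println(assignedWindowStarts(3500L, 3000L, 1000L)); // prints [3000, 2000, 1000]
    }
}
```

An element whose timestamp is behind *all* of its assigned windows ("later than the window", e.g. 4999 once those windows have fired) has nowhere left to accumulate and is dropped, which is the distinction the review comments draw.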
[GitHub] [flink] godfreyhe commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan
godfreyhe commented on pull request #15307: URL: https://github.com/apache/flink/pull/15307#issuecomment-818431839 @YuvalItzchakov , you can create a new PR to cherry-pick this change to 1.12 once this one is merged. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #15581: [Hotfix][python] Fix variable typo.
flinkbot commented on pull request #15581: URL: https://github.com/apache/flink/pull/15581#issuecomment-818431544 ## CI report: * 7e80f21e80bb9d5663d31cb19cd2811cb19037ec UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15564: [FLINK-22207][connectors/hive]Hive Catalog retrieve Flink Properties …
flinkbot edited a comment on pull request #15564: URL: https://github.com/apache/flink/pull/15564#issuecomment-817625124 ## CI report: * ad833779d272f0f35cb82b26f559131e2192cfee Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16415) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-22249) JobMasterStopWithSavepointITCase failed due to status is FAILING
Guowei Ma created FLINK-22249: - Summary: JobMasterStopWithSavepointITCase failed due to status is FAILING Key: FLINK-22249 URL: https://issues.apache.org/jira/browse/FLINK-22249 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing, Runtime / Coordination Affects Versions: 1.13.0 Reporter: Guowei Ma https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=a0a633b8-47ef-5c5a-2806-3c13b9e48228=4472 {code:java} [ERROR] Failures: [ERROR] JobMasterStopWithSavepointITCase.throwingExceptionOnCallbackWithNoRestartsShouldFailTheSuspend:133->throwingExceptionOnCallbackWithoutRestartsHelper:155 Expected: but: was [ERROR] JobMasterStopWithSavepointITCase.throwingExceptionOnCallbackWithNoRestartsShouldFailTheTerminate:138->throwingExceptionOnCallbackWithoutRestartsHelper:155 Expected: but: was [ERROR] Errors: [ERROR] JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished:103->stopWithSavepointNormalExecutionHelper:113->setUpJobGraph:307 » IllegalState [ERROR] JobMasterStopWithSavepointITCase.testRestartCheckpointCoordinatorIfStopWithSavepointFails:237 » IllegalState [INFO] [ERROR] Tests run: 1645, Failures: 2, Errors: 2, Skipped: 51 {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22100) BatchFileSystemITCaseBase.testPartialDynamicPartition fail because of no output for 900 seconds
[ https://issues.apache.org/jira/browse/FLINK-22100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319884#comment-17319884 ] Guowei Ma commented on FLINK-22100: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=b6c4efed-9c7d-55ea-03a9-9bd7d5b08e4c=13120 > BatchFileSystemITCaseBase.testPartialDynamicPartition fail because of no > output for 900 seconds > --- > > Key: FLINK-22100 > URL: https://issues.apache.org/jira/browse/FLINK-22100 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Assignee: Caizhi Weng >Priority: Critical > Labels: test-stability > Fix For: 1.13.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15988=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=16696 > {code:java} > "main" #1 prio=5 os_prio=0 tid=0x7f844400b800 nid=0x7b96 waiting on > condition [0x7f844be37000] >java.lang.Thread.State: TIMED_WAITING (sleeping) > at java.lang.Thread.sleep(Native Method) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:229) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:111) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > at > org.apache.flink.table.planner.connectors.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:117) > at > org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:351) > at java.util.Iterator.forEachRemaining(Iterator.java:115) > at > org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109) > at > 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300) > at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22248) JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished fail because of TestTimedOutException
[ https://issues.apache.org/jira/browse/FLINK-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319883#comment-17319883 ] Guowei Ma commented on FLINK-22248: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=f2b08047-82c3-520f-51ee-a30fd6254285=601c4a98-b94a-54db-dc38-ae78131029dc=10517 > JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished > fail because of TestTimedOutException > > > Key: FLINK-22248 > URL: https://issues.apache.org/jira/browse/FLINK-22248 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Priority: Critical > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56=4325 > {code:java} > org.junit.runners.model.TestTimedOutException: test timed out after 5000 > milliseconds > at sun.misc.Unsafe.park(Native Method) > at > java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) > at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.setUpJobGraph(JobMasterStopWithSavepointITCase.java:308) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:113) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:103) > at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22100) BatchFileSystemITCaseBase.testPartialDynamicPartition fail because of no output for 900 seconds
[ https://issues.apache.org/jira/browse/FLINK-22100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319881#comment-17319881 ] Guowei Ma commented on FLINK-22100: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=23283 > BatchFileSystemITCaseBase.testPartialDynamicPartition fail because of no > output for 900 seconds > --- > > Key: FLINK-22100 > URL: https://issues.apache.org/jira/browse/FLINK-22100 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Assignee: Caizhi Weng >Priority: Critical > Labels: test-stability > Fix For: 1.13.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15988=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=16696 > {code:java} > "main" #1 prio=5 os_prio=0 tid=0x7f844400b800 nid=0x7b96 waiting on > condition [0x7f844be37000] >java.lang.Thread.State: TIMED_WAITING (sleeping) > at java.lang.Thread.sleep(Native Method) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:229) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:111) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > at > org.apache.flink.table.planner.connectors.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:117) > at > org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:351) > at java.util.Iterator.forEachRemaining(Iterator.java:115) > at > org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109) > at > 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300) > at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
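The hang above comes from a sleep-and-retry loop (CollectResultFetcher.sleepBeforeRetry / next) that has no overall bound of its own, so the 900-second no-output watchdog is what finally kills the build. A minimal, self-contained Java sketch of the bounded alternative — poll with a per-retry sleep but also an overall deadline — is below; the names and signature are illustrative, not Flink's actual API:

```java
import java.util.function.Supplier;

public class BoundedPollSketch {
    // Polls the supplier until it yields a non-null result or the overall
    // deadline elapses. This is a bounded variant of a sleep-and-retry
    // fetch loop; on timeout it returns null so the caller can fail fast
    // with a useful message instead of hanging until an external watchdog
    // aborts the run.
    static <T> T pollUntil(Supplier<T> supplier, long deadlineMillis, long sleepMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + deadlineMillis;
        while (System.currentTimeMillis() < deadline) {
            T result = supplier.get();
            if (result != null) {
                return result;
            }
            Thread.sleep(sleepMillis);
        }
        return null; // deadline exceeded; caller decides how to report it
    }
}
```

A test that wants diagnosable failures would treat the null return as an assertion failure rather than retrying indefinitely.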
[jira] [Commented] (FLINK-22248) JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished fail because of TestTimedOutException
[ https://issues.apache.org/jira/browse/FLINK-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319880#comment-17319880 ] Guowei Ma commented on FLINK-22248: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=baf26b34-3c6a-54e8-f93f-cf269b32f802=6dff16b1-bf54-58f3-23c6-76282f49a185=4467 > JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished > fail because of TestTimedOutException > > > Key: FLINK-22248 > URL: https://issues.apache.org/jira/browse/FLINK-22248 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Priority: Critical > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56=4325 > {code:java} > org.junit.runners.model.TestTimedOutException: test timed out after 5000 > milliseconds > at sun.misc.Unsafe.park(Native Method) > at > java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) > at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.setUpJobGraph(JobMasterStopWithSavepointITCase.java:308) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:113) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:103) > at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan
YuvalItzchakov commented on pull request #15307: URL: https://github.com/apache/flink/pull/15307#issuecomment-818426492 @godfreyhe It may be better for me as well, since my source is currently built on top of 1.12 and I'm still not entirely sure which changes go into 1.13. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (FLINK-22245) Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and CliTableResultView
[ https://issues.apache.org/jira/browse/FLINK-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-22245: --- Assignee: Shengkai Fang > Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and > CliTableResultView > -- > > Key: FLINK-22245 > URL: https://issues.apache.org/jira/browse/FLINK-22245 > Project: Flink > Issue Type: Bug > Components: Table SQL / Client >Affects Versions: 1.13.0 >Reporter: Shengkai Fang >Assignee: Shengkai Fang >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > Attachments: sql-client.png > > > The max column width in CliChangelogResultView and CliTableResultView is > hard-coded to 25. As the attached picture shows, that is not enough. We should > reuse {{PrintUtils.MAX_COLUMN_WIDTH}} instead. > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22248) JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished fail because of TestTimedOutException
[ https://issues.apache.org/jira/browse/FLINK-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319879#comment-17319879 ] Guowei Ma commented on FLINK-22248: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=c2734c79-73b6-521c-e85a-67c7ecae9107=9619 > JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished > fail because of TestTimedOutException > > > Key: FLINK-22248 > URL: https://issues.apache.org/jira/browse/FLINK-22248 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Priority: Critical > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56=4325 > {code:java} > org.junit.runners.model.TestTimedOutException: test timed out after 5000 > milliseconds > at sun.misc.Unsafe.park(Native Method) > at > java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) > at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.setUpJobGraph(JobMasterStopWithSavepointITCase.java:308) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:113) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:103) > at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-22248) JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished fail because of TestTimedOutException
[ https://issues.apache.org/jira/browse/FLINK-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-22248: -- Priority: Critical (was: Major) > JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished > fail because of TestTimedOutException > > > Key: FLINK-22248 > URL: https://issues.apache.org/jira/browse/FLINK-22248 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Priority: Critical > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56=4325 > {code:java} > org.junit.runners.model.TestTimedOutException: test timed out after 5000 > milliseconds > at sun.misc.Unsafe.park(Native Method) > at > java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) > at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.setUpJobGraph(JobMasterStopWithSavepointITCase.java:308) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:113) > at > org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:103) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-22248) JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished fail because of TestTimedOutException
Guowei Ma created FLINK-22248: - Summary: JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished fail because of TestTimedOutException Key: FLINK-22248 URL: https://issues.apache.org/jira/browse/FLINK-22248 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing, Runtime / Coordination Affects Versions: 1.13.0 Reporter: Guowei Ma https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16405=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56=4325 {code:java} org.junit.runners.model.TestTimedOutException: test timed out after 5000 milliseconds at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.setUpJobGraph(JobMasterStopWithSavepointITCase.java:308) at org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:113) at org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.suspendWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:103) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) 
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
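The stack trace shows the test parked in an untimed CountDownLatch.await inside setUpJobGraph, so the only thing that ends it is JUnit's 5000 ms watchdog firing a TestTimedOutException with no hint about which condition never happened. A common hardening pattern is to use the timed await variant and fail explicitly on timeout. A small, self-contained sketch (helper names are illustrative, not from the test in question):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchAwaitSketch {
    // Bounded wait: returns true if the latch was released within the
    // timeout, false otherwise, instead of parking indefinitely the way a
    // plain latch.await() does.
    static boolean awaitOrTimeout(CountDownLatch latch, long timeoutMillis)
            throws InterruptedException {
        return latch.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    // Latch already released: the wait returns promptly with true.
    static boolean releasedInTime() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        latch.countDown();
        return awaitOrTimeout(latch, 100);
    }

    // Latch never released: the wait gives up after the timeout with false.
    static boolean timesOut() throws InterruptedException {
        return awaitOrTimeout(new CountDownLatch(1), 50);
    }
}
```

In a test, `assertTrue("job graph was not set up in time", awaitOrTimeout(latch, ...))` pinpoints the stuck condition instead of leaving only a generic timeout trace.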
[jira] [Commented] (FLINK-21995) YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents: File channel manager has shutdown.
[ https://issues.apache.org/jira/browse/FLINK-21995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319877#comment-17319877 ] Guowei Ma commented on FLINK-21995: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16406=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=02d88c1a-f1b3-5a8c-4b4a-cf43c70f99e1=27277 > YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents: > File channel manager has shutdown. > > > Key: FLINK-21995 > URL: https://issues.apache.org/jira/browse/FLINK-21995 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.13.0 >Reporter: Dawid Wysakowicz >Priority: Major > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15546=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf=30260 > {code} > [ERROR] > YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents:84->YarnTestBase.ensureNoProhibitedStringInLogFiles:589 > Found a file > /__w/2/s/flink-yarn-tests/target/flink-yarn-tests-fifo-secured/flink-yarn-tests-fifo-secured-logDir-nm-1_0/application_1616768573754_0001/container_1616768573754_0001_01_02/taskmanager.log > with a prohibited string (one of [Exception, Started > SelectChannelConnector@0.0.0.0:8081]). Excerpts: > [ > 2021-03-26 14:23:37,554 INFO > org.apache.flink.configuration.GlobalConfiguration [] - Loading > configuration property: taskmanager.numberOfTaskSlots, 1 > 2021-03-26 14:23:37,555 INFO > org.apache.flink.configuration.GlobalConfiguration [] - Loading > configuration property: jobmanager.memory.process.size, 1600m > 2021-03-26 14:23:37,555 INFO > org.apache.flink.configuration.GlobalConfiguration [] - Loading > configuration property: jobmanager.rpc.port, 6123 > 2021-03-26 14:23:37,660 INFO org.apache.flink.yarn.YarnTaskExecutorRunner > [] - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested. 
> 2021-03-26 14:23:37,663 INFO > org.apache.flink.runtime.blob.TransientBlobCache [] - Shutting > down BLOB cache > 2021-03-26 14:23:37,667 INFO > org.apache.flink.runtime.blob.PermanentBlobCache [] - Shutting > down BLOB cache > 2021-03-26 14:23:37,668 INFO > org.apache.flink.runtime.state.TaskExecutorLocalStateStoresManager [] - > Shutting down TaskExecutorLocalStateStoresManager. > 2021-03-26 14:23:37,677 INFO org.apache.flink.runtime.filecache.FileCache > [] - removed file cache directory > /__w/2/s/flink-yarn-tests/target/flink-yarn-tests-fifo-secured/flink-yarn-tests-fifo-secured-localDir-nm-1_0/usercache/hadoop/appcache/application_1616768573754_0001/flink-dist-cache-eefe237c-906b-4b9a-b63f-997ef24c58ba > 2021-03-26 14:23:37,678 INFO > org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - > FileChannelManager removed spill file directory > /__w/2/s/flink-yarn-tests/target/flink-yarn-tests-fifo-secured/flink-yarn-tests-fifo-secured-localDir-nm-1_0/usercache/hadoop/appcache/application_1616768573754_0001/flink-netty-shuffle-d0a7a4ea-92b2-4614-bfed-2e949a36b9bc > 2021-03-26 14:23:37,680 WARN org.apache.flink.runtime.taskmanager.Task > [] - Keyed Aggregation -> Sink: Print to Std. Out (1/1)#0 > (e81d01c2526376e7d0b4b33ccd8d8c7a) switched from RUNNING to FAILED with > failure cause: java.lang.IllegalStateException: File channel manager has > shutdown. > 2021-03-26 14:23:37,680 WARN org.apache.flink.runtime.taskmanager.Task > [] - Keyed Aggregation -> Sink: Print to Std. Out (1/1)#0 > (e81d01c2526376e7d0b4b33ccd8d8c7a) switched from RUNNING to FAILED with > failure cause: java.lang.IllegalStateException: File channel manager has > shutdown. 
> at > org.apache.flink.util.Preconditions.checkState(Preconditions.java:193) > at > org.apache.flink.runtime.io.disk.FileChannelManagerImpl.getPaths(FileChannelManagerImpl.java:122) > at > org.apache.flink.runtime.io.disk.iomanager.IOManager.getSpillingDirectoriesPaths(IOManager.java:118) > at > org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.lambda$getRecordDeserializers$0(StreamTaskNetworkInput.java:90) > at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321) > at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169) > at > java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384) > at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) > at >
[jira] [Updated] (FLINK-21995) YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents: File channel manager has shutdown.
[ https://issues.apache.org/jira/browse/FLINK-21995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-21995: -- Affects Version/s: 1.11.3 > YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents: > File channel manager has shutdown. > > > Key: FLINK-21995 > URL: https://issues.apache.org/jira/browse/FLINK-21995 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.11.3, 1.13.0 >Reporter: Dawid Wysakowicz >Priority: Major > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15546=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf=30260 > {code} > [ERROR] > YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents:84->YarnTestBase.ensureNoProhibitedStringInLogFiles:589 > Found a file > /__w/2/s/flink-yarn-tests/target/flink-yarn-tests-fifo-secured/flink-yarn-tests-fifo-secured-logDir-nm-1_0/application_1616768573754_0001/container_1616768573754_0001_01_02/taskmanager.log > with a prohibited string (one of [Exception, Started > SelectChannelConnector@0.0.0.0:8081]). Excerpts: > [ > 2021-03-26 14:23:37,554 INFO > org.apache.flink.configuration.GlobalConfiguration [] - Loading > configuration property: taskmanager.numberOfTaskSlots, 1 > 2021-03-26 14:23:37,555 INFO > org.apache.flink.configuration.GlobalConfiguration [] - Loading > configuration property: jobmanager.memory.process.size, 1600m > 2021-03-26 14:23:37,555 INFO > org.apache.flink.configuration.GlobalConfiguration [] - Loading > configuration property: jobmanager.rpc.port, 6123 > 2021-03-26 14:23:37,660 INFO org.apache.flink.yarn.YarnTaskExecutorRunner > [] - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested. 
> 2021-03-26 14:23:37,663 INFO > org.apache.flink.runtime.blob.TransientBlobCache [] - Shutting > down BLOB cache > 2021-03-26 14:23:37,667 INFO > org.apache.flink.runtime.blob.PermanentBlobCache [] - Shutting > down BLOB cache > 2021-03-26 14:23:37,668 INFO > org.apache.flink.runtime.state.TaskExecutorLocalStateStoresManager [] - > Shutting down TaskExecutorLocalStateStoresManager. > 2021-03-26 14:23:37,677 INFO org.apache.flink.runtime.filecache.FileCache > [] - removed file cache directory > /__w/2/s/flink-yarn-tests/target/flink-yarn-tests-fifo-secured/flink-yarn-tests-fifo-secured-localDir-nm-1_0/usercache/hadoop/appcache/application_1616768573754_0001/flink-dist-cache-eefe237c-906b-4b9a-b63f-997ef24c58ba > 2021-03-26 14:23:37,678 INFO > org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - > FileChannelManager removed spill file directory > /__w/2/s/flink-yarn-tests/target/flink-yarn-tests-fifo-secured/flink-yarn-tests-fifo-secured-localDir-nm-1_0/usercache/hadoop/appcache/application_1616768573754_0001/flink-netty-shuffle-d0a7a4ea-92b2-4614-bfed-2e949a36b9bc > 2021-03-26 14:23:37,680 WARN org.apache.flink.runtime.taskmanager.Task > [] - Keyed Aggregation -> Sink: Print to Std. Out (1/1)#0 > (e81d01c2526376e7d0b4b33ccd8d8c7a) switched from RUNNING to FAILED with > failure cause: java.lang.IllegalStateException: File channel manager has > shutdown. > 2021-03-26 14:23:37,680 WARN org.apache.flink.runtime.taskmanager.Task > [] - Keyed Aggregation -> Sink: Print to Std. Out (1/1)#0 > (e81d01c2526376e7d0b4b33ccd8d8c7a) switched from RUNNING to FAILED with > failure cause: java.lang.IllegalStateException: File channel manager has > shutdown. 
> at > org.apache.flink.util.Preconditions.checkState(Preconditions.java:193) > at > org.apache.flink.runtime.io.disk.FileChannelManagerImpl.getPaths(FileChannelManagerImpl.java:122) > at > org.apache.flink.runtime.io.disk.iomanager.IOManager.getSpillingDirectoriesPaths(IOManager.java:118) > at > org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.lambda$getRecordDeserializers$0(StreamTaskNetworkInput.java:90) > at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321) > at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169) > at > java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384) > at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) > at > java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) > at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) > at >
[GitHub] [flink] godfreyhe commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan
godfreyhe commented on pull request #15307: URL: https://github.com/apache/flink/pull/15307#issuecomment-818423595 AFAIK, very few connectors implement filter push-down. If anyone needs this fix in 1.12, we can cherry-pick it to 1.12. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-22247) can not pass AddressList when connecting to rabbitmq
Spongebob created FLINK-22247: - Summary: can not pass AddressList when connecting to rabbitmq Key: FLINK-22247 URL: https://issues.apache.org/jira/browse/FLINK-22247 Project: Flink Issue Type: Improvement Components: API / DataStream Affects Versions: 1.12.2 Environment: flink: 2.12.2 rabbitmq: 3.8.4 Reporter: Spongebob We want to connect to the rabbitmq cluster addresses when using the rabbitmq connector, so we override the setupConnection function to pass the rabbitmq cluster addresses, but the Address class is not serializable, so Flink throws an exception.
{code:java}
// code placeholder
val rabbitmqAddresses = Array(
  new Address("xxx1", 5672),
  new Address("xxx2", 5672),
  new Address("xxx3", 5672))
val dataStream = streamEnv
  .addSource(new RMQSource[String](
    rabbitmqConfig, // rabbitmq's connection config
    "queueName", // queue name
    true, // using correlation ids, assurance of exactly-once consume from rabbitmq
    new SimpleStringSchema // java deserialization
  ) {
    override def setupQueue(): Unit = {}
    override def setupConnection(): Connection = {
      rabbitmqConfig.getConnectionFactory.newConnection(rabbitmqAddresses)
    }
  }).setParallelism(1)
{code}
Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: [Lcom.rabbitmq.client.Address;@436a4e4b is not serializable. The object probably contains or references non serializable fields.
at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:164) at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:2000) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1685) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1668) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1652) at org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.scala:693) at testConsumer$.main(testConsumer.scala:30) at testConsumer.main(testConsumer.scala)Caused by: java.io.NotSerializableException: com.rabbitmq.client.Address at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184) at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348) at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:624) at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:143) ... 9 more -- This message was sent by Atlassian Jira (v8.3.4#803005)
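The NotSerializableException is raised by Flink's ClosureCleaner because the anonymous RMQSource captures the `Address` array, and `com.rabbitmq.client.Address` does not implement `Serializable`. A common workaround is to keep only plain host/port values in the captured state and rebuild the non-serializable addresses inside `setupConnection()` on the worker side. A minimal, self-contained Java sketch of that pattern follows; `BrokerAddress` is a hypothetical stand-in for the RabbitMQ `Address` class, and the class names are illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for com.rabbitmq.client.Address: deliberately NOT Serializable,
// mirroring the real class from the RabbitMQ client jar.
class BrokerAddress {
    final String host;
    final int port;
    BrokerAddress(String host, int port) { this.host = host; this.port = port; }
}

public class SerializableSourceSketch {
    // Instead of capturing BrokerAddress[] (which would poison the closure),
    // the source holds only serializable primitives and strings.
    static class Source implements Serializable {
        private final String[] hosts;
        private final int[] ports;
        Source(String[] hosts, int[] ports) { this.hosts = hosts; this.ports = ports; }

        // Rebuild the non-serializable addresses lazily, e.g. inside
        // setupConnection(), after the closure has been shipped.
        BrokerAddress[] resolveAddresses() {
            BrokerAddress[] out = new BrokerAddress[hosts.length];
            for (int i = 0; i < hosts.length; i++) {
                out[i] = new BrokerAddress(hosts[i], ports[i]);
            }
            return out;
        }
    }

    // Helper that mimics the ClosureCleaner's serializability check.
    static boolean isJavaSerializable(Object o) {
        try (ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(o);
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```

Marking the address field `transient` and initializing it in `open()` is another variant of the same idea.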
[jira] [Comment Edited] (FLINK-17510) StreamingKafkaITCase. testKafka timeouts on downloading Kafka
[ https://issues.apache.org/jira/browse/FLINK-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319875#comment-17319875 ] Guowei Ma edited comment on FLINK-17510 at 4/13/21, 4:12 AM: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16406=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9=17576 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16406=logs=68a897ab-3047-5660-245a-cce8f83859f6=d47e27f5-9721-5d5f-1cf3-62adbf3d115d=17389 was (Author: maguowei): https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16406=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9=17576 > StreamingKafkaITCase. testKafka timeouts on downloading Kafka > - > > Key: FLINK-17510 > URL: https://issues.apache.org/jira/browse/FLINK-17510 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines, Connectors / Kafka, Tests >Affects Versions: 1.11.3, 1.12.1, 1.13.0 >Reporter: Robert Metzger >Priority: Critical > Labels: test-stability > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=585=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5 > {code} > 2020-05-05T00:06:49.7268716Z [INFO] > --- > 2020-05-05T00:06:49.7268938Z [INFO] T E S T S > 2020-05-05T00:06:49.7269282Z [INFO] > --- > 2020-05-05T00:06:50.5336315Z [INFO] Running > org.apache.flink.tests.util.kafka.StreamingKafkaITCase > 2020-05-05T00:11:26.8603439Z [ERROR] Tests run: 3, Failures: 0, Errors: 2, > Skipped: 0, Time elapsed: 276.323 s <<< FAILURE! - in > org.apache.flink.tests.util.kafka.StreamingKafkaITCase > 2020-05-05T00:11:26.8604882Z [ERROR] testKafka[1: > kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase) > Time elapsed: 120.024 s <<< ERROR! 
> 2020-05-05T00:11:26.8605942Z java.io.IOException: Process ([wget, -q, -P, > /tmp/junit2815750531595874769/downloads/1290570732, > https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz]) > exceeded timeout (12) or number of retries (3). > 2020-05-05T00:11:26.8606732Z at > org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:132) > 2020-05-05T00:11:26.8607321Z at > org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127) > 2020-05-05T00:11:26.8607826Z at > org.apache.flink.tests.util.cache.LolCache.getOrDownload(LolCache.java:31) > 2020-05-05T00:11:26.8608343Z at > org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.setupKafkaDist(LocalStandaloneKafkaResource.java:98) > 2020-05-05T00:11:26.8608892Z at > org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.before(LocalStandaloneKafkaResource.java:92) > 2020-05-05T00:11:26.8609602Z at > org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46) > 2020-05-05T00:11:26.8610026Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-05T00:11:26.8610553Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-05T00:11:26.8610958Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-05T00:11:26.8611388Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-05T00:11:26.8612214Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-05T00:11:26.8612706Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-05T00:11:26.8613109Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-05T00:11:26.8613551Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-05T00:11:26.8614019Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 
2020-05-05T00:11:26.8614442Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-05T00:11:26.8614869Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-05T00:11:26.8615251Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-05T00:11:26.8615654Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-05T00:11:26.8616060Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-05T00:11:26.8616465Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-05T00:11:26.8616893Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-05T00:11:26.8617893Z at >
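The failure originates in `AutoClosableProcess.runBlockingWithRetry`, which gives each `wget` attempt a time budget and retries a bounded number of times before giving up with the `IOException` shown above. A self-contained sketch of that pattern — the method name, signature, and limits are illustrative, not Flink's actual implementation:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetryRunner {
    /**
     * Runs {@code task} up to {@code maxRetries} times, treating any attempt
     * that exceeds {@code timeoutMillis} as failed. Throws IOException when
     * all attempts are exhausted, mirroring the error message in the log above.
     */
    static <T> T runBlockingWithRetry(Callable<T> task, int maxRetries, long timeoutMillis)
            throws IOException {
        Exception last = null;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            long start = System.currentTimeMillis();
            try {
                T result = task.call();
                if (System.currentTimeMillis() - start > timeoutMillis) {
                    last = new IOException("attempt " + attempt + " exceeded timeout");
                    continue; // too slow: treat as a failed attempt
                }
                return result;
            } catch (Exception e) {
                last = e; // failed attempt: retry
            }
        }
        throw new IOException(
                "Process exceeded timeout (" + timeoutMillis + ") or number of retries ("
                        + maxRetries + ")", last);
    }

    public static void main(String[] args) throws Exception {
        // A task that fails twice, then succeeds on the third attempt.
        int[] calls = {0};
        String result = runBlockingWithRetry(() -> {
            if (++calls[0] < 3) throw new IOException("transient download failure");
            return "downloaded";
        }, 3, 1_000);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```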
[jira] [Commented] (FLINK-17510) StreamingKafkaITCase. testKafka timeouts on downloading Kafka
[ https://issues.apache.org/jira/browse/FLINK-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319875#comment-17319875 ] Guowei Ma commented on FLINK-17510: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16406=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9=17576 > StreamingKafkaITCase. testKafka timeouts on downloading Kafka > - > > Key: FLINK-17510 > URL: https://issues.apache.org/jira/browse/FLINK-17510 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines, Connectors / Kafka, Tests >Affects Versions: 1.11.3, 1.12.1, 1.13.0 >Reporter: Robert Metzger >Priority: Critical > Labels: test-stability > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=585=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5 > {code} > 2020-05-05T00:06:49.7268716Z [INFO] > --- > 2020-05-05T00:06:49.7268938Z [INFO] T E S T S > 2020-05-05T00:06:49.7269282Z [INFO] > --- > 2020-05-05T00:06:50.5336315Z [INFO] Running > org.apache.flink.tests.util.kafka.StreamingKafkaITCase > 2020-05-05T00:11:26.8603439Z [ERROR] Tests run: 3, Failures: 0, Errors: 2, > Skipped: 0, Time elapsed: 276.323 s <<< FAILURE! - in > org.apache.flink.tests.util.kafka.StreamingKafkaITCase > 2020-05-05T00:11:26.8604882Z [ERROR] testKafka[1: > kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase) > Time elapsed: 120.024 s <<< ERROR! > 2020-05-05T00:11:26.8605942Z java.io.IOException: Process ([wget, -q, -P, > /tmp/junit2815750531595874769/downloads/1290570732, > https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz]) > exceeded timeout (12) or number of retries (3). 
> 2020-05-05T00:11:26.8606732Z at > org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:132) > 2020-05-05T00:11:26.8607321Z at > org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127) > 2020-05-05T00:11:26.8607826Z at > org.apache.flink.tests.util.cache.LolCache.getOrDownload(LolCache.java:31) > 2020-05-05T00:11:26.8608343Z at > org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.setupKafkaDist(LocalStandaloneKafkaResource.java:98) > 2020-05-05T00:11:26.8608892Z at > org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.before(LocalStandaloneKafkaResource.java:92) > 2020-05-05T00:11:26.8609602Z at > org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46) > 2020-05-05T00:11:26.8610026Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-05T00:11:26.8610553Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-05T00:11:26.8610958Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-05T00:11:26.8611388Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-05T00:11:26.8612214Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-05T00:11:26.8612706Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-05T00:11:26.8613109Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-05T00:11:26.8613551Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-05T00:11:26.8614019Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-05T00:11:26.8614442Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-05T00:11:26.8614869Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-05T00:11:26.8615251Z at > 
org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-05T00:11:26.8615654Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-05T00:11:26.8616060Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-05T00:11:26.8616465Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-05T00:11:26.8616893Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-05T00:11:26.8617893Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-05T00:11:26.8618490Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-05T00:11:26.8619056Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-05T00:11:26.8619589Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-05T00:11:26.8620073Z at > org.junit.runners.Suite.runChild(Suite.java:27) >
[jira] [Commented] (FLINK-17510) StreamingKafkaITCase. testKafka timeouts on downloading Kafka
[ https://issues.apache.org/jira/browse/FLINK-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319872#comment-17319872 ] Guowei Ma commented on FLINK-17510: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16402=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=27698 > StreamingKafkaITCase. testKafka timeouts on downloading Kafka > - > > Key: FLINK-17510 > URL: https://issues.apache.org/jira/browse/FLINK-17510 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines, Connectors / Kafka, Tests >Affects Versions: 1.11.3, 1.12.1, 1.13.0 >Reporter: Robert Metzger >Priority: Critical > Labels: test-stability > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=585=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5 > {code} > 2020-05-05T00:06:49.7268716Z [INFO] > --- > 2020-05-05T00:06:49.7268938Z [INFO] T E S T S > 2020-05-05T00:06:49.7269282Z [INFO] > --- > 2020-05-05T00:06:50.5336315Z [INFO] Running > org.apache.flink.tests.util.kafka.StreamingKafkaITCase > 2020-05-05T00:11:26.8603439Z [ERROR] Tests run: 3, Failures: 0, Errors: 2, > Skipped: 0, Time elapsed: 276.323 s <<< FAILURE! - in > org.apache.flink.tests.util.kafka.StreamingKafkaITCase > 2020-05-05T00:11:26.8604882Z [ERROR] testKafka[1: > kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase) > Time elapsed: 120.024 s <<< ERROR! > 2020-05-05T00:11:26.8605942Z java.io.IOException: Process ([wget, -q, -P, > /tmp/junit2815750531595874769/downloads/1290570732, > https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz]) > exceeded timeout (12) or number of retries (3). 
> 2020-05-05T00:11:26.8606732Z at > org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:132) > 2020-05-05T00:11:26.8607321Z at > org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127) > 2020-05-05T00:11:26.8607826Z at > org.apache.flink.tests.util.cache.LolCache.getOrDownload(LolCache.java:31) > 2020-05-05T00:11:26.8608343Z at > org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.setupKafkaDist(LocalStandaloneKafkaResource.java:98) > 2020-05-05T00:11:26.8608892Z at > org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.before(LocalStandaloneKafkaResource.java:92) > 2020-05-05T00:11:26.8609602Z at > org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46) > 2020-05-05T00:11:26.8610026Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-05T00:11:26.8610553Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-05T00:11:26.8610958Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-05T00:11:26.8611388Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-05T00:11:26.8612214Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-05T00:11:26.8612706Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-05T00:11:26.8613109Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-05T00:11:26.8613551Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-05T00:11:26.8614019Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-05T00:11:26.8614442Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-05T00:11:26.8614869Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-05T00:11:26.8615251Z at > 
org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-05T00:11:26.8615654Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-05T00:11:26.8616060Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-05T00:11:26.8616465Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-05T00:11:26.8616893Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-05T00:11:26.8617893Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-05T00:11:26.8618490Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-05T00:11:26.8619056Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-05T00:11:26.8619589Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-05T00:11:26.8620073Z at > org.junit.runners.Suite.runChild(Suite.java:27) >
[GitHub] [flink] flinkbot commented on pull request #15581: [Hotfix][python] Fix variable typo.
flinkbot commented on pull request #15581: URL: https://github.com/apache/flink/pull/15581#issuecomment-818414998 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 7e80f21e80bb9d5663d31cb19cd2811cb19037ec (Tue Apr 13 04:05:39 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **Invalid pull request title: No valid Jira ID provided** Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

[GitHub] [flink] flinkbot edited a comment on pull request #15580: [FLINK-22245][sql-client] Reuse PrintUtils.MAX_COLUMN_WIDTH in CliCha…
flinkbot edited a comment on pull request #15580: URL: https://github.com/apache/flink/pull/15580#issuecomment-818409161 ## CI report: * a5613bd98763175b59d5d2a3ecbf8a1cb047299a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16413) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15568: [FLINK-17371][table-runtime] Add cases for decimal casting
flinkbot edited a comment on pull request #15568: URL: https://github.com/apache/flink/pull/15568#issuecomment-817675780 ## CI report: * 0fadc09a8fa301dc7a46c5893d16390f76379853 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16379) * f2eb5ba61975ecaa45a40c8ac18c0537e2cc9912 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16412) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15564: [FLINK-22207][connectors/hive]Hive Catalog retrieve Flink Properties …
flinkbot edited a comment on pull request #15564: URL: https://github.com/apache/flink/pull/15564#issuecomment-817625124 ## CI report: * a793f435a93300e271013c875f098f30568c9b19 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16371) * ad833779d272f0f35cb82b26f559131e2192cfee UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] RocMarshal opened a new pull request #15581: [Hotfix][python] Fix variable typo.
RocMarshal opened a new pull request #15581: URL: https://github.com/apache/flink/pull/15581 ## What is the purpose of the change *(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)* ## Brief change log *(for example:)* - *The TaskInfo is stored in the blob store on job creation time as a persistent artifact* - *Deployments RPC transmits only the blob storage reference* - *TaskManagers retrieve the TaskInfo from the blob cache* ## Verifying this change *(Please pick either of the following options)* This change is a trivial rework / code cleanup without any test coverage. *(or)* This change is already covered by existing tests, such as *(please describe tests)*. *(or)* This change added tests and can be verified as follows: *(example:)* - *Added integration tests for end-to-end deployment with large payloads (100MB)* - *Extended integration test for recovery after master (JobManager) failure* - *Added test that validates that TaskInfo is transferred only once across recoveries* - *Manually verified the change by running a 4 node cluser with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no) - The serializers: (yes / no / don't know) - The runtime per-record code paths (performance sensitive): (yes / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know) - The S3 file system connector: (yes / no / don't know) ## Documentation - Does this pull request introduce a new 
feature? (yes / no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan
YuvalItzchakov commented on pull request #15307: URL: https://github.com/apache/flink/pull/15307#issuecomment-818413801 @godfreyhe I just want to make sure since this is a critical fix on my end. I can also backport to 1.12 if it helps. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] godfreyhe commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan
godfreyhe commented on pull request #15307: URL: https://github.com/apache/flink/pull/15307#issuecomment-818413113 @YuvalItzchakov master (1.13 now) is enough, what do you think? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-22121) FlinkLogicalRankRuleBase should check if name of rankNumberType already exists in the input
[ https://issues.apache.org/jira/browse/FLINK-22121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he closed FLINK-22121. -- Resolution: Fixed Fixed in 1.13.0: a2c429d9a626d5f4ea47aa58a3e9f2da7ce1f47e > FlinkLogicalRankRuleBase should check if name of rankNumberType already > exists in the input > --- > > Key: FLINK-22121 > URL: https://issues.apache.org/jira/browse/FLINK-22121 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Affects Versions: 1.13.0 >Reporter: Caizhi Weng >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > > Add the following test case to > {{org.apache.flink.table.planner.plan.stream.sql.RankTest}} to reproduce this > issue. > {code:scala} > @Test > def myTest(): Unit = { > val sql = > """ > |SELECT CAST(rna AS INT) AS rn1, CAST(rnb AS INT) AS rn2 FROM ( > | SELECT *, row_number() over (partition by a order by b desc) AS rnb > | FROM ( > |SELECT *, row_number() over (partition by a, c order by b desc) AS > rna > |FROM MyTable > | ) > | WHERE rna <= 100 > |) > |WHERE rnb <= 100 > |""".stripMargin > util.verifyExecPlan(sql) > } > {code} > The exception stack is > {code} > org.apache.flink.table.api.ValidationException: Field names must be unique. 
> Found duplicates: [w0$o0] > at > org.apache.flink.table.types.logical.RowType.validateFields(RowType.java:272) > at org.apache.flink.table.types.logical.RowType.(RowType.java:157) > at org.apache.flink.table.types.logical.RowType.of(RowType.java:297) > at org.apache.flink.table.types.logical.RowType.of(RowType.java:289) > at > org.apache.flink.table.planner.calcite.FlinkTypeFactory$.toLogicalRowType(FlinkTypeFactory.scala:632) > at > org.apache.flink.table.planner.plan.nodes.physical.stream.StreamPhysicalRank.translateToExecNode(StreamPhysicalRank.scala:117) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNodeGraphGenerator.generate(ExecNodeGraphGenerator.java:74) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNodeGraphGenerator.generate(ExecNodeGraphGenerator.java:71) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNodeGraphGenerator.generate(ExecNodeGraphGenerator.java:54) > at > org.apache.flink.table.planner.delegation.PlannerBase.translateToExecNodeGraph(PlannerBase.scala:314) > at > org.apache.flink.table.planner.utils.TableTestUtilBase.assertPlanEquals(TableTestBase.scala:895) > at > org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:780) > at > org.apache.flink.table.planner.utils.TableTestUtilBase.verifyExecPlan(TableTestBase.scala:583) > {code} > This is because currently names of rank fields are all {{w0$o0}}, so if the > input of a Rank is another Rank the exception will occur. To solve this, we > should check if name of rank field has occurred in the input in > {{FlinkLogicalRankRuleBase}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
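The fix direction described in the issue is for `FlinkLogicalRankRuleBase` to check whether the generated rank field name (here `w0$o0`) already exists in the input row type and pick a fresh one. A minimal sketch of such a name uniquifier — this is an illustration of the idea, not Flink's actual code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FieldNames {
    /**
     * Returns {@code candidate} if it is not already used, otherwise appends
     * an increasing suffix ("_0", "_1", ...) until the name is unique.
     */
    static String makeUnique(String candidate, List<String> existing) {
        Set<String> used = new HashSet<>(existing);
        if (!used.contains(candidate)) {
            return candidate;
        }
        int i = 0;
        while (used.contains(candidate + "_" + i)) {
            i++;
        }
        return candidate + "_" + i;
    }

    public static void main(String[] args) {
        // The inner rank already produced "w0$o0"; the outer rank must not reuse it.
        List<String> innerFields = Arrays.asList("a", "b", "c", "w0$o0");
        System.out.println(makeUnique("w0$o0", innerFields)); // w0$o0_0
        System.out.println(makeUnique("rn", innerFields));    // rn
    }
}
```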
[GitHub] [flink] godfreyhe closed pull request #15498: [FLINK-22121][table-planner-blink] FlinkLogicalRankRuleBase now check if name of rankNumberType already exists in the input
godfreyhe closed pull request #15498: URL: https://github.com/apache/flink/pull/15498 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #15580: [FLINK-22245][sql-client] Reuse PrintUtils.MAX_COLUMN_WIDTH in CliCha…
flinkbot commented on pull request #15580: URL: https://github.com/apache/flink/pull/15580#issuecomment-818409161 ## CI report: * a5613bd98763175b59d5d2a3ecbf8a1cb047299a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15568: [FLINK-17371][table-runtime] Add cases for decimal casting
flinkbot edited a comment on pull request #15568: URL: https://github.com/apache/flink/pull/15568#issuecomment-817675780 ## CI report: * 0fadc09a8fa301dc7a46c5893d16390f76379853 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16379) * f2eb5ba61975ecaa45a40c8ac18c0537e2cc9912 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (FLINK-22152) Fix the bug of same timers are registered multiple times
[ https://issues.apache.org/jira/browse/FLINK-22152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huang Xingbo resolved FLINK-22152. -- Resolution: Fixed Merged into master via 2d2a6876681b34e46fd7c090a3c98272626828ba > Fix the bug of same timers are registered multiple times > > > Key: FLINK-22152 > URL: https://issues.apache.org/jira/browse/FLINK-22152 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Affects Versions: 1.13.0 >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > > The same timer can be registered multiple times; duplicate registrations need to be deduplicated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
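The symptom in FLINK-22152 is the same timer being registered more than once; the fix is to deduplicate registrations. A sketch of the idea in Java (the actual fix lives in the PyFlink runtime, so all names here are illustrative):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

/** Registers (key, timestamp) timers at most once, keeping them sorted by time. */
public class TimerRegistry {
    static final class Timer implements Comparable<Timer> {
        final String key;
        final long timestamp;
        Timer(String key, long timestamp) { this.key = key; this.timestamp = timestamp; }
        // equals/hashCode over both fields so duplicate registrations collapse
        @Override public boolean equals(Object o) {
            return o instanceof Timer && ((Timer) o).key.equals(key)
                    && ((Timer) o).timestamp == timestamp;
        }
        @Override public int hashCode() { return key.hashCode() * 31 + Long.hashCode(timestamp); }
        @Override public int compareTo(Timer other) {
            int c = Long.compare(timestamp, other.timestamp);
            return c != 0 ? c : key.compareTo(other.key);
        }
    }

    private final Set<Timer> registered = new HashSet<>();
    private final TreeSet<Timer> queue = new TreeSet<>();

    /** Returns true if the timer was newly registered, false if it was a duplicate. */
    public boolean register(String key, long timestamp) {
        Timer t = new Timer(key, timestamp);
        if (!registered.add(t)) {
            return false; // already registered: ignore the duplicate
        }
        queue.add(t);
        return true;
    }

    public int pendingCount() { return queue.size(); }

    public static void main(String[] args) {
        TimerRegistry registry = new TimerRegistry();
        System.out.println(registry.register("k1", 1000)); // true
        System.out.println(registry.register("k1", 1000)); // false (duplicate)
        System.out.println(registry.pendingCount());       // 1
    }
}
```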
[GitHub] [flink] HuangXingBo closed pull request #15575: [FLINK-22152][python] Fix the bug of same timers are registered multiple times
HuangXingBo closed pull request #15575: URL: https://github.com/apache/flink/pull/15575 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] hameizi commented on pull request #15564: [FLINK-22207][connectors/hive]Hive Catalog retrieve Flink Properties …
hameizi commented on pull request #15564: URL: https://github.com/apache/flink/pull/15564#issuecomment-818406172 @KurtYoung I added a test case in a new commit. Please review, thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-22246) when use HiveCatalog create table , can't set Table owner property correctly
xiangtao created FLINK-22246: Summary: when use HiveCatalog create table , can't set Table owner property correctly Key: FLINK-22246 URL: https://issues.apache.org/jira/browse/FLINK-22246 Project: Flink Issue Type: Bug Components: Connectors / Hive Affects Versions: 1.11.1 Reporter: xiangtao When I use HiveCatalog to create a table in sql-client, I found that it can't set the Hive table `owner` property correctly. Debugging the code, I found that in the `HiveCatalog.createTable` method {code:java} Table hiveTable = org.apache.hadoop.hive.ql.metadata.Table.getEmptyTable( tablePath.getDatabaseName(), tablePath.getObjectName()); {code} the returned hiveTable object has a null owner field, because the owner is set through {code:java} t.setOwner(SessionState.getUserFromAuthenticator()); {code} but SessionState is null. To fix this bug, we can add one line in the HiveCatalog.open method: {code:java} SessionState.setCurrentSessionState(new SessionState(hiveConf)); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-22157) Join & Select a part of composite primary key will cause ArrayIndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/FLINK-22157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he closed FLINK-22157. -- Resolution: Fixed Fixed in 1.13.0: 92fbe7f1fe5f0eade036b4184cdbab8f9b791647 > Join & Select a part of composite primary key will cause > ArrayIndexOutOfBoundsException > --- > > Key: FLINK-22157 > URL: https://issues.apache.org/jira/browse/FLINK-22157 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.13.0 >Reporter: Caizhi Weng >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > > Add the following test case to > {{org.apache.flink.table.planner.plan.stream.sql.join.JoinTest}} to reproduce > this bug. > {code:scala} > @Test > def myTest(): Unit = { > util.tableEnv.executeSql( > """ > |CREATE TABLE MyTable ( > | pk1 INT, > | pk2 BIGINT, > | PRIMARY KEY (pk1, pk2) NOT ENFORCED > |) WITH ( > | 'connector'='values' > |) > |""".stripMargin) > util.verifyExecPlan("SELECT A.a1 FROM A LEFT JOIN MyTable ON A.a1 = > MyTable.pk1") > } > {code} > The exception stack is > {code} > java.lang.RuntimeException: Error while applying rule StreamPhysicalJoinRule, > args [rel#141:FlinkLogicalJoin.LOGICAL.any.None: > 0.[NONE].[NONE](left=RelSubset#139,right=RelSubset#140,condition==($0, > $1),joinType=left), rel#138:FlinkLogicalCalc.LOGICAL.any.None: > 0.[NONE].[NONE](input=RelSubset#137,select=a1), > rel#121:FlinkLogicalTableSourceScan.LOGICAL.any.None: > 0.[NONE].[NONE](table=[default_catalog, default_database, MyTable, > project=[pk1]],fields=pk1)] > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:256) > at > org.apache.calcite.plan.volcano.IterativeRuleDriver.drive(IterativeRuleDriver.java:58) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:510) > at > org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:312) > at > 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58) > at > scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) > at > scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) > at scala.collection.Iterator$class.foreach(Iterator.scala:891) > at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) > at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) > at scala.collection.AbstractIterable.foreach(Iterable.scala:54) > at > scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) > at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57) > at > org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:163) > at > org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:79) > at > org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77) > at > org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:281) > at > org.apache.flink.table.planner.utils.TableTestUtilBase.assertPlanEquals(TableTestBase.scala:889) > at > org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:780) > at > org.apache.flink.table.planner.utils.TableTestUtilBase.verifyExecPlan(TableTestBase.scala:583) > at > 
org.apache.flink.table.planner.plan.stream.sql.join.JoinTest.myTest(JoinTest.scala:300) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at
[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan
YuvalItzchakov commented on pull request #15307: URL: https://github.com/apache/flink/pull/15307#issuecomment-818405285 @godfreyhe Thanks! Will master be used to cut the 1.13 release or do we need to backport this to any other branch? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] godfreyhe closed pull request #15540: [FLINK-22157][table-planner-blink] Fix join & select a portion of composite primary key will cause ArrayIndexOutOfBoundsException
godfreyhe closed pull request #15540: URL: https://github.com/apache/flink/pull/15540 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] yittg commented on pull request #15501: [FLINK-22054][k8s] Using a shared watcher for ConfigMap watching
yittg commented on pull request #15501: URL: https://github.com/apache/flink/pull/15501#issuecomment-818403355 Hi @wangyang0918 @tillrohrmann , So what do you think in summary? * using shared indexed informer or not? * provide a new interface to replace the old one? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] godfreyhe commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan
godfreyhe commented on pull request #15307: URL: https://github.com/apache/flink/pull/15307#issuecomment-818400010 @YuvalItzchakov Thanks for the quick update, I will do a minor improvement in my local: extract a base test class for PushFilterInCalcIntoTableSourceRuleTest, PushFilterIntoLegacyTableSourceScanRuleTest and PushFilterIntoTableSourceScanRuleTest, and will merge this pr once the test is green -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #15580: [FLINK-22245][sql-client] Reuse PrintUtils.MAX_COLUMN_WIDTH in CliCha…
flinkbot commented on pull request #15580: URL: https://github.com/apache/flink/pull/15580#issuecomment-818399248 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit a5613bd98763175b59d5d2a3ecbf8a1cb047299a (Tue Apr 13 03:13:20 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-22245).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-22082) Nested projection push down doesn't work for data such as row(array(row))
[ https://issues.apache.org/jira/browse/FLINK-22082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319842#comment-17319842 ] godfrey he commented on FLINK-22082: Fixed in 1.13.0: 2bbb1ba4072b9f03f9d3f9a17e004b5fe1eb4aa9 > Nested projection push down doesn't work for data such as row(array(row)) > - > > Key: FLINK-22082 > URL: https://issues.apache.org/jira/browse/FLINK-22082 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.12.0, 1.13.0 >Reporter: Dian Fu >Assignee: Shengkai Fang >Priority: Critical > Labels: pull-request-available > Fix For: 1.13.0, 1.12.3 > > > For the following job: > {code} > from pyflink.datastream import StreamExecutionEnvironment > from pyflink.table import TableConfig, StreamTableEnvironment > config = TableConfig() > env = StreamExecutionEnvironment.get_execution_environment() > t_env = StreamTableEnvironment.create(env, config) > source_ddl = """ > CREATE TABLE InTable ( > `ID` STRING, > `Timestamp` TIMESTAMP(3), > `Result` ROW( > `data` ROW(`value` BIGINT) ARRAY), > WATERMARK FOR `Timestamp` AS `Timestamp` > ) WITH ( > 'connector' = 'filesystem', > 'format' = 'json', > 'path' = '/tmp/1.txt' > ) > """ > sink_ddl = """ > CREATE TABLE OutTable ( > `ID` STRING, > `value` BIGINT > ) WITH ( > 'connector' = 'print' > ) > """ > t_env.execute_sql(source_ddl) > t_env.execute_sql(sink_ddl) > table = t_env.from_path('InTable') > table \ > .select( > table.ID, > table.Result.data.at(1).value) \ > .execute_insert('OutTable') \ > .wait() > {code} > It will thrown the following exception: > {code} > : scala.MatchError: ITEM($2.data, 1) (of class org.apache.calcite.rex.RexCall) > at > org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.internalVisit$1(NestedProjectionUtil.scala:273) > at > org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.visitFieldAccess(NestedProjectionUtil.scala:283) > at > 
org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.visitFieldAccess(NestedProjectionUtil.scala:269) > at org.apache.calcite.rex.RexFieldAccess.accept(RexFieldAccess.java:92) > at > org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$$anonfun$build$1.apply(NestedProjectionUtil.scala:112) > at > org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$$anonfun$build$1.apply(NestedProjectionUtil.scala:111) > at scala.collection.Iterator$class.foreach(Iterator.scala:891) > at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) > at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) > at scala.collection.AbstractIterable.foreach(Iterable.scala:54) > at > org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$.build(NestedProjectionUtil.scala:111) > at > org.apache.flink.table.planner.plan.utils.NestedProjectionUtil.build(NestedProjectionUtil.scala) > at > org.apache.flink.table.planner.plan.rules.logical.ProjectWatermarkAssignerTransposeRule.getUsedFieldsInTopLevelProjectAndWatermarkAssigner(ProjectWatermarkAssignerTransposeRule.java:155) > at > org.apache.flink.table.planner.plan.rules.logical.ProjectWatermarkAssignerTransposeRule.matches(ProjectWatermarkAssignerTransposeRule.java:65) > {code} > See > https://stackoverflow.com/questions/66888486/pyflink-extract-nested-fields-from-json-array > for more details -- This message was sent by Atlassian Jira (v8.3.4#803005)
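The failing expression in the job above is `table.Result.data.at(1).value`, which the planner sees as an `ITEM($2.data, 1)` `RexCall` rather than a plain field-access chain; judging from the `scala.MatchError` in `NestedSchemaExtractor`, that call shape is what the nested projection push-down did not handle. The nested access path itself can be pictured with plain Python (hypothetical record values, not Flink API):

```python
# Hypothetical JSON record matching the `InTable` schema from the report:
# `Result` is ROW(`data` ROW(`value` BIGINT) ARRAY).
record = {
    "ID": "sensor-1",
    "Timestamp": "2021-04-13 03:00:00",
    "Result": {
        "data": [          # array of rows
            {"value": 42},
            {"value": 43},
        ]
    },
}

# Table API .at(1) is 1-based, Python lists are 0-based, so .at(1) -> [0].
def project(rec):
    return (rec["ID"], rec["Result"]["data"][0]["value"])

assert project(record) == ("sensor-1", 42)
```

The push-down optimization only needs `ID` and `Result.data[*].value` from the source, so the whole `Timestamp` column and any other nested fields could be pruned if the planner can walk through the array access.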
[jira] [Updated] (FLINK-22245) Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and CliTableResultView
[ https://issues.apache.org/jira/browse/FLINK-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-22245: --- Labels: pull-request-available (was: ) > Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and > CliTableResultView > -- > > Key: FLINK-22245 > URL: https://issues.apache.org/jira/browse/FLINK-22245 > Project: Flink > Issue Type: Bug > Components: Table SQL / Client >Affects Versions: 1.13.0 >Reporter: Shengkai Fang >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > Attachments: sql-client.png > > > The max column width in CliChangelogResultView and CliTableResultView is 25. > In the picture, it's not enough. We should reuse the > {{PrintUtils.MAX_COLUMN_WIDTH}} > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
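The fix described here is simply to replace the hardcoded width of 25 with the shared `PrintUtils.MAX_COLUMN_WIDTH` constant so all result views truncate consistently. A sketch of the truncation behavior (the constant's value of 30 and the helper name are illustrative; check `PrintUtils` for the real logic):

```python
MAX_COLUMN_WIDTH = 30  # illustrative stand-in for PrintUtils.MAX_COLUMN_WIDTH

def truncate_cell(value: str, width: int = MAX_COLUMN_WIDTH) -> str:
    """Pad short values and truncate long ones so every row renders at the same width."""
    if len(value) <= width:
        return value.ljust(width)
    return value[: width - 3] + "..."

assert len(truncate_cell("short")) == MAX_COLUMN_WIDTH
assert truncate_cell("x" * 40) == "x" * 27 + "..."
```

Sharing one constant means widening the display later is a one-line change instead of hunting down every hardcoded 25.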
[GitHub] [flink] fsk119 opened a new pull request #15580: [FLINK-22245][sql-client] Reuse PrintUtils.MAX_COLUMN_WIDTH in CliCha…
fsk119 opened a new pull request #15580: URL: https://github.com/apache/flink/pull/15580 After the fix, it looks like https://user-images.githubusercontent.com/33114724/114491165-b731ba80-9c48-11eb-9077-e74f65b71462.png -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-22245) Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and CliTableResultView
[ https://issues.apache.org/jira/browse/FLINK-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shengkai Fang updated FLINK-22245: -- Description: The max column width in CliChangelogResultView and CliTableResultView is 25. In the picture, it's not enough. We should reuse the {{PrintUtils.MAX_COLUMN_WIDTH}} was: The max column width in CliChangelogResultView and CliTableResultView is 25. In the picture, it's not enough. We should reuse the {{PrintUtils.MAX_COLUMN_WIDTH}} !image-2021-04-13-10-26-34-721.png! > Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and > CliTableResultView > -- > > Key: FLINK-22245 > URL: https://issues.apache.org/jira/browse/FLINK-22245 > Project: Flink > Issue Type: Bug > Components: Table SQL / Client >Affects Versions: 1.13.0 >Reporter: Shengkai Fang >Priority: Major > Fix For: 1.13.0 > > Attachments: sql-client.png > > > The max column width in CliChangelogResultView and CliTableResultView is 25. > In the picture, it's not enough. We should reuse the > {{PrintUtils.MAX_COLUMN_WIDTH}} > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-22245) Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and CliTableResultView
[ https://issues.apache.org/jira/browse/FLINK-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shengkai Fang updated FLINK-22245: -- Attachment: sql-client.png > Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and > CliTableResultView > -- > > Key: FLINK-22245 > URL: https://issues.apache.org/jira/browse/FLINK-22245 > Project: Flink > Issue Type: Bug > Components: Table SQL / Client >Affects Versions: 1.13.0 >Reporter: Shengkai Fang >Priority: Major > Fix For: 1.13.0 > > Attachments: sql-client.png > > > The max column width in CliChangelogResultView and CliTableResultView is 25. > In the picture, it's not enough. We should reuse the > {{PrintUtils.MAX_COLUMN_WIDTH}} > !image-2021-04-13-10-26-34-721.png! > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-22245) Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and CliTableResultView
[ https://issues.apache.org/jira/browse/FLINK-22245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shengkai Fang updated FLINK-22245: -- Description: The max column width in CliChangelogResultView and CliTableResultView is 25. In the picture, it's not enough. We should reuse the {{PrintUtils.MAX_COLUMN_WIDTH}} !image-2021-04-13-10-26-34-721.png! was: The max column width in CliChangelogResultView and CliTableResultView is 25. In the picture, it's not enough. We should reuse the \{{PrintUtils.MAX_COLUMN_WIDTH}} !image-2021-04-13-10-26-34-721.png! > Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and > CliTableResultView > -- > > Key: FLINK-22245 > URL: https://issues.apache.org/jira/browse/FLINK-22245 > Project: Flink > Issue Type: Bug > Components: Table SQL / Client >Affects Versions: 1.13.0 >Reporter: Shengkai Fang >Priority: Major > Fix For: 1.13.0 > > > The max column width in CliChangelogResultView and CliTableResultView is 25. > In the picture, it's not enough. We should reuse the > {{PrintUtils.MAX_COLUMN_WIDTH}} > !image-2021-04-13-10-26-34-721.png! > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] HuangXingBo commented on pull request #15575: [FLINK-22152][python] Fix the bug of same timers are registered multiple times
HuangXingBo commented on pull request #15575: URL: https://github.com/apache/flink/pull/15575#issuecomment-818396906 The failed test has been recorded in https://issues.apache.org/jira/browse/FLINK-20723 which is unrelated to this PR. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-22245) Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and CliTableResultView
Shengkai Fang created FLINK-22245: - Summary: Reuse PrintUtils.MAX_COLUMN_WIDTH in CliChangelogResultView and CliTableResultView Key: FLINK-22245 URL: https://issues.apache.org/jira/browse/FLINK-22245 Project: Flink Issue Type: Bug Components: Table SQL / Client Affects Versions: 1.13.0 Reporter: Shengkai Fang Fix For: 1.13.0 The max column width in CliChangelogResultView and CliTableResultView is 25. In the picture, it's not enough. We should reuse the \{{PrintUtils.MAX_COLUMN_WIDTH}} !image-2021-04-13-10-26-34-721.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20723) testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink failed due to NoHostAvailableException
[ https://issues.apache.org/jira/browse/FLINK-20723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319840#comment-17319840 ] Huang Xingbo commented on FLINK-20723: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16396=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20 > testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink failed due to > NoHostAvailableException > -- > > Key: FLINK-20723 > URL: https://issues.apache.org/jira/browse/FLINK-20723 > Project: Flink > Issue Type: Bug > Components: Connectors / Cassandra >Affects Versions: 1.11.3, 1.12.2, 1.13.0 > Environment: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11137=results >Reporter: Matthias >Assignee: Dawid Wysakowicz >Priority: Major > Labels: test-stability > Fix For: 1.13.0 > > > [Build > 20201221.17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11137=results] > failed due to {{NoHostAvailableException}}: > {code} > [ERROR] Tests run: 17, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 167.927 s <<< FAILURE! - in > org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase > [ERROR] > testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink(org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase) > Time elapsed: 12.234 s <<< ERROR! 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) > tried for query failed (tried: /127.0.0.1:9042 > (com.datastax.driver.core.exceptions.OperationTimedOutException: [/127.0.0.1] > Timed out waiting for server response)) > at > com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84) > at > com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37) > at > com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37) > at > com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245) > at > com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63) > at > com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39) > at > org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:221) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at >
[jira] [Updated] (FLINK-22239) Improve support for JdbcXaSinkFunction
[ https://issues.apache.org/jira/browse/FLINK-22239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuan Mei updated FLINK-22239: - Description: JdbcXaSinkFunction uses Xa protocol/interface to implement exactly-once guarantee (within each subtask partition). Xa is a protocol/interface designed for two-phase commit of distributed DBS (RMs). XA guarantees that transactional updates are committed in all of the participating databases, or are fully rolled back out of all of the databases, reverting to the state prior to the start of the transaction. Hence some of the dbs that support XA treats XA transaction as global trans, and some of them does not support multiple global trans (per connection) at a time, MYSQL for example (see FLINK-21743). This ticket is a follow-up to address such limitations. was: JdbcXaSinkFunction uses Xa protocol/interface to implement exactly-once guarantee (within each subtask partition). Xa is a protocol/interface designed for two-phase commit of distributed DBS (RMs). XA guarantees that transactional updates are committed in all of the participating databases, or are fully rolled back out of all of the databases, reverting to the state prior to the start of the transaction. Hence some of the dbs that support XA treats XA transaction as global trans, and some of them does not support multiple global trans (per connection) at a time, MYSQL for example (see FLINK-21743) > Improve support for JdbcXaSinkFunction > -- > > Key: FLINK-22239 > URL: https://issues.apache.org/jira/browse/FLINK-22239 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Reporter: Yuan Mei >Priority: Major > Fix For: 1.14.0 > > > JdbcXaSinkFunction uses Xa protocol/interface to implement exactly-once > guarantee (within each subtask partition). > Xa is a protocol/interface designed for two-phase commit of distributed DBS > (RMs). 
> XA guarantees that transactional updates are committed in all of the > participating databases, or are fully rolled back out of all of the > databases, reverting to the state prior to the start of the transaction. > Hence some databases that support XA treat an XA transaction as a global > transaction, and some of them do not support multiple global transactions > (per connection) at a time, MySQL for example (see FLINK-21743). > This ticket is a follow-up to address such limitations. -- This message was sent by Atlassian Jira (v8.3.4#803005)
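The all-or-nothing guarantee described above can be sketched as a toy two-phase commit coordinator. This is pure illustration of the XA protocol idea, not Flink's `JdbcXaSinkFunction` (the real thing drives `javax.transaction.xa.XAResource` prepare/commit/rollback calls over JDBC connections):

```python
class FakeResource:
    """Stand-in for one participating database (an XA resource manager)."""
    def __init__(self, name, fail_on_prepare=False):
        self.name = name
        self.fail_on_prepare = fail_on_prepare
        self.committed = []
        self.pending = []

    def prepare(self, update):
        # Phase 1: vote. A NO vote is modeled as an exception.
        if self.fail_on_prepare:
            raise RuntimeError(f"{self.name} votes NO")
        self.pending.append(update)

    def commit(self):
        # Phase 2: make the prepared update durable.
        self.committed.extend(self.pending)
        self.pending.clear()

    def rollback(self):
        # Phase 2 alternative: discard the prepared update.
        self.pending.clear()

def two_phase_commit(resources, update):
    try:
        for r in resources:
            r.prepare(update)      # all participants must vote YES...
    except RuntimeError:
        for r in resources:
            r.rollback()           # ...otherwise everyone rolls back
        return False
    for r in resources:
        r.commit()
    return True

ok_db = FakeResource("db1")
bad_db = FakeResource("db2", fail_on_prepare=True)
assert two_phase_commit([ok_db, FakeResource("db3")], "row-1") is True
assert two_phase_commit([ok_db, bad_db], "row-2") is False
assert ok_db.committed == ["row-1"]  # "row-2" was rolled back everywhere
```

The MySQL limitation the ticket mentions would show up between `prepare` and `commit`: a connection that allows only one in-flight global transaction cannot hold a prepared transaction open while starting the next one.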
[jira] [Updated] (FLINK-22239) Improve support for JdbcXaSinkFunction
[ https://issues.apache.org/jira/browse/FLINK-22239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuan Mei updated FLINK-22239: - Description: JdbcXaSinkFunction uses Xa protocol/interface to implement exactly-once guarantee (within each subtask partition). Xa is a protocol/interface designed for two-phase commit of distributed DBS (RMs). XA guarantees that transactional updates are committed in all of the participating databases, or are fully rolled back out of all of the databases, reverting to the state prior to the start of the transaction. Hence some of the dbs that support XA treats XA transaction as global trans, and some of them does not support multiple global trans (per connection) at a time, MYSQL for example (see FLINK-21743) was:JdbcXaSinkFunction > Improve support for JdbcXaSinkFunction > -- > > Key: FLINK-22239 > URL: https://issues.apache.org/jira/browse/FLINK-22239 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Reporter: Yuan Mei >Priority: Major > Fix For: 1.14.0 > > > JdbcXaSinkFunction uses Xa protocol/interface to implement exactly-once > guarantee (within each subtask partition). > Xa is a protocol/interface designed for two-phase commit of distributed DBS > (RMs). > XA guarantees that transactional updates are committed in all of the > participating databases, or are fully rolled back out of all of the > databases, reverting to the state prior to the start of the transaction. > Hence some of the dbs that support XA treats XA transaction as global trans, > and some of them does not support multiple global trans (per connection) at a > time, MYSQL for example (see FLINK-21743) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-21903) Translate "Importing Flink into an IDE" page into Chinese
[ https://issues.apache.org/jira/browse/FLINK-21903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319832#comment-17319832 ] zhaoxing commented on FLINK-21903: -- [~liguangyu] Sorry, I have submitted a PR. cc [~jark], could you help to review this? Thanks > Translate "Importing Flink into an IDE" page into Chinese > - > > Key: FLINK-21903 > URL: https://issues.apache.org/jira/browse/FLINK-21903 > Project: Flink > Issue Type: Task > Components: chinese-translation, Documentation >Reporter: zhaoxing >Assignee: zhaoxing >Priority: Major > Labels: pull-request-available > > The page url is > [https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/flinkDev/ide_setup.html] > The markdown file is located in 'docs/content.zh/docs/flinkDev/ide_setup.md' > now. > The doc is still in English on the master branch. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-21464) Support ADD JAR command in sql client
[ https://issues.apache.org/jira/browse/FLINK-21464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319831#comment-17319831 ] Shengkai Fang commented on FLINK-21464: --- [~xiangtao] Thanks for your contribution. Will take a look soon. > Support ADD JAR command in sql client > - > > Key: FLINK-21464 > URL: https://issues.apache.org/jira/browse/FLINK-21464 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Client >Affects Versions: 1.13.0 >Reporter: Shengkai Fang >Assignee: xiangtao >Priority: Major > Fix For: 1.14.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-17948) Use new type system for SQL Client collect sink
[ https://issues.apache.org/jira/browse/FLINK-17948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shengkai Fang closed FLINK-17948. - Resolution: Fixed > Use new type system for SQL Client collect sink > --- > > Key: FLINK-17948 > URL: https://issues.apache.org/jira/browse/FLINK-17948 > Project: Flink > Issue Type: Bug > Components: Table SQL / Client >Affects Versions: 1.11.0 > Environment: mysql: > image: mysql:8.0 > volumes: > - ./mysql/mktable.sql:/docker-entrypoint-initdb.d/mktable.sql > environment: > MYSQL_ROOT_PASSWORD: 123456 > ports: > - "3306:3306" >Reporter: Shengkai Fang >Assignee: Shengkai Fang >Priority: Major > Fix For: 1.13.0 > > Attachments: image-2020-05-26-22-56-43-835.png, > image-2020-05-26-22-58-02-326.png, image-2021-04-13-10-26-34-721.png > > > My job is following: > > {code:java} > CREATE TABLE currency ( > currency_id BIGINT, > currency_name STRING, > rate DOUBLE, > currency_timestamp TIMESTAMP, > country STRING, > precise_timestamp TIMESTAMP(6), > precise_time TIME(6), > gdp DECIMAL(10, 6) > ) WITH ( >'connector' = 'jdbc', >'url' = 'jdbc:mysql://localhost:3306/flink', >'username' = 'root', >'password' = '123456', >'table-name' = 'currency', >'driver' = 'com.mysql.jdbc.Driver', >'lookup.cache.max-rows' = '500', >'lookup.cache.ttl' = '10s', >'lookup.max-retries' = '3') > {code} > When select * from currency, the precision of results are not as same as > expected. The precision of field precise_timestamp is 3 not 6, and the > field gdp has more digit as expected. > > !image-2020-05-26-22-56-43-835.png! > The data in mysql is following: > !image-2020-05-26-22-58-02-326.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
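The precision loss reported here (TIMESTAMP(3) instead of TIMESTAMP(6), extra digits in the DECIMAL(10, 6) `gdp` column) is the classic symptom of routing values through a type that does not carry precision. A small illustration, independent of Flink's internals, of why DECIMAL values must not round-trip through binary floats:

```python
from decimal import Decimal

# DECIMAL(10, 6): up to 10 significant digits, exactly 6 after the point.
gdp = Decimal("1234.567890")

# Round-tripping through a binary float picks up spurious trailing digits,
# matching the "more digits than expected" symptom in the report.
via_float = Decimal(float(gdp))
assert via_float != gdp

# Staying in Decimal and quantizing to scale 6 preserves the declared type.
assert gdp.quantize(Decimal("0.000001")) == gdp
```

The same reasoning applies to TIMESTAMP(6): a sink that only models millisecond precision silently truncates the microsecond digits, which is why the fix moves the collect sink onto the new type system that tracks precision end to end.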
[jira] [Commented] (FLINK-17948) Use new type system for SQL Client collect sink
[ https://issues.apache.org/jira/browse/FLINK-17948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319821#comment-17319821 ] Shengkai Fang commented on FLINK-17948: --- !image-2021-04-13-10-26-34-721.png! solved > Use new type system for SQL Client collect sink > --- > > Key: FLINK-17948 > URL: https://issues.apache.org/jira/browse/FLINK-17948 > Project: Flink > Issue Type: Bug > Components: Table SQL / Client >Affects Versions: 1.11.0 > Environment: mysql: > image: mysql:8.0 > volumes: > - ./mysql/mktable.sql:/docker-entrypoint-initdb.d/mktable.sql > environment: > MYSQL_ROOT_PASSWORD: 123456 > ports: > - "3306:3306" >Reporter: Shengkai Fang >Assignee: Shengkai Fang >Priority: Major > Fix For: 1.13.0 > > Attachments: image-2020-05-26-22-56-43-835.png, > image-2020-05-26-22-58-02-326.png, image-2021-04-13-10-26-34-721.png > > > My job is following: > > {code:java} > CREATE TABLE currency ( > currency_id BIGINT, > currency_name STRING, > rate DOUBLE, > currency_timestamp TIMESTAMP, > country STRING, > precise_timestamp TIMESTAMP(6), > precise_time TIME(6), > gdp DECIMAL(10, 6) > ) WITH ( >'connector' = 'jdbc', >'url' = 'jdbc:mysql://localhost:3306/flink', >'username' = 'root', >'password' = '123456', >'table-name' = 'currency', >'driver' = 'com.mysql.jdbc.Driver', >'lookup.cache.max-rows' = '500', >'lookup.cache.ttl' = '10s', >'lookup.max-retries' = '3') > {code} > When select * from currency, the precision of results are not as same as > expected. The precision of field precise_timestamp is 3 not 6, and the > field gdp has more digit as expected. > > !image-2020-05-26-22-56-43-835.png! > The data in mysql is following: > !image-2020-05-26-22-58-02-326.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17948) Use new type system for SQL Client collect sink
[ https://issues.apache.org/jira/browse/FLINK-17948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shengkai Fang updated FLINK-17948: -- Attachment: image-2021-04-13-10-26-34-721.png > Use new type system for SQL Client collect sink > --- > > Key: FLINK-17948 > URL: https://issues.apache.org/jira/browse/FLINK-17948 > Project: Flink > Issue Type: Bug > Components: Table SQL / Client >Affects Versions: 1.11.0 > Environment: mysql: > image: mysql:8.0 > volumes: > - ./mysql/mktable.sql:/docker-entrypoint-initdb.d/mktable.sql > environment: > MYSQL_ROOT_PASSWORD: 123456 > ports: > - "3306:3306" >Reporter: Shengkai Fang >Assignee: Shengkai Fang >Priority: Major > Fix For: 1.13.0 > > Attachments: image-2020-05-26-22-56-43-835.png, > image-2020-05-26-22-58-02-326.png, image-2021-04-13-10-26-34-721.png > > > My job is following: > > {code:java} > CREATE TABLE currency ( > currency_id BIGINT, > currency_name STRING, > rate DOUBLE, > currency_timestamp TIMESTAMP, > country STRING, > precise_timestamp TIMESTAMP(6), > precise_time TIME(6), > gdp DECIMAL(10, 6) > ) WITH ( >'connector' = 'jdbc', >'url' = 'jdbc:mysql://localhost:3306/flink', >'username' = 'root', >'password' = '123456', >'table-name' = 'currency', >'driver' = 'com.mysql.jdbc.Driver', >'lookup.cache.max-rows' = '500', >'lookup.cache.ttl' = '10s', >'lookup.max-retries' = '3') > {code} > When select * from currency, the precision of results are not as same as > expected. The precision of field precise_timestamp is 3 not 6, and the > field gdp has more digit as expected. > > !image-2020-05-26-22-56-43-835.png! > The data in mysql is following: > !image-2020-05-26-22-58-02-326.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #15548: [FLINK-22082][planner] Nested projection push down doesn't work for d…
flinkbot edited a comment on pull request #15548: URL: https://github.com/apache/flink/pull/15548#issuecomment-816538966 ## CI report: * 64160f09da52e0b873d1fbf1482bb7b6d7c7499e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16409) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16356) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-22134) Test the reactive mode
[ https://issues.apache.org/jira/browse/FLINK-22134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319811#comment-17319811 ] Xintong Song commented on FLINK-22134: -- Thanks for the quick response, [~rmetzger]. {quote}B1: This is documented as a limitation {quote} True. I've only gone through the docs for Reactive Mode but not yet for Adaptive Scheduler. It does mention that limitations for the adaptive scheduler also apply to the reactive mode. {quote}B2: If you just enable reactive mode, the resource stabilization timeout should automatically be set to 0, and you shouldn't see these issues. {quote} Good point. I think it's still good to decrease the rescaling downtime when the stabilization timeout is set. However, since this is by default 0, maybe it's not that critical to fix for this release. {quote}L1: This should have been fixed by FLINK-21558 already [~chesnay] can you take a look? UPDATE: I looked at the code, and the logs, and things are behaving as expected: On every resource change, we are logging this statement. If we get more feedback on this log message, we need to consider adding a toggle for reactive mode to disable this log statement?! {quote} I think that fix is included in the snapshot that I tested. Attaching the log file. [^flink-xtsong-standalonejob-1-xtsong-mac.log] > Test the reactive mode > -- > > Key: FLINK-22134 > URL: https://issues.apache.org/jira/browse/FLINK-22134 > Project: Flink > Issue Type: Test > Components: Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Till Rohrmann >Assignee: Xintong Song >Priority: Blocker > Labels: release-testing > Fix For: 1.13.0 > > Attachments: flink-xtsong-standalonejob-1-xtsong-mac.log, > 截屏2021-04-12 上午10.30.25.png, 截屏2021-04-12 上午10.32.50.png > > > The newly introduced reactive mode (FLINK-10407) allows Flink to make use of > newly arriving resources while the job is running. The feature documentation > with the current set of limitations can be found here [1]. 
> In order to test this new feature I recommend to follow the documentation and > to try it out wrt the stated limitations. Everything which is not explicitly > contained in the set of limitations should work. > [1] > https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/elastic_scaling/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-22134) Test the reactive mode
[ https://issues.apache.org/jira/browse/FLINK-22134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-22134: - Attachment: flink-xtsong-standalonejob-1-xtsong-mac.log > Test the reactive mode > -- > > Key: FLINK-22134 > URL: https://issues.apache.org/jira/browse/FLINK-22134 > Project: Flink > Issue Type: Test > Components: Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Till Rohrmann >Assignee: Xintong Song >Priority: Blocker > Labels: release-testing > Fix For: 1.13.0 > > Attachments: flink-xtsong-standalonejob-1-xtsong-mac.log, > 截屏2021-04-12 上午10.30.25.png, 截屏2021-04-12 上午10.32.50.png > > > The newly introduced reactive mode (FLINK-10407) allows Flink to make use of > newly arriving resources while the job is running. The feature documentation > with the current set of limitations can be found here [1]. > In order to test this new feature I recommend to follow the documentation and > to try it out wrt the stated limitations. Everything which is not explicitly > contained in the set of limitations should work. > [1] > https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/elastic_scaling/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] JingsongLi commented on a change in pull request #15568: [FLINK-17371][table-runtime] Add cases for decimal casting
JingsongLi commented on a change in pull request #15568: URL: https://github.com/apache/flink/pull/15568#discussion_r612078803 ## File path: flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/DecimalCastTest.scala ## @@ -0,0 +1,203 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.expressions + +import org.apache.flink.api.java.typeutils.RowTypeInfo +import org.apache.flink.table.planner.expressions.utils.ExpressionTestBase +import org.apache.flink.types.Row + +import org.junit.Test + +import scala.util.Random + +class DecimalCastTest extends ExpressionTestBase { Review comment: Forgot this -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-15509) Use sql cilents create view occur Unexpected exception
[ https://issues.apache.org/jira/browse/FLINK-15509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shengkai Fang closed FLINK-15509. - Resolution: Not A Problem
> Use sql cilents create view occur Unexpected exception
> --
> Key: FLINK-15509
> URL: https://issues.apache.org/jira/browse/FLINK-15509
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / Client
> Affects Versions: 1.10.0
> Reporter: Xianxun Ye
> Assignee: Shengkai Fang
> Priority: Major
> Fix For: 1.13.0
>
> Version: master.
> First I created a table successfully via the SQL client, and then hit an unexpected exception when creating a view.
> My steps:
> Flink SQL> create table myTable (id int);
> *[INFO] Table has been created.*
> Flink SQL> show tables;
> myTable
> Flink SQL> describe myTable;
> root
> |-- id: INT
> Flink SQL> create view myView as select * from myTable;
>
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
> at org.apache.flink.table.client.SqlClient.main(SqlClient.java:190)
> Caused by: org.apache.flink.table.api.ValidationException: SQL validation failed. findAndCreateTableSource failed. 
> at > org.apache.flink.table.calcite.FlinkPlannerImpl.validateInternal(FlinkPlannerImpl.scala:130) > at > org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:105) > at > org.apache.flink.table.sqlexec.SqlToOperationConverter.convert(SqlToOperationConverter.java:124) > at org.apache.flink.table.planner.ParserImpl.parse(ParserImpl.java:66) > at > org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:464) > at > org.apache.flink.table.client.gateway.local.LocalExecutor.addView(LocalExecutor.java:300) > at > org.apache.flink.table.client.cli.CliClient.callCreateView(CliClient.java:579) > at > org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:308) > at java.util.Optional.ifPresent(Optional.java:159) > at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:200) > at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:125) > at org.apache.flink.table.client.SqlClient.start(SqlClient.java:104) > at org.apache.flink.table.client.SqlClient.main(SqlClient.java:178) > Caused by: org.apache.flink.table.api.TableException: > findAndCreateTableSource failed. 
> at > org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:55) > at > org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:92) > at > org.apache.flink.table.catalog.DatabaseCalciteSchema.convertCatalogTable(DatabaseCalciteSchema.java:138) > at > org.apache.flink.table.catalog.DatabaseCalciteSchema.convertTable(DatabaseCalciteSchema.java:97) > at > org.apache.flink.table.catalog.DatabaseCalciteSchema.lambda$getTable$0(DatabaseCalciteSchema.java:86) > at java.util.Optional.map(Optional.java:215) > at > org.apache.flink.table.catalog.DatabaseCalciteSchema.getTable(DatabaseCalciteSchema.java:76) > at > org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83) > at org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:289) > at org.apache.calcite.sql.validate.EmptyScope.resolve_(EmptyScope.java:143) > at > org.apache.calcite.sql.validate.EmptyScope.resolveTable(EmptyScope.java:99) > at > org.apache.calcite.sql.validate.DelegatingScope.resolveTable(DelegatingScope.java:203) > at > org.apache.calcite.sql.validate.IdentifierNamespace.resolveImpl(IdentifierNamespace.java:105) > at > org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:177) > at > org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1005) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:965) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3125) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3107) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3379) > at > org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60) > 
at > org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1005) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:965) > at
[jira] [Commented] (FLINK-22181) SourceNAryInputChainingITCase hangs on azure
[ https://issues.apache.org/jira/browse/FLINK-22181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17319806#comment-17319806 ] Leonard Xu commented on FLINK-22181: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16398=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=5360d54c-8d94-5d85-304e-a89267eb785a=10600 > SourceNAryInputChainingITCase hangs on azure > > > Key: FLINK-22181 > URL: https://issues.apache.org/jira/browse/FLINK-22181 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.13.0 >Reporter: Dawid Wysakowicz >Priority: Major > Labels: test-stability > Fix For: 1.13.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16277=logs=34f41360-6c0d-54d3-11a1-0292a2def1d9=2d56e022-1ace-542f-bf1a-b37dd63243f2=11607 > {code} > 2021-04-09T11:37:50.3331246Z "main" #1 prio=5 os_prio=0 > tid=0x7fae7000b800 nid=0x253e sleeping[0x7fae76c65000] > 2021-04-09T11:37:50.3331592Zjava.lang.Thread.State: TIMED_WAITING > (sleeping) > 2021-04-09T11:37:50.3331883Z at java.lang.Thread.sleep(Native Method) > 2021-04-09T11:37:50.3332313Z at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:229) > 2021-04-09T11:37:50.3332976Z at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:111) > 2021-04-09T11:37:50.577Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106) > 2021-04-09T11:37:50.3334166Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > 2021-04-09T11:37:50.3334714Z at > org.apache.flink.streaming.api.datastream.DataStreamUtils.collectBoundedStream(DataStreamUtils.java:106) > 2021-04-09T11:37:50.3335356Z at > 
org.apache.flink.test.streaming.runtime.SourceNAryInputChainingITCase.testMixedInputsWithMultipleUnionsExecution(SourceNAryInputChainingITCase.java:140) > 2021-04-09T11:37:50.3335874Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2021-04-09T11:37:50.3336264Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2021-04-09T11:37:50.3336730Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2021-04-09T11:37:50.3337136Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2021-04-09T11:37:50.3337590Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2021-04-09T11:37:50.3338064Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2021-04-09T11:37:50.3338517Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2021-04-09T11:37:50.3339139Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2021-04-09T11:37:50.3339575Z at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2021-04-09T11:37:50.3339991Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2021-04-09T11:37:50.3340357Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2021-04-09T11:37:50.3340709Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2021-04-09T11:37:50.3341134Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2021-04-09T11:37:50.3341649Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2021-04-09T11:37:50.3342066Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2021-04-09T11:37:50.3342541Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2021-04-09T11:37:50.3342931Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 
2021-04-09T11:37:50.3343341Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2021-04-09T11:37:50.3343748Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2021-04-09T11:37:50.3344144Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2021-04-09T11:37:50.3344561Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2021-04-09T11:37:50.3344932Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2021-04-09T11:37:50.3345295Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2021-04-09T11:37:50.3345710Z at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > 2021-04-09T11:37:50.3346177Z at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) >
[GitHub] [flink] flinkbot edited a comment on pull request #15548: [FLINK-22082][planner] Nested projection push down doesn't work for d…
flinkbot edited a comment on pull request #15548: URL: https://github.com/apache/flink/pull/15548#issuecomment-816538966 ## CI report: * 64160f09da52e0b873d1fbf1482bb7b6d7c7499e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16356) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16409) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] fsk119 commented on pull request #15548: [FLINK-22082][planner] Nested projection push down doesn't work for d…
fsk119 commented on pull request #15548: URL: https://github.com/apache/flink/pull/15548#issuecomment-818373770 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] godfreyhe closed pull request #15493: [FLINK-22082][table planner] Fix nested projection push down doesn't …
godfreyhe closed pull request #15493: URL: https://github.com/apache/flink/pull/15493 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (FLINK-22131) Fix the bug of general udf and pandas udf chained together in map operation
[ https://issues.apache.org/jira/browse/FLINK-22131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huang Xingbo resolved FLINK-22131. -- Resolution: Fixed Merged into master via ba5cc583e1a086d6fc6e944ee90ec4aa1f4c2405 > Fix the bug of general udf and pandas udf chained together in map operation > --- > > Key: FLINK-22131 > URL: https://issues.apache.org/jira/browse/FLINK-22131 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Affects Versions: 1.13.0 >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available > Fix For: 1.13.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] HuangXingBo closed pull request #15502: [FLINK-22131][python] Fix the bug of general udf and pandas udf chained together in map operation
HuangXingBo closed pull request #15502: URL: https://github.com/apache/flink/pull/15502 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] fsk119 edited a comment on pull request #15563: [FLINK-21748][sql-client] Fix unstable LocalExecutorITCase.testBatchQ…
fsk119 edited a comment on pull request #15563: URL: https://github.com/apache/flink/pull/15563#issuecomment-818371872 To reproduce this bug, I think we can add this code in the `CollectResultBase` constructor
```
try {
    Thread.sleep(1000);
} catch (InterruptedException ex) {
    // ignore
}
```
and run the failing test `LocalExecutorITCase#testBatchQueryExecutionMultipleTimes`. The added code delays the initialization of the resources. We can get the same exception stack reported in the issue. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] fsk119 commented on pull request #15563: [FLINK-21748][sql-client] Fix unstable LocalExecutorITCase.testBatchQ…
fsk119 commented on pull request #15563: URL: https://github.com/apache/flink/pull/15563#issuecomment-818371872 To reproduce this bug, I think we can add this code in the `CollectResultBase` constructor
```
try {
    Thread.sleep(1000);
} catch (InterruptedException ex) {
    // ignore
}
```
and run the failing test `LocalExecutorITCase#testBatchQueryExecutionMultipleTimes`. The added code delays the initialization of the resources. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] KurtYoung removed a comment on pull request #15563: [FLINK-21748][sql-client] Fix unstable LocalExecutorITCase.testBatchQ…
KurtYoung removed a comment on pull request #15563: URL: https://github.com/apache/flink/pull/15563#issuecomment-818368661 Could you explain a little bit why moving the thread.start() into child class? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
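The race under discussion in FLINK-21748 — a retrieval thread started from a base-class constructor observing subclass fields before their initializers have run — can be sketched in a few lines. The class and field names below are illustrative, not Flink's actual `CollectResultBase`/`MaterializedCollectBatchResult` code:

```java
import java.util.ArrayList;
import java.util.List;

class ResultBase {
    ResultBase() {
        Thread retrievalThread = new Thread(this::process);
        // Starting the worker here lets it observe the subclass half-constructed.
        retrievalThread.start();
        // join() only to make the race deterministic for this demo; without it
        // the failure is intermittent, which is why the test was flaky.
        try {
            retrievalThread.join();
        } catch (InterruptedException ignored) {
        }
    }

    void process() {}
}

class MaterializedResult extends ResultBase {
    static volatile Throwable seen;

    // Field initializers run only after the super() constructor returns.
    final List<String> rows = new ArrayList<>();

    @Override
    void process() {
        try {
            rows.add("record"); // NPE: 'rows' is still null at this point
        } catch (Throwable t) {
            seen = t;
        }
    }
}

class ConstructorRaceDemo {
    public static void main(String[] args) {
        new MaterializedResult();
        System.out.println(MaterializedResult.seen.getClass().getSimpleName()); // NullPointerException
    }
}
```

Moving the `thread.start()` into the subclass constructor (after all fields are assigned) removes the window in which the worker can see uninitialized state, which matches the fix direction discussed above.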
[GitHub] [flink] KurtYoung commented on pull request #15564: [FLINK-22207][connectors/hive]Hive Catalog retrieve Flink Properties …
KurtYoung commented on pull request #15564: URL: https://github.com/apache/flink/pull/15564#issuecomment-818369438 @hameizi I'm pretty sure you fixed the bug, and the change also LGTM. But adding a test case helps ensure the project won't reintroduce the same bug by accident in the future. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
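The review point on #15564 — don't reach into a private method via reflection; exercise it through the public surface that uses it — can be illustrated with a self-contained sketch. The names below are hypothetical stand-ins, not the actual HiveCatalog API:

```java
import java.util.HashMap;
import java.util.Map;

class PropertyCatalog {
    static final String FLINK_PREFIX = "flink.";

    // The private helper a test should NOT invoke via reflection.
    private static Map<String, String> retrieveFlinkProperties(Map<String, String> raw) {
        Map<String, String> out = new HashMap<>();
        raw.forEach((k, v) -> {
            if (k.startsWith(FLINK_PREFIX)) {
                out.put(k.substring(FLINK_PREFIX.length()), v);
            }
        });
        return out;
    }

    // Public behavior that exercises the helper; assertions go through here,
    // so the test keeps passing even if the private method is renamed or inlined.
    static Map<String, String> getTableProperties(Map<String, String> stored) {
        return retrieveFlinkProperties(stored);
    }

    public static void main(String[] args) {
        Map<String, String> stored = new HashMap<>();
        stored.put("flink.connector", "hive");
        stored.put("owner", "someone");
        // Only the "flink."-prefixed entry survives, with the prefix stripped.
        System.out.println(getTableProperties(stored)); // {connector=hive}
    }
}
```

Testing through the public entry point pins down the observable contract (prefix stripping on retrieval) rather than an implementation detail, which is what the reviewer is asking for.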