[jira] [Created] (FLINK-23752) Add document for using HiveSource API
Rui Li created FLINK-23752: -- Summary: Add document for using HiveSource API Key: FLINK-23752 URL: https://issues.apache.org/jira/browse/FLINK-23752 Project: Flink Issue Type: Improvement Components: Connectors / Hive, Documentation Reporter: Rui Li Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23751) Testing Window Top-N after Windowing TVF
JING ZHANG created FLINK-23751: -- Summary: Testing Window Top-N after Windowing TVF Key: FLINK-23751 URL: https://issues.apache.org/jira/browse/FLINK-23751 Project: Flink Issue Type: Sub-task Components: Tests Reporter: JING ZHANG Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23750) Add document for Window Top-N after Windowing TVF
JING ZHANG created FLINK-23750: -- Summary: Add document for Window Top-N after Windowing TVF Key: FLINK-23750 URL: https://issues.apache.org/jira/browse/FLINK-23750 Project: Flink Issue Type: Sub-task Components: Documentation Reporter: JING ZHANG Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23749) Testing Window Join
JING ZHANG created FLINK-23749: -- Summary: Testing Window Join Key: FLINK-23749 URL: https://issues.apache.org/jira/browse/FLINK-23749 Project: Flink Issue Type: Sub-task Components: Tests Reporter: JING ZHANG Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23748) Testing session window
JING ZHANG created FLINK-23748: -- Summary: Testing session window Key: FLINK-23748 URL: https://issues.apache.org/jira/browse/FLINK-23748 Project: Flink Issue Type: Sub-task Components: Tests Reporter: JING ZHANG Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23747) Testing Window TVF offset
JING ZHANG created FLINK-23747: -- Summary: Testing Window TVF offset Key: FLINK-23747 URL: https://issues.apache.org/jira/browse/FLINK-23747 Project: Flink Issue Type: Sub-task Components: Tests Reporter: JING ZHANG Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23746) SuccessAfterNetworkBuffersFailureITCase.testSuccessfulProgramAfterFailure fails due to AskTimeoutException
Xintong Song created FLINK-23746: Summary: SuccessAfterNetworkBuffersFailureITCase.testSuccessfulProgramAfterFailure fails due to AskTimeoutException Key: FLINK-23746 URL: https://issues.apache.org/jira/browse/FLINK-23746 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.14.0 Reporter: Xintong Song Fix For: 1.14.0 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22020=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=10401 {code} Aug 12 23:01:56 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 33.224 s <<< FAILURE! - in org.apache.flink.test.misc.SuccessAfterNetworkBuffersFailureITCase Aug 12 23:01:56 [ERROR] testSuccessfulProgramAfterFailure Time elapsed: 20.554 s <<< ERROR! Aug 12 23:01:56 org.apache.flink.runtime.client.JobExecutionException: Job execution failed. Aug 12 23:01:56 at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) Aug 12 23:01:56 at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) Aug 12 23:01:56 at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:250) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) Aug 12 23:01:56 at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) Aug 12 23:01:56 at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) Aug 12 23:01:56 at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) Aug 12 23:01:56 at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) Aug 12 23:01:56 at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) Aug 12 23:01:56 at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) Aug 12 23:01:56 at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47) Aug 12 23:01:56 at akka.dispatch.OnComplete.internal(Future.scala:300) Aug 12 23:01:56 at akka.dispatch.OnComplete.internal(Future.scala:297) Aug 12 23:01:56 at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) Aug 12 23:01:56 at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) Aug 12 23:01:56 at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) Aug 12 23:01:56 at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) Aug 12 23:01:56 at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) Aug 12 23:01:56 at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) Aug 12 23:01:56 at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) Aug 12 23:01:56 at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) Aug 12 23:01:56 at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621) Aug 12 23:01:56 at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24) Aug 12 23:01:56 at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23) Aug 12 23:01:56 at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532) Aug 12 23:01:56 at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29) Aug 12 23:01:56 at
[jira] [Created] (FLINK-23745) Python test_keyed_co_process fails on azure
Xintong Song created FLINK-23745: Summary: Python test_keyed_co_process fails on azure Key: FLINK-23745 URL: https://issues.apache.org/jira/browse/FLINK-23745 Project: Flink Issue Type: Bug Components: API / Python Affects Versions: 1.14.0 Reporter: Xintong Song Fix For: 1.14.0 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22020=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=21602 {code} Aug 12 22:44:38 === FAILURES === Aug 12 22:44:38 __ StreamingModeDataStreamTests.test_keyed_co_process __ Aug 12 22:44:38 Aug 12 22:44:38 self = Aug 12 22:44:38 Aug 12 22:44:38 def test_keyed_co_process(self): Aug 12 22:44:38 ds1 = self.env.from_collection([("a", 1), ("b", 2), ("c", 3)], Aug 12 22:44:38 type_info=Types.ROW([Types.STRING(), Types.INT()])) Aug 12 22:44:38 ds2 = self.env.from_collection([("b", 2), ("c", 3), ("d", 4)], Aug 12 22:44:38 type_info=Types.ROW([Types.STRING(), Types.INT()])) Aug 12 22:44:38 ds1 = ds1.assign_timestamps_and_watermarks( Aug 12 22:44:38 WatermarkStrategy.for_monotonous_timestamps().with_timestamp_assigner( Aug 12 22:44:38 SecondColumnTimestampAssigner())) Aug 12 22:44:38 ds2 = ds2.assign_timestamps_and_watermarks( Aug 12 22:44:38 WatermarkStrategy.for_monotonous_timestamps().with_timestamp_assigner( Aug 12 22:44:38 SecondColumnTimestampAssigner())) Aug 12 22:44:38 ds1.connect(ds2) \ Aug 12 22:44:38 .key_by(lambda x: x[0], lambda x: x[0]) \ Aug 12 22:44:38 .process(MyKeyedCoProcessFunction()) \ Aug 12 22:44:38 .map(lambda x: Row(x[0], x[1] + 1)) \ Aug 12 22:44:38 .add_sink(self.test_sink) Aug 12 22:44:38 self.env.execute('test_keyed_co_process_function') Aug 12 22:44:38 results = self.test_sink.get_results(True) Aug 12 22:44:38 expected = ["", Aug 12 22:44:38 "", Aug 12 22:44:38 "", Aug 12 22:44:38 "", Aug 12 22:44:38 "", Aug 12 22:44:38 "", Aug 12 22:44:38 ""] Aug 12 22:44:38 > self.assert_equals_sorted(expected, results) Aug 12 22:44:38 Aug 12 22:44:38 
pyflink/datastream/tests/test_data_stream.py:211: Aug 12 22:44:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Aug 12 22:44:38 pyflink/datastream/tests/test_data_stream.py:61: in assert_equals_sorted Aug 12 22:44:38 self.assertEqual(expected, actual) Aug 12 22:44:38 E AssertionError: Lists differ: ["", ""] != ["", "", ""] Aug 12 22:44:38 E Aug 12 22:44:38 E Second list contains 1 additional elements. Aug 12 22:44:38 E First extra element 7: Aug 12 22:44:38 E "" Aug 12 22:44:38 E Aug 12 22:44:38 E ["", Aug 12 22:44:38 E "", Aug 12 22:44:38 E "", Aug 12 22:44:38 E "", Aug 12 22:44:38 E "", Aug 12 22:44:38 E "", Aug 12 22:44:38 E + "", Aug 12 22:44:38 E ""] Aug 12 22:44:38 BatchModeDataStreamTests.test_keyed_co_process Aug 12 22:44:38 Aug 12 22:44:38 self = Aug 12 22:44:38 Aug 12 22:44:38 def test_keyed_co_process(self): Aug 12 22:44:38 ds1 = self.env.from_collection([("a", 1), ("b", 2), ("c", 3)], Aug 12 22:44:38 type_info=Types.ROW([Types.STRING(), Types.INT()])) Aug 12 22:44:38 ds2 = self.env.from_collection([("b", 2), ("c", 3), ("d", 4)], Aug 12 22:44:38 type_info=Types.ROW([Types.STRING(), Types.INT()])) Aug 12 22:44:38 ds1 = ds1.assign_timestamps_and_watermarks( Aug 12 22:44:38 WatermarkStrategy.for_monotonous_timestamps().with_timestamp_assigner( Aug 12 22:44:38 SecondColumnTimestampAssigner())) Aug 12 22:44:38 ds2 = ds2.assign_timestamps_and_watermarks( Aug 12 22:44:38 WatermarkStrategy.for_monotonous_timestamps().with_timestamp_assigner( Aug 12 22:44:38 SecondColumnTimestampAssigner())) Aug 12 22:44:38 ds1.connect(ds2) \ Aug 12 22:44:38 .key_by(lambda x: x[0], lambda x: x[0]) \ Aug 12 22:44:38 .process(MyKeyedCoProcessFunction()) \ Aug 12 22:44:38 .map(lambda x: Row(x[0], x[1] + 1)) \ Aug 12 22:44:38 .add_sink(self.test_sink) Aug 12 22:44:38 self.env.execute('test_keyed_co_process_function') Aug 12 22:44:38 results = self.test_sink.get_results(True) Aug 12 22:44:38 expected = ["", Aug 12 22:44:38 "", Aug 12 22:44:38 "", Aug 
12 22:44:38
Re: Feature Freeze and Release Testing for 1.14
Thanks for the information, Joe, Dawid and Xintong! I've created the ticket for testing the fine-grained resource management and updated the wiki page.

Best,
Yangze Guo

On Thu, Aug 12, 2021 at 4:59 PM Xintong Song wrote:
> Hi devs,
>
> We are approaching the feature freeze of release 1.14. There are a few things that we'd like to bring to your attention.
>
> *1. Feature Freeze Time*
>
> As we agreed, the feature freeze will happen on *August 16, Monday (end of day CEST)*.
>
> Here we quote Robert & Dian [1] for what a Feature Freeze means:
>
> > *B) What does feature freeze mean?*
> > After the feature freeze, no new features are allowed to be merged to master. Only bug fixes and documentation improvements. The release managers will revert new feature commits after the feature freeze.
> > Rationale: The goal of the feature freeze phase is to improve the system stability by addressing known bugs. New features tend to introduce new instabilities, which would prolong the release process. If you need to merge a new feature after the freeze, please open a discussion on the dev@ list. If there are no objections by a PMC member within 48 (workday) hours, the feature can be merged.
>
> *2. Get prepared for the release testing*
>
> During the release testing phase, we will have 1 or 2 syncs per week. The first release testing sync will be on *August 17, 9am CEST*. Similar to the previous bi-weekly syncs, everyone is welcome to join the meeting (the link can be found on the release wiki page [2]).
>
> In preparation for the release testing, we'd like to ask all contributors who have contributed major features to this release to do the following.
> - Open JIRA tickets for creating *documentation* for new features. Tickets should be opened with Priority Blocker and FixVersion 1.14.0.
> - Open JIRA tickets for *testing* new features. Tickets should be opened with Priority Blocker, FixVersion 1.14.0 and Label release-testing.
> - For features listed in the release wiki page [2], please also update the page with related documentation and testing tickets. If a feature does not need documentation / testing tasks, please comment explicitly.
>
> Thank you~
> Joe, Dawid & Xintong
>
> [1] https://lists.apache.org/thread.html/ra0ae9d8fb6ae1a3c031dc50368eaf000a023b8c78e48797611a882e8%40%3Cdev.flink.apache.org%3E
> [2] https://cwiki.apache.org/confluence/display/FLINK/1.14+Release
[jira] [Created] (FLINK-23744) CliClientTest.testCancelExecutionInteractiveMode fails on azure
Xintong Song created FLINK-23744: Summary: CliClientTest.testCancelExecutionInteractiveMode fails on azure Key: FLINK-23744 URL: https://issues.apache.org/jira/browse/FLINK-23744 Project: Flink Issue Type: Bug Components: Table SQL / Client Affects Versions: 1.14.0 Reporter: Xintong Song Fix For: 1.14.0 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21983=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=9760 {code} Aug 12 13:14:02 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.496 s <<< FAILURE! - in org.apache.flink.table.client.cli.CliClientTest Aug 12 13:14:02 [ERROR] testCancelExecutionInteractiveMode Time elapsed: 0.078 s <<< FAILURE! Aug 12 13:14:02 java.lang.AssertionError Aug 12 13:14:02 at org.junit.Assert.fail(Assert.java:87) Aug 12 13:14:02 at org.junit.Assert.assertTrue(Assert.java:42) Aug 12 13:14:02 at org.junit.Assert.assertTrue(Assert.java:53) Aug 12 13:14:02 at org.apache.flink.table.client.cli.CliClientTest.testCancelExecutionInteractiveMode(CliClientTest.java:335) Aug 12 13:14:02 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Aug 12 13:14:02 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) Aug 12 13:14:02 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Aug 12 13:14:02 at java.lang.reflect.Method.invoke(Method.java:498) Aug 12 13:14:02 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) Aug 12 13:14:02 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Aug 12 13:14:02 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) Aug 12 13:14:02 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) Aug 12 13:14:02 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) Aug 12 13:14:02 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) Aug 12 13:14:02 at java.util.concurrent.FutureTask.run(FutureTask.java:266) Aug 12 13:14:02 at java.lang.Thread.run(Thread.java:748) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23743) Test the fine-grained resource management
Yangze Guo created FLINK-23743: -- Summary: Test the fine-grained resource management Key: FLINK-23743 URL: https://issues.apache.org/jira/browse/FLINK-23743 Project: Flink Issue Type: Improvement Reporter: Yangze Guo Fix For: 1.14.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23742) test_keyed_co_process test failed in py36 and py37
Huang Xingbo created FLINK-23742: Summary: test_keyed_co_process test failed in py36 and py37 Key: FLINK-23742 URL: https://issues.apache.org/jira/browse/FLINK-23742 Project: Flink Issue Type: Bug Components: API / Python Affects Versions: 1.14.0 Reporter: Huang Xingbo https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22020=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23741) Waiting for final checkpoint can deadlock job
Piotr Nowojski created FLINK-23741: -- Summary: Waiting for final checkpoint can deadlock job Key: FLINK-23741 URL: https://issues.apache.org/jira/browse/FLINK-23741 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing, Runtime / Task Affects Versions: 1.14.0 Reporter: Piotr Nowojski With {{ENABLE_CHECKPOINTS_AFTER_TASKS_FINISH}} enabled, the final checkpoint can deadlock (or time out after a very long time) if there is a race condition between selecting the tasks to trigger the checkpoint on and tasks finishing. FLINK-21246 was supposed to handle this, but it doesn't work as expected, because the futures from org.apache.flink.runtime.taskexecutor.TaskExecutor#triggerCheckpoint and org.apache.flink.streaming.runtime.tasks.StreamTask#triggerCheckpointAsync are not linked together: TaskExecutor#triggerCheckpoint reports that the checkpoint has been successfully triggered, while the {{StreamTask}} might actually have already finished. -- This message was sent by Atlassian Jira (v8.3.4#803005)
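The unlinked-futures shape described in this ticket can be illustrated with a small, self-contained sketch (plain Python `concurrent.futures`, not Flink code; all names here are hypothetical):

```python
from concurrent.futures import Future

def trigger_unlinked(task_ack: Future) -> Future:
    """Buggy shape: the trigger future completes immediately,
    regardless of whether the task-side future ever does."""
    trigger = Future()
    trigger.set_result("triggered")  # reports success too early
    return trigger

def trigger_linked(task_ack: Future) -> Future:
    """Linked shape: the trigger future only completes when the
    task-side future does, forwarding its result or failure."""
    trigger = Future()

    def forward(f: Future) -> None:
        exc = f.exception()
        if exc is not None:
            trigger.set_exception(exc)
        else:
            trigger.set_result(f.result())

    task_ack.add_done_callback(forward)
    return trigger

task_ack = Future()
early = trigger_unlinked(task_ack)
late = trigger_linked(task_ack)
assert early.done() and not late.done()  # "success" reported before the task acked

# The task finishes before the checkpoint is actually triggered;
# only the linked version surfaces the failure to the caller.
task_ack.set_exception(RuntimeError("task already finished"))
assert isinstance(late.exception(), RuntimeError)
```

The `early` future is the analogue of TaskExecutor#triggerCheckpoint reporting success on its own; the `late` future shows what linking to the task-side future would give the checkpoint coordinator.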
[jira] [Created] (FLINK-23740) SQL Full Outer Join bug
Fu Kai created FLINK-23740: -- Summary: SQL Full Outer Join bug Key: FLINK-23740 URL: https://issues.apache.org/jira/browse/FLINK-23740 Project: Flink Issue Type: Bug Components: Table SQL / Runtime Affects Versions: 1.13.2, 1.13.1 Reporter: Fu Kai Hi team, We encountered an issue with FULL OUTER JOIN in Flink SQL, which happens occasionally, at very low probability, where join output records cannot be correctly updated. We cannot locate the issue for now by glancing at the code of [StreamingJoinOperator.|https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/StreamingJoinOperator.java#L198] It cannot be stably reproduced and it happens with massive data volume. The reason we suspect a FULL OUTER JOIN problem rather than something else like LEFT OUTER JOIN is that the issue only arises after we introduced FULL OUTER JOIN into the join flow. The queries we are using are like the following. There are two join code pieces: the first one contains solely left joins (though nested) and no issue was detected there; the second one contains both left and full outer joins, and the problem is that sometimes an update from the left table A (and other tables before the full outer join operator) is not reflected in the final output. We suspect the introduction of the full outer join caused the problem, though at a very low probability (~10 out of 30 million). The bug could also be something else; the FULL OUTER JOIN suspicion is just based on our current experiments and observations.
{code:java}
create table A (
  k1 int, k2 int, k3 int, k4 int, k5 int,
  PRIMARY KEY (k1, k2, k3, k4, k5) NOT ENFORCED
) WITH ();

create table B (
  k1 int, k2 int, k3 int,
  PRIMARY KEY (k1, k2, k3) NOT ENFORCED
) WITH ();

create table C (
  k1 int, k2 int, k3 int,
  PRIMARY KEY (k1, k2, k3) NOT ENFORCED
) WITH ();

create table D (
  k1 int, k2 int,
  PRIMARY KEY (k1, k2) NOT ENFORCED
) WITH ();

-- query with left joins only, no issue detected
select *
from A
left outer join (
  select * from B
  left outer join C
    on B.k1 = C.k1 and B.k2 = C.k2 and B.k3 = C.k3
) as BC
  on A.k1 = BC.k1 and A.k2 = BC.k2 and A.k3 = BC.k3
left outer join D
  on A.k1 = D.k1 and A.k2 = D.k2;

-- query with a full outer join combined with left outer joins; record updates
-- from the left table A sometimes cannot be reflected in the final output
select *
from A
left outer join (
  select * from B
  full outer join C
    on B.k1 = C.k1 and B.k2 = C.k2 and B.k3 = C.k3
) as BC
  on A.k1 = BC.k1 and A.k2 = BC.k2 and A.k3 = BC.k3
left outer join D
  on A.k1 = D.k1 and A.k2 = D.k2;
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23739) PrintTableSink does not implement SupportsPartitioning interface
Xianxun Ye created FLINK-23739: -- Summary: PrintTableSink does not implement SupportsPartitioning interface Key: FLINK-23739 URL: https://issues.apache.org/jira/browse/FLINK-23739 Project: Flink Issue Type: Improvement Components: Table SQL / API Affects Versions: 1.12.4 Reporter: Xianxun Ye {code:java}
tEnv.executeSql(
    "CREATE TABLE PrintTable (name STRING, score INT, da STRING, hr STRING)\n"
        + "PARTITIONED BY (da, hr)\n"
        + "WITH (\n"
        + "  'connector' = 'print'\n"
        + ")");
tEnv.executeSql(
    "INSERT INTO PrintTable SELECT 'n1' as name, 1 as score, '2021-08-12' as da, '11' as hr");
{code} Printing the records of a partitioned table is currently not supported: {code:java}
Exception in thread "main" org.apache.flink.table.api.TableException: Table 'default_catalog.default_database.PrintTable' is a partitioned table, but the underlying DynamicTableSink doesn't implement the SupportsPartitioning interface.
 at org.apache.flink.table.planner.sinks.DynamicSinkUtils.validatePartitioning(DynamicSinkUtils.java:345)
 at org.apache.flink.table.planner.sinks.DynamicSinkUtils.prepareDynamicSink(DynamicSinkUtils.java:260)
 at org.apache.flink.table.planner.sinks.DynamicSinkUtils.toRel(DynamicSinkUtils.java:87)
{code} `org.apache.flink.table.factories.PrintTableSinkFactory$PrintSink` and `org.apache.flink.table.factories.BlackHoleTableSinkFactory$BlackHoleSink` should implement the `SupportsPartitioning` interface. -- This message was sent by Atlassian Jira (v8.3.4#803005)
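The planner's ability check that produces the exception above can be mirrored in a minimal, self-contained Python analogue (all class and function names here are hypothetical stand-ins, not Flink's actual classes):

```python
class SupportsPartitioning:
    """Ability marker: the sink can receive static partition values."""
    def apply_static_partition(self, partition: dict) -> None:
        raise NotImplementedError

class PrintSink:
    """Stand-in for a sink that does not declare the ability."""

class PartitionedPrintSink(SupportsPartitioning):
    """Stand-in for a sink that does declare it."""
    def apply_static_partition(self, partition: dict) -> None:
        self.partition = partition

def validate_partitioning(sink: object, is_partitioned: bool) -> None:
    """Sketch of the validation step: a partitioned table requires
    a sink that declares the partitioning ability."""
    if is_partitioned and not isinstance(sink, SupportsPartitioning):
        raise TypeError(
            "partitioned table, but the underlying sink doesn't "
            "implement the SupportsPartitioning interface")

validate_partitioning(PartitionedPrintSink(), is_partitioned=True)  # passes
try:
    validate_partitioning(PrintSink(), is_partitioned=True)
    error = ""
except TypeError as e:
    error = str(e)  # the rejection the ticket asks to remove
```

Implementing the ability on the print/blackhole sinks would make the first call's path apply to them as well, which is exactly what the ticket proposes.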
[jira] [Created] (FLINK-23738) Hide any configuration, API or docs
Roman Khachatryan created FLINK-23738: - Summary: Hide any configuration, API or docs Key: FLINK-23738 URL: https://issues.apache.org/jira/browse/FLINK-23738 Project: Flink Issue Type: Improvement Components: Runtime / State Backends Affects Versions: 1.14.0 Reporter: Roman Khachatryan Fix For: 1.14.0 As the feature will not make it to the upcoming 1.14 release, hide the related config options like CheckpointingOptions.ENABLE_STATE_CHANGE_LOG, API (e.g. StreamExecutionEnvironment.enableChangelogStateBackend). Also check Python and Scala APIs and the documentation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23737) Ensure publish-to-dockerhub.sh only publish specific flink versions
Yun Tang created FLINK-23737: Summary: Ensure publish-to-dockerhub.sh only publish specific flink versions Key: FLINK-23737 URL: https://issues.apache.org/jira/browse/FLINK-23737 Project: Flink Issue Type: New Feature Components: Build System Reporter: Yun Tang The current publish-to-dockerhub.sh at https://github.com/apache/flink-docker/blob/master/publish-to-dockerhub.sh actually publishes images for all currently maintained Flink versions. However, this is not necessary; the release manager should only publish the specific version he needs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
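The proposed behaviour can be sketched as a small version-selection helper (a hypothetical illustration; the version list and function name are assumptions, not the real publish-to-dockerhub.sh contents):

```python
# Hypothetical list of currently maintained versions.
MAINTAINED_VERSIONS = ["1.12.5", "1.13.2", "1.14.0"]

def versions_to_publish(requested=None):
    """Return the versions to publish; fail fast on an unknown request."""
    if requested is None:
        # Current behaviour: publish every maintained version.
        return list(MAINTAINED_VERSIONS)
    if requested not in MAINTAINED_VERSIONS:
        raise ValueError(f"unknown Flink version: {requested}")
    # Proposed behaviour: publish only the explicitly requested version.
    return [requested]
```

With an explicit argument, only that version's images would be built and pushed; without one, the script could either keep today's publish-all behaviour or refuse to run.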
[jira] [Created] (FLINK-23736) Delete unnecessary test line in page `Deployment/Resource Providers/Yarn`
Liebing Yu created FLINK-23736: -- Summary: Delete unnecessary test line in page `Deployment/Resource Providers/Yarn` Key: FLINK-23736 URL: https://issues.apache.org/jira/browse/FLINK-23736 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 1.13.2 Reporter: Liebing Yu The page `Deployment/Resource Providers/Yarn` contains the following line: {code:java} [测试](\{{< downloads >}}) {code} I think it was introduced by mistake ("测试" is Chinese for "test"). It affects: [1. YARN | Apache Flink|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/yarn/] [2. YARN | Apache Flink|https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/yarn/] -- This message was sent by Atlassian Jira (v8.3.4#803005)
Feature Freeze and Release Testing for 1.14
Hi devs,

We are approaching the feature freeze of release 1.14. There are a few things that we'd like to bring to your attention.

*1. Feature Freeze Time*

As we agreed, the feature freeze will happen on *August 16, Monday (end of day CEST)*.

Here we quote Robert & Dian [1] for what a Feature Freeze means:

> *B) What does feature freeze mean?*
> After the feature freeze, no new features are allowed to be merged to master. Only bug fixes and documentation improvements. The release managers will revert new feature commits after the feature freeze.
> Rationale: The goal of the feature freeze phase is to improve the system stability by addressing known bugs. New features tend to introduce new instabilities, which would prolong the release process. If you need to merge a new feature after the freeze, please open a discussion on the dev@ list. If there are no objections by a PMC member within 48 (workday) hours, the feature can be merged.

*2. Get prepared for the release testing*

During the release testing phase, we will have 1 or 2 syncs per week. The first release testing sync will be on *August 17, 9am CEST*. Similar to the previous bi-weekly syncs, everyone is welcome to join the meeting (the link can be found on the release wiki page [2]).

In preparation for the release testing, we'd like to ask all contributors who have contributed major features to this release to do the following.
- Open JIRA tickets for creating *documentation* for new features. Tickets should be opened with Priority Blocker and FixVersion 1.14.0.
- Open JIRA tickets for *testing* new features. Tickets should be opened with Priority Blocker, FixVersion 1.14.0 and Label release-testing.
- For features listed in the release wiki page [2], please also update the page with related documentation and testing tickets. If a feature does not need documentation / testing tasks, please comment explicitly.

Thank you~
Joe, Dawid & Xintong

[1] https://lists.apache.org/thread.html/ra0ae9d8fb6ae1a3c031dc50368eaf000a023b8c78e48797611a882e8%40%3Cdev.flink.apache.org%3E
[2] https://cwiki.apache.org/confluence/display/FLINK/1.14+Release
[jira] [Created] (FLINK-23735) Migrate BufferedUpsertSinkFunction to FLIP-143
Fabian Paul created FLINK-23735: --- Summary: Migrate BufferedUpsertSinkFunction to FLIP-143 Key: FLINK-23735 URL: https://issues.apache.org/jira/browse/FLINK-23735 Project: Flink Issue Type: Sub-task Reporter: Fabian Paul The BufferedUpsertSinkFunction still uses the old sink interfaces and relies on the old Kafka DataStream connector, FlinkKafkaProducer. We need to migrate it to the new Sink API to also leverage the new KafkaSink connector and finally deprecate the FlinkKafkaProducer and everything related to it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23734) Migrate ComponentFactory to the new Factory stack
Ingo Bürk created FLINK-23734: - Summary: Migrate ComponentFactory to the new Factory stack Key: FLINK-23734 URL: https://issues.apache.org/jira/browse/FLINK-23734 Project: Flink Issue Type: Improvement Components: Table SQL / API Reporter: Ingo Bürk Assignee: Ingo Bürk -- This message was sent by Atlassian Jira (v8.3.4#803005)