[jira] [Created] (FLINK-29091) Correct RAND and RAND_INTEGER function to evaluate once at query-level for batch mode
lincoln lee created FLINK-29091: --- Summary: Correct RAND and RAND_INTEGER function to evaluate once at query-level for batch mode Key: FLINK-29091 URL: https://issues.apache.org/jira/browse/FLINK-29091 Project: Flink Issue Type: Bug Components: Table SQL / Planner Reporter: lincoln lee RAND and RAND_INTEGER are dynamic functions; they should be evaluated only once at query level (not per record) in batch mode. FLINK-21713 made a similar fix for the temporal functions. Note: this is a breaking change for batch jobs -- This message was sent by Atlassian Jira (v8.20.10#820010)
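A minimal Python sketch (illustration only, not Flink code) of the difference between the current per-record evaluation and the intended query-level evaluation described in the ticket:

```python
import random

def rand_per_record(rows, seed=7):
    # Current (buggy) batch behaviour: the dynamic function is re-evaluated
    # for every record, so each row sees a different value.
    rng = random.Random(seed)
    return [rng.random() for _ in rows]

def rand_per_query(rows, seed=7):
    # Intended batch behaviour: evaluate once at query level and reuse the
    # result for every record.
    rng = random.Random(seed)
    value = rng.random()
    return [value for _ in rows]
```

With query-level evaluation every record of the batch sees the same random value, which is what makes the fix a behavioural (breaking) change for existing batch jobs.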
[upsert-kafka][SQL] Why Upsert-Kafka SQL connector is a different connector from the Kafka SQL Connector
Hi all, I noticed there is an Upsert-Kafka SQL Connector[1] and the Kafka SQL Connector[2]; my question is why we implement them as two different connectors. For example, why not add an `upsert` config option to the Kafka SQL connector to specify whether the table enables upsert mode or not? Is there a special consideration that requires a different connector name like `upsert-kafka` and a separate DynamicSource/SinkFactory? Thank you in advance~ Cheers, Yufei - [1] : https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/table/upsert-kafka/ - [2]: https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/table/kafka
[jira] [Created] (FLINK-29090) Fix the code gen for ColumnarMapData and ColumnarArrayData
Danny Chen created FLINK-29090: -- Summary: Fix the code gen for ColumnarMapData and ColumnarArrayData Key: FLINK-29090 URL: https://issues.apache.org/jira/browse/FLINK-29090 Project: Flink Issue Type: Bug Components: Table SQL / Runtime Affects Versions: 1.16.0 Reporter: Danny Chen Fix For: 1.16.0 Attachments: image-2022-08-24-10-15-11-824.png !image-2022-08-24-10-15-11-824.png|width=589,height=284! Currently, the code generation for {{MapData}} assumes that it is a {{{}GenericMapData{}}}, but the newly introduced {{ColumnarMapData}} and {{ColumnarArrayData}} cannot be cast to {{{}GenericMapData{}}}. {{ColumnarMapData}} and {{ColumnarArrayData}} were introduced in FLINK-24614 [https://github.com/apache/flink/commit/5c731a37e1a8f71f9c9e813f6c741a1e203fa1a3]. How to reproduce: {code:sql} create table parquet_source ( f_map map ) with ( 'connector' = 'filesystem', 'format' = 'parquet' ); select f_map['k1'] from parquet_source; {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
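The failure mode can be sketched in plain Python (class names mirror, but do not reproduce, the Flink types): generated code that unconditionally downcasts to one concrete implementation breaks as soon as a second implementation of the interface appears, while dispatching through the common interface works for both.

```python
class MapData:
    """Common interface, analogous to Flink's MapData (names are illustrative)."""
    def get(self, key):
        raise NotImplementedError

class GenericMapData(MapData):
    def __init__(self, data):
        self._data = dict(data)

    def get(self, key):
        return self._data[key]

class ColumnarMapData(MapData):
    """Stores keys and values in separate columns instead of a dict."""
    def __init__(self, keys, values):
        self._keys, self._values = list(keys), list(values)

    def get(self, key):
        return self._values[self._keys.index(key)]

def lookup_buggy(map_data, key):
    # Mirrors the buggy codegen: unconditionally "casts" to GenericMapData,
    # which fails for any other MapData implementation.
    if not isinstance(map_data, GenericMapData):
        raise TypeError("cannot cast to GenericMapData")
    return map_data._data[key]

def lookup_fixed(map_data, key):
    # Fix: dispatch through the common interface; works for any MapData.
    return map_data.get(key)
```

The actual fix is in the generated Java, but the shape of the bug is the same: a hard cast where interface dispatch was needed.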
[jira] [Created] (FLINK-29089) Error when run test case in Windows
hjw created FLINK-29089: --- Summary: Error when run test case in Windows Key: FLINK-29089 URL: https://issues.apache.org/jira/browse/FLINK-29089 Project: Flink Issue Type: Improvement Components: Deployment / Kubernetes Affects Versions: 1.15.1 Environment: deploy env: Windows10 flink version:1.15 Reporter: hjw When I run mvn clean install, it runs the Flink test cases. However, I get these errors: [ERROR] Failures: [ERROR] KubernetesClusterDescriptorTest.testDeployApplicationClusterWithNonLocalSchema:155 Previous method call should have failed but it returned: org.apache.flink.kubernetes.KubernetesClusterDescriptor$$Lambda$839/1619964974@70e5737f [ERROR] AbstractKubernetesParametersTest.testGetLocalHadoopConfigurationDirectoryFromHadoop1HomeEnv:132->runTestWithEmptyEnv:149->lambda$testGetLocalHadoopConfigurationDirectoryFromHadoop1HomeEnv$3:141 Expected: is "C:\Users\10104\AppData\Local\Temp\junit5662202040601670287/conf" but: was "C:\Users\10104\AppData\Local\Temp\junit5662202040601670287\conf" [ERROR] AbstractKubernetesParametersTest.testGetLocalHadoopConfigurationDirectoryFromHadoop2HomeEnv:117->runTestWithEmptyEnv:149->lambda$testGetLocalHadoopConfigurationDirectoryFromHadoop2HomeEnv$2:126 Expected: is "C:\Users\10104\AppData\Local\Temp\junit7094401822178578683/etc/hadoop" but: was "C:\Users\10104\AppData\Local\Temp\junit7094401822178578683\etc\hadoop" [ERROR] KubernetesUtilsTest.testLoadPodFromTemplateWithNonExistPathShouldFail:110 Expected: Expected error message is "Pod template file /path/of/non-exist.yaml does not exist." but: The throwable does not contain the expected error message "Pod template file /path/of/non-exist.yaml does not exist." I judge that the errors occur because of the different file-system (Unix, Windows, etc.) path separators. -- This message was sent by Atlassian Jira (v8.20.10#820010)
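The failing assertions above compare a path built with a hard-coded '/' against one produced by the platform's file system ('\\' on Windows). A small Python sketch of the separator-agnostic comparison such tests typically need (illustration only; the actual fix would be in the Java test code):

```python
import os.path

def expected_conf_dir(base):
    # Build the expected path with os.path.join instead of a hard-coded '/',
    # so the separator matches the platform the test runs on.
    return os.path.join(base, "conf")

def paths_equal(a, b):
    # Normalize both paths before comparing; on Windows, normpath also
    # converts '/' to '\\', so "dir/conf" and "dir\\conf" compare equal.
    return os.path.normpath(a) == os.path.normpath(b)
```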
Energy/performance research questions
Hi everyone, We are a team of researchers at Boston University investigating the energy and performance behavior of open-source stream processing platforms. We have started looking into Flink, and we wanted to reach out to the community to see if anyone has tried to optimize the underlying OS/VM/container to achieve these outcomes. Some of the specific aspects we would like to explore include the following: What Linux kernel configurations are used? Has any OS tuning been done? What workloads are used to evaluate performance/efficiency, both for tuning and more generally to evaluate the impact of changes to either the software or hardware? What is considered a baseline network setup, with respect to both hardware and software? Has anyone investigated the policy used in terms of the cpufreq governor ( https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt)? It would be especially helpful to hear from people running Flink in production or offering it as a service. Thank you! Sana
[jira] [Created] (FLINK-29088) Project push down cause the source reuse can not work
Aitozi created FLINK-29088: -- Summary: Project push down cause the source reuse can not work Key: FLINK-29088 URL: https://issues.apache.org/jira/browse/FLINK-29088 Project: Flink Issue Type: Improvement Components: Table SQL / Planner Reporter: Aitozi -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-29087) Jdbc connector sql ITCase failed when run in idea
dalongliu created FLINK-29087: - Summary: Jdbc connector sql ITCase failed when run in idea Key: FLINK-29087 URL: https://issues.apache.org/jira/browse/FLINK-29087 Project: Flink Issue Type: Bug Components: Connectors / JDBC Affects Versions: 1.16.0 Reporter: dalongliu Fix For: 1.16.0 java.lang.NoSuchFieldError: CORRELATE at org.apache.flink.table.planner.hint.FlinkHintStrategies.createHintStrategyTable(FlinkHintStrategies.java:91) at org.apache.flink.table.planner.delegation.PlannerContext.lambda$getSqlToRelConverterConfig$1(PlannerContext.java:288) at java.util.Optional.orElseGet(Optional.java:267) at org.apache.flink.table.planner.delegation.PlannerContext.getSqlToRelConverterConfig(PlannerContext.java:283) at org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:146) at org.apache.flink.table.planner.delegation.PlannerContext.(PlannerContext.java:124) at org.apache.flink.table.planner.delegation.PlannerBase.(PlannerBase.scala:121) at org.apache.flink.table.planner.delegation.StreamPlanner.(StreamPlanner.scala:65) at org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:65) at org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:58) at org.apache.flink.table.api.internal.TableEnvironmentImpl.create(TableEnvironmentImpl.java:308) at org.apache.flink.table.api.TableEnvironment.create(TableEnvironment.java:93) at org.apache.flink.connector.jdbc.catalog.MySqlCatalogITCase.setup(MySqlCatalogITCase.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69) at 
com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38) at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-29086) Fix the Helm chart's Pod env reference
Xin Hao created FLINK-29086: --- Summary: Fix the Helm chart's Pod env reference Key: FLINK-29086 URL: https://issues.apache.org/jira/browse/FLINK-29086 Project: Flink Issue Type: Improvement Components: Kubernetes Operator Reporter: Xin Hao We need to add a `quote` pipeline to the env params reference. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [VOTE] Release 1.15.2, release candidate #2
Thanks everyone. The voting has concluded and the release has been approved. This thread is now closed. On Tue, Aug 23, 2022 at 9:37 AM Konstantin Knauf wrote: > +1 (binding) > > * checked checksum of binaries > * checked signatures of binaries & Maven artifacts > * checked dependency & NOTICE changes > * ran TopSpeedWindowing locally > > Thanks for driving the release, Danny! Good job. > > > Am Di., 23. Aug. 2022 um 04:16 Uhr schrieb Peng Kang > : > > > +1 > > > > -- Forwarded message - > > From: Dawid Wysakowicz > > Date: Mon, Aug 22, 2022 at 7:21 PM > > Subject: Re: [VOTE] Release 1.15.2, release candidate #2 > > To: > > Cc: Danny Cranmer > > > > > > +1 (binding) > > > > - signatures & checksums OK > > - checked changed licenses from 1.15.1 > > - PR OK > > - no excessive or binary files in the source distribution > > > > Best, > > > > Dawid > > > > On 19.08.2022 10:30, Xingbo Huang wrote: > > > +1 (non-binding) > > > > > > - verify signatures and checksums > > > - no binaries found in source archive > > > - reviewed the release note blog > > > - verify python wheel package contents > > > - pip install apache-flink-libraries and apache-flink wheel packages > > > - run the examples from Python Table API tutorial > > > > > > Best, > > > Xingbo > > > > > > Chesnay Schepler 于2022年8月19日周五 15:51写道: > > > > > >> +1 (binding) > > >> > > >> - signatures OK > > >> - all required artifacts on dist.apache.org > > >> - maven artifacts appear complete > > >> - tag exists > > >> - PR OK > > >> - no PaxHeader directories > > >> - no excessive files in the distribution > > >> > > >> On 17/08/2022 19:52, Danny Cranmer wrote: > > >>> Hi everyone, > > >>> > > >>> Please review and vote on the release candidate #2 for the version > > >> 1.15.2, > > >>> as follows: > > >>> [ ] +1, Approve the release > > >>> [ ] -1, Do not approve the release (please provide specific comments) > > >>> > > >>> The complete staging area is available for your review, which > includes: > > >>> > > >>> - 
JIRA release notes [1], > > >>> - the official Apache source release and binary convenience > > releases > > >> to > > >>> be deployed to dist.apache.org [2], which are signed with the > key > > >> with > > >>> fingerprint 125FD8DB [3], > > >>> - all artifacts to be deployed to the Maven Central Repository > > [4], > > >>> - source code tag "release-1.15.2-rc2" [5], > > >>> - website pull request listing the new release and adding > > >> announcement > > >>> blog post [6]. > > >>> > > >>> > > >>> The vote will be open for at least 72 hours. It is adopted by > majority > > >>> approval, with at least 3 PMC affirmative votes. > > >>> > > >>> 1.15.2-RC1 was rejected for 2x issues: > > >>> > > >>> 1. Dist/src archives contained PaxHeader files when > decompressing > > on > > >>> Windows. Root cause was tar default archive format on Mac, fixed > > by > > >> using > > >>> gnu-tar. I will follow up to update the release process to avoid > > >> this issue > > >>> in the future. > > >>> 2. Dist/src archives contained additional files. I had some > > locally > > >>> gitignored files that ended up in the archive. New build used a > > >> fresh clone > > >>> of Flink and I compared the archive contents of 1.15.1 with > > 1.15.2. > > >>> > > >>> > > >>> Thanks, > > >>> Danny Cranmer > > >>> > > >>> [1] > > >>> > > >> > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12351829 > > >>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.15.2-rc2/ > > >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > >>> [4] > > >> > https://repository.apache.org/content/repositories/orgapacheflink-1524 > > >>> [5] https://github.com/apache/flink/tree/release-1.15.2-rc2 > > >>> [6] https://github.com/apache/flink-web/pull/566 > > >>> > > >> > > > > > -- > https://twitter.com/snntrable > https://github.com/knaufk >
[RESULT][VOTE] Release 1.15.2, release candidate #2
I'm happy to announce that we have unanimously approved this release. There are 5 approving votes, 3 of which are binding: * Chesnay Schepler (binding) * Xingbo Huang (non-binding) * Dawid Wysakowicz (binding) * Peng Kang (non-binding) * Konstantin Knauf (binding) There are no disapproving votes. Thank you for verifying the release candidate. I will now proceed to finalize the release and announce it once everything is published. Best Regards Danny Cranmer
[jira] [Created] (FLINK-29085) Add the name for test as hint for BuiltInFunctionTestBase
Aitozi created FLINK-29085: -- Summary: Add the name for test as hint for BuiltInFunctionTestBase Key: FLINK-29085 URL: https://issues.apache.org/jira/browse/FLINK-29085 Project: Flink Issue Type: Improvement Components: Tests Reporter: Aitozi When running tests that extend the {{BuiltInFunctionTestBase}}, I found it hard to distinguish the failed tests. I think it would be easy to add a name prefix to the {{TestItem}} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[SUMMARY] Flink 1.16 release sync of 2022-08-23
I would like to give you a brief update of the Flink 1.16 release sync meeting of 2022-08-23. *Since the feature freeze (9th of August 2022, end of business CEST), we have started release-testing[1] in the last two weeks. Only two tickets have been completed so far. We hope that the testers of the release-testing JIRAs can update the progress in time. Our planned release time is the end of September 2022. Based on this, we plan to finish the release-testing work by the end of August.* *Currently, there are still some critical/blocker stability tickets[2] that need to be resolved. We will cut the 1.16 branch once our CI is stable and the release-testing work has been finished.* For more information about Flink release 1.16, you can refer to https://cwiki.apache.org/confluence/display/FLINK/1.16+Release The next Flink release sync will be on Tuesday the 30th of August at 9am CEST/ 3pm China Standard Time / 7am UTC. The link can be found on the following page https://cwiki.apache.org/confluence/display/FLINK/1.16+Release#id-1.16Release-Syncmeeting . On behalf of all the release managers, best regards, Xingbo [1] https://issues.apache.org/jira/browse/FLINK-28896 [2] https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20issuetype%20%3D%20Bug%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20fixVersion%20%3D%201.16.0%20ORDER%20BY%20summary%20ASC%2C%20priority%20DESC
[DISCUSS] ARM support for Flink
Hi Flinkers, In 2019, we raised a discussion in Flink about "ARM support for Flink"[1]. We received a lot of help and support from the Flink community about introducing an ARM CI system, named "OpenLab"[2], into the Flink community. We eventually set up full-stack regression Flink tests on OpenLab ARM resources, and then posted an email with the test results to the Flink mailing list every day. We've been doing that for almost 2 years. But now, we are sorry to say that OpenLab has reached its EOL; we had to shut it down last month. So, to keep the existing ARM CI working for the Flink community and to help contributors verify their code on ARM, we have decided that we *will donate some ARM resources (Virtual Machines)* to the Flink community to make this happen. Considering that the existing Flink CI/CD has been moved to Azure Pipelines and doesn't use GitHub Actions, and that what we can provide are *ONLY* ARM resources (Virtual Machines), we think the *Flink community is the right party to decide how to use them.* We only give several suggestions here: 1. GitHub Actions self-hosted machines (integrated with our ARM resources) 2. Azure Pipelines self-hosted machines (integrated with our ARM resources) 3. Any ideas from Flinkers? If the community accepts our ARM resources and wants to integrate them with the existing CI/CD in any way, please feel free to ping me about the quota (CPU count, memory size and so on) of the VMs we need to donate. Thank you very much. BR Bo Zhao [1] https://www.mail-archive.com/dev@flink.apache.org/msg27054.html [2] https://openlabtesting.org/
[jira] [Created] (FLINK-29084) Program argument containing # (pound sign) mistakenly truncated in Kubernetes mode
Weike Dong created FLINK-29084: -- Summary: Program argument containing # (pound sign) mistakenly truncated in Kubernetes mode Key: FLINK-29084 URL: https://issues.apache.org/jira/browse/FLINK-29084 Project: Flink Issue Type: Bug Components: Deployment / Kubernetes Affects Versions: 1.15.1, 1.14.5, 1.13.6 Environment: Flink 1.13.6 Native Kubernetes (Application Mode) Reporter: Weike Dong We have found that when submitting jobs in native-Kubernetes mode, the main arguments of the Flink program are truncated if they contain a # character. For example, if we pass 'ab#cd' as the argument for a Flink program, Flink actually gets only 'ab' from the variable `$internal.application.program-args` at runtime. After searching into the code, we found the reason might be that when `org.apache.flink.kubernetes.kubeclient.decorators.FlinkConfMountDecorator#buildAccompanyingKubernetesResources` transforms the Flink config data `Map` into a `ConfigMap`, the fabric8 Kubernetes client converts it to YAML internally, without any escaping. Afterwards, when there is a # character in the YAML line, the decoder treats it as the start of a comment, so the substring after the # character is erroneously ignored. -- This message was sent by Atlassian Jira (v8.20.10#820010)
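The truncation described above can be illustrated with a minimal Python sketch (hand-rolled for illustration; not the fabric8 serializer or Flink's actual decoder): a naive decoder that treats everything after '#' as a comment drops the tail of the argument, which is why the serializer needs to quote or escape such values.

```python
def naive_strip_comment(line):
    # A naive config-line decoder, as described in the ticket: everything
    # from the first '#' onwards is treated as a comment and dropped, even
    # when the '#' is part of a value such as the program argument 'ab#cd'.
    return line.split("#", 1)[0].rstrip()
```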
[jira] [Created] (FLINK-29083) how to split String to Array with scalar functions
Zha Ji created FLINK-29083: -- Summary: how to split String to Array with scalar functions Key: FLINK-29083 URL: https://issues.apache.org/jira/browse/FLINK-29083 Project: Flink Issue Type: New Feature Reporter: Zha Ji like Hive's split('a,b,c,d', ',') -> ["a","b","c","d"] -- This message was sent by Atlassian Jira (v8.20.10#820010)
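For reference, the requested Hive-like behaviour sketched in Python (note that Hive's split takes a regex pattern as its second argument):

```python
import re

def hive_split(s, pattern):
    # Hive's split(str, pat) splits the string around matches of a regex
    # pattern and returns an array of the parts.
    return re.split(pattern, s)
```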
[jira] [Created] (FLINK-29082) Clean-up Leftovers for changelog pre-uploading files after failover
Yuan Mei created FLINK-29082: Summary: Clean-up Leftovers for changelog pre-uploading files after failover Key: FLINK-29082 URL: https://issues.apache.org/jira/browse/FLINK-29082 Project: Flink Issue Type: Improvement Reporter: Yuan Mei -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [VOTE] Apache Flink Table Store 0.2.0, release candidate #2
+1 (binding) - Checked release notes: *Action Required* - Minor: there are still 25 open issues with fix version marked as 0.2.0 that need to be updated [1] - Checked sums and signatures: *OK* - Checked the jars in the staging repo: *OK* - Checked source distribution doesn't include binaries: *OK* - Maven clean install from source: *OK* - Checked version consistency in pom files: *OK* - Went through the quick start: *OK* - Checked the website updates: *OK* - Minor: left some suggestions, please check. Thanks for driving this release, Jingsong! Best Regards, Yu [1] https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)%20AND%20fixVersion%20%3D%20table-store-0.2.0%20AND%20component%20%3D%20%22Table%20Store%22 On Thu, 18 Aug 2022 at 15:30, Nicholas Jiang wrote: > Hi all! > > +1 for the release (non-binding). I've verified the jar with SQL client > and listed the check items as follows: > > * Compiled the sources and built the source distribution - PASSED > * Ran through Quick Start Guide - PASSED > * Checked Spark 2.3.4&3.3.0 reader and catalog with table store jar - > PASSED > * Checked all NOTICE files - PASSED > > Regards, > Nicholas Jiang > > On 2022/08/17 10:16:54 Jingsong Li wrote: > > Hi everyone, > > > > Please review and vote on the release candidate #2 for the version 0.2.0 > of > > Apache Flink Table Store, as follows: > > > > [ ] +1, Approve the release > > [ ] -1, Do not approve the release (please provide specific comments) > > > > **Release Overview** > > > > As an overview, the release consists of the following: > > a) Table Store canonical source distribution to be deployed to the > > release repository at dist.apache.org > > b) Table Store binary convenience releases to be deployed to the > > release repository at dist.apache.org > > c) Maven artifacts to be deployed to the Maven Central Repository > > > > **Staging Areas to Review** > > > > The staging areas containing the
above mentioned artifacts are as > follows, > > for your review: > > * All artifacts for a) and b) can be found in the corresponding dev > > repository at dist.apache.org [2] > > * All artifacts for c) can be found at the Apache Nexus Repository [3] > > > > All artifacts are signed with the key > > 2C2B6A653B07086B65E4369F7C76245E0A318150 [4] > > > > Other links for your review: > > * JIRA release notes [5] > > * source code tag "release-0.2.0-rc2" [6] > > * PR to update the website Downloads page to include Table Store links > [7] > > > > **Vote Duration** > > > > The voting time will run for at least 72 hours. > > It is adopted by majority approval, with at least 3 PMC affirmative > votes. > > > > Best, > > Jingsong Lee > > > > [1] > https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Table+Store+Release > > [2] > https://dist.apache.org/repos/dist/dev/flink/flink-table-store-0.2.0-rc2/ > > [3] > https://repository.apache.org/content/repositories/orgapacheflink-1523/ > > [4] https://dist.apache.org/repos/dist/release/flink/KEYS > > [5] > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12351570 > > [6] https://github.com/apache/flink-table-store/tree/release-0.2.0-rc2 > > [7] https://github.com/apache/flink-web/pull/562 > > >
Re: [Discuss] Let's Session Cluster JobManager take a breather (FLIP-257: Flink JobManager Process Split)
Hi Zheng, Thanks for the write-up! I tend to agree with Chesnay that this introduces additional complexity to an already complex deployment model. One of the main focuses in this area is to reduce feature sparsity and to have fewer high-quality options. Example efforts are deprecation (and eventual removal) of per-job mode, removal of Mesos RM, ... Let's discuss your points: > This can save some JVM resources and reduce server costs if so, the saving would IMO be negligible; why? - JobMaster is by far the most resource-intensive component inside the JobManager - The CPU / memory ratio of the underlying hypervisor remains the same (or you'd have unused resources on the machine that you still need to pay for) - The most overhead of the JobMaster comes from the JVM itself, not from RM / Dispatcher > More adequate resource utilization Can you elaborate? Is this about sharing TMs between multiple jobs (I'd discourage that for long-running mission-critical workloads)? > Starting Application Mode has a long resource application and waiting (because SessionCluster has already applied for fixed TM and JM resources at startup) This means you have to overprovision your SessionCluster. This goes against resource utilization efforts from the previous point (you're shaving off little resources from JM, but have spare TMs instead, that are the order of magnitude more resource intensive). If you're able to start TMs upfront with the session cluster, you already know you're going to need them. If this is a concern, you could as well start the TMs that will eventually connect to your JM once it starts (you've decided to submit your job) - there might be some enhancements to ApplicationMode needed to make this robust, but efforts in this direction are where the things should IMO be headed. 
As for the resource utilization, the session cluster actually blocks you from leveraging reactive scaling efforts and eventually auto-scaling, because we'd need to enhance Flink surface area with multi-job scheduling capabilities (queues, pre-emptions, priorities between jobs) - I don't think we should ever go in that direction, that's outside Flink's scope. > Poor isolation between JobMaster threads in JobManager: When there are too many jobs, the JobManager is under great pressure. The session mode is mainly designed for interactive workloads but agreed that JM threads might interfere. Still, I fail to see this as a reason for introducing additional complexity because this could be mitigated on the user side (smarter job scheduling, multiple clusters, AM for streaming jobs). > there will inevitably be more rich functions running on JobMaster. This is a separate discussion. So far we were mostly pushing against running against any user code on JM (there are few exceptions already, but any enhancement should be carefully considered) > JobManager's functional responsibilities are too large from the "architecture perspective", it's just a bundle of independent components with clearly defined responsibilities, that makes their coordination simpler and more resource efficient (networking, fewer JVMs - each comes with a significant overhead) -- So far I'm under impression that this actually introduces more issues than it tries to solve. Best, D. On Thu, Aug 18, 2022 at 12:10 PM Zheng Yu Chen wrote: > You're right, this does add to the complexity of their communication > coordination > I can understand what you mean is similar to ngnix, load balancing to > different SessionClusters in the front, rather than one more component. 
In > fact, I have tried this myself, and it seems to solve the problem of high > load on the cluster JM, but it cannot fundamentally solve the following > problems: > > Deploying components is complicated and requires one more nginx and related > configuration. You also need to make sure that your jobs are not assigned > to a busy JobManager. > As my previous reply mentioned, this is a trade-off > solution (after all, you can choose Application Mode, which avoids this > problem) when we need to use SessionCluster for long-running jobs. > Can we think of it like this? > > what do you think ~ > > > Chesnay Schepler wrote on Wed, Aug 17, 2022 at 22:31: > > > To be honest I'm terrified at the idea of splitting the Dispatcher into > > several processes, even more so if this is supposed to be opt-in and > > specific to session mode. > > It would fragment the coordination layer even more than it already is, > > and make ops more complicated (yet another set of processes to monitor, > > configure etc.). > > > > I'm not convinced that this proposal really gets us a lot of benefits; > > and would rather propose that you split your single session cluster into > > multiple session clusters (with the scheduling component in front of it > > to distribute jobs) to even the load. > > > > > The currently idling JobManagers could be utilized to take over some > > of the workload from the leader. > > > >
[jira] [Created] (FLINK-29081) Join Hint cannot be identified by lowercase
xuyang created FLINK-29081: -- Summary: Join Hint cannot be identified by lowercase Key: FLINK-29081 URL: https://issues.apache.org/jira/browse/FLINK-29081 Project: Flink Issue Type: Bug Components: Table SQL / Planner Reporter: xuyang The following SQL can reproduce this bug: select /*+ bRoadCasT(t1) */ * from t1 join t1 as t3 on t1.a = t3.a; -- This message was sent by Atlassian Jira (v8.20.10#820010)
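The expected behaviour — matching hint names case-insensitively — can be sketched as follows (a hypothetical helper, not the planner code; the hint names are taken from the Flink join-hint documentation):

```python
# Join hint names as listed in the Flink SQL documentation. This helper and
# the lookup set are illustrative assumptions, not the planner's actual code.
KNOWN_JOIN_HINTS = {"BROADCAST", "SHUFFLE_HASH", "SHUFFLE_MERGE", "NEST_LOOP"}

def resolve_join_hint(name):
    # Normalize the hint name before lookup so 'bRoadCasT' matches BROADCAST.
    upper = name.upper()
    return upper if upper in KNOWN_JOIN_HINTS else None
```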
Re: [VOTE] Release 1.15.2, release candidate #2
+1 (binding) * checked checksum of binaries * checked signatures of binaries & Maven artifacts * checked dependency & NOTICE changes * ran TopSpeedWindowing locally Thanks for driving the release, Danny! Good job. Am Di., 23. Aug. 2022 um 04:16 Uhr schrieb Peng Kang : > +1 > > -- Forwarded message - > From: Dawid Wysakowicz > Date: Mon, Aug 22, 2022 at 7:21 PM > Subject: Re: [VOTE] Release 1.15.2, release candidate #2 > To: > Cc: Danny Cranmer > > > +1 (binding) > > - signatures & checksums OK > - checked changed licenses from 1.15.1 > - PR OK > - no excessive or binary files in the source distribution > > Best, > > Dawid > > On 19.08.2022 10:30, Xingbo Huang wrote: > > +1 (non-binding) > > > > - verify signatures and checksums > > - no binaries found in source archive > > - reviewed the release note blog > > - verify python wheel package contents > > - pip install apache-flink-libraries and apache-flink wheel packages > > - run the examples from Python Table API tutorial > > > > Best, > > Xingbo > > > > Chesnay Schepler 于2022年8月19日周五 15:51写道: > > > >> +1 (binding) > >> > >> - signatures OK > >> - all required artifacts on dist.apache.org > >> - maven artifacts appear complete > >> - tag exists > >> - PR OK > >> - no PaxHeader directories > >> - no excessive files in the distribution > >> > >> On 17/08/2022 19:52, Danny Cranmer wrote: > >>> Hi everyone, > >>> > >>> Please review and vote on the release candidate #2 for the version > >> 1.15.2, > >>> as follows: > >>> [ ] +1, Approve the release > >>> [ ] -1, Do not approve the release (please provide specific comments) > >>> > >>> The complete staging area is available for your review, which includes: > >>> > >>> - JIRA release notes [1], > >>> - the official Apache source release and binary convenience > releases > >> to > >>> be deployed to dist.apache.org [2], which are signed with the key > >> with > >>> fingerprint 125FD8DB [3], > >>> - all artifacts to be deployed to the Maven Central Repository > [4], > >>> 
- source code tag "release-1.15.2-rc2" [5], > >>> - website pull request listing the new release and adding > >> announcement > >>> blog post [6]. > >>> > >>> > >>> The vote will be open for at least 72 hours. It is adopted by majority > >>> approval, with at least 3 PMC affirmative votes. > >>> > >>> 1.15.2-RC1 was rejected for two issues: > >>> > >>> 1. Dist/src archives contained PaxHeader files when decompressing > on > >>> Windows. Root cause was tar default archive format on Mac, fixed > by > >> using > >>> gnu-tar. I will follow up to update the release process to avoid > >> this issue > >>> in the future. > >>> 2. Dist/src archives contained additional files. I had some > locally > >>> gitignored files that ended up in the archive. New build used a > >> fresh clone > >>> of Flink and I compared the archive contents of 1.15.1 with > 1.15.2. > >>> > >>> > >>> Thanks, > >>> Danny Cranmer > >>> > >>> [1] > >>> > >> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351829 > >>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.15.2-rc2/ > >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS > >>> [4] > >> https://repository.apache.org/content/repositories/orgapacheflink-1524 > >>> [5] https://github.com/apache/flink/tree/release-1.15.2-rc2 > >>> [6] https://github.com/apache/flink-web/pull/566 > >>> > >> > -- https://twitter.com/snntrable https://github.com/knaufk
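The checks the voters ran above (checksums, signatures) and the gnu-tar fix for the RC1 PaxHeader problem can be sketched in a few shell commands. This is an illustration only: the artifact name below is made up, and a real verification would download the artifact, its published `.sha512`/`.asc` files, and the KEYS file from dist.apache.org rather than generating the digest locally.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for a release archive; a real check downloads it from
# https://dist.apache.org/repos/dist/dev/flink/flink-1.15.2-rc2/ instead.
mkdir -p flink-1.15.2/bin
echo "demo payload" > flink-1.15.2/bin/flink

# Force the GNU tar format: macOS bsdtar defaults to a pax-style format
# whose extended headers appeared as PaxHeader files when the RC1
# archives were decompressed on Windows.
tar --format=gnu -czf flink-1.15.2-src.tgz flink-1.15.2

# Checksum check, as in the votes above (the .sha512 is generated here
# for the sketch; normally it is published next to the artifact).
sha512sum flink-1.15.2-src.tgz > flink-1.15.2-src.tgz.sha512
sha512sum -c flink-1.15.2-src.tgz.sha512

# Confirm the archive lists no PaxHeader entries (the RC1 defect).
if tar -tzf flink-1.15.2-src.tgz | grep -q PaxHeader; then
    echo "PaxHeader entries found" >&2
    exit 1
fi

# The signature check uses the published .asc and the KEYS file:
#   gpg --import KEYS
#   gpg --verify flink-1.15.2-src.tgz.asc flink-1.15.2-src.tgz
```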
Re: [DISCUSS] Releasing Flink 1.15.2
Hi Guillaume, as long as those vulnerabilities have not been introduced since Flink 1.15.1, which they haven't, they should not block the release. Best, Konstantin On Tue, Aug 23, 2022 at 10:12, Guillaume Vauvert < guillaume.vauvert@gmail.com> wrote: > Hello all, > > I am wondering if it is safe to release 1.15.2 despite > https://issues.apache.org/jira/browse/FLINK-29065 ? > > Regards, > > Guillaume > > On 8/16/22 12:53, Danny Cranmer wrote: > > Hello all, > > > > I have created a PR [1] to update the documentation to cover the 1.15.x > > stateful upgrades. Would appreciate a review for correctness. I will also > > put a note in the release announcement blog post. > > > > We also have another blocker [2] opened against 1.15.2. I will proceed > with > > the release preparation but am expecting this one to need to go in. > > > > [1] https://github.com/apache/flink/pull/20600 > > [2] https://issues.apache.org/jira/browse/FLINK-28975 > > > > Thanks, > > Danny > > > > On Mon, Aug 15, 2022 at 2:02 PM Timo Walther wrote: > > > >> Thanks Danny. > >> > >> I will merge FLINK-28861 in a couple of minutes to master. I will open a > >> PR for 1.15 shortly. This issue is pretty tricky, we should add a > >> warning to 1.15.0 and 1.15.1 releases as it won't be easy to perform > >> stateful upgrades in between 1.15.x patch versions for pipelines that > >> use Table API. > >> > >> Regards, > >> Timo > >> > >> On 15.08.22 10:42, Danny Cranmer wrote: > >>> Thanks all. > >>> > >>> I can see the final issue has an approved PR now, awaiting merge, > thanks > >>> Timo and Chesnay. In the meantime I will get setup ready to start the > >>> release process. > >>> > >>> Thanks, > >>> > >>> On Fri, Aug 12, 2022 at 10:47 PM Jing Ge wrote: > >>> > Thanks Danny! Strong +1 and looking forward to the 1.15.2 asap. > > Best regards, > Jing > > On Fri, Aug 12, 2022 at 4:25 AM Xingbo Huang > >> wrote: > > Hi Danny, > > > > Thanks for driving the release. +1 for the 1.15.2 release. 
> > > > Best, > > Xingbo > > > > On Thu, Aug 11, 2022 at 20:06, Chesnay Schepler wrote: > > > >> I think that's a good idea; a user in the Flink slack was asking for > >> it > >> just yesterday. > >> > >> About FLINK-28861, let's wait a day or something because there > should > be > >> a PR very soon. > >> > >> It's perfectly fine for you to be the release manager; I can help > you > >> out with things that require PMC permissions. > >> > >> On 11/08/2022 12:39, Danny Cranmer wrote: > >>> Hello all, > >>> > >>> I would like to start discussing the release of Flink 1.15.2. Flink > >> 1.15.1 > >>> was released on 6th July [1] and we have resolved 31 issues since > then > >> [2]. > >>> During the Flink 1.15.1 vote [3] we identified one blocker [4] and > one > >>> critical issue [5] that should be fixed soon after. We also have 3 > > other > >>> non-test related critical fixes merged into 1.15.2. > >>> > >>> There is 1 blocker in 1.15.2 not yet resolved that we could > consider > >>> waiting for, "Cannot resume from savepoint when using > > fromChangelogStream > >>> in upsert mode" [6]. There have been discussions on this issue and > it > >> looks > >>> like Timo will be working on it, however there is no PR available > yet. > >>> I'd like to advocate this release and in doing so nominate myself > to > be > >> the > >>> release manager. I'm conscious that I have not performed this duty > >> before, > >>> so alternatively am happy to shadow this process if I can find a > > willing > >>> volunteer to cover it on this occasion. 
> >>> > >>> [1] https://flink.apache.org/news/2022/07/06/release-1.15.1.html > >>> [2] > >>> > >> > https://issues.apache.org/jira/browse/FLINK-28322?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.15.2%20AND%20status%20in%20(resolved%2C%20closed)%20order%20by%20priority%20desc > >>> [3] > https://lists.apache.org/thread/t27qc36g141kzk8d83jytkdshfpdj0xl > >>> [4] https://issues.apache.org/jira/browse/FLINK-28322 > >>> [5] https://issues.apache.org/jira/browse/FLINK-23528 > >>> [6] https://issues.apache.org/jira/browse/FLINK-28861 > >>> > >>> Thanks, > >>> Danny Cranmer > >>> > >> > >> > -- https://twitter.com/snntrable https://github.com/knaufk
[jira] [Created] (FLINK-29080) Migrate unit and integration tests from managed table to catalog-based tests
Jane Chan created FLINK-29080: - Summary: Migrate unit and integration tests from managed table to catalog-based tests Key: FLINK-29080 URL: https://issues.apache.org/jira/browse/FLINK-29080 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Jane Chan Fix For: table-store-0.3.0 To get rid of ManagedTable -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-29079) Add doc for show statement of Hive dialect
luoyuxia created FLINK-29079: Summary: Add doc for show statement of Hive dialect Key: FLINK-29079 URL: https://issues.apache.org/jira/browse/FLINK-29079 Project: Flink Issue Type: Sub-task Components: Connectors / Hive Affects Versions: 1.16.0 Reporter: luoyuxia Fix For: 1.16.0 Add a page for the show statement of the Hive dialect. Since our Hive dialect is compatible with Hive, we can take some content from the Hive docs.
[jira] [Created] (FLINK-29078) Add doc for drop statement of Hive dialect
luoyuxia created FLINK-29078: Summary: Add doc for drop statement of Hive dialect Key: FLINK-29078 URL: https://issues.apache.org/jira/browse/FLINK-29078 Project: Flink Issue Type: Sub-task Components: Connectors / Hive, Documentation Affects Versions: 1.16.0 Reporter: luoyuxia Fix For: 1.16.0 Add a page for the drop statement of the Hive dialect. Since our Hive dialect is compatible with Hive, we can take some content from the Hive docs.
[jira] [Created] (FLINK-29077) Add doc for create statement of Hive dialect
luoyuxia created FLINK-29077: Summary: Add doc for create statement of Hive dialect Key: FLINK-29077 URL: https://issues.apache.org/jira/browse/FLINK-29077 Project: Flink Issue Type: Sub-task Components: Connectors / Hive Reporter: luoyuxia Fix For: 1.16.0 Add a page for the create statement of the Hive dialect. Since our Hive dialect is compatible with Hive, we can take some content from the Hive docs.
[jira] [Created] (FLINK-29076) Add alter doc for Hive dialect
luoyuxia created FLINK-29076: Summary: Add alter doc for Hive dialect Key: FLINK-29076 URL: https://issues.apache.org/jira/browse/FLINK-29076 Project: Flink Issue Type: Sub-task Components: Documentation Affects Versions: 1.16.0 Reporter: luoyuxia Fix For: 1.16.0 Add a page for the alter statement of the Hive dialect. Since our Hive dialect is compatible with Hive, we can take it from the [Hive docs|https://cwiki.apache.org/confluence/display/hive/languagemanual+ddl#LanguageManualDDL].
[jira] [Created] (FLINK-29075) RescaleBucketITCase#testSuspendAndRecoverAfterRescaleOverwrite is not stable
Jane Chan created FLINK-29075: - Summary: RescaleBucketITCase#testSuspendAndRecoverAfterRescaleOverwrite is not stable Key: FLINK-29075 URL: https://issues.apache.org/jira/browse/FLINK-29075 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.2.0 Reporter: Jane Chan
[jira] [Created] (FLINK-29074) use 'add jar' in sql client throws "Could not find any jdbc dialect factories that implement"
xuyang created FLINK-29074: -- Summary: use 'add jar' in sql client throws "Could not find any jdbc dialect factories that implement" Key: FLINK-29074 URL: https://issues.apache.org/jira/browse/FLINK-29074 Project: Flink Issue Type: Bug Components: Table SQL / Planner Affects Versions: 1.16.0 Reporter: xuyang The following steps reproduce this bug: 1. create a source table 't1' in sql-client using jdbc (mysql) 2. add a jar with the jdbc connector 3. select * from 't1' Then an exception is thrown: java.lang.IllegalStateException: Could not find any jdbc dialect factories that implement 'org.apache.flink.connector.jdbc.dialect.JdbcDialectFactory' in the classpath.