[jira] [Updated] (FLINK-36374) Bundle forst statebackend in flink-dist and provide shortcut to enable
[ https://issues.apache.org/jira/browse/FLINK-36374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36374:
Labels: pull-request-available (was: )

Key: FLINK-36374
URL: https://issues.apache.org/jira/browse/FLINK-36374
Project: Flink
Issue Type: Sub-task
Reporter: Zakelly Lan
Assignee: Zakelly Lan
Priority: Major
Labels: pull-request-available

Currently, the ForSt state backend is built under flink-statebackend-forst but is not included in the flink-dist jar. It would be better to provide the same distribution mechanism as RocksDB.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
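The "shortcut to enable" could plausibly mirror how the bundled RocksDB backend is switched on through configuration. A hypothetical flink-conf.yaml fragment — the ticket does not specify the key, so the forst value and the checkpoint path are assumptions for illustration only:

```yaml
# Hypothetical: if ForSt ships in flink-dist the way RocksDB does, enabling it
# might look like enabling RocksDB today ("forst" value is an assumption).
state.backend.type: forst          # RocksDB equivalent: state.backend.type: rocksdb
state.checkpoints.dir: s3://bucket/checkpoints   # illustrative path only
```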
[jira] [Updated] (FLINK-36369) Move deprecated user-visible classes in table modules to the legacy package to make it easier to delete them later
[ https://issues.apache.org/jira/browse/FLINK-36369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36369:
Labels: pull-request-available (was: )

Key: FLINK-36369
URL: https://issues.apache.org/jira/browse/FLINK-36369
Project: Flink
Issue Type: Technical Debt
Components: Table SQL / API
Reporter: xuyang
Priority: Major
Labels: pull-request-available
Fix For: 2.0-preview
[jira] [Updated] (FLINK-36344) Introduce lastCompletedCheckpointTimestamp metrics
[ https://issues.apache.org/jira/browse/FLINK-36344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36344:
Labels: pull-request-available (was: )

Key: FLINK-36344
URL: https://issues.apache.org/jira/browse/FLINK-36344
Project: Flink
Issue Type: Improvement
Components: Runtime / Checkpointing, Runtime / Metrics
Reporter: Yun Tang
Assignee: Baozhu Zhao
Priority: Major
Labels: pull-request-available
Fix For: 2.0.0, 1.20.1

Currently, the existing metrics cannot tell us how long it has been since a checkpoint last completed. We should introduce lastCompletedCheckpointTimestamp so that users can create alerts on how long no new checkpoint has completed.
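As a hedged illustration of the proposal, a self-contained sketch of such a gauge in plain Java — all names are hypothetical, and Flink's real CheckpointStatsTracker and MetricGroup APIs differ:

```java
import java.util.function.Supplier;

// Hypothetical sketch, not Flink's actual metrics API. The gauge exposes the
// wall-clock time at which the last checkpoint completed, so an alerting rule
// can compute "now - lastCompletedCheckpointTimestamp" and fire when it grows
// too large.
class CheckpointStatsSketch {
    private volatile long lastCompletedCheckpointTimestamp = -1L; // -1: none yet

    void onCheckpointCompleted(long completionTimeMillis) {
        lastCompletedCheckpointTimestamp = completionTimeMillis;
    }

    // Gauges are read lazily on each metrics scrape.
    Supplier<Long> lastCompletedCheckpointTimestampGauge() {
        return () -> lastCompletedCheckpointTimestamp;
    }
}

public class CheckpointMetricDemo {
    public static void main(String[] args) {
        CheckpointStatsSketch stats = new CheckpointStatsSketch();
        stats.onCheckpointCompleted(1_700_000_000_000L);
        long now = 1_700_000_060_000L; // pretend "now" is 60 s later
        System.out.println(now - stats.lastCompletedCheckpointTimestampGauge().get()); // 60000
    }
}
```

An alert on "time since last completed checkpoint exceeds N minutes" then becomes a simple threshold on that difference.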
[jira] [Updated] (FLINK-36307) Remove deprecated PyFlink config options
[ https://issues.apache.org/jira/browse/FLINK-36307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36307:
Labels: pull-request-available (was: )

Key: FLINK-36307
URL: https://issues.apache.org/jira/browse/FLINK-36307
Project: Flink
Issue Type: Sub-task
Components: API / Python
Reporter: Xuannan Su
Assignee: Dian Fu
Priority: Major
Labels: pull-request-available
Fix For: 2.0-preview
[jira] [Updated] (FLINK-36325) Implement basic restore from checkpoint for ForStStateBackend
[ https://issues.apache.org/jira/browse/FLINK-36325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36325:
Labels: pull-request-available (was: )

Key: FLINK-36325
URL: https://issues.apache.org/jira/browse/FLINK-36325
Project: Flink
Issue Type: Sub-task
Components: Runtime / State Backends
Reporter: Feifan Wang
Priority: Major
Labels: pull-request-available

As the title says: implement basic restore from a checkpoint for ForStStateBackend; rescaling will be implemented later.
[jira] [Updated] (FLINK-36366) Remove deprecate API in flink-core exclude connector and state part
[ https://issues.apache.org/jira/browse/FLINK-36366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36366:
Labels: pull-request-available (was: )

Key: FLINK-36366
URL: https://issues.apache.org/jira/browse/FLINK-36366
Project: Flink
Issue Type: Sub-task
Components: API / Core
Affects Versions: 2.0-preview
Reporter: Weijie Guo
Assignee: Weijie Guo
Priority: Major
Labels: pull-request-available
[jira] [Updated] (FLINK-36364) Do not reuse serialized key in Forst map state and/or other namespaces
[ https://issues.apache.org/jira/browse/FLINK-36364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36364:
Labels: pull-request-available (was: )

Key: FLINK-36364
URL: https://issues.apache.org/jira/browse/FLINK-36364
Project: Flink
Issue Type: Sub-task
Components: Runtime / State Backends
Reporter: Zakelly Lan
Assignee: Zakelly Lan
Priority: Major
Labels: pull-request-available
[jira] [Updated] (FLINK-36355) Remove deprecate API in flink-runtime exclude connector and state part
[ https://issues.apache.org/jira/browse/FLINK-36355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36355:
Labels: pull-request-available (was: )

Key: FLINK-36355
URL: https://issues.apache.org/jira/browse/FLINK-36355
Project: Flink
Issue Type: Sub-task
Components: API / DataStream, Runtime / Coordination, Runtime / REST, Runtime / Web Frontend
Affects Versions: 2.0-preview
Reporter: Weijie Guo
Assignee: Yunfeng Zhou
Priority: Blocker
Labels: pull-request-available

This ticket is dedicated to cleaning up other parts of the API besides connector and state.
[jira] [Updated] (FLINK-36361) Do not use StringBuilder for EquivalentExprShuttle
[ https://issues.apache.org/jira/browse/FLINK-36361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36361:
Labels: pull-request-available (was: )

Key: FLINK-36361
URL: https://issues.apache.org/jira/browse/FLINK-36361
Project: Flink
Issue Type: Bug
Components: Table SQL / Planner
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin
Priority: Major
Labels: pull-request-available

Currently there is {{EquivalentExprShuttle}}, which keeps a map from a node's {{toString}} to the {{RelNode}}. In fact there is no need for the {{toString}}, which is recomputed on every use with a {{StringBuilder}} under the hood. Moreover, we hit one odd case where this consumed significant memory.
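The direction of the fix can be sketched without Calcite: key the equivalence map on the expression's own equals/hashCode instead of on a string rebuilt with StringBuilder on every lookup. Expr below is a hypothetical stand-in for the planner's node type; none of these names come from the Flink codebase:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Stand-in for an expression node; hypothetical, not Calcite's actual type.
final class Expr {
    final String op;
    final int operand;

    Expr(String op, int operand) {
        this.op = op;
        this.operand = operand;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Expr)) {
            return false;
        }
        Expr e = (Expr) o;
        return op.equals(e.op) && operand == e.operand;
    }

    @Override
    public int hashCode() {
        return Objects.hash(op, operand);
    }
}

public class EquivalentExprDedup {
    public static void main(String[] args) {
        // Keyed by the node itself: no toString()/StringBuilder on lookups.
        Map<Expr, Expr> seen = new HashMap<>();
        Expr a = new Expr("PLUS", 1);
        Expr b = new Expr("PLUS", 1); // structurally equal, different instance
        seen.computeIfAbsent(a, k -> k);
        Expr deduped = seen.computeIfAbsent(b, k -> k); // returns the canonical a
        System.out.println(deduped == a); // true
    }
}
```

This avoids both the repeated string construction and the memory held by the string keys.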
[jira] [Updated] (FLINK-36311) Remove deprecated flink-formats config options
[ https://issues.apache.org/jira/browse/FLINK-36311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36311:
Labels: pull-request-available (was: )

Key: FLINK-36311
URL: https://issues.apache.org/jira/browse/FLINK-36311
Project: Flink
Issue Type: Sub-task
Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Xuannan Su
Assignee: Dian Fu
Priority: Major
Labels: pull-request-available
Fix For: 2.0-preview
[jira] [Updated] (FLINK-36360) Prepare release process and Scripts for the preview release
[ https://issues.apache.org/jira/browse/FLINK-36360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36360:
Labels: pull-request-available (was: )

Key: FLINK-36360
URL: https://issues.apache.org/jira/browse/FLINK-36360
Project: Flink
Issue Type: New Feature
Components: Release System
Reporter: Xintong Song
Assignee: Xintong Song
Priority: Blocker
Labels: pull-request-available
Fix For: 2.0-preview

Flink Repo
|| ||Branch||Version||Tag (if any)||
|Regular|master|2.0-SNAPSHOT| |
| |release-1.20|1.20-SNAPSHOT| |
| |release-1.20-rc1|1.20.0|release-1.20.0|
|Preview|master|2.0-SNAPSHOT| |
| |2.0-preview1-rc1|2.0-preview1|release-2.0-preview1|

Docs
|| ||Doc Version||Pointing Branch||Notes||
|Regular|1.20.X|release-1.20| |
|Preview|2.0-previewX|2.0-preview1-rc1 (branch of the most recent preview & rc)|Should be removed once 2.0.0 is out|

Docker
||Heading 1||Version||Branch||Notes||
|Regular|1.20.X|dev-1.20| |
|Preview|2.0-previewX|dev-2.0|2.0.x should use the same branch|
[jira] [Updated] (FLINK-30614) Improve resolving schema compatibility -- Milestone two
[ https://issues.apache.org/jira/browse/FLINK-30614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-30614:
Labels: pull-request-available (was: )

Key: FLINK-30614
URL: https://issues.apache.org/jira/browse/FLINK-30614
Project: Flink
Issue Type: Sub-task
Components: API / Type Serialization System
Reporter: Hangxiang Yu
Priority: Major
Labels: pull-request-available

In milestone two, we should:
# Remove TypeSerializerSnapshot#resolveSchemaCompatibility(TypeSerializer newSerializer) and its related implementations.
# Make all places that used TypeSerializerSnapshot#resolveSchemaCompatibility(TypeSerializer newSerializer) to check compatibility call TypeSerializer#resolveSchemaCompatibility(TypeSerializerSnapshot oldSerializerSnapshot) instead.
# Remove the default implementation of the new method.

This will be done after several stable versions. See FLIP-263 for more details.
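The call-direction inversion in step 2 can be sketched with stand-in types. The real Flink interfaces are generic and return a TypeSerializerSchemaCompatibility result; everything below is simplified and hypothetical:

```java
// Simplified stand-ins for the migration described above; not Flink's real
// generic interfaces.
interface LegacySnapshot {
    // Old direction (to be removed): the *old* snapshot inspects the new serializer.
    String resolveSchemaCompatibility(NewSerializer newSerializer);
}

@FunctionalInterface
interface NewSerializer {
    // New direction: the *new* serializer inspects the old snapshot.
    String resolveSchemaCompatibility(Object oldSerializerSnapshot);
}

public class CompatibilityMigrationSketch {
    public static void main(String[] args) {
        NewSerializer newSerializer = oldSnapshot -> "COMPATIBLE_AS_IS";
        // Call sites move from oldSnapshot.resolveSchemaCompatibility(newSerializer)
        // to the form below:
        System.out.println(newSerializer.resolveSchemaCompatibility(new Object())); // COMPATIBLE_AS_IS
    }
}
```

The inversion lets the new serializer, which knows its own format, decide whether state written by the old serializer can still be read.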
[jira] [Updated] (FLINK-34082) Remove deprecated methods of Configuration in 2.0
[ https://issues.apache.org/jira/browse/FLINK-34082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-34082:
Labels: pull-request-available (was: )

Key: FLINK-34082
URL: https://issues.apache.org/jira/browse/FLINK-34082
Project: Flink
Issue Type: Sub-task
Components: Runtime / Configuration
Reporter: Rui Fan
Assignee: Rui Fan
Priority: Major
Labels: pull-request-available
Fix For: 2.0.0
[jira] [Updated] (FLINK-36358) to_timestamp result is not correct when the string precision is long than date format
[ https://issues.apache.org/jira/browse/FLINK-36358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36358:
Labels: pull-request-available (was: )

Key: FLINK-36358
URL: https://issues.apache.org/jira/browse/FLINK-36358
Project: Flink
Issue Type: Bug
Components: Table SQL / Planner
Affects Versions: 1.17.0, 1.18.0, 1.19.0
Reporter: Jacky Lau
Priority: Major
Labels: pull-request-available
Fix For: 2.0-preview

{code}
tEnv.executeSql("select to_timestamp('2017-09-15 00:00:00.12345', 'yyyy-MM-dd HH:mm:ss.SSS')").print()
// 2017-09-15 00:00:00.123 -- correct

tEnv.executeSql("select cast(to_timestamp('2017-09-15 00:00:00.12345', 'yyyy-MM-dd HH:mm:ss.SSS') as string)").print()
// 2017-09-15 00:00:00.12345 -- not correct
{code}
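The expected behaviour — fractional digits beyond the format truncated, and the cast-to-string reflecting the truncated value — can be sketched with plain java.time, independent of Flink's parser. The helper name and the manual truncation are illustrative only:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of the expected semantics using plain java.time (not Flink's code):
// an input with 5 fractional digits parsed against a .SSS format should be
// truncated to millisecond precision, and rendering it back as a string
// should show the truncated value.
public class ToTimestampPrecisionSketch {
    static String parseAndRender(String input) {
        int dot = input.indexOf('.');
        // Keep at most 3 fractional digits to match the SSS pattern.
        String truncated = input.substring(0, Math.min(dot + 4, input.length()));
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
        LocalDateTime ts = LocalDateTime.parse(truncated, fmt);
        return ts.format(fmt);
    }

    public static void main(String[] args) {
        System.out.println(parseAndRender("2017-09-15 00:00:00.12345")); // 2017-09-15 00:00:00.123
    }
}
```

The bug report amounts to the internal timestamp and its string rendering disagreeing about which of these two values was stored.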
[jira] [Updated] (FLINK-36308) Remove deprecated CEP config options
[ https://issues.apache.org/jira/browse/FLINK-36308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36308:
Labels: pull-request-available (was: )

Key: FLINK-36308
URL: https://issues.apache.org/jira/browse/FLINK-36308
Project: Flink
Issue Type: Sub-task
Components: Library / CEP
Reporter: Xuannan Su
Assignee: Dian Fu
Priority: Major
Labels: pull-request-available
Fix For: 2.0-preview
[jira] [Updated] (FLINK-36357) SDK retry for KDS connector
[ https://issues.apache.org/jira/browse/FLINK-36357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36357:
Labels: pull-request-available (was: )

Key: FLINK-36357
URL: https://issues.apache.org/jira/browse/FLINK-36357
Project: Flink
Issue Type: Sub-task
Reporter: Hong Liang Teoh
Priority: Major
Labels: pull-request-available
[jira] [Updated] (FLINK-36350) IllegalAccessError detected in JDK17+ runs
[ https://issues.apache.org/jira/browse/FLINK-36350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36350:
Labels: pull-request-available test-stability (was: test-stability)

Key: FLINK-36350
URL: https://issues.apache.org/jira/browse/FLINK-36350
Project: Flink
Issue Type: Bug
Components: Tests
Affects Versions: 2.0-preview
Reporter: Matthias Pohl
Priority: Blocker
Labels: pull-request-available, test-stability

UnalignedCheckpointRescaleITCase and GroupReduceITCase are affected in the JDK17 and JDK21 test profiles.

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62359&view=logs&j=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3&t=712ade8c-ca16-5b76-3acd-14df33bc1cb1
[jira] [Updated] (FLINK-36349) ClassNotFoundException due to org.apache.flink.runtime.types.FlinkScalaKryoInstantiator missing
[ https://issues.apache.org/jira/browse/FLINK-36349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36349:
Labels: pull-request-available test-stability (was: test-stability)

Key: FLINK-36349
URL: https://issues.apache.org/jira/browse/FLINK-36349
Project: Flink
Issue Type: Bug
Components: API / Type Serialization System
Affects Versions: 2.0-preview
Reporter: Matthias Pohl
Priority: Blocker
Labels: pull-request-available, test-stability

This is most likely caused by FLINK-29741, which was recently merged.

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62359&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=17558
{code}
Sep 23 01:58:51 01:58:50,533 12326 [AsyncOperations-thread-1] INFO  org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer [] - Kryo serializer scala extensions are not available.
Sep 23 01:58:51 java.lang.ClassNotFoundException: org.apache.flink.runtime.types.FlinkScalaKryoInstantiator
Sep 23 01:58:51 	at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_292]
Sep 23 01:58:51 	at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_292]
Sep 23 01:58:51 	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) ~[?:1.8.0_292]
Sep 23 01:58:51 	at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_292]
Sep 23 01:58:51 	at java.lang.Class.forName0(Native Method) ~[?:1.8.0_292]
Sep 23 01:58:51 	at java.lang.Class.forName(Class.java:264) ~[?:1.8.0_292]
[...]
{code}
It causes ClosureCleanerITCase to fail in the AdaptiveScheduler test profile.
[jira] [Updated] (FLINK-36299) AdaptiveSchedulerTest.testStatusMetrics times out
[ https://issues.apache.org/jira/browse/FLINK-36299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36299:
Labels: pull-request-available test-stability (was: test-stability)

Key: FLINK-36299
URL: https://issues.apache.org/jira/browse/FLINK-36299
Project: Flink
Issue Type: Sub-task
Components: Runtime / Coordination
Affects Versions: 2.0-preview
Reporter: Matthias Pohl
Assignee: Matthias Pohl
Priority: Critical
Labels: pull-request-available, test-stability

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62146&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=be5a4b15-4b23-56b1-7582-795f58a645a2&l=9849
{code}
Sep 15 02:28:22 "ForkJoinPool-495-worker-25" #9352 daemon prio=5 os_prio=0 tid=0x7fcdde409000 nid=0x77f4 waiting on condition [0x7fcd5c52c000]
Sep 15 02:28:22    java.lang.Thread.State: WAITING (parking)
Sep 15 02:28:22 	at sun.misc.Unsafe.park(Native Method)
Sep 15 02:28:22 	- parking to wait for <0xf8d7d0b8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
Sep 15 02:28:22 	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
Sep 15 02:28:22 	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
Sep 15 02:28:22 	at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
Sep 15 02:28:22 	at org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerTest$SubmissionBufferingTaskManagerGateway.waitForSubmissions(AdaptiveSchedulerTest.java:2593)
Sep 15 02:28:22 	at org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerTest.testStatusMetrics(AdaptiveSchedulerTest.java:732)
{code}
[jira] [Updated] (FLINK-36315) The flink-cdc-base module supports source metric statistics
[ https://issues.apache.org/jira/browse/FLINK-36315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36315:
Labels: pull-request-available (was: )

Key: FLINK-36315
URL: https://issues.apache.org/jira/browse/FLINK-36315
Project: Flink
Issue Type: Improvement
Components: Flink CDC
Reporter: liuxiaodong
Assignee: liuxiaodong
Priority: Major
Labels: pull-request-available

The MySQL source already supports embedding observability metrics, but this feature cannot be reused by other source types. Therefore, we hope to port it to the flink-cdc-base module for easy reuse by other source types.
[jira] [Updated] (FLINK-3992) Remove Key interface
[ https://issues.apache.org/jira/browse/FLINK-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-3992:
Labels: pull-request-available (was: )

Key: FLINK-3992
URL: https://issues.apache.org/jira/browse/FLINK-3992
Project: Flink
Issue Type: Sub-task
Components: API / DataSet
Affects Versions: 1.0.0
Reporter: Chesnay Schepler
Priority: Major
Labels: pull-request-available
Fix For: 2.0-preview
[jira] [Updated] (FLINK-36295) AdaptiveSchedulerClusterITCase. testCheckpointStatsPersistedAcrossRescale failed with
[ https://issues.apache.org/jira/browse/FLINK-36295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36295:
Labels: pull-request-available test-stability (was: test-stability)

Key: FLINK-36295
URL: https://issues.apache.org/jira/browse/FLINK-36295
Project: Flink
Issue Type: Sub-task
Components: Runtime / Coordination
Affects Versions: 2.0-preview
Reporter: Matthias Pohl
Assignee: Zdenek Tison
Priority: Critical
Labels: pull-request-available, test-stability
Attachments: FLINK-36295.failure.62156.20240916.1.logs-cron_jdk17-test_cron_jdk17_core-1726454552.log, FLINK-36295.failure.with-revert.debug.log

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62156&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=10234
{code}
Sep 16 03:06:30 03:06:30.168 [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.275 s <<< FAILURE! -- in org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase
Sep 16 03:06:30 03:06:30.168 [ERROR] org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale -- Time elapsed: 0.676 s <<< ERROR!
Sep 16 03:06:30 java.lang.IndexOutOfBoundsException: Index: -1
Sep 16 03:06:30 	at java.base/java.util.Collections$EmptyList.get(Collections.java:4586)
Sep 16 03:06:30 	at org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale(AdaptiveSchedulerClusterITCase.java:214)
Sep 16 03:06:30 	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
Sep 16 03:06:30 	at java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
Sep 16 03:06:30 	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
Sep 16 03:06:30 	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
Sep 16 03:06:30 	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
Sep 16 03:06:30 	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
Sep 16 03:06:30 	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
{code}
[jira] [Updated] (FLINK-36346) Remove deprecated API in flink-streaming-java module
[ https://issues.apache.org/jira/browse/FLINK-36346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36346:
Labels: pull-request-available (was: )

Key: FLINK-36346
URL: https://issues.apache.org/jira/browse/FLINK-36346
Project: Flink
Issue Type: Sub-task
Components: API / DataStream
Affects Versions: 2.0-preview
Reporter: Weijie Guo
Assignee: Weijie Guo
Priority: Blocker
Labels: pull-request-available
[jira] [Updated] (FLINK-36293) RocksDBWriteBatchWrapperTest.testAsyncCancellation
[ https://issues.apache.org/jira/browse/FLINK-36293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36293:
Labels: pull-request-available test-stability (was: test-stability)

Key: FLINK-36293
URL: https://issues.apache.org/jira/browse/FLINK-36293
Project: Flink
Issue Type: Bug
Components: Runtime / State Backends
Affects Versions: 2.0-preview
Reporter: Matthias Pohl
Priority: Blocker
Labels: pull-request-available, test-stability

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62156&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=11508
{code}
Sep 16 02:20:08 02:20:08.194 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.724 s <<< FAILURE! -- in org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapperTest
Sep 16 02:20:08 02:20:08.194 [ERROR] org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapperTest.testAsyncCancellation -- Time elapsed: 0.121 s <<< ERROR!
Sep 16 02:20:08 java.lang.Exception: Unexpected exception, expected but was
Sep 16 02:20:08 Caused by: java.lang.AssertionError:
Sep 16 02:20:08 Expecting actual:
Sep 16 02:20:08   2
Sep 16 02:20:08 to be less than:
Sep 16 02:20:08   2
Sep 16 02:20:08 	at org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapperTest.testAsyncCancellation(RocksDBWriteBatchWrapperTest.java:98)
Sep 16 02:20:08 	at java.lang.reflect.Method.invoke(Method.java:498)
Sep 16 02:20:08 	Suppressed: org.apache.flink.runtime.execution.CancelTaskException
Sep 16 02:20:08 		at org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapper.ensureNotCancelled(RocksDBWriteBatchWrapper.java:199)
Sep 16 02:20:08 		at org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapper.close(RocksDBWriteBatchWrapper.java:188)
Sep 16 02:20:08 		at org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapperTest.testAsyncCancellation(RocksDBWriteBatchWrapperTest.java:100)
Sep 16 02:20:08 		... 1 more
{code}
This test was added by FLINK-35580.
[jira] [Updated] (FLINK-36345) [feature][cdc-connector][oracle] Oracle cdc support partition table
[ https://issues.apache.org/jira/browse/FLINK-36345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36345:
Labels: pull-request-available (was: )

Key: FLINK-36345
URL: https://issues.apache.org/jira/browse/FLINK-36345
Project: Flink
Issue Type: Improvement
Reporter: zhuxuetong
Priority: Major
Labels: pull-request-available

[feature][cdc-connector][oracle] Oracle cdc support partition table
[jira] [Updated] (FLINK-36292) SplitFetcherManagerTest.testCloseCleansUpPreviouslyClosedFetcher times out
[ https://issues.apache.org/jira/browse/FLINK-36292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36292:
Labels: pull-request-available (was: )

Key: FLINK-36292
URL: https://issues.apache.org/jira/browse/FLINK-36292
Project: Flink
Issue Type: Bug
Components: Connectors / Common
Affects Versions: 2.0-preview
Reporter: Matthias Pohl
Priority: Blocker
Labels: pull-request-available

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62173&view=logs&j=b6f8a893-8f59-51d5-fe28-fb56a8b0932c&t=095f1730-efbe-5303-c4a3-b5e3696fc4e2&l=10914
{code}
Sep 17 01:15:16 01:15:16.318 [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 32.65 s <<< FAILURE! -- in org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManagerTest
Sep 17 01:15:16 01:15:16.318 [ERROR] org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManagerTest.testCloseCleansUpPreviouslyClosedFetcher -- Time elapsed: 30.02 s <<< ERROR!
Sep 17 01:15:16 org.junit.runners.model.TestTimedOutException: test timed out after 3 milliseconds
Sep 17 01:15:16 	at sun.misc.Unsafe.park(Native Method)
Sep 17 01:15:16 	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
Sep 17 01:15:16 	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
Sep 17 01:15:16 	at java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
Sep 17 01:15:16 	at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.close(SplitFetcherManager.java:344)
Sep 17 01:15:16 	at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManagerTest.testCloseCleansUpPreviouslyClosedFetcher(SplitFetcherManagerTest.java:97)
Sep 17 01:15:16 	at java.lang.reflect.Method.invoke(Method.java:498)
Sep 17 01:15:16 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
Sep 17 01:15:16 	at java.lang.Thread.run(Thread.java:748)
{code}
The test was added by FLINK-35924.
[jira] [Updated] (FLINK-36342) Rename misleading variable names
[ https://issues.apache.org/jira/browse/FLINK-36342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36342:
Labels: pull-request-available (was: )

Key: FLINK-36342
URL: https://issues.apache.org/jira/browse/FLINK-36342
Project: Flink
Issue Type: Improvement
Components: Runtime / Checkpointing
Reporter: Xu Hao
Priority: Minor
Labels: pull-request-available

The variable targetTotalBufferSize in BufferDebloatConfiguration and some other classes is incorrectly named. Based on the context and the origin of this variable, it is clearly a *Duration* read from configuration (the key is *taskmanager.network.memory.buffer-debloat.target* and the default value is 1s), so a more suitable name would be targetTotalTime.
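The rename's point can be sketched with a stand-in class (not Flink's actual BufferDebloatConfiguration; the constructor and default handling are illustrative):

```java
import java.time.Duration;

// Stand-in sketch: the value behind
// taskmanager.network.memory.buffer-debloat.target is a Duration with a 1s
// default, so the field is a time target, not a buffer size.
public class BufferDebloatConfigSketch {
    // Before: long targetTotalBufferSize;  // misleading: sounds like bytes
    private final Duration targetTotalTime; // after: clearly a time target

    public BufferDebloatConfigSketch(Duration configured) {
        this.targetTotalTime = configured != null ? configured : Duration.ofSeconds(1);
    }

    public Duration getTargetTotalTime() {
        return targetTotalTime;
    }

    public static void main(String[] args) {
        System.out.println(new BufferDebloatConfigSketch(null).getTargetTotalTime()); // PT1S
    }
}
```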
[jira] [Updated] (FLINK-36338) Properly handle KeyContext when using AsyncKeyedStateBackendAdaptor
[ https://issues.apache.org/jira/browse/FLINK-36338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36338:
Labels: pull-request-available (was: )

Key: FLINK-36338
URL: https://issues.apache.org/jira/browse/FLINK-36338
Project: Flink
Issue Type: Sub-task
Reporter: Zakelly Lan
Assignee: Zakelly Lan
Priority: Major
Labels: pull-request-available

After FLINK-36117, we ported the old state backend implementations to the new API using AsyncKeyedStateBackendAdaptor, but it cannot work because the KeyContext is not properly handled; this should be fixed.
[jira] [Updated] (FLINK-36335) Improving Method Reusability in StreamGraphGenerator with JobVertexBuildContext
[ https://issues.apache.org/jira/browse/FLINK-36335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36335:
Labels: pull-request-available (was: )

Key: FLINK-36335
URL: https://issues.apache.org/jira/browse/FLINK-36335
Project: Flink
Issue Type: Sub-task
Components: Runtime / Coordination
Reporter: Lei Yang
Priority: Major
Labels: pull-request-available

Before introducing the AdaptiveGraphGenerator component, we need to refactor StreamGraphGenerator by introducing a JobVertexBuildContext to make more methods reusable. The following tasks will be completed in this process:
1. Introduce the JobVertexBuildContext to store context information during the JobVertex build process.
2. Change methods in StreamGraphGenerator that may be reused to public static.
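The context-object refactoring in the two steps above can be sketched generically. The types and method below are stand-ins, not Flink's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the proposed JobVertexBuildContext: it carries the
// state that instance methods of StreamGraphGenerator previously read from
// fields, so the build steps can become reusable public static methods.
final class JobVertexBuildContextSketch {
    final List<String> createdVertices = new ArrayList<>();
}

public class GeneratorRefactorSketch {
    // Was an instance method depending on generator fields; with an explicit
    // context parameter it can be shared with an AdaptiveGraphGenerator later.
    public static void createJobVertex(String operatorName, JobVertexBuildContextSketch ctx) {
        ctx.createdVertices.add("vertex:" + operatorName);
    }

    public static void main(String[] args) {
        JobVertexBuildContextSketch ctx = new JobVertexBuildContextSketch();
        createJobVertex("map", ctx);
        createJobVertex("sink", ctx);
        System.out.println(ctx.createdVertices); // [vertex:map, vertex:sink]
    }
}
```

Passing the context explicitly is what makes the methods safe to mark public static: they no longer depend on any particular generator instance.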
[jira] [Updated] (FLINK-36321) Execute read/write state request in different executor
[ https://issues.apache.org/jira/browse/FLINK-36321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36321:
Labels: pull-request-available (was: )

Key: FLINK-36321
URL: https://issues.apache.org/jira/browse/FLINK-36321
Project: Flink
Issue Type: Sub-task
Reporter: Yanfei Lei
Priority: Major
Labels: pull-request-available
[jira] [Updated] (FLINK-36336) Remove deprecated dataset API
[ https://issues.apache.org/jira/browse/FLINK-36336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-36336:
Labels: pull-request-available (was: )

Key: FLINK-36336
URL: https://issues.apache.org/jira/browse/FLINK-36336
Project: Flink
Issue Type: New Feature
Reporter: xuhuang
Priority: Major
Labels: pull-request-available
[jira] [Updated] (FLINK-36327) Remove the dependencies of the flink-scala and flink-streaming-scala modules from the table module.
[ https://issues.apache.org/jira/browse/FLINK-36327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36327: --- Labels: pull-request-available (was: ) > Remove the dependencies of the flink-scala and flink-streaming-scala modules > from the table module. > --- > > Key: FLINK-36327 > URL: https://issues.apache.org/jira/browse/FLINK-36327 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / API >Reporter: xuyang >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36332) Allow the Operator http client to be customised
[ https://issues.apache.org/jira/browse/FLINK-36332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36332: --- Labels: pull-request-available (was: ) > Allow the Operator http client to be customised > --- > > Key: FLINK-36332 > URL: https://issues.apache.org/jira/browse/FLINK-36332 > Project: Flink > Issue Type: Improvement >Reporter: Sam Barker >Priority: Minor > Labels: pull-request-available > > We are looking to produce a build of the Flink Kubernetes operator however > for internal policy reasons we need to exclude the Kotlin dependencies. > Kotlin is a transitive dependency of OkHttp and now that > [FLINK-36031|https://issues.apache.org/jira/browse/FLINK-36031] has been > merged OkHttp is entirely optional (but a sensible default). The Fabric8 > project explicitly support supplying alternative http clients (see > [what-artifacts-should-my-project-depend-on|https://github.com/fabric8io/kubernetes-client/blob/main/doc/FAQ.md#what-artifacts-should-my-project-depend-on]) > and the common pattern as demonstrated by the > [java-operator-sdk|https://github.com/operator-framework/java-operator-sdk/blob/24494cb6342a5c75dff9a6962156ff488ad0c818/pom.xml#L44] > is to define a property with the name of the client implementation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36330) Session Window TVFs with named parameters don't support column expansion
[ https://issues.apache.org/jira/browse/FLINK-36330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36330: --- Labels: pull-request-available (was: ) > Session Window TVFs with named parameters don't support column expansion > > > Key: FLINK-36330 > URL: https://issues.apache.org/jira/browse/FLINK-36330 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / API, Table SQL / Planner >Reporter: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available > > The issue is very similar to FLINK-33169 > a query to reproduce > {code:sql} > SELECT t3_s, SUM(t3_i) AS agg > FROM > TABLE( > SESSION( > TABLE t3 PARTITION BY t3_s, DESCRIPTOR(t3_m_virtual), INTERVAL > '1' MINUTE)) > GROUP BY t3_s, window_start, window_end > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36329) DDB Streams connector not retrying some SDK exceptions
[ https://issues.apache.org/jira/browse/FLINK-36329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36329: --- Labels: pull-request-available (was: ) > DDB Streams connector not retrying some SDK exceptions > -- > > Key: FLINK-36329 > URL: https://issues.apache.org/jira/browse/FLINK-36329 > Project: Flink > Issue Type: Bug > Components: Connectors / DynamoDB >Reporter: Abhi Gupta >Priority: Major > Labels: pull-request-available > > We are not retrying on some SDK exceptions. Here's one example: > {code} > Caused by: software.amazon.awssdk.core.exception.SdkClientException: Unable > to execute HTTP request: The target server failed to respond > {code} > The SDK retry policy should be fixed so that such exceptions are retried. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35411) Optimize buffer triggering of async state requests
[ https://issues.apache.org/jira/browse/FLINK-35411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35411: --- Labels: pull-request-available (was: ) > Optimize buffer triggering of async state requests > -- > > Key: FLINK-35411 > URL: https://issues.apache.org/jira/browse/FLINK-35411 > Project: Flink > Issue Type: Sub-task > Components: Runtime / State Backends, Runtime / Task >Reporter: Zakelly Lan >Assignee: Zakelly Lan >Priority: Major > Labels: pull-request-available > > Currently during draining of async state requests, the task thread performs > {{Thread.sleep}} to avoid cpu overhead when polling mails. This can be > optimized by wait & notify. -- This message was sent by Atlassian Jira (v8.20.10#820010)
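The proposed optimization can be sketched as follows (illustrative names only, not Flink's actual mailbox classes): instead of the draining thread polling with {{Thread.sleep}}, it blocks on a monitor and is notified as soon as a new mail arrives.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical mailbox sketch: wait/notify removes both the CPU cost of
// busy polling and the latency of a fixed sleep interval.
class MailboxSketch {
    private final Queue<Runnable> mails = new ArrayDeque<>();
    private final Object lock = new Object();

    void put(Runnable mail) {
        synchronized (lock) {
            mails.add(mail);
            lock.notifyAll(); // wake the draining thread immediately
        }
    }

    // Blocks until a mail is available instead of sleep-polling.
    Runnable take() {
        synchronized (lock) {
            while (mails.isEmpty()) {
                try {
                    lock.wait(); // releases the lock; no CPU burned between mails
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return null; // caller observes the interruption
                }
            }
            return mails.poll();
        }
    }
}
```

The loop around {{wait()}} guards against spurious wakeups, which is the standard idiom for monitor-based handoff.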
[jira] [Updated] (FLINK-36328) Making the log for child not found as debug
[ https://issues.apache.org/jira/browse/FLINK-36328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36328: --- Labels: pull-request-available (was: ) > Making the log for child not found as debug > --- > > Key: FLINK-36328 > URL: https://issues.apache.org/jira/browse/FLINK-36328 > Project: Flink > Issue Type: Improvement > Components: Connectors / DynamoDB >Reporter: Abhi Gupta >Priority: Major > Labels: pull-request-available > > In the DDB Flink connector, we are getting this warn message quite a lot: > {code} > "splitId: {} is not present in parent-child relationship map. " > + "This indicates that there might be some data loss in the " + > "application or the child shard has not been discovered yet" > {code} > This happens when all splits have been read and the child shard has not been discovered yet, so the message should be logged at debug level rather than warn. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36285) Modify column type failed when downstream's column with default value
[ https://issues.apache.org/jira/browse/FLINK-36285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36285: --- Labels: pull-request-available (was: ) > Modify column type failed when downstream's column with default value > - > > Key: FLINK-36285 > URL: https://issues.apache.org/jira/browse/FLINK-36285 > Project: Flink > Issue Type: Bug > Components: Flink CDC >Affects Versions: cdc-3.2.0 >Reporter: linqigeng >Priority: Major > Labels: pull-request-available > Attachments: image-2024-09-15-17-55-03-641.png, > image-2024-09-15-17-55-27-038.png > > > When the downstream (such as Doris) has a column with a default value, changing that column's type in the source table causes an exception in the current version, because `AlterColumnTypeEvent` only carries the type mapping. > !image-2024-09-15-17-55-03-641.png! > !image-2024-09-15-17-55-27-038.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36326) Newly added table failed in mysql pipeline connector
[ https://issues.apache.org/jira/browse/FLINK-36326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36326: --- Labels: pull-request-available (was: ) > Newly added table failed in mysql pipeline connector > > > Key: FLINK-36326 > URL: https://issues.apache.org/jira/browse/FLINK-36326 > Project: Flink > Issue Type: Bug > Components: Flink CDC >Affects Versions: cdc-3.2.0 >Reporter: linqigeng >Priority: Major > Labels: pull-request-available > Attachments: image-2024-09-19-16-55-03-082.png > > > When mysql source added a newly table then restart flink cdc pipeline job > would cause this exception: > !image-2024-09-19-16-55-03-082.png! > pipeline def: > {code:java} > source: > type: mysql > name: MySQL Source > hostname: localhost > port: 3306 > username: root > password: root > tables: test_db.\.* > server-id: 4-40002 > jdbc.properties.tinyInt1isBit: false > jdbc.properties.zeroDateTimeBehavior: convertToNull > scan.newly-added-table.enabled: true > sink: > type: doris > fenodes: localhost:8030 > username: root > password: root > pipeline: > name: MySQL to Doris Pipeline > parallelism: 1{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-28770) CREATE TABLE AS SELECT supports explain
[ https://issues.apache.org/jira/browse/FLINK-28770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-28770: --- Labels: pull-request-available (was: ) > CREATE TABLE AS SELECT supports explain > --- > > Key: FLINK-28770 > URL: https://issues.apache.org/jira/browse/FLINK-28770 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: tartarus >Priority: Major > Labels: pull-request-available > > Unsupported operation: > org.apache.flink.table.operations.ddl.CreateTableASOperation -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36319) FAIL behavior on non-retriable write errors causes an infinite loop when restarting from checkpoint
[ https://issues.apache.org/jira/browse/FLINK-36319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36319: --- Labels: pull-request-available (was: ) > FAIL behavior on non-retriable write errors causes an infinite loop when > restarting from checkpoint > --- > > Key: FLINK-36319 > URL: https://issues.apache.org/jira/browse/FLINK-36319 > Project: Flink > Issue Type: Sub-task >Reporter: Lorenzo Nicora >Assignee: Lorenzo Nicora >Priority: Major > Labels: pull-request-available > > The {{FAIL}} (default) error handling behavior when a write request is > rejected as non-retriable ({{{}onPrometheusNonRetriableError{}}}), causes the > job to fail and restart. > Restarting from checkpoint causes some out-of-order (duplicate) writes, that > by default Prometheus rejects as non-retrable. > As a consequence, when {{onPrometheusNonRetriableError}} = {{FAIL}} any > restarts from checkpoint puts the job in an infinite loop. > Changes: > 1. default {{onPrometheusNonRetriableError}} should be > {{DISCARD_AND_CONTINUE}} > 2. {{onPrometheusNonRetriableError}} cannot be set to {{FAIL}} > 3. Amend docs > We can keep the rest of the implementation as-is for the moment, and just > prevent from setting {{FAIL}} for this behaviour, as we may expand handling > this error with a different behaviour -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36318) Fail to restore from 1.18 if LAG function is used
[ https://issues.apache.org/jira/browse/FLINK-36318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36318: --- Labels: pull-request-available (was: ) > Fail to restore from 1.18 if LAG function is used > - > > Key: FLINK-36318 > URL: https://issues.apache.org/jira/browse/FLINK-36318 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.19.0, 1.20.0, 1.19.1 >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Major > Labels: pull-request-available > Fix For: 1.19.2, 1.20.1 > > > One can not restore from a savepoint taken in 1.18 using Flink 1.19 if a > query uses e.g. {{LAG}} function on a {{MAP}} type. > The reason is {{LAG/LEAD/ARRAY_AGG}} and possibly other functions use {{RAW}} > type for accumulator. > In > https://github.com/apache/flink/blob/d4c9ef165874e665b9e0f70cfe80fc3d387ac58e/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/typeutils/RowDataSerializer.java#L318 > we deserialize/serialize types using java serialization which results in > serializers being written into the snapshot. > In 1.19 > https://github.com/apache/flink/blob/d4c9ef165874e665b9e0f70cfe80fc3d387ac58e/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/typeutils/MapDataSerializer.java#L54 > was modified resulting in a change in {{serialVersionUID}} > As a result we fail to restore savepoints with a serialVersionUID mismatch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
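The underlying mechanism can be illustrated with a hypothetical class (unrelated to Flink's actual serializers): without an explicit {{serialVersionUID}}, the JVM derives one from the class shape, so any code change can break Java-serialization compatibility; pinning it keeps old snapshots readable.

```java
import java.io.Serializable;

// Without an explicit serialVersionUID, the JVM computes one from the class
// structure; adding a field or method changes the derived id, and reading an
// old serialized form then fails with InvalidClassException.
class MapSerializerSketch implements Serializable {
    // Pinning the id keeps previously serialized instances readable after
    // structurally compatible changes to the class.
    private static final long serialVersionUID = 1L;

    private final int arity;

    MapSerializerSketch(int arity) {
        this.arity = arity;
    }

    int arity() {
        return arity;
    }
}
```

This is why a seemingly harmless edit to MapDataSerializer in 1.19 changed its implicit serialVersionUID and broke savepoint restore.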
[jira] [Updated] (FLINK-36015) Align rescale parameters
[ https://issues.apache.org/jira/browse/FLINK-36015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36015: --- Labels: pull-request-available (was: ) > Align rescale parameters > > > Key: FLINK-36015 > URL: https://issues.apache.org/jira/browse/FLINK-36015 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Configuration >Reporter: Zdenek Tison >Priority: Major > Labels: pull-request-available > > * Parameter [jobmanager.adaptive-scheduler.resource-wait-timeout|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#jobmanager-adaptive-scheduler-resource-wait-timeout] will be renamed to jobmanager.adaptive-scheduler.submission.resource-wait-timeout. > * Parameter [jobmanager.adaptive-scheduler.resource-stabilization-timeout|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#jobmanager-adaptive-scheduler-resource-wait-timeout] will be renamed to jobmanager.adaptive-scheduler.submission.resource-stabilization-timeout. > * Parameter [jobmanager.adaptive-scheduler.scaling-interval.min|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#jobmanager-adaptive-scheduler-scaling-interval-min] will be renamed to jobmanager.adaptive-scheduler.executing.cooldown-after-rescaling. > * Parameter [jobmanager.adaptive-scheduler.scaling-interval.max|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#jobmanager-adaptive-scheduler-scaling-interval-max] will be renamed to jobmanager.adaptive-scheduler.executing.resource-stabilization-timeout, with a default value of 60s. > * Parameter [jobmanager.adaptive-scheduler.min-parallelism-increase|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#jobmanager-adaptive-scheduler-min-parallelism-increase] will be removed without a direct replacement; it can be superseded by combining jobmanager.adaptive-scheduler.executing.cooldown-after-rescaling and jobmanager.adaptive-scheduler.executing.resource-stabilization-timeout. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36287) Sink with topologies should not participate in UC
[ https://issues.apache.org/jira/browse/FLINK-36287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36287: --- Labels: pull-request-available (was: ) > Sink with topologies should not participate in UC > - > > Key: FLINK-36287 > URL: https://issues.apache.org/jira/browse/FLINK-36287 > Project: Flink > Issue Type: Bug >Reporter: Arvid Heise >Assignee: Arvid Heise >Priority: Major > Labels: pull-request-available > > When the sink writer and committer are not chained, it's possible that committables become part of the channel state. However, it is then possible that they are not received before notifyCheckpointComplete. Further, the contract of notifyCheckpointComplete dictates that all side effects must be committed, or we fail on notifyCheckpointComplete. This contract is essential to final checkpoints. > We can fix this by disallowing channel state within sinks. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36250) Remove CheckpointStorage-related configuration getters/setters that return/set complex Java objects
[ https://issues.apache.org/jira/browse/FLINK-36250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36250: --- Labels: pull-request-available (was: ) > Remove CheckpointStorage-related configuration getters/setters that > return/set complex Java objects > --- > > Key: FLINK-36250 > URL: https://issues.apache.org/jira/browse/FLINK-36250 > Project: Flink > Issue Type: Sub-task > Components: API / Core >Reporter: Junrui Li >Assignee: Junrui Li >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > > FLINK-33581/FLIP-381: Deprecate configuration getters/setters that return or > set complex Java objects. > In Flink 2.0, we will remove these deprecated methods and fields. This change > will prevent users from configuring their jobs by passing complex Java > objects, encouraging them to use {{ConfigOption}} instead. > This JIRA will remove the public API associated with CheckpointStorage > getters/setters that return/set complex Java objects. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36251) Remove StateBackend-related configuration getters/setters that return/set complex Java objects
[ https://issues.apache.org/jira/browse/FLINK-36251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36251: --- Labels: pull-request-available (was: ) > Remove StateBackend-related configuration getters/setters that return/set > complex Java objects > -- > > Key: FLINK-36251 > URL: https://issues.apache.org/jira/browse/FLINK-36251 > Project: Flink > Issue Type: Sub-task > Components: API / Core >Reporter: Junrui Li >Assignee: Junrui Li >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > > FLINK-33581/FLIP-381: Deprecate configuration getters/setters that return or > set complex Java objects. > In Flink 2.0, we will remove these deprecated methods and fields. This change > will prevent users from configuring their jobs by passing complex Java > objects, encouraging them to use {{ConfigOption}} instead. > This JIRA will remove the public API associated with StateBackend > getters/setters that return/set complex Java objects. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35764) TimerGauge is incorrect when update is called during a measurement
[ https://issues.apache.org/jira/browse/FLINK-35764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35764: --- Labels: pull-request-available (was: ) > TimerGauge is incorrect when update is called during a measurement > -- > > Key: FLINK-35764 > URL: https://issues.apache.org/jira/browse/FLINK-35764 > Project: Flink > Issue Type: Bug > Components: Runtime / Metrics >Affects Versions: 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0 >Reporter: Liu Liu >Priority: Major > Labels: pull-request-available > > *Description*
> Currently in {{TimerGauge}}, the timer measures the time spent in a state (marked by {{markStart()}} and {{markEnd()}}). The current logic is as follows:
> - {{markStart()}} bumps {{currentMeasurementStartTS}} and {{currentUpdateTS}}, while {{update()}} bumps {{currentUpdateTS}}
> - {{currentCount}} stores the time spent in the state within the current update interval
> - When calling {{markEnd()}}, the time since {{currentMeasurementStartTS}} is added to {{currentCount}}
> - When calling {{update()}}, the time since {{currentUpdateTS}} is added to {{currentCount}}, and {{currentCount}} is used to update the gauge value
> The intent is that a state can span two update intervals (by calling {{markStart() -> update() -> markEnd()}}). However, the result is too large: the time between {{markStart()}} and {{update()}} is counted once in the first update interval by {{update()}} and again in the second update interval by {{markEnd()}}. The correct fix is to add only the time since {{currentUpdateTS}} to {{currentCount}} in {{markEnd()}}.
> *Test case*
> {code:java}
> @Test
> void testUpdateBeforeMarkingEnd() {
>     ManualClock clock = new ManualClock(42_000_000);
>     // this timer gauge measures 2 update intervals
>     TimerGauge gauge = new TimerGauge(clock, 2 * View.UPDATE_INTERVAL_SECONDS);
>     long UPDATE_INTERVAL_MILLIS = TimeUnit.SECONDS.toMillis(View.UPDATE_INTERVAL_SECONDS); // 5000 ms
>     long SLEEP = 10; // 10 ms
>     // interval 1
>     clock.advanceTime(UPDATE_INTERVAL_MILLIS - SLEEP, TimeUnit.MILLISECONDS); // *(1)
>     gauge.markStart();
>     clock.advanceTime(SLEEP, TimeUnit.MILLISECONDS);
>     gauge.update();
>     // interval 2
>     clock.advanceTime(SLEEP, TimeUnit.MILLISECONDS);
>     gauge.markEnd();
>     clock.advanceTime(UPDATE_INTERVAL_MILLIS - SLEEP, TimeUnit.MILLISECONDS); // *(1)
>     gauge.update();
>     // expected: 2, actual: 3
>     assertThat(gauge.getValue()).isEqualTo(SLEEP / View.UPDATE_INTERVAL_SECONDS);
> }
> {code}
> The current test cases in {{TimerGaugeTest}} do not catch this bug, because the assert condition is (conveniently) {{isGreaterThanOrEqualTo}}, and the code does not simulate the time passed outside the state ({{*(1)}} in the code above).
> *Proposed changes*
> * In {{TimerGauge}}, only add the time since {{currentUpdateTS}} to {{currentCount}} in {{markEnd()}}
> * Add the test case above to {{TimerGaugeTest}}, and adjust other test cases -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34125) Flink 2.0: Remove deprecated serialization config methods and options
[ https://issues.apache.org/jira/browse/FLINK-34125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34125: --- Labels: 2.0-related pull-request-available (was: 2.0-related) > Flink 2.0: Remove deprecated serialization config methods and options > - > > Key: FLINK-34125 > URL: https://issues.apache.org/jira/browse/FLINK-34125 > Project: Flink > Issue Type: Sub-task > Components: API / Type Serialization System, Runtime / Configuration >Reporter: Zhanghao Chen >Priority: Major > Labels: 2.0-related, pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36314) Support state V1 interface
[ https://issues.apache.org/jira/browse/FLINK-36314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36314: --- Labels: pull-request-available (was: ) > Support state V1 interface > -- > > Key: FLINK-36314 > URL: https://issues.apache.org/jira/browse/FLINK-36314 > Project: Flink > Issue Type: Sub-task >Reporter: Yanfei Lei >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36310) Remove per-job and run-application deprecated APIs
[ https://issues.apache.org/jira/browse/FLINK-36310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36310: --- Labels: pull-request-available (was: ) > Remove per-job and run-application deprecated APIs > -- > > Key: FLINK-36310 > URL: https://issues.apache.org/jira/browse/FLINK-36310 > Project: Flink > Issue Type: Improvement >Reporter: xuhuang >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36309) Flink CDC Paimon Sink commit failed should log stackTrace
[ https://issues.apache.org/jira/browse/FLINK-36309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36309: --- Labels: pull-request-available (was: ) > Flink CDC Paimon Sink commit failed should log stackTrace > -- > > Key: FLINK-36309 > URL: https://issues.apache.org/jira/browse/FLINK-36309 > Project: Flink > Issue Type: Bug > Components: Flink CDC > Environment: Flink 1.18.0 > CDC 3.2.0 >Reporter: JunboWang >Priority: Minor > Labels: pull-request-available > > When a Flink CDC Paimon Sink commit fails, the full stack trace should be logged, not just a warning like the following: > {code:java} > 2024-09-18 11:15:07,984 WARN > org.apache.flink.cdc.connectors.paimon.sink.v2.PaimonCommitter [] - Commit > failed for 4 with 2 committable > 2024-09-18 11:15:08,093 WARN > org.apache.flink.cdc.connectors.paimon.sink.v2.PaimonCommitter [] - Commit > failed for 5 with 2 committable > 2024-09-18 11:15:08,410 WARN > org.apache.flink.cdc.connectors.paimon.sink.v2.PaimonCommitter [] - Commit > failed for 1 with 2 committable{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36257) Remove easy-to-drop deprecated APIs
[ https://issues.apache.org/jira/browse/FLINK-36257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36257: --- Labels: pull-request-available (was: ) > Remove easy-to-drop deprecated APIs > --- > > Key: FLINK-36257 > URL: https://issues.apache.org/jira/browse/FLINK-36257 > Project: Flink > Issue Type: Sub-task >Affects Versions: 2.0-preview >Reporter: Weijie Guo >Priority: Blocker > Labels: pull-request-available > > Some deprecated APIs are no longer called and can simply be removed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34511) Flink 2.0: Remove legacy State&Checkpointing&Recovery options
[ https://issues.apache.org/jira/browse/FLINK-34511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34511: --- Labels: pull-request-available (was: ) > Flink 2.0: Remove legacy State&Checkpointing&Recovery options > - > > Key: FLINK-34511 > URL: https://issues.apache.org/jira/browse/FLINK-34511 > Project: Flink > Issue Type: Sub-task >Reporter: Zakelly Lan >Assignee: Zakelly Lan >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36296) Add incremental shard discovery for DDB Streams source
[ https://issues.apache.org/jira/browse/FLINK-36296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36296: --- Labels: pull-request-available (was: ) > Add incremental shard discovery for DDB Streams source > -- > > Key: FLINK-36296 > URL: https://issues.apache.org/jira/browse/FLINK-36296 > Project: Flink > Issue Type: Bug > Components: Connectors / DynamoDB >Reporter: Abhi Gupta >Priority: Major > Labels: pull-request-available > > The DDB Streams source does a full shard discovery every minute. On large streams, this discovery can take more than 20 minutes to list all shards. We can optimize this by performing incremental shard discovery periodically, in addition to the full periodic discovery. This makes child shards available faster and reduces millisBehindLatest. -- This message was sent by Atlassian Jira (v8.20.10#820010)
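The incremental-discovery idea can be sketched as follows; the {{listShardsAfter}} client call and all names here are hypothetical placeholders, not the actual AWS SDK or connector API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: full discovery lists every shard on each run; incremental discovery
// only asks for shards created after the newest shard already seen.
class ShardDiscoverySketch {
    // Hypothetical stand-in for a DescribeStream/ListShards-style call that
    // supports an exclusive start position.
    interface StreamClient {
        List<String> listShardsAfter(String exclusiveStartShardId);
    }

    private final StreamClient client;
    private String lastSeenShardId; // null until the first discovery run

    ShardDiscoverySketch(StreamClient client) {
        this.client = client;
    }

    // Returns only newly created shards and advances the bookmark, so each
    // periodic run scans far fewer shards than a full listing would.
    List<String> discoverNewShards() {
        List<String> fresh = client.listShardsAfter(lastSeenShardId);
        if (!fresh.isEmpty()) {
            lastSeenShardId = fresh.get(fresh.size() - 1);
        }
        return new ArrayList<>(fresh);
    }
}
```

An occasional full listing would still be needed as a safety net, which matches the ticket's plan of running both discovery modes.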
[jira] [Updated] (FLINK-36279) RescaleOnCheckpointITCase.testRescaleOnCheckpoint fails
[ https://issues.apache.org/jira/browse/FLINK-36279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36279: --- Labels: pull-request-available test-stability (was: test-stability) > RescaleOnCheckpointITCase.testRescaleOnCheckpoint fails > --- > > Key: FLINK-36279 > URL: https://issues.apache.org/jira/browse/FLINK-36279 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 2.0-preview >Reporter: Matthias Pohl >Assignee: Matthias Pohl >Priority: Major > Labels: pull-request-available, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=62105&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=11287 > {code} > Sep 13 17:16:55 "ForkJoinPool-1-worker-25" #28 daemon prio=5 os_prio=0 > tid=0x7f973f0c2800 nid=0x31a1 waiting on condition [0x7f97089fc000] > Sep 13 17:16:55java.lang.Thread.State: TIMED_WAITING (sleeping) > Sep 13 17:16:55 at java.lang.Thread.sleep(Native Method) > Sep 13 17:16:55 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:152) > Sep 13 17:16:55 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145) > Sep 13 17:16:55 at > org.apache.flink.test.scheduling.UpdateJobResourceRequirementsITCase.waitForRunningTasks(UpdateJobResourceRequirementsITCase.java:219) > Sep 13 17:16:55 at > org.apache.flink.test.scheduling.RescaleOnCheckpointITCase.testRescaleOnCheckpoint(RescaleOnCheckpointITCase.java:139) > Sep 13 17:16:55 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Sep 13 17:16:55 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [...] > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36278) Fix Kafka connector logs getting too big
[ https://issues.apache.org/jira/browse/FLINK-36278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36278: --- Labels: pull-request-available (was: ) > Fix Kafka connector logs getting too big > > > Key: FLINK-36278 > URL: https://issues.apache.org/jira/browse/FLINK-36278 > Project: Flink > Issue Type: Technical Debt > Components: Connectors / Kafka >Affects Versions: kafka-3.3.0 >Reporter: Arvid Heise >Assignee: Arvid Heise >Priority: Major > Labels: pull-request-available > > Currently, on failures, it's not (easily) possible to investigate logs. The > zip is over 40 mb and downloading fails. > It seems as if the container logs make up a huge chunk but should not be of > interest in most cases. This ticket should be about reducing the logs size to > normal levels (less than 1mb zipped). For comparison, a normal Flink run > results in 3mb zipped logs with many more tests. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36286) Set up validation workflows
[ https://issues.apache.org/jira/browse/FLINK-36286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36286: --- Labels: pull-request-available (was: ) > Set up validation workflows > --- > > Key: FLINK-36286 > URL: https://issues.apache.org/jira/browse/FLINK-36286 > Project: Flink > Issue Type: Sub-task >Reporter: Hong Liang Teoh >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36146) NoSuchElement exception from SingleThreadFetcherManager
[ https://issues.apache.org/jira/browse/FLINK-36146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36146: --- Labels: pull-request-available (was: ) > NoSuchElement exception from SingleThreadFetcherManager > --- > > Key: FLINK-36146 > URL: https://issues.apache.org/jira/browse/FLINK-36146 > Project: Flink > Issue Type: Bug > Components: API / Core > Environment: AWS EMR/Yarn >Reporter: Kim Gräsman >Priority: Minor > Labels: pull-request-available > > We're running Flink 1.14.2, but this appears to be an issue still on > mainline, so I thought I'd report it. > When running with high parallelism we've noticed a spurious error triggered > by a FileSource reader from S3; > {code:java} > 2024-08-19 15:23:07,044 INFO > org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Finished > reading split(s) [543131] > 2024-08-19 15:23:07,044 INFO > org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - > Finished reading from splits [543131] > 2024-08-19 15:23:07,044 INFO > org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager [] > - Closing splitFetcher 157 because it is idle. > 2024-08-19 15:23:07,045 INFO > org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - > Shutting down split fetcher 157 > 2024-08-19 15:23:07,045 INFO > org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Split > fetcher 157 exited. > 2024-08-19 15:23:07,048 INFO > org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Adding > split(s) to reader: [FileSourceSplit: ... [0, 21679984) hosts=[localhost] > ID=201373 position=null] > 2024-08-19 15:23:07,064 INFO > org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Closing > Source Reader. > 2024-08-19 15:23:07,069 WARN org.apache.flink.runtime.taskmanager.Task > [] - Source: ... -> ... (114/1602)#0 (...) 
switched from RUNNING > to FAILED with failure cause: java.util.NoSuchElementException > at > java.base/java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:3471) > at > org.apache.flink.connector.base.source.reader.fetcher.SingleThreadFetcherManager.getRunningFetcher(SingleThreadFetcherManager.java:94) > at > org.apache.flink.connector.base.source.reader.fetcher.SingleThreadFetcherManager.addSplits(SingleThreadFetcherManager.java:82) > at > org.apache.flink.connector.base.source.reader.SourceReaderBase.addSplits(SourceReaderBase.java:242) > at > org.apache.flink.streaming.api.operators.SourceOperator.handleOperatorEvent(SourceOperator.java:428) > at > org.apache.flink.streaming.runtime.tasks.OperatorEventDispatcherImpl.dispatchEventToHandlers(OperatorEventDispatcherImpl.java:70) > at > org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.dispatchOperatorEvent(RegularOperatorChain.java:83) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$dispatchOperatorEvent$19(StreamTask.java:1473) > at > org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) > at > org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsNonBlocking(MailboxProcessor.java:353) > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:317) > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761) > at > org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958) > at > org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) > at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) > at java.base/java.lang.Thread.run(Thread.java:829) {code} > I believe this may be caused by a tiny TOCTOU race in > {{SingleThreadFetcherManager}}. I'll admit that I don't fully > understand what the execution flows through that code look like, but the use > of atomics and synchronization indicates that it is used by multiple threads. If > that's not the case, this report can be safely ignored. > The backtrace points to > [https://github.com/apache/flink/blob/4faf0966766e3734792f80ed66e512aa3033cacd/flink-connectors/flink-connector-base/src
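The check-then-act pattern suspected above can be reproduced deterministically in a single thread. This is a hypothetical minimal sketch, not Flink's code: it only assumes that getRunningFetcher performs an emptiness check on the fetcher map followed by values().iterator().next(), as the backtrace suggests.

```java
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;

// Minimal reproduction of the suspected TOCTOU race: the emptiness check
// passes, the last fetcher is removed (in Flink, by a fetcher shutting down
// on another thread), and the subsequent iterator().next() then throws.
public class FetcherRaceSketch {
    static boolean reproduces() {
        ConcurrentHashMap<Integer, String> fetchers = new ConcurrentHashMap<>();
        fetchers.put(157, "splitFetcher-157");
        boolean checkedNonEmpty = !fetchers.isEmpty(); // check: map appears non-empty
        fetchers.remove(157);                          // act elsewhere: fetcher 157 exits
        try {
            fetchers.values().iterator().next();       // the call the backtrace points at
            return false;
        } catch (NoSuchElementException e) {
            return checkedNonEmpty;                    // check passed, yet next() threw
        }
    }

    public static void main(String[] args) {
        System.out.println("race reproduced: " + reproduces()); // prints "race reproduced: true"
    }
}
```

Guarding the whole check-then-read under the existing lock, or catching NoSuchElementException and returning null, would both close the window.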
[jira] [Updated] (FLINK-36240) Incorrect Port display in the PrometheusReporter constructor
[ https://issues.apache.org/jira/browse/FLINK-36240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36240: --- Labels: pull-request-available (was: ) > Incorrect Port display in the PrometheusReporter constructor > > > Key: FLINK-36240 > URL: https://issues.apache.org/jira/browse/FLINK-36240 > Project: Flink > Issue Type: Bug > Components: Runtime / Metrics >Affects Versions: 1.17.2, 1.18.1, 1.20.0, 1.19.1 >Reporter: Arkadiusz Dankiewicz >Assignee: Arkadiusz Dankiewicz >Priority: Minor > Labels: pull-request-available > > In Flink v1.17 I found a problem with the PrometheusReporter constructor that > is still there. When the {{PrometheusReporter}} fails to start on any > configured port, the error message does not correctly display the configured > ports. Instead of printing the actual port numbers, the error message outputs > the {{Iterator}}'s toString representation, which is not helpful for > debugging. > The relevant code snippet is as follows: > {code:java} > PrometheusReporter(Iterator<Integer> ports) { > while (ports.hasNext()) { > port = ports.next(); > try { > httpServer = new HTTPServer(new InetSocketAddress(port), > this.registry); > log.info("Started PrometheusReporter HTTP server on port > {}.", port); > break; > } catch (IOException ioe) { // assume port conflict > log.debug("Could not start PrometheusReporter HTTP server on > port {}.", port, ioe); > } > } > if (httpServer == null) { > throw new RuntimeException( > "Could not start PrometheusReporter HTTP server on any > configured port. Ports: " > + ports); > } > } {code} > The RuntimeException logs the Iterator object reference instead of the actual > list of port numbers: > {code:java} > Could not start PrometheusReporter HTTP server on any configured port. 
Ports: > org.apache.flink.util.UnionIterator@67065fd3{code} > To make the error message more informative, it would be better to log the > actual port numbers by collecting them into a list or converting the > {{Iterator}} to a string representation of the configured ports. This change > will significantly improve the debugging process for port-related issues in > the PrometheusReporter. -- This message was sent by Atlassian Jira (v8.20.10#820010)
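A hedged sketch of the suggested fix follows: materialize the iterator into a list before the bind loop, so the failure message can show the real port numbers. The names here are illustrative, not the actual reporter code.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch only: collect the configured ports up front so the
// eventual failure message can print them, instead of the Iterator's
// default toString (e.g. "org.apache.flink.util.UnionIterator@67065fd3").
public class PortListSketch {
    static String failureMessage(Iterator<Integer> ports) {
        List<Integer> portList = new ArrayList<>();
        ports.forEachRemaining(portList::add);
        // ... the real reporter would try to bind each port in portList here ...
        return "Could not start PrometheusReporter HTTP server on any configured port. Ports: "
                + portList;
    }

    public static void main(String[] args) {
        // prints the message ending in "Ports: [9249, 9250]"
        System.out.println(failureMessage(List.of(9249, 9250).iterator()));
    }
}
```

Collecting up front also sidesteps the fact that the iterator is already exhausted by the time the exception is built.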
[jira] [Updated] (FLINK-36282) Incorrect data type in mysql pipeline connector
[ https://issues.apache.org/jira/browse/FLINK-36282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36282: --- Labels: pull-request-available (was: ) > Incorrect data type in mysql pipeline connector > --- > > Key: FLINK-36282 > URL: https://issues.apache.org/jira/browse/FLINK-36282 > Project: Flink > Issue Type: Bug > Components: Flink CDC >Affects Versions: cdc-3.2.0 >Reporter: linqigeng >Priority: Major > Labels: pull-request-available > Fix For: cdc-3.3.0, cdc-3.2.1 > > Attachments: image-2024-09-14-13-46-59-477.png, > image-2024-09-14-13-48-07-576.png, image-2024-09-14-13-48-39-430.png > > > There is a TINYINT(1) type column in the MySQL source table, and > `jdbc.properties.tinyInt1isBit: false` is defined in the pipeline YAML, but > flink-cdc-pipeline-connector-mysql still returns the BOOLEAN type, while the > TINYINT type is expected. > !image-2024-09-14-13-48-39-430.png! > !image-2024-09-14-13-46-59-477.png! > > !image-2024-09-14-13-48-07-576.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36284) StreamTableEnvironment#toDataStream(Table table, Class targetClass) is not suitable for setting targetClass as a class generated by Avro.
[ https://issues.apache.org/jira/browse/FLINK-36284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36284: --- Labels: pull-request-available (was: ) > StreamTableEnvironment#toDataStream(Table table, Class targetClass) is not > suitable for setting targetClass as a class generated by Avro. > > > Key: FLINK-36284 > URL: https://issues.apache.org/jira/browse/FLINK-36284 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table > SQL / API >Reporter: xuyang >Priority: Blocker > Labels: pull-request-available > Fix For: 2.0-preview > > Attachments: image-2024-09-14-17-39-16-698.png > > > This issue can be fired by updating the {{testAvroToAvro}} method in the > {{org.apache.flink.table.runtime.batch.AvroTypesITCase}} class. > > {code:java} > @Test > public void testAvroToAvro() { > StreamExecutionEnvironment env = > StreamExecutionEnvironment.getExecutionEnvironment(); > StreamTableEnvironment tEnv = StreamTableEnvironment.create(env); > DataStream ds = testData(env); > // before: using deprecated method > // Table t = tEnv.fromDataStream(ds, selectFields(ds)); > // after: using recommended new method > Table t = tEnv.fromDataStream(ds); > Table result = t.select($("*")); > // before: using deprecated method > // List results = > //CollectionUtil.iteratorToList( > //DataStreamUtils.collect(tEnv.toAppendStream(result, > User.class))); > // after: using recommended new method > List results = > CollectionUtil.iteratorToList( > DataStreamUtils.collect(tEnv.toDataStream(result, > User.class))); > List expected = Arrays.asList(USER_1, USER_2, USER_3); > assertThat(results).isEqualTo(expected); > } {code} > An exception will be thrown: > !image-2024-09-14-17-39-16-698.png|width=1049,height=594! > > > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36283) The new PERCENTILE function doesn't follow user function first behavior when it encounters a user function with same name
[ https://issues.apache.org/jira/browse/FLINK-36283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36283: --- Labels: pull-request-available (was: ) > The new PERCENTILE function doesn't follow user function first behavior when > it encounters a user function with same name > - > > Key: FLINK-36283 > URL: https://issues.apache.org/jira/browse/FLINK-36283 > Project: Flink > Issue Type: Bug >Affects Versions: 2.0-preview >Reporter: lincoln lee >Assignee: Dylan He >Priority: Major > Labels: pull-request-available > Fix For: 2.0.0 > > > The new PERCENTILE function doesn't follow the user-function-first behavior when > it encounters a user function with the same name. E.g., if a user creates a temporary > function named > `percentile`, the following query should use the user's function instead of the > built-in one: > {code} > select percentile(...) > {code} > This was mentioned during the review but was missed in the final check; we should fix > it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
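The intended precedence can be sketched as a tiny registry in which temporary user functions shadow built-ins. The registry shape and names below are assumptions for illustration, not Flink's actual FunctionCatalog.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch: user-registered temporary functions are consulted
// first, so a temporary `percentile` shadows the built-in PERCENTILE.
public class FunctionResolutionSketch {
    private final Map<String, String> temporaryFunctions = new HashMap<>();
    private final Map<String, String> builtinFunctions = new HashMap<>();

    FunctionResolutionSketch() {
        builtinFunctions.put("percentile", "BuiltInPercentile");
    }

    void registerTemporary(String name, String impl) {
        temporaryFunctions.put(name.toLowerCase(), impl);
    }

    // User functions first; the built-in is only a fallback.
    Optional<String> resolve(String name) {
        String temp = temporaryFunctions.get(name.toLowerCase());
        if (temp != null) {
            return Optional.of(temp);
        }
        return Optional.ofNullable(builtinFunctions.get(name.toLowerCase()));
    }

    public static void main(String[] args) {
        FunctionResolutionSketch registry = new FunctionResolutionSketch();
        System.out.println(registry.resolve("percentile").get()); // prints "BuiltInPercentile"
        registry.registerTemporary("percentile", "UserPercentile");
        System.out.println(registry.resolve("percentile").get()); // prints "UserPercentile"
    }
}
```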
[jira] [Updated] (FLINK-36273) Remove deprecated Table/SQL configuration in 2.0
[ https://issues.apache.org/jira/browse/FLINK-36273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36273: --- Labels: pull-request-available (was: ) > Remove deprecated Table/SQL configuration in 2.0 > > > Key: FLINK-36273 > URL: https://issues.apache.org/jira/browse/FLINK-36273 > Project: Flink > Issue Type: Sub-task >Reporter: xuyang >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36281) resourceSpec replace SlotSharingGroupUtils.extractResourceSpec(slotSharingGroup)
[ https://issues.apache.org/jira/browse/FLINK-36281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36281: --- Labels: pull-request-available (was: ) > resourceSpec replace > SlotSharingGroupUtils.extractResourceSpec(slotSharingGroup) > > > Key: FLINK-36281 > URL: https://issues.apache.org/jira/browse/FLINK-36281 > Project: Flink > Issue Type: Improvement > Components: API / Core >Affects Versions: 1.20.0 >Reporter: Caican Cai >Priority: Minor > Labels: pull-request-available > Fix For: 2.0.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36245) Remove legacy SourceFunction / SinkFunction / Sink V1 API in 2.0
[ https://issues.apache.org/jira/browse/FLINK-36245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36245: --- Labels: 2.0-related pull-request-available (was: 2.0-related) > Remove legacy SourceFunction / SinkFunction / Sink V1 API in 2.0 > > > Key: FLINK-36245 > URL: https://issues.apache.org/jira/browse/FLINK-36245 > Project: Flink > Issue Type: Technical Debt > Components: Connectors / Common >Reporter: Qingsheng Ren >Assignee: LvYanquan >Priority: Major > Labels: 2.0-related, pull-request-available > Fix For: 2.0-preview > > > The SourceFunction, SinkFunction and Sink V1 APIs have been marked as deprecated > and should be removed in Flink 2.0. > Considering SourceFunction / SinkFunction are heavily used in test cases for > building a simple data generator or a data validator, it could be a huge > amount of work to rewrite all these usages with the Source and Sink V2 APIs. A > viable path for the 2.0-preview version would be: > * Move SourceFunction, SinkFunction to an internal package, as a test util > * Rewrite all Sink V1 implementations with Sink V2 directly (the usage of > Sink V1 is low in the main repo) > As a long-term working item, all usages of SourceFunction and SinkFunction > will be replaced by the Source and Sink APIs. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36277) Remove all direct uses of `TableEnvironmentInternal#registerTableSourceInternal` in table module
[ https://issues.apache.org/jira/browse/FLINK-36277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36277: --- Labels: pull-request-available (was: ) > Remove all direct uses of > `TableEnvironmentInternal#registerTableSourceInternal` in table module > > > Key: FLINK-36277 > URL: https://issues.apache.org/jira/browse/FLINK-36277 > Project: Flink > Issue Type: Sub-task >Reporter: xuyang >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36276) Fix the description of MySqlDataSourceOptions.`scan.startup.mode`.
[ https://issues.apache.org/jira/browse/FLINK-36276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36276: --- Labels: pull-request-available (was: ) > Fix the description of MySqlDataSourceOptions.`scan.startup.mode`. > -- > > Key: FLINK-36276 > URL: https://issues.apache.org/jira/browse/FLINK-36276 > Project: Flink > Issue Type: Bug >Reporter: HunterXHunter >Priority: Minor > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36258) testRecursiveUploadForYarnS3n failed due to no AWS Credentials provided
[ https://issues.apache.org/jira/browse/FLINK-36258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36258: --- Labels: pull-request-available test-stability (was: test-stability) > testRecursiveUploadForYarnS3n failed due to no AWS Credentials provided > > > Key: FLINK-36258 > URL: https://issues.apache.org/jira/browse/FLINK-36258 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN, Tests >Affects Versions: 2.0.0 >Reporter: Weijie Guo >Assignee: Xuannan Su >Priority: Blocker > Labels: pull-request-available, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=61979&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=28296 > {code:java} > Sep 11 05:44:32 Caused by: java.lang.IllegalArgumentException: AWS Access Key > ID and Secret Access Key must be specified by setting the > fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey properties (respectively). 
> Sep 11 05:44:32 at > org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:74) > Sep 11 05:44:32 at > org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.initialize(Jets3tNativeFileSystemStore.java:80) > Sep 11 05:44:32 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Sep 11 05:44:32 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Sep 11 05:44:32 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Sep 11 05:44:32 at java.lang.reflect.Method.invoke(Method.java:498) > Sep 11 05:44:32 at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433) > Sep 11 05:44:32 at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) > Sep 11 05:44:32 at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) > Sep 11 05:44:32 at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) > Sep 11 05:44:32 at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) > Sep 11 05:44:32 at > org.apache.hadoop.fs.s3native.$Proxy64.initialize(Unknown Source) > Sep 11 05:44:32 at > org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:334) > Sep 11 05:44:32 at > org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:168) > Sep 11 05:44:32 ... 33 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-29741) [FLIP-265] Remove all Scala APIs
[ https://issues.apache.org/jira/browse/FLINK-29741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-29741: --- Labels: 2.0-related pull-request-available (was: 2.0-related) > [FLIP-265] Remove all Scala APIs > > > Key: FLINK-29741 > URL: https://issues.apache.org/jira/browse/FLINK-29741 > Project: Flink > Issue Type: Sub-task > Components: API / Scala >Reporter: Martijn Visser >Assignee: xuhuang >Priority: Major > Labels: 2.0-related, pull-request-available > Fix For: 2.0-preview > > > - Remove all @Public, @PublicEvolving and @Experimental Scala APIs (which > should have been marked as @Deprecated in FLINK-29740) > - Remove all Scala API documentation -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35634) Add a CDC quickstart utility
[ https://issues.apache.org/jira/browse/FLINK-35634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35634: --- Labels: pull-request-available (was: ) > Add a CDC quickstart utility > > > Key: FLINK-35634 > URL: https://issues.apache.org/jira/browse/FLINK-35634 > Project: Flink > Issue Type: New Feature > Components: Flink CDC >Reporter: yux >Assignee: yux >Priority: Minor > Labels: pull-request-available > > Currently, it's not very easy to initialize a CDC pipeline job from scratch, > requiring users to configure many Flink options manually. > This ticket suggests creating an extra component, like `tiup` and `rustup`, to > help users create and submit CDC jobs quickly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36275) Remove deprecated ProgramArgsQueryParameter
[ https://issues.apache.org/jira/browse/FLINK-36275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36275: --- Labels: pull-request-available (was: ) > Remove deprecated ProgramArgsQueryParameter > --- > > Key: FLINK-36275 > URL: https://issues.apache.org/jira/browse/FLINK-36275 > Project: Flink > Issue Type: Sub-task > Components: Runtime / REST >Affects Versions: 2.0-preview >Reporter: Weijie Guo >Priority: Blocker > Labels: pull-request-available > > ProgramArgsQueryParameter has been deprecated in Flink 1.7(see > https://issues.apache.org/jira/browse/FLINK-10295). We can remove > this(Breaking compatibility) in Flink 2.0. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36270) DDB Streams Connector performance issue due to splitsAvailableForAssignment function
[ https://issues.apache.org/jira/browse/FLINK-36270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36270: --- Labels: pull-request-available (was: ) > DDB Streams Connector performance issue due to splitsAvailableForAssignment > function > > > Key: FLINK-36270 > URL: https://issues.apache.org/jira/browse/FLINK-36270 > Project: Flink > Issue Type: Bug > Components: Connectors / DynamoDB >Reporter: Abhi Gupta >Priority: Major > Labels: pull-request-available > > In the DDB Streams connector, while testing we found that a lot of time is > spent in the markAsFinished function because it calls > splitsAvailableForAssignment, which is O(n); since n shards can be marked > as finished concurrently, the overall algorithm becomes O(n^2). Change the algorithm to > assign only the child shards when a parent is finished. We can start tracking the > child shards of each shard in SplitTracker. -- This message was sent by Atlassian Jira (v8.20.10#820010)
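The proposed change can be sketched as follows, assuming a hypothetical parent-to-children map inside SplitTracker; shard ids and method names are illustrative, not the connector's API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the proposed fix: track each shard's children so
// that finishing a parent only touches its own children, O(children) per
// finish instead of an O(n) rescan of every split (O(n^2) overall when n
// shards finish concurrently).
public class SplitTrackerSketch {
    private final Map<String, List<String>> childrenByParent = new HashMap<>();

    void registerChild(String parentId, String childId) {
        childrenByParent.computeIfAbsent(parentId, k -> new ArrayList<>()).add(childId);
    }

    // Called when a parent shard is marked finished; returns the splits that
    // just became assignable without scanning the whole tracker.
    List<String> onParentFinished(String parentId) {
        return childrenByParent.getOrDefault(parentId, List.of());
    }

    public static void main(String[] args) {
        SplitTrackerSketch tracker = new SplitTrackerSketch();
        tracker.registerChild("shard-0001", "shard-0002");
        tracker.registerChild("shard-0001", "shard-0003");
        System.out.println(tracker.onParentFinished("shard-0001")); // prints "[shard-0002, shard-0003]"
    }
}
```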
[jira] [Updated] (FLINK-36269) Remove `fromTableSource` in python module
[ https://issues.apache.org/jira/browse/FLINK-36269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36269: --- Labels: pull-request-available (was: ) > Remove `fromTableSource` in python module > - > > Key: FLINK-36269 > URL: https://issues.apache.org/jira/browse/FLINK-36269 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: xuyang >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36248) Introduce new Join Operator with Async State API
[ https://issues.apache.org/jira/browse/FLINK-36248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36248: --- Labels: pull-request-available (was: ) > Introduce new Join Operator with Async State API > > > Key: FLINK-36248 > URL: https://issues.apache.org/jira/browse/FLINK-36248 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Runtime >Reporter: xuyang >Assignee: xuyang >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36016) Synchronize initialization time and clock usage
[ https://issues.apache.org/jira/browse/FLINK-36016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36016: --- Labels: pull-request-available (was: ) > Synchronize initialization time and clock usage > > > Key: FLINK-36016 > URL: https://issues.apache.org/jira/browse/FLINK-36016 > Project: Flink > Issue Type: Sub-task >Reporter: Zdenek Tison >Assignee: Zdenek Tison >Priority: Major > Labels: pull-request-available > > StateTransitionManager's initialization time and the clock parameter should > be based on the same time. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36249) Remove RestartStrategy-related configuration getters/setters that return/set complex Java objects
[ https://issues.apache.org/jira/browse/FLINK-36249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36249: --- Labels: pull-request-available (was: ) > Remove RestartStrategy-related configuration getters/setters that return/set > complex Java objects > - > > Key: FLINK-36249 > URL: https://issues.apache.org/jira/browse/FLINK-36249 > Project: Flink > Issue Type: Sub-task > Components: API / Core >Reporter: Junrui Li >Assignee: Junrui Li >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > > FLINK-33581/FLIP-381: Deprecate configuration getters/setters that return or > set complex Java objects. > In Flink 2.0, we will remove these deprecated methods and fields. This change > will prevent users from configuring their jobs by passing complex Java > objects, encouraging them to use {{ConfigOption}} instead. > This JIRA will remove the public API related to {{RestartStrategy}} > getter/setter methods. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36266) Insert into as select * behaves incorrectly
[ https://issues.apache.org/jira/browse/FLINK-36266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36266: --- Labels: pull-request-available (was: ) > Insert into as select * behaves incorrectly > - > > Key: FLINK-36266 > URL: https://issues.apache.org/jira/browse/FLINK-36266 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available > > For instance, if there are three tables > {code:sql} > t(f0: INT, f1: INT, f2: INT) > t3(f0: INT, f1: INT, f2: INT) > t2(f0: INT, f1: INT) > {code} > then these queries fail > {code:sql} > INSERT INTO t(f0, f1, f2) SELECT * FROM t3; > INSERT INTO t(f0, f1, f2) SELECT 42, * FROM t2; > INSERT INTO t(f0, f1, f2) SELECT *, 42 FROM t2; > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33750) Remove deprecated config options.
[ https://issues.apache.org/jira/browse/FLINK-33750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-33750: --- Labels: 2.0-related pull-request-available (was: 2.0-related) > Remove deprecated config options. > - > > Key: FLINK-33750 > URL: https://issues.apache.org/jira/browse/FLINK-33750 > Project: Flink > Issue Type: Sub-task > Components: API / Core >Reporter: Junrui Li >Assignee: Xuannan Su >Priority: Blocker > Labels: 2.0-related, pull-request-available > Fix For: 2.0-preview > > > Remove deprecated config options in FLINK-2.0. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36262) Avoid unnecessary String concatenations in the RexFieldAccess constructor: port fix for CALCITE-5965
[ https://issues.apache.org/jira/browse/FLINK-36262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36262: --- Labels: pull-request-available (was: ) > Avoid unnecessary String concatenations in the RexFieldAccess constructor: > port fix for CALCITE-5965 > > > Key: FLINK-36262 > URL: https://issues.apache.org/jira/browse/FLINK-36262 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available > > For tables with deeply nested structures, the unnecessary > String concatenations in the RexFieldAccess constructor start to play a > significant role, > and we have already run into this. > Since the fix (CALCITE-5965) only landed in Calcite 1.36.0, it could take a long > time before it arrives with a Calcite upgrade (currently only the 1.33 and 1.34 > upgrades are in review). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36225) Remove deprecated methods in FLIP-382
[ https://issues.apache.org/jira/browse/FLINK-36225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36225: --- Labels: pull-request-available (was: ) > Remove deprecated methods in FLIP-382 > - > > Key: FLINK-36225 > URL: https://issues.apache.org/jira/browse/FLINK-36225 > Project: Flink > Issue Type: Sub-task > Components: API / Core >Affects Versions: 2.0-preview >Reporter: Weijie Guo >Assignee: Weijie Guo >Priority: Blocker > Labels: pull-request-available > Fix For: 2.0-preview > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35927) Make ForStFlinkFileSystem save misc file in local filesystem
[ https://issues.apache.org/jira/browse/FLINK-35927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35927: --- Labels: pull-request-available (was: ) > Make ForStFlinkFileSystem save misc file in local filesystem > > > Key: FLINK-35927 > URL: https://issues.apache.org/jira/browse/FLINK-35927 > Project: Flink > Issue Type: Sub-task > Components: Runtime / State Backends >Reporter: Hangxiang Yu >Assignee: Hangxiang Yu >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36230) REGEXP_EXTRACT Python Table API fails
[ https://issues.apache.org/jira/browse/FLINK-36230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36230: --- Labels: pull-request-available (was: ) > REGEXP_EXTRACT Python Table API fails > - > > Key: FLINK-36230 > URL: https://issues.apache.org/jira/browse/FLINK-36230 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Dylan He >Priority: Major > Labels: pull-request-available > Attachments: image-2024-09-06-10-37-12-147.png > > > Invalid call when extract_index is None. > !image-2024-09-06-10-37-12-147.png|width=514,height=168! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35667) Implement Reducing Async State API for ForStStateBackend
[ https://issues.apache.org/jira/browse/FLINK-35667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35667: --- Labels: pull-request-available (was: ) > Implement Reducing Async State API for ForStStateBackend > > > Key: FLINK-35667 > URL: https://issues.apache.org/jira/browse/FLINK-35667 > Project: Flink > Issue Type: Sub-task >Reporter: Zakelly Lan >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36014) Align the desired and sufficient resources definition in Executing and WaitForResources states
[ https://issues.apache.org/jira/browse/FLINK-36014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36014: --- Labels: pull-request-available (was: ) > Align the desired and sufficient resources definition in Executing and > WaitForResources states > - > > Key: FLINK-36014 > URL: https://issues.apache.org/jira/browse/FLINK-36014 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Coordination >Reporter: Zdenek Tison >Assignee: Zdenek Tison >Priority: Major > Labels: pull-request-available > > The goal is to use the same definition for the desired and sufficient > resources in the Executing state as in the WaitingForResources state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36246) Move async state related operators to flink-runtime
[ https://issues.apache.org/jira/browse/FLINK-36246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36246: --- Labels: pull-request-available (was: ) > Move async state related operators to flink-runtime > > > Key: FLINK-36246 > URL: https://issues.apache.org/jira/browse/FLINK-36246 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Reporter: Zakelly Lan >Assignee: Zakelly Lan >Priority: Major > Labels: pull-request-available > > After FLINK-36063, all operators were moved to flink-runtime. We should move > the async state related ones as well. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34079) FLIP-405: Migrate string configuration key to ConfigOption
[ https://issues.apache.org/jira/browse/FLINK-34079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34079: --- Labels: pull-request-available (was: ) > FLIP-405: Migrate string configuration key to ConfigOption > -- > > Key: FLINK-34079 > URL: https://issues.apache.org/jira/browse/FLINK-34079 > Project: Flink > Issue Type: Improvement > Components: Runtime / Configuration >Reporter: Rui Fan >Assignee: Xuannan Su >Priority: Major > Labels: pull-request-available > Fix For: 2.0.0 > > > This is an umbrella Jira of > [FLIP-405|https://cwiki.apache.org/confluence/x/6Yr5E] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36243) Store namespace in ContextKey
[ https://issues.apache.org/jira/browse/FLINK-36243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36243: --- Labels: pull-request-available (was: ) > Store namespace in ContextKey > - > > Key: FLINK-36243 > URL: https://issues.apache.org/jira/browse/FLINK-36243 > Project: Flink > Issue Type: Sub-task > Components: Runtime / State Backends >Reporter: Yanfei Lei >Priority: Major > Labels: pull-request-available > Fix For: 2.0-preview > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36242) Fix unstable test in MaterializedTableITCase
[ https://issues.apache.org/jira/browse/FLINK-36242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36242: --- Labels: pull-request-available (was: ) > Fix unstable test in MaterializedTableITCase > > > Key: FLINK-36242 > URL: https://issues.apache.org/jira/browse/FLINK-36242 > Project: Flink > Issue Type: Bug > Components: Table SQL / Gateway >Reporter: Feng Jin >Priority: Major > Labels: pull-request-available > > h3. Error message: > {code:java} > Aug 21 09:32:20 09:32:20.322 [ERROR] Tests run: 18, Failures: 0, Errors: 1, > Skipped: 0, Time elapsed: 69.61 s <<< FAILURE! – in > org.apache.flink.table.gateway.service.MaterializedTableStatementITCase > Aug 21 09:32:20 09:32:20.322 [ERROR] > org.apache.flink.table.gateway.service.MaterializedTableStatementITCase.testDropMaterializedTableWithDeletedRefreshWorkflowInFullMode > – Time elapsed: 0.415 s <<< ERROR! > Aug 21 09:32:20 org.apache.flink.table.gateway.api.utils.SqlGatewayException: > Failed to getTable. 
> Aug 21 09:32:20 at > org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.getTable(SqlGatewayServiceImpl.java:300) > Aug 21 09:32:20 at > org.apache.flink.table.gateway.AbstractMaterializedTableStatementITCase.after(AbstractMaterializedTableStatementITCase.java:196) > Aug 21 09:32:20 at java.lang.reflect.Method.invoke(Method.java:498) > Aug 21 09:32:20 at > java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) > Aug 21 09:32:20 at > java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) > Aug 21 09:32:20 at > java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) > Aug 21 09:32:20 at > java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) > Aug 21 09:32:20 at > java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) > Aug 21 09:32:20 Caused by: org.apache.flink.table.api.TableException: Cannot > find table '`test_catalog12`.`test_db`.`users_shops`' in any of the catalogs > [test_catalog6, test_catalog7, test_catalog4, test_catalog5, test_catalog8, > test_catalog9, test_catalog12, test_catalog11, test_catalog2, test_catalog10, > test_catalog3, test_catalog1, default_catalog], nor as a temporary table. > Aug 21 09:32:20 at > org.apache.flink.table.catalog.CatalogManager.lambda$getTableOrError$4(CatalogManager.java:673) > Aug 21 09:32:20 at java.util.Optional.orElseThrow(Optional.java:290) > Aug 21 09:32:20 at > org.apache.flink.table.catalog.CatalogManager.getTableOrError(CatalogManager.java:670) > Aug 21 09:32:20 at > org.apache.flink.table.gateway.service.operation.OperationExecutor.getTable(OperationExecutor.java:297) > Aug 21 09:32:20 at > org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.getTable(SqlGatewayServiceImpl.java:297) > Aug 21 09:32:20 ... 
7 more > {code} > > Corresponding test link: > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=61530&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&s=ae4f8708-9994-57d3-c2d7-b892156e7812] > h3. Problem: > > As shown in the error message above, the afterEach test method lists all materialized tables and then drops them to prevent any remaining refresh tasks. > However, since the materialized table has already been dropped, this causes the error above. > * Why does listing the tables show they exist, while dropping them results in an error stating they do not? > 1. In the test dropMaterializedTableWithDeletedRefreshWorkflowInFullMode, we manually dropped the materialized table. > 2. Despite the manual deletion, background refresh tasks are still being submitted, so data keeps being written into the corresponding table data directory. > 3. During listTable, TestFileSystemCatalog lists all directories for existing tables and returns true whenever a directory exists. During dropTable, however, it checks that both the table and schema files exist. This inconsistency caused the issue. > h3. Solution: > 1. Fix the TestFileSystemCatalog listTable logic to check not only the directory but also the existence of the schema file. > 2. To further avoid this problem, change all tables in MaterializedTableITCase to be dropped manually instead. -- This message was sent by Atlassian Jira (v8.20.10#820010)
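The inconsistency described in this issue can be sketched as follows; the directory layout and the "schema" file name are assumptions for illustration, not the actual TestFileSystemCatalog code:

```java
// Sketch: make the "does this table exist" check used by listTable consistent
// with the one used by dropTable, so a directory left behind by a late refresh
// task is not reported as a live table.
import java.nio.file.Files;
import java.nio.file.Path;

public class CatalogExistenceSketch {
    // Old listTable check: a bare directory counts as a table.
    static boolean tableDirExists(Path tableDir) {
        return Files.isDirectory(tableDir);
    }

    // Fixed check: require the schema file too, matching what dropTable expects.
    static boolean tableExists(Path tableDir) {
        return Files.isDirectory(tableDir) && Files.exists(tableDir.resolve("schema"));
    }

    // Demo: a directory left behind by a late refresh task (no schema file)
    // passes the old check but fails the fixed one.
    static boolean[] demo() {
        try {
            Path dir = Files.createTempDirectory("users_shops");
            boolean oldCheck = tableDirExists(dir);      // true: directory exists
            boolean newCheckBefore = tableExists(dir);   // false: schema file missing
            Files.createFile(dir.resolve("schema"));
            boolean newCheckAfter = tableExists(dir);    // true: both present
            return new boolean[] {oldCheck, newCheckBefore, newCheckAfter};
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

With both code paths agreeing on what "exists" means, listTable can no longer return a table that dropTable refuses to find.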
[jira] [Updated] (FLINK-36239) DDB Streams Connector reprocessing due to DescribeStream inconsistencies for trimmed shards
[ https://issues.apache.org/jira/browse/FLINK-36239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36239: --- Labels: pull-request-available (was: ) > DDB Streams Connector reprocessing due to DescribeStream inconsistencies for > trimmed shards > --- > > Key: FLINK-36239 > URL: https://issues.apache.org/jira/browse/FLINK-36239 > Project: Flink > Issue Type: Bug > Components: Connectors / DynamoDB >Reporter: Abhi Gupta >Priority: Major > Labels: pull-request-available > > *Problem* > Events can be reprocessed when DDB Streams shards are deleted by DDB after 24 hours. > *Root cause* > We use the DDB DescribeStream API to retrieve the list of shards to consume from. This API is eventually consistent when it comes to deleting expired shards, so some responses will include an expired shard and some will not. > In the DDB Streams connector, shards have the following lifecycle: > # {*}Discovery{*}: Shard discovered (known) > # {*}Assign{*}: Shard assigned (assigned) > # {*}Finished{*}: Once a shard is finished, it is moved to finished (finished) > # *Cleanup:* Once a shard is finished, DescribeStream no longer returns the shardId, and more than 24h have passed, we delete the shard from state (to prevent accumulating unnecessary state). > *Example:* > * We called DescribeStream and processed shard-a more than 24 hours ago. > * The shard has since been removed, as it is more than 24 hours old. > * A DescribeStream call did not return this shard. > * A later DescribeStream call returned the shard again due to these inconsistencies, and we sent it to the SplitTracker. The shard was not in the finished set, since we had deleted it some time back for being more than 25 hours old, so it would be processed a second time. > *Fix* > We'll increase the shard retention to 48 hours to avoid these edge cases. -- This message was sent by Atlassian Jira (v8.20.10#820010)
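The cleanup rule and the proposed fix can be sketched as below; the method and constant names are illustrative, not the connector's actual code:

```java
// Sketch of the retention rule: only forget a finished shard once it is both
// absent from DescribeStream and older than the retention period.
public class ShardRetentionSketch {
    // Raised from 24 to 48 hours, so a shard briefly resurrected by an
    // eventually-consistent DescribeStream response is still found in the
    // finished state rather than treated as new work.
    static final long RETENTION_HOURS = 48;

    // Evict a finished shard from state only when DescribeStream no longer
    // returns it AND it has been finished for longer than the retention period.
    static boolean shouldEvict(long hoursSinceFinished, boolean returnedByDescribeStream) {
        return !returnedByDescribeStream && hoursSinceFinished > RETENTION_HOURS;
    }
}
```

Under the old 24-hour retention, the 25-hour-old shard in the example above would already have been evicted; with 48 hours it survives long enough for inconsistent DescribeStream responses to settle.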
[jira] [Updated] (FLINK-36234) Add 1.20 to PreviousDocs list
[ https://issues.apache.org/jira/browse/FLINK-36234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36234: --- Labels: pull-request-available (was: ) > Add 1.20 to PreviousDocs list > - > > Key: FLINK-36234 > URL: https://issues.apache.org/jira/browse/FLINK-36234 > Project: Flink > Issue Type: Technical Debt > Components: Documentation >Affects Versions: 2.0.0 >Reporter: Aleksandr Pilipenko >Priority: Major > Labels: pull-request-available > > The documentation for 2.0-SNAPSHOT is missing 1.20 from the list of all versions, as well as from the version picker. > [https://nightlies.apache.org/flink/flink-docs-master/versions/] > > Reported in the mailing list: > https://lists.apache.org/thread/8g0hwk5lxly38vpcqwhd1hcy6djv9rq6 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35503) OracleE2eITCase fails with error ORA-12528 on Mac M2
[ https://issues.apache.org/jira/browse/FLINK-35503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35503: --- Labels: pull-request-available (was: ) > OracleE2eITCase fails with error ORA-12528 on Mac M2 > > > Key: FLINK-35503 > URL: https://issues.apache.org/jira/browse/FLINK-35503 > Project: Flink > Issue Type: Bug > Components: Flink CDC >Affects Versions: cdc-3.1.0 > Environment: > * Mac M2 (Apple Silicon) > * using docker desktop with Rosetta enabled for amd64 emulation > >Reporter: Saketh Kurnool >Assignee: Zhongqiang Gong >Priority: Blocker > Labels: pull-request-available > Attachments: com.ververica.cdc.connectors.tests.OracleE2eITCase.txt, > oracle-docker-setup-logs.txt > > > Hello Flink CDC community, > I am attempting to run `OracleE2eITCase` (in the cdc source connector e2e > tests), and I am running into the following runtime exception: > {code:java} > java.sql.SQLException: > Listener refused the connection with the following error: > ORA-12528, TNS:listener: all appropriate instances are blocking new > connections > > at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:854) > at > oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:793) > at > oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:57) > at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:747) > at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:562) > at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677) > at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228) > at > com.ververica.cdc.connectors.tests.OracleE2eITCase.getOracleJdbcConnection(OracleE2eITCase.java:197) > at > com.ververica.cdc.connectors.tests.OracleE2eITCase.testOracleCDC(OracleE2eITCase.java:149) > at java.base/java.lang.reflect.Method.invoke(Method.java:567) > at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > at > 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29) > Caused by: oracle.net.ns.NetException: Listener refused the connection with > the following error: > ORA-12528, TNS:listener: all appropriate instances are blocking new > connections > > at oracle.net.ns.NSProtocolNIO.negotiateConnection(NSProtocolNIO.java:284) > at oracle.net.ns.NSProtocol.connect(NSProtocol.java:340) > at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1596) > at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:588) > ... 11 more{code} > I have attached the test results to this issue. > `OracleE2eITCase` runs the `goodboy008/oracle-19.3.0-ee:non-cdb` docker image. I am able to reproduce the same issue when I run this docker image locally - my observation is that the dockerized Oracle DB instance is not being set up properly, as I notice another ORA error in the setup logs (`ORA-03113: end-of-file on communication channel`). I have also attached the logs from the docker image setup to this issue. To reproduce the ORA-12528 issue locally, I: > * ran: `docker run goodboy008/oracle-19.3.0-ee:non-cdb` > * ssh'ed into the db pod > * ran: `sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba` > Any insight/workaround on getting this e2e test and the docker image running on my machine would be much appreciated. I'm also happy to provide any other information regarding my setup in the comments. Thank you! > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36229) Port SingleThreadMultiplexSourceReaderBase to new undeprecated interfaces
[ https://issues.apache.org/jira/browse/FLINK-36229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36229: --- Labels: pull-request-available (was: ) > Port SingleThreadMultiplexSourceReaderBase to new undeprecated interfaces > - > > Key: FLINK-36229 > URL: https://issues.apache.org/jira/browse/FLINK-36229 > Project: Flink > Issue Type: Sub-task >Reporter: Hong Liang Teoh >Priority: Major > Labels: pull-request-available > > The SingleThreadFetcherManager and SingleThreadMultiplexSourceReaderBase constructors used in the DDB Streams and KDS sources are deprecated. > Let's update to the non-deprecated interfaces. The new API has been available since 1.18, and the old API was deprecated in 1.19. > > These new interfaces simplify the creation of the SplitFetcher. > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36224) Add the version mapping between pipeline connectors and flink
[ https://issues.apache.org/jira/browse/FLINK-36224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36224: --- Labels: pull-request-available (was: ) > Add the version mapping between pipeline connectors and flink > -- > > Key: FLINK-36224 > URL: https://issues.apache.org/jira/browse/FLINK-36224 > Project: Flink > Issue Type: Sub-task > Components: Flink CDC >Reporter: Thorne >Priority: Major > Labels: pull-request-available > Fix For: cdc-3.3.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34975) FLIP-427: ForSt - Disaggregated State Store
[ https://issues.apache.org/jira/browse/FLINK-34975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34975: --- Labels: pull-request-available (was: ) > FLIP-427: ForSt - Disaggregated State Store > --- > > Key: FLINK-34975 > URL: https://issues.apache.org/jira/browse/FLINK-34975 > Project: Flink > Issue Type: New Feature > Components: Runtime / State Backends >Reporter: Hangxiang Yu >Assignee: Hangxiang Yu >Priority: Major > Labels: pull-request-available > Fix For: 2.0.0 > > > This is a sub-FLIP for the disaggregated state management and its related work; please read [FLIP-423|https://cwiki.apache.org/confluence/x/R4p3EQ] first to know the whole story. > As described in FLIP-423, there are some tough issues with the embedded state backend on the local file system, especially when dealing with extremely large state: > # {*}Constraints of local disk space complicate the prediction of storage requirements, potentially leading to job failures{*}: Especially in cloud-native deployment mode, pre-allocated local disks typically face strict capacity constraints, making it challenging to forecast the size requirements of job states. Over-provisioning disk space results in unnecessary resource overhead, while under-provisioning risks job failure due to insufficient space. > # *The tight coupling of compute and storage resources leads to underutilization and increased waste:* Jobs can generally be categorized as either CPU-intensive or IO-intensive. In a coupled architecture, CPU-intensive jobs leave a significant portion of storage resources underutilized, whereas IO-intensive jobs result in idle computing resources. > By considering remote storage as the primary storage, all working states are maintained on the remote file system, which brings several advantages: > # *Remote storages e.g. 
S3/HDFS typically offer elastic scalability, theoretically providing unlimited space.* > # *The allocation of remote storage resources can be optimized by reducing them for CPU-intensive jobs and augmenting them for IO-intensive jobs, thus enhancing overall resource utilization.* > # *This architecture facilitates a highly efficient and lightweight process for checkpointing, recovery, and rescaling through fast copy or simple move.* > This FLIP aims to realize disaggregated state for our new key-value store named *ForSt*, which evolves from RocksDB and supports remote file systems. This frees Flink from the disadvantages of the coupled state architecture and lets it embrace scalable and flexible cloud-native storage. > Please see [FLIP-427|https://cwiki.apache.org/confluence/x/T4p3EQ] for more details. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36221) Add specification about CAST ... AS ... built-in functions
[ https://issues.apache.org/jira/browse/FLINK-36221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36221: --- Labels: pull-request-available (was: ) > Add specification about CAST ... AS ... built-in functions > -- > > Key: FLINK-36221 > URL: https://issues.apache.org/jira/browse/FLINK-36221 > Project: Flink > Issue Type: Sub-task >Reporter: yux >Priority: Minor > Labels: pull-request-available > Fix For: cdc-3.2.0 > > > FLINK-34877 adds the CAST ... AS ... syntax in transform expressions, but there's no corresponding documentation yet. -- This message was sent by Atlassian Jira (v8.20.10#820010)
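For context, the syntax added by FLINK-34877 appears in pipeline transform rules roughly like the following; the table and column names are made up, and the exact YAML shape should be checked against the CDC transform documentation this issue asks for:

```yaml
transform:
  - source-table: mydb.orders
    projection: id, order_status, CAST(price AS DECIMAL(10, 2)) AS price_decimal
```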
[jira] [Updated] (FLINK-36217) Remove powermock usage
[ https://issues.apache.org/jira/browse/FLINK-36217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36217: --- Labels: pull-request-available (was: ) > Remove powermock usage > -- > > Key: FLINK-36217 > URL: https://issues.apache.org/jira/browse/FLINK-36217 > Project: Flink > Issue Type: Technical Debt > Components: Tests >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available > > Most of the tests have either been moved to a different repo, like the connectors, or rewritten in a PowerMock-free way. > PowerMock itself has become unmaintained: the latest release was in 2020 (https://github.com/powermock/powermock/releases/tag/powermock-2.0.9) and the latest commit was 2 years ago (https://github.com/powermock/powermock). > There is also no JUnit 5 support: the request to support it, and even a PR from the JUnit 5 maintainers, has been ready for review since Feb 2023 (https://github.com/powermock/powermock/pull/1146), with still no feedback from the maintainers... -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-36206) Support flink-connector-aws-base to allow custom override configuration
[ https://issues.apache.org/jira/browse/FLINK-36206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36206: --- Labels: pull-request-available (was: ) > Support flink-connector-aws-base to allow custom override configuration > --- > > Key: FLINK-36206 > URL: https://issues.apache.org/jira/browse/FLINK-36206 > Project: Flink > Issue Type: Improvement > Components: Connectors / DynamoDB, Connectors / Kinesis >Reporter: Abhi Gupta >Priority: Major > Labels: pull-request-available > > flink-connector-aws-base, in the file > flink-connector-aws-base/src/main/java/org/apache/flink/connector/aws/util/AWSClientUtil.java, > sets the client override configuration to the default value even if the customer supplies a custom override config. We should fix this behaviour to support custom override configurations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
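A minimal sketch of the intended behaviour change, with simplified stand-in types rather than the AWS SDK's actual ClientOverrideConfiguration API:

```java
// Sketch: prefer the caller-supplied override configuration and fall back to
// the default only when none is given.
public class OverrideConfigSketch {
    // Current (buggy) behaviour as described in the issue: the caller's
    // config is ignored and the default is always applied.
    static String resolveBuggy(String userSupplied, String defaultConfig) {
        return defaultConfig;
    }

    // Intended behaviour: honor the caller-supplied override configuration,
    // falling back to the default only when none is given.
    static String resolveFixed(String userSupplied, String defaultConfig) {
        return userSupplied != null ? userSupplied : defaultConfig;
    }
}
```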
[jira] [Updated] (FLINK-36208) use ThreadLocalRandom in AbstractID
[ https://issues.apache.org/jira/browse/FLINK-36208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36208: --- Labels: pull-request-available (was: ) > use ThreadLocalRandom in AbstractID > --- > > Key: FLINK-36208 > URL: https://issues.apache.org/jira/browse/FLINK-36208 > Project: Flink > Issue Type: Improvement > Components: API / Core >Reporter: Sean Sullivan >Priority: Minor > Labels: pull-request-available > > Flink AbstractID currently uses a static instance of java.util.Random > > Consider using java.util.concurrent.ThreadLocalRandom for improved > performance. -- This message was sent by Atlassian Jira (v8.20.10#820010)
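The proposed change can be sketched as follows; newIdParts is a simplified illustration of how an AbstractID-style class draws its two random longs, not the actual Flink code:

```java
// A shared java.util.Random serializes all callers on one AtomicLong seed,
// while ThreadLocalRandom.current() gives each thread its own generator with
// no contention.
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

public class IdRandomSketch {
    // Before: one contended instance shared by every thread creating IDs.
    private static final Random SHARED = new Random();

    static long contendedRandomLong() {
        return SHARED.nextLong();
    }

    // After: no shared state; each thread uses its own generator.
    static long uncontendedRandomLong() {
        return ThreadLocalRandom.current().nextLong();
    }

    // Two random longs form the 128-bit payload of an AbstractID-style ID.
    static long[] newIdParts() {
        return new long[] { uncontendedRandomLong(), uncontendedRandomLong() };
    }
}
```

Note that ThreadLocalRandom must never be stored in a field; always call current() at the point of use so each thread gets its own instance.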
[jira] [Updated] (FLINK-36207) Disabling japicmp plugin for deprecated APIs
[ https://issues.apache.org/jira/browse/FLINK-36207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36207: --- Labels: pull-request-available (was: ) > Disabling japicmp plugin for deprecated APIs > > > Key: FLINK-36207 > URL: https://issues.apache.org/jira/browse/FLINK-36207 > Project: Flink > Issue Type: Improvement > Components: Build System >Affects Versions: 2.0.0 >Reporter: Matthias Pohl >Assignee: Matthias Pohl >Priority: Major > Labels: pull-request-available > > The Apache Flink 2.0 release allows for the removal of public APIs. The japicmp plugin usually checks for these kinds of changes. To avoid adding explicit excludes for each change, this Jira issue suggests disabling the API check for APIs that are marked as deprecated. -- This message was sent by Atlassian Jira (v8.20.10#820010)
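One way this could look is an annotation-based exclude in the japicmp-maven-plugin configuration; this is a hedged sketch, and the exact configuration used in Flink's parent pom may differ:

```xml
<plugin>
  <groupId>com.github.siom79.japicmp</groupId>
  <artifactId>japicmp-maven-plugin</artifactId>
  <configuration>
    <parameter>
      <excludes>
        <!-- skip compatibility checks for anything carrying @Deprecated -->
        <exclude>@java.lang.Deprecated</exclude>
      </excludes>
    </parameter>
  </configuration>
</plugin>
```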
[jira] [Updated] (FLINK-36201) StateLocalitySlotAssigner should be only used when local recovery is enabled for Adaptive Scheduler
[ https://issues.apache.org/jira/browse/FLINK-36201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-36201: --- Labels: pull-request-available (was: ) > StateLocalitySlotAssigner should be only used when local recovery is enabled > for Adaptive Scheduler > --- > > Key: FLINK-36201 > URL: https://issues.apache.org/jira/browse/FLINK-36201 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Reporter: Rui Fan >Priority: Major > Labels: pull-request-available > > SlotSharingSlotAllocator creates the StateLocalitySlotAssigner[1] instead of DefaultSlotAssigner whenever a failover happens. > I'm curious why we use StateLocalitySlotAssigner when local recovery is disabled. > As I understand it, local recovery doesn't take effect if Flink doesn't back up state on the TM local disk. So StateLocalitySlotAssigner should only be used when local recovery is enabled. > > [1] > [https://github.com/apache/flink/blob/c869326d089705475481c2c2ea42a6efabb8c828/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/allocator/SlotSharingSlotAllocator.java#L136] -- This message was sent by Atlassian Jira (v8.20.10#820010)
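The condition this issue argues for can be sketched as follows; the enum and method are simplified stand-ins for the real SlotAssigner implementations, not the SlotSharingSlotAllocator code:

```java
// Sketch of the proposed selection rule: the state-locality-aware assigner is
// only worth using when TaskManagers actually keep a local copy of the state.
public class AssignerChoiceSketch {
    enum Assigner { DEFAULT, STATE_LOCALITY }

    // Current behaviour picks STATE_LOCALITY on every failover; the proposal
    // is to also require local recovery, since without a local state copy
    // there is no locality to exploit.
    static Assigner chooseAssigner(boolean isFailover, boolean localRecoveryEnabled) {
        return (isFailover && localRecoveryEnabled)
                ? Assigner.STATE_LOCALITY
                : Assigner.DEFAULT;
    }
}
```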