[jira] [Updated] (FLINK-35472) Improve tests for Elasticsearch 8 connector
[ https://issues.apache.org/jira/browse/FLINK-35472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-35472: -- Affects Version/s: elasticsearch-3.2.0 > Improve tests for Elasticsearch 8 connector > --- > > Key: FLINK-35472 > URL: https://issues.apache.org/jira/browse/FLINK-35472 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch, Tests >Affects Versions: elasticsearch-3.2.0 >Reporter: Mingliang Liu >Priority: Major > Labels: pull-request-available > > Per discussion in [this > PR|https://github.com/apache/flink-connector-elasticsearch/pull/104], the > tests become more reusable if we use parameterized tests. This requires some > changes to the existing tests, which include: > # Make the base test class parameterized with a secure parameter. As JUnit 5 has > limited support for parameterized tests with inheritance, we can use the > {{ParameterizedTestExtension}} introduced in Flink, see this doc > # Manage the test container lifecycle instead of using the managed annotations > {{@Testcontainers}} and {{@Container}} so that the test containers can be > used as a singleton for all tests in the suite > # Create and use common methods in the base class so that concrete test classes > can be mostly parameter-agnostic > This JIRA does not intend to change any logic or functionality. Instead it > focuses on refactoring the tests for reusability and future-proofing. -- This message was sent by Atlassian Jira (v8.20.10#820010)
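The singleton container lifecycle described in item 2 above can be sketched in plain Java. This is a hedged illustration with hypothetical names: the real tests would hold a Testcontainers Elasticsearch container rather than this stand-in, but the lazy-holder pattern for sharing one instance across the whole suite is the same.

```java
// Sketch only: FakeContainer stands in for a real Testcontainers container.
import java.util.concurrent.atomic.AtomicInteger;

public class SingletonContainerSketch {
    // Stand-in for an Elasticsearch test container; counts how often it starts.
    static final class FakeContainer {
        static final AtomicInteger STARTS = new AtomicInteger();
        private boolean running;
        void start() { running = true; STARTS.incrementAndGet(); }
        boolean isRunning() { return running; }
    }

    // Lazy holder: the container starts once, on first use, and is shared by
    // every test class in the suite (a JVM shutdown hook could stop it).
    private static final class Holder {
        static final FakeContainer CONTAINER = new FakeContainer();
        static { CONTAINER.start(); }
    }

    static FakeContainer sharedContainer() { return Holder.CONTAINER; }

    public static void main(String[] args) {
        FakeContainer a = sharedContainer();  // "test class 1"
        FakeContainer b = sharedContainer();  // "test class 2"
        System.out.println(a == b);                     // one shared instance
        System.out.println(FakeContainer.STARTS.get()); // started exactly once
    }
}
```

Unlike the {{@Testcontainers}}/{{@Container}} annotations, which tie the container lifecycle to a single test class, nothing here restarts the container between test classes.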
[jira] [Commented] (FLINK-34369) Elasticsearch connector supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849908#comment-17849908 ] Mingliang Liu commented on FLINK-34369: --- Can we close this now? > Elasticsearch connector supports SSL context > > > Key: FLINK-34369 > URL: https://issues.apache.org/jira/browse/FLINK-34369 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Labels: pull-request-available > > The current Flink ElasticSearch connector does not support SSL option, > causing issues connecting to secure ES clusters. > As SSLContext is not serializable and possibly environment aware, we can add > a (serializable) provider of SSL context to the {{NetworkClientConfig}}. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35472) Improve tests for Elasticsearch 8 connector
Mingliang Liu created FLINK-35472: - Summary: Improve tests for Elasticsearch 8 connector Key: FLINK-35472 URL: https://issues.apache.org/jira/browse/FLINK-35472 Project: Flink Issue Type: Improvement Components: Connectors / ElasticSearch, Tests Reporter: Mingliang Liu Per discussion in [this PR|https://github.com/apache/flink-connector-elasticsearch/pull/104], the tests become more reusable if we use parameterized tests. This requires some changes to the existing tests, which include: # Make the base test class parameterized with a secure parameter. As JUnit 5 has limited support for parameterized tests with inheritance, we can use the {{ParameterizedTestExtension}} introduced in Flink, see this doc # Manage the test container lifecycle instead of using the managed annotations {{@Testcontainers}} and {{@Container}} so that the test containers can be used as a singleton for all tests in the suite # Create and use common methods in the base class so that concrete test classes can be mostly parameter-agnostic This JIRA does not intend to change any logic or functionality. Instead it focuses on refactoring the tests for reusability and future-proofing. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35424) Elasticsearch connector 8 supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-35424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-35424: -- Parent: FLINK-34369 Issue Type: Sub-task (was: Improvement) > Elasticsearch connector 8 supports SSL context > -- > > Key: FLINK-35424 > URL: https://issues.apache.org/jira/browse/FLINK-35424 > Project: Flink > Issue Type: Sub-task > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Labels: pull-request-available > > In FLINK-34369, we added SSL support for the base Elasticsearch sink class > that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector > uses the AsyncSink API and does not use the aforementioned base sink > class. It needs a separate change to support this feature. > This is especially important for Elasticsearch 8, which is secure by > default. Meanwhile, it is worth adding integration tests for this SSL > context support. -- This message was sent by Atlassian Jira (v8.20.10#820010)
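A minimal sketch of the serializable SSL-context-provider idea from FLINK-34369, which the Elasticsearch 8 connector could adopt as well. All names here are hypothetical, not the connector's actual API: the key point is that {{SSLContext}} itself is never serialized; only a serializable factory travels with the sink configuration and builds the context on the worker side.

```java
// Hedged sketch: round-trips a provider through Java serialization, as Flink
// would when shipping a sink configuration to task managers.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import javax.net.ssl.SSLContext;

public class SslContextProviderSketch {
    /** Serializable factory for an SSLContext (hypothetical name). */
    public interface SSLContextProvider extends Serializable {
        SSLContext get() throws Exception;
    }

    // Serialize and deserialize a value, like Flink distributing sink config.
    static <T extends Serializable> T roundTrip(T value) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(value);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            @SuppressWarnings("unchecked")
            T copy = (T) in.readObject();
            return copy;
        }
    }

    static boolean demo() {
        try {
            // The provider is a serializable method reference; the SSLContext
            // is created lazily on the deserialized side.
            SSLContextProvider provider = SSLContext::getDefault;
            SSLContextProvider shipped = roundTrip(provider);
            return shipped.get() != null;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

In a real deployment the provider would build the context from a truststore available on the cluster nodes rather than calling {{SSLContext#getDefault}}.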
[jira] [Updated] (FLINK-35424) Elasticsearch connector 8 supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-35424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-35424: -- Description: In FLINK-34369, we added SSL support for the base Elasticsearch sink class that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector uses the AsyncSink API and does not use the aforementioned base sink class. It needs a separate change to support this feature. This is especially important for Elasticsearch 8, which is secure by default. Meanwhile, it is worth adding integration tests for this SSL context support. was:In FLINK-34369, we added SSL support for the base Elasticsearch sink class that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector is using the AsyncSink API and need separate change to support this feature. This is specially important to Elasticsearch 8 which enables secure by default. Meanwhile, it merits if we add integration tests for this SSL context support. > Elasticsearch connector 8 supports SSL context > -- > > Key: FLINK-35424 > URL: https://issues.apache.org/jira/browse/FLINK-35424 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Labels: pull-request-available > > In FLINK-34369, we added SSL support for the base Elasticsearch sink class > that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector > uses the AsyncSink API and does not use the aforementioned base sink > class. It needs a separate change to support this feature. > This is especially important for Elasticsearch 8, which is secure by > default. Meanwhile, it is worth adding integration tests for this SSL > context support. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35424) Elasticsearch connector 8 supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-35424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-35424: -- Description: In FLINK-34369, we added SSL support for the base Elasticsearch sink class that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector is using the AsyncSink API and need separate change to support this feature. This is specially important to Elasticsearch 8 which enables secure by default. Meanwhile, it merits if we add integration tests for this SSL context support. (was: In ) > Elasticsearch connector 8 supports SSL context > -- > > Key: FLINK-35424 > URL: https://issues.apache.org/jira/browse/FLINK-35424 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Labels: pull-request-available > > In FLINK-34369, we added SSL support for the base Elasticsearch sink class > that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector is > using the AsyncSink API and need separate change to support this feature. > This is specially important to Elasticsearch 8 which enables secure by > default. Meanwhile, it merits if we add integration tests for this SSL > context support. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35424) Elasticsearch connector 8 supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-35424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-35424: -- Description: In (was: The current Flink ElasticSearch connector does not support SSL option, causing issues connecting to secure ES clusters. As SSLContext is not serializable and possibly environment aware, we can add a (serializable) provider of SSL context to the {{NetworkClientConfig}}.) > Elasticsearch connector 8 supports SSL context > -- > > Key: FLINK-35424 > URL: https://issues.apache.org/jira/browse/FLINK-35424 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Labels: pull-request-available > > In -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35424) Elasticsearch connector 8 supports SSL context
Mingliang Liu created FLINK-35424: - Summary: Elasticsearch connector 8 supports SSL context Key: FLINK-35424 URL: https://issues.apache.org/jira/browse/FLINK-35424 Project: Flink Issue Type: Improvement Components: Connectors / ElasticSearch Affects Versions: 1.17.1 Reporter: Mingliang Liu Assignee: Mingliang Liu The current Flink ElasticSearch connector does not support an SSL option, causing issues when connecting to secure ES clusters. As SSLContext is not serializable and possibly environment-aware, we can add a (serializable) provider of the SSL context to the {{NetworkClientConfig}}. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-27054) Elasticsearch SQL connector SSL issue
[ https://issues.apache.org/jira/browse/FLINK-27054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843288#comment-17843288 ] Mingliang Liu commented on FLINK-27054: --- Hi, FLINK-34369 was merged and I can use the same approach to support the SQL connector. I have a draft PR that shows the idea. Please assign it to me if no one is actively working on this. I may need help with review and integration testing. https://github.com/apache/flink-connector-elasticsearch/compare/main...liuml07:flink-connector-elasticsearch:table > Elasticsearch SQL connector SSL issue > - > > Key: FLINK-27054 > URL: https://issues.apache.org/jira/browse/FLINK-27054 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch >Reporter: ricardo >Assignee: Kelu Tao >Priority: Major > > The current Flink ElasticSearch SQL connector > https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/ > is missing SSL options, can't connect to ES clusters which require SSL > certificate. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35287) Builder builds NetworkConfig for Elasticsearch connector 8
Mingliang Liu created FLINK-35287: - Summary: Builder builds NetworkConfig for Elasticsearch connector 8 Key: FLINK-35287 URL: https://issues.apache.org/jira/browse/FLINK-35287 Project: Flink Issue Type: Improvement Components: Connectors / ElasticSearch Reporter: Mingliang Liu In FLINK-26088 we added support for ElasticSearch 8.0. It is based on the Async Sink API and does not use the base module {{flink-connector-elasticsearch-base}}. Regarding the config options (host, username, password, headers, ssl...), we pass all options from the builder to AsyncSink, and finally to AsyncWriter. This is less flexible when we add new options: the constructors get longer and multiple places may validate options unnecessarily. I think it's nicer if the sink builder builds the NetworkConfig once and passes it all the way to the writer. This is also how the base module for 6.x / 7.x is implemented. In my recent work adding new options to the network config, this approach proved simpler. Let me create a PR to demonstrate the idea. No new features or major code refactoring other than having the builder build the NetworkConfig (the code will be shorter). I have a few small fixes which I'll include in the incoming PR. -- This message was sent by Atlassian Jira (v8.20.10#820010)
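A simplified sketch of the proposed flow, with hypothetical names rather than the connector's real classes: the builder validates the options exactly once and builds a single immutable NetworkConfig, which the sink and writer merely carry instead of re-receiving (and re-validating) each option through ever-longer constructors.

```java
// Hedged sketch, not the connector's actual code.
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NetworkConfigSketch {
    static final class NetworkConfig implements Serializable {
        final List<String> hosts;
        final String username;   // may be null
        final String password;   // may be null
        NetworkConfig(List<String> hosts, String username, String password) {
            if (hosts == null || hosts.isEmpty()) {
                throw new IllegalArgumentException("at least one host required");
            }
            this.hosts = List.copyOf(hosts);
            this.username = username;
            this.password = password;
        }
    }

    static final class SinkBuilder {
        private final List<String> hosts = new ArrayList<>();
        private String username, password;
        SinkBuilder setHosts(String... h) { hosts.addAll(Arrays.asList(h)); return this; }
        SinkBuilder setCredentials(String u, String p) { username = u; password = p; return this; }
        // Validation happens exactly once, here.
        Sink build() { return new Sink(new NetworkConfig(hosts, username, password)); }
    }

    static final class Sink {
        final NetworkConfig config;
        Sink(NetworkConfig config) { this.config = config; }
        Writer createWriter() { return new Writer(config); } // no re-validation
    }

    static final class Writer {
        final NetworkConfig config;
        Writer(NetworkConfig config) { this.config = config; }
    }

    // Invalid configs fail at build() time, before any sink or writer exists.
    static boolean rejectsEmptyHosts() {
        try { new SinkBuilder().build(); return false; }
        catch (IllegalArgumentException e) { return true; }
    }

    public static void main(String[] args) {
        Sink sink = new SinkBuilder()
                .setHosts("https://localhost:9200")
                .setCredentials("elastic", "secret")
                .build();
        System.out.println(sink.createWriter().config.hosts);
    }
}
```

Adding a new option then touches only the builder and NetworkConfig, not every constructor in the sink/writer chain.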
[jira] [Created] (FLINK-35221) Support SQL 2011 reserved keywords as identifiers in Flink HiveParser
Wencong Liu created FLINK-35221: --- Summary: Support SQL 2011 reserved keywords as identifiers in Flink HiveParser Key: FLINK-35221 URL: https://issues.apache.org/jira/browse/FLINK-35221 Project: Flink Issue Type: Improvement Components: Connectors / Hive Affects Versions: 1.20.0 Reporter: Wencong Liu According to Hive user documentation[1], starting from version 0.13.0, Hive prohibits the use of reserved keywords as identifiers. Moreover, versions 2.1.0 and earlier allow using SQL11 reserved keywords as identifiers by setting {{hive.support.sql11.reserved.keywords=false}} in hive-site.xml. This compatibility feature facilitates jobs that utilize keywords as identifiers. HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to treat SQL11 reserved keywords as identifiers. This poses a challenge for users migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios where keywords are used as identifiers. Addressing this issue is necessary to support such cases. -- This message was sent by Atlassian Jira (v8.20.10#820010)
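For reference, the compatibility toggle mentioned above is set in hive-site.xml like this (valid only for Hive 2.1.0 and earlier, where the option still existed; Flink's HiveParser is based on Hive 2.3.9, which no longer has it):

```xml
<!-- hive-site.xml, Hive <= 2.1.0 only -->
<property>
  <name>hive.support.sql11.reserved.keywords</name>
  <value>false</value>
  <description>Allow SQL11 reserved keywords to be used as identifiers.</description>
</property>
```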
[jira] [Updated] (FLINK-35221) Support SQL 2011 reserved keywords as identifiers in Flink HiveParser
[ https://issues.apache.org/jira/browse/FLINK-35221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-35221: Description: According to Hive user documentation[1], starting from version 0.13.0, Hive prohibits the use of reserved keywords as identifiers. Moreover, versions 2.1.0 and earlier allow using SQL11 reserved keywords as identifiers by setting {{hive.support.sql11.reserved.keywords=false}} in hive-site.xml. This compatibility feature facilitates jobs that utilize keywords as identifiers. HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to treat SQL11 reserved keywords as identifiers. This poses a challenge for users migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios where keywords are used as identifiers. Addressing this issue is necessary to support such cases. [1] [LanguageManual DDL - Apache Hive - Apache Software Foundation|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL] was: According to Hive user documentation[1], starting from version 0.13.0, Hive prohibits the use of reserved keywords as identifiers. Moreover, versions 2.1.0 and earlier allow using SQL11 reserved keywords as identifiers by setting {{hive.support.sql11.reserved.keywords=false}} in hive-site.xml. This compatibility feature facilitates jobs that utilize keywords as identifiers. HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to treat SQL11 reserved keywords as identifiers. This poses a challenge for users migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios where keywords are used as identifiers. Addressing this issue is necessary to support such cases. 
> Support SQL 2011 reserved keywords as identifiers in Flink HiveParser > -- > > Key: FLINK-35221 > URL: https://issues.apache.org/jira/browse/FLINK-35221 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive >Affects Versions: 1.20.0 >Reporter: Wencong Liu >Priority: Major > > According to Hive user documentation[1], starting from version 0.13.0, Hive > prohibits the use of reserved keywords as identifiers. Moreover, versions > 2.1.0 and earlier allow using SQL11 reserved keywords as identifiers by > setting {{hive.support.sql11.reserved.keywords=false}} in hive-site.xml. This > compatibility feature facilitates jobs that utilize keywords as identifiers. > HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to treat > SQL11 reserved keywords as identifiers. This poses a challenge for users > migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios > where keywords are used as identifiers. Addressing this issue is necessary to > support such cases. > [1] [LanguageManual DDL - Apache Hive - Apache Software > Foundation|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35148) Improve InstantiationUtil for checking nullary public constructor
Mingliang Liu created FLINK-35148: - Summary: Improve InstantiationUtil for checking nullary public constructor Key: FLINK-35148 URL: https://issues.apache.org/jira/browse/FLINK-35148 Project: Flink Issue Type: Improvement Components: API / Core Affects Versions: 1.18.1, 1.19.0 Reporter: Mingliang Liu {{InstantiationUtil#hasPublicNullaryConstructor}} checks whether the given class has a public nullary constructor. The implementation can be improved a bit: the {{Modifier#isPublic}} check within the for-loop can be skipped since {{Class#getConstructors()}} only returns public constructors. We can also add a negative unit test for this. -- This message was sent by Atlassian Jira (v8.20.10#820010)
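A sketch of the point (not Flink's actual implementation): since {{Class#getConstructors()}} returns public constructors only, checking the parameter count alone is sufficient, and no {{Modifier#isPublic}} test is needed in the loop.

```java
// Hedged sketch of the simplified check, with a positive and a negative case.
import java.lang.reflect.Constructor;

public class NullaryConstructorCheck {
    static boolean hasPublicNullaryConstructor(Class<?> clazz) {
        // getConstructors() already filters to public constructors.
        for (Constructor<?> ctor : clazz.getConstructors()) {
            if (ctor.getParameterCount() == 0) {
                return true;
            }
        }
        return false;
    }

    public static final class WithDefault { }   // implicit public nullary ctor

    public static final class WithoutPublic {
        private WithoutPublic() { }             // only a private ctor
    }

    public static void main(String[] args) {
        System.out.println(hasPublicNullaryConstructor(WithDefault.class));   // true
        System.out.println(hasPublicNullaryConstructor(WithoutPublic.class)); // false
    }
}
```

The {{WithoutPublic}} case is exactly the kind of negative unit test the issue suggests adding.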
[jira] [Commented] (FLINK-34718) KeyedPartitionWindowedStream and NonPartitionWindowedStream IllegalStateException in AZP
[ https://issues.apache.org/jira/browse/FLINK-34718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828019#comment-17828019 ] Wencong Liu commented on FLINK-34718: - The newly introduced DataStream operators are designed based on the mechanism of FLIP-331, which means that the ResultPartitionType for specific operators in a streaming job can be BLOCKING. However, the AdaptiveScheduler mandates that the ResultPartitionType for all operators must be PIPELINED; therefore, these operators are not suitable for execution under the AdaptiveScheduler. The default scheduler for IT tests is the {_}DefaultScheduler{_}, and I'm curious as to why it would change to the AdaptiveScheduler. [~rskraba] > KeyedPartitionWindowedStream and NonPartitionWindowedStream > IllegalStateException in AZP > > > Key: FLINK-34718 > URL: https://issues.apache.org/jira/browse/FLINK-34718 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.20.0 >Reporter: Ryan Skraba >Priority: Critical > Labels: test-stability > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58320=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=9646] > 18 of the KeyedPartitionWindowedStreamITCase and > NonKeyedPartitionWindowedStreamITCase unit tests introduced in FLINK-34543 > are failing in the adaptive scheduler profile, with errors similar to: > {code:java} > Mar 15 01:54:12 Caused by: java.lang.IllegalStateException: The adaptive > scheduler supports pipelined data exchanges (violated by MapPartition > (org.apache.flink.streaming.runtime.tasks.OneInputStreamTask) -> > ddb598ad156ed281023ba4eebbe487e3). 
> Mar 15 01:54:12 at > org.apache.flink.util.Preconditions.checkState(Preconditions.java:215) > Mar 15 01:54:12 at > org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.assertPreconditions(AdaptiveScheduler.java:438) > Mar 15 01:54:12 at > org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.<init>(AdaptiveScheduler.java:356) > Mar 15 01:54:12 at > org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerFactory.createInstance(AdaptiveSchedulerFactory.java:124) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:384) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:361) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100) > Mar 15 01:54:12 at > org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) > Mar 15 01:54:12 at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > Mar 15 01:54:12 ... 4 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34718) KeyedPartitionWindowedStream and NonPartitionWindowedStream IllegalStateException in AZP
[ https://issues.apache.org/jira/browse/FLINK-34718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828011#comment-17828011 ] Wencong Liu commented on FLINK-34718: - Sure, I'll take a look now. [~mapohl] > KeyedPartitionWindowedStream and NonPartitionWindowedStream > IllegalStateException in AZP > > > Key: FLINK-34718 > URL: https://issues.apache.org/jira/browse/FLINK-34718 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.20.0 >Reporter: Ryan Skraba >Priority: Critical > Labels: test-stability > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58320=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=9646] > 18 of the KeyedPartitionWindowedStreamITCase and > NonKeyedPartitionWindowedStreamITCase unit tests introduced in FLINK-34543 > are failing in the adaptive scheduler profile, with errors similar to: > {code:java} > Mar 15 01:54:12 Caused by: java.lang.IllegalStateException: The adaptive > scheduler supports pipelined data exchanges (violated by MapPartition > (org.apache.flink.streaming.runtime.tasks.OneInputStreamTask) -> > ddb598ad156ed281023ba4eebbe487e3). 
> Mar 15 01:54:12 at > org.apache.flink.util.Preconditions.checkState(Preconditions.java:215) > Mar 15 01:54:12 at > org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.assertPreconditions(AdaptiveScheduler.java:438) > Mar 15 01:54:12 at > org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.<init>(AdaptiveScheduler.java:356) > Mar 15 01:54:12 at > org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerFactory.createInstance(AdaptiveSchedulerFactory.java:124) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:384) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:361) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128) > Mar 15 01:54:12 at > org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100) > Mar 15 01:54:12 at > org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) > Mar 15 01:54:12 at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) > Mar 15 01:54:12 ... 4 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream
[ https://issues.apache.org/jira/browse/FLINK-34543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-34543: Description: Introduce the PartitionWindowedStream and provide multiple full window operations in it. The related motivation and design can be found in [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream]. was: 1. Introduce MapParititon, SortPartition, Aggregate, Reduce API in DataStream. 2. Introduce SortPartition API in KeyedStream. The related motivation and design can be found in [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream]. > Support Full Partition Processing On Non-keyed DataStream > - > > Key: FLINK-34543 > URL: https://issues.apache.org/jira/browse/FLINK-34543 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.20.0 >Reporter: Wencong Liu >Priority: Major > Labels: pull-request-available > Fix For: 1.20.0 > > > Introduce the PartitionWindowedStream and provide multiple full window > operations in it. > The related motivation and design can be found in > [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34632) Log checkpoint Id when logging checkpoint processing delay
[ https://issues.apache.org/jira/browse/FLINK-34632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824890#comment-17824890 ] Mingliang Liu commented on FLINK-34632: --- CC [~markcho] > Log checkpoint Id when logging checkpoint processing delay > -- > > Key: FLINK-34632 > URL: https://issues.apache.org/jira/browse/FLINK-34632 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing >Affects Versions: 1.18.1 >Reporter: Mingliang Liu >Priority: Minor > Labels: pull-request-available > > Currently we log a warning message when the checkpoint barrier takes too long > to start processing. The message includes the delay; debugging the > respective checkpoint would be easier if the checkpoint id were also logged. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-34632) Log checkpoint Id when logging checkpoint processing delay
Mingliang Liu created FLINK-34632: - Summary: Log checkpoint Id when logging checkpoint processing delay Key: FLINK-34632 URL: https://issues.apache.org/jira/browse/FLINK-34632 Project: Flink Issue Type: Improvement Components: Runtime / Checkpointing Affects Versions: 1.18.1 Reporter: Mingliang Liu Currently we log a warning message when the checkpoint barrier takes too long to start processing. The message includes the delay; debugging the respective checkpoint would be easier if the checkpoint id were also logged. -- This message was sent by Atlassian Jira (v8.20.10#820010)
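A sketch of the kind of message the improvement aims for. The wording below is illustrative only, not Flink's actual log line: the point is simply that the checkpoint id appears next to the delay so the warning can be correlated with a specific checkpoint.

```java
// Hedged sketch: builds the improved warning text with the checkpoint id.
public class CheckpointDelayLogSketch {
    static String format(long checkpointId, long delayMillis) {
        return String.format(
                "Checkpoint %d: time from receiving all barriers to starting "
                        + "processing exceeded threshold: %d ms",
                checkpointId, delayMillis);
    }

    public static void main(String[] args) {
        System.out.println(format(42L, 1500L));
    }
}
```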
[jira] [Updated] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream
[ https://issues.apache.org/jira/browse/FLINK-34543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-34543: Description: 1. Introduce MapPartition, SortPartition, Aggregate, Reduce API in DataStream. 2. Introduce SortPartition API in KeyedStream. The related motivation and design can be found in [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream]. was: 1. Introduce MapPartition, SortPartition, Aggregate, Reduce API in DataStream. 2. Introduce SortPartition API in KeyedStream. The related FLIP can be found in [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream]. > Support Full Partition Processing On Non-keyed DataStream > - > > Key: FLINK-34543 > URL: https://issues.apache.org/jira/browse/FLINK-34543 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.20.0 >Reporter: Wencong Liu >Priority: Major > Labels: pull-request-available > Fix For: 1.20.0 > > > 1. Introduce MapPartition, SortPartition, Aggregate, Reduce API in DataStream. > 2. Introduce SortPartition API in KeyedStream. > The related motivation and design can be found in > [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream
Wencong Liu created FLINK-34543: --- Summary: Support Full Partition Processing On Non-keyed DataStream Key: FLINK-34543 URL: https://issues.apache.org/jira/browse/FLINK-34543 Project: Flink Issue Type: Improvement Components: API / DataStream Affects Versions: 1.20.0 Reporter: Wencong Liu Fix For: 1.20.0 1. Introduce MapPartition, SortPartition, Aggregate, Reduce API in DataStream. 2. Introduce SortPartition API in KeyedStream. The related FLIP can be found in [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34448) ChangelogLocalRecoveryITCase#testRestartTM failed fatally with 127 exit code
[ https://issues.apache.org/jira/browse/FLINK-34448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818186#comment-17818186 ] Wencong Liu commented on FLINK-34448: - Maybe [~Yanfei Lei] could take a look . > ChangelogLocalRecoveryITCase#testRestartTM failed fatally with 127 exit code > > > Key: FLINK-34448 > URL: https://issues.apache.org/jira/browse/FLINK-34448 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.20.0 >Reporter: Matthias Pohl >Priority: Critical > Labels: test-stability > Attachments: FLINK-34448.head.log.gz > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57550=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=b78d9d30-509a-5cea-1fef-db7abaa325ae=8897 > \ > {code} > Feb 16 02:43:47 02:43:47.142 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:3.2.2:test (integration-tests) > on project flink-tests: > Feb 16 02:43:47 02:43:47.142 [ERROR] > Feb 16 02:43:47 02:43:47.142 [ERROR] Please refer to > /__w/1/s/flink-tests/target/surefire-reports for the individual test results. > Feb 16 02:43:47 02:43:47.142 [ERROR] Please refer to dump files (if any > exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. > Feb 16 02:43:47 02:43:47.142 [ERROR] ExecutionException The forked VM > terminated without properly saying goodbye. VM crash or System.exit called? 
> Feb 16 02:43:47 02:43:47.142 [ERROR] Command was /bin/sh -c cd > '/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-11.0.19+7/bin/java' > '-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' > '--add-opens=java.base/java.util=ALL-UNNAMED' > '--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' > '/__w/1/s/flink-tests/target/surefire/surefirebooter-20240216015747138_560.jar' > '/__w/1/s/flink-tests/target/surefire' '2024-02-16T01-57-43_286-jvmRun4' > 'surefire-20240216015747138_558tmp' 'surefire_185-20240216015747138_559tmp' > Feb 16 02:43:47 02:43:47.142 [ERROR] Error occurred in starting fork, check > output in log > Feb 16 02:43:47 02:43:47.142 [ERROR] Process Exit Code: 127 > Feb 16 02:43:47 02:43:47.142 [ERROR] Crashed tests: > Feb 16 02:43:47 02:43:47.142 [ERROR] > org.apache.flink.test.checkpointing.ChangelogLocalRecoveryITCase > Feb 16 02:43:47 02:43:47.142 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? 
> Feb 16 02:43:47 02:43:47.142 [ERROR] Command was /bin/sh -c cd > '/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-11.0.19+7/bin/java' > '-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' > '--add-opens=java.base/java.util=ALL-UNNAMED' > '--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' > '/__w/1/s/flink-tests/target/surefire/surefirebooter-20240216015747138_560.jar' > '/__w/1/s/flink-tests/target/surefire' '2024-02-16T01-57-43_286-jvmRun4' > 'surefire-20240216015747138_558tmp' 'surefire_185-20240216015747138_559tmp' > Feb 16 02:43:47 02:43:47.142 [ERROR] Error occurred in starting fork, check > output in log > Feb 16 02:43:47 02:43:47.142 [ERROR] Process Exit Code: 127 > Feb 16 02:43:47 02:43:47.142 [ERROR] Crashed tests: > Feb 16 02:43:47 02:43:47.142 [ERROR] > org.apache.flink.test.checkpointing.ChangelogLocalRecoveryITCase > Feb 16 02:43:47 02:43:47.142 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FLINK-34376) FLINK SQL SUM() causes a precision error
[ https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814583#comment-17814583 ] Fangliang Liu edited comment on FLINK-34376 at 2/6/24 3:24 AM: --- Hi [~matriv], [~twalthr], [~zonli] Related issues: https://issues.apache.org/jira/browse/FLINK-24691 was (Author: liufangliang): Hi [~matriv] ,[~twalthr] , [~zonli] Related issues: https://issues.apache.org/jira/browse/FLINK-24691 > FLINK SQL SUM() causes a precision error > > > Key: FLINK-34376 > URL: https://issues.apache.org/jira/browse/FLINK-34376 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3, 1.18.1 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2024-02-06-11-15-02-669.png, > image-2024-02-06-11-17-03-399.png > > > {code:java} > select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) > {code} > The precision is wrong in the Flink 1.14.3 and master branch > !image-2024-02-06-11-15-02-669.png! > > The accuracy is correct in the Flink 1.13.2 > !image-2024-02-06-11-17-03-399.png! > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34376) FLINK SQL SUM() causes a precision error
[ https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814583#comment-17814583 ] Fangliang Liu commented on FLINK-34376: --- Hi [~matriv] ,[~twalthr] , [~zonli] Related issues: https://issues.apache.org/jira/browse/FLINK-24691 > FLINK SQL SUM() causes a precision error > > > Key: FLINK-34376 > URL: https://issues.apache.org/jira/browse/FLINK-34376 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3, 1.18.1 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2024-02-06-11-15-02-669.png, > image-2024-02-06-11-17-03-399.png > > > {code:java} > select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) > {code} > The precision is wrong in the Flink 1.14.3 and master branch > !image-2024-02-06-11-15-02-669.png! > > The accuracy is correct in the Flink 1.13.2 > !image-2024-02-06-11-17-03-399.png! > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34376) FLINK SQL SUM() causes a precision error
[ https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fangliang Liu updated FLINK-34376: -- Description: {code:java} select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) {code} The precision is wrong in the Flink 1.14.3 and master branch !image-2024-02-06-11-15-02-669.png! The accuracy is correct in the Flink 1.13.2 !image-2024-02-06-11-17-03-399.png! was: {code:java} select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) {code} The precision is wrong in the Flink 1.14.3 and master branch !image-2024-02-06-11-15-02-669.png! The accuracy is correct in the Flink 1.13.2 !image-2024-02-06-11-17-03-399.png! Related issues: https://issues.apache.org/jira/browse/FLINK-24691 > FLINK SQL SUM() causes a precision error > > > Key: FLINK-34376 > URL: https://issues.apache.org/jira/browse/FLINK-34376 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3, 1.18.1 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2024-02-06-11-15-02-669.png, > image-2024-02-06-11-17-03-399.png > > > {code:java} > select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) > {code} > The precision is wrong in the Flink 1.14.3 and master branch > !image-2024-02-06-11-15-02-669.png! > > The accuracy is correct in the Flink 1.13.2 > !image-2024-02-06-11-17-03-399.png! > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34376) FLINK SQL SUM() causes a precision error
[ https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fangliang Liu updated FLINK-34376: -- Description: {code:java} select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) {code} The precision is wrong in the Flink 1.14.3 and master branch !image-2024-02-06-11-15-02-669.png! The accuracy is correct in the Flink 1.13.2 !image-2024-02-06-11-17-03-399.png! Related issues: https://issues.apache.org/jira/browse/FLINK-24691 was: {code:java} select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) {code} The precision is wrong in the version below !image-2024-02-06-11-15-02-669.png! The accuracy is correct in the Flink 1.13.2 !image-2024-02-06-11-17-03-399.png! > FLINK SQL SUM() causes a precision error > > > Key: FLINK-34376 > URL: https://issues.apache.org/jira/browse/FLINK-34376 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3, 1.18.1 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2024-02-06-11-15-02-669.png, > image-2024-02-06-11-17-03-399.png > > > {code:java} > select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) > {code} > The precision is wrong in the Flink 1.14.3 and master branch > !image-2024-02-06-11-15-02-669.png! > > The accuracy is correct in the Flink 1.13.2 > !image-2024-02-06-11-17-03-399.png! > > Related issues: > https://issues.apache.org/jira/browse/FLINK-24691 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34376) FLINK SQL SUM() causes a precision error
[ https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fangliang Liu updated FLINK-34376: -- Description: {code:java} select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) {code} The precision is wrong in the version below !image-2024-02-06-11-15-02-669.png! The accuracy is correct in the Flink 1.13.2 !image-2024-02-06-11-17-03-399.png! was: {code:java} select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) {code} the result in 1.14.3 and master branch is !image-2024-02-06-11-15-02-669.png! > FLINK SQL SUM() causes a precision error > > > Key: FLINK-34376 > URL: https://issues.apache.org/jira/browse/FLINK-34376 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3, 1.18.1 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2024-02-06-11-15-02-669.png, > image-2024-02-06-11-17-03-399.png > > > {code:java} > select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) > {code} > The precision is wrong in the version below > !image-2024-02-06-11-15-02-669.png! > > The accuracy is correct in the Flink 1.13.2 > !image-2024-02-06-11-17-03-399.png! > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34376) FLINK SQL SUM() causes a precision error
[ https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fangliang Liu updated FLINK-34376: -- Attachment: image-2024-02-06-11-17-03-399.png > FLINK SQL SUM() causes a precision error > > > Key: FLINK-34376 > URL: https://issues.apache.org/jira/browse/FLINK-34376 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3, 1.18.1 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2024-02-06-11-15-02-669.png, > image-2024-02-06-11-17-03-399.png > > > {code:java} > select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) > {code} > the result in 1.14.3 and master branch is > !image-2024-02-06-11-15-02-669.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34376) FLINK SQL SUM() causes a precision error
[ https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fangliang Liu updated FLINK-34376: -- Description: {code:java} select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) {code} the result in 1.14.3 and master branch is !image-2024-02-06-11-15-02-669.png! > FLINK SQL SUM() causes a precision error > > > Key: FLINK-34376 > URL: https://issues.apache.org/jira/browse/FLINK-34376 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3, 1.18.1 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2024-02-06-11-15-02-669.png > > > {code:java} > select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) > {code} > the result in 1.14.3 and master branch is > !image-2024-02-06-11-15-02-669.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34376) FLINK SQL SUM() causes a precision error
[ https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fangliang Liu updated FLINK-34376: -- Attachment: image-2024-02-06-11-15-02-669.png > FLINK SQL SUM() causes a precision error > > > Key: FLINK-34376 > URL: https://issues.apache.org/jira/browse/FLINK-34376 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3, 1.18.1 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2024-02-06-11-15-02-669.png > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-34376) FLINK SQL SUM() causes a precision error
Fangliang Liu created FLINK-34376: - Summary: FLINK SQL SUM() causes a precision error Key: FLINK-34376 URL: https://issues.apache.org/jira/browse/FLINK-34376 Project: Flink Issue Type: Bug Components: Table SQL / Runtime Affects Versions: 1.18.1, 1.14.3 Reporter: Fangliang Liu -- This message was sent by Atlassian Jira (v8.20.10#820010)
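The precision drift reported in FLINK-34376 comes from how a planner derives the result type of a decimal multiplication. The sketch below illustrates the commonly used derivation rule (ideal precision p1+p2+1, ideal scale s1+s2, capped at 38 with integer digits given priority). This is an assumption about the kind of rule involved, not a copy of Flink's planner code:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

/**
 * Illustrative sketch of SQL-style DECIMAL result-type derivation for
 * multiplication. The cap formula mirrors a rule used by several engines
 * and is an assumption for illustration, not Flink's actual implementation.
 */
public class DecimalScaleDemo {
    static final int MAX_PRECISION = 38;

    /** Result scale of DECIMAL(p1,s1) * DECIMAL(p2,s2) under the capped rule. */
    static int resultScale(int p1, int s1, int p2, int s2) {
        int precision = p1 + p2 + 1;
        int scale = s1 + s2;
        if (precision > MAX_PRECISION) {
            // Drop fractional digits first, but keep at least min(scale, 6).
            scale = Math.max(MAX_PRECISION - (precision - scale), Math.min(scale, 6));
        }
        return scale;
    }

    public static void main(String[] args) {
        // CAST(9.11 AS DECIMAL(38,18)) * 10, where the literal 10 is DECIMAL(2,0):
        // the ideal type DECIMAL(41,18) overflows, so the scale is cut to 15.
        int scale = resultScale(38, 18, 2, 0);
        BigDecimal result = new BigDecimal("9.11")
                .setScale(18)
                .multiply(BigDecimal.TEN)
                .setScale(scale, RoundingMode.HALF_UP);
        System.out.println(scale);
        System.out.println(result);
    }
}
```

When intermediate SUM results are rescaled like this, a version-to-version change in the cap logic shows up as a different number of fractional digits after CAST(... AS STRING), which may be what the attached screenshots compare.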
[jira] [Updated] (FLINK-34369) Elasticsearch connector supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-34369: -- Description: The current Flink ElasticSearch connector does not support SSL option, causing issues connecting to secure ES clusters. As SSLContext is not serializable and possibly environment aware, we can add a (serializable) provider of SSL context to the {{NetworkClientConfig}}. was: The current Flink ElasticSearch connector does not support SSL option, causing issues connecting to secure ES clusters. As SSLContext is not serializable, and sometimes the context is host / environment aware, we can add a (serializable) provider for providing SSL context to the {{NetworkClientConfig}}. > Elasticsearch connector supports SSL context > > > Key: FLINK-34369 > URL: https://issues.apache.org/jira/browse/FLINK-34369 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Priority: Major > > The current Flink ElasticSearch connector does not support SSL option, > causing issues connecting to secure ES clusters. > As SSLContext is not serializable and possibly environment aware, we can add > a (serializable) provider of SSL context to the {{NetworkClientConfig}}. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34369) Elasticsearch connector supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-34369: -- Description: The current Flink ElasticSearch connector does not support SSL option, causing issues connecting to secure ES clusters. As SSLContext is not serializable, and sometimes the context is host / environment aware, we can add a (serializable) provider for providing SSL context to the {{NetworkClientConfig}}. was: The current Flink ElasticSearch connector does not support SSL option, causing issues connecting to secure ES clusters. As SSLContext is not serializable, and sometimes the context is host / environment aware, we can add a (serializable) provider for providing SSL context to the {{ > Elasticsearch connector supports SSL context > > > Key: FLINK-34369 > URL: https://issues.apache.org/jira/browse/FLINK-34369 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Priority: Major > > The current Flink ElasticSearch connector does not support SSL option, > causing issues connecting to secure ES clusters. > As SSLContext is not serializable, and sometimes the context is host / > environment aware, we can add a (serializable) provider for providing SSL > context to the {{NetworkClientConfig}}. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34369) Elasticsearch connector supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-34369: -- Description: The current Flink ElasticSearch connector does not support SSL option, causing issues connecting to secure ES clusters. As SSLContext is not serializable, and sometimes the context is host / environment aware, we can add a (serializable) provider for providing SSL context to the {{ was:The current Flink ElasticSearch connector does not support SSL option, causing issues connecting to secure ES clusters. > Elasticsearch connector supports SSL context > > > Key: FLINK-34369 > URL: https://issues.apache.org/jira/browse/FLINK-34369 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Priority: Major > > The current Flink ElasticSearch connector does not support SSL option, > causing issues connecting to secure ES clusters. > As SSLContext is not serializable, and sometimes the context is host / > environment aware, we can add a (serializable) provider for providing SSL > context to the {{ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34369) Elasticsearch connector supports SSL context
[ https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-34369: -- Summary: Elasticsearch connector supports SSL context (was: Elasticsearch connector supports SSL provider) > Elasticsearch connector supports SSL context > > > Key: FLINK-34369 > URL: https://issues.apache.org/jira/browse/FLINK-34369 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Priority: Major > > The current Flink ElasticSearch connector does not support SSL option, > causing issues connecting to secure ES clusters. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34369) Elasticsearch connector supports SSL provider
[ https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814504#comment-17814504 ] Mingliang Liu commented on FLINK-34369: --- Note the Flink ElasticSearch [SQL connector|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/] is also missing SSL options. That is tracked by FLINK-27054, as it may require different configuration options than this {{NetworkClientConfig}} API improvement. > Elasticsearch connector supports SSL provider > - > > Key: FLINK-34369 > URL: https://issues.apache.org/jira/browse/FLINK-34369 > Project: Flink > Issue Type: Improvement > Components: Connectors / ElasticSearch >Affects Versions: 1.17.1 >Reporter: Mingliang Liu >Priority: Major > > The current Flink ElasticSearch connector does not support SSL option, > causing issues connecting to secure ES clusters. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-34369) Elasticsearch connector supports SSL provider
Mingliang Liu created FLINK-34369: - Summary: Elasticsearch connector supports SSL provider Key: FLINK-34369 URL: https://issues.apache.org/jira/browse/FLINK-34369 Project: Flink Issue Type: Improvement Components: Connectors / ElasticSearch Affects Versions: 1.17.1 Reporter: Mingliang Liu The current Flink ElasticSearch connector does not support SSL option, causing issues connecting to secure ES clusters. -- This message was sent by Atlassian Jira (v8.20.10#820010)
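The fix described in FLINK-34369 — shipping a serializable factory instead of the SSLContext itself — can be sketched as follows. The interface name and wiring are illustrative, not the connector's actual API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import javax.net.ssl.SSLContext;

/**
 * Sketch of the idea in this issue: SSLContext is not serializable, so the
 * connector configuration carries a serializable factory and each host builds
 * its own context at runtime.
 */
public class SslProviderSketch {

    /** Serializable factory: ships with the job graph, builds the context locally. */
    public interface SSLContextProvider extends Serializable {
        SSLContext get() throws Exception;
    }

    /** Round-trips a provider through Java serialization, analogous to what
     *  happens when operator configuration is distributed to task managers. */
    public static SSLContextProvider roundTrip(SSLContextProvider p) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(p);
        return (SSLContextProvider) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
    }

    public static void main(String[] args) throws Exception {
        // A stateless method reference is trivially serializable because the
        // functional interface extends Serializable.
        SSLContextProvider provider = SSLContext::getDefault;
        SSLContext ctx = roundTrip(provider).get();
        System.out.println(ctx != null);
    }
}
```

The provider pattern also handles the "environment aware" concern in the description: the factory can read host-local keystores when invoked, instead of baking one context into the serialized job.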
[jira] [Commented] (FLINK-27054) Elasticsearch SQL connector SSL issue
[ https://issues.apache.org/jira/browse/FLINK-27054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813391#comment-17813391 ] Mingliang Liu commented on FLINK-27054: --- Any updates on this? My understanding is this problem (not supporting SSL) exists in both ES 6 and ES 7 connectors, both SQL and non-SQL (DataStream), correct? > Elasticsearch SQL connector SSL issue > - > > Key: FLINK-27054 > URL: https://issues.apache.org/jira/browse/FLINK-27054 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch >Reporter: ricardo >Assignee: Kelu Tao >Priority: Major > > The current Flink ElasticSearch SQL connector > https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/ > is missing SSL options, can't connect to ES clusters which require SSL > certificate. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34132) Batch WordCount job fails when run with AdaptiveBatch scheduler
[ https://issues.apache.org/jira/browse/FLINK-34132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813316#comment-17813316 ] Wencong Liu commented on FLINK-34132: - Thanks for the reminding. [~zhuzh] I will address these issues when I have some free time. > Batch WordCount job fails when run with AdaptiveBatch scheduler > --- > > Key: FLINK-34132 > URL: https://issues.apache.org/jira/browse/FLINK-34132 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.17.1, 1.18.1 >Reporter: Prabhu Joseph >Assignee: Junrui Li >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > > Batch WordCount job fails when run with AdaptiveBatch scheduler. > *Repro Steps* > {code:java} > flink-yarn-session -Djobmanager.scheduler=adaptive -d > flink run -d /usr/lib/flink/examples/batch/WordCount.jar --input > s3://prabhuflinks3/INPUT --output s3://prabhuflinks3/OUT > {code} > *Error logs* > {code:java} > The program finished with the following exception: > org.apache.flink.client.program.ProgramInvocationException: The main method > caused an error: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: > org.apache.flink.runtime.client.JobInitializationException: Could not start > the JobMaster. 
> at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) > at > org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:851) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:245) > at > org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1095) > at > org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) > at > org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at > org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157) > Caused by: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.apache.flink.runtime.client.JobInitializationException: Could not start > the JobMaster. 
> at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321) > at > org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:1067) > at > org.apache.flink.client.program.ContextEnvironment.executeAsync(ContextEnvironment.java:144) > at > org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:73) > at > org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:106) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) > ... 12 more > Caused by: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: > org.apache.flink.runtime.client.JobInitializationException: Could not start > the JobMaster. > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > at > org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:1062) > ... 20 more > Caused by: java.lang.RuntimeException: > org.apache.flink.runtime.client.JobInitializationException: Could not start > the JobMaster. > at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321) > at > org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75) > at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > at > java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:457) > at
[jira] [Commented] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)
[ https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812213#comment-17812213 ] Wencong Liu commented on FLINK-32978: - [~martijnvisser] Thanks for the reminder. I've added the release notes information. > Deprecate RichFunction#open(Configuration parameters) > - > > Key: FLINK-32978 > URL: https://issues.apache.org/jira/browse/FLINK-32978 > Project: Flink > Issue Type: Technical Debt > Components: API / Core >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Assignee: Wencong Liu >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > > The > [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231] > has decided that the parameter in RichFunction#open will be removed in the > next major version. We should deprecate it now and remove it in Flink 2.0. > The removal will be tracked in > [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)
[ https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu closed FLINK-32978. --- Release Note: The RichFunction#open(Configuration parameters) method has been deprecated and will be removed in future versions. Users are encouraged to migrate to the new RichFunction#open(OpenContext openContext) method, which provides a more comprehensive context for initialization. Here are the key changes and recommendations for migration: The open(Configuration parameters) method is now marked as deprecated. A new method open(OpenContext openContext) has been added as a default method to the RichFunction interface. Users should implement the new open(OpenContext openContext) method for function initialization tasks. The new method will be called automatically before the execution of any processing methods (map, join, etc.). If the new open(OpenContext openContext) method is not implemented, Flink will fall back to invoking the deprecated open(Configuration parameters) method. Resolution: Fixed > Deprecate RichFunction#open(Configuration parameters) > - > > Key: FLINK-32978 > URL: https://issues.apache.org/jira/browse/FLINK-32978 > Project: Flink > Issue Type: Technical Debt > Components: API / Core >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Assignee: Wencong Liu >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > > The > [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231] > has decided that the parameter in RichFunction#open will be removed in the > next major version. We should deprecate it now and remove it in Flink 2.0. > The removal will be tracked in > [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)
[ https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu reopened FLINK-32978: - > Deprecate RichFunction#open(Configuration parameters) > - > > Key: FLINK-32978 > URL: https://issues.apache.org/jira/browse/FLINK-32978 > Project: Flink > Issue Type: Technical Debt > Components: API / Core >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Assignee: Wencong Liu >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > > The > [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231] > has decided that the parameter in RichFunction#open will be removed in the > next major version. We should deprecate it now and remove it in Flink 2.0. > The removal will be tracked in > [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
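The fallback behavior described in the FLINK-32978 release note — a new default method delegating to the deprecated one — can be shown with a self-contained sketch. These stand-in types mimic, but are not, Flink's RichFunction, OpenContext, and Configuration:

```java
/**
 * Self-contained sketch of the deprecation pattern: the new default
 * open(OpenContext) falls back to the deprecated open(Configuration), so
 * functions that only override the old method keep initializing correctly.
 * All types here are illustrative stand-ins, not Flink's actual classes.
 */
public class OpenMigrationSketch {

    interface OpenContext {}          // stand-in for Flink's OpenContext
    static class Configuration {}     // stand-in for Flink's Configuration

    abstract static class RichFunction {
        @Deprecated
        public void open(Configuration parameters) {}

        // New entry point: the runtime calls this; by default it falls back
        // to the deprecated method for old-style implementations.
        public void open(OpenContext ctx) {
            open(new Configuration());
        }
    }

    /** Overrides only the deprecated method -- still reached via the fallback. */
    static class OldStyle extends RichFunction {
        boolean opened;
        @Override
        public void open(Configuration parameters) { opened = true; }
    }

    /** Migrated implementation overriding the new method directly. */
    static class NewStyle extends RichFunction {
        boolean opened;
        @Override
        public void open(OpenContext ctx) { opened = true; }
    }

    static boolean demo() {
        OldStyle a = new OldStyle();
        NewStyle b = new NewStyle();
        OpenContext ctx = new OpenContext() {};
        a.open(ctx);   // default open(OpenContext) -> deprecated open(Configuration)
        b.open(ctx);   // new method directly
        return a.opened && b.opened;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Migration for users is then a one-line signature change: override open(OpenContext) instead of open(Configuration), with no behavioral difference for existing code until the deprecated method is removed.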
[jira] [Created] (FLINK-34251) ClosureCleaner to include reference classes for non-serialization exception
Mingliang Liu created FLINK-34251: - Summary: ClosureCleaner to include reference classes for non-serialization exception Key: FLINK-34251 URL: https://issues.apache.org/jira/browse/FLINK-34251 Project: Flink Issue Type: Improvement Components: API / Core Affects Versions: 1.18.2 Reporter: Mingliang Liu Currently the ClosureCleaner throws an exception if {{checkSerializable}} is enabled and some object is non-serializable. It includes the non-serializable (nested) object in the exception message. However, when the user job program grows more complex, pulling in multiple operators, each of which pulls in multiple third-party libraries, it is unclear how the non-serializable object is referenced, since some of those objects may be nested multiple levels deep. For example, the following exception gives no hint about where to look: {code} org.apache.flink.api.common.InvalidProgramException: java.lang.Object@528c868 is not serializable. {code} It would be nice to include the reference stack in the exception message, as follows: {code} org.apache.flink.api.common.InvalidProgramException: java.lang.Object@72437d8d is not serializable. Referenced via [class com.mycompany.myapp.ComplexMap, class com.mycompany.myapp.LocalMap, class com.yourcompany.yourapp.YourPojo, class com.hercompany.herapp.Random, class java.lang.Object] ... {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34251) ClosureCleaner to include reference classes for non-serialization exception
[ https://issues.apache.org/jira/browse/FLINK-34251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated FLINK-34251: -- Priority: Minor (was: Major) > ClosureCleaner to include reference classes for non-serialization exception > --- > > Key: FLINK-34251 > URL: https://issues.apache.org/jira/browse/FLINK-34251 > Project: Flink > Issue Type: Improvement > Components: API / Core >Affects Versions: 1.18.2 >Reporter: Mingliang Liu >Priority: Minor > > Currently the ClosureCleaner throws exception if {{checkSerializable} is > enabled while some object is non-serializable. It includes the > non-serializable (nested) object in the exception in the exception message. > However, when the user job program gets more complex pulling multiple > operators each of which pulls multiple 3rd party libraries, it is unclear how > the non-serializable object is referenced as some of those objects could be > nested in multiple levels. For example, following exception is not > straightforward where to check: > {code} > org.apache.flink.api.common.InvalidProgramException: java.lang.Object@528c868 > is not serializable. > {code} > It would be nice to include the reference stack in the exception message, as > following: > {code} > org.apache.flink.api.common.InvalidProgramException: > java.lang.Object@72437d8d is not serializable. Referenced via [class > com.mycompany.myapp.ComplexMap, class com.mycompany.myapp.LocalMap, class > com.yourcompany.yourapp.YourPojo, class com.hercompany.herapp.Random, class > java.lang.Object] ... > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
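The "Referenced via [...]" message proposed in FLINK-34251 requires tracking the chain of classes while recursing through the closure's object graph. A minimal sketch of that idea — illustrative only, not Flink's ClosureCleaner implementation (a real version would also need a visited-set to survive cyclic references):

```java
import java.io.Serializable;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

/**
 * Sketch: while walking the object graph looking for a non-serializable
 * member, carry the chain of classes visited so the exception can report
 * how the culprit is referenced.
 */
public class SerializabilityChecker {

    /** Returns the reference path root -> culprit, or an empty list if all serializable. */
    public static List<Class<?>> findNonSerializablePath(Object root) {
        Deque<Class<?>> path = new ArrayDeque<>();
        try {
            return visit(root, path) ? List.copyOf(path) : List.of();
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    private static boolean visit(Object obj, Deque<Class<?>> path) throws IllegalAccessException {
        if (obj == null) {
            return false;
        }
        path.addLast(obj.getClass());
        if (!(obj instanceof Serializable)) {
            return true; // found the culprit; path now ends at its class
        }
        for (Field f : obj.getClass().getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers()) || f.getType().isPrimitive()) {
                continue;
            }
            f.setAccessible(true);
            if (visit(f.get(obj), path)) {
                return true;
            }
        }
        path.removeLast(); // this subtree is fine; backtrack
        return false;
    }

    /** Example: serializable wrapper holding a plain (non-serializable) Object. */
    static class Pojo implements Serializable {
        Object plain = new Object();
    }

    public static void main(String[] args) {
        System.out.println(findNonSerializablePath(new Pojo()));
    }
}
```

Formatting the returned list into the exception message yields output shaped like the "Referenced via [class ..., class java.lang.Object]" example in the ticket.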
[jira] [Commented] (FLINK-34246) Allow only archive failed job to history server
[ https://issues.apache.org/jira/browse/FLINK-34246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811468#comment-17811468 ] Wencong Liu commented on FLINK-34246: - Thanks, [~qingwei91], for suggesting this. Are you suggesting that we should offer an option that allows the HistoryServer to archive only failed batch jobs? This requirement seems quite specific. For instance, we would also need to consider archiving the logs of failed streaming jobs. > Allow only archive failed job to history server > --- > > Key: FLINK-34246 > URL: https://issues.apache.org/jira/browse/FLINK-34246 > Project: Flink > Issue Type: Improvement > Components: Client / Job Submission >Reporter: Lim Qing Wei >Priority: Minor > > Hi, I wonder if we can support only archiving failed jobs to the History > Server. History Server is a great tool to allow us to check on previous > jobs. We are using Flink batch jobs which can run many times throughout the > week, and we only need to check a job on the History Server when it has > failed. > It would be more efficient if we could choose to store only a subset of the > data. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34237) MongoDB connector compile failed with Flink 1.19-SNAPSHOT
[ https://issues.apache.org/jira/browse/FLINK-34237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811101#comment-17811101 ] Wencong Liu commented on FLINK-34237: - Thanks for the reminder. I'll fix it as soon as possible. > MongoDB connector compile failed with Flink 1.19-SNAPSHOT > - > > Key: FLINK-34237 > URL: https://issues.apache.org/jira/browse/FLINK-34237 > Project: Flink > Issue Type: Bug > Components: API / Core, Connectors / MongoDB >Reporter: Leonard Xu >Assignee: Wencong Liu >Priority: Blocker > Fix For: 1.19.0 > > > {code:java} > Error: Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile > (default-compile) on project flink-connector-mongodb: Compilation failure > 134Error: > /home/runner/work/flink-connector-mongodb/flink-connector-mongodb/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReaderContext.java:[35,8] > org.apache.flink.connector.mongodb.source.reader.MongoSourceReaderContext is > not abstract and does not override abstract method getTaskInfo() in > org.apache.flink.api.connector.source.SourceReaderContext > 135{code} > [https://github.com/apache/flink-connector-mongodb/actions/runs/7657281844/job/20867604084] > This is related to FLINK-33905 > One point: As > [FLIP-382|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs] > is accepted, all connectors who implement SourceReaderContext (i.e > MongoSourceReaderContext) should implement new introduced methods ` > getTaskInfo()` if they want to compile/work with Flink 1.19. > Another point: The FLIP-382 didn't mentioned the connector backward > compatibility well, maybe we need to rethink the section. As I just have a > rough look at the FLIP, maybe [~xtsong] and [~Wencong Liu] could comment > under this issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch
[ https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17805352#comment-17805352 ] Wencong Liu commented on FLINK-33009: - I've opened a pull request and CI has passed.
> tools/release/update_japicmp_configuration.sh should only enable binary
> compatibility checks in the release branch
> ----------------------------------------------------------------------
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
> Issue Type: Bug
> Components: Release System
> Affects Versions: 1.19.0
> Reporter: Matthias Pohl
> Assignee: Wencong Liu
> Priority: Major
> Labels: pull-request-available
>
> According to [Flink's API compatibility constraints|https://nightlies.apache.org/flink/flink-docs-master/docs/ops/upgrading/], we only support binary compatibility between patch versions. In [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246] we have binary compatibility enabled even in {{master}}. This doesn't comply with the rules. We should have this flag disabled in {{master}}; {{tools/release/update_japicmp_configuration.sh}} should enable it in the release branch as part of the release process.
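The desired setup amounts to keeping japicmp's binary-compatibility switch off on master and letting the release script flip it on in release branches. A hedged sketch using the japicmp-maven-plugin's documented parameters (Flink's actual pom.xml wiring may differ):

```xml
<plugin>
  <groupId>com.github.siom79.japicmp</groupId>
  <artifactId>japicmp-maven-plugin</artifactId>
  <configuration>
    <parameter>
      <!-- master: only source compatibility is enforced -->
      <breakBuildOnSourceIncompatibleModifications>true</breakBuildOnSourceIncompatibleModifications>
      <!-- false on master; tools/release/update_japicmp_configuration.sh
           would flip this to true when cutting a release branch -->
      <breakBuildOnBinaryIncompatibleModifications>false</breakBuildOnBinaryIncompatibleModifications>
    </parameter>
  </configuration>
</plugin>
```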
[jira] [Comment Edited] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)
[ https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804660#comment-17804660 ] Wencong Liu edited comment on FLINK-32978 at 1/9/24 9:45 AM: - Thanks for proposing this issue. I will investigate all modified implementation classes annotated by @Public or @PublicEvolving and open a pull request to revert the erroneous changes.
was (Author: JIRAUSER281639): Thanks for proposing this issue. I will investigate all implementation classes annotated by @Public or @PublicEvolving and open a pull request to revert the erroneous changes.
> Deprecate RichFunction#open(Configuration parameters)
> -----------------------------------------------------
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
> Issue Type: Technical Debt
> Components: API / Core
> Affects Versions: 1.19.0
> Reporter: Wencong Liu
> Assignee: Wencong Liu
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.19.0
>
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231] has decided that the parameter in RichFunction#open will be removed in the next major version. We should deprecate it now and remove it in Flink 2.0. The removal will be tracked in [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].
[jira] [Commented] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)
[ https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804660#comment-17804660 ] Wencong Liu commented on FLINK-32978: - Thanks for proposing this issue. I will investigate all implementation classes annotated by @Public or @PublicEvolving and open a pull request to revert the erroneous changes.
> Deprecate RichFunction#open(Configuration parameters)
> -----------------------------------------------------
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
> Issue Type: Technical Debt
> Components: API / Core
> Affects Versions: 1.19.0
> Reporter: Wencong Liu
> Assignee: Wencong Liu
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.19.0
>
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231] has decided that the parameter in RichFunction#open will be removed in the next major version. We should deprecate it now and remove it in Flink 2.0. The removal will be tracked in [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].
[jira] [Closed] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
[ https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu closed FLINK-33949. --- Resolution: Not A Problem
> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
> ----------------------------------------------------------------------------------
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
> Issue Type: Bug
> Components: Test Infrastructure
> Affects Versions: 1.19.0
> Reporter: Wencong Liu
> Priority: Major
> Fix For: 1.19.0
>
> Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for either source or binary. We could add the following settings to override the default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameter>
>     <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>     <binaryCompatible>true</binaryCompatible>
>     <sourceCompatible>true</sourceCompatible>
> </overrideCompatibilityChangeParameter>
> {code}
> By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible.
[jira] [Commented] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
[ https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801988#comment-17801988 ] Wencong Liu commented on FLINK-33949: - Thanks for the explanation from [~chesnay]. Given that all the actively running code might throw related exceptions, it would be unreasonable to directly modify the rules of japicmp. If there's a specific interface that needs to break this rule, we should simply exclude that interface. This ticket can be closed now.
> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
> ----------------------------------------------------------------------------------
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
> Issue Type: Bug
> Components: Test Infrastructure
> Affects Versions: 1.19.0
> Reporter: Wencong Liu
> Priority: Major
> Fix For: 1.19.0
>
> Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for either source or binary. We could add the following settings to override the default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameter>
>     <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>     <binaryCompatible>true</binaryCompatible>
>     <sourceCompatible>true</sourceCompatible>
> </overrideCompatibilityChangeParameter>
> {code}
> By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible.
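The "exclude that interface" approach maps to the japicmp-maven-plugin's excludes parameter rather than a global rule change. A hedged sketch (the SourceReaderContext path is illustrative of the kind of vetted exemption meant here):

```xml
<parameter>
  <!-- Keep METHOD_ABSTRACT_NOW_DEFAULT flagged globally, but exempt the one
       interface whose abstract-to-default change has been vetted. -->
  <excludes>
    <exclude>org.apache.flink.api.connector.source.SourceReaderContext</exclude>
  </excludes>
</parameter>
```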
[jira] [Comment Edited] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
[ https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800913#comment-17800913 ] Wencong Liu edited comment on FLINK-33949 at 12/28/23 3:45 AM: - Suppose we have two completely independent interfaces, I and J, both declaring a default method M with the same signature. Now, if there is a class T that implements both interfaces I and J but *does not override* the conflicting method M, the compiler would not know which interface's default method implementation to use, as they both have equal priority. If the code containing class T tries to invoke this method at runtime, the JVM would throw an {{IncompatibleClassChangeError}} because it is faced with an impossible decision: it does not know which interface's default implementation to call. However, if M is abstract in I or J, the implementation class T *must* provide an explicit implementation of the method. So no matter how interfaces I or J change (as long as the signature of their method M does not change), it will not affect the behavior of the implementation class T or cause an {{IncompatibleClassChangeError}}. Class T will continue to use its own implementation of method M, disregarding any default implementations from the two interfaces.
I have created a test case where StreamingRuntimeContext is extended with a method returning a TestObject:
{code:java}
public class TestObject implements TestInterface1, TestInterface2 {
    @Override
    public String getResult() {
        return "777";
    }
}

public interface TestInterface1 {
    String getResult();
}

public interface TestInterface2 {
    default String getResult() {
        return "666";
    }
}
{code}
The job code follows. The job is compiled against the modified StreamingRuntimeContext in Flink.
{code:java}
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment executionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStreamSource<Integer> source = executionEnvironment.fromData(3, 2, 1, 4, 5, 6, 7, 8);
    SingleOutputStreamOperator<String> result = source.map(new RichMapFunction<Integer, String>() {
        @Override
        public String map(Integer integer) {
            StreamingRuntimeContext runtimeContext = (StreamingRuntimeContext) getRuntimeContext();
            return runtimeContext.getTestObject().getResult();
        }
    });
    CloseableIterator<String> jobResult = result.executeAndCollect();
    while (jobResult.hasNext()) {
        System.out.println(jobResult.next());
    }
}
{code}
When I changed the abstract method getResult into a default method in TestInterface1 and recompiled Flink, the job was still able to finish without any code changes or exceptions. Therefore, I think METHOD_ABSTRACT_NOW_DEFAULT doesn't break source compatibility. WDYT? [~martijnvisser]
was (Author: JIRAUSER281639): Suppose we have two completely independent interfaces, I and J, both declaring a default method M with the same signature. Now, if there is a class T that implements both interfaces I and J but *does not override* the conflicting method M, the compiler would not know which interface's default method implementation to use, as they both have equal priority. If the code containing class T tries to invoke this method at runtime, the JVM would throw an {{IncompatibleClassChangeError}} because it is faced with an impossible decision: it does not know which interface's default implementation to call. However, if M is abstract in I or J, the implementation class T *must* provide an explicit implementation of the method. So no matter how interfaces I or J change (as long as the signature of their method M does not change), it will not affect the behavior of the implementation class T or cause an {{IncompatibleClassChangeError}}. Class T will continue to use its own implementation of method M, disregarding any default implementations from the two interfaces.
I have created a test case where StreamingRuntimeContext is extended with a method returning a TestObject:
{code:java}
public class TestObject implements TestInterface1, TestInterface2 {
    @Override
    public String getResult() {
        return "777";
    }
}

public interface TestInterface1 {
    String getResult();
}

public interface TestInterface2 {
    default String getResult() {
        return "666";
    }
}
{code}
The job code follows. The job is compiled against the modified StreamingRuntimeContext in Flink.
{code:java}
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment executionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStreamSource<Integer> source = executionEnvironment.fromData(3, 2, 1, 4, 5, 6, 7, 8);
    SingleOutputStreamOperator<String> result = source.map(new RichMapFunction<Integer, String>() {
        @Override
        public String map(Integer integer)
[jira] [Commented] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
[ https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800913#comment-17800913 ] Wencong Liu commented on FLINK-33949: - Suppose we have two completely independent interfaces, I and J, both declaring a default method M with the same signature. Now, if there is a class T that implements both interfaces I and J but *does not override* the conflicting method M, the compiler would not know which interface's default method implementation to use, as they both have equal priority. If the code containing class T tries to invoke this method at runtime, the JVM would throw an {{IncompatibleClassChangeError}} because it is faced with an impossible decision: it does not know which interface's default implementation to call. However, if M is abstract in I or J, the implementation class T *must* provide an explicit implementation of the method. So no matter how interfaces I or J change (as long as the signature of their method M does not change), it will not affect the behavior of the implementation class T or cause an {{IncompatibleClassChangeError}}. Class T will continue to use its own implementation of method M, disregarding any default implementations from the two interfaces.
I have created a test case where StreamingRuntimeContext is extended with a method returning a TestObject:
{code:java}
public class TestObject implements TestInterface1, TestInterface2 {
    @Override
    public String getResult() {
        return "777";
    }
}

public interface TestInterface1 {
    String getResult();
}

public interface TestInterface2 {
    default String getResult() {
        return "666";
    }
}
{code}
The job code follows. The job is compiled against the modified StreamingRuntimeContext in Flink.
{code:java}
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment executionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStreamSource<Integer> source = executionEnvironment.fromData(3, 2, 1, 4, 5, 6, 7, 8);
    SingleOutputStreamOperator<String> result = source.map(new RichMapFunction<Integer, String>() {
        @Override
        public String map(Integer integer) {
            StreamingRuntimeContext runtimeContext = (StreamingRuntimeContext) getRuntimeContext();
            return runtimeContext.getTestObject().getResult();
        }
    });
    CloseableIterator<String> jobResult = result.executeAndCollect();
    while (jobResult.hasNext()) {
        System.out.println(jobResult.next());
    }
}
{code}
When I changed the abstract method getResult into a default method in TestInterface1 and recompiled Flink, the job was still able to finish without any code changes or exceptions. Therefore, I think METHOD_ABSTRACT_NOW_DEFAULT doesn't break source compatibility. WDYT?
> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
> ----------------------------------------------------------------------------------
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
> Issue Type: Bug
> Components: Test Infrastructure
> Affects Versions: 1.19.0
> Reporter: Wencong Liu
> Priority: Major
> Fix For: 1.19.0
>
> Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for either source or binary. We could add the following settings to override the default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameter>
>     <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>     <binaryCompatible>true</binaryCompatible>
>     <sourceCompatible>true</sourceCompatible>
> </overrideCompatibilityChangeParameter>
> {code}
> By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible.
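The core of the argument in the comment can be condensed into one runnable file. Because TestObject overrides getResult, Java's method resolution always prefers the class's own implementation over any inherited default, so whether TestInterface1 declares the method abstract or default is invisible to callers. A sketch using the names from the comment (TestInterface1's default body is added here purely to show the toggle):

```java
interface TestInterface1 {
    // Switching this between the abstract form `String getResult();` and the
    // default below does not change what TestObject returns: the class
    // override always wins.
    default String getResult() {
        return "666-from-interface1";
    }
}

interface TestInterface2 {
    default String getResult() {
        return "666";
    }
}

class TestObject implements TestInterface1, TestInterface2 {
    @Override
    public String getResult() {
        return "777"; // selected over either interface default
    }
}

public class AbstractToDefaultDemo {
    public static void main(String[] args) {
        System.out.println(new TestObject().getResult()); // prints 777
    }
}
```

The hazard japicmp guards against is the remaining case where a class inherits two conflicting defaults without overriding: under separate compilation that surfaces as an IncompatibleClassChangeError at the call site, which is why excluding vetted interfaces case by case is the safer route than relaxing the rule globally.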
[jira] [Commented] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
[ https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800802#comment-17800802 ] Wencong Liu commented on FLINK-33949: - For users who have built implementations themselves, no code changes are needed when they upgrade to a new version with abstract-to-default changes. This change ensures source compatibility. [~martijnvisser]
> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
> ----------------------------------------------------------------------------------
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
> Issue Type: Bug
> Components: Test Infrastructure
> Affects Versions: 1.19.0
> Reporter: Wencong Liu
> Priority: Major
> Fix For: 1.19.0
>
> Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for either source or binary. We could add the following settings to override the default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameter>
>     <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>     <binaryCompatible>true</binaryCompatible>
>     <sourceCompatible>true</sourceCompatible>
> </overrideCompatibilityChangeParameter>
> {code}
> By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible.
[jira] [Commented] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
[ https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800787#comment-17800787 ] Wencong Liu commented on FLINK-33949: - Thanks [~martijnvisser] for your comments. The implementation classes of the @Public API have already overridden the abstract methods. After an abstract method becomes default, the behavior of these implementation classes will not change. WDYT?
> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
> ----------------------------------------------------------------------------------
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
> Issue Type: Bug
> Components: Test Infrastructure
> Affects Versions: 1.19.0
> Reporter: Wencong Liu
> Priority: Major
> Fix For: 1.19.0
>
> Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for either source or binary. We could add the following settings to override the default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameter>
>     <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>     <binaryCompatible>true</binaryCompatible>
>     <sourceCompatible>true</sourceCompatible>
> </overrideCompatibilityChangeParameter>
> {code}
> By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible.
[jira] [Comment Edited] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch
[ https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800696#comment-17800696 ] Wencong Liu edited comment on FLINK-33009 at 12/27/23 6:43 AM: - Hi [~mapohl], I've encountered the same issue once more in FLINK-33949 when I'm making some code changes considered binary incompatible by japicmp. I'd like to take this ticket and fix it. WDYT?
was (Author: JIRAUSER281639): Hi [~mapohl], I've encountered the same issue once more in [FLINK-33949] METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible - ASF JIRA (apache.org) when I'm making some code changes considered binary incompatible by japicmp. I'd like to take this ticket and fix it. WDYT?
> tools/release/update_japicmp_configuration.sh should only enable binary
> compatibility checks in the release branch
> ----------------------------------------------------------------------
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
> Issue Type: Bug
> Components: Release System
> Affects Versions: 1.19.0
> Reporter: Matthias Pohl
> Priority: Major
>
> According to Flink's API compatibility constraints, we only support binary compatibility between patch versions. In [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246] we have binary compatibility enabled even in {{master}}. This doesn't comply with the rules. We should have this flag disabled in {{master}}; {{tools/release/update_japicmp_configuration.sh}} should enable it in the release branch as part of the release process.
[jira] [Comment Edited] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch
[ https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800696#comment-17800696 ] Wencong Liu edited comment on FLINK-33009 at 12/27/23 6:42 AM: - Hi [~mapohl], I've encountered the same issue once more in [FLINK-33949] METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible - ASF JIRA (apache.org) when I'm making some code changes considered binary incompatible by japicmp. I'd like to take this ticket and fix it. WDYT?
was (Author: JIRAUSER281639): Hi [~mapohl], I've encountered the same issue once more when I'm making some code changes considered binary incompatible by japicmp. I'd like to take this ticket and fix it. WDYT?
> tools/release/update_japicmp_configuration.sh should only enable binary
> compatibility checks in the release branch
> ----------------------------------------------------------------------
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
> Issue Type: Bug
> Components: Release System
> Affects Versions: 1.19.0
> Reporter: Matthias Pohl
> Priority: Major
>
> According to Flink's API compatibility constraints, we only support binary compatibility between patch versions. In [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246] we have binary compatibility enabled even in {{master}}. This doesn't comply with the rules. We should have this flag disabled in {{master}}; {{tools/release/update_japicmp_configuration.sh}} should enable it in the release branch as part of the release process.
[jira] [Commented] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch
[ https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800696#comment-17800696 ] Wencong Liu commented on FLINK-33009: - Hi [~mapohl], I've encountered the same issue once more when I'm making some code changes considered binary incompatible by japicmp. I'd like to take this ticket and fix it. WDYT?
> tools/release/update_japicmp_configuration.sh should only enable binary
> compatibility checks in the release branch
> ----------------------------------------------------------------------
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
> Issue Type: Bug
> Components: Release System
> Affects Versions: 1.19.0
> Reporter: Matthias Pohl
> Priority: Major
>
> According to Flink's API compatibility constraints, we only support binary compatibility between patch versions. In [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246] we have binary compatibility enabled even in {{master}}. This doesn't comply with the rules. We should have this flag disabled in {{master}}; {{tools/release/update_japicmp_configuration.sh}} should enable it in the release branch as part of the release process.
[jira] [Updated] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
[ https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33949: Description:
Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations. I created a test case in which a job is compiled with the newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for either source or binary. We could add the following settings to override the default values for binary and source compatibility, such as:
{code:xml}
<overrideCompatibilityChangeParameter>
    <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
    <binaryCompatible>true</binaryCompatible>
    <sourceCompatible>true</sourceCompatible>
</overrideCompatibilityChangeParameter>
{code}
By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible.
was:
Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations. I created a test case in which a job is compiled with the newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for either source or binary. By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible.
> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
> ----------------------------------------------------------------------------------
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
> Issue Type: Bug
> Components: Test Infrastructure
> Affects Versions: 1.19.0
> Reporter: Wencong Liu
> Priority: Major
> Fix For: 1.19.0
>
> Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for either source or binary. We could add the following settings to override the default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameter>
>     <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>     <binaryCompatible>true</binaryCompatible>
>     <sourceCompatible>true</sourceCompatible>
> </overrideCompatibilityChangeParameter>
> {code}
> By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible.
[jira] [Updated] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
[ https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33949: Description: Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - Apache Flink - Apache Software Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as source incompatible and binary incompatible. The reason maybe that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations. I create a test case in which a job is compiled with newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for source and binary. By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already jira FLINK-33009 to track it and we should fix it as soon as possible. was: Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - Apache Flink - Apache Software Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as source incompatible and binary incompatible. 
The reason maybe that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations. I create a test case in which a job is compiled with newly changed default method and submitted to the previous version. There is no exception thrown. Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for source and binary. By the way, currently the master branch checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already a [Jira|[FLINK-33009] tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch - ASF JIRA (apache.org)] to track it and we should fix it as soon as possible. > METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary > compatible > -- > > Key: FLINK-33949 > URL: https://issues.apache.org/jira/browse/FLINK-33949 > Project: Flink > Issue Type: Bug > Components: Test Infrastructure >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Priority: Major > Fix For: 1.19.0 > > > Currently I'm trying to refactor some APIs annotated by @Public in > [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - > Apache Flink - Apache Software > Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. > When an abstract method is changed into a default method, the japicmp maven > plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as > source incompatible and binary incompatible. > The reason maybe that if the abstract method becomes default, the logic in > the default method will be ignored by the previous implementations. > I create a test case in which a job is compiled with newly changed default > method and submitted to the previous version. 
There is no exception thrown. > Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for > source and binary. > By the way, currently the master branch checks both source compatibility and > binary compatibility between minor versions. According to Flink's API > compatibility constraints, the master branch shouldn't check binary > compatibility. There is already jira FLINK-33009 to track it and we should > fix it as soon as possible. > > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible
Wencong Liu created FLINK-33949: --- Summary: METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible Key: FLINK-33949 URL: https://issues.apache.org/jira/browse/FLINK-33949 Project: Flink Issue Type: Bug Components: Test Infrastructure Affects Versions: 1.19.0 Reporter: Wencong Liu Fix For: 1.19.0 Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - Apache Flink - Apache Software Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. When an abstract method is changed into a default method, the japicmp maven plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both source incompatible and binary incompatible. The reason may be that if the abstract method becomes default, the logic in the default method will be ignored by the previous implementations. I created a test case in which a job compiled against the newly changed default method is submitted to the previous version. No exception is thrown. Therefore, METHOD_ABSTRACT_NOW_DEFAULT should be considered both source compatible and binary compatible. By the way, the master branch currently checks both source compatibility and binary compatibility between minor versions. According to Flink's API compatibility constraints, the master branch shouldn't check binary compatibility. There is already FLINK-33009 (tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch) to track it and we should fix it as soon as possible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
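The compatibility argument above can be illustrated with a minimal sketch. The interface and class names here (Context, OldImpl, NewImpl) are illustrative only, not the Flink API under discussion: an implementation written against the old abstract method keeps its own logic and never sees the new default body, while a new implementation that omits the override picks up the default.

```java
// Sketch of why METHOD_ABSTRACT_NOW_DEFAULT is compatible in practice.
// Names are hypothetical, not Flink API.
interface Context {
    // Previously: String metadata();  (abstract)
    // Now a default method; an existing override shadows the default body.
    default String metadata() {
        return "default-metadata";
    }
}

// An "old" implementation that was written against the abstract version.
class OldImpl implements Context {
    @Override
    public String metadata() {
        return "old-metadata";
    }
}

// A "new" implementation that relies on the default body.
class NewImpl implements Context {}

public class AbstractNowDefaultDemo {
    public static void main(String[] args) {
        System.out.println(new OldImpl().metadata()); // prints old-metadata
        System.out.println(new NewImpl().metadata()); // prints default-metadata
    }
}
```

This mirrors the JVM's method-resolution rules: invoking the method on the old implementation still dispatches to its override, so no behavior changes and no error is raised, which is consistent with the test result reported above.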
[jira] [Commented] (FLINK-33939) Make husky in runtime-web no longer affect git global hooks
[ https://issues.apache.org/jira/browse/FLINK-33939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17800411#comment-17800411 ] Wencong Liu commented on FLINK-33939: - Thanks for raising this issue! I completely agree with your proposal to make front-end code detection an optional command execution in runtime-web's use of husky. By doing this, we can preserve the functionality of any globally configured git hooks. > Make husky in runtime-web no longer affect git global hooks > --- > > Key: FLINK-33939 > URL: https://issues.apache.org/jira/browse/FLINK-33939 > Project: Flink > Issue Type: Improvement > Components: Runtime / Web Frontend >Reporter: Jason TANG >Priority: Minor > > Since runtime-web relies on husky to ensure that front-end code changes are > detected before `git commit`, husky modifies the global git hooks > (core.hooksPath), so a globally configured core.hooksPath won't take effect. > I thought it would be a good idea to make the front-end code > detection an optional command execution, which ensures that the globally > configured hooks are executed correctly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-33905) FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs
Wencong Liu created FLINK-33905: --- Summary: FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs Key: FLINK-33905 URL: https://issues.apache.org/jira/browse/FLINK-33905 Project: Flink Issue Type: Improvement Components: API / Core Affects Versions: 1.19.0 Reporter: Wencong Liu This ticket is proposed for [FLIP-382|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error
[ https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798413#comment-17798413 ] Wencong Liu commented on FLINK-33502: - Sorry for the late reply. I've just identified the issue and proposed a fix; it should be stable now. [~mapohl] > HybridShuffleITCase caused a fatal error > > > Key: FLINK-33502 > URL: https://issues.apache.org/jira/browse/FLINK-33502 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Affects Versions: 1.19.0 >Reporter: Matthias Pohl >Assignee: Wencong Liu >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.19.0 > > Attachments: image-2023-11-20-14-37-37-321.png > > > [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177] > {code:java} > Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check > output in log > 9168Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239 > 9169Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests: > 9170Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.flink.test.runtime.HybridShuffleITCase > 9171Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? 
> 9172Error: 21:21:35 21:21:35.379 [ERROR] Command was /bin/sh -c cd > /root/flink/flink-tests && /usr/lib/jvm/jdk-11.0.19+7/bin/java -XX:+UseG1GC > -Xms256m -XX:+IgnoreUnrecognizedVMOptions > --add-opens=java.base/java.util=ALL-UNNAMED > --add-opens=java.base/java.io=ALL-UNNAMED -Xmx1536m -jar > /root/flink/flink-tests/target/surefire/surefirebooter10811559899200556131.jar > /root/flink/flink-tests/target/surefire 2023-11-07T20-32-50_466-jvmRun4 > surefire6242806641230738408tmp surefire_1603959900047297795160tmp > 9173Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, > check output in log > 9174Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239 > 9175Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests: > 9176Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.flink.test.runtime.HybridShuffleITCase > 9177Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532) > 9178Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479) > 9179Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322) > 9180Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266) > [...] {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33785) TableJdbcUpsertOutputFormat could not deal with DELETE record correctly when primary keys were set
[ https://issues.apache.org/jira/browse/FLINK-33785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17794749#comment-17794749 ] Bodong Liu commented on FLINK-33785: If this class continues to be used in subsequent development, and this report is indeed a BUG, can this issue be assigned to me? > TableJdbcUpsertOutputFormat could not deal with DELETE record correctly when > primary keys were set > -- > > Key: FLINK-33785 > URL: https://issues.apache.org/jira/browse/FLINK-33785 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: jdbc-3.1.1 > Environment: Flink: 1.17.1 > Jdbc connector: 3.1.1 > Postgresql: 16.1 >Reporter: Bodong Liu >Priority: Major > Attachments: image-2023-12-08-22-24-20-295.png, > image-2023-12-08-22-24-26-493.png, image-2023-12-08-22-24-58-986.png, > image-2023-12-08-22-28-44-948.png, image-2023-12-08-22-38-08-559.png, > image-2023-12-08-22-40-35-530.png, image-2023-12-08-22-42-06-566.png > > > h1. Issue Description > When using jdbc connector to DELETE records in database, I found it CAN NOT > delete records correctly. > h1. Reproduction steps > The steps are as follows: > * Create a table with 5 fields and a pk. DDL in postgres: > > {code:java} > create table public.fake > ( > id bigint not null default > nextval('fake_id_seq'::regclass), > name character varying(128) not null, > age integer, > location character varying(256), > birthday timestamp without time zone default CURRENT_TIMESTAMP, > primary key (id, name) > );{code} > !image-2023-12-08-22-24-26-493.png! 
> > * Insert some data into the table: > {code:java} > INSERT INTO public.fake (id, name, age, location, birthday) VALUES (1, > 'Jack', 10, null, '2023-12-08 21:35:46.00'); > INSERT INTO public.fake (id, name, age, location, birthday) VALUES (2, > 'Jerry', 18, 'Fake Location', '2023-12-08 13:36:17.088295'); > INSERT INTO public.fake (id, name, age, location, birthday) VALUES (3, > 'John', 20, null, null); > INSERT INTO public.fake (id, name, age, location, birthday) VALUES (4, > 'Marry', null, null, '2023-12-08 13:37:09.721785'); > {code} > !image-2023-12-08-22-24-58-986.png! > * Run the flink code: > {code:java} > public static void main(String[] args) throws Exception { > StreamExecutionEnvironment env = > StreamExecutionEnvironment.getExecutionEnvironment(); > final String[] fieldNames = {"id", "name", "age", "location", "birthday"}; > final int[] fieldTypes = { > Types.BIGINT, Types.VARCHAR, Types.INTEGER, Types.VARCHAR, > Types.TIMESTAMP > }; > final String[] primaryKeys = {"id", "name"}; > InternalJdbcConnectionOptions internalJdbcConnectionOptions = > InternalJdbcConnectionOptions.builder() > > .setClassLoader(Thread.currentThread().getContextClassLoader()) > .setDriverName(Driver.class.getName()) > .setDBUrl("jdbc:postgresql://localhost:5432/postgres") > .setUsername("postgres") > .setPassword("postgres") > .setTableName("fake") > .setParallelism(1) > .setConnectionCheckTimeoutSeconds(10) > .setDialect(new PostgresDialect()) > .build(); > JdbcOutputFormat, Row, > JdbcBatchStatementExecutor> jdbcOutputFormat = > JdbcOutputFormat.builder() > .setFieldNames(fieldNames) > .setKeyFields(primaryKeys) > .setFieldTypes(fieldTypes) > .setOptions(internalJdbcConnectionOptions) > .setFlushIntervalMills(1000) > .setFlushMaxSize(10) > .setMaxRetryTimes(3) > .build(); > GenericJdbcSinkFunction> jdbcSinkFunction = > new GenericJdbcSinkFunction<>(jdbcOutputFormat); > Timestamp timestamp = Timestamp.valueOf("2023-12-08 21:35:46.00"); > // Row to delete > Row row = 
Row.ofKind(RowKind.DELETE, 1L, "Jack", 10, null, timestamp); > Tuple2 element = Tuple2.of(false, row); > > env.fromCollection(Collections.singleton(element)).addSink(jdbcSinkFunction); > env.execute(); > } {code} > When the code executed successfully, we can see that the record id=1 and > name=Jack was not deleted. > h1. Cause Analysis > In the build method of JdbcOutputFormat.Builder, if 'keyFields' option was > set in the JdbcDmlOptions, the method will return a > 'org.apache.flink.connector.jdbc.internal.TableJdbcUpsertOutputFormat'. > !image-2023-12-08-22-28-44-948.png! > And in >
[jira] [Created] (FLINK-33785) TableJdbcUpsertOutputFormat could not deal with DELETE record correctly when primary keys were set
Bodong Liu created FLINK-33785: -- Summary: TableJdbcUpsertOutputFormat could not deal with DELETE record correctly when primary keys were set Key: FLINK-33785 URL: https://issues.apache.org/jira/browse/FLINK-33785 Project: Flink Issue Type: Bug Components: Connectors / JDBC Affects Versions: jdbc-3.1.1 Environment: Flink: 1.17.1 Jdbc connector: 3.1.1 Postgresql: 16.1 Reporter: Bodong Liu Attachments: image-2023-12-08-22-24-20-295.png, image-2023-12-08-22-24-26-493.png, image-2023-12-08-22-24-58-986.png, image-2023-12-08-22-28-44-948.png, image-2023-12-08-22-38-08-559.png, image-2023-12-08-22-40-35-530.png, image-2023-12-08-22-42-06-566.png h1. Issue Description When using jdbc connector to DELETE records in database, I found it CAN NOT delete records correctly. h1. Reproduction steps The steps are as follows: * Create a table with 5 fields and a pk. DDL in postgres: {code:java} create table public.fake ( id bigint not null default nextval('fake_id_seq'::regclass), name character varying(128) not null, age integer, location character varying(256), birthday timestamp without time zone default CURRENT_TIMESTAMP, primary key (id, name) );{code} !image-2023-12-08-22-24-26-493.png! * Insert some data into the table: {code:java} INSERT INTO public.fake (id, name, age, location, birthday) VALUES (1, 'Jack', 10, null, '2023-12-08 21:35:46.00'); INSERT INTO public.fake (id, name, age, location, birthday) VALUES (2, 'Jerry', 18, 'Fake Location', '2023-12-08 13:36:17.088295'); INSERT INTO public.fake (id, name, age, location, birthday) VALUES (3, 'John', 20, null, null); INSERT INTO public.fake (id, name, age, location, birthday) VALUES (4, 'Marry', null, null, '2023-12-08 13:37:09.721785'); {code} !image-2023-12-08-22-24-58-986.png! 
* Run the flink code: {code:java} public static void main(String[] args) throws Exception { StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); final String[] fieldNames = {"id", "name", "age", "location", "birthday"}; final int[] fieldTypes = { Types.BIGINT, Types.VARCHAR, Types.INTEGER, Types.VARCHAR, Types.TIMESTAMP }; final String[] primaryKeys = {"id", "name"}; InternalJdbcConnectionOptions internalJdbcConnectionOptions = InternalJdbcConnectionOptions.builder() .setClassLoader(Thread.currentThread().getContextClassLoader()) .setDriverName(Driver.class.getName()) .setDBUrl("jdbc:postgresql://localhost:5432/postgres") .setUsername("postgres") .setPassword("postgres") .setTableName("fake") .setParallelism(1) .setConnectionCheckTimeoutSeconds(10) .setDialect(new PostgresDialect()) .build(); JdbcOutputFormat<Tuple2<Boolean, Row>, Row, JdbcBatchStatementExecutor<Row>> jdbcOutputFormat = JdbcOutputFormat.builder() .setFieldNames(fieldNames) .setKeyFields(primaryKeys) .setFieldTypes(fieldTypes) .setOptions(internalJdbcConnectionOptions) .setFlushIntervalMills(1000) .setFlushMaxSize(10) .setMaxRetryTimes(3) .build(); GenericJdbcSinkFunction<Tuple2<Boolean, Row>> jdbcSinkFunction = new GenericJdbcSinkFunction<>(jdbcOutputFormat); Timestamp timestamp = Timestamp.valueOf("2023-12-08 21:35:46.00"); // Row to delete Row row = Row.ofKind(RowKind.DELETE, 1L, "Jack", 10, null, timestamp); Tuple2<Boolean, Row> element = Tuple2.of(false, row); env.fromCollection(Collections.singleton(element)).addSink(jdbcSinkFunction); env.execute(); } {code} When the code executes successfully, we can see that the record id=1 and name=Jack was not deleted. h1. Cause Analysis In the build method of JdbcOutputFormat.Builder, if the 'keyFields' option was set in the JdbcDmlOptions, the method will return a 'org.apache.flink.connector.jdbc.internal.TableJdbcUpsertOutputFormat'. !image-2023-12-08-22-28-44-948.png! 
And in 'org.apache.flink.connector.jdbc.internal.TableJdbcUpsertOutputFormat#createDeleteExecutor', the method gets all the fieldNames instead of the keyFields to build the DELETE SQL statement, so the DELETE SQL may not execute correctly. !image-2023-12-08-22-38-08-559.png! h1. How to fix * Use the real keyFields, falling back to fieldNames only when no keys are set, to build the executor. !image-2023-12-08-22-42-06-566.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
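The proposed fix can be sketched as follows. The class and method names here are illustrative, not the actual connector code: the DELETE statement's WHERE clause is derived from the configured key fields when present, and only falls back to all field names otherwise.

```java
import java.util.StringJoiner;

// Illustrative sketch of the fix, not the real TableJdbcUpsertOutputFormat code:
// condition the DELETE on key fields when they are configured.
public class DeleteStatementSketch {
    static String buildDeleteSql(String table, String[] fieldNames, String[] keyFields) {
        // Prefer the primary-key columns; fall back to all columns
        // only when no keys are configured.
        String[] conditionFields =
                (keyFields != null && keyFields.length > 0) ? keyFields : fieldNames;
        StringJoiner where = new StringJoiner(" AND ");
        for (String field : conditionFields) {
            where.add(field + " = ?");
        }
        return "DELETE FROM " + table + " WHERE " + where;
    }

    public static void main(String[] args) {
        String[] fields = {"id", "name", "age", "location", "birthday"};
        String[] keys = {"id", "name"};
        // With keys set, only the primary-key columns appear in the WHERE clause,
        // so a DELETE row matches on (id, name) regardless of the other columns.
        System.out.println(buildDeleteSql("fake", fields, keys));
        // prints: DELETE FROM fake WHERE id = ? AND name = ?
    }
}
```

With the reported bug, the statement would instead condition on all five columns, so a DELETE row whose non-key columns differ from the stored row (e.g. a NULL location) silently matches nothing.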
[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error
[ https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792610#comment-17792610 ] Wencong Liu commented on FLINK-33502: - Thanks [~JunRuiLi]. I have investigated it and found that the root cause differs from this issue, but the exception caught in the outermost layer is the same. I'll reopen this issue and fix it as soon as possible. > HybridShuffleITCase caused a fatal error > > > Key: FLINK-33502 > URL: https://issues.apache.org/jira/browse/FLINK-33502 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Affects Versions: 1.19.0 >Reporter: Matthias Pohl >Assignee: Wencong Liu >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.19.0 > > Attachments: image-2023-11-20-14-37-37-321.png > > > [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177] > {code:java} > Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check > output in log > 9168Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239 > 9169Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests: > 9170Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.flink.test.runtime.HybridShuffleITCase > 9171Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? 
> 9172Error: 21:21:35 21:21:35.379 [ERROR] Command was /bin/sh -c cd > /root/flink/flink-tests && /usr/lib/jvm/jdk-11.0.19+7/bin/java -XX:+UseG1GC > -Xms256m -XX:+IgnoreUnrecognizedVMOptions > --add-opens=java.base/java.util=ALL-UNNAMED > --add-opens=java.base/java.io=ALL-UNNAMED -Xmx1536m -jar > /root/flink/flink-tests/target/surefire/surefirebooter10811559899200556131.jar > /root/flink/flink-tests/target/surefire 2023-11-07T20-32-50_466-jvmRun4 > surefire6242806641230738408tmp surefire_1603959900047297795160tmp > 9173Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, > check output in log > 9174Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239 > 9175Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests: > 9176Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.flink.test.runtime.HybridShuffleITCase > 9177Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532) > 9178Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479) > 9179Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322) > 9180Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266) > [...] {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33652) First Steps documentation is having empty page link
[ https://issues.apache.org/jira/browse/FLINK-33652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790075#comment-17790075 ] Wencong Liu commented on FLINK-33652: - Hello [~pranav.sharma], thanks for the careful investigation. Feel free to open a pull request! > First Steps documentation is having empty page link > --- > > Key: FLINK-33652 > URL: https://issues.apache.org/jira/browse/FLINK-33652 > Project: Flink > Issue Type: Bug > Environment: Web >Reporter: Pranav Sharma >Priority: Minor > Attachments: image-2023-11-26-15-23-02-007.png, > image-2023-11-26-15-25-04-708.png > > > > Under this page URL > [link|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/try-flink/local_installation/], > under "Summary" heading, the "concepts" link is pointing to an empty page > [link_on_concepts|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/concepts/]. > Upon visiting, the tab heading contains HTML as well. (Attached screenshots) > It may be pointed to concepts/overview instead. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FLINK-33626) Wrong style in flink ui
[ https://issues.apache.org/jira/browse/FLINK-33626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789020#comment-17789020 ] Wencong Liu edited comment on FLINK-33626 at 11/23/23 8:27 AM: --- This is a similar issue with FLINK-33356 The navigation bar on Flink’s official website is messed up. - ASF JIRA (apache.org) [~Sergey Nuyanzin] could you revert the modification to the file {*}book{*}? !image-2023-11-23-16-23-57-678.png! was (Author: JIRAUSER281639): This is a similar issue with [FLINK-33356] The navigation bar on Flink’s official website is messed up. - ASF JIRA (apache.org) [~snuyanzin] could you revert the modification to the file {*}book{*}? !image-2023-11-23-16-23-57-678.png! > Wrong style in flink ui > --- > > Key: FLINK-33626 > URL: https://issues.apache.org/jira/browse/FLINK-33626 > Project: Flink > Issue Type: Bug > Components: Travis >Affects Versions: 1.19.0 >Reporter: Fang Yong >Priority: Major > Attachments: image-2023-11-23-16-06-44-000.png, > image-2023-11-23-16-23-57-678.png > > > https://nightlies.apache.org/flink/flink-docs-master/ > !image-2023-11-23-16-06-44-000.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33626) Wrong style in flink ui
[ https://issues.apache.org/jira/browse/FLINK-33626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789020#comment-17789020 ] Wencong Liu commented on FLINK-33626: - This is an issue similar to FLINK-33356 (The navigation bar on Flink’s official website is messed up). [~snuyanzin], could you revert the modification to the file {*}book{*}? !image-2023-11-23-16-23-57-678.png! > Wrong style in flink ui > --- > > Key: FLINK-33626 > URL: https://issues.apache.org/jira/browse/FLINK-33626 > Project: Flink > Issue Type: Bug > Components: Travis >Affects Versions: 1.19.0 >Reporter: Fang Yong >Priority: Major > Attachments: image-2023-11-23-16-06-44-000.png, > image-2023-11-23-16-23-57-678.png > > > https://nightlies.apache.org/flink/flink-docs-master/ > !image-2023-11-23-16-06-44-000.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33626) Wrong style in flink ui
[ https://issues.apache.org/jira/browse/FLINK-33626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33626: Attachment: image-2023-11-23-16-23-57-678.png > Wrong style in flink ui > --- > > Key: FLINK-33626 > URL: https://issues.apache.org/jira/browse/FLINK-33626 > Project: Flink > Issue Type: Bug > Components: Travis >Affects Versions: 1.19.0 >Reporter: Fang Yong >Priority: Major > Attachments: image-2023-11-23-16-06-44-000.png, > image-2023-11-23-16-23-57-678.png > > > https://nightlies.apache.org/flink/flink-docs-master/ > !image-2023-11-23-16-06-44-000.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FLINK-33502) HybridShuffleITCase caused a fatal error
[ https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788998#comment-17788998 ] Wencong Liu edited comment on FLINK-33502 at 11/23/23 6:49 AM: --- Thank [~mapohl] for your help. The fix will be merged soon. was (Author: JIRAUSER281639): Thank [~mapohl] for your help. The fix should be merged soon. > HybridShuffleITCase caused a fatal error > > > Key: FLINK-33502 > URL: https://issues.apache.org/jira/browse/FLINK-33502 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Affects Versions: 1.19.0 >Reporter: Matthias Pohl >Priority: Major > Labels: pull-request-available, test-stability > Attachments: image-2023-11-20-14-37-37-321.png > > > [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177] > {code:java} > Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check > output in log > 9168Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239 > 9169Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests: > 9170Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.flink.test.runtime.HybridShuffleITCase > 9171Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? 
> 9172Error: 21:21:35 21:21:35.379 [ERROR] Command was /bin/sh -c cd > /root/flink/flink-tests && /usr/lib/jvm/jdk-11.0.19+7/bin/java -XX:+UseG1GC > -Xms256m -XX:+IgnoreUnrecognizedVMOptions > --add-opens=java.base/java.util=ALL-UNNAMED > --add-opens=java.base/java.io=ALL-UNNAMED -Xmx1536m -jar > /root/flink/flink-tests/target/surefire/surefirebooter10811559899200556131.jar > /root/flink/flink-tests/target/surefire 2023-11-07T20-32-50_466-jvmRun4 > surefire6242806641230738408tmp surefire_1603959900047297795160tmp > 9173Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, > check output in log > 9174Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239 > 9175Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests: > 9176Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.flink.test.runtime.HybridShuffleITCase > 9177Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532) > 9178Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479) > 9179Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322) > 9180Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266) > [...] {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error
[ https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788998#comment-17788998 ] Wencong Liu commented on FLINK-33502: - Thank [~mapohl] for your help. The fix should be merged soon. > HybridShuffleITCase caused a fatal error > > > Key: FLINK-33502 > URL: https://issues.apache.org/jira/browse/FLINK-33502 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Affects Versions: 1.19.0 >Reporter: Matthias Pohl >Priority: Major > Labels: pull-request-available, test-stability > Attachments: image-2023-11-20-14-37-37-321.png > > > [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177] > {code:java} > Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check > output in log > 9168Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239 > 9169Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests: > 9170Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.flink.test.runtime.HybridShuffleITCase > 9171Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? 
> 9172Error: 21:21:35 21:21:35.379 [ERROR] Command was /bin/sh -c cd > /root/flink/flink-tests && /usr/lib/jvm/jdk-11.0.19+7/bin/java -XX:+UseG1GC > -Xms256m -XX:+IgnoreUnrecognizedVMOptions > --add-opens=java.base/java.util=ALL-UNNAMED > --add-opens=java.base/java.io=ALL-UNNAMED -Xmx1536m -jar > /root/flink/flink-tests/target/surefire/surefirebooter10811559899200556131.jar > /root/flink/flink-tests/target/surefire 2023-11-07T20-32-50_466-jvmRun4 > surefire6242806641230738408tmp surefire_1603959900047297795160tmp > 9173Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, > check output in log > 9174Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239 > 9175Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests: > 9176Error: 21:21:35 21:21:35.379 [ERROR] > org.apache.flink.test.runtime.HybridShuffleITCase > 9177Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532) > 9178Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479) > 9179Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322) > 9180Error: 21:21:35 21:21:35.379 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266) > [...] {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error
[ https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787828#comment-17787828 ] Wencong Liu commented on FLINK-33502: - Thank you for your detailed reply. I am currently trying to download the build artifacts for the corresponding stage. However, I noticed that the log collection downloaded using the method shown in the figure is different from the logs-ci-test_ci_tests-1699014739.zip that you mentioned. !image-2023-11-20-14-37-37-321.png|width=839,height=434! Could you please advise me on how to download logs-ci-test_ci_tests-1699014739.zip?
[jira] [Updated] (FLINK-33502) HybridShuffleITCase caused a fatal error
[ https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33502: Attachment: image-2023-11-20-14-37-37-321.png
[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error
[ https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787224#comment-17787224 ] Wencong Liu commented on FLINK-33502: - Thank you for your reminder, [~mapohl]. Do you know of any way to obtain the complete runtime logs of this ITCase? In the local IDE, we can configure _log4j2-test.properties_ to directly output INFO-level logs to the console. From the link on GitHub, I can only see that the process exit code is 239. Based on this information alone, I am currently unable to identify the root cause.
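The comment above mentions configuring _log4j2-test.properties_ to send INFO-level logs to the console. A minimal sketch of such a file (standard Log4j 2 properties format; the pattern layout here is illustrative, not Flink's exact default):

```properties
# Route all loggers to the console at INFO level.
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender

# Console appender definition.
appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```

Placed on the test classpath, this overrides the default test logging so the forked JVM's output is visible before a crash.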
[jira] [Comment Edited] (FLINK-33569) Could not deploy yarn-application when using yarn over s3a filesystem.
[ https://issues.apache.org/jira/browse/FLINK-33569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17786674#comment-17786674 ] Bodong Liu edited comment on FLINK-33569 at 11/16/23 8:59 AM: -- I want to fix this issue by using {code:java} new Path(tmpConfigurationFile.toURI()){code} instead of {code:java} new Path(tmpConfigurationFile.getAbsolutePath()){code} was (Author: JIRAUSER303071): I want to fix this issue by using {code:java} new Path(tmpConfigurationFile.toPath().toAbsolutePath().toUri()){code} instead of {code:java} new Path(tmpConfigurationFile.getAbsolutePath()){code} > Could not deploy yarn-application when using yarn over s3a filesystem. > -- > > Key: FLINK-33569 > URL: https://issues.apache.org/jira/browse/FLINK-33569 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.18.0, 1.17.1 > Environment: h1. *Env:* > * OS: ArchLinux kernel:{color:#00}6.6.1 AMD64{color} > * Flink: 1.17.1 > * Hadoop: 3.3.6 > * Minio: 2023-11-15 > h1. Settings > h2. hadoop core-site.xml: > > {code:java} > > fs.defaultFS > s3a://hadoop > > > fs.s3a.path.style.access > true > > > > fs.s3a.access.key > admin > > > > fs.s3a.secret.key > password > > > > fs.s3a.endpoint > http://localhost:9000 > > > fs.s3a.connection.establish.timeout > 5000 > > > fs.s3a.multipart.size > 512M > > > fs.s3a.impl > org.apache.hadoop.fs.s3a.S3AFileSystem > > > fs.AbstractFileSystem.s3a.impl > org.apache.hadoop.fs.s3a.S3A > > {code} > h1. Flink run command: > ./bin/flink run-application -t yarn-application > ./examples/streaming/TopSpeedWindowing.jar > > >Reporter: Bodong Liu >Priority: Minor > Attachments: 2023-11-16_16-47.png, image-2023-11-16-16-46-21-684.png, > image-2023-11-16-16-48-40-223.png > > > > I now use the `yarn-application` mode to deploy Flink. I found that when I > set Hadoop's storage to the s3a file system, Flink could not submit tasks to > Yarn. 
> The error is reported as follows: > {code:java} > > The program finished with the following exception: > org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't > deploy Yarn Application Cluster > at > org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:481) > at > org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67) > at > org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:212) > at > org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1098) > at > org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) > at > org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at > org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157) > Caused by: org.apache.hadoop.fs.PathIOException: `Cannot get relative path > for > URI:file:///tmp/application_1700122774429_0001-flink-conf.yaml5526160496134930395.tmp': > Input/output error > at > org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.getFinalPath(CopyFromLocalOperation.java:360) > at > org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.uploadSourceFromFS(CopyFromLocalOperation.java:222) > at > org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.execute(CopyFromLocalOperation.java:169) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$copyFromLocalFile$26(S3AFileSystem.java:3854) > at > org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547) > at > 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528) > at > org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2480) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2499) > at >
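The proposed change above — passing `tmpConfigurationFile.toURI()` rather than `getAbsolutePath()` into Hadoop's `Path` — matters because s3a's `CopyFromLocalOperation` needs a scheme-qualified source URI to compute a relative path. A minimal JDK-only sketch of the difference (Hadoop's `Path` class itself is not used here):

```java
import java.io.File;

public class PathSchemeDemo {
    // A scheme-qualified URI string, e.g. "file:/tmp/flink-conf.yaml".
    // This is the form Hadoop's Path can resolve against the s3a filesystem.
    static String asUri(File f) {
        return f.toURI().toString();
    }

    // A bare filesystem path with no scheme, e.g. "/tmp/flink-conf.yaml".
    // Feeding this form to Path is what produced the
    // "Cannot get relative path for URI" error in the stack trace.
    static String asAbsolutePath(File f) {
        return f.getAbsolutePath();
    }

    public static void main(String[] args) {
        File tmp = new File("/tmp/flink-conf.yaml");
        System.out.println(asUri(tmp));
        System.out.println(asAbsolutePath(tmp));
    }
}
```
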
[jira] [Comment Edited] (FLINK-33569) Could not deploy yarn-application when using yarn over s3a filesystem.
[ https://issues.apache.org/jira/browse/FLINK-33569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17786674#comment-17786674 ] Bodong Liu edited comment on FLINK-33569 at 11/16/23 8:54 AM: -- I want to fix this issue by using {code:java} new Path(tmpConfigurationFile.toPath().toAbsolutePath().toUri()){code} instead of {code:java} new Path(tmpConfigurationFile.getAbsolutePath()){code} was (Author: JIRAUSER303071): I want to fix this issue.
[jira] [Commented] (FLINK-33569) Could not deploy yarn-application when using yarn over s3a filesystem.
[ https://issues.apache.org/jira/browse/FLINK-33569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17786674#comment-17786674 ] Bodong Liu commented on FLINK-33569: I want to fix this issue.
[jira] [Updated] (FLINK-33569) Could not deploy yarn-application when using yarn over s3a filesystem.
[ https://issues.apache.org/jira/browse/FLINK-33569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bodong Liu updated FLINK-33569: --- Description: I now use the `yarn-application` mode to deploy Flink. I found that when I set Hadoop's storage to the s3a file system, Flink could not submit tasks to Yarn. The error is reported as follows: {code:java} The program finished with the following exception: org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:481) at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67) at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:212) at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1098) at org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189) at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157) Caused by: org.apache.hadoop.fs.PathIOException: `Cannot get relative path for URI:file:///tmp/application_1700122774429_0001-flink-conf.yaml5526160496134930395.tmp': Input/output error at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.getFinalPath(CopyFromLocalOperation.java:360) at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.uploadSourceFromFS(CopyFromLocalOperation.java:222) at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.execute(CopyFromLocalOperation.java:169) at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$copyFromLocalFile$26(S3AFileSystem.java:3854) at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547) at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528) at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449) at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2480) at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2499) at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFromLocalFile(S3AFileSystem.java:3847) at org.apache.flink.yarn.YarnApplicationFileUploader.copyToRemoteApplicationDir(YarnApplicationFileUploader.java:397) at org.apache.flink.yarn.YarnApplicationFileUploader.uploadLocalFileToRemote(YarnApplicationFileUploader.java:202) at org.apache.flink.yarn.YarnApplicationFileUploader.registerSingleLocalResource(YarnApplicationFileUploader.java:181) at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:1050) at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:626) at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:474) ... 10 more {code} I found, by reading the source code and debugging, that when Hadoop uses the s3a file system, file uploads and downloads must use scheme-qualified URIs to build `Path` parameters. In the `org.apache.flink.yarn.YarnClusterDescriptor` class, when uploading the temporarily generated `yaml` configuration file, the file's absolute path is used instead of its URI as the `Path` constructor argument, while all other file upload and download operations use a URI as the `Path` parameter. This is the cause of the error reported above. was: I now use the `yarn-application` mode to deploy Flink.
[jira] [Created] (FLINK-33569) Could not deploy yarn-application when using yarn over s3a filesystem.
Bodong Liu created FLINK-33569: -- Summary: Could not deploy yarn-application when using yarn over s3a filesystem. Key: FLINK-33569 URL: https://issues.apache.org/jira/browse/FLINK-33569 Project: Flink Issue Type: Bug Components: Deployment / YARN Affects Versions: 1.17.1, 1.18.0 Environment: h1. *Env:* * OS: ArchLinux kernel:{color:#00}6.6.1 AMD64{color} * Flink: 1.17.1 * Hadoop: 3.3.6 * Minio: 2023-11-15 h1. Settings h2. hadoop core-site.xml: {code:java} fs.defaultFS s3a://hadoop fs.s3a.path.style.access true fs.s3a.access.key admin fs.s3a.secret.key password fs.s3a.endpoint http://localhost:9000 fs.s3a.connection.establish.timeout 5000 fs.s3a.multipart.size 512M fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem fs.AbstractFileSystem.s3a.impl org.apache.hadoop.fs.s3a.S3A {code} h1. Flink run command: ./bin/flink run-application -t yarn-application ./examples/streaming/TopSpeedWindowing.jar Reporter: Bodong Liu Attachments: 2023-11-16_16-47.png, image-2023-11-16-16-46-21-684.png, image-2023-11-16-16-48-40-223.png I now use the `yarn-application` mode to deploy Flink. I found that when I set Hadoop's storage to the s3a file system, Flink could not submit tasks to Yarn. 
[jira] [Commented] (FLINK-30483) Make Avro format support for TIMESTAMP_LTZ
[ https://issues.apache.org/jira/browse/FLINK-30483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17786078#comment-17786078 ] Mingliang Liu commented on FLINK-30483: --- Just a heads-up: there is a FLIP for this, [FLIP-378|https://cwiki.apache.org//confluence/display/FLINK/FLIP-378%3A+Support+Avro+timestamp+with+local+timezone], and related discussions can happen on the mailing list and/or the related Jira FLINK-33198 > Make Avro format support for TIMESTAMP_LTZ > -- > > Key: FLINK-30483 > URL: https://issues.apache.org/jira/browse/FLINK-30483 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.16.0 >Reporter: Mingliang Liu >Assignee: Jagadesh Adireddi >Priority: Major > Labels: pull-request-available, stale-assigned > > Currently the Avro format does not support the TIMESTAMP_LTZ (short for > TIMESTAMP_WITH_LOCAL_TIME_ZONE) type. Avro 1.10+ introduces local timestamp > logical types (both milliseconds and microseconds), see spec [1]. As TIMESTAMP > currently only supports milliseconds, we can make TIMESTAMP_LTZ support > milliseconds first. > A related work is to support microseconds, and there is already a > work-in-progress Jira FLINK-23589 for the TIMESTAMP type. We can consolidate the > effort or track that separately for TIMESTAMP_LTZ. > [1] > https://avro.apache.org/docs/1.10.2/spec.html#Local+timestamp+%28millisecond+precision%29 -- This message was sent by Atlassian Jira (v8.20.10#820010)
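Loosely, Avro's `timestamp-millis` logical type encodes an absolute instant (the semantics underlying TIMESTAMP_LTZ), while the Avro 1.10+ `local-timestamp-millis` type mentioned above encodes a zone-less wall-clock reading (like SQL TIMESTAMP). A JDK-only sketch of that semantic gap — the class and method names here are illustrative, not Avro or Flink API:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class LocalTimestampDemo {
    // A wall-clock reading with no zone attached, analogous to
    // local-timestamp-millis / SQL TIMESTAMP.
    static LocalDateTime wallClock() {
        return LocalDateTime.of(2023, 1, 1, 12, 0);
    }

    // The same reading pinned to a zone becomes an absolute instant,
    // analogous to timestamp-millis / SQL TIMESTAMP_LTZ.
    static Instant asInstant(ZoneId zone) {
        return wallClock().atZone(zone).toInstant();
    }

    public static void main(String[] args) {
        // One wall-clock value, two different instants depending on the zone.
        System.out.println(asInstant(ZoneId.of("UTC")));
        System.out.println(asInstant(ZoneId.of("Asia/Tokyo")));
    }
}
```
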
[jira] [Commented] (FLINK-33323) HybridShuffleITCase fails with produced an uncaught exception in FatalExitExceptionHandler
[ https://issues.apache.org/jira/browse/FLINK-33323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784350#comment-17784350 ] Wencong Liu commented on FLINK-33323: - Thanks for your reminder! [~mapohl]. Could you please help me get the complete log from the running phase of HybridShuffleITCase, like the `mvn-3.zip` file in this Jira? I've taken a look at your issue, and the phenomenon doesn't seem to match the one described in this Jira. Therefore, I would need additional logs to investigate further. > HybridShuffleITCase fails with produced an uncaught exception in > FatalExitExceptionHandler > -- > > Key: FLINK-33323 > URL: https://issues.apache.org/jira/browse/FLINK-33323 > Project: Flink > Issue Type: Bug > Components: Runtime / Network >Affects Versions: 1.19.0 >Reporter: Sergey Nuyanzin >Assignee: Wencong Liu >Priority: Critical > Labels: pull-request-available, test-stability > Attachments: mvn-3.zip > > > This build > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53853=logs=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3=712ade8c-ca16-5b76-3acd-14df33bc1cb1=9166 > fails with > {noformat} > 01:15:38,516 [blocking-shuffle-io-thread-4] ERROR > org.apache.flink.util.FatalExitExceptionHandler [] - FATAL: > Thread 'blocking-shuffle-io-thread-4' produced an uncaught exception. > Stopping the process... 
> java.util.concurrent.RejectedExecutionException: Task > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4275bb45[Not > completed, task = > java.util.concurrent.Executors$RunnableAdapter@488dd035[Wrapped task = > org.apache.fl > ink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler$$Lambda$2561/0x000801a2f728@464a3754]] > rejected from > java.util.concurrent.ScheduledThreadPoolExecutor@22747816[Shutting down, pool > size = 10, active threads = 9, > queued tasks = 1, completed tasks = 1] > at > java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) > ~[?:?] > at > java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) > ~[?:?] > at > java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340) > ~[?:?] > at > java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562) > ~[?:?] > at > org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.run(DiskIOScheduler.java:151) > ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT] > at > org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.lambda$triggerScheduling$0(DiskIOScheduler.java:308) > ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?] > at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) > [?:?] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) > [?:?] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > [?:?] > at java.lang.Thread.run(Thread.java:833) [?:?] > {noformat} > also logs are attached -- This message was sent by Atlassian Jira (v8.20.10#820010)
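The exception above is the generic JDK behavior when a task is scheduled on an executor that has already begun shutting down: `ScheduledThreadPoolExecutor` refuses the task via its default `AbortPolicy`. A minimal standalone sketch of that failure mode (class name is illustrative, not from Flink):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RejectedAfterShutdown {
    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        // Begin shutdown, as happens when the shuffle I/O pool is torn down.
        pool.shutdown();
        try {
            // Scheduling against a pool that is shutting down is refused by
            // the default AbortPolicy, which throws RejectedExecutionException
            // -- the same failure mode as in the stack trace above.
            pool.schedule(() -> System.out.println("ran"), 1, TimeUnit.SECONDS);
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: pool is shutting down");
        }
    }
}
```

So a race where `DiskIOScheduler` triggers another scheduling round while its executor is being shut down would surface exactly this exception in the uncaught-exception handler.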
[jira] [Commented] (FLINK-33445) Translate DataSet migration guideline to Chinese
[ https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782384#comment-17782384 ] Wencong Liu commented on FLINK-33445: - Thanks [~liyubin117] ! Assigned to you. Please go ahead. > Translate DataSet migration guideline to Chinese > > > Key: FLINK-33445 > URL: https://issues.apache.org/jira/browse/FLINK-33445 > Project: Flink > Issue Type: Improvement > Components: chinese-translation >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Assignee: Yubin Li >Priority: Major > Labels: starter > Fix For: 1.19.0 > > > The [FLIINK-33041|https://issues.apache.org/jira/browse/FLINK-33041] about > adding an introduction about how to migrate DataSet API to DataStream has > been merged into master branch. Here is the > [LINK|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] > in the Flink website. > According to the [contribution > guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], > we should add an identical markdown file in {{content.zh/}} and translate it > to Chinese. Any community volunteers are welcomed to take this task. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33445) Translate DataSet migration guideline to Chinese
[ https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33445: Description: The [FLIINK-33041|https://issues.apache.org/jira/browse/FLINK-33041] about adding an introduction about how to migrate DataSet API to DataStream has been merged into master branch. Here is the [LINK|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] in the Flink website. According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Any community volunteers are welcomed to take this task. was: The FLIINK-33041 about adding an introduction about how to migrate DataSet API to DataStream has been merged into master branch. Here is the [LINK|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] in the Flink website. According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Any community volunteers are welcomed to take this task. > Translate DataSet migration guideline to Chinese > > > Key: FLINK-33445 > URL: https://issues.apache.org/jira/browse/FLINK-33445 > Project: Flink > Issue Type: Improvement > Components: chinese-translation >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Priority: Major > Labels: starter > Fix For: 1.19.0 > > > The [FLIINK-33041|https://issues.apache.org/jira/browse/FLINK-33041] about > adding an introduction about how to migrate DataSet API to DataStream has > been merged into master branch. Here is the > [LINK|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] > in the Flink website. 
> According to the [contribution > guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], > we should add an identical markdown file in {{content.zh/}} and translate it > to Chinese. Any community volunteers are welcomed to take this task. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33445) Translate DataSet migration guideline to Chinese
[ https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33445: Component/s: chinese-translation (was: Documentation) > Translate DataSet migration guideline to Chinese > > > Key: FLINK-33445 > URL: https://issues.apache.org/jira/browse/FLINK-33445 > Project: Flink > Issue Type: Improvement > Components: chinese-translation >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Priority: Major > Fix For: 1.19.0 > > > The FLIINK-33041 about adding an introduction about how to migrate DataSet > API to DataStream has been merged into master branch. Here is the > [LINK|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] > in the Flink website. > According to the [contribution > guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], > we should add an identical markdown file in {{content.zh/}} and translate it > to Chinese. Any community volunteers are welcomed to take this task. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33445) Translate DataSet migration guideline to Chinese
[ https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33445: Labels: starter (was: ) > Translate DataSet migration guideline to Chinese > > > Key: FLINK-33445 > URL: https://issues.apache.org/jira/browse/FLINK-33445 > Project: Flink > Issue Type: Improvement > Components: chinese-translation >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Priority: Major > Labels: starter > Fix For: 1.19.0 > > > The FLIINK-33041 about adding an introduction about how to migrate DataSet > API to DataStream has been merged into master branch. Here is the > [LINK|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] > in the Flink website. > According to the [contribution > guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], > we should add an identical markdown file in {{content.zh/}} and translate it > to Chinese. Any community volunteers are welcomed to take this task. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33445) Translate DataSet migration guideline to Chinese
[ https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33445: Description: The FLIINK-33041 about adding an introduction about how to migrate DataSet API to DataStream has been merged into master branch. Here is the [LINK|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] in the Flink website. According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Any community volunteers are welcomed to take this task. was: The [FLIINK-33041|https://issues.apache.org/jira/browse/FLINK-33041] about adding an introduction about how to migrate DataSet API to DataStream has been merged into master branch. Here is the link in the Flink website: [How to Migrate from DataSet to DataStream | Apache Flink|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Any community volunteers are welcomed to take this task. > Translate DataSet migration guideline to Chinese > > > Key: FLINK-33445 > URL: https://issues.apache.org/jira/browse/FLINK-33445 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Priority: Major > Fix For: 1.19.0 > > > The FLIINK-33041 about adding an introduction about how to migrate DataSet > API to DataStream has been merged into master branch. Here is the > [LINK|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] > in the Flink website. 
> According to the [contribution > guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], > we should add an identical markdown file in {{content.zh/}} and translate it > to Chinese. Any community volunteers are welcomed to take this task. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-33445) Translate DataSet migration guideline to Chinese
Wencong Liu created FLINK-33445: --- Summary: Translate DataSet migration guideline to Chinese Key: FLINK-33445 URL: https://issues.apache.org/jira/browse/FLINK-33445 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.19.0 Reporter: Wencong Liu Fix For: 1.19.0 [FLINK-33041|https://issues.apache.org/jira/browse/FLINK-33041], which adds an introduction on how to migrate the DataSet API to DataStream, has been merged into the master branch. Here is the link on the Flink website: [How to Migrate from DataSet to DataStream | Apache Flink|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Any community volunteers are welcome to take this task. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33356) The navigation bar on Flink’s official website is messed up.
[ https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17781629#comment-17781629 ] Wencong Liu commented on FLINK-33356: - This is because of a recent failure in the documentation build. Once the issue with the document building process is resolved, the website will return to normal. > The navigation bar on Flink’s official website is messed up. > > > Key: FLINK-33356 > URL: https://issues.apache.org/jira/browse/FLINK-33356 > Project: Flink > Issue Type: Bug > Components: Project Website >Reporter: Junrui Li >Assignee: Wencong Liu >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > Attachments: image-2023-10-25-11-55-52-653.png, > image-2023-10-25-12-34-22-790.png > > > The side navigation bar on the Flink official website at the following link: > [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed > up, as shown in the attached screenshot. > !image-2023-10-25-11-55-52-653.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] (FLINK-27758) [JUnit5 Migration] Module: flink-table-runtime
[ https://issues.apache.org/jira/browse/FLINK-27758 ] Chao Liu deleted comment on FLINK-27758: -- was (Author: JIRAUSER302840): Hi [~Sergey Nuyanzin] I'd like to work on this ticket, could I get assigned to this? > [JUnit5 Migration] Module: flink-table-runtime > -- > > Key: FLINK-27758 > URL: https://issues.apache.org/jira/browse/FLINK-27758 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Runtime, Tests >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available, stale-assigned > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-27758) [JUnit5 Migration] Module: flink-table-runtime
[ https://issues.apache.org/jira/browse/FLINK-27758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17781135#comment-17781135 ] Chao Liu commented on FLINK-27758: -- Hi [~Sergey Nuyanzin] I'd like to work on this ticket, could I get assigned to this? > [JUnit5 Migration] Module: flink-table-runtime > -- > > Key: FLINK-27758 > URL: https://issues.apache.org/jira/browse/FLINK-27758 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Runtime, Tests >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available, stale-assigned > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FLINK-33356) The navigation bar on Flink’s official website is messed up.
[ https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17779330#comment-17779330 ] Wencong Liu edited comment on FLINK-33356 at 10/25/23 5:58 AM: --- Hello [~JunRuiLi], I found that this case is due to commit "30e8b3de05c1d6b75d8f27b9188a1d34f1589ac5", which modified the subproject commit. I think we should revert this change. Could you assign this to me? !image-2023-10-25-12-34-22-790.png! was (Author: JIRAUSER281639): Hello [~JunRuiLi] , I found this case is due to the commit "30e8b3de05c1d6b75d8f27b9188a1d34f1589ac5", which modified the subproject commit. I think we should revert this change !image-2023-10-25-12-34-22-790.png! > The navigation bar on Flink’s official website is messed up. > > > Key: FLINK-33356 > URL: https://issues.apache.org/jira/browse/FLINK-33356 > Project: Flink > Issue Type: Bug > Components: Project Website >Reporter: Junrui Li >Priority: Major > Attachments: image-2023-10-25-11-55-52-653.png, > image-2023-10-25-12-34-22-790.png > > > The side navigation bar on the Flink official website at the following link: > [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed > up, as shown in the attached screenshot. > !image-2023-10-25-11-55-52-653.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33356) The navigation bar on Flink’s official website is messed up.
[ https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17779330#comment-17779330 ] Wencong Liu commented on FLINK-33356: - Hello [~JunRuiLi] , I found this case is due to the commit "30e8b3de05c1d6b75d8f27b9188a1d34f1589ac5", which modified the subproject commit. I think we should revert this change !image-2023-10-25-12-34-22-790.png! > The navigation bar on Flink’s official website is messed up. > > > Key: FLINK-33356 > URL: https://issues.apache.org/jira/browse/FLINK-33356 > Project: Flink > Issue Type: Bug > Components: Project Website >Reporter: Junrui Li >Priority: Major > Attachments: image-2023-10-25-11-55-52-653.png, > image-2023-10-25-12-34-22-790.png > > > The side navigation bar on the Flink official website at the following link: > [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed > up, as shown in the attached screenshot. > !image-2023-10-25-11-55-52-653.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33356) The navigation bar on Flink’s official website is messed up.
[ https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33356: Attachment: image-2023-10-25-12-34-22-790.png > The navigation bar on Flink’s official website is messed up. > > > Key: FLINK-33356 > URL: https://issues.apache.org/jira/browse/FLINK-33356 > Project: Flink > Issue Type: Bug > Components: Project Website >Reporter: Junrui Li >Priority: Major > Attachments: image-2023-10-25-11-55-52-653.png, > image-2023-10-25-12-34-22-790.png > > > The side navigation bar on the Flink official website at the following link: > [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed > up, as shown in the attached screenshot. > !image-2023-10-25-11-55-52-653.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33144) Deprecate Iteration API in DataStream
[ https://issues.apache.org/jira/browse/FLINK-33144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33144: Description: [FLIP-357: Deprecate Iteration API of DataStream - Apache Flink - Apache Software Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-357%3A+Deprecate+Iteration+API+of+DataStream] has decided to deprecate the Iteration API of DataStream and remove it completely in the next major version. (was: FLIP-357 has decided to deprecate the Iteration API of DataStream and remove it completely in the next major version.) > Deprecate Iteration API in DataStream > - > > Key: FLINK-33144 > URL: https://issues.apache.org/jira/browse/FLINK-33144 > Project: Flink > Issue Type: Technical Debt > Components: API / DataStream >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Priority: Major > Fix For: 1.19.0 > > > [FLIP-357: Deprecate Iteration API of DataStream - Apache Flink - Apache > Software > Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-357%3A+Deprecate+Iteration+API+of+DataStream] > has decided to deprecate the Iteration API of DataStream and remove it > completely in the next major version. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33144) Deprecate Iteration API in DataStream
[ https://issues.apache.org/jira/browse/FLINK-33144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33144: Description: FLIP-357 has decided to deprecate the Iteration API of DataStream and remove it completely in the next major version. In the future, if other modules in the Flink repository require the use of the Iteration API, we can consider extracting all Iteration implementations from the Flink ML repository into an independent module. (was: Currently, the Iteration API of DataStream is incomplete. For instance, it lacks support for iteration in sync mode and exactly once semantics. Additionally, it does not offer the ability to set iteration termination conditions. As a result, it's hard for developers to build an iteration pipeline by DataStream in the practical applications such as machine learning. [FLIP-176: Unified Iteration to Support Algorithms|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=184615300] has introduced a unified iteration library in the Flink ML repository. This library addresses all the issues present in the Iteration API of DataStream and could provide solution for all the iteration use-cases. However, maintaining two separate implementations of iteration in both the Flink repository and the Flink ML repository would introduce unnecessary complexity and make it difficult to maintain the Iteration API. FLIP-357 has decided to deprecate the Iteration API of DataStream and remove it completely in the next major version. In the future, if other modules in the Flink repository require the use of the Iteration API, we can consider extracting all Iteration implementations from the Flink ML repository into an independent module.) 
> Deprecate Iteration API in DataStream > - > > Key: FLINK-33144 > URL: https://issues.apache.org/jira/browse/FLINK-33144 > Project: Flink > Issue Type: Technical Debt > Components: API / DataStream >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Priority: Major > Fix For: 1.19.0 > > > FLIP-357 has decided to deprecate the Iteration API of DataStream and remove > it completely in the next major version. In the future, if other modules in > the Flink repository require the use of the Iteration API, we can consider > extracting all Iteration implementations from the Flink ML repository into an > independent module. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33144) Deprecate Iteration API in DataStream
[ https://issues.apache.org/jira/browse/FLINK-33144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wencong Liu updated FLINK-33144: Description: FLIP-357 has decided to deprecate the Iteration API of DataStream and remove it completely in the next major version. (was: FLIP-357 has decided to deprecate the Iteration API of DataStream and remove it completely in the next major version. In the future, if other modules in the Flink repository require the use of the Iteration API, we can consider extracting all Iteration implementations from the Flink ML repository into an independent module.) > Deprecate Iteration API in DataStream > - > > Key: FLINK-33144 > URL: https://issues.apache.org/jira/browse/FLINK-33144 > Project: Flink > Issue Type: Technical Debt > Components: API / DataStream >Affects Versions: 1.19.0 >Reporter: Wencong Liu >Priority: Major > Fix For: 1.19.0 > > > FLIP-357 has decided to deprecate the Iteration API of DataStream and remove > it completely in the next major version. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-33144) Deprecate Iteration API in DataStream
Wencong Liu created FLINK-33144: --- Summary: Deprecate Iteration API in DataStream Key: FLINK-33144 URL: https://issues.apache.org/jira/browse/FLINK-33144 Project: Flink Issue Type: Technical Debt Components: API / DataStream Affects Versions: 1.19.0 Reporter: Wencong Liu Fix For: 1.19.0 Currently, the Iteration API of DataStream is incomplete. For instance, it lacks support for iteration in sync mode and exactly once semantics. Additionally, it does not offer the ability to set iteration termination conditions. As a result, it's hard for developers to build an iteration pipeline by DataStream in the practical applications such as machine learning. [FLIP-176: Unified Iteration to Support Algorithms|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=184615300] has introduced a unified iteration library in the Flink ML repository. This library addresses all the issues present in the Iteration API of DataStream and could provide solution for all the iteration use-cases. However, maintaining two separate implementations of iteration in both the Flink repository and the Flink ML repository would introduce unnecessary complexity and make it difficult to maintain the Iteration API. FLIP-357 has decided to deprecate the Iteration API of DataStream and remove it completely in the next major version. In the future, if other modules in the Flink repository require the use of the Iteration API, we can consider extracting all Iteration implementations from the Flink ML repository into an independent module. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33079) The gap between the checkpoint timeout and the interval settings is too large in the example
[ https://issues.apache.org/jira/browse/FLINK-33079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fangliang Liu updated FLINK-33079: -- Description: The gap between the checkpoint timeout and the interval settings is too large in the following example [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/checkpointing/] Some users will think that the documentation is the optimal solution and refer to this demo setting, and the result is that the actual checkpoint interval is not as expected because of the checkpoint-timeout !image-2023-09-13-14-17-12-718.png|width=682,height=468! The following situation occurs when the checkpoint interval is set to 20s and the checkpoint timeout is set to 10 minutes. !image-2023-09-13-14-19-05-493.png|width=1637,height=757! So lets do some optimization in the checkpoint example(e.g. checkpoint interval 60s, checkpoint timeout 60s), or provide more documentation for setting up checkpoints config. was: The gap between the checkpoint timeout and the interval settings is too large https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/checkpointing/ Some users will think that the documentation is the optimal solution and refer to this demo setting, and the result is that the actual checkpoint interval is not as expected because of the checkpoint-timeout !image-2023-09-13-14-17-12-718.png|width=682,height=468! The following situation occurs when the checkpoint interval is set to 20s and the checkpoint timeout is set to 10 minutes. !image-2023-09-13-14-19-05-493.png|width=1637,height=757! So lets do some optimization in the checkpoint example(e.g. checkpoint interval 60s, checkpoint timeout 60s), or provide more documentation for setting up checkpoints config. 
> The gap between the checkpoint timeout and the interval settings is too large > in the example > > > Key: FLINK-33079 > URL: https://issues.apache.org/jira/browse/FLINK-33079 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.19.0 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2023-09-13-14-17-12-718.png, > image-2023-09-13-14-19-05-493.png > > > The gap between the checkpoint timeout and the interval settings is too large > in the following example > [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/checkpointing/] > Some users will think that the documentation is the optimal solution and > refer to this demo setting, and the result is that the actual checkpoint > interval is not as expected because of the checkpoint-timeout > !image-2023-09-13-14-17-12-718.png|width=682,height=468! > The following situation occurs when the checkpoint interval is set to 20s and > the checkpoint timeout is set to 10 minutes. > !image-2023-09-13-14-19-05-493.png|width=1637,height=757! > So lets do some optimization in the checkpoint example(e.g. checkpoint > interval 60s, checkpoint timeout 60s), or provide more documentation for > setting up checkpoints config. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
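The alignment the issue suggests can be sketched as configuration; a minimal example assuming the standard {{execution.checkpointing.*}} options (the 60s values are the issue's suggestion for the docs example, not a general recommendation):

```yaml
# Keep the checkpoint timeout close to the interval, so a slow checkpoint
# fails fast instead of silently stretching the effective interval.
execution.checkpointing.interval: 60s
execution.checkpointing.timeout: 60s
```

With a 20s interval but a 10-minute timeout, as described above, one slow checkpoint can occupy its slot for up to 10 minutes, so the observed interval between completed checkpoints drifts far from the configured 20s.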
[jira] [Updated] (FLINK-33079) The gap between the checkpoint timeout and the interval settings is too large in the example
[ https://issues.apache.org/jira/browse/FLINK-33079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fangliang Liu updated FLINK-33079: -- Summary: The gap between the checkpoint timeout and the interval settings is too large in the example (was: The gap between the checkpoint timeout and the interval settings is too large) > The gap between the checkpoint timeout and the interval settings is too large > in the example > > > Key: FLINK-33079 > URL: https://issues.apache.org/jira/browse/FLINK-33079 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.19.0 >Reporter: Fangliang Liu >Priority: Major > Attachments: image-2023-09-13-14-17-12-718.png, > image-2023-09-13-14-19-05-493.png > > > The gap between the checkpoint timeout and the interval settings is too large > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/checkpointing/ > Some users will think that the documentation is the optimal solution and > refer to this demo setting, and the result is that the actual checkpoint > interval is not as expected because of the checkpoint-timeout > !image-2023-09-13-14-17-12-718.png|width=682,height=468! > The following situation occurs when the checkpoint interval is set to 20s and > the checkpoint timeout is set to 10 minutes. > !image-2023-09-13-14-19-05-493.png|width=1637,height=757! > So lets do some optimization in the checkpoint example(e.g. checkpoint > interval 60s, checkpoint timeout 60s), or provide more documentation for > setting up checkpoints config. > -- This message was sent by Atlassian Jira (v8.20.10#820010)