[GitHub] [flink] xintongsong commented on a diff in pull request #20256: [FLINK-24713][Runtime/Coordination] Postpone resourceManager serving
xintongsong commented on code in PR #20256:
URL: https://github.com/apache/flink/pull/20256#discussion_r922991336

## flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/active/ActiveResourceManager.java:
##

@@ -244,6 +253,20 @@ public void onPreviousAttemptWorkersRecovered(Collection recoveredWo
                    "Worker {} recovered from previous attempt.",
                    resourceId.getStringWithMetadata());
        }
+        if (recoveredWorkers.size() > 0) {
+            scheduleRunAsync(
+                    () -> {
+                        if (!readyToServeFuture.isDone()) {
+                            readyToServeFuture.complete(null);
+                            log.info(
+                                    "Timeout to wait recovery taskmanagers, recovery future is completed");
+                        }
+                    },
+                    previousWorkerRecoverTimeout.getSeconds(),
+                    TimeUnit.SECONDS);

Review Comment:
```suggestion
                    previousWorkerRecoverTimeout.toMillis(),
                    TimeUnit.MILLISECONDS);
```

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
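The suggestion replaces `getSeconds()`/`TimeUnit.SECONDS` with `toMillis()`/`TimeUnit.MILLISECONDS` so the sub-second part of the configured timeout is not silently dropped. A minimal, self-contained illustration of the difference using plain `java.time.Duration` (not the Flink scheduler itself):

```java
import java.time.Duration;

public class DurationPrecisionDemo {
    public static void main(String[] args) {
        // A timeout of 1.5 seconds, e.g. read from configuration.
        Duration timeout = Duration.ofMillis(1500);

        // getSeconds() truncates: 1500 ms becomes 1, and anything below
        // 1000 ms becomes 0, i.e. the delayed task would fire immediately.
        System.out.println(timeout.getSeconds());                 // 1
        System.out.println(Duration.ofMillis(900).getSeconds());  // 0

        // toMillis() keeps the configured precision.
        System.out.println(timeout.toMillis());                   // 1500
    }
}
```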
[GitHub] [flink] xintongsong commented on a diff in pull request #20256: [FLINK-24713][Runtime/Coordination] Postpone resourceManager serving
xintongsong commented on code in PR #20256:
URL: https://github.com/apache/flink/pull/20256#discussion_r922988717

## flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/active/ActiveResourceManager.java:
##

@@ -244,6 +254,20 @@ public void onPreviousAttemptWorkersRecovered(Collection recoveredWo
                    "Worker {} recovered from previous attempt.",
                    resourceId.getStringWithMetadata());
        }
+        if (recoveredWorkers.size() > 0) {

Review Comment:
There's no need to go through the `scheduleRunAsync` when timeout is zero.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
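One possible shape of what the two review comments ask for: skip the scheduler entirely when the recovery timeout is zero, and schedule in milliseconds otherwise. The class and `scheduler` below are stand-ins, not the actual `ActiveResourceManager`:

```java
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RecoveryTimeoutSketch {

    private final CompletableFuture<Void> readyToServeFuture = new CompletableFuture<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void onPreviousAttemptWorkersRecovered(int recoveredWorkerCount, Duration recoverTimeout) {
        if (recoveredWorkerCount > 0) {
            if (recoverTimeout.isZero()) {
                // No waiting period configured: complete directly instead of
                // going through the scheduler with a zero delay.
                readyToServeFuture.complete(null);
            } else {
                scheduler.schedule(
                        // complete() is a no-op if the future was already
                        // completed once all recovered workers registered.
                        () -> readyToServeFuture.complete(null),
                        recoverTimeout.toMillis(),
                        TimeUnit.MILLISECONDS);
            }
        }
    }
}
```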
[jira] [Commented] (FLINK-28113) Document periodic savepointing
[ https://issues.apache.org/jira/browse/FLINK-28113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567820#comment-17567820 ] Gyula Fora commented on FLINK-28113: I have unassigned this ticket for now, please open a PR if you are still working on it, otherwise someone else can take it :) > Document periodic savepointing > -- > > Key: FLINK-28113 > URL: https://issues.apache.org/jira/browse/FLINK-28113 > Project: Flink > Issue Type: Improvement > Components: Kubernetes Operator >Reporter: Gyula Fora >Priority: Major > Labels: Starter > Fix For: kubernetes-operator-1.1.0 > > > We should add a new section to the job management doc page about periodic > savepoint triggering > [https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/job-management/#savepoint-management] > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] flinkbot commented on pull request #20291: don't delay the deletion of checkpoints of incremental rocksdb
flinkbot commented on PR #20291: URL: https://github.com/apache/flink/pull/20291#issuecomment-1186793145 ## CI report: * 1b5540f001140e21145252abe9d1e1493f0cc71f UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (FLINK-28113) Document periodic savepointing
[ https://issues.apache.org/jira/browse/FLINK-28113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gyula Fora reassigned FLINK-28113: -- Assignee: (was: ConradJam) > Document periodic savepointing > -- > > Key: FLINK-28113 > URL: https://issues.apache.org/jira/browse/FLINK-28113 > Project: Flink > Issue Type: Improvement > Components: Kubernetes Operator >Reporter: Gyula Fora >Priority: Major > Labels: Starter > Fix For: kubernetes-operator-1.1.0 > > > We should add a new section to the job management doc page about periodic > savepoint triggering > [https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/job-management/#savepoint-management] > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] fredia opened a new pull request, #20291: don't delay the deletion of checkpoints of incremental rocksdb
fredia opened a new pull request, #20291: URL: https://github.com/apache/flink/pull/20291 ## What is the purpose of the change Do not update the lastUsedCheckpointID of PlaceholderStreamStateHandle. ## Verifying this change Please make sure both new and modified tests in this PR follows the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing This change added tests and can be verified as follows: - *SharedStateRegistryTest#testUnregisterPlaceholderState()* - *SharedStateRegistryTest#testUnregisterUnusedState()* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
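For context, a toy model of the behaviour the PR describes (these are not the real `SharedStateRegistry`/`PlaceholderStreamStateHandle` classes): re-registering shared state through a placeholder handle does not advance the entry's `lastUsedCheckpointID`, which, going by the PR title, is what otherwise delays the deletion of incremental RocksDB checkpoints:

```java
import java.util.HashMap;
import java.util.Map;

public class SharedStateRegistrySketch {

    interface StateHandle {}

    /** A normal handle pointing at newly uploaded state. */
    static final class RealHandle implements StateHandle {}

    /** A placeholder that only references state uploaded by an earlier checkpoint. */
    static final class PlaceholderHandle implements StateHandle {}

    private static final class Entry {
        final StateHandle handle;
        long lastUsedCheckpointId;

        Entry(StateHandle handle, long checkpointId) {
            this.handle = handle;
            this.lastUsedCheckpointId = checkpointId;
        }
    }

    private final Map<String, Entry> entries = new HashMap<>();

    void register(String key, StateHandle handle, long checkpointId) {
        Entry entry = entries.get(key);
        if (entry == null) {
            entries.put(key, new Entry(handle, checkpointId));
            return;
        }
        // The behaviour described in the PR: a placeholder re-registration does
        // not push the last-used checkpoint id forward, so the state of older
        // incremental checkpoints can be discarded earlier.
        if (!(handle instanceof PlaceholderHandle)) {
            entry.lastUsedCheckpointId = Math.max(entry.lastUsedCheckpointId, checkpointId);
        }
    }
}
```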
[jira] [Closed] (FLINK-28509) Support REVERSE built-in function in Table API
[ https://issues.apache.org/jira/browse/FLINK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu closed FLINK-28509. --- Assignee: LuNing Wang Resolution: Fixed Merged to master via e1d93566365e6fe6a8f780c88fc73df2b4466c29 > Support REVERSE built-in function in Table API > -- > > Key: FLINK-28509 > URL: https://issues.apache.org/jira/browse/FLINK-28509 > Project: Flink > Issue Type: Improvement > Components: API / Python, Table SQL / API >Reporter: LuNing Wang >Assignee: LuNing Wang >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
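For reference, `REVERSE` has been available as a Flink SQL built-in; the ticket exposes the same function through the (Python) Table API. A small Java snippet showing what it computes (assumes the Table planner dependencies are on the classpath):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ReverseFunctionDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
        // REVERSE('flink') returns 'knilf'.
        tEnv.sqlQuery("SELECT REVERSE('flink') AS reversed").execute().print();
    }
}
```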
[jira] [Commented] (FLINK-28508) Support SPLIT_INDEX and STR_TO_MAP built-in function in Table API
[ https://issues.apache.org/jira/browse/FLINK-28508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567818#comment-17567818 ] Dian Fu commented on FLINK-28508: - Merged to master via e85c3038d901db3696112c1add3babcce0b0bcbc > Support SPLIT_INDEX and STR_TO_MAP built-in function in Table API > - > > Key: FLINK-28508 > URL: https://issues.apache.org/jira/browse/FLINK-28508 > Project: Flink > Issue Type: Improvement > Components: API / Python, Table SQL / API >Reporter: LuNing Wang >Assignee: LuNing Wang >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
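Similarly, `SPLIT_INDEX` and `STR_TO_MAP` already exist as SQL built-ins; this ticket is about exposing them in the (Python) Table API. A short snippet showing their SQL semantics (again assuming the planner is on the classpath):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SplitIndexStrToMapDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
        // SPLIT_INDEX(str, delimiter, n) returns the n-th (zero-based) split element.
        tEnv.sqlQuery("SELECT SPLIT_INDEX('a,b,c', ',', 1) AS second_element").execute().print();
        // STR_TO_MAP parses 'k1=v1,k2=v2' into a MAP<STRING, STRING>.
        tEnv.sqlQuery("SELECT STR_TO_MAP('k1=v1,k2=v2') AS parsed").execute().print();
    }
}
```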
[GitHub] [flink] swuferhong commented on a diff in pull request #20084: [FLINK-27988][table-planner] Let HiveTableSource extend from SupportStatisticsReport
swuferhong commented on code in PR #20084:
URL: https://github.com/apache/flink/pull/20084#discussion_r922985005

## flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java:
##

@@ -261,6 +279,93 @@ public DynamicTableSource copy() {
        return source;
    }
+    @Override
+    public TableStats reportStatistics() {
+        try {
+            // only support BOUNDED source
+            if (isStreamingSource()) {
+                return TableStats.UNKNOWN;
+            }
+            if (flinkConf.get(HiveOptions.SOURCE_REPORT_STATISTICS)
+                    != FileSystemConnectorOptions.FileStatisticsType.ALL) {
+                return TableStats.UNKNOWN;
+            }
+
+            HiveSourceBuilder sourceBuilder =
+                    new HiveSourceBuilder(jobConf, flinkConf, tablePath, hiveVersion, catalogTable)
+                            .setProjectedFields(projectedFields)
+                            .setLimit(limit);

Review Comment:
> Adding limit push down logic in HiveTableSource for reporting stats. But this does not work now because of the optimization order.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (FLINK-28508) Support SPLIT_INDEX and STR_TO_MAP built-in function in Table API
[ https://issues.apache.org/jira/browse/FLINK-28508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu reassigned FLINK-28508: --- Assignee: LuNing Wang > Support SPLIT_INDEX and STR_TO_MAP built-in function in Table API > - > > Key: FLINK-28508 > URL: https://issues.apache.org/jira/browse/FLINK-28508 > Project: Flink > Issue Type: Improvement > Components: API / Python, Table SQL / API >Reporter: LuNing Wang >Assignee: LuNing Wang >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] dianfu merged pull request #20278: [FLINK-28509][table][python] Support REVERSE built-in function in Table API
dianfu merged PR #20278: URL: https://github.com/apache/flink/pull/20278 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-28558) HistoryServer log retrieval configuration improvement
[ https://issues.apache.org/jira/browse/FLINK-28558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song closed FLINK-28558. Resolution: Done master (1.16): 6eee0c69b6cd28fb086bf44e1d7fdb6f646a467e > HistoryServer log retrieval configuration improvement > - > > Key: FLINK-28558 > URL: https://issues.apache.org/jira/browse/FLINK-28558 > Project: Flink > Issue Type: Improvement > Components: Runtime / Configuration, Runtime / REST >Reporter: Xintong Song >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > > HistoryServer generates log retrieval URLs based on the following > configuration: > - historyserver.log.jobmanager.url-pattern > - historyserver.log.taskmanager.url-pattern > The usability can be improved in two ways: > - Explicitly explain in the description that only http/https schemes are > supported, and add sanity checks for it. > - If the scheme is not specified, add "http://" by default. -- This message was sent by Atlassian Jira (v8.20.10#820010)
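A small sketch of the normalization the ticket describes (reject non-http/https schemes, default to `http://` when no scheme is given). The method name and error message below are made up for illustration; this is not the actual Flink implementation:

```java
import java.util.Locale;

public class LogUrlPatternSketch {

    /** Normalizes a configured log URL pattern along the lines of FLINK-28558. */
    static String normalizeLogUrlPattern(String pattern) {
        String lower = pattern.toLowerCase(Locale.ROOT);
        if (lower.startsWith("http://") || lower.startsWith("https://")) {
            return pattern;
        }
        if (lower.contains("://")) {
            // Sanity check: only http/https schemes are supported.
            throw new IllegalArgumentException(
                    "Only http/https schemes are supported for log URL patterns: " + pattern);
        }
        // No scheme given: default to http://.
        return "http://" + pattern;
    }

    public static void main(String[] args) {
        System.out.println(normalizeLogUrlPattern("my-log-server/logs"));         // http://my-log-server/logs
        System.out.println(normalizeLogUrlPattern("https://my-log-server/logs")); // unchanged
    }
}
```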
[GitHub] [flink] xintongsong closed pull request #20280: [FLINK-28558][history] Improve log-url configuration usability.
xintongsong closed pull request #20280: [FLINK-28558][history] Improve log-url configuration usability. URL: https://github.com/apache/flink/pull/20280 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zstraw commented on pull request #17956: [FLINK-18779][table sql/planner]Support the SupportsFilterPushDown for LookupTableSource
zstraw commented on PR #17956: URL: https://github.com/apache/flink/pull/17956#issuecomment-1186772018 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zstraw commented on pull request #17956: [FLINK-18779][table sql/planner]Support the SupportsFilterPushDown for LookupTableSource
zstraw commented on PR #17956: URL: https://github.com/apache/flink/pull/17956#issuecomment-1186771645 @flinkbot run azure re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #278: [WIP] Add standalone mode support
gyfora commented on code in PR #278: URL: https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922968515 ## flink-kubernetes-standalone/src/main/java/org/apache/flink/kubernetes/operator/kubeclient/factory/StandaloneKubernetesTaskManagerFactory.java: ## @@ -0,0 +1,90 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.kubernetes.operator.kubeclient.factory; + +import org.apache.flink.kubernetes.kubeclient.FlinkPod; +import org.apache.flink.kubernetes.kubeclient.decorators.EnvSecretsDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.FlinkConfMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.HadoopConfMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.KerberosMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.KubernetesStepDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.MountSecretsDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.decorators.CmdStandaloneTaskManagerDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.decorators.InitStandaloneTaskManagerDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.parameters.StandaloneKubernetesTaskManagerParameters; +import org.apache.flink.kubernetes.operator.utils.StandaloneKubernetesUtils; +import org.apache.flink.kubernetes.utils.Constants; +import org.apache.flink.util.Preconditions; + +import io.fabric8.kubernetes.api.model.Pod; +import io.fabric8.kubernetes.api.model.PodBuilder; +import io.fabric8.kubernetes.api.model.apps.Deployment; +import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; + +/** Utility class for constructing the TaskManager Deployment when deploying in standalone mode. */ +public class StandaloneKubernetesTaskManagerFactory { + +public static Deployment buildKubernetesTaskManagerDeployment( +FlinkPod podTemplate, +StandaloneKubernetesTaskManagerParameters kubernetesTaskManagerParameters) { +FlinkPod flinkPod = Preconditions.checkNotNull(podTemplate).copy(); + +final KubernetesStepDecorator[] stepDecorators = +new KubernetesStepDecorator[] { +new InitStandaloneTaskManagerDecorator(kubernetesTaskManagerParameters), +new EnvSecretsDecorator(kubernetesTaskManagerParameters), +new MountSecretsDecorator(kubernetesTaskManagerParameters), +new CmdStandaloneTaskManagerDecorator(kubernetesTaskManagerParameters), +new HadoopConfMountDecorator(kubernetesTaskManagerParameters), +new KerberosMountDecorator(kubernetesTaskManagerParameters), +new FlinkConfMountDecorator(kubernetesTaskManagerParameters) +}; Review Comment: That’s right, makes sense :) -- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-24787) Add more details of state latency tracking documentation
[ https://issues.apache.org/jira/browse/FLINK-24787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567805#comment-17567805 ] Hangxiang Yu commented on FLINK-24787: -- Sure. Could you assign it to me? > Add more details of state latency tracking documentation > > > Key: FLINK-24787 > URL: https://issues.apache.org/jira/browse/FLINK-24787 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Runtime / Metrics, Runtime / State > Backends >Reporter: Yun Tang >Priority: Major > Fix For: 1.16.0 > > > Current documentation only tells how to enable or configure state latency > tracking related options. We could add more details of state specific > descriptions. -- This message was sent by Atlassian Jira (v8.20.10#820010)
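For reference, the state latency tracking options the ticket refers to can be enabled through the regular configuration. A hedged example below; the option keys are written from memory of the documented latency tracking settings and should be verified against the docs version being extended:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateLatencyTrackingDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Enable latency tracking for keyed state accesses; accesses are sampled,
        // not measured individually (see the sample-interval option).
        conf.setString("state.backend.latency-track.keyed-state-enabled", "true");
        conf.setString("state.backend.latency-track.sample-interval", "100");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.fromElements(1, 2, 3).keyBy(i -> i).sum(0).print();
        env.execute("state-latency-tracking-demo");
    }
}
```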
[jira] [Commented] (FLINK-27721) Slack: set up archive
[ https://issues.apache.org/jira/browse/FLINK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567803#comment-17567803 ] Xintong Song commented on FLINK-27721: -- Status updates: Right now, I have something imperfect but workable. I probably won't have time to further improve it in the near future. Given that we are approaching the 10k messages limit very soon, I'll try to deploy the current version. The known limitations are:
# *Messages are not organized in threads at the frontend, making it hard for people to read.* This is the same limitation that [airflow|http://apache-airflow.slack-archives.org/] also has. Properties needed for grouping messages into threads are already captured in the database. All we need is to improve the way the messages are displayed.
# *It's not realtime.* Slack's new event api never worked for me. So I went for an approach that periodically fetches the messages, with a configurable interval (default 1h). Consequently, new messages may take up to 1 hour to appear in the archive, which is probably fine because they can be searched in Slack anyway.
# *It's unlikely, but still possible, to lose messages.* With Slack's conversation api, we need to first retrieve parent messages that are directly sent to the channel, then for each of them retrieve threaded messages replying to it. That means for an already retrieved thread, we cannot know whether there are new replies to it without trying to retrieve it again. Moreover, the api has a ~50/min rate limit, so we probably should not frequently retrieve replies for all threads. My current approach is to only retrieve new messages for threads started within the recent 30 days (configurable). That means new replies to a thread started more than 30 days ago can be lost, which I'd expect to be very rare.
# *Backup is not automatic.* We can dump the database with one command, without interrupting the service. We just need to set up a cronjob to trigger and handle the dumps (uploading & cleaning).
Some numbers, FYI:
# [Slack Analytics|https://apache-flink.slack.com/admin/stats] shows we now have 9.1k total messages. In the last 30 days, only 31% of messages are sent in public channels, 67% in DMs and 1% in private channels.
# The Slack archive captures public channel messages only. It captures 2.5k total messages, taking about 7~8 minutes on my laptop. The bottleneck is Slack's api rate limit.
# A full dump of the database, containing all the 2.5k messages, channel & user information, completes almost instantly. The dumped file is 3.7MB large.
I'll try to deploy the service next. Based on the numbers, I think a dedicated VM might not be necessary. So I'd try with the flink-packages host first. BTW, I have already backed up a dump of all public messages so far, so it shouldn't be a problem if the service is not deployed by the time the 10k limit is reached. > Slack: set up archive > - > > Key: FLINK-27721 > URL: https://issues.apache.org/jira/browse/FLINK-27721 > Project: Flink > Issue Type: Sub-task >Reporter: Xintong Song >Assignee: Xintong Song >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
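A condensed sketch of the fetch loop described in the comment above (periodic polling of parent messages, then refreshing replies only for threads started within a configurable window). The `SlackClient` interface is a stand-in rather than the real Slack SDK, and persistence and rate limiting are reduced to comments:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Collections;
import java.util.List;

public class ArchiveFetcherSketch {

    /** Stand-in for whatever Slack API wrapper is actually used. */
    interface SlackClient {
        List<Message> fetchChannelMessages(String channelId, Instant since);
        List<Message> fetchThreadReplies(String channelId, String threadTs);
    }

    static final class Message {
        String threadTs;   // timestamp identifying the thread this message starts or belongs to
        Instant postedAt;
    }

    private final SlackClient client;
    private final Duration replyRefreshWindow; // e.g. 30 days, configurable

    ArchiveFetcherSketch(SlackClient client, Duration replyRefreshWindow) {
        this.client = client;
        this.replyRefreshWindow = replyRefreshWindow;
    }

    /** One polling round, triggered periodically (default interval: 1 hour). */
    void pollOnce(String channelId, Instant lastPollTime) {
        // 1. Fetch and store parent messages posted to the channel since the last poll.
        store(client.fetchChannelMessages(channelId, lastPollTime));

        // 2. Refresh replies, but only for threads started within the refresh window,
        //    to stay under Slack's ~50 requests/minute rate limit. Replies to older
        //    threads may be missed, as noted in the comment above.
        Instant cutoff = Instant.now().minus(replyRefreshWindow);
        for (Message parent : storedThreadsStartedAfter(cutoff)) {
            store(client.fetchThreadReplies(channelId, parent.threadTs));
        }
    }

    // Persistence is out of scope for this sketch.
    private void store(List<Message> messages) {}

    private List<Message> storedThreadsStartedAfter(Instant cutoff) {
        return Collections.emptyList();
    }
}
```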
[jira] [Updated] (FLINK-28581) Test Changelog StateBackend V2 Manually
[ https://issues.apache.org/jira/browse/FLINK-28581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hangxiang Yu updated FLINK-28581: - Parent: FLINK-25842 Issue Type: Sub-task (was: Technical Debt) > Test Changelog StateBackend V2 Manually > --- > > Key: FLINK-28581 > URL: https://issues.apache.org/jira/browse/FLINK-28581 > Project: Flink > Issue Type: Sub-task > Components: Runtime / State Backends >Reporter: Hangxiang Yu >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-28580) Predicate supports unknown stats
[ https://issues.apache.org/jira/browse/FLINK-28580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-28580: --- Labels: pull-request-available (was: ) > Predicate supports unknown stats > > > Key: FLINK-28580 > URL: https://issues.apache.org/jira/browse/FLINK-28580 > Project: Flink > Issue Type: Improvement > Components: Table Store >Reporter: Jingsong Lee >Assignee: Jingsong Lee >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.2.0 > > > Now there will be a NPE if minValue or maxValue of FieldStats is null. > We can know the stats is unknown in LeafPredicate.test(long rowCount, > FieldStats[] fieldStats), and return true directly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] JingsongLi opened a new pull request, #222: [FLINK-28580] Predicate supports unknown stats
JingsongLi opened a new pull request, #222: URL: https://github.com/apache/flink-table-store/pull/222 Now there will be an NPE if minValue or maxValue of FieldStats is null. We can detect that the stats are unknown in LeafPredicate.test(long rowCount, FieldStats[] fieldStats) and return true directly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
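A toy version of the described change (the real `LeafPredicate`/`FieldStats` classes in flink-table-store differ): when the min/max statistics are null, the test cannot prune anything, so it returns true instead of dereferencing null:

```java
public class UnknownStatsPredicateSketch {

    /** Minimal stand-in for the per-field statistics of a data file. */
    static final class FieldStats {
        final Long minValue; // null means the statistic is unknown
        final Long maxValue;

        FieldStats(Long minValue, Long maxValue) {
            this.minValue = minValue;
            this.maxValue = maxValue;
        }
    }

    /** Example leaf predicate: field > literal. */
    static boolean testGreaterThan(FieldStats stats, long literal) {
        if (stats.minValue == null || stats.maxValue == null) {
            // Unknown stats: we cannot prove the file contains no matches,
            // so keep it (return true) instead of failing with an NPE.
            return true;
        }
        // Prune only when even the maximum value cannot satisfy the predicate.
        return stats.maxValue > literal;
    }

    public static void main(String[] args) {
        System.out.println(testGreaterThan(new FieldStats(null, null), 10)); // true (kept)
        System.out.println(testGreaterThan(new FieldStats(0L, 5L), 10));     // false (pruned)
    }
}
```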
[jira] [Updated] (FLINK-28581) Test Changelog StateBackend V2 Manually
[ https://issues.apache.org/jira/browse/FLINK-28581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hangxiang Yu updated FLINK-28581: - Issue Type: Technical Debt (was: Bug) > Test Changelog StateBackend V2 Manually > --- > > Key: FLINK-28581 > URL: https://issues.apache.org/jira/browse/FLINK-28581 > Project: Flink > Issue Type: Technical Debt > Components: Runtime / State Backends >Reporter: Hangxiang Yu >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-28580) Predicate supports unknown stats
[ https://issues.apache.org/jira/browse/FLINK-28580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee reassigned FLINK-28580: Assignee: Jingsong Lee > Predicate supports unknown stats > > > Key: FLINK-28580 > URL: https://issues.apache.org/jira/browse/FLINK-28580 > Project: Flink > Issue Type: Improvement > Components: Table Store >Reporter: Jingsong Lee >Assignee: Jingsong Lee >Priority: Major > Fix For: table-store-0.2.0 > > > Now there will be a NPE if minValue or maxValue of FieldStats is null. > We can know the stats is unknown in LeafPredicate.test(long rowCount, > FieldStats[] fieldStats), and return true directly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-28581) Test Changelog StateBackend V2 Manually
Hangxiang Yu created FLINK-28581: Summary: Test Changelog StateBackend V2 Manually Key: FLINK-28581 URL: https://issues.apache.org/jira/browse/FLINK-28581 Project: Flink Issue Type: Bug Components: Runtime / State Backends Reporter: Hangxiang Yu -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-28526) Fail to lateral join with UDTF from Table with timstamp column
[ https://issues.apache.org/jira/browse/FLINK-28526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567799#comment-17567799 ] Dian Fu commented on FLINK-28526: - cc [~hxbks2ks] Could you help to take a look at this issue? > Fail to lateral join with UDTF from Table with timstamp column > -- > > Key: FLINK-28526 > URL: https://issues.apache.org/jira/browse/FLINK-28526 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.15.1 >Reporter: Xuannan Su >Priority: Major > > The bug can be reproduced with the following test > {code:python} > def test_flink(self): > env = StreamExecutionEnvironment.get_execution_environment() > t_env = StreamTableEnvironment.create(env) > table = t_env.from_descriptor( > TableDescriptor.for_connector("filesystem") > .schema( > Schema.new_builder() > .column("name", DataTypes.STRING()) > .column("cost", DataTypes.INT()) > .column("distance", DataTypes.INT()) > .column("time", DataTypes.TIMESTAMP(3)) > .watermark("time", "`time` - INTERVAL '60' SECOND") > .build() > ) > .format("csv") > .option("path", "./input.csv") > .build() > ) > @udtf(result_types=DataTypes.INT()) > def table_func(row: Row): > return row.cost + row.distance > table = table.join_lateral(table_func.alias("cost_times_distance")) > table.execute().print() > {code} > It causes the following exception > {code:none} > E pyflink.util.exceptions.TableException: > org.apache.flink.table.api.TableException: Unsupported Python SqlFunction > CAST. > E at > org.apache.flink.table.planner.plan.nodes.exec.utils.CommonPythonUtil.createPythonFunctionInfo(CommonPythonUtil.java:146) > E at > org.apache.flink.table.planner.plan.nodes.exec.utils.CommonPythonUtil.createPythonFunctionInfo(CommonPythonUtil.java:429) > E at > org.apache.flink.table.planner.plan.nodes.exec.utils.CommonPythonUtil.createPythonFunctionInfo(CommonPythonUtil.java:135) > E at > org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecPythonCorrelate.extractPythonTableFunctionInfo(CommonExecPythonCorrelate.java:133) > E at > org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecPythonCorrelate.createPythonOneInputTransformation(CommonExecPythonCorrelate.java:106) > E at > org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecPythonCorrelate.translateToPlanInternal(CommonExecPythonCorrelate.java:95) > E at > org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148) > E at > org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:249) > E at > org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:136) > E at > org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148) > E at > org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:79) > E at > scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) > E at scala.collection.Iterator.foreach(Iterator.scala:937) > E at scala.collection.Iterator.foreach$(Iterator.scala:937) > E at > scala.collection.AbstractIterator.foreach(Iterator.scala:1425) > E at scala.collection.IterableLike.foreach(IterableLike.scala:70) > E at scala.collection.IterableLike.foreach$(IterableLike.scala:69) > E at scala.collection.AbstractIterable.foreach(Iterable.scala:54) > E at > scala.collection.TraversableLike.map(TraversableLike.scala:233) > E at > scala.collection.TraversableLike.map$(TraversableLike.scala:226) > 
E at > scala.collection.AbstractTraversable.map(Traversable.scala:104) > E at > org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:78) > E at > org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:181) > E at > org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1656) > E at > org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:828) > E at > org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1317) > E at >
[jira] [Updated] (FLINK-28580) Predicate supports unknown stats
[ https://issues.apache.org/jira/browse/FLINK-28580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee updated FLINK-28580: - Priority: Major (was: Minor) > Predicate supports unknown stats > > > Key: FLINK-28580 > URL: https://issues.apache.org/jira/browse/FLINK-28580 > Project: Flink > Issue Type: Improvement > Components: Table Store >Reporter: Jingsong Lee >Priority: Major > Fix For: table-store-0.2.0 > > > Now there will be a NPE if minValue or maxValue of FieldStats is null. > We can know the stats is unknown in LeafPredicate.test(long rowCount, > FieldStats[] fieldStats), and return true directly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-28580) Predicate supports unknown stats
[ https://issues.apache.org/jira/browse/FLINK-28580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee updated FLINK-28580: - Priority: Minor (was: Major) > Predicate supports unknown stats > > > Key: FLINK-28580 > URL: https://issues.apache.org/jira/browse/FLINK-28580 > Project: Flink > Issue Type: Improvement > Components: Table Store >Reporter: Jingsong Lee >Priority: Minor > Fix For: table-store-0.2.0 > > > Now there will be a NPE if minValue or maxValue of FieldStats is null. > We can know the stats is unknown in LeafPredicate.test(long rowCount, > FieldStats[] fieldStats), and return true directly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-28579) Supports predicate testing for new columns
[ https://issues.apache.org/jira/browse/FLINK-28579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee updated FLINK-28579: - Priority: Minor (was: Major) > Supports predicate testing for new columns > -- > > Key: FLINK-28579 > URL: https://issues.apache.org/jira/browse/FLINK-28579 > Project: Flink > Issue Type: Improvement > Components: Table Store >Reporter: Jingsong Lee >Priority: Minor > Fix For: table-store-0.2.0 > > > A newly added column, if there is a filter on it, will cause an error > in the RowDataToObjectArrayConverter because the number of columns is not > correct. > We can make RowDataToObjectArrayConverter support conversion from a shorter rowData. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-28580) Predicate supports unknown stats
Jingsong Lee created FLINK-28580: Summary: Predicate supports unknown stats Key: FLINK-28580 URL: https://issues.apache.org/jira/browse/FLINK-28580 Project: Flink Issue Type: Improvement Components: Table Store Reporter: Jingsong Lee Fix For: table-store-0.2.0 Now there will be a NPE if minValue or maxValue of FieldStats is null. We can know the stats is unknown in LeafPredicate.test(long rowCount, FieldStats[] fieldStats), and return true directly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-28579) Supports predicate testing for new columns
Jingsong Lee created FLINK-28579: Summary: Supports predicate testing for new columns Key: FLINK-28579 URL: https://issues.apache.org/jira/browse/FLINK-28579 Project: Flink Issue Type: Improvement Components: Table Store Reporter: Jingsong Lee Fix For: table-store-0.2.0 A newly added column, if there is a filter on it, will cause an error in the RowDataToObjectArrayConverter because the number of columns is not correct. We can make RowDataToObjectArrayConverter support conversion from a shorter rowData. -- This message was sent by Atlassian Jira (v8.20.10#820010)
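A minimal illustration of the proposed behaviour (hypothetical types, not the real `RowDataToObjectArrayConverter`): convert only the fields the shorter row actually contains and leave the newly added columns as null:

```java
import java.util.Arrays;

/** Toy illustration of FLINK-28579: converting a row that is shorter than the
 *  current schema (because columns were added later) instead of failing. */
public class ShorterRowConverterSketch {

    interface Row {
        int arity();
        Object field(int pos);
    }

    static Object[] toObjectArray(Row row, int schemaFieldCount) {
        Object[] values = new Object[schemaFieldCount];
        // Convert only the fields the row actually has; columns that were added
        // after this row was written stay null.
        int copyCount = Math.min(row.arity(), schemaFieldCount);
        for (int i = 0; i < copyCount; i++) {
            values[i] = row.field(i);
        }
        return values;
    }

    public static void main(String[] args) {
        Row shortRow = new Row() {
            private final Object[] data = {1, "a"}; // written before a third column existed
            public int arity() { return data.length; }
            public Object field(int pos) { return data[pos]; }
        };
        System.out.println(Arrays.toString(toObjectArray(shortRow, 3))); // [1, a, null]
    }
}
```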
[jira] [Updated] (FLINK-28455) pyflink tableResult collect result to local timeout
[ https://issues.apache.org/jira/browse/FLINK-28455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhou updated FLINK-28455: - Labels: flink pyflink (was: ) > pyflink tableResult collect result to local timeout > > > Key: FLINK-28455 > URL: https://issues.apache.org/jira/browse/FLINK-28455 > Project: Flink > Issue Type: Bug > Components: API / Core >Affects Versions: 1.13.0 >Reporter: zhou >Priority: Major > Labels: flink, pyflink > > when I used pyflink do this: > > {code:java} > with party_enter_final_result.execute().collect() as results: > for result in results:{code} > sometimes TimeoutException occured,the Exception as following: > {code:java} > [2022-07-07 01:18:55,843] {bash.py:173} INFO - Job has been submitted with > JobID 017de55acf2a71552fc293626cfbbe67 > [2022-07-07 01:20:02,384] {bash.py:173} INFO - Traceback (most recent call > last): > [2022-07-07 01:20:02,384] {bash.py:173} INFO - File > "/opt/airflow/data/repo/dags/chloe/chloe_counter.py", line 80, in > [2022-07-07 01:20:02,384] {bash.py:173} INFO - main(date) > [2022-07-07 01:20:02,384] {bash.py:173} INFO - File > "/opt/airflow/data/repo/dags/chloe/chloe_counter.py", line 53, in main > [2022-07-07 01:20:02,384] {bash.py:173} INFO - for result in results: > [2022-07-07 01:20:02,384] {bash.py:173} INFO - File > "/space/flink/opt/python/pyflink.zip/pyflink/table/table_result.py", line > 236, in __next__ > [2022-07-07 01:20:02,384] {bash.py:173} INFO - File > "/space/flink/opt/python/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line > 1285, in __call__ > [2022-07-07 01:20:02,384] {bash.py:173} INFO - File > "/space/flink/opt/python/pyflink.zip/pyflink/util/exceptions.py", line 147, > in deco > [2022-07-07 01:20:02,384] {bash.py:173} INFO - File > "/space/flink/opt/python/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 326, > in get_return_value > [2022-07-07 01:20:02,384] {bash.py:173} INFO - py4j.protocol.Py4JJavaError: > An error occurred while calling o66.hasNext. 
> [2022-07-07 01:20:02,384] {bash.py:173} INFO - : java.lang.RuntimeException: > Failed to fetch next result > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > java.lang.reflect.Method.invoke(Method.java:498) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - at > java.lang.Thread.run(Thread.java:748) > [2022-07-07 01:20:02,385] {bash.py:173} INFO - Caused by: > java.io.IOException: Failed to fetch job execution result > [2022-07-07 01:20:02,386] {bash.py:173} INFO - at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:177) > [2022-07-07 01:20:02,386] {bash.py:173} INFO - at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:120) > [2022-07-07 01:20:02,386] {bash.py:173} INFO - at >
[jira] [Commented] (FLINK-28529) ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode failed with CheckpointException: Checkpoint expired before comple
[ https://issues.apache.org/jira/browse/FLINK-28529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567796#comment-17567796 ] Yanfei Lei commented on FLINK-28529: This is related to the instability of triggering checkpoint, I would open a PR after [https://github.com/apache/flink/pull/19864] megered. > ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode > failed with CheckpointException: Checkpoint expired before completing > --- > > Key: FLINK-28529 > URL: https://issues.apache.org/jira/browse/FLINK-28529 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / State Backends >Affects Versions: 1.16.0 >Reporter: Huang Xingbo >Assignee: Yanfei Lei >Priority: Critical > Labels: test-stability > > {code:java} > 2022-07-12T04:30:49.9912088Z Jul 12 04:30:49 [ERROR] > ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode > Time elapsed: 617.048 s <<< ERROR! > 2022-07-12T04:30:49.9913108Z Jul 12 04:30:49 > java.util.concurrent.ExecutionException: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint expired > before completing. > 2022-07-12T04:30:49.9913880Z Jul 12 04:30:49 at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2022-07-12T04:30:49.9914606Z Jul 12 04:30:49 at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2022-07-12T04:30:49.9915572Z Jul 12 04:30:49 at > org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode(ChangelogPeriodicMaterializationSwitchStateBackendITCase.java:125) > 2022-07-12T04:30:49.9916483Z Jul 12 04:30:49 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2022-07-12T04:30:49.9917377Z Jul 12 04:30:49 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2022-07-12T04:30:49.9918121Z Jul 12 04:30:49 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2022-07-12T04:30:49.9918788Z Jul 12 04:30:49 at > java.lang.reflect.Method.invoke(Method.java:498) > 2022-07-12T04:30:49.9919456Z Jul 12 04:30:49 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > 2022-07-12T04:30:49.9920193Z Jul 12 04:30:49 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2022-07-12T04:30:49.9920923Z Jul 12 04:30:49 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > 2022-07-12T04:30:49.9921630Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2022-07-12T04:30:49.9922326Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2022-07-12T04:30:49.9923023Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2022-07-12T04:30:49.9923708Z Jul 12 04:30:49 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > 2022-07-12T04:30:49.9924449Z Jul 12 04:30:49 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2022-07-12T04:30:49.9925124Z Jul 12 04:30:49 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > 2022-07-12T04:30:49.9925912Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-07-12T04:30:49.9926742Z Jul 12 04:30:49 at > 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > 2022-07-12T04:30:49.9928142Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > 2022-07-12T04:30:49.9928715Z Jul 12 04:30:49 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > 2022-07-12T04:30:49.9929311Z Jul 12 04:30:49 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > 2022-07-12T04:30:49.9929863Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > 2022-07-12T04:30:49.9930376Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > 2022-07-12T04:30:49.9930911Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > 2022-07-12T04:30:49.9931441Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > 2022-07-12T04:30:49.9931975Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > 2022-07-12T04:30:49.9932493Z
[GitHub] [flink] swuferhong commented on pull request #20084: [FLINK-27988][table-planner] Let HiveTableSource extend from SupportStatisticsReport
swuferhong commented on PR #20084: URL: https://github.com/apache/flink/pull/20084#issuecomment-1186725864 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-24342) Filesystem sink does not escape right bracket in partition name
[ https://issues.apache.org/jira/browse/FLINK-24342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567792#comment-17567792 ] Alexander Trushev commented on FLINK-24342: --- Can this ticket be reviewed please > Filesystem sink does not escape right bracket in partition name > --- > > Key: FLINK-24342 > URL: https://issues.apache.org/jira/browse/FLINK-24342 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Reporter: Alexander Trushev >Priority: Minor > Labels: auto-deprioritized-minor, pull-request-available > > h3. How to reproduce the problem > In the following code snippet filesystem sink creates a partition named > "\{date\}" and writes value "1" to file. > {code:sql} > create table sink ( > val int, > part string > ) partitioned by (part) with ( > 'connector' = 'filesystem', > 'path' = '/tmp/sink', > 'format' = 'csv' > ); > insert into sink values (1, '{date}'); > {code} > h3. Expected behavior > Escaped "\{" and "\}" in partition name > {code} > $ ls /tmp/sink/ > part=%7Bdate%7D > {code} > h3. Actual behavior > Escaped only "\{" in partition name > {code} > $ ls /tmp/sink/ > part=%7Bdate} > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
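A simplified sketch of the idea behind the fix: percent-escape both `{` and `}` when building the partition directory name. The escaped character set here is illustrative only and not the actual Flink escaping rules:

```java
public class PartitionNameEscapeSketch {

    /** Percent-escapes characters that are unsafe in partition directory names.
     *  The character set here is illustrative only. */
    static String escapePartitionValue(String value) {
        StringBuilder sb = new StringBuilder();
        for (char c : value.toCharArray()) {
            // Escape both '{' and '}' (the reported bug was that '}' was left unescaped).
            if (c == '{' || c == '}' || c == '/' || c == ':') {
                sb.append('%').append(String.format("%02X", (int) c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Expected: part=%7Bdate%7D (both brackets escaped)
        System.out.println("part=" + escapePartitionValue("{date}"));
    }
}
```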
[jira] [Commented] (FLINK-28544) Elasticsearch6SinkE2ECase failed with no space left on device
[ https://issues.apache.org/jira/browse/FLINK-28544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567791#comment-17567791 ] Huang Xingbo commented on FLINK-28544: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=38281=logs=87489130-75dc-54e4-1f45-80c30aa367a3=73da6d75-f30d-5d5a-acbe-487a9dcff678 > Elasticsearch6SinkE2ECase failed with no space left on device > - > > Key: FLINK-28544 > URL: https://issues.apache.org/jira/browse/FLINK-28544 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Tests >Affects Versions: 1.16.0 >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Critical > Labels: test-stability > > {code:java} > 2022-07-13T02:49:13.5455800Z Jul 13 02:49:13 [ERROR] Tests run: 1, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 49.38 s <<< FAILURE! - in > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase > 2022-07-13T02:49:13.5465965Z Jul 13 02:49:13 [ERROR] > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase Time elapsed: > 49.38 s <<< ERROR! > 2022-07-13T02:49:13.5466765Z Jul 13 02:49:13 java.lang.RuntimeException: > Failed to build JobManager image > 2022-07-13T02:49:13.5467621Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:67) > 2022-07-13T02:49:13.5468645Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configure(FlinkTestcontainersConfigurator.java:147) > 2022-07-13T02:49:13.5469564Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainers$Builder.build(FlinkContainers.java:197) > 2022-07-13T02:49:13.5470467Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainerTestEnvironment.(FlinkContainerTestEnvironment.java:88) > 2022-07-13T02:49:13.5471424Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainerTestEnvironment.(FlinkContainerTestEnvironment.java:51) > 2022-07-13T02:49:13.5472504Z Jul 13 02:49:13 at > org.apache.flink.streaming.tests.ElasticsearchSinkE2ECaseBase.(ElasticsearchSinkE2ECaseBase.java:58) > 2022-07-13T02:49:13.5473388Z Jul 13 02:49:13 at > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase.(Elasticsearch6SinkE2ECase.java:36) > 2022-07-13T02:49:13.5474161Z Jul 13 02:49:13 at > sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > 2022-07-13T02:49:13.5474905Z Jul 13 02:49:13 at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > 2022-07-13T02:49:13.5475756Z Jul 13 02:49:13 at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > 2022-07-13T02:49:13.5476734Z Jul 13 02:49:13 at > java.lang.reflect.Constructor.newInstance(Constructor.java:423) > 2022-07-13T02:49:13.5477495Z Jul 13 02:49:13 at > org.junit.platform.commons.util.ReflectionUtils.newInstance(ReflectionUtils.java:550) > 2022-07-13T02:49:13.5478313Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ConstructorInvocation.proceed(ConstructorInvocation.java:56) > 2022-07-13T02:49:13.5479220Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2022-07-13T02:49:13.5480165Z Jul 13 02:49:13 at > org.junit.jupiter.api.extension.InvocationInterceptor.interceptTestClassConstructor(InvocationInterceptor.java:73) > 
2022-07-13T02:49:13.5481038Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 2022-07-13T02:49:13.5481944Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2022-07-13T02:49:13.5482875Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2022-07-13T02:49:13.5483764Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > 2022-07-13T02:49:13.5484642Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > 2022-07-13T02:49:13.5486123Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > 2022-07-13T02:49:13.5488185Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:77) > 2022-07-13T02:49:13.543Z Jul 13 02:49:13 at >
[jira] [Updated] (FLINK-28544) Elasticsearch6SinkE2ECase failed with no space left on device
[ https://issues.apache.org/jira/browse/FLINK-28544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huang Xingbo updated FLINK-28544: - Priority: Critical (was: Major) > Elasticsearch6SinkE2ECase failed with no space left on device > - > > Key: FLINK-28544 > URL: https://issues.apache.org/jira/browse/FLINK-28544 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Tests >Affects Versions: 1.16.0 >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Critical > Labels: test-stability > > {code:java} > 2022-07-13T02:49:13.5455800Z Jul 13 02:49:13 [ERROR] Tests run: 1, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 49.38 s <<< FAILURE! - in > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase > 2022-07-13T02:49:13.5465965Z Jul 13 02:49:13 [ERROR] > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase Time elapsed: > 49.38 s <<< ERROR! > 2022-07-13T02:49:13.5466765Z Jul 13 02:49:13 java.lang.RuntimeException: > Failed to build JobManager image > 2022-07-13T02:49:13.5467621Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:67) > 2022-07-13T02:49:13.5468645Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configure(FlinkTestcontainersConfigurator.java:147) > 2022-07-13T02:49:13.5469564Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainers$Builder.build(FlinkContainers.java:197) > 2022-07-13T02:49:13.5470467Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainerTestEnvironment.(FlinkContainerTestEnvironment.java:88) > 2022-07-13T02:49:13.5471424Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainerTestEnvironment.(FlinkContainerTestEnvironment.java:51) > 2022-07-13T02:49:13.5472504Z Jul 13 02:49:13 at > org.apache.flink.streaming.tests.ElasticsearchSinkE2ECaseBase.(ElasticsearchSinkE2ECaseBase.java:58) > 2022-07-13T02:49:13.5473388Z Jul 13 02:49:13 at > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase.(Elasticsearch6SinkE2ECase.java:36) > 2022-07-13T02:49:13.5474161Z Jul 13 02:49:13 at > sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > 2022-07-13T02:49:13.5474905Z Jul 13 02:49:13 at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > 2022-07-13T02:49:13.5475756Z Jul 13 02:49:13 at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > 2022-07-13T02:49:13.5476734Z Jul 13 02:49:13 at > java.lang.reflect.Constructor.newInstance(Constructor.java:423) > 2022-07-13T02:49:13.5477495Z Jul 13 02:49:13 at > org.junit.platform.commons.util.ReflectionUtils.newInstance(ReflectionUtils.java:550) > 2022-07-13T02:49:13.5478313Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ConstructorInvocation.proceed(ConstructorInvocation.java:56) > 2022-07-13T02:49:13.5479220Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2022-07-13T02:49:13.5480165Z Jul 13 02:49:13 at > org.junit.jupiter.api.extension.InvocationInterceptor.interceptTestClassConstructor(InvocationInterceptor.java:73) > 2022-07-13T02:49:13.5481038Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 2022-07-13T02:49:13.5481944Z Jul 13 
02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2022-07-13T02:49:13.5482875Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2022-07-13T02:49:13.5483764Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > 2022-07-13T02:49:13.5484642Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > 2022-07-13T02:49:13.5486123Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > 2022-07-13T02:49:13.5488185Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:77) > 2022-07-13T02:49:13.543Z Jul 13 02:49:13 at > org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestClassConstructor(ClassBasedTestDescriptor.java:355) > 2022-07-13T02:49:13.5490237Z Jul 13 02:49:13 at >
[GitHub] [flink] zstraw commented on pull request #17956: [FLINK-18779][table sql/planner]Support the SupportsFilterPushDown for LookupTableSource
zstraw commented on PR #17956: URL: https://github.com/apache/flink/pull/17956#issuecomment-1186714442 @flinkbot run azure re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-26721) PulsarSourceITCase.testSavepoint failed on azure pipeline
[ https://issues.apache.org/jira/browse/FLINK-26721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567790#comment-17567790 ] Huang Xingbo commented on FLINK-26721: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=38295=logs=fc7981dc-d266-55b0-5fff-f0d0a2294e36=1a9b228a-3e0e-598f-fc81-c321539dfdbf > PulsarSourceITCase.testSavepoint failed on azure pipeline > - > > Key: FLINK-26721 > URL: https://issues.apache.org/jira/browse/FLINK-26721 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Pulsar >Affects Versions: 1.16.0 >Reporter: Yun Gao >Assignee: Yufan Sheng >Priority: Blocker > Labels: build-stability, test-stability > Fix For: 1.16.0 > > > {code:java} > Mar 18 05:49:52 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 315.581 s <<< FAILURE! - in > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > Mar 18 05:49:52 [ERROR] > org.apache.flink.connector.pulsar.source.PulsarSourceITCase.testSavepoint(TestEnvironment, > DataStreamSourceExternalContext, CheckpointingMode)[1] Time elapsed: > 140.803 s <<< FAILURE! > Mar 18 05:49:52 java.lang.AssertionError: > Mar 18 05:49:52 > Mar 18 05:49:52 Expecting > Mar 18 05:49:52 > Mar 18 05:49:52 to be completed within 2M. > Mar 18 05:49:52 > Mar 18 05:49:52 exception caught while trying to get the future result: > java.util.concurrent.TimeoutException > Mar 18 05:49:52 at > java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784) > Mar 18 05:49:52 at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > Mar 18 05:49:52 at > org.assertj.core.internal.Futures.assertSucceededWithin(Futures.java:109) > Mar 18 05:49:52 at > org.assertj.core.api.AbstractCompletableFutureAssert.internalSucceedsWithin(AbstractCompletableFutureAssert.java:400) > Mar 18 05:49:52 at > org.assertj.core.api.AbstractCompletableFutureAssert.succeedsWithin(AbstractCompletableFutureAssert.java:396) > Mar 18 05:49:52 at > org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.checkResultWithSemantic(SourceTestSuiteBase.java:766) > Mar 18 05:49:52 at > org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.restartFromSavepoint(SourceTestSuiteBase.java:399) > Mar 18 05:49:52 at > org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.testSavepoint(SourceTestSuiteBase.java:241) > Mar 18 05:49:52 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Mar 18 05:49:52 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Mar 18 05:49:52 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Mar 18 05:49:52 at java.lang.reflect.Method.invoke(Method.java:498) > Mar 18 05:49:52 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > Mar 18 05:49:52 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > Mar 18 05:49:52 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > Mar 18 05:49:52 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > Mar 18 05:49:52 at > 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > Mar 18 05:49:52 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) > Mar 18 05:49:52 at >
[jira] [Commented] (FLINK-28544) Elasticsearch6SinkE2ECase failed with no space left on device
[ https://issues.apache.org/jira/browse/FLINK-28544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567788#comment-17567788 ] Huang Xingbo commented on FLINK-28544: -- Thanks [~martijnvisser]. I will take a look. > Elasticsearch6SinkE2ECase failed with no space left on device > - > > Key: FLINK-28544 > URL: https://issues.apache.org/jira/browse/FLINK-28544 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Tests >Affects Versions: 1.16.0 >Reporter: Huang Xingbo >Priority: Major > Labels: test-stability > > {code:java} > 2022-07-13T02:49:13.5455800Z Jul 13 02:49:13 [ERROR] Tests run: 1, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 49.38 s <<< FAILURE! - in > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase > 2022-07-13T02:49:13.5465965Z Jul 13 02:49:13 [ERROR] > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase Time elapsed: > 49.38 s <<< ERROR! > 2022-07-13T02:49:13.5466765Z Jul 13 02:49:13 java.lang.RuntimeException: > Failed to build JobManager image > 2022-07-13T02:49:13.5467621Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:67) > 2022-07-13T02:49:13.5468645Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configure(FlinkTestcontainersConfigurator.java:147) > 2022-07-13T02:49:13.5469564Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainers$Builder.build(FlinkContainers.java:197) > 2022-07-13T02:49:13.5470467Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainerTestEnvironment.(FlinkContainerTestEnvironment.java:88) > 2022-07-13T02:49:13.5471424Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainerTestEnvironment.(FlinkContainerTestEnvironment.java:51) > 2022-07-13T02:49:13.5472504Z Jul 13 02:49:13 at > org.apache.flink.streaming.tests.ElasticsearchSinkE2ECaseBase.(ElasticsearchSinkE2ECaseBase.java:58) > 2022-07-13T02:49:13.5473388Z Jul 13 02:49:13 at > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase.(Elasticsearch6SinkE2ECase.java:36) > 2022-07-13T02:49:13.5474161Z Jul 13 02:49:13 at > sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > 2022-07-13T02:49:13.5474905Z Jul 13 02:49:13 at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > 2022-07-13T02:49:13.5475756Z Jul 13 02:49:13 at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > 2022-07-13T02:49:13.5476734Z Jul 13 02:49:13 at > java.lang.reflect.Constructor.newInstance(Constructor.java:423) > 2022-07-13T02:49:13.5477495Z Jul 13 02:49:13 at > org.junit.platform.commons.util.ReflectionUtils.newInstance(ReflectionUtils.java:550) > 2022-07-13T02:49:13.5478313Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ConstructorInvocation.proceed(ConstructorInvocation.java:56) > 2022-07-13T02:49:13.5479220Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2022-07-13T02:49:13.5480165Z Jul 13 02:49:13 at > org.junit.jupiter.api.extension.InvocationInterceptor.interceptTestClassConstructor(InvocationInterceptor.java:73) > 2022-07-13T02:49:13.5481038Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 
2022-07-13T02:49:13.5481944Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2022-07-13T02:49:13.5482875Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2022-07-13T02:49:13.5483764Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > 2022-07-13T02:49:13.5484642Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > 2022-07-13T02:49:13.5486123Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > 2022-07-13T02:49:13.5488185Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:77) > 2022-07-13T02:49:13.543Z Jul 13 02:49:13 at > org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestClassConstructor(ClassBasedTestDescriptor.java:355) > 2022-07-13T02:49:13.5490237Z Jul 13 02:49:13 at >
[jira] [Commented] (FLINK-28390) Allows RocksDB to configure FIFO Compaction to reduce CPU overhead.
[ https://issues.apache.org/jira/browse/FLINK-28390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567789#comment-17567789 ] Yun Tang commented on FLINK-28390: -- [~Ming Li] I think you can create the issue or even the PR directly in the RocksDB community and share the related link here. > Allows RocksDB to configure FIFO Compaction to reduce CPU overhead. > --- > > Key: FLINK-28390 > URL: https://issues.apache.org/jira/browse/FLINK-28390 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Reporter: ming li >Priority: Major > > We know that the FIFO compaction strategy may silently delete data and thus lose data for the business. But in some scenarios, FIFO compaction can be a very effective way to reduce CPU usage. > > Flink TaskManagers are usually small-scale processes, e.g. allocated 4 CPUs and 16 GB of memory. When the state size is small, the CPU overhead of RocksDB is not high, but as the state grows, RocksDB may frequently be busy with compaction, which occupies a large amount of CPU and affects the actual computation. > > We usually configure a TTL for the state, so when using FIFO we can configure it to be slightly longer than the TTL, so that the upper layer behaves the same as before. > > Although the FIFO compaction strategy may bring space amplification, the disk is cheaper than the CPU after all, so the overall cost is reduced. > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
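For readers following FLINK-28390: until the ticket lands, FIFO compaction can already be enabled through a custom `RocksDBOptionsFactory` registered via the `state.backend.rocksdb.options-factory` option. The sketch below is a minimal illustration of that approach, not the change proposed in the ticket; the class name and the 8 GB size cap are assumptions chosen for the example.

{code:java}
import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;

import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionOptionsFIFO;
import org.rocksdb.CompactionStyle;
import org.rocksdb.DBOptions;

import java.util.Collection;

/** Minimal sketch: switch RocksDB column families to FIFO compaction. */
public class FifoCompactionOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(
            DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        // No DB-level changes are needed for this example.
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(
            ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        CompactionOptionsFIFO fifo = new CompactionOptionsFIFO();
        // FIFO drops the oldest SST files once this size is exceeded, so the cap must be
        // kept comfortably above what the configured state TTL is expected to retain.
        fifo.setMaxTableFilesSize(8L * 1024 * 1024 * 1024); // 8 GB, an example value
        handlesToClose.add(fifo);
        return currentOptions
                .setCompactionStyle(CompactionStyle.FIFO)
                .setCompactionOptionsFIFO(fifo);
    }
}
{code}

Such a factory would be registered with `state.backend.rocksdb.options-factory: com.example.FifoCompactionOptionsFactory` (a hypothetical fully qualified name) in flink-conf.yaml; as the ticket notes, this trades possible data loss and space amplification for lower compaction CPU.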
[jira] [Assigned] (FLINK-28544) Elasticsearch6SinkE2ECase failed with no space left on device
[ https://issues.apache.org/jira/browse/FLINK-28544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huang Xingbo reassigned FLINK-28544: Assignee: Huang Xingbo > Elasticsearch6SinkE2ECase failed with no space left on device > - > > Key: FLINK-28544 > URL: https://issues.apache.org/jira/browse/FLINK-28544 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Tests >Affects Versions: 1.16.0 >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Major > Labels: test-stability > > {code:java} > 2022-07-13T02:49:13.5455800Z Jul 13 02:49:13 [ERROR] Tests run: 1, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 49.38 s <<< FAILURE! - in > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase > 2022-07-13T02:49:13.5465965Z Jul 13 02:49:13 [ERROR] > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase Time elapsed: > 49.38 s <<< ERROR! > 2022-07-13T02:49:13.5466765Z Jul 13 02:49:13 java.lang.RuntimeException: > Failed to build JobManager image > 2022-07-13T02:49:13.5467621Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:67) > 2022-07-13T02:49:13.5468645Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configure(FlinkTestcontainersConfigurator.java:147) > 2022-07-13T02:49:13.5469564Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainers$Builder.build(FlinkContainers.java:197) > 2022-07-13T02:49:13.5470467Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainerTestEnvironment.(FlinkContainerTestEnvironment.java:88) > 2022-07-13T02:49:13.5471424Z Jul 13 02:49:13 at > org.apache.flink.connector.testframe.container.FlinkContainerTestEnvironment.(FlinkContainerTestEnvironment.java:51) > 2022-07-13T02:49:13.5472504Z Jul 13 02:49:13 at > org.apache.flink.streaming.tests.ElasticsearchSinkE2ECaseBase.(ElasticsearchSinkE2ECaseBase.java:58) > 2022-07-13T02:49:13.5473388Z Jul 13 02:49:13 at > org.apache.flink.streaming.tests.Elasticsearch6SinkE2ECase.(Elasticsearch6SinkE2ECase.java:36) > 2022-07-13T02:49:13.5474161Z Jul 13 02:49:13 at > sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > 2022-07-13T02:49:13.5474905Z Jul 13 02:49:13 at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > 2022-07-13T02:49:13.5475756Z Jul 13 02:49:13 at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > 2022-07-13T02:49:13.5476734Z Jul 13 02:49:13 at > java.lang.reflect.Constructor.newInstance(Constructor.java:423) > 2022-07-13T02:49:13.5477495Z Jul 13 02:49:13 at > org.junit.platform.commons.util.ReflectionUtils.newInstance(ReflectionUtils.java:550) > 2022-07-13T02:49:13.5478313Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ConstructorInvocation.proceed(ConstructorInvocation.java:56) > 2022-07-13T02:49:13.5479220Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2022-07-13T02:49:13.5480165Z Jul 13 02:49:13 at > org.junit.jupiter.api.extension.InvocationInterceptor.interceptTestClassConstructor(InvocationInterceptor.java:73) > 2022-07-13T02:49:13.5481038Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 2022-07-13T02:49:13.5481944Z Jul 13 02:49:13 at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2022-07-13T02:49:13.5482875Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2022-07-13T02:49:13.5483764Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > 2022-07-13T02:49:13.5484642Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > 2022-07-13T02:49:13.5486123Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > 2022-07-13T02:49:13.5488185Z Jul 13 02:49:13 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:77) > 2022-07-13T02:49:13.543Z Jul 13 02:49:13 at > org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestClassConstructor(ClassBasedTestDescriptor.java:355) > 2022-07-13T02:49:13.5490237Z Jul 13 02:49:13 at >
[jira] [Commented] (FLINK-28529) ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode failed with CheckpointException: Checkpoint expired before comple
[ https://issues.apache.org/jira/browse/FLINK-28529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567786#comment-17567786 ] Huang Xingbo commented on FLINK-28529: -- Hi [~Yanfei Lei] Any updates on the progress? > ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode > failed with CheckpointException: Checkpoint expired before completing > --- > > Key: FLINK-28529 > URL: https://issues.apache.org/jira/browse/FLINK-28529 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / State Backends >Affects Versions: 1.16.0 >Reporter: Huang Xingbo >Assignee: Yanfei Lei >Priority: Critical > Labels: test-stability > > {code:java} > 2022-07-12T04:30:49.9912088Z Jul 12 04:30:49 [ERROR] > ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode > Time elapsed: 617.048 s <<< ERROR! > 2022-07-12T04:30:49.9913108Z Jul 12 04:30:49 > java.util.concurrent.ExecutionException: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint expired > before completing. > 2022-07-12T04:30:49.9913880Z Jul 12 04:30:49 at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2022-07-12T04:30:49.9914606Z Jul 12 04:30:49 at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2022-07-12T04:30:49.9915572Z Jul 12 04:30:49 at > org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode(ChangelogPeriodicMaterializationSwitchStateBackendITCase.java:125) > 2022-07-12T04:30:49.9916483Z Jul 12 04:30:49 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2022-07-12T04:30:49.9917377Z Jul 12 04:30:49 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2022-07-12T04:30:49.9918121Z Jul 12 04:30:49 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2022-07-12T04:30:49.9918788Z Jul 12 04:30:49 at > java.lang.reflect.Method.invoke(Method.java:498) > 2022-07-12T04:30:49.9919456Z Jul 12 04:30:49 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > 2022-07-12T04:30:49.9920193Z Jul 12 04:30:49 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2022-07-12T04:30:49.9920923Z Jul 12 04:30:49 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > 2022-07-12T04:30:49.9921630Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2022-07-12T04:30:49.9922326Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2022-07-12T04:30:49.9923023Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2022-07-12T04:30:49.9923708Z Jul 12 04:30:49 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > 2022-07-12T04:30:49.9924449Z Jul 12 04:30:49 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2022-07-12T04:30:49.9925124Z Jul 12 04:30:49 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > 2022-07-12T04:30:49.9925912Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-07-12T04:30:49.9926742Z Jul 12 04:30:49 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > 2022-07-12T04:30:49.9928142Z 
Jul 12 04:30:49 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > 2022-07-12T04:30:49.9928715Z Jul 12 04:30:49 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > 2022-07-12T04:30:49.9929311Z Jul 12 04:30:49 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > 2022-07-12T04:30:49.9929863Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > 2022-07-12T04:30:49.9930376Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > 2022-07-12T04:30:49.9930911Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > 2022-07-12T04:30:49.9931441Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > 2022-07-12T04:30:49.9931975Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > 2022-07-12T04:30:49.9932493Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) >
[jira] [Commented] (FLINK-28529) ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode failed with CheckpointException: Checkpoint expired before comple
[ https://issues.apache.org/jira/browse/FLINK-28529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567785#comment-17567785 ] Huang Xingbo commented on FLINK-28529: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=38295=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba > ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode > failed with CheckpointException: Checkpoint expired before completing > --- > > Key: FLINK-28529 > URL: https://issues.apache.org/jira/browse/FLINK-28529 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / State Backends >Affects Versions: 1.16.0 >Reporter: Huang Xingbo >Assignee: Yanfei Lei >Priority: Critical > Labels: test-stability > > {code:java} > 2022-07-12T04:30:49.9912088Z Jul 12 04:30:49 [ERROR] > ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode > Time elapsed: 617.048 s <<< ERROR! > 2022-07-12T04:30:49.9913108Z Jul 12 04:30:49 > java.util.concurrent.ExecutionException: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint expired > before completing. > 2022-07-12T04:30:49.9913880Z Jul 12 04:30:49 at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2022-07-12T04:30:49.9914606Z Jul 12 04:30:49 at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2022-07-12T04:30:49.9915572Z Jul 12 04:30:49 at > org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationSwitchStateBackendITCase.testSwitchFromDisablingToEnablingInClaimMode(ChangelogPeriodicMaterializationSwitchStateBackendITCase.java:125) > 2022-07-12T04:30:49.9916483Z Jul 12 04:30:49 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2022-07-12T04:30:49.9917377Z Jul 12 04:30:49 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2022-07-12T04:30:49.9918121Z Jul 12 04:30:49 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2022-07-12T04:30:49.9918788Z Jul 12 04:30:49 at > java.lang.reflect.Method.invoke(Method.java:498) > 2022-07-12T04:30:49.9919456Z Jul 12 04:30:49 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > 2022-07-12T04:30:49.9920193Z Jul 12 04:30:49 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2022-07-12T04:30:49.9920923Z Jul 12 04:30:49 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > 2022-07-12T04:30:49.9921630Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2022-07-12T04:30:49.9922326Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2022-07-12T04:30:49.9923023Z Jul 12 04:30:49 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2022-07-12T04:30:49.9923708Z Jul 12 04:30:49 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > 2022-07-12T04:30:49.9924449Z Jul 12 04:30:49 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2022-07-12T04:30:49.9925124Z Jul 12 04:30:49 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > 2022-07-12T04:30:49.9925912Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-07-12T04:30:49.9926742Z Jul 12 04:30:49 at > 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > 2022-07-12T04:30:49.9928142Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > 2022-07-12T04:30:49.9928715Z Jul 12 04:30:49 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > 2022-07-12T04:30:49.9929311Z Jul 12 04:30:49 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > 2022-07-12T04:30:49.9929863Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > 2022-07-12T04:30:49.9930376Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > 2022-07-12T04:30:49.9930911Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > 2022-07-12T04:30:49.9931441Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > 2022-07-12T04:30:49.9931975Z Jul 12 04:30:49 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) >
[jira] [Updated] (FLINK-27013) Hive dialect supports IS_DISTINCT_FROM
[ https://issues.apache.org/jira/browse/FLINK-27013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luoyuxia updated FLINK-27013: - Description: It'll throw the exception with error message "Unsupported call: IS DISTINCT FROM(STRING, STRING) " with the following SQL in Hive dialect: {code:java} create table test(x string, y string); select x <=> y, (x <=> y) = false from test; {code} And I found the IS_NOT_DISTINCT_FROM is supported in ExprCodeGenerator.scala, but IS_ DISTINCT_FROM is not. The IS_ DISTINCT_FROM should also be implemented in ExprCodeGenerator. Then, I also found such sql can work in Flink SQL {code:java} f63 IS DISTINCT FROM f64 {code} The reason is such sql will be converted to "AND(OR(IS NOT NULL($63), IS NOT NULL($64)), IS NOT TRUE(=($63, $64)))" intead of "IS_DISTINCT_FROM". was: It'll throw the exception with error message "Unsupported call: IS DISTINCT FROM(STRING, STRING) " with the following SQL in Hive dialect: {code:java} create table test(x string, y string); select x <=> y, (x <=> y) = false from test; {code} And I found the IS_NOT_DISTINCT_FROM is supported in ExprCodeGenerator.scala, but IS_ DISTINCT_FROM is not. The IS_ DISTINCT_FROM should also be implemented in ExprCodeGenerator. > Hive dialect supports IS_DISTINCT_FROM > -- > > Key: FLINK-27013 > URL: https://issues.apache.org/jira/browse/FLINK-27013 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Hive >Reporter: luoyuxia >Priority: Major > Labels: pull-request-available > > It'll throw the exception with error message "Unsupported call: IS DISTINCT > FROM(STRING, STRING) " with the following SQL in Hive dialect: > > {code:java} > create table test(x string, y string); > select x <=> y, (x <=> y) = false from test; {code} > > And I found the IS_NOT_DISTINCT_FROM is supported in > ExprCodeGenerator.scala, but IS_ DISTINCT_FROM is not. The IS_ DISTINCT_FROM > should also be implemented in ExprCodeGenerator. > > Then, I also found such sql can work in Flink SQL > > {code:java} > f63 IS DISTINCT FROM f64 {code} > The reason is such sql will be converted to "AND(OR(IS NOT NULL($63), IS NOT > NULL($64)), IS NOT TRUE(=($63, $64)))" intead of "IS_DISTINCT_FROM". > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-ml] yunfengzhou-hub commented on a diff in pull request #131: [FLINK-28563] Add Transformer for VectorSlicer
yunfengzhou-hub commented on code in PR #131: URL: https://github.com/apache/flink-ml/pull/131#discussion_r922941135 ## flink-ml-lib/src/test/java/org/apache/flink/ml/feature/VectorSlicerTest.java: ## @@ -0,0 +1,156 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.ml.feature; + +import org.apache.flink.api.common.restartstrategy.RestartStrategies; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.ml.feature.vectorslicer.VectorSlicer; +import org.apache.flink.ml.linalg.DenseVector; +import org.apache.flink.ml.linalg.SparseVector; +import org.apache.flink.ml.linalg.Vectors; +import org.apache.flink.ml.util.TestUtils; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions; +import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment; +import org.apache.flink.table.api.Table; +import org.apache.flink.table.api.bridge.java.StreamTableEnvironment; +import org.apache.flink.test.util.AbstractTestBase; +import org.apache.flink.types.Row; + +import org.apache.commons.collections.IteratorUtils; +import org.junit.Before; +import org.junit.Test; + +import java.util.Arrays; +import java.util.List; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNull; + +/** Tests VectorSlicer. 
*/ +public class VectorSlicerTest extends AbstractTestBase { + +private StreamTableEnvironment tEnv; +private Table inputDataTable; + +private static final List INPUT_DATA = +Arrays.asList( +Row.of( +0, +Vectors.dense(2.1, 3.1, 2.3, 3.4, 5.3, 5.1), +Vectors.sparse(5, new int[] {1, 3, 4}, new double[] {0.1, 0.2, 0.3})), +Row.of( +1, +Vectors.dense(2.3, 4.1, 1.3, 2.4, 5.1, 4.1), +Vectors.sparse(5, new int[] {1, 2, 4}, new double[] {0.1, 0.2, 0.3}))); + +private static final DenseVector EXPECTED_OUTPUT_DATA_1 = Vectors.dense(2.1, 3.1, 2.3); +private static final DenseVector EXPECTED_OUTPUT_DATA_2 = Vectors.dense(2.3, 4.1, 1.3); + +private static final SparseVector EXPECTED_OUTPUT_DATA_3 = +Vectors.sparse(3, new int[] {1}, new double[] {0.1}); +private static final SparseVector EXPECTED_OUTPUT_DATA_4 = +Vectors.sparse(3, new int[] {1, 2}, new double[] {0.1, 0.2}); + +@Before +public void before() { +Configuration config = new Configuration(); + config.set(ExecutionCheckpointingOptions.ENABLE_CHECKPOINTS_AFTER_TASKS_FINISH, true); +StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(config); +env.setParallelism(4); +env.enableCheckpointing(100); +env.setRestartStrategy(RestartStrategies.noRestart()); +tEnv = StreamTableEnvironment.create(env); +DataStream dataStream = env.fromCollection(INPUT_DATA); +inputDataTable = tEnv.fromDataStream(dataStream).as("id", "vec", "sparseVec"); +} + +private void verifyOutputResult(Table output, String outputCol, boolean isSparse) +throws Exception { +DataStream dataStream = tEnv.toDataStream(output); +List results = IteratorUtils.toList(dataStream.executeAndCollect()); +assertEquals(2, results.size()); +for (Row result : results) { +if (result.getField(0) == (Object) 0) { +if (isSparse) { +assertEquals(EXPECTED_OUTPUT_DATA_3, result.getField(outputCol)); +} else { +assertEquals(EXPECTED_OUTPUT_DATA_1, result.getField(outputCol)); +} +} else if (result.getField(0) == (Object) 1) { +if (isSparse) { +assertEquals(EXPECTED_OUTPUT_DATA_4, result.getField(outputCol)); +} else { +assertEquals(EXPECTED_OUTPUT_DATA_2, result.getField(outputCol)); +} +} else { +
[jira] [Commented] (FLINK-24787) Add more details of state latency tracking documentation
[ https://issues.apache.org/jira/browse/FLINK-24787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567783#comment-17567783 ] Yun Tang commented on FLINK-24787: -- [~masteryhx] I agree, would you like to take this ticket? > Add more details of state latency tracking documentation > > > Key: FLINK-24787 > URL: https://issues.apache.org/jira/browse/FLINK-24787 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Runtime / Metrics, Runtime / State > Backends >Reporter: Yun Tang >Priority: Major > Fix For: 1.16.0 > > > Current documentation only tells how to enable or configure state latency > tracking related options. We could add more details of state specific > descriptions. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #220: [FLINK-28565] Create `NOTICE` file for `flink-table-store-hive-catalog`
JingsongLi commented on code in PR #220: URL: https://github.com/apache/flink-table-store/pull/220#discussion_r922941660 ## flink-table-store-hive/flink-table-store-hive-catalog/src/main/resources/META-INF/NOTICE: ## @@ -0,0 +1,60 @@ +flink-table-store-hive-catalog +Copyright 2014-2021 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + +This project bundles the following dependencies under the Apache Software License 2.0. (http://www.apache.org/licenses/LICENSE-2.0.txt) + +- org.apache.hive:hive-metastore:2.3.4 + +This project bundles the following dependencies under the BSD license. +See bundled license files for details. + +- org.antlr:antlr-runtime:3.5.2 Review Comment: There is no antlr in bundled jar. ## flink-table-store-hive/flink-table-store-hive-catalog/src/main/resources/META-INF/NOTICE: ## @@ -0,0 +1,60 @@ +flink-table-store-hive-catalog +Copyright 2014-2021 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + +This project bundles the following dependencies under the Apache Software License 2.0. (http://www.apache.org/licenses/LICENSE-2.0.txt) + +- org.apache.hive:hive-metastore:2.3.4 + +This project bundles the following dependencies under the BSD license. +See bundled license files for details. + +- org.antlr:antlr-runtime:3.5.2 + +The bundled Apache Hive org.apache.hive:hive-metastore dependency bundles the following dependencies under +the Apache Software License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt) + +- com.google.guava:guava:14.0.1 Review Comment: No guava -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zstraw commented on pull request #17956: [FLINK-18779][table sql/planner]Support the SupportsFilterPushDown for LookupTableSource
zstraw commented on PR #17956: URL: https://github.com/apache/flink/pull/17956#issuecomment-1186706854 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #216: [FLINK-28482] num-sorted-run.stop-trigger introduced a unstable merging
JingsongLi commented on code in PR #216: URL: https://github.com/apache/flink-table-store/pull/216#discussion_r922940630 ## flink-table-store-core/src/main/java/org/apache/flink/table/store/file/mergetree/compact/MergeTreeCompactManager.java: ## @@ -69,7 +75,17 @@ public void submitCompaction() { throw new IllegalStateException( "Please finish the previous compaction before submitting new one."); } -strategy.pick(levels.numberOfLevels(), levels.levelSortedRuns()) +List sortedRuns = levels.levelSortedRuns(); +if (maxSortedRunNum != null && maxSortedRunNum < sortedRuns.size()) { +pickSortedRuns(sortedRuns.subList(0, maxSortedRunNum)); +pickSortedRuns(sortedRuns.subList(maxSortedRunNum, sortedRuns.size())); +} else { +pickSortedRuns(sortedRuns); +} +} + +private void pickSortedRuns(List sortedRuns) { +strategy.pick(levels.numberOfLevels(), sortedRuns) .ifPresent( unit -> { if (unit.files().size() < 2) { Review Comment: I think it is better to limit sorted runs in `CompactStrategy`. We can pass `maxRuns` to `UniversalCompaction.this(...)`, and limit runs in `createUnit`. The `strategy.pick(levels.numberOfLevels(), partial runs)` may lead to incorrect runs, because strategy doesn't know the global information, it's not sure if there are existing runs in the deep layers. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
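To make the reviewer's suggestion above more concrete, here is a hypothetical sketch of capping the number of sorted runs inside the strategy itself, so the strategy keeps seeing the full run list while still bounding the unit it emits. All type and method names are placeholders for illustration and are not the actual flink-table-store API.

```java
import java.util.List;
import java.util.Optional;

/** Placeholder for a sorted run; stands in for the real LevelSortedRun type. */
interface SortedRun {}

/**
 * Hypothetical strategy that receives maxRuns via its constructor and limits the unit
 * it creates, instead of the caller slicing the run list before calling pick().
 */
class CappedCompactionStrategy {

    private final int maxRuns; // corresponds to the proposed constructor argument

    CappedCompactionStrategy(int maxRuns) {
        this.maxRuns = maxRuns;
    }

    /** The strategy sees every run (global information) but emits a bounded unit. */
    Optional<List<SortedRun>> pick(List<SortedRun> allRuns) {
        if (allRuns.size() < 2) {
            return Optional.empty(); // nothing worth merging
        }
        return Optional.of(allRuns.subList(0, Math.min(maxRuns, allRuns.size())));
    }
}
```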
[GitHub] [flink] zstraw commented on pull request #17956: [FLINK-18779][table sql/planner]Support the SupportsFilterPushDown for LookupTableSource
zstraw commented on PR #17956: URL: https://github.com/apache/flink/pull/17956#issuecomment-1186704611 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-27587) Support keyed co-broadcast processing in PyFlink
[ https://issues.apache.org/jira/browse/FLINK-27587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huang Xingbo closed FLINK-27587. Resolution: Done Merged into master via 7a9016cca05aeb55cf6d66a163d96fbd75e42963 > Support keyed co-broadcast processing in PyFlink > > > Key: FLINK-27587 > URL: https://issues.apache.org/jira/browse/FLINK-27587 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Affects Versions: 1.15.0 >Reporter: Juntao Hu >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > > Support > KeyedStream.connect(BroadcastStream).process(). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] flinkbot commented on pull request #20290: [FLINK-28140][python][docs] Improve the documentation by adding Python examples in DataStream API Integration page
flinkbot commented on PR #20290: URL: https://github.com/apache/flink/pull/20290#issuecomment-1186702420 ## CI report: * 17fd790a7a66619503caf6e02af97da2d2574591 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] HuangXingBo closed pull request #20144: [FLINK-27587][python] Support keyed co-broadcast processing
HuangXingBo closed pull request #20144: [FLINK-27587][python] Support keyed co-broadcast processing URL: https://github.com/apache/flink/pull/20144 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] TanYuxin-tyx commented on a diff in pull request #20200: [FLINK-28446][runtime] Expose more information in PartitionDescriptor to support more optimized Shuffle Service
TanYuxin-tyx commented on code in PR #20200: URL: https://github.com/apache/flink/pull/20200#discussion_r922937503 ## flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/PartitionDescriptor.java: ## @@ -53,14 +54,22 @@ public class PartitionDescriptor implements Serializable { /** Connection index to identify this partition of intermediate result. */ private final int connectionIndex; +/** Whether the intermediate result is a broadcast result. */ +private final boolean isBroadcast; + +/** The distribution pattern of the intermediate result. */ +private final DistributionPattern distributionPattern; Review Comment: @wsry Good point. Thanks for reviewing the code. I have updated the PR according to the comments, could you please take a look again? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] pengmide opened a new pull request, #20290: [FLINK-28140][python][docs] Improve the documentation by adding Python examples in DataStream API Integration page
pengmide opened a new pull request, #20290: URL: https://github.com/apache/flink/pull/20290 ## What is the purpose of the change Improve the documentation by adding Python examples. ## Brief change log - Improve the timezone api documentation by adding Python examples. - Improve the table api documentation by adding Python examples. - Improve the generating watermarks api documentation by adding Python examples. - Improve the parallel configuration api documentation by adding Python examples. - Improve the timezone api documentation by adding Python examples. - Improve the state_backends api documentation by adding Python examples. - Improve the task_failure_recovery api documentation by adding Python examples. ## Verifying this change - This Change without andy test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-28578) Upgrade Spark version of flink-table-store-spark to 3.2.2
[ https://issues.apache.org/jira/browse/FLINK-28578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee closed FLINK-28578. Assignee: Nicholas Jiang Resolution: Fixed master: 2b8a6aa75b3be48404070088460ba00ae2927835 > Upgrade Spark version of flink-table-store-spark to 3.2.2 > - > > Key: FLINK-28578 > URL: https://issues.apache.org/jira/browse/FLINK-28578 > Project: Flink > Issue Type: Improvement > Components: Table Store >Affects Versions: table-store-0.2.0 >Reporter: Nicholas Jiang >Assignee: Nicholas Jiang >Priority: Minor > Labels: pull-request-available > Fix For: table-store-0.2.0 > > > CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark > UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or > 3.3.0 or later. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] JingsongLi merged pull request #221: [FLINK-28578] Upgrade Spark version of flink-table-store-spark to 3.2.2
JingsongLi merged PR #221: URL: https://github.com/apache/flink-table-store/pull/221 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zstraw commented on pull request #17956: [FLINK-18779][table sql/planner]Support the SupportsFilterPushDown for LookupTableSource
zstraw commented on PR #17956: URL: https://github.com/apache/flink/pull/17956#issuecomment-1186698116 @flinkbot run azure re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] HuangXingBo commented on pull request #19864: [FLINK-27162][runtime] Trigger non-periodic checkpoint in 'timer' thread
HuangXingBo commented on PR #19864: URL: https://github.com/apache/flink/pull/19864#issuecomment-1186698256 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] swuferhong commented on a diff in pull request #20084: [FLINK-27988][table-planner] Let HiveTableSource extend from SupportStatisticsReport
swuferhong commented on code in PR #20084: URL: https://github.com/apache/flink/pull/20084#discussion_r922935948 ## flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java: ## @@ -261,6 +279,93 @@ public DynamicTableSource copy() { return source; } +@Override +public TableStats reportStatistics() { +try { +// only support BOUNDED source +if (isStreamingSource()) { +return TableStats.UNKNOWN; +} +if (flinkConf.get(HiveOptions.SOURCE_REPORT_STATISTICS) +!= FileSystemConnectorOptions.FileStatisticsType.ALL) { +return TableStats.UNKNOWN; +} + +HiveSourceBuilder sourceBuilder = +new HiveSourceBuilder(jobConf, flinkConf, tablePath, hiveVersion, catalogTable) +.setProjectedFields(projectedFields) +.setLimit(limit); Review Comment: > we should consider how to handle the case after limit push and filter push down Now, hive source don't support filter push down. For limit push down, `PushLimitIntoTableSourceScanRule` happened after `FlinkRecomputeStatisticsProgram`, and `PushLimitIntoTableSourceScanRule` can re-compute the new row count. So, I think there is no need to add re-compute stats logic in HiveTableSource. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-24342) Filesystem sink does not escape right bracket in partition name
[ https://issues.apache.org/jira/browse/FLINK-24342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Trushev updated FLINK-24342: -- Priority: Minor (was: Not a Priority) > Filesystem sink does not escape right bracket in partition name > --- > > Key: FLINK-24342 > URL: https://issues.apache.org/jira/browse/FLINK-24342 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Reporter: Alexander Trushev >Priority: Minor > Labels: auto-deprioritized-minor, pull-request-available > > h3. How to reproduce the problem > In the following code snippet filesystem sink creates a partition named > "\{date\}" and writes value "1" to file. > {code:sql} > create table sink ( > val int, > part string > ) partitioned by (part) with ( > 'connector' = 'filesystem', > 'path' = '/tmp/sink', > 'format' = 'csv' > ); > insert into sink values (1, '{date}'); > {code} > h3. Expected behavior > Escaped "\{" and "\}" in partition name > {code} > $ ls /tmp/sink/ > part=%7Bdate%7D > {code} > h3. Actual behavior > Escaped only "\{" in partition name > {code} > $ ls /tmp/sink/ > part=%7Bdate} > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] deadwind4 commented on pull request #20278: [FLINK-28509][table][python] Support REVERSE built-in function in Table API
deadwind4 commented on PR #20278: URL: https://github.com/apache/flink/pull/20278#issuecomment-1186684843 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-28573) Nested type will lose nullability when converting from TableSchema
[ https://issues.apache.org/jira/browse/FLINK-28573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee closed FLINK-28573. Assignee: Jane Chan Resolution: Fixed master: b1b9827b08d1182b30cdd34464d154560e7e2c62 > Nested type will lose nullability when converting from TableSchema > -- > > Key: FLINK-28573 > URL: https://issues.apache.org/jira/browse/FLINK-28573 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.2.0 >Reporter: Jane Chan >Assignee: Jane Chan >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.2.0 > > > E.g. ArrayDataType, MultisetDataType etc -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] JingsongLi merged pull request #219: [FLINK-28573] Nested type will lose nullability when converting from TableSchema
JingsongLi merged PR #219: URL: https://github.com/apache/flink-table-store/pull/219 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-28578) Upgrade Spark version of flink-table-store-spark to 3.2.2
[ https://issues.apache.org/jira/browse/FLINK-28578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567768#comment-17567768 ] Nicholas Jiang commented on FLINK-28578: [~lzljs3620320], please help to assign this ticket to me. > Upgrade Spark version of flink-table-store-spark to 3.2.2 > - > > Key: FLINK-28578 > URL: https://issues.apache.org/jira/browse/FLINK-28578 > Project: Flink > Issue Type: Improvement > Components: Table Store >Affects Versions: table-store-0.2.0 >Reporter: Nicholas Jiang >Priority: Minor > Labels: pull-request-available > Fix For: table-store-0.2.0 > > > CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark > UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or > 3.3.0 or later. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-24342) Filesystem sink does not escape right bracket in partition name
[ https://issues.apache.org/jira/browse/FLINK-24342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Trushev updated FLINK-24342: -- Description: h3. How to reproduce the problem In the following code snippet filesystem sink creates a partition named "\{date\}" and writes value "1" to file. {code:sql} create table sink ( val int, part string ) partitioned by (part) with ( 'connector' = 'filesystem', 'path' = '/tmp/sink', 'format' = 'csv' ); insert into sink values (1, '{date}'); {code} h3. Expected behavior Escaped "\{" and "\}" in partition name {code} $ ls /tmp/sink/ part=%7Bdate%7D {code} h3. Actual behavior Escaped only "\{" in partition name {code} $ ls /tmp/sink/ part=%7Bdate} {code} was: h3. How to reproduce the problem In the following code snippet filesystem sink creates a partition named "\{date\}" and writes content "1" to file. {code:scala} val env = StreamExecutionEnvironment.getExecutionEnvironment val tEnv = StreamTableEnvironment.create(env) val source = env.fromElements(("{date}", 1)) tEnv.createTemporaryView("source", source) val sinkSql = """ |create table sink ( | part string, | content int |) partitioned by (part) with ( | 'connector' = 'filesystem', | 'path' = '/tmp/sink', | 'format' = 'csv' |) |""".stripMargin tEnv.executeSql(sinkSql).await() tEnv.executeSql("insert into sink select * from source").await() {code} h3. Expected behavior Escaped "\{" and "\}" in partition name {code} $ ls /tmp/sink/ part=%7Bdate%7D {code} h3. Actual behavior Escaped only "\{" in partition name {code} $ ls /tmp/sink/ part=%7Bdate} {code} > Filesystem sink does not escape right bracket in partition name > --- > > Key: FLINK-24342 > URL: https://issues.apache.org/jira/browse/FLINK-24342 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Reporter: Alexander Trushev >Priority: Not a Priority > Labels: auto-deprioritized-minor, pull-request-available > > h3. How to reproduce the problem > In the following code snippet filesystem sink creates a partition named > "\{date\}" and writes value "1" to file. > {code:sql} > create table sink ( > val int, > part string > ) partitioned by (part) with ( > 'connector' = 'filesystem', > 'path' = '/tmp/sink', > 'format' = 'csv' > ); > insert into sink values (1, '{date}'); > {code} > h3. Expected behavior > Escaped "\{" and "\}" in partition name > {code} > $ ls /tmp/sink/ > part=%7Bdate%7D > {code} > h3. Actual behavior > Escaped only "\{" in partition name > {code} > $ ls /tmp/sink/ > part=%7Bdate} > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-28464) Support CsvReaderFormat in PyFlink
[ https://issues.apache.org/jira/browse/FLINK-28464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-28464: Affects Version/s: (was: 1.15.1) > Support CsvReaderFormat in PyFlink > -- > > Key: FLINK-28464 > URL: https://issues.apache.org/jira/browse/FLINK-28464 > Project: Flink > Issue Type: New Feature > Components: API / Python >Reporter: Juntao Hu >Assignee: Juntao Hu >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (FLINK-28464) Support CsvReaderFormat in PyFlink
[ https://issues.apache.org/jira/browse/FLINK-28464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu closed FLINK-28464. --- Assignee: Juntao Hu Resolution: Fixed Merged to master via cef3aa136edb555b229950fb09067e042dd4361f > Support CsvReaderFormat in PyFlink > -- > > Key: FLINK-28464 > URL: https://issues.apache.org/jira/browse/FLINK-28464 > Project: Flink > Issue Type: New Feature > Components: API / Python >Affects Versions: 1.15.1 >Reporter: Juntao Hu >Assignee: Juntao Hu >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-28578) Upgrade Spark version of flink-table-store-spark to 3.2.2
[ https://issues.apache.org/jira/browse/FLINK-28578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-28578: --- Labels: pull-request-available (was: ) > Upgrade Spark version of flink-table-store-spark to 3.2.2 > - > > Key: FLINK-28578 > URL: https://issues.apache.org/jira/browse/FLINK-28578 > Project: Flink > Issue Type: Improvement > Components: Table Store >Affects Versions: table-store-0.2.0 >Reporter: Nicholas Jiang >Priority: Minor > Labels: pull-request-available > Fix For: table-store-0.2.0 > > > CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark > UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or > 3.3.0 or later. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] SteNicholas opened a new pull request, #221: [FLINK-28578] Upgrade Spark version of flink-table-store-spark to 3.2.2
SteNicholas opened a new pull request, #221: URL: https://github.com/apache/flink-table-store/pull/221 CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or 3.3.0 or later. **The brief change log** - Upgrade Spark version of flink-table-store-spark from 3.2.1 to 3.2.2. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] dianfu closed pull request #20220: [FLINK-28464][python][format] Support CsvReaderFormat
dianfu closed pull request #20220: [FLINK-28464][python][format] Support CsvReaderFormat URL: https://github.com/apache/flink/pull/20220 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-28578) Upgrade Spark version of flink-table-store-spark to 3.2.2
[ https://issues.apache.org/jira/browse/FLINK-28578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Jiang updated FLINK-28578: --- Summary: Upgrade Spark version of flink-table-store-spark to 3.2.2 (was: Upgrade Spark version of flink-table-store-spark to 3.1.3, 3.2.2 or 3.3.0 or later) > Upgrade Spark version of flink-table-store-spark to 3.2.2 > - > > Key: FLINK-28578 > URL: https://issues.apache.org/jira/browse/FLINK-28578 > Project: Flink > Issue Type: Improvement > Components: Table Store >Affects Versions: table-store-0.2.0 >Reporter: Nicholas Jiang >Priority: Minor > Fix For: table-store-0.2.0 > > > CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark > UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or > 3.3.0 or later. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-28578) Upgrade Spark version of flink-table-store-spark to 3.1.3, 3.2.2 or 3.3.0 or later
Nicholas Jiang created FLINK-28578: -- Summary: Upgrade Spark version of flink-table-store-spark to 3.1.3, 3.2.2 or 3.3.0 or later Key: FLINK-28578 URL: https://issues.apache.org/jira/browse/FLINK-28578 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.2.0 Reporter: Nicholas Jiang Fix For: table-store-0.2.0 CVE-2022-33891: Apache Spark shell command injection vulnerability via Spark UI. Upgrade to supported Apache Spark maintenance release 3.1.3, 3.2.2 or 3.3.0 or later. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] bzhaoopenstack commented on pull request #20193: [WIP][FLINK-28433][connector/jdbc]Add mariadb jdbc connection validation
bzhaoopenstack commented on PR #20193: URL: https://github.com/apache/flink/pull/20193#issuecomment-1186671889 Hi all. @MartijnVisser @hadronzoo any updates on this one? Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-ml] yunfengzhou-hub commented on pull request #132: [FLINK-28571]Add Chi-squared test as Transformer to ml.feature
yunfengzhou-hub commented on PR #132: URL: https://github.com/apache/flink-ml/pull/132#issuecomment-1186670613 Hi @taosiyuan163, thanks for contributing to Flink ML. Could you please verify your code by running the `mvn clean package` command in the root folder? It will check the formats and run all the tests. I can see that there are still errors after cloning your repository and running the command above. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] xinbinhuang commented on a diff in pull request #20289: [FLINK-23633][FLIP-150][connector/common] HybridSource: Support dynamic stop position in FileSource
xinbinhuang commented on code in PR #20289: URL: https://github.com/apache/flink/pull/20289#discussion_r922925804 ## flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/hybrid/HybridSourceSplitEnumerator.java: ## @@ -224,7 +226,9 @@ public void handleSourceEvent(int subtaskId, SourceEvent sourceEvent) { } // track readers that have finished processing for current enumerator +// TODO: should finishedReaders be reset after switching to a new enumerator? Review Comment: @tweise It seems that `finishedReaders` will keep increasing after the first switch unless the job is restarted. So if there are more than 2 sources in the chain, the 3rd and later sources may never get triggered. Is my understanding correct? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
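A minimal sketch of the reset the TODO above asks about. This is not the actual `HybridSourceSplitEnumerator` code; apart from the idea of a `finishedReaders` set and a current source index, the class, fields, and parallelism value below are assumptions made purely for illustration.

```java
// Sketch only: illustrates the reset suggested by the TODO, assuming
// finishedReaders is a Set<Integer> of subtask ids tracked for the
// currently active source. Names besides finishedReaders and
// currentSourceIndex are hypothetical, not the real enumerator fields.
import java.util.HashSet;
import java.util.Set;

class SwitchTrackingSketch {
    private final Set<Integer> finishedReaders = new HashSet<>();
    private int currentSourceIndex = 0;
    private final int parallelism = 4; // assumed reader count

    void onReaderFinished(int subtaskId) {
        finishedReaders.add(subtaskId);
        if (finishedReaders.size() == parallelism) {
            switchToNextSource();
        }
    }

    private void switchToNextSource() {
        currentSourceIndex++;
        // Without this reset, subtasks that finished source N still count as
        // finished for source N + 1, so later switches are decided on stale
        // entries rather than on readers that actually finished the new source.
        finishedReaders.clear();
    }
}
```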
[GitHub] [flink] JackWangCS commented on pull request #20206: [FLINK-25909][runtime][security] Add HBaseDelegationTokenProvider
JackWangCS commented on PR #20206: URL: https://github.com/apache/flink/pull/20206#issuecomment-1186667143 Hi @gaborgsomogyi, I found some issues with the KerberosDelegationTokenManager while testing the HBaseDelegationTokenProvider. The tokens obtained by the KerberosDelegationTokenManager could not be renewed, which caused the application submission to fail. You can find more logs at: https://gist.github.com/JackWangCS/0b1ec2c1137c686ab874124569063234. I have already tested the HBaseDelegationTokenProvider to obtain an HBase delegation token, but I need more time to test the renewal part. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
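For context, a hypothetical diagnostic sketch of one way to check whether obtained delegation tokens can actually be renewed. This is not the Flink provider or manager code; it only uses the standard Hadoop `Credentials`/`Token` API, and the class and method names below are illustrative assumptions.

```java
// Hypothetical diagnostic sketch (not Flink code): iterate over the tokens in
// a Hadoop Credentials object and try to renew each one, logging failures
// instead of letting the application submission die on them.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class TokenRenewalCheck {

    public static void checkRenewable(Credentials credentials, Configuration conf) {
        for (Token<?> token : credentials.getAllTokens()) {
            try {
                // renew() throws if the token cannot be renewed (e.g. wrong renewer)
                long nextExpiry = token.renew(conf);
                System.out.println("Token " + token.getKind() + " renewable until " + nextExpiry);
            } catch (Exception e) {
                System.out.println("Token " + token.getKind() + " could not be renewed: " + e);
            }
        }
    }
}
```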
[jira] [Commented] (FLINK-23633) HybridSource: Support dynamic stop position in FileSource
[ https://issues.apache.org/jira/browse/FLINK-23633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567763#comment-17567763 ] Xinbin Huang commented on FLINK-23633: -- [~thw] I've made a PR to implement this. Would you have time to take a look? https://github.com/apache/flink/pull/20289 > HybridSource: Support dynamic stop position in FileSource > - > > Key: FLINK-23633 > URL: https://issues.apache.org/jira/browse/FLINK-23633 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem >Reporter: Thomas Weise >Assignee: Xinbin Huang >Priority: Major > Labels: pull-request-available > > As of FLINK-22670 FileSource can be used with HybridSource with fixed end > position. To support the scenario where the switch position isn't known ahead > of time, FileSource needs to have a hook to decide when it is time to stop > with continuous polling and then expose the end position through the > enumerator. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] xinbinhuang commented on pull request #20289: [FLINK-23633][FLIP-150][connector/common] HybridSource: Support dynamic stop position in FileSource
xinbinhuang commented on PR #20289: URL: https://github.com/apache/flink/pull/20289#issuecomment-118153 cc: @tweise This is the first draft of the implementation. PTAL! (I'm planning to add more tests gradually.) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-23633) HybridSource: Support dynamic stop position in FileSource
[ https://issues.apache.org/jira/browse/FLINK-23633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-23633: --- Labels: pull-request-available (was: ) > HybridSource: Support dynamic stop position in FileSource > - > > Key: FLINK-23633 > URL: https://issues.apache.org/jira/browse/FLINK-23633 > Project: Flink > Issue Type: Sub-task > Components: Connectors / FileSystem >Reporter: Thomas Weise >Assignee: Xinbin Huang >Priority: Major > Labels: pull-request-available > > As of FLINK-22670 FileSource can be used with HybridSource with fixed end > position. To support the scenario where the switch position isn't known ahead > of time, FileSource needs to have a hook to decide when it is time to stop > with continuous polling and then expose the end position through the > enumerator. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] xinbinhuang commented on pull request #20289: [FLINK-23633][FLIP-150][connector/common] HybridSource: Support dynamic stop position in FileSource
xinbinhuang commented on PR #20289: URL: https://github.com/apache/flink/pull/20289#issuecomment-1186664448 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-21319) hadoop-mapreduce jars are not loaded into classpath when submiting flink on yarn jobs.
[ https://issues.apache.org/jira/browse/FLINK-21319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-21319: --- Labels: auto-deprioritized-major stale-minor (was: auto-deprioritized-major) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issues has been marked as Minor but is unassigned and neither itself nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. > hadoop-mapreduce jars are not loaded into classpath when submiting flink on > yarn jobs. > -- > > Key: FLINK-21319 > URL: https://issues.apache.org/jira/browse/FLINK-21319 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.12.1 >Reporter: Tang Yan >Priority: Minor > Labels: auto-deprioritized-major, stale-minor > > My code is to query hive: > {code:java} > EnvironmentSettings settings = > EnvironmentSettings.newInstance().useBlinkPlanner().build(); > TableEnvironment tableEnv = TableEnvironment.create(settings); > String name = "myhive"; > String defaultDatabase = "test"; > String hiveConfDir = "/etc/hive/conf"; > HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir); > tableEnv.registerCatalog(name, hive); > tableEnv.useDatabase(defaultDatabase); > String testsql="select count(1) from mytable"; > tableEnv.executeSql(testsql); > {code} > Env: Flink 1.12.1 + CDH6.3.0 > My submit command: > > {code:java} > export HADOOP_CLASSPATH=`hadoop classpath` > /opt/flink-1.12.1/bin/flink run -m yarn-cluster -p 2 -c > com..flink.test.HiveConnTestJob /home/path/flinkTestCDH6-0.0.1-SNAPSHOT.jar > {code} > > Job ERROR: > > {code:java} > java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf > {code} > > If I do hadoop classpath on the server, I can see hadoop-mapreduce jars > folder is included as below, but when I check the flink job logs, it's not > included there. > > {code:java} > [root@my_server1 lib]# hadoop classpath > /etc/hadoop/conf:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/GPLEXTRAS-6.3.0-1.gplextras6.3.0.p0.1279813/lib/hadoop/lib/COPYING.hadoop-lzo:/opt/cloudera/parcels/GPLEXTRAS-6.3.0-1.gplextras6.3.0.p0.1279813/lib/hadoop/lib/hadoop-lzo-0.4.15-cdh6.3.0.jar:/opt/cloudera/parcels/GPLEXTRAS-6.3.0-1.gplextras6.3.0.p0.1279813/lib/hadoop/lib/hadoop-lzo.jar:/opt/cloudera/parcels/GPLEXTRAS-6.3.0-1.gplextras6.3.0.p0.1279813/lib/hadoop/lib/native > {code} > > Flink job logs: > > {code:java} > 2021-02-08 05:26:42,590 INFO > org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Classpath: >
[jira] [Updated] (FLINK-26515) RetryingExecutorTest. testDiscardOnTimeout failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-26515: --- Labels: stale-major test-stability (was: test-stability) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issues has been marked as Major but is unassigned and neither itself nor its Sub-Tasks have been updated for 60 days. I have gone ahead and added a "stale-major" to the issue". If this ticket is a Major, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. > RetryingExecutorTest. testDiscardOnTimeout failed on azure > -- > > Key: FLINK-26515 > URL: https://issues.apache.org/jira/browse/FLINK-26515 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Affects Versions: 1.14.3 >Reporter: Yun Gao >Priority: Major > Labels: stale-major, test-stability > > {code:java} > Mar 06 01:20:29 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 1.941 s <<< FAILURE! - in > org.apache.flink.changelog.fs.RetryingExecutorTest > Mar 06 01:20:29 [ERROR] testTimeout Time elapsed: 1.934 s <<< FAILURE! > Mar 06 01:20:29 java.lang.AssertionError: expected:<500.0> but > was:<1922.869766> > Mar 06 01:20:29 at org.junit.Assert.fail(Assert.java:89) > Mar 06 01:20:29 at org.junit.Assert.failNotEquals(Assert.java:835) > Mar 06 01:20:29 at org.junit.Assert.assertEquals(Assert.java:555) > Mar 06 01:20:29 at org.junit.Assert.assertEquals(Assert.java:685) > Mar 06 01:20:29 at > org.apache.flink.changelog.fs.RetryingExecutorTest.testTimeout(RetryingExecutorTest.java:145) > Mar 06 01:20:29 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Mar 06 01:20:29 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Mar 06 01:20:29 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Mar 06 01:20:29 at java.lang.reflect.Method.invoke(Method.java:498) > Mar 06 01:20:29 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Mar 06 01:20:29 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Mar 06 01:20:29 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Mar 06 01:20:29 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Mar 06 01:20:29 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Mar 06 01:20:29 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > Mar 06 01:20:29 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > Mar 06 01:20:29 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > Mar 06 01:20:29 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > Mar 06 01:20:29 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > Mar 06 01:20:29 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > Mar 06 01:20:29 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > Mar 06 01:20:29 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > Mar 06 01:20:29 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > Mar 06 01:20:29 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Mar 06 01:20:29 at > 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) > Mar 06 01:20:29 at org.junit.runner.JUnitCore.run(JUnitCore.java:137) > Mar 06 01:20:29 at org.junit.runner.JUnitCore.run(JUnitCore.java:115) > Mar 06 01:20:29 at > org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43) > Mar 06 01:20:29 at > java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) > Mar 06 01:20:29 at > java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) > Mar 06 01:20:29 at > java.util.Iterator.forEachRemaining(Iterator.java:116) > Mar 06 01:20:29 at > java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) > Mar 06 01:20:29 at > java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) > Mar 06 01:20:29 at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) > {code} >
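To illustrate why this kind of assertion is flaky (the test expected roughly 500 ms but measured ~1923 ms on CI), here is a hypothetical, self-contained sketch — not the actual `RetryingExecutorTest` code — contrasting an exact-duration check with a lower-bound check that tolerates scheduler jitter.

```java
// Hypothetical sketch of the timing-assertion problem, not the real test:
// exact-duration assertions break on a loaded CI machine, while a lower-bound
// assertion still verifies that the timeout actually elapsed.
import static org.junit.Assert.assertTrue;

public class TimeoutAssertionSketch {
    public static void main(String[] args) throws InterruptedException {
        final long timeoutMs = 500; // the delay the test is trying to verify
        long start = System.nanoTime();
        Thread.sleep(timeoutMs);    // stand-in for the operation that should time out
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Flaky form: assertEquals(500.0, elapsedMs, smallDelta) fails whenever
        // the CI machine is slow. Robust form: only require that the timeout
        // has actually passed.
        assertTrue("timeout should have elapsed", elapsedMs >= timeoutMs);
    }
}
```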
[jira] [Updated] (FLINK-27661) [Metric]Flink-Metrics PrometheusPushGatewayReporter support authentication
[ https://issues.apache.org/jira/browse/FLINK-27661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-27661: --- Labels: pull-request-available stale-major (was: pull-request-available) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Major but is unassigned and neither itself nor its Sub-Tasks have been updated for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this ticket is a Major, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. > [Metric]Flink-Metrics PrometheusPushGatewayReporter support authentication > -- > > Key: FLINK-27661 > URL: https://issues.apache.org/jira/browse/FLINK-27661 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics > Environment: Flink:1.13.0 >Reporter: jiangchunyang >Priority: Major > Labels: pull-request-available, stale-major > > We found that the native PushGateway does not support authentication. As a > result, the metrics data in on-YARN mode cannot be reported to PushGateway > with authentication. > Although we have some other solutions, such as landing files and others, we > think PushGateway is the best solution. > So I decided to do some implementation on my own, and will submit a PR to the > community later. > At present I have only submitted a PR to the Flink-1.13 branch. If necessary, I > think I can submit it to the master branch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-27293) CVE-2020-36518 in flink-shaded jackson
[ https://issues.apache.org/jira/browse/FLINK-27293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-27293: --- Labels: auto-deprioritized-major pull-request-available (was: pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > CVE-2020-36518 in flink-shaded jackson > -- > > Key: FLINK-27293 > URL: https://issues.apache.org/jira/browse/FLINK-27293 > Project: Flink > Issue Type: Technical Debt > Components: BuildSystem / Shaded >Reporter: Spencer Deehring >Priority: Minor > Labels: auto-deprioritized-major, pull-request-available > > jackson-databind contains a CVE and is pulled in via jackson-bom located > here: > [https://github.com/apache/flink-shaded/blob/master/flink-shaded-jackson-parent/pom.xml#L38] > This needs to be updated to version > {code:java} > 2.12.6.20220326{code} > as noted here: > [https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.12#micro-patches] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-kubernetes-operator] usamj commented on a diff in pull request #278: [WIP] Add standalone mode support
usamj commented on code in PR #278: URL: https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922903180 ## flink-kubernetes-standalone/src/main/java/org/apache/flink/kubernetes/operator/kubeclient/factory/StandaloneKubernetesTaskManagerFactory.java: ## @@ -0,0 +1,90 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.kubernetes.operator.kubeclient.factory; + +import org.apache.flink.kubernetes.kubeclient.FlinkPod; +import org.apache.flink.kubernetes.kubeclient.decorators.EnvSecretsDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.FlinkConfMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.HadoopConfMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.KerberosMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.KubernetesStepDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.MountSecretsDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.decorators.CmdStandaloneTaskManagerDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.decorators.InitStandaloneTaskManagerDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.parameters.StandaloneKubernetesTaskManagerParameters; +import org.apache.flink.kubernetes.operator.utils.StandaloneKubernetesUtils; +import org.apache.flink.kubernetes.utils.Constants; +import org.apache.flink.util.Preconditions; + +import io.fabric8.kubernetes.api.model.Pod; +import io.fabric8.kubernetes.api.model.PodBuilder; +import io.fabric8.kubernetes.api.model.apps.Deployment; +import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; + +/** Utility class for constructing the TaskManager Deployment when deploying in standalone mode. */ +public class StandaloneKubernetesTaskManagerFactory { + +public static Deployment buildKubernetesTaskManagerDeployment( +FlinkPod podTemplate, +StandaloneKubernetesTaskManagerParameters kubernetesTaskManagerParameters) { +FlinkPod flinkPod = Preconditions.checkNotNull(podTemplate).copy(); + +final KubernetesStepDecorator[] stepDecorators = +new KubernetesStepDecorator[] { +new InitStandaloneTaskManagerDecorator(kubernetesTaskManagerParameters), +new EnvSecretsDecorator(kubernetesTaskManagerParameters), +new MountSecretsDecorator(kubernetesTaskManagerParameters), +new CmdStandaloneTaskManagerDecorator(kubernetesTaskManagerParameters), +new HadoopConfMountDecorator(kubernetesTaskManagerParameters), +new KerberosMountDecorator(kubernetesTaskManagerParameters), +new FlinkConfMountDecorator(kubernetesTaskManagerParameters) +}; Review Comment: It looks like it mounts the TM pod template into the Pod which I assume is used by the JM to create the TM pods? 
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] xinbinhuang commented on a diff in pull request #20289: Draft: send events from reader to enumerator
xinbinhuang commented on code in PR #20289: URL: https://github.com/apache/flink/pull/20289#discussion_r922891812 ## flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/hybrid/HybridSourceSplitEnumerator.java: ## @@ -258,12 +263,25 @@ private void switchEnumerator() { currentSourceIndex++; } -HybridSource.SourceSwitchContext switchContext = -new HybridSource.SourceSwitchContext() { +List previousSplits = +finishedSplits.stream() +.filter( +split -> +split.isFinished +&& split.sourceIndex() == previousSourceIndex) Review Comment: this is a paranoid check.. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #278: [WIP] Add standalone mode support
gyfora commented on code in PR #278: URL: https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922890202 ## flink-kubernetes-standalone/src/main/java/org/apache/flink/kubernetes/operator/kubeclient/factory/StandaloneKubernetesTaskManagerFactory.java: ## @@ -0,0 +1,90 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.kubernetes.operator.kubeclient.factory; + +import org.apache.flink.kubernetes.kubeclient.FlinkPod; +import org.apache.flink.kubernetes.kubeclient.decorators.EnvSecretsDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.FlinkConfMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.HadoopConfMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.KerberosMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.KubernetesStepDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.MountSecretsDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.decorators.CmdStandaloneTaskManagerDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.decorators.InitStandaloneTaskManagerDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.parameters.StandaloneKubernetesTaskManagerParameters; +import org.apache.flink.kubernetes.operator.utils.StandaloneKubernetesUtils; +import org.apache.flink.kubernetes.utils.Constants; +import org.apache.flink.util.Preconditions; + +import io.fabric8.kubernetes.api.model.Pod; +import io.fabric8.kubernetes.api.model.PodBuilder; +import io.fabric8.kubernetes.api.model.apps.Deployment; +import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; + +/** Utility class for constructing the TaskManager Deployment when deploying in standalone mode. */ +public class StandaloneKubernetesTaskManagerFactory { + +public static Deployment buildKubernetesTaskManagerDeployment( +FlinkPod podTemplate, +StandaloneKubernetesTaskManagerParameters kubernetesTaskManagerParameters) { +FlinkPod flinkPod = Preconditions.checkNotNull(podTemplate).copy(); + +final KubernetesStepDecorator[] stepDecorators = +new KubernetesStepDecorator[] { +new InitStandaloneTaskManagerDecorator(kubernetesTaskManagerParameters), +new EnvSecretsDecorator(kubernetesTaskManagerParameters), +new MountSecretsDecorator(kubernetesTaskManagerParameters), +new CmdStandaloneTaskManagerDecorator(kubernetesTaskManagerParameters), +new HadoopConfMountDecorator(kubernetesTaskManagerParameters), +new KerberosMountDecorator(kubernetesTaskManagerParameters), +new FlinkConfMountDecorator(kubernetesTaskManagerParameters) +}; Review Comment: Hm interesting, do you know why is it used in the Native integration and not here? I don't quite understand the purpose of it. 
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-kubernetes-operator] usamj commented on a diff in pull request #278: [WIP] Add standalone mode support
usamj commented on code in PR #278: URL: https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922884347 ## flink-kubernetes-standalone/src/main/java/org/apache/flink/kubernetes/operator/kubeclient/factory/StandaloneKubernetesTaskManagerFactory.java: ## @@ -0,0 +1,90 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.kubernetes.operator.kubeclient.factory; + +import org.apache.flink.kubernetes.kubeclient.FlinkPod; +import org.apache.flink.kubernetes.kubeclient.decorators.EnvSecretsDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.FlinkConfMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.HadoopConfMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.KerberosMountDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.KubernetesStepDecorator; +import org.apache.flink.kubernetes.kubeclient.decorators.MountSecretsDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.decorators.CmdStandaloneTaskManagerDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.decorators.InitStandaloneTaskManagerDecorator; +import org.apache.flink.kubernetes.operator.kubeclient.parameters.StandaloneKubernetesTaskManagerParameters; +import org.apache.flink.kubernetes.operator.utils.StandaloneKubernetesUtils; +import org.apache.flink.kubernetes.utils.Constants; +import org.apache.flink.util.Preconditions; + +import io.fabric8.kubernetes.api.model.Pod; +import io.fabric8.kubernetes.api.model.PodBuilder; +import io.fabric8.kubernetes.api.model.apps.Deployment; +import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; + +/** Utility class for constructing the TaskManager Deployment when deploying in standalone mode. */ +public class StandaloneKubernetesTaskManagerFactory { + +public static Deployment buildKubernetesTaskManagerDeployment( +FlinkPod podTemplate, +StandaloneKubernetesTaskManagerParameters kubernetesTaskManagerParameters) { +FlinkPod flinkPod = Preconditions.checkNotNull(podTemplate).copy(); + +final KubernetesStepDecorator[] stepDecorators = +new KubernetesStepDecorator[] { +new InitStandaloneTaskManagerDecorator(kubernetesTaskManagerParameters), +new EnvSecretsDecorator(kubernetesTaskManagerParameters), +new MountSecretsDecorator(kubernetesTaskManagerParameters), +new CmdStandaloneTaskManagerDecorator(kubernetesTaskManagerParameters), +new HadoopConfMountDecorator(kubernetesTaskManagerParameters), +new KerberosMountDecorator(kubernetesTaskManagerParameters), +new FlinkConfMountDecorator(kubernetesTaskManagerParameters) +}; Review Comment: Both are unneeded for TM, looking at it closer `PodTemplateMountDecorator` isn't needed for JM either. I will remove it. 
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] rkhachatryan commented on a diff in pull request #20091: [FLINK-27570][runtime]Count finalize failure in checkpoint manager
rkhachatryan commented on code in PR #20091: URL: https://github.com/apache/flink/pull/20091#discussion_r922878505 ## flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java: ## @@ -1368,6 +1369,15 @@ private CompletedCheckpoint finalizeCheckpoint(PendingCheckpoint pendingCheckpoi CheckpointFailureReason.FINALIZE_CHECKPOINT_FAILURE, e1)); } +if (e1 instanceof FlinkExpectedException) { Review Comment: I'm afraid `FlinkExpectedException` will never reach this line because it will be caught in `PendingCheckpoint.finalizeCheckpoint` and wrapped into an `IOException` there. Replacing `FlinkExpectedException` with `FlinkRuntimeException` should be enough, though. This also suggests that some test is necessary. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
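A standalone sketch of the wrapping pitfall described in the comment above (hypothetical code, not the actual `CheckpointCoordinator`/`PendingCheckpoint` classes): once the original exception is caught and wrapped, an `instanceof` check on the outer exception no longer matches, and only the cause reveals the original type.

```java
// Hypothetical illustration, not Flink code: an instanceof check on a wrapped
// exception misses the original type; it is only reachable via getCause().
import java.io.IOException;

public class WrappedExceptionSketch {

    static class ExpectedException extends RuntimeException {}

    // Stand-in for a finalize step that catches the failure and rethrows it wrapped.
    static void finalizeStep() throws IOException {
        try {
            throw new ExpectedException();
        } catch (RuntimeException e) {
            throw new IOException("Could not finalize checkpoint", e);
        }
    }

    public static void main(String[] args) {
        try {
            finalizeStep();
        } catch (Exception e) {
            System.out.println(e instanceof ExpectedException);            // false: only the IOException wrapper is visible
            System.out.println(e.getCause() instanceof ExpectedException); // true: the original type survives as the cause
        }
    }
}
```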
[GitHub] [flink] 1996fanrui commented on pull request #20233: [FLINK-28474][checkpoint] Fix the bug ChannelStateWriteResult might not fail after checkpoint abort
1996fanrui commented on PR #20233: URL: https://github.com/apache/flink/pull/20233#issuecomment-1186551517 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] TanYuxin-tyx commented on pull request #20200: [FLINK-28446][runtime] Expose more information in PartitionDescriptor to support more optimized Shuffle Service
TanYuxin-tyx commented on PR #20200: URL: https://github.com/apache/flink/pull/20200#issuecomment-1186548326 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #278: [WIP] Add standalone mode support
gyfora commented on code in PR #278: URL: https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922826974 ## flink-kubernetes-mock-shaded/pom.xml: ## @@ -0,0 +1,154 @@ + Review Comment: And it actually fixed some of my other intellij test issues -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #278: [WIP] Add standalone mode support
gyfora commented on code in PR #278: URL: https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922510295 ## flink-kubernetes-mock-shaded/pom.xml: ## @@ -0,0 +1,154 @@ + Review Comment: I completely removed the whole mock shaded module (https://github.com/gyfora/flink-kubernetes-operator/commit/64a1ec539f56b68bf76091aabde56fcf00791467) and everything seem to still work. All tests still pass. Do we still need this? What am I missing? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-28577) 1.15.1 web ui console report error about checkpoint size
nobleyd created FLINK-28577: --- Summary: 1.15.1 web ui console report error about checkpoint size Key: FLINK-28577 URL: https://issues.apache.org/jira/browse/FLINK-28577 Project: Flink Issue Type: Bug Components: Runtime / Web Frontend Affects Versions: 1.15.1 Reporter: nobleyd Steps to reproduce on 1.15.1: 1. start-cluster 2. submit a job: ./bin/flink run -d ./examples/streaming/TopSpeedWindowing.jar 3. trigger a savepoint: ./bin/flink savepoint {jobId} ./sp0 4. open the web UI for the job and switch to the checkpoint tab; nothing is shown. The Chrome console shows the following error: main.a7e97c2f60a2616e.js:1 ERROR TypeError: Cannot read properties of null (reading 'checkpointed_size') at q (253.e9e8f2b56b4981f5.js:1:607974) at Sl (main.a7e97c2f60a2616e.js:1:186068) at Br (main.a7e97c2f60a2616e.js:1:184696) at N8 (main.a7e97c2f60a2616e.js:1:185128) at Br (main.a7e97c2f60a2616e.js:1:185153) at N8 (main.a7e97c2f60a2616e.js:1:185128) at Br (main.a7e97c2f60a2616e.js:1:185153) at N8 (main.a7e97c2f60a2616e.js:1:185128) at Br (main.a7e97c2f60a2616e.js:1:185153) at B8 (main.a7e97c2f60a2616e.js:1:191872) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #278: [WIP] Add standalone mode support
gyfora commented on code in PR #278: URL: https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922819597 ## flink-kubernetes-standalone/src/test/java/org/apache/flink/kubernetes/operator/kubeclient/Fabric8FlinkStandaloneKubeClientTest.java: ## @@ -0,0 +1,105 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.kubernetes.operator.kubeclient; + +import org.apache.flink.client.deployment.ClusterSpecification; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.kubernetes.kubeclient.FlinkPod; +import org.apache.flink.kubernetes.kubeclient.KubernetesJobManagerSpecification; +import org.apache.flink.kubernetes.operator.kubeclient.factory.StandaloneKubernetesJobManagerFactory; +import org.apache.flink.kubernetes.operator.kubeclient.factory.StandaloneKubernetesTaskManagerFactory; +import org.apache.flink.kubernetes.operator.kubeclient.parameters.StandaloneKubernetesJobManagerParameters; +import org.apache.flink.kubernetes.operator.kubeclient.parameters.StandaloneKubernetesTaskManagerParameters; +import org.apache.flink.kubernetes.operator.kubeclient.utils.TestUtils; +import org.apache.flink.util.concurrent.Executors; + +import io.fabric8.kubernetes.api.model.apps.Deployment; +import io.fabric8.kubernetes.client.NamespacedKubernetesClient; +import io.fabric8.kubernetes.client.server.mock.EnableKubernetesMockClient; +import io.fabric8.kubernetes.client.server.mock.KubernetesMockServer; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import java.util.List; + +import static org.junit.jupiter.api.Assertions.assertEquals; + +/** @link Fabric8FlinkStandaloneKubeClient unit tests */ +@EnableKubernetesMockClient(crud = true) +public class Fabric8FlinkStandaloneKubeClientTest { +private static final String NAMESPACE = "test"; + +KubernetesMockServer mockServer; +protected NamespacedKubernetesClient kubernetesClient; +private FlinkStandaloneKubeClient flinkKubeClient; +private StandaloneKubernetesTaskManagerParameters taskManagerParameters; +private Deployment tmDeployment; +private ClusterSpecification clusterSpecification; +private Configuration flinkConfig = new Configuration(); + +@BeforeEach +public final void setup() { +flinkConfig = TestUtils.createTestFlinkConfig(); +kubernetesClient = mockServer.createClient(); + +flinkKubeClient = +new Fabric8FlinkStandaloneKubeClient( +flinkConfig, kubernetesClient, Executors.newDirectExecutorService()); +clusterSpecification = TestUtils.createClusterSpecification(); + +taskManagerParameters = +new StandaloneKubernetesTaskManagerParameters(flinkConfig, clusterSpecification); + +tmDeployment = + StandaloneKubernetesTaskManagerFactory.buildKubernetesTaskManagerDeployment( +new FlinkPod.Builder().build(), 
taskManagerParameters); +} + +@Test +public void testCreateTaskManagerDeployment() { +flinkKubeClient.createTaskManagerDeployment(tmDeployment); + +final List resultedDeployments = + kubernetesClient.apps().deployments().inNamespace(NAMESPACE).list().getItems(); +assertEquals(1, resultedDeployments.size()); +} + +@Test +public void testStopAndCleanupCluster() throws Exception { +flinkConfig = TestUtils.createTestFlinkConfig(); +ClusterSpecification clusterSpecification = TestUtils.createClusterSpecification(); Review Comment: seems like duplicate code, as @BeforeEach already assigns the same values -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] 1996fanrui commented on pull request #20137: Just for CI
1996fanrui commented on PR #20137: URL: https://github.com/apache/flink/pull/20137#issuecomment-1186476938 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-28565) Create NOTICE file for flink-table-store-hive-catalog
[ https://issues.apache.org/jira/browse/FLINK-28565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-28565: --- Labels: pull-request-available (was: ) > Create NOTICE file for flink-table-store-hive-catalog > - > > Key: FLINK-28565 > URL: https://issues.apache.org/jira/browse/FLINK-28565 > Project: Flink > Issue Type: Improvement > Components: Table Store >Reporter: Jingsong Lee >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.2.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] SteNicholas opened a new pull request, #220: [FLINK-28565] Create `NOTICE` file for `flink-table-store-hive-catalog`
SteNicholas opened a new pull request, #220: URL: https://github.com/apache/flink-table-store/pull/220 `NOTICE` file is need to be created for `flink-table-store-hive-catalog`. **The brief change log** - Introduces the `NOTICE` file in `flink-table-store-hive-catalog`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] qingwei91 commented on pull request #20235: [Flink 14101][Connectors][Jdbc] SQL Server dialect
qingwei91 commented on PR #20235: URL: https://github.com/apache/flink/pull/20235#issuecomment-1186440169 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org