[GitHub] [flink] flinkbot edited a comment on pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules
flinkbot edited a comment on pull request #17556: URL: https://github.com/apache/flink/pull/17556#issuecomment-950605426 ## CI report: * 974ba5b8300fcb03b38a4762f50f246597ce7e92 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25412) * 756027340bfe8a48fd5947c6eddd8db22beca44a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25452) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules
flinkbot edited a comment on pull request #17556: URL: https://github.com/apache/flink/pull/17556#issuecomment-950605426 ## CI report: * 974ba5b8300fcb03b38a4762f50f246597ce7e92 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25412) * 756027340bfe8a48fd5947c6eddd8db22beca44a UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher
flinkbot edited a comment on pull request #17474: URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500 ## CI report: * bb80cf155d7e802f2b486dedf37d6e60dbf0a5d8 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25442)
[GitHub] [flink] flinkbot edited a comment on pull request #17352: [FLINK-10230][table] Support 'SHOW CREATE VIEW' syntax to print the query of a view
flinkbot edited a comment on pull request #17352: URL: https://github.com/apache/flink/pull/17352#issuecomment-926627700 ## CI report: * d45a1f76630a03ec6a0efd3d38044ab925ab7533 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25288) * c904e97695ec22ed3d3b9fb2a3fce7f8bb13494c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25451)
[GitHub] [flink] flinkbot edited a comment on pull request #17352: [FLINK-10230][table] Support 'SHOW CREATE VIEW' syntax to print the query of a view
flinkbot edited a comment on pull request #17352: URL: https://github.com/apache/flink/pull/17352#issuecomment-926627700 ## CI report: * d45a1f76630a03ec6a0efd3d38044ab925ab7533 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25288) * c904e97695ec22ed3d3b9fb2a3fce7f8bb13494c UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #17563: [hotfix][flink-connector-jdbc] Removed un-used variable from test fixture
flinkbot edited a comment on pull request #17563: URL: https://github.com/apache/flink/pull/17563#issuecomment-951350039 ## CI report: * ac1900b42fb04a42e58a297c2e1bef8f66f7f10a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25443)
[GitHub] [flink] flinkbot edited a comment on pull request #17472: [FLINK-24486][rest] Make async result store duration configurable
flinkbot edited a comment on pull request #17472: URL: https://github.com/apache/flink/pull/17472#issuecomment-943076656 ## CI report: * a8cd28194ad50044279d895a36d08a84da8fb78e Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25441)
[GitHub] [flink] KarmaGYZ commented on pull request #17533: [FLINK-24541][runtime] Quoting the external resource list in generati…
KarmaGYZ commented on pull request #17533: URL: https://github.com/apache/flink/pull/17533#issuecomment-951534765 @xintongsong Would you like to take a look?
[GitHub] [flink] KarmaGYZ commented on a change in pull request #17533: [FLINK-24541][runtime] Quoting the external resource list in generati…
KarmaGYZ commented on a change in pull request #17533: URL: https://github.com/apache/flink/pull/17533#discussion_r736122468 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/TaskExecutorProcessUtils.java ## @@ -128,7 +128,10 @@ public static String generateDynamicConfigsStr( if (!taskExecutorProcessSpec.getExtendedResources().isEmpty()) { configs.put( ExternalResourceOptions.EXTERNAL_RESOURCE_LIST.key(), -String.join(";", taskExecutorProcessSpec.getExtendedResources().keySet())); +'"' ++ String.join( +";", taskExecutorProcessSpec.getExtendedResources().keySet()) ++ '"'); Review comment: Done @snuyanzin
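The joining-and-quoting step in the diff above can be sketched in isolation. This is a minimal illustration only: the option key `external-resources` is a hypothetical stand-in for `ExternalResourceOptions.EXTERNAL_RESOURCE_LIST.key()`, and the class is not Flink's `TaskExecutorProcessUtils`.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class QuotingSketch {
    // Hypothetical stand-in for ExternalResourceOptions.EXTERNAL_RESOURCE_LIST.key().
    static final String EXTERNAL_RESOURCE_LIST_KEY = "external-resources";

    // Joins the resource names with ';' and wraps the whole value in double
    // quotes, mirroring the diff: without the quotes, the ';' separator in the
    // generated dynamic config line can be misinterpreted by the consuming shell.
    static String quotedResourceList(Set<String> resourceNames) {
        return '"' + String.join(";", resourceNames) + '"';
    }

    public static void main(String[] args) {
        Set<String> resources = new LinkedHashSet<>();
        resources.add("gpu");
        resources.add("fpga");
        // prints: external-resources="gpu;fpga"
        System.out.println(EXTERNAL_RESOURCE_LIST_KEY + "=" + quotedResourceList(resources));
    }
}
```

Quoting the whole joined value once, rather than escaping each separator, keeps the generated config line readable and reversible.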
[GitHub] [flink] flinkbot edited a comment on pull request #17533: [FLINK-24541][runtime] Quoting the external resource list in generati…
flinkbot edited a comment on pull request #17533: URL: https://github.com/apache/flink/pull/17533#issuecomment-948396321 ## CI report: * 5d34492f1f5bafd4d725a6740640feac1c2434ea Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25301) * 1b3591a9697dc708dd83ac8839b91bd383834f38 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25450)
[GitHub] [flink] flinkbot edited a comment on pull request #17533: [FLINK-24541][runtime] Quoting the external resource list in generati…
flinkbot edited a comment on pull request #17533: URL: https://github.com/apache/flink/pull/17533#issuecomment-948396321 ## CI report: * 5d34492f1f5bafd4d725a6740640feac1c2434ea Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25301) * 1b3591a9697dc708dd83ac8839b91bd383834f38 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #17534: [FLINK-24610][table-planner] Support sql projection common expression…
flinkbot edited a comment on pull request #17534: URL: https://github.com/apache/flink/pull/17534#issuecomment-948414124 ## CI report: * cac697f88dd20d6baadf9585f75f46407c6922c9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25369) * ea6ee40c799a4e29be9cb9c61924111d7332 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25449)
[GitHub] [flink] flinkbot edited a comment on pull request #17534: [FLINK-24610][table-planner] Support sql projection common expression…
flinkbot edited a comment on pull request #17534: URL: https://github.com/apache/flink/pull/17534#issuecomment-948414124 ## CI report: * cac697f88dd20d6baadf9585f75f46407c6922c9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25369) * ea6ee40c799a4e29be9cb9c61924111d7332 UNKNOWN
[GitHub] [flink-web] wsry commented on pull request #476: Add blog post "Sort-Based Blocking Shuffle Implementation in Flink"
wsry commented on pull request #476: URL: https://github.com/apache/flink-web/pull/476#issuecomment-951525415 @infoverload Thanks for the review and feedback. I have updated the PR.
[GitHub] [flink] RocMarshal edited a comment on pull request #17352: [FLINK-10230][table] Support 'SHOW CREATE VIEW' syntax to print the query of a view
RocMarshal edited a comment on pull request #17352: URL: https://github.com/apache/flink/pull/17352#issuecomment-948195266 @wuchong I really appreciate it. I made some changes based on your suggestions. PTAL.
[GitHub] [flink] RocMarshal commented on pull request #16962: [FLINK-15352][connector-jdbc] Develop MySQLCatalog to connect Flink with MySQL tables and ecosystem.
RocMarshal commented on pull request #16962: URL: https://github.com/apache/flink/pull/16962#issuecomment-951520003 Hi @Airblader, I made some changes based on your review comments. Could you help me check it when you have free time? Thank you very much.
[GitHub] [flink-ml] guoweiM commented on a change in pull request #14: [FLINK-8][iteration] Add per-round operator wrappers
guoweiM commented on a change in pull request #14: URL: https://github.com/apache/flink-ml/pull/14#discussion_r735447020 ## File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/operator/perround/PerRoundOperatorWrapper.java ## @@ -0,0 +1,80 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.iteration.operator.perround; + +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.api.java.functions.KeySelector; +import org.apache.flink.iteration.IterationRecord; +import org.apache.flink.iteration.operator.OperatorWrapper; +import org.apache.flink.iteration.proxy.ProxyKeySelector; +import org.apache.flink.iteration.proxy.ProxyStreamPartitioner; +import org.apache.flink.iteration.typeinfo.IterationRecordTypeInfo; +import org.apache.flink.streaming.api.operators.MultipleInputStreamOperator; +import org.apache.flink.streaming.api.operators.OneInputStreamOperator; +import org.apache.flink.streaming.api.operators.StreamOperator; +import org.apache.flink.streaming.api.operators.StreamOperatorFactory; +import org.apache.flink.streaming.api.operators.StreamOperatorParameters; +import org.apache.flink.streaming.api.operators.TwoInputStreamOperator; +import org.apache.flink.streaming.runtime.partitioner.StreamPartitioner; +import org.apache.flink.util.OutputTag; + +/** The operator wrapper implementation for per-round wrappers. */ +public class PerRoundOperatorWrapper implements OperatorWrapper> { + +@Override +public StreamOperator> wrap( Review comment: I think we should throw a more detailed exception here: tell the user that the iteration body can only be built from the iteration variable and the iteration constant. BTW, maybe we should also update the Javadoc. ## File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/operator/perround/AbstractPerRoundWrapperOperator.java ## @@ -0,0 +1,482 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.iteration.operator.perround; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.api.common.typeutils.TypeSerializer; +import org.apache.flink.api.java.functions.KeySelector; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.MetricOptions; +import org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend; +import org.apache.flink.core.memory.ManagedMemoryUseCase; +import org.apache.flink.iteration.IterationRecord; +import org.apache.flink.iteration.operator.AbstractWrapperOperator; +import org.apache.flink.iteration.operator.OperatorUtils; +import org.apache.flink.iteration.proxy.state.ProxyStateSnapshotContext; +import org.apache.flink.iteration.proxy.state.ProxyStreamOperatorStateContext; +import org.apache.flink.iteration.utils.ReflectionUtils; +import org.apache.flink.metrics.MetricGroup; +import org.apache.flink.metrics.groups.OperatorMetricGroup; +import org.apache.flink.runtime.checkpoint.CheckpointOptions; +import org.apache.flink.runtime.jobgraph.OperatorID; +import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups; +import org.apache.flink.runtime.state.AbstractKeyedStateBackend; +import org.apache.flink.runtime.state.CheckpointStreamFactory; +import org.apache.flink.runtime.state.DefaultOperatorStateBackend; +import org.apache.flink.runtime.state.KeyedStateBackend; +import
[GitHub] [flink] RocMarshal removed a comment on pull request #16962: [FLINK-15352][connector-jdbc] Develop MySQLCatalog to connect Flink with MySQL tables and ecosystem.
RocMarshal removed a comment on pull request #16962: URL: https://github.com/apache/flink/pull/16962#issuecomment-943132915 Thanks @Airblader @twalthr @wuchong @xccui for the review. I updated it based on your comments. PTAL. Thx.
[GitHub] [flink] RocMarshal commented on a change in pull request #17508: [FLINK-24351][docs] Translate "JSON Function" pages into Chinese
RocMarshal commented on a change in pull request #17508: URL: https://github.com/apache/flink/pull/17508#discussion_r736107166 ## File path: docs/data/sql_functions_zh.yml ## @@ -707,11 +707,9 @@ json: - sql: IS JSON [ { VALUE | SCALAR | ARRAY | OBJECT } ] table: STRING.isJson([JsonType type]) description: | - Determine whether a given string is valid JSON. + 判定给定字符串是否为有效的 JSON。 - Specifying the optional type argument puts a constraint on which type of JSON object is - allowed. If the string is valid JSON, but not that type, `false` is returned. The default is - `VALUE`. + 指定一个可选类型参数将会限制允许哪种类型的 JSON 对象。如果字符串是有效的 JSON,但不是指定的类型,则返回 'false'。默认值为 'VALUE'。 Review comment: > Just saw this. Is anything unclear here? I can't tell if you mean to say that something is wrong with the code or the (English) docs. From what I can see everything seems to be correct? @Airblader I just want to point out that the interpretation of the description under which "'1' IS JSON SCALAR" returns false is inaccurate.
[GitHub] [flink] xmarker commented on pull request #17465: [FLINK-24488][connectors/kafka] KafkaRecordSerializationSchemaBuilder does not forward timestamp
xmarker commented on pull request #17465: URL: https://github.com/apache/flink/pull/17465#issuecomment-951501239 @fapaul OK, I squashed the commits and pushed again.
[GitHub] [flink] flinkbot edited a comment on pull request #17537: [FLINK-19722][table-planner]Pushdown Watermark to SourceProvider (new Source)
flinkbot edited a comment on pull request #17537: URL: https://github.com/apache/flink/pull/17537#issuecomment-948566240 ## CI report: * Unknown: [CANCELED](TBD) * 3f5b75277f82eb706621be280fc7c12dbbbe3060 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25448)
[jira] [Closed] (FLINK-21474) Migrate ParquetTableSource to use DynamicTableSource Interface
[ https://issues.apache.org/jira/browse/FLINK-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee closed FLINK-21474. Resolution: Invalid We have FileSystemTableSource now. Parquet as a format. > Migrate ParquetTableSource to use DynamicTableSource Interface > -- > > Key: FLINK-21474 > URL: https://issues.apache.org/jira/browse/FLINK-21474 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.12.1 >Reporter: Zhenqiu Huang >Priority: Minor > Labels: auto-unassigned, stale-minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #17537: [FLINK-19722][table-planner]Pushdown Watermark to SourceProvider (new Source)
flinkbot edited a comment on pull request #17537: URL: https://github.com/apache/flink/pull/17537#issuecomment-948566240 ## CI report: * Unknown: [CANCELED](TBD) * 3f5b75277f82eb706621be280fc7c12dbbbe3060 UNKNOWN
[jira] [Commented] (FLINK-24381) Hidden password value when Flink SQL connector throw exception.
[ https://issues.apache.org/jira/browse/FLINK-24381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17434082#comment-17434082 ] Ada Wong commented on FLINK-24381: -- So I will create a new ticket and PR to add support for using GlobalConfiguration#isSensitive, or add this in this PR. > Hidden password value when Flink SQL connector throw exception. > --- > > Key: FLINK-24381 > URL: https://issues.apache.org/jira/browse/FLINK-24381 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.13.2 >Reporter: Ada Wong >Assignee: Ada Wong >Priority: Major > Labels: pull-request-available > > The following is the error message. The password is 'bar' and is displayed. > Could we hide it as password='**' or password='', inspired by the > Apache Kafka source code? > {code:java} > Missing required options are: > hosts > Unable to create a sink for writing table > 'default_catalog.default_database.dws'. > Table options are: > 'connector'='elasticsearch7-x' > 'index'='foo' > 'password'='bar' > at > org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:208) > at > org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:369) > at > org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:221) > at > org.apache.flink.table.planner.delegation.PlannerBase.$anonfun$translate$1(PlannerBase.scala:159 > {code}
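The masking the ticket proposes could look roughly like the sketch below. `GlobalConfiguration#isSensitive` is the Flink helper named in the comment, but to keep this self-contained the check is re-implemented locally; `maskOptions` and the `******` placeholder are illustrative choices, not Flink API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SensitiveOptionMasker {
    // Local re-implementation of the idea behind GlobalConfiguration#isSensitive:
    // a key counts as sensitive if it contains a well-known substring.
    static boolean isSensitive(String key) {
        String lower = key.toLowerCase();
        return lower.contains("password") || lower.contains("secret");
    }

    // Copies the table options, replacing sensitive values before they are
    // echoed into an exception message like the one quoted in the ticket.
    static Map<String, String> maskOptions(Map<String, String> options) {
        Map<String, String> masked = new LinkedHashMap<>();
        options.forEach((k, v) -> masked.put(k, isSensitive(k) ? "******" : v));
        return masked;
    }

    public static void main(String[] args) {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("connector", "elasticsearch7-x");
        options.put("index", "foo");
        options.put("password", "bar");
        // prints: {connector=elasticsearch7-x, index=foo, password=******}
        System.out.println(maskOptions(options));
    }
}
```

Masking at the point where options are formatted into the error message keeps the real values untouched in the configuration itself.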
[jira] [Comment Edited] (FLINK-24381) Hidden password value when Flink SQL connector throw exception.
[ https://issues.apache.org/jira/browse/FLINK-24381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17434082#comment-17434082 ] Ada Wong edited comment on FLINK-24381 at 10/26/21, 2:26 AM: - So I will create a new ticket and PR to add support for using GlobalConfiguration#isSensitive, or add this in this PR. was (Author: ana4): So I will create a new ticket and PR to add support for using GlobalConfiguration#isSensitive, or add this in this PR. > Hidden password value when Flink SQL connector throw exception. > --- > > Key: FLINK-24381 > URL: https://issues.apache.org/jira/browse/FLINK-24381 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.13.2 >Reporter: Ada Wong >Assignee: Ada Wong >Priority: Major > Labels: pull-request-available > > The following is the error message. The password is 'bar' and is displayed. > Could we hide it as password='**' or password='', inspired by the > Apache Kafka source code? > {code:java} > Missing required options are: > hosts > Unable to create a sink for writing table > 'default_catalog.default_database.dws'. > Table options are: > 'connector'='elasticsearch7-x' > 'index'='foo' > 'password'='bar' > at > org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:208) > at > org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:369) > at > org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:221) > at > org.apache.flink.table.planner.delegation.PlannerBase.$anonfun$translate$1(PlannerBase.scala:159 > {code}
[GitHub] [flink] flinkbot edited a comment on pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules
flinkbot edited a comment on pull request #17556: URL: https://github.com/apache/flink/pull/17556#issuecomment-950605426 ## CI report: * 974ba5b8300fcb03b38a4762f50f246597ce7e92 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25412)
[GitHub] [flink] xuyangzhong commented on pull request #17537: [FLINK-19722][table-planner]Pushdown Watermark to SourceProvider (new Source)
xuyangzhong commented on pull request #17537: URL: https://github.com/apache/flink/pull/17537#issuecomment-951496047 @flinkbot run azure
[GitHub] [flink] flinkbot edited a comment on pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules
flinkbot edited a comment on pull request #17556: URL: https://github.com/apache/flink/pull/17556#issuecomment-950605426 ## CI report: * 974ba5b8300fcb03b38a4762f50f246597ce7e92 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25412)
[jira] [Commented] (FLINK-24624) Add clean up phase when kubernetes session start failed
[ https://issues.apache.org/jira/browse/FLINK-24624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17434081#comment-17434081 ] Aitozi commented on FLINK-24624: [~wangyang0918] Besides that, I created this issue also to discuss whether we have to guarantee that the k8s resources are cleaned up when deploying a session or application mode cluster fails. As far as I know (I am doing some tests with Kubernetes deployment), there are residual k8s resources in some situations, like: 1. deployClusterInternal succeeds, but getClusterClient from the {{ClusterClientProvider}} fails, as shown in this issue. 2. deploySessionCluster succeeds, but the deployment fails to spawn a ready pod due to resource or scheduling problems or webhook interception in Kubernetes. We can simply try-catch the deploySessionCluster method block to solve case 1, which has been done in my PR. But I still have some concerns about case 2. I think there should be a deadline for spawning a cluster, and the related resources should be destroyed after the timeout. > Add clean up phase when kubernetes session start failed > --- > > Key: FLINK-24624 > URL: https://issues.apache.org/jira/browse/FLINK-24624 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.14.0 >Reporter: Aitozi >Priority: Major > Labels: pull-request-available > > Several k8s resources are created when deploying the Kubernetes session. But the > resources are left behind when the deployment fails. This will lead to the next > failure or a resource leak. So I think we should add a clean-up phase when the > start fails
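The try-catch described for case 1 can be sketched generically. `KubernetesDeployer`, `deploySessionCluster`, and `cleanUpClusterResources` here are hypothetical stand-ins for the deployer and its cleanup hook, not Flink's actual classes.

```java
public class SessionDeploySketch {

    /** Hypothetical deployer interface; Flink's real descriptor classes differ. */
    interface KubernetesDeployer {
        void deploySessionCluster(String clusterId) throws Exception;
        void cleanUpClusterResources(String clusterId);
    }

    /** Wraps any deployment failure after cleanup has run. */
    static class DeployException extends RuntimeException {
        DeployException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // If deployment throws, the partially created k8s resources are cleaned up
    // before the error is rethrown, so nothing is left behind (case 1 above).
    static void deployWithCleanup(KubernetesDeployer deployer, String clusterId) {
        try {
            deployer.deploySessionCluster(clusterId);
        } catch (Exception e) {
            deployer.cleanUpClusterResources(clusterId);
            throw new DeployException(
                    "Deploying session cluster " + clusterId + " failed; resources cleaned up", e);
        }
    }

    public static void main(String[] args) {
        KubernetesDeployer noOp = new KubernetesDeployer() {
            public void deploySessionCluster(String clusterId) {}
            public void cleanUpClusterResources(String clusterId) {}
        };
        deployWithCleanup(noOp, "my-session");
        System.out.println("deployed my-session");
    }
}
```

Case 2 (a deployment that never produces a ready pod) would additionally need the deadline the comment suggests, e.g. polling pod status and invoking the same cleanup hook once the timeout expires.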
[GitHub] [flink] ruanhang1993 commented on pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules
ruanhang1993 commented on pull request #17556: URL: https://github.com/apache/flink/pull/17556#issuecomment-951492768 @flinkbot run azure
[GitHub] [flink] flinkbot edited a comment on pull request #17465: [FLINK-24488][connectors/kafka] KafkaRecordSerializationSchemaBuilder does not forward timestamp
flinkbot edited a comment on pull request #17465: URL: https://github.com/apache/flink/pull/17465#issuecomment-942425244 ## CI report: * 915e5f96c182f7ece0b1a5f1ed6452973fff8fa2 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25390) * 24f0a46cdfaecfa1ed2cb51b86f3f8b8acd14527 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25447)
[GitHub] [flink] flinkbot edited a comment on pull request #17465: [FLINK-24488][connectors/kafka] KafkaRecordSerializationSchemaBuilder does not forward timestamp
flinkbot edited a comment on pull request #17465: URL: https://github.com/apache/flink/pull/17465#issuecomment-942425244 ## CI report: * 915e5f96c182f7ece0b1a5f1ed6452973fff8fa2 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25390) * 24f0a46cdfaecfa1ed2cb51b86f3f8b8acd14527 UNKNOWN
[jira] [Commented] (FLINK-24624) Add clean up phase when kubernetes session start failed
[ https://issues.apache.org/jira/browse/FLINK-24624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17434076#comment-17434076 ] Aitozi commented on FLINK-24624: Thanks for pointing that out, [~wangyang0918]. I think I can quickly update the Flink docs to guide users to apply the node list permission. > Add clean up phase when kubernetes session start failed > --- > > Key: FLINK-24624 > URL: https://issues.apache.org/jira/browse/FLINK-24624 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.14.0 >Reporter: Aitozi >Priority: Major > Labels: pull-request-available > > Several k8s resources are created when deploying the Kubernetes session, but > they are left behind when the deployment fails. This can lead to the next > deployment failing or to a resource leak, so I think we should add a clean-up > phase when the start fails
[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules
ruanhang1993 commented on a change in pull request #17556: URL: https://github.com/apache/flink/pull/17556#discussion_r736086843 ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterExtension.java ## @@ -0,0 +1,215 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.runtime.testutils; + +import org.apache.flink.api.common.time.Deadline; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.CoreOptions; +import org.apache.flink.configuration.JobManagerOptions; +import org.apache.flink.configuration.MemorySize; +import org.apache.flink.configuration.RestOptions; +import org.apache.flink.configuration.TaskManagerOptions; +import org.apache.flink.configuration.UnmodifiableConfiguration; +import org.apache.flink.core.testutils.CustomExtension; +import org.apache.flink.runtime.messages.Acknowledge; +import org.apache.flink.runtime.minicluster.MiniCluster; +import org.apache.flink.runtime.minicluster.MiniClusterConfiguration; +import org.apache.flink.util.ExceptionUtils; +import org.apache.flink.util.Preconditions; +import org.apache.flink.util.concurrent.FutureUtils; + +import org.junit.jupiter.api.extension.ExtensionContext; +import org.junit.rules.TemporaryFolder; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.net.URI; +import java.time.Duration; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; + +/** Extension which starts a {@link MiniCluster} for testing purposes. 
*/ +public class MiniClusterExtension implements CustomExtension { +private static final MemorySize DEFAULT_MANAGED_MEMORY_SIZE = MemorySize.parse("80m"); + +protected final Logger log = LoggerFactory.getLogger(getClass()); + +private final TemporaryFolder temporaryFolder = new TemporaryFolder(); + +private final MiniClusterResourceConfiguration miniClusterResourceConfiguration; + +private MiniCluster miniCluster = null; + +private int numberSlots = -1; + +private UnmodifiableConfiguration restClusterClientConfig; + +public MiniClusterExtension( +final MiniClusterResourceConfiguration miniClusterResourceConfiguration) { +this.miniClusterResourceConfiguration = +Preconditions.checkNotNull(miniClusterResourceConfiguration); +} + +public int getNumberSlots() { +return numberSlots; +} + +public MiniCluster getMiniCluster() { +return miniCluster; +} + +public UnmodifiableConfiguration getClientConfiguration() { +return restClusterClientConfig; +} + +public URI getRestAddres() { +return miniCluster.getRestAddress().join(); +} + +@Override +public void before(ExtensionContext context) throws Exception { +temporaryFolder.create(); + +startMiniCluster(); + +numberSlots = +miniClusterResourceConfiguration.getNumberSlotsPerTaskManager() +* miniClusterResourceConfiguration.getNumberTaskManagers(); +} + +public void cancelAllJobs() { +try { +final Deadline jobCancellationDeadline = +Deadline.fromNow( +Duration.ofMillis( +miniClusterResourceConfiguration +.getShutdownTimeout() +.toMilliseconds())); + +final List> jobCancellationFutures = +miniCluster.listJobs() +.get( + jobCancellationDeadline.timeLeft().toMillis(), +TimeUnit.MILLISECONDS) +.stream() +.filter(status -> !status.getJobState().isGloballyTerminalState()) +.map(status -> miniCluster.cancelJob(status.getJobId())) +.collect(Collectors.toList()); + +FutureUtils.waitForAll(jobCancellationFutures) +
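The cancelAllJobs() flow quoted in the diff above can be sketched in plain Java: list the jobs, skip those already in a globally terminal state, cancel the rest, and wait for every cancellation. `JobStatus` and `cancelJob` below are simplified stand-ins for illustration, not the real MiniCluster API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Simplified model of the extension's job-cancellation step.
public class CancelAllJobsSketch {
    static final class JobStatus {
        final String jobId;
        final boolean globallyTerminal;
        JobStatus(String jobId, boolean globallyTerminal) {
            this.jobId = jobId;
            this.globallyTerminal = globallyTerminal;
        }
    }

    // Stand-in for MiniCluster#cancelJob: resolves when cancellation is acknowledged.
    static CompletableFuture<String> cancelJob(String jobId) {
        return CompletableFuture.completedFuture("cancelled:" + jobId);
    }

    public static List<String> cancelAllJobs(List<JobStatus> jobs) {
        List<CompletableFuture<String>> cancellations = jobs.stream()
                .filter(s -> !s.globallyTerminal)       // only still-running jobs
                .map(s -> cancelJob(s.jobId))
                .collect(Collectors.toList());
        // The diff uses FutureUtils.waitForAll(...); join() is the plain-Java analogue.
        return cancellations.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<JobStatus> jobs = new ArrayList<>();
        jobs.add(new JobStatus("a", false));
        jobs.add(new JobStatus("b", true));  // already terminal, skipped
        jobs.add(new JobStatus("c", false));
        System.out.println(cancelAllJobs(jobs));
    }
}
```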
[GitHub] [flink] flinkbot edited a comment on pull request #17564: [FLINK-24640] CEIL, FLOOR built-in functions for Timestamp should res…
flinkbot edited a comment on pull request #17564: URL: https://github.com/apache/flink/pull/17564#issuecomment-951476287 ## CI report: * 6803fe5578e495e0ed28f38a35aa6fa4e6afff1a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25446)
[GitHub] [flink] flinkbot commented on pull request #17564: [FLINK-24640] CEIL, FLOOR built-in functions for Timestamp should res…
flinkbot commented on pull request #17564: URL: https://github.com/apache/flink/pull/17564#issuecomment-951476874 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 6803fe5578e495e0ed28f38a35aa6fa4e6afff1a (Tue Oct 26 01:40:32 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-24640).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot commented on pull request #17564: [FLINK-24640] CEIL, FLOOR built-in functions for Timestamp should res…
flinkbot commented on pull request #17564: URL: https://github.com/apache/flink/pull/17564#issuecomment-951476287 ## CI report: * 6803fe5578e495e0ed28f38a35aa6fa4e6afff1a UNKNOWN
[jira] [Updated] (FLINK-24640) CEIL, FLOOR built-in functions for Timestamp should respect DST
[ https://issues.apache.org/jira/browse/FLINK-24640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-24640: --- Labels: pull-request-available (was: ) > CEIL, FLOOR built-in functions for Timestamp should respect DST > --- > > Key: FLINK-24640 > URL: https://issues.apache.org/jira/browse/FLINK-24640 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available > > The problem is that if there is a date in DST time then > {code:sql} > select floor(current_timestamp to year); > {code} > leads to result > {noformat} > 2021-12-31 23:00:00.000 > {noformat} > while expected is {{2022-01-01 00:00:00.000}} > same issue is with {{WEEK}}, {{QUARTER}} and {{MONTH}}
[GitHub] [flink] snuyanzin opened a new pull request #17564: [FLINK-24640] CEIL, FLOOR built-in functions for Timestamp should res…
snuyanzin opened a new pull request #17564: URL: https://github.com/apache/flink/pull/17564 ## What is the purpose of the change The PR makes `org.apache.flink.table.utils.DateTimeUtils#timestampCeil` and `org.apache.flink.table.utils.DateTimeUtils#timestampFloor` respecting DST ## Verifying this change There is `org.apache.flink.table.utils.DateTimeUtilsTest` introduced ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-24640) CEIL, FLOOR built-in functions for Timestamp should respect DST
Sergey Nuyanzin created FLINK-24640: --- Summary: CEIL, FLOOR built-in functions for Timestamp should respect DST Key: FLINK-24640 URL: https://issues.apache.org/jira/browse/FLINK-24640 Project: Flink Issue Type: Bug Components: Table SQL / API Reporter: Sergey Nuyanzin The problem is that if there is a date in DST time then {code:sql} select floor(current_timestamp to year); {code} leads to result {noformat} 2021-12-31 23:00:00.000 {noformat} while expected is {{2022-01-01 00:00:00.000}} same issue is with {{WEEK}}, {{QUARTER}} and {{MONTH}}
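The off-by-one-hour behaviour described in the ticket can be reproduced with a minimal `java.time` sketch. This is illustrative only, not Flink's actual `DateTimeUtils` implementation: the naive variant floors using the UTC offset that is valid *now*, while a DST-aware floor truncates in the zone's local calendar.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class FloorToYearDst {
    static final ZoneId BERLIN = ZoneId.of("Europe/Berlin"); // +02:00 summer, +01:00 winter

    // Naive floor: shift by the *current* offset, truncate to Jan 1, shift back
    // by the same offset. The floored instant falls in winter (+01:00), so the
    // result is an hour early: 2021-12-31T23:00 local time.
    public static LocalDateTime naiveFloorToYear(ZonedDateTime ts) {
        long offset = ts.getOffset().getTotalSeconds();
        LocalDateTime shifted = LocalDateTime.ofEpochSecond(ts.toEpochSecond() + offset, 0, ZoneOffset.UTC);
        LocalDateTime floored = LocalDateTime.of(shifted.getYear(), 1, 1, 0, 0);
        Instant back = Instant.ofEpochSecond(floored.toEpochSecond(ZoneOffset.UTC) - offset);
        return back.atZone(ts.getZone()).toLocalDateTime();
    }

    // DST-aware floor: truncate in the zone's local calendar and let
    // ZonedDateTime resolve whichever offset applies on Jan 1.
    public static LocalDateTime dstAwareFloorToYear(ZonedDateTime ts) {
        return ts.toLocalDate().withDayOfYear(1).atStartOfDay(ts.getZone()).toLocalDateTime();
    }

    public static void main(String[] args) {
        ZonedDateTime summer = ZonedDateTime.of(2022, 7, 1, 12, 0, 0, 0, BERLIN);
        System.out.println(naiveFloorToYear(summer));    // 2021-12-31T23:00
        System.out.println(dstAwareFloorToYear(summer)); // 2022-01-01T00:00
    }
}
```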
[jira] [Commented] (FLINK-19589) Expose S3 options for tagging and object lifecycle policy for FileSystem
[ https://issues.apache.org/jira/browse/FLINK-19589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17434053#comment-17434053 ] Padarn Wilson commented on FLINK-19589: --- Sorry I can't make time to do this myself as I stopped working with Flink and it became a very low priority for our work. > Expose S3 options for tagging and object lifecycle policy for FileSystem > > > Key: FLINK-19589 > URL: https://issues.apache.org/jira/browse/FLINK-19589 > Project: Flink > Issue Type: Improvement > Components: FileSystems >Affects Versions: 1.12.0 >Reporter: Padarn Wilson >Priority: Minor > Labels: auto-unassigned, stale-minor > > This ticket proposes to expose the management of two properties related S3 > Object management: > - [Lifecycle configuration > |https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html] > - [Object > tagging|https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.htm] > Being able to control these is useful for people who want to manage jobs > using S3 for checkpointing or job output, but need to control per job level > configuration of the tagging/lifecycle for the purposes of auditing or cost > control (for example deleting old state from S3) > Ideally, it would be possible to control this on each object being written by > Flink, or at least at a job level. > _Note_*:* Some related existing properties can be set using the hadoop module > using system properties: see for example > {code:java} > fs.s3a.acl.default{code} > which sets the default ACL on written objects. > *Solutions*: > 1) Modify hadoop module: > The above-linked module could be updated in order to have a new property (and > similar for lifecycle) > fs.s3a.tags.default > which could be a comma separated list of tags to set. For example > {code:java} > fs.s3a.acl.default = "jobname:JOBNAME,owner:OWNER"{code} > This seems like a natural place to put this logic (and is outside of Flink if > we decide to go this way. 
However, it does not allow a sink and a checkpoint > to have different values for these. > 2) Expose withTagging from module > The hadoop module used by Flink's existing filesystem has already exposed put > request level tagging (see > [this|https://github.com/aws/aws-sdk-java/blob/c06822732612d7208927d2a678073098522085c3/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/model/PutObjectRequest.java#L292]). > This could be used in the Flink filesystem plugin to expose these options. A > possible approach could be to somehow incorporate it into the file path, e.g., > {code:java} > path = "TAGS:s3://bucket/path"{code} > Or possibly as an option that can be applied to the checkpoint and sink > configurations, e.g., > {code:java} > env.getCheckpointingConfig().setS3Tags(TAGS) {code} > and similar for a file sink. > _Note_: The lifecycle can also be managed using the module: see > [here|https://docs.aws.amazon.com/AmazonS3/latest/dev/manage-lifecycle-using-java.html].
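A small sketch of how the proposed tag string could be parsed into per-object tags. Note that `fs.s3a.tags.default` is the ticket's proposal, not an existing Hadoop option, and this parser is a hypothetical illustration of the `key:value,key:value` format suggested above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Parses a comma-separated "key:value" list such as "jobname:JOBNAME,owner:OWNER"
// into an ordered tag map, skipping malformed entries.
public class S3TagParser {
    public static Map<String, String> parseTags(String value) {
        Map<String, String> tags = new LinkedHashMap<>();
        if (value == null || value.trim().isEmpty()) {
            return tags;
        }
        for (String pair : value.split(",")) {
            String[] kv = pair.trim().split(":", 2); // split on the first ':' only
            if (kv.length == 2 && !kv[0].trim().isEmpty()) {
                tags.put(kv[0].trim(), kv[1].trim());
            }
        }
        return tags;
    }

    public static void main(String[] args) {
        Map<String, String> tags = parseTags("jobname:JOBNAME,owner:OWNER");
        assert tags.size() == 2;
        System.out.println(tags);
    }
}
```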
[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher
flinkbot edited a comment on pull request #17474: URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500 ## CI report: * 97737102fdbf40713cdb7cb47285a82ece786934 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25440) * bb80cf155d7e802f2b486dedf37d6e60dbf0a5d8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25442)
[jira] [Closed] (FLINK-24634) Java 11 profile should target JDK 8
[ https://issues.apache.org/jira/browse/FLINK-24634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-24634. Resolution: Fixed master: a392d19ae9f2693e66f73fcf73e5a2940b3dd6d2 > Java 11 profile should target JDK 8 > --- > > Key: FLINK-24634 > URL: https://issues.apache.org/jira/browse/FLINK-24634 > Project: Flink > Issue Type: Technical Debt > Components: Build System >Reporter: Chesnay Schepler >Assignee: Chesnay Schepler >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > The {{java11}} profile currently targets Java 11. This was useful because we > saw that doing so reveals additional issues that are not detected when > building for Java 8. The end goal was to ensure a smooth transition once we > switch. > However, this has adverse effects on developer productivity. > If you happen to switch between Java versions (for example, because of > separate environments, or because certain features require Java 8), then you > can easily run into UnsupportedVersionErrors when attempting to use Java 8 > with Java 11 bytecode. > IntelliJ also picks up on this and automatically sets the language level to > 11, which means that it will readily allow you to use Java 11 exclusive APIs > that will fail on CI later on. > To remedy this I propose to split the profile. > The {{java11}} profile will pretty much stay as is, except that it is > targeting Java 8. The value proposition of this profile is being able to > build Flink for Java 8 with Java 11. > A new explicitly-opt-in {{java11-target}} profile then sets the target > version to Java 11, which we will use on CI. This profile will ensure that we > can readily switch to Java 11 as the target in the future.
[GitHub] [flink] zentol merged pull request #17561: [FLINK-24634][build] Make Java 11 target opt-in
zentol merged pull request #17561: URL: https://github.com/apache/flink/pull/17561
[jira] [Closed] (FLINK-24018) Remove Scala dependencies from flink-cep/flink-streaming-java
[ https://issues.apache.org/jira/browse/FLINK-24018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-24018. Resolution: Fixed master: dd48d058c6b745f505870836048284a76a23f7cc > Remove Scala dependencies from flink-cep/flink-streaming-java > - > > Key: FLINK-24018 > URL: https://issues.apache.org/jira/browse/FLINK-24018 > Project: Flink > Issue Type: Sub-task > Components: Build System >Reporter: Chesnay Schepler >Assignee: Chesnay Schepler >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > To finalize the Scala isolation we should remove the Scala dependencies from > flink-cep/flink-streaming-java, and resolve the resulting cascade of > unnecessary scala suffixes.
[GitHub] [flink] zentol merged pull request #17518: [FLINK-24018][build] Remove Scala dependencies from Java APIs
zentol merged pull request #17518: URL: https://github.com/apache/flink/pull/17518
[jira] [Updated] (FLINK-9631) use Files.createDirectories instead of directory.mkdirs
[ https://issues.apache.org/jira/browse/FLINK-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-9631: -- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issues has been marked as Minor but is unassigned and neither itself nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. > use Files.createDirectories instead of directory.mkdirs > --- > > Key: FLINK-9631 > URL: https://issues.apache.org/jira/browse/FLINK-9631 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Affects Versions: 1.4.2, 1.5.0 > Environment: flink1.4 > jdk1.8 latest > linux 2.6 >Reporter: makeyang >Priority: Minor > Labels: auto-unassigned, stale-minor > > job can't be run due to below exception: > {color:#6a8759}Could not create RocksDB data directory{color} > but with this exception, I can't tell exactly why. > so I suggest Files.createDirectories which throw exception be used rather > than File.mkdirs > > I have some more suggestions: > # should we use Files.createDirectories to relpace File.mkdirs? > # each time task manager throw exception to jobmanager, should IP+nodeId be > contained in exception, which means we should define more flink exception > which is used to wrap other exceptions such as jdk exceptions? -- This message was sent by Atlassian Jira (v8.3.4#803005)
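The contrast the ticket describes can be shown in a short sketch using only standard `java.nio`/`java.io`: `File.mkdirs()` reports failure as a bare `false`, while `Files.createDirectories()` throws an `IOException` that says *why* the directory could not be created.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CreateDirs {
    // Wraps Files.createDirectories so a failure surfaces with its cause,
    // instead of the uninformative "Could not create RocksDB data directory".
    public static boolean createWithDiagnostics(Path dir) {
        try {
            Files.createDirectories(dir);
            return true;
        } catch (IOException e) {
            // e.g. FileAlreadyExistsException when a regular file blocks the path
            System.err.println("Could not create RocksDB data directory " + dir + ": " + e);
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("rocksdb-data");

        // Legacy style: a false return carries no diagnostic information.
        File legacy = new File(base.toFile(), "legacy/db");
        System.out.println("mkdirs: " + legacy.mkdirs());

        // Suggested style: any failure would come with an explanatory message.
        System.out.println("createDirectories: " + createWithDiagnostics(base.resolve("modern/db")));
    }
}
```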
[jira] [Updated] (FLINK-7957) Add MetricGroup#getLogicalScope
[ https://issues.apache.org/jira/browse/FLINK-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-7957: -- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issues has been marked as Minor but is unassigned and neither itself nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. > Add MetricGroup#getLogicalScope > --- > > Key: FLINK-7957 > URL: https://issues.apache.org/jira/browse/FLINK-7957 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.4.0 >Reporter: Chesnay Schepler >Priority: Minor > Labels: auto-unassigned, stale-minor > > Various reporters make use of a generalized scope string (e.g. > "taskmanager.job.task."). This string can be generated with > {{AbstractMetricGroup#getLogicalScope}}, which is however an internal API. As > a result, the access pattern currently looks like this: > {code} > ((FrontMetricGroup>)group).getLogicalScope(CHARACTER_FILTER, > '.') > {code} > Given the wide-spread of this kind of scope i propose to move it into the > MetricGroup interface. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-24482) KafkaShuffleTestBase.testMetrics fails on Azure
[ https://issues.apache.org/jira/browse/FLINK-24482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-24482: --- Labels: stale-critical test-stability (was: test-stability) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issues has been marked as Critical but is unassigned and neither itself nor its Sub-Tasks have been updated for 14 days. I have gone ahead and marked it "stale-critical". If this ticket is critical, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. > KafkaShuffleTestBase.testMetrics fails on Azure > --- > > Key: FLINK-24482 > URL: https://issues.apache.org/jira/browse/FLINK-24482 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.13.2 >Reporter: Till Rohrmann >Priority: Critical > Labels: stale-critical, test-stability > Attachments: logs-ci_build-test_ci_build_kafka_gelly-1633289435.zip > > > The test {{KafkaShuffleTestBase.testMetrics}} fails with > {code} > Oct 06 17:30:40 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 33.966 s <<< FAILURE! - in > org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleTestBase > Oct 06 17:30:40 [ERROR] > testMetrics(org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleTestBase) > Time elapsed: 2.948 s <<< FAILURE! 
> Oct 06 17:30:40 java.lang.AssertionError: expected:<14716> but was:<14206> > Oct 06 17:30:40 at org.junit.Assert.fail(Assert.java:88) > Oct 06 17:30:40 at org.junit.Assert.failNotEquals(Assert.java:834) > Oct 06 17:30:40 at org.junit.Assert.assertEquals(Assert.java:645) > Oct 06 17:30:40 at org.junit.Assert.assertEquals(Assert.java:631) > Oct 06 17:30:40 at > org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runMetricsTest(KafkaConsumerTestBase.java:1892) > Oct 06 17:30:40 at > org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.testMetrics(KafkaConsumerTestBase.java:1808) > Oct 06 17:30:40 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Oct 06 17:30:40 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Oct 06 17:30:40 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Oct 06 17:30:40 at java.lang.reflect.Method.invoke(Method.java:498) > Oct 06 17:30:40 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > Oct 06 17:30:40 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Oct 06 17:30:40 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > Oct 06 17:30:40 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Oct 06 17:30:40 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Oct 06 17:30:40 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Oct 06 17:30:40 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > Oct 06 17:30:40 at org.junit.rules.RunRules.evaluate(RunRules.java:20) > Oct 06 17:30:40 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > Oct 06 17:30:40 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > Oct 06 17:30:40 at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > Oct 06 17:30:40 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Oct 06 17:30:40 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Oct 06 17:30:40 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Oct 06 17:30:40 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Oct 06 17:30:40 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Oct 06 17:30:40 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Oct 06 17:30:40 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Oct 06 17:30:40 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > Oct 06 17:30:40 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > Oct 06 17:30:40 at org.junit.rules.RunRules.evaluate(RunRules.java:20) > Oct 06 17:30:40 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > Oct 06 17:30:40 at >
[jira] [Updated] (FLINK-13035) LocalStreamEnvironment shall launch actuall task solts
[ https://issues.apache.org/jira/browse/FLINK-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-13035: --- Labels: auto-unassigned pull-request-available stale-minor (was: auto-unassigned pull-request-available) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issues has been marked as Minor but is unassigned and neither itself nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. > LocalStreamEnvironment shall launch actuall task solts > --- > > Key: FLINK-13035 > URL: https://issues.apache.org/jira/browse/FLINK-13035 > Project: Flink > Issue Type: Improvement > Components: Runtime / Task >Affects Versions: 1.9.0 >Reporter: Wong >Priority: Minor > Labels: auto-unassigned, pull-request-available, stale-minor > Time Spent: 10m > Remaining Estimate: 0h > > When developing flink jobs, there is some times use different soltgroup to > expand threads.But now minicluster use default > jobGraph.getMaximumParallelism(), sometimes is less than actual solts,so it > can‘’t lanch job if not set TaskManagerOptions.NUM_TASK_SLOTS . Is this > needed? > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19589) Expose S3 options for tagging and object lifecycle policy for FileSystem
[ https://issues.apache.org/jira/browse/FLINK-19589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-19589: --- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Minor but is unassigned, and neither it nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > Expose S3 options for tagging and object lifecycle policy for FileSystem > > > Key: FLINK-19589 > URL: https://issues.apache.org/jira/browse/FLINK-19589 > Project: Flink > Issue Type: Improvement > Components: FileSystems >Affects Versions: 1.12.0 >Reporter: Padarn Wilson >Priority: Minor > Labels: auto-unassigned, stale-minor > > This ticket proposes to expose the management of two properties related to S3 > object management: > - [Lifecycle configuration > |https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html] > - [Object > tagging|https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.htm] > Being able to control these is useful for people who want to manage jobs > using S3 for checkpointing or job output, but need per-job control of the > tagging/lifecycle for the purposes of auditing or cost control (for example, > deleting old state from S3). > Ideally, it would be possible to control this on each object being written by > Flink, or at least at a job level. > _Note_*:* Some related existing properties can be set via the Hadoop module > using system properties: see for example > {code:java} > fs.s3a.acl.default{code} > which sets the default ACL on written objects. 
> *Solutions*: > 1) Modify the Hadoop module: > The above-linked module could be updated to add a new property (and > similar for lifecycle) > fs.s3a.tags.default > which could be a comma-separated list of tags to set. For example > {code:java} > fs.s3a.tags.default = "jobname:JOBNAME,owner:OWNER"{code} > This seems like a natural place to put this logic (and it is outside of Flink, if > we decide to go this way). However, it does not allow a sink and a checkpoint > to have different values for these. > 2) Expose withTagging from the module > The Hadoop module used by Flink's existing filesystem already exposes > put-request-level tagging (see > [this|https://github.com/aws/aws-sdk-java/blob/c06822732612d7208927d2a678073098522085c3/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/model/PutObjectRequest.java#L292]). > This could be used in the Flink filesystem plugin to expose these options. A > possible approach could be to somehow incorporate it into the file path, e.g., > {code:java} > path = "TAGS:s3://bucket/path"{code} > Or possibly as an option that can be applied to the checkpoint and sink > configurations, e.g., > {code:java} > env.getCheckpointingConfig().setS3Tags(TAGS) {code} > and similar for a file sink. > _Note_: The lifecycle can also be managed using the module: see > [here|https://docs.aws.amazon.com/AmazonS3/latest/dev/manage-lifecycle-using-java.html]. > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-12637) Add metrics for floatingBufferUsage and exclusiveBufferUsage for credit based mode
[ https://issues.apache.org/jira/browse/FLINK-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-12637: --- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Minor but is unassigned, and neither it nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > Add metrics for floatingBufferUsage and exclusiveBufferUsage for credit based > mode > -- > > Key: FLINK-12637 > URL: https://issues.apache.org/jira/browse/FLINK-12637 > Project: Flink > Issue Type: Improvement > Components: Runtime / Network >Affects Versions: 1.9.0 >Reporter: Aitozi >Priority: Minor > Labels: auto-unassigned, stale-minor > > Described > [here|https://github.com/apache/flink/pull/8455#issuecomment-496077999] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-21441) FlinkKinesisProducer does not forward custom endpoint
[ https://issues.apache.org/jira/browse/FLINK-21441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-21441: --- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Minor but is unassigned, and neither it nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > FlinkKinesisProducer does not forward custom endpoint > - > > Key: FLINK-21441 > URL: https://issues.apache.org/jira/browse/FLINK-21441 > Project: Flink > Issue Type: Bug > Components: Connectors / Kinesis >Affects Versions: 1.12.1, 1.13.0 >Reporter: Ingo Bürk >Priority: Minor > Labels: auto-unassigned, stale-minor > > For the Kinesis connector, one of aws.region and aws.endpoint is required. > For FlinkKinesisProducer, however, aws.region is always required, and in that > case aws.endpoint never ends up being used, since the code only calls > KinesisProducerConfiguration#fromProperties. One would have to set a > "KinesisEndpoint" property instead, but this is not a valid property. > This should be fixed so that a custom endpoint can be used; the same probably > goes for the port. > Also, this is currently being worked around here: > https://github.com/apache/flink/blob/master/flink-end-to-end-tests/flink-streaming-kinesis-test/src/main/java/org/apache/flink/streaming/kinesis/test/KinesisExample.java#L70 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-8513) Add documentation for connecting to non-AWS S3 endpoints
[ https://issues.apache.org/jira/browse/FLINK-8513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-8513: -- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Minor but is unassigned, and neither it nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > Add documentation for connecting to non-AWS S3 endpoints > > > Key: FLINK-8513 > URL: https://issues.apache.org/jira/browse/FLINK-8513 > Project: Flink > Issue Type: Improvement > Components: Connectors / FileSystem, Documentation >Reporter: chris snow >Priority: Minor > Labels: auto-unassigned, stale-minor > > It would be useful if the documentation provided information on connecting to > non-AWS S3 endpoints when using Presto. For example: > > > You need to configure both {{s3.access-key}} and {{s3.secret-key}} in Flink's > {{flink-conf.yaml}}: > {code:java} > s3.access-key: your-access-key > s3.secret-key: your-secret-key{code} > If you are using a non-AWS S3 endpoint (such as [IBM's Cloud Object > Storage|https://www.ibm.com/cloud/object-storage]), you can configure the S3 > endpoint in Flink's {{flink-conf.yaml}}: > {code:java} > s3.endpoint: your-endpoint-hostname{code} > > > Source: > [https://github.com/apache/flink/blob/master/docs/ops/deployment/aws.md] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20004) UpperLimitExceptionParameter description is misleading
[ https://issues.apache.org/jira/browse/FLINK-20004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-20004: --- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Minor but is unassigned, and neither it nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > UpperLimitExceptionParameter description is misleading > -- > > Key: FLINK-20004 > URL: https://issues.apache.org/jira/browse/FLINK-20004 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.11.2 >Reporter: Flavio Pompermaier >Priority: Minor > Labels: auto-unassigned, stale-minor > > The maxExceptions query parameter of the /jobs/:jobid/exceptions REST API is an > integer parameter, not a list of comma-separated values; this is probably a > copy-and-paste error. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-21474) Migrate ParquetTableSource to use DynamicTableSource Interface
[ https://issues.apache.org/jira/browse/FLINK-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-21474: --- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Minor but is unassigned, and neither it nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > Migrate ParquetTableSource to use DynamicTableSource Interface > -- > > Key: FLINK-21474 > URL: https://issues.apache.org/jira/browse/FLINK-21474 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.12.1 >Reporter: Zhenqiu Huang >Priority: Minor > Labels: auto-unassigned, stale-minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-21473) Migrate ParquetInputFormat to BulkFormat interface
[ https://issues.apache.org/jira/browse/FLINK-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-21473: --- Labels: auto-unassigned stale-minor (was: auto-unassigned) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Minor but is unassigned, and neither it nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > Migrate ParquetInputFormat to BulkFormat interface > -- > > Key: FLINK-21473 > URL: https://issues.apache.org/jira/browse/FLINK-21473 > Project: Flink > Issue Type: Improvement > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.12.1 >Reporter: Zhenqiu Huang >Priority: Minor > Labels: auto-unassigned, stale-minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-13259) Typos about "access"
[ https://issues.apache.org/jira/browse/FLINK-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-13259: --- Labels: auto-unassigned pull-request-available stale-minor (was: auto-unassigned pull-request-available) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Minor but is unassigned, and neither it nor its Sub-Tasks have been updated for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is still Minor, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > Typos about "access" > > > Key: FLINK-13259 > URL: https://issues.apache.org/jira/browse/FLINK-13259 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Affects Versions: 1.8.0, 1.8.1 >Reporter: Charles Xu >Priority: Minor > Labels: auto-unassigned, pull-request-available, stale-minor > Time Spent: 10m > Remaining Estimate: 0h > > Typos in PojoComparator, FieldAccessor, HashTableITCase, > ReOpenableHashTableITCase, ReOpenableHashTableTestBase: accesss -> access. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24639) Improve assignment of Kinesis shards to subtasks
John Karp created FLINK-24639: - Summary: Improve assignment of Kinesis shards to subtasks Key: FLINK-24639 URL: https://issues.apache.org/jira/browse/FLINK-24639 Project: Flink Issue Type: Improvement Components: Connectors / Kinesis Reporter: John Karp Attachments: Screen Shot 2021-10-25 at 5.11.29 PM.png The default assigner of Kinesis shards to Flink subtasks simply takes the hashCode() of the StreamShardHandle (an integer), which is then interpreted modulo the number of subtasks. This basically does random-ish but deterministic assignment of shards to subtasks. However, this can lead to some subtasks getting several times the number of shards as others. To prevent those unlucky subtasks from being overloaded, the overall Flink cluster must be over-provisioned, so that each subtask has more headroom to handle any over-assignment of shards. We can do better here, at least if Kinesis is being used in a common way. Each record sent to a Kinesis stream has a particular hash key in the range [0, 2^128), which is used to determine which shard gets used; each shard has an assigned range of hash keys. By default Kinesis assigns each shard equal fractions of the hash-key space. And when you scale up or down using UpdateShardCount, it tries to maintain equal fractions to the extent possible. Also, a shard's hash key range is fixed at creation; it can only be replaced by new shards, which split it, or merge it. Given the above, one way to assign shards to subtasks is to do a linear mapping from hash-keys in range [0, 2^128) to subtask indices in [0, nSubtasks). For the 'coordinate' of each shard we pick the middle of the shard's range, to ensure neither subtask 0 nor subtask (n-1) is assigned too many. However this will probably not be helpful for Kinesis users that don't randomly assign partition or hash keys to Kinesis records. The existing assigner is probably better for them. 
I ran a simulation of the default shard assigner versus some alternatives, using shards taken from one of our Kinesis streams; results attached. The measure I used, which I call 'overload', captures how many times more shards the most heavily loaded subtask has than is necessary. (DEFAULT is the default assigner, Sha256 is similar to the default but with a stronger hashing function, ShardId extracts the shard number from the shardId and uses that, and HashKey is the one I describe above.) Patch is at: https://github.com/apache/flink/compare/master...john-karp:uniform-shard-assigner?expand=1 -- This message was sent by Atlassian Jira (v8.3.4#803005)
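The hash-key-midpoint scheme described in the ticket can be sketched in pure Python. This is an illustrative re-implementation under the ticket's stated assumptions, not the linked Java patch itself: take the midpoint of each shard's hash-key range and map it linearly from [0, 2^128) onto the subtask indices [0, nSubtasks).

```python
HASH_KEY_SPACE = 2 ** 128  # Kinesis hash-key range is [0, 2^128)

def uniform_shard_assigner(start_key, end_key, n_subtasks):
    """Map the midpoint of a shard's hash-key range linearly onto subtask indices."""
    mid = (start_key + end_key) // 2
    # Integer division keeps everything exact; clamp guards the top of the range.
    return min(mid * n_subtasks // HASH_KEY_SPACE, n_subtasks - 1)

# Four shards evenly splitting the key space (the default Kinesis layout),
# assigned across two subtasks:
quarter = HASH_KEY_SPACE // 4
shards = [(i * quarter, (i + 1) * quarter - 1) for i in range(4)]
assignments = [uniform_shard_assigner(s, e, 2) for s, e in shards]
print(assignments)  # → [0, 0, 1, 1]: each subtask gets exactly half the shards
```

Because shards created by even splits and merges keep roughly equal hash-key fractions, this midpoint mapping stays balanced where a `hashCode() % n` assignment can pile several shards onto one subtask.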
[GitHub] [flink] flinkbot edited a comment on pull request #17562: [FLINK-16206][table-planner] Support JSON_ARRAYAGG
flinkbot edited a comment on pull request #17562: URL: https://github.com/apache/flink/pull/17562#issuecomment-951200622 ## CI report: * 7a90ad8e577b79b3f68fbed12a5824f6b822c129 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25434) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17518: [FLINK-24018][build] Remove Scala dependencies from Java APIs
flinkbot edited a comment on pull request #17518: URL: https://github.com/apache/flink/pull/17518#issuecomment-946526433 ## CI report: * d5a8ba676046f94bcdb8607231becd051ee4c7db Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25433) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17549: [FLINK-16205][table-planner] Support JSON_OBJECTAGG
flinkbot edited a comment on pull request #17549: URL: https://github.com/apache/flink/pull/17549#issuecomment-949678124 ## CI report: * 5ea492bfda82c87d79bf7ab6fb999f65e8c872ca Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25435) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17563: [hotfix][flink-connector-jdbc] Removed unused variable from test fixture
flinkbot edited a comment on pull request #17563: URL: https://github.com/apache/flink/pull/17563#issuecomment-951350039 ## CI report: * ac1900b42fb04a42e58a297c2e1bef8f66f7f10a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25443) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #17563: [hotfix][flink-connector-jdbc] Removed unused variable from test fixture
flinkbot commented on pull request #17563: URL: https://github.com/apache/flink/pull/17563#issuecomment-951351112 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit ac1900b42fb04a42e58a297c2e1bef8f66f7f10a (Mon Oct 25 21:35:39 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #17563: [hotfix][flink-connector-jdbc] Removed unused variable from test fixture
flinkbot commented on pull request #17563: URL: https://github.com/apache/flink/pull/17563#issuecomment-951350039 ## CI report: * ac1900b42fb04a42e58a297c2e1bef8f66f7f10a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] mans2singh opened a new pull request #17563: [hotfix][flink-connector-jdbc] Removed unused variable from test fixture
mans2singh opened a new pull request #17563: URL: https://github.com/apache/flink/pull/17563 ## What is the purpose of the change * JdbcTestFixture does not use the SELECT_ID_BOOKS variable ## Brief change log * Removed unused variable ## Verifying this change This change is a trivial rework / code cleanup without any test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no - If yes, how is the feature documented? not applicable -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher
flinkbot edited a comment on pull request #17474: URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500 ## CI report: * 083a2773e9edc56a79acebb824dec0e424442456 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25377) * 97737102fdbf40713cdb7cb47285a82ece786934 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25440) * bb80cf155d7e802f2b486dedf37d6e60dbf0a5d8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25442) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17472: [FLINK-24486][rest] Make async result store duration configurable
flinkbot edited a comment on pull request #17472: URL: https://github.com/apache/flink/pull/17472#issuecomment-943076656 ## CI report: * e377697fc0cb853615137e934d268650259762f4 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25314) * a8cd28194ad50044279d895a36d08a84da8fb78e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25441) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher
flinkbot edited a comment on pull request #17474: URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500 ## CI report: * 083a2773e9edc56a79acebb824dec0e424442456 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25377) * 97737102fdbf40713cdb7cb47285a82ece786934 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25440) * bb80cf155d7e802f2b486dedf37d6e60dbf0a5d8 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17472: [FLINK-24486][rest] Make async result store duration configurable
flinkbot edited a comment on pull request #17472: URL: https://github.com/apache/flink/pull/17472#issuecomment-943076656 ## CI report: * e377697fc0cb853615137e934d268650259762f4 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25314) * a8cd28194ad50044279d895a36d08a84da8fb78e UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher
flinkbot edited a comment on pull request #17474: URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500 ## CI report: * 083a2773e9edc56a79acebb824dec0e424442456 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25377) * 97737102fdbf40713cdb7cb47285a82ece786934 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25440) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-23986) Add support for opting-out of Scala
[ https://issues.apache.org/jira/browse/FLINK-23986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler updated FLINK-23986: - Release Note: The Java DataSet/-Stream APIs are now independent from Scala and no longer transitively depend on it. The implications are the following: a) If you only intend to use the Java APIs, with Java types, then you can opt-in to a Scala-free Flink by removing the flink-scala jar from the lib/ directory of the distribution. You are then free to use any Scala version and Scala libraries. Do note that Scala itself cannot be bundled in user-jars; instead the jars MUST be put into the lib/ directory of the distribution. b) If you relied on the Scala APIs, without an explicit dependency on them, then you may experience issues when building your projects. You can solve this by adding explicit dependencies to the APIs that you are using. This should primarily affect users of the Scala DataStream/CEP APIs. c) A large number of modules have lost their Scala suffix. Any dependency on one of the modules listed below needs to be updated. Further caution is advised when mixing dependencies from different Flink versions (e.g., an older connector), as you may now end up pulling in multiple versions of a single module (that would previously be prevented by the name being equal). 
Any dependency to one of the following modules needs to be updated to no longer include a suffix: flink-cep flink-clients flink-connector-elasticsearch-base flink-connector-elasticsearch5 flink-connector-elasticsearch6 flink-connector-elasticsearch7 flink-connector-gcp-pubsub flink-connector-hbase-1.4 flink-connector-hbase-2.2 flink-connector-hbase-base flink-connector-jdbc flink-connector-kafka flink-connector-kinesis flink-connector-nifi flink-connector-pulsar flink-connector-rabbitmq flink-connector-testing flink-connector-twitter flink-connector-wikiedits flink-container flink-dstl-dfs flink-gelly flink-hadoop-bulk flink-kubernetes flink-runtime-web flink-scala flink-sql-connector-elasticsearch6 flink-sql-connector-elasticsearch7 flink-sql-connector-hbase-1.4 flink-sql-connector-hbase-2.2 flink-sql-connector-kafka flink-sql-connector-kinesis flink-sql-connector-rabbitmq flink-state-processor-api flink-statebackend-rocksdb flink-streaming-java flink-table-api-java-bridge flink-test-utils flink-yarn was: The Java DataSet/-Stream APIs are now independent from Scala and no longer transitively depend on it. The implications are the following: a) If you relied on the Scala APIs, without an explicit dependency on them, then you may experience issues when building your projects. You can solve this by adding explicit dependencies to the APIs that you are using. This should primarily affect users of the Scala DataStream/CEP APIs. b) If you only intend to use the Java APIs, with Java types, then you can opt-in to a Scala-free Flink by removing the flink-scala jar from the lib/ directory of the distribution. You are then free to use any Scala version and Scala libraries. Do note that Scala itself cannot be bundled in user-jars; instead the jars MUST be put into the lib/ directory of the distribution. c) A large number of modules have lost their Scala suffix. Any dependency on one of the modules listed below needs to be updated. 
Further caution is advised when mixing dependencies from different Flink versions (e.g., an older connector), as you may now end up pulling in multiple versions of a single module (that would previously be prevented by the name being equal). Any dependency to one of the following modules needs to be updated to no longer include a suffix: flink-cep flink-clients flink-connector-elasticsearch-base flink-connector-elasticsearch5 flink-connector-elasticsearch6 flink-connector-elasticsearch7 flink-connector-gcp-pubsub flink-connector-hbase-1.4 flink-connector-hbase-2.2 flink-connector-hbase-base flink-connector-jdbc flink-connector-kafka flink-connector-kinesis flink-connector-nifi flink-connector-pulsar flink-connector-rabbitmq flink-connector-testing flink-connector-twitter flink-connector-wikiedits flink-container flink-dstl-dfs flink-gelly flink-hadoop-bulk flink-kubernetes flink-runtime-web flink-scala flink-sql-connector-elasticsearch6 flink-sql-connector-elasticsearch7 flink-sql-connector-hbase-1.4 flink-sql-connector-hbase-2.2 flink-sql-connector-kafka flink-sql-connector-kinesis flink-sql-connector-rabbitmq flink-state-processor-api flink-statebackend-rocksdb flink-streaming-java flink-table-api-java-bridge flink-test-utils flink-yarn > Add support for opting-out of Scala > --- > > Key: FLINK-23986 > URL: https://issues.apache.org/jira/browse/FLINK-23986 > Project: Flink > Issue Type: Technical Debt > Components: API / DataSet, API / DataStream >Reporter: Chesnay Schepler >
[jira] [Updated] (FLINK-23986) Add support for opting-out of Scala
[ https://issues.apache.org/jira/browse/FLINK-23986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler updated FLINK-23986: - Release Note: The Java DataSet/-Stream APIs are now independent from Scala and no longer transitively depend on it. The implications are the following: a) If you relied on the Scala APIs, without an explicit dependency on them, then you may experience issues when building your projects. You can solve this by adding explicit dependencies to the APIs that you are using. This should primarily affect users of the Scala DataStream/CEP APIs. b) If you only intend to use the Java APIs, with Java types, then you can opt-in to a Scala-free Flink by removing the flink-scala jar from the lib/ directory of the distribution. You are then free to use any Scala version and Scala libraries. Do note that Scala itself cannot be bundled in user-jars; instead the jars MUST be put into the lib/ directory of the distribution. c) A large number of modules have lost their Scala suffix. Any dependency on one of the modules listed below needs to be updated. Further caution is advised when mixing dependencies from different Flink versions (e.g., an older connector), as you may now end up pulling in multiple versions of a single module (that would previously be prevented by the name being equal). 
Any dependency to one of the following modules needs to be updated to no longer include a suffix: flink-cep flink-clients flink-connector-elasticsearch-base flink-connector-elasticsearch5 flink-connector-elasticsearch6 flink-connector-elasticsearch7 flink-connector-gcp-pubsub flink-connector-hbase-1.4 flink-connector-hbase-2.2 flink-connector-hbase-base flink-connector-jdbc flink-connector-kafka flink-connector-kinesis flink-connector-nifi flink-connector-pulsar flink-connector-rabbitmq flink-connector-testing flink-connector-twitter flink-connector-wikiedits flink-container flink-dstl-dfs flink-gelly flink-hadoop-bulk flink-kubernetes flink-runtime-web flink-scala flink-sql-connector-elasticsearch6 flink-sql-connector-elasticsearch7 flink-sql-connector-hbase-1.4 flink-sql-connector-hbase-2.2 flink-sql-connector-kafka flink-sql-connector-kinesis flink-sql-connector-rabbitmq flink-state-processor-api flink-statebackend-rocksdb flink-streaming-java flink-table-api-java-bridge flink-test-utils flink-yarn > Add support for opting-out of Scala > --- > > Key: FLINK-23986 > URL: https://issues.apache.org/jira/browse/FLINK-23986 > Project: Flink > Issue Type: Technical Debt > Components: API / DataSet, API / DataStream >Reporter: Chesnay Schepler >Assignee: Chesnay Schepler >Priority: Major > Fix For: 1.15.0 > > > The goal of this ticket is to fully isolate the Java APIs (excluding the > Table API) from Scala, such that jobs that don't require Scala can be run > without Scala being in {{lib/}}. > In particular we want to allow Scala users that use the Java APIs to use > whichever Scala version they want to use. -- This message was sent by Atlassian Jira (v8.3.4#803005)
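For Maven users, the suffix removal described in the release note amounts to an artifactId change; a sketch of the before/after for one of the listed modules (version numbers are illustrative):

```xml
<!-- Before Flink 1.15: Scala-suffixed artifact -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.12</artifactId>
  <version>1.14.0</version>
</dependency>

<!-- From Flink 1.15 on: no Scala suffix -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java</artifactId>
  <version>1.15.0</version>
</dependency>
```

Note the caution above: because the suffixed and suffix-free names differ, Maven will not detect them as versions of the same module, so mixing Flink versions can silently pull in both.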
[GitHub] [flink] flinkbot edited a comment on pull request #17549: [FLINK-16205][table-planner] Support JSON_OBJECTAGG
flinkbot edited a comment on pull request #17549: URL: https://github.com/apache/flink/pull/17549#issuecomment-949678124 ## CI report: * 5c0c45fbb0cc26c41ac386601d0682245ec5b814 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25432) * 5ea492bfda82c87d79bf7ab6fb999f65e8c872ca Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25435) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Created] (FLINK-24638) Unknown variable or type "org.apache.flink.table.utils.DateTimeUtils"
Sergey Nuyanzin created FLINK-24638: --- Summary: Unknown variable or type "org.apache.flink.table.utils.DateTimeUtils" Key: FLINK-24638 URL: https://issues.apache.org/jira/browse/FLINK-24638 Project: Flink Issue Type: Bug Components: Table SQL / API Reporter: Sergey Nuyanzin The problem is not constantly reproduced however it is reproduced almost every 2-nd query via FlinkSqlClient containing {{current_timestamp}}, {{current_date}} e.g. {code:sql} select extract(millennium from current_date); select extract(millennium from current_timestamp); select floor(current_timestamp to day); select ceil(current_timestamp to day); {code} trace {noformat} [ERROR] Could not execute SQL statement. Reason: org.codehaus.commons.compiler.CompileException: Line 59, Column 16: Unknown variable or type "org.apache.flink.table.utils.DateTimeUtils" at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12211) at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6860) at org.codehaus.janino.UnitCompiler.access$13600(UnitCompiler.java:215) at org.codehaus.janino.UnitCompiler$22.visitPackage(UnitCompiler.java:6472) at org.codehaus.janino.UnitCompiler$22.visitPackage(UnitCompiler.java:6469) at org.codehaus.janino.Java$Package.accept(Java.java:4248) at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6469) at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6855) at org.codehaus.janino.UnitCompiler.access$14200(UnitCompiler.java:215) at org.codehaus.janino.UnitCompiler$22$2$1.visitAmbiguousName(UnitCompiler.java:6497) at org.codehaus.janino.UnitCompiler$22$2$1.visitAmbiguousName(UnitCompiler.java:6494) at org.codehaus.janino.Java$AmbiguousName.accept(Java.java:4224) at org.codehaus.janino.UnitCompiler$22$2.visitLvalue(UnitCompiler.java:6494) at org.codehaus.janino.UnitCompiler$22$2.visitLvalue(UnitCompiler.java:6490) at org.codehaus.janino.Java$Lvalue.accept(Java.java:4148) at org.codehaus.janino.UnitCompiler$22.visitRvalue(UnitCompiler.java:6490) 
at org.codehaus.janino.UnitCompiler$22.visitRvalue(UnitCompiler.java:6469) at org.codehaus.janino.Java$Rvalue.accept(Java.java:4116) at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6469) at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9026) at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:7106) at org.codehaus.janino.UnitCompiler.access$15800(UnitCompiler.java:215) at org.codehaus.janino.UnitCompiler$22$2.visitMethodInvocation(UnitCompiler.java:6517) at org.codehaus.janino.UnitCompiler$22$2.visitMethodInvocation(UnitCompiler.java:6490) at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:5073) at org.codehaus.janino.UnitCompiler$22.visitRvalue(UnitCompiler.java:6490) at org.codehaus.janino.UnitCompiler$22.visitRvalue(UnitCompiler.java:6469) at org.codehaus.janino.Java$Rvalue.accept(Java.java:4116) at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6469) at org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:9237) at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9123) at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9025) at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5062) at org.codehaus.janino.UnitCompiler.access$9100(UnitCompiler.java:215) at org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4423) at org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4396) at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:5073) at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4396) at org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5662) at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:3792) at org.codehaus.janino.UnitCompiler.access$6100(UnitCompiler.java:215) at org.codehaus.janino.UnitCompiler$13.visitAssignment(UnitCompiler.java:3754) at org.codehaus.janino.UnitCompiler$13.visitAssignment(UnitCompiler.java:3734) at 
org.codehaus.janino.Java$Assignment.accept(Java.java:4477) at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:3734) at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:2360) at org.codehaus.janino.UnitCompiler.access$1800(UnitCompiler.java:215) at org.codehaus.janino.UnitCompiler$6.visitExpressionStatement(UnitCompiler.java:1494) at org.codehaus.janino.UnitCompiler$6.visitExpressionStatement(UnitCompiler.java:1487) at
[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher
flinkbot edited a comment on pull request #17474: URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500 ## CI report: * 083a2773e9edc56a79acebb824dec0e424442456 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25377) * 97737102fdbf40713cdb7cb47285a82ece786934 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Created] (FLINK-24637) Move savepoint disposal operation cache into Dispatcher
Chesnay Schepler created FLINK-24637: Summary: Move savepoint disposal operation cache into Dispatcher Key: FLINK-24637 URL: https://issues.apache.org/jira/browse/FLINK-24637 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination, Runtime / REST Reporter: Chesnay Schepler Fix For: 1.15.0
[jira] [Updated] (FLINK-18312) Move async savepoint operation cache into Dispatcher
[ https://issues.apache.org/jira/browse/FLINK-18312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler updated FLINK-18312: - Summary: Move async savepoint operation cache into Dispatcher (was: Move async operation cache into Dispatcher) > Move async savepoint operation cache into Dispatcher > - > > Key: FLINK-18312 > URL: https://issues.apache.org/jira/browse/FLINK-18312 > Project: Flink > Issue Type: Sub-task > Components: Runtime / REST >Affects Versions: 1.14.0, 1.13.2 >Reporter: Yu Wang >Assignee: Nicolaus Weidner >Priority: Major > Labels: pull-request-available > > The information about in-progress async operations is currently stored in the > respective REST handler. This means that users cannot query standby > Dispatchers for the status of a savepoint operation. > We should move this state into the Dispatcher.
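The idea in FLINK-18312 — keep in-flight async operation status in one component so any frontend handler, including one on a standby instance, can query it — can be sketched as a registry of futures keyed by operation ID. This is an illustrative sketch with hypothetical names, not Flink's actual Dispatcher API:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the component owning the operation (e.g. a dispatcher)
// registers one future per operation key; REST handlers only poll this cache
// instead of holding the state themselves.
class AsyncOperationCache<K, R> {
    private final Map<K, CompletableFuture<R>> operations = new ConcurrentHashMap<>();

    void register(K key, CompletableFuture<R> operation) {
        operations.put(key, operation);
    }

    // Empty while the operation is still in progress, the result once done.
    Optional<R> pollStatus(K key) {
        CompletableFuture<R> f = operations.get(key);
        if (f == null) {
            throw new IllegalArgumentException("Unknown operation: " + key);
        }
        return f.isDone() ? Optional.of(f.join()) : Optional.empty();
    }
}
```

Because the cache is decoupled from the REST handler, any component holding it can answer status queries — which is exactly why the ticket moves the state into the Dispatcher.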
[jira] [Created] (FLINK-24636) Move cluster deletion operation cache into ResourceManager
Chesnay Schepler created FLINK-24636: Summary: Move cluster deletion operation cache into ResourceManager Key: FLINK-24636 URL: https://issues.apache.org/jira/browse/FLINK-24636 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination, Runtime / REST Reporter: Chesnay Schepler Fix For: 1.15.0
[GitHub] [flink] flinkbot edited a comment on pull request #17561: [FLINK-24634][build] Make Java 11 target opt-in
flinkbot edited a comment on pull request #17561: URL: https://github.com/apache/flink/pull/17561#issuecomment-951012765 ## CI report: * 4af1896675848f0ee6ecde772275549b6b29fa57 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25431) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #17550: [FLINK-24599][table] Replace static methods with member methods
flinkbot edited a comment on pull request #17550: URL: https://github.com/apache/flink/pull/17550#issuecomment-949779530 ## CI report: * 220fcb4c0756aa76684923447db7983457b353a5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25429) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #17549: [FLINK-16205][table-planner] Support JSON_OBJECTAGG
flinkbot edited a comment on pull request #17549: URL: https://github.com/apache/flink/pull/17549#issuecomment-949678124 ## CI report: * 707582c5ba35c46d8b9e880a716d257efbf2 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25422) * 5c0c45fbb0cc26c41ac386601d0682245ec5b814 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25432) * 5ea492bfda82c87d79bf7ab6fb999f65e8c872ca Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25435) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Assigned] (FLINK-24351) translate "JSON Function" pages into Chinese
[ https://issues.apache.org/jira/browse/FLINK-24351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ingo Bürk reassigned FLINK-24351: - Assignee: ZhuoYu Chen (was: Ingo Bürk) > translate "JSON Function" pages into Chinese > - > > Key: FLINK-24351 > URL: https://issues.apache.org/jira/browse/FLINK-24351 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: liwei li >Assignee: ZhuoYu Chen >Priority: Major > Labels: pull-request-available > Attachments: sql_functions_zh.yml, 微信图片_20211009105019.png > > > translate "JSON Function" pages into Chinese, > docs/data/sql_functions_zh.yml > > https://github.com/apache/flink/pull/17275#issuecomment-924536467
[jira] [Assigned] (FLINK-24351) translate "JSON Function" pages into Chinese
[ https://issues.apache.org/jira/browse/FLINK-24351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ingo Bürk reassigned FLINK-24351: - Assignee: Ingo Bürk (was: ZhuoYu Chen) > translate "JSON Function" pages into Chinese > - > > Key: FLINK-24351 > URL: https://issues.apache.org/jira/browse/FLINK-24351 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: liwei li >Assignee: Ingo Bürk >Priority: Major > Labels: pull-request-available > Attachments: sql_functions_zh.yml, 微信图片_20211009105019.png > > > translate "JSON Function" pages into Chinese, > docs/data/sql_functions_zh.yml > > https://github.com/apache/flink/pull/17275#issuecomment-924536467
[GitHub] [flink] Airblader commented on a change in pull request #17508: [FLINK-24351][docs] Translate "JSON Function" pages into Chinese
Airblader commented on a change in pull request #17508: URL: https://github.com/apache/flink/pull/17508#discussion_r735869757 ## File path: docs/data/sql_functions_zh.yml ## @@ -887,15 +875,11 @@ json: - sql: JSON_ARRAY([value]* [ { NULL | ABSENT } ON NULL ]) table: jsonArray(JsonOnNull, values...) description: | - Builds a JSON array string from a list of values. + 将一个数值列表构建成一个 JSON 数组字符串。 - This function returns a JSON string. The values can be arbitrary expressions. The `ON NULL` - behavior defines how to treat `NULL` values. If omitted, `ABSENT ON NULL` is assumed by - default. + 这个函数返回一个 JSON 字符串,值可以是任意表达式。'ON NULL' 行为定义了如何处理 'NULL' 值。如果省略,则假定 'ABSENT ON NULL' 为默认值。 - Elements which are created from another JSON construction function call (`JSON_OBJECT`, - `JSON_ARRAY`) are inserted directly rather than as a string. This allows building nested JSON - structures. + 元素是由另一个 JSON 构造函数调用 ('JSON_OBJECT','JSON_ARRAY') 直接插入所创建,而不是作为一个字符串。他允许构建嵌套的 JSON 结构。 Review comment: Yes, this is somewhat magic behavior of these functions, but all other RDBMS implement it this way as well, and it is significantly more useful to users than producing `'["[1]"]'` – we discussed this at length. :-) We now also have `JSON_STRING`, so users could use `JSON_ARRAY(JSON_STRING(JSON_ARRAY(1)))` if they really wanted to.
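The nesting rule under discussion — results of JSON construction functions are spliced in as JSON, while everything else is quoted as a string — can be mimicked in a few lines. This is a toy sketch of the semantics only, not Flink's implementation (plain values are not escaped here):

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy model: a RawJson wrapper marks values that came from another JSON
// construction call, so they are inserted directly instead of being quoted.
class JsonArraySketch {
    static final class RawJson {
        final String json;
        RawJson(String json) { this.json = json; }
    }

    static String jsonArray(List<Object> values) {
        return values.stream()
                .map(v -> v instanceof RawJson
                        ? ((RawJson) v).json      // nested JSON: spliced in as-is
                        : "\"" + v + "\"")        // plain value: rendered as a JSON string
                .collect(Collectors.joining(",", "[", "]"));
    }
}
```

With this, nesting the result of another construction call yields `[[1]]`, while passing the same text as a plain string yields `["[1]"]` — the distinction the review comment above defends.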
[GitHub] [flink] AHeise commented on a change in pull request #17536: [FLINK-24530][datastream] GlobalCommitter might not commit all records on drain
AHeise commented on a change in pull request #17536: URL: https://github.com/apache/flink/pull/17536#discussion_r735869185 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/CommitterHandler.java ## @@ -72,4 +78,32 @@ default void snapshotState(StateSnapshotContext context) throws Exception {} * @return successfully retried committables that is sent downstream. */ Collection retry() throws IOException, InterruptedException; + +/** + * The serializable factory of a committer handler such that the stateful implementations of + * {@link CommitterHandler} do not need to be {@link Serializable} themselves. + */ +interface Factory extends Serializable { +CommitterHandler create(Sink sink) throws IOException; + +default T checkSerializerPresent(Optional optional, boolean global) { +String scope = global ? " global" : ""; +checkState( +optional.isPresent(), +"Internal error: a%s committer should only be created if the sink has a%s committable serializer.", +scope, +scope); +return optional.get(); +} + +default T checkCommitterPresent(Optional optional, boolean global) { +String scope = global ? " global" : ""; +checkState( +optional.isPresent(), +"Expected a%s committer because%s committable serializer is set.", +scope, +scope); +return optional.get(); +} +} Review comment: How about I change the Factory interface to an abstract class? It might be weird because they are default methods?
[GitHub] [flink] flinkbot edited a comment on pull request #17179: [FLINK-24086][checkpoint] Rebuilding the SharedStateRegistry only when the restore method is called for the first time.
flinkbot edited a comment on pull request #17179: URL: https://github.com/apache/flink/pull/17179#issuecomment-914169535 ## CI report: * de26dc218e37b6192e937205374cb2de8c0cf747 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25428) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] AHeise commented on a change in pull request #17536: [FLINK-24530][datastream] GlobalCommitter might not commit all records on drain
AHeise commented on a change in pull request #17536: URL: https://github.com/apache/flink/pull/17536#discussion_r735868485 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/CommitRetrier.java ## @@ -21,33 +21,44 @@ import org.apache.flink.streaming.runtime.tasks.ProcessingTimeService; import org.apache.flink.util.clock.Clock; import org.apache.flink.util.clock.SystemClock; +import org.apache.flink.util.function.ThrowingConsumer; import java.io.IOException; +import java.util.Collection; import static org.apache.flink.util.Preconditions.checkNotNull; /** * Retries the committables of a {@link CommitterHandler} until all committables are eventually * committed. */ -class CommitRetrier { +class CommitRetrier { +@VisibleForTesting static final int RETRY_DELAY = 1000; private final ProcessingTimeService processingTimeService; -private final CommitterHandler committerHandler; +private final CommitterHandler committerHandler; +private final ThrowingConsumer, IOException> committableConsumer; private final Clock clock; -@VisibleForTesting static final int RETRY_DELAY = 1000; public CommitRetrier( -ProcessingTimeService processingTimeService, CommitterHandler committerHandler) { -this(processingTimeService, committerHandler, SystemClock.getInstance()); +ProcessingTimeService processingTimeService, +CommitterHandler committerHandler, +ThrowingConsumer, IOException> committableConsumer) { Review comment: I used a lambda here as a callback for timer-based retry. Not sure how this can be solved differently. I certainly would try to avoid having the retrier in the emitCommittables - that can probably lead to nasty stack exceptions.
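The retry pattern being discussed — a commit function reports which committables failed, and those are fed back in until nothing remains — can be sketched synchronously. Flink's actual `CommitRetrier` schedules each retry on a processing-time timer rather than looping inline; the names below are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustrative sketch: commitFn returns the committables that failed and
// should be retried; the loop stops when everything committed or the
// retry budget runs out.
class RetrierSketch<C> {
    private final Function<List<C>, List<C>> commitFn;

    RetrierSketch(Function<List<C>, List<C>> commitFn) {
        this.commitFn = commitFn;
    }

    // True if all committables were committed within maxAttempts.
    boolean retryUntilDone(List<C> pending, int maxAttempts) {
        List<C> remaining = new ArrayList<>(pending);
        for (int attempt = 0; attempt < maxAttempts && !remaining.isEmpty(); attempt++) {
            remaining = new ArrayList<>(commitFn.apply(remaining));
        }
        return remaining.isEmpty();
    }
}
```

Using a timer instead of this inline loop is what motivates the lambda callback in the review comment: the retrier fires later and must hand successful committables to a downstream consumer, not return them on the call stack.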
[GitHub] [flink] Airblader commented on a change in pull request #17508: [FLINK-24351][docs] Translate "JSON Function" pages into Chinese
Airblader commented on a change in pull request #17508: URL: https://github.com/apache/flink/pull/17508#discussion_r735868561 ## File path: docs/data/sql_functions_zh.yml ## @@ -707,11 +707,9 @@ json: - sql: IS JSON [ { VALUE | SCALAR | ARRAY | OBJECT } ] table: STRING.isJson([JsonType type]) description: | - Determine whether a given string is valid JSON. + 判定给定字符串是否为有效的 JSON。 - Specifying the optional type argument puts a constraint on which type of JSON object is - allowed. If the string is valid JSON, but not that type, `false` is returned. The default is - `VALUE`. + 指定一个可选类型参数将会限制允许哪种类型的 JSON 对象。如果字符串是有效的 JSON,但不是指定的类型,则返回 'false'。默认值为 'VALUE'。 Review comment: Just saw this. Is anything unclear here? I can't tell if you mean to say that something is wrong with the code or the (English) docs. From what I can see everything seems to be correct?
[jira] [Commented] (FLINK-24384) Count checkpoints failed in trigger phase into numberOfFailedCheckpoints
[ https://issues.apache.org/jira/browse/FLINK-24384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17433926#comment-17433926 ] Feifan Wang commented on FLINK-24384: - Thanks for the reply [~pnowojski], I am glad to work on it; can you assign this ticket to me? I will consider how to implement this before coding, and describe it in follow-up comments. > Count checkpoints failed in trigger phase into numberOfFailedCheckpoints > > > Key: FLINK-24384 > URL: https://issues.apache.org/jira/browse/FLINK-24384 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing, Runtime / Metrics >Affects Versions: 1.14.1 >Reporter: Feifan Wang >Priority: Major > > h1. *Problem* > In the current implementation, checkpoints that fail in the trigger phase do not count > into the metric 'numberOfFailedCheckpoints', so users cannot tell from this > metric that checkpointing has stopped. > Users can fall back to alerting rules like _*'numberOfCompletedCheckpoints' has not > increased in the past few minutes*_ (roughly checkpoint interval + timeout), but that > is roundabout and cannot alert in a timely manner. > > h1. *Proposal* > As the title says, count checkpoints that failed in the trigger phase into > 'numberOfFailedCheckpoints'.
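The proposal can be sketched as a tiny stats tracker where a trigger-phase failure increments the same counter as any other checkpoint failure. These are illustrative names, not Flink's actual CheckpointStatsTracker API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the proposed accounting: routing trigger-phase failures into
// numberOfFailedCheckpoints makes a stalled checkpointing pipeline visible
// through the metric users already alert on.
class CheckpointCountersSketch {
    private final AtomicLong completed = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();

    void onCheckpointCompleted() { completed.incrementAndGet(); }
    void onCheckpointFailed()    { failed.incrementAndGet(); }
    // The proposal: a failure while triggering counts as a failed checkpoint too.
    void onTriggerFailed()       { failed.incrementAndGet(); }

    long numberOfCompletedCheckpoints() { return completed.get(); }
    long numberOfFailedCheckpoints()    { return failed.get(); }
}
```

An alert on `numberOfFailedCheckpoints` then fires immediately on a trigger failure, instead of waiting for `numberOfCompletedCheckpoints` to flatline.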
[GitHub] [flink] AHeise commented on a change in pull request #17536: [FLINK-24530][datastream] GlobalCommitter might not commit all records on drain
AHeise commented on a change in pull request #17536: URL: https://github.com/apache/flink/pull/17536#discussion_r735867252 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/AbstractCommitterHandler.java ## @@ -65,21 +68,55 @@ protected void recoveredCommittables(List recovered) throws IOException return all; } +protected final Collection commitAndReturnSuccess(List committables) +throws IOException, InterruptedException { +Collection failed = commit(committables); +if (failed.isEmpty()) { +return committables; +} +// Assume that (Global)Committer#commit does not create a new instance for failed +// committables. This assumption is documented in the respective JavaDoc. +Set successful = +Collections.newSetFromMap(new IdentityHashMap<>(committables.size())); +successful.addAll(committables); +successful.removeAll(failed); +return successful; +} + +protected final Collection commit(List committables) +throws IOException, InterruptedException { +List failed = commitInternal(committables); +recoveredCommittables(failed); +return failed; +} + +/** + * Commits a list of committables. + * + * @param committables A list of committables that is ready for committing. + * @return A list of committables needed to re-commit. 
+ */ +abstract List commitInternal(List committables) +throws IOException, InterruptedException; @Override public boolean needsRetry() { return !recoveredCommittables.isEmpty(); } @Override -public void retry() throws IOException, InterruptedException { -retry(prependRecoveredCommittables(Collections.emptyList())); +public Collection retry() throws IOException, InterruptedException { +return retry(prependRecoveredCommittables(Collections.emptyList())); } -protected abstract void retry(List recoveredCommittables) -throws IOException, InterruptedException; +protected Collection retry(List recoveredCommittables) Review comment: That's unfortunately not that easy because of the global committers: after this refactor, all committers emit `CommT` and no longer `GlobalCommT`. This is possible because the global committers do not actually emit anything. Now `commitAndReturnSuccess` works on the internal type (`GlobalCommT` in the case of global committers), so the signature conflicts here. We could create mix-in interfaces for non-global and global committers and implement them there; the question is whether that's simpler. We could also re-introduce an emit type to `CommitterHandler`.
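The `commitAndReturnSuccess` logic in the diff relies on an identity-based set difference: because `commit` is assumed (per the JavaDoc contract mentioned in the diff) to return the same object instances it was given for failed committables, an `IdentityHashMap`-backed set can subtract them without depending on `equals`/`hashCode`. A self-contained sketch of that technique, with simplified types and illustrative names:

```java
// Sketch of the identity-based set difference used by commitAndReturnSuccess:
// failed committables are the SAME instances as the inputs, so reference
// equality (IdentityHashMap) distinguishes them even from equal() duplicates.
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

public class CommitDiffSketch {
    static <T> Collection<T> successful(List<T> committables, Collection<T> failed) {
        if (failed.isEmpty()) {
            return committables; // fast path: everything succeeded
        }
        // Identity semantics: two distinct-but-equal committables are not confused.
        Set<T> successful =
                Collections.newSetFromMap(new IdentityHashMap<>(committables.size()));
        successful.addAll(committables);
        successful.removeAll(failed); // removes by ==, not equals()
        return successful;
    }

    public static void main(String[] args) {
        String a = new String("c1");
        String b = new String("c1"); // equals(a), but a different instance
        String c = "c2";
        List<String> all = new ArrayList<>(List.of(a, b, c));

        // Only instance 'a' failed to commit.
        Collection<String> ok = successful(all, List.of(a));
        System.out.println(ok.size());      // b and c succeeded
        System.out.println(ok.contains(a)); // instance a was removed
        System.out.println(ok.contains(b)); // b survives despite equals(a)
    }
}
```

A plain `HashSet` would incorrectly drop `b` here, since it equals the failed instance `a`; the identity set keeps per-instance bookkeeping correct.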
[GitHub] [flink] flinkbot edited a comment on pull request #17562: [FLINK-16206][table-planner] Support JSON_ARRAYAGG
flinkbot edited a comment on pull request #17562: URL: https://github.com/apache/flink/pull/17562#issuecomment-951200622 ## CI report: * 7a90ad8e577b79b3f68fbed12a5824f6b822c129 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25434) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot commented on pull request #17562: [FLINK-16206][table-planner] Support JSON_ARRAYAGG
flinkbot commented on pull request #17562: URL: https://github.com/apache/flink/pull/17562#issuecomment-951200622
[GitHub] [flink] flinkbot edited a comment on pull request #17549: [FLINK-16205][table-planner] Support JSON_OBJECTAGG
flinkbot edited a comment on pull request #17549: URL: https://github.com/apache/flink/pull/17549#issuecomment-949678124 ## CI report: * 707582c5ba35c46d8b9e880a716d257efbf2 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25422) * 5c0c45fbb0cc26c41ac386601d0682245ec5b814 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25432) * 5ea492bfda82c87d79bf7ab6fb999f65e8c872ca UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #17518: [FLINK-24018][build] Remove Scala dependencies from Java APIs
flinkbot edited a comment on pull request #17518: URL: https://github.com/apache/flink/pull/17518#issuecomment-946526433 ## CI report: * 708f2a7720d6f1ed80d60a2103e995e066b40646 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25312) * d5a8ba676046f94bcdb8607231becd051ee4c7db Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25433)