[jira] [Commented] (FLINK-23271) RuntimeException: while resolving method 'booleanValue' in class class java.math.BigDecimal
[ https://issues.apache.org/jira/browse/FLINK-23271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425946#comment-17425946 ] xuyangzhong commented on FLINK-23271: - Hi, [~matriv]. The result of the discussion in Calcite is the following: 1. Casting decimal (and float, real, double) into boolean will be forbidden. 2. Casting integer types (tinyint, smallint, int, bigint) into boolean is allowed. So I think we had better follow Calcite's behavior, since Calcite likely reflects the behavior of most databases. What's your opinion? > RuntimeException: while resolving method 'booleanValue' in class class > java.math.BigDecimal > --- > > Key: FLINK-23271 > URL: https://issues.apache.org/jira/browse/FLINK-23271 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Affects Versions: 1.14.0 >Reporter: xiaojin.wy >Priority: Major > > *Sql :* > CREATE TABLE database3_t0( > c0 DECIMAL , c1 SMALLINT > ) WITH ( > 'connector' = 'filesystem', > 'path' = 'hdfs:///tmp/database3_t0.csv', > 'format' = 'csv' > ); > INSERT OVERWRITE database8_t0(c0, c1) VALUES(2113554022, cast(-22975 as > SMALLINT)), (1570419395, cast(-26858 as SMALLINT)), (-1569861129, cast(-20143 > as SMALLINT)); > SELECT database8_t0.c0 AS ref0 FROM database8_t0 WHERE CAST > (0.10915913549909961 AS BOOLEAN); > *After executing the SQL, you will see the error:* > java.lang.RuntimeException: while resolving method 'booleanValue' in class > class java.math.BigDecimal > at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:424) > at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:435) > at > org.apache.calcite.linq4j.tree.Expressions.unbox(Expressions.java:1453) > at > org.apache.calcite.adapter.enumerable.EnumUtils.convert(EnumUtils.java:398) > at > org.apache.calcite.adapter.enumerable.EnumUtils.convert(EnumUtils.java:326) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateCast(RexToLixTranslator.java:538) > at > 
org.apache.calcite.adapter.enumerable.RexImpTable$CastImplementor.implementSafe(RexImpTable.java:2450) > at > org.apache.calcite.adapter.enumerable.RexImpTable$AbstractRexCallImplementor.genValueStatement(RexImpTable.java:2894) > at > org.apache.calcite.adapter.enumerable.RexImpTable$AbstractRexCallImplementor.implement(RexImpTable.java:2859) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitCall(RexToLixTranslator.java:1084) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitCall(RexToLixTranslator.java:90) > at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitLocalRef(RexToLixTranslator.java:970) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitLocalRef(RexToLixTranslator.java:90) > at org.apache.calcite.rex.RexLocalRef.accept(RexLocalRef.java:75) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:237) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:231) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateList(RexToLixTranslator.java:818) > at > org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateProjects(RexToLixTranslator.java:198) > at > org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:90) > at > org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:66) > at > org.apache.calcite.rex.RexExecutorImpl.reduce(RexExecutorImpl.java:128) > at > org.apache.calcite.rex.RexSimplify.simplifyCast(RexSimplify.java:2101) > at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:326) > at > org.apache.calcite.rex.RexSimplify.simplifyUnknownAs(RexSimplify.java:287) > at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:262) > at > org.apache.flink.table.planner.plan.utils.FlinkRexUtil$.simplify(FlinkRexUtil.scala:224) > at > 
org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.simplify(SimplifyFilterConditionRule.scala:63) > at > org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.onMatch(SimplifyFilterConditionRule.scala:46) > at > org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333) > at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) > at > org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) > at >
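The rule agreed in the Calcite discussion above can be sketched as a simple type check. This is an illustrative Python sketch only, not actual Calcite or Flink planner code; the function name and type sets are hypothetical stand-ins for the validation the planner would perform:

```python
# Sketch of the cast-validation outcome described above: numeric-to-BOOLEAN
# casts are rejected for decimal/approximate types but allowed for integers.

INTEGER_TYPES = {"TINYINT", "SMALLINT", "INT", "BIGINT"}
FORBIDDEN_TO_BOOLEAN = {"DECIMAL", "FLOAT", "REAL", "DOUBLE"}

def cast_to_boolean_allowed(source_type: str) -> bool:
    """Return True if CAST(<source_type> AS BOOLEAN) should be accepted."""
    t = source_type.upper()
    if t in FORBIDDEN_TO_BOOLEAN:
        # e.g. CAST(0.10915913549909961 AS BOOLEAN) would become a planner
        # error instead of the runtime 'booleanValue' resolution failure.
        return False
    if t in INTEGER_TYPES:
        # Typical semantics: non-zero -> TRUE, zero -> FALSE.
        return True
    return t == "BOOLEAN"
```

Under this rule, the failing query in the bug report would be rejected at validation time rather than crashing inside code generation.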
[GitHub] [flink-ml] zhipeng93 closed pull request #15: [FLINK-24279] Support withBroadcast in DataStream by caching in static variables
zhipeng93 closed pull request #15: URL: https://github.com/apache/flink-ml/pull/15 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-ml] zhipeng93 edited a comment on pull request #15: [FLINK-24279] Support withBroadcast in DataStream by caching in static variables
zhipeng93 edited a comment on pull request #15: URL: https://github.com/apache/flink-ml/pull/15#issuecomment-938323217 I have rebased this PR on [Flink-10, iteration](https://github.com/apache/flink-ml/pull/17) and the new PR is [here](https://github.com/apache/flink-ml/pull/18). So closing this PR now.
[GitHub] [flink-ml] zhipeng93 commented on pull request #15: [FLINK-24279] Support withBroadcast in DataStream by caching in static variables
zhipeng93 commented on pull request #15: URL: https://github.com/apache/flink-ml/pull/15#issuecomment-938323217 I have rebased this PR on [Flink-10][iteration](https://github.com/apache/flink-ml/pull/17) and the new PR is [here](https://github.com/apache/flink-ml/pull/18)
[GitHub] [flink-ml] zhipeng93 opened a new pull request #18: [FLINK-24279] Support withBroadcast in DataStream by caching in static variables
zhipeng93 opened a new pull request #18: URL: https://github.com/apache/flink-ml/pull/18 This PR supports withBroadcast() function in DataStream by caching the broadcastInputs in static variables. Note that this PR is rebased on [[FLINK-10][iteration]](https://github.com/apache/flink-ml/pull/17).
[jira] [Commented] (FLINK-15352) develop MySQLCatalog to connect Flink with MySQL tables and ecosystem
[ https://issues.apache.org/jira/browse/FLINK-15352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425936#comment-17425936 ] Roc Marshal commented on FLINK-15352: - Could someone help merge it if there is nothing inappropriate? Thank you for your help. > develop MySQLCatalog to connect Flink with MySQL tables and ecosystem > -- > > Key: FLINK-15352 > URL: https://issues.apache.org/jira/browse/FLINK-15352 > Project: Flink > Issue Type: New Feature > Components: Connectors / JDBC, Table SQL / Ecosystem >Reporter: Bowen Li >Assignee: Roc Marshal >Priority: Minor > Labels: pull-request-available > Attachments: research-results.tar.gz > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-10230) Support 'SHOW CREATE VIEW' syntax to print the query of a view
[ https://issues.apache.org/jira/browse/FLINK-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425935#comment-17425935 ] Roc Marshal commented on FLINK-10230: - Could someone help merge it if there is nothing inappropriate? Thank you. > Support 'SHOW CREATE VIEW' syntax to print the query of a view > -- > > Key: FLINK-10230 > URL: https://issues.apache.org/jira/browse/FLINK-10230 > Project: Flink > Issue Type: New Feature > Components: Table SQL / API, Table SQL / Client >Reporter: Timo Walther >Assignee: Roc Marshal >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > Fix For: 1.15.0 > > > FLINK-10163 added initial support for views in SQL Client. We should add a > command that allows for printing the query of a view for debugging. MySQL > offers {{SHOW CREATE VIEW}} for this. Hive generalizes this to {{SHOW CREATE > TABLE}}. The latter one could be extended to also show information about the > used table factories and properties.
[GitHub] [flink] xintongsong commented on a change in pull request #16821: [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options
xintongsong commented on a change in pull request #16821: URL: https://github.com/apache/flink/pull/16821#discussion_r724660924 ## File path: flink-core/src/main/java/org/apache/flink/configuration/CoreOptions.java ## @@ -127,9 +149,13 @@ + "thrown while trying to load a user code class."); public static String[] getParentFirstLoaderPatterns(Configuration config) { -String base = config.getString(ALWAYS_PARENT_FIRST_LOADER_PATTERNS); -String append = config.getString(ALWAYS_PARENT_FIRST_LOADER_PATTERNS_ADDITIONAL); -return parseParentFirstLoaderPatterns(base, append); +List base = +ConfigUtils.decodeListFromConfig( +config, ALWAYS_PARENT_FIRST_LOADER_PATTERNS, String::new); +List append = +ConfigUtils.decodeListFromConfig( +config, ALWAYS_PARENT_FIRST_LOADER_PATTERNS_ADDITIONAL, String::new); Review comment: ```suggestion List base = config.get(ALWAYS_PARENT_FIRST_LOADER_PATTERNS); List append = config.get(ALWAYS_PARENT_FIRST_LOADER_PATTERNS_ADDITIONAL); ``` ## File path: flink-core/src/main/java/org/apache/flink/configuration/CoreOptions.java ## @@ -94,24 +101,39 @@ * */ @Documentation.Section(Documentation.Sections.EXPERT_CLASS_LOADING) -public static final ConfigOption ALWAYS_PARENT_FIRST_LOADER_PATTERNS = +public static final ConfigOption> ALWAYS_PARENT_FIRST_LOADER_PATTERNS = ConfigOptions.key("classloader.parent-first-patterns.default") -.defaultValue( - "java.;scala.;org.apache.flink.;com.esotericsoftware.kryo;org.apache.hadoop.;javax.annotation.;org.xml;javax.xml;org.apache.xerces;org.w3c;" -+ PARENT_FIRST_LOGGING_PATTERNS) +.stringType() +.asList() +.defaultValues( +mergeListsToArray( +Arrays.asList( +"java.", +"scala.", +"org.apache.flink.", +"com.esotericsoftware.kryo", +"org.apache.hadoop.", +"javax.annotation.", +"org.xml", +"javax.xml", +"org.apache.xerces", +"org.w3c"), +PARENT_FIRST_LOGGING_PATTERNS)) .withDeprecatedKeys("classloader.parent-first-patterns") .withDescription( -"A (semicolon-separated) list of patterns that specifies which classes 
should always be" +"A list of patterns that specifies which classes should always be" Review comment: No need to remove `(semicolon-separated)`. It's a good hint for users on how to format a list type config option. ## File path: flink-core/src/main/java/org/apache/flink/configuration/CoreOptions.java ## @@ -36,8 +38,13 @@ public class CoreOptions { @Internal -public static final String PARENT_FIRST_LOGGING_PATTERNS = - "org.slf4j;org.apache.log4j;org.apache.logging;org.apache.commons.logging;ch.qos.logback"; +public static final List PARENT_FIRST_LOGGING_PATTERNS = +Arrays.asList( +"org.slf4j", +"org.apache.log4j", +"org.apache.logging", +"org.apache.commons.logging", +"ch.qos.logback"); Review comment: You can save a lot of conversions between list and array by making this a `String[]`. Because `defaultValues()` only takes arrays. The only thing you need would be `ArrayUtils.concat()` for merging two arrays. `mergeListsToArray` will not be necessary. ## File path: flink-core/src/main/java/org/apache/flink/configuration/CoreOptions.java ## @@ -151,43 +177,50 @@ @Documentation.ExcludeFromDocumentation( "Plugin classloader list is considered an implementation detail. " + "Configuration only included in case to mitigate unintended side-effects of this young feature.") -public static final ConfigOption PLUGIN_ALWAYS_PARENT_FIRST_LOADER_PATTERNS = +public static final ConfigOption> PLUGIN_ALWAYS_PARENT_FIRST_LOADER_PATTERNS = ConfigOptions.key("plugin.classloader.parent-first-patterns.default") .stringType() -.defaultValue( -"java.;org.apache.flink.;javax.annotation.;" -
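The review above concerns migrating a semicolon-separated `String` option (e.g. `classloader.parent-first-patterns.default`) to a typed `List<String>` option. The legacy parsing being replaced can be sketched as follows. This is an illustrative Python sketch, not the actual Flink `CoreOptions`/`ConfigUtils` API; the function name is hypothetical:

```python
def parse_parent_first_patterns(base: str, append: str) -> list:
    """Split semicolon-separated class-name prefix patterns, merging the
    default list with user-supplied additions (empty entries dropped)."""
    patterns = []
    for raw in (base, append):
        patterns.extend(p for p in raw.split(";") if p)
    return patterns

# A few of the default logging patterns plus one hypothetical user addition:
merged = parse_parent_first_patterns(
    "org.slf4j;org.apache.log4j;org.apache.logging",
    "com.example.shaded.",
)
```

With a list-typed `ConfigOption`, this split/merge logic moves into the configuration framework itself, which is why the reviewer suggests `config.get(...)` directly and why keeping "(semicolon-separated)" in the description remains a useful formatting hint for users.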
[jira] [Assigned] (FLINK-24334) Configuration kubernetes.flink.log.dir not working
[ https://issues.apache.org/jira/browse/FLINK-24334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang reassigned FLINK-24334: - Assignee: ouyangwulin > Configuration kubernetes.flink.log.dir not working > -- > > Key: FLINK-24334 > URL: https://issues.apache.org/jira/browse/FLINK-24334 > Project: Flink > Issue Type: Bug > Components: Deployment / Kubernetes >Affects Versions: 1.14.0, 1.13.2 >Reporter: Yang Wang >Assignee: ouyangwulin >Priority: Major > > After FLINK-21128, {{kubernetes.flink.log.dir}} is useless and could be > replaced with {{env.log.dir}}.
[jira] [Commented] (FLINK-24474) Standalone clusters should bind to localhost by default
[ https://issues.apache.org/jira/browse/FLINK-24474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425931#comment-17425931 ] Yang Wang commented on FLINK-24474: --- What will happen if the standalone cluster is started across multiple machines? > Standalone clusters should bind to localhost by default > --- > > Key: FLINK-24474 > URL: https://issues.apache.org/jira/browse/FLINK-24474 > Project: Flink > Issue Type: Improvement > Components: Runtime / Configuration >Reporter: Chesnay Schepler >Assignee: Chesnay Schepler >Priority: Major > Fix For: 1.15.0 > > > By default the REST endpoints bind to 0.0.0.0. > This is fine for docker use-cases as it simplifies the setup and the API > isn't reachable unless the user explicitly enables that via docker. > However, for standalone clusters this is a different story, and it is > currently too easy for users to accidentally expose their clusters to the > outside world. > We should set the bind address by default to localhost, and change the > docker-scripts to set this to 0.0.0.0 .
[jira] [Commented] (FLINK-24334) Configuration kubernetes.flink.log.dir not working
[ https://issues.apache.org/jira/browse/FLINK-24334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425930#comment-17425930 ] Yang Wang commented on FLINK-24334: --- [~ouyangwuli] I think your suggestion makes sense and I have assigned this ticket to you. Thanks for working on this ticket. > Configuration kubernetes.flink.log.dir not working > -- > > Key: FLINK-24334 > URL: https://issues.apache.org/jira/browse/FLINK-24334 > Project: Flink > Issue Type: Bug > Components: Deployment / Kubernetes >Affects Versions: 1.14.0, 1.13.2 >Reporter: Yang Wang >Priority: Major > > After FLINK-21128, {{kubernetes.flink.log.dir}} is useless and could be > replaced with {{env.log.dir}}.
[GitHub] [flink] zhuzhurk commented on pull request #15229: [FLINK-19142][runtime] Fix slot hijacking after task failover
zhuzhurk commented on pull request #15229: URL: https://github.com/apache/flink/pull/15229#issuecomment-938299821 @tillrohrmann I have an idea already and will do it this week.
[jira] [Assigned] (FLINK-24390) Python 'build_wheels mac' fails on azure
[ https://issues.apache.org/jira/browse/FLINK-24390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu reassigned FLINK-24390: --- Assignee: Dian Fu > Python 'build_wheels mac' fails on azure > > > Key: FLINK-24390 > URL: https://issues.apache.org/jira/browse/FLINK-24390 > Project: Flink > Issue Type: Bug > Components: API / Python, Build System >Affects Versions: 1.12.5 >Reporter: Xintong Song >Assignee: Dian Fu >Priority: Blocker > Labels: pull-request-available > Fix For: 1.12.6 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24547=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=789348ee-cf3e-5c4b-7c78-355970e5f360=17982
[GitHub] [flink] jjiey commented on pull request #15902: [FLINK-19028][docs] Translate the "application_parameters.md" into Chinese
jjiey commented on pull request #15902: URL: https://github.com/apache/flink/pull/15902#issuecomment-938295045 Already translated and merged on 3 Sep; see [pr 16949](https://github.com/apache/flink/pull/16949).
[GitHub] [flink] jjiey closed pull request #15902: [FLINK-19028][docs] Translate the "application_parameters.md" into Chinese
jjiey closed pull request #15902: URL: https://github.com/apache/flink/pull/15902
[GitHub] [flink] flinkbot commented on pull request #17433: [FLINK-24390][python] Limit the grpcio version <= 1.40.0 for Python 3.5
flinkbot commented on pull request #17433: URL: https://github.com/apache/flink/pull/17433#issuecomment-938293481 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit d1dee9c4830043dd463ed6accbb80c1c75e2c624 (Fri Oct 08 02:41:23 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-24390).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required. Bot commands: The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-24390) Python 'build_wheels mac' fails on azure
[ https://issues.apache.org/jira/browse/FLINK-24390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-24390: --- Labels: pull-request-available (was: ) > Python 'build_wheels mac' fails on azure > > > Key: FLINK-24390 > URL: https://issues.apache.org/jira/browse/FLINK-24390 > Project: Flink > Issue Type: Bug > Components: API / Python, Build System >Affects Versions: 1.12.5 >Reporter: Xintong Song >Priority: Blocker > Labels: pull-request-available > Fix For: 1.12.6 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24547=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=789348ee-cf3e-5c4b-7c78-355970e5f360=17982
[GitHub] [flink] dianfu opened a new pull request #17433: [FLINK-24390][python] Limit the grpcio version <= 1.40.0 for Python 3.5
dianfu opened a new pull request #17433: URL: https://github.com/apache/flink/pull/17433 ## What is the purpose of the change *This pull request limits the grpcio version to <= 1.40.0 for Python 3.5, as the latest grpcio (1.41.0, released on Sep 28) has dropped support for Python 3.5* ## Verifying this change This change is a code cleanup without any test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable)
[jira] [Commented] (FLINK-24390) Python 'build_wheels mac' fails on azure
[ https://issues.apache.org/jira/browse/FLINK-24390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425921#comment-17425921 ] Dian Fu commented on FLINK-24390: - It is likely caused by the latest grpcio release (1.41.0, released on Sep 28) having dropped support for Python 3.5. We could simply limit the grpcio version. > Python 'build_wheels mac' fails on azure > > > Key: FLINK-24390 > URL: https://issues.apache.org/jira/browse/FLINK-24390 > Project: Flink > Issue Type: Bug > Components: API / Python, Build System >Affects Versions: 1.12.5 >Reporter: Xintong Song >Priority: Blocker > Fix For: 1.12.6 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24547=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=789348ee-cf3e-5c4b-7c78-355970e5f360=17982
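The fix proposed above, pinning grpcio only for old interpreters, can be sketched as a conditional requirement. This is an illustrative Python sketch; the actual constraint mechanism used in the Flink PR (e.g. a setup.py marker) is not shown here, and the function name is hypothetical:

```python
import sys

def grpcio_requirement(python_version=sys.version_info):
    """Pick a grpcio requirement string: grpcio 1.41.0 dropped Python 3.5
    support, so interpreters at or below 3.5 must stay on <= 1.40.0."""
    if tuple(python_version[:2]) <= (3, 5):
        return "grpcio<=1.40.0"  # last release line still supporting 3.5
    return "grpcio"              # newer interpreters can take the latest
```

The equivalent declarative form would be a PEP 508 environment marker such as `grpcio<=1.40.0; python_version<='3.5'` in the requirements list.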
[jira] [Closed] (FLINK-24442) Flink Queries Docs markup does not show escape ticks
[ https://issues.apache.org/jira/browse/FLINK-24442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-24442. --- Fix Version/s: 1.15.0 Assignee: Mans Singh Resolution: Fixed Fixed in - master: 21582b1a92d2f20c6e433db644f56ee79259aae6 - 1.14: 3fe4e6eb0a27c0d3d6056adac639df8910c709b9 > Flink Queries Docs markup does not show escape ticks > > > Key: FLINK-24442 > URL: https://issues.apache.org/jira/browse/FLINK-24442 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API >Affects Versions: 1.14.0 >Reporter: Mans Singh >Assignee: Mans Singh >Priority: Minor > Labels: pull-request-available > Fix For: 1.15.0, 1.14.0 > > Attachments: Screen Shot 2021-10-03 at 7.01.41 PM.png > > Original Estimate: 0.25h > Remaining Estimate: 0.25h > > The [table query overview > |https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/overview/#syntax]mentions: > {quote} * Unlike Java, back-ticks allow identifiers to contain > non-alphanumeric characters (e.g. {{“SELECT a AS }}{{my field}}{{ FROM > t”}}).{quote} > The "my field" identifier appears without escape back ticks as shown in > screenshot below: > > !Screen Shot 2021-10-03 at 7.01.41 PM.png! >
[GitHub] [flink] wuchong merged pull request #17408: [FLINK-24442][table][docs] - Corrected markup to show back ticks
wuchong merged pull request #17408: URL: https://github.com/apache/flink/pull/17408
[jira] [Assigned] (FLINK-24351) translate "JSON Function" pages into Chinese
[ https://issues.apache.org/jira/browse/FLINK-24351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-24351: --- Assignee: ZhuoYu Chen > translate "JSON Function" pages into Chinese > - > > Key: FLINK-24351 > URL: https://issues.apache.org/jira/browse/FLINK-24351 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: liwei li >Assignee: ZhuoYu Chen >Priority: Major > > translate "JSON Function" pages into Chinese, > docs/data/sql_functions_zh.yml > > https://github.com/apache/flink/pull/17275#issuecomment-924536467
[GitHub] [flink] xintongsong commented on pull request #15599: [FLINK-11838][flink-gs-fs-hadoop] Create Google Storage file system with recoverable writer support
xintongsong commented on pull request #15599: URL: https://github.com/apache/flink/pull/15599#issuecomment-938283249 @galenwarren Great to hear from you. It would be nice to see this in the next release.
[jira] [Commented] (FLINK-24351) translate "JSON Function" pages into Chinese
[ https://issues.apache.org/jira/browse/FLINK-24351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425908#comment-17425908 ] liwei li commented on FLINK-24351: -- [~monster#12], Sorry, I don't have permission. Maybe we can ask [~jark] for help? > translate "JSON Function" pages into Chinese > - > > Key: FLINK-24351 > URL: https://issues.apache.org/jira/browse/FLINK-24351 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: liwei li >Priority: Major > > translate "JSON Function" pages into Chinese, > docs/data/sql_functions_zh.yml > > https://github.com/apache/flink/pull/17275#issuecomment-924536467
[jira] [Comment Edited] (FLINK-24408) org.codehaus.janino.InternalCompilerException: Compiling "StreamExecValues$200": Code of method "nextRecord(Ljava/lang/Object;)Ljava/lang/Object;" of class "Strea
[ https://issues.apache.org/jira/browse/FLINK-24408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425894#comment-17425894 ] shengkui leng edited comment on FLINK-24408 at 10/8/21, 1:08 AM: - The SQL can be found in the attachment: [^sql.txt] I built it with this code: [^HudiBenchS3.java] was (Author: shengkui): [^sql.txt] I built it with this code: [^HudiBenchS3.java] > org.codehaus.janino.InternalCompilerException: Compiling > "StreamExecValues$200": Code of method > "nextRecord(Ljava/lang/Object;)Ljava/lang/Object;" of class > "StreamExecValues$200" grows beyond 64 KB > - > > Key: FLINK-24408 > URL: https://issues.apache.org/jira/browse/FLINK-24408 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.0 > Environment: * Flink v1.14.0 > * Hudi trunk(v0.10.0 snapshot) > * Hadoop v2.9.2 >Reporter: shengkui leng >Priority: Major > Attachments: HudiBenchS3.java, sql.txt > > > I built a large SQL in the application and hit the issue "Code of method > grows beyond 64 KB". This bug should have been fixed by FLINK-22903. 
> {quote}
> java.lang.RuntimeException: Could not instantiate generated class 'StreamExecValues$200'
>     at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:75) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.table.runtime.operators.values.ValuesInputFormat.open(ValuesInputFormat.java:60) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.table.runtime.operators.values.ValuesInputFormat.open(ValuesInputFormat.java:35) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:84) [flink-dist_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:116) [flink-dist_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:73) [flink-dist_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:323) [flink-dist_2.11-1.14.0.jar:1.14.0]
> Caused by: org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
>     at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:76) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:102) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:69) [flink-table_2.11-1.14.0.jar:1.14.0]
>     ... 6 more
> Caused by: org.apache.flink.shaded.guava30.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
>     at org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2051) [flink-dist_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache.get(LocalCache.java:3962) [flink-dist_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4859) [flink-dist_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:74) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:102) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:69) [flink-table_2.11-1.14.0.jar:1.14.0]
>     ... 6 more
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
>     at org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:89) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$1(CompileUtils.java:74) [flink-table_2.11-1.14.0.jar:1.14.0]
>     at org.apache.flink.shaded.guava30.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4864) [flink-dist_2.11-1.14.0.jar:1.14.0]
> {quote}
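For context, the JVM caps every compiled method body at 64 KB of bytecode, which is what the `InternalCompilerException` in the subject line reports for the generated `nextRecord` method. The usual code-generator workaround (the kind of fix referenced by FLINK-22903) is to split the generated statements across several helper methods so no single method exceeds the limit. A toy, non-Flink sketch of that chunking idea:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the standard workaround for the JVM's 64 KB-per-method
// bytecode limit: instead of emitting all generated statements into one
// method, a code generator splits them into fixed-size chunks, each
// emitted as its own helper method. Illustrative only; this is not
// Flink's actual code generator.
public class ChunkSketch {

    static final int STATEMENTS_PER_METHOD = 100; // chunk size is illustrative

    // Returns the statement indices grouped per generated helper method.
    static List<List<Integer>> splitIntoMethods(int statementCount) {
        List<List<Integer>> methods = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        for (int i = 0; i < statementCount; i++) {
            if (current.size() == STATEMENTS_PER_METHOD) {
                methods.add(current);
                current = new ArrayList<>();
            }
            current.add(i);
        }
        if (!current.isEmpty()) {
            methods.add(current);
        }
        return methods;
    }
}
```

With 250 generated statements this yields three helper methods of 100, 100, and 50 statements, each safely under the bytecode limit.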
[jira] [Commented] (FLINK-24408) org.codehaus.janino.InternalCompilerException: Compiling "StreamExecValues$200": Code of method "nextRecord(Ljava/lang/Object;)Ljava/lang/Object;" of class "StreamExec
[ https://issues.apache.org/jira/browse/FLINK-24408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425894#comment-17425894 ] shengkui leng commented on FLINK-24408:
---
[^sql.txt] I built it with this code: [^HudiBenchS3.java]
[jira] [Updated] (FLINK-24408) org.codehaus.janino.InternalCompilerException: Compiling "StreamExecValues$200": Code of method "nextRecord(Ljava/lang/Object;)Ljava/lang/Object;" of class "StreamExecVa
[ https://issues.apache.org/jira/browse/FLINK-24408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] shengkui leng updated FLINK-24408:
--
Attachment: HudiBenchS3.java
[jira] [Updated] (FLINK-24408) org.codehaus.janino.InternalCompilerException: Compiling "StreamExecValues$200": Code of method "nextRecord(Ljava/lang/Object;)Ljava/lang/Object;" of class "StreamExecVa
[ https://issues.apache.org/jira/browse/FLINK-24408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] shengkui leng updated FLINK-24408:
--
Attachment: sql.txt
[jira] [Created] (FLINK-24478) gradle quickstart is out-of-date
David Anderson created FLINK-24478:
--
Summary: gradle quickstart is out-of-date
Key: FLINK-24478
URL: https://issues.apache.org/jira/browse/FLINK-24478
Project: Flink
Issue Type: Improvement
Affects Versions: 1.14.0
Reporter: David Anderson
Assignee: Nico Kruber

The gradle quickstart, as described in the docs, and produced by

{{bash -c "$(curl https://flink.apache.org/q/gradle-quickstart.sh)" -- 1.14.0 _2.11}}

is out of date, and it has some obvious errors. E.g., it defines scalaBinaryVersion as '_2.11', and then has

{{flinkShadowJar "org.apache.flink:flink-connector-kafka-0.11_${scalaBinaryVersion}:${flinkVersion}"}}

which is both ancient and includes the _ again. (I realize now that the extra _ actually comes from the bash command I copied from the docs, so the docs need to be fixed as well.)

The quickstart also doesn't produce a gradlew script, and if I try {{gradle build}} I get this output:

{{$ gradle build
Starting a Gradle Daemon (subsequent builds will be faster)

FAILURE: Build failed with an exception.

* Where:
Build file '/Users/david/stuff/quickstart/build.gradle' line: 41

* What went wrong:
A problem occurred evaluating root project 'quickstart'.
> Cannot add task 'wrapper' as a task with that name already exists}}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
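The "Cannot add task 'wrapper'" failure is the classic symptom of a build script declaring its own `wrapper` task on a modern Gradle, which already provides one built in. A hypothetical sketch of the fix for the clashing declaration at line 41 (this is not the actual quickstart patch, and the version number is illustrative):

```groovy
// build.gradle -- configure the built-in 'wrapper' task instead of
// declaring a new task with that name (which fails on Gradle 5+).
wrapper {
    gradleVersion = '7.2'  // illustrative; pick the version the project targets
    distributionType = Wrapper.DistributionType.ALL
}
```

Configuring the existing task also makes `gradle wrapper` generate the missing `gradlew` script the reporter mentions.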
[GitHub] [flink] mans2singh commented on pull request #17408: [FLINK-24442][table][docs] - Corrected markup to show back ticks
mans2singh commented on pull request #17408: URL: https://github.com/apache/flink/pull/17408#issuecomment-938235770

@leonardBang @wuchong - Can you please review this PR? Thanks

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zentol merged pull request #17427: [FLINK-24472][tests] Wait for cancellation future to complete
zentol merged pull request #17427: URL: https://github.com/apache/flink/pull/17427
[GitHub] [flink] hililiwei commented on a change in pull request #17106: [FLINK-20443][API/DataStream] ContinuousProcessingTimeTrigger doesn't fire at the end of the window
hililiwei commented on a change in pull request #17106: URL: https://github.com/apache/flink/pull/17106#discussion_r723857343

## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/ContinuousProcessingTimeTrigger.java

## @@ -74,6 +74,11 @@ public TriggerResult onEventTime(long time, W window, TriggerContext ctx) throws @Override public TriggerResult onProcessingTime(long time, W window, TriggerContext ctx) throws Exception { + +if (time == window.maxTimestamp()) {

Review comment: I modified the relevant code for computing the next fire time according to the discussion; please take a look and check whether it is correct.
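The change under review makes the trigger also fire exactly at the window's end, not just at each processing-time interval. A self-contained sketch of the underlying capping logic, with illustrative names (this is not Flink's actual trigger API):

```java
// Sketch of the next-fire-time computation discussed in the PR: the next
// processing-time timer is the next interval boundary, but it is capped at
// the window's max timestamp so the final firing at the end of the window
// is never skipped. Names are illustrative, not Flink's API.
public class TriggerSketch {

    // interval: the continuous firing interval configured on the trigger
    static long nextFireTime(long currentTime, long interval, long windowMaxTimestamp) {
        // next interval boundary strictly after currentTime
        long next = currentTime - (currentTime % interval) + interval;
        // cap at the window end so the trigger also fires at maxTimestamp
        return Math.min(next, windowMaxTimestamp);
    }
}
```

When the registered timer then fires with `time == windowMaxTimestamp`, the trigger can treat it as the final firing for that window, matching the `if (time == window.maxTimestamp())` check in the diff.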
[GitHub] [flink] AHeise merged pull request #17358: [hotfix] Fix typo,'Kakfa' corrected to 'Kafka'
AHeise merged pull request #17358: URL: https://github.com/apache/flink/pull/17358
[GitHub] [flink] AHeise merged pull request #17421: [FLINK-24405][tests] Introduce util to reliably drain all messages from a kafka topic
AHeise merged pull request #17421: URL: https://github.com/apache/flink/pull/17421
[GitHub] [flink] flinkbot edited a comment on pull request #17422: [FLINK-9432][table] Enable EXTRACT for DECADE, ISODOW, ISOYEAR
flinkbot edited a comment on pull request #17422: URL: https://github.com/apache/flink/pull/17422#issuecomment-937776577

## CI report:

* cb2bcd6ec000f84f73bfa38fb5c00ec33b9030de Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24822)

Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
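For reference, the three EXTRACT units that PR #17422 enables have standard ISO/PostgreSQL-style definitions, sketched here with `java.time` (this is not Flink's implementation, only the commonly used semantics):

```java
import java.time.LocalDate;
import java.time.temporal.IsoFields;

// Commonly used semantics of the EXTRACT units being enabled, expressed
// with the JDK's java.time API. Illustrative only.
public class ExtractSketch {

    // DECADE: the year divided by 10 (e.g. 2021 -> 202), per the common
    // PostgreSQL-style definition
    static long decade(LocalDate d) {
        return d.getYear() / 10;
    }

    // ISODOW: ISO day of week, Monday = 1 .. Sunday = 7
    static long isodow(LocalDate d) {
        return d.getDayOfWeek().getValue();
    }

    // ISOYEAR: the ISO-8601 week-based year, which can differ from the
    // calendar year near January 1st
    static long isoyear(LocalDate d) {
        return d.get(IsoFields.WEEK_BASED_YEAR);
    }
}
```

For example, 2021-01-01 falls in ISO week 53 of 2020, so its ISOYEAR is 2020 even though its calendar year is 2021; this is exactly why ISOYEAR needs dedicated support rather than reusing YEAR.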
[GitHub] [flink] AHeise commented on pull request #17235: [FLINK-24246][connector/pulsar] Bump pulsar client to latest 2.8.1 release
AHeise commented on pull request #17235: URL: https://github.com/apache/flink/pull/17235#issuecomment-937503443

@flinkbot run azure
[GitHub] [flink] StephanEwen commented on pull request #17425: [hotfix][datastream] Add missing Internal annotations for OperatorCoordinator class
StephanEwen commented on pull request #17425: URL: https://github.com/apache/flink/pull/17425#issuecomment-937676627

+1 to merge
[GitHub] [flink] twalthr closed pull request #17350: [FLINK-24359][table-runtime] Use ResolvedSchema in AbstractFileSystemTable
twalthr closed pull request #17350: URL: https://github.com/apache/flink/pull/17350
[GitHub] [flink] snuyanzin commented on pull request #6614: [FLINK-9432] [table] Support extract timeunit: DECADE, EPOCH, ISODOW, ISOYEAR…
snuyanzin commented on pull request #6614: URL: https://github.com/apache/flink/pull/6614#issuecomment-937270898

I will close this PR and create a new one: on the one hand there are too many merge conflicts, and on the other hand Calcite has been updated to 1.2x since this PR was created.
[GitHub] [flink] tillrohrmann commented on pull request #17262: [FLINK-24273][kubernetes] Relocate io.fabric8 dependency.
tillrohrmann commented on pull request #17262: URL: https://github.com/apache/flink/pull/17262#issuecomment-937919332
[GitHub] [flink-shaded] XComp commented on a change in pull request #100: [FLINK-24447][zk] Bundle netty
XComp commented on a change in pull request #100: URL: https://github.com/apache/flink-shaded/pull/100#discussion_r724055043

## File path: flink-shaded-zookeeper-parent/flink-shaded-zookeeper-35/src/main/resources/META-INF/NOTICE

## @@ -7,6 +7,14 @@ The Apache Software Foundation (http://www.apache.org/). This project bundles the following dependencies under the Apache Software License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt) - com.google.guava:guava:27.0.1-jre

Review comment: It's not really connected to this change, but I realize that the shaded plugin adds `org.apache.zookeeper.zookeeper` `3.5.9` and `3.4.14`, which is because we want to support ZK 3.5 and 3.4, I guess? But the NOTICE file only mentions `3.5.9`. Is that correct, or should we also add `3.4.14`?
[GitHub] [flink] flinkbot commented on pull request #17426: [FLINK-24167][Runtime]Add default HeartbeatReceiver and HeartbeatSend…
flinkbot commented on pull request #17426: URL: https://github.com/apache/flink/pull/17426#issuecomment-937651834

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks

Last check on commit 75c4348dc82fbebe4d064076f63d2f5a9054d1a7 (Thu Oct 07 10:14:03 UTC 2021)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
* **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-24167).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.

The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot edited a comment on pull request #17350: [FLINK-24359][table-runtime] Use ResolvedSchema in AbstractFileSystemTable
flinkbot edited a comment on pull request #17350: URL: https://github.com/apache/flink/pull/17350#issuecomment-926550692

## CI report:

* 0a0cd972b9e8406a06ead5f7e9c8fd161340a625 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24806)

Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink-statefun] sjwiesman commented on pull request #273: [FLINK-24464][core] Metrics sometimes report negative backlog
sjwiesman commented on pull request #273: URL: https://github.com/apache/flink-statefun/pull/273#issuecomment-937327204

I'll fix the header while merging, grazi!
[GitHub] [flink] flinkbot edited a comment on pull request #17427: [FLINK-24472][tests] Wait for cancellation future to complete
flinkbot edited a comment on pull request #17427: URL: https://github.com/apache/flink/pull/17427#issuecomment-937776797

## CI report:

* 1741d035b576d508032d7e963376238ffaeef39b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24823)

Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] Aitozi commented on pull request #17410: [FLINK-24445][build] Copy rpc-akka jar in package phase
Aitozi commented on pull request #17410: URL: https://github.com/apache/flink/pull/17410#issuecomment-937564663

Hi @zentol, I still have some doubts: after changing to the `package` phase, `mvn clean test` will fail when executed in the `flink-rpc-akka-loader` folder, because the fat jar has not been copied yet, and the `AkkaRpcSystemLoaderTest` test relies on it.
[GitHub] [flink] flinkbot commented on pull request #17422: [FLINK-9432] Enable EXTRACT for DECADE, ISODOW, ISOYEAR
flinkbot commented on pull request #17422: URL: https://github.com/apache/flink/pull/17422#issuecomment-937329360
[GitHub] [flink-statefun] igalshilman commented on a change in pull request #273: [FLINK-24464][core] Metrics sometimes report negative backlog
igalshilman commented on a change in pull request #273: URL: https://github.com/apache/flink-statefun/pull/273#discussion_r723750424

## File path: statefun-flink/statefun-flink-core/src/test/java/org/apache/flink/statefun/flink/core/metrics/NonNegativeCounterTest.java

## @@ -0,0 +1,27 @@ +package org.apache.flink.statefun.flink.core.metrics; Review comment: Missing header
[GitHub] [flink] alpreu commented on a change in pull request #17374: [FLINK-24327][connectors/elasticsearch] Add Elasticsearch 7 sink for table API
alpreu commented on a change in pull request #17374: URL: https://github.com/apache/flink/pull/17374#discussion_r724221489

File path: flink-connectors/flink-connector-elasticsearch7/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/Elasticsearch7Configuration.java

@@ -20,20 +20,91 @@
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.elasticsearch.sink.FlushBackoffType;
 import org.apache.flink.table.api.ValidationException;
 import org.apache.http.HttpHost;
+import java.time.Duration;
 import java.util.List;
+import java.util.Optional;
 import java.util.stream.Collectors;
-import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchConnectorOptions.HOSTS_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7ConnectorOptions.BULK_FLUSH_BACKOFF_DELAY_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7ConnectorOptions.BULK_FLUSH_BACKOFF_MAX_RETRIES_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7ConnectorOptions.BULK_FLUSH_BACKOFF_TYPE_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7ConnectorOptions.BULK_FLUSH_INTERVAL_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7ConnectorOptions.DELIVERY_GUARANTEE_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7ConnectorOptions.HOSTS_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7ConnectorOptions.PASSWORD_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7ConnectorOptions.USERNAME_OPTION;
+import static org.apache.flink.util.Preconditions.checkNotNull;

 /** Elasticsearch 7 specific configuration. */
 @Internal
-final class Elasticsearch7Configuration extends ElasticsearchConfiguration {
+final class Elasticsearch7Configuration {
+protected final ReadableConfig config;
+private final ClassLoader classLoader;
+
 Elasticsearch7Configuration(ReadableConfig config, ClassLoader classLoader) {
-super(config, classLoader);
+this.config = checkNotNull(config);
+this.classLoader = checkNotNull(classLoader);
+}
+
+public int getBulkFlushMaxActions() {
+int maxActions = config.get(ElasticsearchConnectorOptions.BULK_FLUSH_MAX_ACTIONS_OPTION);
+// convert 0 to -1, because Elasticsearch client use -1 to disable this configuration.
+return maxActions == 0 ? -1 : maxActions;
+}
+
+public long getBulkFlushMaxByteSize() {
+long maxSize =
+        config.get(ElasticsearchConnectorOptions.BULK_FLASH_MAX_SIZE_OPTION).getBytes();
+// convert 0 to -1, because Elasticsearch client use -1 to disable this configuration.
+return maxSize == 0 ? -1 : maxSize;
+}
+
+public long getBulkFlushInterval() {
+long interval = config.get(BULK_FLUSH_INTERVAL_OPTION).toMillis();
+// convert 0 to -1, because Elasticsearch client use -1 to disable this configuration.
+return interval == 0 ? -1 : interval;

Review comment: Good catch, they are also checked and documented in the ElasticsearchSinkBuilder
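The `0 → -1` normalization repeated in the getters above can be illustrated with a single helper. The sketch below uses hypothetical names (`BulkFlushOptions`, `normalizeDisabledValue`) that are not part of the PR; it only demonstrates the convention under discussion: the table options use `0` to mean "disabled", while the Elasticsearch client uses `-1` for the same purpose.

```java
// Hypothetical helper (not part of the PR) illustrating the convention
// discussed above: the table options treat 0 as "disabled", while the
// Elasticsearch client expects -1 to disable the same settings.
public class BulkFlushOptions {
    static long normalizeDisabledValue(long configuredValue) {
        // 0 ("disabled" in table options) becomes -1 ("disabled" for the client);
        // every other value is passed through unchanged.
        return configuredValue == 0 ? -1 : configuredValue;
    }

    public static void main(String[] args) {
        System.out.println(normalizeDisabledValue(0));    // prints -1
        System.out.println(normalizeDisabledValue(1000)); // prints 1000
    }
}
```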
[GitHub] [flink] flinkbot edited a comment on pull request #17106: [FLINK-20443][API/DataStream] ContinuousProcessingTimeTrigger doesn't fire at the end of the window
flinkbot edited a comment on pull request #17106: URL: https://github.com/apache/flink/pull/17106#issuecomment-910662440
[GitHub] [flink] twalthr commented on a change in pull request #17350: [FLINK-24359][table-runtime] Use ResolvedSchema in AbstractFileSystemTable
twalthr commented on a change in pull request #17350: URL: https://github.com/apache/flink/pull/17350#discussion_r724214646

File path: flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/FileSystemTableSink.java

@@ -331,7 +338,7 @@ private RowDataPartitionComputer partitionComputer() {
 return new FileSystemFormatFactory.ReaderContext() {
 @Override
 public TableSchema getSchema() {

Review comment: Good point, I hadn't seen that it is deprecated. Then we don't need to make the effort.
[GitHub] [flink-statefun] sjwiesman closed pull request #273: [FLINK-24464][core] Metrics sometimes report negative backlog
sjwiesman closed pull request #273: URL: https://github.com/apache/flink-statefun/pull/273
[GitHub] [flink] flinkbot commented on pull request #17430: [FLINK-24431][kinesis][efo] Stop consumer deregistration when EAGER EFO configured into 1.13.x
flinkbot commented on pull request #17430: URL: https://github.com/apache/flink/pull/17430#issuecomment-937889110

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks
Last check on commit e2027807c6a1b22c82dc0c3d0c7f5e961ba25af5 (Thu Oct 07 15:12:44 UTC 2021) ✅ no warnings

Mention the bot in a comment to re-run the automated checks.

## Review Progress
* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands: The @flinkbot bot supports the following commands:
- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot commented on pull request #17427: [FLINK-24472][tests] Wait for cancellation future to complete
flinkbot commented on pull request #17427: URL: https://github.com/apache/flink/pull/17427#issuecomment-937723783
[GitHub] [flink] dawidwys commented on a change in pull request #17253: [FLINK-24182][task] Do not interrupt tasks/operators that are already closing
dawidwys commented on a change in pull request #17253: URL: https://github.com/apache/flink/pull/17253#discussion_r723934098

File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java

@@ -324,16 +324,19 @@ public void waitForTaskCompletion(long timeout) throws Exception {
 *
 * @param timeout Timeout for the task completion
 */
-public void waitForTaskCompletion(long timeout, boolean ignoreCancellationException)
-throws Exception {
+public void waitForTaskCompletion(
+long timeout, boolean ignoreCancellationOrInterruptedException) throws Exception {
 Preconditions.checkState(taskThread != null, "Task thread was not started.");
 taskThread.join(timeout);
 if (taskThread.getError() != null) {
-if (!ignoreCancellationException
-|| !ExceptionUtils.findThrowable(
-taskThread.getError(), CancelTaskException.class)
-.isPresent()) {
+boolean errorIsCancellationOrInterrupted =
+ExceptionUtils.findThrowable(taskThread.getError(), CancelTaskException.class)
+.isPresent()
+|| ExceptionUtils.findThrowable(
+taskThread.getError(), InterruptedException.class)
+.isPresent();
+if (!ignoreCancellationOrInterruptedException || !errorIsCancellationOrInterrupted) {

Review comment: nit: Could we invert the condition? I find it much easier to reason without negations:
```
if (ignoreCancellationOrInterruptedException && errorIsCancellationOrInterrupted) {
    return;
}
throw new Exception("error in task", taskThread.getError());
```
[GitHub] [flink] AHeise commented on pull request #17401: [FLINK-24409][connectors] Fix metrics errors with topics names with periods
AHeise commented on pull request #17401: URL: https://github.com/apache/flink/pull/17401#issuecomment-937502238 @PatrickRen could you PTAL? I think we should have a test case for the record lag metric as well.
[GitHub] [flink] tisonkun closed pull request #16790: [hotfix][connector][serialization][docs] Fix some typos in readme.md , kafka.md , schema_evolution.md docs
tisonkun closed pull request #16790: URL: https://github.com/apache/flink/pull/16790
[GitHub] [flink] AHeise merged pull request #17363: [FLINK-24324][connectors/elasticsearch] Add Elasticsearch 7 sink based on the unified sink (FLIP-143)
AHeise merged pull request #17363: URL: https://github.com/apache/flink/pull/17363
[GitHub] [flink] snuyanzin commented on pull request #17422: [FLINK-9432][table] Enable EXTRACT for DECADE, ISODOW, ISOYEAR
snuyanzin commented on pull request #17422: URL: https://github.com/apache/flink/pull/17422#issuecomment-937735175 yes, sure, just added
[GitHub] [flink] dannycranmer commented on pull request #17417: FLINK-24431 [flink-connector-kinesis] Stop consumer deregistration when EAGER EFO configured.
dannycranmer commented on pull request #17417: URL: https://github.com/apache/flink/pull/17417#issuecomment-937674900 @rudikershaw thanks for the contribution, can you also update the `cn` documentation? It is still in English but eventually someone will translate it: https://github.com/apache/flink/blob/master/docs/content.zh/docs/connectors/datastream/kinesis.md
[GitHub] [flink] flinkbot edited a comment on pull request #17374: [FLINK-24327][connectors/elasticsearch] Add Elasticsearch 7 sink for table API
flinkbot edited a comment on pull request #17374: URL: https://github.com/apache/flink/pull/17374#issuecomment-929172201
[GitHub] [flink] flinkbot commented on pull request #17425: [hotfix][datastream] Add missing Internal annotations for OperatorCoordinator class
flinkbot commented on pull request #17425: URL: https://github.com/apache/flink/pull/17425#issuecomment-937642490

## Automated Checks
Last check on commit d01ed30ea78a2f9864b072dd16f873182d0c575b (Thu Oct 07 10:01:58 UTC 2021) **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
[GitHub] [flink] dannycranmer merged pull request #17417: FLINK-24431 [flink-connector-kinesis] Stop consumer deregistration when EAGER EFO configured.
dannycranmer merged pull request #17417: URL: https://github.com/apache/flink/pull/17417
[GitHub] [flink] flinkbot commented on pull request #17431: [FLINK-24431][kinesis][efo] Stop consumer deregistration when EAGER EFO configured into 1.12.x
flinkbot commented on pull request #17431: URL: https://github.com/apache/flink/pull/17431#issuecomment-937896727

## Automated Checks
Last check on commit 191eee54006303b07fd453ad6971cbf8a55c4b38 (Thu Oct 07 15:20:32 UTC 2021) **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
[GitHub] [flink] twalthr closed pull request #17418: [FLINK-24421][table-runtime] Fix conversion of strings to TIME
twalthr closed pull request #17418: URL: https://github.com/apache/flink/pull/17418
[GitHub] [flink] tillrohrmann commented on a change in pull request #17414: [FLINK-24451][HA]Replace the scattered objects with encapsulated LeaderIn…
tillrohrmann commented on a change in pull request #17414: URL: https://github.com/apache/flink/pull/17414#discussion_r724319224

File path: flink-runtime/src/main/java/org/apache/flink/runtime/leaderelection/DefaultLeaderElectionService.java

@@ -263,13 +257,11 @@ public void onLeaderInformationChange(LeaderInformation leaderInformation) {
 LOG.trace(
         "Leader node changed while {} is the leader with session ID {}. New leader information {}.",
         leaderContender.getDescription(),
-confirmedLeaderSessionID,
+confirmedLeaderInformation.getLeaderSessionID(),
         leaderInformation);
 }
-if (confirmedLeaderSessionID != null) {
-final LeaderInformation confirmedLeaderInfo =
-        LeaderInformation.known(
-                confirmedLeaderSessionID, confirmedLeaderAddress);
+if (confirmedLeaderInformation.getLeaderSessionID() != null) {

Review comment:
```suggestion
if (!confirmedLeaderInformation.isEmpty()) {
```
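The review suggestion replaces a null check on one field with a query on the encapsulating value object. A minimal sketch of that pattern follows; the class and method names here are hypothetical and do not reflect Flink's actual `LeaderInformation` implementation.

```java
// Minimal sketch (hypothetical names) of the encapsulation suggested in the
// review comment: callers ask the value object whether it represents
// "no confirmed leader" instead of null-checking its individual fields.
public final class LeaderInfo {
    public static final LeaderInfo EMPTY = new LeaderInfo(null, null);

    private final String leaderSessionId;
    private final String leaderAddress;

    private LeaderInfo(String leaderSessionId, String leaderAddress) {
        this.leaderSessionId = leaderSessionId;
        this.leaderAddress = leaderAddress;
    }

    // Factory for a known leader, mirroring a LeaderInformation.known(...) style API.
    public static LeaderInfo known(String leaderSessionId, String leaderAddress) {
        return new LeaderInfo(leaderSessionId, leaderAddress);
    }

    // The object itself knows whether it carries any leader information.
    public boolean isEmpty() {
        return leaderSessionId == null && leaderAddress == null;
    }

    public static void main(String[] args) {
        System.out.println(EMPTY.isEmpty());                           // true
        System.out.println(known("session-1", "akka://jm").isEmpty()); // false
    }
}
```

With this shape, the guard in the diff reads `if (!confirmedLeaderInformation.isEmpty())`, which is what the suggestion proposes.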
[GitHub] [flink-shaded] zentol commented on a change in pull request #100: [FLINK-24447][zk] Bundle netty
zentol commented on a change in pull request #100: URL: https://github.com/apache/flink-shaded/pull/100#discussion_r724110204

File path: flink-shaded-zookeeper-parent/flink-shaded-zookeeper-35/src/main/resources/META-INF/NOTICE

@@ -7,6 +7,14 @@ The Apache Software Foundation (http://www.apache.org/). This project bundles the following dependencies under the Apache Software License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt) - com.google.guava:guava:27.0.1-jre

Review comment: There are separate modules for the ZK versions. zookeeper-35 for 3.5, zookeeper-34 for 3.4.
[GitHub] [flink] flinkbot commented on pull request #17432: [BP-1.14][FLINK-24437][HA]Remove unhandled exception handler from CuratorFramework before closing it
flinkbot commented on pull request #17432: URL: https://github.com/apache/flink/pull/17432#issuecomment-937928876

## Automated Checks
Last check on commit 290a87fa7651f82792867a9c726a55d262de53ec (Thu Oct 07 15:57:38 UTC 2021) **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
[GitHub] [flink] flinkbot commented on pull request #17423: [FLINK-24443][table-runtime] Reenable IntervalJoinITCase.testRowTimeInnerJoinWithEquiTimeAttrs
flinkbot commented on pull request #17423: URL: https://github.com/apache/flink/pull/17423#issuecomment-937528000

## Automated Checks
Last check on commit db880bc54e3bea2940448f9edaf9baf598651c46 (Thu Oct 07 07:28:02 UTC 2021) **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
* **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-24443).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work.
[GitHub] [flink] slinkydeveloper commented on pull request #17423: [FLINK-24443][table-runtime] Reenable IntervalJoinITCase.testRowTimeInnerJoinWithEquiTimeAttrs
slinkydeveloper commented on pull request #17423: URL: https://github.com/apache/flink/pull/17423#issuecomment-937820171 @flinkbot run azure
[GitHub] [flink] flinkbot commented on pull request #17429: [FLINK-24431][kinesis][efo] Stop consumer deregistration when EAGER EFO configured into 1.14.x
flinkbot commented on pull request #17429: URL: https://github.com/apache/flink/pull/17429#issuecomment-937884401

## Automated Checks
Last check on commit 37f38d4396b971881f9db17c81e78e52e9075ba4 (Thu Oct 07 15:07:38 UTC 2021) ✅ no warnings
[GitHub] [flink] fapaul commented on a change in pull request #17374: [FLINK-24327][connectors/elasticsearch] Add Elasticsearch 7 sink for table API
fapaul commented on a change in pull request #17374: URL: https://github.com/apache/flink/pull/17374#discussion_r723896941 ## File path: flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/KeyExtractor.java ## @@ -109,6 +115,27 @@ public int getIndex() { .orElseGet(() -> (Function & Serializable) (row) -> null); } +public static Function createKeyExtractor( +ResolvedSchema resolvedSchema, String keyDelimiter) { +Optional primaryKey = resolvedSchema.getPrimaryKey(); +if (primaryKey.isPresent()) { +int[] primaryKeyIndexes = resolvedSchema.getPrimaryKeyIndexes(); + +List formatters = new ArrayList<>(); +for (int index : primaryKeyIndexes) { +//noinspection OptionalGetWithoutIsPresent Review comment: To suppress warnings in Flink we use the `@SuppressWarnings()` annotation rather than these type of comments. ## File path: flink-connectors/flink-connector-elasticsearch7/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/Elasticsearch7DynamicSink.java ## @@ -54,54 +44,20 @@ */ @Internal final class Elasticsearch7DynamicSink implements DynamicTableSink { -@VisibleForTesting -static final Elasticsearch7RequestFactory REQUEST_FACTORY = -new Elasticsearch7DynamicSink.Elasticsearch7RequestFactory(); private final EncodingFormat> format; -private final TableSchema schema; +private final DynamicTableFactory.Context factoryContext; private final Elasticsearch7Configuration config; -public Elasticsearch7DynamicSink( -EncodingFormat> format, -Elasticsearch7Configuration config, -TableSchema schema) { -this(format, config, schema, (ElasticsearchSink.Builder::new)); -} - -// -- -// Hack to make configuration testing possible. -// -// The code in this block should never be used outside of tests. -// Having a way to inject a builder we can assert the builder in -// the test. We can not assert everything though, e.g. 
it is not -// possible to assert flushing on checkpoint, as it is configured -// on the sink itself. -// -- - -private final ElasticSearchBuilderProvider builderProvider; - -@FunctionalInterface -interface ElasticSearchBuilderProvider { -ElasticsearchSink.Builder createBuilder( -List httpHosts, RowElasticsearchSinkFunction upsertSinkFunction); -} - Elasticsearch7DynamicSink( EncodingFormat> format, Elasticsearch7Configuration config, -TableSchema schema, -ElasticSearchBuilderProvider builderProvider) { +DynamicTableFactory.Context factoryContext) { Review comment: If we only pass the information about the primaryKey we do not need to pass the whole context but only the extracted row data type. (`factoryContext#getPhysicalRowDataType`) ## File path: flink-connectors/flink-connector-elasticsearch7/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/Elasticsearch7DynamicSink.java ## @@ -114,188 +70,65 @@ public ChangelogMode getChangelogMode(ChangelogMode requestedMode) { } @Override -public SinkFunctionProvider getSinkRuntimeProvider(Context context) { -return () -> { -SerializationSchema format = -this.format.createRuntimeEncoder(context, schema.toRowDataType()); - -final RowElasticsearchSinkFunction upsertFunction = -new RowElasticsearchSinkFunction( - IndexGeneratorFactory.createIndexGenerator(config.getIndex(), schema), -null, // this is deprecated in es 7+ -format, -XContentType.JSON, -REQUEST_FACTORY, -KeyExtractor.createKeyExtractor(schema, config.getKeyDelimiter())); - -final ElasticsearchSink.Builder builder = -builderProvider.createBuilder(config.getHosts(), upsertFunction); - -builder.setFailureHandler(config.getFailureHandler()); -builder.setBulkFlushMaxActions(config.getBulkFlushMaxActions()); -builder.setBulkFlushMaxSizeMb((int) (config.getBulkFlushMaxByteSize() >> 20)); -builder.setBulkFlushInterval(config.getBulkFlushInterval()); -builder.setBulkFlushBackoff(config.isBulkFlushBackoffEnabled()); - 
config.getBulkFlushBackoffType().ifPresent(builder::setBulkFlushBackoffType); - config.getBulkFlushBackoffRetries().ifPresent(builder::setBulkFlushBackoffRetries); -
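The first review comment above asks for the standard `@SuppressWarnings` annotation instead of IDE-specific `//noinspection` comments. A minimal, self-contained sketch of the difference; the method and field names here are illustrative, not the actual `KeyExtractor` code:

```java
import java.util.Optional;

public class SuppressExample {
    // IDE-specific style the review discourages:
    //   //noinspection OptionalGetWithoutIsPresent
    //   String column = primaryKeyColumn.get();

    // Annotation style the review asks for: scoped to the smallest
    // element that needs it (here, a single method).
    @SuppressWarnings("OptionalGetWithoutIsPresent")
    public static String firstKeyColumn(Optional<String> primaryKeyColumn) {
        // The caller has already checked isPresent(), mirroring the
        // primaryKey.isPresent() guard in the reviewed code.
        return primaryKeyColumn.get();
    }

    public static void main(String[] args) {
        System.out.println(firstKeyColumn(Optional.of("id"))); // prints "id"
    }
}
```

The annotation survives refactoring and works across tooling, whereas `//noinspection` is only understood by IntelliJ.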
[GitHub] [flink] tisonkun closed pull request #16763: [hotfix][state_processor_api][union_state][doc] Updated variable name to match method invocation
tisonkun closed pull request #16763: URL: https://github.com/apache/flink/pull/16763 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #17424: [FLINK-24455][tests]FallbackAkkaRpcSystemLoader should check for mave…
flinkbot commented on pull request #17424: URL: https://github.com/apache/flink/pull/17424#issuecomment-937547733 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 87c852bd5bd07765ef46cd6896d0fb0ad3569f15 (Thu Oct 07 07:58:28 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] AHeise commented on pull request #17358: [hotfix] Fix typo,'Kakfa' corrected to 'Kafka'
AHeise commented on pull request #17358: URL: https://github.com/apache/flink/pull/17358#issuecomment-937485519
[GitHub] [flink-shaded] zentol merged pull request #100: [FLINK-24447][zk] Bundle netty
zentol merged pull request #100: URL: https://github.com/apache/flink-shaded/pull/100
[GitHub] [flink] pnowojski merged pull request #17425: [hotfix][datastream] Add missing Internal annotations for OperatorCoordinator class
pnowojski merged pull request #17425: URL: https://github.com/apache/flink/pull/17425
[GitHub] [flink] flinkbot edited a comment on pull request #17235: [FLINK-24246][connector/pulsar] Bump pulsar client to latest 2.8.1 release
flinkbot edited a comment on pull request #17235: URL: https://github.com/apache/flink/pull/17235#issuecomment-916768949 ## CI report: * 6fba18288bfbfb1f83a6ec568be35ef749c974bc Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23949) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24817) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot commented on pull request #17428: [hotfix]Refactor the registerJobManager to registerJobMaster
flinkbot commented on pull request #17428: URL: https://github.com/apache/flink/pull/17428#issuecomment-937842619 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit c71a25d545c6771e9bd2f7e659120be9fedb7a28 (Thu Oct 07 14:22:13 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] AHeise merged pull request #17420: [BP-1.14][FLINK-24382][metrics] Do not compute recordsOut metric in SinkOperat…
AHeise merged pull request #17420: URL: https://github.com/apache/flink/pull/17420
[GitHub] [flink] snuyanzin closed pull request #6614: [FLINK-9432] [table] Support extract timeunit: DECADE, EPOCH, ISODOW, ISOYEAR…
snuyanzin closed pull request #6614: URL: https://github.com/apache/flink/pull/6614
[GitHub] [flink] flinkbot edited a comment on pull request #17417: FLINK-24431 [flink-connector-kinesis] Stop consumer deregistration when EAGER EFO configured.
flinkbot edited a comment on pull request #17417: URL: https://github.com/apache/flink/pull/17417#issuecomment-935782208
[GitHub] [flink-benchmarks] dawidwys commented on a change in pull request #34: [hotfix] Increase warmup iterations and iterations in CheckpointingTimeBenchmark
dawidwys commented on a change in pull request #34: URL: https://github.com/apache/flink-benchmarks/pull/34#discussion_r724221931 ## File path: src/main/java/org/apache/flink/benchmark/CheckpointingTimeBenchmark.java ## @@ -65,37 +61,20 @@ import java.util.concurrent.Executors; import java.util.function.Function; -import static java.util.concurrent.TimeUnit.MILLISECONDS; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.flink.api.common.eventtime.WatermarkStrategy.noWatermarks; /** * The test verifies that the debloating kicks in and properly downsizes buffers. In the end the * checkpoint should take ~2(number of rebalance) * DEBLOATING_TARGET. - * - * Some info about the chosen numbers: Review comment: I removed the note, as it is not straightforward to calculate and hard to keep it in sync. Moreover the calculations so far were "wrong" regarding the record size, which with `1b` of payload is equal to `~29b`. ## File path: src/main/java/org/apache/flink/benchmark/CheckpointingTimeBenchmark.java ## @@ -65,37 +61,20 @@ import java.util.concurrent.Executors; import java.util.function.Function; -import static java.util.concurrent.TimeUnit.MILLISECONDS; import static java.util.concurrent.TimeUnit.SECONDS; import static org.apache.flink.api.common.eventtime.WatermarkStrategy.noWatermarks; /** * The test verifies that the debloating kicks in and properly downsizes buffers. In the end the * checkpoint should take ~2(number of rebalance) * DEBLOATING_TARGET. - * - * Some info about the chosen numbers: - * - * - * The minimal memory segment size is decreased (256b) so that the scaling possibility is - * higher. 
Memory segments start with 4kb - * A memory segment of the minimal size fits ~3 records (of size 64b), each record takes ~1ms - * to be processed by the sink - * We have 2 (exclusive buffers) * 4 (parallelism) + 8 floating = 64 buffers per gate, with - * 300 ms debloating target and ~1ms/record processing speed, we can buffer 300/64 = ~4.5 - * records in a buffer after debloating which means the size of a buffer is slightly above the - * minimal memory segment size. - * The buffer debloating target of 300ms means a checkpoint should take ~2(number of - * exchanges)*300ms=~600ms - * */ @OutputTimeUnit(SECONDS) -@Warmup(iterations = 4) public class CheckpointingTimeBenchmark extends BenchmarkBase { public static final int JOB_PARALLELISM = 4; -public static final MemorySize START_MEMORY_SEGMENT_SIZE = MemorySize.parse("4 kb"); Review comment: I increased the memory segment size to increase the range in which buffer debloating works. After debloating, fully backpressured pipeline has buffers of `~1000-2000b`. ## File path: src/main/java/org/apache/flink/benchmark/CheckpointingTimeBenchmark.java ## @@ -284,8 +262,9 @@ protected int getNumberOfSlotsPerTaskManager() { */ public static class SlowDiscardSink implements SinkFunction { @Override -public void invoke(T value, Context context) throws Exception { -Thread.sleep(1); +public void invoke(T value, Context context) { +final long startTime = System.nanoTime(); +while (System.nanoTime() - startTime < 200_000) {} Review comment: I replaced `Thread.sleep` with busy waiting. It improves saturation of network exchanges. With `Thread.sleep(1)` most of the unaligned checkpoints had no persisted in-flight data. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
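The last comment in the thread swaps `Thread.sleep(1)` for a ~200µs busy-wait so the sink never yields its thread and the network exchanges stay saturated. A self-contained sketch of that change; the 200_000ns budget comes from the diff, but the surrounding class is simplified and is not Flink's `SinkFunction`:

```java
public class BusyWaitSink {
    /**
     * Burn CPU for roughly the given number of nanoseconds instead of sleeping.
     * Unlike Thread.sleep(1), this never releases the thread, so upstream
     * buffers actually fill up and unaligned checkpoints persist in-flight data.
     * Returns the elapsed time for inspection.
     */
    public static long busyWaitNanos(long nanos) {
        final long startTime = System.nanoTime();
        long spins = 0;
        while (System.nanoTime() - startTime < nanos) {
            spins++; // non-empty loop body; the work is the repeated nanoTime() call
        }
        return System.nanoTime() - startTime;
    }

    public static void main(String[] args) {
        long elapsed = busyWaitNanos(200_000); // ~0.2 ms per record, as in the diff
        System.out.println("waited " + elapsed + " ns");
    }
}
```

The trade-off is deliberate: in a benchmark, burning a core to simulate a slow sink is preferable to the millisecond-granularity and scheduler noise of `Thread.sleep`.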
[GitHub] [flink] flinkbot edited a comment on pull request #17253: [FLINK-24182][task] Do not interrupt tasks/operators that are already closing
flinkbot edited a comment on pull request #17253: URL: https://github.com/apache/flink/pull/17253#issuecomment-917975072
[jira] [Created] (FLINK-24477) Add MongoDB sink
Nir Tsruya created FLINK-24477: -- Summary: Add MongoDB sink Key: FLINK-24477 URL: https://issues.apache.org/jira/browse/FLINK-24477 Project: Flink Issue Type: New Feature Components: Connectors / Common Reporter: Nir Tsruya h2. Motivation *User stories:* As a Flink user, I’d like to use MongoDB as a sink for my data pipeline. *Scope:* * Implement an asynchronous sink for MongoDB inheriting the AsyncSinkBase class. The implementation can for now reside in its own module in flink-connectors. * Implement an asynchronous sink writer for MongoDB by extending the AsyncSinkWriter. The implemented Sink Writer will be used by the Sink class that will be created as part of this story. * Java / code-level docs. * End to end testing -- This message was sent by Atlassian Jira (v8.3.4#803005)
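The scoped `AsyncSinkWriter` work amounts to buffering records and flushing them to the destination asynchronously in batches. A Flink-free sketch of that shape; the class and method names below are illustrative, and a real implementation would extend `AsyncSinkBase`/`AsyncSinkWriter` and use the MongoDB async client rather than a plain `Consumer`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class BatchingAsyncWriter<T> {
    private final int batchSize;
    private final Consumer<List<T>> asyncClient; // stands in for the MongoDB async client
    private final List<T> buffer = new ArrayList<>();

    public BatchingAsyncWriter(int batchSize, Consumer<List<T>> asyncClient) {
        this.batchSize = batchSize;
        this.asyncClient = asyncClient;
    }

    /** Buffer a record; flush asynchronously once the batch is full. */
    public CompletableFuture<Void> write(T element) {
        buffer.add(element);
        if (buffer.size() >= batchSize) {
            final List<T> batch = new ArrayList<>(buffer);
            buffer.clear();
            return CompletableFuture.runAsync(() -> asyncClient.accept(batch));
        }
        return CompletableFuture.completedFuture(null);
    }

    public static void main(String[] args) {
        BatchingAsyncWriter<String> writer =
                new BatchingAsyncWriter<>(2, batch -> System.out.println("flushed " + batch));
        writer.write("a");
        writer.write("b").join(); // second write fills the batch and triggers the flush
    }
}
```

The real sink would additionally handle retries, back-pressure, and flushing the partial buffer on checkpoint, which is exactly what `AsyncSinkBase` factors out.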
[GitHub] [flink] zentol merged pull request #17427: [FLINK-24472][tests] Wait for cancellation future to complete
zentol merged pull request #17427: URL: https://github.com/apache/flink/pull/17427
[jira] [Closed] (FLINK-24472) DispatcherTest is unstable
[ https://issues.apache.org/jira/browse/FLINK-24472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-24472. Resolution: Fixed master: e38cd1be718a207125e8116667d3e2675258d2b9 > DispatcherTest is unstable > --- > > Key: FLINK-24472 > URL: https://issues.apache.org/jira/browse/FLINK-24472 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Piotr Nowojski >Assignee: Chesnay Schepler >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > https://dev.azure.com/pnowojski/Flink/_build/results?buildId=534=logs=9dc1b5dc-bcfa-5f83-eaa7-0cb181ddc267=511d2595-ec54-5ab7-86ce-92f328796f20 > testCancellationOfNonCanceledTerminalJobFailsWithAppropriateException from > DispatcherTest can fail with: > {noformat} > Oct 07 10:31:18 Expected: A CompletableFuture that failed with: > org.apache.flink.runtime.messages.FlinkJobTerminatedWithoutCancellationException > Oct 07 10:31:18 but: Future is not completed. > Oct 07 10:31:18 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > Oct 07 10:31:18 at org.junit.Assert.assertThat(Assert.java:964) > Oct 07 10:31:18 at org.junit.Assert.assertThat(Assert.java:930) > Oct 07 10:31:18 at > org.apache.flink.runtime.dispatcher.DispatcherTest.testCancellationOfNonCanceledTerminalJobFailsWithAppropriateException(DispatcherTest.java:442) > Oct 07 10:31:18 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Oct 07 10:31:18 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Oct 07 10:31:18 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Oct 07 10:31:18 at java.lang.reflect.Method.invoke(Method.java:498) > (...) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] tillrohrmann commented on a change in pull request #17414: [FLINK-24451][HA]Replace the scattered objects with encapsulated LeaderIn…
tillrohrmann commented on a change in pull request #17414: URL: https://github.com/apache/flink/pull/17414#discussion_r724319224 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/leaderelection/DefaultLeaderElectionService.java ## @@ -263,13 +257,11 @@ public void onLeaderInformationChange(LeaderInformation leaderInformation) { LOG.trace( "Leader node changed while {} is the leader with session ID {}. New leader information {}.", leaderContender.getDescription(), -confirmedLeaderSessionID, +confirmedLeaderInformation.getLeaderSessionID(), leaderInformation); } -if (confirmedLeaderSessionID != null) { -final LeaderInformation confirmedLeaderInfo = -LeaderInformation.known( -confirmedLeaderSessionID, confirmedLeaderAddress); +if (confirmedLeaderInformation.getLeaderSessionID() != null) { Review comment: ```suggestion if (!confirmedLeaderInformation.isEmpty()) { ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
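The suggestion above replaces a null check on one extracted field with a semantic `isEmpty()` on the encapsulating object. A minimal sketch of why that reads better; this class is simplified from, and not identical to, Flink's actual `LeaderInformation`:

```java
import java.util.UUID;

public class LeaderInfo {
    private final UUID leaderSessionId; // null when no leader is confirmed
    private final String leaderAddress;

    private LeaderInfo(UUID sessionId, String address) {
        this.leaderSessionId = sessionId;
        this.leaderAddress = address;
    }

    public static LeaderInfo known(UUID sessionId, String address) {
        return new LeaderInfo(sessionId, address);
    }

    public static LeaderInfo empty() {
        return new LeaderInfo(null, null);
    }

    /**
     * Encapsulates the "no confirmed leader" state in one place, so callers
     * write !info.isEmpty() instead of info.getLeaderSessionID() != null.
     */
    public boolean isEmpty() {
        return leaderSessionId == null;
    }
}
```

Centralizing the emptiness check also means the representation (a null session ID) can change later without touching every caller.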
[GitHub] [flink] flinkbot commented on pull request #17432: [BP-1.14][FLINK-24437][HA]Remove unhandled exception handler from CuratorFramework before closing it
flinkbot commented on pull request #17432: URL: https://github.com/apache/flink/pull/17432#issuecomment-937928876 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 290a87fa7651f82792867a9c726a55d262de53ec (Thu Oct 07 15:57:38 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] tillrohrmann opened a new pull request #17432: [BP-1.14][FLINK-24437][HA]Remove unhandled exception handler from CuratorFramework before closing it
tillrohrmann opened a new pull request #17432: URL: https://github.com/apache/flink/pull/17432 Backport of #17409 to `release-1.14`.
[jira] [Comment Edited] (FLINK-24473) getString value from any datatype columns
[ https://issues.apache.org/jira/browse/FLINK-24473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425647#comment-17425647 ] Jark Wu edited comment on FLINK-24473 at 10/7/21, 3:52 PM: --- From the exception message, the field type is int, so you can't use {{getString}}. You should use {{getInt()}} and cast to string instead. If you are looking for a more general way to convert any type into string, please take {{org.apache.flink.formats.json.JsonRowDataSerializationSchema}} as an example, which converts RowData into a JSON string. was (Author: jark): From the exception message, the field type is int, so you can't use {{getString}}. You should use {{getInt()}} and cast to string instead. If you are looking for a more general way to convert any type into string, you can use {{org.apache.flink.table.data.RowData#createFieldGetter}} to get a {{FieldGetter}} which can get an Object field from the RowData, and then use {{toString}} on the object. > getString value from any datatype columns > - > > Key: FLINK-24473 > URL: https://issues.apache.org/jira/browse/FLINK-24473 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Affects Versions: 1.12.4 >Reporter: Spongebob >Priority: Minor > Attachments: image-2021-10-07-21-17-20-192.png > > > Hope flink would support getting string value from any other datatype column, > such as from int、decimal column. At current flink would throw cast exception > when we do that. > > !image-2021-10-07-21-17-20-192.png!
[jira] [Comment Edited] (FLINK-24473) getString value from any datatype columns
[ https://issues.apache.org/jira/browse/FLINK-24473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425647#comment-17425647 ] Jark Wu edited comment on FLINK-24473 at 10/7/21, 3:51 PM: --- From the exception message, the field type is int, so you can't use {{getString}}. You should use {{getInt()}} and cast to string instead. If you are looking for a more general way to convert any type into string, you can use {{org.apache.flink.table.data.RowData#createFieldGetter}} to get a {{FieldGetter}} which can get an Object field from the RowData, and then use {{toString}} on the object. was (Author: jark): From the exception message, the field type is int, so you can't use {{getString}}. You should use {{getInt()}} and cast to string instead. If you are looking for a more general way to convert any type into string, you can use {{org.apache.flink.table.data.RowData#createFieldGetter}} to get a {{FieldGetter}} which can get an Object from the RowData, and then use {{toString}} on the object. > getString value from any datatype columns > - > > Key: FLINK-24473 > URL: https://issues.apache.org/jira/browse/FLINK-24473 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Affects Versions: 1.12.4 >Reporter: Spongebob >Priority: Minor > Attachments: image-2021-10-07-21-17-20-192.png > > > Hope flink would support getting string value from any other datatype column, > such as from int、decimal column. At current flink would throw cast exception > when we do that. > > !image-2021-10-07-21-17-20-192.png!
[jira] [Commented] (FLINK-24473) getString value from any datatype columns
[ https://issues.apache.org/jira/browse/FLINK-24473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17425647#comment-17425647 ] Jark Wu commented on FLINK-24473: - From the exception message, the field type is int, so you can't use {{getString}}. You should use {{getInt()}} and cast to string instead. If you are looking for a more general way to convert any type into string, you can use {{org.apache.flink.table.data.RowData#createFieldGetter}} to get a {{FieldGetter}} which can get an Object from the RowData, and then use {{toString}} on the object. > getString value from any datatype columns > - > > Key: FLINK-24473 > URL: https://issues.apache.org/jira/browse/FLINK-24473 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Affects Versions: 1.12.4 >Reporter: Spongebob >Priority: Minor > Attachments: image-2021-10-07-21-17-20-192.png > > > Hope flink would support getting string value from any other datatype column, > such as from int、decimal column. At current flink would throw cast exception > when we do that. > > !image-2021-10-07-21-17-20-192.png!
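Jark's general suggestion, obtaining a field getter per column and calling `toString` on the returned `Object`, can be sketched without Flink's classes as follows. Here a row is modeled as a plain `Object[]` and the getter as a `Function`; the real API is `org.apache.flink.table.data.RowData#createFieldGetter`, which additionally takes the column's `LogicalType`:

```java
import java.util.function.Function;

public class FieldToString {
    /** Simplified stand-in for RowData.FieldGetter: fetch the field at a position. */
    public static Function<Object[], Object> createFieldGetter(int pos) {
        return row -> row[pos];
    }

    /** Convert any column to its string form, regardless of its declared type. */
    public static String getFieldAsString(Object[] row, int pos) {
        Object field = createFieldGetter(pos).apply(row);
        return field == null ? null : field.toString();
    }

    public static void main(String[] args) {
        Object[] row = {42, new java.math.BigDecimal("3.14"), "hello"};
        System.out.println(getFieldAsString(row, 1)); // prints "3.14"
    }
}
```

This sidesteps the `getString`-on-an-int cast exception from the issue: the typed accessor is chosen per column, and only the final `toString` is generic.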
[GitHub] [flink] tillrohrmann commented on pull request #17262: [FLINK-24273][kubernetes] Relocate io.fabric8 dependency.
tillrohrmann commented on pull request #17262: URL: https://github.com/apache/flink/pull/17262#issuecomment-937922730 @dmvk it would also be good to understand where a dependency conflict arises with the unrelocated `io.fabric8.*` dependencies.
[GitHub] [flink] tillrohrmann commented on pull request #17262: [FLINK-24273][kubernetes] Relocate io.fabric8 dependency.
tillrohrmann commented on pull request #17262: URL: https://github.com/apache/flink/pull/17262#issuecomment-937919332 I've tried bumping the `maven-shade-plugin` version to `3.2.4` but it suffered from the same problem.
[GitHub] [flink] flinkbot commented on pull request #17431: [FLINK-24431][kinesis][efo] Stop consumer deregistration when EAGER EFO configured into 1.12.x
flinkbot commented on pull request #17431: URL: https://github.com/apache/flink/pull/17431#issuecomment-937896727 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 191eee54006303b07fd453ad6971cbf8a55c4b38 (Thu Oct 07 15:20:32 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] rudikershaw opened a new pull request #17431: [FLINK-24431][kinesis][efo] Stop consumer deregistration when EAGER EFO configured into 1.12.x
rudikershaw opened a new pull request #17431: URL: https://github.com/apache/flink/pull/17431 ## What is the purpose of the change This is a fix being pulled back into 1.12, please see #17417 for the original PR. The following is just the original text from the previous pull request; The EFO Kinesis connector will register and de-register stream consumers based on the [configured registration strategy](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/datastream/kinesis/#efo-stream-consumer-registrationderegistration). When EAGER is used, the client (usually job manager) will register the consumer and then the task managers will de-register the consumer when job stops/fails. If the job is configured to restart on fail, then the consumer will not exist and the job will continuously fail over. After this change the connector will not trigger consumer deregistration in EFO if using the EAGER registration strategy. This will allow EAGER EFO configured connector applications to use restart policies without continuously failing. The documentation is updated accordingly. ## Brief change log - *Add dedicated utility function for determining whether consumer deregistration is required.* - *Ensure consumer deregistration now only occurs when EFO is configured using the LAZY registration strategy.* - *Update the documentation to reflect the changes.* ## Verifying this change Parts of this change are already covered by existing tests in `org.apache.flink.streaming.connectors.kinesis.util.StreamConsumerRegistrarUtilTest`. This change added tests and can be verified as follows: - *Added unit test to verify deregistration criteria i.e. 
`org.apache.flink.streaming.connectors.kinesis.util.StreamConsumerRegistrarUtilTest#testDeregisterStreamConsumersOnlyDeregistersEFOLazilyInitializedConsumers`* - *Manually verified the change by running a Flink application with a fixed delay restart strategy and a FlinkKinesisConsumer with EAGER EFO configured.* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): **No** - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: **No** - The serializers: **No** - The runtime per-record code paths (performance sensitive): **No** - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: **_Yes_** - The S3 file system connector: **No** ## Documentation - References to the EAGER EFO deregistration policy have been updated in `datastream/kinesis.md` and `table/kinesis.md`.
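The core of the fix described above is a single predicate: deregister the stream consumer only when EFO was configured with the LAZY registration strategy. A self-contained sketch; the enum mirrors the registration strategies named in the PR description, and the method is illustrative rather than the actual `StreamConsumerRegistrarUtil` code:

```java
public class DeregistrationCheck {
    /** EFO stream-consumer registration strategies from the Kinesis connector docs. */
    public enum EfoRegistrationType { LAZY, EAGER, NONE }

    /**
     * Only LAZY-registered consumers are owned by the task managers, so only they
     * should be deregistered when the job stops. EAGER consumers are registered by
     * the client (usually the job manager) and must survive restarts, otherwise a
     * restart policy sends the job into a continuous fail-over loop.
     */
    public static boolean requiresDeregistration(boolean efoEnabled, EfoRegistrationType type) {
        return efoEnabled && type == EfoRegistrationType.LAZY;
    }
}
```

With this check in place, an EAGER-configured application can use a fixed-delay restart strategy without its consumer disappearing between attempts.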
[GitHub] [flink] flinkbot commented on pull request #17430: [FLINK-24431][kinesis][efo] Stop consumer deregistration when EAGER EFO configured into 1.13.x
flinkbot commented on pull request #17430:
URL: https://github.com/apache/flink/pull/17430#issuecomment-937889110

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks

Last check on commit e2027807c6a1b22c82dc0c3d0c7f5e961ba25af5 (Thu Oct 07 15:12:44 UTC 2021) ✅ no warnings

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.

The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

### Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] rudikershaw opened a new pull request #17430: [FLINK-24431][kinesis][efo] Stop consumer deregistration when EAGER EFO configured into 1.13.x
rudikershaw opened a new pull request #17430:
URL: https://github.com/apache/flink/pull/17430

## What is the purpose of the change

This is a fix being backported into 1.13; please see #17417 for the original PR. The following is the original text from the previous pull request:

The EFO Kinesis connector registers and de-registers stream consumers based on the [configured registration strategy](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/datastream/kinesis/#efo-stream-consumer-registrationderegistration). When EAGER is used, the client (usually the job manager) registers the consumer, and the task managers de-register it when the job stops or fails. If the job is configured to restart on failure, the consumer will then no longer exist and the job will fail over continuously. After this change the connector no longer triggers consumer deregistration in EFO when the EAGER registration strategy is used. This allows applications with EAGER EFO configured to use restart policies without continuously failing. The documentation is updated accordingly.

## Brief change log

- *Add a dedicated utility function for determining whether consumer deregistration is required.*
- *Ensure consumer deregistration now only occurs when EFO is configured with the LAZY registration strategy.*
- *Update the documentation to reflect the changes.*

## Verifying this change

Parts of this change are already covered by existing tests in `org.apache.flink.streaming.connectors.kinesis.util.StreamConsumerRegistrarUtilTest`. This change added tests and can be verified as follows:

- *Added a unit test to verify the deregistration criteria, i.e. `org.apache.flink.streaming.connectors.kinesis.util.StreamConsumerRegistrarUtilTest#testDeregisterStreamConsumersOnlyDeregistersEFOLazilyInitializedConsumers`.*
- *Manually verified the change by running a Flink application with a fixed-delay restart strategy and a FlinkKinesisConsumer with EAGER EFO configured.*

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): **No**
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: **No**
- The serializers: **No**
- The runtime per-record code paths (performance sensitive): **No**
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: **_Yes_**
- The S3 file system connector: **No**

## Documentation

- References to the EAGER EFO deregistration policy have been updated in `datastream/kinesis.md` and `table/kinesis.md`.
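As context for the manual verification step above, an EAGER EFO consumer configuration looks roughly like the following. This is a sketch under assumptions: the property key strings and the consumer name are illustrative approximations of the connector's documented `ConsumerConfigConstants` keys and may differ by connector version, so check the Kinesis connector documentation for your release before using them.

```java
import java.util.Properties;

public class EagerEfoConfig {

    /** Builds an illustrative EFO consumer configuration with EAGER registration. */
    public static Properties eagerEfoProperties() {
        Properties config = new Properties();
        config.setProperty("aws.region", "us-east-1");
        // Switch the record publisher from polling to enhanced fan-out (EFO).
        // Key string is an assumption based on the connector docs.
        config.setProperty("flink.stream.recordpublisher", "EFO");
        // The stream consumer name registered against the Kinesis stream (hypothetical name).
        config.setProperty("flink.stream.efo.consumername", "my-flink-efo-consumer");
        // EAGER: the client registers the consumer at job submission; after this
        // fix, the task managers no longer deregister it when the job stops or fails.
        config.setProperty("flink.stream.efo.registration", "EAGER");
        return config;
    }

    public static void main(String[] args) {
        Properties config = eagerEfoProperties();
        System.out.println(config.getProperty("flink.stream.recordpublisher"));
        System.out.println(config.getProperty("flink.stream.efo.registration"));
    }
}
```

Such a `Properties` object would then be passed to the `FlinkKinesisConsumer` constructor alongside the stream name and a deserialization schema; with EAGER registration and a fixed-delay restart strategy, the job should now recover without the consumer-not-found failure loop this PR fixes.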