[GitHub] [flink] flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink
flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink URL: https://github.com/apache/flink/pull/9217#issuecomment-514589043 ## CI report: * 516e655f7f0853d6585ae5de2fbecc438d57e474 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120432519) * fee6f2df235f113b7757ce436ee127711b0094e6 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121184693) * 61c360e0902ded2939ba3c8b9662a1b58074e4d1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121348454) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] eamontaaffe commented on a change in pull request #9178: Typo in `scala_api_quickstart.md`
eamontaaffe commented on a change in pull request #9178: Typo in `scala_api_quickstart.md` URL: https://github.com/apache/flink/pull/9178#discussion_r309045044 ## File path: docs/dev/projectsetup/scala_api_quickstart.md ## @@ -217,7 +217,7 @@ take a look at the [Stream Processing Application Tutorial]({{ site.baseurl }}/g If you are writing a batch processing application and you are looking for inspiration what to write, take a look at the [Batch Application Examples]({{ site.baseurl }}/dev/batch/examples.html) -For a complete overview over the APIa, have a look at the +For a complete overview over the API, have a look at the Review comment: I think it is just referring to one API so it doesn't need to be plural?
[GitHub] [flink] eamontaaffe commented on a change in pull request #9178: Typo in `scala_api_quickstart.md`
eamontaaffe commented on a change in pull request #9178: Typo in `scala_api_quickstart.md` URL: https://github.com/apache/flink/pull/9178#discussion_r309045044 ## File path: docs/dev/projectsetup/scala_api_quickstart.md ## @@ -217,7 +217,7 @@ take a look at the [Stream Processing Application Tutorial]({{ site.baseurl }}/g If you are writing a batch processing application and you are looking for inspiration what to write, take a look at the [Batch Application Examples]({{ site.baseurl }}/dev/batch/examples.html) -For a complete overview over the APIa, have a look at the +For a complete overview over the API, have a look at the Review comment: I think it is just referring to one API so it doesn't need to be plural. Are there multiple APIs?
[GitHub] [flink] flinkbot edited a comment on issue #9279: [FLINK-13423][hive] Unable to find function in hive 1
flinkbot edited a comment on issue #9279: [FLINK-13423][hive] Unable to find function in hive 1 URL: https://github.com/apache/flink/pull/9279#issuecomment-516401194 ## CI report: * ce30c4da05c4c788bbeb86a812e26f6d89ca35a2 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121244540) * f7ef0aa1257e2ddbabf4f6382db1947791386708 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121348089)
[jira] [Commented] (FLINK-13497) Checkpoints can complete after CheckpointFailureManager fails job
[ https://issues.apache.org/jira/browse/FLINK-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896773#comment-16896773 ] vinoyang commented on FLINK-13497: -- [~yunta] I have no objection to stopping the checkpoint scheduler. I just explained that calling {{CheckpointCoordinator#stopCheckpointScheduler}} directly is not a good choice in the long run, because it indirectly calls {{CheckpointFailureManager#handleCheckpointException}}, which will become more complicated when we change the failure count logic in the future. I am just wondering if we need a pure cleanup mechanism that doesn't involve counting. > Checkpoints can complete after CheckpointFailureManager fails job > - > > Key: FLINK-13497 > URL: https://issues.apache.org/jira/browse/FLINK-13497 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Critical > Fix For: 1.9.0 > > > I think that we introduced with FLINK-12364 an inconsistency wrt job termination and checkpointing. In FLINK-9900 it was discovered that checkpoints can complete even after the {{CheckpointFailureManager}} decided to fail a job. I think the expected behaviour should be that we fail all pending checkpoints once the {{CheckpointFailureManager}} decides to fail the job. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] hongtao12310 commented on a change in pull request #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn
hongtao12310 commented on a change in pull request #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn URL: https://github.com/apache/flink/pull/9237#discussion_r309035257 ## File path: flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/catalog/hive/factories/HiveCatalogFactoryTest.java ## @@ -54,9 +65,49 @@ public void test() { checkEquals(expectedCatalog, (HiveCatalog) actualCatalog); } + @Test + public void testLoadHDFSConfigFromEnv() throws IOException { + final String k1 = "what is connector?"; + final String v1 = "Hive"; + final String catalogName = "HiveCatalog"; + + // set HADOOP_CONF_DIR env + final File hadoopConfDir = tempFolder.newFolder(); + final File hdfsSiteFile = new File(hadoopConfDir, "hdfs-site.xml"); + writeProperty(hdfsSiteFile, k1, v1); + final Map<String, String> originalEnv = System.getenv(); + final Map<String, String> newEnv = new HashMap<>(originalEnv); + newEnv.put("HADOOP_CONF_DIR", hadoopConfDir.getAbsolutePath()); + CommonTestUtils.setEnv(newEnv); + + // create HiveCatalog using the Hadoop configuration + final CatalogDescriptor catalogDescriptor = new HiveCatalogDescriptor(); + final Map<String, String> properties = catalogDescriptor.toProperties(); + final HiveCatalog hiveCatalog = (HiveCatalog) TableFactoryService.find(CatalogFactory.class, properties) + .createCatalog(catalogName, properties); + final HiveConf hiveConf = hiveCatalog.getHiveConf(); + // set the env back + CommonTestUtils.setEnv(originalEnv); Review comment: @lirui-apache I have updated the test case. Please take a look.
[jira] [Commented] (FLINK-13497) Checkpoints can complete after CheckpointFailureManager fails job
[ https://issues.apache.org/jira/browse/FLINK-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896767#comment-16896767 ] Yun Tang commented on FLINK-13497: -- [~yanghua], there still exists a gap between failing all pending checkpoints and failing the job. And if we could not stop the checkpoint scheduler, some checkpoints could still be triggered and completed unexpectedly. We might simplify the logic of aborting pending checkpoints without {{CheckpointFailureManager}} involved.
[GitHub] [flink] flinkbot edited a comment on issue #9221: [FLINK-13376][datastream] ContinuousFileReaderOperator should respect…
flinkbot edited a comment on issue #9221: [FLINK-13376][datastream] ContinuousFileReaderOperator should respect… URL: https://github.com/apache/flink/pull/9221#issuecomment-514645431 ## CI report: * 05fe281c79ab4bd2646ec949dcbbf0c2af9fec6e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120456728) * 4a83ed7377d64ff8c5e7205891900bf7874e72ec : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121220832) * 82f6c77387589fd9688477a2a7a8baef102060bc : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121246647) * c96bac9502930f77ebc7f2591868b3b21771c081 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121348099)
[GitHub] [flink] flinkbot edited a comment on issue #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn
flinkbot edited a comment on issue #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn URL: https://github.com/apache/flink/pull/9237#issuecomment-515414321 ## CI report: * 2c59c8b33bcbc200978ed7b5ad27311ada599aab : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120842331) * 61008edcd78722ccd9ed143a2bd005ca91ee39b4 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121050765) * 221bf689e1968f83bc99d862b3522a9ad7d06829 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121178448) * 61ef898749b371fcf1dd1f61ff5e0911023dea90 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121217673) * 50461d50cb1970497809d37d98c22c24d91406ac : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121352328)
[jira] [Commented] (FLINK-13497) Checkpoints can complete after CheckpointFailureManager fails job
[ https://issues.apache.org/jira/browse/FLINK-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896752#comment-16896752 ] vinoyang commented on FLINK-13497: -- Currently, the {{CheckpointFailureManager}} uses a simple counting mechanism to fail the job, so the situation [~till.rohrmann] described is possible. There is also another issue (FLINK-12514) tracking a better counting mechanism. The solution proposed by [~yunta] may fix this issue temporarily, but it may introduce another risk: {{stopCheckpointScheduler}} also calls {{CheckpointFailureManager#handleCheckpointException}}, which will make counting more complex in the future. Maybe we need a pure method that just fails all pending checkpoints when failing the job?
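The "pure method that just fails all pending checkpoints" floated in this thread could be sketched roughly as follows. This is a hypothetical illustration, not Flink's actual `CheckpointCoordinator` API; all class and method names here are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Flink's PendingCheckpoint.
class PendingCheckpointSketch {
    private boolean discarded = false;

    void abort(String reason) {
        // a real implementation would also release state handles here
        discarded = true;
    }

    boolean isDiscarded() {
        return discarded;
    }
}

// Hypothetical coordinator showing the "pure cleanup" idea: fail every
// pending checkpoint directly, without routing through the failure counter
// in CheckpointFailureManager, so job failover cannot perturb the counting.
class CheckpointCoordinatorSketch {
    private final List<PendingCheckpointSketch> pendingCheckpoints = new ArrayList<>();

    void addPending(PendingCheckpointSketch p) {
        pendingCheckpoints.add(p);
    }

    int numPendingCheckpoints() {
        return pendingCheckpoints.size();
    }

    void failAllPendingCheckpoints(String reason) {
        for (PendingCheckpointSketch p : pendingCheckpoints) {
            p.abort(reason);
        }
        pendingCheckpoints.clear();
    }
}
```

The point of the sketch is only the separation of concerns: aborting pending checkpoints is a plain cleanup step, while failure counting stays entirely inside the failure manager.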
[GitHub] [flink] hongtao12310 commented on a change in pull request #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn
hongtao12310 commented on a change in pull request #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn URL: https://github.com/apache/flink/pull/9237#discussion_r309028331 ## File path: flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/catalog/hive/factories/HiveCatalogFactoryTest.java ## @@ -54,9 +65,49 @@ public void test() { checkEquals(expectedCatalog, (HiveCatalog) actualCatalog); } + @Test + public void testLoadHDFSConfigFromEnv() throws IOException { + final String k1 = "what is connector?"; + final String v1 = "Hive"; + final String catalogName = "HiveCatalog"; + + // set HADOOP_CONF_DIR env + final File hadoopConfDir = tempFolder.newFolder(); + final File hdfsSiteFile = new File(hadoopConfDir, "hdfs-site.xml"); + writeProperty(hdfsSiteFile, k1, v1); + final Map<String, String> originalEnv = System.getenv(); + final Map<String, String> newEnv = new HashMap<>(originalEnv); + newEnv.put("HADOOP_CONF_DIR", hadoopConfDir.getAbsolutePath()); + CommonTestUtils.setEnv(newEnv); + + // create HiveCatalog using the Hadoop configuration + final CatalogDescriptor catalogDescriptor = new HiveCatalogDescriptor(); + final Map<String, String> properties = catalogDescriptor.toProperties(); + final HiveCatalog hiveCatalog = (HiveCatalog) TableFactoryService.find(CatalogFactory.class, properties) + .createCatalog(catalogName, properties); + final HiveConf hiveConf = hiveCatalog.getHiveConf(); + // set the env back + CommonTestUtils.setEnv(originalEnv); Review comment: Good catch!
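The "good catch" above concerns restoring the original environment even when the test body throws: as written in the diff, a failing assertion would leave the mutated environment in place for later tests. The usual fix is a try/finally around the body. Below is an illustrative sketch of that pattern; `EnvHolder.setEnv` is a made-up stand-in for `CommonTestUtils.setEnv` (which in Flink mutates the real JVM environment), not the actual test utility.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the mutable process environment used by the test utilities.
class EnvHolder {
    static Map<String, String> env = new HashMap<>();

    static void setEnv(Map<String, String> newEnv) {
        env = newEnv;
    }
}

class EnvRestoringSketch {
    // Sets a temporary variable, runs a (possibly failing) body, and
    // guarantees the original environment is restored via finally.
    static String withTempVariable(String key, String value, boolean failBody) {
        final Map<String, String> originalEnv = new HashMap<>(EnvHolder.env);
        final Map<String, String> newEnv = new HashMap<>(originalEnv);
        newEnv.put(key, value);
        EnvHolder.setEnv(newEnv);
        try {
            if (failBody) {
                // simulates an assertion failing inside the test body
                throw new AssertionError("test body failed");
            }
            return EnvHolder.env.get(key);
        } finally {
            // runs on both the success and the failure path
            EnvHolder.setEnv(originalEnv);
        }
    }
}
```

With this shape, the restore can never be skipped, which is exactly the hazard the reviewer pointed at in the non-finally version.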
[GitHub] [flink] flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37
flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37 URL: https://github.com/apache/flink/pull/9239#issuecomment-515435805 ## CI report: * bb0663ddbb6eeda06b756c4ffc7094e64dbdb5b9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120851212) * 86a460407693769f0d2afaa3597c70f202126099 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121104847) * 5c25e802c096e2688e6cfa01ff7f74d3c050eef5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121108411) * c26538e93fcad20bd337b3766ccdfc30d46380fd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121114320) * 4f32c8d7f8d14601e8caa28d1079ae3fdce0873e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121181320) * 7661c074e35ccbca351d80181dfdc8de6bdaea0b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121207981) * 71d581e54fad407304b329b90cb7f917b29fb922 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121350937)
[GitHub] [flink] flinkbot edited a comment on issue #9284: [Flink 13502] move CatalogTableStatisticsConverter & TreeNode to correct package
flinkbot edited a comment on issue #9284: [Flink 13502] move CatalogTableStatisticsConverter & TreeNode to correct package URL: https://github.com/apache/flink/pull/9284#issuecomment-516662768 ## CI report: * fff36fd941c13764ad55fc85e4604ab64689aa21 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121345562)
[jira] [Commented] (FLINK-13485) Translate "Table API Example Walkthrough" page into Chinese
[ https://issues.apache.org/jira/browse/FLINK-13485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896736#comment-16896736 ] Jark Wu commented on FLINK-13485: - It is a placeholder which will be generated as a table of contents (TOC). This line should not be translated. > Translate "Table API Example Walkthrough" page into Chinese > --- > > Key: FLINK-13485 > URL: https://issues.apache.org/jira/browse/FLINK-13485 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Jark Wu >Assignee: AT-Fieldless >Priority: Major > > FLINK-12747 has added a page to walkthrough Table API. We can translate it > into Chinese now. > The page is located in {{docs/getting-started/walkthroughs/table_api.zh.md}}
[GitHub] [flink] flinkbot edited a comment on issue #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength
flinkbot edited a comment on issue #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength URL: https://github.com/apache/flink/pull/8559#issuecomment-511466573 ## CI report: * b2e38d5e9dabd95409899c56a3064e75378fdba3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/119181531) * e547ec5bd814c1b6e3c94a2b2ebc64f86f3ca66e : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121350230)
[jira] [Commented] (FLINK-13433) Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value
[ https://issues.apache.org/jira/browse/FLINK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896737#comment-16896737 ] Jark Wu commented on FLINK-13433: - Yes. I think so [~jinyu.zj] > Do not fetch data from LookupableTableSource if the JoinKey in left side of > LookupJoin contains null value > -- > > Key: FLINK-13433 > URL: https://issues.apache.org/jira/browse/FLINK-13433 > Project: Flink > Issue Type: Task > Components: Table SQL / Planner >Affects Versions: 1.9.0, 1.10.0 >Reporter: Jing Zhang >Assignee: Jing Zhang >Priority: Minor > > For LookupJoin, if joinKey in left side of a LeftOuterJoin/InnerJoin contains > null values, there is no need to fetch data from `LookupableTableSource`. > However, we don't shortcut the fetch function under the case at present, the > correctness of results depends on the `TableFunction` implementation of each > `LookupableTableSource`.
[GitHub] [flink] flinkbot edited a comment on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf
flinkbot edited a comment on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf URL: https://github.com/apache/flink/pull/9232#issuecomment-515295549 ## CI report: * 199e67fb426db2cf4389112ba914089e5dee3f35 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120801913) * 4b4ef80ebf2dabdc924407eae6a7b5d079ef6ce6 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121001697) * 4384d16b6ec36516a5126855388ee2976cee1664 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121349821)
[GitHub] [flink] Aitozi commented on issue #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength
Aitozi commented on issue #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength URL: https://github.com/apache/flink/pull/8559#issuecomment-516677279 Sorry for the late response, I have addressed your comments. Please take a look again @zhijiangW
[jira] [Commented] (FLINK-13485) Translate "Table API Example Walkthrough" page into Chinese
[ https://issues.apache.org/jira/browse/FLINK-13485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896729#comment-16896729 ] AT-Fieldless commented on FLINK-13485: -- Hello, I am confused by "This will be replaced by the TOC {:toc}" in the first paragraph. What does "TOC {:toc}" mean in this sentence?
[GitHub] [flink] flinkbot edited a comment on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
flinkbot edited a comment on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#issuecomment-516382130 ## CI report: * 60432f857803273b0e8779f0948b318e764d86f0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121236467) * 3ad8cdc920adc3e497c92d7de96f4bf401af7677 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121349486)
[GitHub] [flink] lirui-apache commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf
lirui-apache commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf URL: https://github.com/apache/flink/pull/9232#issuecomment-516675990 @bowenli86 Thanks for pointing that out! Just rebased and fixed this test.
[jira] [Commented] (FLINK-13433) Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value
[ https://issues.apache.org/jira/browse/FLINK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896728#comment-16896728 ] Jing Zhang commented on FLINK-13433: [~jark] Thanks for reminding me. However, when the lookup key contains constant equivalence conditions, such as d.name = LiteralValue, if the literal value is null there is still no need to fetch data from LookupableTableSource, right?
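The shortcut discussed in FLINK-13433 — skip the dimension-table lookup whenever any join key is NULL, since under SQL equality semantics NULL can never match — can be sketched as follows. This is an illustrative outline, not the planner's real code path; `fetchFromSource` stands in for the `TableFunction` of a `LookupableTableSource`, and all names are invented for the sketch.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

class LookupJoinShortcut {
    // Stand-in for the lookup into the dimension table.
    static List<String> fetchFromSource(Object[] joinKey) {
        // a real implementation would query the external table here
        return Collections.singletonList("row-for-" + Arrays.toString(joinKey));
    }

    static List<String> lookup(Object[] joinKey) {
        for (Object key : joinKey) {
            if (key == null) {
                // NULL equals nothing in SQL: short-circuit without fetching
                return Collections.emptyList();
            }
        }
        return fetchFromSource(joinKey);
    }
}
```

The guard makes correctness independent of how each `TableFunction` happens to treat null keys, which is exactly the concern raised in the issue description.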
[GitHub] [flink] docete removed a comment on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete removed a comment on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#issuecomment-516674994 Thanks for your feedback @dawidwys. I updated the PR. Please take another look.
[GitHub] [flink] docete commented on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#issuecomment-516674994 Thanks for your feedback @dawidwys. I updated the PR. Please take another look.
[GitHub] [flink] docete commented on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on issue #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#issuecomment-516674981 Thanks for your feedback @dawidwys. I updated the PR. Please take another look.
[GitHub] [flink] lirui-apache commented on a change in pull request #9279: [FLINK-13423][hive] Unable to find function in hive 1
lirui-apache commented on a change in pull request #9279: [FLINK-13423][hive] Unable to find function in hive 1 URL: https://github.com/apache/flink/pull/9279#discussion_r309021648 ## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShimV1.java ## @@ -84,6 +84,7 @@ public Function getFunction(IMetaStoreClient client, String dbName, String funct // hive-1.x doesn't throw NoSuchObjectException if function doesn't exist, instead it throws a MetaException return client.getFunction(dbName, functionName); } catch (MetaException e) { + // need to check the cause and message of this MetaException to decide whether it should actually be a NoSuchObjectException Review comment: Yeah it's specific to 1.2.1. Actually there's a comment about that a couple of lines above: ``` // hive-1.x doesn't throw NoSuchObjectException if function doesn't exist, instead it throws a MetaException ```
[GitHub] [flink] flinkbot edited a comment on issue #9283: [FLINK-13487][tests] Fix unstable partition lifecycle tests by ensuring TM has registered to JM before submitting task
flinkbot edited a comment on issue #9283: [FLINK-13487][tests] Fix unstable partition lifecycle tests by ensuring TM has registered to JM before submitting task URL: https://github.com/apache/flink/pull/9283#issuecomment-516656866 ## CI report: * afd7ba4663e2298776c0e02a4fe56cc1d3cf63a3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121343773)
[GitHub] [flink] flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink
flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink URL: https://github.com/apache/flink/pull/9217#issuecomment-514589043 ## CI report: * 516e655f7f0853d6585ae5de2fbecc438d57e474 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120432519) * fee6f2df235f113b7757ce436ee127711b0094e6 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121184693) * 61c360e0902ded2939ba3c8b9662a1b58074e4d1 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121348454)
[GitHub] [flink] flinkbot edited a comment on issue #9221: [FLINK-13376][datastream] ContinuousFileReaderOperator should respect…
flinkbot edited a comment on issue #9221: [FLINK-13376][datastream] ContinuousFileReaderOperator should respect… URL: https://github.com/apache/flink/pull/9221#issuecomment-514645431 ## CI report: * 05fe281c79ab4bd2646ec949dcbbf0c2af9fec6e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120456728) * 4a83ed7377d64ff8c5e7205891900bf7874e72ec : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121220832) * 82f6c77387589fd9688477a2a7a8baef102060bc : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121246647) * c96bac9502930f77ebc7f2591868b3b21771c081 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121348099)
[GitHub] [flink] flinkbot edited a comment on issue #9279: [FLINK-13423][hive] Unable to find function in hive 1
flinkbot edited a comment on issue #9279: [FLINK-13423][hive] Unable to find function in hive 1 URL: https://github.com/apache/flink/pull/9279#issuecomment-516401194 ## CI report: * ce30c4da05c4c788bbeb86a812e26f6d89ca35a2 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121244540) * f7ef0aa1257e2ddbabf4f6382db1947791386708 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121348089)
[GitHub] [flink] xuefuz commented on a change in pull request #9279: [FLINK-13423][hive] Unable to find function in hive 1
xuefuz commented on a change in pull request #9279: [FLINK-13423][hive] Unable to find function in hive 1 URL: https://github.com/apache/flink/pull/9279#discussion_r309019671 ## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShimV1.java ## @@ -84,6 +84,7 @@ public Function getFunction(IMetaStoreClient client, String dbName, String funct // hive-1.x doesn't throw NoSuchObjectException if function doesn't exist, instead it throws a MetaException return client.getFunction(dbName, functionName); } catch (MetaException e) { + // need to check the cause and message of this MetaException to decide whether it should actually be a NoSuchObjectException Review comment: Is this specific to Hive 1.2.1? If so, let's be clear, since we didn't have this check previously.
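The check discussed in this review thread can be sketched as follows. This is a minimal, assumption-laden stand-in, not the actual HiveShimV1 code: the exception class below is a simplified substitute for `org.apache.hadoop.hive.metastore.api.MetaException`, and the matched string is hypothetical.

```java
// Hedged sketch: Hive 1.x's getFunction throws a generic MetaException when a
// function is missing, so the caller walks the message/cause chain to decide
// whether it really means "no such object". The exception type here is a
// simplified stand-in for the real Hive metastore exception.
public class MetaExceptionSketch {

    static class MetaException extends Exception {
        MetaException(String message) { super(message); }
    }

    // True if the exception's message (or any cause's message) indicates a
    // missing metastore object, so it could be rethrown as NoSuchObjectException.
    static boolean indicatesMissingObject(MetaException e) {
        for (Throwable t = e; t != null; t = t.getCause()) {
            String msg = t.getMessage();
            if (msg != null && msg.contains("NoSuchObjectException")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        MetaException missing =
            new MetaException("NoSuchObjectException(message:function myfunc does not exist)");
        MetaException other = new MetaException("connection refused");
        System.out.println(indicatesMissingObject(missing)); // true
        System.out.println(indicatesMissingObject(other));   // false
    }
}
```

String matching on exception messages is brittle across Hive versions, which is presumably why the reviewer asks whether the check is specific to Hive 1.2.1.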
[jira] [Commented] (FLINK-13501) Fixes a few issues in documentation for Hive integration
[ https://issues.apache.org/jira/browse/FLINK-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896711#comment-16896711 ] Terry Wang commented on FLINK-13501: Hi [~xuefuz] and [~tiwalter]. I'd like to fix these problems. Please feel free to assign this Jira to me. > Fixes a few issues in documentation for Hive integration > > > Key: FLINK-13501 > URL: https://issues.apache.org/jira/browse/FLINK-13501 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive, Table SQL / API >Affects Versions: 1.9.0 >Reporter: Xuefu Zhang >Priority: Critical > Fix For: 1.9.0, 1.10.0 > > Attachments: Screen Shot 2019-07-30 at 3.21.25 PM.png, Screen Shot > 2019-07-30 at 3.25.13 PM.png > > > Going thru existing Hive doc I found the following issues that should be > addressed: > 1. Section "Hive Integration" should come after "SQL client" (at the same > level). > 2. In Catalog section, there are headers named "Hive Catalog". Also, some > information is duplicated with that in "Hive Integration" > 3. "Data Type Mapping" is Hive specific and should probably move to "Hive > integration" -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] lirui-apache commented on issue #9279: [FLINK-13423][hive] Unable to find function in hive 1
lirui-apache commented on issue #9279: [FLINK-13423][hive] Unable to find function in hive 1 URL: https://github.com/apache/flink/pull/9279#issuecomment-516670776 @xuefuz PR updated to address your comment
[GitHub] [flink] bowenli86 commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf
bowenli86 commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf URL: https://github.com/apache/flink/pull/9232#issuecomment-516668599 ran tests locally and found the test is failing

```
[ERROR] Errors:
[ERROR]   TableEnvHiveConnectorTest.setup:59 » NullPointer Hive version cannot be null
[INFO]
[ERROR] Tests run: 227, Failures: 0, Errors: 1, Skipped: 0
```
[GitHub] [flink] lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats
lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats URL: https://github.com/apache/flink/pull/9264#discussion_r309016831 ## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/batch/connectors/hive/HiveTableOutputFormat.java ## @@ -256,10 +258,12 @@ public void configure(Configuration parameters) { public void open(int taskNumber, int numTasks) throws IOException { try { StorageDescriptor sd = hiveTablePartition.getStorageDescriptor(); - serializer = (AbstractSerDe) Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance(); + serializer = (Serializer) Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance(); + Preconditions.checkArgument(serializer instanceof Deserializer, Review comment: Any suggestions about the name? Like `serDe`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
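The diff above casts the reflectively created serialization library to `Serializer` and then asserts it is also a `Deserializer`. A self-contained sketch of that dual-role check is below; the interfaces and the `DummySerDe` class are stand-ins for Hive's real SerDe types (such as `AbstractSerDe`), not the actual connector code.

```java
// Hedged sketch of the check in HiveTableOutputFormat.open: instantiate the
// table's serialization library reflectively and verify it implements both
// roles. Writing records needs a Serializer, while schema handling needs a
// Deserializer; Hive SerDe libraries normally implement both.
public class SerDeSketch {

    interface Serializer {}
    interface Deserializer {}

    // Stand-in for a real Hive SerDe, which implements both interfaces.
    public static class DummySerDe implements Serializer, Deserializer {}

    static Serializer createSerDe(String serdeLib) throws Exception {
        Serializer serDe = (Serializer) Class.forName(serdeLib)
            .getDeclaredConstructor().newInstance();
        // fail fast if the configured library cannot also deserialize
        if (!(serDe instanceof Deserializer)) {
            throw new IllegalArgumentException(serdeLib + " is not also a Deserializer");
        }
        return serDe;
    }

    public static void main(String[] args) throws Exception {
        Serializer serDe = createSerDe(DummySerDe.class.getName());
        System.out.println(serDe instanceof Deserializer); // true
    }
}
```

Given that the instance must satisfy both interfaces, the reviewer's suggested name `serDe` reads more accurately than `serializer`.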
[GitHub] [flink] lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats
lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats URL: https://github.com/apache/flink/pull/9264#discussion_r309016668 ## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java ## @@ -494,8 +495,22 @@ private static CatalogBaseTable instantiateCatalogTable(Table hiveTable) { String comment = properties.remove(HiveCatalogConfig.COMMENT); // Table schema + List fields; + if (org.apache.hadoop.hive.ql.metadata.Table.hasMetastoreBasedSchema(hiveConf, + hiveTable.getSd().getSerdeInfo().getSerializationLib())) { Review comment: I think you mean `org.apache.hadoop.hive.ql.metadata.Table.getCols()`. But we only have an instance of `org.apache.hadoop.hive.metastore.api.Table` here. Do you think we should create a `org.apache.hadoop.hive.ql.metadata.Table` and call `getCols()`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
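The branch being discussed, choosing where table columns come from based on the SerDe, can be illustrated with a simplified sketch. Everything here is a hypothetical stand-in: the real decision uses `org.apache.hadoop.hive.ql.metadata.Table.hasMetastoreBasedSchema` and Hive's own table types, while this sketch only models the control flow.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the schema-resolution choice above: when the table's
// SerDe keeps its schema in the metastore, read columns from the metastore
// record; otherwise take them from the storage-derived schema (e.g. Avro).
// The SerDe names and column lists are made up for the example.
public class SchemaSourceSketch {

    static final List<String> METASTORE_BASED_SERDES =
        Arrays.asList("LazySimpleSerDe", "OrcSerde");

    static boolean hasMetastoreBasedSchema(String serdeLib) {
        return METASTORE_BASED_SERDES.contains(serdeLib);
    }

    // Mirrors the branch in instantiateCatalogTable: pick the column source.
    static List<String> resolveFields(String serdeLib,
                                      List<String> metastoreCols,
                                      List<String> storageCols) {
        return hasMetastoreBasedSchema(serdeLib) ? metastoreCols : storageCols;
    }

    public static void main(String[] args) {
        List<String> fromMetastore = Arrays.asList("id", "name");
        List<String> fromStorage = Arrays.asList("id", "name", "extra");
        System.out.println(resolveFields("OrcSerde", fromMetastore, fromStorage));  // [id, name]
        System.out.println(resolveFields("AvroSerDe", fromMetastore, fromStorage)); // [id, name, extra]
    }
}
```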
[GitHub] [flink] bowenli86 commented on a change in pull request #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37
bowenli86 commented on a change in pull request #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37 URL: https://github.com/apache/flink/pull/9239#discussion_r309015509 ## File path: flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/catalog/hive/HiveCatalogDataTypeTest.java ## @@ -122,20 +121,18 @@ public void testDataTypes() throws Exception { } @Test - public void testNonExactlyMatchedDataTypes() throws Exception { + public void testNonSupportedBinaryDataTypes() throws Exception { Review comment: we should break this down to two individual tests, one for BINARY and one for VARBINARY This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] bowenli86 commented on a change in pull request #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37
bowenli86 commented on a change in pull request #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37 URL: https://github.com/apache/flink/pull/9239#discussion_r309015597 ## File path: flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/catalog/hive/HiveCatalogDataTypeTest.java ## @@ -122,20 +121,18 @@ public void testDataTypes() throws Exception { } @Test - public void testNonExactlyMatchedDataTypes() throws Exception { + public void testNonSupportedBinaryDataTypes() throws Exception { DataType[] types = new DataType[] { - DataTypes.BINARY(BinaryType.MAX_LENGTH), - DataTypes.VARBINARY(VarBinaryType.MAX_LENGTH) + DataTypes.BINARY(BinaryType.MAX_LENGTH), Review comment: pls revert unnecessary change This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] lirui-apache commented on a change in pull request #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn
lirui-apache commented on a change in pull request #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn URL: https://github.com/apache/flink/pull/9237#discussion_r309015087 ## File path: flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/catalog/hive/factories/HiveCatalogFactoryTest.java ## @@ -54,9 +65,49 @@ public void test() { checkEquals(expectedCatalog, (HiveCatalog) actualCatalog); } + @Test + public void testLoadHDFSConfigFromEnv() throws IOException { + final String k1 = "what is connector?"; + final String v1 = "Hive"; + final String catalogName = "HiveCatalog"; + + // set HADOOP_CONF_DIR env + final File hadoopConfDir = tempFolder.newFolder(); + final File hdfsSiteFile = new File(hadoopConfDir, "hdfs-site.xml"); + writeProperty(hdfsSiteFile, k1, v1); + final Map originalEnv = System.getenv(); + final Map newEnv = new HashMap<>(originalEnv); + newEnv.put("HADOOP_CONF_DIR", hadoopConfDir.getAbsolutePath()); + CommonTestUtils.setEnv(newEnv); + + // create HiveCatalog use the Hadoop Configuration + final CatalogDescriptor catalogDescriptor = new HiveCatalogDescriptor(); + final Map properties = catalogDescriptor.toProperties(); + final HiveCatalog hiveCatalog = (HiveCatalog) TableFactoryService.find(CatalogFactory.class, properties) + .createCatalog(catalogName, properties); + final HiveConf hiveConf = hiveCatalog.getHiveConf(); + // set the Env back + CommonTestUtils.setEnv(originalEnv); Review comment: I'd suggest move this to a finally block, so that if this test case fails, it doesn't pollute env for other tests. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
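The reviewer's suggestion, restore the environment in a `finally` block so a failing test cannot pollute later tests, follows the standard save/mutate/restore pattern. Below is a self-contained sketch of that pattern; a plain map stands in for the real process environment manipulated via `CommonTestUtils.setEnv`.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suggested try/finally pattern: mutate a process-wide setting
// for the duration of a test body, and restore it even if the body throws.
public class EnvRestoreSketch {

    static Map<String, String> env = new HashMap<>();

    static void runWithEnv(String key, String value, Runnable testBody) {
        Map<String, String> original = new HashMap<>(env); // save
        env.put(key, value);                               // mutate
        try {
            testBody.run();
        } finally {
            env = original; // restore even on failure, so other tests see a clean env
        }
    }

    public static void main(String[] args) {
        env.put("PATH", "/usr/bin");
        try {
            runWithEnv("HADOOP_CONF_DIR", "/tmp/conf",
                () -> { throw new RuntimeException("test failed"); });
        } catch (RuntimeException expected) {
            // the test failure propagates, but the env was still restored
        }
        System.out.println(env.containsKey("HADOOP_CONF_DIR")); // false
    }
}
```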
[jira] [Commented] (FLINK-13344) Translate "How to Contribute" page into Chinese.
[ https://issues.apache.org/jira/browse/FLINK-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896702#comment-16896702 ] WangHengWei commented on FLINK-13344: - OK, I'm glad to do it. > Translate "How to Contribute" page into Chinese. > - > > Key: FLINK-13344 > URL: https://issues.apache.org/jira/browse/FLINK-13344 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Project Website >Reporter: Jark Wu >Assignee: WangHengWei >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > The page is https://flink.apache.org/zh/contributing/how-to-contribute.html > The markdown file is located in > https://github.com/apache/flink-web/blob/asf-site/contributing/how-to-contribute.zh.md > Before start working on this, please read translation guideline: > https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications
[GitHub] [flink] flinkbot commented on issue #9284: [Flink 13502] move CatalogTableStatisticsConverter & TreeNode to correct package
flinkbot commented on issue #9284: [Flink 13502] move CatalogTableStatisticsConverter & TreeNode to correct package URL: https://github.com/apache/flink/pull/9284#issuecomment-516662768 ## CI report: * fff36fd941c13764ad55fc85e4604ab64689aa21 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121345562)
[GitHub] [flink] ifndef-SleePy commented on issue #9269: [FLINK-9900][tests] Fix unstable ZooKeeperHighAvailabilityITCase
ifndef-SleePy commented on issue #9269: [FLINK-9900][tests] Fix unstable ZooKeeperHighAvailabilityITCase URL: https://github.com/apache/flink/pull/9269#issuecomment-516661890 Hi @tillrohrmann , I think we could merge this PR. Actually this `blockSnapshotLatch` only works when the race condition happens; it should only block checkpoint 7, because normally this test job cannot recover due to the missing checkpoint file, so no task should be deployed after checkpoint 6. WRT the other race condition in the alternative, I will check whether it's a bug or not. We could talk about it in another thread if needed.
[jira] [Commented] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896690#comment-16896690 ] godfrey he commented on FLINK-13502: yes, it's just a code cleanup > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Critical > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, {{TreeNode}} is in > {{org.apache.flink.table.planner.plan}}. {{CatalogTableStatisticsConverter}} > should be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be > in {{org.apache.flink.table.planner.expressions}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r309011200 ## File path: tools/travis/splits/split_misc.sh ## @@ -49,7 +49,8 @@ run_test "Queryable state (rocksdb) end-to-end test" "$END_TO_END_DIR/test-scrip run_test "Queryable state (rocksdb) with TM restart end-to-end test" "$END_TO_END_DIR/test-scripts/test_queryable_state_restart_tm.sh" "skip_check_exceptions" run_test "DataSet allround end-to-end test" "$END_TO_END_DIR/test-scripts/test_batch_allround.sh" -run_test "Streaming SQL end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_sql.sh" "skip_check_exceptions" +run_test "Streaming SQL end-to-end test (Old planner)" "$END_TO_END_DIR/test-scripts/test_streaming_sql.sh" "skip_check_exceptions" Review comment: Agree This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9284: [Flink 13502] move CatalogTableStatisticsConverter & TreeNode to correct package
flinkbot commented on issue #9284: [Flink 13502] move CatalogTableStatisticsConverter & TreeNode to correct package URL: https://github.com/apache/flink/pull/9284#issuecomment-516661359 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he updated FLINK-13502: --- Description: currently, {{CatalogTableStatisticsConverter}} is in {{org.apache.flink.table.util}}, {{TreeNode}} is in {{org.apache.flink.table.planner.plan}}. {{CatalogTableStatisticsConverter}} should be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in {{org.apache.flink.table.planner.expressions}}. (was: currently, {{CatalogTableStatisticsConverter}} is in {{org.apache.flink.table.util}}, {{TreeNode}} is in {{org.apache.flink.table.plan}}. {{CatalogTableStatisticsConverter}} should be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in {{org.apache.flink.table.planner.expressions}}.) > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Critical > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, {{TreeNode}} is in > {{org.apache.flink.table.planner.plan}}. {{CatalogTableStatisticsConverter}} > should be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be > in {{org.apache.flink.table.planner.expressions}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (FLINK-13446) Row count sliding window outputs incorrectly in blink planner
[ https://issues.apache.org/jira/browse/FLINK-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-13446: Priority: Critical (was: Blocker) > Row count sliding window outputs incorrectly in blink planner > - > > Key: FLINK-13446 > URL: https://issues.apache.org/jira/browse/FLINK-13446 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.9.0 >Reporter: Hequn Cheng >Assignee: Hequn Cheng >Priority: Critical > Labels: pull-request-available > Fix For: 1.9.0, 1.10.0 > > Time Spent: 10m > Remaining Estimate: 0h > > For blink planner, the Row count sliding window outputs incorrectly. The > window assigner assigns less window than what expected. This means the window > outputs fewer data. The bug can be reproduced by the following test: > {code:java} > @Test > def testGroupWindowWithoutKeyInProjection(): Unit = { > val data = List( > (1L, 1, "Hi", 1, 1), > (2L, 2, "Hello", 2, 2), > (4L, 2, "Hello", 2, 2), > (8L, 3, "Hello world", 3, 3), > (16L, 3, "Hello world", 3, 3)) > val stream = failingDataSource(data) > val table = stream.toTable(tEnv, 'long, 'int, 'string, 'int2, 'int3, > 'proctime.proctime) > val weightAvgFun = new WeightedAvg > val countDistinct = new CountDistinct > val windowedTable = table > .window(Slide over 2.rows every 1.rows on 'proctime as 'w) > .groupBy('w, 'int2, 'int3, 'string) > .select(weightAvgFun('long, 'int), countDistinct('long)) > val sink = new TestingAppendSink > windowedTable.toAppendStream[Row].addSink(sink) > env.execute() > val expected = Seq("12,2", "8,1", "2,1", "3,2", "1,1") > assertEquals(expected.sorted, sink.getAppendResults.sorted) > } > {code} > The expected output is Seq("12,2", "8,1", "2,1", "3,2", "1,1") while the > actual output is Seq("12,2", "3,2") > To fix the problem, we can correct the assign logic in > CountSlidingWindowAssigner.assignWindows. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
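The proposed fix corrects how `CountSlidingWindowAssigner.assignWindows` maps an element to its windows. Below is an illustrative sketch, not Flink's actual assigner, of how the n-th element (0-based) of a key should be assigned to every count-based window containing it, for a window of `size` rows sliding every `slide` rows; the bug described above amounts to producing fewer window starts than this enumeration yields.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative count-sliding-window assignment: element n belongs to every
// window [start, start + size) with start a multiple of slide, start <= n,
// and start + size > n.
public class CountSlideSketch {

    // Returns the start indices of every window containing element n.
    static List<Long> assignWindows(long n, long size, long slide) {
        List<Long> starts = new ArrayList<>();
        long lastStart = n - (n % slide); // latest window starting at or before n
        for (long start = lastStart; start > n - size; start -= slide) {
            if (start >= 0) {
                starts.add(start);
            }
        }
        return starts;
    }

    public static void main(String[] args) {
        // size 2, slide 1 (as in the test above): element 0 belongs only to
        // window [0,2), while element 3 belongs to windows [3,5) and [2,4).
        System.out.println(assignWindows(0, 2, 1)); // [0]
        System.out.println(assignWindows(3, 2, 1)); // [3, 2]
    }
}
```

With `Slide over 2.rows every 1.rows`, every element after the first must land in two windows; an assigner that assigns only one window per element would explain the missing output rows in the test.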
[GitHub] [flink] docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r309010785 ## File path: flink-end-to-end-tests/flink-stream-sql-test/src/main/java/org/apache/flink/sql/tests/StreamSQLTestProgram.java ## @@ -202,7 +228,11 @@ public GeneratorTableSource(int numKeys, float recordsPerKeyAndSecond, int durat @Override public DataStream getDataStream(StreamExecutionEnvironment execEnv) { - return execEnv.addSource(new Generator(numKeys, recordsPerKeyAndSecond, durationSeconds, offsetSeconds)); + return execEnv + .addSource(new Generator(numKeys, recordsPerKeyAndSecond, durationSeconds, offsetSeconds)) Review comment: FLINK-13494 will fix the parallelism setting logic for source/sink and remain compatible with the old planner. After that, this explicit setting can be removed.
[GitHub] [flink] docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r309010871 ## File path: flink-end-to-end-tests/flink-stream-sql-test/src/main/java/org/apache/flink/sql/tests/StreamSQLTestProgram.java ## @@ -202,7 +228,11 @@ public GeneratorTableSource(int numKeys, float recordsPerKeyAndSecond, int durat @Override public DataStream getDataStream(StreamExecutionEnvironment execEnv) { - return execEnv.addSource(new Generator(numKeys, recordsPerKeyAndSecond, durationSeconds, offsetSeconds)); + return execEnv + .addSource(new Generator(numKeys, recordsPerKeyAndSecond, durationSeconds, offsetSeconds)) Review comment: FLINK-13494 will fix the parallelism setting logic for source/sink and remain compatible with the old planner. After that, this explicit setting can be removed.
[GitHub] [flink] godfreyhe opened a new pull request #9284: [Flink 13502] move CatalogTableStatisticsConverter & TreeNode to correct package
godfreyhe opened a new pull request #9284: [Flink 13502] move CatalogTableStatisticsConverter & TreeNode to correct package URL: https://github.com/apache/flink/pull/9284 ## What is the purpose of the change *move CatalogTableStatisticsConverter & TreeNode to correct package* ## Brief change log - *move CatalogTableStatisticsConverter to planner.utils* - *move TreeNode to planner.expressions* - *rename PlannerQueryOperation to RelQueryOperation* ## Verifying this change This change is a code cleanup without any test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / **not documented**) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r309010810 ## File path: flink-end-to-end-tests/test-scripts/test_streaming_sql.sh ## @@ -21,15 +21,14 @@ source "$(dirname "$0")"/common.sh TEST_PROGRAM_JAR=${END_TO_END_DIR}/flink-stream-sql-test/target/StreamSQLTestProgram.jar -# copy flink-table jar into lib folder -add_optional_lib "table" - start_cluster $FLINK_DIR/bin/taskmanager.sh start $FLINK_DIR/bin/taskmanager.sh start $FLINK_DIR/bin/taskmanager.sh start -$FLINK_DIR/bin/flink run -p 4 $TEST_PROGRAM_JAR -outputPath file://${TEST_DATA_DIR}/out/result +PLANNER="${1:-old}" Review comment: Agree
[jira] [Commented] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896686#comment-16896686 ] Jark Wu commented on FLINK-13502: - Currently, TreeNode is in {{org.apache.flink.table.planner.plan}}. > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Critical > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, {{TreeNode}} is in > {{org.apache.flink.table.plan}}. {{CatalogTableStatisticsConverter}} should > be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in > {{org.apache.flink.table.planner.expressions}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896687#comment-16896687 ] Jark Wu commented on FLINK-13502: - There are no class conflicts for these classes. I didn't set it as a blocker. > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Critical > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, {{TreeNode}} is in > {{org.apache.flink.table.plan}}. {{CatalogTableStatisticsConverter}} should > be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in > {{org.apache.flink.table.planner.expressions}}.
[GitHub] [flink] docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r309010505 ## File path: flink-end-to-end-tests/run-pre-commit-tests.sh ## @@ -60,6 +60,7 @@ run_test "Modern Kafka end-to-end test" "$END_TO_END_DIR/test-scripts/test_strea run_test "Kinesis end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_kinesis.sh" run_test "class loading end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_classloader.sh" run_test "Distributed cache end-to-end test" "$END_TO_END_DIR/test-scripts/test_streaming_distributed_cache_via_blob.sh" - +run_test "Streaming SQL end-to-end test (Old planner)" "$END_TO_END_DIR/test-scripts/test_streaming_sql.sh" "skip_check_exceptions" Review comment: Agree This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Assigned] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-13502: --- Assignee: godfrey he > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, {{TreeNode}} is in > {{org.apache.flink.table.plan}}. {{CatalogTableStatisticsConverter}} should > be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in > {{org.apache.flink.table.planner.expressions}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-13502: Priority: Critical (was: Major) > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: godfrey he >Priority: Critical > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, {{TreeNode}} is in > {{org.apache.flink.table.plan}}. {{CatalogTableStatisticsConverter}} should > be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in > {{org.apache.flink.table.planner.expressions}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r309010313 ## File path: flink-end-to-end-tests/flink-stream-sql-test/src/main/java/org/apache/flink/sql/tests/StreamSQLTestProgram.java ## @@ -202,7 +228,11 @@ public GeneratorTableSource(int numKeys, float recordsPerKeyAndSecond, int durat @Override public DataStream getDataStream(StreamExecutionEnvironment execEnv) { - return execEnv.addSource(new Generator(numKeys, recordsPerKeyAndSecond, durationSeconds, offsetSeconds)); + return execEnv + .addSource(new Generator(numKeys, recordsPerKeyAndSecond, durationSeconds, offsetSeconds)) Review comment: FLINK-13494 will fix the parallelism setting logic for source/sink, and remain compatible with the old planner. After that, this explicit setting can be removed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13344) Translate "How to Contribute" page into Chinese.
[ https://issues.apache.org/jira/browse/FLINK-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896683#comment-16896683 ] Jark Wu commented on FLINK-13344: - [~WangHW] Btw, you can help to review other translation pull requests if you are free. > Translate "How to Contribute" page into Chinese. > - > > Key: FLINK-13344 > URL: https://issues.apache.org/jira/browse/FLINK-13344 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Project Website >Reporter: Jark Wu >Assignee: WangHengWei >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > The page is https://flink.apache.org/zh/contributing/how-to-contribute.html > The markdown file is located in > https://github.com/apache/flink-web/blob/asf-site/contributing/how-to-contribute.zh.md > Before start working on this, please read translation guideline: > https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner
docete commented on a change in pull request #9276: [FLINK-13439] Run Streaming SQL e2e test with blink planner URL: https://github.com/apache/flink/pull/9276#discussion_r309009507 ## File path: flink-end-to-end-tests/flink-stream-sql-test/src/main/java/org/apache/flink/sql/tests/StreamSQLTestProgram.java ## @@ -343,4 +373,50 @@ public void restoreState(List state) { } } + private static StreamTableEnvironment createStreamTableEnvironment( Review comment: Agree This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896679#comment-16896679 ] godfrey he commented on FLINK-13502: i will fix this > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, {{TreeNode}} is in > {{org.apache.flink.table.plan}}. {{CatalogTableStatisticsConverter}} should > be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in > {{org.apache.flink.table.planner.expressions}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he updated FLINK-13502: --- Description: currently, {{CatalogTableStatisticsConverter}} is in {{org.apache.flink.table.util}}, {{TreeNode}} is in {{org.apache.flink.table.plan}}. {{CatalogTableStatisticsConverter}} should be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in {{org.apache.flink.table.planner.expressions}}. (was: currently, {{CatalogTableStatisticsConverter}} is in {{org.apache.flink.table.util}}, its correct position is {{org.apache.flink.table.planner.utils}}. {{TreeNode}} is in {{org.apache.flink.table.plan}}, its correct position is {{org.apache.flink.table.planner.expressions}}) > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, {{TreeNode}} is in > {{org.apache.flink.table.plan}}. {{CatalogTableStatisticsConverter}} should > be in {{org.apache.flink.table.planner.utils}}, {{TreeNode}} should be in > {{org.apache.flink.table.planner.expressions}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he updated FLINK-13502: --- Description: currently, {{CatalogTableStatisticsConverter}} is in {{org.apache.flink.table.util}}, its correct position is {{org.apache.flink.table.planner.utils}}. {{TreeNode}} is in {{org.apache.flink.table.plan}}, its correct position is {{org.apache.flink.table.planner.expressions}} (was: currently, {{CatalogTableStatisticsConverter}} is in {{org.apache.flink.table.util}}, its correct position is {{org.apache.flink.table.planner.utils}}) > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, its correct position is > {{org.apache.flink.table.planner.utils}}. {{TreeNode}} is in > {{org.apache.flink.table.plan}}, its correct position is > {{org.apache.flink.table.planner.expressions}} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (FLINK-13502) CatalogTableStatisticsConverter & TreeNode should be in correct package
[ https://issues.apache.org/jira/browse/FLINK-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he updated FLINK-13502: --- Summary: CatalogTableStatisticsConverter & TreeNode should be in correct package (was: CatalogTableStatisticsConverter should be in planner.utils package) > CatalogTableStatisticsConverter & TreeNode should be in correct package > --- > > Key: FLINK-13502 > URL: https://issues.apache.org/jira/browse/FLINK-13502 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > currently, {{CatalogTableStatisticsConverter}} is in > {{org.apache.flink.table.util}}, its correct position is > {{org.apache.flink.table.planner.utils}} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (FLINK-13502) CatalogTableStatisticsConverter should be in planner.utils package
godfrey he created FLINK-13502: -- Summary: CatalogTableStatisticsConverter should be in planner.utils package Key: FLINK-13502 URL: https://issues.apache.org/jira/browse/FLINK-13502 Project: Flink Issue Type: Bug Components: Table SQL / Planner Reporter: godfrey he Fix For: 1.9.0, 1.10.0 currently, {{CatalogTableStatisticsConverter}} is in {{org.apache.flink.table.util}}, its correct position is {{org.apache.flink.table.planner.utils}} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] TsReaper commented on issue #9236: [FLINK-13283][FLINK-13490][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support and null checking
TsReaper commented on issue #9236: [FLINK-13283][FLINK-13490][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support and null checking URL: https://github.com/apache/flink/pull/9236#issuecomment-516658071 Travis passed: https://travis-ci.com/TsReaper/flink/builds/121249787 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9283: [FLINK-13487][tests] Fix unstable partition lifecycle tests by ensuring TM has registered to JM before submitting task
flinkbot commented on issue #9283: [FLINK-13487][tests] Fix unstable partition lifecycle tests by ensuring TM has registered to JM before submitting task URL: https://github.com/apache/flink/pull/9283#issuecomment-516656866 ## CI report: * afd7ba4663e2298776c0e02a4fe56cc1d3cf63a3 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121343773) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9283: [FLINK-13487][tests] Fix unstable partition lifecycle tests by ensuring TM has registered to JM before submitting task
flinkbot commented on issue #9283: [FLINK-13487][tests] Fix unstable partition lifecycle tests by ensuring TM has registered to JM before submitting task URL: https://github.com/apache/flink/pull/9283#issuecomment-516655514 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-13487) TaskExecutorPartitionLifecycleTest.testPartitionReleaseAfterReleaseCall failed on Travis
[ https://issues.apache.org/jira/browse/FLINK-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-13487: --- Labels: pull-request-available (was: ) > TaskExecutorPartitionLifecycleTest.testPartitionReleaseAfterReleaseCall > failed on Travis > > > Key: FLINK-13487 > URL: https://issues.apache.org/jira/browse/FLINK-13487 > Project: Flink > Issue Type: Bug > Components: Runtime / Task, Tests >Reporter: Tzu-Li (Gordon) Tai >Assignee: Yun Gao >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > Attachments: error_log.png > > > https://api.travis-ci.org/v3/job/564925114/log.txt > {code} > 21:14:47.090 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time > elapsed: 5.754 s <<< FAILURE! - in > org.apache.flink.runtime.taskexecutor.TaskExecutorPartitionLifecycleTest > 21:14:47.090 [ERROR] > testPartitionReleaseAfterReleaseCall(org.apache.flink.runtime.taskexecutor.TaskExecutorPartitionLifecycleTest) > Time elapsed: 0.136 s <<< ERROR! > java.util.concurrent.ExecutionException: > org.apache.flink.runtime.taskexecutor.exceptions.TaskSubmissionException: > Could not submit task because there is no JobManager associated for the job > 2a0ab40cb53241799b71ff6fd2f53d3d. > at > org.apache.flink.runtime.taskexecutor.TaskExecutorPartitionLifecycleTest.testPartitionRelease(TaskExecutorPartitionLifecycleTest.java:331) > at > org.apache.flink.runtime.taskexecutor.TaskExecutorPartitionLifecycleTest.testPartitionReleaseAfterReleaseCall(TaskExecutorPartitionLifecycleTest.java:201) > Caused by: > org.apache.flink.runtime.taskexecutor.exceptions.TaskSubmissionException: > Could not submit task because there is no JobManager associated for the job > 2a0ab40cb53241799b71ff6fd2f53d3d. > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] gaoyunhaii opened a new pull request #9283: [FLINK-13487][tests] Fix unstable partition lifecycle tests by ensuring TM has registered to JM before submitting task
gaoyunhaii opened a new pull request #9283: [FLINK-13487][tests] Fix unstable partition lifecycle tests by ensuring TM has registered to JM before submitting task URL: https://github.com/apache/flink/pull/9283 ## What is the purpose of the change This pull request aims to fix the unstable cases in TaskExecutorPartitionLifecycleTest. As described in the corresponding [JIRA](https://issues.apache.org/jira/browse/FLINK-13487), the tests may submit a task before the TM has successfully registered to the JM, which causes the tests to fail since the check that the corresponding job id exists in TaskExecutor#jobManagerTables cannot pass. This issue does not occur in real deployments: the JM will not submit tasks before the TM has offered slots, and a successful slot offering ensures the TM has registered to the JM. To fix the tests, we simulate this process by ensuring a slot has been offered to the JM before the task gets submitted. ## Brief change log 1. b94855a7695c7a649a174820b8cf85bb38edbd6d fixes the unstable tests by ensuring the TM has registered to the JM before submitting the task. ## Verifying this change This change fixes tests and can be verified as follows: - Add Thread.sleep(2000) in RetryingRegistration#register, before `completionFuture.complete(Tuple2.of(gateway, success));`, which is executed in the RPC executors. With this modification, the original tests fail deterministically. - Apply this PR, and the above failure is fixed.
## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): **no** - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: **no** - The serializers: **no** - The runtime per-record code paths (performance sensitive): **no** - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: **no** - The S3 file system connector: **no** ## Documentation - Does this pull request introduce a new feature? **no** - If yes, how is the feature documented? **not applicable** This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-13501) Fixes a few issues in documentation for Hive integration
[ https://issues.apache.org/jira/browse/FLINK-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated FLINK-13501: Attachment: Screen Shot 2019-07-30 at 3.25.13 PM.png Screen Shot 2019-07-30 at 3.21.25 PM.png > Fixes a few issues in documentation for Hive integration > > > Key: FLINK-13501 > URL: https://issues.apache.org/jira/browse/FLINK-13501 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive, Table SQL / API >Affects Versions: 1.9.0 >Reporter: Xuefu Zhang >Priority: Critical > Fix For: 1.9.0, 1.10.0 > > Attachments: Screen Shot 2019-07-30 at 3.21.25 PM.png, Screen Shot > 2019-07-30 at 3.25.13 PM.png > > > Going thru existing Hive doc I found the following issues that should be > addressed: > 1. Section "Hive Integration" should come after "SQL client" (at the same > level). > 2. In Catalog section, there are headers named "Hive Catalog". Also, some > information is duplicated with that in "Hive Integration" > 3. "Data Type Mapping" is Hive specific and should probably move to "Hive > integration" -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13501) Fixes a few issues in documentation for Hive integration
[ https://issues.apache.org/jira/browse/FLINK-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896589#comment-16896589 ] Xuefu Zhang commented on FLINK-13501: - cc: [~tiwalter], [~Terry1897] > Fixes a few issues in documentation for Hive integration > > > Key: FLINK-13501 > URL: https://issues.apache.org/jira/browse/FLINK-13501 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive, Table SQL / API >Affects Versions: 1.9.0 >Reporter: Xuefu Zhang >Priority: Critical > Fix For: 1.9.0, 1.10.0 > > > Going thru existing Hive doc I found the following issues that should be > addressed: > 1. Section "Hive Integration" should come after "SQL client" (at the same > level). > 2. In Catalog section, there are headers named "Hive Catalog". Also, some > information is duplicated with that in "Hive Integration" > 3. "Data Type Mapping" is Hive specific and should probably move to "Hive > integration" -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (FLINK-13501) Fixes a few issues in documentation for Hive integration
Xuefu Zhang created FLINK-13501: --- Summary: Fixes a few issues in documentation for Hive integration Key: FLINK-13501 URL: https://issues.apache.org/jira/browse/FLINK-13501 Project: Flink Issue Type: Bug Components: Connectors / Hive, Table SQL / API Affects Versions: 1.9.0 Reporter: Xuefu Zhang Fix For: 1.9.0, 1.10.0 Going thru existing Hive doc I found the following issues that should be addressed: 1. Section "Hive Integration" should come after "SQL client" (at the same level). 2. In Catalog section, there are headers named "Hive Catalog". Also, some information is duplicated with that in "Hive Integration" 3. "Data Type Mapping" is Hive specific and should probably move to "Hive integration" -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] xintongsong closed pull request #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.
xintongsong closed pull request #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance. URL: https://github.com/apache/flink/pull/9105 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] xintongsong commented on issue #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.
xintongsong commented on issue #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance. URL: https://github.com/apache/flink/pull/9105#issuecomment-516602332 closed in 86491b6b4ccc8c9ef5fa15e4755e2e5c252c29ad This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9266: [FLINK-13273][sql-client] Allow switching planners in SQL Client
flinkbot edited a comment on issue #9266: [FLINK-13273][sql-client] Allow switching planners in SQL Client URL: https://github.com/apache/flink/pull/9266#issuecomment-516047670 ## CI report: * ac53ced5cd134af11f453f00fc3aaaea611654f7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121114297) * 5599a24974a1755047a24fb3ef6c02784f8b71f7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121285514) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9271: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask
flinkbot edited a comment on issue #9271: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask URL: https://github.com/apache/flink/pull/9271#issuecomment-516306550 ## CI report: * abb4ae6bde1f3d1eac787c850c04614e7c5ff907 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121204525) * aa0f69e2607c05e6ad626866895e2e4b44dc2b75 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121277961) * c7ef8872d7fc8fdab998af2bd5dd993014a8c786 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121279063) * 5ac9e7769a874f3508335cfb8f2012a5cc095df1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121281313) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13472) taskmanager.jvm-exit-on-oom doesn't work reliably with YARN
[ https://issues.apache.org/jira/browse/FLINK-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896510#comment-16896510 ] Ken Krugler commented on FLINK-13472: - Hi [~pawelbartoszek] - In my experience, when a Flink workflow fails when running on YARN, often the Job Manager is left alive, so it looks like the job is still running, but it's only using a small amount of memory. I don't know if this is a Flink bug or a YARN issue, but I'd suggest first discussing it on the user mailing list, before filing a bug in Jira, thanks. > taskmanager.jvm-exit-on-oom doesn't work reliably with YARN > --- > > Key: FLINK-13472 > URL: https://issues.apache.org/jira/browse/FLINK-13472 > Project: Flink > Issue Type: Bug > Components: core >Affects Versions: 1.6.3 >Reporter: Pawel Bartoszek >Priority: Major > > I have added *taskmanager.jvm-exit-on-oom* flag to the task manager starting > arguments. During my testing (simulating oom) I noticed that sometimes YARN > containers were still in RUNNING state even though they should haven been > killed on OutOfMemory errors with the flag on. > I could find RUNNING containers with the last log lines like this. > {code:java} > 2019-07-26 13:32:51,396 ERROR org.apache.flink.runtime.taskmanager.Task > - Encountered fatal error java.lang.OutOfMemoryError - > terminating the JVM > java.lang.OutOfMemoryError: Metaspace > at java.lang.ClassLoader.defineClass1(Native Method) > at java.lang.ClassLoader.defineClass(ClassLoader.java:763) > at > java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) > at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) > at java.net.URLClassLoader.access$100(URLClassLoader.java:74) > at java.net.URLClassLoader$1.run(URLClassLoader.java:369){code} > > Does YARN make it tricky to forcefully kill JVM after OutOfMemory error? > > *Workaround* > > When using -XX:+ExitOnOutOfMemoryError JVM flag containers get always > terminated! 
-- This message was sent by Atlassian JIRA (v7.6.14#76016)
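The `-XX:+ExitOnOutOfMemoryError` workaround mentioned at the end of the report above can be wired into a deployment through the TaskManager JVM options. The fragment below is a sketch of how that might look in `flink-conf.yaml`; `env.java.opts.taskmanager` is the configuration key Flink 1.x uses for extra TaskManager JVM flags, but verify the exact key against the documentation of the Flink version you run.

```yaml
# Kill the TaskManager JVM immediately on any OutOfMemoryError instead of
# relying solely on taskmanager.jvm-exit-on-oom. YARN then observes the
# container exit and can restart it, rather than leaving a half-dead
# container in RUNNING state as described in FLINK-13472.
env.java.opts.taskmanager: -XX:+ExitOnOutOfMemoryError
```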
[GitHub] [flink] flinkbot edited a comment on issue #9282: [FLINK-13499][maprfs] Remove dependency on MapR artifact repository
flinkbot edited a comment on issue #9282: [FLINK-13499][maprfs] Remove dependency on MapR artifact repository URL: https://github.com/apache/flink/pull/9282#issuecomment-516489105 ## CI report: * d48067bb1882c3515cfbcd6c0b2d502ebd22dbd3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121281276) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9271: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask
flinkbot edited a comment on issue #9271: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask URL: https://github.com/apache/flink/pull/9271#issuecomment-516306550 ## CI report: * abb4ae6bde1f3d1eac787c850c04614e7c5ff907 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121204525) * aa0f69e2607c05e6ad626866895e2e4b44dc2b75 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121277961) * c7ef8872d7fc8fdab998af2bd5dd993014a8c786 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121279063) * 5ac9e7769a874f3508335cfb8f2012a5cc095df1 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121281313) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9274: [FLINK-13495][table-planner-blink] blink-planner should support decimal precision to table source
flinkbot edited a comment on issue #9274: [FLINK-13495][table-planner-blink] blink-planner should support decimal precision to table source URL: https://github.com/apache/flink/pull/9274#issuecomment-516352701 ## CI report: * 6e259d68552bf14b3c0f593706d2c879d32b294e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121223163) * d7506a84938a31ed0bee103f9fa6050437d26f34 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121271962) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9224: [FLINK-13226] [connectors / kafka] Fix race condition between transaction commit and produc…
flinkbot edited a comment on issue #9224: [FLINK-13226] [connectors / kafka] Fix race condition between transaction commit and produc… URL: https://github.com/apache/flink/pull/9224#issuecomment-514889442 ## CI report: * 78b445e8a5fb88c8c7f7080f1e6d9fdf89a5a594 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120548015) * bc9ccf9cf50aa4f288ca1eb5226234b53fe091d0 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120806289) * 199b54692ca4232e19eece4868bdc1ad9a605371 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121224471) * 2d211639b6331c54e214a61b3a45804abd6b17f9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121262539) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9281: [hotfix][table]Rename TableAggFunctionCallVisitor to TableAggFunctionCallResolver
flinkbot edited a comment on issue #9281: [hotfix][table]Rename TableAggFunctionCallVisitor to TableAggFunctionCallResolver URL: https://github.com/apache/flink/pull/9281#issuecomment-516443976 ## CI report: * d1d56e2c854c5d1324877e14edd4fb887278e090 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121262486) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9277: [FLINK-13494][table-planner-blink] Only use env parallelism for sql job
flinkbot edited a comment on issue #9277: [FLINK-13494][table-planner-blink] Only use env parallelism for sql job URL: https://github.com/apache/flink/pull/9277#issuecomment-516388088 ## CI report: * 7bebdd65247ac172a23f9b0a91873b01b554cd71 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121239025) * 972eca969b455cd33ebac3045adfc8bba874b400 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121258626) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13499) Remove dependency on MapR artifact repository
[ https://issues.apache.org/jira/browse/FLINK-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896395#comment-16896395 ]

Till Rohrmann commented on FLINK-13499:
---------------------------------------

Please move the issue to "In Progress" [~StephanEwen].

> Remove dependency on MapR artifact repository
> ---------------------------------------------
>
>                 Key: FLINK-13499
>                 URL: https://issues.apache.org/jira/browse/FLINK-13499
>             Project: Flink
>          Issue Type: Bug
>          Components: Build System
>    Affects Versions: 1.9.0
>            Reporter: Stephan Ewen
>            Assignee: Stephan Ewen
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.9.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The MapR artifact repository causes some problems. It does not reliably offer
> secure (https://) access.
> We should change the MapR FS connector to work based on reflection and avoid
> a hard dependency on any of the MapR vendor-specific artifacts. That should
> allow us to get rid of the dependency without regressing on the support for
> the file system.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
[jira] [Commented] (FLINK-13446) Row count sliding window outputs incorrectly in blink planner
[ https://issues.apache.org/jira/browse/FLINK-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896392#comment-16896392 ]

Till Rohrmann commented on FLINK-13446:
---------------------------------------

If this issue is not a blocker, then please update its priority [~jark].

> Row count sliding window outputs incorrectly in blink planner
> -------------------------------------------------------------
>
>                 Key: FLINK-13446
>                 URL: https://issues.apache.org/jira/browse/FLINK-13446
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Runtime
>    Affects Versions: 1.9.0
>            Reporter: Hequn Cheng
>            Assignee: Hequn Cheng
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.9.0, 1.10.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> For the blink planner, the row count sliding window outputs incorrectly. The
> window assigner assigns fewer windows than expected, so the window emits
> fewer records. The bug can be reproduced by the following test:
> {code:java}
> @Test
> def testGroupWindowWithoutKeyInProjection(): Unit = {
>   val data = List(
>     (1L, 1, "Hi", 1, 1),
>     (2L, 2, "Hello", 2, 2),
>     (4L, 2, "Hello", 2, 2),
>     (8L, 3, "Hello world", 3, 3),
>     (16L, 3, "Hello world", 3, 3))
>   val stream = failingDataSource(data)
>   val table = stream.toTable(tEnv, 'long, 'int, 'string, 'int2, 'int3, 'proctime.proctime)
>   val weightAvgFun = new WeightedAvg
>   val countDistinct = new CountDistinct
>   val windowedTable = table
>     .window(Slide over 2.rows every 1.rows on 'proctime as 'w)
>     .groupBy('w, 'int2, 'int3, 'string)
>     .select(weightAvgFun('long, 'int), countDistinct('long))
>   val sink = new TestingAppendSink
>   windowedTable.toAppendStream[Row].addSink(sink)
>   env.execute()
>   val expected = Seq("12,2", "8,1", "2,1", "3,2", "1,1")
>   assertEquals(expected.sorted, sink.getAppendResults.sorted)
> }
> {code}
> The expected output is Seq("12,2", "8,1", "2,1", "3,2", "1,1") while the
> actual output is Seq("12,2", "3,2").
> To fix the problem, we can correct the assignment logic in
> CountSlidingWindowAssigner.assignWindows.
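[Editor's note] To make the expected behaviour concrete, here is a minimal standalone sketch of a size-2/slide-1 count window, not Flink's actual `CountSlidingWindowAssigner` (class and method names are illustrative): each arriving row should close one window holding the trailing `min(rows seen, size)` elements, which is why the key "Hello" alone (longs 2 and 4) should contribute the two results "2,1" and "3,2".

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of count-based sliding window assignment.
public class CountSlideSketch {

    // Emits one window per `slide` rows, covering the trailing `size` rows.
    static List<List<Long>> windows(long[] values, int size, int slide) {
        List<List<Long>> out = new ArrayList<>();
        List<Long> buffer = new ArrayList<>();
        for (long v : values) {
            buffer.add(v);
            if (buffer.size() % slide == 0) {
                int from = Math.max(0, buffer.size() - size);
                out.add(new ArrayList<>(buffer.subList(from, buffer.size())));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // key "Hello" from the test above receives the longs 2 and 4
        List<List<Long>> w = windows(new long[]{2L, 4L}, 2, 1);
        System.out.println(w); // [[2], [2, 4]]
    }
}
```

With five input rows across three keys, this assignment yields five windows in total, matching the five expected results in the test.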
[jira] [Commented] (FLINK-13497) Checkpoints can complete after CheckpointFailureManager fails job
[ https://issues.apache.org/jira/browse/FLINK-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896390#comment-16896390 ]

Yun Tang commented on FLINK-13497:
----------------------------------

One quick fix to this problem is to let {{CheckpointFailureManager}} call the {{CheckpointCoordinator}}'s {{stopCheckpointScheduler}} just before it decides to fail the job.

> Checkpoints can complete after CheckpointFailureManager fails job
> -----------------------------------------------------------------
>
>                 Key: FLINK-13497
>                 URL: https://issues.apache.org/jira/browse/FLINK-13497
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Checkpointing
>    Affects Versions: 1.9.0, 1.10.0
>            Reporter: Till Rohrmann
>            Priority: Critical
>             Fix For: 1.9.0
>
> I think that we introduced with FLINK-12364 an inconsistency wrt job
> termination and checkpointing. In FLINK-9900 it was discovered that checkpoints
> can complete even after the {{CheckpointFailureManager}} decided to fail a
> job. I think the expected behaviour should be that we fail all pending
> checkpoints once the {{CheckpointFailureManager}} decides to fail the job.
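[Editor's note] The quick fix proposed in the comment above is purely an ordering constraint. A toy sketch of that ordering (method names are illustrative, not Flink's real API): stop the checkpoint scheduler first, so that no new checkpoint can complete after the job has been declared failed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the stop-before-fail ordering.
public class FailureOrderingSketch {

    // Returns the sequence of actions taken when a checkpoint failure is
    // escalated, so the ordering can be inspected.
    static List<String> onCheckpointFailure() {
        List<String> actions = new ArrayList<>();
        actions.add("stopCheckpointScheduler"); // proposed quick fix: stop first
        actions.add("failJob");                 // then fail the job
        return actions;
    }

    public static void main(String[] args) {
        System.out.println(onCheckpointFailure()); // [stopCheckpointScheduler, failJob]
    }
}
```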
[GitHub] [flink] tillrohrmann closed pull request #9242: [FLINK-13408][runtime] Schedule StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership is acquired
tillrohrmann closed pull request #9242: [FLINK-13408][runtime] Schedule StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership is acquired URL: https://github.com/apache/flink/pull/9242
[jira] [Resolved] (FLINK-13242) StandaloneResourceManagerTest fails on travis
[ https://issues.apache.org/jira/browse/FLINK-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann resolved FLINK-13242.
-----------------------------------
    Resolution: Fixed
Fix Version/s: (was: 1.10.0)
               (was: 1.9.0)

> StandaloneResourceManagerTest fails on travis
> ---------------------------------------------
>
>                 Key: FLINK-13242
>                 URL: https://issues.apache.org/jira/browse/FLINK-13242
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.9.0
>            Reporter: Chesnay Schepler
>            Assignee: Xintong Song
>            Priority: Blocker
>              Labels: pull-request-available
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://travis-ci.org/apache/flink/jobs/557696989
> {code}
> 08:28:06.475 [ERROR] testStartupPeriod(org.apache.flink.runtime.resourcemanager.StandaloneResourceManagerTest)  Time elapsed: 10.276 s  <<< FAILURE!
> java.lang.AssertionError: condition was not fulfilled before the deadline
> 	at org.apache.flink.runtime.resourcemanager.StandaloneResourceManagerTest.assertHappensUntil(StandaloneResourceManagerTest.java:114)
> 	at org.apache.flink.runtime.resourcemanager.StandaloneResourceManagerTest.testStartupPeriod(StandaloneResourceManagerTest.java:60)
> {code}
[jira] [Resolved] (FLINK-13408) Schedule StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership is acquired
[ https://issues.apache.org/jira/browse/FLINK-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann resolved FLINK-13408.
-----------------------------------
    Resolution: Fixed
Fix Version/s: (was: 1.10.0)

Fixed via
1.10.0: 0f9d0952cd89a9866d17b9d00b5ff69738974f87
1.9.0: 86491b6b4ccc8c9ef5fa15e4755e2e5c252c29ad

> Schedule StandaloneResourceManager.setFailUnfulfillableRequest whenever the
> leadership is acquired
> ---------------------------------------------------------------------------
>
>                 Key: FLINK-13408
>                 URL: https://issues.apache.org/jira/browse/FLINK-13408
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Runtime / Coordination
>    Affects Versions: 1.9.0
>            Reporter: Andrey Zagrebin
>            Assignee: Xintong Song
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.9.0
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We introduced {{StandaloneResourceManager.setFailUnfulfillableRequest}} to
> give task executors some time to register their available slots before slot
> requests are checked for whether they can be fulfilled.
> {{setFailUnfulfillableRequest}} is currently scheduled only once, when the RM
> is initialised, but the task executors will register themselves every time
> this RM gets the leadership. Hence, {{setFailUnfulfillableRequest}} should be
> scheduled after each leader election.
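[Editor's note] A minimal sketch of the fix described in the issue above, with assumed names rather than the real `ResourceManager` API: the delayed `setFailUnfulfillableRequest(true)` call must be armed on every leadership grant, not only once at initialisation, because task executors re-register their slots each time the RM becomes leader.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: arm the start-up grace period per leadership grant.
public class LeadershipSketch {

    // Stand-in for the RM's delayed executor: records each armed start-up period.
    final List<Runnable> scheduledStartupPeriods = new ArrayList<>();

    boolean failUnfulfillableRequest = false;

    // The buggy version armed this only once, in the constructor; the fix
    // arms it whenever leadership is (re-)acquired.
    void grantLeadership() {
        scheduledStartupPeriods.add(() -> failUnfulfillableRequest = true);
    }

    public static void main(String[] args) {
        LeadershipSketch rm = new LeadershipSketch();
        rm.grantLeadership();
        rm.grantLeadership(); // leadership lost and re-acquired
        System.out.println(rm.scheduledStartupPeriods.size()); // 2
    }
}
```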
[GitHub] [flink] tillrohrmann commented on issue #9269: [FLINK-9900][tests] Fix unstable ZooKeeperHighAvailabilityITCase
tillrohrmann commented on issue #9269: [FLINK-9900][tests] Fix unstable ZooKeeperHighAvailabilityITCase URL: https://github.com/apache/flink/pull/9269#issuecomment-516536280

I think it is definitely good enough as a band-aid because we see the test failing quite often. Maybe we merge it as is and remove it once the underlying problem has been properly fixed. I agree that this test is also quite complicated, with a lot of conditions, which makes it quite hard to change.
[GitHub] [flink] xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats
xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats URL: https://github.com/apache/flink/pull/9264#discussion_r308872595

## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
## @@ -494,8 +495,22 @@ private static CatalogBaseTable instantiateCatalogTable(Table hiveTable) {
 	String comment = properties.remove(HiveCatalogConfig.COMMENT);

 	// Table schema
+	List fields;
+	if (org.apache.hadoop.hive.ql.metadata.Table.hasMetastoreBasedSchema(hiveConf,
+			hiveTable.getSd().getSerdeInfo().getSerializationLib())) {

Review comment:
   It seems that Table.getCols() is already doing what it's done here. Could you please check?
[jira] [Commented] (FLINK-9900) Failed to testRestoreBehaviourWithFaultyStateHandles (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)
[ https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896376#comment-16896376 ]

Till Rohrmann commented on FLINK-9900:
--------------------------------------

Another instance: https://api.travis-ci.com/v3/job/220853113/log.txt

> Failed to testRestoreBehaviourWithFaultyStateHandles
> (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)
> --------------------------------------------------------------------
>
>                 Key: FLINK-9900
>                 URL: https://issues.apache.org/jira/browse/FLINK-9900
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination, Tests
>    Affects Versions: 1.5.1, 1.6.0, 1.9.0
>            Reporter: zhangminglei
>            Assignee: Biao Liu
>            Priority: Blocker
>              Labels: pull-request-available, test-stability
>             Fix For: 1.9.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://api.travis-ci.org/v3/job/405843617/log.txt
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 124.598 sec
> <<< FAILURE! - in org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase
> testRestoreBehaviourWithFaultyStateHandles(org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)
> Time elapsed: 120.036 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12
> milliseconds
> 	at sun.misc.Unsafe.park(Native Method)
> 	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 	at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> 	at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> 	at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> 	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> 	at org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase.testRestoreBehaviourWithFaultyStateHandles(ZooKeeperHighAvailabilityITCase.java:244)
>
> Results :
>
> Tests in error:
>   ZooKeeperHighAvailabilityITCase.testRestoreBehaviourWithFaultyStateHandles:244 » TestTimedOut
>
> Tests run: 1453, Failures: 0, Errors: 1, Skipped: 29
[GitHub] [flink] flinkbot edited a comment on issue #9280: Flink-13405 Translate "Basic API Concepts" page into Chinese
flinkbot edited a comment on issue #9280: Flink-13405 Translate "Basic API Concepts" page into Chinese URL: https://github.com/apache/flink/pull/9280#issuecomment-516431274

## CI report:

* 3c6dadbc58f9f1d0f44efbc63ae5e6444dfb : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121257316)
[GitHub] [flink] xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats
xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats URL: https://github.com/apache/flink/pull/9264#discussion_r308867419

## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/batch/connectors/hive/HiveTableOutputFormat.java
## @@ -256,10 +258,12 @@ public void configure(Configuration parameters) {
 	public void open(int taskNumber, int numTasks) throws IOException {
 		try {
 			StorageDescriptor sd = hiveTablePartition.getStorageDescriptor();
-			serializer = (AbstractSerDe) Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance();
+			serializer = (Serializer) Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance();
+			Preconditions.checkArgument(serializer instanceof Deserializer,

Review comment:
   Casting is fine, but can we name the variable differently?
[jira] [Commented] (FLINK-13452) Pipelined region failover strategy does not recover Job if checkpoint cannot be read
[ https://issues.apache.org/jira/browse/FLINK-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896372#comment-16896372 ]

Yun Tang commented on FLINK-13452:
----------------------------------

Before FLINK-13060, the process to restart tasks within {{AdaptedRestartPipelinedRegionStrategyNG}} was:
{code:java}
cancelTasks --> resetTasks --> handleException
{code}
After FLINK-13060, the process changed to:
{code:java}
                 resetTasks
                    |^|
                    | |
cancelTasks --> strategy restart --> handleException
{code}
The bug happens because the future task running {{resetTasks}} fails but dies silently. As you can see, this bug has no direct relationship with region failover loading checkpointed state. Previously I tried to catch any exception within {{resetTasks}} itself and escalate it to {{failGlobal}} in my [PR-9268|https://github.com/apache/flink/pull/9268]. However, this runs into another problem if {{failGlobal}} itself comes across an exception that cannot be caught by {{FatalExitExceptionHandler}}. I still have not found a graceful solution to this bug. [~gjy], [~Zentol], do you have any suggestions?

> Pipelined region failover strategy does not recover Job if checkpoint cannot
> be read
> ----------------------------------------------------------------------------
>
>                 Key: FLINK-13452
>                 URL: https://issues.apache.org/jira/browse/FLINK-13452
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.9.0, 1.10.0
>            Reporter: Gary Yao
>            Assignee: Yun Tang
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.9.0
>
>         Attachments: jobmanager.log
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> The job does not recover if a checkpoint cannot be read and
> {{jobmanager.execution.failover-strategy}} is set to _"region"_.
> *Analysis*
> The {{RestartCallback}} created by
> {{AdaptedRestartPipelinedRegionStrategyNG}} throws a {{RuntimeException}} if
> no checkpoints could be read. When the restart is invoked in a separate
> thread pool, the exception is swallowed. See:
> [https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/AdaptedRestartPipelinedRegionStrategyNG.java#L117-L119]
> [https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/restart/FixedDelayRestartStrategy.java#L65]
>
> *Expected behavior*
> * Job should restart
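[Editor's note] The "swallowed exception" failure mode described in the issue above can be reproduced in isolation. This is an illustrative sketch, not Flink code: an exception thrown inside a task handed to a thread pool is captured by its future and lost unless someone attaches a handler or calls `get()`, which is why the `RuntimeException` from the restart callback dies silently.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Hypothetical sketch of an async restart task whose failure goes unobserved.
public class SwallowedExceptionSketch {

    // Runs the failing restart task and returns the error a handler observes,
    // or null when nobody is listening.
    static Throwable runRestart(boolean attachHandler) throws Exception {
        CompletableFuture<Void> restart = CompletableFuture.runAsync(() -> {
            throw new RuntimeException("cannot read checkpoint");
        });

        if (!attachHandler) {
            // Nobody ever inspects the future: the exception is swallowed.
            return null;
        }

        try {
            restart.get();
            return null;
        } catch (ExecutionException e) {
            return e.getCause(); // e.g. escalate the cause via failGlobal
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runRestart(true).getMessage()); // cannot read checkpoint
    }
}
```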
[GitHub] [flink] flinkbot edited a comment on issue #9271: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask
flinkbot edited a comment on issue #9271: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask URL: https://github.com/apache/flink/pull/9271#issuecomment-516306550

## CI report:

* abb4ae6bde1f3d1eac787c850c04614e7c5ff907 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121204525)
* aa0f69e2607c05e6ad626866895e2e4b44dc2b75 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121277961)
* c7ef8872d7fc8fdab998af2bd5dd993014a8c786 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121279063)
* 5ac9e7769a874f3508335cfb8f2012a5cc095df1 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121281313)
[GitHub] [flink] flinkbot edited a comment on issue #9236: [FLINK-13283][FLINK-13490][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support and null checking
flinkbot edited a comment on issue #9236: [FLINK-13283][FLINK-13490][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support and null checking URL: https://github.com/apache/flink/pull/9236#issuecomment-515390325

## CI report:

* 1135cc72f00606c7a230714838c938068887ce23 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120833949)
* a1070517ff96b110db9a38e3daf28e92eccf236d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121193197)
* 10b0cec1d08270287b5e3f14f03bdb4d34572670 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121249934)
[GitHub] [flink] flinkbot edited a comment on issue #9242: [FLINK-13408][runtime] Schedule StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership is acquired
flinkbot edited a comment on issue #9242: [FLINK-13408][runtime] Schedule StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership is acquired URL: https://github.com/apache/flink/pull/9242#issuecomment-515479114

## CI report:

* bc4bc3a00e1692381368af36b87d233677e8c4ac : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120867383)
* dd5aa085e80d3bc785de7e4e43690687f0f27974 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120968774)
* d2f527bfe79d8e25cbaf88aa13330d968364b633 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121115520)
* 7a1d13ea0fc924b1f38681153b73ed27e754b9f4 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121169984)
* e989bc4a48e22eb7cf9e80d3344aa9e4d30bea80 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121224437)
* 431b85849192eb19f046c0c509fea87e28517f73 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121248808)