[GitHub] [flink] lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats
lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats URL: https://github.com/apache/flink/pull/9264#discussion_r310340281 ## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableOutputFormat.java ## @@ -124,7 +124,7 @@ private transient int numNonPartitionColumns; // SerDe in Hive-1.2.1 and Hive-2.3.4 can be of different classes, make sure to use a common base class - private transient Serializer serializer; + private transient Serializer recordSerDe; Review comment: It has to be a serializer because we need it to serialize records. Besides, using Object means we have to use reflection to call the `serialize` method. And if we do this for each record, it might hurt performance. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
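The per-record reflection cost mentioned in the review comment can be illustrated with a small, self-contained sketch. `SerializeViaReflection`, `RecordSerializer`, `direct`, and `reflective` are hypothetical names for illustration only, not Flink or Hive classes; the point is that calling `serialize` through a typed field is a plain virtual call, while an `Object`-typed field forces a reflective lookup and invocation for every record.

```java
import java.lang.reflect.Method;

// Hypothetical stand-in for a Hive SerDe with a common serializer base type.
public class SerializeViaReflection {

    public static class RecordSerializer {
        public String serialize(int record) {
            return "record-" + record;
        }
    }

    // Field typed with the common base class: a plain virtual call per record.
    public static String direct(RecordSerializer serde, int record) {
        return serde.serialize(record);
    }

    // Field typed as Object: every record pays for a reflective method lookup,
    // argument boxing, and an invoke through java.lang.reflect.Method.
    public static String reflective(Object serde, int record) {
        try {
            Method m = serde.getClass().getMethod("serialize", int.class);
            return (String) m.invoke(serde, record);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        RecordSerializer serde = new RecordSerializer();
        // Both paths produce the same result; only the per-record cost differs.
        if (!direct(serde, 42).equals(reflective(serde, 42))) {
            throw new AssertionError("paths should agree");
        }
        System.out.println(direct(serde, 42)); // record-42
    }
}
```

Keeping the field typed with a common base class present in both Hive versions avoids the reflective path entirely, which is the trade-off the comment describes.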
[GitHub] [flink] flinkbot commented on issue #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info
flinkbot commented on issue #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info URL: https://github.com/apache/flink/pull/9347#issuecomment-517895998 ## CI report: * 212cd6c24fdb696aa13bed1cfff875f9bfc01d09 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121831822)
[GitHub] [flink] flinkbot commented on issue #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info
flinkbot commented on issue #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info URL: https://github.com/apache/flink/pull/9347#issuecomment-517895705 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-13563) TumblingGroupWindow should implement toString method
[ https://issues.apache.org/jira/browse/FLINK-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-13563: --- Labels: pull-request-available (was: ) > TumblingGroupWindow should implement toString method > > > Key: FLINK-13563 > URL: https://issues.apache.org/jira/browse/FLINK-13563 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.9.0, 1.10.0 >Reporter: godfrey he >Priority: Major > Labels: pull-request-available > Fix For: 1.9.0, 1.10.0 > > > {code:scala} > @Test > def testAllEventTimeTumblingGroupWindowOverTime(): Unit = { > val util = streamTestUtil() > val table = util.addDataStream[(Long, Int, String)]( > "T1", 'long, 'int, 'string, 'rowtime.rowtime) > val windowedTable = table > .window(Tumble over 5.millis on 'rowtime as 'w) > .groupBy('w) > .select('int.count) > util.verifyPlan(windowedTable) > } > {code} > currently, it's physical plan is > {code:java} > HashWindowAggregate(window=[TumblingGroupWindow], > select=[Final_COUNT(count$0) AS EXPR$0]) > +- Exchange(distribution=[single]) >+- LocalHashWindowAggregate(window=[TumblingGroupWindow], > select=[Partial_COUNT(int) AS count$0]) > +- TableSourceScan(table=[[default_catalog, default_database, Table1, > source: [TestTableSource(long, int, string)]]], fields=[long, int, string]) > {code} > we know nothing about the TumblingGroupWindow except its name. the expected > plan is > {code:java} > HashWindowAggregate(window=[TumblingGroupWindow('w, long, 5)], > select=[Final_COUNT(count$0) AS EXPR$0]) > +- Exchange(distribution=[single]) >+- LocalHashWindowAggregate(window=[TumblingGroupWindow('w, long, 5)], > select=[Partial_COUNT(int) AS count$0]) > +- TableSourceScan(table=[[default_catalog, default_database, Table1, > source: [TestTableSource(long, int, string)]]], fields=[long, int, string]) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] godfreyhe opened a new pull request #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info
godfreyhe opened a new pull request #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info URL: https://github.com/apache/flink/pull/9347 ## What is the purpose of the change *TumblingGroupWindow should implement toString method to explain more info* ## Brief change log - *add toString method for TumblingGroupWindow* ## Verifying this change This change is already covered by existing tests ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / **not documented**)
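A minimal sketch of the kind of `toString` the change describes, assuming the window carries an alias, a time attribute, and a size. The class and field names here are illustrative stand-ins, not the actual `TumblingGroupWindow` members in the blink planner:

```java
// Illustrative model of a tumbling group window; not Flink's actual planner class.
public class TumblingWindowToStringSketch {

    public static class TumblingGroupWindow {
        private final String alias;      // e.g. 'w
        private final String timeField;  // e.g. long (the rowtime column)
        private final long sizeMillis;   // e.g. 5

        public TumblingGroupWindow(String alias, String timeField, long sizeMillis) {
            this.alias = alias;
            this.timeField = timeField;
            this.sizeMillis = sizeMillis;
        }

        // Mirrors the shape shown in the expected plan:
        //   TumblingGroupWindow('w, long, 5)
        @Override
        public String toString() {
            return "TumblingGroupWindow('" + alias + ", " + timeField + ", " + sizeMillis + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(new TumblingGroupWindow("w", "long", 5));
        // prints: TumblingGroupWindow('w, long, 5)
    }
}
```

With a `toString` like this, the plan printer that previously emitted only the class name would render the window's alias, time attribute, and size, matching the expected plan in FLINK-13563.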
[GitHub] [flink] flinkbot commented on issue #9346: [FLINK-13562] [table-planner-blink] fix incorrect input type for local stream group aggregate in FlinkRelMdColumnInterval
flinkbot commented on issue #9346: [FLINK-13562] [table-planner-blink] fix incorrect input type for local stream group aggregate in FlinkRelMdColumnInterval URL: https://github.com/apache/flink/pull/9346#issuecomment-517895412 ## CI report: * 255f2f78ad478b7e3cfb13a17af43872d6ad658f : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121831647)
[jira] [Updated] (FLINK-13563) TumblingGroupWindow should implement toString method
[ https://issues.apache.org/jira/browse/FLINK-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] godfrey he updated FLINK-13563: --- Description: {code:scala} @Test def testAllEventTimeTumblingGroupWindowOverTime(): Unit = { val util = streamTestUtil() val table = util.addDataStream[(Long, Int, String)]( "T1", 'long, 'int, 'string, 'rowtime.rowtime) val windowedTable = table .window(Tumble over 5.millis on 'rowtime as 'w) .groupBy('w) .select('int.count) util.verifyPlan(windowedTable) } {code} currently, it's physical plan is {code:java} HashWindowAggregate(window=[TumblingGroupWindow], select=[Final_COUNT(count$0) AS EXPR$0]) +- Exchange(distribution=[single]) +- LocalHashWindowAggregate(window=[TumblingGroupWindow], select=[Partial_COUNT(int) AS count$0]) +- TableSourceScan(table=[[default_catalog, default_database, Table1, source: [TestTableSource(long, int, string)]]], fields=[long, int, string]) {code} we know nothing about the TumblingGroupWindow except its name. 
the expected plan is {code:java} HashWindowAggregate(window=[TumblingGroupWindow('w, long, 5)], select=[Final_COUNT(count$0) AS EXPR$0]) +- Exchange(distribution=[single]) +- LocalHashWindowAggregate(window=[TumblingGroupWindow('w, long, 5)], select=[Partial_COUNT(int) AS count$0]) +- TableSourceScan(table=[[default_catalog, default_database, Table1, source: [TestTableSource(long, int, string)]]], fields=[long, int, string]) {code} was: {code:scala} @Test def testAllEventTimeTumblingGroupWindowOverTime(): Unit = { val util = streamTestUtil() val table = util.addDataStream[(Long, Int, String)]( "T1", 'long, 'int, 'string, 'rowtime.rowtime) val windowedTable = table .window(Tumble over 5.millis on 'rowtime as 'w) .groupBy('w) .select('int.count) util.verifyPlan(windowedTable) } {code} currently, it's physical plan is {code:java} HashWindowAggregate(window=[TumblingGroupWindow], select=[Final_COUNT(count$0) AS EXPR$0]) +- Exchange(distribution=[single]) +- LocalHashWindowAggregate(window=[TumblingGroupWindow], select=[Partial_COUNT(int) AS count$0]) +- TableSourceScan(table=[[default_catalog, default_database, Table1, source: [TestTableSource(long, int, string)]]], fields=[long, int, string]) {code} we know nothing about the TumblingGroupWindow except its name > TumblingGroupWindow should implement toString method > > > Key: FLINK-13563 > URL: https://issues.apache.org/jira/browse/FLINK-13563 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.9.0, 1.10.0 >Reporter: godfrey he >Priority: Major > Fix For: 1.9.0, 1.10.0 > > > {code:scala} > @Test > def testAllEventTimeTumblingGroupWindowOverTime(): Unit = { > val util = streamTestUtil() > val table = util.addDataStream[(Long, Int, String)]( > "T1", 'long, 'int, 'string, 'rowtime.rowtime) > val windowedTable = table > .window(Tumble over 5.millis on 'rowtime as 'w) > .groupBy('w) > .select('int.count) > util.verifyPlan(windowedTable) > } > {code} > currently, it's physical plan is > 
{code:java} > HashWindowAggregate(window=[TumblingGroupWindow], > select=[Final_COUNT(count$0) AS EXPR$0]) > +- Exchange(distribution=[single]) >+- LocalHashWindowAggregate(window=[TumblingGroupWindow], > select=[Partial_COUNT(int) AS count$0]) > +- TableSourceScan(table=[[default_catalog, default_database, Table1, > source: [TestTableSource(long, int, string)]]], fields=[long, int, string]) > {code} > we know nothing about the TumblingGroupWindow except its name. the expected > plan is > {code:java} > HashWindowAggregate(window=[TumblingGroupWindow('w, long, 5)], > select=[Final_COUNT(count$0) AS EXPR$0]) > +- Exchange(distribution=[single]) >+- LocalHashWindowAggregate(window=[TumblingGroupWindow('w, long, 5)], > select=[Partial_COUNT(int) AS count$0]) > +- TableSourceScan(table=[[default_catalog, default_database, Table1, > source: [TestTableSource(long, int, string)]]], fields=[long, int, string]) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13563) TumblingGroupWindow should implement toString method
[ https://issues.apache.org/jira/browse/FLINK-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899359#comment-16899359 ] godfrey he commented on FLINK-13563: I would like to fix this.
[jira] [Created] (FLINK-13563) TumblingGroupWindow should implement toString method
godfrey he created FLINK-13563: -- Summary: TumblingGroupWindow should implement toString method Key: FLINK-13563 URL: https://issues.apache.org/jira/browse/FLINK-13563 Project: Flink Issue Type: Bug Components: Table SQL / Planner Affects Versions: 1.9.0, 1.10.0 Reporter: godfrey he Fix For: 1.9.0, 1.10.0 {code:scala} @Test def testAllEventTimeTumblingGroupWindowOverTime(): Unit = { val util = streamTestUtil() val table = util.addDataStream[(Long, Int, String)]( "T1", 'long, 'int, 'string, 'rowtime.rowtime) val windowedTable = table .window(Tumble over 5.millis on 'rowtime as 'w) .groupBy('w) .select('int.count) util.verifyPlan(windowedTable) } {code} currently, its physical plan is {code:java} HashWindowAggregate(window=[TumblingGroupWindow], select=[Final_COUNT(count$0) AS EXPR$0]) +- Exchange(distribution=[single]) +- LocalHashWindowAggregate(window=[TumblingGroupWindow], select=[Partial_COUNT(int) AS count$0]) +- TableSourceScan(table=[[default_catalog, default_database, Table1, source: [TestTableSource(long, int, string)]]], fields=[long, int, string]) {code} we know nothing about the TumblingGroupWindow except its name
[GitHub] [flink] flinkbot commented on issue #9346: [FLINK-13562] [table-planner-blink] fix incorrect input type for local stream group aggregate in FlinkRelMdColumnInterval
flinkbot commented on issue #9346: [FLINK-13562] [table-planner-blink] fix incorrect input type for local stream group aggregate in FlinkRelMdColumnInterval URL: https://github.com/apache/flink/pull/9346#issuecomment-517895087 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.
[jira] [Updated] (FLINK-13562) throws exception when FlinkRelMdColumnInterval meets two stage stream group aggregate
[ https://issues.apache.org/jira/browse/FLINK-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-13562: --- Labels: pull-request-available (was: ) > throws exception when FlinkRelMdColumnInterval meets two stage stream group > aggregate > - > > Key: FLINK-13562 > URL: https://issues.apache.org/jira/browse/FLINK-13562 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: godfrey he >Priority: Major > Labels: pull-request-available > Fix For: 1.9.0, 1.10.0 > > > test case: > {code:scala} > @Test > def testTwoDistinctAggregateWithNonDistinctAgg(): Unit = { > util.addTableSource[(Int, Long, String)]("MyTable", 'a, 'b, 'c) > util.verifyPlan("SELECT c, SUM(DISTINCT a), SUM(a), COUNT(DISTINCT b) > FROM MyTable GROUP BY c") > } > {code} > org.apache.flink.table.api.TableException: Sum aggregate function does not > support type: ''VARCHAR''. > Please re-check the data type. > at > org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createSumAggFunction(AggFunctionFactory.scala:191) > at > org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createAggFunction(AggFunctionFactory.scala:74) > at > org.apache.flink.table.planner.plan.utils.AggregateUtil$$anonfun$9.apply(AggregateUtil.scala:285) > at > org.apache.flink.table.planner.plan.utils.AggregateUtil$$anonfun$9.apply(AggregateUtil.scala:279) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at > scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) > at scala.collection.AbstractTraversable.map(Traversable.scala:104) > at > org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToAggregateInfoList(AggregateUtil.scala:279) > 
at > org.apache.flink.table.planner.plan.utils.AggregateUtil$.getOutputIndexToAggCallIndexMap(AggregateUtil.scala:154) > at > org.apache.flink.table.planner.plan.metadata.FlinkRelMdColumnInterval.getAggCallIndexInLocalAgg$1(FlinkRelMdColumnInterval.scala:504) > at > org.apache.flink.table.planner.plan.metadata.FlinkRelMdColumnInterval.estimateColumnIntervalOfAggregate(FlinkRelMdColumnInterval.scala:526) > at > org.apache.flink.table.planner.plan.metadata.FlinkRelMdColumnInterval.getColumnInterval(FlinkRelMdColumnInterval.scala:417) > at GeneratedMetadataHandler_ColumnInterval.getColumnInterval_$(Unknown > Source) > at GeneratedMetadataHandler_ColumnInterval.getColumnInterval(Unknown > Source) > at > org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getColumnInterval(FlinkRelMetadataQuery.java:122) -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] godfreyhe opened a new pull request #9346: [FLINK-13562] [table-planner-blink] fix incorrect input type for local stream group aggregate in FlinkRelMdColumnInterval
godfreyhe opened a new pull request #9346: [FLINK-13562] [table-planner-blink] fix incorrect input type for local stream group aggregate in FlinkRelMdColumnInterval URL: https://github.com/apache/flink/pull/9346
[jira] [Commented] (FLINK-13562) throws exception when FlinkRelMdColumnInterval meets two stage stream group aggregate
[ https://issues.apache.org/jira/browse/FLINK-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899355#comment-16899355 ] godfrey he commented on FLINK-13562: I would like to fix this.
[jira] [Created] (FLINK-13562) throws exception when FlinkRelMdColumnInterval meets two stage stream group aggregate
godfrey he created FLINK-13562: -- Summary: throws exception when FlinkRelMdColumnInterval meets two stage stream group aggregate Key: FLINK-13562 URL: https://issues.apache.org/jira/browse/FLINK-13562 Project: Flink Issue Type: Bug Components: Table SQL / Planner Reporter: godfrey he Fix For: 1.9.0, 1.10.0 test case: {code:scala} @Test def testTwoDistinctAggregateWithNonDistinctAgg(): Unit = { util.addTableSource[(Int, Long, String)]("MyTable", 'a, 'b, 'c) util.verifyPlan("SELECT c, SUM(DISTINCT a), SUM(a), COUNT(DISTINCT b) FROM MyTable GROUP BY c") } {code} org.apache.flink.table.api.TableException: Sum aggregate function does not support type: ''VARCHAR''. Please re-check the data type. at org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createSumAggFunction(AggFunctionFactory.scala:191) at org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createAggFunction(AggFunctionFactory.scala:74) at org.apache.flink.table.planner.plan.utils.AggregateUtil$$anonfun$9.apply(AggregateUtil.scala:285) at org.apache.flink.table.planner.plan.utils.AggregateUtil$$anonfun$9.apply(AggregateUtil.scala:279) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToAggregateInfoList(AggregateUtil.scala:279) at org.apache.flink.table.planner.plan.utils.AggregateUtil$.getOutputIndexToAggCallIndexMap(AggregateUtil.scala:154) at org.apache.flink.table.planner.plan.metadata.FlinkRelMdColumnInterval.getAggCallIndexInLocalAgg$1(FlinkRelMdColumnInterval.scala:504) at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdColumnInterval.estimateColumnIntervalOfAggregate(FlinkRelMdColumnInterval.scala:526) at org.apache.flink.table.planner.plan.metadata.FlinkRelMdColumnInterval.getColumnInterval(FlinkRelMdColumnInterval.scala:417) at GeneratedMetadataHandler_ColumnInterval.getColumnInterval_$(Unknown Source) at GeneratedMetadataHandler_ColumnInterval.getColumnInterval(Unknown Source) at org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getColumnInterval(FlinkRelMetadataQuery.java:122)
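The failure mode in FLINK-13562 can be sketched abstractly: in a two-stage aggregate, the local aggregate's input row is the original table row, while the global aggregate's input row is the local output (group keys first, then partial accumulators). If metadata code resolves an aggregate call's argument index against the wrong row type, a SUM can be handed the VARCHAR group key. Everything below is illustrative; these are not Flink's actual planner types, and the index layout is an assumption for the sketch.

```java
import java.util.Arrays;
import java.util.List;

// Abstract sketch of resolving an agg call's argument against a row type.
public class WrongInputTypeSketch {

    // Mimics the check seen in the stack trace: SUM rejects non-numeric types.
    public static String createSumAggFunction(String fieldType) {
        if (!fieldType.equals("INT") && !fieldType.equals("BIGINT")) {
            throw new IllegalArgumentException(
                "Sum aggregate function does not support type: '" + fieldType + "'");
        }
        return "SUM:" + fieldType;
    }

    public static void main(String[] args) {
        // Table MyTable(a INT, b BIGINT, c VARCHAR); the query groups by c.
        // Local aggregate input row: the original table row.
        List<String> localInputRow = Arrays.asList("INT", "BIGINT", "VARCHAR");
        // Global aggregate input row: local output, with group key c first.
        List<String> globalInputRow = Arrays.asList("VARCHAR", "INT", "BIGINT");

        int sumArgIndex = 0; // SUM(a): index of 'a' in the LOCAL input row

        // Resolving against the correct (local) input row type works...
        System.out.println(createSumAggFunction(localInputRow.get(sumArgIndex)));

        // ...but resolving the same index against the global input row type
        // picks up the VARCHAR group key and fails, as in the exception above.
        try {
            createSumAggFunction(globalInputRow.get(sumArgIndex));
            throw new AssertionError("expected SUM over VARCHAR to be rejected");
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

The fix described by the PR title, passing the correct input type for the local stream group aggregate, corresponds to always resolving the index against `localInputRow` in this sketch.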
[GitHub] [flink] flinkbot edited a comment on issue #9345: [FLINK-13489]Fix the broken heavy deployment e2e test by adjusting config values
flinkbot edited a comment on issue #9345: [FLINK-13489]Fix the broken heavy deployment e2e test by adjusting config values URL: https://github.com/apache/flink/pull/9345#issuecomment-517888386 ## CI report: * caf6867dcd541dc4f5a95d13fba7ae69bc0936a2 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121829127)
[jira] [Comment Edited] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
[ https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899343#comment-16899343 ] Yingjie Cao edited comment on FLINK-13489 at 8/3/19 3:34 AM: - [~StephanEwen] I ran the test many times, but encountered the Akka timeout problem only once, and never encountered the heartbeat timeout problem. Unfortunately, I did not get the JM/TM logs of that failure. Later, I modified the test script to print the GC and JM/TM logs and ran the test many times, but the timeout problem did not occur. I noticed the GC times are a little long, many of 2, 3, or 4 seconds (these are for successfully finished jobs). I guess the previous failure may have resulted from GC. Another problem is that the Travis test platform seems unstable; the test time varies. As for containerized.heap-cutoff-min, it is set because it is used in the memory calculation. If the default value (600) is used, the standalone cluster can not start up. I agree with you that this config option should not be considered in standalone mode, but standalone mode seems to reuse the same code (I think that should also be fixed). The following is the exception stack: 2019-08-01 18:42:29,289 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint. org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint. at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501) at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65) Caused by: org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent. 
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:259) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:210) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164) at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163) ... 2 more Caused by: java.lang.IllegalArgumentException: The configuration value 'containerized.heap-cutoff-min'='600' is larger than the total container memory 512 at org.apache.flink.runtime.clusterframework.ContaineredTaskManagerParameters.calculateCutoffMB(ContaineredTaskManagerParameters.java:133) at org.apache.flink.runtime.util.ResourceManagerUtil.getResourceManagerConfiguration(ResourceManagerUtil.java:34) at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:171) ... 6 more was (Author: kevin.cyj): [~StephanEwen] I run the test for many times, but only encountered the akka timeout problem only once, and nerve encountered the heartbeat timeout problem. But unfortunately, I did not get the JM/TM log of that failure. Latter, I modified the test script to print gc and JM/TM log out and run the test for many times, but the timeout problem did not occur. I noticed the gc time is a little long, many 2, 3, 4 seconds (these are for successfully finished job). I guess the previous failure may result by GC. Another problem is that the Travis test platform seems not stable, the test time varies. As for containerized.heap-cutoff-min, it is because it was used for memory calculation. If the default value (600) is used, the standalone cluster can start up. 
I agree with you that this config option should not be considered by standalone mode, but it seems reusing the same code (I think it also should be fixed). The flowing is the exception stack: 2019-08-01 18:42:29,289 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint. org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint. at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501) at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65) Caused by: org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent. at
[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
[ https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899343#comment-16899343 ] Yingjie Cao commented on FLINK-13489: - [~StephanEwen] I ran the test many times, but encountered the akka timeout problem only once, and never encountered the heartbeat timeout problem. Unfortunately, I did not get the JM/TM logs of that failure. Later, I modified the test script to print the GC and JM/TM logs and ran the test many more times, but the timeout problem did not occur. I noticed the GC times are a little long, many taking 2, 3, or 4 seconds (these are for successfully finished jobs). I guess the previous failure may have resulted from GC. Another problem is that the Travis test platform seems unstable; the test time varies. As for containerized.heap-cutoff-min, it is used for the memory calculation. If the default value (600) is used, the standalone cluster cannot start up. I agree with you that this config option should not be considered in standalone mode, but it seems the same code path is reused (I think this should also be fixed). The following is the exception stack: 2019-08-01 18:42:29,289 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint. org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint. at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501) at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65) Caused by: org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent. 
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:259) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:210) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164) at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163) ... 2 more Caused by: java.lang.IllegalArgumentException: The configuration value 'containerized.heap-cutoff-min'='600' is larger than the total container memory 512 at org.apache.flink.runtime.clusterframework.ContaineredTaskManagerParameters.calculateCutoffMB(ContaineredTaskManagerParameters.java:133) at org.apache.flink.runtime.util.ResourceManagerUtil.getResourceManagerConfiguration(ResourceManagerUtil.java:34) at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:171) ... 6 more > Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout > -- > > Key: FLINK-13489 > URL: https://issues.apache.org/jira/browse/FLINK-13489 > Project: Flink > Issue Type: Bug > Components: Test Infrastructure >Reporter: Tzu-Li (Gordon) Tai >Assignee: Yingjie Cao >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > > https://api.travis-ci.org/v3/job/564925128/log.txt > {code} > > The program finished with the following exception: > org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 1b4f1807cc749628cfc1bdf04647527a) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507) > at > org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576) > at >
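The startup failure quoted in the comment above comes from the container memory cutoff check. Below is a minimal sketch of that check in Java, simplified from what `ContaineredTaskManagerParameters.calculateCutoffMB` does; the real method reads its values from a `Configuration` object, and the class name, defaults, and ratio used here are assumptions for illustration only:

```java
public class CutoffSketch {

    // Assumed defaults, mirroring 'containerized.heap-cutoff-ratio' (0.25)
    // and 'containerized.heap-cutoff-min' (600 MB).
    static final double DEFAULT_CUTOFF_RATIO = 0.25;
    static final long DEFAULT_CUTOFF_MIN_MB = 600;

    /** Computes the memory (MB) cut off the container before sizing the heap. */
    static long calculateCutoffMB(long containerMemoryMB, long cutoffMinMB, double cutoffRatio) {
        // The cutoff is the larger of the configured minimum and ratio * container size.
        long cutoff = Math.max(cutoffMinMB, (long) (containerMemoryMB * cutoffRatio));
        if (cutoff >= containerMemoryMB) {
            // This is the condition behind the IllegalArgumentException above:
            // a 600 MB minimum cutoff cannot fit into a 512 MB container.
            throw new IllegalArgumentException(
                    "The configuration value 'containerized.heap-cutoff-min'='" + cutoffMinMB
                            + "' is larger than the total container memory " + containerMemoryMB);
        }
        return cutoff;
    }

    public static void main(String[] args) {
        // 512 MB TaskManager with the default minimum cutoff: fails at startup.
        try {
            calculateCutoffMB(512, DEFAULT_CUTOFF_MIN_MB, DEFAULT_CUTOFF_RATIO);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        // Lowering the minimum to 100 MB (as the fix does) succeeds.
        System.out.println(calculateCutoffMB(512, 100, DEFAULT_CUTOFF_RATIO));
    }
}
```

With the defaults, max(600, 0.25 * 512) = 600 >= 512, so the cluster entrypoint fails; with a 100 MB minimum the cutoff becomes max(100, 128) = 128 and startup proceeds.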
[GitHub] [flink] flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application URL: https://github.com/apache/flink/pull/9336#issuecomment-517610510 ## CI report: * 4fe9e1ba5707fb4d208290116bc172142e6be08a : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121719017) * 346ed33756127b27aed16fc91d8ce81048186c06 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121827648) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
[ https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingjie Cao updated FLINK-13489: Labels: pull-request-available (was: ) > Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout > -- > > Key: FLINK-13489 > URL: https://issues.apache.org/jira/browse/FLINK-13489 > Project: Flink > Issue Type: Bug > Components: Test Infrastructure >Reporter: Tzu-Li (Gordon) Tai >Assignee: Yingjie Cao >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > > https://api.travis-ci.org/v3/job/564925128/log.txt > {code} > > The program finished with the following exception: > org.apache.flink.client.program.ProgramInvocationException: Job failed. > (JobID: 1b4f1807cc749628cfc1bdf04647527a) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507) > at > org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) > at > 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) > at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083) > Caused by: org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247) > ... 21 more > Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager > with id ea456d6a590eca7598c19c4d35e56db9 timed out. 
> at > org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149) > at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190) > at > org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) > at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) > at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > at
[GitHub] [flink] flinkbot commented on issue #9345: [FLINK-13489]Fix the broken heavy deployment e2e test by adjusting config values
flinkbot commented on issue #9345: [FLINK-13489]Fix the broken heavy deployment e2e test by adjusting config values URL: https://github.com/apache/flink/pull/9345#issuecomment-517888386 ## CI report: * caf6867dcd541dc4f5a95d13fba7ae69bc0936a2 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121829127)
[GitHub] [flink] flinkbot commented on issue #9345: Fix the broken heavy deployment e2e test by adjusting config values
flinkbot commented on issue #9345: Fix the broken heavy deployment e2e test by adjusting config values URL: https://github.com/apache/flink/pull/9345#issuecomment-517888074 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands: The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] wsry opened a new pull request #9345: Fix the broken heavy deployment e2e test by adjusting config values
wsry opened a new pull request #9345: Fix the broken heavy deployment e2e test by adjusting config values URL: https://github.com/apache/flink/pull/9345 ## What is the purpose of the change The purpose of this PR is to fix the broken heavy deployment e2e test. Adjusting three config values of the heavy deployment test solves the cluster startup failure and reduces the memory pressure on the JM/TM, which makes the e2e test more stable. ## Brief change log Three config values are changed for the heavy deployment e2e test. - *Decrease the config value of containerized.heap-cutoff-min from the default (600M) to 100M* - *Increase the JobManager heap size from the default (1024M) to 2048M* - *Increase the TaskManager heap size from 512M to 1024M* ## Verifying this change Manually verified the change by running ./flink-end-to-end-tests/run-single-test.sh ./flink-end-to-end-tests/test-scripts/test_heavy_deployment.sh 500 times locally, and by modifying the .travis.yml config file to trigger the heavy deployment test on push, running it dozens of times on Travis. All tests passed. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented)
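The three adjustments above can be expressed as a flink-conf.yaml fragment. This is a sketch, not the PR's actual diff: it assumes the Flink 1.9 configuration keys `containerized.heap-cutoff-min`, `jobmanager.heap.size`, and `taskmanager.heap.size`; the heavy deployment test script may set these values by a different mechanism.

```yaml
# Lower the minimum container memory cutoff so small TaskManagers can start
# (the 600 MB default exceeds a 512 MB container and aborts startup).
containerized.heap-cutoff-min: 100
# Give the JobManager more headroom for the heavy deployment (many tasks/slots).
jobmanager.heap.size: 2048m
# Double the TaskManager heap to reduce GC pressure during the test.
taskmanager.heap.size: 1024m
```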
[GitHub] [flink] flinkbot edited a comment on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories
flinkbot edited a comment on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories URL: https://github.com/apache/flink/pull/9337#issuecomment-517628740 ## CI report: * cc8b977ff855cb18e77395fb020db509c6e0108c : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121725162) * baea855d72c392a32ba4ad0b8a4429b3f400bb97 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121824278)
[GitHub] [flink] flinkbot edited a comment on issue #9344: [FLINK-13532][docs] Fix broken links of zh docs
flinkbot edited a comment on issue #9344: [FLINK-13532][docs] Fix broken links of zh docs URL: https://github.com/apache/flink/pull/9344#issuecomment-517800069 ## CI report: * f66b26063868b2a7f0f71a869cf010e6d8e4644f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121795027)
[GitHub] [flink] flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application URL: https://github.com/apache/flink/pull/9336#issuecomment-517610510 ## CI report: * 4fe9e1ba5707fb4d208290116bc172142e6be08a : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121719017) * 346ed33756127b27aed16fc91d8ce81048186c06 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121827648)
[GitHub] [flink] flinkbot edited a comment on issue #9311: FLINK-13524 [docs] Fixed typo in Builder method name from Elasticsearch example
flinkbot edited a comment on issue #9311: FLINK-13524 [docs] Fixed typo in Builder method name from Elasticsearch example URL: https://github.com/apache/flink/pull/9311#issuecomment-517147867 ## CI report: * 247bcdc2c1cda7a26c2170c5d6528c6ac27a6031 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121525657) * ecc938048a7db22305a40e586e827a1c7eb1e2af : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121789888)
[GitHub] [flink] flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors URL: https://github.com/apache/flink/pull/9342#issuecomment-517770642 ## CI report: * 76704f271662b57cbe36679d3d249bcdd7fdf66a : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121784366)
[jira] [Commented] (FLINK-10392) Remove legacy mode
[ https://issues.apache.org/jira/browse/FLINK-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899314#comment-16899314 ] TisonKun commented on FLINK-10392: -- [~till.rohrmann] As I see it, all subtasks under FLINK-10392 have been resolved, and the legacy jobmanager, taskmanager, resourcemanager, scheduler and slot implementations have been removed. Could you please make a general check whether we could close this umbrella issue and announce that it has been done? > Remove legacy mode > -- > > Key: FLINK-10392 > URL: https://issues.apache.org/jira/browse/FLINK-10392 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Reporter: Till Rohrmann >Assignee: Till Rohrmann >Priority: Major > > This issue is the umbrella issue to remove the legacy mode code from Flink. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Comment Edited] (FLINK-10392) Remove legacy mode
[ https://issues.apache.org/jira/browse/FLINK-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899314#comment-16899314 ] TisonKun edited comment on FLINK-10392 at 8/3/19 1:00 AM: -- [~till.rohrmann] As I see it all subtasks under FLINK-10392 have been resolved and legacy jobmanager, taskmanager, resourcemanager, scheduler and slot implementation have been removed. Could you please make a general check whether we could close this umbrella issue and announce that it has been done? was (Author: tison): [~till.rohrmann] As I see it all subtasks under FLINK-10392 have been resolved and legacy jobmanager, taskmanager, resourcemanager, scheduler and slot implementation has been remove. Could you please make a general check whether we could close this umbrella issue and announce that it has been done? > Remove legacy mode > -- > > Key: FLINK-10392 > URL: https://issues.apache.org/jira/browse/FLINK-10392 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Reporter: Till Rohrmann >Assignee: Till Rohrmann >Priority: Major > > This issue is the umbrella issue to remove the legacy mode code from Flink.
[jira] [Commented] (FLINK-11705) Port org.apache.flink.runtime.testingUtils.TestingUtils to Java
[ https://issues.apache.org/jira/browse/FLINK-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899310#comment-16899310 ] TisonKun commented on FLINK-11705: -- As the situation changes, {{TestingUtils}} contains nothing about legacy mode (which should be removed by FLINK-10392). It is still valid that we make an effort to get rid of Scala in {{flink-runtime}}. However, this issue is no longer a subtask of FLINK-10392, so I have changed the field correspondingly. > Port org.apache.flink.runtime.testingUtils.TestingUtils to Java > --- > > Key: FLINK-11705 > URL: https://issues.apache.org/jira/browse/FLINK-11705 > Project: Flink > Issue Type: Improvement > Components: Tests >Reporter: Shimin Yang >Assignee: Shimin Yang >Priority: Minor >
[jira] [Updated] (FLINK-11705) Port org.apache.flink.runtime.testingUtils.TestingUtils to Java
[ https://issues.apache.org/jira/browse/FLINK-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] TisonKun updated FLINK-11705: - Issue Type: Improvement (was: Sub-task) Parent: (was: FLINK-10392) > Port org.apache.flink.runtime.testingUtils.TestingUtils to Java > --- > > Key: FLINK-11705 > URL: https://issues.apache.org/jira/browse/FLINK-11705 > Project: Flink > Issue Type: Improvement > Components: Tests >Reporter: Shimin Yang >Assignee: Shimin Yang >Priority: Minor >
[jira] [Updated] (FLINK-11705) Port org.apache.flink.runtime.testingUtils.TestingUtils to Java
[ https://issues.apache.org/jira/browse/FLINK-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] TisonKun updated FLINK-11705: - Summary: Port org.apache.flink.runtime.testingUtils.TestingUtils to Java (was: remove org.apache.flink.runtime.testingUtils.TestingUtils) > Port org.apache.flink.runtime.testingUtils.TestingUtils to Java > --- > > Key: FLINK-11705 > URL: https://issues.apache.org/jira/browse/FLINK-11705 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Shimin Yang >Assignee: Shimin Yang >Priority: Minor >
[jira] [Updated] (FLINK-11705) remove org.apache.flink.runtime.testingUtils.TestingUtils
[ https://issues.apache.org/jira/browse/FLINK-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] TisonKun updated FLINK-11705: - Priority: Minor (was: Major) > remove org.apache.flink.runtime.testingUtils.TestingUtils > - > > Key: FLINK-11705 > URL: https://issues.apache.org/jira/browse/FLINK-11705 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Shimin Yang >Assignee: Shimin Yang >Priority: Minor >
[GitHub] [flink] flinkbot edited a comment on issue #9274: [FLINK-13495][table-planner-blink] blink-planner should support decimal precision to connector
flinkbot edited a comment on issue #9274: [FLINK-13495][table-planner-blink] blink-planner should support decimal precision to connector URL: https://github.com/apache/flink/pull/9274#issuecomment-516352701 ## CI report: * 6e259d68552bf14b3c0f593706d2c879d32b294e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121223163) * d7506a84938a31ed0bee103f9fa6050437d26f34 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121271962) * 56fbf8a591e1afdacff34ca8106e4409881fc86c : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121781253)
[jira] [Commented] (FLINK-13505) Translate "Java Lambda Expressions" page into Chinese
[ https://issues.apache.org/jira/browse/FLINK-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899303#comment-16899303 ] WangHengWei commented on FLINK-13505: - Hi [~jark], I have done the job. But when I was opening a PR, it had two commits; the other one is my previous translation [FLINK-13405]. I think that might be a problem. Should I delete my repo, fork again, and then open the PR? > Translate "Java Lambda Expressions" page into Chinese > - > > Key: FLINK-13505 > URL: https://issues.apache.org/jira/browse/FLINK-13505 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Affects Versions: 1.10.0 >Reporter: WangHengWei >Assignee: WangHengWei >Priority: Major > > The page url is > [https://ci.apache.org/projects/flink/flink-docs-master/dev/java_lambdas.html]. > The markdown file is located in " flink/docs/dev/java_lambdas.zh.md"
[GitHub] [flink] flinkbot edited a comment on issue #9219: [FLINK-13404] [table] Port csv descriptors & factories to flink-table-api-java-bridge
flinkbot edited a comment on issue #9219: [FLINK-13404] [table] Port csv descriptors & factories to flink-table-api-java-bridge URL: https://github.com/apache/flink/pull/9219#issuecomment-514608060 ## CI report: * 6b9a26ad0d626ca2c3aae3d371a3b376b0093b87 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120440827) * 4f1ecd9257b3be1a2cba1955191b07f2c9eb26f4 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120606118) * 335841b7bd4e32bf1ceff5426eee9e3c742124f1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120625338) * 2c7b5d12b6af5a2a0892a4ccc6afb7155b56e1a5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121073532) * 3f10ccbc74eb839e2ea3a5d1a0be3dd7a74759b2 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121697315) * 90aaddac926950db284ac3784434d15ab09d1c86 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/121744801) * 9f68d5ceef4f3fb1c53f5c72090b2dd0a8b04078 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121775606)
[GitHub] [flink] flinkbot edited a comment on issue #9328: [FLINK-13521][sql-client] Allow setting configurations in SQL CLI
flinkbot edited a comment on issue #9328: [FLINK-13521][sql-client] Allow setting configurations in SQL CLI URL: https://github.com/apache/flink/pull/9328#issuecomment-517534712 ## CI report: * a9b4a82d084b56a33355e6819462a79d0d5441ac : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121699234) * 2c2f2353f8939126d8eb4f065d2aef5294e02feb : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121699657) * 6b8668ebea88579f3992589a25425c23feeac9f1 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/121714020) * bbfbb10797224722dc92255e1576847205c59cdb : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/121742257) * a0fa0c26b433b3b0f4a46ee4c669481a6c8c5302 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121774330)
[GitHub] [flink] flinkbot edited a comment on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories
flinkbot edited a comment on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories URL: https://github.com/apache/flink/pull/9337#issuecomment-517628740 ## CI report: * cc8b977ff855cb18e77395fb020db509c6e0108c : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121725162) * baea855d72c392a32ba4ad0b8a4429b3f400bb97 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121824278)
[GitHub] [flink] zjuwangg commented on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories
zjuwangg commented on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories URL: https://github.com/apache/flink/pull/9337#issuecomment-517874350 > @zjuwangg Thanks for working on this. PR LGTM, but please make sure it doesn't break any test for Hive-1.2.1 as well. > BTW, how about the jms dependencies? Is it possible to avoid that too? Yep, I tested with profile hive-1.2.1 on my box and removed the jms dependency in the latest commit.
[GitHub] [flink] flinkbot edited a comment on issue #9340: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask
flinkbot edited a comment on issue #9340: [FLINK-13384][runtime] Fix back pressure sampling for SourceStreamTask URL: https://github.com/apache/flink/pull/9340#issuecomment-517724315 ## CI report: * 05dcd415f3be0d030fa25e56b5144d5c08829dc6 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/121765235) * cd5f451ba7eadb1c5b649e75c3029945cc66767a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121770607)
[GitHub] [flink] zjuwangg commented on a change in pull request #9239: [FLINK-13385][hive]Align Hive data type mapping with FLIP-37
zjuwangg commented on a change in pull request #9239: [FLINK-13385][hive]Align Hive data type mapping with FLIP-37 URL: https://github.com/apache/flink/pull/9239#discussion_r310324433 ## File path: docs/dev/table/catalog.md ## @@ -189,11 +190,14 @@ The following limitations in Hive's data types impact the mapping between Flink Review comment: It's not a mistake according to the [hive char type](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-char) definition.
[GitHub] [flink] zjuwangg edited a comment on issue #9310: [FLINK-13190][hive]add test to verify partition pruning for HiveTableSource
zjuwangg edited a comment on issue #9310: [FLINK-13190][hive]add test to verify partition pruning for HiveTableSource URL: https://github.com/apache/flink/pull/9310#issuecomment-517869563 The Travis has passed, ready to merge @wuchong
[GitHub] [flink] zjuwangg commented on issue #9310: [FLINK-13190][hive]add test to verify partition pruning for HiveTableSource
zjuwangg commented on issue #9310: [FLINK-13190][hive]add test to verify partition pruning for HiveTableSource URL: https://github.com/apache/flink/pull/9310#issuecomment-517869563 The Travis has passed, ready to merge
[GitHub] [flink] flinkbot edited a comment on issue #7757: [FLINK-11630] Triggers the termination of all running Tasks when shutting down TaskExecutor
flinkbot edited a comment on issue #7757: [FLINK-11630] Triggers the termination of all running Tasks when shutting down TaskExecutor URL: https://github.com/apache/flink/pull/7757#issuecomment-517730639

## CI report:

* d7112f1e0950f625c3cd667bf5086432e310e372 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121767936)
[GitHub] [flink] flinkbot edited a comment on issue #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground
flinkbot edited a comment on issue #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground URL: https://github.com/apache/flink/pull/9192#issuecomment-513664474

## CI report:

* 74a251ff6fa8c2ff9b13ae5869aacf90146024aa : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119964791)
* 671dd6c48049ec526030cfc2b62b853c81ed01ab : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119965292)
* 0505f7e4164015d4c604963787e6111fa55d5d9f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121190039)
* 93640d7051de136ad6614610cfcb2999a5e0e947 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121202130)
* 4a3be96ab1b2d225e2dd30624f1b341ef540f67b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121202860)
* cd3fb4579eff841e8e23906307c153d44c9ce846 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121775635)
[GitHub] [flink] flinkbot edited a comment on issue #9343: [FLINK-13508][1.8][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
flinkbot edited a comment on issue #9343: [FLINK-13508][1.8][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9343#issuecomment-517783288

## CI report:

* 0425fb36fe495cdca6e54523567e37d9bb2b8132 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121788927)
[GitHub] [flink] flinkbot edited a comment on issue #8944: [FLINK-13058][Table SQL / Runtime] Avoid memory copy for the trimming operations of BinaryString
flinkbot edited a comment on issue #8944: [FLINK-13058][Table SQL / Runtime] Avoid memory copy for the trimming operations of BinaryString URL: https://github.com/apache/flink/pull/8944#issuecomment-517730683

## CI report:

* 7cbfc8c4a202fb6102a8e14f353a29207b4cd2da : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121767879)
[GitHub] [flink] flinkbot edited a comment on issue #9341: [FLINK-13508][1.9][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
flinkbot edited a comment on issue #9341: [FLINK-13508][1.9][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9341#issuecomment-517730722

## CI report:

* 348dc519e561c02a5270a1536426a869ff3ec041 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121767850)
[GitHub] [flink] flinkbot edited a comment on issue #9291: [FLINK-13508][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
flinkbot edited a comment on issue #9291: [FLINK-13508][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9291#issuecomment-516816515

## CI report:

* 01cdf79b07aa1a965a7df06c399fd937f5786c21 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121401237)
* 0b92c14db86c379989a3d5017ba83cb66e34782a : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121766555)
[GitHub] [flink] zhijiangW commented on a change in pull request #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength
zhijiangW commented on a change in pull request #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength URL: https://github.com/apache/flink/pull/8559#discussion_r310306973

## File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGateTest.java

@@ -559,6 +561,49 @@ public void testUpdateUnknownInputChannel() throws Exception {
 		}
 	}

+	@Test
+	public void testQueuedBuffers() throws Exception {
+		final NettyShuffleEnvironment network = createNettyShuffleEnvironment();
+
+		final ResultPartition localResultPartition = new ResultPartitionBuilder()
+			.setResultPartitionManager(network.getResultPartitionManager())
+			.setupBufferPoolFactoryFromNettyShuffleEnvironment(network)
+			.build();
+
+		final SingleInputGate inputGate = createInputGate(network, 2, ResultPartitionType.PIPELINED);
+
+		final ResultPartitionID localResultPartitionId = localResultPartition.getPartitionId();
+
+		final RemoteInputChannel remoteInputChannel = InputChannelBuilder.newBuilder()
+			.setChannelIndex(1)
+			.setupFromNettyShuffleEnvironment(network)
+			.setConnectionManager(new TestingConnectionManager())
+			.buildRemoteAndSetToGate(inputGate);
+
+		InputChannelBuilder.newBuilder()
+			.setChannelIndex(0)
+			.setPartitionId(localResultPartitionId)
+			.setupFromNettyShuffleEnvironment(network)
+			.setConnectionManager(new TestingConnectionManager())
+			.buildLocalAndSetToGate(inputGate);
+
+		try {
+			localResultPartition.setup();
+			inputGate.setup();
+
+			remoteInputChannel.onBuffer(TestBufferFactory.createBuffer(1), 0, 0);
+			assertEquals(1, inputGate.getNumberOfQueuedBuffers());
+
+			localResultPartition.addBufferConsumer(BufferBuilderTestUtils.createFilledBufferConsumer(1), 0);
+			assertEquals(2, inputGate.getNumberOfQueuedBuffers());
+		} finally {
+			localResultPartition.release();
+			inputGate.close();
+			network.close();
+		}
+

Review comment: nit: remove this empty line
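The metric change the PR title describes can be illustrated with a minimal sketch. This is an assumption-laden illustration, not Flink's actual `InputChannel` API: the gate's queue length should be the sum of queued buffers over all channels, local as well as remote.

```java
import java.util.List;

public class InputQueueLengthSketch {

    // Hypothetical channel abstraction; Flink's real InputChannel API differs.
    interface InputChannel {
        int getNumberOfQueuedBuffers();
    }

    // The point of the fix in miniature: sum queued buffers over *all*
    // channels, local as well as remote, instead of only the remote ones.
    static int inputQueueLength(List<InputChannel> channels) {
        return channels.stream()
            .mapToInt(InputChannel::getNumberOfQueuedBuffers)
            .sum();
    }

    public static void main(String[] args) {
        InputChannel remote = () -> 1; // one buffer arrived over the network
        InputChannel local = () -> 2;  // buffers queued in a local subpartition
        System.out.println(inputQueueLength(List.of(remote, local))); // prints 3
    }
}
```

This mirrors what the quoted test asserts: a buffer pushed through the remote channel and one added to the local result partition both count toward `getNumberOfQueuedBuffers()`.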
[GitHub] [flink] zhijiangW commented on a change in pull request #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength
zhijiangW commented on a change in pull request #8559: [FLINK-12576][Network, Metrics]Take localInputChannel into account when compute inputQueueLength URL: https://github.com/apache/flink/pull/8559#discussion_r310307040

## File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGateTest.java

@@ -559,6 +561,49 @@ public void testUpdateUnknownInputChannel() throws Exception {
 		}
 	}

+	@Test
+	public void testQueuedBuffers() throws Exception {
+		final NettyShuffleEnvironment network = createNettyShuffleEnvironment();
+
+		final ResultPartition localResultPartition = new ResultPartitionBuilder()

Review comment: localResultPartition -> resultPartition
[GitHub] [flink] flinkbot edited a comment on issue #9099: [FLINK-13237][table-planner-blink] Add expression table api test to blink
flinkbot edited a comment on issue #9099: [FLINK-13237][table-planner-blink] Add expression table api test to blink URL: https://github.com/apache/flink/pull/9099#issuecomment-510762700

## CI report:

* fb347fe30a5e894e388837ed2de4f9b60513d7b1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/118885023)
* 48382540ba07e7096f2b1f1548c0703fdd5ec8a1 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120638945)
* e08e1be6e80933dbf7526088691b0dced7673025 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121761606)
[GitHub] [flink] flinkbot edited a comment on issue #9339: [FLINK-13555][runtime] SlotPool fails batch slot requests immediately if they are unfulfillable.
flinkbot edited a comment on issue #9339: [FLINK-13555][runtime] SlotPool fails batch slot requests immediately if they are unfulfillable. URL: https://github.com/apache/flink/pull/9339#issuecomment-517712846

## CI report:

* dfe5ef072dadfc0e1510f1923911304c635e54af : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121760453)
[GitHub] [flink] flinkbot edited a comment on issue #9271: [FLINK-13384][1.9][runtime] Fix back pressure sampling for SourceStreamTask
flinkbot edited a comment on issue #9271: [FLINK-13384][1.9][runtime] Fix back pressure sampling for SourceStreamTask URL: https://github.com/apache/flink/pull/9271#issuecomment-516306550

## CI report:

* abb4ae6bde1f3d1eac787c850c04614e7c5ff907 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121204525)
* aa0f69e2607c05e6ad626866895e2e4b44dc2b75 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121277961)
* c7ef8872d7fc8fdab998af2bd5dd993014a8c786 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121279063)
* 5ac9e7769a874f3508335cfb8f2012a5cc095df1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121281313)
* b2ba35e3d9c4709394fce5c76e07107e1bc81295 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121399455)
* 8f45d157ca190bd45b6efa3507c101add9bfbc15 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121400303)
* 0aefd592702440d7d930b5fe51b0b4940dca0e32 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121432979)
* 30ba30a3eb03fe93821c5a82c8a3852dd20bed01 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121436364)
* 183be5a6fcfb90f9ed56d92c0d0397b11acfcc5f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121484870)
* 64aac2f1d58288e87fb699236e348c8e7d629c52 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121582974)
* b23d7f2ec907b5068657132f7c9660fbe0512aa3 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121755846)
[GitHub] [flink] flinkbot edited a comment on issue #9318: [FLINK-13044][s3][fs] Fix handling of relocated amazon classes
flinkbot edited a comment on issue #9318: [FLINK-13044][s3][fs] Fix handling of relocated amazon classes URL: https://github.com/apache/flink/pull/9318#issuecomment-517234057

## CI report:

* bb10597bd4b28c0567bea08e525daa6fa2344791 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121565755)
* 7a246ec952f518d3bb6ae4b879fae5deb2b2b739 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121755810)
[GitHub] [flink] bowenli86 commented on issue #9239: [FLINK-13385][hive]Align Hive data type mapping with FLIP-37
bowenli86 commented on issue #9239: [FLINK-13385][hive]Align Hive data type mapping with FLIP-37 URL: https://github.com/apache/flink/pull/9239#issuecomment-517832519 LGTM, @twalthr can you please help to merge it? I'm on paternity leave now. Thanks!
[GitHub] [flink] bowenli86 commented on a change in pull request #9239: [FLINK-13385][hive]Align Hive data type mapping with FLIP-37
bowenli86 commented on a change in pull request #9239: [FLINK-13385][hive]Align Hive data type mapping with FLIP-37 URL: https://github.com/apache/flink/pull/9239#discussion_r310284673

## File path: docs/dev/table/catalog.md

@@ -189,11 +190,14 @@ The following limitations in Hive's data types impact the mapping between Flink

Review comment: should be "*minimum* length is 255". It was my mistake, but would be great to fix that too
[GitHub] [flink] flinkbot edited a comment on issue #9268: [FLINK-13452] Ensure to fail global when exception happens during reseting tasks of regions
flinkbot edited a comment on issue #9268: [FLINK-13452] Ensure to fail global when exception happens during reseting tasks of regions URL: https://github.com/apache/flink/pull/9268#issuecomment-516257535

## CI report:

* 59b1a6d50b025925afd14a09b2b95a507889800a : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121189644)
* b732229f0644a896b51245096fc0c7b7e19f0b02 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121433023)
* 073d8158a43ed8ac9ed44a3e563c5e8aca8c574a : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/121712585)
* d7c1f1231efa72f71c01007fdd5546ef70012452 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121750291)
[GitHub] [flink] tillrohrmann closed pull request #9245: [FLINK-13334][coordination] Remove legacy implementation of slot
tillrohrmann closed pull request #9245: [FLINK-13334][coordination] Remove legacy implementation of slot URL: https://github.com/apache/flink/pull/9245
[jira] [Resolved] (FLINK-13334) Remove legacy slot implementation
[ https://issues.apache.org/jira/browse/FLINK-13334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann resolved FLINK-13334.
Resolution: Done

Done via f75d8e1fbb16ba08ab5a625f50e988c708a8a2bf

> Remove legacy slot implementation
>
> Key: FLINK-13334
> URL: https://issues.apache.org/jira/browse/FLINK-13334
> Project: Flink
> Issue Type: Sub-task
> Components: Runtime / Coordination
> Affects Versions: 1.10.0
> Reporter: TisonKun
> Assignee: TisonKun
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.10.0
> Time Spent: 10m
> Remaining Estimate: 0h
>
> cc [~till.rohrmann] [~srichter]
> From my investigation, Flink currently uses {{SlotSharingManager}} and {{SlotSharingGroupId}} to achieve slot sharing, and thus {{SlotSharingGroupAssignment}}, {{SlotSharingGroup}} and {{SharedSlot}} are all legacy concepts.
> Notice that the ongoing scheduler redesign frequently touches tests based on the legacy slot/instance logic, or even uses it for testing. I'd like to nudge this process to totally remove the legacy code from our code base.
> Also, I attached a patch on FLINK-12179 that removes {{Instance}}. With the current contribution workflow, your shepherds are significant :-)

-- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] flinkbot edited a comment on issue #9245: [FLINK-13334][coordination] Remove legacy implementation of slot
flinkbot edited a comment on issue #9245: [FLINK-13334][coordination] Remove legacy implementation of slot URL: https://github.com/apache/flink/pull/9245#issuecomment-515507399

## CI report:

* 2c7c67a0d15016887e9c917fee0bfe34e9d0e131 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120879420)
* 6024d92f8e533fcee4877714bf57fa267bfa79da : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120923892)
* 0b56c91db8a7e098cf744346834080a02341bdd0 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121744781)
[jira] [Commented] (FLINK-13477) Containerized TaskManager killed because of lack of memory overhead
[ https://issues.apache.org/jira/browse/FLINK-13477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899158#comment-16899158 ]

Xintong Song commented on FLINK-13477:

Hi [~b.hanotte], I'm not against bringing in any changes before the FLIP is implemented; I'm just trying to provide some related information. The FLIP I mentioned is actually planned for release 1.10. Since release 1.9 is already frozen, the earliest the changes in this issue can be released is also 1.10, so maybe it makes sense to wait a bit for the FLIP doc and see how it works with this issue. The situation I'm trying to avoid is that we make these changes now and soon have to rework them for the FLIP, even before the changes take effect in any release. It is also possible that, after a full discussion and vote in the community, we decide not to accept the FLIP or to postpone it to later releases. In that case, this issue would still be a good alternative solution for the next version.

> Containerized TaskManager killed because of lack of memory overhead
>
> Key: FLINK-13477
> URL: https://issues.apache.org/jira/browse/FLINK-13477
> Project: Flink
> Issue Type: Improvement
> Components: Deployment / Mesos, Deployment / YARN
> Affects Versions: 1.9.0
> Reporter: Benoit Hanotte
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Currently, the `-XX:MaxDirectMemorySize` parameter is set as:
> `MaxDirectMemorySize = containerMemoryMB - heapSizeMB`
> (see https://github.com/apache/flink/blob/7fec4392b21b07c69ba15ea554731886f181609e/flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/ContaineredTaskManagerParameters.java#L162)
> However, as explained at https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html, `MaxDirectMemorySize` only sets the maximum amount of memory that can be used for direct buffers, so the amount of off-heap memory actually used can be greater than that value, leading to the container being killed by Mesos or Yarn when it exceeds the allocated memory.
> In addition, users might want to allocate off-heap memory through native code, in which case they will want to keep some of the container memory free and unallocated by Flink.
> To work around this issue, we currently set the following parameter:
> {code:java}
> -Dcontainerized.taskmanager.env.FLINK_ENV_JAVA_OPTS='-XX:MaxDirectMemorySize=600m'
> {code}
> which overrides the value that Flink picks (744M in this case) with a lower one to keep some overhead memory in the TaskManager containers. However, this is an "ugly" hack, as it goes around the clever memory allocation that Flink performs and bypasses the sanity checks done in `ContaineredTaskManagerParameters`.
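The sizing formula quoted in the issue description can be sketched as follows. This is a minimal illustration with a hypothetical helper name; the real computation lives in `ContaineredTaskManagerParameters`:

```java
public class DirectMemorySizing {

    // Hypothetical helper mirroring the formula quoted in the issue:
    // MaxDirectMemorySize = containerMemoryMB - heapSizeMB.
    // Everything not reserved for the heap is handed to direct buffers,
    // leaving no headroom for other native allocations.
    static long maxDirectMemorySizeMB(long containerMemoryMB, long heapSizeMB) {
        return containerMemoryMB - heapSizeMB;
    }

    public static void main(String[] args) {
        // Example: a 2048 MB container with a 1024 MB heap gets the whole
        // remaining 1024 MB as the direct-memory limit.
        System.out.println(maxDirectMemorySizeMB(2048, 1024)); // prints 1024
    }
}
```

Because the two terms sum to the full container size, any native memory allocated outside the direct-buffer accounting pushes the process past the container limit, which is exactly the OOM-kill scenario the issue describes.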
[GitHub] [flink] flinkbot edited a comment on issue #8542: [FLINK-10707][web-dashboard] flink cluster overview dashboard improvements
flinkbot edited a comment on issue #8542: [FLINK-10707][web-dashboard] flink cluster overview dashboard improvements URL: https://github.com/apache/flink/pull/8542#issuecomment-517730662

## CI report:

* ea5b9fc9b374fa895cefe921a3d4e99f8e12d3f2 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121767898)
[GitHub] [flink] flinkbot edited a comment on issue #9024: [FLINK-13119] add blink table config to documentation
flinkbot edited a comment on issue #9024: [FLINK-13119] add blink table config to documentation URL: https://github.com/apache/flink/pull/9024#issuecomment-512084139

## CI report:

* 1e4a2e9a584232fd8f5b441567190e1149b6f72f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119409238)
* 0e651b1490efc20b8f974c651dbee6c061548a9e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119638477)
* 88dffe037b8d71af38c5cb0c1d2d8452263e5bc0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119766372)
* 090e45f4b1515e2e808577355e71981fda2be4fa : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/119969718)
* e0ff1434b2a0fae8a300bcf473f6365a5663d867 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121734928)
[GitHub] [flink] flinkbot edited a comment on issue #9236: [FLINK-13283][FLINK-13490][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support and null checking
flinkbot edited a comment on issue #9236: [FLINK-13283][FLINK-13490][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support and null checking URL: https://github.com/apache/flink/pull/9236#issuecomment-515390325

## CI report:

* 1135cc72f00606c7a230714838c938068887ce23 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/120833949)
* a1070517ff96b110db9a38e3daf28e92eccf236d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121193197)
* 10b0cec1d08270287b5e3f14f03bdb4d34572670 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121249934)
* 10ce546af6ea71fd2df45c63e7d57d148f23ef01 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121545423)
* c4c5ea9e073c46df4632486192ea2ef92f26b553 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121719995)
* 001f7fca023fcf142b4b744fbc4affd424a2d152 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121731967)
[jira] [Commented] (FLINK-13532) Broken links in documentation
[ https://issues.apache.org/jira/browse/FLINK-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899123#comment-16899123 ]

Biao Liu commented on FLINK-13532:

It seems that several documents were added recently without a zh version. I'm not sure whether there must be a zh version for each document; I have left a message to [~jark]. For now, just adding these missing docs to pass the checking. [~xuefuz], you could override these docs when the other hive ticket is finished.

> Broken links in documentation
>
> Key: FLINK-13532
> URL: https://issues.apache.org/jira/browse/FLINK-13532
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Affects Versions: 1.9.0, 1.10.0
> Reporter: Chesnay Schepler
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 1.9.0
> Time Spent: 10m
> Remaining Estimate: 0h
>
> {code:java}
> [2019-07-31 15:58:08] ERROR `/zh/dev/table/hive_integration_example.html' not found.
> [2019-07-31 15:58:10] ERROR `/zh/dev/table/types.html' not found.
> [2019-07-31 15:58:10] ERROR `/zh/dev/table/hive_integration.html' not found.
> [2019-07-31 15:58:14] ERROR `/zh/dev/restart_strategies.html' not found.
> http://localhost:4000/zh/dev/table/hive_integration_example.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/types.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/hive_integration.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/restart_strategies.html:
> Remote file does not exist -- broken link!!!
> {code}
[GitHub] [flink] flinkbot commented on issue #9344: [FLINK-13532][docs] Fix broken links of zh docs
flinkbot commented on issue #9344: [FLINK-13532][docs] Fix broken links of zh docs URL: https://github.com/apache/flink/pull/9344#issuecomment-517800069

## CI report:

* f66b26063868b2a7f0f71a869cf010e6d8e4644f : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121795027)
[GitHub] [flink] flinkbot edited a comment on issue #9338: Release 1.9FLINK-13461
flinkbot edited a comment on issue #9338: Release 1.9FLINK-13461 URL: https://github.com/apache/flink/pull/9338#issuecomment-517631601

## CI report:

* bf99b26cf9a452ebb14b6ab7b10003ad1c8a4cba : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121726292)
[GitHub] [flink] flinkbot commented on issue #9344: [FLINK-13532][docs] Fix broken links of zh docs
flinkbot commented on issue #9344: [FLINK-13532][docs] Fix broken links of zh docs URL: https://github.com/apache/flink/pull/9344#issuecomment-517797464

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-13532) Broken links in documentation
[ https://issues.apache.org/jira/browse/FLINK-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-13532:
Labels: pull-request-available (was: )

> Broken links in documentation
>
> Key: FLINK-13532
> URL: https://issues.apache.org/jira/browse/FLINK-13532
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Affects Versions: 1.9.0, 1.10.0
> Reporter: Chesnay Schepler
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 1.9.0
>
> {code:java}
> [2019-07-31 15:58:08] ERROR `/zh/dev/table/hive_integration_example.html' not found.
> [2019-07-31 15:58:10] ERROR `/zh/dev/table/types.html' not found.
> [2019-07-31 15:58:10] ERROR `/zh/dev/table/hive_integration.html' not found.
> [2019-07-31 15:58:14] ERROR `/zh/dev/restart_strategies.html' not found.
> http://localhost:4000/zh/dev/table/hive_integration_example.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/types.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/hive_integration.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/restart_strategies.html:
> Remote file does not exist -- broken link!!!
> {code}
[GitHub] [flink] ifndef-SleePy opened a new pull request #9344: [FLINK-13532][docs] Fix broken links of zh docs
ifndef-SleePy opened a new pull request #9344: [FLINK-13532][docs] Fix broken links of zh docs URL: https://github.com/apache/flink/pull/9344 ## What is the purpose of the change * Fix broken links of zh docs ## Brief change log * Just copy the missing docs from the English version ## Verifying this change * Run docs/check_links.sh ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no - If yes, how is the feature documented? not applicable
[GitHub] [flink] flinkbot edited a comment on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories
flinkbot edited a comment on issue #9337: [FLINK-13475][hive]Reduce dependency on third-party maven repositories URL: https://github.com/apache/flink/pull/9337#issuecomment-517628740 ## CI report: * cc8b977ff855cb18e77395fb020db509c6e0108c : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121725162) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] tweise closed pull request #9136: [FLINK-12768][tests] FlinkKinesisConsumerTest.testSourceSynchronization flakiness
tweise closed pull request #9136: [FLINK-12768][tests] FlinkKinesisConsumerTest.testSourceSynchronization flakiness URL: https://github.com/apache/flink/pull/9136
[jira] [Comment Edited] (FLINK-13532) Broken links in documentation
[ https://issues.apache.org/jira/browse/FLINK-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899075#comment-16899075 ] Xuefu Zhang edited comment on FLINK-13532 at 8/2/19 5:50 PM: - Sure. Will do. Documentation work for Hive is currently blocked by FLINK-13501. Will resume the work after that's fixed. was (Author: xuefuz): Sure. Will do. > Broken links in documentation > - > > Key: FLINK-13532 > URL: https://issues.apache.org/jira/browse/FLINK-13532 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.9.0, 1.10.0 >Reporter: Chesnay Schepler >Priority: Blocker > Fix For: 1.9.0 > > > {code:java} > [2019-07-31 15:58:08] ERROR `/zh/dev/table/hive_integration_example.html' not > found. > [2019-07-31 15:58:10] ERROR `/zh/dev/table/types.html' not found. > [2019-07-31 15:58:10] ERROR `/zh/dev/table/hive_integration.html' not found. > [2019-07-31 15:58:14] ERROR `/zh/dev/restart_strategies.html' not found. > http://localhost:4000/zh/dev/table/hive_integration_example.html: > Remote file does not exist -- broken link!!! > -- > http://localhost:4000/zh/dev/table/types.html: > Remote file does not exist -- broken link!!! > http://localhost:4000/zh/dev/table/hive_integration.html: > Remote file does not exist -- broken link!!! > -- > http://localhost:4000/zh/dev/restart_strategies.html: > Remote file does not exist -- broken link!!!{code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13532) Broken links in documentation
[ https://issues.apache.org/jira/browse/FLINK-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899075#comment-16899075 ] Xuefu Zhang commented on FLINK-13532: - Sure. Will do. > Broken links in documentation > - > > Key: FLINK-13532 > URL: https://issues.apache.org/jira/browse/FLINK-13532 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.9.0, 1.10.0 >Reporter: Chesnay Schepler >Priority: Blocker > Fix For: 1.9.0 > > > {code:java} > [2019-07-31 15:58:08] ERROR `/zh/dev/table/hive_integration_example.html' not > found. > [2019-07-31 15:58:10] ERROR `/zh/dev/table/types.html' not found. > [2019-07-31 15:58:10] ERROR `/zh/dev/table/hive_integration.html' not found. > [2019-07-31 15:58:14] ERROR `/zh/dev/restart_strategies.html' not found. > http://localhost:4000/zh/dev/table/hive_integration_example.html: > Remote file does not exist -- broken link!!! > -- > http://localhost:4000/zh/dev/table/types.html: > Remote file does not exist -- broken link!!! > http://localhost:4000/zh/dev/table/hive_integration.html: > Remote file does not exist -- broken link!!! > -- > http://localhost:4000/zh/dev/restart_strategies.html: > Remote file does not exist -- broken link!!!{code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] tillrohrmann closed pull request #9183: [FLINK-12768][tests] FlinkKinesisConsumerTest.testSourceSynchronization flakiness
tillrohrmann closed pull request #9183: [FLINK-12768][tests] FlinkKinesisConsumerTest.testSourceSynchronization flakiness URL: https://github.com/apache/flink/pull/9183
[jira] [Closed] (FLINK-12768) FlinkKinesisConsumerTest.testSourceSynchronization unstable on Travis
[ https://issues.apache.org/jira/browse/FLINK-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann closed FLINK-12768. - Resolution: Fixed Fixed via 1.10.0: 8da696535d0c39323f480cae8f4e9c66e866bec4 1.9.0: 66d0e31294b2588e1aabccb952fdbf2bcfabe878 > FlinkKinesisConsumerTest.testSourceSynchronization unstable on Travis > - > > Key: FLINK-12768 > URL: https://issues.apache.org/jira/browse/FLINK-12768 > Project: Flink > Issue Type: Bug > Components: Connectors / Kinesis >Affects Versions: 1.9.0 >Reporter: Till Rohrmann >Assignee: Thomas Weise >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.9.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The {{FlinkKinesisConsumerTest.testSourceSynchronization}} seems to be > unstable on Travis. It fails with > {code} > [ERROR] > testSourceSynchronization(org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest) > Time elapsed: 10.031 s <<< FAILURE! > java.lang.AssertionError: > Expected: iterable containing ["1", ] > but: No item matched: > at > org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest.testSourceSynchronization(FlinkKinesisConsumerTest.java:950) > {code} > https://api.travis-ci.org/v3/job/541845510/log.txt > While looking into the problem, I noticed that the test case takes 1 second > to execute on my machine. I'm wondering whether this really needs to take > this long. Moreover, the test code contains {{Thread.sleeps}} and uses > {{Whiteboxing}} which we should avoid. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-513425405 ## CI report: * bd7ade5e0b57dc8577d7f864afcbbb24c2513e56 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119869757) * 6a187929b931a4bd8cd7dbd0ec3d2c5a7a98278d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121411219) * 4f2afd322f96aeaba6d9c0b67a82a051eff22df0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121723032) * c4fc6905d3adf3ad9ff6f58c5d4f472fdfa7d52b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121724039) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9311: FLINK-13524 [docs] Fixed typo in Builder method name from Elasticsearch example
flinkbot edited a comment on issue #9311: FLINK-13524 [docs] Fixed typo in Builder method name from Elasticsearch example URL: https://github.com/apache/flink/pull/9311#issuecomment-517147867 ## CI report: * 247bcdc2c1cda7a26c2170c5d6528c6ac27a6031 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121525657) * ecc938048a7db22305a40e586e827a1c7eb1e2af : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121789888) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9343: [FLINK-13508][1.8][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
flinkbot commented on issue #9343: [FLINK-13508][1.8][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9343#issuecomment-517783288 ## CI report: * 0425fb36fe495cdca6e54523567e37d9bb2b8132 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121788927) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] a-romero commented on issue #9311: FLINK-13524 [docs] Fixed typo in Builder method name from Elasticsearch example
a-romero commented on issue #9311: FLINK-13524 [docs] Fixed typo in Builder method name from Elasticsearch example URL: https://github.com/apache/flink/pull/9311#issuecomment-517782677 > @a-romero Thank you for fixing this, could you please rebase your branch to remove the merge commit. @sjwiesman yep sorry, rebased now
[GitHub] [flink] flinkbot commented on issue #9343: [FLINK-13508][1.8][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
flinkbot commented on issue #9343: [FLINK-13508][1.8][tests] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9343#issuecomment-517779619 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-13508) CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
[ https://issues.apache.org/jira/browse/FLINK-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Yao updated FLINK-13508: - Fix Version/s: 1.10.0 1.9.0 1.8.2 > CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time > > > Key: FLINK-13508 > URL: https://issues.apache.org/jira/browse/FLINK-13508 > Project: Flink > Issue Type: Bug > Components: Tests >Reporter: Gary Yao >Assignee: Gary Yao >Priority: Critical > Labels: pull-request-available > Fix For: 1.8.2, 1.9.0, 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The test utility > {{CommonTestUtils#waitUntilCondition(SupplierWithException Exception>, Deadline, long)}} may attempt to call {{Thread.sleep(long)}} with > a negative argument. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] GJL opened a new pull request #9343: [FLINK-13508][1.8] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
GJL opened a new pull request #9343: [FLINK-13508][1.8] CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time URL: https://github.com/apache/flink/pull/9343 See #9291
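The bug fixed by FLINK-13508 is a classic polling-loop pitfall: when the deadline has already passed, the computed remaining time goes negative, and `Thread.sleep()` throws `IllegalArgumentException` for negative arguments. A minimal stand-alone sketch of a safe wait loop follows; it is an illustration of the clamping idea, not Flink's actual `CommonTestUtils` implementation (whose signature takes a `Deadline` and a `SupplierWithException`):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class WaitUtil {
    /**
     * Polls {@code condition} until it returns true or {@code deadline} passes.
     * The sleep interval is clamped to the non-negative range so that
     * Thread.sleep is never called with a negative argument.
     */
    public static void waitUntilCondition(
            BooleanSupplier condition, Instant deadline, long retryIntervalMillis)
            throws InterruptedException, TimeoutException {
        while (!condition.getAsBoolean()) {
            long remaining = Duration.between(Instant.now(), deadline).toMillis();
            if (remaining <= 0) {
                throw new TimeoutException("Condition was not met before the deadline");
            }
            // Sleep at most the retry interval, never longer than the remaining
            // time, and never a negative value.
            Thread.sleep(Math.max(0, Math.min(retryIntervalMillis, remaining)));
        }
    }
}
```

Without the `Math.max(0, ...)` clamp, a condition check that straddles the deadline could compute a negative remaining time and crash the test with a misleading `IllegalArgumentException` instead of a clean timeout.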
[GitHub] [flink] flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-513425405 ## CI report: * bd7ade5e0b57dc8577d7f864afcbbb24c2513e56 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119869757) * 6a187929b931a4bd8cd7dbd0ec3d2c5a7a98278d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121411219) * 4f2afd322f96aeaba6d9c0b67a82a051eff22df0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121723032) * c4fc6905d3adf3ad9ff6f58c5d4f472fdfa7d52b : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121724039) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Closed] (FLINK-13541) State Processor Api sets the wrong key selector when writing savepoints
[ https://issues.apache.org/jira/browse/FLINK-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephan Ewen closed FLINK-13541. > State Processor Api sets the wrong key selector when writing savepoints > --- > > Key: FLINK-13541 > URL: https://issues.apache.org/jira/browse/FLINK-13541 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Runtime / State Backends >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The state processor api is setting the wrong key selector for its > StreamConfig when writing savepoints. It uses two key selectors internally > that happen to output the same value for integer keys but not in general. > {noformat} > Caused by: java.lang.RuntimeException: Exception occurred while setting the > current key context. > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:641) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement(AbstractStreamOperator.java:627) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement1(AbstractStreamOperator.java:615) > at > org.apache.flink.state.api.output.BoundedStreamTask.performDefaultAction(BoundedStreamTask.java:83) > at > org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:140) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378) > at > org.apache.flink.state.api.output.BoundedOneInputStreamTaskRunner.mapPartition(BoundedOneInputStreamTaskRunner.java:76) > at > org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) > at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) > at > org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) > at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:688) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:518) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to > java.lang.String > at > org.apache.flink.api.common.typeutils.base.StringSerializer.serialize(StringSerializer.java:33) > at > org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.serializeKeyGroupAndKey(RocksDBSerializedCompositeKeyBuilder.java:159) > at > org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.setKeyAndKeyGroup(RocksDBSerializedCompositeKeyBuilder.java:96) > at > org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.setCurrentKey(RocksDBKeyedStateBackend.java:303) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:639) > ... 12 more > {noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [flink] sjwiesman commented on issue #9308: [FLINK-13517][docs][hive] Restructure Hive Catalog documentation
sjwiesman commented on issue #9308: [FLINK-13517][docs][hive] Restructure Hive Catalog documentation URL: https://github.com/apache/flink/pull/9308#issuecomment-517775316 Thank you for the reminder, I am not a committer. @fhueske or @bowenli86 could one of you please merge this.
[jira] [Resolved] (FLINK-13541) State Processor Api sets the wrong key selector when writing savepoints
[ https://issues.apache.org/jira/browse/FLINK-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephan Ewen resolved FLINK-13541. -- Resolution: Fixed Fixed in - 1.9.0 via 92eb0b80ed3e761f51825f4e56329085436f39e3 - 1.10.0 via 61352fb69b24ea8c2de5e2c8840cabb3acc2202e > State Processor Api sets the wrong key selector when writing savepoints > --- > > Key: FLINK-13541 > URL: https://issues.apache.org/jira/browse/FLINK-13541 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Runtime / State Backends >Reporter: Seth Wiesman >Assignee: Seth Wiesman >Priority: Blocker > Labels: pull-request-available > Fix For: 1.9.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The state processor api is setting the wrong key selector for its > StreamConfig when writing savepoints. It uses two key selectors internally > that happen to output the same value for integer keys but not in general. > {noformat} > Caused by: java.lang.RuntimeException: Exception occurred while setting the > current key context. 
> at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:641) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement(AbstractStreamOperator.java:627) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement1(AbstractStreamOperator.java:615) > at > org.apache.flink.state.api.output.BoundedStreamTask.performDefaultAction(BoundedStreamTask.java:83) > at > org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:140) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378) > at > org.apache.flink.state.api.output.BoundedOneInputStreamTaskRunner.mapPartition(BoundedOneInputStreamTaskRunner.java:76) > at > org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) > at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) > at > org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:688) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:518) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to > java.lang.String > at > org.apache.flink.api.common.typeutils.base.StringSerializer.serialize(StringSerializer.java:33) > at > org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.serializeKeyGroupAndKey(RocksDBSerializedCompositeKeyBuilder.java:159) > at > org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.setKeyAndKeyGroup(RocksDBSerializedCompositeKeyBuilder.java:96) > at > org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.setCurrentKey(RocksDBKeyedStateBackend.java:303) > at > org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:639) > ... 
12 more > {noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
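The FLINK-13541 failure mode above can be reproduced in miniature: two key selectors that happen to agree for `Integer` keys but diverge for other types, so a backend configured with the serializer for one selector's output receives keys produced by the other and throws `ClassCastException`. The following is a hypothetical illustration (the selector logic is invented; it is not Flink's actual internal code):

```java
import java.util.function.Function;

public class KeySelectorMismatch {
    // Selector the savepoint writer was configured with: the identity.
    static final Function<Object, Object> configuredSelector = key -> key;

    // Selector actually applied per record: passes Integers through but maps
    // everything else to an Integer hash. For Integer keys the two selectors
    // coincidentally agree, which is why tests using only integer keys pass.
    static final Function<Object, Object> appliedSelector = key ->
            key instanceof Integer ? key : key.hashCode();

    public static void main(String[] args) {
        // Same output for an integer key -- the bug stays hidden.
        System.out.println(
                configuredSelector.apply(42).equals(appliedSelector.apply(42))); // true

        // Diverges for a String key: the configured selector yields a String,
        // the applied selector an Integer. A StringSerializer handed the
        // Integer then fails exactly like the stack trace above.
        System.out.println(
                configuredSelector.apply("user-7").equals(appliedSelector.apply("user-7"))); // false
    }
}
```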
[GitHub] [flink] xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats
xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats URL: https://github.com/apache/flink/pull/9264#discussion_r310218554 ## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableOutputFormat.java ## @@ -259,13 +259,13 @@ public void configure(Configuration parameters) { public void open(int taskNumber, int numTasks) throws IOException { try { StorageDescriptor sd = hiveTablePartition.getStorageDescriptor(); - serializer = (Serializer) Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance(); - Preconditions.checkArgument(serializer instanceof Deserializer, - "Expect to get a SerDe, but actually got " + serializer.getClass().getName()); - ReflectionUtils.setConf(serializer, jobConf); + recordSerDe = (Serializer) Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance(); Review comment: Can we assert the type to be "Serializer" before casting it, similar to below for "Deserializer"? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats
xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats URL: https://github.com/apache/flink/pull/9264#discussion_r310218168 ## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableOutputFormat.java ## @@ -124,7 +124,7 @@ private transient int numNonPartitionColumns; // SerDe in Hive-1.2.1 and Hive-2.3.4 can be of different classes, make sure to use a common base class - private transient Serializer serializer; + private transient Serializer recordSerDe; Review comment: Maybe the type here should be just "Object". This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] asfgit closed pull request #9324: [FLINK-13541][state-processor-api] State Processor Api sets the wrong key selector when writing savepoints
asfgit closed pull request #9324: [FLINK-13541][state-processor-api] State Processor Api sets the wrong key selector when writing savepoints URL: https://github.com/apache/flink/pull/9324
[GitHub] [flink] xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats
xuefuz commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats URL: https://github.com/apache/flink/pull/9264#discussion_r310217103 ## File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/batch/connectors/hive/HiveTableOutputFormat.java ## @@ -256,10 +258,12 @@ public void configure(Configuration parameters) { public void open(int taskNumber, int numTasks) throws IOException { try { StorageDescriptor sd = hiveTablePartition.getStorageDescriptor(); - serializer = (AbstractSerDe) Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance(); + serializer = (Serializer) Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance(); + Preconditions.checkArgument(serializer instanceof Deserializer, Review comment: yeah! :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
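The pattern under discussion in this review thread is: instantiate the SerDe once by reflection, verify up front that the instance implements both required interfaces, and keep a typed `Serializer` reference so no reflective `serialize` lookup happens on the per-record path. A sketch with stand-in interfaces follows (Hive's real `Serializer`/`Deserializer` live in `org.apache.hadoop.hive.serde2`; the interface bodies and `DemoSerDe` below are invented placeholders):

```java
// Stand-ins for Hive's Serializer/Deserializer base interfaces.
interface Serializer { String serialize(Object record); }
interface Deserializer { Object deserialize(String data); }

// Plays the role of the SerDe class named in the table's StorageDescriptor.
class DemoSerDe implements Serializer, Deserializer {
    public String serialize(Object record) { return String.valueOf(record); }
    public Object deserialize(String data) { return data; }
}

public class SerDeLoader {
    /**
     * Instantiates the configured SerDe once and checks both interfaces up
     * front, so a misconfigured class fails fast at open() time. Holding a
     * typed Serializer (rather than Object) avoids reflective method lookup
     * for every record on the hot serialize path.
     */
    static Serializer createSerDe(String className) throws Exception {
        Object instance = Class.forName(className).getDeclaredConstructor().newInstance();
        if (!(instance instanceof Serializer) || !(instance instanceof Deserializer)) {
            throw new IllegalArgumentException(
                    "Expected a SerDe implementing both Serializer and Deserializer, got "
                            + instance.getClass().getName());
        }
        return (Serializer) instance;
    }

    public static void main(String[] args) throws Exception {
        Serializer serde = createSerDe(DemoSerDe.class.getName());
        System.out.println(serde.serialize(123));
    }
}
```

This is the trade-off lirui-apache describes: declaring the field as `Object` would force a reflective `serialize` call per record, while a one-time `instanceof` check plus cast pays the reflection cost only once.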
[GitHub] [flink] xuefuz commented on issue #9308: [FLINK-13517][docs][hive] Restructure Hive Catalog documentation
xuefuz commented on issue #9308: [FLINK-13517][docs][hive] Restructure Hive Catalog documentation URL: https://github.com/apache/flink/pull/9308#issuecomment-517773257 @sjwiesman The CI passed. Should we merge this PR?
[GitHub] [flink] flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value
flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value. URL: https://github.com/apache/flink/pull/9285#issuecomment-516712727 ## CI report: * bb70e45a98e76de7f95ac31e893999683cb5bde8 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121359827) * 4f96a184d471836053a7e2b09cbd1583ebced727 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121512745) * a915ad9e9323b5c0f799beae32eba104b76b583f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121587513) * 3e6a30848c721001c6bf0a514fb00b00c6f6e0ce : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121696226) * 5958000c4e08d3b4a5842467a9c56bdfeb468efa : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/121720893) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
flinkbot commented on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors URL: https://github.com/apache/flink/pull/9342#issuecomment-517770642 ## CI report: * 76704f271662b57cbe36679d3d249bcdd7fdf66a : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/121784366) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Resolved] (FLINK-13535) Do not abort transactions twice during KafkaProducer startup
[ https://issues.apache.org/jira/browse/FLINK-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiangjie Qin resolved FLINK-13535. -- Resolution: Fixed > Do not abort transactions twice during KafkaProducer startup > > > Key: FLINK-13535 > URL: https://issues.apache.org/jira/browse/FLINK-13535 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka >Affects Versions: 1.8.1, 1.9.0 >Reporter: Nico Kruber >Assignee: Nico Kruber >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > During startup of a transactional Kafka producer from previous state, we > recover in two steps: > # in {{TwoPhaseCommitSinkFunction}}, we commit pending commit-transactions > and abort pending transactions and then call into > {{finishRecoveringContext()}} > # in {{FlinkKafkaProducer#finishRecoveringContext()}} we iterate over all > recovered transaction IDs and abort them. > This may lead to some transactions being worked on twice. Since this is quite > some expensive operation, we unnecessarily slow down the job startup but > could easily give {{finishRecoveringContext()}} a set of transactions that > {{TwoPhaseCommitSinkFunction}} already covered instead. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (FLINK-13535) Do not abort transactions twice during KafkaProducer startup
[ https://issues.apache.org/jira/browse/FLINK-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiangjie Qin updated FLINK-13535: - Fix Version/s: 1.10.0 > Do not abort transactions twice during KafkaProducer startup > > > Key: FLINK-13535 > URL: https://issues.apache.org/jira/browse/FLINK-13535 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka >Affects Versions: 1.8.1, 1.9.0 >Reporter: Nico Kruber >Assignee: Nico Kruber >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > During startup of a transactional Kafka producer from previous state, we > recover in two steps: > # in {{TwoPhaseCommitSinkFunction}}, we commit pending commit-transactions > and abort pending transactions and then call into > {{finishRecoveringContext()}} > # in {{FlinkKafkaProducer#finishRecoveringContext()}} we iterate over all > recovered transaction IDs and abort them. > This may lead to some transactions being worked on twice. Since this is quite > some expensive operation, we unnecessarily slow down the job startup but > could easily give {{finishRecoveringContext()}} a set of transactions that > {{TwoPhaseCommitSinkFunction}} already covered instead. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (FLINK-13535) Do not abort transactions twice during KafkaProducer startup
[ https://issues.apache.org/jira/browse/FLINK-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899028#comment-16899028 ]

Jiangjie Qin commented on FLINK-13535:
--------------------------------------
Resolved via 1.10 ad0d5c8e256e6db5f6a51e6374cdc262283c912d
[GitHub] [flink] becketqin commented on issue #9323: [FLINK-13535][kafka] do not abort transactions twice during KafkaProducer startup
becketqin commented on issue #9323: [FLINK-13535][kafka] do not abort transactions twice during KafkaProducer startup
URL: https://github.com/apache/flink/pull/9323#issuecomment-517768745

Merged to master.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

With regards,
Apache Git Services
[GitHub] [flink] becketqin closed pull request #9323: [FLINK-13535][kafka] do not abort transactions twice during KafkaProducer startup
becketqin closed pull request #9323: [FLINK-13535][kafka] do not abort transactions twice during KafkaProducer startup
URL: https://github.com/apache/flink/pull/9323