[GitHub] [flink] flinkbot edited a comment on issue #9211: [FLINK-13393][FLINK-13391][table-planner-blink] Fix source conversion and source return type

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9211: 
[FLINK-13393][FLINK-13391][table-planner-blink] Fix source conversion and 
source return type
URL: https://github.com/apache/flink/pull/9211#issuecomment-514475275
 
 
   ## CI report:
   
   * b465c69b21a3ac810bc34e4f95dc0b3a3d93281c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120311139)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9213: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes (for release-1.9)

2019-07-23 Thread GitBox
flinkbot commented on issue #9213: [FLINK-13266] [table] Relocate blink planner 
classes to avoid class clashes (for release-1.9)
URL: https://github.com/apache/flink/pull/9213#issuecomment-514491062
 
 
   ## CI report:
   
   * 4c1ae0077a46b9b77fd7ba3f67767b908f5bf8d2 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120315868)
   




[GitHub] [flink] flinkbot edited a comment on issue #9212: [FLINK-13338][table-api] Sql conformance is hard to config in TableConfig

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9212: [FLINK-13338][table-api] Sql 
conformance is hard to config in TableConfig
URL: https://github.com/apache/flink/pull/9212#issuecomment-514486972
 
 
   ## CI report:
   
   * 68443427c89cca6ef5c1c39685c57c9884e101af : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120314731)
   * 51463ef8d90e7a3398b59c1efcf6ce713aaddb03 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120315489)
   * c839ffc015637729a586b0c78f61c54bad21b299 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120315878)
   




[jira] [Updated] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-07-23 Thread Ke Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li updated FLINK-13395:
--
Description: 
 Aliyun Log Service is a big data service that is widely used within Alibaba 
Group and by thousands of companies on Alibaba Cloud. The core storage engine 
of Log Service, called Loghub, is a large-scale distributed storage system 
that provides a producer/consumer API similar to Kafka or AWS Kinesis. 

Many Flink users rely on Log Service to collect and analyze data from both 
on-premise and cloud data sources, and consume data stored in Log Service 
from Flink or Blink for stream processing. 

  was:
 Aliyun Log Service is a big data service which has been widely used in Alibaba 
Group and thousand of companies on Alibaba Cloud. The core storage engine of 
Log Service is called Loghub which is a large scale distributed storage system 
and provides producer/consumer API like Kafka or AWS Kinesis. 

There are a lot of users of Flink are using Log Service to collect and analysis 
data from both on premise and cloud data sources, and consuming data stored in 
Log Service from Flink or Blink for streaming compute. 


> Add source and sink connector for Aliyun Log Service
> 
>
> Key: FLINK-13395
> URL: https://issues.apache.org/jira/browse/FLINK-13395
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Ke Li
>Priority: Major
>
>  Aliyun Log Service is a big data service that is widely used within 
> Alibaba Group and by thousands of companies on Alibaba Cloud. The core 
> storage engine of Log Service, called Loghub, is a large-scale distributed 
> storage system that provides a producer/consumer API similar to Kafka or AWS Kinesis. 
> Many Flink users rely on Log Service to collect and analyze data from both 
> on-premise and cloud data sources, and consume data stored in Log Service 
> from Flink or Blink for stream processing. 
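The cursor-based producer/consumer pattern described above (as in Kafka or Kinesis) is what a Loghub source connector would wrap. Below is a minimal, self-contained sketch of that pattern; all names (`Shard`, `pull`, the cursor handling) are illustrative assumptions, not the actual Loghub SDK or connector API:

```java
import java.util.ArrayList;
import java.util.List;

public class LoghubShardSketch {
    /** A shard keeps an append-only record list, like a Kafka partition. */
    static class Shard {
        private final List<String> records = new ArrayList<>();

        void append(String record) { records.add(record); }

        /** Pull up to {@code max} records starting at the given cursor (offset). */
        List<String> pull(int cursor, int max) {
            int from = Math.min(cursor, records.size());
            int to = Math.min(records.size(), from + max);
            return new ArrayList<>(records.subList(from, to));
        }
    }

    public static void main(String[] args) {
        Shard shard = new Shard();
        shard.append("event-1");
        shard.append("event-2");
        shard.append("event-3");

        int cursor = 0;                          // in Flink this would come from checkpointed state
        List<String> batch = shard.pull(cursor, 2);
        cursor += batch.size();                  // advance only after the batch is emitted
        System.out.println(batch + " next-cursor=" + cursor);  // prints: [event-1, event-2] next-cursor=2
    }
}
```

Because the consumer resumes from a stored cursor, a Flink source built on this pattern can checkpoint the cursor and recover exactly where it left off, which is what makes Kafka- and Kinesis-style connectors fit Flink's fault-tolerance model.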



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9213: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes (for release-1.9)

2019-07-23 Thread GitBox
flinkbot commented on issue #9213: [FLINK-13266] [table] Relocate blink planner 
classes to avoid class clashes (for release-1.9)
URL: https://github.com/apache/flink/pull/9213#issuecomment-514489698
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-07-23 Thread Ke Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li updated FLINK-13395:
--
Description: 
 Aliyun Log Service is a big data service that is widely used within Alibaba 
Group and by thousands of companies on Alibaba Cloud. The core storage engine 
of Log Service, called Loghub, is a large-scale distributed storage system 
that provides a producer/consumer API similar to Kafka or AWS Kinesis. 

Many Flink users rely on Log Service to collect and analyze data from both 
on-premise and cloud data sources, and consume data stored in Log Service 
from Flink or Blink for stream processing. 

  was:
 Aliyun Log Service is a storage service which has been widely used in Alibaba 
Group and a lot of customers on Alibaba Cloud. The core storage engine is call 
Loghub which is a large scale distributed storage system and provides 
producer/consumer API as Kafka/Kinesis does. 

There are a lot of users are using Log Service to collect data from on premise 
and cloud and consuming from Flink or Blink for streaming compute. 


> Add source and sink connector for Aliyun Log Service
> 
>
> Key: FLINK-13395
> URL: https://issues.apache.org/jira/browse/FLINK-13395
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Ke Li
>Priority: Major
>
>  Aliyun Log Service is a big data service that is widely used within 
> Alibaba Group and by thousands of companies on Alibaba Cloud. The core 
> storage engine of Log Service, called Loghub, is a large-scale distributed 
> storage system that provides a producer/consumer API similar to Kafka or AWS Kinesis. 
> Many Flink users rely on Log Service to collect and analyze data from both 
> on-premise and cloud data sources, and consume data stored in Log Service 
> from Flink or Blink for stream processing. 





[GitHub] [flink] flinkbot edited a comment on issue #9212: [FLINK-13338] Sql conformance is hard to config in TableConfig

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9212: [FLINK-13338] Sql conformance is hard 
to config in TableConfig
URL: https://github.com/apache/flink/pull/9212#issuecomment-514486972
 
 
   ## CI report:
   
   * 68443427c89cca6ef5c1c39685c57c9884e101af : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120314731)
   * 51463ef8d90e7a3398b59c1efcf6ce713aaddb03 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120315489)
   




[GitHub] [flink] godfreyhe opened a new pull request #9213: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes (for release-1.9)

2019-07-23 Thread GitBox
godfreyhe opened a new pull request #9213: [FLINK-13266] [table] Relocate blink 
planner classes to avoid class clashes (for release-1.9)
URL: https://github.com/apache/flink/pull/9213
 
 
   
   ## What is the purpose of the change
   
   *Relocate blink planner classes to avoid class clashes (for release-1.9), 
see the relocation part in 
https://docs.google.com/document/d/15Z1Khy23DwDBp956yBzkMYkGdoQAU_QgBoJmFWSffow*
   
   
   ## Brief change log
   
 - *move OptimizerConfigOptions & ExecutionConfigOptions to table-api-java 
module*
 - *move ExpressionParserException & UnresolvedException to table-common 
module*
 - *remove definedTimeAttributes file in blink planner*
 - *move DataView related classes to table-common module, and remove 
duplicate classes*
 - *remove Order class from table-runtime-blink*
 - *port descriptors to table-common*
 - *Improve comments for FilterableTableSource*
 - *Relocate blink runtime classes to avoid class clashes*
 - *Relocate blink planner classes to avoid class clashes*
   
   ## Verifying this change
   
   No new tests; the existing tests should pass.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   




[jira] [Created] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-07-23 Thread Ke Li (JIRA)
Ke Li created FLINK-13395:
-

 Summary: Add source and sink connector for Aliyun Log Service
 Key: FLINK-13395
 URL: https://issues.apache.org/jira/browse/FLINK-13395
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Common
Reporter: Ke Li


 Aliyun Log Service is a storage service that is widely used within Alibaba 
Group and by many customers on Alibaba Cloud. The core storage engine, called 
Loghub, is a large-scale distributed storage system that provides a 
producer/consumer API as Kafka and Kinesis do. 

Many users rely on Log Service to collect data from on-premise and cloud 
sources and consume it from Flink or Blink for stream processing. 





[GitHub] [flink] TsReaper commented on a change in pull request #9208: [FLINK-13378][table-planner-blink] Fix bug: Blink-planner not support SingleValueAggFunction

2019-07-23 Thread GitBox
TsReaper commented on a change in pull request #9208: 
[FLINK-13378][table-planner-blink] Fix bug: Blink-planner not support 
SingleValueAggFunction
URL: https://github.com/apache/flink/pull/9208#discussion_r306634777
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/batch/sql/join/ScalarQueryITCase.scala
 ##
 @@ -46,6 +49,18 @@ class ScalarQueryITCase extends BatchTestBase {
     row(6, null)
   )
 
-}
+  @Before
+  override def before(): Unit = {
+    super.before()
+    registerCollection("l", l, INT_DOUBLE, "a, b")
+    registerCollection("r", r, INT_DOUBLE, "c, d")
+  }
 
+  @Test
+  def testScalarQuery(): Unit = {
 
 Review comment:
   Also add a validation test for a multiple-row agg?




[GitHub] [flink] wuchong commented on a change in pull request #9203: [FLINK-13375][table-api] Move ExecutionConfigOptions and OptimizerConfigOptions to table-api

2019-07-23 Thread GitBox
wuchong commented on a change in pull request #9203: [FLINK-13375][table-api] 
Move ExecutionConfigOptions and OptimizerConfigOptions to table-api
URL: https://github.com/apache/flink/pull/9203#discussion_r306633770
 
 

 ##
 File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/OptimizerConfigOptions.java
 ##
 @@ -56,33 +56,33 @@
 			"when there is data skew in distinct aggregation and gives the ability to scale-up the job. " +
 			"Default is false.");
 
-	public static final ConfigOption<Integer> SQL_OPTIMIZER_DISTINCT_AGG_SPLIT_BUCKET_NUM =
-		key("sql.optimizer.distinct-agg.split.bucket-num")
+	public static final ConfigOption<Integer> TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_BUCKET_NUM =
+		key("table.optimizer.distinct-agg.split.bucket-num")
 		.defaultValue(1024)
 		.withDescription("Configure the number of buckets when splitting distinct aggregation. " +
 			"The number is used in the first level aggregation to calculate a bucket key " +
 			"'hash_code(distinct_key) % BUCKET_NUM' which is used as an additional group key after splitting.");
 
-	public static final ConfigOption<Boolean> SQL_OPTIMIZER_REUSE_SUB_PLAN_ENABLED =
-		key("sql.optimizer.reuse.sub-plan.enabled")
+	public static final ConfigOption<Boolean> TABLE_OPTIMIZER_REUSE_SUB_PLAN_ENABLED =
+		key("table.optimizer.reuse.sub-plan.enabled")
 		.defaultValue(true)
 		.withDescription("When it is true, optimizer will try to find out duplicated sub-plan and reuse them.");
 
-	public static final ConfigOption<Boolean> SQL_OPTIMIZER_REUSE_TABLE_SOURCE_ENABLED =
-		key("sql.optimizer.reuse.table-source.enabled")
+	public static final ConfigOption<Boolean> TABLE_OPTIMIZER_REUSE_SOURCE_ENABLED =
+		key("table.optimizer.reuse.source.enabled")
 		.defaultValue(true)
-		.withDescription("When it is true, optimizer will try to find out duplicated table-source and " +
-			"reuse them. This works only when " + SQL_OPTIMIZER_REUSE_SUB_PLAN_ENABLED.key() + " is true.");
+		.withDescription("When it is true, optimizer will try to find out duplicated table source and " +
+			"reuse them. This works only when " + TABLE_OPTIMIZER_REUSE_SUB_PLAN_ENABLED.key() + " is true.");
 
-	public static final ConfigOption<Boolean> SQL_OPTIMIZER_PREDICATE_PUSHDOWN_ENABLED =
-		key("sql.optimizer.predicate-pushdown.enabled")
+	public static final ConfigOption<Boolean> TABLE_OPTIMIZER_PREDICATE_PUSHDOWN_ENABLED =
+		key("table.optimizer.predicate-pushdown.enabled")
 		.defaultValue(true)
 		.withDescription("If it is true, enable predicate pushdown to the FilterableTableSource. " +
 			"Default value is true.");
 
-	public static final ConfigOption<Boolean> SQL_OPTIMIZER_JOIN_REORDER_ENABLED =
-		key("sql.optimizer.join-reorder.enabled")
+	public static final ConfigOption<Boolean> TABLE_OPTIMIZER_JOIN_REORDER_ENABLED =
+		key("table.optimizer.join-reorder.enabled")
 		.defaultValue(false)
 		.withDescription("Enables join reorder in optimizer cbo. Default is disabled.");
 Review comment:
   @godfreyhe , is it only enabled in CBO? I mean, users shouldn't care about 
what optimizer is used in Flink.
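To make the renamed declarations in the diff concrete, here is a minimal, self-contained sketch of the `key()`/`defaultValue()`/`withDescription()` builder pattern they use. This is a simplified stand-in, not Flink's actual `ConfigOption` implementation; the point is that the generic parameter (e.g. `ConfigOption<Boolean>`, which HTML extraction stripped from the quoted diff) ties each key to a typed default:

```java
import java.util.Objects;

public class ConfigOptionSketch {
    /** Simplified stand-in for Flink's ConfigOption: a typed key + default value. */
    static final class ConfigOption<T> {
        private final String key;
        private final T defaultValue;
        private String description = "";

        ConfigOption(String key, T defaultValue) {
            this.key = Objects.requireNonNull(key);
            this.defaultValue = defaultValue;
        }

        ConfigOption<T> withDescription(String description) {
            this.description = description;
            return this;
        }

        String key() { return key; }
        T defaultValue() { return defaultValue; }
        String description() { return description; }
    }

    /** Builder step that fixes the key before the typed default is supplied. */
    static final class OptionBuilder {
        private final String key;
        OptionBuilder(String key) { this.key = key; }
        <T> ConfigOption<T> defaultValue(T value) { return new ConfigOption<>(key, value); }
    }

    static OptionBuilder key(String key) { return new OptionBuilder(key); }

    // Mirrors the renamed option from the diff above.
    public static final ConfigOption<Boolean> TABLE_OPTIMIZER_JOIN_REORDER_ENABLED =
            key("table.optimizer.join-reorder.enabled")
                    .defaultValue(false)
                    .withDescription("Enables join reordering in the optimizer.");

    public static void main(String[] args) {
        System.out.println(TABLE_OPTIMIZER_JOIN_REORDER_ENABLED.key()
                + " -> " + TABLE_OPTIMIZER_JOIN_REORDER_ENABLED.defaultValue());
        // prints: table.optimizer.join-reorder.enabled -> false
    }
}
```

The rename from the `sql.` to the `table.` prefix only changes the string passed to `key(...)`; the typed default and description stay with the option, which is why call sites reading the option are unaffected by the key rename.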




[GitHub] [flink] flinkbot edited a comment on issue #9196: [FLINK-13351][table-blink-planner]duplicate case ROW match in FlinkTy…

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9196: 
[FLINK-13351][table-blink-planner]duplicate case ROW match in FlinkTy…
URL: https://github.com/apache/flink/pull/9196#issuecomment-513709777
 
 
   ## CI report:
   
   * 4f2703f8f739ea4cd2d1dead886ac8469131a022 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/119980388)
   * 3844fa41477d1f1e51acab34afeae318b5f54d10 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/119981971)
   * 4a82dabe4dc6ba065f45213b3b51c4cbe1a560db : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/119982826)
   * d4c74758d4bfd664a2bbc15a63093f66aa9458d0 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120313687)
   * 11280345cadd8b50be2179595a821c9214d6dc22 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120314747)
   




[GitHub] [flink] flinkbot commented on issue #9212: [FLINK-13338] Sql conformance is hard to config in TableConfig

2019-07-23 Thread GitBox
flinkbot commented on issue #9212: [FLINK-13338] Sql conformance is hard to 
config in TableConfig
URL: https://github.com/apache/flink/pull/9212#issuecomment-514486972
 
 
   ## CI report:
   
   * 68443427c89cca6ef5c1c39685c57c9884e101af : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/120314731)
   




[jira] [Updated] (FLINK-13375) Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-23 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13375:

Issue Type: Improvement  (was: Sub-task)
Parent: (was: FLINK-13267)

> Improve config names in ExecutionConfigOptions and OptimizerConfigOptions
> -
>
> Key: FLINK-13375
> URL: https://issues.apache.org/jira/browse/FLINK-13375
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Move ExecutionConfigOptions and OptimizerConfigOptions to table-api.
> We should also go through every config option in detail in this issue, 
> because we are now moving them to the API module. We should discuss how 
> the properties are named and make sure that those options follow Flink 
> naming conventions. 





[jira] [Assigned] (FLINK-13375) Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-23 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13375:
---

Assignee: Jark Wu

> Improve config names in ExecutionConfigOptions and OptimizerConfigOptions
> -
>
> Key: FLINK-13375
> URL: https://issues.apache.org/jira/browse/FLINK-13375
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Move ExecutionConfigOptions and OptimizerConfigOptions to table-api.
> We should also go through every config option in detail in this issue, 
> because we are now moving them to the API module. We should discuss how 
> the properties are named and make sure that those options follow Flink 
> naming conventions. 





[GitHub] [flink] godfreyhe commented on a change in pull request #9203: [FLINK-13375][table-api] Move ExecutionConfigOptions and OptimizerConfigOptions to table-api

2019-07-23 Thread GitBox
godfreyhe commented on a change in pull request #9203: [FLINK-13375][table-api] 
Move ExecutionConfigOptions and OptimizerConfigOptions to table-api
URL: https://github.com/apache/flink/pull/9203#discussion_r306633210
 
 

 ##
 File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/OptimizerConfigOptions.java
 ##
 @@ -56,33 +56,33 @@
 			"when there is data skew in distinct aggregation and gives the ability to scale-up the job. " +
 			"Default is false.");
 
-	public static final ConfigOption<Integer> SQL_OPTIMIZER_DISTINCT_AGG_SPLIT_BUCKET_NUM =
-		key("sql.optimizer.distinct-agg.split.bucket-num")
+	public static final ConfigOption<Integer> TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_BUCKET_NUM =
+		key("table.optimizer.distinct-agg.split.bucket-num")
 		.defaultValue(1024)
 		.withDescription("Configure the number of buckets when splitting distinct aggregation. " +
 			"The number is used in the first level aggregation to calculate a bucket key " +
 			"'hash_code(distinct_key) % BUCKET_NUM' which is used as an additional group key after splitting.");
 
-	public static final ConfigOption<Boolean> SQL_OPTIMIZER_REUSE_SUB_PLAN_ENABLED =
-		key("sql.optimizer.reuse.sub-plan.enabled")
+	public static final ConfigOption<Boolean> TABLE_OPTIMIZER_REUSE_SUB_PLAN_ENABLED =
+		key("table.optimizer.reuse.sub-plan.enabled")
 		.defaultValue(true)
 		.withDescription("When it is true, optimizer will try to find out duplicated sub-plan and reuse them.");
 
-	public static final ConfigOption<Boolean> SQL_OPTIMIZER_REUSE_TABLE_SOURCE_ENABLED =
-		key("sql.optimizer.reuse.table-source.enabled")
+	public static final ConfigOption<Boolean> TABLE_OPTIMIZER_REUSE_SOURCE_ENABLED =
+		key("table.optimizer.reuse.source.enabled")
 		.defaultValue(true)
-		.withDescription("When it is true, optimizer will try to find out duplicated table-source and " +
-			"reuse them. This works only when " + SQL_OPTIMIZER_REUSE_SUB_PLAN_ENABLED.key() + " is true.");
+		.withDescription("When it is true, optimizer will try to find out duplicated table source and " +
+			"reuse them. This works only when " + TABLE_OPTIMIZER_REUSE_SUB_PLAN_ENABLED.key() + " is true.");
 
-	public static final ConfigOption<Boolean> SQL_OPTIMIZER_PREDICATE_PUSHDOWN_ENABLED =
-		key("sql.optimizer.predicate-pushdown.enabled")
+	public static final ConfigOption<Boolean> TABLE_OPTIMIZER_PREDICATE_PUSHDOWN_ENABLED =
+		key("table.optimizer.predicate-pushdown.enabled")
 		.defaultValue(true)
 		.withDescription("If it is true, enable predicate pushdown to the FilterableTableSource. " +
 			"Default value is true.");
 
-	public static final ConfigOption<Boolean> SQL_OPTIMIZER_JOIN_REORDER_ENABLED =
-		key("sql.optimizer.join-reorder.enabled")
+	public static final ConfigOption<Boolean> TABLE_OPTIMIZER_JOIN_REORDER_ENABLED =
+		key("table.optimizer.join-reorder.enabled")
 		.defaultValue(false)
 		.withDescription("Enables join reorder in optimizer cbo. Default is disabled.");
 Review comment:
   "cbo" means cost-based optimizer (e.g. VolcanoPlanner in Calcite); another 
concept is "rbo", which means rule-based optimizer (e.g. HepPlanner in 
Calcite).




[jira] [Updated] (FLINK-13375) Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-23 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13375:

Summary: Improve config names in ExecutionConfigOptions and 
OptimizerConfigOptions  (was: Move ExecutionConfigOptions and 
OptimizerConfigOptions to table-api)

> Improve config names in ExecutionConfigOptions and OptimizerConfigOptions
> -
>
> Key: FLINK-13375
> URL: https://issues.apache.org/jira/browse/FLINK-13375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Move ExecutionConfigOptions and OptimizerConfigOptions to table-api.
> We should also go through every config option in detail in this issue, 
> because we are now moving them to the API module. We should discuss how 
> the properties are named and make sure that those options follow Flink 
> naming conventions. 





[jira] [Updated] (FLINK-13375) Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-23 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13375:

Priority: Critical  (was: Major)

> Improve config names in ExecutionConfigOptions and OptimizerConfigOptions
> -
>
> Key: FLINK-13375
> URL: https://issues.apache.org/jira/browse/FLINK-13375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Move ExecutionConfigOptions and OptimizerConfigOptions to table-api.
> We should also go through every config option in detail in this issue, 
> because we are now moving them to the API module. We should discuss how 
> the properties are named and make sure that those options follow Flink 
> naming conventions. 





[GitHub] [flink] wuchong commented on issue #9181: [FLINK-13210][hive] Hive connector test should dependent on blink planner instead of legacy planner

2019-07-23 Thread GitBox
wuchong commented on issue #9181: [FLINK-13210][hive] Hive connector test 
should dependent on blink planner instead of legacy planner
URL: https://github.com/apache/flink/pull/9181#issuecomment-514485705
 
 
   As discussed offline, we will add back flink planner tests for Hive 
connector once FLINK-13267 is resolved. We will merge this pull request first 
to unblock other issues (incl. Hive UDFs). 




[GitHub] [flink] flinkbot edited a comment on issue #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9113: [FLINK-13222] [runtime] Add 
documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113#issuecomment-511360277
 
 
   ## CI report:
   
   * b6bb574fbe0e3421adb07aef7ffd1c8068675a74 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119416196)
   * 6f87a7dd6303a786b0029870f6ee887903f40d60 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119441162)
   * 1e717dfbdee0eab94ce491fd965c6b8b7671 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119473960)
   * 0c2593ae0417ad1ba7dda12f08ba8f071bfaa8c3 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119766326)
   * 79c8f9d272a1cf8b33c29d239c15739115b570d4 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119789055)
   * 2bd7dde5d99431579377e332abb0b71af1276049 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120106694)
   * 8ad04989600d9b4bb347bfef3dd90174649e4ad9 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120192385)
   * 7bc2eff3d3dc7c3080c13505ebbc7f80ef906062 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120310203)
   




[GitHub] [flink] flinkbot commented on issue #9212: [FLINK-13338] Sql conformance is hard to config in TableConfig

2019-07-23 Thread GitBox
flinkbot commented on issue #9212: [FLINK-13338] Sql conformance is hard to 
config in TableConfig
URL: https://github.com/apache/flink/pull/9212#issuecomment-514485394
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13338) Sql conformance is hard to config in TableConfig

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13338:
---
Labels: pull-request-available  (was: )

> Sql conformance is hard to config in TableConfig
> 
>
> Key: FLINK-13338
> URL: https://issues.apache.org/jira/browse/FLINK-13338
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Danny Chan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> Now TableConfig only has an interface to configure the SqlParser config, which 
> is very broad and hard for users to use; we should at least supply an 
> interface to configure the SQL conformance.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] danny0405 opened a new pull request #9212: [FLINK-13338] Sql conformance is hard to config in TableConfig

2019-07-23 Thread GitBox
danny0405 opened a new pull request #9212: [FLINK-13338] Sql conformance is 
hard to config in TableConfig
URL: https://github.com/apache/flink/pull/9212
 
 
   ## What is the purpose of the change
   
   This patch adds an interface `TableConfig#setSqlDialect(SqlDialect)` to make 
the SQL dialect configuration more user-friendly.
   
   
   ## Brief change log
   
 - Add new class `SqlDialect` to enumerate the sql dialects Flink supports 
now
 - Add configuration interface for sql dialect in `TableConfig`
   
   
   ## Verifying this change
   
   See tests in SqlToOperationConverterTest, PartitionableSinkITCase.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
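The setter described in this PR can be pictured with a minimal, self-contained sketch. The names `SqlDialect` and `setSqlDialect` mirror the PR description, but this is not Flink's actual implementation:

```java
// Hedged sketch: SqlDialect and TableConfigSketch only mirror the names in
// the PR text; they are NOT Flink's real classes.
enum SqlDialect { DEFAULT, HIVE }

public class TableConfigSketch {
    private SqlDialect sqlDialect = SqlDialect.DEFAULT;

    // The user-friendly entry point the PR adds, replacing direct
    // manipulation of the broad SqlParser config.
    public void setSqlDialect(SqlDialect sqlDialect) {
        this.sqlDialect = sqlDialect;
    }

    public SqlDialect getSqlDialect() {
        return sqlDialect;
    }

    public static void main(String[] args) {
        TableConfigSketch config = new TableConfigSketch();
        config.setSqlDialect(SqlDialect.HIVE);
        System.out.println(config.getSqlDialect());
    }
}
```

The design point is that an enum-typed setter is discoverable and type-safe, while the previous SqlParser-config interface exposed far more surface than most users need.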
   




[GitHub] [flink] flinkbot edited a comment on issue #9013: [FLINK-13136] Fix documentation error about stopping job with restful api

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9013: [FLINK-13136] Fix documentation error 
about stopping job with restful api
URL: https://github.com/apache/flink/pull/9013#issuecomment-514125271
 
 
   ## CI report:
   
   * 2d2908752122665dc6bc45ceb4aa28099755f8d5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120146040)
   * dbf5079d7ce8b8710758fb00fe2ab6727ebe72ad : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120200569)
   * 097af583c8040c16da17b796f6e4060c270b5b1d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120308925)
   




[GitHub] [flink] flinkbot edited a comment on issue #9196: [FLINK-13351][table-blink-planner]duplicate case ROW match in FlinkTy…

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9196: 
[FLINK-13351][table-blink-planner]duplicate case ROW match in FlinkTy…
URL: https://github.com/apache/flink/pull/9196#issuecomment-513709777
 
 
   ## CI report:
   
   * 4f2703f8f739ea4cd2d1dead886ac8469131a022 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119980388)
   * 3844fa41477d1f1e51acab34afeae318b5f54d10 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119981971)
   * 4a82dabe4dc6ba065f45213b3b51c4cbe1a560db : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119982826)
   * d4c74758d4bfd664a2bbc15a63093f66aa9458d0 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120313687)
   




[jira] [Commented] (FLINK-13266) Relocate blink planner classes to avoid class clashes

2019-07-23 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891613#comment-16891613
 ] 

Jark Wu commented on FLINK-13266:
-

[FLINK-13266][table] Relocate blink planner classes to avoid class clashes
Fixed in 1.10.0: c601cfd662c2839f8ebc81b80879ecce55a8cbaf
Fixed in 1.9.0: TODO

> Relocate blink planner classes to avoid class clashes
> -
>
> Key: FLINK-13266
> URL: https://issues.apache.org/jira/browse/FLINK-13266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Assignee: godfrey he
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We should have a list to relocate classes in {{flink-table-planner-blink}} 
> and {{flink-table-runtime-blink}} to avoid class clashes to make both 
> planners available in a lib directory.
> Note that, not all the classes can/should be relocated. For examples: calcite 
> classes, {{PlannerExpressionParserImpl}} and so on. 
> The relocation package name is up to discussion. A dedicated path is 
> {{org.apache.flink.table.blink}}.





[jira] [Commented] (FLINK-13266) Relocate blink planner classes to avoid class clashes

2019-07-23 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891612#comment-16891612
 ] 

Jark Wu commented on FLINK-13266:
-

[FLINK-13266][table] Relocate blink runtime classes to avoid class clashes
Fixed in 1.10.0: 9a6ca547d6bd261730c46519f6bffa0b699ec218
Fixed in 1.9.0: TODO

> Relocate blink planner classes to avoid class clashes
> -
>
> Key: FLINK-13266
> URL: https://issues.apache.org/jira/browse/FLINK-13266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Assignee: godfrey he
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We should have a list to relocate classes in {{flink-table-planner-blink}} 
> and {{flink-table-runtime-blink}} to avoid class clashes to make both 
> planners available in a lib directory.
> Note that, not all the classes can/should be relocated. For examples: calcite 
> classes, {{PlannerExpressionParserImpl}} and so on. 
> The relocation package name is up to discussion. A dedicated path is 
> {{org.apache.flink.table.blink}}.





[jira] [Commented] (FLINK-13266) Relocate blink planner classes to avoid class clashes

2019-07-23 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891611#comment-16891611
 ] 

Jark Wu commented on FLINK-13266:
-

[FLINK-13266][table] Move OptimizerConfigOptions & ExecutionConfigOptions to 
table-api-java module
Fixed in 1.10.0: 66e55d60f819c5a1f809830190461fb6ac341b0b
Fixed in 1.9.0: TODO

> Relocate blink planner classes to avoid class clashes
> -
>
> Key: FLINK-13266
> URL: https://issues.apache.org/jira/browse/FLINK-13266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Assignee: godfrey he
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We should have a list to relocate classes in {{flink-table-planner-blink}} 
> and {{flink-table-runtime-blink}} to avoid class clashes to make both 
> planners available in a lib directory.
> Note that, not all the classes can/should be relocated. For examples: calcite 
> classes, {{PlannerExpressionParserImpl}} and so on. 
> The relocation package name is up to discussion. A dedicated path is 
> {{org.apache.flink.table.blink}}.





[GitHub] [flink] wuchong closed pull request #9185: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes

2019-07-23 Thread GitBox
wuchong closed pull request #9185: [FLINK-13266] [table] Relocate blink planner 
classes to avoid class clashes
URL: https://github.com/apache/flink/pull/9185
 
 
   




[GitHub] [flink] wuchong commented on issue #9185: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes

2019-07-23 Thread GitBox
wuchong commented on issue #9185: [FLINK-13266] [table] Relocate blink planner 
classes to avoid class clashes
URL: https://github.com/apache/flink/pull/9185#issuecomment-514482816
 
 
   eac67e0a39a44c8a92dc0c4f6e2867e9f6fd90df to 
10b5f418f3ad734d85c3476478fcae1128903707 are merged.




[jira] [Updated] (FLINK-13394) secure MapR repo URL is not work in E2E crontab builds

2019-07-23 Thread Zhenghua Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenghua Gao updated FLINK-13394:
-
Description: [FLINK-12578|http://example.com/] introduces an https URL for MapR, 
but this causes failures on Travis for some reason. travis_watchdog.sh and 
travis_controller.sh are fixed by the unsafe-mapr-repo profile, but nightly.sh is 
not fixed.  (was: 
[FLINK-12578|https://issues.apache.org/jira/browse/FLINK-12578] 
[FLINK-12578|http://example.com/] intros https URL for MapR, but this causes 
fails on Travis for some reason. travis_watchdog.sh and travis_controller.sh 
are fixed by unsafe-mapr-repo profile, but nightly.sh is not fixed.)

> secure MapR repo URL is not work in E2E crontab builds
> --
>
> Key: FLINK-13394
> URL: https://issues.apache.org/jira/browse/FLINK-13394
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> [FLINK-12578|http://example.com/] introduces an https URL for MapR, but this causes 
> failures on Travis for some reason. travis_watchdog.sh and travis_controller.sh 
> are fixed by the unsafe-mapr-repo profile, but nightly.sh is not fixed.





[jira] [Created] (FLINK-13394) secure MapR repo URL is not work in E2E crontab builds

2019-07-23 Thread Zhenghua Gao (JIRA)
Zhenghua Gao created FLINK-13394:


 Summary: secure MapR repo URL is not work in E2E crontab builds
 Key: FLINK-13394
 URL: https://issues.apache.org/jira/browse/FLINK-13394
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.9.0, 1.10.0
Reporter: Zhenghua Gao
 Fix For: 1.9.0, 1.10.0


[FLINK-12578|https://issues.apache.org/jira/browse/FLINK-12578] 
[FLINK-12578|http://example.com/] introduces an https URL for MapR, but this causes 
failures on Travis for some reason. travis_watchdog.sh and travis_controller.sh 
are fixed by the unsafe-mapr-repo profile, but nightly.sh is not fixed.





[GitHub] [flink] AjaxXu commented on issue #9196: [FLINK-13351][table-blink-planner]duplicate case ROW match in FlinkTy…

2019-07-23 Thread GitBox
AjaxXu commented on issue #9196: [FLINK-13351][table-blink-planner]duplicate 
case ROW match in FlinkTy…
URL: https://github.com/apache/flink/pull/9196#issuecomment-514482470
 
 
   @wuchong Would you mind helping me review and merge this PR? Thanks.




[GitHub] [flink] flinkbot edited a comment on issue #9185: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9185: [FLINK-13266] [table] Relocate blink 
planner classes to avoid class clashes
URL: https://github.com/apache/flink/pull/9185#issuecomment-513459343
 
 
   ## CI report:
   
   * 0bbaac120acf9042279412e19d4317134821092f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119881828)
   * 2266617d608ef5f22a75a6d6c6dc809f6f9df1f9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119913854)
   * 80303520eb52d46eae7f42ba67e45414e5f44d13 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120114101)
   * 31471ad687d7e36778a14465c9614d9e34d32b72 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120193153)
   * cb0fbd9d2fa18749f799b3563c56df6f07105fd7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120226950)
   * e19ebb053198c52aaae25ce79416c96dffbb4db3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120228138)
   * 10b5f418f3ad734d85c3476478fcae1128903707 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120307316)
   




[GitHub] [flink] flinkbot commented on issue #9211: [FLINK-13393][FLINK-13391][table-planner-blink] Fix source conversion and source return type

2019-07-23 Thread GitBox
flinkbot commented on issue #9211: 
[FLINK-13393][FLINK-13391][table-planner-blink] Fix source conversion and 
source return type
URL: https://github.com/apache/flink/pull/9211#issuecomment-514475275
 
 
   ## CI report:
   
   * b465c69b21a3ac810bc34e4f95dc0b3a3d93281c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120311139)
   




[jira] [Updated] (FLINK-13338) Sql conformance is hard to config in TableConfig

2019-07-23 Thread Danny Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chan updated FLINK-13338:
---
Summary: Sql conformance is hard to config in TableConfig  (was: Add sql 
conformance config interface in TableConfig)

> Sql conformance is hard to config in TableConfig
> 
>
> Key: FLINK-13338
> URL: https://issues.apache.org/jira/browse/FLINK-13338
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Danny Chan
>Priority: Critical
> Fix For: 1.9.0
>
>
> Now TableConfig only has an interface to configure the SqlParser config, which 
> is very broad and hard for users to use; we should at least supply an 
> interface to configure the SQL conformance.





[jira] [Commented] (FLINK-13391) Blink-planner should not invoke deprecated getReturnType of TableSource

2019-07-23 Thread Jingsong Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891594#comment-16891594
 ] 

Jingsong Lee commented on FLINK-13391:
--

PR: [https://github.com/apache/flink/pull/9211]

> Blink-planner should not invoke deprecated getReturnType of TableSource
> ---
>
> Key: FLINK-13391
> URL: https://issues.apache.org/jira/browse/FLINK-13391
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> Now, blink-planner invokes getDataStream of InputFormatTableSource, which in 
> turn invokes the deprecated getReturnType method.
> We should invoke getInputFormat of InputFormatTableSource instead, to be the 
> same as flink-planner.





[GitHub] [flink] flinkbot commented on issue #9211: [FLINK-13393][FLINK-13391][table-planner-blink] Fix source conversion and source return type

2019-07-23 Thread GitBox
flinkbot commented on issue #9211: 
[FLINK-13393][FLINK-13391][table-planner-blink] Fix source conversion and 
source return type
URL: https://github.com/apache/flink/pull/9211#issuecomment-514473895
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13393) Blink-planner not support generic TableSource

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13393:
---
Labels: pull-request-available  (was: )

> Blink-planner not support generic TableSource
> -
>
> Key: FLINK-13393
> URL: https://issues.apache.org/jira/browse/FLINK-13393
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>
> Now there is an exception when using:
> class MyTableSource[T] extends StreamTableSource[T].





[GitHub] [flink] JingsongLi opened a new pull request #9211: [FLINK-13393][FLINK-13391][table-planner-blink] Fix source conversion and source return type

2019-07-23 Thread GitBox
JingsongLi opened a new pull request #9211: 
[FLINK-13393][FLINK-13391][table-planner-blink] Fix source conversion and 
source return type
URL: https://github.com/apache/flink/pull/9211
 
 
   
   ## What is the purpose of the change
   
   1. Blink-planner should use the conversion class of DataType
   2. Blink-planner should not invoke the deprecated getReturnType of TableSource
   
   ## Verifying this change
   
   TableSourceITCase
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
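The second point — steering the planner away from a deprecated accessor — follows a common deprecation-delegation pattern that can be sketched in isolation. `SketchTableSource`, `getReturnType`, and `getInputFormat` below are illustrative stand-ins, not Flink's real interfaces:

```java
import java.util.function.Supplier;

// Hypothetical stand-ins: the old accessor is deprecated, and the
// planner-side fix is to route through the new accessor only.
interface SketchTableSource<T> {
    /** @deprecated kept for compatibility; callers should stop using this. */
    @Deprecated
    default Class<?> getReturnType() {
        throw new UnsupportedOperationException("deprecated path");
    }

    // Replacement accessor, analogous to going through getInputFormat
    // instead of getDataStream/getReturnType.
    Supplier<T> getInputFormat();
}

public class DeprecationSketch {
    // The "fixed" planner path: only the non-deprecated accessor is used,
    // so sources that never implemented the old method still work.
    static <T> T produce(SketchTableSource<T> source) {
        return source.getInputFormat().get();
    }

    public static void main(String[] args) {
        SketchTableSource<String> source = () -> () -> "row";
        System.out.println(produce(source));
    }
}
```

The point of the pattern is that the deprecated default can throw (or return stale information) without breaking any caller, because no supported code path reaches it anymore.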




[jira] [Commented] (FLINK-13393) Blink-planner not support generic TableSource

2019-07-23 Thread Jingsong Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891589#comment-16891589
 ] 

Jingsong Lee commented on FLINK-13393:
--

[~jark] Can you assign this one to me?

> Blink-planner not support generic TableSource
> -
>
> Key: FLINK-13393
> URL: https://issues.apache.org/jira/browse/FLINK-13393
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> Now there is an exception when using:
> class MyTableSource[T] extends StreamTableSource[T].





[GitHub] [flink] flinkbot edited a comment on issue #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9113: [FLINK-13222] [runtime] Add 
documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113#issuecomment-511360277
 
 
   ## CI report:
   
   * b6bb574fbe0e3421adb07aef7ffd1c8068675a74 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119416196)
   * 6f87a7dd6303a786b0029870f6ee887903f40d60 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441162)
   * 1e717dfbdee0eab94ce491fd965c6b8b7671 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119473960)
   * 0c2593ae0417ad1ba7dda12f08ba8f071bfaa8c3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119766326)
   * 79c8f9d272a1cf8b33c29d239c15739115b570d4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119789055)
   * 2bd7dde5d99431579377e332abb0b71af1276049 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120106694)
   * 8ad04989600d9b4bb347bfef3dd90174649e4ad9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120192385)
   * 7bc2eff3d3dc7c3080c13505ebbc7f80ef906062 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120310203)
   




[GitHub] [flink] wuchong commented on issue #9185: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes

2019-07-23 Thread GitBox
wuchong commented on issue #9185: [FLINK-13266] [table] Relocate blink planner 
classes to avoid class clashes
URL: https://github.com/apache/flink/pull/9185#issuecomment-514468797
 
 
   Thanks for the update @godfreyhe. The rest of the commits look good to me. I 
will merge them once Travis passes. 
   
   @twalthr , I will merge eac67e0a39a44c8a92dc0c4f6e2867e9f6fd90df (move 
options) too to unblock many issues. We can continue to review config names in 
#9203.




[GitHub] [flink] flinkbot edited a comment on issue #9013: [FLINK-13136] Fix documentation error about stopping job with restful api

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9013: [FLINK-13136] Fix documentation error 
about stopping job with restful api
URL: https://github.com/apache/flink/pull/9013#issuecomment-514125271
 
 
   ## CI report:
   
   * 2d2908752122665dc6bc45ceb4aa28099755f8d5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120146040)
   * dbf5079d7ce8b8710758fb00fe2ab6727ebe72ad : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120200569)
   * 097af583c8040c16da17b796f6e4060c270b5b1d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120308925)
   




[jira] [Commented] (FLINK-11631) TaskExecutorITCase#testJobReExecutionAfterTaskExecutorTermination unstable on Travis

2019-07-23 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891575#comment-16891575
 ] 

vinoyang commented on FLINK-11631:
--

Another instance: [https://api.travis-ci.com/v3/job/218607774/log.txt]

> TaskExecutorITCase#testJobReExecutionAfterTaskExecutorTermination unstable on 
> Travis
> 
>
> Key: FLINK-11631
> URL: https://issues.apache.org/jira/browse/FLINK-11631
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.8.0
>Reporter: Till Rohrmann
>Assignee: Biao Liu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{TaskExecutorITCase#testJobReExecutionAfterTaskExecutorTermination}} is 
> unstable on Travis. It fails with 
> {code}
> 16:12:04.644 [ERROR] 
> testJobReExecutionAfterTaskExecutorTermination(org.apache.flink.runtime.taskexecutor.TaskExecutorITCase)
>   Time elapsed: 1.257 s  <<< ERROR!
> org.apache.flink.util.FlinkException: Could not close resource.
>   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorITCase.teardown(TaskExecutorITCase.java:83)
> Caused by: org.apache.flink.util.FlinkException: Error while shutting the 
> TaskExecutor down.
> Caused by: org.apache.flink.util.FlinkException: Could not properly shut down 
> the TaskManager services.
> Caused by: java.lang.IllegalStateException: NetworkBufferPool is not empty 
> after destroying all LocalBufferPools
> {code} 
> https://api.travis-ci.org/v3/job/493221318/log.txt
> The problem seems to be caused by the {{TaskExecutor}} not properly waiting 
> for the termination of all running {{Tasks}}. Due to this, there is a race 
> condition which causes that not all buffers are returned to the 
> {{BufferPool}}.
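The fix direction the report points at — wait for every running task to terminate before tearing down shared resources — can be sketched with plain executors. This is a generic illustration of the ordering, not the TaskExecutor's actual shutdown code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {

    // Shut the task pool down and block until every task has finished;
    // only then is it safe to destroy shared resources such as buffer pools.
    static boolean awaitAllTasks(ExecutorService tasks) {
        tasks.shutdown();
        try {
            if (!tasks.awaitTermination(10, TimeUnit.SECONDS)) {
                tasks.shutdownNow();
                return false;
            }
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService tasks = Executors.newFixedThreadPool(2);
        tasks.submit(() -> { /* a task returns its buffers when it completes */ });
        // Destroying the shared pool before awaitAllTasks returns is exactly
        // the race the stack trace above describes.
        System.out.println(awaitAllTasks(tasks) ? "safe to destroy pools" : "forced shutdown");
    }
}
```

Skipping the `awaitTermination` step is what produces "NetworkBufferPool is not empty after destroying all LocalBufferPools"-style failures: teardown races against tasks that are still holding buffers.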





[GitHub] [flink] wuchong commented on a change in pull request #9203: [FLINK-13375][table-api] Move ExecutionConfigOptions and OptimizerConfigOptions to table-api

2019-07-23 Thread GitBox
wuchong commented on a change in pull request #9203: [FLINK-13375][table-api] 
Move ExecutionConfigOptions and OptimizerConfigOptions to table-api
URL: https://github.com/apache/flink/pull/9203#discussion_r306615719
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
 ##
 @@ -83,97 +83,97 @@
//  Resource Options
// 

 
-   public static final ConfigOption 
SQL_RESOURCE_DEFAULT_PARALLELISM =
-   key("sql.resource.default.parallelism")
+   public static final ConfigOption 
TABLE_EXEC_RESOURCE_DEFAULT_PARALLELISM =
+   key("table.exec.resource.default-parallelism")
.defaultValue(-1)
.withDescription("Default parallelism 
of job operators. If it is <= 0, use parallelism of 
StreamExecutionEnvironment(" +
"its default value is 
the num of cpu cores in the client host).");
 
-   public static final ConfigOption 
SQL_RESOURCE_SOURCE_PARALLELISM =
-   key("sql.resource.source.parallelism")
+   public static final ConfigOption 
TABLE_EXEC_RESOURCE_SOURCE_PARALLELISM =
+   key("table.exec.resource.source.parallelism")
.defaultValue(-1)
-   .withDescription("Sets source 
parallelism, if it is <= 0, use " + SQL_RESOURCE_DEFAULT_PARALLELISM.key() + " 
to set source parallelism.");
+   .withDescription("Sets source 
parallelism, if it is <= 0, use " + 
TABLE_EXEC_RESOURCE_DEFAULT_PARALLELISM.key() + " to set source parallelism.");
 
-	public static final ConfigOption SQL_RESOURCE_SINK_PARALLELISM =
-		key("sql.resource.sink.parallelism")
+	public static final ConfigOption TABLE_EXEC_RESOURCE_SINK_PARALLELISM =
+		key("table.exec.resource.sink.parallelism")
 			.defaultValue(-1)
-			.withDescription("Sets sink parallelism, if it is <= 0, use " + SQL_RESOURCE_DEFAULT_PARALLELISM.key() + " to set sink parallelism.");
+			.withDescription("Sets sink parallelism, if it is <= 0, use " + TABLE_EXEC_RESOURCE_DEFAULT_PARALLELISM.key() + " to set sink parallelism.");

-	public static final ConfigOption SQL_RESOURCE_EXTERNAL_BUFFER_MEM =
-		key("sql.resource.external-buffer.memory.mb")
-			.defaultValue(10)
+	public static final ConfigOption TABLE_EXEC_RESOURCE_EXTERNAL_BUFFER_MEM =
+		key("table.exec.resource.external-buffer.memory")
+			.defaultValue("10 mb")
 			.withDescription("Sets the externalBuffer memory size that is used in sortMergeJoin and overWindow.");

-	public static final ConfigOption SQL_RESOURCE_HASH_AGG_TABLE_MEM =
-		key("sql.resource.hash-agg.table.memory.mb")
-			.defaultValue(128)
-			.withDescription("Sets the table memory size of hashAgg operator.");
+	public static final ConfigOption TABLE_EXEC_RESOURCE_HASH_AGG_TABLE_MEM =
+		key("table.exec.resource.hash-agg.memory")
+			.defaultValue("128 mb")
+			.withDescription("Sets the managed memory size of hash aggregate operator.");

-	public static final ConfigOption SQL_RESOURCE_HASH_JOIN_TABLE_MEM =
-		key("sql.resource.hash-join.table.memory.mb")
-			.defaultValue(128)
-			.withDescription("Sets the HashTable reserved memory for hashJoin operator. It defines the lower limit.");
+	public static final ConfigOption TABLE_EXEC_RESOURCE_HASH_JOIN_TABLE_MEM =
+		key("table.exec.resource.hash-join.memory")
+			.defaultValue("128 mb")
+			.withDescription("Sets the managed memory for hash join operator. It defines the lower limit.");

-	public static final ConfigOption SQL_RESOURCE_SORT_BUFFER_MEM =
-		key("sql.resource.sort.buffer.memory.mb")
-		.defaultValue(128)
+	public static final ConfigOption TABLE_EXEC_RESOURCE_SORT_BUFFER_MEM =
+		key("table.exec.resource.sort.buffer.memory")
+		.defaultValue("128 mb")
 			.withDescription("Sets the buffer memory
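The renamed options above follow a key()/defaultValue()/withDescription() builder chain. A minimal self-contained sketch of that pattern follows; the ConfigOption class here is a simplified hypothetical stand-in, not Flink's actual org.apache.flink.configuration.ConfigOption:

```java
// Minimal sketch of the key()/defaultValue()/withDescription() builder
// pattern used above. ConfigOption here is a simplified hypothetical
// stand-in, NOT Flink's org.apache.flink.configuration.ConfigOption.
final class ConfigOption<T> {
    private final String key;
    private final T defaultValue;
    private final String description;

    ConfigOption(String key, T defaultValue, String description) {
        this.key = key;
        this.defaultValue = defaultValue;
        this.description = description;
    }

    String key() { return key; }
    T defaultValue() { return defaultValue; }
    String description() { return description; }

    // Entry point of the builder chain, mirroring key("...") in the diff.
    static Builder key(String key) { return new Builder(key); }

    static final class Builder {
        private final String key;
        Builder(String key) { this.key = key; }

        // defaultValue(...) fixes the option's value type T.
        <T> TypedBuilder<T> defaultValue(T value) {
            return new TypedBuilder<>(key, value);
        }
    }

    static final class TypedBuilder<T> {
        private final String key;
        private final T defaultValue;
        TypedBuilder(String key, T defaultValue) {
            this.key = key;
            this.defaultValue = defaultValue;
        }
        ConfigOption<T> withDescription(String description) {
            return new ConfigOption<>(key, defaultValue, description);
        }
    }
}

public class ConfigOptionDemo {
    // Mirrors the renamed sort-buffer option from the diff: after the change
    // the default is a human-readable memory string instead of an int of MBs.
    static final ConfigOption<String> TABLE_EXEC_RESOURCE_SORT_BUFFER_MEM =
        ConfigOption.key("table.exec.resource.sort.buffer.memory")
            .defaultValue("128 mb")
            .withDescription("Sets the buffer memory used by sort.");

    public static void main(String[] args) {
        System.out.println(TABLE_EXEC_RESOURCE_SORT_BUFFER_MEM.key());
        System.out.println(TABLE_EXEC_RESOURCE_SORT_BUFFER_MEM.defaultValue());
    }
}
```

Making defaultValue() generic is what lets one builder produce both the Integer-typed parallelism option and the String-typed memory options shown in the diff.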

[GitHub] [flink] yanghua commented on issue #9204: [FLINK-13158] Remove WebMonitor interface from webmonitor package

2019-07-23 Thread GitBox
yanghua commented on issue #9204: [FLINK-13158] Remove WebMonitor interface 
from webmonitor package
URL: https://github.com/apache/flink/pull/9204#issuecomment-514466163
 
 
   cc @tillrohrmann 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13393) Blink-planner not support generic TableSource

2019-07-23 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13393:
---

Assignee: Jingsong Lee

> Blink-planner not support generic TableSource
> -
>
> Key: FLINK-13393
> URL: https://issues.apache.org/jira/browse/FLINK-13393
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> Now there is an exception when using:
> class MyTableSource[T] extends StreamTableSource[T].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13391) Blink-planner should not invoke deprecated getReturnType of TableSource

2019-07-23 Thread Jingsong Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891564#comment-16891564
 ] 

Jingsong Lee commented on FLINK-13391:
--

Yeah, the same as flink-planner does. Another bug is 
https://issues.apache.org/jira/browse/FLINK-13393; [~jark], can you assign it to me?

> Blink-planner should not invoke deprecated getReturnType of TableSource
> ---
>
> Key: FLINK-13391
> URL: https://issues.apache.org/jira/browse/FLINK-13391
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> Now, blink-planner invokes getDataStream of InputFormatTableSource, which in 
> turn invokes the deprecated getReturnType method.
> We should invoke getInputFormat of InputFormatTableSource to be consistent with 
> flink-planner.





[GitHub] [flink] flinkbot edited a comment on issue #9185: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9185: [FLINK-13266] [table] Relocate blink 
planner classes to avoid class clashes
URL: https://github.com/apache/flink/pull/9185#issuecomment-513459343
 
 
   ## CI report:
   
   * 0bbaac120acf9042279412e19d4317134821092f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119881828)
   * 2266617d608ef5f22a75a6d6c6dc809f6f9df1f9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119913854)
   * 80303520eb52d46eae7f42ba67e45414e5f44d13 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120114101)
   * 31471ad687d7e36778a14465c9614d9e34d32b72 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120193153)
   * cb0fbd9d2fa18749f799b3563c56df6f07105fd7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120226950)
   * e19ebb053198c52aaae25ce79416c96dffbb4db3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120228138)
   * 10b5f418f3ad734d85c3476478fcae1128903707 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120307316)
   




[jira] [Commented] (FLINK-13391) Blink-planner should not invoke deprecated getReturnType of TableSource

2019-07-23 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891562#comment-16891562
 ] 

Jark Wu commented on FLINK-13391:
-

I think we should not call {{getReturnType}} in 
{{InputFormatTableSource#getDataStream}}. We should call 
{{TypeConversions.fromDataTypeToLegacyInfo(getProducedDataType())}} instead.
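The suggested fix replaces a direct getReturnType() call with a conversion from getProducedDataType(). A self-contained sketch of that delegation pattern follows; every type here is a simplified hypothetical stand-in, not Flink's actual DataType/TypeInformation/TypeConversions API:

```java
// Simplified stand-ins for the DataType / TypeInformation split.
// None of these are Flink's real classes.
interface DataType { String logicalName(); }

interface TypeInformation<T> { String typeName(); }

final class LegacyTypeInfo<T> implements TypeInformation<T> {
    private final String name;
    LegacyTypeInfo(String name) { this.name = name; }
    public String typeName() { return name; }
}

final class TypeConversionsSketch {
    // Stand-in for a fromDataTypeToLegacyInfo(...) helper: derives the
    // legacy type information from the new DataType.
    static TypeInformation<Object> fromDataTypeToLegacyInfo(DataType dataType) {
        return new LegacyTypeInfo<>(dataType.logicalName());
    }
}

abstract class TableSourceSketch {
    // New accessor: the single source of truth for the produced type.
    abstract DataType getProducedDataType();

    // Deprecated accessor: derived from the new one rather than implemented
    // independently, so the planner never needs to call it directly.
    @Deprecated
    TypeInformation<Object> getReturnType() {
        return TypeConversionsSketch.fromDataTypeToLegacyInfo(getProducedDataType());
    }
}

public class SourceTypeDemo {
    public static void main(String[] args) {
        TableSourceSketch source = new TableSourceSketch() {
            DataType getProducedDataType() {
                return () -> "ROW<id INT, name STRING>";
            }
        };
        // The legacy type info stays consistent with the produced DataType.
        System.out.println(source.getReturnType().typeName());
    }
}
```

The point of the pattern is that the deprecated accessor becomes a pure derivation of the new one, so both can never disagree about the source's type.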

> Blink-planner should not invoke deprecated getReturnType of TableSource
> ---
>
> Key: FLINK-13391
> URL: https://issues.apache.org/jira/browse/FLINK-13391
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> Now, blink-planner invokes getDataStream of InputFormatTableSource, which in 
> turn invokes the deprecated getReturnType method.
> We should invoke getInputFormat of InputFormatTableSource to be consistent with 
> flink-planner.





[jira] [Updated] (FLINK-13285) Check connectors runnable in blink runner

2019-07-23 Thread Jingsong Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-13285:
-
Summary: Check connectors runnable in blink runner  (was: Connectors need 
to get rid of flink-table-planner dependence and check connector ITCase 
runnable in blink runner)

> Check connectors runnable in blink runner
> -
>
> Key: FLINK-13285
> URL: https://issues.apache.org/jira/browse/FLINK-13285
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> Now FLIP-32 is almost done, we should let connectors get rid of 
> flink-table-planner dependence.
> And there are still some planner class need to extract to table-common, just 
> like SchemaValidator.





[jira] [Assigned] (FLINK-13381) BinaryHashTableTest and BinaryExternalSorterTest is crashed on Travis

2019-07-23 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13381:
---

Assignee: Jingsong Lee

> BinaryHashTableTest and BinaryExternalSorterTest  is crashed on Travis
> --
>
> Key: FLINK-13381
> URL: https://issues.apache.org/jira/browse/FLINK-13381
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> Here is an instance of master: 
> https://api.travis-ci.org/v3/job/562437128/log.txt
> Here is an instance of 1.9: https://api.travis-ci.org/v3/job/562380020/log.txt





[jira] [Created] (FLINK-13393) Blink-planner not support generic TableSource

2019-07-23 Thread Jingsong Lee (JIRA)
Jingsong Lee created FLINK-13393:


 Summary: Blink-planner not support generic TableSource
 Key: FLINK-13393
 URL: https://issues.apache.org/jira/browse/FLINK-13393
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jingsong Lee
 Fix For: 1.9.0, 1.10.0


Now there is an exception when using:

class MyTableSource[T] extends StreamTableSource[T].





[GitHub] [flink] lamber-ken commented on issue #9104: [HOXFIX][mvn] upgrade frontend-maven-plugin version to 1.7.5

2019-07-23 Thread GitBox
lamber-ken commented on issue #9104: [HOXFIX][mvn] upgrade 
frontend-maven-plugin version to 1.7.5
URL: https://github.com/apache/flink/pull/9104#issuecomment-514455528
 
 
   Hi @Myasuka, when I use version 1.7.5, everything works fine.
   ```
   dcadmin-imac:flink-runtime-web dcadmin$ mvn clean package -Dfast 
-Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Drat.skip=true -Dskip.npm 
-DskipTests
   [INFO] Scanning for projects...
   [WARNING] 
   [WARNING] Some problems were encountered while building the effective model 
for org.apache.flink:flink-runtime-web_2.11:jar:1.10-SNAPSHOT
   [WARNING] 'artifactId' contains an expression but should be a constant. @ 
org.apache.flink:flink-runtime-web_${scala.binary.version}:[unknown-version], 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/pom.xml, 
line 32, column 14
   [WARNING] 
   [WARNING] It is highly recommended to fix these problems because they 
threaten the stability of your build.
   [WARNING] 
   [WARNING] For this reason, future Maven versions might no longer support 
building such malformed projects.
   [WARNING] 
   [INFO]   
  
   [INFO] 

   [INFO] Building flink-runtime-web 1.10-SNAPSHOT
   [INFO] 

   [INFO] 
   [INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ 
flink-runtime-web_2.11 ---
   [INFO] Deleting 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/target
   [INFO] 
   [INFO] --- maven-checkstyle-plugin:2.17:check (validate) @ 
flink-runtime-web_2.11 ---
   [INFO] 
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven-version) @ 
flink-runtime-web_2.11 ---
   [INFO] Skipping Rule Enforcement.
   [INFO] 
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven) @ 
flink-runtime-web_2.11 ---
   [INFO] Skipping Rule Enforcement.
   [INFO] 
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-versions) @ 
flink-runtime-web_2.11 ---
   [INFO] Skipping Rule Enforcement.
   [INFO] 
   [INFO] --- directory-maven-plugin:0.1:highest-basedir (directories) @ 
flink-runtime-web_2.11 ---
   [INFO] Highest basedir set to: 
/work/projects/BigDataArtisans/flink-projects/flink
   [INFO] 
   [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ flink-runtime-web_2.11 ---
   [INFO] 
   [INFO] --- frontend-maven-plugin:1.7.5:install-node-and-npm (install node 
and npm) @ flink-runtime-web_2.11 ---
   [INFO] Node v10.9.0 is already installed.
   [INFO] 
   [INFO] --- frontend-maven-plugin:1.7.5:npm (npm install) @ 
flink-runtime-web_2.11 ---
   [INFO] Skipping execution.
   [INFO] 
   [INFO] --- frontend-maven-plugin:1.7.5:npm (npm run build) @ 
flink-runtime-web_2.11 ---
   [INFO] Skipping execution.
   [INFO] 
   [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ 
flink-runtime-web_2.11 ---
   [INFO] Using 'UTF-8' encoding to copy filtered resources.
   [INFO] Copying 0 resource
   [INFO] Copying 27 resources
   [INFO] Copying 3 resources
   [INFO] 
   [INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ 
flink-runtime-web_2.11 ---
   [INFO] Compiling 38 source files to 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/target/classes
   [INFO] 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/utils/JarHandlerUtils.java:
 Some input files use or override a deprecated API.
   [INFO] 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/utils/JarHandlerUtils.java:
 Recompile with -Xlint:deprecation for details.
   [INFO] 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java:
 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java uses unchecked or unsafe operations.
   [INFO] 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java:
 Recompile with -Xlint:unchecked for details.
   [INFO] 
   [INFO] --- maven-resources-plugin:3.1.0:testResources 
(default-testResources) @ flink-runtime-web_2.11 ---
   [INFO] Using 'UTF-8' encoding to copy filtered resources.
   [INFO] Copying 2 resources
   [INFO] Copying 3 resources
   [INFO] 
   [INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ 
flink-runtime-web_2.11 ---
   [INFO] Compiling 30 source files to 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/target/test-classes
   [INFO] 
/work/projects/BigDataArtisans/flink-projects/flink/flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/testutils/HttpTestClient.java:
 

[GitHub] [flink] lamber-ken commented on issue #9104: [HOXFIX][mvn] upgrade frontend-maven-plugin version to 1.7.5

2019-07-23 Thread GitBox
lamber-ken commented on issue #9104: [HOXFIX][mvn] upgrade 
frontend-maven-plugin version to 1.7.5
URL: https://github.com/apache/flink/pull/9104#issuecomment-514454791
 
 
   > Thanks for your contribution. However, are you sure bumping plugin version 
is the exact solution to your problem? Moreover, it seems no other guys 
including me ever come across this problem when building Flink before.
   
   Yes. My PC environment is macOS 10.13.6 with Maven 3.2.5. I solved this as I mentioned 
in https://github.com/eirslett/frontend-maven-plugin/issues/783. 
   
   ```
   [ERROR] Failed to execute goal 
com.github.eirslett:frontend-maven-plugin:1.6:install-node-and-npm (install 
node and npm) on project flink-runtime-web_2.11: Execution install node and npm 
of goal com.github.eirslett:frontend-maven-plugin:1.6:install-node-and-npm 
failed: A required class was missing while executing 
com.github.eirslett:frontend-maven-plugin:1.6:install-node-and-npm: 
org/apache/http/protocol/HttpContext
   [ERROR] -
   [ERROR] realm =plugin>com.github.eirslett:frontend-maven-plugin:1.6
   [ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
   [ERROR] urls[0] = 
file:/work/JAVA_WORK/mvn_repo/repos/com/github/eirslett/frontend-maven-plugin/1.6/frontend-maven-plugin-1.6.jar
   [ERROR] urls[1] = 
file:/work/JAVA_WORK/mvn_repo/repos/com/github/eirslett/frontend-plugin-core/1.6/frontend-plugin-core-1.6.jar
   [ERROR] urls[2] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar
   [ERROR] urls[3] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar
   [ERROR] urls[4] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/apache/commons/commons-compress/1.5/commons-compress-1.5.jar
   [ERROR] urls[5] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/tukaani/xz/1.2/xz-1.2.jar
   [ERROR] urls[6] = 
file:/work/JAVA_WORK/mvn_repo/repos/commons-io/commons-io/1.3.2/commons-io-1.3.2.jar
   [ERROR] urls[7] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/apache/commons/commons-exec/1.3/commons-exec-1.3.jar
   [ERROR] urls[8] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/apache/httpcomponents/httpclient/4.5.1/httpclient-4.5.1.jar
   [ERROR] urls[9] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/codehaus/plexus/plexus-utils/3.0.22/plexus-utils-3.0.22.jar
   [ERROR] urls[10] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar
   [ERROR] urls[11] = 
file:/work/JAVA_WORK/mvn_repo/repos/javax/enterprise/cdi-api/1.0/cdi-api-1.0.jar
   [ERROR] urls[12] = 
file:/work/JAVA_WORK/mvn_repo/repos/javax/annotation/jsr250-api/1.0/jsr250-api-1.0.jar
   [ERROR] urls[13] = 
file:/work/JAVA_WORK/mvn_repo/repos/javax/inject/javax.inject/1/javax.inject-1.jar
   [ERROR] urls[14] = 
file:/work/JAVA_WORK/mvn_repo/repos/com/google/guava/guava/10.0.1/guava-10.0.1.jar
   [ERROR] urls[15] = 
file:/work/JAVA_WORK/mvn_repo/repos/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar
   [ERROR] urls[16] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/sonatype/sisu/sisu-guice/3.1.0/sisu-guice-3.1.0-no_aop.jar
   [ERROR] urls[17] = 
file:/work/JAVA_WORK/mvn_repo/repos/aopalliance/aopalliance/1.0/aopalliance-1.0.jar
   [ERROR] urls[18] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/eclipse/sisu/org.eclipse.sisu.inject/0.0.0.M2a/org.eclipse.sisu.inject-0.0.0.M2a.jar
   [ERROR] urls[19] = 
file:/work/JAVA_WORK/mvn_repo/repos/asm/asm/3.3.1/asm-3.3.1.jar
   [ERROR] urls[20] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/codehaus/plexus/plexus-component-annotations/1.5.5/plexus-component-annotations-1.5.5.jar
   [ERROR] urls[21] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/apache/maven/plugin-tools/maven-plugin-annotations/3.2/maven-plugin-annotations-3.2.jar
   [ERROR] urls[22] = 
file:/work/JAVA_WORK/mvn_repo/repos/org/sonatype/plexus/plexus-build-api/0.0.7/plexus-build-api-0.0.7.jar
   [ERROR] Number of foreign imports: 1
   [ERROR] import: Entry[import  from realm 
ClassRealm[project>org.apache.flink:flink-runtime-web_2.11:1.10-SNAPSHOT, 
parent: ClassRealm[maven.api, parent: null]]]
   [ERROR] 
   [ERROR] -: 
org.apache.http.protocol.HttpContext
   [ERROR] -> [Help 1]
   [ERROR] 
   [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
   [ERROR] Re-run Maven using the -X switch to enable full debug logging.
   [ERROR] 
   [ERROR] For more information about the errors and possible solutions, please 
read the following articles:
   [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginContainerException
   
   ```
   



[GitHub] [flink] flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   




[GitHub] [flink] xintongsong commented on issue #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-23 Thread GitBox
xintongsong commented on issue #9105: [FLINK-13241][Yarn/Mesos] Fix 
Yarn/MesosResourceManager setting managed memory size into wrong configuration 
instance.
URL: https://github.com/apache/flink/pull/9105#issuecomment-514451555
 
 
   Thanks for the comments, @tillrohrmann. I find them very helpful. Will 
update the PR addressing your comments ASAP.




[jira] [Commented] (FLINK-13387) Can not download taskmanger & jobmanager's logs in the old UI

2019-07-23 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891525#comment-16891525
 ] 

vinoyang commented on FLINK-13387:
--

Hi [~dawidwys] I'd like to take this ticket, WDYT?

> Can not download taskmanger & jobmanager's logs in the old UI
> -
>
> Key: FLINK-13387
> URL: https://issues.apache.org/jira/browse/FLINK-13387
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.9.0
>
>
> It is not possible to download the taskmanager & jobmanager logs via the old 
> UI.
> The exception is: "Unable to load requested file 
> /old-version/taskmanagers/1234ddfcc0b1d06d2615d5431a08c7b8/stdout."





[jira] [Commented] (FLINK-12249) Type equivalence check fails for Window Aggregates

2019-07-23 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891524#comment-16891524
 ] 

Hequn Cheng commented on FLINK-12249:
-

[~sunjincheng121] Good suggestion! I have created FLINK-13392 to further 
improve the WindowAggregate. 

> Type equivalence check fails for Window Aggregates
> --
>
> Key: FLINK-12249
> URL: https://issues.apache.org/jira/browse/FLINK-12249
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner, Tests
>Affects Versions: 1.9.0
>Reporter: Dawid Wysakowicz
>Assignee: Hequn Cheng
>Priority: Critical
> Fix For: 1.9.0
>
>
> Creating Aggregate node fails in rules: {{LogicalWindowAggregateRule}} and 
> {{ExtendedAggregateExtractProjectRule}} if the only grouping expression is a 
> window and
> we compute aggregation on NON NULLABLE field.
> The root cause for that, is how return type inference strategies in calcite 
> work and how we handle window aggregates. Take 
> {{org.apache.calcite.sql.type.ReturnTypes#AGG_SUM}} as an example, based on 
> {{groupCount}} it adjusts type nullability based on groupCount.
> Though we pass a false information as we strip down window aggregation from 
> groupSet (in {{LogicalWindowAggregateRule}}).
> One can reproduce this problem also with a unit test like this:
> {code}
> @Test
>   def testTumbleFunction2() = {
>  
> val innerQuery =
>   """
> |SELECT
> | CASE a WHEN 1 THEN 1 ELSE 99 END AS correct,
> | rowtime
> |FROM MyTable
>   """.stripMargin
> val sql =
>   "SELECT " +
> "  SUM(correct) as cnt, " +
> "  TUMBLE_START(rowtime, INTERVAL '15' MINUTE) as wStart " +
> s"FROM ($innerQuery) " +
> "GROUP BY TUMBLE(rowtime, INTERVAL '15' MINUTE)"
> val expected = ""
> streamUtil.verifySql(sql, expected)
>   }
> {code}
> This causes e2e tests to fail: 
> https://travis-ci.org/apache/flink/builds/521183361?utm_source=slack_medium=notification





[GitHub] [flink] hequn8128 commented on issue #9141: [FLINK-12249][table] Fix type equivalence check problems for Window Aggregates

2019-07-23 Thread GitBox
hequn8128 commented on issue #9141: [FLINK-12249][table] Fix type equivalence 
check problems for Window Aggregates
URL: https://github.com/apache/flink/pull/9141#issuecomment-514446216
 
 
   @sunjincheng121 Hi, thanks a lot for your suggestion. I have created a new 
jira([FLINK-13392](https://issues.apache.org/jira/browse/FLINK-13392)) to 
further improve the WindowAggregate. 
   
   Best, Hequn




[jira] [Created] (FLINK-13392) WindowAggregate inherited from Aggregate incorrectly

2019-07-23 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-13392:
---

 Summary: WindowAggregate inherited from Aggregate incorrectly
 Key: FLINK-13392
 URL: https://issues.apache.org/jira/browse/FLINK-13392
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Hequn Cheng


As discussed in FLINK-12249, WindowAggregate inherits from Aggregate incorrectly.

For WindowAggregate, the group keys are the window group plus normal fields (which may be 
empty), while Aggregate only has the normal group keys and knows nothing about the 
window group key. Currently, many planner rules match and apply transformations 
on Aggregate, but some of them are not applicable to WindowAggregate, e.g. 
AggregateJoinTransposeRule, AggregateProjectMergeRule, etc.

Although FLINK-12249 fixes the type equivalence check problem, we should go a 
step further and correct the WindowAggregate behavior. There are three options 
now:
 # Make Aggregate's group key support expressions (such as RexCall), not field 
references only, so that the window group expression can be part of 
Aggregate's group key. The disadvantage is that we must update all existing 
aggregate rules, metadata handlers, etc.
 # Make WindowAggregate extend SingleRel instead of Aggregate. The 
disadvantage is that we must implement the related planner rules for WindowAggregate.
 # In the logical phase, do not merge Aggregate and Project (with window 
group) into WindowAggregate; instead, convert the Project into a new kind of node 
named WindowAssigner, which prevents the Project from being pushed 
down/merged, and merge them into WindowAggregate in the physical phase. The 
advantage is that we can reuse the current aggregate rules; the disadvantage is that we 
must add new rules for WindowAssigner.

We can discuss this further in the JIRA ticket.





[jira] [Updated] (FLINK-13391) Blink-planner should not invoke deprecated getReturnType of TableSource

2019-07-23 Thread Jingsong Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-13391:
-
Summary: Blink-planner should not invoke deprecated getReturnType of 
TableSource  (was: Blink-planner should not invoke deprecated getReturnType)

> Blink-planner should not invoke deprecated getReturnType of TableSource
> ---
>
> Key: FLINK-13391
> URL: https://issues.apache.org/jira/browse/FLINK-13391
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> Now, blink-planner invokes getDataStream of InputFormatTableSource, which in 
> turn invokes the deprecated getReturnType method.
> We should invoke getInputFormat of InputFormatTableSource to be consistent with 
> flink-planner.





[GitHub] [flink] JingsongLi commented on issue #9181: [FLINK-13210][hive] Hive connector test should dependent on blink planner instead of legacy planner

2019-07-23 Thread GitBox
JingsongLi commented on issue #9181: [FLINK-13210][hive] Hive connector test 
should dependent on blink planner instead of legacy planner
URL: https://github.com/apache/flink/pull/9181#issuecomment-514445234
 
 
   > @JingsongLi thanks for the review. One side note is blink planner requires 
a table source to implement `getReturnType()` method (hence the change to 
`HiveTableSource`). This method is marked as deprecated and seems the legacy 
planner doesn't require it. So I think it's better if we can avoid this 
inconsistency.
   
   Hi @lirui-apache, you are right, Blink-planner should make some changes to 
avoid calling `getReturnType`.
   I created a JIRA about it: https://issues.apache.org/jira/browse/FLINK-13391. 
Will fix it soon.




[jira] [Comment Edited] (FLINK-12008) Support read a whole directory or multiple input data files for read apis of HadoopInputs

2019-07-23 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891521#comment-16891521
 ] 

vinoyang edited comment on FLINK-12008 at 7/24/19 1:41 AM:
---

Agree. Your idea sounds much more concise. If you don't mind, I will start on it.


was (Author: yanghua):
Agree. Your idea sounds really more concise.

> Support read a whole directory or multiple input data files for read apis of 
> HadoopInputs
> -
>
> Key: FLINK-12008
> URL: https://issues.apache.org/jira/browse/FLINK-12008
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> Currently, the read APIs provided by {{HadoopInputs}} can only read one path. 
> I think that is not flexible enough. We should support reading a whole directory or 
> multiple input files.
> Hadoop provides {{org.apache.hadoop.mapred.FileInputFormat.setInputPaths()}} 
> to support this requirement. 
> Spark's {{sequenceFile}} API calls this 
> API([https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala#L1049].)
> Flink calls {{org.apache.hadoop.mapred.FileInputFormat.addInputPath}} which 
> only  supports one path.





[jira] [Commented] (FLINK-12008) Support read a whole directory or multiple input data files for read apis of HadoopInputs

2019-07-23 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891521#comment-16891521
 ] 

vinoyang commented on FLINK-12008:
--

Agree. Your idea sounds really more concise.

> Support read a whole directory or multiple input data files for read apis of 
> HadoopInputs
> -
>
> Key: FLINK-12008
> URL: https://issues.apache.org/jira/browse/FLINK-12008
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> Currently, the read APIs provided by {{HadoopInputs}} can only read one path. 
> I think that is not flexible enough. We should support reading a whole directory or 
> multiple input files.
> Hadoop provides {{org.apache.hadoop.mapred.FileInputFormat.setInputPaths()}} 
> to support this requirement. 
> Spark's {{sequenceFile}} API calls this 
> API([https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala#L1049].)
> Flink calls {{org.apache.hadoop.mapred.FileInputFormat.addInputPath}} which 
> only  supports one path.
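The setInputPaths/addInputPath distinction described above can be sketched as follows; JobConfSketch is a hypothetical simplified stand-in for a Hadoop-style job configuration, not the real org.apache.hadoop.mapred API:

```java
import java.util.ArrayList;
import java.util.List;

// JobConfSketch is a hypothetical simplified stand-in for a Hadoop-style
// job configuration; it is NOT the real org.apache.hadoop.mapred API.
class JobConfSketch {
    // Hadoop-style configs store input paths as one comma-separated value.
    private String inputDirs = "";

    // Mirrors addInputPath: appends a single path to the existing list.
    void addInputPath(String path) {
        inputDirs = inputDirs.isEmpty() ? path : inputDirs + "," + path;
    }

    // Mirrors setInputPaths: replaces the whole list, accepting several
    // paths (e.g. a directory listing) in one call.
    void setInputPaths(String... paths) {
        inputDirs = String.join(",", paths);
    }

    List<String> inputPaths() {
        List<String> result = new ArrayList<>();
        for (String p : inputDirs.split(",")) {
            if (!p.isEmpty()) {
                result.add(p);
            }
        }
        return result;
    }
}

public class InputPathsDemo {
    public static void main(String[] args) {
        JobConfSketch conf = new JobConfSketch();
        conf.setInputPaths("/data/2019/07/23", "/data/2019/07/24");
        System.out.println(conf.inputPaths());
    }
}
```

Calling the single-path variant repeatedly can build the same list, but a multi-path setter lets callers pass a whole directory's worth of files at once, which is the capability the ticket asks for.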





[jira] [Created] (FLINK-13391) Blink-planner should not invoke deprecated getReturnType

2019-07-23 Thread Jingsong Lee (JIRA)
Jingsong Lee created FLINK-13391:


 Summary: Blink-planner should not invoke deprecated getReturnType
 Key: FLINK-13391
 URL: https://issues.apache.org/jira/browse/FLINK-13391
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jingsong Lee
 Fix For: 1.9.0, 1.10.0


Now, blink-planner invokes getDataStream of InputFormatTableSource, which in turn 
invokes the deprecated getReturnType method.

We should invoke getInputFormat of InputFormatTableSource to be consistent with 
flink-planner.





[GitHub] [flink] JingsongLi removed a comment on issue #9181: [FLINK-13210][hive] Hive connector test should dependent on blink planner instead of legacy planner

2019-07-23 Thread GitBox
JingsongLi removed a comment on issue #9181: [FLINK-13210][hive] Hive connector 
test should dependent on blink planner instead of legacy planner
URL: https://github.com/apache/flink/pull/9181#issuecomment-51066
 
 
   > getReturnType
   
   Hi @lirui-apache , you should implement `getDataStream` and get rid of 
`getReturnType `




[GitHub] [flink] JingsongLi commented on issue #9181: [FLINK-13210][hive] Hive connector test should dependent on blink planner instead of legacy planner

2019-07-23 Thread GitBox
JingsongLi commented on issue #9181: [FLINK-13210][hive] Hive connector test 
should dependent on blink planner instead of legacy planner
URL: https://github.com/apache/flink/pull/9181#issuecomment-51066
 
 
   > getReturnType
   
   Hi @lirui-apache , you should implement `getDataStream` and get rid of 
`getReturnType `


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-23 Thread GitBox
flinkbot commented on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on issue #9210: Datastream walkthrough

2019-07-23 Thread GitBox
sjwiesman commented on issue #9210: Datastream walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514436414
 
 
   cc @knaufk @morsapaes @fhueske 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9210: Datastream walkthrough

2019-07-23 Thread GitBox
flinkbot commented on issue #9210: Datastream walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514435923
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman opened a new pull request #9210: Datastream walkthrough

2019-07-23 Thread GitBox
sjwiesman opened a new pull request #9210: Datastream walkthrough
URL: https://github.com/apache/flink/pull/9210
 
 
   ## What is the purpose of the change
   
   As part of FLIP-42 we want to add a DataStream walkthrough that new users 
can use to get started with the API. 
   
   ## Brief change log
   
   This getting-started package is based on the Table API getting-started guide 
and follows the same structure. It builds on #8903 because it uses the same 
walkthrough modules, so only the last commit is relevant. 
   
   * Add two new maven archetypes
 * flink-walkthrough-datastream-java
 * flink-walkthrough-datastream-scala
   
   * Walkthrough guide
   
   ## Verifying this change
   
   The archetypes include end-to-end tests that validate they compile and run
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no 
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13385) Align Hive data type mapping with FLIP-37

2019-07-23 Thread Xuefu Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891444#comment-16891444
 ] 

Xuefu Zhang commented on FLINK-13385:
-

Hi [~twalthr], thanks for pointing this out. I'm not sure if I fully understand 
your change request regarding the following:
{code}
BINARY  >>N/A<<
VARBINARY(p)>>N/A<<
>>BYTES BINARY<<
{code}

The first two lines are currently mapped to Hive's binary type, as shown in 
HiveTypeUtil.java. In addition, BINARY and VARBINARY are defined in 
LogicalTypeRoot while BYTES is defined in DataTypes. I'm not sure why we 
should put them together.
Please clarify.

> Align Hive data type mapping with FLIP-37
> -
>
> Key: FLINK-13385
> URL: https://issues.apache.org/jira/browse/FLINK-13385
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Timo Walther
>Priority: Major
>
> By looking at the Hive data type mapping of:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/catalog.html#data-type-mapping
> Based on the information available in:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types
> It seems that the types are not mapped correctly. The following changes should 
> be performed (indicated by {{>>...<<}}):
> {code}
> CHAR(p)   char(p)*
> VARCHAR(p)varchar(p)**
> STRINGstring
> BOOLEAN   boolean
> >>TINYINT<<   tinyint
> >>SMALLINT<<  smallint
> INT   int
> BIGINTlong
> FLOAT float
> DOUBLEdouble
> DECIMAL(p, s) decimal(p, s)
> DATE  date
> TIMESTAMP_WITHOUT_TIME_ZONE   TIMESTAMP
> TIMESTAMP_WITH_TIME_ZONE  N/A
> TIMESTAMP_WITH_LOCAL_TIME_ZONEN/A
> INTERVAL  >>INTERVAL?<<
> BINARY>>N/A<<
> VARBINARY(p)  >>N/A<<
> >>BYTES   BINARY<<
> >>ARRAYARRAY<<
> >>MAP   MAP* we support more than primitives<<
> ROW   struct
> MULTISET  N/A
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2019-07-23 Thread Akshay Iyangar (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891375#comment-16891375
 ] 

Akshay Iyangar edited comment on FLINK-11143 at 7/23/19 9:01 PM:
-

Hi,
 Is it working on Flink 1.8? We tried the setting and still seem to hit it 
with the default timeout of 1 ms.

 
{code:java}
level":"WARN","level_value":3,"stack_trace":"java.util.concurrent.CompletionException:
 akka.pattern.AskTimeoutException: Ask timed out on 
[Actor[akka://flink/user/dispatcher#-1731728438]] after [1 ms]. Sender[null] 
sent message of type 
\"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".\n\tat 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)\n\tat
 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)\n\tat
 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)\n\tat
 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)\n\tat
 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)\n\tat
 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)\n\tat
 
{code}


was (Author: aiyangar):
Hi 
Is it working on flink 1.8 ? we tried the setting and still seem to hit it with 
the deafult timeout of 1ms.

 

```

level":"WARN","level_value":3,"stack_trace":"java.util.concurrent.CompletionException:
 akka.pattern.AskTimeoutException: Ask timed out on 
[Actor[akka://flink/user/dispatcher#-1731728438]] after [1 ms]. 
Sender[null] sent message of type 
\"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".\n\tat 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)\n\tat
 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)\n\tat
 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)\n\tat
 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)\n\tat
 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)\n\tat
 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)\n\tat
 
```
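For context, the timeouts involved here can typically be raised via flink-conf.yaml. A hedged sketch with illustrative values only (whether these settings resolve the problem on 1.8 is exactly what is being asked above):

```yaml
# flink-conf.yaml -- illustrative values, not recommendations
# Timeout for ask calls between Flink's RPC actors (the one in the stack trace).
akka.ask.timeout: 600 s
# Timeout (in milliseconds) for REST/web-frontend requests such as job submission.
web.timeout: 600000
```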

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2
>Reporter: Alex Vinnik
>Priority: Major
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> 

[jira] [Commented] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2019-07-23 Thread Akshay Iyangar (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891375#comment-16891375
 ] 

Akshay Iyangar commented on FLINK-11143:


Hi,
Is it working on Flink 1.8? We tried the setting and still seem to hit it with 
the default timeout of 1 ms.

 

```

level":"WARN","level_value":3,"stack_trace":"java.util.concurrent.CompletionException:
 akka.pattern.AskTimeoutException: Ask timed out on 
[Actor[akka://flink/user/dispatcher#-1731728438]] after [1 ms]. 
Sender[null] sent message of type 
\"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".\n\tat 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)\n\tat
 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)\n\tat
 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)\n\tat
 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)\n\tat
 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)\n\tat
 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)\n\tat
 
```

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2
>Reporter: Alex Vinnik
>Priority: Major
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:772)
> at akka.dispatch.OnComplete.internal(Future.scala:258)
> at akka.dispatch.OnComplete.internal(Future.scala:256)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:83)
> at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:603)
> at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
> at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
> at 

[jira] [Resolved] (FLINK-13345) Dump jstack output for Flink JVMs after Jepsen Tests

2019-07-23 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao resolved FLINK-13345.
--
Resolution: Fixed

1.9: 6f27bb1f655941f2ea9b25281d8925873e6a250f
1.10: 869ccd68ac442f72e017232a6e7b91948cadb4dd

> Dump jstack output for Flink JVMs after Jepsen Tests
> 
>
> Key: FLINK-13345
> URL: https://issues.apache.org/jira/browse/FLINK-13345
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Dump the output of {{jstack -l }} for all Flink JVMs after each Jepsen 
> test. This is helpful for debugging deadlocks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] asfgit closed pull request #9194: [FLINK-13345][tests] Dump jstack output for Flink JVMs

2019-07-23 Thread GitBox
asfgit closed pull request #9194: [FLINK-13345][tests] Dump jstack output for 
Flink JVMs
URL: https://github.com/apache/flink/pull/9194
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-13145) Run HA dataset E2E test with new RestartPipelinedRegionStrategy

2019-07-23 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao resolved FLINK-13145.
--
Resolution: Fixed

1.9: 6a79ab2549e58623d9116b4dce31e3a83df8f795
1.10: 1c653ceb25b456a0abe65b22b2eada17ba2bed53

> Run HA dataset E2E test with new RestartPipelinedRegionStrategy
> ---
>
> Key: FLINK-13145
> URL: https://issues.apache.org/jira/browse/FLINK-13145
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.9.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Run HA dataset end-to-end test ({{test-scripts/test_ha_dataset.sh}}) with 
> {{AdaptedRestartPipelinedRegionStrategyNG}} enabled, i.e., with config:
>  * jobmanager.execution.failover-strategy: region
>  * jobmanager.scheduler.partition.force-release-on-consumption: false
> Additionally, kill TaskManagers during job execution.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] GJL closed pull request #9198: [BP-1.9][FLINK-13145][tests] Run HA dataset E2E test with new RestartPipelinedRegionStrategy

2019-07-23 Thread GitBox
GJL closed pull request #9198: [BP-1.9][FLINK-13145][tests] Run HA dataset E2E 
test with new RestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9198
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] GJL commented on issue #9198: [BP-1.9][FLINK-13145][tests] Run HA dataset E2E test with new RestartPipelinedRegionStrategy

2019-07-23 Thread GitBox
GJL commented on issue #9198: [BP-1.9][FLINK-13145][tests] Run HA dataset E2E 
test with new RestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9198#issuecomment-514350666
 
 
   Merged.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] asfgit closed pull request #9060: [FLINK-13145][tests] Run HA dataset E2E test with new RestartPipelinedRegionStrategy

2019-07-23 Thread GitBox
asfgit closed pull request #9060: [FLINK-13145][tests] Run HA dataset E2E test 
with new RestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9060
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9131: [FLINK-12858][checkpointing] Stop-with-savepoint, workaround: fail whole job when savepoint is declined by a task

2019-07-23 Thread GitBox
Myasuka commented on a change in pull request #9131: 
[FLINK-12858][checkpointing] Stop-with-savepoint, workaround: fail whole job 
when savepoint is declined by a task
URL: https://github.com/apache/flink/pull/9131#discussion_r306489949
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointFailureReason.java
 ##
 @@ -23,60 +23,69 @@
  */
 public enum CheckpointFailureReason {
 
-   PERIODIC_SCHEDULER_SHUTDOWN("Periodic checkpoint scheduler is shut 
down."),
+   PERIODIC_SCHEDULER_SHUTDOWN(true, "Periodic checkpoint scheduler is 
shut down."),
 
-   ALREADY_QUEUED("Another checkpoint request has already been queued."),
+   ALREADY_QUEUED(true, "Another checkpoint request has already been 
queued."),
 
-   TOO_MANY_CONCURRENT_CHECKPOINTS("The maximum number of concurrent 
checkpoints is exceeded"),
+   TOO_MANY_CONCURRENT_CHECKPOINTS(true, "The maximum number of concurrent 
checkpoints is exceeded"),
 
-   MINIMUM_TIME_BETWEEN_CHECKPOINTS("The minimum time between checkpoints 
is still pending. " +
+   MINIMUM_TIME_BETWEEN_CHECKPOINTS(true, "The minimum time between 
checkpoints is still pending. " +
"Checkpoint will be triggered after the minimum time."),
 
-   NOT_ALL_REQUIRED_TASKS_RUNNING("Not all required tasks are currently 
running."),
+   NOT_ALL_REQUIRED_TASKS_RUNNING(true, "Not all required tasks are 
currently running."),
 
-   EXCEPTION("An Exception occurred while triggering the checkpoint."),
+   EXCEPTION(true, "An Exception occurred while triggering the 
checkpoint."),
 
-   CHECKPOINT_EXPIRED("Checkpoint expired before completing."),
+   CHECKPOINT_EXPIRED(false, "Checkpoint expired before completing."),
 
-   CHECKPOINT_SUBSUMED("Checkpoint has been subsumed."),
+   CHECKPOINT_SUBSUMED(false, "Checkpoint has been subsumed."),
 
-   CHECKPOINT_DECLINED("Checkpoint was declined."),
+   CHECKPOINT_DECLINED(false, "Checkpoint was declined."),
 
-   CHECKPOINT_DECLINED_TASK_NOT_READY("Checkpoint was declined (tasks not 
ready)"),
+   CHECKPOINT_DECLINED_TASK_NOT_READY(false, "Checkpoint was declined 
(tasks not ready)"),
 
-   CHECKPOINT_DECLINED_TASK_NOT_CHECKPOINTING("Task does not support 
checkpointing"),
+   CHECKPOINT_DECLINED_TASK_NOT_CHECKPOINTING(false, "Task does not 
support checkpointing"),
 
-   CHECKPOINT_DECLINED_SUBSUMED("Checkpoint was canceled because a barrier 
from newer checkpoint was received."),
+   CHECKPOINT_DECLINED_SUBSUMED(false, "Checkpoint was canceled because a 
barrier from newer checkpoint was received."),
 
-   CHECKPOINT_DECLINED_ON_CANCELLATION_BARRIER("Task received cancellation 
from one of its inputs"),
+   CHECKPOINT_DECLINED_ON_CANCELLATION_BARRIER(false, "Task received 
cancellation from one of its inputs"),
 
-   CHECKPOINT_DECLINED_ALIGNMENT_LIMIT_EXCEEDED("The checkpoint alignment 
phase needed to buffer more than the configured maximum bytes"),
+   CHECKPOINT_DECLINED_ALIGNMENT_LIMIT_EXCEEDED(false, "The checkpoint 
alignment phase needed to buffer more than the configured maximum bytes"),
 
-   CHECKPOINT_DECLINED_INPUT_END_OF_STREAM("Checkpoint was declined 
because one input stream is finished"),
+   CHECKPOINT_DECLINED_INPUT_END_OF_STREAM(false, "Checkpoint was declined 
because one input stream is finished"),
 
-   CHECKPOINT_COORDINATOR_SHUTDOWN("CheckpointCoordinator shutdown."),
+   CHECKPOINT_COORDINATOR_SHUTDOWN(true, "CheckpointCoordinator 
shutdown."),
 
 Review comment:
   If `preFlight` indicates whether the checkpoint failed before being sent to 
the tasks, why would `CheckpointCoordinator shutdown` be treated as preFlight?
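To make the preFlight flag under discussion concrete, here is a minimal self-contained sketch of the pattern being reviewed (an illustrative enum, not Flink's actual CheckpointFailureReason):

```java
// Illustrative sketch only, not Flink's real CheckpointFailureReason enum.
// Each failure reason carries a preFlight flag marking failures that occur
// before the checkpoint request ever reaches the tasks.
public class FailureReasonSketch {

    enum CheckpointFailureReason {
        PERIODIC_SCHEDULER_SHUTDOWN(true, "Periodic checkpoint scheduler is shut down."),
        CHECKPOINT_EXPIRED(false, "Checkpoint expired before completing.");

        private final boolean preFlight;
        private final String message;

        CheckpointFailureReason(boolean preFlight, String message) {
            this.preFlight = preFlight;
            this.message = message;
        }

        boolean isPreFlight() {
            return preFlight;
        }

        String message() {
            return message;
        }
    }

    public static void main(String[] args) {
        // Expiration happens after tasks have started checkpointing,
        // so it is not a pre-flight failure under this reading.
        System.out.println(CheckpointFailureReason.CHECKPOINT_EXPIRED.isPreFlight()); // prints false
    }
}
```

Under this reading, a pre-flight reason is one raised before the checkpoint request reaches any task, which is exactly the distinction the review comment questions for the coordinator-shutdown case.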


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13390) Clarify the exact meaning of state size when executing incremental checkpoint

2019-07-23 Thread Yun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891232#comment-16891232
 ] 

Yun Tang commented on FLINK-13390:
--

Another question is whether we should introduce a new REST API or checkpoint 
stats to track the full state size. Several months ago, for an internal 
purpose (monitoring the overall state size as one of the factors to decide 
whether to scale up/down), we added a new method named #getFullStateSize() to 
AbstractCheckpointStats to return the overall state size in our internal Flink. 
I discussed this with [~srichter] before. But from my point of view, I am not 
sure whether this should be contributed back to the Flink community, as it 
would only make a difference for RocksDB incremental checkpoints right now.

Fundamentally, this problem appeared when we introduced 
PlaceholderStreamStateHandle with zero state size; I even wonder whether the 
changed meaning of state size was just an unexpected side-effect.

> Clarify the exact meaning of state size when executing incremental checkpoint
> -
>
> Key: FLINK-13390
> URL: https://issues.apache.org/jira/browse/FLINK-13390
> Project: Flink
>  Issue Type: Improvement
>Reporter: Yun Tang
>Priority: Major
> Fix For: 1.10.0
>
>
> This issue is inspired by [a user 
> mail|https://lists.apache.org/thread.html/56069ce869afbfca66179e89788c05d3b092e3fe363f3540dcdeb7a1@%3Cuser.flink.apache.org%3E]
>  that was confused about the meaning of state size.
> I think changing the description of state size and adding some notes to the 
> documentation could help. Moreover, changing the log message when a checkpoint 
> completes should also be taken into account.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13371) Release partitions in JM if producer restarts

2019-07-23 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-13371:


Assignee: Chesnay Schepler

> Release partitions in JM if producer restarts
> -
>
> Key: FLINK-13371
> URL: https://issues.apache.org/jira/browse/FLINK-13371
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / Network
>Affects Versions: 1.9.0
>Reporter: Andrey Zagrebin
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0
>
>
> As discussed in FLINK-13245, there can be a case where the producer does not 
> even detect any consumption attempt if the consumer fails before the 
> connection is established. This means we cannot fully rely on the shuffle 
> service to release partitions on consumption in case of consumer failure. 
> When the producer restarts, it will leak partitions from the previous 
> attempt. Previously we had an explicit release call for this case in 
> Execution.cancel/suspend. Basically, the JM has to explicitly release all 
> partitions produced by the previous task execution attempt when the producer 
> restarts, including `released on consumption` partitions. For this change, we 
> might need to track all partitions in PartitionTrackerImpl.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13390) Clarify the exact meaning of state size when executing incremental checkpoint

2019-07-23 Thread Yun Tang (JIRA)
Yun Tang created FLINK-13390:


 Summary: Clarify the exact meaning of state size when executing 
incremental checkpoint
 Key: FLINK-13390
 URL: https://issues.apache.org/jira/browse/FLINK-13390
 Project: Flink
  Issue Type: Improvement
Reporter: Yun Tang
 Fix For: 1.10.0


This issue is inspired by [a user 
mail|https://lists.apache.org/thread.html/56069ce869afbfca66179e89788c05d3b092e3fe363f3540dcdeb7a1@%3Cuser.flink.apache.org%3E]
 that was confused about the meaning of state size.
I think changing the description of state size and adding some notes to the 
documentation could help. Moreover, changing the log message when a checkpoint 
completes should also be taken into account.





--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9185: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9185: [FLINK-13266] [table] Relocate blink 
planner classes to avoid class clashes
URL: https://github.com/apache/flink/pull/9185#issuecomment-513459343
 
 
   ## CI report:
   
   * 0bbaac120acf9042279412e19d4317134821092f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119881828)
   * 2266617d608ef5f22a75a6d6c6dc809f6f9df1f9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119913854)
   * 80303520eb52d46eae7f42ba67e45414e5f44d13 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120114101)
   * 31471ad687d7e36778a14465c9614d9e34d32b72 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120193153)
   * cb0fbd9d2fa18749f799b3563c56df6f07105fd7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120226950)
   * e19ebb053198c52aaae25ce79416c96dffbb4db3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120228138)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9185: [FLINK-13266] [table] Relocate blink planner classes to avoid class clashes

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9185: [FLINK-13266] [table] Relocate blink 
planner classes to avoid class clashes
URL: https://github.com/apache/flink/pull/9185#issuecomment-513459343
 
 
   ## CI report:
   
   * 0bbaac120acf9042279412e19d4317134821092f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119881828)
   * 2266617d608ef5f22a75a6d6c6dc809f6f9df1f9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119913854)
   * 80303520eb52d46eae7f42ba67e45414e5f44d13 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120114101)
   * 31471ad687d7e36778a14465c9614d9e34d32b72 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120193153)
   * cb0fbd9d2fa18749f799b3563c56df6f07105fd7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120226950)
   * e19ebb053198c52aaae25ce79416c96dffbb4db3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120228138)
   




[GitHub] [flink] flinkbot edited a comment on issue #9209: [FLINK-13388][web][docs] Updated screenshots to ones taken from the new UI

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9209: [FLINK-13388][web][docs] Updated 
screenshots to ones taken from the new UI
URL: https://github.com/apache/flink/pull/9209#issuecomment-514247617
 
 
   ## CI report:
   
   * 82371dc90bbd38238a9c66dbe61afbca5c167442 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120226903)
   




[GitHub] [flink] tweise commented on a change in pull request #9183: [FLINK-12768][tests] FlinkKinesisConsumerTest.testSourceSynchronization flakiness

2019-07-23 Thread GitBox
tweise commented on a change in pull request #9183: [FLINK-12768][tests] 
FlinkKinesisConsumerTest.testSourceSynchronization flakiness
URL: https://github.com/apache/flink/pull/9183#discussion_r306404698
 
 

 ##
 File path: flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisConsumerTest.java
 ##
 @@ -952,27 +965,33 @@ public void emitWatermark(Watermark mark) {
 
		// trigger sync
		testHarness.setProcessingTime(testHarness.getProcessingTime() + 1);
-		TestWatermarkTracker.assertSingleWatermark(-4);
+		TestWatermarkTracker.assertGlobalWatermark(-4);
 
		final long record2 = record1 + (watermarkSyncInterval * 3) + 1;
		shard1.put(Long.toString(record2));
 
-		// TODO: check for record received instead
-		Thread.sleep(100);
+		// wait for the record to be buffered in the emitter
+		final RecordEmitter emitter = org.powermock.reflect.Whitebox.getInternalState(fetcher, "recordEmitter");
 
 Review comment:
   Yes, that's more of a philosophical discussion outside of the issue at hand. 
Whitebox avoids adding extra code to the class under test purely for the 
purpose of assertions in the test case. The other side of the argument is 
usually that tests can break more easily because there is no compile-time 
check for such access. I think using Whitebox within the same module is fine, 
because one would always run the tests while making potentially breaking 
modifications.
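For illustration, here is a minimal, self-contained sketch of the pattern under discussion. The `Emitter` class and its field name are hypothetical stand-ins (not Flink's actual classes), and plain `java.lang.reflect` is used in place of PowerMock's `Whitebox`, which wraps essentially the same mechanism:

```java
import java.lang.reflect.Field;
import java.util.ArrayDeque;

// Hypothetical class under test: it buffers records internally and exposes
// no accessor for the buffer (mirrors the "recordEmitter" situation above).
class Emitter {
    private final ArrayDeque<String> queue = new ArrayDeque<>();

    void emit(String record) {
        queue.add(record);
    }
}

public class WhiteboxSketch {

    // Rough equivalent of PowerMock's Whitebox.getInternalState(target, name),
    // written with plain java.lang.reflect for illustration.
    @SuppressWarnings("unchecked")
    static <T> T getInternalState(Object target, String fieldName) {
        try {
            Field field = target.getClass().getDeclaredField(fieldName);
            field.setAccessible(true);
            return (T) field.get(target);
        } catch (ReflectiveOperationException e) {
            // There is no compile-time check for the field name; a rename
            // only fails here, at test runtime -- the trade-off noted above.
            throw new RuntimeException(e);
        }
    }

    public static int bufferedCount(Emitter emitter) {
        ArrayDeque<String> queue = getInternalState(emitter, "queue");
        return queue.size();
    }

    public static void main(String[] args) {
        Emitter emitter = new Emitter();
        emitter.emit("a");
        emitter.emit("b");
        System.out.println(bufferedCount(emitter)); // prints 2
    }
}
```

The test can assert on the buffer without any production accessor, at the cost of a string-based field reference that the compiler cannot verify.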




[GitHub] [flink] flinkbot edited a comment on issue #9203: [FLINK-13375][table-api] Move ExecutionConfigOptions and OptimizerConfigOptions to table-api

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9203: [FLINK-13375][table-api] Move 
ExecutionConfigOptions and OptimizerConfigOptions to table-api
URL: https://github.com/apache/flink/pull/9203#issuecomment-514046368
 
 
   ## CI report:
   
   * f5e680b52e6a85e85642fc22a41724c5a452505c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120109078)
   * c1388ab2867ad134b2300ccad6ca519eff547ccb : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120216941)
   




[GitHub] [flink] tweise commented on issue #9183: [FLINK-12768][tests] FlinkKinesisConsumerTest.testSourceSynchronization flakiness

2019-07-23 Thread GitBox
tweise commented on issue #9183: [FLINK-12768][tests] 
FlinkKinesisConsumerTest.testSourceSynchronization flakiness
URL: https://github.com/apache/flink/pull/9183#issuecomment-514272711
 
 
   The records and watermarks are emitted from separate threads (the thread 
that emits the records indicates what the next candidate watermark is). The 
logic in this test relies on the watermark for the last record being emitted 
when advancing the processing time. Therefore it is important that the record 
emit and the watermark state change occur atomically. Otherwise a context 
switch can occur after the record emit, and the watermark is generated using 
the previous state, which leads to the flakiness. All of this only matters for 
this test. In a job, watermarks are emitted periodically and it only matters 
that the record is emitted prior to the corresponding watermark state change.
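The atomicity requirement described above can be sketched as follows (the class and method names are invented for illustration; this is not Flink's actual emitter). Guarding the record emit and the watermark-candidate update with the same lock ensures the watermark thread never observes state from before the emit:

```java
// Sketch of the race described above, under assumed names.
public class AtomicEmitSketch {
    private long lastEmittedTimestamp = Long.MIN_VALUE;
    private final StringBuilder out = new StringBuilder();

    // Record emit and watermark-candidate update happen under one lock, so a
    // periodic watermark thread can never interleave between the two steps
    // and generate a watermark from stale state.
    public synchronized void emitRecord(String record, long timestamp) {
        out.append(record);
        lastEmittedTimestamp = timestamp; // state change is atomic with the emit
    }

    // Called periodically by the (hypothetical) watermark thread.
    public synchronized long watermarkCandidate() {
        return lastEmittedTimestamp;
    }

    public static void main(String[] args) {
        AtomicEmitSketch sketch = new AtomicEmitSketch();
        sketch.emitRecord("r1", 42L);
        System.out.println(sketch.watermarkCandidate()); // prints 42
    }
}
```

Without the shared lock, a context switch between the append and the timestamp update would let `watermarkCandidate()` return the previous value even though the record was already emitted.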
   




[GitHub] [flink] tillrohrmann commented on a change in pull request #9183: [FLINK-12768][tests] FlinkKinesisConsumerTest.testSourceSynchronization flakiness

2019-07-23 Thread GitBox
tillrohrmann commented on a change in pull request #9183: [FLINK-12768][tests] 
FlinkKinesisConsumerTest.testSourceSynchronization flakiness
URL: https://github.com/apache/flink/pull/9183#discussion_r306390890
 
 

 ##
 File path: flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisConsumerTest.java
 ##
 @@ -952,27 +965,33 @@ public void emitWatermark(Watermark mark) {
 
		// trigger sync
		testHarness.setProcessingTime(testHarness.getProcessingTime() + 1);
-		TestWatermarkTracker.assertSingleWatermark(-4);
+		TestWatermarkTracker.assertGlobalWatermark(-4);
 
		final long record2 = record1 + (watermarkSyncInterval * 3) + 1;
		shard1.put(Long.toString(record2));
 
-		// TODO: check for record received instead
-		Thread.sleep(100);
+		// wait for the record to be buffered in the emitter
+		final RecordEmitter emitter = org.powermock.reflect.Whitebox.getInternalState(fetcher, "recordEmitter");
 
 Review comment:
   Using `Whitebox` usually indicates that the class under test is not well 
suited for testing. Maybe it can be refactored.




[jira] [Commented] (FLINK-13266) Relocate blink planner classes to avoid class clashes

2019-07-23 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891156#comment-16891156
 ] 

Timo Walther commented on FLINK-13266:
--

[FLINK-13266][table] Port function-related descriptors to flink-table-common
Fixed in 1.9.0: 22538cc3827c3d483f4ca23fc52ae00b822706ba
Fixed in 1.10.0: 24f1dce1ceca35ef0177fa16648fd40cbcb52ded

> Relocate blink planner classes to avoid class clashes
> -
>
> Key: FLINK-13266
> URL: https://issues.apache.org/jira/browse/FLINK-13266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Assignee: godfrey he
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should have a list of classes to relocate in {{flink-table-planner-blink}} 
> and {{flink-table-runtime-blink}} to avoid class clashes, so that both 
> planners can be available in the lib directory.
> Note that not all classes can/should be relocated, for example the Calcite 
> classes, {{PlannerExpressionParserImpl}}, and so on.
> The relocation package name is up for discussion. A dedicated path is 
> {{org.apache.flink.table.blink}}.





[GitHub] [flink] flinkbot edited a comment on issue #9208: [FLINK-13378][table-planner-blink] Fix bug: Blink-planner not support SingleValueAggFunction

2019-07-23 Thread GitBox
flinkbot edited a comment on issue #9208: [FLINK-13378][table-planner-blink] 
Fix bug: Blink-planner not support SingleValueAggFunction
URL: https://github.com/apache/flink/pull/9208#issuecomment-514213423
 
 
   ## CI report:
   
   * 315ce64bb50731a850ea12f24290b8723a61e3dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120212538)
   




[jira] [Created] (FLINK-13389) Setting DataStream return type breaks some type conversion between Table and DataStream

2019-07-23 Thread Rong Rong (JIRA)
Rong Rong created FLINK-13389:
-

 Summary: Setting DataStream return type breaks some type 
conversion between Table and DataStream
 Key: FLINK-13389
 URL: https://issues.apache.org/jira/browse/FLINK-13389
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Table SQL / API
Reporter: Rong Rong


When converting between a DataStream and a Table, there are situations where only 
GenericTypeInfo can be applied successfully, but not a directly set, specific 
RowTypeInfo.
For example, the following code doesn't work:

{code:java}
TypeInformation<?>[] types = {
	BasicTypeInfo.INT_TYPE_INFO,
	TimeIndicatorTypeInfo.ROWTIME_INDICATOR(),
	BasicTypeInfo.STRING_TYPE_INFO};
String[] names = {"a", "b", "c"};
RowTypeInfo typeInfo = new RowTypeInfo(types, names);
DataStream<Row> ds = env.fromCollection(data).returns(typeInfo);
Table sourceTable = tableEnv.fromDataStream(ds, "a,b,c");
tableEnv.registerTable("MyTableRow", sourceTable);

DataStream<Row> stream = tableEnv.toAppendStream(sourceTable, Row.class)
	.map(a -> a)
	// this line breaks the conversion, it sets the typeinfo to RowTypeInfo.
	// without this line the output type is GenericTypeInfo(Row)
	.returns(sourceTable.getSchema().toRowType());
stream.addSink(new StreamITCase.StringSink());
env.execute();
{code}






[GitHub] [flink] tillrohrmann commented on a change in pull request #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-23 Thread GitBox
tillrohrmann commented on a change in pull request #9105: 
[FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory 
size into wrong configuration instance.
URL: https://github.com/apache/flink/pull/9105#discussion_r306380629
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/resourcemanager/ResourceManagerTest.java
 ##
 @@ -270,4 +282,49 @@ private TestingResourceManager createAndStartResourceManager(HeartbeatServices h
 
		return resourceManager;
	}
+
+	/**
+	 * Tests that RM and TM calculate same slot resource profile.
+	 */
+	@Test
+	public void testCreateSlotsPerWorker() throws Exception {
+		testCreateSlotsPerWorker(new Configuration());
+
+		Configuration config1 = new Configuration();
+		config1.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, 5);
+		testCreateSlotsPerWorker(config1);
+
+		Configuration config2 = new Configuration();
+		config2.setString(TaskManagerOptions.MANAGED_MEMORY_SIZE, "789m");
+		testCreateSlotsPerWorker(config2);
+
+		Configuration config3 = new Configuration();
+		config3.setString(TaskManagerOptions.MANAGED_MEMORY_SIZE, "300m");
+		config3.setBoolean(TaskManagerOptions.MEMORY_OFF_HEAP, true);
+		testCreateSlotsPerWorker(config3);
+
+		Configuration config4 = new Configuration();
+		config4.setString(NettyShuffleEnvironmentOptions.NETWORK_BUFFERS_MEMORY_MAX, "10m");
+		config4.setString(NettyShuffleEnvironmentOptions.NETWORK_BUFFERS_MEMORY_MIN, "10m");
+		config4.setBoolean(TaskManagerOptions.MEMORY_OFF_HEAP, true);
+		testCreateSlotsPerWorker(config4);
+	}
+
+	private void testCreateSlotsPerWorker(Configuration config) throws Exception {
+		resourceManager = createAndStartResourceManager(heartbeatServices, config);
 
 Review comment:
   It feels a bit weird that we are instantiating the `ResourceManager` in order 
to test a static helper method. I think this is an indicator that something is 
wrong with the chosen abstractions.
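A hedged sketch of the refactoring direction hinted at here: extract the computation into a plain static helper so that no `ResourceManager` instance (and no `RpcEndpoint`) is needed in the test. The class name and the simplified even-split computation are assumptions for illustration, not Flink's real code:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical extraction of the slot-profile computation into a plain
// static helper that can be unit-tested in isolation.
public class SlotProfileHelper {

    // Simplified stand-in for the real computation: split the total managed
    // memory evenly across the configured number of slots.
    public static List<Integer> createSlotsPerWorker(int managedMemoryMb, int numSlots) {
        int perSlotMb = managedMemoryMb / numSlots;
        return Collections.nCopies(numSlots, perSlotMb);
    }

    public static void main(String[] args) {
        // No ResourceManager, no RpcEndpoint -- just a pure function call.
        System.out.println(createSlotsPerWorker(1000, 4)); // prints [250, 250, 250, 250]
    }
}
```

A test of this helper exercises exactly the computation, with none of the lifecycle setup the current test needs.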




[GitHub] [flink] tillrohrmann commented on a change in pull request #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-23 Thread GitBox
tillrohrmann commented on a change in pull request #9105: 
[FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory 
size into wrong configuration instance.
URL: https://github.com/apache/flink/pull/9105#discussion_r306378649
 
 

 ##
 File path: flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnConfigurationITCase.java
 ##
 @@ -208,4 +215,12 @@ private boolean hasTaskManagerConnectedAndReportedSlots(Collection 0;
	}
	}
+
+	private static int calculateManagedMemorySizeMB(Configuration originalConfiguration, int numSlotsPerTaskManager) {
+		Configuration configuration = new Configuration(originalConfiguration); // copy, because we alter the config
 
 Review comment:
   It is a bit weird that we have to do something like this in a test. This 
might indicate a design flaw of the 
`YarnResourceManager#updateTaskManagerConfigAndCreateWorkerSlotProfiles` method.
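One way to avoid the defensive copy is to make the helper purely read-only, so callers (and tests) never need to clone the configuration. This is a sketch under assumptions: the map-based stand-in for `Configuration` and the config key are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical pure helper: derives the managed-memory size without
// mutating the configuration passed in.
public class PureConfigHelper {

    // Read-only access to a simplified, map-based configuration stand-in:
    // parse the configured total and divide it per slot.
    public static int calculateManagedMemorySizeMB(Map<String, String> config, int numSlots) {
        int totalMb = Integer.parseInt(
            config.getOrDefault("taskmanager.memory.size.mb", "1024")); // assumed key and default
        return totalMb / numSlots;
    }

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("taskmanager.memory.size.mb", "800");
        System.out.println(calculateManagedMemorySizeMB(config, 4)); // prints 200
        // config is unchanged afterwards; no defensive copy was needed.
    }
}
```

Because the helper never writes to its input, the "copy, because we alter the config" comment (and the bug class it hints at) disappears.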




[GitHub] [flink] tillrohrmann commented on a change in pull request #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-23 Thread GitBox
tillrohrmann commented on a change in pull request #9105: 
[FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory 
size into wrong configuration instance.
URL: https://github.com/apache/flink/pull/9105#discussion_r306381627
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ##
 @@ -1203,6 +1221,16 @@ protected int getNumberRequiredTaskManagerSlots() {
//  Helper methods
// 

 
+   @VisibleForTesting
+   Configuration getFlinkConfig() {
+   return flinkConfig;
+   }
+
+   @VisibleForTesting
+   Collection getSlotsPerWorker() {
+   return slotsPerWorker;
+   }
 
 Review comment:
   Getting access to internal state is usually an indicator that our class 
under test is too powerful and is not well suited for testing.




[GitHub] [flink] tillrohrmann commented on a change in pull request #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-23 Thread GitBox
tillrohrmann commented on a change in pull request #9105: 
[FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory 
size into wrong configuration instance.
URL: https://github.com/apache/flink/pull/9105#discussion_r306384549
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ##
 @@ -185,6 +199,10 @@ public ResourceManager(
	this.jmResourceIdRegistrations = new HashMap<>(4);
	this.taskExecutors = new HashMap<>(8);
	this.taskExecutorGatewayFutures = new HashMap<>(8);
+
+	this.defaultTaskManagerMemoryMB = ConfigurationUtils.getTaskManagerHeapMemory(flinkConfig).getMebiBytes();
+	this.numberOfTaskSlots = flinkConfig.getInteger(TaskManagerOptions.NUM_TASK_SLOTS);
+	this.slotsPerWorker = updateTaskManagerConfigAndCreateWorkerSlotProfiles(this.flinkConfig, defaultTaskManagerMemoryMB, numberOfTaskSlots);
 
 Review comment:
   Can we move these computations out of the `ResourceManager`? It seems as if 
we only rely on static helper methods. We could create a 
`TaskManagerSpecification` object containing the required information. That 
way, it would also be easier to test the correct computation without having to 
introduce these `VisibleForTesting` methods and having to create `RpcEndpoints` 
for only testing static helper methods.
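A rough sketch of the suggested `TaskManagerSpecification` value object. The fields and the derived method are assumptions about what such a class might contain, not a definitive design:

```java
// Hypothetical value object: compute the worker specification once, outside
// the ResourceManager, and pass it in -- testable without any RpcEndpoint.
public final class TaskManagerSpecification {
    private final int taskManagerMemoryMb;
    private final int numberOfTaskSlots;

    public TaskManagerSpecification(int taskManagerMemoryMb, int numberOfTaskSlots) {
        this.taskManagerMemoryMb = taskManagerMemoryMb;
        this.numberOfTaskSlots = numberOfTaskSlots;
    }

    public int getTaskManagerMemoryMb() {
        return taskManagerMemoryMb;
    }

    public int getNumberOfTaskSlots() {
        return numberOfTaskSlots;
    }

    // A derived quantity lives on the spec itself, so the computation can be
    // unit-tested without @VisibleForTesting accessors on the ResourceManager.
    public int memoryPerSlotMb() {
        return taskManagerMemoryMb / numberOfTaskSlots;
    }

    public static void main(String[] args) {
        TaskManagerSpecification spec = new TaskManagerSpecification(1024, 4);
        System.out.println(spec.memoryPerSlotMb()); // prints 256
    }
}
```

The `ResourceManager` constructor would then take a ready-made spec instead of computing it from the raw `Configuration`, removing the need for the `@VisibleForTesting` getters shown above.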



