[GitHub] [flink] wuchong commented on issue #9160: [FLINK-13302][table-planner] DateTimeUtils.unixDateCeil returns the same value as unixDateFloor does

2019-07-18 Thread GitBox
wuchong commented on issue #9160: [FLINK-13302][table-planner] 
DateTimeUtils.unixDateCeil returns the same value as unixDateFloor does
URL: https://github.com/apache/flink/pull/9160#issuecomment-513102133
 
 
   Travis passed: https://travis-ci.com/flink-ci/flink/builds/119598057
   
   Merging


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13314) Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDecimalTypeInfo in Blink planner

2019-07-18 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13314:
---
Priority: Blocker  (was: Critical)

> Correct resultType of some PlannerExpression when operands contains 
> DecimalTypeInfo or BigDecimalTypeInfo in Blink planner
> --
>
> Key: FLINK-13314
> URL: https://issues.apache.org/jira/browse/FLINK-13314
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Correct the resultType of the following PlannerExpressions when their operands 
> contain DecimalTypeInfo or BigDecimalTypeInfo in the Blink planner:
> Minus/Plus/Div/Mul/Ceil/Floor/Round
>  
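As a worked illustration of the kind of result-type derivation involved, here is a small sketch that computes the precision/scale of a decimal addition using the common SQL Server/Hive rule; that rule and the class name are assumptions for the example, not necessarily the exact rule the Blink planner implements:

{code:java}
/**
 * Illustrative sketch only: derives the result precision/scale of DECIMAL
 * addition with the widely used SQL Server/Hive rule (an assumption here,
 * not necessarily the Blink planner's exact rule).
 */
public final class DecimalPlusTypeSketch {

    static int[] plusType(int p1, int s1, int p2, int s2) {
        int scale = Math.max(s1, s2);
        int precision = Math.max(p1 - s1, p2 - s2) + scale + 1;
        precision = Math.min(precision, 38); // cap at the maximum supported precision
        return new int[] {precision, scale};
    }

    public static void main(String[] args) {
        int[] t = plusType(10, 2, 5, 4);
        // DECIMAL(10,2) + DECIMAL(5,4) -> DECIMAL(13,4) under this rule
        System.out.println("DECIMAL(" + t[0] + "," + t[1] + ")");
    }
}
{code}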



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (FLINK-13269) copy RelDecorrelator & FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & CALCITE-3170

2019-07-18 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-13269.
-
Resolution: Fixed

Fixed in 1.10.0: 32ad3fa960681ffc8c21179b6592f6e0a6875c11
Fixed in 1.9.0: 5bff82205ee29e9ae82cd15588a5dd471c48

> copy RelDecorrelator & FlinkFilterJoinRule to flink planner to fix 
> CALCITE-3169 & CALCITE-3170
> --
>
> Key: FLINK-13269
> URL: https://issues.apache.org/jira/browse/FLINK-13269
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [CALCITE-3169|https://issues.apache.org/jira/browse/CALCITE-3169] & 
> [CALCITE-3170|https://issues.apache.org/jira/browse/CALCITE-3170] are not 
> fixed in Calcite-1.20. 
> {{RelDecorrelator}} & {{FlinkFilterJoinRule}} are copied from Calcite to the blink 
> planner to resolve those two bugs. To make both planners available in one jar, 
> {{RelDecorrelator}} & {{FlinkFilterJoinRule}} should also be copied to flink 
> planner.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13271) Add documentation for all the new features of blink planner

2019-07-18 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13271:
---
Priority: Blocker  (was: Critical)

> Add documentation for all the new features of blink planner
> ---
>
> Key: FLINK-13271
> URL: https://issues.apache.org/jira/browse/FLINK-13271
> Project: Flink
>  Issue Type: Task
>  Components: Documentation, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Blocker
> Fix For: 1.9.0
>
>
> This is an umbrella issue to track documentations for blink planner. All new 
> features introduced by blink planner, or behavior different with flink 
> planner should be documented.
> Structure and Tasks are proposed in the google doc: 
> https://docs.google.com/document/d/1xcI77x-15CbSPOdluRaFzx7jf2_V2SBOrlTyDPhIUHE/edit#
> Subtasks will be added later.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on issue #9173: [FLINK-13037][docs-zh] Translate "Concepts -> Glossary" page into Chinese

2019-07-18 Thread GitBox
wuchong commented on issue #9173: [FLINK-13037][docs-zh] Translate "Concepts -> 
Glossary" page into Chinese
URL: https://github.com/apache/flink/pull/9173#issuecomment-513100389
 
 
   I agree with @TisonKun, it would be better to translate the existing file 
first and then add the additional glossary entries in another JIRA.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9175: [FLINK-12038] [test] fix YARNITCase random fail

2019-07-18 Thread GitBox
flinkbot commented on issue #9175: [FLINK-12038] [test] fix YARNITCase random 
fail
URL: https://github.com/apache/flink/pull/9175#issuecomment-513088306
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-12038) YARNITCase stalls on travis

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12038:
---
Labels: pull-request-available test-stability  (was: test-stability)

> YARNITCase stalls on travis
> ---
>
> Key: FLINK-12038
> URL: https://issues.apache.org/jira/browse/FLINK-12038
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: shuai.xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.9.0
>
>
> https://travis-ci.org/apache/flink/jobs/511932978



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] shuai-xu opened a new pull request #9175: [FLINK-12038] [test] fix YARNITCase random fail

2019-07-18 Thread GitBox
shuai-xu opened a new pull request #9175: [FLINK-12038] [test] fix YARNITCase 
random fail
URL: https://github.com/apache/flink/pull/9175
 
 
   
   ## What is the purpose of the change
   
   This PR fixes that YARNITCase may fail randomly due to 
[YARN-2853](https://issues.apache.org/jira/browse/YARN-2853). 
   In fact, the killApplication call when the case finishes is not needed, because 
tearDown will stop the whole YARN mini cluster anyway.
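   A minimal sketch of this idea, assuming a plain JUnit test built on Hadoop's MiniYARNCluster; the class and test names are illustrative, not the actual YARNITCase code:

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class YarnJobSmokeTest {

    private static MiniYARNCluster yarnCluster;

    @BeforeClass
    public static void setup() {
        // One YARN mini cluster shared by all tests in this class.
        yarnCluster = new MiniYARNCluster("yarn-job-smoke-test", 1, 1, 1);
        yarnCluster.init(new YarnConfiguration());
        yarnCluster.start();
    }

    @Test
    public void testRunJob() throws Exception {
        // Deploy the job and wait for it to reach a terminal state.
        // No explicit killApplication here: once the application has finished,
        // unregistering the AM is enough, and the teardown below cleans up the rest.
    }

    @AfterClass
    public static void teardown() {
        // Stopping the mini cluster also terminates any remaining applications,
        // which is why a per-test killApplication call is redundant.
        if (yarnCluster != null) {
            yarnCluster.stop();
        }
    }
}
```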
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13269) copy RelDecorrelator & FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & CALCITE-3170

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13269:
---
Labels: pull-request-available  (was: )

> copy RelDecorrelator & FlinkFilterJoinRule to flink planner to fix 
> CALCITE-3169 & CALCITE-3170
> --
>
> Key: FLINK-13269
> URL: https://issues.apache.org/jira/browse/FLINK-13269
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>
> [CALCITE-3169|https://issues.apache.org/jira/browse/CALCITE-3169] & 
> [CALCITE-3170|https://issues.apache.org/jira/browse/CALCITE-3170] are not 
> fixed in Calcite-1.20. 
> {{RelDecorrelator}} & {{FlinkFilterJoinRule}} are copied from Calcite to the blink 
> planner to resolve those two bugs. To make both planners available in one jar, 
> {{RelDecorrelator}} & {{FlinkFilterJoinRule}} should also be copied to flink 
> planner.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] asfgit closed pull request #9122: [FLINK-13269] [table] copy RelDecorrelator & FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & CALCITE-3170

2019-07-18 Thread GitBox
asfgit closed pull request #9122: [FLINK-13269] [table] copy RelDecorrelator & 
FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & CALCITE-3170
URL: https://github.com/apache/flink/pull/9122
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe commented on issue #9083: [FLINK-13116] [table-planner-blink] Supports catalog statistics in blink planner

2019-07-18 Thread GitBox
godfreyhe commented on issue #9083: [FLINK-13116] [table-planner-blink] 
Supports catalog statistics in blink planner
URL: https://github.com/apache/flink/pull/9083#issuecomment-513086622
 
 
   Hi @zentol, this PR is in the 1.9 plan. I hope it can be merged into 
1.9, which would make the Hive functionality more complete. I will rebase onto master 
and resolve the conflicts.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-18 Thread GitBox
wuchong commented on issue #9113: [FLINK-13222] [runtime] Add documentation for 
failover strategy option
URL: https://github.com/apache/flink/pull/9113#issuecomment-513086127
 
 
   Thanks for updating the Chinese documentation as well. It looks good to me.  
I only left a minor comment about the link.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-18 Thread GitBox
wuchong commented on a change in pull request #9113: [FLINK-13222] [runtime] 
Add documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113#discussion_r305198169
 
 

 ##
 File path: docs/dev/task_failure_recovery.zh.md
 ##
 @@ -260,4 +265,49 @@ env.setRestartStrategy(RestartStrategies.noRestart())
 This is helpful for streaming programs which enable checkpointing.
 If no other restart strategy is defined, the fixed-delay restart strategy is chosen by default.
 
+## Failover Strategies
+
+Flink supports different failover strategies, which need to be configured via the *jobmanager.execution.failover-strategy*
+option in Flink's configuration file `flink-conf.yaml`.
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left">Failover Strategy</th>
+      <th class="text-left">Value for jobmanager.execution.failover-strategy</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Full graph restart</td>
+      <td>full</td>
+    </tr>
+    <tr>
+      <td>Region-based partial restart</td>
+      <td>region</td>
+    </tr>
+  </tbody>
+</table>
+
+### Full Graph Restart Failover Strategy
+
+Under the full graph restart failover strategy, all Tasks in the job are restarted to recover from a Task failure.
+
+### Region-Based Failover Strategy
+
+This strategy decides which Tasks need to be restarted at the granularity of a Region.
+
+Here a Region means the set of Tasks that exchange data in a Pipelined fashion.
+- All data exchanges in DataStream and streaming Table jobs are Pipelined.
+- All data exchanges in batch Table jobs are Batch.
+- The data exchange mode in a DataSet job is determined by the [ExecutionConfig]({{ site.baseurl 
}}/dev/execution_configuration.html) 
 
 Review comment:
   Use the Chinese version of the link, which starts with "zh": `({{ site.baseurl 
}}/zh/dev/execution_configuration.html)`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13331) Add TestMiniClusters to maintain cache share cluster between Tests

2019-07-18 Thread Jingsong Lee (JIRA)
Jingsong Lee created FLINK-13331:


 Summary: Add TestMiniClusters to maintain cache share cluster 
between Tests
 Key: FLINK-13331
 URL: https://issues.apache.org/jira/browse/FLINK-13331
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jingsong Lee
 Fix For: 1.9.0, 1.10.0






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on issue #9122: [FLINK-13269] [table] copy RelDecorrelator & FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & CALCITE-3170

2019-07-18 Thread GitBox
wuchong commented on issue #9122: [FLINK-13269] [table] copy RelDecorrelator & 
FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & CALCITE-3170
URL: https://github.com/apache/flink/pull/9122#issuecomment-513084773
 
 
   Thanks @godfreyhe .


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13330) Remove unnecessary to reduce testing time in blink

2019-07-18 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13330:
---

Assignee: Jingsong Lee

> Remove unnecessary to reduce testing time in blink
> --
>
> Key: FLINK-13330
> URL: https://issues.apache.org/jira/browse/FLINK-13330
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9174: [FLINK-13330][table-planner-blink] Remove unnecessary to reduce testing time in blink

2019-07-18 Thread GitBox
flinkbot commented on issue #9174: [FLINK-13330][table-planner-blink] Remove 
unnecessary to reduce testing time in blink
URL: https://github.com/apache/flink/pull/9174#issuecomment-513082991
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13330) Remove unnecessary to reduce testing time in blink

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13330:
---
Labels: pull-request-available  (was: )

> Remove unnecessary to reduce testing time in blink
> --
>
> Key: FLINK-13330
> URL: https://issues.apache.org/jira/browse/FLINK-13330
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] JingsongLi opened a new pull request #9174: [FLINK-13330][table-planner-blink] Remove unnecessary to reduce testing time in blink

2019-07-18 Thread GitBox
JingsongLi opened a new pull request #9174: [FLINK-13330][table-planner-blink] 
Remove unnecessary to reduce testing time in blink
URL: https://github.com/apache/flink/pull/9174
 
 
   
   ## What is the purpose of the change
   
   Some tests are unnecessary.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13330) Remove unnecessary to reduce testing time in blink

2019-07-18 Thread Jingsong Lee (JIRA)
Jingsong Lee created FLINK-13330:


 Summary: Remove unnecessary to reduce testing time in blink
 Key: FLINK-13330
 URL: https://issues.apache.org/jira/browse/FLINK-13330
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jingsong Lee
 Fix For: 1.9.0, 1.10.0






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13318) Blink planner tests failing on Scala 2.12

2019-07-18 Thread godfrey he (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888496#comment-16888496
 ] 

godfrey he commented on FLINK-13318:


I will fix it now.

> Blink planner tests failing on Scala 2.12
> -
>
> Key: FLINK-13318
> URL: https://issues.apache.org/jira/browse/FLINK-13318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0
>
>
> [https://travis-ci.org/apache/flink/builds/559909681]
> {code:java}
> 13:30:03.531 [INFO] Results:
> 13:30:03.531 [INFO] 
> 13:30:03.533 [ERROR] Failures: 
> 13:30:03.534 [ERROR]   CalcTest.testScalarFunctionAccess:64 planBefore 
> expected:<...t$giveMeCaseClass$$f[e1bff2b06d8e0e495536102224cfe83().my], 
> _c1=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83().clazz],
>  
> _c2=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83().my],
>  
> _c3=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83]().clazz])
> +- Logica...> but 
> was:<...t$giveMeCaseClass$$f[4a420732fc04b1351889eb0e88eb891().my], 
> _c1=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891().clazz],
>  
> _c2=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891().my],
>  
> _c3=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891]().clazz])
> +- Logica...>
> 13:30:03.534 [ERROR]   CalcTest.testSelectFromGroupedTableWithFunctionKey:154 
> planBefore 
> expected:<...alcTest$MyHashCode$$[d14b486109d9dd062ae7c60e0497797]5($2)])
>   +- Log...> but 
> was:<...alcTest$MyHashCode$$[3cd929923219fc59162b13a4941ead4]5($2)])
>   +- Log...>
> 13:30:03.534 [ERROR]   CalcTest.testSelectFunction:109 planBefore 
> expected:<...alcTest$MyHashCode$$[d14b486109d9dd062ae7c60e0497797]5($2)], 
> b=[$1])
> +- L...> but 
> was:<...alcTest$MyHashCode$$[3cd929923219fc59162b13a4941ead4]5($2)], b=[$1])
> +- L...>
> 13:30:03.534 [ERROR]   CorrelateTest.testCrossJoin:41 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.534 [ERROR]   CorrelateTest.testCrossJoin2:52 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2, 
> _UTF-16LE'$')]...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2, 
> _UTF-16LE'$')]...>
> 13:30:03.534 [ERROR]   CorrelateTest.testLeftOuterJoinWithLiteralTrue:74 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.534 [ERROR]   
> CorrelateTest.testLeftOuterJoinWithoutJoinPredicates:63 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   JoinTest.testFilterJoinRule:143 planBefore 
> expected:<...le$JoinTest$Merger$$[223b7380fec29c4077a893c60165d845($2, 
> org$apache$flink$table$plan$batch$table$JoinTest$Merger$$223b7380fec29c4077a893c60165d845]($2,
>  $5))])
>+- Lo...> but 
> was:<...le$JoinTest$Merger$$[d18a3011491fab359eccb50f2d0d9a18($2, 
> org$apache$flink$table$plan$batch$table$JoinTest$Merger$$d18a3011491fab359eccb50f2d0d9a18]($2,
>  $5))])
>+- Lo...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins1:39 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins2:45 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins3:51 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2, 
> _UTF-16LE'$')]...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2, 
> _UTF-16LE'$')]...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins4:57 
> planBefore 
> expected:<...ble$util$TableFunc2$[b3b1f988779be024ed9386bce5019112]($2)], 
> rowType=[Reco...> but 
> 

[jira] [Commented] (FLINK-13329) Set env config for sql jobs

2019-07-18 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888493#comment-16888493
 ] 

Jark Wu commented on FLINK-13329:
-

Hi [~julien1987], do you have any plan for this?
According to FLIP-32, we will have the env configs and the checkpoint 
config in {{TableConfig}}, but this might only happen in 1.10. 

> Set env config for sql jobs
> ---
>
> Key: FLINK-13329
> URL: https://issues.apache.org/jira/browse/FLINK-13329
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / API
>Affects Versions: 1.9.0, 1.10.0
>Reporter: XuPingyong
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> Now we execute streaming jobs through TableEnvironment, but the
> StreamExecutionEnvironment cannot be touched by users, so we cannot set the 
> checkpoint and other env configs when we execute SQL jobs.
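For reference, a minimal sketch of how env configs such as checkpointing are set today when the StreamExecutionEnvironment is accessible via the bridging API; whether a pure SQL job submission path exposes this environment is exactly the gap described above (class name and planner choice are illustrative assumptions):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class SqlJobWithCheckpointing {

    public static void main(String[] args) {
        // Env configs (checkpointing, restart strategy, ...) currently live on
        // the StreamExecutionEnvironment, not on TableConfig.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // checkpoint every 60 seconds

        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // ... register tables and submit SQL through tEnv
    }
}
{code}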



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zhuzhurk commented on a change in pull request #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-18 Thread GitBox
zhuzhurk commented on a change in pull request #9113: [FLINK-13222] [runtime] 
Add documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113#discussion_r305193941
 
 

 ##
 File path: docs/dev/task_failure_recovery.md
 ##
 @@ -264,4 +268,44 @@ The cluster defined restart strategy is used.
 This is helpful for streaming programs which enable checkpointing.
 By default, a fixed delay restart strategy is chosen if there is no other 
restart strategy defined.
 
+## Failover Strategies
+
+Flink supports different failover strategies which can be configured via the 
configuration parameter
+*jobmanager.execution.failover-strategy* in Flink's configuration file 
`flink-conf.yaml`.
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left">Failover Strategy</th>
+      <th class="text-left">Value for jobmanager.execution.failover-strategy</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Restart all</td>
+      <td>full</td>
+    </tr>
+    <tr>
+      <td>Restart pipelined region</td>
+      <td>region</td>
+    </tr>
+  </tbody>
+</table>
+
+### Restart All Strategy
+
+With this strategy, all tasks in the job will be restarted to recover from a 
task failure.
+
+### Restart Pipelined Region Strategy
+
+With this strategy, tasks to restart depend on the regions to restart.
+A region is defined by this strategy as tasks that communicate via pipelined 
data exchange.
 
 Review comment:
   The ExecutionMode is for DataSet jobs only.
   Considering the streaming jobs and the batch table jobs introduced in 1.9, I 
will elaborate more on the data exchanges for different job types.
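   A minimal sketch of setting the option discussed above, done programmatically on a Configuration instead of in `flink-conf.yaml`; the value strings match the table in the diff:

```java
import org.apache.flink.configuration.Configuration;

public class FailoverStrategyConfigExample {

    public static void main(String[] args) {
        // Equivalent to the flink-conf.yaml entry:
        //   jobmanager.execution.failover-strategy: region
        Configuration conf = new Configuration();
        conf.setString("jobmanager.execution.failover-strategy", "region"); // or "full"

        System.out.println(conf.getString("jobmanager.execution.failover-strategy", "full"));
    }
}
```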


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-12249) Type equivalence check fails for Window Aggregates

2019-07-18 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888491#comment-16888491
 ] 

Jark Wu commented on FLINK-12249:
-

What would the effort be if we made `WindowAggregate` not extend from 
`Aggregate`? 
I mean, we don't have many optimization rules for window aggregates, so it might 
not be a big effort and could aim for 1.9 if possible. 

> Type equivalence check fails for Window Aggregates
> --
>
> Key: FLINK-12249
> URL: https://issues.apache.org/jira/browse/FLINK-12249
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner, Tests
>Affects Versions: 1.9.0
>Reporter: Dawid Wysakowicz
>Assignee: Hequn Cheng
>Priority: Critical
> Fix For: 1.9.0
>
>
> Creating Aggregate node fails in rules: {{LogicalWindowAggregateRule}} and 
> {{ExtendedAggregateExtractProjectRule}} if the only grouping expression is a 
> window and
> we compute aggregation on NON NULLABLE field.
> The root cause for that is how return type inference strategies in Calcite 
> work and how we handle window aggregates. Take 
> {{org.apache.calcite.sql.type.ReturnTypes#AGG_SUM}} as an example: it adjusts 
> the type nullability based on {{groupCount}}.
> However, we pass it false information because we strip the window aggregation from 
> the groupSet (in {{LogicalWindowAggregateRule}}).
> One can reproduce this problem also with a unit test like this:
> {code}
> @Test
>   def testTumbleFunction2() = {
>  
> val innerQuery =
>   """
> |SELECT
> | CASE a WHEN 1 THEN 1 ELSE 99 END AS correct,
> | rowtime
> |FROM MyTable
>   """.stripMargin
> val sql =
>   "SELECT " +
> "  SUM(correct) as cnt, " +
> "  TUMBLE_START(rowtime, INTERVAL '15' MINUTE) as wStart " +
> s"FROM ($innerQuery) " +
> "GROUP BY TUMBLE(rowtime, INTERVAL '15' MINUTE)"
> val expected = ""
> streamUtil.verifySql(sql, expected)
>   }
> {code}
> This causes e2e tests to fail: 
> https://travis-ci.org/apache/flink/builds/521183361?utm_source=slack&utm_medium=notification



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13238) Reduce blink planner's testing time

2019-07-18 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888489#comment-16888489
 ] 

Jark Wu commented on FLINK-13238:
-

Hi [~lzljs3620320], I assigned it to you. I think we can create sub-tasks under 
this issue, because we may have different ways to reduce the time.

> Reduce blink planner's testing time
> ---
>
> Key: FLINK-13238
> URL: https://issues.apache.org/jira/browse/FLINK-13238
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Kurt Young
>Assignee: Jingsong Lee
>Priority: Major
>
> The blink planner has an independent CI profile, but it still exceeds the 50 min 
> limit from time to time, which leads to Travis failures. We need to optimize the 
> tests to reduce the testing time.
>  
> We need to do some work to reduce the time:
> 1. Optimize big tests:
> 192.503 s LongHashTableTest
> 83.969 s BinaryExternalSorterTest
> 261.497 s BinaryHashTableTest
> 74.223 s - in org.apache.flink.table.runtime.stream.sql.RankITCase
> 135.375 s - in org.apache.flink.table.runtime.stream.sql.JoinITCase
> 99.007 s - in org.apache.flink.table.runtime.stream.sql.SplitAggregateITCase
> 61.216 s - in org.apache.flink.table.runtime.stream.sql.OverWindowITCase
> 77.409 s - in 
> org.apache.flink.table.runtime.stream.sql.SemiAntiJoinStreamITCase
> 83.83 s - in org.apache.flink.table.runtime.stream.sql.AggregateRemoveITCase
> 314.376 s - in org.apache.flink.table.runtime.stream.sql.AggregateITCase
> 121.19 s - in org.apache.flink.table.runtime.stream.table.JoinITCase
> 74.417 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.SortDistinctAggregateITCase
> 109.185 s - in org.apache.flink.table.runtime.batch.sql.agg.HashAggITCase
> 178.181 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.AggregateReduceGroupingITCase
> 112.006 s - in org.apache.flink.table.runtime.batch.sql.agg.SortAggITCase
> 61.863 s - in org.apache.flink.table.runtime.batch.sql.agg.GroupingSetsITCase
> 62.941 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.HashDistinctAggregateITCase
> 64.58 s - in org.apache.flink.table.runtime.batch.sql.CalcITCase
> 81.272 s - in org.apache.flink.table.runtime.batch.sql.OverWindowITCase
> 75.298 s - in org.apache.flink.table.runtime.batch.sql.join.JoinITCase
> 82.923 s - in org.apache.flink.table.runtime.batch.sql.join.OuterJoinITCase
> 145.538 s - in org.apache.flink.table.runtime.batch.sql.join.SemiJoinITCase
> 214.933 s - in org.apache.flink.table.runtime.batch.sql.join.InnerJoinITCase
>  
> 2. Reuse the MiniCluster across ITCases.
> Every MiniCluster initialization takes 15 seconds, and the MiniCluster is only 
> reused at class level. We have many ITCase classes.
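A minimal sketch of the reuse idea described above, built directly on the runtime MiniCluster; the holder class is illustrative, not the actual change:

{code:java}
import org.apache.flink.runtime.minicluster.MiniCluster;
import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;

/** Illustrative holder that keeps one MiniCluster alive across many ITCase classes. */
public final class SharedMiniCluster {

    private static MiniCluster cluster;

    /** Starts the shared cluster on first use and returns the same instance afterwards. */
    public static synchronized MiniCluster get() throws Exception {
        if (cluster == null) {
            cluster = new MiniCluster(
                    new MiniClusterConfiguration.Builder()
                            .setNumTaskManagers(1)
                            .setNumSlotsPerTaskManager(4)
                            .build());
            cluster.start(); // pay the ~15 second start-up cost only once
        }
        return cluster;
    }

    private SharedMiniCluster() {
    }
}
{code}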



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13238) Reduce blink planner's testing time

2019-07-18 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13238:
---

Assignee: Jingsong Lee

> Reduce blink planner's testing time
> ---
>
> Key: FLINK-13238
> URL: https://issues.apache.org/jira/browse/FLINK-13238
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Kurt Young
>Assignee: Jingsong Lee
>Priority: Major
>
> The blink planner has an independent CI profile, but it still exceeds the 50 min 
> limit from time to time, which leads to Travis failures. We need to optimize the 
> tests to reduce the testing time.
>  
> We need to do some work to reduce the time:
> 1. Optimize big tests:
> 192.503 s LongHashTableTest
> 83.969 s BinaryExternalSorterTest
> 261.497 s BinaryHashTableTest
> 74.223 s - in org.apache.flink.table.runtime.stream.sql.RankITCase
> 135.375 s - in org.apache.flink.table.runtime.stream.sql.JoinITCase
> 99.007 s - in org.apache.flink.table.runtime.stream.sql.SplitAggregateITCase
> 61.216 s - in org.apache.flink.table.runtime.stream.sql.OverWindowITCase
> 77.409 s - in 
> org.apache.flink.table.runtime.stream.sql.SemiAntiJoinStreamITCase
> 83.83 s - in org.apache.flink.table.runtime.stream.sql.AggregateRemoveITCase
> 314.376 s - in org.apache.flink.table.runtime.stream.sql.AggregateITCase
> 121.19 s - in org.apache.flink.table.runtime.stream.table.JoinITCase
> 74.417 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.SortDistinctAggregateITCase
> 109.185 s - in org.apache.flink.table.runtime.batch.sql.agg.HashAggITCase
> 178.181 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.AggregateReduceGroupingITCase
> 112.006 s - in org.apache.flink.table.runtime.batch.sql.agg.SortAggITCase
> 61.863 s - in org.apache.flink.table.runtime.batch.sql.agg.GroupingSetsITCase
> 62.941 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.HashDistinctAggregateITCase
> 64.58 s - in org.apache.flink.table.runtime.batch.sql.CalcITCase
> 81.272 s - in org.apache.flink.table.runtime.batch.sql.OverWindowITCase
> 75.298 s - in org.apache.flink.table.runtime.batch.sql.join.JoinITCase
> 82.923 s - in org.apache.flink.table.runtime.batch.sql.join.OuterJoinITCase
> 145.538 s - in org.apache.flink.table.runtime.batch.sql.join.SemiJoinITCase
> 214.933 s - in org.apache.flink.table.runtime.batch.sql.join.InnerJoinITCase
>  
> 2. Reuse the MiniCluster across ITCases.
> Every MiniCluster initialization takes 15 seconds, and the MiniCluster is only 
> reused at class level. We have many ITCase classes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] bowenli86 commented on issue #9172: [FLINK-13313][table] create CatalogTableBuilder to support building CatalogTable from descriptors

2019-07-18 Thread GitBox
bowenli86 commented on issue #9172: [FLINK-13313][table] create 
CatalogTableBuilder to support building CatalogTable from descriptors
URL: https://github.com/apache/flink/pull/9172#issuecomment-513079203
 
 
   cc @twalthr @xuefuz @lirui-apache @zjuwangg 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #9168: [FLINK-13286][table-api] Port connector related validators to api-java-bridge

2019-07-18 Thread GitBox
JingsongLi commented on a change in pull request #9168:  
[FLINK-13286][table-api] Port connector related validators to api-java-bridge
URL: https://github.com/apache/flink/pull/9168#discussion_r305192167
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/descriptors/SchemaValidator.java
 ##
 @@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.CompositeType;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.factories.TableFormatFactory;
+import org.apache.flink.table.sources.RowtimeAttributeDescriptor;
+import org.apache.flink.table.sources.tsextractors.TimestampExtractor;
+import org.apache.flink.table.sources.wmstrategies.WatermarkStrategy;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+import static java.lang.String.format;
+import static org.apache.flink.table.descriptors.Rowtime.ROWTIME;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_CLASS;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_FROM;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_SERIALIZED;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_TYPE;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_TYPE_VALUE_FROM_FIELD;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_WATERMARKS_CLASS;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_WATERMARKS_DELAY;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_WATERMARKS_SERIALIZED;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_WATERMARKS_TYPE;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA_FROM;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA_NAME;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA_PROCTIME;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA_TYPE;
+
+/**
+ * Validator for {@link Schema}.
+ */
+@PublicEvolving
+public class SchemaValidator implements DescriptorValidator {
+
+   private final boolean isStreamEnvironment;
+   private final boolean supportsSourceTimestamps;
+   private final boolean supportsSourceWatermarks;
+
+   public SchemaValidator(boolean isStreamEnvironment, boolean 
supportsSourceTimestamps,
+   boolean supportsSourceWatermarks) {
+   this.isStreamEnvironment = isStreamEnvironment;
+   this.supportsSourceTimestamps = supportsSourceTimestamps;
+   this.supportsSourceWatermarks = supportsSourceWatermarks;
+   }
+
+   @Override
+   public void validate(DescriptorProperties properties) {
+   Map<String, String> names = properties.getIndexedProperty(SCHEMA, SCHEMA_NAME);
+   Map<String, String> types = properties.getIndexedProperty(SCHEMA, SCHEMA_TYPE);
+
+   if (names.isEmpty() && types.isEmpty()) {
+   throw new ValidationException(
+   format("Could not find the required 
schema in property '%s'.", SCHEMA));
+   }
+
+   boolean proctimeFound = false;
+
+   for (int i = 0; i < Math.max(names.size(), types.size()); i++) {
+   properties.validateString(SCHEMA + "." + i + "." + 
SCHEMA_NAME, false, 1);
+   properties.validateType(SCHEMA + "." + i + "." + 
SCHEMA_TYPE, false, false);
+   properties.validateString(SCHEMA + "." + i + "." + 
SCHEMA_FROM, true, 1);
+   // either proctime or rowtime
+   String proctime = 

[jira] [Comment Edited] (FLINK-13318) Blink planner tests failing on Scala 2.12

2019-07-18 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888486#comment-16888486
 ] 

Jark Wu edited comment on FLINK-13318 at 7/19/19 3:41 AM:
--

cc [~godfreyhe] [~jinyu.zj]


was (Author: jark):
cc [~godfreyhe]

> Blink planner tests failing on Scala 2.12
> -
>
> Key: FLINK-13318
> URL: https://issues.apache.org/jira/browse/FLINK-13318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0
>
>
> [https://travis-ci.org/apache/flink/builds/559909681]
> {code:java}
> 13:30:03.531 [INFO] Results:
> 13:30:03.531 [INFO] 
> 13:30:03.533 [ERROR] Failures: 
> 13:30:03.534 [ERROR]   CalcTest.testScalarFunctionAccess:64 planBefore 
> expected:<...t$giveMeCaseClass$$f[e1bff2b06d8e0e495536102224cfe83().my], 
> _c1=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83().clazz],
>  
> _c2=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83().my],
>  
> _c3=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83]().clazz])
> +- Logica...> but 
> was:<...t$giveMeCaseClass$$f[4a420732fc04b1351889eb0e88eb891().my], 
> _c1=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891().clazz],
>  
> _c2=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891().my],
>  
> _c3=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891]().clazz])
> +- Logica...>
> 13:30:03.534 [ERROR]   CalcTest.testSelectFromGroupedTableWithFunctionKey:154 
> planBefore 
> expected:<...alcTest$MyHashCode$$[d14b486109d9dd062ae7c60e0497797]5($2)])
>   +- Log...> but 
> was:<...alcTest$MyHashCode$$[3cd929923219fc59162b13a4941ead4]5($2)])
>   +- Log...>
> 13:30:03.534 [ERROR]   CalcTest.testSelectFunction:109 planBefore 
> expected:<...alcTest$MyHashCode$$[d14b486109d9dd062ae7c60e0497797]5($2)], 
> b=[$1])
> +- L...> but 
> was:<...alcTest$MyHashCode$$[3cd929923219fc59162b13a4941ead4]5($2)], b=[$1])
> +- L...>
> 13:30:03.534 [ERROR]   CorrelateTest.testCrossJoin:41 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.534 [ERROR]   CorrelateTest.testCrossJoin2:52 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2, 
> _UTF-16LE'$')]...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2, 
> _UTF-16LE'$')]...>
> 13:30:03.534 [ERROR]   CorrelateTest.testLeftOuterJoinWithLiteralTrue:74 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.534 [ERROR]   
> CorrelateTest.testLeftOuterJoinWithoutJoinPredicates:63 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   JoinTest.testFilterJoinRule:143 planBefore 
> expected:<...le$JoinTest$Merger$$[223b7380fec29c4077a893c60165d845($2, 
> org$apache$flink$table$plan$batch$table$JoinTest$Merger$$223b7380fec29c4077a893c60165d845]($2,
>  $5))])
>+- Lo...> but 
> was:<...le$JoinTest$Merger$$[d18a3011491fab359eccb50f2d0d9a18($2, 
> org$apache$flink$table$plan$batch$table$JoinTest$Merger$$d18a3011491fab359eccb50f2d0d9a18]($2,
>  $5))])
>+- Lo...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins1:39 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins2:45 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins3:51 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2, 
> _UTF-16LE'$')]...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2, 
> _UTF-16LE'$')]...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins4:57 
> planBefore 
> 

[jira] [Commented] (FLINK-13318) Blink planner tests failing on Scala 2.12

2019-07-18 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888486#comment-16888486
 ] 

Jark Wu commented on FLINK-13318:
-

cc [~godfreyhe]

> Blink planner tests failing on Scala 2.12
> -
>
> Key: FLINK-13318
> URL: https://issues.apache.org/jira/browse/FLINK-13318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0
>
>
> [https://travis-ci.org/apache/flink/builds/559909681]
> {code:java}
> 13:30:03.531 [INFO] Results:
> 13:30:03.531 [INFO] 
> 13:30:03.533 [ERROR] Failures: 
> 13:30:03.534 [ERROR]   CalcTest.testScalarFunctionAccess:64 planBefore 
> expected:<...t$giveMeCaseClass$$f[e1bff2b06d8e0e495536102224cfe83().my], 
> _c1=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83().clazz],
>  
> _c2=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83().my],
>  
> _c3=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$fe1bff2b06d8e0e495536102224cfe83]().clazz])
> +- Logica...> but 
> was:<...t$giveMeCaseClass$$f[4a420732fc04b1351889eb0e88eb891().my], 
> _c1=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891().clazz],
>  
> _c2=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891().my],
>  
> _c3=[org$apache$flink$table$plan$batch$table$CalcTest$giveMeCaseClass$$f4a420732fc04b1351889eb0e88eb891]().clazz])
> +- Logica...>
> 13:30:03.534 [ERROR]   CalcTest.testSelectFromGroupedTableWithFunctionKey:154 
> planBefore 
> expected:<...alcTest$MyHashCode$$[d14b486109d9dd062ae7c60e0497797]5($2)])
>   +- Log...> but 
> was:<...alcTest$MyHashCode$$[3cd929923219fc59162b13a4941ead4]5($2)])
>   +- Log...>
> 13:30:03.534 [ERROR]   CalcTest.testSelectFunction:109 planBefore 
> expected:<...alcTest$MyHashCode$$[d14b486109d9dd062ae7c60e0497797]5($2)], 
> b=[$1])
> +- L...> but 
> was:<...alcTest$MyHashCode$$[3cd929923219fc59162b13a4941ead4]5($2)], b=[$1])
> +- L...>
> 13:30:03.534 [ERROR]   CorrelateTest.testCrossJoin:41 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.534 [ERROR]   CorrelateTest.testCrossJoin2:52 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2, 
> _UTF-16LE'$')]...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2, 
> _UTF-16LE'$')]...>
> 13:30:03.534 [ERROR]   CorrelateTest.testLeftOuterJoinWithLiteralTrue:74 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.534 [ERROR]   
> CorrelateTest.testLeftOuterJoinWithoutJoinPredicates:63 planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   JoinTest.testFilterJoinRule:143 planBefore 
> expected:<...le$JoinTest$Merger$$[223b7380fec29c4077a893c60165d845($2, 
> org$apache$flink$table$plan$batch$table$JoinTest$Merger$$223b7380fec29c4077a893c60165d845]($2,
>  $5))])
>+- Lo...> but 
> was:<...le$JoinTest$Merger$$[d18a3011491fab359eccb50f2d0d9a18($2, 
> org$apache$flink$table$plan$batch$table$JoinTest$Merger$$d18a3011491fab359eccb50f2d0d9a18]($2,
>  $5))])
>+- Lo...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins1:39 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins2:45 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2)], 
> rowType=[Rec...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2)], 
> rowType=[Rec...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins3:51 
> planBefore 
> expected:<...ble$util$TableFunc1$[ad38060966060e704b09fa4c9428769]6($2, 
> _UTF-16LE'$')]...> but 
> was:<...ble$util$TableFunc1$[e1a0c63ecf595c7329d87aae4f6f425]6($2, 
> _UTF-16LE'$')]...>
> 13:30:03.535 [ERROR]   CorrelateStringExpressionTest.testCorrelateJoins4:57 
> planBefore 
> expected:<...ble$util$TableFunc2$[b3b1f988779be024ed9386bce5019112]($2)], 
> rowType=[Reco...> but 
> 

[GitHub] [flink] godfreyhe commented on a change in pull request #9168: [FLINK-13286][table-api] Port connector related validators to api-java-bridge

2019-07-18 Thread GitBox
godfreyhe commented on a change in pull request #9168:  
[FLINK-13286][table-api] Port connector related validators to api-java-bridge
URL: https://github.com/apache/flink/pull/9168#discussion_r305189384
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/descriptors/SchemaValidator.java
 ##
 @@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.CompositeType;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.factories.TableFormatFactory;
+import org.apache.flink.table.sources.RowtimeAttributeDescriptor;
+import org.apache.flink.table.sources.tsextractors.TimestampExtractor;
+import org.apache.flink.table.sources.wmstrategies.WatermarkStrategy;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+import static java.lang.String.format;
+import static org.apache.flink.table.descriptors.Rowtime.ROWTIME;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_CLASS;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_FROM;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_SERIALIZED;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_TYPE;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_TIMESTAMPS_TYPE_VALUE_FROM_FIELD;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_WATERMARKS_CLASS;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_WATERMARKS_DELAY;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_WATERMARKS_SERIALIZED;
+import static 
org.apache.flink.table.descriptors.Rowtime.ROWTIME_WATERMARKS_TYPE;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA_FROM;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA_NAME;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA_PROCTIME;
+import static org.apache.flink.table.descriptors.Schema.SCHEMA_TYPE;
+
+/**
+ * Validator for {@link Schema}.
+ */
+@PublicEvolving
+public class SchemaValidator implements DescriptorValidator {
+
+   private final boolean isStreamEnvironment;
+   private final boolean supportsSourceTimestamps;
+   private final boolean supportsSourceWatermarks;
+
+   public SchemaValidator(boolean isStreamEnvironment, boolean 
supportsSourceTimestamps,
+   boolean supportsSourceWatermarks) {
+   this.isStreamEnvironment = isStreamEnvironment;
+   this.supportsSourceTimestamps = supportsSourceTimestamps;
+   this.supportsSourceWatermarks = supportsSourceWatermarks;
+   }
+
+   @Override
+   public void validate(DescriptorProperties properties) {
+   Map<String, String> names = properties.getIndexedProperty(SCHEMA, SCHEMA_NAME);
+   Map<String, String> types = properties.getIndexedProperty(SCHEMA, SCHEMA_TYPE);
+
+   if (names.isEmpty() && types.isEmpty()) {
+   throw new ValidationException(
+   format("Could not find the required 
schema in property '%s'.", SCHEMA));
+   }
+
+   boolean proctimeFound = false;
+
+   for (int i = 0; i < Math.max(names.size(), types.size()); i++) {
+   properties.validateString(SCHEMA + "." + i + "." + 
SCHEMA_NAME, false, 1);
+   properties.validateType(SCHEMA + "." + i + "." + 
SCHEMA_TYPE, false, false);
+   properties.validateString(SCHEMA + "." + i + "." + 
SCHEMA_FROM, true, 1);
+   // either proctime or rowtime
+   String proctime = 

[jira] [Commented] (FLINK-12038) YARNITCase stalls on travis

2019-07-18 Thread shuai.xu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888482#comment-16888482
 ] 

shuai.xu commented on FLINK-12038:
--

This failure can be easily reproduced on my local machine. I enabled the YARN 
logs and found the reason. You can find the unregisterAM entry in 
jobmanager.log: when the job is finished, it tries to unregister the AM with 
YARN. In fact, it is not necessary to call killApplication, as the whole YARN 
mini cluster will be closed in the tearDown of the test case.

Below is part of the job master log:

2019-07-16 18:20:34,376 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
(1/2) (e13567c7f2d7a389c74f4583a67e34e8) switched from SCHEDULED to DEPLOYING.
2019-07-16 18:20:34,376 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: 
Custom Source (1/2) (attempt #0) to container_1563272405568_0001_01_02 @ 
e011239174096.et15sqa (dataPort=42072)
2019-07-16 18:20:34,404 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
(2/2) (fc3d9d65a75eabaf00d7d9372d2b9884) switched from SCHEDULED to DEPLOYING.
2019-07-16 18:20:34,405 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: 
Custom Source (2/2) (attempt #0) to container_1563272405568_0001_01_03 @ 
e011239174096.et15sqa (dataPort=41793)
2019-07-16 18:20:34,405 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (1/2) 
(65db57ac7166e0a96a3c5318bb262fb0) switched from SCHEDULED to DEPLOYING.
2019-07-16 18:20:34,414 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Sink: 
Unnamed (1/2) (attempt #0) to container_1563272405568_0001_01_03 @ 
e011239174096.et15sqa (dataPort=41793)
2019-07-16 18:20:34,447 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (2/2) 
(22c3e0c0fd37dd00e75fcf855e2a6ca4) switched from SCHEDULED to DEPLOYING.
2019-07-16 18:20:34,447 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Sink: 
Unnamed (2/2) (attempt #0) to container_1563272405568_0001_01_02 @ 
e011239174096.et15sqa (dataPort=42072)
2019-07-16 18:20:34,897 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
(1/2) (e13567c7f2d7a389c74f4583a67e34e8) switched from DEPLOYING to RUNNING.
2019-07-16 18:20:34,949 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
(2/2) (fc3d9d65a75eabaf00d7d9372d2b9884) switched from DEPLOYING to RUNNING.
2019-07-16 18:20:35,056 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (1/2) 
(65db57ac7166e0a96a3c5318bb262fb0) switched from DEPLOYING to RUNNING.
2019-07-16 18:20:35,067 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (2/2) 
(22c3e0c0fd37dd00e75fcf855e2a6ca4) switched from DEPLOYING to RUNNING.
2019-07-16 18:20:35,450 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
(2/2) (fc3d9d65a75eabaf00d7d9372d2b9884) switched from RUNNING to FINISHED.
2019-07-16 18:20:35,480 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
(1/2) (e13567c7f2d7a389c74f4583a67e34e8) switched from RUNNING to FINISHED.
2019-07-16 18:20:35,494 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (2/2) 
(22c3e0c0fd37dd00e75fcf855e2a6ca4) switched from RUNNING to FINISHED.
2019-07-16 18:20:35,508 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (1/2) 
(65db57ac7166e0a96a3c5318bb262fb0) switched from RUNNING to FINISHED.
2019-07-16 18:20:35,513 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job Flink Streaming 
Job (2f9313ea4fd33bef68111ed380a2ae1b) switched from state RUNNING to FINISHED.
2019-07-16 18:20:35,513 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Stopping checkpoint 
coordinator for job 2f9313ea4fd33bef68111ed380a2ae1b.
2019-07-16 18:20:35,513 INFO 
org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore - 
Shutting down
2019-07-16 18:20:35,564 INFO org.apache.flink.runtime.dispatcher.MiniDispatcher 
- Job 2f9313ea4fd33bef68111ed380a2ae1b reached globally terminal state FINISHED.
2019-07-16 18:20:35,573 INFO org.apache.flink.runtime.jobmaster.JobMaster - 
Stopping the JobMaster for job Flink Streaming 
Job(2f9313ea4fd33bef68111ed380a2ae1b).
2019-07-16 18:20:35,664 INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl - Suspending SlotPool.
2019-07-16 18:20:35,666 INFO org.apache.flink.runtime.jobmaster.JobMaster - 
Close ResourceManager connection 165d22977dc31b3b410489789fdc1050: JobManager 
is shutting down..
2019-07-16 18:20:35,668 INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl - Stopping SlotPool.
2019-07-16 18:20:35,668 INFO org.apache.flink.yarn.YarnResourceManager - 
Disconnect job 

[GitHub] [flink] zjuwangg commented on issue #9130: [FLINK-13274]Refactor HiveTableSourceTest using HiveRunner

2019-07-18 Thread GitBox
zjuwangg commented on issue #9130: [FLINK-13274]Refactor HiveTableSourceTest 
using HiveRunner
URL: https://github.com/apache/flink/pull/9130#issuecomment-513072890
 
 
   @bowenli86 
   The [ci build](https://travis-ci.com/flink-ci/flink/jobs/217107050) failed 
for the Kinesis end-to-end test, while the flink-connector-hive tests passed. 
Maybe we can re-run CI for this commit?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13238) Reduce blink planner's testing time

2019-07-18 Thread Jingsong Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888466#comment-16888466
 ] 

Jingsong Lee commented on FLINK-13238:
--

[~ykt836] [~jark] Can you assign this ticket to me?

> Reduce blink planner's testing time
> ---
>
> Key: FLINK-13238
> URL: https://issues.apache.org/jira/browse/FLINK-13238
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Kurt Young
>Priority: Major
>
> The blink planner has an independent CI profile, but it still exceeds the 50 min 
> limit from time to time, which leads to Travis failures. We need to optimize the 
> tests to reduce the testing time.
>  
> We need to do some work to reduce the time:
> 1. Optimize big tests:
> 192.503 s LongHashTableTest
> 83.969 s BinaryExternalSorterTest
> 261.497 s BinaryHashTableTest
> 74.223 s - in org.apache.flink.table.runtime.stream.sql.RankITCase
> 135.375 s - in org.apache.flink.table.runtime.stream.sql.JoinITCase
> 99.007 s - in org.apache.flink.table.runtime.stream.sql.SplitAggregateITCase
> 61.216 s - in org.apache.flink.table.runtime.stream.sql.OverWindowITCase
> 77.409 s - in 
> org.apache.flink.table.runtime.stream.sql.SemiAntiJoinStreamITCase
> 83.83 s - in org.apache.flink.table.runtime.stream.sql.AggregateRemoveITCase
> 314.376 s - in org.apache.flink.table.runtime.stream.sql.AggregateITCase
> 121.19 s - in org.apache.flink.table.runtime.stream.table.JoinITCase
> 74.417 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.SortDistinctAggregateITCase
> 109.185 s - in org.apache.flink.table.runtime.batch.sql.agg.HashAggITCase
> 178.181 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.AggregateReduceGroupingITCase
> 112.006 s - in org.apache.flink.table.runtime.batch.sql.agg.SortAggITCase
> 61.863 s - in org.apache.flink.table.runtime.batch.sql.agg.GroupingSetsITCase
> 62.941 s - in 
> org.apache.flink.table.runtime.batch.sql.agg.HashDistinctAggregateITCase
> 64.58 s - in org.apache.flink.table.runtime.batch.sql.CalcITCase
> 81.272 s - in org.apache.flink.table.runtime.batch.sql.OverWindowITCase
> 75.298 s - in org.apache.flink.table.runtime.batch.sql.join.JoinITCase
> 82.923 s - in org.apache.flink.table.runtime.batch.sql.join.OuterJoinITCase
> 145.538 s - in org.apache.flink.table.runtime.batch.sql.join.SemiJoinITCase
> 214.933 s - in org.apache.flink.table.runtime.batch.sql.join.InnerJoinITCase
>  
> 2. Reuse the MiniCluster in ITCases.
> Every MiniCluster initialization takes 15 seconds, and the MiniCluster is only 
> reused at class level. We have many ITCase classes.
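
A minimal sketch of the class-level sharing suggested in item 2 above, assuming JUnit 4 and the MiniClusterWithClientResource from flink-test-utils; the task manager and slot counts are illustrative:

{code:java}
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.test.util.MiniClusterWithClientResource;

import org.junit.ClassRule;
import org.junit.Test;

public class SharedMiniClusterITCase {

	// One MiniCluster is started for the whole test class; hoisting this into a
	// shared base class would let several ITCase classes reuse the same instance.
	@ClassRule
	public static final MiniClusterWithClientResource MINI_CLUSTER =
		new MiniClusterWithClientResource(
			new MiniClusterResourceConfiguration.Builder()
				.setNumberTaskManagers(1)
				.setNumberSlotsPerTaskManager(4)
				.build());

	@Test
	public void testSomething() throws Exception {
		// jobs submitted here run against the cluster above instead of starting a new one
	}
}
{code}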



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zjuwangg commented on issue #9118: [FLINK-13206][sql client]replace `use database xxx` with `use xxx` in sql client parser

2019-07-18 Thread GitBox
zjuwangg commented on issue #9118: [FLINK-13206][sql client]replace `use 
database xxx` with `use xxx` in sql client parser
URL: https://github.com/apache/flink/pull/9118#issuecomment-513070976
 
 
   @bowenli86 I have tested this in my travis account; there is a failure in 
blink-planner that is not related to this commit.
   [CI BUILD](https://travis-ci.com/zjuwangg/flink/jobs/217144849)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13321) In Blink Planner, Join a udf with constant arguments or without argument in TableAPI query does not work now

2019-07-18 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13321:
---

Assignee: Jing Zhang

> In Blink Planner, Join a udf with constant arguments or without argument in 
> TableAPI query does not work now
> 
>
> Key: FLINK-13321
> URL: https://issues.apache.org/jira/browse/FLINK-13321
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the Blink planner, joining a UDF with constant arguments or without arguments 
> in a Table API query does not work now. For example, an error will be thrown if 
> either of the following two Table API queries is run in the Blink planner:
> {code:java}
> leftT.select('c).joinLateral(func0("1", "2"))
> // leftT.select('c).joinLateral(func0())
> {code}
> The following error will be thrown:
> {code:java}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: 
> FlinkLogicalSink(name=[5771dc74-8986-4ffa-828f-8ed40602593a], fields=[c, f0])
> +- FlinkLogicalCorrelate(correlation=[$cor3], joinType=[inner], 
> requiredColumns=[{}])
>:- FlinkLogicalCalc(select=[c])
>:  +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, 
> default_database, 15cbb5bf-816b-4319-9be8-6c648c868843]])
>+- FlinkLogicalCorrelate(correlation=[$cor4], joinType=[inner], 
> requiredColumns=[{}])
>   :- FlinkLogicalValues(tuples=[[{  }]])
>   +- 
> FlinkLogicalTableFunctionScan(invocation=[org$apache$flink$table$util$VarArgsFunc0$2ad590150fcbadcd9e420797d27a5eb1(_UTF-16LE'1',
>  _UTF-16LE'2')], rowType=[RecordType(VARCHAR(2147483647) f0)], 
> elementType=[class [Ljava.lang.Object;])
> This exception indicates that the query uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL 
> features.
>   at 
> org.apache.flink.table.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:72)
>   at 
> org.apache.flink.table.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:63)
>   at 
> org.apache.flink.table.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> ...
> 
> {code}
> The root cause is that `FlinkLogicalTableFunctionScan.CONVERTER` translates a 
> `TableFunctionScan` to a `Correlate`, which turns the original `RelNode` tree into 
> a tree with two cascaded `Correlate`s (as can be seen in the exception message 
> above) that cannot be translated to a physical `RelNode`.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] godfreyhe commented on issue #9141: [FLINK-12249][table] Fix type equivalence check problems for Window Aggregates

2019-07-18 Thread GitBox
godfreyhe commented on issue #9141: [FLINK-12249][table] Fix type equivalence 
check problems for Window Aggregates
URL: https://github.com/apache/flink/pull/9141#issuecomment-513070691
 
 
   > @dawidwys Thanks for your comments. I think you are right.
   > 
   > We had a discussion just now. (with @godfreyhe @wuchong @sunjincheng121 ) 
We think that, to solve the problem fundamentally, the current window aggregate 
should not extend from `org.apache.calcite.rel.core.Aggregate`, instead it 
should extend from `SingleRel`. This makes sure the right semantics of window 
aggregate.
   > 
   > To correct this, the changes may be somehow very large as there are many 
existing window related rules. Also, this is a problem a long time ago and the 
probability of occurrence is relatively small.
   > 
   > Considering the reasons above, I think it would be good if we can fix this 
issue later after release-1.9?
   > 
   > What do you think? @dawidwys @godfreyhe @wuchong @sunjincheng121
   > 
   > Best, Hequn
   
   yes, it's a big issue to refactor WindowAggregate completely using solution 3 
mentioned in [FLINK-12249](https://issues.apache.org/jira/browse/FLINK-12249). 
"Creating a special window aggregate call function" is not a proper approach either; 
what about user-defined aggregate functions? Currently, I think we should make 
WindowAggregate not extend from Aggregate in this PR, which has already been 
validated in a Blink minor branch, and do the clean refactor after 1.9?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13037) Translate "Concepts -> Glossary" page into Chinese

2019-07-18 Thread Jeff Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888463#comment-16888463
 ] 

Jeff Yang commented on FLINK-13037:
---

Hi [~jark], please take a look: [https://github.com/apache/flink/pull/9173]

> Translate "Concepts -> Glossary" page into Chinese
> --
>
> Key: FLINK-13037
> URL: https://issues.apache.org/jira/browse/FLINK-13037
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Konstantin Knauf
>Assignee: Jeff Yang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate Glossary page into Chinese: 
> https://ci.apache.org/projects/flink/flink-docs-master/concepts/glossary.html
> The markdown file is located in {{docs/concepts/glossary.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9173: [FLINK-13037][docs] Translate "Concepts -> Glossary" page into Chinese

2019-07-18 Thread GitBox
flinkbot commented on issue #9173: [FLINK-13037][docs] Translate "Concepts -> 
Glossary" page into Chinese
URL: https://github.com/apache/flink/pull/9173#issuecomment-513069667
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] ifndef-SleePy commented on issue #9072: [FLINK-11630] Wait for the termination of all running Tasks when shutting down TaskExecutor

2019-07-18 Thread GitBox
ifndef-SleePy commented on issue #9072: [FLINK-11630] Wait for the termination 
of all running Tasks when shutting down TaskExecutor
URL: https://github.com/apache/flink/pull/9072#issuecomment-513069756
 
 
   Hi @azagrebin 
   Thank you for the explanation. Nice work taking over the old PR!
   Anyway, for this PR, it looks good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] highfei2011 opened a new pull request #9173: [FLINK-13037][docs] Translate "Concepts -> Glossary" page into Chinese

2019-07-18 Thread GitBox
highfei2011 opened a new pull request #9173: [FLINK-13037][docs] Translate 
"Concepts -> Glossary" page into Chinese
URL: https://github.com/apache/flink/pull/9173
 
 
   
   
   ## What is the purpose of the change
   
   *Translate "Concepts -> Glossary" page into Chinese.*
   
   
   ## Brief change log
   
 - *I modified the glossary.zh.md file and added a new glossary to 
glossary.md.*
   
   
   ## Verifying this change
   
   *I have  verified the changes by starting the build script in preview mode.*
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no )
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no )
 - The serializers: ( no )
 - The runtime per-record code paths (performance sensitive): ( no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: ( no )
 - The S3 file system connector: ( no )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (  no )
 - If yes, how is the feature documented? ( not documented )
   
   @wuchong @xccui @klion26 
   Please take a look.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13037) Translate "Concepts -> Glossary" page into Chinese

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13037:
---
Labels: pull-request-available  (was: )

> Translate "Concepts -> Glossary" page into Chinese
> --
>
> Key: FLINK-13037
> URL: https://issues.apache.org/jira/browse/FLINK-13037
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Konstantin Knauf
>Assignee: Jeff Yang
>Priority: Major
>  Labels: pull-request-available
>
> Translate Glossary page into Chinese: 
> https://ci.apache.org/projects/flink/flink-docs-master/concepts/glossary.html
> The markdown file is located in {{docs/concepts/glossary.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13329) Set env config for sql jobs

2019-07-18 Thread XuPingyong (JIRA)
XuPingyong created FLINK-13329:
--

 Summary: Set env config for sql jobs
 Key: FLINK-13329
 URL: https://issues.apache.org/jira/browse/FLINK-13329
 Project: Flink
  Issue Type: Task
  Components: Table SQL / API
Affects Versions: 1.9.0, 1.10.0
Reporter: XuPingyong
 Fix For: 1.9.0, 1.10.0


Now we execute streaming jobs through TableEnvironment, but the 
StreamExecutionEnvironment cannot be touched by users, so we cannot set 
checkpointing and other env configs when we execute SQL jobs.
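
For context, a minimal sketch of the pattern that only works when the user owns the StreamExecutionEnvironment (old-planner style entry point); the checkpoint interval is illustrative:

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class EnvConfigExample {
	public static void main(String[] args) {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		// env-level settings such as checkpointing can be applied here only because
		// the user creates the environment before the table environment wraps it
		env.enableCheckpointing(60_000);

		StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
		// queries submitted through tEnv will pick up the checkpoint config above
	}
}
{code}

When the TableEnvironment is created directly, there is no such handle, which is what this ticket is about.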



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] godfreyhe commented on issue #9146: [FLINK-13284] [table-planner-blink] Correct some builtin functions' r…

2019-07-18 Thread GitBox
godfreyhe commented on issue #9146: [FLINK-13284] [table-planner-blink] Correct 
some builtin functions' r…
URL: https://github.com/apache/flink/pull/9146#issuecomment-513062687
 
 
   > @godfreyhe @JingsongLi Agree with you that we should offer a deterministic 
semantic for those 'dirty data', I think we can achieve this for two steps:
   > 
   > 1. unify all the builtin functions' exception handling behavior for blink 
planner(since it differs with flink planner), I found two exception functions 
and will create another issue to fix it.
   > 2. add a global configuration to support something like [MySQL's 
strict/non-strict sql 
mode](https://dev.mysql.com/doc/refman/5.6/en/sql-mode.html#sql-mode-strict) 
for exception handling includes numeric out-of-range and overflow and illegal 
inputs for sources. We can start a new thread to discuss it, what do you think?
   
   sounds good, look forward the discussion


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe commented on a change in pull request #9162: [FLINK-13321][table-planner-blink] Fix bug in Blink Planner, Join a udf with constant arguments or without argument in TableAPI

2019-07-18 Thread GitBox
godfreyhe commented on a change in pull request #9162: 
[FLINK-13321][table-planner-blink] Fix bug in Blink Planner, Join a udf with 
constant arguments or without argument in TableAPI query does not work now
URL: https://github.com/apache/flink/pull/9162#discussion_r305179097
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/rules/physical/batch/BatchExecConstantTableFunctionScanRule.scala
 ##
 @@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.rules.physical.batch
+
+import org.apache.flink.table.plan.nodes.FlinkConventions
+import org.apache.flink.table.plan.nodes.logical.FlinkLogicalTableFunctionScan
+import org.apache.flink.table.plan.nodes.physical.batch.{BatchExecCorrelate, BatchExecValues}
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.plan.{RelOptRule, RelOptRuleCall}
+import org.apache.calcite.plan.RelOptRule._
+import org.apache.calcite.rel.core.JoinRelType
+import org.apache.calcite.rex.{RexLiteral, RexUtil}
+
+/**
+  * Converts [[FlinkLogicalTableFunctionScan]] with constant RexCall to
+  * {{{
+  *[[BatchExecCorrelate]]
+  *  /   \
+  * empty [[BatchExecValues]]  [[FlinkLogicalTableFunctionScan]]
+  * }}}
+  *
+  * Add the rule to support the following SQL:
+  * SELECT * FROM LATERAL TABLE(FUNC()) as T(C1)
 
 Review comment:
   it's better to add some comments to explain how `SELECT * FROM T, LATERAL TABLE(FUNC()) as T(C1)` works


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe commented on a change in pull request #9162: [FLINK-13321][table-planner-blink] Fix bug in Blink Planner, Join a udf with constant arguments or without argument in TableAPI

2019-07-18 Thread GitBox
godfreyhe commented on a change in pull request #9162: 
[FLINK-13321][table-planner-blink] Fix bug in Blink Planner, Join a udf with 
constant arguments or without argument in TableAPI query does not work now
URL: https://github.com/apache/flink/pull/9162#discussion_r305179186
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/stream/sql/CorrelateITCase.scala
 ##
 @@ -84,6 +84,30 @@ class CorrelateITCase extends StreamingTestBase {
 assertEquals(expected.sorted, sink.getAppendResults.sorted)
   }
 
+  @Test
+  def testConstantTableFunc(): Unit = {
+tEnv.registerFunction("str_split", new StringSplit())
+val query = "SELECT * FROM LATERAL TABLE(str_split()) as T0(d)"
+val sink = new TestingAppendSink
+tEnv.sqlQuery(query).toAppendStream[Row].addSink(sink)
+env.execute()
+
+val expected = List("a","b","c")
 
 Review comment:
   nit: add a space between `"a", "b", "c"`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13037) Translate "Concepts -> Glossary" page into Chinese

2019-07-18 Thread Jeff Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888433#comment-16888433
 ] 

Jeff Yang commented on FLINK-13037:
---

Hi [~jark] [~knaufk], I have already translated this doc, but I find that 
it is not complete. As a glossary, it should be as complete as 
possible. So I think we can add Parallelism and Slot to the English 
document. What do you think?

> Translate "Concepts -> Glossary" page into Chinese
> --
>
> Key: FLINK-13037
> URL: https://issues.apache.org/jira/browse/FLINK-13037
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Konstantin Knauf
>Assignee: Jeff Yang
>Priority: Major
>
> Translate Glossary page into Chinese: 
> https://ci.apache.org/projects/flink/flink-docs-master/concepts/glossary.html
> The markdown file is located in {{docs/concepts/glossary.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Closed] (FLINK-12723) Adds a wiki page about setting up a Python Table API development environment

2019-07-18 Thread sunjincheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng closed FLINK-12723.
---
   Resolution: Fixed
Fix Version/s: 1.9.0

Fixed in master: ee668541c7b39a30f74e44465e686153df191bd9

> Adds a wiki page about setting up a Python Table API development environment
> 
>
> Key: FLINK-12723
> URL: https://issues.apache.org/jira/browse/FLINK-12723
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: sunjincheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We should add a wiki page showing how to set up a Python Table API 
> development environment to help contributors who are interested in the Python 
> Table API to join in easily.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-12723) Adds a wiki page about setting up a Python Table API development environment

2019-07-18 Thread sunjincheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-12723:

Fix Version/s: 1.10.0

> Adds a wiki page about setting up a Python Table API development environment
> 
>
> Key: FLINK-12723
> URL: https://issues.apache.org/jira/browse/FLINK-12723
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: sunjincheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We should add a wiki page showing how to set up a Python Table API 
> development environment to help contributors who are interested in the Python 
> Table API to join in easily.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] xintongsong commented on a change in pull request #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-18 Thread GitBox
xintongsong commented on a change in pull request #9105: 
[FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory 
size into wrong configuration instance.
URL: https://github.com/apache/flink/pull/9105#discussion_r305173128
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ##
 @@ -185,6 +199,10 @@ public ResourceManager(
this.jmResourceIdRegistrations = new HashMap<>(4);
this.taskExecutors = new HashMap<>(8);
this.taskExecutorGatewayFutures = new HashMap<>(8);
+
+   this.defaultTaskManagerMemoryMB = ConfigurationUtils.getTaskManagerHeapMemory(flinkConfig).getMebiBytes();
+   this.numberOfTaskSlots = flinkConfig.getInteger(TaskManagerOptions.NUM_TASK_SLOTS);
+   this.slotsPerWorker = updateTaskManagerConfigAndCreateWorkerSlotProfiles(this.flinkConfig, defaultTaskManagerMemoryMB, numberOfTaskSlots);
 
 Review comment:
   Yes, the background of calculating `slotsPerWorker` on the RM is that we 
need to know the resource profiles of slots for `PendingTaskManagerSlot` before 
the TM is started and registered. For standalone, we don't have any pending task 
manager slots because the RM cannot actively start any TM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunhaibotb commented on issue #8471: [FLINK-12529][runtime] Release record-deserializer buffers timely to improve the efficiency of heap usage on taskmanager

2019-07-18 Thread GitBox
sunhaibotb commented on issue #8471: [FLINK-12529][runtime] Release 
record-deserializer buffers timely to improve the efficiency of heap usage on 
taskmanager
URL: https://github.com/apache/flink/pull/8471#issuecomment-513053581
 
 
   The code has been updated and Travis is green (flinkbot doesn't seem to 
have updated the status on this page and still displays `PENDING`, but it has 
actually succeeded). Because of a conflict with the latest master branch, I 
rebased on the latest master and force-pushed. @StefanRRichter @pnowojski


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-12249) Type equivalence check fails for Window Aggregates

2019-07-18 Thread godfrey he (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887834#comment-16887834
 ] 

godfrey he edited comment on FLINK-12249 at 7/19/19 1:23 AM:
-

There is another big issue: is it correct for {{WindowAggregate}} to inherit from 
{{Aggregate}}? My answer is NO.

For {{WindowAggregate}}, the group keys are the window group plus normal fields 
(which may be empty), while {{Aggregate}} only has the normal group key part and 
knows nothing about the window group key. Currently, many planner rules match and 
apply transformations on {{Aggregate}}, however some of them are not applicable to 
{{WindowAggregate}}, e.g. {{AggregateJoinTransposeRule}}, 
{{AggregateProjectMergeRule}}, etc. I think the design violates the Liskov 
Substitution Principle. 

There are three solutions: 
1. Make {{Aggregate}}'s group key support expressions (such as RexCall), not 
field references only; then the window group expression could be part of 
{{Aggregate}}'s group key. The disadvantage is that we must update all existing 
aggregate rules, metadata handlers, etc.
2. Make {{WindowAggregate}} extend from {{SingleRel}}, not from {{Aggregate}}. 
The disadvantage is that we must implement the related planner rules for 
WindowAggregate. 
3. In the logical phase, do not merge {{Aggregate}} and {{Project}} (with 
window group) into {{WindowAggregate}}, and convert the {{Project}} to a new 
kind of node named {{WindowAssigner}}, which could prevent the {{Project}} from 
being pushed down/merged; in the physical phase, merge them into 
{{WindowAggregate}}. The advantage is that we could reuse the current aggregate 
rules, and the disadvantage is that we need to add new rules for {{WindowAssigner}}.

I think solution 3 is the easier approach, which could make sure all rules 
are correct.

If this refactor is finished, I think the above bug is fixed too.

thank~



> Type equivalence check fails for Window Aggregates
> --
>
> Key: FLINK-12249
> URL: https://issues.apache.org/jira/browse/FLINK-12249
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner, Tests
>Affects Versions: 1.9.0
>Reporter: Dawid Wysakowicz
>Assignee: Hequn Cheng
>Priority: Critical
> Fix For: 1.9.0
>
>
> Creating Aggregate node fails in rules: {{LogicalWindowAggregateRule}} and 
> {{ExtendedAggregateExtractProjectRule}} if the only grouping expression is a 
> window and
> we compute aggregation on NON NULLABLE field.
> The root cause for that is how return type inference strategies in Calcite 
> work and how we handle window aggregates. Take 
> {{org.apache.calcite.sql.type.ReturnTypes#AGG_SUM}} as an example: it adjusts the 
> type nullability based on {{groupCount}}.
> Though we pass false information there, as we strip the window aggregation from the 
> groupSet (in {{LogicalWindowAggregateRule}}).
> One can reproduce this problem also with a unit test like this:
> {code}
> @Test
>   def testTumbleFunction2() = {
>  
> val innerQuery =
>   """
> |SELECT
> | CASE a WHEN 1 THEN 1 ELSE 99 END AS 

[jira] [Created] (FLINK-13328) add IT case for reading and writing generic table metadata via HiveCatalog

2019-07-18 Thread Bowen Li (JIRA)
Bowen Li created FLINK-13328:


 Summary: add IT case for reading and writing generic table 
metadata via HiveCatalog 
 Key: FLINK-13328
 URL: https://issues.apache.org/jira/browse/FLINK-13328
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.9.0, 1.10.0


We lack IT cases for reading and writing generic table metadata via HiveCatalog. 
This ticket is for adding some IT cases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-11205) Task Manager Metaspace Memory Leak

2019-07-18 Thread Joey Echeverria (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888411#comment-16888411
 ] 

Joey Echeverria commented on FLINK-11205:
-

I didn't get a chance to reproduce this using a sample job, but we found two 
causes for our jobs.

(1) Apache Commons Logging was caching LogFactory instances. We added the 
following code to our close() methods to release those LogFactories:

{code:java}
  ClassLoader contextLoader = Thread.currentThread().getContextClassLoader();
  LogFactory.release(contextLoader);
{code}

(2) We saw a leak that seemed to have an upper limit in ObjectStreamClass's 
caches. This one was trickier as we had to use reflection to clear out the 
caches, again in the close() method:

{code:java}
public static void cleanUpLeakingObjects(ClassLoader contextLoader) {
try {
  Class caches = Class.forName("java.io.ObjectStreamClass$Caches");
  clearCache(caches, "localDescs", contextLoader);
  clearCache(caches, "reflectors", contextLoader);
    } catch (ReflectiveOperationException | SecurityException | ClassCastException ex) {
      // Clean-up failed
      logger.warn("Cleanup of ObjectStreamClass caches failed with exception {}: {}", ex.getClass().getSimpleName(), ex.getMessage());
  logger.debug("Stack trace follows.", ex);
}
  }


  private static void clearCache(Class caches, String mapName, ClassLoader contextLoader)
throws ReflectiveOperationException, SecurityException, ClassCastException {
Field field = caches.getDeclaredField(mapName);
field.setAccessible(true);

Map map = TypeUtils.coerce(field.get(null));
Iterator keys = map.keySet().iterator();
while (keys.hasNext()) {
  Object key = keys.next();
  if (key instanceof Reference) {
Object clazz = ((Reference) key).get();
if (clazz instanceof Class) {
  ClassLoader cl = ((Class) clazz).getClassLoader();
  while (cl != null) {
if (cl == contextLoader) {
  keys.remove();
  break;
}
cl = cl.getParent();
  }
}
  }
}
  }
{code}
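
A hypothetical sketch of where the cleanup above would be hooked in; the function and its types are illustrative, and only LogFactory.release(ClassLoader) is an actual commons-logging call:

{code:java}
import org.apache.commons.logging.LogFactory;
import org.apache.flink.api.common.functions.RichMapFunction;

public class CleanupOnCloseFunction extends RichMapFunction<String, String> {

	@Override
	public String map(String value) {
		return value;
	}

	@Override
	public void close() {
		// the task's context classloader is the user-code classloader to be released
		ClassLoader contextLoader = Thread.currentThread().getContextClassLoader();
		LogFactory.release(contextLoader);
		// cleanUpLeakingObjects(contextLoader); // ObjectStreamClass cache cleanup from above
	}
}
{code}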

> Task Manager Metaspace Memory Leak 
> ---
>
> Key: FLINK-11205
> URL: https://issues.apache.org/jira/browse/FLINK-11205
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.5, 1.6.2, 1.7.0
>Reporter: Nawaid Shamim
>Priority: Major
> Attachments: Screenshot 2018-12-18 at 12.14.11.png, Screenshot 
> 2018-12-18 at 15.47.55.png
>
>
> Job restarts cause the task manager to dynamically load duplicate classes. 
> Metaspace is unbounded and grows with every restart. YARN aggressively kills 
> such containers, but the effect is immediately seen on a different task 
> manager, which results in a death spiral.
> Task Manager uses dynamic loader as described in 
> [https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/debugging_classloading.html]
> {quote}
> *YARN*
> YARN classloading differs between single job deployments and sessions:
>  * When submitting a Flink job/application directly to YARN (via {{bin/flink 
> run -m yarn-cluster ...}}), dedicated TaskManagers and JobManagers are 
> started for that job. Those JVMs have both Flink framework classes and user 
> code classes in the Java classpath. That means that there is _no dynamic 
> classloading_ involved in that case.
>  * When starting a YARN session, the JobManagers and TaskManagers are started 
> with the Flink framework classes in the classpath. The classes from all jobs 
> that are submitted against the session are loaded dynamically.
> {quote}
> The above is not entirely true, especially when you set {{-yD 
> classloader.resolve-order=parent-first}}. We also observed the above 
> behaviour when submitting a Flink job/application directly to YARN (via 
> {{bin/flink run -m yarn-cluster ...}}).
> !Screenshot 2018-12-18 at 12.14.11.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-11843) Dispatcher fails to recover jobs if leader change happens during JobManagerRunner termination

2019-07-18 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888402#comment-16888402
 ] 

TisonKun commented on FLINK-11843:
--

[~stevenz3wu] I have sent an email to you :- )

> Dispatcher fails to recover jobs if leader change happens during 
> JobManagerRunner termination
> -
>
> Key: FLINK-11843
> URL: https://issues.apache.org/jira/browse/FLINK-11843
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.7.2, 1.8.0, 1.9.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
> Fix For: 1.9.0, 1.10.0
>
>
> The {{Dispatcher}} fails to recover jobs if a leader change happens during 
> the {{JobManagerRunner}} termination of the previous run. The problem is that 
> we schedule the start future of the recovered {{JobGraph}} using the 
> {{MainThreadExecutor}} and additionally require that this future is completed 
> before any other recovery operation from a subsequent leadership session is 
> executed. If now the leadership changes, the {{MainThreadExecutor}} will be 
> invalidated and the scheduled future will never be completed.
> The relevant ML thread: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/1-7-1-job-stuck-in-suspended-state-td26439.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-11568) Exception in Kinesis ShardConsumer hidden by InterruptedException

2019-07-18 Thread Thomas Weise (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888383#comment-16888383
 ] 

Thomas Weise commented on FLINK-11568:
--

I missed resolving this when merging the PR. There is an issue w.r.t. test 
flakiness to follow up on in FLINK-12595.

> Exception in Kinesis ShardConsumer hidden by InterruptedException 
> --
>
> Key: FLINK-11568
> URL: https://issues.apache.org/jira/browse/FLINK-11568
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Affects Versions: 1.6.2
>Reporter: Shannon Carey
>Assignee: Shannon Carey
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When the Kinesis ShardConsumer encounters an exception, for example due to a 
> problem in the Deserializer, the root cause exception is often hidden by a 
> non-informative InterruptedException caused by the FlinkKinesisConsumer 
> thread being interrupted.
> Ideally, the root cause exception would be preserved and thrown so that the 
> logs contain enough information to diagnose the issue.
> This probably affects all versions.
> Here's an example of a log message with the unhelpful InterruptedException:
> {code:java}
> 2019-02-05 13:29:31:383 thread=Source: Custom Source -> Filter -> Map -> 
> Sink: Unnamed (1/8), level=WARN, 
> logger=org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer, 
> message="Error while closing Kinesis data fetcher"
> java.lang.InterruptedException: sleep interrupted
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher.awaitTermination(KinesisDataFetcher.java:450)
> at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.cancel(FlinkKinesisConsumer.java:314)
> at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.close(FlinkKinesisConsumer.java:323)
> at 
> org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:477)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an example of the real exception that we're actually interested 
> in, which is stored inside KinesisDataFetcher#error, but is not thrown or 
> logged:
> {code:java}
> org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:416)
> org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:290)
> org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
> org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:267)
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:178)
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152)
> org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:240)
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:230)
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:174)
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152)
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:144)
> org.apache.flink.formats.avro.AvroDeserializationSchema.deserialize(AvroDeserializationSchema.java:135)
> org.apache.flink.streaming.connectors.kinesis.serialization.KinesisDeserializationSchemaWrapper.deserialize(KinesisDeserializationSchemaWrapper.java:44)
> org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer.deserializeRecordForCollectionAndUpdateState(ShardConsumer.java:332)
> org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer.run(ShardConsumer.java:231)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
> java.util.concurrent.FutureTask.run(FutureTask.java)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> java.lang.Thread.run(Thread.java:745)
> {code}
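
A hypothetical sketch of the direction described above, surfacing the error stored by the fetcher instead of only the InterruptedException raised while closing; all names other than the JDK ones are illustrative:

{code:java}
import java.util.concurrent.atomic.AtomicReference;

public class RootCausePreservingClose {

	// stand-in for the error reference the fetcher populates when a shard consumer fails
	private final AtomicReference<Throwable> error = new AtomicReference<>();

	public void close() {
		try {
			awaitTermination();
		} catch (InterruptedException ie) {
			Thread.currentThread().interrupt();
			Throwable rootCause = error.get();
			if (rootCause != null) {
				// keep the interrupt visible as a suppressed exception, but report the real failure
				rootCause.addSuppressed(ie);
				throw new RuntimeException("Shard consumer failed", rootCause);
			}
		}
	}

	private void awaitTermination() throws InterruptedException {
		Thread.sleep(1000); // stand-in for waiting on the consumer threads
	}
}
{code}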



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (FLINK-11568) Exception in Kinesis ShardConsumer hidden by InterruptedException

2019-07-18 Thread Thomas Weise (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise resolved FLINK-11568.
--
   Resolution: Fixed
Fix Version/s: 1.9.0

> Exception in Kinesis ShardConsumer hidden by InterruptedException 
> --
>
> Key: FLINK-11568
> URL: https://issues.apache.org/jira/browse/FLINK-11568
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Affects Versions: 1.6.2
>Reporter: Shannon Carey
>Assignee: Shannon Carey
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When the Kinesis ShardConsumer encounters an exception, for example due to a 
> problem in the Deserializer, the root cause exception is often hidden by a 
> non-informative InterruptedException caused by the FlinkKinesisConsumer 
> thread being interrupted.
> Ideally, the root cause exception would be preserved and thrown so that the 
> logs contain enough information to diagnose the issue.
> This probably affects all versions.
> Here's an example of a log message with the unhelpful InterruptedException:
> {code:java}
> 2019-02-05 13:29:31:383 thread=Source: Custom Source -> Filter -> Map -> 
> Sink: Unnamed (1/8), level=WARN, 
> logger=org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer, 
> message="Error while closing Kinesis data fetcher"
> java.lang.InterruptedException: sleep interrupted
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher.awaitTermination(KinesisDataFetcher.java:450)
> at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.cancel(FlinkKinesisConsumer.java:314)
> at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.close(FlinkKinesisConsumer.java:323)
> at 
> org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:477)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an example of the real exception that we're actually interested 
> in, which is stored inside KinesisDataFetcher#error, but is not thrown or 
> logged:
> {code:java}
> org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:416)
> org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:290)
> org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
> org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:267)
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:178)
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152)
> org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:240)
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:230)
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:174)
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152)
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:144)
> org.apache.flink.formats.avro.AvroDeserializationSchema.deserialize(AvroDeserializationSchema.java:135)
> org.apache.flink.streaming.connectors.kinesis.serialization.KinesisDeserializationSchemaWrapper.deserialize(KinesisDeserializationSchemaWrapper.java:44)
> org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer.deserializeRecordForCollectionAndUpdateState(ShardConsumer.java:332)
> org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer.run(ShardConsumer.java:231)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
> java.util.concurrent.FutureTask.run(FutureTask.java)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] TengHu commented on issue #8885: [FLINK-12855] [streaming-java][window-assigners] Add functionality that staggers panes on partitions to distribute workload.

2019-07-18 Thread GitBox
TengHu commented on issue #8885: [FLINK-12855] 
[streaming-java][window-assigners] Add functionality that staggers panes on 
partitions to distribute workload.
URL: https://github.com/apache/flink/pull/8885#issuecomment-512990774
 
 
   @zentol @twalthr @StefanRRichter Can you guys take a look at this ? Thanks


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] GJL commented on a change in pull request #9060: [FLINK-13145][tests] Run HA dataset E2E test with new RestartPipelinedRegionStrategy

2019-07-18 Thread GitBox
GJL commented on a change in pull request #9060: [FLINK-13145][tests] Run HA 
dataset E2E test with new RestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9060#discussion_r305113537
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-dataset-fine-grained-recovery-test/src/main/java/org/apache/flink/batch/tests/util/FileBasedOneShotLatch.java
 ##
 @@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.batch.tests.util;
+
+import com.sun.nio.file.SensitivityWatchEventModifier;
+
+import javax.annotation.concurrent.NotThreadSafe;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardWatchEventKinds;
+import java.nio.file.WatchEvent;
+import java.nio.file.WatchKey;
+import java.nio.file.WatchService;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * A synchronization aid that allows a single thread to wait on the creation 
of a specified file.
+ */
+@NotThreadSafe
+public class FileBasedOneShotLatch implements Closeable {
+
+   private final Path latchFile;
+
+   private final WatchService watchService;
+
+   private boolean released;
+
+   public FileBasedOneShotLatch(final Path latchFile) {
+   this.latchFile = checkNotNull(latchFile);
+
+   final Path parentDir = checkNotNull(latchFile.getParent(), "latchFile must have a parent");
+   this.watchService = initWatchService(parentDir);
+   }
+
+   private static WatchService initWatchService(final Path parentDir) {
+   final WatchService watchService = createWatchService(parentDir);
+   watchForLatchFile(watchService, parentDir);
+   return watchService;
+   }
+
+   private static WatchService createWatchService(final Path parentDir) {
+   try {
+   return parentDir.getFileSystem().newWatchService();
+   } catch (IOException e) {
+   throw new RuntimeException(e);
+   }
+   }
+
+   private static void watchForLatchFile(final WatchService watchService, final Path parentDir) {
+   try {
+   parentDir.register(
+   watchService,
+   new 
WatchEvent.Kind[]{StandardWatchEventKinds.ENTRY_CREATE},
+   SensitivityWatchEventModifier.HIGH);
+   } catch (IOException e) {
+   throw new RuntimeException(e);
+   }
+   }
+
+   /**
+* Waits until the latch file is created.
+*
+* @throws InterruptedException if interrupted while waiting
+*/
+   public void await() throws InterruptedException {
+   if (isReleasedOrReleasable()) {
+   return;
+   }
+
+   awaitLatchFile(watchService);
+   }
+
+   private void awaitLatchFile(final WatchService watchService) throws 
InterruptedException {
+   while (true) {
+   WatchKey take = watchService.take();
 
 Review comment:
   done
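
   For context, a minimal sketch of how a test could use this latch; the marker
   file name and the producing thread below are illustrative assumptions, not
   part of the PR:

   ```java
   import org.apache.flink.batch.tests.util.FileBasedOneShotLatch;

   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.nio.file.Paths;

   // Illustrative usage sketch: one thread blocks on the latch while another
   // creates the agreed-upon marker file. The temp-file name is a made-up example.
   public class FileBasedOneShotLatchSketch {

       public static void main(String[] args) throws Exception {
           final Path latchFile = Paths.get(System.getProperty("java.io.tmpdir"), "latch-marker");
           Files.deleteIfExists(latchFile);

           try (FileBasedOneShotLatch latch = new FileBasedOneShotLatch(latchFile)) {
               // Simulate an external process signalling readiness by creating the file.
               new Thread(() -> {
                   try {
                       Thread.sleep(500);
                       Files.createFile(latchFile);
                   } catch (Exception e) {
                       throw new RuntimeException(e);
                   }
               }).start();

               // Blocks until the watch service observes the creation of the latch file.
               latch.await();
               System.out.println("latch released");
           } finally {
               Files.deleteIfExists(latchFile);
           }
       }
   }
   ```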


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] GJL commented on a change in pull request #9060: [FLINK-13145][tests] Run HA dataset E2E test with new RestartPipelinedRegionStrategy

2019-07-18 Thread GitBox
GJL commented on a change in pull request #9060: [FLINK-13145][tests] Run HA 
dataset E2E test with new RestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9060#discussion_r305113144
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-dataset-fine-grained-recovery-test/src/main/java/org/apache/flink/batch/tests/util/FileBasedOneShotLatch.java
 ##
 @@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.batch.tests.util;
+
+import com.sun.nio.file.SensitivityWatchEventModifier;
+
+import javax.annotation.concurrent.NotThreadSafe;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardWatchEventKinds;
+import java.nio.file.WatchEvent;
+import java.nio.file.WatchKey;
+import java.nio.file.WatchService;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * A synchronization aid that allows a single thread to wait on the creation 
of a specified file.
+ */
+@NotThreadSafe
+public class FileBasedOneShotLatch implements Closeable {
+
+   private final Path latchFile;
+
+   private final WatchService watchService;
+
+   private boolean released;
+
+   public FileBasedOneShotLatch(final Path latchFile) {
+   this.latchFile = checkNotNull(latchFile);
+
+   final Path parentDir = checkNotNull(latchFile.getParent(), 
"latchFile must have a parent");
+   this.watchService = initWatchService(parentDir);
+   }
+
+   private static WatchService initWatchService(final Path parentDir) {
+   final WatchService watchService = createWatchService(parentDir);
+   watchForLatchFile(watchService, parentDir);
+   return watchService;
+   }
+
+   private static WatchService createWatchService(final Path parentDir) {
+   try {
+   return parentDir.getFileSystem().newWatchService();
+   } catch (IOException e) {
+   throw new RuntimeException(e);
+   }
+   }
+
+   private static void watchForLatchFile(final WatchService watchService, 
final Path parentDir) {
+   try {
+   parentDir.register(
+   watchService,
+   new 
WatchEvent.Kind[]{StandardWatchEventKinds.ENTRY_CREATE},
+   SensitivityWatchEventModifier.HIGH);
+   } catch (IOException e) {
+   throw new RuntimeException(e);
+   }
+   }
+
+   /**
+* Waits until the latch file is created.
+*
+* @throws InterruptedException if interrupted while waiting
+*/
+   public void await() throws InterruptedException {
+   if (isReleasedOrReleasable()) {
+   return;
+   }
+
+   awaitLatchFile(watchService);
+   }
+
+   private void awaitLatchFile(final WatchService watchService) throws 
InterruptedException {
+   while (true) {
+   WatchKey take = watchService.take();
 
 Review comment:
   rename to `watchKey`
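
   A sketch of how the renamed loop might read; the quoted hunk cuts off right
   after `take()`, so the event handling below is an assumption rather than the
   PR's actual code:

   ```java
   // Assumed continuation of the truncated hunk above, with `take` renamed to `watchKey`.
   private void awaitLatchFile(final WatchService watchService) throws InterruptedException {
       while (true) {
           final WatchKey watchKey = watchService.take();
           for (final WatchEvent<?> event : watchKey.pollEvents()) {
               // For ENTRY_CREATE events the context is the relative path of the new file.
               if (latchFile.getFileName().equals(event.context())) {
                   return;
               }
           }
           // Reset the key so further events in the watched directory are delivered.
           watchKey.reset();
       }
   }
   ```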


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9172: [FLINK-13313][table] create CatalogTableBuilder to support building CatalogTable from descriptors

2019-07-18 Thread GitBox
flinkbot commented on issue #9172: [FLINK-13313][table] create 
CatalogTableBuilder to support building CatalogTable from descriptors
URL: https://github.com/apache/flink/pull/9172#issuecomment-512975941
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13313) create CatalogTableBuilder to support building CatalogTable from descriptors

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13313:
---
Labels: pull-request-available  (was: )

> create CatalogTableBuilder to support building CatalogTable from descriptors
> 
>
> Key: FLINK-13313
> URL: https://issues.apache.org/jira/browse/FLINK-13313
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>
> Found a usability issue.
> Previously, users can create an ExternalCatalogTable (deprecated) from 
> descriptors via ExternalCatalogTableBuilder, and this helps smooth user 
> experience of Flink Table API. E.g.
> {code:java}
> ExternalCatalogTable table = ExternalCatalogTableBuilder(
>   new ExternalSystemXYZ()
>   .version("0.11"))
>   .withFormat(
>   new Json()
>   .jsonSchema("{...}")
>   .failOnMissingField(false))
>   .withSchema(
>   new Schema()
>   .field("user-name", "VARCHAR").from("u_name")
>   .field("count", "DECIMAL")
>   .supportsStreaming()
>   .asTableSource()
> oldCatalog.createTable("tble_name", table, false)
> {code}
> If we don't have a builder to connect new CatalogTable and descriptor, how a 
> user creates CatalogTable would be like the following example, which is quite 
> inconvenient given users have to know all the key names.
> {code:java}
> TableSchema schema = TableSchema.builder()
>   .field("name", DataTypes.STRING())
>   .field("age", DataTypes.INT())
>   .build();
> Map<String, String> properties = new HashMap<>();
> properties.put(CatalogConfig.IS_GENERIC, String.valueOf(true));
> properties.put("connector.type", "filesystem");
> properties.put("connector.path", "/tmp");
> properties.put("connector.property-version", "1");
> properties.put("update-mode", "append");
> properties.put("format.type", "csv");
> properties.put("format.property-version", "1");
> properties.put("format.fields.0.name", "name");
> properties.put("format.fields.0.type", "STRING");
> properties.put("format.fields.1.name", "age");
> properties.put("format.fields.1.type", "INT");
> ObjectPath path = new ObjectPath("mydb", "mytable");
> CatalogTable table = new CatalogTableImpl(schema, properties, "csv table");
> {code}
> We need a similar new class {{CatalogTableBuilder}} for new Catalog APIs
> cc [~tzulitai] [~ykt836] [~xuefuz]
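
For comparison, a sketch of what descriptor-based creation could look like with the
proposed builder; the builder method names below are assumptions modelled on the old
ExternalCatalogTableBuilder, not the final API (classes come from the same
org.apache.flink.table.* packages as the examples above):

{code:java}
// Hypothetical usage modelled on ExternalCatalogTableBuilder; names are illustrative only.
TableSchema schema = TableSchema.builder()
  .field("name", DataTypes.STRING())
  .field("age", DataTypes.INT())
  .build();

CatalogTable table = new CatalogTableBuilder(
    new FileSystem().path("/tmp"),   // connector descriptor
    schema)
  .withFormat(new Csv())             // format descriptor
  .withComment("csv table")
  .inAppendMode()
  .build();

catalog.createTable(new ObjectPath("mydb", "mytable"), table, false);
{code}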



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] bowenli86 opened a new pull request #9172: [FLINK-13313][table] create CatalogTableBuilder to support building CatalogTable from descriptors

2019-07-18 Thread GitBox
bowenli86 opened a new pull request #9172: [FLINK-13313][table] create 
CatalogTableBuilder to support building CatalogTable from descriptors
URL: https://github.com/apache/flink/pull/9172
 
 
   ## What is the purpose of the change
   
   This PR adds `CatalogTableBuilder` as a replacement for 
`ExternalCatalogTableBuilder` to help users convert table source/sink 
descriptors to CatalogTable. The gap was mainly discovered when I was writing 
tests for `HiveCatalog` to make sure it works as expected to persist Flink 
generic tables.
   
   ## Brief change log
   
   - added `CatalogTableBuilder` as a replacement of 
`ExternalCatalogTableBuilder` to help users convert table source/sink 
descriptors to CatalogTable
   - added unit test `HiveCatalogITCase` for two things
 - test that `HiveCatalog` works as expected to persist Flink generic tables
 - test that `CatalogTableBuilder` works as expected
 - `HiveCatalogITCase` should be moved to the `flink-connector-hive-test` 
module once it's set up
   
   ## Verifying this change
   
   This change added tests and can be verified in `HiveCatalogITCase`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs / JavaDocs)
   
   Docs will be added separately


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-13327) Blink planner not compiling with Scala 2.12

2019-07-18 Thread Dawid Wysakowicz (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-13327.

Resolution: Fixed

Fixed in
master via: abeb1f5cc73e33800d0af1367f345d1bf2f2822d
1.9: 0faa66747775bd75cc2f55c4d6b00560a8c41c05

> Blink planner not compiling with Scala 2.12
> ---
>
> Key: FLINK-13327
> URL: https://issues.apache.org/jira/browse/FLINK-13327
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Dawid Wysakowicz
>Priority: Blocker
> Fix For: 1.9.0
>
>
> [https://travis-ci.org/apache/flink/jobs/560428262]
>  
> {code:java}
> 11:48:37.007 [ERROR] 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/plan/nodes/resource/ExecNodeResourceTest.scala:183:
>  error: overriding method isBounded in trait StreamTableSource of type 
> ()Boolean;
> 11:48:37.007 [ERROR]  value isBounded needs `override' modifier
> 11:48:37.007 [ERROR] class MockTableSource(val isBounded: Boolean, schema: 
> TableSchema)
> 11:48:37.007 [ERROR]   ^
> 11:48:40.784 [ERROR] 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/TableTestBase.scala:852:
>  error: overriding method isBounded in trait StreamTableSource of type 
> ()Boolean;
> 11:48:40.784 [ERROR]  value isBounded needs `override' modifier
> 11:48:40.784 [ERROR] class TestTableSource(val isBounded: Boolean, schema: 
> TableSchema)
> 11:48:40.785 [ERROR]   ^
> 11:48:40.855 [ERROR] 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/testTableSources.scala:135:
>  error: overriding method isBounded in trait StreamTableSource of type 
> ()Boolean;
> 11:48:40.855 [ERROR]  value isBounded needs `override' modifier
> 11:48:40.855 [ERROR] val isBounded: Boolean,
> 11:48:40.855 [ERROR] ^
> 11:48:40.906 [ERROR] 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/testTableSources.scala:345:
>  error: overriding method isBounded in trait StreamTableSource of type 
> ()Boolean;
> 11:48:40.906 [ERROR]  value isBounded needs `override' modifier
> 11:48:40.906 [ERROR] val isBounded: Boolean,
> 11:48:40.906 [ERROR] ^
> 11:48:40.982 [WARNING] 6 warnings found
> 11:48:40.987 [ERROR] four errors found{code}
>  
> [~godfreyhe]  [~dawidwys]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Closed] (FLINK-13312) move tests for data type mappings between Flink and Hive into its own test class

2019-07-18 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-13312.

Resolution: Fixed

merged in 1.10.0: 9783d88463ace4c728de3f5861efd42c2c07e23e   1.9.0: 
d230aac5a479891b3d5421105c1e862de94a9a89

> move tests for data type mappings between Flink and Hive into its own test 
> class
> 
>
> Key: FLINK-13312
> URL: https://issues.apache.org/jira/browse/FLINK-13312
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13312) move tests for data type mappings between Flink and Hive into its own test class

2019-07-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13312:
---
Labels: pull-request-available  (was: )

> move tests for data type mappings between Flink and Hive into its own test 
> class
> 
>
> Key: FLINK-13312
> URL: https://issues.apache.org/jira/browse/FLINK-13312
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] asfgit closed pull request #9151: [FLINK-13312][hive] move tests for data type mappings between Flink and Hive into its own test class

2019-07-18 Thread GitBox
asfgit closed pull request #9151: [FLINK-13312][hive] move tests for data type 
mappings between Flink and Hive into its own test class
URL: https://github.com/apache/flink/pull/9151
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on issue #9151: [FLINK-12755][hive] move tests for data type mappings between Flink and Hive into its own test class

2019-07-18 Thread GitBox
bowenli86 commented on issue #9151: [FLINK-12755][hive] move tests for data 
type mappings between Flink and Hive into its own test class
URL: https://github.com/apache/flink/pull/9151#issuecomment-512932626
 
 
   @xuefuz thanks for your review!
   
   the kafka test failure in CI is unrelated. Merging


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xuefuz commented on issue #9167: [FLINK-13279][table] Fallback to the builtin catalog when looking for registered tables.

2019-07-18 Thread GitBox
xuefuz commented on issue #9167: [FLINK-13279][table] Fallback to the builtin 
catalog when looking for registered tables.
URL: https://github.com/apache/flink/pull/9167#issuecomment-512920634
 
 
   @dawidwys Thanks for working on this. I have some concerns about the 
approach and posted my comment in the JIRA. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13279) not able to query table registered in catalogs in SQL CLI

2019-07-18 Thread Xuefu Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888197#comment-16888197
 ] 

Xuefu Zhang commented on FLINK-13279:
-

[~dawidwys] thanks for looking into this. After thinking about the proposal and 
reading the PR, I'm very concerned about the approach.

 

Table resolution is different from function resolution and should be 
deterministic and unambiguous. In fact, I don't know of any DB product that has 
such behavior. A table reference in a user's query should be uniquely 
identified: either the reference itself is fully qualified, or the query engine 
qualifies it with the current database. None of the database engines I know of 
would further resolve the table in the default database if the previous 
resolution fails, for example. Instead, it simply reports an error.

 

What's the consequence of this fallback resolution? A user query that should 
fail might not fail, depending on whether the built-in catalog happens to have 
a table with the same name. The query may then even succeed in execution and 
produce an unexpected result. This subtle implication, however slim the chance 
is, introduces unpredictability in query behavior and can cause severe 
consequences for the user.

 

In summary, I think table reference and resolution should be deterministic and 
unambiguous, and this proposal violates that principle.

 

The original problem, as I understand it, is that the planner internally creates 
a table in the built-in catalog and subsequently looks up that table in the 
current catalog. Wouldn't the natural solution be to qualify the created table 
with the built-in catalog/database? This way, we don't have to change table 
resolution at the system level.
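
To illustrate the suggestion, a sketch of what fully qualified references look like,
assuming the built-in catalog keeps its default names; the internal table name
`_tmp_sink_1` is hypothetical:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

EnvironmentSettings settings = EnvironmentSettings.newInstance().inStreamingMode().build();
TableEnvironment tableEnv = TableEnvironment.create(settings);

tableEnv.useCatalog("myhive");
tableEnv.useDatabase("default");

// User tables keep resolving against the current catalog/database:
Table users = tableEnv.sqlQuery("SELECT * FROM hivetable");

// A table the planner registers internally would always be addressed with its full path
// (catalog.database.table), so resolution never depends on the user's current catalog.
Table tmp = tableEnv.sqlQuery(
    "SELECT * FROM default_catalog.default_database.`_tmp_sink_1`");
{code}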

 

> not able to query table registered in catalogs in SQL CLI
> -
>
> Key: FLINK-13279
> URL: https://issues.apache.org/jira/browse/FLINK-13279
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Client, Table SQL / Legacy 
> Planner, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When querying a simple table in a catalog, the SQL CLI reports 
> "org.apache.flink.table.api.TableException: No table was registered under the 
> name ArrayBuffer(default: select * from hivetable)."
> [~ykt836] can you please help triage this ticket to the proper person?
> Repro steps in SQL CLI (to set up dependencies of HiveCatalog, please refer 
> to dev/table/catalog.md):
> {code:java}
> Flink SQL> show catalogs;
> default_catalog
> myhive
> Flink SQL> use catalog myhive
> > ;
> Flink SQL> show databases;
> default
> Flink SQL> show tables;
> hivetable
> products
> test
> Flink SQL> describe hivetable;
> root
>  |-- name: STRING
>  |-- score: DOUBLE
> Flink SQL> select * from hivetable;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.TableException: No table was registered under the 
> name ArrayBuffer(default: select * from hivetable).
> {code}
> Exception in log:
> {code:java}
> 2019-07-15 14:59:12,273 WARN  org.apache.flink.table.client.cli.CliClient 
>   - Could not execute SQL statement.
> org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL 
> query.
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:485)
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:317)
>   at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:469)
>   at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:291)
>   at java.util.Optional.ifPresent(Optional.java:159)
>   at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:200)
>   at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:123)
>   at org.apache.flink.table.client.SqlClient.start(SqlClient.java:105)
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:194)
> Caused by: org.apache.flink.table.api.TableException: No table was registered 
> under the name ArrayBuffer(default: select * from hivetable).
>   at 
> org.apache.flink.table.api.internal.TableEnvImpl.insertInto(TableEnvImpl.scala:529)
>   at 
> org.apache.flink.table.api.internal.TableEnvImpl.insertInto(TableEnvImpl.scala:507)
>   at 
> org.apache.flink.table.api.internal.BatchTableEnvImpl.insertInto(BatchTableEnvImpl.scala:58)
>   at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
>   at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:416)

[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305030971
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
 
 Review comment:
   I tried to remove the passive voice from this introduction. Also maybe drop 
references to DataSet? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r304912777
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
 
 Review comment:
   Due to historical reasons, before Flink 1.9, Flink's Table & SQL API data 
types were tightly coupled to Flink's TypeInformation. TypeInformation is used 
in the DataStream API and is sufficient to describe all information needed to 
serialize and deserialize JVM-based objects in a distributed setting.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r304912777
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
 
 Review comment:
   Due to historical reasons, before Flink 1.9, Flink's Table & SQL API data 
types were tightly coupled to Flink's TypeInformation. TypeInformation is used 
in DataSet and DataStream API's and is sufficient to describe all information 
needed to serialize and deserialize JVM-based objects in a distributed setting.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305029535
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
 
 Review comment:
   I tried to rewrite some of the introductory paragraphs to use a less passive 
voice. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305024945
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
 
 Review comment:
   ```suggestion
   Users of the JVM-based API work with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305025804
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
+defining connectors, catalogs, or user-defined functions.
+
+A `DataType` instance has two responsibilities:
+- **Declaration of a logical type** which does not imply a concrete physical 
representation for transmission
+or storage but defines the boundaries between JVM-based languages and the 
table ecosystem.
+- *Optional:* **Giving hints about the physical representation of data to the 
planner** which is useful at the edges to other APIs .
+
+For JVM-based languages, all pre-defined data types are available in 
`org.apache.flink.table.api.DataTypes`.
+
+It is recommended to add a star import to your table programs for having a 
fluent API:
+
+
+
+
+{% highlight java %}
+import static org.apache.flink.table.api.DataTypes.*;
+
+DataType t = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.DataTypes._
+
+val t: DataType = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+
+ Physical Hints
+
+Physical hints are required at the edges of the table ecosystem. Hints 
indicate the data format that an implementation
+expects.
+
+For example, a data source could express that it produces values for logical 
`TIMESTAMP`s using a `java.sql.Timestamp` class
+instead of using `java.time.LocalDateTime` which would be the default. With 
this information, the runtime is able to convert
+the produced class into its internal data format. In return, a data sink can 
declare the data format it consumes from the runtime.
+
+Here are some examples of how to declare a bridging conversion class:
+
+
+
+
+{% highlight java %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+DataType t = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+DataType t = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(int[].class);
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+val t: DataType = 
DataTypes.TIMESTAMP(3).bridgedTo(classOf[java.sql.Timestamp]);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+val t: DataType = 
DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(classOf[Array[Int]]);
+{% endhighlight %}
+
+
+
+

[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305027128
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
+defining connectors, catalogs, or user-defined functions.
+
+A `DataType` instance has two responsibilities:
+- **Declaration of a logical type** which does not imply a concrete physical 
representation for transmission
+or storage but defines the boundaries between JVM-based languages and the 
table ecosystem.
+- *Optional:* **Giving hints about the physical representation of data to the 
planner** which is useful at the edges to other APIs .
+
+For JVM-based languages, all pre-defined data types are available in 
`org.apache.flink.table.api.DataTypes`.
+
+It is recommended to add a star import to your table programs for having a 
fluent API:
+
+
+
+
+{% highlight java %}
+import static org.apache.flink.table.api.DataTypes.*;
+
+DataType t = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.DataTypes._
+
+val t: DataType = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+
+ Physical Hints
+
+Physical hints are required at the edges of the table ecosystem. Hints 
indicate the data format that an implementation
+expects.
+
+For example, a data source could express that it produces values for logical 
`TIMESTAMP`s using a `java.sql.Timestamp` class
+instead of using `java.time.LocalDateTime` which would be the default. With 
this information, the runtime is able to convert
+the produced class into its internal data format. In return, a data sink can 
declare the data format it consumes from the runtime.
+
+Here are some examples of how to declare a bridging conversion class:
+
+
+
+
+{% highlight java %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+DataType t = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+DataType t = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(int[].class);
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+val t: DataType = 
DataTypes.TIMESTAMP(3).bridgedTo(classOf[java.sql.Timestamp]);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+val t: DataType = 
DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(classOf[Array[Int]]);
+{% endhighlight %}
+
+
+
+

[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305025907
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
+defining connectors, catalogs, or user-defined functions.
+
+A `DataType` instance has two responsibilities:
+- **Declaration of a logical type** which does not imply a concrete physical 
representation for transmission
+or storage but defines the boundaries between JVM-based languages and the 
table ecosystem.
+- *Optional:* **Giving hints about the physical representation of data to the 
planner** which is useful at the edges to other APIs .
+
+For JVM-based languages, all pre-defined data types are available in 
`org.apache.flink.table.api.DataTypes`.
+
+It is recommended to add a star import to your table programs for having a 
fluent API:
+
+
+
+
+{% highlight java %}
+import static org.apache.flink.table.api.DataTypes.*;
+
+DataType t = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.DataTypes._
+
+val t: DataType = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+
+ Physical Hints
+
+Physical hints are required at the edges of the table ecosystem. Hints 
indicate the data format that an implementation
+expects.
+
+For example, a data source could express that it produces values for logical 
`TIMESTAMP`s using a `java.sql.Timestamp` class
+instead of using `java.time.LocalDateTime` which would be the default. With 
this information, the runtime is able to convert
+the produced class into its internal data format. In return, a data sink can 
declare the data format it consumes from the runtime.
+
+Here are some examples of how to declare a bridging conversion class:
+
+
+
+
+{% highlight java %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+DataType t = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+DataType t = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(int[].class);
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+val t: DataType = 
DataTypes.TIMESTAMP(3).bridgedTo(classOf[java.sql.Timestamp]);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+val t: DataType = 
DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(classOf[Array[Int]]);
+{% endhighlight %}
+
+
+
+

[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305027613
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
+defining connectors, catalogs, or user-defined functions.
+
+A `DataType` instance has two responsibilities:
+- **Declaration of a logical type** which does not imply a concrete physical 
representation for transmission
+or storage but defines the boundaries between JVM-based languages and the 
table ecosystem.
+- *Optional:* **Giving hints about the physical representation of data to the 
planner** which is useful at the edges to other APIs .
+
+For JVM-based languages, all pre-defined data types are available in 
`org.apache.flink.table.api.DataTypes`.
+
+It is recommended to add a star import to your table programs for having a 
fluent API:
+
+
+
+
+{% highlight java %}
+import static org.apache.flink.table.api.DataTypes.*;
+
+DataType t = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.DataTypes._
+
+val t: DataType = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+
+ Physical Hints
+
+Physical hints are required at the edges of the table ecosystem. Hints 
indicate the data format that an implementation
+expects.
+
+For example, a data source could express that it produces values for logical 
`TIMESTAMP`s using a `java.sql.Timestamp` class
+instead of using `java.time.LocalDateTime` which would be the default. With 
this information, the runtime is able to convert
+the produced class into its internal data format. In return, a data sink can 
declare the data format it consumes from the runtime.
+
+Here are some examples of how to declare a bridging conversion class:
+
+
+
+
+{% highlight java %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+DataType t = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+DataType t = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(int[].class);
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+val t: DataType = 
DataTypes.TIMESTAMP(3).bridgedTo(classOf[java.sql.Timestamp]);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+val t: DataType = 
DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(classOf[Array[Int]]);
+{% endhighlight %}
+
+
+
+

[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305026501
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
+defining connectors, catalogs, or user-defined functions.
+
+A `DataType` instance has two responsibilities:
+- **Declaration of a logical type** which does not imply a concrete physical 
representation for transmission
+or storage but defines the boundaries between JVM-based languages and the 
table ecosystem.
+- *Optional:* **Giving hints about the physical representation of data to the 
planner** which is useful at the edges to other APIs .
+
+For JVM-based languages, all pre-defined data types are available in 
`org.apache.flink.table.api.DataTypes`.
+
+It is recommended to add a star import to your table programs for having a 
fluent API:
+
+
+
+
+{% highlight java %}
+import static org.apache.flink.table.api.DataTypes.*;
+
+DataType t = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.DataTypes._
+
+val t: DataType = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+
+ Physical Hints
+
+Physical hints are required at the edges of the table ecosystem. Hints 
indicate the data format that an implementation
+expects.
+
+For example, a data source could express that it produces values for logical 
`TIMESTAMP`s using a `java.sql.Timestamp` class
+instead of using `java.time.LocalDateTime` which would be the default. With 
this information, the runtime is able to convert
+the produced class into its internal data format. In return, a data sink can 
declare the data format it consumes from the runtime.
+
+Here are some examples of how to declare a bridging conversion class:
+
+
+
+
+{% highlight java %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+DataType t = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+DataType t = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(int[].class);
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+val t: DataType = 
DataTypes.TIMESTAMP(3).bridgedTo(classOf[java.sql.Timestamp]);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+val t: DataType = 
DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(classOf[Array[Int]]);
+{% endhighlight %}
+
+
+
+

[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305022907
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
 
 Review comment:
   ```suggestion
   spans multiple releases, and the community aims to finish this effort by 
Flink 1.10.
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305024070
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
 
 Review comment:
   >A *data type* describes a data type . . . 
   
   It sounds strange to me since this sentence is trying to describe what a 
data type is. Would something like "A *data type* describes the logical types 
of a value . .  " make sense? I don't know enough about the new type system to 
know if that's correct. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305024625
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW<myField ARRAY<BOOLEAN>, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
 
 Review comment:
   Is JVM correct or are DataTypes in the python table api as well? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r304912777
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
 
 Review comment:
   Due to historical reasons, before Flink 1.9, Flink's Table & SQL API data 
types are tightly coupled to Flink's TypeInformation. TypeInformation is used 
in DataSet and DataStream API's and is sufficient to describe all information 
needed to serialize and deserialize JVM-based objects in a distributed setting.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305025425
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW<myField ARRAY<BOOLEAN>, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
+defining connectors, catalogs, or user-defined functions.
+
+A `DataType` instance has two responsibilities:
+- **Declaration of a logical type** which does not imply a concrete physical 
representation for transmission
+or storage but defines the boundaries between JVM-based languages and the 
table ecosystem.
+- *Optional:* **Giving hints about the physical representation of data to the 
planner** which is useful at the edges to other APIs .
+
+For JVM-based languages, all pre-defined data types are available in 
`org.apache.flink.table.api.DataTypes`.
+
+It is recommended to add a star import to your table programs for having a 
fluent API:
+
+
+
+
+{% highlight java %}
+import static org.apache.flink.table.api.DataTypes.*;
+
+DataType t = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.DataTypes._
+
+val t: DataType = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+
+ Physical Hints
+
+Physical hints are required at the edges of the table ecosystem. Hints 
indicate the data format that an implementation
 
 Review comment:
   Can you expand on what the edges are? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r304913208
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
 
 Review comment:
   However, TypeInformation was not designed to represent logical types 
independent of an actual JVM class. In the past, it was difficult to map SQL 
standard types to this abstraction. Furthermore, some types were not 
SQL-compliant and introduced without a bigger picture in mind.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305026826
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW<myField ARRAY<BOOLEAN>, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
+defining connectors, catalogs, or user-defined functions.
+
+A `DataType` instance has two responsibilities:
+- **Declaration of a logical type** which does not imply a concrete physical 
representation for transmission
+or storage but defines the boundaries between JVM-based languages and the 
table ecosystem.
+- *Optional:* **Giving hints about the physical representation of data to the 
planner** which is useful at the edges to other APIs .
+
+For JVM-based languages, all pre-defined data types are available in 
`org.apache.flink.table.api.DataTypes`.
+
+It is recommended to add a star import to your table programs for having a 
fluent API:
+
+
+
+
+{% highlight java %}
+import static org.apache.flink.table.api.DataTypes.*;
+
+DataType t = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.DataTypes._
+
+val t: DataType = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+
+ Physical Hints
+
+Physical hints are required at the edges of the table ecosystem. Hints 
indicate the data format that an implementation
+expects.
+
+For example, a data source could express that it produces values for logical 
`TIMESTAMP`s using a `java.sql.Timestamp` class
+instead of using `java.time.LocalDateTime` which would be the default. With 
this information, the runtime is able to convert
+the produced class into its internal data format. In return, a data sink can 
declare the data format it consumes from the runtime.
+
+Here are some examples of how to declare a bridging conversion class:
+
+
+
+
+{% highlight java %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+DataType t = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+DataType t = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(int[].class);
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+val t: DataType = 
DataTypes.TIMESTAMP(3).bridgedTo(classOf[java.sql.Timestamp]);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+val t: DataType = 
DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(classOf[Array[Int]]);
+{% endhighlight %}
+
+
+
+

[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305022448
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
 
 Review comment:
   ```suggestion
   solution for API stability and standard compliance.
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r304915601
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW<myField ARRAY<BOOLEAN>, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
 
 Review comment:
   ```suggestion
   A list of all pre-defined data types can be found 
[below](#list-of-data-types).
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add documentation for the new Table & SQL API type system

2019-07-18 Thread GitBox
sjwiesman commented on a change in pull request #9161: [FLINK-13262][docs] Add 
documentation for the new Table & SQL API type system
URL: https://github.com/apache/flink/pull/9161#discussion_r305027290
 
 

 ##
 File path: docs/dev/table/types.md
 ##
 @@ -0,0 +1,1201 @@
+---
+title: "Data Types"
+nav-parent_id: tableapi
+nav-pos: 1
+---
+
+
+Due to historical reasons, the data types of Flink's Table & SQL API were 
closely coupled to Flink's
+`TypeInformation` before Flink 1.9. `TypeInformation` is used in DataSet and 
DataStream API and is
+sufficient to describe all information needed to serialize and deserialize 
JVM-based objects in a
+distributed setting.
+
+However, `TypeInformation` was not designed to properly represent logical 
types independent of an
+actual JVM class. In the past, it was difficult to properly map SQL standard 
types to this abstraction.
+Furthermore, some types were not SQL-compliant and were introduced without a 
bigger picture in mind.
+
+Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
+solution for API stablility and standard compliance.
+
+Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its introduction
+spans multiple releases and the community aims to finish this effort by Flink 
1.10.
+
+Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
+not every combination of planner and data type is supported. Furthermore, 
planners might not support every
+data type with the desired precision or parameter.
+
+Attention Please see the planner 
compatibility table and limitations
+section before using a data type.
+
+* This will be replaced by the TOC
+{:toc}
+
+Data Type
+-
+
+A *data type* describes the data type of a value in the table ecosystem. It 
can be used to declare input and/or
+output types of operations.
+
+Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
+about the nullability of a value for efficient handling of scalar expressions.
+
+Examples of data types are:
+- `INT`
+- `INT NOT NULL`
+- `INTERVAL DAY TO SECOND(3)`
+- `ROW<myField ARRAY<BOOLEAN>, myOtherField TIMESTAMP(3)>`
+
+A list of all pre-defined data types can be found in 
[below](#list-of-data-types).
+
+### Data Types in the Table API
+
+Users of the JVM-based API are dealing with instances of 
`org.apache.flink.table.types.DataType` within the Table API or when
+defining connectors, catalogs, or user-defined functions.
+
+A `DataType` instance has two responsibilities:
+- **Declaration of a logical type** which does not imply a concrete physical 
representation for transmission
+or storage but defines the boundaries between JVM-based languages and the 
table ecosystem.
+- *Optional:* **Giving hints about the physical representation of data to the 
planner** which is useful at the edges to other APIs .
+
+For JVM-based languages, all pre-defined data types are available in 
`org.apache.flink.table.api.DataTypes`.
+
+It is recommended to add a star import to your table programs for having a 
fluent API:
+
+
+
+
+{% highlight java %}
+import static org.apache.flink.table.api.DataTypes.*;
+
+DataType t = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.DataTypes._
+
+val t: DataType = INTERVAL(DAY(), SECOND(3));
+{% endhighlight %}
+
+
+
+
+ Physical Hints
+
+Physical hints are required at the edges of the table ecosystem. Hints 
indicate the data format that an implementation
+expects.
+
+For example, a data source could express that it produces values for logical 
`TIMESTAMP`s using a `java.sql.Timestamp` class
+instead of using `java.time.LocalDateTime` which would be the default. With 
this information, the runtime is able to convert
+the produced class into its internal data format. In return, a data sink can 
declare the data format it consumes from the runtime.
+
+Here are some examples of how to declare a bridging conversion class:
+
+
+
+
+{% highlight java %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+DataType t = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+DataType t = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(int[].class);
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+// tell the runtime to not produce or consume java.time.LocalDateTime instances
+// but java.sql.Timestamp
+val t: DataType = 
DataTypes.TIMESTAMP(3).bridgedTo(classOf[java.sql.Timestamp]);
+
+// tell the runtime to not produce or consume boxed integer arrays
+// but primitive int arrays
+val t: DataType = 
DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(classOf[Array[Int]]);
+{% endhighlight %}
+
+
+
+

[jira] [Commented] (FLINK-11843) Dispatcher fails to recover jobs if leader change happens during JobManagerRunner termination

2019-07-18 Thread Steven Zhen Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888172#comment-16888172
 ] 

Steven Zhen Wu commented on FLINK-11843:


[~Tison] where do I email you the log?

> Dispatcher fails to recover jobs if leader change happens during 
> JobManagerRunner termination
> -
>
> Key: FLINK-11843
> URL: https://issues.apache.org/jira/browse/FLINK-11843
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.7.2, 1.8.0, 1.9.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
> Fix For: 1.9.0, 1.10.0
>
>
> The {{Dispatcher}} fails to recover jobs if a leader change happens during 
> the {{JobManagerRunner}} termination of the previous run. The problem is that 
> we schedule the start future of the recovered {{JobGraph}} using the 
> {{MainThreadExecutor}} and additionally require that this future is completed 
> before any other recovery operation from a subsequent leadership session is 
> executed. If now the leadership changes, the {{MainThreadExecutor}} will be 
> invalidated and the scheduled future will never be completed.
> The relevant ML thread: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/1-7-1-job-stuck-in-suspended-state-td26439.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] bowenli86 commented on a change in pull request #9049: [FLINK-13176][SQL CLI] remember current catalog and database in SQL CLI SessionContext

2019-07-18 Thread GitBox
bowenli86 commented on a change in pull request #9049: [FLINK-13176][SQL CLI] 
remember current catalog and database in SQL CLI SessionContext
URL: https://github.com/apache/flink/pull/9049#discussion_r305016325
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/local/DependencyTest.java
 ##
 @@ -193,7 +200,20 @@ public Catalog createCatalog(String name, Map<String, String> properties) {
ADDITIONAL_TEST_DATABASE,
new CatalogDatabaseImpl(new 
HashMap<>(), null),
false);
-   } catch (DatabaseAlreadyExistException e) {
+   hiveCatalog.createTable(
 
 Review comment:
   yeah, the reason at that time is we don't have a good way to test and cover 
catalog discovery for `HiveCatalog`. As soon as we have end-2-end test (WIP 
https://github.com/apache/flink/pull/9149), we should be able to remove hive 
test dependencies from sql cli


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on a change in pull request #9049: [FLINK-13176][SQL CLI] remember current catalog and database in SQL CLI SessionContext

2019-07-18 Thread GitBox
bowenli86 commented on a change in pull request #9049: [FLINK-13176][SQL CLI] 
remember current catalog and database in SQL CLI SessionContext
URL: https://github.com/apache/flink/pull/9049#discussion_r305014586
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java
 ##
 @@ -232,7 +238,10 @@ public void useCatalog(SessionContext session, String 
catalogName) throws SqlExe
.getTableEnvironment();
 
 Review comment:
   @twalthr can you elaborate a bit more? AFAICT, 
`org.apache.flink.table.client.gateway.local.LocalExecutor#validateSession` 
seems only to be responsible for creating an execution context, but cannot set 
any current catalog and current database?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on issue #9157: [FLINK-13086]add Chinese documentation for catalogs

2019-07-18 Thread GitBox
bowenli86 commented on issue #9157: [FLINK-13086]add Chinese documentation for 
catalogs
URL: https://github.com/apache/flink/pull/9157#issuecomment-512890338
 
 
   Hi @yiduwangkai , thanks for your contribution. Please submit the PR against 
master branch, and I will merge to both master and 1.9 branch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13283) JDBCLookup Exception: Unsupported type: LocalDate

2019-07-18 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-13283:
---
Component/s: Connectors / JDBC

> JDBCLookup Exception: Unsupported type: LocalDate
> -
>
> Key: FLINK-13283
> URL: https://issues.apache.org/jira/browse/FLINK-13283
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.0
>Reporter: LakeShen
>Priority: Major
> Fix For: 1.9.0
>
>
> Hi, when I use the Flink 1.9 JDBCTableSource and create a TableSchema like 
> this:
> final TableSchema schema = TableSchema.builder()
>   .field("id", DataTypes.INT())
>   .field("create", DataTypes.DATE())
>   .field("update", DataTypes.DATE())
>   .field("name", DataTypes.STRING())
>   .field("age", DataTypes.INT())
>   .field("address", DataTypes.STRING())
>   .field("birthday",DataTypes.DATE())
>   .field("likethings", DataTypes.STRING())
>   .build();
> I use JDBCTableSource.builder() to create the JDBCTableSource. When I run the 
> program, there is an exception:
> {color:red}java.lang.IllegalArgumentException: Unsupported type: 
> LocalDate{color}
> Looking at the source code, I find that LegacyTypeInfoDataTypeConverter 
> converts DateType to Types.LOCAL_DATE, but the TYPE_MAPPING HashMap in the 
> JDBCTypeUtil class has no entry for Types.LOCAL_DATE, so the exception is 
> thrown.
> Does the JDBC dimension table support time data such as Date? Maybe it is a 
> bug in the JDBCTableSource join.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
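
The physical-hint mechanism from the types documentation quoted earlier in this digest suggests a possible workaround, sketched below. It is untested here and not a confirmed fix: the idea is to bridge the `DATE` fields to `java.sql.Date` so the legacy converter produces `Types.SQL_DATE` (which `JDBCTypeUtil` maps) instead of `Types.LOCAL_DATE` (which it does not).

```scala
import org.apache.flink.table.api.DataTypes

// Untested idea, not a verified fix: request the legacy java.sql.Date
// representation instead of the default java.time.LocalDate.
val birthday = DataTypes.DATE().bridgedTo(classOf[java.sql.Date])
```

The same hint would be added to the `create`, `update`, and `birthday` fields of the reporter's schema.
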


[jira] [Assigned] (FLINK-13277) add documentation of Hive source/sink

2019-07-18 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-13277:


Assignee: Rui Li

> add documentation of Hive source/sink
> -
>
> Key: FLINK-13277
> URL: https://issues.apache.org/jira/browse/FLINK-13277
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> add documentation of Hive source/sink in {{batch/connector.md}}
> its corresponding Chinese one is FLINK-13278
> cc [~xuefuz] [~lirui] [~Terry1897]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13327) Blink planner not compiling with Scala 2.12

2019-07-18 Thread Dawid Wysakowicz (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-13327:


Assignee: Dawid Wysakowicz

> Blink planner not compiling with Scala 2.12
> ---
>
> Key: FLINK-13327
> URL: https://issues.apache.org/jira/browse/FLINK-13327
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Dawid Wysakowicz
>Priority: Blocker
> Fix For: 1.9.0
>
>
> [https://travis-ci.org/apache/flink/jobs/560428262]
>  
> {code:java}
> 11:48:37.007 [ERROR] 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/plan/nodes/resource/ExecNodeResourceTest.scala:183:
>  error: overriding method isBounded in trait StreamTableSource of type 
> ()Boolean;
> 11:48:37.007 [ERROR]  value isBounded needs `override' modifier
> 11:48:37.007 [ERROR] class MockTableSource(val isBounded: Boolean, schema: 
> TableSchema)
> 11:48:37.007 [ERROR]   ^
> 11:48:40.784 [ERROR] 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/TableTestBase.scala:852:
>  error: overriding method isBounded in trait StreamTableSource of type 
> ()Boolean;
> 11:48:40.784 [ERROR]  value isBounded needs `override' modifier
> 11:48:40.784 [ERROR] class TestTableSource(val isBounded: Boolean, schema: 
> TableSchema)
> 11:48:40.785 [ERROR]   ^
> 11:48:40.855 [ERROR] 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/testTableSources.scala:135:
>  error: overriding method isBounded in trait StreamTableSource of type 
> ()Boolean;
> 11:48:40.855 [ERROR]  value isBounded needs `override' modifier
> 11:48:40.855 [ERROR] val isBounded: Boolean,
> 11:48:40.855 [ERROR] ^
> 11:48:40.906 [ERROR] 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/testTableSources.scala:345:
>  error: overriding method isBounded in trait StreamTableSource of type 
> ()Boolean;
> 11:48:40.906 [ERROR]  value isBounded needs `override' modifier
> 11:48:40.906 [ERROR] val isBounded: Boolean,
> 11:48:40.906 [ERROR] ^
> 11:48:40.982 [WARNING] 6 warnings found
> 11:48:40.987 [ERROR] four errors found{code}
>  
> [~godfreyhe]  [~dawidwys]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13327) Blink planner not compiling with Scala 2.12

2019-07-18 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-13327:


 Summary: Blink planner not compiling with Scala 2.12
 Key: FLINK-13327
 URL: https://issues.apache.org/jira/browse/FLINK-13327
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.9.0
Reporter: Chesnay Schepler
 Fix For: 1.9.0


[https://travis-ci.org/apache/flink/jobs/560428262]

 
{code:java}
11:48:37.007 [ERROR] 
/home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/plan/nodes/resource/ExecNodeResourceTest.scala:183:
 error: overriding method isBounded in trait StreamTableSource of type 
()Boolean;
11:48:37.007 [ERROR]  value isBounded needs `override' modifier
11:48:37.007 [ERROR] class MockTableSource(val isBounded: Boolean, schema: 
TableSchema)
11:48:37.007 [ERROR]   ^
11:48:40.784 [ERROR] 
/home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/TableTestBase.scala:852:
 error: overriding method isBounded in trait StreamTableSource of type 
()Boolean;
11:48:40.784 [ERROR]  value isBounded needs `override' modifier
11:48:40.784 [ERROR] class TestTableSource(val isBounded: Boolean, schema: 
TableSchema)
11:48:40.785 [ERROR]   ^
11:48:40.855 [ERROR] 
/home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/testTableSources.scala:135:
 error: overriding method isBounded in trait StreamTableSource of type 
()Boolean;
11:48:40.855 [ERROR]  value isBounded needs `override' modifier
11:48:40.855 [ERROR] val isBounded: Boolean,
11:48:40.855 [ERROR] ^
11:48:40.906 [ERROR] 
/home/travis/build/apache/flink/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/util/testTableSources.scala:345:
 error: overriding method isBounded in trait StreamTableSource of type 
()Boolean;
11:48:40.906 [ERROR]  value isBounded needs `override' modifier
11:48:40.906 [ERROR] val isBounded: Boolean,
11:48:40.906 [ERROR] ^
11:48:40.982 [WARNING] 6 warnings found
11:48:40.987 [ERROR] four errors found{code}
 

[~godfreyhe]  [~dawidwys]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
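
For readers unfamiliar with the Scala rule behind these errors, here is a minimal, self-contained sketch. It is not the Flink code: in Flink the inherited member is the Java default method `StreamTableSource#isBounded`, which Scala 2.12 appears to treat as a concrete implementation, hence the newly required `override` modifier.

```scala
// Stand-in for a parent that already provides isBounded, playing the role
// the Java default method plays once Scala 2.12 sees it as concrete.
trait BoundedSource {
  def isBounded: Boolean = false
}

// error: overriding method isBounded in trait BoundedSource of type Boolean;
//        value isBounded needs `override' modifier
// class MockSource(val isBounded: Boolean) extends BoundedSource

// compiles: the constructor val explicitly overrides the inherited member
class MockSource(override val isBounded: Boolean) extends BoundedSource
```
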


[GitHub] [flink] azagrebin commented on a change in pull request #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-18 Thread GitBox
azagrebin commented on a change in pull request #9105: 
[FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory 
size into wrong configuration instance.
URL: https://github.com/apache/flink/pull/9105#discussion_r304992456
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ##
 @@ -185,6 +199,10 @@ public ResourceManager(
this.jmResourceIdRegistrations = new HashMap<>(4);
this.taskExecutors = new HashMap<>(8);
this.taskExecutorGatewayFutures = new HashMap<>(8);
+
+   this.defaultTaskManagerMemoryMB = 
ConfigurationUtils.getTaskManagerHeapMemory(flinkConfig).getMebiBytes();
+   this.numberOfTaskSlots = 
flinkConfig.getInteger(TaskManagerOptions.NUM_TASK_SLOTS);
+   this.slotsPerWorker = 
updateTaskManagerConfigAndCreateWorkerSlotProfiles(this.flinkConfig, 
defaultTaskManagerMemoryMB, numberOfTaskSlots);
 
 Review comment:
   ok, I see it is actually not used in `StandaloneResourceManager`.
   Yarn and mesos RMs could extend an abstract 
`ResourceManagerWithSlotsPerWorker` but it will probably complicate things.
   It will probably need to change with the dynamic memory slicing anyways.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

