[jira] [Created] (FLINK-15432) JDBCOutputFormat add flushInterval just like JDBCUpsertOutputFormat

2019-12-27 Thread xiaodao (Jira)
xiaodao created FLINK-15432:
---

 Summary: JDBCOutputFormat  add flushInterval just like 
JDBCUpsertOutputFormat
 Key: FLINK-15432
 URL: https://issues.apache.org/jira/browse/FLINK-15432
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Reporter: xiaodao


JDBCOutputFormat also needs to flush data periodically, with a configurable flush interval, just like JDBCUpsertOutputFormat.
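As a sketch of what the ticket asks for (class and field names below are illustrative, not Flink's actual API): a buffering writer can flush either when the batch reaches flushMaxSize or when a background timer fires every flush interval, which is roughly what JDBCUpsertOutputFormat already does.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only (not Flink's actual classes): buffer records and
// flush on either a size threshold or a periodic timer.
class BufferingJdbcWriter implements AutoCloseable {
    private final List<String> buffer = new ArrayList<>();
    private final int flushMaxSize;
    private final ScheduledExecutorService scheduler;
    int flushCount = 0; // exposed so the behavior is observable

    BufferingJdbcWriter(int flushMaxSize, long flushIntervalMillis) {
        this.flushMaxSize = flushMaxSize;
        this.scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
        // The addition this ticket proposes: a time-based flush alongside
        // the existing size-based flush.
        scheduler.scheduleWithFixedDelay(this::flush, flushIntervalMillis,
                flushIntervalMillis, TimeUnit.MILLISECONDS);
    }

    synchronized void write(String record) {
        buffer.add(record);
        if (buffer.size() >= flushMaxSize) {
            flush();
        }
    }

    synchronized void flush() {
        if (!buffer.isEmpty()) {
            // A real implementation would execute the batched JDBC
            // statement here.
            flushCount++;
            buffer.clear();
        }
    }

    @Override
    public void close() {
        scheduler.shutdown();
        flush(); // flush whatever is left on close
    }
}
```

Without the timer, a buffer that never reaches flushMaxSize would sit unflushed until close(), which is exactly the gap a flush interval closes.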



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10711: [hotfix][docs] Fix a typo in [flink-example] - CollectionExecutionExample.java

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10711: [hotfix][docs] Fix a typo in 
[flink-example] - CollectionExecutionExample.java
URL: https://github.com/apache/flink/pull/10711#issuecomment-569393203
 
 
   
   ## CI report:
   
   * 286126c476bc24bace36cc84ce7e284ab7b4e741 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142513531) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3964)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10712: [FLINK-15389][Connectors/flink-jdbc] JDBCUpsertOutputFormat no need to create schedule flush when flushMaxSize = 1

2019-12-27 Thread GitBox
flinkbot commented on issue #10712: [FLINK-15389][Connectors/flink-jdbc] 
JDBCUpsertOutputFormat no need to create schedule flush when flushMaxSize = 1
URL: https://github.com/apache/flink/pull/10712#issuecomment-569394891
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1ff4b6620fb5a9c9c9dcdf0131a9d448999ed9fa (Sat Dec 28 
07:29:37 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15389) JDBCUpsertOutputFormat no need to create schedule flush when flushMaxSize = 1

2019-12-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15389:
---
Labels: pull-request-available  (was: )

> JDBCUpsertOutputFormat  no need to create schedule flush when  flushMaxSize = 
> 1
> ---
>
> Key: FLINK-15389
> URL: https://issues.apache.org/jira/browse/FLINK-15389
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
>Reporter: xiaodao
>Assignee: xiaodao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> When flushMaxSize is set to 1, flush() is called for every record, so there 
> is no need to create a scheduled task to flush data.
> It's better to change
> if (flushIntervalMills != 0)
> to
> if (flushIntervalMills != 0 && flushMaxSize != 1)
>  
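The proposed guard can be expressed as a small pure predicate (a sketch; names follow the ticket, not necessarily the actual code):

```java
// Decide whether a scheduled flush task is worth creating. With
// flushMaxSize == 1 every write flushes synchronously, so a timer only adds
// overhead; flushIntervalMills == 0 disables time-based flushing entirely.
class FlushPolicy {
    static boolean needsScheduledFlush(long flushIntervalMills, int flushMaxSize) {
        return flushIntervalMills != 0 && flushMaxSize != 1;
    }
}
```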





[GitHub] [flink] zoudaokoulife opened a new pull request #10712: [FLINK-15389][Connectors/flink-jdbc] JDBCUpsertOutputFormat no need to create schedule flush when flushMaxSize = 1

2019-12-27 Thread GitBox
zoudaokoulife opened a new pull request #10712: 
[FLINK-15389][Connectors/flink-jdbc] JDBCUpsertOutputFormat no need to create 
schedule flush when flushMaxSize = 1
URL: https://github.com/apache/flink/pull/10712
 
 
   
   ## What is the purpose of the change
   Avoid creating a scheduled flush in JDBCUpsertOutputFormat when flushMaxSize = 1.
   
   ## Brief change log
   Modify the flush-scheduler creation condition from
   `if (flushIntervalMills != 0)`  
   to
   `if (flushIntervalMills != 0 && flushMaxSize != 1)`
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: ( no )
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: ( no )
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (not applicable )
   




[GitHub] [flink] flinkbot commented on issue #10711: [hotfix][docs] Fix a typo in [flink-example] - CollectionExecutionExample.java

2019-12-27 Thread GitBox
flinkbot commented on issue #10711: [hotfix][docs] Fix a typo in 
[flink-example] - CollectionExecutionExample.java
URL: https://github.com/apache/flink/pull/10711#issuecomment-569393203
 
 
   
   ## CI report:
   
   * 286126c476bc24bace36cc84ce7e284ab7b4e741 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-27 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li updated FLINK-15430:
---
Component/s: Table SQL / Planner

> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Benchao Li
>Priority: Major
>
> Our SQL jobs were migrated from 1.5 to 1.9, and from the legacy planner to 
> the blink planner. We found that some large SQL queries hit a code-generation 
> problem: the generated code exceeds Java's 64KB method size limit.
>  
> After searching the existing issues, we found 
> https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
> extent, but it has not yet been fixed for the blink planner.
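For context on the limitation itself: the JVM rejects any single method whose bytecode exceeds 64KB ("code too large"). Code generators typically work around this by splitting one huge generated method into chained sub-methods, which is the kind of splitting FLINK-8274 introduced for long projection lists. A toy illustration of the idea (not Flink's actual generated code):

```java
// Toy illustration: instead of emitting one enormous method, a code
// generator can emit a driver that delegates to size-bounded sub-methods,
// each safely under the 64KB bytecode limit.
class SplitProjection {
    static int project(int[] row) {
        int acc = 0;
        acc = projectPart1(row, acc); // generated split #1
        acc = projectPart2(row, acc); // generated split #2
        return acc;
    }

    private static int projectPart1(int[] row, int acc) {
        return acc + row[0];
    }

    private static int projectPart2(int[] row, int acc) {
        return acc + row[1];
    }
}
```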





[jira] [Created] (FLINK-15431) Add numLateRecordsDropped/lateRecordsDroppedRate/watermarkLatency in CepOperator

2019-12-27 Thread Benchao Li (Jira)
Benchao Li created FLINK-15431:
--

 Summary: Add 
numLateRecordsDropped/lateRecordsDroppedRate/watermarkLatency in CepOperator
 Key: FLINK-15431
 URL: https://issues.apache.org/jira/browse/FLINK-15431
 Project: Flink
  Issue Type: Improvement
  Components: Library / CEP
Reporter: Benchao Li


I find that the current CepOperator has no metrics for 
numLateRecordsDropped/lateRecordsDroppedRate/watermarkLatency, which are 
available in WindowOperator.

This makes debugging and monitoring inconvenient for users, so I propose to 
add these metrics to CepOperator as well.
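For reference, an element is typically counted as late (and dropped) when its timestamp falls at or behind the current watermark. The sketch below shows that accounting logic standalone, without Flink's metric API (in Flink the counter would be registered on the operator's metric group):

```java
// Standalone sketch of late-record accounting, mirroring what a
// numLateRecordsDropped counter would track in CepOperator.
class LateRecordAccounting {
    private long currentWatermark = Long.MIN_VALUE;
    long numLateRecordsDropped = 0;

    void advanceWatermark(long watermark) {
        // Watermarks only move forward.
        currentWatermark = Math.max(currentWatermark, watermark);
    }

    /** Returns true if the element is on time, false if dropped as late. */
    boolean accept(long elementTimestamp) {
        if (elementTimestamp <= currentWatermark) {
            numLateRecordsDropped++; // in Flink: a metric Counter.inc()
            return false;
        }
        return true;
    }
}
```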





[jira] [Commented] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-27 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004409#comment-17004409
 ] 

Benchao Li commented on FLINK-15430:


I know the current fix is not perfect; it only addresses the part of the 
problem that occurs when the projection list is very long.

We did this in our internal branch as well. If the community is fine with 
fixing the problem this way for now, I'm willing to open a PR for it.

> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>Reporter: Benchao Li
>Priority: Major
>
> Our SQL jobs were migrated from 1.5 to 1.9, and from the legacy planner to 
> the blink planner. We found that some large SQL queries hit a code-generation 
> problem: the generated code exceeds Java's 64KB method size limit.
>  
> After searching the existing issues, we found 
> https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
> extent, but it has not yet been fixed for the blink planner.





[jira] [Updated] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-27 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li updated FLINK-15430:
---
Description: 
Our SQL jobs were migrated from 1.5 to 1.9, and from the legacy planner to the 
blink planner. We found that some large SQL queries hit a code-generation 
problem: the generated code exceeds Java's 64KB method size limit.

 

After searching the existing issues, we found 
https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
extent, but it has not yet been fixed for the blink planner.

  was:
Our SQL jobs were migrated from 1.5 to 1.9, and from the legacy planner to the 
blink planner. We found that some large SQL queries hit a code-generation 
problem: the generated code exceeds Java's 64KB method size limit.

 

After searching the existing issues, we found 
https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
extent.


> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>Reporter: Benchao Li
>Priority: Major
>
> Our SQL jobs were migrated from 1.5 to 1.9, and from the legacy planner to 
> the blink planner. We found that some large SQL queries hit a code-generation 
> problem: the generated code exceeds Java's 64KB method size limit.
>  
> After searching the existing issues, we found 
> https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
> extent, but it has not yet been fixed for the blink planner.





[jira] [Created] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-27 Thread Benchao Li (Jira)
Benchao Li created FLINK-15430:
--

 Summary: Fix Java 64K method compiling limitation for blink 
planner.
 Key: FLINK-15430
 URL: https://issues.apache.org/jira/browse/FLINK-15430
 Project: Flink
  Issue Type: Improvement
Reporter: Benchao Li


Our SQL jobs were migrated from 1.5 to 1.9, and from the legacy planner to the 
blink planner. We found that some large SQL queries hit a code-generation 
problem: the generated code exceeds Java's 64KB method size limit.

 

After searching the existing issues, we found 
https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
extent.





[GitHub] [flink] flinkbot commented on issue #10711: [hotfix][docs] Fix a typo in [flink-example] - CollectionExecutionExample.java

2019-12-27 Thread GitBox
flinkbot commented on issue #10711: [hotfix][docs] Fix a typo in 
[flink-example] - CollectionExecutionExample.java
URL: https://github.com/apache/flink/pull/10711#issuecomment-569391263
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 286126c476bc24bace36cc84ce7e284ab7b4e741 (Sat Dec 28 
06:33:51 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] wangxianghu opened a new pull request #10711: [hotfix][docs] Fix a typo in [flink-example] - CollectionExecutionExample.java

2019-12-27 Thread GitBox
wangxianghu opened a new pull request #10711: [hotfix][docs] Fix a typo in 
[flink-example] - CollectionExecutionExample.java
URL: https://github.com/apache/flink/pull/10711
 
 
   ## What is the purpose of the change
   
   *Fix a typo in [flink-example] - CollectionExecutionExample.java*
   
   ## Brief change log
   
   *Fix a typo in [flink-example] - CollectionExecutionExample.java*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   




[GitHub] [flink] flinkbot edited a comment on issue #10710: [hotfix][doc] Fix temporal_tables correlate with a changing dimension table section

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10710: [hotfix][doc] Fix temporal_tables 
correlate with a changing dimension table section
URL: https://github.com/apache/flink/pull/10710#issuecomment-569382162
 
 
   
   ## CI report:
   
   * 2515369f9f288d23e53ff933b698a998aebd9a42 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142509131) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3963)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Resolved] (FLINK-15426) TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15426.
-
Resolution: Fixed

Fixed in 1.11.0: 450795ecbfd40d1f39df0769440bbfcb5abbcc93
1.10.0: 3d88f711bfadb9e48d5a70d0797ce6652ef1e7a5

> TPC-DS end-to-end test (Blink planner) fails on travis
> --
>
> Key: FLINK-15426
> URL: https://issues.apache.org/jira/browse/FLINK-15426
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TPC-DS end-to-end test (Blink planner) fails on travis with below error:
> {code}
> The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Field types of query result and registered TableSink 
> default_catalog.default_database.query2_sinkTable do not match.
> ...
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> {code}
> https://api.travis-ci.org/v3/job/629699422/log.txt





[GitHub] [flink] flinkbot edited a comment on issue #10710: [hotfix][doc] Fix temporal_tables correlate with a changing dimension table section

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10710: [hotfix][doc] Fix temporal_tables 
correlate with a changing dimension table section
URL: https://github.com/apache/flink/pull/10710#issuecomment-569382162
 
 
   
   ## CI report:
   
   * 2515369f9f288d23e53ff933b698a998aebd9a42 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142509131) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3963)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10710: [hotfix][doc] Fix temporal_tables correlate with a changing dimension table section

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10710: [hotfix][doc] Fix temporal_tables 
correlate with a changing dimension table section
URL: https://github.com/apache/flink/pull/10710#issuecomment-569382162
 
 
   
   ## CI report:
   
   * 2515369f9f288d23e53ff933b698a998aebd9a42 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142509131) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3963)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] wuchong merged pull request #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread GitBox
wuchong merged pull request #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end 
test (Blink planner) fails on travis
URL: https://github.com/apache/flink/pull/10707
 
 
   




[GitHub] [flink] wuchong commented on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread GitBox
wuchong commented on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end 
test (Blink planner) fails on travis
URL: https://github.com/apache/flink/pull/10707#issuecomment-569382258
 
 
   Verified that all TPC-DS queries pass on my local machine. Merging it as a hotfix. 




[jira] [Resolved] (FLINK-15413) ScalarOperatorsTest failed in travis

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15413.
-
Resolution: Fixed

Fixed in 1.9.2: 025aca82a541d838101d14de5190111597c6db83

> ScalarOperatorsTest failed in travis
> 
>
> Key: FLINK-15413
> URL: https://issues.apache.org/jira/browse/FLINK-15413
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dian Fu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The travis of release-1.9 failed with the following error:
> {code:java}
> 14:50:19.796 [ERROR] ScalarOperatorsTest>ExpressionTestBase.evaluateExprs:161 
> Wrong result for: [CASE WHEN (CASE WHEN f2 = 1 THEN CAST('' as INT) ELSE 0 
> END) is null THEN 'null' ELSE 'not null' END] optimized to: [_UTF-16LE'not 
> null':VARCHAR(8) CHARACTER SET "UTF-16LE"] expected:<null> but was:<not null>
> {code}
> instance: [https://api.travis-ci.org/v3/job/629636107/log.txt]





[GitHub] [flink] flinkbot commented on issue #10710: [hotfix][doc] Fix temporal_tables correlate with a changing dimension table section

2019-12-27 Thread GitBox
flinkbot commented on issue #10710: [hotfix][doc] Fix temporal_tables correlate 
with a changing dimension table section
URL: https://github.com/apache/flink/pull/10710#issuecomment-569382162
 
 
   
   ## CI report:
   
   * 2515369f9f288d23e53ff933b698a998aebd9a42 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] wuchong merged pull request #10708: [FLINK-15413][table-planner-blink] Fix ScalarOperatorsTest failed in travis in 1.9 branch

2019-12-27 Thread GitBox
wuchong merged pull request #10708: [FLINK-15413][table-planner-blink] Fix 
ScalarOperatorsTest failed in travis in 1.9 branch
URL: https://github.com/apache/flink/pull/10708
 
 
   




[GitHub] [flink] wuchong commented on issue #10708: [FLINK-15413][table-planner-blink] Fix ScalarOperatorsTest failed in travis in 1.9 branch

2019-12-27 Thread GitBox
wuchong commented on issue #10708: [FLINK-15413][table-planner-blink] Fix 
ScalarOperatorsTest failed in travis in 1.9 branch
URL: https://github.com/apache/flink/pull/10708#issuecomment-569382047
 
 
   Tests pass except for `LocalExecutorITCase.testParameterizedTypes`, which 
will be fixed by FLINK-15412.




[GitHub] [flink] flinkbot commented on issue #10710: [hotfix][doc] Fix temporal_tables correlate with a changing dimension table section

2019-12-27 Thread GitBox
flinkbot commented on issue #10710: [hotfix][doc] Fix temporal_tables correlate 
with a changing dimension table section
URL: https://github.com/apache/flink/pull/10710#issuecomment-569380059
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2515369f9f288d23e53ff933b698a998aebd9a42 (Sat Dec 28 
03:14:45 UTC 2019)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] libenchao opened a new pull request #10710: [hotfix][doc] Fix temporal_tables correlate with a changing dimension table section

2019-12-27 Thread GitBox
libenchao opened a new pull request #10710: [hotfix][doc] Fix temporal_tables 
correlate with a changing dimension table section
URL: https://github.com/apache/flink/pull/10710
 
 
   ##  What is the purpose of the change
   
   Fix the "correlate with a changing dimension table" section of the temporal_tables doc.
   
   ## Brief change log
   
   -
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   




[jira] [Commented] (FLINK-15411) Planner can't prune partition on DATE/TIMESTAMP columns

2019-12-27 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004342#comment-17004342
 ] 

Bowen Li commented on FLINK-15411:
--

Yes, date is probably the most popular partition type. We'd better fix this in 
1.10.

> Planner can't prune partition on DATE/TIMESTAMP columns
> ---
>
> Key: FLINK-15411
> URL: https://issues.apache.org/jira/browse/FLINK-15411
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Hive should work after planner fixed due to: 
> [https://github.com/apache/flink/pull/10690#issuecomment-569021089]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15411) Planner can't prune partition on DATE/TIMESTAMP columns

2019-12-27 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15411:
-
Fix Version/s: 1.11.0

> Planner can't prune partition on DATE/TIMESTAMP columns
> ---
>
> Key: FLINK-15411
> URL: https://issues.apache.org/jira/browse/FLINK-15411
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>
> Hive should work after planner fixed due to: 
> [https://github.com/apache/flink/pull/10690#issuecomment-569021089]





[GitHub] [flink] sjwiesman commented on issue #10669: [FLINK-15192][docs][table] Restructure "SQL" pages for better readability

2019-12-27 Thread GitBox
sjwiesman commented on issue #10669: [FLINK-15192][docs][table] Restructure 
"SQL" pages for better readability
URL: https://github.com/apache/flink/pull/10669#issuecomment-569356922
 
 
   Please add a redirect from the old sql page to the new overview page.




[GitHub] [flink] qinjunjerry commented on a change in pull request #10502: [FLINK-14825][state-processor-api][docs] Rework state processor api documentation

2019-12-27 Thread GitBox
qinjunjerry commented on a change in pull request #10502: 
[FLINK-14825][state-processor-api][docs] Rework state processor api 
documentation
URL: https://github.com/apache/flink/pull/10502#discussion_r361751039
 
 

 ##
 File path: docs/dev/libs/state_processor_api.md
 ##
 @@ -413,45 +551,25 @@ The `KeyedStateBootstrapFunction` supports setting event 
time and processing time timers.
 The timers will not fire inside the bootstrap function and only become active 
once restored within a `DataStream` application.
 If a processing time timer is set but the state is not restored until after 
that time has passed, the timer will fire immediately upon start.
 
-Once one or more transformations have been created they may be combined into a 
single `Savepoint`. 
-`Savepoint`'s are created using a state backend and max parallelism, they may 
contain any number of operators. 
+Attention If your bootstrap function 
creates timers, the state can only be restored using one of the [process]({{ 
site.baseurl }}/dev/stream/operators/process_function.html) type functions.
+
+## Modifying Savepoints
 
-
-
-{% highlight java %}
-Savepoint
-.create(backend, 128)
-.withOperator("uid1", transformation1)
-.withOperator("uid2", transformation2)
-.write(savepointPath);
-{% endhighlight %}
-
-
-{% highlight scala %}
-Savepoint
-.create(backend, 128)
-.withOperator("uid1", transformation1)
-.withOperator("uid2", transformation2)
-.write(savepointPath)
-{% endhighlight %}
-
-
-   
Besides creating a savepoint from scratch, you can base one off an existing 
savepoint, such as when bootstrapping a single new operator for an existing job.
 
 
 
 {% highlight java %}
 Savepoint
-.load(backend, oldPath)
+.load(new MemoryStateBackend(), oldPath)
 .withOperator("uid", transformation)
 .write(newPath);
 {% endhighlight %}
 
 
 {% highlight scala %}
 Savepoint
-.load(backend, oldPath)
+.load(new MemoryStateBackend(), oldPath)
 .withOperator("uid", transformation)
 .write(newPath)
 {% endhighlight %}
 
 Review comment:
   I believe there are three parameters for Savepoint.load(): 
ExecutionEnvironment, a path and StateBackend. Could you double check? To me, 
the Savepoint.load() call at the beginning of the "Reading State" section is a 
good one. 




[GitHub] [flink] qinjunjerry commented on a change in pull request #10502: [FLINK-14825][state-processor-api][docs] Rework state processor api documentation

2019-12-27 Thread GitBox
qinjunjerry commented on a change in pull request #10502: 
[FLINK-14825][state-processor-api][docs] Rework state processor api 
documentation
URL: https://github.com/apache/flink/pull/10502#discussion_r361525469
 
 

 ##
 File path: docs/dev/libs/state_processor_api.md
 ##
 @@ -239,114 +225,302 @@ public class StatefulFunctionWithTime extends KeyedProcessFunction<Integer, Integer, Void> {
    ValueState<Integer> state;
 
+   ListState<Long> updateTimes;
+
    @Override
    public void open(Configuration parameters) {
       ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>("state", Types.INT);
       state = getRuntimeContext().getState(stateDescriptor);
+
+      ListStateDescriptor<Long> updateDescriptor = new ListStateDescriptor<>("times", Types.LONG);
+      updateTimes = getRuntimeContext().getListState(updateDescriptor);
    }
 
    @Override
    public void processElement(Integer value, Context ctx, Collector<Void> out) throws Exception {
       state.update(value + 1);
+      updateTimes.add(System.currentTimeMillis());
    }
 }
{% endhighlight %}
 
 
 {% highlight scala %}
-public class StatefulFunctionWithTime extends KeyedProcessFunction[Integer, Integer, Void] {
+class StatefulFunctionWithTime extends KeyedProcessFunction[Integer, Integer, Void] {
 
-   var state: ValueState[Integer];
+   var state: ValueState[Integer] = _
 
-   override def open(parameters: Configuration) {
-      val stateDescriptor = new ValueStateDescriptor("state", Types.INT);
-      state = getRuntimeContext().getState(stateDescriptor);
+   var updateTimes: ListState[Long] = _
+
+   @throws[Exception]
+   override def open(parameters: Configuration): Unit = {
+      val stateDescriptor = new ValueStateDescriptor("state", Types.INT)
+      state = getRuntimeContext().getState(stateDescriptor)
+
+      val updateDescriptor = new ListStateDescriptor("times", Types.LONG)
+      updateTimes = getRuntimeContext().getListState(updateDescriptor)
    }
 
-   override def processElement(value: Integer, ctx: Context, out: Collector[Void]) {
-      state.update(value + 1);
+   @throws[Exception]
+   override def processElement(value: Integer, ctx: Context, out: Collector[Void]): Unit = {
+      state.update(value + 1)
+      updateTimes.add(System.currentTimeMillis)
    }
 }
 {% endhighlight %}
 
 
 
-Then it can be read by defining an output type and corresponding KeyedStateReaderFunction. 
+Then it can be read by defining an output type and corresponding `KeyedStateReaderFunction`. 
 
 
 
 {% highlight java %}
-class KeyedState {
-  Integer key;
-  Integer value;
+DataSet<KeyedState> keyedState = savepoint.readKeyedState("my-uid", new ReaderFunction());
+
+public class KeyedState {
+  public int key;
+
+  public int value;
+
+  public List<Long> times;
 }
 
-class ReaderFunction extends KeyedStateReaderFunction<Integer, KeyedState> {
+public class ReaderFunction extends KeyedStateReaderFunction<Integer, KeyedState> {
+
   ValueState<Integer> state;
 
+  ListState<Long> updateTimes;
+
   @Override
   public void open(Configuration parameters) {
-    ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>("state", Types.INT);
-    state = getRuntimeContext().getState(stateDescriptor);
+    ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>("state", Types.INT);
+    state = getRuntimeContext().getState(stateDescriptor);
+
+    ListStateDescriptor<Long> updateDescriptor = new ListStateDescriptor<>("times", Types.LONG);
+    updateTimes = getRuntimeContext().getListState(updateDescriptor);
   }
 
   @Override
   public void readKey(
      Integer key,
      Context ctx,
      Collector<KeyedState> out) throws Exception {
-
-    KeyedState data = new KeyedState();
-    data.key   = key;
-    data.value = state.value();
-    out.collect(data);
+    KeyedState data = new KeyedState();
+    data.key   = key;
+    data.value = state.value();
+    data.times = StreamSupport
+      .stream(updateTimes.get().spliterator(), false)
+      .collect(Collectors.toList());
+
+    out.collect(data);
   }
 }
-
-DataSet<KeyedState> keyedState = savepoint.readKeyedState("my-uid", new ReaderFunction());
 {% endhighlight %}
 
 
 {% highlight scala %}
-case class KeyedState(key: Int, value: Int)
+val keyedState = savepoint.readKeyedState("my-uid", new ReaderFunction)
+
+case class KeyedState(key: Int, value: Int, times: List[Long])
  
 class ReaderFunction extends KeyedStateReaderFunction[Integer, KeyedState] {
-  var state: ValueState[Integer];
  
-  override def open(parameters: Configuration) {
- val stateDescriptor = new ValueStateDescriptor("state", Types.INT);
- state = getRuntimeContext().getState(stateDescriptor);
-  }
+  var state: ValueState[Integer] = _
+
+  var updateTimes: ListState[Long] = _
  
+  @throws[Exception]
+  override def open(parameters: Configuration): Unit = {
+ val stateDescriptor = new ValueStateDescriptor("state", Types.INT)
+ state = getRuntimeContext().getState(stateDescriptor)
+
+  val updateDescriptor = new ListStateDescriptor("times", Types.LONG)
+  updateTimes = 

[GitHub] [flink] flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] 
Set FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456775) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3958)
 
   * 6365f53e67dd2bbd91a11cc1e4e19f35d62c49e0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142463428) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3959)
 
   * 4cee7a516b3a8f47de0b3574f44bb285a54fda35 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142473273) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3961)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] 
Set FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456775) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3958)
 
   * 6365f53e67dd2bbd91a11cc1e4e19f35d62c49e0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142463428) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3959)
 
   * 4cee7a516b3a8f47de0b3574f44bb285a54fda35 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142473273) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3961)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Closed] (FLINK-14849) Fix documentation about Hive dependencies

2019-12-27 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-14849.

Fix Version/s: 1.11.0
   Resolution: Fixed

master: 28b6221aaaeb535afa728bb887e733b6576982a7
1.10: 7ada211863e02039b52088deb198611e8c762c47

> Fix documentation about Hive dependencies
> -
>
> Key: FLINK-14849
> URL: https://issues.apache.org/jira/browse/FLINK-14849
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Documentation
>Reporter: Jingsong Lee
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> With:
> <dependency>
>   <groupId>org.apache.hive</groupId>
>   <artifactId>hive-exec</artifactId>
>   <version>3.1.1</version>
> </dependency>
> Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory 
> cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
>   at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
>   at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
>   at 
> org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:432)
>   ... 68 more
> {code}
> After https://issues.apache.org/jira/browse/FLINK-13749, flink-client will 
> use the child-first class-loading order by default.
> If the user jar bundles conflicting dependencies, problems like this can occur.
> We should update the documentation to add exclusions for the conflicting Hive dependencies.





[GitHub] [flink] bowenli86 closed pull request #10681: [FLINK-14849][hive][doc] Fix documentation about Hive dependencies

2019-12-27 Thread GitBox
bowenli86 closed pull request #10681: [FLINK-14849][hive][doc] Fix 
documentation about Hive dependencies
URL: https://github.com/apache/flink/pull/10681
 
 
   




[jira] [Closed] (FLINK-15412) LocalExecutorITCase#testParameterizedTypes failed in travis

2019-12-27 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15412.

Resolution: Fixed

release-1.9: baeb9b38bfef4b39f12bc7fe3934e140b3926698

> LocalExecutorITCase#testParameterizedTypes failed in travis
> ---
>
> Key: FLINK-15412
> URL: https://issues.apache.org/jira/browse/FLINK-15412
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Dian Fu
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The travis of release-1.9 failed with the following error:
> {code:java}
> 14:43:17.916 [INFO] Running 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase
> 14:44:47.388 [ERROR] Tests run: 34, Failures: 0, Errors: 1, Skipped: 1, Time 
> elapsed: 89.468 s <<< FAILURE! - in 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase
> 14:44:47.388 [ERROR] testParameterizedTypes[Planner: 
> blink](org.apache.flink.table.client.gateway.local.LocalExecutorITCase) Time 
> elapsed: 7.88 s <<< ERROR!
> org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL 
> statement at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.ValidationException: SQL validation 
> failed. findAndCreateTableSource failed
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.TableException: 
> findAndCreateTableSource failed
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: 
> Could not find a suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
> the classpath.
> Reason: No context matches.
> {code}
> instance: [https://api.travis-ci.org/v3/job/629636106/log.txt]





[GitHub] [flink] bowenli86 closed pull request #10702: [FLINK-15412][hive] LocalExecutorITCase#testParameterizedTypes failed…

2019-12-27 Thread GitBox
bowenli86 closed pull request #10702: [FLINK-15412][hive] 
LocalExecutorITCase#testParameterizedTypes failed…
URL: https://github.com/apache/flink/pull/10702
 
 
   




[GitHub] [flink] bowenli86 commented on issue #10702: [FLINK-15412][hive] LocalExecutorITCase#testParameterizedTypes failed…

2019-12-27 Thread GitBox
bowenli86 commented on issue #10702: [FLINK-15412][hive] 
LocalExecutorITCase#testParameterizedTypes failed…
URL: https://github.com/apache/flink/pull/10702#issuecomment-569318004
 
 
   release-1.9: baeb9b38bfef4b39f12bc7fe3934e140b3926698




[GitHub] [flink] bowenli86 commented on issue #10702: [FLINK-15412][hive] LocalExecutorITCase#testParameterizedTypes failed…

2019-12-27 Thread GitBox
bowenli86 commented on issue #10702: [FLINK-15412][hive] 
LocalExecutorITCase#testParameterizedTypes failed…
URL: https://github.com/apache/flink/pull/10702#issuecomment-569317916
 
 
   @lirui-apache thanks for the fix!




[jira] [Closed] (FLINK-15189) Add documentation for hive view

2019-12-27 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15189.

Fix Version/s: 1.11.0
   Resolution: Fixed

master: 2e0ba5d92750ab21273f8dc15ae7051c431f4eb7
1.10: a87bc8ffda040d7bbcd489f0474abeb44cdd77df

> Add documentation for hive view
> ---
>
> Key: FLINK-15189
> URL: https://issues.apache.org/jira/browse/FLINK-15189
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] [flink] bowenli86 closed pull request #10701: [FLINK-15189][hive][doc] Add documentation for hive view

2019-12-27 Thread GitBox
bowenli86 closed pull request #10701: [FLINK-15189][hive][doc] Add 
documentation for hive view
URL: https://github.com/apache/flink/pull/10701
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] 
Set FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456775) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3958)
 
   * 6365f53e67dd2bbd91a11cc1e4e19f35d62c49e0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142463428) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3959)
 
   * 4cee7a516b3a8f47de0b3574f44bb285a54fda35 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142473273) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3961)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10709: [FLINK-13437][test] Add Hive SQL E2E 
test
URL: https://github.com/apache/flink/pull/10709#issuecomment-569310266
 
 
   
   ## CI report:
   
   * 31cc4b771858ddf31a5cc19f2d3257b38f944825 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142473285) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3962)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] 
Set FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456775) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3958)
 
   * 6365f53e67dd2bbd91a11cc1e4e19f35d62c49e0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142463428) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3959)
 
   * 4cee7a516b3a8f47de0b3574f44bb285a54fda35 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test

2019-12-27 Thread GitBox
flinkbot commented on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test
URL: https://github.com/apache/flink/pull/10709#issuecomment-569310266
 
 
   
   ## CI report:
   
   * 31cc4b771858ddf31a5cc19f2d3257b38f944825 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15416) Add Retry Mechanism for PartitionRequestClientFactory.ConnectingChannel

2019-12-27 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004273#comment-17004273
 ] 

Zhenqiu Huang commented on FLINK-15416:
---

[~pnowojski] Would you please review this ticket? If it makes sense, would you 
please assign it to me?

> Add Retry Mechanism for PartitionRequestClientFactory.ConnectingChannel
> ---
>
> Key: FLINK-15416
> URL: https://issues.apache.org/jira/browse/FLINK-15416
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Zhenqiu Huang
>Priority: Major
>
> We run a Flink job with 256 TMs in production. The job has internal keyBy 
> logic, so it builds 256 * 256 communication channels. An outage happened 
> when an internal chip link broke on one of the network switches 
> connecting these machines. During the outage, the Flink job couldn't restart 
> successfully because there was always an exception like "Connecting the channel 
> failed: Connecting to remote task manager + '/10.14.139.6:41300' has 
> failed. This might indicate that the remote task manager has been lost." 
> After a deep investigation with the network infrastructure team, we found there 
> are 6 switches connecting these machines. Each switch has 32 physical 
> links, and every socket is assigned round-robin to one of the links for load 
> balancing. Thus, on average 256 * 256 / (6 * 32 * 2) = 170 
> channels will be assigned to the broken link. The issue lasted for 4 hours 
> until we found the broken link and restarted the problematic switch. 
> Given this, we found that retrying channel creation would help resolve 
> this issue. For our networking topology, we can set the retry count to 2: as 170 / (132 
> * 132) < 1, after retrying twice, on average no channel out of the 170 will still be 
> assigned to the broken link.
> I think this is a valuable fix for this kind of partial network partition.
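The back-of-the-envelope numbers in the description can be sketched as follows. This is a hypothetical model, not from the ticket: it assumes each connection attempt lands uniformly on one of the 6 * 32 * 2 = 384 link slots, and that retry attempts are independent of earlier ones.

```python
# Expected number of channels stuck on a single broken link, before and
# after retries. Assumption (not stated in the ticket): each attempt picks
# one of the 6 switches * 32 links * 2 directions = 384 link slots
# uniformly at random, and retries are independent.
TOTAL_CHANNELS = 256 * 256   # full keyBy connectivity between 256 TMs
LINK_SLOTS = 6 * 32 * 2      # 384

p_broken = 1 / LINK_SLOTS    # chance that a single attempt hits the broken link

def expected_on_broken_link(retries: int) -> float:
    # A channel stays on the broken link only if the initial attempt and
    # every retry all land on it.
    return TOTAL_CHANNELS * p_broken ** (retries + 1)

print(expected_on_broken_link(0))      # ~170, matching the ticket's estimate
print(expected_on_broken_link(2) < 1)  # True: two retries drop the expectation below one channel
```

Under this model, two retries shrink the expected number of affected channels from about 170 to well below one, which matches the ticket's conclusion.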





[GitHub] [flink] flinkbot edited a comment on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10707: [FLINK-15426][e2e] Fix TPC-DS 
end-to-end test (Blink planner) fails on travis
URL: https://github.com/apache/flink/pull/10707#issuecomment-569288086
 
 
   
   ## CI report:
   
   * 67c35de0fffa00011edb3c8c54b3fe0bae50155f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142463445) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3960)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10708: [FLINK-15413][table-planner-blink] Fix ScalarOperatorsTest failed in travis in 1.9 branch

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10708: [FLINK-15413][table-planner-blink] 
Fix ScalarOperatorsTest failed in travis in 1.9 branch
URL: https://github.com/apache/flink/pull/10708#issuecomment-569293648
 
 
   
   ## CI report:
   
   * aadfc4f8b17404c0d22d12c1c95d4591ffe86617 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142465699) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] zjuwangg commented on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test

2019-12-27 Thread GitBox
zjuwangg commented on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test
URL: https://github.com/apache/flink/pull/10709#issuecomment-569303511
 
 
   This is foundational work; we can add more IT cases based on this PR.




[GitHub] [flink] flinkbot commented on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test

2019-12-27 Thread GitBox
flinkbot commented on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test
URL: https://github.com/apache/flink/pull/10709#issuecomment-569303649
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 31cc4b771858ddf31a5cc19f2d3257b38f944825 (Fri Dec 27 
16:40:35 UTC 2019)
   
   **Warnings:**
* **3 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] zjuwangg commented on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test

2019-12-27 Thread GitBox
zjuwangg commented on issue #10709: [FLINK-13437][test] Add Hive SQL E2E test
URL: https://github.com/apache/flink/pull/10709#issuecomment-569303439
 
 
   cc @bowenli86 @xuefuz @JingsongLi @KurtYoung @lirui-apache to have a review




[jira] [Updated] (FLINK-13437) Add Hive SQL E2E test

2019-12-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13437:
---
Labels: pull-request-available  (was: )

> Add Hive SQL E2E test
> -
>
> Key: FLINK-13437
> URL: https://issues.apache.org/jira/browse/FLINK-13437
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Terry Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> We should add an E2E test for the Hive integration: List all tables and read 
> some metadata, read from an existing table, register a new table in Hive, use 
> a registered function, write to an existing table, write to a new table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zjuwangg opened a new pull request #10709: [FLINK-13437][test] Add Hive SQL E2E test

2019-12-27 Thread GitBox
zjuwangg opened a new pull request #10709: [FLINK-13437][test] Add Hive SQL E2E 
test
URL: https://github.com/apache/flink/pull/10709
 
 
   ## What is the purpose of the change
   Set up a docker-based YARN cluster and Hive service using the new Java-based 
test runtime framework, and add HiveConnectorITCase to cover data read/write 
functionality, including:
 1. Hive data written by Hive, read by Flink.
 2. Hive data written by Flink, read by Hive.
 3. read/write to a non-partitioned table.
 4. multiple formats for read and write, covering textfile/orc/parquet.
   Based on this PR, we can add more tests such as functions/views later on.
   
   ## Brief change log
   
 - 3488ec6 Add e2e test for hive data connector using docker based 
environment
 - 2f8b127 refactor hive e2e test using new java-based test framework
 - 76c6f08 add muliti format test and all data types test case
 - 31cc4b7 remote e2e bash test
   
   
   ## Verifying this change
 - *Added integration tests for end-to-end deployment *
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable )
   




[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-27 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r361699910
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/DataSetConversionUtil.java
 ##
 @@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.operators.SingleInputUdfOperator;
+import org.apache.flink.api.java.operators.TwoInputUdfOperator;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.common.MLEnvironment;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.types.Row;
+
+/**
+ * Provide functions of conversions between DataSet and Table.
+ */
+public class DataSetConversionUtil {
+   /**
+* Convert the given Table to {@link DataSet}<{@link Row}>.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param table the Table to convert.
+* @return the converted DataSet.
+*/
+   public static DataSet<Row> fromTable(Long sessionId, Table table) {
+   return MLEnvironmentFactory
+   .get(sessionId)
+   .getBatchTableEnvironment()
+   .toDataSet(table, Row.class);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified TableSchema.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param data   the DataSet to convert.
+* @param schema the specified TableSchema.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, TableSchema schema) {
 
 Review comment:
   I would add some comments here.




[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-27 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r361700199
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/utils/DataStreamConversionUtilTest.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+/**
+ * Unit Test for DataStreamConversionUtil.
+ */
+public class DataStreamConversionUtilTest {
+   @Rule
+   public ExpectedException thrown = ExpectedException.none();
+
+   @Test
+   public void test() throws Exception {
 
 Review comment:
   I would make sure this end-to-end test and the per-API test in the DataSet 
util exist on both sides when merging.


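The test class in the diff above uses JUnit's `ExpectedException` rule to assert that invalid conversions fail with a `ValidationException`. As a minimal, JDK-only sketch of the same expect-an-exception pattern (the class and method names here are illustrative, not Flink's actual test code):

```java
public class ExpectException {
    // Runs the action and verifies it throws the expected exception type,
    // mirroring what JUnit's ExpectedException rule checks for a test body.
    static boolean throwsExpected(Runnable action, Class<? extends Throwable> expected) {
        try {
            action.run();
        } catch (Throwable t) {
            return expected.isInstance(t);
        }
        return false; // the action completed without throwing at all
    }

    public static void main(String[] args) {
        boolean ok = throwsExpected(
            () -> { throw new IllegalArgumentException("bad schema"); },
            IllegalArgumentException.class);
        System.out.println(ok); // true
    }
}
```

The rule-based form reads better in JUnit 4 tests, but the logic it verifies is exactly this: the body must throw, and the throwable must be of the declared type.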


[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-27 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r361699849
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/DataSetConversionUtil.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.operators.SingleInputUdfOperator;
+import org.apache.flink.api.java.operators.TwoInputUdfOperator;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.common.MLEnvironment;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+/**
+ * Provide functions of conversions between DataSet and Table.
+ */
+public class DataSetConversionUtil {
+   /**
+* Convert the given Table to {@link DataSet}<{@link Row}>.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param table the Table to convert.
+* @return the converted DataSet.
+*/
+   public static DataSet<Row> fromTable(Long sessionId, Table table) {
+   return MLEnvironmentFactory
+   .get(sessionId)
+   .getBatchTableEnvironment()
+   .toDataSet(table, Row.class);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified TableSchema.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param data   the DataSet to convert.
+* @param schema the specified TableSchema.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, TableSchema schema) {
+   return toTable(sessionId, data, schema.getFieldNames(), 
schema.getFieldTypes());
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, String[] colNames, TypeInformation<?>[] colTypes) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames, colTypes);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, String[] colNames) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param session the MLEnvironment used to convert the DataSet to a Table.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public 

[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-27 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r361699961
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/DataSetConversionUtil.java
 ##
 @@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.operators.SingleInputUdfOperator;
+import org.apache.flink.api.java.operators.TwoInputUdfOperator;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.common.MLEnvironment;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.types.Row;
+
+/**
+ * Provide functions of conversions between DataSet and Table.
+ */
+public class DataSetConversionUtil {
+   /**
+* Convert the given Table to {@link DataSet}<{@link Row}>.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param table the Table to convert.
+* @return the converted DataSet.
+*/
+   public static DataSet<Row> fromTable(Long sessionId, Table table) {
+   return MLEnvironmentFactory
+   .get(sessionId)
+   .getBatchTableEnvironment()
+   .toDataSet(table, Row.class);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified TableSchema.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param data   the DataSet to convert.
+* @param schema the specified TableSchema.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, TableSchema schema) {
+   return toTable(sessionId, data, schema.getFieldNames(), 
schema.getFieldTypes());
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, String[] colNames, TypeInformation<?>[] colTypes) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames, colTypes);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, String[] colNames) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param session the MLEnvironment used to convert the DataSet to a Table.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public static Table toTable(MLEnvironment session, DataSet  
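A note on the `colTypes` parameter documented above: it is needed because Java erases generic type arguments at runtime, so when a DataSet is produced by a lambda the framework cannot recover the element type reflectively. A small JDK-only sketch of the difference (the demo class is hypothetical, not Flink code):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.Function;

public class ErasureDemo {
    // An anonymous subclass records its generic supertype in class metadata,
    // so a framework can recover Integer/String via reflection.
    static final Function<Integer, String> ANON = new Function<Integer, String>() {
        @Override
        public String apply(Integer i) {
            return Integer.toString(i);
        }
    };

    // A lambda compiles to a synthetic class whose interface list is raw,
    // so the produced type cannot be determined automatically.
    static final Function<Integer, String> LAMBDA = i -> Integer.toString(i);

    public static void main(String[] args) {
        Type anonIface = ANON.getClass().getGenericInterfaces()[0];
        Type lambdaIface = LAMBDA.getClass().getGenericInterfaces()[0];
        System.out.println("anonymous class keeps type args: "
            + (anonIface instanceof ParameterizedType));   // true
        System.out.println("lambda keeps type args: "
            + (lambdaIface instanceof ParameterizedType)); // false
    }
}
```

This is why frameworks like Flink ask the caller to pass explicit type information for function-produced data, as the `colTypes` overloads do here.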

[jira] [Assigned] (FLINK-15427) State TTL RocksDb backend end-to-end test stalls on travis

2019-12-27 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-15427:
---

Assignee: Congxian Qiu(klion26)

> State TTL RocksDb backend end-to-end test stalls on travis
> --
>
> Key: FLINK-15427
> URL: https://issues.apache.org/jira/browse/FLINK-15427
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Congxian Qiu(klion26)
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The 'State TTL RocksDb backend end-to-end test' case stalls and finally 
> times out with the error message:
> {noformat}
> The job exceeded the maximum log length, and has been terminated.
> {noformat}
> https://api.travis-ci.org/v3/job/629699416/log.txt





[GitHub] [flink] flinkbot edited a comment on issue #10708: [FLINK-15413][table-planner-blink] Fix ScalarOperatorsTest failed in travis in 1.9 branch

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10708: [FLINK-15413][table-planner-blink] 
Fix ScalarOperatorsTest failed in travis in 1.9 branch
URL: https://github.com/apache/flink/pull/10708#issuecomment-569293648
 
 
   
   ## CI report:
   
   * aadfc4f8b17404c0d22d12c1c95d4591ffe86617 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142465699) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] 
Set FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456775) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3958)
 
   * 6365f53e67dd2bbd91a11cc1e4e19f35d62c49e0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142463428) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3959)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10707: [FLINK-15426][e2e] Fix TPC-DS 
end-to-end test (Blink planner) fails on travis
URL: https://github.com/apache/flink/pull/10707#issuecomment-569288086
 
 
   
   ## CI report:
   
   * 67c35de0fffa00011edb3c8c54b3fe0bae50155f Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142463445) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3960)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] 
Set FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456775) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3958)
 
   * 6365f53e67dd2bbd91a11cc1e4e19f35d62c49e0 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142463428) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3959)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10708: [FLINK-15413][table-planner-blink] Fix ScalarOperatorsTest failed in travis in 1.9 branch

2019-12-27 Thread GitBox
flinkbot commented on issue #10708: [FLINK-15413][table-planner-blink] Fix 
ScalarOperatorsTest failed in travis in 1.9 branch
URL: https://github.com/apache/flink/pull/10708#issuecomment-569293648
 
 
   
   ## CI report:
   
   * aadfc4f8b17404c0d22d12c1c95d4591ffe86617 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread GitBox
flinkbot commented on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end 
test (Blink planner) fails on travis
URL: https://github.com/apache/flink/pull/10707#issuecomment-569288086
 
 
   
   ## CI report:
   
   * 67c35de0fffa00011edb3c8c54b3fe0bae50155f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] 
Set FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456775) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3958)
 
   * 6365f53e67dd2bbd91a11cc1e4e19f35d62c49e0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Reopened] (FLINK-15408) Interval join support no equi-condition

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reopened FLINK-15408:
-
  Assignee: hailong wang

> Interval join support no equi-condition
> ---
>
> Key: FLINK-15408
> URL: https://issues.apache.org/jira/browse/FLINK-15408
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
> Fix For: 1.11.0
>
>
> For now, an interval join must have at least one equi-condition. Should we 
> allow no equi-condition, like a regular join?
> For that, if sql like as follow:
> {code:java}
> INSERT INTO A SELECT * FROM B join C on B.rowtime BETWEEN C.rowtime - 
> INTERVAL '20' SECOND AND C.rowtime + INTERVAL '30' SECOND
> {code}
> It will have no matching rule to convert.
> {code:java}
> Caused by: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There 
> are not enough rules to produce a node with desired properties: 
> convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any, 
> MiniBatchIntervalTraitDef=None: 0, UpdateAsRetractionTraitDef=false, 
> AccModeTraitDef=UNKNOWN.
> Missing conversion is FlinkLogicalJoin[convention: LOGICAL -> STREAM_PHYSICAL]
> {code}



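The join condition in the SQL above is a pure time-range predicate. A hedged, self-contained sketch of what evaluating it without an equi-condition implies (plain Java with illustrative names, not Flink's runtime): with no join key to hash-partition on, every row of B has to be compared against every row of C.

```java
import java.util.ArrayList;
import java.util.List;

public class IntervalJoinSketch {
    // True when B.rowtime BETWEEN C.rowtime - 20s AND C.rowtime + 30s,
    // mirroring the SQL predicate from the issue (timestamps in millis).
    static boolean withinInterval(long bRowtime, long cRowtime) {
        return bRowtime >= cRowtime - 20_000L && bRowtime <= cRowtime + 30_000L;
    }

    // Without an equi-condition there is no key to partition on, so this
    // degenerates into a filtered cross product over all (B, C) pairs.
    static List<long[]> join(long[] bTimes, long[] cTimes) {
        List<long[]> matches = new ArrayList<>();
        for (long b : bTimes) {
            for (long c : cTimes) {
                if (withinInterval(b, c)) {
                    matches.add(new long[] {b, c});
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        long[] b = {0L, 25_000L, 100_000L};
        long[] c = {10_000L};
        // c=10000 accepts b in [-10000, 40000], so b=0 and b=25000 match.
        System.out.println(join(b, c).size()); // 2
    }
}
```

This is also why the planner currently requires a key: the equi-condition is what lets the physical interval-join operator distribute and prune state, rather than broadcast everything.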


[GitHub] [flink] flinkbot commented on issue #10708: [FLINK-15413][table-planner-blink] Fix ScalarOperatorsTest failed in travis in 1.9 branch

2019-12-27 Thread GitBox
flinkbot commented on issue #10708: [FLINK-15413][table-planner-blink] Fix 
ScalarOperatorsTest failed in travis in 1.9 branch
URL: https://github.com/apache/flink/pull/10708#issuecomment-569287214
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit aadfc4f8b17404c0d22d12c1c95d4591ffe86617 (Fri Dec 27 
15:04:23 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15413) ScalarOperatorsTest failed in travis

2019-12-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15413:
---
Labels: pull-request-available  (was: )

> ScalarOperatorsTest failed in travis
> 
>
> Key: FLINK-15413
> URL: https://issues.apache.org/jira/browse/FLINK-15413
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dian Fu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2
>
>
> The travis of release-1.9 failed with the following error:
> {code:java}
> 14:50:19.796 [ERROR] ScalarOperatorsTest>ExpressionTestBase.evaluateExprs:161 
> Wrong result for: [CASE WHEN (CASE WHEN f2 = 1 THEN CAST('' as INT) ELSE 0 
> END) is null THEN 'null' ELSE 'not null' END] optimized to: [_UTF-16LE'not 
> null':VARCHAR(8) CHARACTER SET "UTF-16LE"] expected:<[]null> but was:<[not ]null>
> {code}
> instance: [https://api.travis-ci.org/v3/job/629636107/log.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong opened a new pull request #10708: [FLINK-15413][table-planner-blink] Fix ScalarOperatorsTest failed in travis in 1.9 branch

2019-12-27 Thread GitBox
wuchong opened a new pull request #10708: [FLINK-15413][table-planner-blink] 
Fix ScalarOperatorsTest failed in travis in 1.9 branch
URL: https://github.com/apache/flink/pull/10708
 
 
   
   
   
   ## What is the purpose of the change
   
   `ScalarOperatorsTest` failed because we use `CASE WHEN xxx is null ...` 
to check for a null result; however, this is optimized into a not-null constant 
in some cases, e.g. `cast('' as int)`. We shouldn't use `testSqlNullable` to 
check null results for now; it is already removed in the 1.10/master branch. 
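
   The pitfall can be sketched with a toy constant folder. This is purely illustrative — the names and the folding logic below are made up for the sketch and are not Flink's actual optimizer code:

```java
// Illustrative toy: a "planner" that folds constant expressions at plan time
// can disagree with runtime null semantics, so IS NULL checks on constants
// are unreliable in tests.
public class ConstantFoldingPitfall {

    // Runtime semantics: CAST('' AS INT) yields SQL NULL (parse failure -> null).
    static Integer runtimeCastToInt(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    // Hypothetical buggy plan-time folding: the cast is folded to a not-null
    // constant, so "expr IS NULL" becomes FALSE before the query ever runs.
    static boolean planTimeIsNull() {
        return false;
    }

    public static void main(String[] args) {
        boolean runtimeIsNull = runtimeCastToInt("") == null; // true at runtime
        boolean planIsNull = planTimeIsNull();                // false at plan time
        System.out.println("runtime=" + runtimeIsNull + " plan=" + planIsNull);
    }
}
```

   This is why the test saw 'not null' where it expected 'null': the two evaluation paths disagree on the constant.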
   
   ## Brief change log
   
   Remove `testSqlNullable`.
   
   ## Verifying this change
   
   This change is covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15426) TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004233#comment-17004233
 ] 

Jark Wu commented on FLINK-15426:
-

The root cause is:


{code}
Caused by: org.apache.flink.table.api.ValidationException: Field types of query 
result and registered TableSink 
default_catalog.default_database.query2_sinkTable do not match.
Query schema: [d_week_seq1: INT, EXPR$1: DECIMAL(35, 2), EXPR$2: DECIMAL(35, 
2), EXPR$3: DECIMAL(35, 2), EXPR$4: DECIMAL(35, 2), EXPR$5: DECIMAL(35, 2), 
EXPR$6: DECIMAL(35, 2), EXPR$7: DECIMAL(35, 2)]
Sink schema: [d_week_seq1: INT, EXPR$1: LEGACY('RAW', 
'ANY'),
 EXPR$2: LEGACY('RAW', 'ANY'),
 EXPR$3: LEGACY('RAW', 'ANY'),
 EXPR$4: LEGACY('RAW', 'ANY'),
 EXPR$5: LEGACY('RAW', 'ANY'),
 EXPR$6: LEGACY('RAW', 'ANY'),
 EXPR$7: LEGACY('RAW', 'ANY')]
{code}
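
The mismatch above is what the sink/query schema check rejects. A toy version of that comparison (made-up names — Flink's real validation is more involved and lives in the planner) looks like:

```java
import java.util.Arrays;

// Illustrative only: a minimal field-type comparison of the kind that produced
// the ValidationException above. The type names are the strings from the log.
public class SchemaCheck {

    // A query/sink pair matches only if every field's logical type string matches.
    static boolean schemasMatch(String[] queryTypes, String[] sinkTypes) {
        return Arrays.equals(queryTypes, sinkTypes);
    }

    public static void main(String[] args) {
        String[] query      = {"INT", "DECIMAL(35, 2)"};
        String[] sinkLegacy = {"INT", "LEGACY('RAW', 'ANY')"}; // TypeInformation-derived
        String[] sinkFixed  = {"INT", "DECIMAL(35, 2)"};       // declared via DataTypes
        System.out.println(schemasMatch(query, sinkLegacy));   // false -> validation fails
        System.out.println(schemasMatch(query, sinkFixed));    // true
    }
}
```

Declaring the sink schema with DataTypes keeps the logical types on both sides identical, which is why the fix in PR #10707 switches the TPC-DS framework away from planner-internal TypeInformation.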


> TPC-DS end-to-end test (Blink planner) fails on travis
> --
>
> Key: FLINK-15426
> URL: https://issues.apache.org/jira/browse/FLINK-15426
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> TPC-DS end-to-end test (Blink planner) fails on travis with below error:
> {code}
> The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Field types of query result and registered TableSink 
> default_catalog.default_database.query2_sinkTable do not match.
> ...
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> {code}
> https://api.travis-ci.org/v3/job/629699422/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread GitBox
flinkbot commented on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end 
test (Blink planner) fails on travis
URL: https://github.com/apache/flink/pull/10707#issuecomment-569283995
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 67c35de0fffa00011edb3c8c54b3fe0bae50155f (Fri Dec 27 
14:47:13 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15426) TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15426:
---
Labels: pull-request-available test-stability  (was: test-stability)

> TPC-DS end-to-end test (Blink planner) fails on travis
> --
>
> Key: FLINK-15426
> URL: https://issues.apache.org/jira/browse/FLINK-15426
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>
> TPC-DS end-to-end test (Blink planner) fails on travis with below error:
> {code}
> The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Field types of query result and registered TableSink 
> default_catalog.default_database.query2_sinkTable do not match.
> ...
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> {code}
> https://api.travis-ci.org/v3/job/629699422/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong opened a new pull request #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread GitBox
wuchong opened a new pull request #10707: [FLINK-15426][e2e] Fix TPC-DS 
end-to-end test (Blink planner) fails on travis
URL: https://github.com/apache/flink/pull/10707
 
 
   
   
   
   ## What is the purpose of the change
   
   FLINK-15313 forbids using TypeInformation defined in the blink planner
   internally as the output type of a TableSink (where the physical type and
   logical type are not equal). However, the TPC-DS framework breaks this. The
   fix is to use DataTypes, which is the standard way.
   
   ## Brief change log
   
   Update TPC-DS framework to use DataTypes to create `CsvTableSink`. 
   
   ## Verifying this change
   
   Verified the TPC-DS end to end test in my local machine.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end test (Blink planner) fails on travis

2019-12-27 Thread GitBox
wuchong commented on issue #10707: [FLINK-15426][e2e] Fix TPC-DS end-to-end 
test (Blink planner) fails on travis
URL: https://github.com/apache/flink/pull/10707#issuecomment-569283633
 
 
   cc @JingsongLi @leonardBang 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10705: [FLINK-15346][connectors / filesystem] Support treat empty column as null option in CsvTableSourceFactory

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10705: [FLINK-15346][connectors / 
filesystem] Support treat empty column as null option in CsvTableSourceFactory
URL: https://github.com/apache/flink/pull/10705#issuecomment-569245359
 
 
   
   ## CI report:
   
   * 825a2c9773fca40aeb85d056aec0e177a2ba9319 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142446415) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3955)
 
   * 16c3f1999a6f1e9bab234622905f946b98def353 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142452534) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3956)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15429) read hive table null value of timestamp type will throw an npe

2019-12-27 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15429:
-
Priority: Blocker  (was: Major)

> read hive table null value of timestamp type will throw an npe
> --
>
> Key: FLINK-15429
> URL: https://issues.apache.org/jira/browse/FLINK-15429
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Terry Wang
>Priority: Blocker
> Fix For: 1.10.0
>
>
> When there is null value of timestamp type in hive table, will have exception 
> like following:
> Caused by: org.apache.flink.table.api.TableException: Exception in writeRecord
>   at 
> org.apache.flink.table.filesystem.FileSystemOutputFormat.writeRecord(FileSystemOutputFormat.java:122)
>   at 
> org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction.invoke(OutputFormatSinkFunction.java:87)
>   at 
> org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:550)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:527)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:487)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
>   at SinkConversion$1.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:550)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:527)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:487)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:93)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.table.catalog.hive.client.HiveShimV100.ensureSupportedFlinkTimestamp(HiveShimV100.java:386)
>   at 
> org.apache.flink.table.catalog.hive.client.HiveShimV100.toHiveTimestamp(HiveShimV100.java:357)
>   at 
> org.apache.flink.table.functions.hive.conversion.HiveInspectors.lambda$getConversion$b054b59b$1(HiveInspectors.java:216)
>   at 
> org.apache.flink.table.functions.hive.conversion.HiveInspectors.lambda$getConversion$7f882244$1(HiveInspectors.java:172)
>   at 
> org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.getConvertedRow(HiveOutputFormatFactory.java:190)
>   at 
> org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.writeRecord(HiveOutputFormatFactory.java:206)
>   at 
> org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.writeRecord(HiveOutputFormatFactory.java:178)
>   at 
> org.apache.flink.table.filesystem.SingleDirectoryWriter.write(SingleDirectoryWriter.java:52)
>   at 
> org.apache.flink.table.filesystem.FileSystemOutputFormat.writeRecord(FileSystemOutputFormat.java:120)
>   ... 19 more
> We should add a null check in HiveShimV100#ensureSupportedFlinkTimestamp and 
> return a proper value.
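
A minimal sketch of the proposed guard. The method shape below is inferred from the stack trace; the real HiveShimV100 signature and conversion (which goes through reflection against Hive classes) may differ:

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;

// Sketch only: propagate SQL NULL instead of dereferencing it. The real shim
// converts Flink's LocalDateTime into Hive's timestamp representation.
public class HiveTimestampGuard {

    static Timestamp toHiveTimestamp(Object flinkTimestamp) {
        if (flinkTimestamp == null) {
            return null; // null column value: nothing to convert, no NPE
        }
        if (!(flinkTimestamp instanceof LocalDateTime)) {
            throw new IllegalArgumentException(
                "Unsupported timestamp type: " + flinkTimestamp.getClass());
        }
        return Timestamp.valueOf((LocalDateTime) flinkTimestamp);
    }
}
```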



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-15409) Add semicolon to WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15409:
---

Assignee: hailong wang

> Add semicolon to WindowJoinUtil#generateJoinFunction 
> '$collectorTerm.collect($joinedRow)' statement
> ---
>
> Key: FLINK-15409
> URL: https://issues.apache.org/jira/browse/FLINK-15409
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.1
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> In WindowJoinUtil#generateJoinFunction, when otherCondition is None, it will 
> go into this statement:
> {code:java}
> case None =>
>   s"""
>  |$buildJoinedRow
>  |$collectorTerm.collect($joinedRow)
>  |""".stripMargin
> {code}
> And it misses a semicolon after collect($joinedRow). This will cause compilation 
> to fail:
> {code:java}
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.Caused by: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue. at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:81)
>  at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:65)
>  at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:78)
>  at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:52)
>  ... 26 moreCaused by: org.codehaus.commons.compiler.CompileException: Line 
> 28, Column 21: Expression "c.collect(joinedRow)" is not a type
> {code}
>  
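
The one-character fix can be seen by assembling the generated statement by hand. Java string concatenation stands in here for the Scala interpolator, and the variable names are taken from the snippet above:

```java
// Each generated line must be a complete Java statement; without the trailing
// ';' Janino parses "c.collect(joinedRow)" in a position where it expects a
// type, which is exactly the CompileException quoted above.
public class GeneratedStatement {

    static String generate(String collectorTerm, String joinedRow, boolean withSemicolon) {
        return collectorTerm + ".collect(" + joinedRow + ")" + (withSemicolon ? ";" : "");
    }

    public static void main(String[] args) {
        System.out.println(generate("c", "joinedRow", false)); // buggy: c.collect(joinedRow)
        System.out.println(generate("c", "joinedRow", true));  // fixed: c.collect(joinedRow);
    }
}
```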



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15408) Interval join support no equi-condition

2019-12-27 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004201#comment-17004201
 ] 

hailong wang commented on FLINK-15408:
--

Oh [~jark], I got it wrong :D. Thank you for assigning it to me.

> Interval join support no equi-condition
> ---
>
> Key: FLINK-15408
> URL: https://issues.apache.org/jira/browse/FLINK-15408
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.11.0
>
>
> For now, interval join must have at least one equi-condition. Should we 
> allow no equi-condition, like regular join?
> For that, if sql like as follow:
> {code:java}
> INSERT INTO A SELECT * FROM B join C on B.rowtime BETWEEN C.rowtime - 
> INTERVAL '20' SECOND AND C.rowtime + INTERVAL '30' SECOND
> {code}
> It will have no matching rule to convert.
> {code:java}
> Caused by: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There 
> are not enough rules to produce a node with desired properties: 
> convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any, 
> MiniBatchIntervalTraitDef=None: 0, UpdateAsRetractionTraitDef=false, 
> AccModeTraitDef=UNKNOWN.
> Missing conversion is FlinkLogicalJoin[convention: LOGICAL -> STREAM_PHYSICAL]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10706: [FLINK-15418][table-planner-blink] 
Set FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456775) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3958)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15409) Add semicolon to WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-27 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004200#comment-17004200
 ] 

hailong wang commented on FLINK-15409:
--

Hi [~jark], it is my pleasure to take it. Thank you for assigning it to me.

> Add semicolon to WindowJoinUtil#generateJoinFunction 
> '$collectorTerm.collect($joinedRow)' statement
> ---
>
> Key: FLINK-15409
> URL: https://issues.apache.org/jira/browse/FLINK-15409
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.1
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> In WindowJoinUtil#generateJoinFunction, when otherCondition is None, it will 
> go into this statement:
> {code:java}
> case None =>
>   s"""
>  |$buildJoinedRow
>  |$collectorTerm.collect($joinedRow)
>  |""".stripMargin
> {code}
> And it misses a semicolon after collect($joinedRow). This will cause compilation 
> to fail:
> {code:java}
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.Caused by: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue. at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:81)
>  at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:65)
>  at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:78)
>  at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:52)
>  ... 26 moreCaused by: org.codehaus.commons.compiler.CompileException: Line 
> 28, Column 21: Expression "c.collect(joinedRow)" is not a type
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangxianghu closed pull request #10691: [hotfix] [javadocs] Fix typo in DistCp and CollectionExecutionExample

2019-12-27 Thread GitBox
wangxianghu closed pull request #10691: [hotfix] [javadocs] Fix typo in DistCp 
and CollectionExecutionExample
URL: https://github.com/apache/flink/pull/10691
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15399) Join with a LookupableTableSource:java.lang.RuntimeException: while converting XXXX Caused by: java.lang.AssertionError: Field ordinal 26 is invalid for type

2019-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004194#comment-17004194
 ] 

Jark Wu commented on FLINK-15399:
-

Thanks for reporting this [~Rockey Cui], I reproduced this problem on my local 
machine. 
Will dig into it in the next few days. 

> Join with a LookupableTableSource:java.lang.RuntimeException: while 
> converting   Caused by: java.lang.AssertionError: Field ordinal 26 is 
> invalid for  type
> ---
>
> Key: FLINK-15399
> URL: https://issues.apache.org/jira/browse/FLINK-15399
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.1
> Environment: jdk1.8.0_211
>Reporter: Rockey Cui
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: JoinTest-1.0-SNAPSHOT.jar
>
>
>  
> {code:java}
> // code placeholder
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> EnvironmentSettings settings = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, 
> settings);
> env.setParallelism(1);
> DataStreamSource stringDataStreamSource1 = env.fromElements(
> "HA"
> );
> String[] fields1 = new String[]{"ORD_ID", "PS_PARTKEY", "PS_SUPPKEY", 
> "PS_AVAILQTY", "PS_SUPPLYCOST", "PS_COMMENT"
> // key
> , "PS_INT", "PS_LONG"
> , "PS_DOUBLE8", "PS_DOUBLE14", "PS_DOUBLE15"
> , "PS_NUMBER1", "PS_NUMBER2", "PS_NUMBER3", "PS_NUMBER4"
> , "PS_DATE", "PS_TIMESTAMP", "PS_DATE_EVENT", 
> "PS_TIMESTAMP_EVENT"};
> TypeInformation[] types1 = new TypeInformation[]{Types.STRING, 
> Types.INT, Types.LONG, Types.LONG, Types.DOUBLE, Types.STRING
> // key
> , Types.INT, Types.LONG
> , Types.DOUBLE, Types.DOUBLE, Types.DOUBLE
> , Types.LONG, Types.LONG, Types.DOUBLE, Types.DOUBLE
> , Types.SQL_DATE, Types.SQL_TIMESTAMP, Types.SQL_DATE, 
> Types.SQL_TIMESTAMP};
> RowTypeInfo typeInformation1 = new RowTypeInfo(types1, fields1);
> DataStream stream1 = stringDataStreamSource1.map(new 
> MapFunction() {
> private static final long serialVersionUID = 2349572544179673356L;
> @Override
> public Row map(String s) {
> return new Row(typeInformation1.getArity());
> }
> }).returns(typeInformation1);
> tableEnv.registerDataStream("FUN_1", stream1, String.join(",", 
> typeInformation1.getFieldNames()) + ",PROCTIME.proctime");
> DataStreamSource stringDataStreamSource2 = env.fromElements(
> "HA"
> );
> String[] fields2 = new String[]{"C_NAME", "C_ADDRESS", "C_NATIONKEY"
> // key
> , "C_INT", "C_LONG"
> , "C_DOUBLE8", "C_DOUBLE14"
> , "C_DATE_EVENT", "C_TIMESTAMP_EVENT"};
> TypeInformation[] types2 = new TypeInformation[]{Types.STRING, 
> Types.STRING, Types.LONG
> // key
> , Types.INT, Types.LONG
> , Types.DOUBLE, Types.DOUBLE
> , Types.SQL_DATE, Types.SQL_TIMESTAMP};
> RowTypeInfo typeInformation2 = new RowTypeInfo(types2, fields2);
> DataStream stream2 = stringDataStreamSource2.map(new 
> MapFunction() {
> private static final long serialVersionUID = 2349572544179673349L;
> @Override
> public Row map(String s) {
> return new Row(typeInformation2.getArity());
> }
> }).returns(typeInformation2);
> tableEnv.registerDataStream("FUN_2", stream2, String.join(",", 
> typeInformation2.getFieldNames()) + ",PROCTIME.proctime");
> MyLookupTableSource tableSource = MyLookupTableSource.newBuilder()
> .withFieldNames(new String[]{
> "S_NAME", "S_ADDRESS", "S_PHONE"
> , "S_ACCTBAL", "S_COMMENT"
> // key
> , "S_INT", "S_LONG"
> , "S_DOUBLE8", "S_DOUBLE14"
> , "S_DOUBLE15", "S_DATE_EVENT", "S_TIMESTAMP_EVENT"})
> .withFieldTypes(new TypeInformation[]{
> Types.STRING, Types.STRING, Types.STRING
> , Types.DOUBLE, Types.STRING
> // key
> , Types.INT, Types.LONG
> , Types.DOUBLE, Types.DOUBLE
> , Types.DOUBLE, Types.SQL_DATE, Types.SQL_TIMESTAMP})
> .build();
> tableEnv.registerTableSource("INFO", tableSource);
> String sql = "SELECT LN(F.PS_INT),LOG(F2.C_INT,1)\n" +
> "  FROM 

[jira] [Updated] (FLINK-15399) Join with a LookupableTableSource:java.lang.RuntimeException: while converting XXXX Caused by: java.lang.AssertionError: Field ordinal 26 is invalid for type

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15399:

Fix Version/s: 1.10.0

> Join with a LookupableTableSource:java.lang.RuntimeException: while 
> converting   Caused by: java.lang.AssertionError: Field ordinal 26 is 
> invalid for  type
> ---
>
> Key: FLINK-15399
> URL: https://issues.apache.org/jira/browse/FLINK-15399
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.1
> Environment: jdk1.8.0_211
>Reporter: Rockey Cui
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: JoinTest-1.0-SNAPSHOT.jar
>
>
>  
> {code:java}
> // code placeholder
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> EnvironmentSettings settings = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, 
> settings);
> env.setParallelism(1);
> DataStreamSource stringDataStreamSource1 = env.fromElements(
> "HA"
> );
> String[] fields1 = new String[]{"ORD_ID", "PS_PARTKEY", "PS_SUPPKEY", 
> "PS_AVAILQTY", "PS_SUPPLYCOST", "PS_COMMENT"
> // key
> , "PS_INT", "PS_LONG"
> , "PS_DOUBLE8", "PS_DOUBLE14", "PS_DOUBLE15"
> , "PS_NUMBER1", "PS_NUMBER2", "PS_NUMBER3", "PS_NUMBER4"
> , "PS_DATE", "PS_TIMESTAMP", "PS_DATE_EVENT", 
> "PS_TIMESTAMP_EVENT"};
> TypeInformation[] types1 = new TypeInformation[]{Types.STRING, 
> Types.INT, Types.LONG, Types.LONG, Types.DOUBLE, Types.STRING
> // key
> , Types.INT, Types.LONG
> , Types.DOUBLE, Types.DOUBLE, Types.DOUBLE
> , Types.LONG, Types.LONG, Types.DOUBLE, Types.DOUBLE
> , Types.SQL_DATE, Types.SQL_TIMESTAMP, Types.SQL_DATE, 
> Types.SQL_TIMESTAMP};
> RowTypeInfo typeInformation1 = new RowTypeInfo(types1, fields1);
> DataStream stream1 = stringDataStreamSource1.map(new 
> MapFunction() {
> private static final long serialVersionUID = 2349572544179673356L;
> @Override
> public Row map(String s) {
> return new Row(typeInformation1.getArity());
> }
> }).returns(typeInformation1);
> tableEnv.registerDataStream("FUN_1", stream1, String.join(",", 
> typeInformation1.getFieldNames()) + ",PROCTIME.proctime");
> DataStreamSource stringDataStreamSource2 = env.fromElements(
> "HA"
> );
> String[] fields2 = new String[]{"C_NAME", "C_ADDRESS", "C_NATIONKEY"
> // key
> , "C_INT", "C_LONG"
> , "C_DOUBLE8", "C_DOUBLE14"
> , "C_DATE_EVENT", "C_TIMESTAMP_EVENT"};
> TypeInformation[] types2 = new TypeInformation[]{Types.STRING, 
> Types.STRING, Types.LONG
> // key
> , Types.INT, Types.LONG
> , Types.DOUBLE, Types.DOUBLE
> , Types.SQL_DATE, Types.SQL_TIMESTAMP};
> RowTypeInfo typeInformation2 = new RowTypeInfo(types2, fields2);
> DataStream stream2 = stringDataStreamSource2.map(new 
> MapFunction() {
> private static final long serialVersionUID = 2349572544179673349L;
> @Override
> public Row map(String s) {
> return new Row(typeInformation2.getArity());
> }
> }).returns(typeInformation2);
> tableEnv.registerDataStream("FUN_2", stream2, String.join(",", 
> typeInformation2.getFieldNames()) + ",PROCTIME.proctime");
> MyLookupTableSource tableSource = MyLookupTableSource.newBuilder()
> .withFieldNames(new String[]{
> "S_NAME", "S_ADDRESS", "S_PHONE"
> , "S_ACCTBAL", "S_COMMENT"
> // key
> , "S_INT", "S_LONG"
> , "S_DOUBLE8", "S_DOUBLE14"
> , "S_DOUBLE15", "S_DATE_EVENT", "S_TIMESTAMP_EVENT"})
> .withFieldTypes(new TypeInformation[]{
> Types.STRING, Types.STRING, Types.STRING
> , Types.DOUBLE, Types.STRING
> // key
> , Types.INT, Types.LONG
> , Types.DOUBLE, Types.DOUBLE
> , Types.DOUBLE, Types.SQL_DATE, Types.SQL_TIMESTAMP})
> .build();
> tableEnv.registerTableSource("INFO", tableSource);
> String sql = "SELECT LN(F.PS_INT),LOG(F2.C_INT,1)\n" +
> "  FROM (SELECT *\n" +
> "  FROM FUN_1 F1\n" +
> "  JOIN INFO FOR SYSTEM_TIME AS OF F1.PROCTIME D1\n" +
>

[GitHub] [flink] flinkbot edited a comment on issue #10705: [FLINK-15346][connectors / filesystem] Support treat empty column as null option in CsvTableSourceFactory

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10705: [FLINK-15346][connectors / 
filesystem] Support treat empty column as null option in CsvTableSourceFactory
URL: https://github.com/apache/flink/pull/10705#issuecomment-569245359
 
 
   
   ## CI report:
   
   * 825a2c9773fca40aeb85d056aec0e177a2ba9319 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142446415) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3955)
 
   * 16c3f1999a6f1e9bab234622905f946b98def353 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142452534) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3956)
 
   
   




[GitHub] [flink] flinkbot commented on issue #10706: [FLINK-15418][table-planner-blink] Set FlinkRelDistribution in StreamExecMatchRule

2019-12-27 Thread GitBox
flinkbot commented on issue #10706: [FLINK-15418][table-planner-blink] Set 
FlinkRelDistribution in StreamExecMatchRule
URL: https://github.com/apache/flink/pull/10706#issuecomment-569270452
 
 
   
   ## CI report:
   
   * daaa89195ca2ff2cc09823f6f8505b99b9010997 UNKNOWN
   
   




[jira] [Commented] (FLINK-15419) Validate SQL syntax not need to depend on connector jar

2019-12-27 Thread baijingjing (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004189#comment-17004189
 ] 

baijingjing commented on FLINK-15419:
-

* There is currently no exposed method for SQL syntax validation.

 * sqlUpdate does several things, including SQL parsing, SQL validation, and 
converting the SQL into an Operation.
 * There is an unexposed method, 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl#validate, that does exactly this.

From our experience extending Flink SQL: if you want to use 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl#validate, it depends 
on the Flink version. We did this on 1.8 and 1.9, and it was a big change between them.

Tables can be registered into the catalogManager before creating the FlinkPlannerImpl.
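The pre-registration idea above can be illustrated with a deliberately tiny, self-contained sketch. This is plain Java, not Flink's actual API: the `ToyCatalogValidator` class and its regex-based table extraction are hypothetical stand-ins for looking tables up in a catalog during validation, without ever touching a connector implementation.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ToyCatalogValidator {
    // Tiny stand-in for a catalog: just a set of registered table names.
    private final Set<String> catalog = new HashSet<>();

    void registerTable(String name) {
        catalog.add(name);
    }

    // Checks that every table referenced after FROM / INSERT INTO is registered.
    // No connector jar is needed for this kind of check.
    boolean validate(String sql) {
        Matcher m = Pattern.compile("(?:FROM|INTO)\\s+(\\w+)",
                Pattern.CASE_INSENSITIVE).matcher(sql);
        while (m.find()) {
            if (!catalog.contains(m.group(1))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        ToyCatalogValidator v = new ToyCatalogValidator();
        v.registerTable("sourceTable");
        v.registerTable("sinkTable");
        System.out.println(v.validate("INSERT INTO sinkTable SELECT f1 FROM sourceTable")); // true
        System.out.println(v.validate("SELECT * FROM unknownTable"));                       // false
    }
}
```

The point of the sketch is only the separation of concerns: checking catalog membership is independent of instantiating a table source, which is what currently forces the connector jar onto the classpath.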

 

 

 

> Validate SQL syntax not need to depend on connector jar
> ---
>
> Key: FLINK-15419
> URL: https://issues.apache.org/jira/browse/FLINK-15419
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Kaibo Zhou
>Priority: Major
> Fix For: 1.11.0
>
>
> As a platform user, I want to integrate Flink SQL in my platform.
> The users will register Source/Sink Tables and Functions to catalog service 
> through UI, and write SQL scripts on Web SQLEditor. I want to validate the 
> SQL syntax and validate that all catalog objects exist (table, fields, UDFs). 
> After some investigation, I decided to use the `tEnv.sqlUpdate/sqlQuery` API 
> to do this.`SqlParser` and`FlinkSqlParserImpl` is not a good choice, as it 
> will not read the catalog.
> The users have registered *Kafka* source/sink table in the catalog, so the 
> validation logic will be:
> {code:java}
> TableEnvironment tableEnv = 
> tEnv.registerCatalog(CATALOG_NAME, catalog);
> tEnv.useCatalog(CATALOG_NAME);
> tEnv.useDatabase(DB_NAME);
> tEnv.sqlUpdate("INSERT INTO sinkTable SELECT f1,f2 FROM sourceTable"); 
> or  
> tEnv.sqlQuery("SELECT * FROM tableName")
> {code}
> It will through exception on Flink 1.9.0 because I do not have 
> `flink-connector-kafka_2.11-1.9.0.jar`  in my classpath.
> {code:java}
> org.apache.flink.table.api.ValidationException: SQL validation failed. 
> findAndCreateTableSource 
> failed.org.apache.flink.table.api.ValidationException: SQL validation failed. 
> findAndCreateTableSource failed. at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:125)
>  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:82)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.parse(PlannerBase.scala:132)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:335)
> The following factories have been considered:
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.table.planner.delegation.BlinkPlannerFactory
> org.apache.flink.table.planner.delegation.BlinkExecutorFactory
> org.apache.flink.table.catalog.GenericInMemoryCatalogFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
>   at 
> org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:283)
>   at 
> org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:191)
>   at 
> org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:144)
>   at 
> org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:97)
>   at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:64)
> {code}
> For a platform provider, the user's SQL may depend on *ANY* connector or even 
> a custom connector. It is complicated to do dynamic loading connector jar 
> after parser the connector type in SQL. And this requires the users must 
> upload their custom connector jar before doing a syntax check.
> I hope that Flink can provide a friendly way to verify the syntax of SQL 
> whose tables/functions are already registered in the catalog, *NOT* need to 
> depend on the jar of the connector. This makes it easier for SQL to be 
> integrated by external platforms.
>   
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-27 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a UNKNOWN
   
   




[jira] [Commented] (FLINK-15230) flink1.9.1 table API JSON schema array type exception

2019-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004188#comment-17004188
 ] 

Jark Wu commented on FLINK-15230:
-

Could you try {{new Json().deriveSchema()}}  + {{new 
Schema().schema(TableSchema)}} or {{new Schema().field(String, DataType)}}? 

JSON schema strings and TypeInformation are both deprecated in Flink SQL. 

> flink1.9.1 table API JSON schema array type exception
> -
>
> Key: FLINK-15230
> URL: https://issues.apache.org/jira/browse/FLINK-15230
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
> Environment: flink1.9.1
>Reporter: kevin
>Priority: Major
>
> strings: { type: 'array', items: { type: 'string' } }
> .field("strings", Types.OBJECT_ARRAY(Types.STRING))
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Type LEGACY(BasicArrayTypeInfo) of table field 'strings' does not 
> match with type BasicArrayTypeInfo of the field 'strings' of the 
> TableSource return type.Exception in thread "main" 
> org.apache.flink.table.api.ValidationException: Type 
> LEGACY(BasicArrayTypeInfo) of table field 'strings' does not match 
> with type BasicArrayTypeInfo of the field 'strings' of the 
> TableSource return type. at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$4.apply(TableSourceUtil.scala:121)
>  at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$4.apply(TableSourceUtil.scala:92)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186) at 
> org.apache.flink.table.planner.sources.TableSourceUtil$.computeIndexMapping(TableSourceUtil.scala:92)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:100)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:55)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:55)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:86)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:46)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlan(StreamExecCalc.scala:46)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToTransformation(StreamExecSink.scala:185)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:154)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:50)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:50)
>  at 
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:61)
>  at 
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:60)
>  at 
> 

[jira] [Assigned] (FLINK-15425) update outdated document of Filesystem connector and csv format in Connect to External Systems

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15425:
---

Assignee: Leonard Xu

> update outdated document of Filesystem connector and csv format in Connect to 
> External Systems
> --
>
> Key: FLINK-15425
> URL: https://issues.apache.org/jira/browse/FLINK-15425
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.10.0
>
>
> Document of Connect to External Systems is outdated and can easily mislead 
> users.
> As of now,
> (1) The file system connector can only support the OldCsv format in YAML/DDL.
> (2) The Csv format cannot be used by the file system connector in YAML/DDL.
> The docs miss these points and may mislead users.





[jira] [Updated] (FLINK-15412) LocalExecutorITCase#testParameterizedTypes failed in travis

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15412:

Fix Version/s: (was: 1.10.0)
   1.9.2

> LocalExecutorITCase#testParameterizedTypes failed in travis
> ---
>
> Key: FLINK-15412
> URL: https://issues.apache.org/jira/browse/FLINK-15412
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Dian Fu
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The travis of release-1.9 failed with the following error:
> {code:java}
> 14:43:17.916 [INFO] Running 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase
> 14:44:47.388 [ERROR] Tests run: 34, Failures: 0, Errors: 1, Skipped: 1, Time 
> elapsed: 89.468 s <<< FAILURE! - in 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase
> 14:44:47.388 [ERROR] testParameterizedTypes[Planner: 
> blink](org.apache.flink.table.client.gateway.local.LocalExecutorITCase) Time 
> elapsed: 7.88 s <<< ERROR!
> org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL 
> statement at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.ValidationException: SQL validation 
> failed. findAndCreateTableSource failed
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.TableException: 
> findAndCreateTableSource failed
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: 
> Could not find a suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
> the classpath.
> Reason: No context matches.
> {code}
> instance: [https://api.travis-ci.org/v3/job/629636106/log.txt]
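The `NoMatchingTableFactoryException` above comes from factory discovery: Flink's TableFactoryService locates `TableFactory` implementations through Java's ServiceLoader mechanism, reading `META-INF/services` entries from every jar on the classpath, so a connector whose jar is absent is simply never seen. Below is a minimal stand-alone sketch of that mechanism; `java.sql.Driver` is used only as a stand-in service interface, since running Flink itself is out of scope here.

```java
import java.sql.Driver;
import java.util.ServiceLoader;

public class FactoryDiscovery {
    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services entries on the classpath.
        ServiceLoader<Driver> loader = ServiceLoader.load(Driver.class);
        int found = 0;
        for (Driver d : loader) {
            found++;
        }
        // If the jar providing an implementation is missing, the loop never
        // sees it -- which is exactly why the test fails with
        // "Could not find a suitable table factory ... No context matches."
        System.out.println("implementations on classpath: " + found);
    }
}
```

The count depends on what is on the classpath at run time, which mirrors why the same test can pass locally and fail on CI.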





[jira] [Updated] (FLINK-15425) update outdated document of Filesystem connector and csv format in Connect to External Systems

2019-12-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-15425:
---
Description: 
Document of Connect to External Systems is outdated and can easily mislead 
users.

As of now,

(1) The file system connector can only support the OldCsv format in YAML/DDL.

(2) The Csv format cannot be used by the file system connector in YAML/DDL.

The docs miss these points and may mislead users.

  was:Document of Connect to External Systems is outdated and is easy to 
mislead users.


> update outdated document of Filesystem connector and csv format in Connect to 
> External Systems
> --
>
> Key: FLINK-15425
> URL: https://issues.apache.org/jira/browse/FLINK-15425
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.10.0
>
>
> Document of Connect to External Systems is outdated and can easily mislead 
> users.
> As of now,
> (1) The file system connector can only support the OldCsv format in YAML/DDL.
> (2) The Csv format cannot be used by the file system connector in YAML/DDL.
> The docs miss these points and may mislead users.





[jira] [Assigned] (FLINK-15412) LocalExecutorITCase#testParameterizedTypes failed in travis

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15412:
---

Assignee: Rui Li

> LocalExecutorITCase#testParameterizedTypes failed in travis
> ---
>
> Key: FLINK-15412
> URL: https://issues.apache.org/jira/browse/FLINK-15412
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Dian Fu
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The travis of release-1.9 failed with the following error:
> {code:java}
> 14:43:17.916 [INFO] Running 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase
> 14:44:47.388 [ERROR] Tests run: 34, Failures: 0, Errors: 1, Skipped: 1, Time 
> elapsed: 89.468 s <<< FAILURE! - in 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase
> 14:44:47.388 [ERROR] testParameterizedTypes[Planner: 
> blink](org.apache.flink.table.client.gateway.local.LocalExecutorITCase) Time 
> elapsed: 7.88 s <<< ERROR!
> org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL 
> statement at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.ValidationException: SQL validation 
> failed. findAndCreateTableSource failed
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.TableException: 
> findAndCreateTableSource failed
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testParameterizedTypes(LocalExecutorITCase.java:557)
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: 
> Could not find a suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
> the classpath.
> Reason: No context matches.
> {code}
> instance: [https://api.travis-ci.org/v3/job/629636106/log.txt]





[jira] [Assigned] (FLINK-15413) ScalarOperatorsTest failed in travis

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15413:
---

Assignee: Jark Wu

> ScalarOperatorsTest failed in travis
> 
>
> Key: FLINK-15413
> URL: https://issues.apache.org/jira/browse/FLINK-15413
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dian Fu
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.9.2
>
>
> The travis of release-1.9 failed with the following error:
> {code:java}
> 14:50:19.796 [ERROR] ScalarOperatorsTest>ExpressionTestBase.evaluateExprs:161 
> Wrong result for: [CASE WHEN (CASE WHEN f2 = 1 THEN CAST('' as INT) ELSE 0 
> END) is null THEN 'null' ELSE 'not null' END] optimized to: [_UTF-16LE'not 
> null':VARCHAR(8) CHARACTER SET "UTF-16LE"] expected: but was: n]ull>
> {code}
> instance: [https://api.travis-ci.org/v3/job/629636107/log.txt]





[GitHub] [flink] lirui-apache commented on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-27 Thread GitBox
lirui-apache commented on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-569266869
 
 
   @JingsongLi @bowenli86 I have updated the PR to use writable OIs for constant 
arguments, so that we don't need our own implementations of OIs. Please note:
   1. Non-constant arguments still use java OIs, but I can also update that if 
you think we should be consistent.
   2. Also added tests for constant DATE and TIMESTAMP values and realized we 
need a shim for the `java -> writable` conversion.
   3. I haven't removed the shim for getting constant OIs. Hopefully that makes 
review a little bit easier. I can do that if you think we're moving in the right 
direction.




[jira] [Updated] (FLINK-15425) update outdated document of Filesystem connector and csv format in Connect to External Systems

2019-12-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-15425:
---
Summary: update outdated document of Filesystem connector and csv format in 
Connect to External Systems  (was: update outdated document of Connect to 
External Systems)

> update outdated document of Filesystem connector and csv format in Connect to 
> External Systems
> --
>
> Key: FLINK-15425
> URL: https://issues.apache.org/jira/browse/FLINK-15425
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.10.0
>
>
> Document of Connect to External Systems is outdated and is easy to mislead 
> users.





[jira] [Commented] (FLINK-15409) Add semicolon to WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004180#comment-17004180
 ] 

Jark Wu commented on FLINK-15409:
-

[~hailong wang] Good catch! Are you willing to contribute the fix? 

> Add semicolon to WindowJoinUtil#generateJoinFunction 
> '$collectorTerm.collect($joinedRow)' statement
> ---
>
> Key: FLINK-15409
> URL: https://issues.apache.org/jira/browse/FLINK-15409
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.1
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> In WindowJoinUtil#generateJoinFunction, when otherCondition is None, it will 
> go into this statement:
> {code:java}
> case None =>
>   s"""
>  |$buildJoinedRow
>  |$collectorTerm.collect($joinedRow)
>  |""".stripMargin
> {code}
> It misses a semicolon after collect($joinedRow), which causes compilation to 
> fail:
> {code:java}
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.Caused by: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue. at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:81)
>  at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:65)
>  at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:78)
>  at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:52)
>  ... 26 moreCaused by: org.codehaus.commons.compiler.CompileException: Line 
> 28, Column 21: Expression "c.collect(joinedRow)" is not a type
> {code}
>  

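The reported fix is simply to terminate the generated Java statement with a semicolon, since Janino compiles the interpolated string as Java source. A minimal, self-contained sketch of the corrected codegen branch (names `genCollectStatement`, `collectorTerm`, and `joinedRow` follow the snippet above; the surrounding `$buildJoinedRow` part is omitted for brevity):

```scala
// Sketch of the fix for the None branch of generateJoinFunction: the
// generated Java statement must end with ';', otherwise Janino fails with
// `Expression "c.collect(joinedRow)" is not a type`.
object JoinCodegenSketch {
  def genCollectStatement(collectorTerm: String, joinedRow: String): String =
    s"""
       |$collectorTerm.collect($joinedRow);
       |""".stripMargin

  def main(args: Array[String]): Unit = {
    val stmt = genCollectStatement("c", "joinedRow")
    // The emitted statement is now a valid Java statement.
    assert(stmt.contains("c.collect(joinedRow);"))
    println(stmt.trim)
  }
}
```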


--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15420) Cast string to timestamp will lose precision

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15420:
---

Assignee: Zhenghua Gao

> Cast string to timestamp will lose precision
> -
>
> Key: FLINK-15420
> URL: https://issues.apache.org/jira/browse/FLINK-15420
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Zhenghua Gao
>Priority: Blocker
> Fix For: 1.10.0
>
>
> {code:java}
> cast('2010-10-14 12:22:22.123456' as timestamp(9))
> {code}
> Will produce "2010-10-14 12:22:22.123" in the Blink planner; this should 
> not happen.
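For reference, `java.time` has no trouble preserving the microsecond part of that literal, which is what a cast to TIMESTAMP(9) should also keep. A hedged, self-contained sketch (the formatter pattern is an illustration, not the planner's actual parsing code):

```scala
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// Illustrative only: java.time keeps the full fractional second when
// parsing the literal from the issue, so the precision loss happens in
// the planner's cast logic, not in the JVM's timestamp representation.
object TimestampPrecisionSketch {
  val fmt: DateTimeFormatter =
    DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSS")

  def main(args: Array[String]): Unit = {
    val ts = LocalDateTime.parse("2010-10-14 12:22:22.123456", fmt)
    // 123456 microseconds == 123456000 nanoseconds: nothing is truncated.
    assert(ts.getNano == 123456000)
    println(ts)
  }
}
```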



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15418) StreamExecMatchRule not set FlinkRelDistribution

2019-12-27 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15418:
---

Assignee: Benchao Li

> StreamExecMatchRule not set FlinkRelDistribution
> 
>
> Key: FLINK-15418
> URL: https://issues.apache.org/jira/browse/FLINK-15418
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1, 1.10.0
>Reporter: Benchao Li
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> StreamExecMatchRule forgets to set FlinkRelDistribution. When a match 
> clause uses `partition by` and parallelism > 1, it will result in the 
> following exception:
> ```
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.state.heap.StateTable.put(StateTable.java:336)
>   at 
> org.apache.flink.runtime.state.heap.StateTable.put(StateTable.java:159)
>   at 
> org.apache.flink.runtime.state.heap.HeapMapState.put(HeapMapState.java:100)
>   at 
> org.apache.flink.runtime.state.UserFacingMapState.put(UserFacingMapState.java:52)
>   at 
> org.apache.flink.cep.nfa.sharedbuffer.SharedBuffer.registerEvent(SharedBuffer.java:141)
>   at 
> org.apache.flink.cep.nfa.sharedbuffer.SharedBufferAccessor.registerEvent(SharedBufferAccessor.java:74)
>   at org.apache.flink.cep.nfa.NFA$EventWrapper.getEventId(NFA.java:483)
>   at org.apache.flink.cep.nfa.NFA.computeNextStates(NFA.java:605)
>   at org.apache.flink.cep.nfa.NFA.doProcess(NFA.java:292)
>   at org.apache.flink.cep.nfa.NFA.process(NFA.java:228)
>   at 
> org.apache.flink.cep.operator.CepOperator.processEvent(CepOperator.java:420)
>   at 
> org.apache.flink.cep.operator.CepOperator.processElement(CepOperator.java:242)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:488)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:748)
> ```
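The NPE arises because keyed state assumes every record with the same partition key reaches the same parallel subtask, which only holds if the planner inserts a hash distribution on the partition key. A hypothetical illustration of that invariant (plain Scala, no Flink dependency; `targetSubtask` mimics hash partitioning, it is not Flink's actual channel selector):

```scala
// Why the missing FlinkRelDistribution breaks CepOperator: hash
// partitioning routes equal keys to the same subtask, so keyed state for
// a key lives in exactly one place. A forward/unspecified distribution
// gives no such guarantee when parallelism > 1.
object HashDistributionSketch {
  def targetSubtask(key: String, parallelism: Int): Int =
    math.abs(key.hashCode % parallelism)

  def main(args: Array[String]): Unit = {
    val keys = Seq("user-1", "user-2", "user-1")
    val routed = keys.map(k => k -> targetSubtask(k, 4))
    // Same key always routes to the same subtask under hash partitioning.
    assert(routed(0)._2 == routed(2)._2)
    println(routed)
  }
}
```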



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

