[GitHub] [flink] wuchong commented on issue #9330: [FLINK-13546][travis] Add TPC-H E2E test in travis cron job

2019-08-01 Thread GitBox
wuchong commented on issue #9330: [FLINK-13546][travis] Add TPC-H E2E test in 
travis cron job
URL: https://github.com/apache/flink/pull/9330#issuecomment-517562362
 
 
   The change looks good to me.
   
   It would be better to push a branch to your forked repository to test whether 
the cron job works as expected.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9325: [FLINK-13542][metrics] Update datadog reporter to send metrics if the…

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9325: [FLINK-13542][metrics] Update datadog 
reporter to send metrics if the…
URL: https://github.com/apache/flink/pull/9325#issuecomment-517481903
 
 
   ## CI report:
   
   * 166fa42707dd620d54756c1f28103f85649f7941 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121681747)
   




[jira] [Comment Edited] (FLINK-13377) Streaming SQL e2e test failed on travis

2019-08-01 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898576#comment-16898576
 ] 

Jark Wu edited comment on FLINK-13377 at 8/2/19 5:38 AM:
-

I pushed a branch to run just the stream-sql e2e case 20 times: 
https://github.com/wuchong/flink/tree/stream-sql-e2e

I will report back once we can reproduce the problem on Travis. 


was (Author: jark):
I pushed a branch to just test stream-sql e2e case in 20 times. 
https://github.com/wuchong/flink/tree/stream-sql-e2e

Will come back when we reproduce the problem on travis. 

> Streaming SQL e2e test failed on travis
> ---
>
> Key: FLINK-13377
> URL: https://issues.apache.org/jira/browse/FLINK-13377
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 198.jpg, 495.jpg
>
>
> This is an instance: [https://api.travis-ci.org/v3/job/562011491/log.txt]
> {code}
> ==
>  Running 'Streaming SQL end-to-end test' 
> ==
>  TEST_DATA_DIR: 
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314
>  Flink dist directory: 
> /home/travis/build/apache/flink/flink-dist/target/flink-1.9-SNAPSHOT-bin/flink-1.9-SNAPSHOT
>  Starting cluster. Starting standalonesession daemon on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Waiting for 
> dispatcher REST endpoint to come up... Waiting for dispatcher REST endpoint 
> to come up... Waiting for dispatcher REST endpoint to come up... Waiting for 
> dispatcher REST endpoint to come up... Waiting for dispatcher REST endpoint 
> to come up... Waiting for dispatcher REST endpoint to come up... Dispatcher 
> REST endpoint is up. [INFO] 1 instance(s) of taskexecutor are already running 
> on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor 
> daemon on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [INFO] 2 
> instance(s) of taskexecutor are already running on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [INFO] 3 instance(s) 
> of taskexecutor are already running on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting execution 
> of program Program execution finished Job with JobID 
> 7c7b66dd4e8dc17e229700b1c746aba6 has finished. Job Runtime: 77371 ms cat: 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314/out/result/20/.part-*':
>  No such file or directory cat: 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314/out/result/20/part-*':
>  No such file or directory FAIL StreamSQL: Output hash mismatch. Got 
> d41d8cd98f00b204e9800998ecf8427e, expected b29f14ed221a936211202ff65b51ee26. 
> head hexdump of actual: Stopping taskexecutor daemon (pid: 9983) on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping standalonesession 
> daemon (pid: 8088) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. 
> Skipping taskexecutor daemon (pid: 21571), because it is not running anymore 
> on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor 
> daemon (pid: 22154), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 22595), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 30622), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 3850), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 4405), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 4839), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping taskexecutor daemon 
> (pid: 8531) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping 
> taskexecutor daemon (pid: 9077) on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping taskexecutor daemon 
> (pid: 9518) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [FAIL] 
> Test script contains errors. Checking of logs skipped. [FAIL] 'Streaming SQL 
> end-to-end test' failed after 1 minutes and 51 seconds! Test exited with exit 
> code 1
> {code}

[jira] [Commented] (FLINK-13377) Streaming SQL e2e test failed on travis

2019-08-01 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898576#comment-16898576
 ] 

Jark Wu commented on FLINK-13377:
-

I pushed a branch to run just the stream-sql e2e case 20 times: 
https://github.com/wuchong/flink/tree/stream-sql-e2e

I will report back once we can reproduce the problem on Travis. 

> Streaming SQL e2e test failed on travis
> ---
>
> Key: FLINK-13377
> URL: https://issues.apache.org/jira/browse/FLINK-13377
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 198.jpg, 495.jpg
>
>
> This is an instance: [https://api.travis-ci.org/v3/job/562011491/log.txt]
> {code}
> ==
>  Running 'Streaming SQL end-to-end test' 
> ==
>  TEST_DATA_DIR: 
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314
>  Flink dist directory: 
> /home/travis/build/apache/flink/flink-dist/target/flink-1.9-SNAPSHOT-bin/flink-1.9-SNAPSHOT
>  Starting cluster. Starting standalonesession daemon on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Waiting for 
> dispatcher REST endpoint to come up... Waiting for dispatcher REST endpoint 
> to come up... Waiting for dispatcher REST endpoint to come up... Waiting for 
> dispatcher REST endpoint to come up... Waiting for dispatcher REST endpoint 
> to come up... Waiting for dispatcher REST endpoint to come up... Dispatcher 
> REST endpoint is up. [INFO] 1 instance(s) of taskexecutor are already running 
> on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor 
> daemon on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [INFO] 2 
> instance(s) of taskexecutor are already running on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [INFO] 3 instance(s) 
> of taskexecutor are already running on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting execution 
> of program Program execution finished Job with JobID 
> 7c7b66dd4e8dc17e229700b1c746aba6 has finished. Job Runtime: 77371 ms cat: 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314/out/result/20/.part-*':
>  No such file or directory cat: 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314/out/result/20/part-*':
>  No such file or directory FAIL StreamSQL: Output hash mismatch. Got 
> d41d8cd98f00b204e9800998ecf8427e, expected b29f14ed221a936211202ff65b51ee26. 
> head hexdump of actual: Stopping taskexecutor daemon (pid: 9983) on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping standalonesession 
> daemon (pid: 8088) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. 
> Skipping taskexecutor daemon (pid: 21571), because it is not running anymore 
> on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor 
> daemon (pid: 22154), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 22595), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 30622), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 3850), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 4405), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 4839), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping taskexecutor daemon 
> (pid: 8531) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping 
> taskexecutor daemon (pid: 9077) on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping taskexecutor daemon 
> (pid: 9518) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [FAIL] 
> Test script contains errors. Checking of logs skipped. [FAIL] 'Streaming SQL 
> end-to-end test' failed after 1 minutes and 51 seconds! Test exited with exit 
> code 1
> {code}
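
A side note on the log above: the "got" hash d41d8cd98f00b204e9800998ecf8427e is the MD5 of 
empty input, so the check read no output at all, which is consistent with the two missing 
part-file paths. A minimal JDK-only snippet (illustrative only, not part of the test scripts) 
reproduces that value:

{code:java}
import java.security.MessageDigest;

public class EmptyMd5 {
    public static void main(String[] args) throws Exception {
        // MD5 over zero bytes, i.e. what the hash check effectively computes
        // when no part files can be read.
        byte[] digest = MessageDigest.getInstance("MD5").digest(new byte[0]);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        System.out.println(hex); // prints d41d8cd98f00b204e9800998ecf8427e
    }
}
{code}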



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zentol commented on issue #9325: [FLINK-13542][metrics] Update datadog reporter to send metrics if the…

2019-08-01 Thread GitBox
zentol commented on issue #9325: [FLINK-13542][metrics] Update datadog reporter 
to send metrics if the…
URL: https://github.com/apache/flink/pull/9325#issuecomment-517556352
 
 
   This looks a bit like introducing unnecessary complexity; the time-frame in 
which no metrics exist is _incredibly_ small.
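
   For context, the trade-off being discussed is whether the reporter should skip its HTTP 
request while no metrics have been registered yet. A hypothetical sketch of such a guard 
(the names below are made up for illustration and are not the PR's actual code):

```java
import java.util.List;

/** Hypothetical stand-in for a reporter's send path; not Flink's actual Datadog reporter. */
public class EmptyReportGuard {

    interface HttpSender {
        void send(List<String> series);
    }

    /** Skips the request entirely while nothing has been registered yet. */
    static void report(List<String> series, HttpSender sender) {
        if (series.isEmpty()) {
            // This branch is only reachable in the short gap between reporter
            // start-up and the first metric registration.
            return;
        }
        sender.send(series);
    }
}
```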




[GitHub] [flink] flinkbot edited a comment on issue #9324: [FLINK-13541][state-processor-api] State Processor Api sets the wrong key selector when writing savepoints

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9324: [FLINK-13541][state-processor-api] 
State Processor Api sets the wrong key selector when writing savepoints
URL: https://github.com/apache/flink/pull/9324#issuecomment-517449414
 
 
   ## CI report:
   
   * 75ce1d9c9c2daa0140befe909dcc80217c797f89 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121668935)
   




[GitHub] [flink] flinkbot commented on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-01 Thread GitBox
flinkbot commented on issue #9331: [FLINK-13523][table-planner-blink] Verify 
and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517548502
 
 
   ## CI report:
   
   * 5e757e180999aea0a3346ba841d9d48e456cdc0c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121703207)
   




[GitHub] [flink] flinkbot edited a comment on issue #9323: [FLINK-13535][kafka] do not abort transactions twice during KafkaProducer startup

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9323: [FLINK-13535][kafka] do not abort 
transactions twice during KafkaProducer startup
URL: https://github.com/apache/flink/pull/9323#issuecomment-517319186
 
 
   ## CI report:
   
   * 821776478355299e757e9436bc2661fbf09c407a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121619014)
   




[jira] [Resolved] (FLINK-13427) HiveCatalog's createFunction fails when function name has upper-case characters

2019-08-01 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-13427.
-
   Resolution: Fixed
Fix Version/s: 1.10.0

Fixed in 1.10.0: 36fdbefc098421781a7cd56ed3440336d29f5811
Fixed in 1.9.0: 32dd1b17eee257ff2e7f757c15a3ee54ff6dee2d

> HiveCatalog's createFunction fails when function name has upper-case 
> characters
> ---
>
> Key: FLINK-13427
> URL: https://issues.apache.org/jira/browse/FLINK-13427
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
>  
> {code:java}
> hiveCatalog.createFunction(
>   new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"),
>   new CatalogFunctionImpl(TestSimpleUDF.class.getCanonicalName(), new HashMap<>()),
>   false);
> hiveCatalog.getFunction(new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"));
> {code}
> There is an exception now:
> {code:java}
> org.apache.flink.table.catalog.exceptions.FunctionNotExistException: Function 
> default.myUdf does not exist in Catalog test-catalog.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1030)
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase.testGenericTable(HiveCatalogITCase.java:146)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runTestMethod(FlinkStandaloneHiveRunner.java:170)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:155)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:93)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: NoSuchObjectException(message:Function default.myUdf does not 
> exist)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result.read(ThriftHiveMetastore.java)
> {code}
> It seems there are some bugs in HiveCatalog when upper-case names are used.
> Maybe we should normalizeName in createFunction...
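
A minimal sketch of the normalization idea mentioned above (a hypothetical helper, not the 
actual HiveCatalog fix): the Hive metastore treats function names case-insensitively and 
stores them lower-cased, so the catalog has to normalize the name on both the create and the 
lookup path.

{code:java}
import java.util.Locale;

/** Hypothetical helper for illustration only. */
public final class FunctionNameNormalizer {

    private FunctionNameNormalizer() {
    }

    /** Lower-cases a function name the way the Hive metastore stores it. */
    public static String normalize(String functionName) {
        return functionName.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // "myUdf" from the example above has to be registered and looked up as "myudf".
        System.out.println(normalize("myUdf"));
    }
}
{code}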





[GitHub] [flink] asfgit closed pull request #9254: [FLINK-13427][hive] HiveCatalog's createFunction fails when function …

2019-08-01 Thread GitBox
asfgit closed pull request #9254: [FLINK-13427][hive] HiveCatalog's 
createFunction fails when function …
URL: https://github.com/apache/flink/pull/9254
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #7262: [FLINK-10478] Kafka Producer wrongly formats % for transaction ID

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #7262: [FLINK-10478] Kafka Producer wrongly 
formats % for transaction ID
URL: https://github.com/apache/flink/pull/7262#issuecomment-510752916
 
 
   ## CI report:
   
   * 15f05f4c5791d9d42610099324e59057d26bd3ff : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118882595)
   * 2fc680ca11665e05fe0a66f3a92ba5e5d2d734cc : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120541120)
   * 0a3b0c74582dca5a5ccad0477b4c549e73a15700 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121702839)
   




[GitHub] [flink] flinkbot commented on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-01 Thread GitBox
flinkbot commented on issue #9331: [FLINK-13523][table-planner-blink] Verify 
and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517546275
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] docete opened a new pull request #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-01 Thread GitBox
docete opened a new pull request #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331
 
 
   ## What is the purpose of the change
   
   Verify and correct arithmetic function's semantic for Blink planner
   
   ## Brief change log
   
   - Refactor the arithmetic divide operator to keep it compatible with the old planner
   - Refactor the avg aggregate function to keep it compatible with the old planner
   - Remove the non-standard bitwise/div scalar functions from the Blink planner
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[jira] [Updated] (FLINK-13523) Verify and correct arithmetic function's semantic for Blink planner

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13523:
---
Labels: pull-request-available  (was: )

> Verify and correct arithmetic function's semantic for Blink planner
> ---
>
> Key: FLINK-13523
> URL: https://issues.apache.org/jira/browse/FLINK-13523
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>






[GitHub] [flink] flinkbot commented on issue #9330: [FLINK-13546][travis] Add TPC-H E2E test in travis cron job

2019-08-01 Thread GitBox
flinkbot commented on issue #9330: [FLINK-13546][travis] Add TPC-H E2E test in 
travis cron job
URL: https://github.com/apache/flink/pull/9330#issuecomment-517545433
 
 
   ## CI report:
   
   * 5f1c8dda9ff5cda54cdea47e3dc2a8e1860822cf : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121702524)
   




[GitHub] [flink] leesf commented on issue #7262: [FLINK-10478] Kafka Producer wrongly formats % for transaction ID

2019-08-01 Thread GitBox
leesf commented on issue #7262: [FLINK-10478] Kafka Producer wrongly formats % 
for transaction ID
URL: https://github.com/apache/flink/pull/7262#issuecomment-517544752
 
 
   @becketqin Updated this PR to address your comment.




[GitHub] [flink] flinkbot edited a comment on issue #9250: [FLINK-13371][coordination] Prevent leaks of blocking partitions

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9250: [FLINK-13371][coordination] Prevent 
leaks of blocking partitions 
URL: https://github.com/apache/flink/pull/9250#issuecomment-515772917
 
 
   ## CI report:
   
   * 4e15048a256b338df18ad8f9d89e0a576ae06a27 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120980283)
   * fbca009f51dc08f4a497ceab8628a59c307d5bc4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121384142)
   * 95c61d66b51c9c58cb9bd9cb4b6a575851284ca6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121389395)
   * e6967758d6e66e149a2f9623d67a19b26596e3ea : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121390402)
   * b5685e1c18955a6646b3fe4cd48cc1259b2afa8e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121396586)
   * 74f71d4b93b48d53b724fc069c10ccb5e40d41dd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121544224)
   * 0218532ef99e5c3d1d407f65eda411a77c9b3996 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121550485)
   * 6f631d5bfe987baf31ab6dec71b9405c00423e4f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121588808)
   * cb678bfc9821e4cbcbf324010b7242c0ff99dd53 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121613789)
   * c94afaa7c60621886f7982711b68830e2dd618b6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121615460)
   




[GitHub] [flink] flinkbot commented on issue #9330: [FLINK-13546][travis] Add TPC-H E2E test in travis cron job

2019-08-01 Thread GitBox
flinkbot commented on issue #9330: [FLINK-13546][travis] Add TPC-H E2E test in 
travis cron job
URL: https://github.com/apache/flink/pull/9330#issuecomment-517543553
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13546) Run TPC-H E2E test on travis cron job

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13546:
---
Labels: pull-request-available  (was: )

> Run TPC-H E2E test on travis cron job 
> --
>
> Key: FLINK-13546
> URL: https://issues.apache.org/jira/browse/FLINK-13546
> Project: Flink
>  Issue Type: Task
>  Components: Travis
>Reporter: Jark Wu
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>
> FLINK-13436 added a TPC-H e2e test but didn't include it in Travis. We should 
> add it to the Travis cron job. One option is {{split_misc.sh}}; another is 
> {{split_heavy.sh}}.





[GitHub] [flink] TsReaper opened a new pull request #9330: [FLINK-13546][travis] Add TPC-H E2E test in travis cron job

2019-08-01 Thread GitBox
TsReaper opened a new pull request #9330: [FLINK-13546][travis] Add TPC-H E2E 
test in travis cron job
URL: https://github.com/apache/flink/pull/9330
 
 
   ## What is the purpose of the change
   
   This PR adds the TPC-H E2E test to the Travis cron job.
   
   ## Brief change log
   
- Add TPC-H E2E test in `split_heavy.sh`
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable




[jira] [Created] (FLINK-13546) Run TPC-H E2E test on travis cron job

2019-08-01 Thread Jark Wu (JIRA)
Jark Wu created FLINK-13546:
---

 Summary: Run TPC-H E2E test on travis cron job 
 Key: FLINK-13546
 URL: https://issues.apache.org/jira/browse/FLINK-13546
 Project: Flink
  Issue Type: Task
  Components: Travis
Reporter: Jark Wu
Assignee: Caizhi Weng
 Fix For: 1.9.0, 1.10.0


FLINK-13436 added a TPC-H e2e test but didn't include it in Travis. We should 
add it to the Travis cron job. One option is {{split_misc.sh}}; another is 
{{split_heavy.sh}}.





[GitHub] [flink] flinkbot edited a comment on issue #9250: [FLINK-13371][coordination] Prevent leaks of blocking partitions

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9250: [FLINK-13371][coordination] Prevent 
leaks of blocking partitions 
URL: https://github.com/apache/flink/pull/9250#issuecomment-515772917
 
 
   ## CI report:
   
   * 4e15048a256b338df18ad8f9d89e0a576ae06a27 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120980283)
   * fbca009f51dc08f4a497ceab8628a59c307d5bc4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121384142)
   * 95c61d66b51c9c58cb9bd9cb4b6a575851284ca6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121389395)
   * e6967758d6e66e149a2f9623d67a19b26596e3ea : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121390402)
   * b5685e1c18955a6646b3fe4cd48cc1259b2afa8e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121396586)
   * 74f71d4b93b48d53b724fc069c10ccb5e40d41dd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121544224)
   * 0218532ef99e5c3d1d407f65eda411a77c9b3996 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121550485)
   * 6f631d5bfe987baf31ab6dec71b9405c00423e4f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121588808)
   * cb678bfc9821e4cbcbf324010b7242c0ff99dd53 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121613789)
   * c94afaa7c60621886f7982711b68830e2dd618b6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121615460)
   




[GitHub] [flink] flinkbot commented on issue #9329: [FLINK-13545] [table-planner-blink] JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin

2019-08-01 Thread GitBox
flinkbot commented on issue #9329: [FLINK-13545] [table-planner-blink] 
JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin
URL: https://github.com/apache/flink/pull/9329#issuecomment-517539518
 
 
   ## CI report:
   
   * 079992c709866991888ec10cda40ec0395138dc2 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121700683)
   




[jira] [Resolved] (FLINK-13436) Add TPC-H queries as E2E tests

2019-08-01 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-13436.
-
   Resolution: Fixed
Fix Version/s: 1.10.0

[table-api-java] Change the default value of table.exec.sort.default-limit to -1
 - Fixed in 1.10.0: 7abac4e938629c46a0b5eca0ab6da17461b78c59
 - Fixed in 1.9.0: 816741b4b718477dab9cda9f7fdfb72703a72a2e

[FLINK-13436][e2e] Add TPC-H queries as E2E tests
 - Fixed in 1.10.0: 274ff58e52c826c5f358ddbc539c96de4d8af801
 - Fixed in 1.9.0: c465ce1fe4d4dd7188d3cc77d183ac38a8af7e2c

> Add TPC-H queries as E2E tests
> --
>
> Key: FLINK-13436
> URL: https://issues.apache.org/jira/browse/FLINK-13436
> Project: Flink
>  Issue Type: Test
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We should add the TPC-H queries as E2E tests in order to verify the blink 
> planner.





[GitHub] [flink] flinkbot commented on issue #9329: [FLINK-13545] [table-planner-blink] JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin

2019-08-01 Thread GitBox
flinkbot commented on issue #9329: [FLINK-13545] [table-planner-blink] 
JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin
URL: https://github.com/apache/flink/pull/9329#issuecomment-517538262
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #9253: [Flink-12164][tests] Harden JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9253: [Flink-12164][tests] Harden 
JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout
URL: https://github.com/apache/flink/pull/9253#issuecomment-515843418
 
 
   ## CI report:
   
   * ee881273027480bd8901c2a06cc4f082f1c0604c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121007482)
   * f786c67f7c0b311d46649183300cf797a24e936f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121063459)
   * 5389b3ce848744dfd79afd9b8d5d57cf25013ecd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121549144)
   * b3ee3de720be9eab9138e537cc763e093af933fe : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121700340)
   




[jira] [Updated] (FLINK-13545) JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13545:
---
Labels: pull-request-available  (was: )

> JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin
> --
>
> Key: FLINK-13545
> URL: https://issues.apache.org/jira/browse/FLINK-13545
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>
> Running TPC-DS query 14.a on the Blink planner throws an exception:
> java.lang.ArrayIndexOutOfBoundsException: 84
>   at 
> org.apache.calcite.rel.rules.JoinToMultiJoinRule$InputReferenceCounter.visitInputRef(JoinToMultiJoinRule.java:564)
>   at 
> org.apache.calcite.rel.rules.JoinToMultiJoinRule$InputReferenceCounter.visitInputRef(JoinToMultiJoinRule.java:555)
>   at org.apache.calcite.rex.RexInputRef.accept(RexInputRef.java:112)
>   at 
> org.apache.calcite.rex.RexVisitorImpl.visitCall(RexVisitorImpl.java:80)
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:191)
>   at 
> org.apache.calcite.rel.rules.JoinToMultiJoinRule.addOnJoinFieldRefCounts(JoinToMultiJoinRule.java:481)
>   at 
> org.apache.calcite.rel.rules.JoinToMultiJoinRule.onMatch(JoinToMultiJoinRule.java:166)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:319)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:560)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:419)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:284)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:215)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:202)
> The reason is that {{JoinToMultiJoinRule}} should not match SEMI/ANTI LogicalJoin. 
> Before Calcite 1.20, a SEMI join was represented by {{SemiJoin}}, which is not 
> matched by {{JoinToMultiJoinRule}}.





[GitHub] [flink] godfreyhe opened a new pull request #9329: [FLINK-13545] [table-planner-blink] JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin

2019-08-01 Thread GitBox
godfreyhe opened a new pull request #9329: [FLINK-13545] [table-planner-blink] 
JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin
URL: https://github.com/apache/flink/pull/9329
 
 
   
   
   ## What is the purpose of the change
   
   *fix "JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin"*
   
   
   ## Brief change log
   
 - *Copy JoinToMultiJoinRule from Calcite to the Blink planner, and make the new 
rule not match SEMI/ANTI LogicalJoin*
   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
 - *Added FlinkJoinToMultiJoinRuleTest to verify the bug*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   




[GitHub] [flink] asfgit closed pull request #9312: [FLINK-13436][end-to-end-tests] Add TPC-H queries as E2E tests

2019-08-01 Thread GitBox
asfgit closed pull request #9312: [FLINK-13436][end-to-end-tests] Add TPC-H 
queries as E2E tests
URL: https://github.com/apache/flink/pull/9312
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #9328: [FLINK-13521][sql-client] Allow setting configurations in SQL CLI

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9328: [FLINK-13521][sql-client] Allow 
setting configurations in SQL CLI
URL: https://github.com/apache/flink/pull/9328#issuecomment-517534712
 
 
   ## CI report:
   
   * a9b4a82d084b56a33355e6819462a79d0d5441ac : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121699234)
   * 2c2f2353f8939126d8eb4f065d2aef5294e02feb : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121699657)
   




[GitHub] [flink] flinkbot edited a comment on issue #9313: [FLINK-13504][table] Fixed shading issues in table modules.

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9313:  [FLINK-13504][table] Fixed shading 
issues in table modules. 
URL: https://github.com/apache/flink/pull/9313#issuecomment-517206630
 
 
   ## CI report:
   
   * 8f5327afea7e343ed0effb541050ebfdc6f443e4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121544194)
   * afe7fed18e325b39d4001abd797f282b8cea1f0a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121565778)
   * 46ea0c967baf3633eb70a23a5528aa17e685b6d6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121612042)
   




[GitHub] [flink] flinkbot commented on issue #9328: [FLINK-13521][sql-client] Allow setting configurations in SQL CLI

2019-08-01 Thread GitBox
flinkbot commented on issue #9328: [FLINK-13521][sql-client] Allow setting 
configurations in SQL CLI
URL: https://github.com/apache/flink/pull/9328#issuecomment-517534712
 
 
   ## CI report:
   
   * a9b4a82d084b56a33355e6819462a79d0d5441ac : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121699234)
   




[jira] [Commented] (FLINK-13545) JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin

2019-08-01 Thread godfrey he (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898519#comment-16898519
 ] 

godfrey he commented on FLINK-13545:


I would like to fix this bug.

> JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin
> --
>
> Key: FLINK-13545
> URL: https://issues.apache.org/jira/browse/FLINK-13545
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> Running TPC-DS query 14.a on the Blink planner throws an exception:
> java.lang.ArrayIndexOutOfBoundsException: 84
>   at 
> org.apache.calcite.rel.rules.JoinToMultiJoinRule$InputReferenceCounter.visitInputRef(JoinToMultiJoinRule.java:564)
>   at 
> org.apache.calcite.rel.rules.JoinToMultiJoinRule$InputReferenceCounter.visitInputRef(JoinToMultiJoinRule.java:555)
>   at org.apache.calcite.rex.RexInputRef.accept(RexInputRef.java:112)
>   at 
> org.apache.calcite.rex.RexVisitorImpl.visitCall(RexVisitorImpl.java:80)
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:191)
>   at 
> org.apache.calcite.rel.rules.JoinToMultiJoinRule.addOnJoinFieldRefCounts(JoinToMultiJoinRule.java:481)
>   at 
> org.apache.calcite.rel.rules.JoinToMultiJoinRule.onMatch(JoinToMultiJoinRule.java:166)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:319)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:560)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:419)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:284)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:215)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:202)
> The reason is that {{JoinToMultiJoinRule}} should not match SEMI/ANTI LogicalJoin. 
> Before Calcite 1.20, a SEMI join was represented by {{SemiJoin}}, which is not 
> matched by {{JoinToMultiJoinRule}}.





[jira] [Created] (FLINK-13545) JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin

2019-08-01 Thread godfrey he (JIRA)
godfrey he created FLINK-13545:
--

 Summary: JoinToMultiJoinRule should not match SEMI/ANTI LogicalJoin
 Key: FLINK-13545
 URL: https://issues.apache.org/jira/browse/FLINK-13545
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.9.0, 1.10.0
Reporter: godfrey he
 Fix For: 1.9.0, 1.10.0


Running TPC-DS query 14.a on the Blink planner throws an exception:

java.lang.ArrayIndexOutOfBoundsException: 84

at 
org.apache.calcite.rel.rules.JoinToMultiJoinRule$InputReferenceCounter.visitInputRef(JoinToMultiJoinRule.java:564)
at 
org.apache.calcite.rel.rules.JoinToMultiJoinRule$InputReferenceCounter.visitInputRef(JoinToMultiJoinRule.java:555)
at org.apache.calcite.rex.RexInputRef.accept(RexInputRef.java:112)
at 
org.apache.calcite.rex.RexVisitorImpl.visitCall(RexVisitorImpl.java:80)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:191)
at 
org.apache.calcite.rel.rules.JoinToMultiJoinRule.addOnJoinFieldRefCounts(JoinToMultiJoinRule.java:481)
at 
org.apache.calcite.rel.rules.JoinToMultiJoinRule.onMatch(JoinToMultiJoinRule.java:166)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:319)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:560)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:419)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:284)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:215)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:202)


The reason is that {{JoinToMultiJoinRule}} should not match SEMI/ANTI LogicalJoin. 
Before Calcite 1.20, a SEMI join was represented by {{SemiJoin}}, which is not 
matched by {{JoinToMultiJoinRule}}.
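
A minimal sketch of the guard such a fix needs (assuming Calcite 1.20's Join/JoinRelType API; 
this is not the actual code of the rule copied in the fix): the copied rule can simply refuse 
to match when the join type is SEMI or ANTI. Those join types do not project the right-hand 
input, which is consistent with the out-of-bounds field reference counting above.

{code:java}
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.rel.core.Join;
import org.apache.calcite.rel.core.JoinRelType;

/** Illustrative guard only; the real fix copies the whole JoinToMultiJoinRule. */
public final class SemiAntiJoinGuard {

    private SemiAntiJoinGuard() {
    }

    /** Returns false for SEMI/ANTI joins so the multi-join rewrite never fires on them. */
    public static boolean shouldMatch(RelOptRuleCall call) {
        Join join = call.rel(0);
        JoinRelType joinType = join.getJoinType();
        return joinType != JoinRelType.SEMI && joinType != JoinRelType.ANTI;
    }
}
{code}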





[GitHub] [flink] flinkbot edited a comment on issue #9277: [FLINK-13494][table-planner-blink] Only use env parallelism for sql job

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9277: [FLINK-13494][table-planner-blink] 
Only use env parallelism for sql job
URL: https://github.com/apache/flink/pull/9277#issuecomment-516388088
 
 
   ## CI report:
   
   * 7bebdd65247ac172a23f9b0a91873b01b554cd71 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121239025)
   * 972eca969b455cd33ebac3045adfc8bba874b400 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121258626)
   * 016ea4143be32fb976685173a8407b7f1aecce15 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121359067)
   * 8ffef5f9cbf8793f6a8653c08a286dcceb6c69e6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121361743)
   * dab6e02f5509d97c6570846beef7bc527d9cbf1d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121368256)
   * 74896cb6f41fa0111a9e3b9be2aec790a388baef : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121510824)
   * 472b5a48ea9cf86fe800cba8a2df086775373a08 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121563795)
   * abb3f79728d5298702cdb19fc3648718ba0f28e8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121696234)
   * c348c9f217d38c231e299dfa6916213d221e23d9 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121698905)
   




[GitHub] [flink] flinkbot edited a comment on issue #8371: [FLINK-9679] - Add AvroSerializationSchema

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #8371: [FLINK-9679] - Add 
AvroSerializationSchema
URL: https://github.com/apache/flink/pull/8371#issuecomment-517485163
 
 
   ## CI report:
   
   * 6258756f25cfcd1fc1cc10e3b05a4ef3d612fd81 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121682948)
   




[GitHub] [flink] flinkbot commented on issue #9328: [FLINK-13521][sql-client] Allow setting configurations in SQL CLI

2019-08-01 Thread GitBox
flinkbot commented on issue #9328: [FLINK-13521][sql-client] Allow setting 
configurations in SQL CLI
URL: https://github.com/apache/flink/pull/9328#issuecomment-517533262
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] Myasuka commented on a change in pull request #9268: [FLINK-13452] Ensure to fail global when exception happens during reseting tasks of regions

2019-08-01 Thread GitBox
Myasuka commented on a change in pull request #9268: [FLINK-13452] Ensure to 
fail global when exception happens during reseting tasks of regions
URL: https://github.com/apache/flink/pull/9268#discussion_r309969915
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/RegionFailoverITCase.java
 ##
 @@ -282,7 +301,11 @@ public void initializeState(FunctionInitializationContext 
context) throws Except
 
unionListState = 
context.getOperatorStateStore().getUnionListState(unionStateDescriptor);
Set actualIndices = 
StreamSupport.stream(unionListState.get().spliterator(), 
false).collect(Collectors.toSet());
-   
Assert.assertTrue(CollectionUtils.isEqualCollection(EXPECTED_INDICES, 
actualIndices));
+   if 
(getRuntimeContext().getTaskName().contains(SINGLE_REGION_SOURCE_NAME)) {
 
 Review comment:
   If we want this integration test to also cover the global failover scenario, 
this is necessary.




[GitHub] [flink] Myasuka commented on a change in pull request #9268: [FLINK-13452] Ensure to fail global when exception happens during reseting tasks of regions

2019-08-01 Thread GitBox
Myasuka commented on a change in pull request #9268: [FLINK-13452] Ensure to 
fail global when exception happens during reseting tasks of regions
URL: https://github.com/apache/flink/pull/9268#discussion_r309969704
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/RegionFailoverITCase.java
 ##
 @@ -97,8 +111,10 @@
 
@Before
public void setup() throws Exception {
+   HighAvailabilityServicesUtilsTest.TestHAFactory.haServices = 
new FailHaServices(new TestingCheckpointRecoveryFactory(new 
FailRecoverCompletedCheckpointStore(1, 1), new 
StandaloneCheckpointIDCounter()), TestingUtils.defaultExecutor());
Configuration configuration = new Configuration();

configuration.setString(JobManagerOptions.EXECUTION_FAILOVER_STRATEGY, 
"region");
+   configuration.setString(HighAvailabilityOptions.HA_MODE, 
HighAvailabilityServicesUtilsTest.TestHAFactory.class.getName());
 
 Review comment:
   Previously, the `RegionFailoverITCase` did not catch the bug of 
[FLINK-13452](https://issues.apache.org/jira/browse/FLINK-13452) because it only 
involved region failover, not global failover. I prefer to add global failover 
to this integration test to verify that the job can be restarted and the result 
is still correct.




[jira] [Updated] (FLINK-13521) Allow setting configuration in SQL CLI

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13521:
---
Labels: pull-request-available  (was: )

> Allow setting configuration in SQL CLI
> --
>
> Key: FLINK-13521
> URL: https://issues.apache.org/jira/browse/FLINK-13521
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Client
>Reporter: Jark Wu
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> Currently, we provide a set of configurations in the blink planner to support 
> optimization or enable some advanced features. However, none of these 
> configurations can be set in the SQL CLI. 
> It would be great to allow setting configurations in the SQL CLI via the {{SET}} 
> command. This may be a new feature, but considering that the implementation 
> effort is rather low (pass the configurations to TableConfig), I would like to 
> add it to 1.9 too, but I won't mark it as a blocker.
> For example:
> {code:sql}
> SET table.exec.mini-batch.enabled = true;
> SET table.exec.mini-batch.allow-latency = 5s;
> {code}
> Meanwhile, we may also need to add an entry to the yaml file; we propose to add 
> a {{config}} entry. However, this might be different from other entries, 
> because we don't need the {{config.}} prefix in the {{SET}} command. 
> {code}
> config:
>   table.exec.mini-batch.enabled: true
>   table.exec.mini-batch.allow-latency: 5s
> {code}





[GitHub] [flink] TsReaper opened a new pull request #9328: [FLINK-13521][sql-client] Allow setting configurations in SQL CLI

2019-08-01 Thread GitBox
TsReaper opened a new pull request #9328: [FLINK-13521][sql-client] Allow 
setting configurations in SQL CLI
URL: https://github.com/apache/flink/pull/9328
 
 
   ## What is the purpose of the change
   
   Currently, we provide a set of configurations in the blink planner to support 
optimization or enable some advanced features. However, none of these 
configurations can be set in the SQL CLI.
   
   This PR allows setting table environment configurations in the SQL CLI using 
the following methods:
   
   * Modify config files:
   ```yaml
   configuration:
 table:
   exec:
 spill-compression:
   enabled: true
   block-size: 128kb
   optimizer:
 join-reorder-enabled: true
   ```
   * Or use `set` command:
   ```
   SET table.exec.spill-compression.enabled=true;
   SET table.exec.spill-compression.block-size=128kb;
   SET table.optimizer.join-reorder-enabled=true;
   ```
   
   Note that, unlike other options in the SQL CLI, the `SET` command for table 
environment configurations does not use the `configuration.` prefix but refers 
to the option name directly.
   
   ## Brief change log
   
- Allow setting configurations in SQL CLI
   
   ## Verifying this change
   
   This change added tests and can be verified as follows: run the newly added 
tests `LocalExecutorITCase::testTableEnvConfigurations` and 
`ExecutionContextTest::testConfigurations`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? docs
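   
   For readers who want to see the idea in code, below is a minimal, hypothetical 
sketch of how a `SET key=value` statement could be forwarded to the planner's 
`TableConfig` (it assumes Flink 1.9's `TableConfig#getConfiguration()`; the helper 
name and the parsing are illustrative, not the actual code in this PR):
   
   ```java
   import org.apache.flink.configuration.Configuration;
   import org.apache.flink.table.api.TableConfig;

   public class SetCommandSketch {

       // Hypothetical helper: apply a "SET key=value;" statement to a TableConfig.
       static void applySet(TableConfig tableConfig, String statement) {
           String body = statement.trim();
           if (body.toUpperCase().startsWith("SET ")) {
               body = body.substring(4).trim();
           }
           if (body.endsWith(";")) {
               body = body.substring(0, body.length() - 1);
           }
           int eq = body.indexOf('=');
           String key = body.substring(0, eq).trim();    // e.g. table.exec.mini-batch.enabled
           String value = body.substring(eq + 1).trim(); // e.g. true

           // TableConfig#getConfiguration() exposes the underlying Configuration,
           // which is where the blink planner reads its table.* options from.
           Configuration conf = tableConfig.getConfiguration();
           conf.setString(key, value);
       }

       public static void main(String[] args) {
           TableConfig tableConfig = new TableConfig();
           applySet(tableConfig, "SET table.exec.mini-batch.enabled=true;");
           System.out.println(
               tableConfig.getConfiguration().getString("table.exec.mini-batch.enabled", "false"));
       }
   }
   ```
   
   The PR itself wires this through the SQL CLI's execution context (see the tests 
`LocalExecutorITCase::testTableEnvConfigurations` and 
`ExecutionContextTest::testConfigurations` mentioned above); the sketch only shows 
where such options ultimately land.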
   




[jira] [Commented] (FLINK-11605) Translate the "Dataflow Programming Model" page into Chinese

2019-08-01 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898515#comment-16898515
 ] 

Jark Wu commented on FLINK-11605:
-

Hi [~iluvex], are you still working on this?

> Translate the "Dataflow Programming Model" page into Chinese
> 
>
> Key: FLINK-11605
> URL: https://issues.apache.org/jira/browse/FLINK-11605
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Xin Ma
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/concepts/programming-model.html
> The markdown file is located in flink/docs/concepts/programming-model.zh.md
> The markdown file will be created once FLINK-11529 is merged.





[GitHub] [flink] wuchong commented on issue #9277: [FLINK-13494][table-planner-blink] Only use env parallelism for sql job

2019-08-01 Thread GitBox
wuchong commented on issue #9277: [FLINK-13494][table-planner-blink] Only use 
env parallelism for sql job
URL: https://github.com/apache/flink/pull/9277#issuecomment-517532422
 
 
   LGTM




[GitHub] [flink] flinkbot commented on issue #9327: [hotfix][tests] Move SettableLeaderRetrievalService to test scope

2019-08-01 Thread GitBox
flinkbot commented on issue #9327: [hotfix][tests] Move 
SettableLeaderRetrievalService to test scope
URL: https://github.com/apache/flink/pull/9327#issuecomment-517532078
 
 
   ## CI report:
   
   * bca8c3d0972f1769dbb79b366871caa9e21aa985 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121698517)
   




[GitHub] [flink] flinkbot commented on issue #9327: [hotfix][tests] Move SettableLeaderRetrievalService to test scope

2019-08-01 Thread GitBox
flinkbot commented on issue #9327: [hotfix][tests] Move 
SettableLeaderRetrievalService to test scope
URL: https://github.com/apache/flink/pull/9327#issuecomment-517530609
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] TisonKun opened a new pull request #9327: [hotfix][tests] Move SettableLeaderRetrievalService to test scope

2019-08-01 Thread GitBox
TisonKun opened a new pull request #9327: [hotfix][tests] Move 
SettableLeaderRetrievalService to test scope
URL: https://github.com/apache/flink/pull/9327
 
 
   ## What is the purpose of the change
   
   Move SettableLeaderRetrievalService to test scope. It is a class intended 
purely for testing.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[GitHub] [flink] flinkbot edited a comment on issue #9275: [FLINK-13290][table-planner-blink][hbase] SinkCodeGenerator should not compare row type field names and enable blink planner for hbase IT cas

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9275: 
[FLINK-13290][table-planner-blink][hbase] SinkCodeGenerator should not compare 
row type field names and enable blink planner for hbase IT case
URL: https://github.com/apache/flink/pull/9275#issuecomment-516364194
 
 
   ## CI report:
   
   * 821ab87a6c7e0ccc301a0e35a6615c9df79f88c9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121228456)
   * 9e7c51e08f619e75b82ebc6cedccb4fe096bff64 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121408745)
   * cef04c07f422c8d0190780e6dc3ab5a02dc798a2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121409905)
   * 2d5d23ba2b44ff164e516040cf3020cb9a6ff29f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121431213)
   * 9020ec1a4ac23a703b0a242b18fda403af3eb2df : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121438026)
   * fa30e7d000b2e793556785339fe9e8e7137daf5f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121588775)
   * 310c68b379c87ee673b24c3facb88528061de191 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121591309)
   * 49b379007fc4cf9037292c3c895de3e650a8f85e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121605666)
   




[GitHub] [flink] flinkbot edited a comment on issue #9264: [FLINK-13192][hive] Add tests for different Hive table formats

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9264: [FLINK-13192][hive] Add tests for 
different Hive table formats
URL: https://github.com/apache/flink/pull/9264#issuecomment-515988891
 
 
   ## CI report:
   
   * 152412f354ed83326a23edbf5f47b138fdd8e9e2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121089131)
   * 44d0b0cc679843750143d004e707382448822969 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121197470)
   * eedd6e9dc58ec721099a4f7fafb4c4b9f4184782 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121198545)
   * bac68b3fcb46b2d9553203ca693dd63b4db3c9d5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121371497)
   * 7deff3374041a83dd8efe9115507b0e7cef0e4eb : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121512758)
   * ca8af7930be36f2c6df1de02fa58bd4d0355c5c4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121525022)
   * a20e191710be0d73891f2d8d0ef72b65da605603 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121697748)
   




[jira] [Updated] (FLINK-13544) Set parallelism of table sink operator to input transformation parallelism

2019-08-01 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13544:

Fix Version/s: 1.10.0
   1.9.0

> Set parallelism of table sink operator to input transformation parallelism
> --
>
> Key: FLINK-13544
> URL: https://issues.apache.org/jira/browse/FLINK-13544
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Table SQL / Planner
>Reporter: Jark Wu
>Priority: Critical
> Fix For: 1.9.0, 1.10.0
>
>
> Currently, a lot of {{TableSink}} connectors use {{dataStream.addSink()}} 
> without calling {{setParallelism}} explicitly. This uses the default 
> parallelism of the environment. However, the parallelism of the input 
> transformation might not be env.parallelism; for example, a global aggregation 
> has parallelism 1. In this case, records can be reordered, which leads to 
> incorrect results.





[jira] [Created] (FLINK-13544) Set parallelism of table sink operator to input transformation parallelism

2019-08-01 Thread Jark Wu (JIRA)
Jark Wu created FLINK-13544:
---

 Summary: Set parallelism of table sink operator to input 
transformation parallelism
 Key: FLINK-13544
 URL: https://issues.apache.org/jira/browse/FLINK-13544
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common, Table SQL / Planner
Reporter: Jark Wu


Currently, a lot of {{TableSink}} connectors use {{dataStream.addSink()}} without 
calling {{setParallelism}} explicitly. This uses the default parallelism of the 
environment. However, the parallelism of the input transformation might not be 
env.parallelism; for example, a global aggregation has parallelism 1. In this 
case, records can be reordered, which leads to incorrect results.
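
As a rough illustration of the intended fix (not code from any particular 
connector), a sink added like this would follow its input's parallelism instead 
of the environment default:

{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSink;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class ParallelismAwareSinkSketch {

    // Hypothetical helper: add the sink with the parallelism of its input
    // transformation (e.g. 1 for a global aggregation), so records are not
    // redistributed and reordered between the input and the sink.
    static <T> DataStreamSink<T> addSinkWithInputParallelism(
            DataStream<T> input, SinkFunction<T> sinkFunction) {
        return input
                .addSink(sinkFunction)
                .setParallelism(input.getTransformation().getParallelism());
    }
}
{code}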





[GitHub] [flink] flinkbot edited a comment on issue #9219: [FLINK-13404] [table] Port csv factories and descriptors to flink-csv

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9219: [FLINK-13404] [table] Port csv 
factories and descriptors to flink-csv
URL: https://github.com/apache/flink/pull/9219#issuecomment-514608060
 
 
   ## CI report:
   
   * 6b9a26ad0d626ca2c3aae3d371a3b376b0093b87 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120440827)
   * 4f1ecd9257b3be1a2cba1955191b07f2c9eb26f4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120606118)
   * 335841b7bd4e32bf1ceff5426eee9e3c742124f1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120625338)
   * 2c7b5d12b6af5a2a0892a4ccc6afb7155b56e1a5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121073532)
   * 3f10ccbc74eb839e2ea3a5d1a0be3dd7a74759b2 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121697315)
   




[GitHub] [flink] flinkbot commented on issue #9326: [FLINK-13543][table-planner-blink] Enable reuse forks for integration tests in blink planner

2019-08-01 Thread GitBox
flinkbot commented on issue #9326: [FLINK-13543][table-planner-blink] Enable 
reuse forks for integration tests in blink planner
URL: https://github.com/apache/flink/pull/9326#issuecomment-517527054
 
 
   ## CI report:
   
   * d68de0a47c3232717ad3c02c03d0f50d0faaf28d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121696933)
   




[GitHub] [flink] godfreyhe commented on issue #9219: [FLINK-13404] [table] Port csv factories and descriptors to flink-csv

2019-08-01 Thread GitBox
godfreyhe commented on issue #9219: [FLINK-13404] [table] Port csv factories 
and descriptors to flink-csv
URL: https://github.com/apache/flink/pull/9219#issuecomment-517526352
 
 
   Thanks for your suggestion @twalthr, I have updated this PR based on the comments.




[GitHub] [flink] wuchong commented on a change in pull request #9277: [FLINK-13494][table-planner-blink] Only use env parallelism for sql job

2019-08-01 Thread GitBox
wuchong commented on a change in pull request #9277: 
[FLINK-13494][table-planner-blink] Only use env parallelism for sql job
URL: https://github.com/apache/flink/pull/9277#discussion_r309964072
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecSink.scala
 ##
 @@ -145,12 +145,12 @@ class StreamExecSink[T](
 "implemented and return the sink transformation DataStreamSink. " +
 s"However, ${sink.getClass.getCanonicalName} doesn't implement 
this method.")
 }
-if (!UpdatingPlanChecker.isAppendOnly(this) &&
-dsSink.getTransformation.getParallelism != 
transformation.getParallelism) {
-  throw new TableException(s"The configured sink parallelism should be 
equal to the" +
+if (!UpdatingPlanChecker.isAppendOnly(this) && 
dsSink.getTransformation.getParallelism != 1
+&& dsSink.getTransformation.getParallelism != 
transformation.getParallelism) {
 
 Review comment:
   Is it possible that `transformation.getParallelism` is -1 while 
`dsSink.getTransformation.getParallelism` is env.getParallelism, e.g. 4? In 
that case they could still be chained, but an exception would be thrown.




[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout

2019-08-01 Thread Yingjie Cao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898497#comment-16898497
 ] 

Yingjie Cao commented on FLINK-13489:
-

I tested different settings on Travis for the heavy deployment test and found that 
the running time depends on the state size and the taskmanager memory size. With 
the original setting, the test sometimes ran into akka timeout problems, but after 
the taskmanager memory size was raised to 1G (the original value is 512M), the 
problem no longer occurred. I think 512M of memory for 10 slots is a little small, 
so I suggest increasing the taskmanager memory size. What do you think [~tzulitai]?
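
For reference, the change being discussed would roughly amount to the following 
flink-conf.yaml settings for the heavy deployment e2e test (a sketch only; the 
exact keys and values used by the test script may differ):

{code}
# sketch: raise the TaskManager heap for the heavy deployment e2e test
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 10
{code}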

> Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
> --
>
> Key: FLINK-13489
> URL: https://issues.apache.org/jira/browse/FLINK-13489
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Yingjie Cao
>Priority: Blocker
> Fix For: 1.9.0
>
>
> https://api.travis-ci.org/v3/job/564925128/log.txt
> {code}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 1b4f1807cc749628cfc1bdf04647527a)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
>   at 
> org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247)
>   ... 21 more
> Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager 
> with id ea456d6a590eca7598c19c4d35e56db9 timed out.
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149)
>   at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at 

[GitHub] [flink] flinkbot edited a comment on issue #9316: [FLINK-13529][table-planner-blink] Verify and correct agg function's semantic for Blink planner

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9316: [FLINK-13529][table-planner-blink] 
Verify and correct agg function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9316#issuecomment-517223639
 
 
   ## CI report:
   
   * 6d6d1ea77cdbe56560d60483e3cf9d14aec6223c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121551628)
   * 4b575ebb37f6fcda975d0e02f16ed88979656f28 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121696609)
   




[jira] [Commented] (FLINK-13482) How can I cleanly shutdown streaming jobs in local mode?

2019-08-01 Thread Donghui Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898496#comment-16898496
 ] 

Donghui Xu commented on FLINK-13482:


The data source I used was FlinkKafka Consumer010. After consuming kafka's 
data, I want to terminate stream job processing. 
Can this be implemented in code?
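
One possible pattern (not an official stop mechanism; sketched below with a 
made-up marker value and class name) is to signal end of input from the 
deserialization schema, so the Kafka source finishes and the local job terminates 
on its own once the marker record has been consumed:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// Hypothetical schema: the Kafka consumer stops reading (and the job can finish)
// once isEndOfStream() returns true for a record.
public class SentinelStringSchema implements DeserializationSchema<String> {

    private static final String SENTINEL = "END_OF_INPUT"; // assumed marker record

    @Override
    public String deserialize(byte[] message) throws IOException {
        return new String(message, StandardCharsets.UTF_8);
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        // stop consuming when the marker record is seen
        return SENTINEL.equals(nextElement);
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}
{code}

The schema would be passed to the FlinkKafkaConsumer010 constructor in place of 
SimpleStringSchema; whether this fits depends on being able to produce such a 
marker record into the topic.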

> How can I cleanly shutdown streaming jobs in local mode?
> 
>
> Key: FLINK-13482
> URL: https://issues.apache.org/jira/browse/FLINK-13482
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Donghui Xu
>Priority: Minor
>
> Currently, streaming jobs can be stopped using "cancel" and "stop" command 
> only in cluster mode, not in local mode.
> When users need to explicitly terminate jobs, it is necessary to provide a 
> termination mechanism for streaming jobs in local mode.





[jira] [Comment Edited] (FLINK-13482) How can I cleanly shutdown streaming jobs in local mode?

2019-08-01 Thread Donghui Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898496#comment-16898496
 ] 

Donghui Xu edited comment on FLINK-13482 at 8/2/19 2:40 AM:


The data source I used was FlinkKafkaConsumer010. After consuming kafka's data, 
I want to terminate stream job processing. 
Can this be implemented in code?


was (Author: davidxdh):
The data source I used was FlinkKafka Consumer010. After consuming kafka's 
data, I want to terminate stream job processing. 
Can this be implemented in code?

> How can I cleanly shutdown streaming jobs in local mode?
> 
>
> Key: FLINK-13482
> URL: https://issues.apache.org/jira/browse/FLINK-13482
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Donghui Xu
>Priority: Minor
>
> Currently, streaming jobs can be stopped using "cancel" and "stop" command 
> only in cluster mode, not in local mode.
> When users need to explicitly terminate jobs, it is necessary to provide a 
> termination mechanism for streaming jobs in local mode.





[GitHub] [flink] wuchong commented on issue #9180: [FLINK-13331][table-planner-blink] Add CachedMiniCluster to share cluster between ITCases

2019-08-01 Thread GitBox
wuchong commented on issue #9180: [FLINK-13331][table-planner-blink] Add 
CachedMiniCluster to share cluster between ITCases
URL: https://github.com/apache/flink/pull/9180#issuecomment-517525594
 
 
   #9306 has been closed. I created a PR #9326 to continue the work. 
   
   Closing this PR. 




[GitHub] [flink] wuchong closed pull request #9180: [FLINK-13331][table-planner-blink] Add CachedMiniCluster to share cluster between ITCases

2019-08-01 Thread GitBox
wuchong closed pull request #9180: [FLINK-13331][table-planner-blink] Add 
CachedMiniCluster to share cluster between ITCases
URL: https://github.com/apache/flink/pull/9180
 
 
   




[GitHub] [flink] flinkbot commented on issue #9326: [FLINK-13543][table-planner-blink] Enable reuse forks for integration tests in blink planner

2019-08-01 Thread GitBox
flinkbot commented on issue #9326: [FLINK-13543][table-planner-blink] Enable 
reuse forks for integration tests in blink planner
URL: https://github.com/apache/flink/pull/9326#issuecomment-517525618
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13482) How can I cleanly shutdown streaming jobs in local mode?

2019-08-01 Thread Donghui Xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donghui Xu updated FLINK-13482:
---
Description: 
Currently, streaming jobs can be stopped using "cancel" and "stop" command only 
in cluster mode, not in local mode.
When users need to explicitly terminate jobs, it is necessary to provide a 
termination mechanism for streaming jobs in local mode.

  was:
Currently, streaming jobs can be stopped using "cancel" and "stop" command only 
in cluster mode, not in local mode.
When users need to explicitly terminate jobs, it is necessary to provide a 
termination mechanism for local mode flow jobs.


> How can I cleanly shutdown streaming jobs in local mode?
> 
>
> Key: FLINK-13482
> URL: https://issues.apache.org/jira/browse/FLINK-13482
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Donghui Xu
>Priority: Minor
>
> Currently, streaming jobs can be stopped using "cancel" and "stop" command 
> only in cluster mode, not in local mode.
> When users need to explicitly terminate jobs, it is necessary to provide a 
> termination mechanism for streaming jobs in local mode.





[jira] [Updated] (FLINK-13543) Enable reuse forks for integration tests in blink planner

2019-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13543:
---
Labels: pull-request-available  (was: )

> Enable reuse forks for integration tests in blink planner
> -
>
> Key: FLINK-13543
> URL: https://issues.apache.org/jira/browse/FLINK-13543
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>
> As discussed in https://github.com/apache/flink/pull/9180 , we found that by 
> enabling fork reuse we can save ~20min (50min -> 30min) for the blink planner 
> tests. 





[GitHub] [flink] wuchong opened a new pull request #9326: [FLINK-13543][table-planner-blink] Enable reuse forks for integration tests in blink planner

2019-08-01 Thread GitBox
wuchong opened a new pull request #9326: [FLINK-13543][table-planner-blink] 
Enable reuse forks for integration tests in blink planner
URL: https://github.com/apache/flink/pull/9326
 
 
   
   
   
   
   ## What is the purpose of the change
   
   Enable reuse forks for integration tests in blink planner.
   
   ## Brief change log
   
   Override `maven-surefire-plugin` configuration in blink planner pom.xml.
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
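   
   For readers unfamiliar with the surefire knobs involved, the override is of 
roughly this shape (the values below are illustrative, not necessarily the exact 
ones in this PR):
   
   ```xml
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-surefire-plugin</artifactId>
     <configuration>
       <!-- still fork a JVM, but reuse it across (integration) test classes -->
       <forkCount>1</forkCount>
       <reuseForks>true</reuseForks>
     </configuration>
   </plugin>
   ```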
   
   




[GitHub] [flink] flinkbot edited a comment on issue #9277: [FLINK-13494][table-planner-blink] Only use env parallelism for sql job

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9277: [FLINK-13494][table-planner-blink] 
Only use env parallelism for sql job
URL: https://github.com/apache/flink/pull/9277#issuecomment-516388088
 
 
   ## CI report:
   
   * 7bebdd65247ac172a23f9b0a91873b01b554cd71 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121239025)
   * 972eca969b455cd33ebac3045adfc8bba874b400 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121258626)
   * 016ea4143be32fb976685173a8407b7f1aecce15 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121359067)
   * 8ffef5f9cbf8793f6a8653c08a286dcceb6c69e6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121361743)
   * dab6e02f5509d97c6570846beef7bc527d9cbf1d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121368256)
   * 74896cb6f41fa0111a9e3b9be2aec790a388baef : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121510824)
   * 472b5a48ea9cf86fe800cba8a2df086775373a08 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121563795)
   * abb3f79728d5298702cdb19fc3648718ba0f28e8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121696234)
   




[GitHub] [flink] flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink] Do not fetch data from LookupableTableSource if the JoinKey in left side of LookupJoin contains null value

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9285: [FLINK-13433][table-planner-blink]  
Do not fetch data from LookupableTableSource if the JoinKey in left side of 
LookupJoin contains null value.
URL: https://github.com/apache/flink/pull/9285#issuecomment-516712727
 
 
   ## CI report:
   
   * bb70e45a98e76de7f95ac31e893999683cb5bde8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121359827)
   * 4f96a184d471836053a7e2b09cbd1583ebced727 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121512745)
   * a915ad9e9323b5c0f799beae32eba104b76b583f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121587513)
   * 3e6a30848c721001c6bf0a514fb00b00c6f6e0ce : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121696226)
   




[GitHub] [flink] flinkbot edited a comment on issue #9312: [FLINK-13436][end-to-end-tests] Add TPC-H queries as E2E tests

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9312: [FLINK-13436][end-to-end-tests] Add 
TPC-H queries as E2E tests
URL: https://github.com/apache/flink/pull/9312#issuecomment-517175361
 
 
   ## CI report:
   
   * bf81288dc40cf5a3bee5ece840d21821ea903135 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121534009)
   * a62d69769da91d1b3081f71b24c8abe9caafa6f7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121592846)
   * af818c8453180e7877d13e55a8c46f825f2ef100 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121605644)
   




[GitHub] [flink] Myasuka commented on a change in pull request #9268: [FLINK-13452] Ensure to fail global when exception happens during reseting tasks of regions

2019-08-01 Thread GitBox
Myasuka commented on a change in pull request #9268: [FLINK-13452] Ensure to 
fail global when exception happens during reseting tasks of regions
URL: https://github.com/apache/flink/pull/9268#discussion_r309962573
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/AdaptedRestartPipelinedRegionStrategyNGFailoverTest.java
 ##
 @@ -376,12 +410,13 @@ private JobGraph createBatchJobGraph() {
}
 
private ExecutionGraph createExecutionGraph(final JobGraph jobGraph) 
throws Exception {
-   return createExecutionGraph(jobGraph, new 
FixedDelayRestartStrategy(10, 0));
+   return createExecutionGraph(jobGraph, new 
FixedDelayRestartStrategy(10, 0), false);
}
 
private ExecutionGraph createExecutionGraph(
final JobGraph jobGraph,
-   final RestartStrategy restartStrategy) throws Exception 
{
+   final RestartStrategy restartStrategy,
+   final boolean failOnRecoverCheckpoint) throws Exception 
{
 
 Review comment:
   Agreed. I combined `testFailGlobalIfErrorOnResetTasks` and the previous 
`testFailGlobalIfErrorOnRescheduleTasks` into one unit test, 
`testFailGlobalIfErrorOnRestartTasks`, with a given RestartStrategy that fails 
when restarting.




[GitHub] [flink] lirui-apache commented on issue #9254: [FLINK-13427][hive] HiveCatalog's createFunction fails when function …

2019-08-01 Thread GitBox
lirui-apache commented on issue #9254: [FLINK-13427][hive] HiveCatalog's 
createFunction fails when function …
URL: https://github.com/apache/flink/pull/9254#issuecomment-517522620
 
 
   The latest CI run succeeded.
   @wuchong would you mind helping to review and merge this PR? Thanks.




[jira] [Created] (FLINK-13543) Enable reuse forks for integration tests in blink planner

2019-08-01 Thread Jark Wu (JIRA)
Jark Wu created FLINK-13543:
---

 Summary: Enable reuse forks for integration tests in blink planner
 Key: FLINK-13543
 URL: https://issues.apache.org/jira/browse/FLINK-13543
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jark Wu
Assignee: Jark Wu
 Fix For: 1.9.0, 1.10.0


As discussed in https://github.com/apache/flink/pull/9180 , we found that by 
enabling fork reuse we can save ~20min (50min -> 30min) for the blink planner 
tests. 





[jira] [Updated] (FLINK-13331) Add TestMiniClusters to maintain cache share cluster between Tests

2019-08-01 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13331:

Fix Version/s: (was: 1.9.0)

> Add TestMiniClusters to maintain cache share cluster between Tests
> --
>
> Key: FLINK-13331
> URL: https://issues.apache.org/jira/browse/FLINK-13331
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[GitHub] [flink] flinkbot edited a comment on issue #9106: [FLINK-13184][yarn] Support launching task executors with multi-thread on YARN.

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9106: [FLINK-13184][yarn] Support launching 
task executors with multi-thread on YARN.
URL: https://github.com/apache/flink/pull/9106#issuecomment-511183558
 
 
   ## CI report:
   
   * 25fc95f30720209e19bd010cdd517ff5e3c685d8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119056693)
   * d6fe7407e45c8bf68bfb9ec2eaa8ceaa5e9ab555 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121553876)
   * 725a9f2a602afa58ad9e1829c55e6adb64aa8b40 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121599470)
   




[GitHub] [flink] danny0405 commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37

2019-08-01 Thread GitBox
danny0405 commented on a change in pull request #9243: 
[FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
URL: https://github.com/apache/flink/pull/9243#discussion_r309960001
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java
 ##
 @@ -383,19 +383,21 @@ public void testCreateTableWithComplexType() {
}
 
@Test
-   public void testCreateTableWithDecimalType() {
+   public void testCreateTableWithComplexType() {
check("CREATE TABLE tbl1 (\n" +
-   "  a decimal, \n" +
-   "  b decimal(10, 0),\n" +
-   "  c decimal(38, 38),\n" +
+   "  a ARRAY, \n" +
+   "  b MAP,\n" +
+   "  c ROW,\n" +
+   "  d MULTISET,\n" +
"  PRIMARY KEY (a, b) \n" +
") with (\n" +
"  x = 'y', \n" +
"  asd = 'data'\n" +
")\n", "CREATE TABLE `TBL1` (\n" +
-   "  `A`  DECIMAL,\n" +
-   "  `B`  DECIMAL(10, 0),\n" +
-   "  `C`  DECIMAL(38, 38),\n" +
+   "  `A`  ARRAY< BIGINT >,\n" +
+   "  `B`  MAP< INTEGER, VARCHAR >,\n" +
+   "  `C`  ROW< `CC0` INTEGER, `CC1` FLOAT, `CC2` VARCHAR 
>,\n" +
+   "  `D`  MULTISET< VARCHAR >,\n" +
 
 Review comment:
   But most users have used `VARCHAR(65536)` for a long time; it would be a 
breaking change if we switched the default to `VARCHAR(1)`. That is the SQL 
standard, but we do not need to follow everything in it.
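   
   To make the trade-off concrete, a small illustrative DDL (column types and 
lengths here are only for illustration):
   
   ```sql
   -- Illustration only: what should a bare VARCHAR mean?
   CREATE TABLE t1 (
     a VARCHAR,      -- today: treated as a long VARCHAR (e.g. 65536); the standard default would be VARCHAR(1)
     b VARCHAR(256)  -- explicit length, unaffected by whichever default is chosen
   );
   ```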




[jira] [Resolved] (FLINK-13290) HBaseITCase bug using blink-runner: SinkCodeGenerator should not compare row type field names

2019-08-01 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-13290.
-
   Resolution: Fixed
Fix Version/s: 1.10.0

[FLINK-13290][table-api] Add method to check LogicalType compatible
 - Fixed in 1.10.0: 73bb670ec6888ce55a9185a222b7b4a7cdb62d05
 - Fixed in 1.9.0: 4925389b30ffa93e077bd31cd1c7d896a12dfea1


[FLINK-13290][table-planner-blink] SinkCodeGenerator should not compare row 
type field names
 - Fixed in 1.10.0: 5b3b1890ea55b7791cb13d9a17551e48d3cbd567
 - Fixed in 1.9.0: 55f399c5b43f5fa39484c01a5f1d305e8b262209

[FLINK-13290][hbase] Enable blink planner for integration tests of HBase
 - Fixed in 1.10.0: 5d981ebc16ce6a1089e021fcb7a634ebe0167be5
 - Fixed in 1.9.0: 61adb66107c462a60be021e21a3cb6251452fa45

> HBaseITCase bug using blink-runner: SinkCodeGenerator should not compare row 
> type field names
> --
>
> Key: FLINK-13290
> URL: https://issues.apache.org/jira/browse/FLINK-13290
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Running {{org.apache.flink.addons.hbase.HBaseSinkITCase}} using blink planner 
> will encounter this error:
> {code}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 28.042 sec
> <<< FAILURE! - in org.apache.flink.addons.hbase.HBaseSinkITCase
> testTableSink(org.apache.flink.addons.hbase.HBaseSinkITCase)  Time elapsed:
> 2.431 sec  <<< ERROR!
> org.apache.flink.table.api.TableException: Result field 'family1' does not
> match requested type. Requested: Row(col1: Integer); Actual: Row(EXPR$0:
> Integer)
> at
> org.apache.flink.addons.hbase.HBaseSinkITCase.testTableSink(HBaseSinkITCase.java:140)
> {code}
> This might be because 
> [SinkCodeGenerator.scala#L243|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/codegen/SinkCodeGenerator.scala#L243]
>  compares RowType with field names.





[GitHub] [flink] danny0405 commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37

2019-08-01 Thread GitBox
danny0405 commented on a change in pull request #9243: 
[FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
URL: https://github.com/apache/flink/pull/9243#discussion_r309959336
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -367,53 +375,299 @@ SqlDrop SqlDropView(Span s, boolean replace) :
 }
 }
 
+SqlIdentifier FlinkCollectionsTypeName() :
+{
+}
+{
+LOOKAHEAD(2)
+ {
+return new SqlIdentifier(SqlTypeName.MULTISET.name(), getPos());
+}
+|
+ {
+return new SqlIdentifier(SqlTypeName.ARRAY.name(), getPos());
+}
+}
+
+SqlIdentifier FlinkTypeName() :
+{
+final SqlTypeName sqlTypeName;
+final SqlIdentifier typeName;
+final Span s = Span.of();
+}
+{
+(
+<#-- additional types are included here -->
+<#-- make custom data types in front of Calcite core data types -->
+<#list parser.flinkDataTypeParserMethods as method>
+<#if (method?index > 0)>
+|
+
+LOOKAHEAD(2)
+typeName = ${method}
+
+|
+LOOKAHEAD(2)
+sqlTypeName = SqlTypeName(s) {
+typeName = new SqlIdentifier(sqlTypeName.name(), s.end(this));
+}
+|
+LOOKAHEAD(2)
+typeName = FlinkCollectionsTypeName()
+|
+typeName = CompoundIdentifier() {
+throw new ParseException("UDT in DDL is not supported yet.");
 
 Review comment:
   Yes. Let's postpone full type support for CAST to Flink 1.10, once I have 
finished the data type parsing refactoring in Calcite [1], which will be part of 
release 1.21.
   
   BTW, full implicit type cast [2] support will be part of release 1.21 too.
   
   [1] https://issues.apache.org/jira/browse/CALCITE-3213
   [2] https://issues.apache.org/jira/browse/CALCITE-2302
   




[GitHub] [flink] danny0405 commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37

2019-08-01 Thread GitBox
danny0405 commented on a change in pull request #9243: 
[FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
URL: https://github.com/apache/flink/pull/9243#discussion_r309959336
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -367,53 +375,299 @@ SqlDrop SqlDropView(Span s, boolean replace) :
 }
 }
 
+SqlIdentifier FlinkCollectionsTypeName() :
+{
+}
+{
+LOOKAHEAD(2)
+ {
+return new SqlIdentifier(SqlTypeName.MULTISET.name(), getPos());
+}
+|
+ {
+return new SqlIdentifier(SqlTypeName.ARRAY.name(), getPos());
+}
+}
+
+SqlIdentifier FlinkTypeName() :
+{
+final SqlTypeName sqlTypeName;
+final SqlIdentifier typeName;
+final Span s = Span.of();
+}
+{
+(
+<#-- additional types are included here -->
+<#-- make custom data types in front of Calcite core data types -->
+<#list parser.flinkDataTypeParserMethods as method>
+<#if (method?index > 0)>
+|
+
+LOOKAHEAD(2)
+typeName = ${method}
+
+|
+LOOKAHEAD(2)
+sqlTypeName = SqlTypeName(s) {
+typeName = new SqlIdentifier(sqlTypeName.name(), s.end(this));
+}
+|
+LOOKAHEAD(2)
+typeName = FlinkCollectionsTypeName()
+|
+typeName = CompoundIdentifier() {
+throw new ParseException("UDT in DDL is not supported yet.");
 
 Review comment:
   Yes. Let's postpone full type support for CAST to Flink 1.10, once I have 
finished the data type parsing refactoring in Calcite [1], which will be part of 
release 1.21.
   
   BTW, full implicit type cast [2] support will be part of release 1.21 too.
   
   [1] https://issues.apache.org/jira/browse/CALCITE-3213
   [2] https://issues.apache.org/jira/browse/CALCITE-2302
   




[GitHub] [flink] asfgit closed pull request #9275: [FLINK-13290][table-planner-blink][hbase] SinkCodeGenerator should not compare row type field names and enable blink planner for hbase IT case

2019-08-01 Thread GitBox
asfgit closed pull request #9275: [FLINK-13290][table-planner-blink][hbase] 
SinkCodeGenerator should not compare row type field names and enable blink 
planner for hbase IT case
URL: https://github.com/apache/flink/pull/9275
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #9288: [FLINK-13421] [runtime] Do not allocate slots in a MultiTaskSlot when it’s releasing children

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9288: [FLINK-13421] [runtime] Do not 
allocate slots in a MultiTaskSlot when it’s releasing children
URL: https://github.com/apache/flink/pull/9288#issuecomment-516728840
 
 
   ## CI report:
   
   * 09b5fe381383979e6d7726d5ba54898d1e34282e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121364685)
   * edcfe5f0130e6e5884a6a9374cbe9fc1350dc3ee : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121592869)
   




[jira] [Commented] (FLINK-13417) Bump Zookeeper to 3.5.5

2019-08-01 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898470#comment-16898470
 ] 

TisonKun commented on FLINK-13417:
--

[~StephanEwen] I'm glad to help. Let me see what I can do here, and I'd like to 
file a subtask under this thread once I have something concrete.

Could you point me to a good starting place for making use of {{flink-jepsen}}?

> Bump Zookeeper to 3.5.5
> ---
>
> Key: FLINK-13417
> URL: https://issues.apache.org/jira/browse/FLINK-13417
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Konstantin Knauf
>Priority: Blocker
> Fix For: 1.10.0
>
>
> User might want to secure their Zookeeper connection via SSL.
> This requires a Zookeeper version >= 3.5.1. We might as well try to bump it 
> to 3.5.5, which is the latest version. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9312: [FLINK-13436][end-to-end-tests] Add TPC-H queries as E2E tests

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9312: [FLINK-13436][end-to-end-tests] Add 
TPC-H queries as E2E tests
URL: https://github.com/apache/flink/pull/9312#issuecomment-517175361
 
 
   ## CI report:
   
   * bf81288dc40cf5a3bee5ece840d21821ea903135 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121534009)
   * a62d69769da91d1b3081f71b24c8abe9caafa6f7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121592846)
   * af818c8453180e7877d13e55a8c46f825f2ef100 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121605644)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13541) State Processor Api sets the wrong key selector when writing savepoints

2019-08-01 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reassigned FLINK-13541:
---

Assignee: Tzu-Li (Gordon) Tai

> State Processor Api sets the wrong key selector when writing savepoints
> ---
>
> Key: FLINK-13541
> URL: https://issues.apache.org/jira/browse/FLINK-13541
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Runtime / State Backends
>Reporter: Seth Wiesman
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The state processor api is setting the wrong key selector for its 
> StreamConfig when writing savepoints. It uses two key selectors internally 
> that happen to output the same value for integer keys but not in general. 
> {noformat}
> Caused by: java.lang.RuntimeException: Exception occurred while setting the 
> current key context.
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:641)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement(AbstractStreamOperator.java:627)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement1(AbstractStreamOperator.java:615)
>   at 
> org.apache.flink.state.api.output.BoundedStreamTask.performDefaultAction(BoundedStreamTask.java:83)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:140)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378)
>   at 
> org.apache.flink.state.api.output.BoundedOneInputStreamTaskRunner.mapPartition(BoundedOneInputStreamTaskRunner.java:76)
>   at 
> org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
>   at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504)
>   at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:688)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:518)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.String
>   at 
> org.apache.flink.api.common.typeutils.base.StringSerializer.serialize(StringSerializer.java:33)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.serializeKeyGroupAndKey(RocksDBSerializedCompositeKeyBuilder.java:159)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.setKeyAndKeyGroup(RocksDBSerializedCompositeKeyBuilder.java:96)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.setCurrentKey(RocksDBKeyedStateBackend.java:303)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:639)
>   ... 12 more
> {noformat}
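
Not Flink internals, but a minimal sketch of the failure mode described above: a wrapper
selector that keys by hashCode() agrees with the user's selector for Integer keys (an
Integer's hash code is the value itself), yet for any other key type it emits an Integer
that the key serializer of the original key type cannot handle. Class and member names
here are illustrative assumptions:

import org.apache.flink.api.java.functions.KeySelector;

// Illustrative only: not the actual state processor API internals.
public class HashKeySelector<IN, K> implements KeySelector<IN, Integer> {

    private final KeySelector<IN, K> wrapped;

    public HashKeySelector(KeySelector<IN, K> wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public Integer getKey(IN value) throws Exception {
        // Integer.hashCode() returns the value itself, so wiring this selector into the
        // StreamConfig goes unnoticed for Integer keys; for String keys the resulting
        // Integer reaches the configured StringSerializer and triggers the
        // ClassCastException shown in the stack trace above.
        return wrapped.getKey(value).hashCode();
    }
}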



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13541) State Processor Api sets the wrong key selector when writing savepoints

2019-08-01 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reassigned FLINK-13541:
---

Assignee: Seth Wiesman  (was: Tzu-Li (Gordon) Tai)

> State Processor Api sets the wrong key selector when writing savepoints
> ---
>
> Key: FLINK-13541
> URL: https://issues.apache.org/jira/browse/FLINK-13541
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Runtime / State Backends
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The state processor api is setting the wrong key selector for its 
> StreamConfig when writing savepoints. It uses two key selectors internally 
> that happen to output the same value for integer keys but not in general. 
> {noformat}
> Caused by: java.lang.RuntimeException: Exception occurred while setting the 
> current key context.
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:641)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement(AbstractStreamOperator.java:627)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement1(AbstractStreamOperator.java:615)
>   at 
> org.apache.flink.state.api.output.BoundedStreamTask.performDefaultAction(BoundedStreamTask.java:83)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:140)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378)
>   at 
> org.apache.flink.state.api.output.BoundedOneInputStreamTaskRunner.mapPartition(BoundedOneInputStreamTaskRunner.java:76)
>   at 
> org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
>   at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504)
>   at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:688)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:518)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.String
>   at 
> org.apache.flink.api.common.typeutils.base.StringSerializer.serialize(StringSerializer.java:33)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.serializeKeyGroupAndKey(RocksDBSerializedCompositeKeyBuilder.java:159)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.setKeyAndKeyGroup(RocksDBSerializedCompositeKeyBuilder.java:96)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.setCurrentKey(RocksDBKeyedStateBackend.java:303)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:639)
>   ... 12 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9275: [FLINK-13290][table-planner-blink][hbase] SinkCodeGenerator should not compare row type field names and enable blink planner for hbase IT case

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9275: 
[FLINK-13290][table-planner-blink][hbase] SinkCodeGenerator should not compare 
row type field names and enable blink planner for hbase IT case
URL: https://github.com/apache/flink/pull/9275#issuecomment-516364194
 
 
   ## CI report:
   
   * 821ab87a6c7e0ccc301a0e35a6615c9df79f88c9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121228456)
   * 9e7c51e08f619e75b82ebc6cedccb4fe096bff64 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121408745)
   * cef04c07f422c8d0190780e6dc3ab5a02dc798a2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121409905)
   * 2d5d23ba2b44ff164e516040cf3020cb9a6ff29f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121431213)
   * 9020ec1a4ac23a703b0a242b18fda403af3eb2df : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121438026)
   * fa30e7d000b2e793556785339fe9e8e7137daf5f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121588775)
   * 310c68b379c87ee673b24c3facb88528061de191 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121591309)
   * 49b379007fc4cf9037292c3c895de3e650a8f85e : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121605666)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11605) Translate the "Dataflow Programming Model" page into Chinese

2019-08-01 Thread WangHengWei (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898461#comment-16898461
 ] 

WangHengWei commented on FLINK-11605:
-

Hi [~jark], I can do this job. Please assign it to me.

> Translate the "Dataflow Programming Model" page into Chinese
> 
>
> Key: FLINK-11605
> URL: https://issues.apache.org/jira/browse/FLINK-11605
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Xin Ma
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/concepts/programming-model.html
> The markdown file is located in flink/docs/concepts/programming-model.zh.md
> The markdown file will be created once FLINK-11529 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13541) State Processor Api sets the wrong key selector when writing savepoints

2019-08-01 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13541:

Fix Version/s: (was: 1.10.0)

> State Processor Api sets the wrong key selector when writing savepoints
> ---
>
> Key: FLINK-13541
> URL: https://issues.apache.org/jira/browse/FLINK-13541
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Runtime / State Backends
>Reporter: Seth Wiesman
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The state processor api is setting the wrong key selector for its 
> StreamConfig when writing savepoints. It uses two key selectors internally 
> that happen to output the same value for integer keys but not in general. 
> {noformat}
> Caused by: java.lang.RuntimeException: Exception occurred while setting the 
> current key context.
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:641)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement(AbstractStreamOperator.java:627)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement1(AbstractStreamOperator.java:615)
>   at 
> org.apache.flink.state.api.output.BoundedStreamTask.performDefaultAction(BoundedStreamTask.java:83)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:140)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378)
>   at 
> org.apache.flink.state.api.output.BoundedOneInputStreamTaskRunner.mapPartition(BoundedOneInputStreamTaskRunner.java:76)
>   at 
> org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
>   at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504)
>   at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:688)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:518)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.String
>   at 
> org.apache.flink.api.common.typeutils.base.StringSerializer.serialize(StringSerializer.java:33)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.serializeKeyGroupAndKey(RocksDBSerializedCompositeKeyBuilder.java:159)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.setKeyAndKeyGroup(RocksDBSerializedCompositeKeyBuilder.java:96)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.setCurrentKey(RocksDBKeyedStateBackend.java:303)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:639)
>   ... 12 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13541) State Processor Api sets the wrong key selector when writing savepoints

2019-08-01 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13541:

Priority: Blocker  (was: Major)

> State Processor Api sets the wrong key selector when writing savepoints
> ---
>
> Key: FLINK-13541
> URL: https://issues.apache.org/jira/browse/FLINK-13541
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Runtime / State Backends
>Reporter: Seth Wiesman
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The state processor api is setting the wrong key selector for its 
> StreamConfig when writing savepoints. It uses two key selectors internally 
> that happen to output the same value for integer keys but not in general. 
> {noformat}
> Caused by: java.lang.RuntimeException: Exception occurred while setting the 
> current key context.
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:641)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement(AbstractStreamOperator.java:627)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement1(AbstractStreamOperator.java:615)
>   at 
> org.apache.flink.state.api.output.BoundedStreamTask.performDefaultAction(BoundedStreamTask.java:83)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:140)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378)
>   at 
> org.apache.flink.state.api.output.BoundedOneInputStreamTaskRunner.mapPartition(BoundedOneInputStreamTaskRunner.java:76)
>   at 
> org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
>   at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504)
>   at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:688)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:518)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.String
>   at 
> org.apache.flink.api.common.typeutils.base.StringSerializer.serialize(StringSerializer.java:33)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.serializeKeyGroupAndKey(RocksDBSerializedCompositeKeyBuilder.java:159)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBSerializedCompositeKeyBuilder.setKeyAndKeyGroup(RocksDBSerializedCompositeKeyBuilder.java:96)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.setCurrentKey(RocksDBKeyedStateBackend.java:303)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:639)
>   ... 12 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9296: [FLINK-13476][coordination] Release pipelined partitions on vertex restart / job failure

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9296: [FLINK-13476][coordination] Release 
pipelined partitions on vertex restart / job failure
URL: https://github.com/apache/flink/pull/9296#issuecomment-516833551
 
 
   ## CI report:
   
   * 53b55b57bf8f8359987bc775b91e650661924b7a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121408712)
   * faf363d0baf8b6962ea74a8842b0fcd695da2af0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121550445)
   * 032ab5a2f7f49e03eb94ddf5fe63e6ac5c40ee2b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121590049)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9322: [FLINK-13471][table] Add FlatAggregate support to stream Table API(blink planner)

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9322: [FLINK-13471][table] Add 
FlatAggregate support to stream Table API(blink planner)
URL: https://github.com/apache/flink/pull/9322#issuecomment-517273321
 
 
   ## CI report:
   
   * e3d5e780a11ddd27ab6721772925db730d95c75c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121590025)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9250: [FLINK-13371][coordination] Prevent leaks of blocking partitions

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9250: [FLINK-13371][coordination] Prevent 
leaks of blocking partitions 
URL: https://github.com/apache/flink/pull/9250#issuecomment-515772917
 
 
   ## CI report:
   
   * 4e15048a256b338df18ad8f9d89e0a576ae06a27 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120980283)
   * fbca009f51dc08f4a497ceab8628a59c307d5bc4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121384142)
   * 95c61d66b51c9c58cb9bd9cb4b6a575851284ca6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121389395)
   * e6967758d6e66e149a2f9623d67a19b26596e3ea : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121390402)
   * b5685e1c18955a6646b3fe4cd48cc1259b2afa8e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121396586)
   * 74f71d4b93b48d53b724fc069c10ccb5e40d41dd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121544224)
   * 0218532ef99e5c3d1d407f65eda411a77c9b3996 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121550485)
   * 6f631d5bfe987baf31ab6dec71b9405c00423e4f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121588808)
   * cb678bfc9821e4cbcbf324010b7242c0ff99dd53 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121613789)
   * c94afaa7c60621886f7982711b68830e2dd618b6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121615460)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9310: [FLINK-13190]add test to verify partition pruning for HiveTableSource

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9310: [FLINK-13190]add test to verify 
partition pruning for HiveTableSource
URL: https://github.com/apache/flink/pull/9310#issuecomment-517106716
 
 
   ## CI report:
   
   * fc57b4955bfaad92fb17ce6b90cd1e8955368c08 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121513918)
   * 3c2cd5d1a43a2875e46c795e95441c38a31debc9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121550390)
   * 211bf3740ffb735c59a9ec220d0c34a8b75743e5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121688520)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type 
mapping with FLIP-37
URL: https://github.com/apache/flink/pull/9239#issuecomment-515435805
 
 
   ## CI report:
   
   * bb0663ddbb6eeda06b756c4ffc7094e64dbdb5b9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120851212)
   * 86a460407693769f0d2afaa3597c70f202126099 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121104847)
   * 5c25e802c096e2688e6cfa01ff7f74d3c050eef5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121108411)
   * c26538e93fcad20bd337b3766ccdfc30d46380fd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121114320)
   * 4f32c8d7f8d14601e8caa28d1079ae3fdce0873e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121181320)
   * 7661c074e35ccbca351d80181dfdc8de6bdaea0b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121207981)
   * 71d581e54fad407304b329b90cb7f917b29fb922 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121350937)
   * 93feb4ab1a859857dbaf0d8b31128d5f940cab0f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121377899)
   * 78fae992496817346192ec9d7aab4c91763cf37e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121508022)
   * c397b5dc03751e77216b8fdd0696541fb5d18e19 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121686999)
   * edaa92135577ffcbc86c774d945532816d4f01bc : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121688052)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjuwangg commented on issue #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37

2019-08-01 Thread GitBox
zjuwangg commented on issue #9239: [FLINK-13385]Align Hive data type mapping 
with FLIP-37
URL: https://github.com/apache/flink/pull/9239#issuecomment-517496333
 
 
   Hi, @twalthr ~
   Thanks for your detailed review and valuable comments.
   I have just updated my code; please have a look when you have time.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjuwangg commented on issue #9310: [FLINK-13190]add test to verify partition pruning for HiveTableSource

2019-08-01 Thread GitBox
zjuwangg commented on issue #9310: [FLINK-13190]add test to verify partition 
pruning for HiveTableSource
URL: https://github.com/apache/flink/pull/9310#issuecomment-517496162
 
 
   Rebased the code to fix conflicts.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type 
mapping with FLIP-37
URL: https://github.com/apache/flink/pull/9239#issuecomment-515435805
 
 
   ## CI report:
   
   * bb0663ddbb6eeda06b756c4ffc7094e64dbdb5b9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120851212)
   * 86a460407693769f0d2afaa3597c70f202126099 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121104847)
   * 5c25e802c096e2688e6cfa01ff7f74d3c050eef5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121108411)
   * c26538e93fcad20bd337b3766ccdfc30d46380fd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121114320)
   * 4f32c8d7f8d14601e8caa28d1079ae3fdce0873e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121181320)
   * 7661c074e35ccbca351d80181dfdc8de6bdaea0b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121207981)
   * 71d581e54fad407304b329b90cb7f917b29fb922 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121350937)
   * 93feb4ab1a859857dbaf0d8b31128d5f940cab0f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121377899)
   * 78fae992496817346192ec9d7aab4c91763cf37e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121508022)
   * c397b5dc03751e77216b8fdd0696541fb5d18e19 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121686999)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjuwangg commented on issue #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37

2019-08-01 Thread GitBox
zjuwangg commented on issue #9239: [FLINK-13385]Align Hive data type mapping 
with FLIP-37
URL: https://github.com/apache/flink/pull/9239#issuecomment-517494752
 
 
   Hi, @twalthr ~
   Thanks for your detailed review and valuable comments.
   I have just updated my code; please have a look when you have time.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjuwangg commented on a change in pull request #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37

2019-08-01 Thread GitBox
zjuwangg commented on a change in pull request #9239: [FLINK-13385]Align Hive 
data type mapping with FLIP-37
URL: https://github.com/apache/flink/pull/9239#discussion_r309935735
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTypeUtil.java
 ##
 @@ -85,65 +86,71 @@ public static TypeInfo toHiveTypeInfo(DataType dataType) {
LogicalTypeRoot type = dataType.getLogicalType().getTypeRoot();
 
if (dataType instanceof AtomicDataType) {
 
 Review comment:
   Yep, that would be more elegant.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-13424) HiveCatalog should add hive version in conf

2019-08-01 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-13424.

Resolution: Fixed

merged in master: b20ac47258228aed51be0d0e6fc958ac303ce4a1  1.9.0: 
4b8d35c7f9eb6573e3b099fd2ca8d4ca42ccae72

> HiveCatalog should add hive version in conf
> ---
>
> Key: FLINK-13424
> URL: https://issues.apache.org/jira/browse/FLINK-13424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{HiveTableSource}} and {{HiveTableSink}} retrieve hive version from conf. 
> Therefore {{HiveCatalog}} has to add it.
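
A minimal sketch, with an assumed property key and class name (not the actual HiveCatalog
code), of what the fix amounts to: the catalog writes its Hive version into the conf so
that the source and sink can read it back:

import org.apache.hadoop.hive.conf.HiveConf;

// Sketch only; the key and names are assumptions for illustration.
class VersionAwareHiveCatalog {

    static final String HIVE_VERSION_KEY = "flink.hive.version"; // assumed key

    private final HiveConf hiveConf;
    private final String hiveVersion;

    VersionAwareHiveCatalog(HiveConf hiveConf, String hiveVersion) {
        this.hiveConf = hiveConf;
        this.hiveVersion = hiveVersion;
    }

    void open() {
        // HiveTableSource / HiveTableSink retrieve the version from this conf,
        // so the catalog has to put it there when it opens.
        hiveConf.set(HIVE_VERSION_KEY, hiveVersion);
    }
}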



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13424) HiveCatalog should add hive version in conf

2019-08-01 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13424:
-
Fix Version/s: 1.10.0

> HiveCatalog should add hive version in conf
> ---
>
> Key: FLINK-13424
> URL: https://issues.apache.org/jira/browse/FLINK-13424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{HiveTableSource}} and {{HiveTableSink}} retrieve hive version from conf. 
> Therefore {{HiveCatalog}} has to add it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] asfgit closed pull request #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf

2019-08-01 Thread GitBox
asfgit closed pull request #9232: [FLINK-13424][hive] HiveCatalog should add 
hive version in conf
URL: https://github.com/apache/flink/pull/9232
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9321: [FLINK-13486][tests] Optimize AsyncDataStreamITCase to alleviate the …

2019-08-01 Thread GitBox
flinkbot edited a comment on issue #9321: [FLINK-13486][tests] Optimize 
AsyncDataStreamITCase to alleviate the …
URL: https://github.com/apache/flink/pull/9321#issuecomment-517266286
 
 
   ## CI report:
   
   * 8738fc7248be508edcf3e1c0dcea6578f7b35a7e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121587485)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

