[GitHub] [flink] flinkbot commented on issue #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-08 Thread GitBox
flinkbot commented on issue #10492: [FLINK-15140][runtime] Fix shuffle data 
compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#issuecomment-563110907
 
 
   
   ## CI report:
   
   * bef118a977b3aa635fc748260d07b0d5079b2c0e : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10491: [FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10491: [FLINK-15093][streaming-java] 
StreamExecutionEnvironment does not cle…
URL: https://github.com/apache/flink/pull/10491#issuecomment-563083959
 
 
   
   ## CI report:
   
   * 2e942aa60f0807a6a41fd0483237d8b571fdca1f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140171167)
   * f9657812903c088739c262b4e418d69683c52989 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot commented on issue #10493: [FLINK-15061][table]create/alter and table/databases properties should be case sensitive stored in catalog

2019-12-08 Thread GitBox
flinkbot commented on issue #10493: [FLINK-15061][table]create/alter and 
table/databases properties should be case sensitive stored in catalog
URL: https://github.com/apache/flink/pull/10493#issuecomment-563111006
 
 
   
   ## CI report:
   
   * a79225d75a53240951a4378380ff2aac457c1278 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Updated] (FLINK-15139) misc end to end test failed on 'SQL Client end-to-end test (Old planner)'

2019-12-08 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-15139:
--
Priority: Critical  (was: Major)

> misc end to end test failed on 'SQL Client end-to-end test (Old planner)'
> -
>
> Key: FLINK-15139
> URL: https://issues.apache.org/jira/browse/FLINK-15139
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: wangxiyuan
>Priority: Critical
> Fix For: 1.10.0
>
>
> The test "Running 'SQL Client end-to-end test (Old planner)'" in the misc e2e 
> test suite failed.
> Log:
> {code:java}
> (a94d1da25baf2a5586a296d9e933743c) switched from RUNNING to FAILED.
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load 
> user class: 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
> ClassLoader info: URL ClassLoader:
> Class not resolvable through given classloader.
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:266)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:430)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:353)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:419)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:353)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:144)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:432)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:460)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at 
> org.apache.flink.util.ChildFirstClassLoader.loadClass(ChildFirstClassLoader.java:60)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:78)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:562)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:550)
>   at 
> org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:511)
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:254)
>   ... 10 more
> {code}
> link: [https://travis-ci.org/apache/flink/jobs/622261358]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10468: [FLINK-14649][table sql / api] Flatten all the connector properties keys to make it easy to configure in DDL

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10468: [FLINK-14649][table sql / api] 
Flatten all the connector properties keys to make it easy to configure in DDL
URL: https://github.com/apache/flink/pull/10468#issuecomment-562648066
 
 
   
   ## CI report:
   
   * 9802f136dd38c6ba22752a7761e644b7bd05e99d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139785260)
   * 96db6fee76e36a40e62140bbc55d71f3ef7e8e41 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139853358)
   * 58217bef8f03db408b29dda2916ab19b2ac57ceb : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139862375)
   * 3e4465a4b29b2635dca4b482661aa384e4528ceb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139865498)
   * 51be8b2eb123d3e174a54be23d0426c3461bc647 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139940483)
   * 0a28b0da54c3edc944de40cc22d9b2bb9c28856d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140044208)
   * bc77e779da82c359eede72fa8701d7295143e56a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140126749)
   * 08cd3ae69e01907f72c16080e34fed4a28c5745d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130222)
   * 9c768bc00705629df52d5cc6074c58bc3e9a9c91 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140155749)
   * 539e0ddf21018bdaf5c87aba91070ea7be9359a7 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164814)
   * 06d5bf1cf8dd165aa4051f5dd706f2e57fd2668e : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140177489)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9404: [FLINK-13667][ml] Add the utility class for the Table

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #9404: [FLINK-13667][ml] Add the utility 
class for the Table
URL: https://github.com/apache/flink/pull/9404#issuecomment-519872127
 
 
   
   ## CI report:
   
   * 4a726770daa19b1a587c5f8d9221a95e08387592 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122573104)
   * cd4cfbe81b4d511ec985bc4ad55a5dad82722863 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/124368074)
   * 6efbedf230f6fdeec96b9802d296629269a230ab : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/125253936)
   * 51f02e22e7372fa20ab2dd679aaa1099b37d0b1e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/125893622)
   * 7d48bb2f3277398b2924f6a21ec31d8ea5570f38 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131786275)
   * 32df598f496a30f516ece1189633899b6d7dc201 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139147586)
   * 4bbe55250ac0e748a19313f654e811eb730cd627 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139507969)
   * 3b28c422b180379b6a2249c393196143a2a2de37 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164804)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] carp84 commented on issue #10300: [FLINK-14926] [state backends] Make sure no resource leak of RocksObject

2019-12-08 Thread GitBox
carp84 commented on issue #10300: [FLINK-14926] [state backends] Make sure no 
resource leak of RocksObject
URL: https://github.com/apache/flink/pull/10300#issuecomment-563107225
 
 
   @StephanEwen Thanks for the comments and suggestions! The code has been updated 
accordingly and already got a green build in [local 
travis](https://travis-ci.org/carp84/flink/builds/622490826), FYI.




[jira] [Commented] (FLINK-15141) Using decimal type in a sink table, the result returns a not match ValidationException

2019-12-08 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991210#comment-16991210
 ] 

xiaojin.wy commented on FLINK-15141:


[~ykt836] can you please assign someone to check it?

> Using decimal type in a sink table, the result returns a not match 
> ValidationException 
> ---
>
> Key: FLINK-15141
> URL: https://issues.apache.org/jira/browse/FLINK-15141
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> The planner I used is blink.
> *The source table is:*
> CREATE TABLE `aggtest` (
> a smallint,
> b float
> ) WITH (
> 'format.field-delimiter'='|',
> 'connector.type'='filesystem',
> 'format.derive-schema'='true',
> 
> 'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/sources/aggtest.csv',
> 'format.type'='csv'
> ); 
>  
> *The sink table is:*
> CREATE TABLE `agg_decimal_res` (
> avg_107_943 DECIMAL(10, 3)
> ) WITH (
> 'format.field-delimiter'='|',
> 'connector.type'='filesystem',
> 'format.derive-schema'='true',
> 
> 'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/test_aggregates__test_avg_cast_batch/results/agg_decimal_res.csv',
> 'format.type'='csv'
> );
>  
> *The sql is:*
> INSERT INTO agg_decimal_res SELECT CAST(avg(b) AS numeric(10,3)) AS 
> avg_107_943 FROM aggtest;
>  
> After executing the SQL, an exception appears, just like this:
> [INFO] Submitting SQL update statement to the cluster...
>  [ERROR] Could not execute SQL statement. Reason:
>  org.apache.flink.table.api.ValidationException: Field types of query result 
> and registered TableSink 
> `default_catalog`.`default_database`.`agg_decimal_res1` do not match.
>  Query result schema: [avg_107_943: DECIMAL(10, 3)]
>  TableSink schema: [avg_107_943: DECIMAL(38, 18)]
>  



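For readers puzzled by the mismatch above: the planner compares the query's result 
schema with the schema the registered TableSink reports, field by field. A minimal 
sketch of that comparison (hypothetical, not the planner's actual validation code), 
using Flink's DataTypes:

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.ValidationException;
import org.apache.flink.table.types.DataType;

public class SchemaMatchSketch {
    public static void main(String[] args) {
        DataType queryField = DataTypes.DECIMAL(10, 3);  // CAST(avg(b) AS NUMERIC(10, 3))
        DataType sinkField  = DataTypes.DECIMAL(38, 18); // what the registered TableSink reports
        // The error message above implies an exact field-by-field comparison like this one.
        if (!queryField.equals(sinkField)) {
            throw new ValidationException(
                    "Field types of query result and registered TableSink do not match.");
        }
    }
}
{code}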


[jira] [Updated] (FLINK-15141) Using decimal type in a sink table, the result returns a not match ValidationException

2019-12-08 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15141:
---
Description: 
The planner I used is blink.

*The source table is:*
CREATE TABLE `aggtest` (
a smallint,
b float
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/sources/aggtest.csv',
'format.type'='csv'
); 
 

*The sink table is:*
CREATE TABLE `agg_decimal_res` (
avg_107_943 DECIMAL(10, 3)
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/test_aggregates__test_avg_cast_batch/results/agg_decimal_res.csv',
'format.type'='csv'
);
 

*The sql is:*

INSERT INTO agg_decimal_res SELECT CAST(avg(b) AS numeric(10,3)) AS avg_107_943 
FROM aggtest;

 

After executing the SQL, an exception appears, just like this:

[INFO] Submitting SQL update statement to the cluster...
 [ERROR] Could not execute SQL statement. Reason:
 org.apache.flink.table.api.ValidationException: Field types of query result 
and registered TableSink 
`default_catalog`.`default_database`.`agg_decimal_res1` do not match.
 Query result schema: [avg_107_943: DECIMAL(10, 3)]
 TableSink schema: [avg_107_943: DECIMAL(38, 18)]

 

  was:
The planner I used is blink.

*The source table is:*

 

CREATE TABLE `aggtest` (
a smallint,
b float
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/sources/aggtest.csv',
'format.type'='csv'
);

 

 

 

*The sink table is:*

CREATE TABLE `agg_decimal_res` (
avg_107_943 DECIMAL(10, 3)
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/test_aggregates__test_avg_cast_batch/results/agg_decimal_res.csv',
'format.type'='csv'
);

 

*The sql is:*

INSERT INTO agg_decimal_res SELECT CAST(avg(b) AS numeric(10,3)) AS avg_107_943 
FROM aggtest;

 

After executing the SQL, an exception appears, just like this:

[INFO] Submitting SQL update statement to the cluster...
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field types of query result and 
registered TableSink `default_catalog`.`default_database`.`agg_decimal_res1` do 
not match.
Query result schema: [avg_107_943: DECIMAL(10, 3)]
TableSink schema: [avg_107_943: DECIMAL(38, 18)]

 


> Using decimal type in a sink table, the result returns a not match 
> ValidationException 
> ---
>
> Key: FLINK-15141
> URL: https://issues.apache.org/jira/browse/FLINK-15141
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> The planner what I used is blink.
> *The source table is:*
> CREATE TABLE `aggtest` (
> a smallint,
> b float
> ) WITH (
> 'format.field-delimiter'='|',
> 'connector.type'='filesystem',
> 'format.derive-schema'='true',
> 
> 'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/sources/aggtest.csv',
> 'format.type'='csv'
> ); 
>  
> *The sink table is:*
> CREATE TABLE `agg_decimal_res` (
> avg_107_943 DECIMAL(10, 3)
> ) WITH (
> 'format.field-delimiter'='|',
> 'connector.type'='filesystem',
> 'format.derive-schema'='true',
> 
> 'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/test_aggregates__test_avg_cast_batch/results/agg_decimal_res.csv',
> 'format.type'='csv'
> );
>  
> *The sql is:*
> INSERT INTO agg_decimal_res SELECT CAST(avg(b) AS numeric(10,3)) AS 
> avg_107_943 FROM aggtest;
>  
> After execute the sql, there will be a exception appear, just like this:
> [INFO] Submitting SQL update statement to the cluster...
>  [ERROR] Could not execute SQL statement. Reason:
>  org.apache.flink.table.api.ValidationException: Field types of query result 
> and registered TableSink 
> `default_catalog`.`default_database`.`agg_decimal_res1` do not match.
>  Query result schema: [avg_107_943: DECIMAL(10, 3)]
>  TableSink schema: [avg_107_943: DECIMAL(38, 18)]
>  





[jira] [Commented] (FLINK-15141) Using decimal type in a sink table, the result returns a not match ValidationException

2019-12-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991209#comment-16991209
 ] 

Jingsong Lee commented on FLINK-15141:
--

Hi [~xiaojin.wy], is this a duplicate of 
https://issues.apache.org/jira/browse/FLINK-15124?

> Using decimal type in a sink table, the result returns a not match 
> ValidationException 
> ---
>
> Key: FLINK-15141
> URL: https://issues.apache.org/jira/browse/FLINK-15141
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> The planner I used is blink.
> *The source table is:*
>  
> CREATE TABLE `aggtest` (
> a smallint,
> b float
> ) WITH (
> 'format.field-delimiter'='|',
> 'connector.type'='filesystem',
> 'format.derive-schema'='true',
> 'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/sources/aggtest.csv',
> 'format.type'='csv'
> );
>  
>  
>  
> *The sink table is:*
> CREATE TABLE `agg_decimal_res` (
> avg_107_943 DECIMAL(10, 3)
> ) WITH (
> 'format.field-delimiter'='|',
> 'connector.type'='filesystem',
> 'format.derive-schema'='true',
> 'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/test_aggregates__test_avg_cast_batch/results/agg_decimal_res.csv',
> 'format.type'='csv'
> );
>  
> *The sql is:*
> INSERT INTO agg_decimal_res SELECT CAST(avg(b) AS numeric(10,3)) AS 
> avg_107_943 FROM aggtest;
>  
> After executing the SQL, an exception appears, just like this:
> [INFO] Submitting SQL update statement to the cluster...
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Field types of query result 
> and registered TableSink 
> `default_catalog`.`default_database`.`agg_decimal_res1` do not match.
> Query result schema: [avg_107_943: DECIMAL(10, 3)]
> TableSink schema: [avg_107_943: DECIMAL(38, 18)]
>  





[GitHub] [flink] flinkbot commented on issue #10493: [FLINK-15061][table]create/alter and table/databases properties should be case sensitive stored in catalog

2019-12-08 Thread GitBox
flinkbot commented on issue #10493: [FLINK-15061][table]create/alter and 
table/databases properties should be case sensitive stored in catalog
URL: https://github.com/apache/flink/pull/10493#issuecomment-563104281
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a79225d75a53240951a4378380ff2aac457c1278 (Mon Dec 09 
07:35:50 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15061).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15061) create/alter table/databases properties should be case sensitive stored in catalog

2019-12-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15061:
---
Labels: pull-request-available  (was: )

> create/alter table/databases properties should be case sensitive stored in 
> catalog
> --
>
> Key: FLINK-15061
> URL: https://issues.apache.org/jira/browse/FLINK-15061
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Terry Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Now in the class `SqlToOperationConverter`, when creating a table the logic 
> converts all property keys to lower case, which causes the properties stored 
> in the catalog to lose their case style and makes them unintuitive for users 
> to read.



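To make the reported behavior concrete, a small self-contained sketch (hypothetical 
helper names, not the actual SqlToOperationConverter code) contrasting the 
lower-casing the ticket describes with the case-preserving behavior it asks for:

{code:java}
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

public class PropertyCaseSketch {
    // Behavior the ticket reports: keys are lower-cased before being stored in the catalog.
    static Map<String, String> lowerCaseKeys(Map<String, String> props) {
        Map<String, String> out = new LinkedHashMap<>();
        props.forEach((k, v) -> out.put(k.toLowerCase(Locale.ROOT), v));
        return out;
    }

    // Behavior the ticket asks for: store the keys exactly as written in the DDL.
    static Map<String, String> preserveKeys(Map<String, String> props) {
        return new LinkedHashMap<>(props);
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("Connector.Type", "filesystem");
        System.out.println(lowerCaseKeys(props)); // {connector.type=filesystem} -- case lost
        System.out.println(preserveKeys(props));  // {Connector.Type=filesystem} -- case kept
    }
}
{code}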


[GitHub] [flink] zjuwangg commented on issue #10493: [FLINK-15061][table]create/alter and table/databases properties should be case sensitive stored in catalog

2019-12-08 Thread GitBox
zjuwangg commented on issue #10493: [FLINK-15061][table]create/alter and 
table/databases properties should be case sensitive stored in catalog
URL: https://github.com/apache/flink/pull/10493#issuecomment-563103554
 
 
   cc @xuefuz @bowenli86 @KurtYoung for a review.




[jira] [Updated] (FLINK-12692) Support disk spilling in HeapKeyedStateBackend

2019-12-08 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-12692:
--
Fix Version/s: (was: 1.10.0)
   1.11.0

Sorry, but we have to postpone the work to 1.11.0 due to comparatively limited 
review resources. We will try to supply a trial version on 
[flink-packages|https://flink-packages.org] for those who'd like to try this 
out in production, and will post a note here once the trial version is ready.

> Support disk spilling in HeapKeyedStateBackend
> --
>
> Key: FLINK-12692
> URL: https://issues.apache.org/jira/browse/FLINK-12692
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / State Backends
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{HeapKeyedStateBackend}} is one of the two {{KeyedStateBackends}} in Flink, 
> since state lives as Java objects on the heap and the de/serialization only 
> happens during state snapshot and restore, it outperforms 
> {{RocksDBKeyedStateBackend}} when all data could reside in memory.
> However, along with this advantage, {{HeapKeyedStateBackend}} also has its 
> shortcomings, and the most painful one is the difficulty of estimating the 
> maximum heap size (Xmx) to set; we will suffer from GC impact once the 
> heap memory is not enough to hold all state data. There are several 
> (inevitable) causes of such a scenario, including (but not limited to):
> * Memory overhead of the Java object representation (tens of times the 
> serialized data size).
> * Data flood caused by burst traffic.
> * Data accumulation caused by source malfunction.
> To resolve this problem, we propose a solution to support spilling state data 
> to disk before heap memory is exhausted. We will monitor the heap usage and 
> choose the coldest data to spill, and automatically reload it when heap memory 
> is regained after data removal or TTL expiration.
> More details please refer to the design doc and mailing list discussion.



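As a rough illustration of the spill policy described above (monitor heap usage, 
spill the coldest data first), here is a heavily simplified sketch; all names and 
the threshold are made up, and this is not the proposed implementation:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SpillPolicySketch {
    static final double SPILL_THRESHOLD = 0.9; // made-up threshold: spill before exhaustion

    // Last-access timestamp per key group; the "coldest" group is the least recently used.
    final Map<Integer, Long> lastAccess = new ConcurrentHashMap<>();

    double heapUsage() {
        Runtime rt = Runtime.getRuntime();
        return (double) (rt.totalMemory() - rt.freeMemory()) / rt.maxMemory();
    }

    void maybeSpill() {
        // Keep spilling the coldest key group until heap usage drops below the threshold.
        while (heapUsage() > SPILL_THRESHOLD && !lastAccess.isEmpty()) {
            int coldest = lastAccess.entrySet().stream()
                    .min(Map.Entry.comparingByValue())
                    .get()
                    .getKey();
            spillKeyGroup(coldest);
            lastAccess.remove(coldest);
        }
    }

    void spillKeyGroup(int keyGroup) {
        // Serialize the group's state to disk and drop the on-heap copy (omitted).
    }
}
{code}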


[GitHub] [flink] zjuwangg opened a new pull request #10493: [FLINK-15061][table]create/alter and table/databases properties should be case sensitive stored in catalog

2019-12-08 Thread GitBox
zjuwangg opened a new pull request #10493: [FLINK-15061][table]create/alter and 
table/databases properties should be case sensitive stored in catalog
URL: https://github.com/apache/flink/pull/10493
 
 
   ## What is the purpose of the change
   
   *Now in the class `SqlToOperationConverter`, when creating a table the logic 
converts all property keys to lower case, which causes the properties stored in 
the catalog to lose their case style and makes them unintuitive for users to 
read.*
   
   
   ## Brief change log
   
 - [a79225d](https://github.com/apache/flink/commit/a79225d75a53240951a4378380ff2aac457c1278)
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[jira] [Commented] (FLINK-11911) KafkaTopicPartition is not a valid POJO

2019-12-08 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991204#comment-16991204
 ] 

Yu Li commented on FLINK-11911:
---

{quote}
The essential change this requires in the serialization stack is that the 
KryoSerializer needs to be able to detect that the new serializer on restore is 
a PojoSerializer, and allows the change by first performing state schema 
migration.

I suggest to address that first before coming back to this PR.
{quote}
[~fokko] From the 
[comment|https://github.com/apache/flink/pull/7979#issuecomment-472684022] on 
the linked PR, more effort is required and this cannot be accomplished in 1.10.0; 
I will change the fix version to 1.11 if there are no objections. Thanks.

> KafkaTopicPartition is not a valid POJO
> ---
>
> Key: FLINK-11911
> URL: https://issues.apache.org/jira/browse/FLINK-11911
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.8.0
>Reporter: Fokko Driesprong
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> KafkaTopicPartition is not a POJO, and therefore it cannot be serialized 
> efficiently. This happens when using the KafkaDeserializationSchema.
> When enforcing POJOs:
> ```
> java.lang.UnsupportedOperationException: Generic types have been disabled in 
> the ExecutionConfig and type 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition is 
> treated as a generic type.
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:86)
>   at 
> org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:107)
>   at 
> org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:52)
>   at 
> org.apache.flink.api.java.typeutils.ListTypeInfo.createSerializer(ListTypeInfo.java:102)
>   at 
> org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:288)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:289)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:219)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.initializeState(FlinkKafkaConsumerBase.java:856)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:278)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>   at java.lang.Thread.run(Thread.java:748)
> ```
> And in the logs:
> ```
> 2019-03-13 16:41:28,217 INFO  
> org.apache.flink.api.java.typeutils.TypeExtractor - class 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition 
> does not contain a setter for field topic
> 2019-03-13 16:41:28,221 INFO  
> org.apache.flink.api.java.typeutils.TypeExtractor - Class class 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition 
> cannot be used as a POJO type because not all fields are valid POJO fields, 
> and must be processed as GenericType. Please read the Flink documentation on 
> "Data Types & Serialization" for details of the effect on performance.
> ```



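For anyone hitting the same message: Flink's POJO rules require a public no-arg 
constructor plus public fields or getter/setter pairs. A hypothetical stand-in for 
KafkaTopicPartition that would satisfy them, keyed to the "does not contain a 
setter for field topic" log line above:

{code:java}
// Hypothetical stand-in; the real KafkaTopicPartition has final fields and no setters,
// which is why the TypeExtractor falls back to GenericType/Kryo in the log above.
public class TopicPartitionPojo {
    private String topic;
    private int partition;

    public TopicPartitionPojo() {} // public no-arg constructor required for POJO types

    public String getTopic() { return topic; }
    public void setTopic(String topic) { this.topic = topic; } // the setter the log says is missing
    public int getPartition() { return partition; }
    public void setPartition(int partition) { this.partition = partition; }
}
{code}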


[jira] [Created] (FLINK-15141) Using decimal type in a sink table, the result returns a not match ValidationException

2019-12-08 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15141:
--

 Summary: Using decimal type in a sink table, the result returns a 
not match ValidationException 
 Key: FLINK-15141
 URL: https://issues.apache.org/jira/browse/FLINK-15141
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy


The planner I used is blink.

*The source table is:*

 

CREATE TABLE `aggtest` (
a smallint,
b float
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/sources/aggtest.csv',
'format.type'='csv'
);

 

 

 

*The sink table is:*

CREATE TABLE `agg_decimal_res` (
avg_107_943 DECIMAL(10, 3)
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'connector.path'='hdfs://zthdev/defender_test_data/daily/test_aggregates/test_aggregates__test_avg_cast_batch/results/agg_decimal_res.csv',
'format.type'='csv'
);

 

*The sql is:*

INSERT INTO agg_decimal_res SELECT CAST(avg(b) AS numeric(10,3)) AS avg_107_943 
FROM aggtest;

 

After executing the SQL, an exception appears, just like this:

[INFO] Submitting SQL update statement to the cluster...
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field types of query result and 
registered TableSink `default_catalog`.`default_database`.`agg_decimal_res1` do 
not match.
Query result schema: [avg_107_943: DECIMAL(10, 3)]
TableSink schema: [avg_107_943: DECIMAL(38, 18)]

 





[GitHub] [flink] flinkbot edited a comment on issue #10490: [FLINK-15136][docs] Update the Chinese version of Working with State

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10490: [FLINK-15136][docs] Update the 
Chinese version of Working with State
URL: https://github.com/apache/flink/pull/10490#issuecomment-563052850
 
 
   
   ## CI report:
   
   * a2fdaa407c0cf02efee89f2c65912445ee3d34e1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140159080)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10468: [FLINK-14649][table sql / api] Flatten all the connector properties keys to make it easy to configure in DDL

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10468: [FLINK-14649][table sql / api] 
Flatten all the connector properties keys to make it easy to configure in DDL
URL: https://github.com/apache/flink/pull/10468#issuecomment-562648066
 
 
   
   ## CI report:
   
   * 9802f136dd38c6ba22752a7761e644b7bd05e99d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139785260)
   * 96db6fee76e36a40e62140bbc55d71f3ef7e8e41 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139853358)
   * 58217bef8f03db408b29dda2916ab19b2ac57ceb : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139862375)
   * 3e4465a4b29b2635dca4b482661aa384e4528ceb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139865498)
   * 51be8b2eb123d3e174a54be23d0426c3461bc647 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139940483)
   * 0a28b0da54c3edc944de40cc22d9b2bb9c28856d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140044208)
   * bc77e779da82c359eede72fa8701d7295143e56a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140126749)
   * 08cd3ae69e01907f72c16080e34fed4a28c5745d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130222)
   * 9c768bc00705629df52d5cc6074c58bc3e9a9c91 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140155749)
   * 539e0ddf21018bdaf5c87aba91070ea7be9359a7 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164814)
   * 06d5bf1cf8dd165aa4051f5dd706f2e57fd2668e : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Updated] (FLINK-13197) support querying Hive's view in Flink

2019-12-08 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-13197:
---
Issue Type: Test  (was: Improvement)

> support querying Hive's view in Flink
> -
>
> Key: FLINK-13197
> URL: https://issues.apache.org/jira/browse/FLINK-13197
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.10.0
>
>
> One goal of HiveCatalog and hive integration is to enable Flink-Hive 
> interoperability, that is Flink should understand existing Hive meta-objects, 
> and Hive meta-objects created thru Flink should be understood by Hive.
> Taking as an example a Hive view v1 in HiveCatalog hc and database db: unlike 
> an equivalent Flink view, whose full path in the expanded query should be 
> hc.db.v1, the Hive view's full path in the expanded query should be db.v1, 
> such that Hive can understand it no matter whether it was created by Hive or Flink.
> [~lirui] can you help to ensure that Flink can also query Hive's view in both 
> Flink planner and Blink planner?
> cc [~xuefuz]





[jira] [Updated] (FLINK-13197) Verify querying Hive's view in Flink

2019-12-08 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-13197:
---
Summary: Verify querying Hive's view in Flink  (was: support querying 
Hive's view in Flink)

> Verify querying Hive's view in Flink
> 
>
> Key: FLINK-13197
> URL: https://issues.apache.org/jira/browse/FLINK-13197
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.10.0
>
>
> One goal of HiveCatalog and hive integration is to enable Flink-Hive 
> interoperability, that is Flink should understand existing Hive meta-objects, 
> and Hive meta-objects created thru Flink should be understood by Hive.
> Taking as an example a Hive view v1 in HiveCatalog hc and database db: unlike 
> an equivalent Flink view, whose full path in the expanded query should be 
> hc.db.v1, the Hive view's full path in the expanded query should be db.v1, 
> such that Hive can understand it no matter whether it was created by Hive or Flink.
> [~lirui] can you help to ensure that Flink can also query Hive's view in both 
> Flink planner and Blink planner?
> cc [~xuefuz]





[GitHub] [flink] flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement 
listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#issuecomment-560368141
 
 
   
   ## CI report:
   
   * ebaf14134fb520e1e66c7788139fb619e8fb2be0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138957663)
   * 9df2a0f1ad0d34ef3ae9a5068c3befd26cbfa41b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139847920)
   * cccf5a0caf535a1cc1d2e85a97c581f8b5e95774 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/139855927)
   * d26505044d18e5bed32dc2cbe951c6288fd70d95 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140091581)
   * 420eeda2acb74315cf284ae51f80e3548eb10d7d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130215)
   * 7bd7a7411c7de50717f29230d64775338671c037 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164791)
   * 821e68e39f6430f08b95b65adc14d8b23c72d568 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/140171153)
   * bb5841b725e8ceea0374ca8fd2c567ae453f2e60 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140173377)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Assigned] (FLINK-15109) InternalTimerServiceImpl references restored state after use, taking up resources unnecessarily

2019-12-08 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang reassigned FLINK-15109:


Assignee: Roman Khachatryan

> InternalTimerServiceImpl references restored state after use, taking up 
> resources unnecessarily
> ---
>
> Key: FLINK-15109
> URL: https://issues.apache.org/jira/browse/FLINK-15109
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.9.1
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> E.g. 
> org.apache.flink.streaming.api.operators.InternalTimerServiceImpl#restoredTimersSnapshot:
>  # written in restoreTimersForKeyGroup()
>  # used in startTimerService()
>  # and then never used again.
>  



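The fix applies the familiar pattern of dropping a restore-only reference once it 
has been consumed; a minimal sketch (hypothetical names, mirroring the three steps 
listed above):

{code:java}
// Hypothetical names, mirroring the three steps listed in the issue.
class TimerServiceSketch {
    private Object restoredTimersSnapshot; // only needed between restore and start

    void restoreTimersForKeyGroup(Object snapshot) {
        this.restoredTimersSnapshot = snapshot;  // 1. written on restore
    }

    void startTimerService() {
        registerTimers(restoredTimersSnapshot);  // 2. used once on start
        this.restoredTimersSnapshot = null;      // 3. never used again -> let GC reclaim it
    }

    private void registerTimers(Object snapshot) {
        // Re-register the restored timers (omitted).
    }
}
{code}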


[jira] [Resolved] (FLINK-15109) InternalTimerServiceImpl references restored state after use, taking up resources unnecessarily

2019-12-08 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang resolved FLINK-15109.
--
Resolution: Fixed

Fixed in master: 59ac51814c46d790f1ae030e1a199bddf00a8b01

> InternalTimerServiceImpl references restored state after use, taking up 
> resources unnecessarily
> ---
>
> Key: FLINK-15109
> URL: https://issues.apache.org/jira/browse/FLINK-15109
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.9.1
>Reporter: Roman Khachatryan
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> E.g. 
> org.apache.flink.streaming.api.operators.InternalTimerServiceImpl#restoredTimersSnapshot:
>  # written in restoreTimersForKeyGroup()
>  # used in startTimerService()
>  # and then never used again.
>  





[jira] [Updated] (FLINK-15109) InternalTimerServiceImpl references restored state after use, taking up resources unnecessarily

2019-12-08 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang updated FLINK-15109:
-
Fix Version/s: 1.10.0

> InternalTimerServiceImpl references restored state after use, taking up 
> resources unnecessarily
> ---
>
> Key: FLINK-15109
> URL: https://issues.apache.org/jira/browse/FLINK-15109
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.9.1
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> E.g. 
> org.apache.flink.streaming.api.operators.InternalTimerServiceImpl#restoredTimersSnapshot:
>  # written in restoreTimersForKeyGroup()
>  # used in startTimerService()
>  # and then never used again.
>  





[GitHub] [flink] zhijiangW merged pull request #10465: [FLINK-15109][runtime] null out restored state ref in InternalTimerServiceImpl after use

2019-12-08 Thread GitBox
zhijiangW merged pull request #10465: [FLINK-15109][runtime] null out restored 
state ref in InternalTimerServiceImpl after use
URL: https://github.com/apache/flink/pull/10465
 
 
   




[GitHub] [flink] flinkbot commented on issue #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-08 Thread GitBox
flinkbot commented on issue #10492: [FLINK-15140][runtime] Fix shuffle data 
compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#issuecomment-563096144
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit bef118a977b3aa635fc748260d07b0d5079b2c0e (Mon Dec 09 
07:10:40 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15140).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15140) Shuffle data compression does not work with BroadcastRecordWriter.

2019-12-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15140:
---
Labels: pull-request-available  (was: )

> Shuffle data compression does not work with BroadcastRecordWriter.
> --
>
> Key: FLINK-15140
> URL: https://issues.apache.org/jira/browse/FLINK-15140
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Yingjie Cao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> I tested the newest code of the master branch last weekend with more test cases. 
> Unfortunately, several problems were encountered, including a bug in 
> compression.
> For pipelined mode, the compressor copies the compressed data back to the input 
> buffer; however, when BroadcastRecordWriter is used the underlying buffer is 
> shared, so we cannot copy the compressed buffer back to the input buffer. For 
> blocking mode, we wrongly recycle the buffer when it is not compressed, and the 
> problem is also triggered when BroadcastRecordWriter is used.
> To fix the problem, for blocking shuffle the reference counter should be 
> maintained correctly; for pipelined shuffle, the simplest way may be to disable 
> compression when the underlying buffer is shared. I will open a PR to fix the 
> problem.



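A toy illustration of the pipelined-mode hazard described above: with 
BroadcastRecordWriter, all channels share one underlying buffer, so compressing 
"in place" would overwrite bytes other readers still see. The classes are 
hypothetical, not Flink's actual Buffer API:

{code:java}
/** Toy ref-counted buffer; a stand-in for the shared network buffer, not Flink's API. */
class SharedBuffer {
    final byte[] bytes;
    int refCount; // > 1 when BroadcastRecordWriter shares it across all channels

    SharedBuffer(byte[] bytes, int readers) {
        this.bytes = bytes;
        this.refCount = readers;
    }
}

public class InPlaceCompressionHazard {
    /** Copying compressed data back into the input buffer is only safe for exclusive buffers. */
    static void compressInPlace(SharedBuffer buf, byte[] compressed) {
        if (buf.refCount > 1) {
            // This is why the fix disables compression for broadcast in pipelined mode:
            // writing here would corrupt the bytes every other channel still reads.
            throw new IllegalStateException("cannot compress a shared buffer in place");
        }
        System.arraycopy(compressed, 0, buf.bytes, 0, compressed.length);
    }
}
{code}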


[GitHub] [flink] wsry opened a new pull request #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-08 Thread GitBox
wsry opened a new pull request #10492: [FLINK-15140][runtime] Fix shuffle data 
compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492
 
 
   
   1. Disable data compression for operators that use the broadcast partitioner in 
pipelined mode.
   2. Do not recycle the buffer if it is not compressed in blocking mode.
   
   ## What is the purpose of the change
   We implemented shuffle data compression in FLINK-14845 to reduce disk and 
network IO, but unfortunately a bug was introduced which makes the compression 
not work when the broadcast partitioner is used.
   For pipelined mode, the compressor copies the compressed data back to the input 
buffer; however, the underlying buffer is shared when BroadcastRecordWriter is 
used, so we cannot copy the compressed buffer back to the input buffer if the 
underlying buffer is shared. For blocking mode, we wrongly recycle the buffer 
when the buffer is not compressed, and the problem is also triggered when 
BroadcastRecordWriter is used.
   This PR fixes the problem.
   
   
   ## Brief change log
   
 - Disable data compression for operators that use the broadcast partitioner 
in pipelined mode.
 - Do not recycle the buffer if it is not compressed in blocking mode.
   
   
   ## Verifying this change
   
   ```ShuffleCompressionITCase``` is modified to cover the scenario.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[jira] [Updated] (FLINK-13410) Csv input format does not support LocalDate

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-13410:
-
Fix Version/s: (was: 1.10.0)

> Csv input format does not support LocalDate
> ---
>
> Key: FLINK-13410
> URL: https://issues.apache.org/jira/browse/FLINK-13410
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Caizhi Weng
>Priority: Major
>
> The Csv input format is lacking parsers for LocalDate, LocalTime, etc. As 
> DataTypes.DATE now defaults to LocalDate, we should add these parsers for a 
> better user experience.
> A temporary workaround is that users can call 
> DataTypes.TIMESTAMP().bridgedTo(java.sql.Timestamp.class) to use the old 
> SqlTimestamp converter.



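Spelled out, the workaround mentioned in the ticket looks like this (assuming 
Flink's DataTypes API, where the method is bridgedTo and takes a Class):

{code:java}
import java.sql.Timestamp;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class TimestampBridgeSketch {
    public static void main(String[] args) {
        // Bridge the logical TIMESTAMP type back to java.sql.Timestamp so the old
        // converter is used instead of the LocalDateTime-based one.
        DataType legacyTimestamp = DataTypes.TIMESTAMP(3).bridgedTo(Timestamp.class);
        System.out.println(legacyTimestamp);
    }
}
{code}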


[jira] [Commented] (FLINK-15123) remove uniqueKeys from FlinkStatistic in blink planner

2019-12-08 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991193#comment-16991193
 ] 

godfrey he commented on FLINK-15123:


I'd like to take this ticket.

> remove uniqueKeys from FlinkStatistic in blink planner 
> ---
>
> Key: FLINK-15123
> URL: https://issues.apache.org/jira/browse/FLINK-15123
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: b_5.txt
>
>
> {{uniqueKeys}} is a kind of constraint; it's unreasonable for {{uniqueKeys}} 
> to be treated as a kind of statistic, so we should remove uniqueKeys from 
> {{FlinkStatistic}} in the blink planner. Some temporary solutions (e.g. 
> {{RichTableSourceQueryOperation}}) should also be revisited after primaryKey 
> is introduced in {{TableSchema}}.





[jira] [Commented] (FLINK-13410) Csv input format does not support LocalDate

2019-12-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991194#comment-16991194
 ] 

Jingsong Lee commented on FLINK-13410:
--

Now, the Csv source / sink will use type information, so they should point to 
java.sql.Timestamp correctly.

Hi [~TsReaper], which case will fail?

> Csv input format does not support LocalDate
> ---
>
> Key: FLINK-13410
> URL: https://issues.apache.org/jira/browse/FLINK-13410
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Caizhi Weng
>Priority: Major
> Fix For: 1.10.0
>
>
> The Csv input format is lacking parsers for LocalDate, LocalTime, etc. As 
> DataTypes.DATE now defaults to LocalDate, we should add these parsers for a 
> better user experience.
> A temporary workaround is that users can call 
> DataTypes.TIMESTAMP().bridgedTo(java.sql.Timestamp.class) to use the old 
> SqlTimestamp converter.





[jira] [Commented] (FLINK-15097) flink can not use user specified hdfs conf when submitting app in client node

2019-12-08 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991192#comment-16991192
 ] 

Congxian Qiu(klion26) commented on FLINK-15097:
---

If these two are truly the same, could we close this issue as a duplicate and 
link it to FLINK-11135?

> flink can not use user specified hdfs conf when submitting app in client node
> -
>
> Key: FLINK-15097
> URL: https://issues.apache.org/jira/browse/FLINK-15097
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.9.1
>Reporter: qian wang
>Priority: Major
> Attachments: 0001-adjust-read-hdfs-conf-order.patch
>
>
> Currently, if a cluster node has the env variable HADOOP_CONF_DIR set, Flink 
> forces the use of the hdfs-site.xml in the corresponding dir, so a user who 
> submits an app from the client node cannot use a custom 
> hdfs-site.xml/hdfs-default (through setting fs.hdfs.hdfssite or 
> fs.hdfs.hdfsdefault) to set a custom block size or replication number. For 
> example, using yarnship to upload my hdfs conf dir and pointing 
> fs.hdfs.hdfssite directly to \{conf dir}/hdfs-site.xml is useless.
> Deep in the code, this is due to the order of choosing the conf in 
> HadoopUtils.java: the conf in HADOOP_CONF_DIR will override the user's 
> uploaded conf. I think this is not sensible, so I reversed the order in which 
> Flink reads the hdfs conf, to let the user's uploaded custom conf override 
> HADOOP_CONF_DIR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode

2019-12-08 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991190#comment-16991190
 ] 

Guowei Ma edited comment on FLINK-13993 at 12/9/19 7:01 AM:


[~tison] Sorry for the late reply.  I think your method could work.  But the 
benefit of my method is that the directory structure of the TM is almost the 
same in every mode.

 

[~liyu] yes I would do it.


was (Author: maguowei):
[~tison] Sorry for the late reply.  I think your method could work.  But the 
method leads to the directory structure of the TM being almost the same in 
every mode.

 

[~liyu] yes I would do it.

> Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
> -
>
> Key: FLINK-13993
> URL: https://issues.apache.org/jira/browse/FLINK-13993
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>   Original Estimate: 30h
>  Time Spent: 20m
>  Remaining Estimate: 29h 40m
>
> Currently, Flink has the FlinkUserCodeClassLoader, which is used to load the 
> user’s classes. However, the user classes and the system classes are all 
> loaded by the system classloader in the perjob mode. This introduces some conflicts.
> This document[1] gives a proposal that makes the FlinkUserClassLoader load 
> the user classes in perjob mode. (discussed with Till[2])
>  
> [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7]
> [2] 
> [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode

2019-12-08 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991190#comment-16991190
 ] 

Guowei Ma commented on FLINK-13993:
---

[~tison] Sorry for the late reply.  I think your method could work.  But the 
method leads to the directory structure of the TM being almost the same in 
every mode.

 

[~liyu] yes I would do it.

> Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
> -
>
> Key: FLINK-13993
> URL: https://issues.apache.org/jira/browse/FLINK-13993
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>   Original Estimate: 30h
>  Time Spent: 20m
>  Remaining Estimate: 29h 40m
>
> Currently, Flink has the FlinkUserCodeClassLoader, which is used to load the 
> user’s classes. However, the user classes and the system classes are all 
> loaded by the system classloader in the perjob mode. This introduces some conflicts.
> This document[1] gives a proposal that makes the FlinkUserClassLoader load 
> the user classes in perjob mode. (discussed with Till[2])
>  
> [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7]
> [2] 
> [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-11135) Refactor Hadoop config loading in HadoopUtils

2019-12-08 Thread Paul Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Lin updated FLINK-11135:
-
Affects Version/s: 1.10.0
   1.8.2
   1.9.1

> Refactor Hadoop config loading in HadoopUtils
> -
>
> Key: FLINK-11135
> URL: https://issues.apache.org/jira/browse/FLINK-11135
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.6.2, 1.7.0, 1.8.2, 1.10.0, 1.9.1
>Reporter: Paul Lin
>Assignee: Paul Lin
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, Hadoop config search order is: `HADOOP_HOME > HADOOP_CONF_DIR > 
> Flink configuration entries`. However, YARN client prefers `HADOOP_CONF_DIR` 
> to `HADOOP_HOME`, so it would result in inconsistency if the two env 
> variables point to different paths. Hence I propose to refactor the Hadoop 
> config loading in HadoopUtils, changing the search order to `HADOOP_CONF_DIR 
> > Flink configuration entries > HADOOP_HOME`.
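
For illustration only (not the actual HadoopUtils code), a sketch of the 
proposed search order, assuming a hypothetical helper that probes each 
candidate directory for an hdfs-site.xml:
{code:java}
import java.io.File;
import java.util.Arrays;
import java.util.Optional;

public class HadoopConfResolver {

	// Probe the candidates in the proposed order:
	// HADOOP_CONF_DIR > Flink configuration entries > HADOOP_HOME.
	public static Optional<File> resolve(String flinkConfiguredDir) {
		String hadoopConfDir = System.getenv("HADOOP_CONF_DIR");
		String hadoopHome = System.getenv("HADOOP_HOME");
		return Arrays.asList(
				hadoopConfDir,
				flinkConfiguredDir,
				hadoopHome == null ? null : hadoopHome + "/etc/hadoop")
			.stream()
			.filter(dir -> dir != null && new File(dir, "hdfs-site.xml").exists())
			.findFirst()
			.map(File::new);
	}
}
{code}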



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14515) Implement LimitableTableSource for FileSystemTableFactory

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14515:
-
Fix Version/s: (was: 1.10.0)

> Implement LimitableTableSource for FileSystemTableFactory
> -
>
> Key: FLINK-14515
> URL: https://issues.apache.org/jira/browse/FLINK-14515
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Jingsong Lee
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode

2019-12-08 Thread Guowei Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma closed FLINK-13993.
-
Resolution: Fixed

> Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
> -
>
> Key: FLINK-13993
> URL: https://issues.apache.org/jira/browse/FLINK-13993
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>   Original Estimate: 30h
>  Time Spent: 20m
>  Remaining Estimate: 29h 40m
>
> Currently, Flink has the FlinkUserCodeClassLoader, which is used to load the 
> user’s classes. However, the user classes and the system classes are all 
> loaded by the system classloader in the perjob mode. This introduces some conflicts.
> This document[1] gives a proposal that makes the FlinkUserClassLoader load 
> the user classes in perjob mode. (discussed with Till[2])
>  
> [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7]
> [2] 
> [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15081:
---

Assignee: Steve OU

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Assignee: Steve OU
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991187#comment-16991187
 ] 

Jark Wu commented on FLINK-15081:
-

Hi [~ousheobin], I assigned this ticket to you. Feel free to open a pull request.

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Assignee: Steve OU
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10491: [FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10491: [FLINK-15093][streaming-java] 
StreamExecutionEnvironment does not cle…
URL: https://github.com/apache/flink/pull/10491#issuecomment-563083959
 
 
   
   ## CI report:
   
   * 2e942aa60f0807a6a41fd0483237d8b571fdca1f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140171167)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10489: [FLINK-15134][client] Delete temporary files created in YarnClusterDescriptor

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10489: [FLINK-15134][client] Delete 
temporary files created in YarnClusterDescriptor
URL: https://github.com/apache/flink/pull/10489#issuecomment-563036443
 
 
   
   ## CI report:
   
   * bd0bc5261acfb555944947c0151f54c5eb806e1a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140154352)
   * bf750f63748d6fd7fa3a6301cd6d498e2785e172 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140158866)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement 
listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#issuecomment-560368141
 
 
   
   ## CI report:
   
   * ebaf14134fb520e1e66c7788139fb619e8fb2be0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138957663)
   * 9df2a0f1ad0d34ef3ae9a5068c3befd26cbfa41b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139847920)
   * cccf5a0caf535a1cc1d2e85a97c581f8b5e95774 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/139855927)
   * d26505044d18e5bed32dc2cbe951c6288fd70d95 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140091581)
   * 420eeda2acb74315cf284ae51f80e3548eb10d7d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130215)
   * 7bd7a7411c7de50717f29230d64775338671c037 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164791)
   * 821e68e39f6430f08b95b65adc14d8b23c72d568 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140171153)
   * bb5841b725e8ceea0374ca8fd2c567ae453f2e60 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-08 Thread Steve OU (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991182#comment-16991182
 ] 

Steve OU edited comment on FLINK-15081 at 12/9/19 6:45 AM:
---

[~jark] Hi Jark. I have not opened the PR yet. Actually it is my first time 
contributing to Flink and I am not so familiar with the workflow. Should I 
create the PR after your assignment, or just translate the document and open a 
PR directly?


was (Author: ousheobin):
[~jark] Hi Jark. I have not opened the PR yet. Actually it is the first time I 
contribute to Flink and I am not so familiar with the workflow. Should I 
create the PR after your assignment, or just translate the document and open a 
PR directly?

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-08 Thread Steve OU (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991182#comment-16991182
 ] 

Steve OU commented on FLINK-15081:
--

[~jark] Hi Jark. I have not opened the PR yet. Actually it is the first time I 
contribute to Flink and I am not so familiar with the workflow. Should I 
create the PR after your assignment, or just translate the document and open a 
PR directly?

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9289: [FLINK-13390] Clarify the exact meaning of state size when executing incremental checkpoint

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #9289: [FLINK-13390] Clarify the exact 
meaning of state size when executing incremental checkpoint
URL: https://github.com/apache/flink/pull/9289#issuecomment-516734686
 
 
   
   ## CI report:
   
   * a31d65fb01f4d6d8952f0eab8def4f379cd1051a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121366735)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10300: [FLINK-14926] [state backends] Make sure no resource leak of RocksObject

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10300: [FLINK-14926] [state backends] Make 
sure no resource leak of RocksObject
URL: https://github.com/apache/flink/pull/10300#issuecomment-557891612
 
 
   
   ## CI report:
   
   * dae41b86e4710d0c83d3f764e6196513db26a2c8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137940914)
   * 95bb102bbc2cb241ab66efbf0870f2a9d9e515d8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137975257)
   * 5e8237256c8b300916f1b68347890c0249dd7367 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139852604)
   * 905196bf359b5eccf86b35df16250826bee0c0f0 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140171139)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-15105) Resuming Externalized Checkpoint after terminal failure (rocks, incremental) end-to-end test stalls on travis

2019-12-08 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991179#comment-16991179
 ] 

Congxian Qiu(klion26) edited comment on FLINK-15105 at 12/9/19 6:38 AM:


The test completed the checkpoint successfully in the first job, resumed from 
the checkpoint successfully in the second job, and could complete checkpoints 
in the second job successfully:
{code:java}
// log for first job
2019-12-05 20:12:17,410 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph        - 
SlidingWindowOperator (1/2) (5a5c73dd041a0145bc02dc017e46bf1f) switched from DE
PLOYING to RUNNING.
2019-12-05 20:12:17,970 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering 
checkpoint 1 @ 1575576737956 for job 39a292088648857cac5f7e110547c18
0.
2019-12-05 20:12:21,095 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Completed 
checkpoint 1 for job 39a292088648857cac5f7e110547c180 (261564 bytes in 3114 ms).
2019-12-05 20:12:21,113 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering 
checkpoint 2 @ 1575576741094 for job 39a292088648857cac5f7e110547c180.
2019-12-05 20:12:22,002 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph        - FailureMapper 
(1/1) (4d273da136346ef3ff6e1a54d197f00b) switched from RUNNING to
 FAILED.
java.lang.RuntimeException: Error while confirming checkpoint
        at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:1205)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Artificial failure.
        at 
org.apache.flink.streaming.tests.FailureMapper.notifyCheckpointComplete(FailureMapper.java:70)
        at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.notifyCheckpointComplete(AbstractUdfStreamOperator.java:130)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:822)
        at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:1200)
        ... 5 more
2019-12-05 20:12:22,014 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Discarding 
checkpoint 2 of job 39a292088648857cac5f7e110547c180.
java.lang.RuntimeException: Error while confirming checkpoint
        at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:1205)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Artificial failure.
        at 
org.apache.flink.streaming.tests.FailureMapper.notifyCheckpointComplete(FailureMapper.java:70)
        at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.notifyCheckpointComplete(AbstractUdfStreamOperator.java:130)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:822)
        at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:1200)
        ... 5 more

// log for second job
2019-12-05 20:12:27,190 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Starting job 
7c862506012fb04c0d565bfda7cc9595 from savepoint 
file:///home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-02075534631/externalized-chckpt-e2e-backend-dir/39a292088648857cac5f7e110547c180/chk-1
 ()
2019-12-05 20:12:27,213 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Reset the 
checkpoint ID of job 7c862506012fb04c0d565bfda7cc9595 to 2.
2019-12-05 20:12:27,220 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Restoring job 
7c862506012fb04c0d565bfda7cc9595 from latest valid checkpoint: Checkpoint 1 @ 0 
for 7c862506012fb04c0d565bfda7cc9595.
2019-12-05 20:12:27,231 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - No master state 
to restore
2019-12-05 20:12:27,232 INFO  
org.apache.flink.runtime.jobmaster.JobManagerRunner           - JobManager 
runner for job General purpose test job (7c862506012fb04c0d565bfda7cc9595) was 
granted leadership with session id ---- at 
akka.tcp://flink@localhost:6123/user/jobmanager_1.
2019-12-05 20:12:27,233 INFO  org.apache.flink.runtime.jobmaster.JobMaster      
            - Starting execution of job General purpose 

[GitHub] [flink] danny0405 commented on a change in pull request #10491: [FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…

2019-12-08 Thread GitBox
danny0405 commented on a change in pull request #10491: 
[FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…
URL: https://github.com/apache/flink/pull/10491#discussion_r355282741
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/local/LocalExecutorITCase.java
 ##
 @@ -744,6 +744,30 @@ public void testExecuteInsertInto() throws Exception {
fail("Unexpected job status.");
}
}
+   // Execute the same update again.
+   final ProgramTargetDescriptor targetDescriptor2 = 
executor.executeUpdate(
 
 Review comment:
   Maybe we can write a for loop?
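   
   For illustration, a sketch of the suggested refactoring (the executeUpdate 
   signature and the SQL statement are assumed from the diff context above, 
   not taken from the actual test):
   
   ```java
   // Hypothetical fragment of testExecuteInsertInto(): run the same update
   // twice in a loop instead of duplicating the block. `executor`, `session`
   // and the statement are assumed from the surrounding test context.
   for (int i = 0; i < 2; i++) {
       final ProgramTargetDescriptor targetDescriptor = executor.executeUpdate(
           session,
           "INSERT INTO TableSourceSink SELECT * FROM TableNumber1"); // assumed statement
       assertNotNull(targetDescriptor);
   }
   ```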


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15105) Resuming Externalized Checkpoint after terminal failure (rocks, incremental) end-to-end test stalls on travis

2019-12-08 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991179#comment-16991179
 ] 

Congxian Qiu(klion26) commented on FLINK-15105:
---

The test completed the checkpoint successfully in the first job, resumed from 
the checkpoint successfully in the second job, and could complete checkpoints 
in the second job successfully:

 
{code:java}
// log for first job
2019-12-05 20:12:17,410 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph        - 
SlidingWindowOperator (1/2) (5a5c73dd041a0145bc02dc017e46bf1f) switched from DE
PLOYING to RUNNING.
2019-12-05 20:12:17,970 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering 
checkpoint 1 @ 1575576737956 for job 39a292088648857cac5f7e110547c18
0.
2019-12-05 20:12:21,095 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Completed 
checkpoint 1 for job 39a292088648857cac5f7e110547c180 (261564 bytes in 3114 ms).
2019-12-05 20:12:21,113 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering 
checkpoint 2 @ 1575576741094 for job 39a292088648857cac5f7e110547c180.
2019-12-05 20:12:22,002 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph        - FailureMapper 
(1/1) (4d273da136346ef3ff6e1a54d197f00b) switched from RUNNING to
 FAILED.
java.lang.RuntimeException: Error while confirming checkpoint
        at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:1205)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Artificial failure.
        at 
org.apache.flink.streaming.tests.FailureMapper.notifyCheckpointComplete(FailureMapper.java:70)
        at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.notifyCheckpointComplete(AbstractUdfStreamOperator.java:130)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:822)
        at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:1200)
        ... 5 more
2019-12-05 20:12:22,014 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Discarding 
checkpoint 2 of job 39a292088648857cac5f7e110547c180.
java.lang.RuntimeException: Error while confirming checkpoint
        at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:1205)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Artificial failure.
        at 
org.apache.flink.streaming.tests.FailureMapper.notifyCheckpointComplete(FailureMapper.java:70)
        at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.notifyCheckpointComplete(AbstractUdfStreamOperator.java:130)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:822)
        at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:1200)
        ... 5 more

// log for second job
2019-12-05 20:12:27,190 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Starting job 
7c862506012fb04c0d565bfda7cc9595 from savepoint 
file:///home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-02075534631/externalized-chckpt-e2e-backend-dir/39a292088648857cac5f7e110547c180/chk-1
 ()
2019-12-05 20:12:27,213 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Reset the 
checkpoint ID of job 7c862506012fb04c0d565bfda7cc9595 to 2.
2019-12-05 20:12:27,220 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Restoring job 
7c862506012fb04c0d565bfda7cc9595 from latest valid checkpoint: Checkpoint 1 @ 0 
for 7c862506012fb04c0d565bfda7cc9595.
2019-12-05 20:12:27,231 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - No master state 
to restore
2019-12-05 20:12:27,232 INFO  
org.apache.flink.runtime.jobmaster.JobManagerRunner           - JobManager 
runner for job General purpose test job (7c862506012fb04c0d565bfda7cc9595) was 
granted leadership with session id ---- at 
akka.tcp://flink@localhost:6123/user/jobmanager_1.
2019-12-05 20:12:27,233 INFO  org.apache.flink.runtime.jobmaster.JobMaster      
            - Starting execution of job General purpose test job 
(7c862506012fb04c0d565bfda7cc9595) 

[jira] [Closed] (FLINK-12308) Support python language in Flink Table API

2019-12-08 Thread sunjincheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng closed FLINK-12308.
---
Resolution: Fixed

> Support python language in Flink Table API
> --
>
> Key: FLINK-12308
> URL: https://issues.apache.org/jira/browse/FLINK-12308
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> At the Flink API level, we have the DataStream API / DataSet API / Table API, 
> and the Table API will become a first-class citizen. The Table API is 
> declarative and can be automatically optimized, which is mentioned in the 
> Flink mid-term roadmap by Stephan. So we first consider supporting Python at 
> the Table level to cater to the current large number of analytics users. 
> Flink's goals for the Python Table API are as follows:
>  * Users can write Flink Table API job in Python, and should mirror Java / 
> Scala Table API
>  * Users can submit Python Table API job in the following ways:
>  ** Submit a job with python script, integrate with `flink run`
>  ** Submit a job with python script by REST service
>  ** Submit a job in an interactive way, similar `scala-shell`
>  ** Local debug in IDE.
>  * Users can write custom functions(UDF, UDTF, UDAF)
>  * Pandas functions can be used in Flink Python Table API
> A more detailed description can be found in 
> [FLIP-38.|https://cwiki.apache.org/confluence/display/FLINK/FLIP-38%3A+Python+Table+API]
> For the API level, we make the following plan:
>  * The short-term:
>  We may initially go with a simple approach to map the Python Table API to 
> the Java Table API via Py4J.
>  * The long-term:
>  We may need to create a Python API that follows the same structure as 
> Flink's Table API that produces the language-independent DAG. (As Stephan 
> already mentioned on the [mailing 
> thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-38-Support-python-language-in-flink-TableAPI-td28061.html#a28096])



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10491: [FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…

2019-12-08 Thread GitBox
flinkbot commented on issue #10491: [FLINK-15093][streaming-java] 
StreamExecutionEnvironment does not cle…
URL: https://github.com/apache/flink/pull/10491#issuecomment-563083959
 
 
   
   ## CI report:
   
   * 2e942aa60f0807a6a41fd0483237d8b571fdca1f : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14258) Integrate hive to FileSystemTableFactory

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14258:
-
Fix Version/s: (was: 1.10.0)

> Integrate hive to FileSystemTableFactory
> 
>
> Key: FLINK-14258
> URL: https://issues.apache.org/jira/browse/FLINK-14258
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on a change in pull request #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
lirui-apache commented on a change in pull request #10381: [FLINK-14513][hive] 
Implement listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#discussion_r355279701
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java
 ##
 @@ -74,12 +77,11 @@
private final JobConf jobConf;
private final ObjectPath tablePath;
private final CatalogTable catalogTable;
+   // Remaining partition specs after partition pruning is performed. Null 
if pruning is not pushed down.
+   @Nullable
+   private List<Map<String, String>> remainingPartitions = null;
private List<HiveTablePartition> allHivePartitions;
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14257) Integrate csv to FileSystemTableFactory

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14257:
-
Fix Version/s: (was: 1.10.0)

> Integrate csv to FileSystemTableFactory
> ---
>
> Key: FLINK-14257
> URL: https://issues.apache.org/jira/browse/FLINK-14257
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13056) Optimize region failover performance on calculating vertices to restart

2019-12-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991173#comment-16991173
 ] 

Zhu Zhu commented on FLINK-13056:
-

Postponing this improvement to 1.11.

> Optimize region failover performance on calculating vertices to restart
> ---
>
> Key: FLINK-13056
> URL: https://issues.apache.org/jira/browse/FLINK-13056
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently some region boundary structures are calculated on each region 
> failover. This calculation can be heavy, as its complexity goes up with the 
> execution edge count.
> We tested it in a sample case with 8000 vertices and 16,000,000 edges. It 
> takes ~2.0s to calculate the vertices to restart.
> (more details in 
> [https://docs.google.com/document/d/197Ou-01h2obvxq8viKqg4FnOnsykOEKxk3r5WrVBPuA/edit?usp=sharing])
> That's why we'd propose to cache the region boundary structures to improve 
> the region failover performance.
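
A minimal sketch of the caching idea, illustrative only and not the actual 
scheduler code, assuming the boundary structure of a failover region can be 
memoized per region:
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class RegionBoundaryCache<R, B> {

	// Boundary structures computed so far, keyed by region.
	private final Map<R, B> cache = new HashMap<>();

	// Compute the boundary structure once per region and reuse it on
	// subsequent failovers of the same region.
	public B getOrCompute(R region, Function<R, B> compute) {
		return cache.computeIfAbsent(region, compute);
	}

	// Invalidate when the topology changes, e.g. after a global failover.
	public void clear() {
		cache.clear();
	}
}
{code}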



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10458: [FLINK-14815][rest]Expose network metric in IOMetricsInfo

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10458: [FLINK-14815][rest]Expose network 
metric in IOMetricsInfo
URL: https://github.com/apache/flink/pull/10458#issuecomment-562482569
 
 
   
   ## CI report:
   
   * 303f921f58ff61aa3e332c36ffdc07d5f9ceb352 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139659908)
   * 0122cc64d2c4a46ee8ba05ca67832a1589b024b1 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139667569)
   * 2b332510e6d2cd2dee836de532139241c20ce00c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139677676)
   * 30f1b29134a585e7d19c0de5e4ef000282a285b2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140158853)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13056) Optimize region failover performance on calculating vertices to restart

2019-12-08 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-13056:

Fix Version/s: (was: 1.10.0)
   1.11.0

> Optimize region failover performance on calculating vertices to restart
> ---
>
> Key: FLINK-13056
> URL: https://issues.apache.org/jira/browse/FLINK-13056
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently some region boundary structures are calculated on each region 
> failover. This calculation can be heavy, as its complexity goes up with the 
> execution edge count.
> We tested it in a sample case with 8000 vertices and 16,000,000 edges. It 
> takes ~2.0s to calculate the vertices to restart.
> (more details in 
> [https://docs.google.com/document/d/197Ou-01h2obvxq8viKqg4FnOnsykOEKxk3r5WrVBPuA/edit?usp=sharing])
> That's why we'd propose to cache the region boundary structures to improve 
> the region failover performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14266) Introduce RowCsvInputFormat to new CSV module

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14266:
-
Fix Version/s: (was: 1.10.0)

> Introduce RowCsvInputFormat to new CSV module
> -
>
> Key: FLINK-14266
> URL: https://issues.apache.org/jira/browse/FLINK-14266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Now, we have an old CSV, but that is not standard CSV support. We should 
> support the RFC-compliant CSV format for table/sql.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-13285) Check connectors runnable in blink runner

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-13285.

Resolution: Fixed

> Check connectors runnable in blink runner
> -
>
> Key: FLINK-13285
> URL: https://issues.apache.org/jira/browse/FLINK-13285
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Now that FLIP-32 is almost done, we should let connectors get rid of the 
> flink-table-planner dependency.
> And there are still some planner classes that need to be extracted to 
> table-common, just like SchemaValidator.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14267) Introduce RowCsvOutputFormat to new CSV module

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14267:
-
Fix Version/s: (was: 1.10.0)

> Introduce RowCsvOutputFormat to new CSV module
> --
>
> Key: FLINK-14267
> URL: https://issues.apache.org/jira/browse/FLINK-14267
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Now, we have an old CSV, but that is not standard CSV support. We should 
> support the RFC-compliant CSV format for table/sql.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991169#comment-16991169
 ] 

Jark Wu commented on FLINK-15081:
-

[~ousheobin], did you open a pull request? 

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14951) State TTL backend end-to-end test fail when taskManager has multiple slot

2019-12-08 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991170#comment-16991170
 ] 

Yangze Guo commented on FLINK-14951:


[~azagrebin] Hi Andrey, I notice the fix version of this issue is set to 
1.10. Do you think it's a blocker for the 1.10 release? If so, could you take a look 
at the PR? If not, we should edit this field.

> State TTL backend end-to-end test fail when taskManager has multiple slot
> -
>
> Key: FLINK-14951
> URL: https://issues.apache.org/jira/browse/FLINK-14951
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends, Tests
> Environment: centos 7
> java 8
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I run the Flink end-to-end tests, the State TTL backend tests fail. The 
> TaskManager log is shown below:
> 2019-11-26 20:22:03,837 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - TtlVerifyUpdateFunction -> Sink: PrintFailedVerifications 
> (3/3) (23f969ddb3e13fcdd3ba9823f50b0eab) switched from RUNNING to FAILED.
> java.lang.IllegalStateException: Timestamps before and after the update do 
> not match.
>   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
>   at 
> org.apache.flink.streaming.tests.TtlVerifyUpdateFunction.performUpdate(TtlVerifyUpdateFunction.java:124)
>   at 
> org.apache.flink.streaming.tests.TtlVerifyUpdateFunction.generateUpdateAndVerificationContext(TtlVerifyUpdateFunction.java:101)
>   at 
> org.apache.flink.streaming.tests.TtlVerifyUpdateFunction.flatMap(TtlVerifyUpdateFunction.java:88)
>   at 
> org.apache.flink.streaming.tests.TtlVerifyUpdateFunction.flatMap(TtlVerifyUpdateFunction.java:67)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:284)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:155)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:445)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:834)
> It is caused by MonotonicTTLTimeProvider:freeze and 
> MonotonicTTLTimeProvider:unfreezeTime being called by multiple threads when 
> taskmanager.numberOfTaskSlots is set greater than 1. We could set it to 1 in 
> test_stream_state_ttl.sh. That will fix the problem.
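
A sketch of the described mitigation, expressed via the Configuration API for 
illustration (the actual fix sets the value in test_stream_state_ttl.sh; the 
class name below is hypothetical):
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.TaskManagerOptions;

public class SingleSlotConfig {

	// Pin each TaskManager to a single slot so the TTL time provider is
	// not frozen/unfrozen concurrently from multiple task threads.
	public static Configuration create() {
		Configuration config = new Configuration();
		config.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, 1);
		return config;
	}
}
{code}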



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13438) Hive source/sink/udx should respect the conversion class of DataType

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-13438:
-
Parent: (was: FLINK-13285)
Issue Type: Bug  (was: Sub-task)

> Hive source/sink/udx should respect the conversion class of DataType
> 
>
> Key: FLINK-13438
> URL: https://issues.apache.org/jira/browse/FLINK-13438
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
> Attachments: 0001-hive.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Similar to the JDBC connectors, the Hive connectors communicate with the 
> Flink framework using TableSchema, which contains DataType. As the time data 
> read from and written to the Hive connectors must be java.sql.* types and the 
> default conversion classes of our time data types are java.time.*, we have to 
> fix the Hive connector with DataTypes.DATE/TIME/TIMESTAMP support.
> But currently, when reading tables from Hive, the table schema is created 
> using Hive's schema, so the time types in the created schema will be SQL time 
> types, not local time types. If a user specifies a local time type in the table 
> schema when creating a table in Hive, he will get a different schema when 
> reading it out. This is undesired.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement 
listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#issuecomment-560368141
 
 
   
   ## CI report:
   
   * ebaf14134fb520e1e66c7788139fb619e8fb2be0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138957663)
   * 9df2a0f1ad0d34ef3ae9a5068c3befd26cbfa41b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139847920)
   * cccf5a0caf535a1cc1d2e85a97c581f8b5e95774 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/139855927)
   * d26505044d18e5bed32dc2cbe951c6288fd70d95 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140091581)
   * 420eeda2acb74315cf284ae51f80e3548eb10d7d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130215)
   * 7bd7a7411c7de50717f29230d64775338671c037 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164791)
   * 821e68e39f6430f08b95b65adc14d8b23c72d568 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #10491: [FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…

2019-12-08 Thread GitBox
KurtYoung commented on a change in pull request #10491: 
[FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…
URL: https://github.com/apache/flink/pull/10491#discussion_r355277953
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/local/LocalExecutorITCase.java
 ##
 @@ -744,6 +744,30 @@ public void testExecuteInsertInto() throws Exception {
fail("Unexpected job status.");
}
}
+   // Execute the same update again.
+   final ProgramTargetDescriptor targetDescriptor2 = 
executor.executeUpdate(
 
 Review comment:
   Looks like these are duplicated from the previous code; we should avoid this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10300: [FLINK-14926] [state backends] Make sure no resource leak of RocksObject

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10300: [FLINK-14926] [state backends] Make 
sure no resource leak of RocksObject
URL: https://github.com/apache/flink/pull/10300#issuecomment-557891612
 
 
   
   ## CI report:
   
   * dae41b86e4710d0c83d3f764e6196513db26a2c8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137940914)
   * 95bb102bbc2cb241ab66efbf0870f2a9d9e515d8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137975257)
   * 5e8237256c8b300916f1b68347890c0249dd7367 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139852604)
   * 905196bf359b5eccf86b35df16250826bee0c0f0 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] leonardBang commented on a change in pull request #10468: [FLINK-14649][table sql / api] Flatten all the connector properties keys to make it easy to configure in DDL

2019-12-08 Thread GitBox
leonardBang commented on a change in pull request #10468: [FLINK-14649][table 
sql / api] Flatten all the connector properties keys to make it easy to 
configure in DDL
URL: https://github.com/apache/flink/pull/10468#discussion_r355276918
 
 

 ##
 File path: flink-python/pyflink/table/tests/test_descriptor.py
 ##
 @@ -115,10 +112,7 @@ def test_start_from_specific_offsets(self):
 
 properties = kafka.to_properties()
 expected = {'connector.startup-mode': 'specific-offsets',
-'connector.specific-offsets.0.partition': '1',
-'connector.specific-offsets.0.offset': '220',
-'connector.specific-offsets.1.partition': '3',
-'connector.specific-offsets.1.offset': '400',
+'connector.specific-offsets.0.': 
'partition:1,offset:220;partition:3,offset:400',
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] leonardBang commented on a change in pull request #10468: [FLINK-14649][table sql / api] Flatten all the connector properties keys to make it easy to configure in DDL

2019-12-08 Thread GitBox
leonardBang commented on a change in pull request #10468: [FLINK-14649][table 
sql / api] Flatten all the connector properties keys to make it easy to 
configure in DDL
URL: https://github.com/apache/flink/pull/10468#discussion_r355276947
 
 

 ##
 File path: 
flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/table/descriptors/ElasticsearchValidator.java
 ##
 @@ -155,18 +155,22 @@ private void 
validateConnectionProperties(DescriptorProperties properties) {
final String hostsStr = 
descriptorProperties.getString(CONNECTOR_HOSTS);
 
final String[] hosts = hostsStr.split(";");
+   final String validationExceptionMessage = "Properties '" + 
CONNECTOR_HOSTS + "' format should " +
+   "follow the format 'http://host_name:port', but is '" + 
hosts + "'.";
 
 Review comment:
   ok


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-13283) JDBC source/sink should respect the conversion class of DataType

2019-12-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee resolved FLINK-13283.
--
Resolution: Fixed

> JDBC source/sink should respect the conversion class of DataType
> 
>
> Key: FLINK-13283
> URL: https://issues.apache.org/jira/browse/FLINK-13283
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.9.0
>Reporter: LakeShen
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Hi, when I use the Flink 1.9 JDBCTableSource and create a TableSchema like 
> this:
> final TableSchema schema = TableSchema.builder()
>   .field("id", DataTypes.INT())
>   .field("create", DataTypes.DATE())
>   .field("update", DataTypes.DATE())
>   .field("name", DataTypes.STRING())
>   .field("age", DataTypes.INT())
>   .field("address", DataTypes.STRING())
>   .field("birthday",DataTypes.DATE())
>   .field("likethings", DataTypes.STRING())
>   .build();
> I use JDBCTableSource.builder() to create the JDBCTableSource. When I run the 
> program, there is an exception:
> {color:red}java.lang.IllegalArgumentException: Unsupported type: 
> LocalDate{color}
> Looking at the source code, I found that in LegacyTypeInfoDataTypeConverter, 
> DateType converts to Types.LOCAL_DATE, but in the JDBCTypeUtil class, the 
> HashMap TYPE_MAPPING doesn't have the key Types.LOCAL_DATE, so the exception 
> is thrown.
> Does the JDBC dim table support time data, like Date? It may be a bug in the 
> JDBCTableSource join.
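
A hedged sketch of a user-side workaround, assuming the Flink 1.9+ Table API: 
bridge the DATE fields to java.sql.Date so the JDBC connector receives a 
conversion class it supports instead of the default java.time.LocalDate. The 
class and field names below are illustrative only:
{code:java}
import java.sql.Date;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;

public class JdbcSchemaExample {

	public static TableSchema schema() {
		return TableSchema.builder()
			.field("id", DataTypes.INT())
			// Bridged so the JDBC type mapping sees java.sql.Date.
			.field("birthday", DataTypes.DATE().bridgedTo(Date.class))
			.build();
	}
}
{code}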



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13283) JDBC source/sink should respect the conversion class of DataType

2019-12-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991166#comment-16991166
 ] 

Jingsong Lee commented on FLINK-13283:
--

Fixed in https://issues.apache.org/jira/browse/FLINK-14645

> JDBC source/sink should respect the conversion class of DataType
> 
>
> Key: FLINK-13283
> URL: https://issues.apache.org/jira/browse/FLINK-13283
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.9.0
>Reporter: LakeShen
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Hi, when I use the Flink 1.9 JDBCTableSource and create a TableSchema like 
> this:
> final TableSchema schema = TableSchema.builder()
>   .field("id", DataTypes.INT())
>   .field("create", DataTypes.DATE())
>   .field("update", DataTypes.DATE())
>   .field("name", DataTypes.STRING())
>   .field("age", DataTypes.INT())
>   .field("address", DataTypes.STRING())
>   .field("birthday",DataTypes.DATE())
>   .field("likethings", DataTypes.STRING())
>   .build();
> I use  JDBCTableSource.builder() to create JDBCTableSource, I run the 
> program, and there is a exception :
> {color:red}java.lang.IllegalArgumentException: Unsupported type: 
> LocalDate{color}
> I saw the src code , I find that in LegacyTypeInfoDataTypeConverter , 
> DateType convert to Types.LOCAL_DATE,but in JDBCTypeUtil class, the HashMap  
> TYPE_MAPPING  doesn't have the key Types.LOCAL_DATE,so that throw the 
> exception.
> Does the JDBC dim table support the time data,Like Date? May it is bug for 
> JDBCTableSource join.





[jira] [Commented] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode

2019-12-08 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991163#comment-16991163
 ] 

Yu Li commented on FLINK-13993:
---

[~maguowei] Since all sub-tasks have been completed, I guess we could close 
this JIRA as resolved?

[~tison] IMHO we could open discussion thread in ML and follow up JIRAs for 
further implementation (if any). What do you think?

Thanks.

> Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
> -
>
> Key: FLINK-13993
> URL: https://issues.apache.org/jira/browse/FLINK-13993
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>   Original Estimate: 30h
>  Time Spent: 20m
>  Remaining Estimate: 29h 40m
>
> Currently, Flink has the FlinkUserCodeClassLoader, which is used to load 
> user classes. However, in per-job mode both user classes and system classes 
> are loaded by the system classloader, which introduces some conflicts.
> This document[1] proposes making the FlinkUserCodeClassLoader load user 
> classes in per-job mode as well (discussed with Till[2]).
>  
> [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7]
> [2] 
> [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit]
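
For background, the child-first loading that FlinkUserCodeClassLoaders
provides can be sketched in a few lines of plain Java; the class below is a
simplified illustration of the pattern, not Flink's actual implementation
(the parent-first prefix list in particular is made up):

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

// Simplified child-first classloader: look in the user jars before
// delegating to the parent, except for framework/JDK packages.
public class ChildFirstClassLoaderSketch extends URLClassLoader {
	private static final String[] ALWAYS_PARENT_FIRST = {"java.", "org.apache.flink."};

	public ChildFirstClassLoaderSketch(URL[] userJars, ClassLoader parent) {
		super(userJars, parent);
	}

	@Override
	protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
		synchronized (getClassLoadingLock(name)) {
			Class<?> c = findLoadedClass(name);
			if (c == null) {
				boolean parentFirst = false;
				for (String prefix : ALWAYS_PARENT_FIRST) {
					parentFirst |= name.startsWith(prefix);
				}
				if (parentFirst) {
					c = super.loadClass(name, false);
				} else {
					try {
						c = findClass(name); // child-first: user jars win
					} catch (ClassNotFoundException e) {
						c = super.loadClass(name, false); // fall back to parent
					}
				}
			}
			if (resolve) {
				resolveClass(c);
			}
			return c;
		}
	}
}
{code}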





[GitHub] [flink] JingsongLi commented on issue #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
JingsongLi commented on issue #10381: [FLINK-14513][hive] Implement 
listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#issuecomment-563077831
 
 
   LGTM, I only left a minor comment.




[GitHub] [flink] JingsongLi commented on a change in pull request #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
JingsongLi commented on a change in pull request #10381: [FLINK-14513][hive] 
Implement listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#discussion_r355274831
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java
 ##
 @@ -74,12 +77,11 @@
private final JobConf jobConf;
private final ObjectPath tablePath;
private final CatalogTable catalogTable;
+	// Remaining partition specs after partition pruning is performed. Null if pruning is not pushed down.
+	@Nullable
+	private List<Map<String, String>> remainingPartitions = null;
	private List<HiveTablePartition> allHivePartitions;
 
 Review comment:
   `allHivePartitions` can be a local field.




[GitHub] [flink] lirui-apache commented on a change in pull request #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
lirui-apache commented on a change in pull request #10381: [FLINK-14513][hive] 
Implement listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#discussion_r355273794
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java
 ##
 @@ -75,10 +75,10 @@
private final JobConf jobConf;
private final ObjectPath tablePath;
private final CatalogTable catalogTable;
+	// Remaining partition specs after partition pruning is performed. Null if pruning is not pushed down.
+	private List<Map<String, String>> remainingPartitions = null;
 
 Review comment:
   Done




[GitHub] [flink] flinkbot edited a comment on issue #10462: [FLINK-15031][runtime] Calculate required shuffle memory before allocating slots if resources are specified

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10462: [FLINK-15031][runtime] Calculate 
required shuffle memory before allocating slots if resources are specified
URL: https://github.com/apache/flink/pull/10462#issuecomment-562542464
 
 
   
   ## CI report:
   
   * 708583f6acbff12da90bc28269d924433c446e32 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/139686969)
   * 2f4aee047ff40997c3b940fd1b9e7c7145ed9903 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/139869261)
   * ee63a4241ee60e632f48fc03f537b985854e430d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140157125)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10436: [FLINK-14920] [flink-end-to-end-perf-tests] Set up environment to run performance e2e tests

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10436: [FLINK-14920] 
[flink-end-to-end-perf-tests] Set up environment to run performance e2e tests
URL: https://github.com/apache/flink/pull/10436#issuecomment-562103759
 
 
   
   ## CI report:
   
   * 3441142c77315169ce7a3b48d4d59e710c1016c2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139518645)
   * 68bc1d0dac16c85a35644a3444777dde5b38257c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139548818)
   * 60710d385f3fc62c641d36da4029813d779caf6d : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139555833)
   * f4d638d6b1e578bc631df874eb67efdeb4673601 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139575528)
   * 841bea9216e1c07d2fce9093d720f64f6d6c889f : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139812364)
   * 87a8eb32399114c449f75a94d0243afed93adda5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139852611)
   * ed6a1e4c6e0fea5c0432c0dde6312c7b73ae447c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140126741)
   * c1a1151bd3eb16de5851105700fd48928fe15e7b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140166474)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Updated] (FLINK-15136) Update the Chinese version of "Working with state"

2019-12-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15136:

Issue Type: Improvement  (was: Bug)

> Update the Chinese version of "Working with state"
> --
>
> Key: FLINK-15136
> URL: https://issues.apache.org/jira/browse/FLINK-15136
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: Congxian Qiu(klion26)
>Assignee: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Background cleanup of state with TTL was enabled by default in FLINK-14898, 
> so the Chinese version of the page should be updated to reflect this.
>  
> documentation location : docs/dev/stream/state/state.zh.md
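
As a quick illustration of what the updated page needs to convey: with
FLINK-14898, a plain TTL configuration now gets background cleanup without any
explicit cleanup call. A minimal sketch against the public StateTtlConfig API
(names and values are illustrative):

{code:java}
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlBackgroundCleanupSketch {
	public static void main(String[] args) {
		// Since Flink 1.10, background cleanup is on by default;
		// StateTtlConfig.Builder#disableCleanupInBackground() opts out.
		StateTtlConfig ttlConfig = StateTtlConfig
			.newBuilder(Time.days(7))
			.setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
			.build();

		ValueStateDescriptor<String> descriptor =
			new ValueStateDescriptor<>("last-value", String.class);
		descriptor.enableTimeToLive(ttlConfig);
	}
}
{code}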





[jira] [Updated] (FLINK-15136) Update the Chinese version of "Working with state"

2019-12-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15136:

Parent: (was: FLINK-11526)
Issue Type: Bug  (was: Sub-task)

> Update the Chinese version of "Working with state"
> --
>
> Key: FLINK-15136
> URL: https://issues.apache.org/jira/browse/FLINK-15136
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: Congxian Qiu(klion26)
>Assignee: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Background cleanup of state with TTL was enabled by default in FLINK-14898, 
> so the Chinese version of the page should be updated to reflect this.
>  
> documentation location : docs/dev/stream/state/state.zh.md





[GitHub] [flink] flinkbot commented on issue #10491: [FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…

2019-12-08 Thread GitBox
flinkbot commented on issue #10491: [FLINK-15093][streaming-java] 
StreamExecutionEnvironment does not cle…
URL: https://github.com/apache/flink/pull/10491#issuecomment-563073936
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2e942aa60f0807a6a41fd0483237d8b571fdca1f (Mon Dec 09 
05:46:55 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15093) StreamExecutionEnvironment does not clear transformations when executing

2019-12-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15093:
---
Labels: pull-request-available  (was: )

> StreamExecutionEnvironment does not clear transformations when executing
> 
>
> Key: FLINK-15093
> URL: https://issues.apache.org/jira/browse/FLINK-15093
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.10.0
>Reporter: Jeff Zhang
>Assignee: Danny Chen
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
> Attachments: screenshot-1.png
>
>
> Use the following code in scala shell to reproduce this issue.
> {code}
> val data = senv.fromElements("hello world", "hello flink", "hello hadoop")
> data.flatMap(line => line.split("\\s")).
> map(w => (w, 1)).
> keyBy(0).
> sum(1).
> print
> senv.execute()
> data.flatMap(line => line.split("\\s")).
> map(w => (w, 1)).
> keyBy(0).
> sum(1).
> print
> senv.execute()
> {code}





[GitHub] [flink] danny0405 opened a new pull request #10491: [FLINK-15093][streaming-java] StreamExecutionEnvironment does not cle…

2019-12-08 Thread GitBox
danny0405 opened a new pull request #10491: [FLINK-15093][streaming-java] 
StreamExecutionEnvironment does not cle…
URL: https://github.com/apache/flink/pull/10491
 
 
   …ar transformations when executing
   
   ## What is the purpose of the change
   
   Add internal interface `StreamExecutionEnvironment#getStreamGraph(String, boolean)` with the
   ability to clear existing transformations.
   
   
   ## Brief change log
   
   * Add internal interface StreamExecutionEnvironment#getStreamGraph(String, boolean) with the
   ability to clear existing transformations;
   * Add tests for this new interface;
   * Add ITCase in SQL-CLI LocalExecutorITCase because FLINK-15052 can also
   be fixed by this patch.
   
   
   ## Verifying this change
   
   See tests in `StreamExecutionEnvironmentTest` and `LocalExecutorITCase`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not documented
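
   A hedged sketch of the behavior this patch aims for, assuming the internal
   getStreamGraph(String, boolean) hook described above clears the buffered
   transformations on execute (class and job names are illustrative):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ClearTransformationsSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		env.fromElements("hello world", "hello flink").print();
		// With the fix, this execute() consumes and clears the transformations.
		env.execute("job-1");

		// Without clearing, this second job would re-run the first pipeline too.
		env.fromElements("hello hadoop").print();
		env.execute("job-2");
	}
}
{code}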
   




[GitHub] [flink] XuQianJin-Stars commented on issue #8746: [hotfix][FLINK-11120][table]fix the bug of timestampadd handles time

2019-12-08 Thread GitBox
XuQianJin-Stars commented on issue #8746: [hotfix][FLINK-11120][table]fix the 
bug of timestampadd handles time
URL: https://github.com/apache/flink/pull/8746#issuecomment-563072805
 
 
   hi @walterddr Thank you very much for this PR.




[jira] [Updated] (FLINK-15139) misc end to end test failed on 'SQL Client end-to-end test (Old planner)'

2019-12-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15139:

Affects Version/s: (was: 2.0.0)
   1.10.0

> misc end to end test failed on 'SQL Client end-to-end test (Old planner)'
> -
>
> Key: FLINK-15139
> URL: https://issues.apache.org/jira/browse/FLINK-15139
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: wangxiyuan
>Priority: Major
> Fix For: 2.0.0
>
>
> The test Running 'SQL Client end-to-end test (Old planner)' in misc e2e test 
> failed
> log:
> {code:java}
> (a94d1da25baf2a5586a296d9e933743c) switched from RUNNING to FAILED.
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load 
> user class: 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
> ClassLoader info: URL ClassLoader:
> Class not resolvable through given classloader.
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:266)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:430)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:353)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:419)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:353)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:144)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:432)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:460)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at 
> org.apache.flink.util.ChildFirstClassLoader.loadClass(ChildFirstClassLoader.java:60)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:78)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:562)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:550)
>   at 
> org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:511)
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:254)
>   ... 10 more
> {code}
> link: [https://travis-ci.org/apache/flink/jobs/622261358]
>  





[jira] [Updated] (FLINK-15139) misc end to end test failed on 'SQL Client end-to-end test (Old planner)'

2019-12-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15139:

Fix Version/s: (was: 2.0.0)
   1.10.0

> misc end to end test failed on 'SQL Client end-to-end test (Old planner)'
> -
>
> Key: FLINK-15139
> URL: https://issues.apache.org/jira/browse/FLINK-15139
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: wangxiyuan
>Priority: Major
> Fix For: 1.10.0
>
>
> The test Running 'SQL Client end-to-end test (Old planner)' in misc e2e test 
> failed
> log:
> {code:java}
> (a94d1da25baf2a5586a296d9e933743c) switched from RUNNING to FAILED.
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load 
> user class: 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
> ClassLoader info: URL ClassLoader:
> Class not resolvable through given classloader.
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:266)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:430)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:353)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:419)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:353)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:144)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:432)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:460)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at 
> org.apache.flink.util.ChildFirstClassLoader.loadClass(ChildFirstClassLoader.java:60)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:78)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:562)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:550)
>   at 
> org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:511)
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:254)
>   ... 10 more
> {code}
> link: [https://travis-ci.org/apache/flink/jobs/622261358]
>  





[GitHub] [flink] flinkbot edited a comment on issue #10479: [FLINK-14992][client] Add job listener to execution environments

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10479: [FLINK-14992][client] Add job 
listener to execution environments
URL: https://github.com/apache/flink/pull/10479#issuecomment-562851491
 
 
   
   ## CI report:
   
   * a5903c83403e4628cdb00e85ab5860be9ebaadc4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139860189)
   * 1c7ec6122059d442ae1fded897689e2aae16380b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164833)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10468: [FLINK-14649][table sql / api] Flatten all the connector properties keys to make it easy to configure in DDL

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10468: [FLINK-14649][table sql / api] 
Flatten all the connector properties keys to make it easy to configure in DDL
URL: https://github.com/apache/flink/pull/10468#issuecomment-562648066
 
 
   
   ## CI report:
   
   * 9802f136dd38c6ba22752a7761e644b7bd05e99d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139785260)
   * 96db6fee76e36a40e62140bbc55d71f3ef7e8e41 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139853358)
   * 58217bef8f03db408b29dda2916ab19b2ac57ceb : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139862375)
   * 3e4465a4b29b2635dca4b482661aa384e4528ceb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139865498)
   * 51be8b2eb123d3e174a54be23d0426c3461bc647 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139940483)
   * 0a28b0da54c3edc944de40cc22d9b2bb9c28856d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140044208)
   * bc77e779da82c359eede72fa8701d7295143e56a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140126749)
   * 08cd3ae69e01907f72c16080e34fed4a28c5745d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130222)
   * 9c768bc00705629df52d5cc6074c58bc3e9a9c91 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140155749)
   * 539e0ddf21018bdaf5c87aba91070ea7be9359a7 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164814)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10436: [FLINK-14920] [flink-end-to-end-perf-tests] Set up environment to run performance e2e tests

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10436: [FLINK-14920] 
[flink-end-to-end-perf-tests] Set up environment to run performance e2e tests
URL: https://github.com/apache/flink/pull/10436#issuecomment-562103759
 
 
   
   ## CI report:
   
   * 3441142c77315169ce7a3b48d4d59e710c1016c2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139518645)
   * 68bc1d0dac16c85a35644a3444777dde5b38257c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139548818)
   * 60710d385f3fc62c641d36da4029813d779caf6d : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139555833)
   * f4d638d6b1e578bc631df874eb67efdeb4673601 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139575528)
   * 841bea9216e1c07d2fce9093d720f64f6d6c889f : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139812364)
   * 87a8eb32399114c449f75a94d0243afed93adda5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139852611)
   * ed6a1e4c6e0fea5c0432c0dde6312c7b73ae447c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140126741)
   * c1a1151bd3eb16de5851105700fd48928fe15e7b : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9404: [FLINK-13667][ml] Add the utility class for the Table

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #9404: [FLINK-13667][ml] Add the utility 
class for the Table
URL: https://github.com/apache/flink/pull/9404#issuecomment-519872127
 
 
   
   ## CI report:
   
   * 4a726770daa19b1a587c5f8d9221a95e08387592 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122573104)
   * cd4cfbe81b4d511ec985bc4ad55a5dad82722863 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/124368074)
   * 6efbedf230f6fdeec96b9802d296629269a230ab : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/125253936)
   * 51f02e22e7372fa20ab2dd679aaa1099b37d0b1e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/125893622)
   * 7d48bb2f3277398b2924f6a21ec31d8ea5570f38 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131786275)
   * 32df598f496a30f516ece1189633899b6d7dc201 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139147586)
   * 4bbe55250ac0e748a19313f654e811eb730cd627 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139507969)
   * 3b28c422b180379b6a2249c393196143a2a2de37 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164804)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Updated] (FLINK-15140) Shuffle data compression does not work with BroadcastRecordWriter.

2019-12-08 Thread Yingjie Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yingjie Cao updated FLINK-15140:

Description: 
I tested the newest code of the master branch last weekend with more test 
cases. Unfortunately, several problems were encountered, including a 
compression bug.

For pipelined mode, the compressor copies the compressed data back into the 
input buffer; however, when BroadcastRecordWriter is used the underlying 
buffer is shared, so the compressed data cannot be copied back into it. For 
blocking mode, we wrongly recycle the buffer when it is not compressed, and 
this problem is also triggered when BroadcastRecordWriter is used.

To fix the problem, for blocking shuffle the reference counter should be 
maintained correctly; for pipelined shuffle, the simplest fix may be to 
disable compression when the underlying buffer is shared. I will open a PR to 
fix the problem.

  was:
I tested the newest code of master branch last weekend with more test cases. 
Unfortunately, several problems were encountered, including a bug of 
compression.

When BroadcastRecordWriter is used, for pipelined mode, because the compressor 
copies the data back to the input buffer, however, the underlying buffer is 
shared when BroadcastRecordWriter is used. So we can not copy the compressed 
buffer back to the input buffer if the underlying buffer is shared. For 
blocking mode, we wrongly recycle the buffer when buffer is not compressed, and 
the problem is also triggered when BroadcastRecordWriter is used.

To fix the problem, for blocking shuffle, the reference counter should be 
maintained correctly, for pipelined shuffle, the simplest way maybe disable 
compression is the underlying buffer is shared. I will open a PR to fix the 
problem.


> Shuffle data compression does not work with BroadcastRecordWriter.
> --
>
> Key: FLINK-15140
> URL: https://issues.apache.org/jira/browse/FLINK-15140
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Yingjie Cao
>Priority: Blocker
> Fix For: 1.10.0
>
>
> I tested the newest code of the master branch last weekend with more test 
> cases. Unfortunately, several problems were encountered, including a 
> compression bug.
> For pipelined mode, the compressor copies the compressed data back into the 
> input buffer; however, when BroadcastRecordWriter is used the underlying 
> buffer is shared, so the compressed data cannot be copied back into it. For 
> blocking mode, we wrongly recycle the buffer when it is not compressed, and 
> this problem is also triggered when BroadcastRecordWriter is used.
> To fix the problem, for blocking shuffle the reference counter should be 
> maintained correctly; for pipelined shuffle, the simplest fix may be to 
> disable compression when the underlying buffer is shared. I will open a PR 
> to fix the problem.





[GitHub] [flink] flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement 
listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#issuecomment-560368141
 
 
   
   ## CI report:
   
   * ebaf14134fb520e1e66c7788139fb619e8fb2be0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138957663)
   * 9df2a0f1ad0d34ef3ae9a5068c3befd26cbfa41b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139847920)
   * cccf5a0caf535a1cc1d2e85a97c581f8b5e95774 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/139855927)
   * d26505044d18e5bed32dc2cbe951c6288fd70d95 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140091581)
   * 420eeda2acb74315cf284ae51f80e3548eb10d7d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130215)
   * 7bd7a7411c7de50717f29230d64775338671c037 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140164791)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Created] (FLINK-15140) Shuffle data compression does not work with BroadcastRecordWriter.

2019-12-08 Thread Yingjie Cao (Jira)
Yingjie Cao created FLINK-15140:
---

 Summary: Shuffle data compression does not work with 
BroadcastRecordWriter.
 Key: FLINK-15140
 URL: https://issues.apache.org/jira/browse/FLINK-15140
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.10.0
Reporter: Yingjie Cao
 Fix For: 1.10.0


I tested the newest code of the master branch last weekend with more test 
cases. Unfortunately, several problems were encountered, including a 
compression bug.

For pipelined mode, the compressor copies the compressed data back into the 
input buffer; however, when BroadcastRecordWriter is used the underlying 
buffer is shared, so the compressed data cannot be copied back into it. For 
blocking mode, we wrongly recycle the buffer when it is not compressed, and 
this problem is also triggered when BroadcastRecordWriter is used.

To fix the problem, for blocking shuffle the reference counter should be 
maintained correctly; for pipelined shuffle, the simplest fix may be to 
disable compression when the underlying buffer is shared. I will open a PR to 
fix the problem.





[jira] [Assigned] (FLINK-15131) Add Source API classes

2019-12-08 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned FLINK-15131:


Assignee: Jiangjie Qin

> Add Source API classes
> --
>
> Key: FLINK-15131
> URL: https://issues.apache.org/jira/browse/FLINK-15131
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add all the top tier classes defined in FLIP-27.
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface#FLIP-27:RefactorSourceInterface-Toplevelpublicinterfaces]





[GitHub] [flink] KurtYoung commented on a change in pull request #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
KurtYoung commented on a change in pull request #10381: [FLINK-14513][hive] 
Implement listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#discussion_r355264664
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java
 ##
 @@ -75,10 +75,10 @@
private final JobConf jobConf;
private final ObjectPath tablePath;
private final CatalogTable catalogTable;
+	// Remaining partition specs after partition pruning is performed. Null if pruning is not pushed down.
+	private List<Map<String, String>> remainingPartitions = null;
 
 Review comment:
   add @Nullable annotation in this case




[GitHub] [flink] flinkbot edited a comment on issue #10468: [FLINK-14649][table sql / api] Flatten all the connector properties keys to make it easy to configure in DDL

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10468: [FLINK-14649][table sql / api] 
Flatten all the connector properties keys to make it easy to configure in DDL
URL: https://github.com/apache/flink/pull/10468#issuecomment-562648066
 
 
   
   ## CI report:
   
   * 9802f136dd38c6ba22752a7761e644b7bd05e99d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139785260)
   * 96db6fee76e36a40e62140bbc55d71f3ef7e8e41 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139853358)
   * 58217bef8f03db408b29dda2916ab19b2ac57ceb : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139862375)
   * 3e4465a4b29b2635dca4b482661aa384e4528ceb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139865498)
   * 51be8b2eb123d3e174a54be23d0426c3461bc647 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139940483)
   * 0a28b0da54c3edc944de40cc22d9b2bb9c28856d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140044208)
   * bc77e779da82c359eede72fa8701d7295143e56a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140126749)
   * 08cd3ae69e01907f72c16080e34fed4a28c5745d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130222)
   * 9c768bc00705629df52d5cc6074c58bc3e9a9c91 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/140155749)
   * 539e0ddf21018bdaf5c87aba91070ea7be9359a7 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10479: [FLINK-14992][client] Add job listener to execution environments

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10479: [FLINK-14992][client] Add job 
listener to execution environments
URL: https://github.com/apache/flink/pull/10479#issuecomment-562851491
 
 
   
   ## CI report:
   
   * a5903c83403e4628cdb00e85ab5860be9ebaadc4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139860189)
   * 1c7ec6122059d442ae1fded897689e2aae16380b : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9404: [FLINK-13667][ml] Add the utility class for the Table

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #9404: [FLINK-13667][ml] Add the utility 
class for the Table
URL: https://github.com/apache/flink/pull/9404#issuecomment-519872127
 
 
   
   ## CI report:
   
   * 4a726770daa19b1a587c5f8d9221a95e08387592 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122573104)
   * cd4cfbe81b4d511ec985bc4ad55a5dad82722863 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/124368074)
   * 6efbedf230f6fdeec96b9802d296629269a230ab : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/125253936)
   * 51f02e22e7372fa20ab2dd679aaa1099b37d0b1e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/125893622)
   * 7d48bb2f3277398b2924f6a21ec31d8ea5570f38 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131786275)
   * 32df598f496a30f516ece1189633899b6d7dc201 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139147586)
   * 4bbe55250ac0e748a19313f654e811eb730cd627 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/139507969)
   * 3b28c422b180379b6a2249c393196143a2a2de37 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement listPartitionsByFilter to HiveCatalog

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #10381: [FLINK-14513][hive] Implement 
listPartitionsByFilter to HiveCatalog
URL: https://github.com/apache/flink/pull/10381#issuecomment-560368141
 
 
   
   ## CI report:
   
   * ebaf14134fb520e1e66c7788139fb619e8fb2be0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138957663)
   * 9df2a0f1ad0d34ef3ae9a5068c3befd26cbfa41b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/139847920)
   * cccf5a0caf535a1cc1d2e85a97c581f8b5e95774 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/139855927)
   * d26505044d18e5bed32dc2cbe951c6288fd70d95 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/140091581)
   * 420eeda2acb74315cf284ae51f80e3548eb10d7d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140130215)
   * 7bd7a7411c7de50717f29230d64775338671c037 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #7314: [FLINK-11135][YARN] Refactor Hadoop config loading in HadoopUtils

2019-12-08 Thread GitBox
flinkbot edited a comment on issue #7314: [FLINK-11135][YARN] Refactor Hadoop 
config loading in HadoopUtils
URL: https://github.com/apache/flink/pull/7314#issuecomment-563040815
 
 
   
   ## CI report:
   
   * 2b4f99077d6d27409c625bf6ee303e527153d0e3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/140155744)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] xuyang1706 commented on issue #9404: [FLINK-13667][ml] Add the utility class for the Table

2019-12-08 Thread GitBox
xuyang1706 commented on issue #9404: [FLINK-13667][ml] Add the utility class 
for the Table
URL: https://github.com/apache/flink/pull/9404#issuecomment-563058981
 
 
   > Thanks for the contribution @xuyang1706 . I think overall the diff looks 
good to me. Just have some questions regarding the usage scenario of the 
utility class. please kindly take a look --Rong
   
   Thanks for your comments @walterddr. I have updated the JavaDoc to detail 
the usage scenario. Thanks, -Xu 



