[jira] [Updated] (FLINK-14289) Remove Optional fields from RecordWriter relevant classes

2019-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14289:
---
Labels: pull-request-available  (was: )

> Remove Optional fields from RecordWriter relevant classes
> -
>
> Key: FLINK-14289
> URL: https://issues.apache.org/jira/browse/FLINK-14289
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>  Labels: pull-request-available
>
> Based on the code style guide for [Java Optional|https://flink.apache.org/contributing/code-style-and-quality-java.html#java-optional], Optional should not be used for class fields.
> So we remove the Optional usages from RecordWriter, BroadcastRecordWriter and
> ChannelSelectorRecordWriter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW opened a new pull request #9818: [FLINK-14289][runtime] Remove Optional fields from RecordWriter relevant classes

2019-09-29 Thread GitBox
zhijiangW opened a new pull request #9818:  [FLINK-14289][runtime] Remove 
Optional fields from RecordWriter relevant classes
URL: https://github.com/apache/flink/pull/9818
 
 
   ## What is the purpose of the change
   
   *Based on the code style guide for Java Optional, it should not be used for class fields. So we remove the Optional usages from `RecordWriter`, `BroadcastRecordWriter` and `ChannelSelectorRecordWriter`.*
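   To make the intended refactoring pattern concrete, here is a minimal, hedged sketch (the class and the `OutputFlusher` collaborator are illustrative placeholders, not the actual Flink code): an `Optional`-typed field is replaced by a `@Nullable` field with explicit null checks, in line with the code style guide.

```java
import javax.annotation.Nullable;

class RecordWriterSketch {

    // Before: Optional-typed class field (discouraged by the style guide).
    // private Optional<OutputFlusher> outputFlusher = Optional.empty();

    // After: plain nullable field; absence is expressed with null and checked explicitly.
    @Nullable
    private OutputFlusher outputFlusher;

    void close() {
        if (outputFlusher != null) {
            outputFlusher.terminate();
        }
    }

    // Hypothetical collaborator used only for this sketch.
    interface OutputFlusher {
        void terminate();
    }
}
```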
   
   ## Brief change log
   
 - *Refactor the usages of Optional in `RecordWriter` relevant classes*
 - *Add missing generics in `RecordWriterBuilder` for avoiding unchecked 
warning*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark in StreamSource may arrive the downstream operator early

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark 
in StreamSource may arrive the downstream operator early
URL: https://github.com/apache/flink/pull/9804#issuecomment-536287116
 
 
   
   ## CI report:
   
   * cb5b4da4d8b5d877e296dd84095a328208263c15 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129588487)
   * 2906e837e6a7f70dda5bc112658b51157801150e : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/129624664)
   * 9e84fcb1dc2b92d87010bbc8ba92fd9a508a27d5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129632305)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark in StreamSource may arrive the downstream operator early

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark 
in StreamSource may arrive the downstream operator early
URL: https://github.com/apache/flink/pull/9804#issuecomment-536287116
 
 
   
   ## CI report:
   
   * cb5b4da4d8b5d877e296dd84095a328208263c15 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129588487)
   * 2906e837e6a7f70dda5bc112658b51157801150e : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129624664)
   * 9e84fcb1dc2b92d87010bbc8ba92fd9a508a27d5 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9817: [FLINK-14178] Downgrade maven-shade-plugin to 3.1.0

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9817: [FLINK-14178] Downgrade 
maven-shade-plugin to 3.1.0
URL: https://github.com/apache/flink/pull/9817#issuecomment-536390486
 
 
   
   ## CI report:
   
   * 2072428596808c2e32690b4cf72f022dd905e2d6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129626955)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunhaibotb commented on issue #9804: [FLINK-14239] Fix the max watermark in StreamSource may arrive the downstream operator early

2019-09-29 Thread GitBox
sunhaibotb commented on issue #9804: [FLINK-14239] Fix the max watermark in 
StreamSource may arrive the downstream operator early
URL: https://github.com/apache/flink/pull/9804#issuecomment-536400700
 
 
   @flinkbot run travis


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9817: [FLINK-14178] Downgrade maven-shade-plugin to 3.1.0

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9817: [FLINK-14178] Downgrade 
maven-shade-plugin to 3.1.0
URL: https://github.com/apache/flink/pull/9817#issuecomment-536390486
 
 
   
   ## CI report:
   
   * 2072428596808c2e32690b4cf72f022dd905e2d6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129626955)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9805: [FLINK-14227] translate dev/stream/state/checkpointing into Chinese

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9805: [FLINK-14227] translate 
dev/stream/state/checkpointing into Chinese
URL: https://github.com/apache/flink/pull/9805#issuecomment-536295524
 
 
   
   ## CI report:
   
   * 9ad039de6bbb953c7e2dd74b3118d80ae552f31e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129590285)
   * 5542dca83dbfb476114efb5aa2f2aa8d0626183a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129626942)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-14243) flink hiveudf needs some check when it is using cache

2019-09-29 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940639#comment-16940639
 ] 

jackylau edited comment on FLINK-14243 at 9/30/19 3:56 AM:
---

[~ykt836] I consulted the Spark community about this problem before, in a WeChat Spark group. It is the same problem in Spark, because you don't know whether the UDF uses a function that is not thread-safe. Someone said they solved this by checking the code, but that would be complicated.


was (Author: jackylau):
This problem I consulted spark community before in wechat spark  group. And it 
is the same problem in spark because you don't konw whether the UDF use a not 
thread-safe function. Someone said that they solved this by checking code but 
it would be complicated. 

> flink hiveudf needs some check when it is using cache
> -
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there will be problems when the original Hive UDF uses a cache. We know that Hive is process-level parallel based on the JVM, while Flink/Spark are task-level parallel. If Flink just calls the Hive UDF, thread-safety problems will exist when a cache is used.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set the Flink parallelism to 1.
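As a rough illustration of the concern described above (a hypothetical class, not an actual Hive UDF), a UDF that keeps an unsynchronized per-instance cache is safe only as long as each task thread gets its own instance; sharing one instance across threads can corrupt the cache:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical UDF-style class showing the cache pattern under discussion.
// HashMap is not thread-safe; if one instance were shared by several task
// threads, concurrent eval() calls could corrupt the map or lose entries.
public class CachingUdfSketch {

    private final Map<String, String> cache = new HashMap<>();

    public String eval(String key) {
        return cache.computeIfAbsent(key, this::expensiveLookup);
    }

    private String expensiveLookup(String key) {
        // Placeholder for the real work, e.g. parsing or a remote lookup.
        return key.toUpperCase();
    }
}
{code}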



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #9817: [FLINK-14178] Downgrade maven-shade-plugin to 3.1.0

2019-09-29 Thread GitBox
flinkbot commented on issue #9817: [FLINK-14178] Downgrade maven-shade-plugin 
to 3.1.0
URL: https://github.com/apache/flink/pull/9817#issuecomment-536390486
 
 
   
   ## CI report:
   
   * 2072428596808c2e32690b4cf72f022dd905e2d6 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14243) flink hiveudf needs some check when it is using cache

2019-09-29 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940639#comment-16940639
 ] 

jackylau commented on FLINK-14243:
--

I consulted the Spark community about this problem before, in a WeChat Spark group. It is the same problem in Spark, because you don't know whether the UDF uses a function that is not thread-safe. Someone said they solved this by checking the code, but that would be complicated.

> flink hiveudf needs some check when it is using cache
> -
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there will be problems when the original Hive UDF uses a cache. We know that Hive is process-level parallel based on the JVM, while Flink/Spark are task-level parallel. If Flink just calls the Hive UDF, thread-safety problems will exist when a cache is used.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set the Flink parallelism to 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9805: [FLINK-14227] translate dev/stream/state/checkpointing into Chinese

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9805: [FLINK-14227] translate 
dev/stream/state/checkpointing into Chinese
URL: https://github.com/apache/flink/pull/9805#issuecomment-536295524
 
 
   
   ## CI report:
   
   * 9ad039de6bbb953c7e2dd74b3118d80ae552f31e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129590285)
   * 5542dca83dbfb476114efb5aa2f2aa8d0626183a : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark in StreamSource may arrive the downstream operator early

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark 
in StreamSource may arrive the downstream operator early
URL: https://github.com/apache/flink/pull/9804#issuecomment-536287116
 
 
   
   ## CI report:
   
   * cb5b4da4d8b5d877e296dd84095a328208263c15 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129588487)
   * 2906e837e6a7f70dda5bc112658b51157801150e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129624664)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14273) when User-Defined Aggregate Functions(UDAF) parameters are inconsistent with the definition, the error reporting is confusing

2019-09-29 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940637#comment-16940637
 ] 

hailong wang commented on FLINK-14273:
--

I'd like to fix this problem, thx.

> when User-Defined Aggregate Functions(UDAF) parameters are inconsistent with 
> the definition, the error reporting is confusing
> -
>
> Key: FLINK-14273
> URL: https://issues.apache.org/jira/browse/FLINK-14273
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.0
>Reporter: hailong wang
>Priority: Critical
> Fix For: 1.10.0
>
>
> When the UDAF parameters are inconsistent with the definition of the accumulate
> method, all arguments of the accumulate method are listed in the error. But
> the first argument of accumulate is the accumulator, which users don't have to
> care about when using SQL.
> For example:
> {code:java}
> INSERT INTO Orders SELECT name, USERUDAF(id, name) FROM Orders GROUP BY 
> TUMBLE(rowTime, interval '10' second ), id, name
> {code}
> USERUDAF is a user-defined aggregate function, and accumulate is defined as
> follows:
> {code:java}
> public void accumulate(Long acc, String a) {……}
> {code}
> At present, the error is as follows:
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Given parameters 
> of function do not match any signature. 
> Actual: (java.lang.Integer, java.lang.String) 
> Expected: (java.lang.Integer, java.lang.String)
> {code}
> This error will mislead users; the expected error is as follows:
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Given parameters 
> of function do not match any signature. 
> Actual: (java.lang.Integer, java.lang.String) 
> Expected: (java.lang.String){code}
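For context, a hedged sketch of what such a UDAF class might look like (names and types are illustrative, roughly mirroring the signature quoted above): the first parameter of accumulate is the accumulator supplied by the framework, so SQL users only pass the remaining arguments.

{code:java}
import org.apache.flink.table.functions.AggregateFunction;

// Illustrative UDAF sketch: result type String, accumulator type Long.
// SQL users call it with one argument; the Long accumulator is injected by the
// runtime, which is why listing it in the "Expected" signature is confusing.
public class UserUdafSketch extends AggregateFunction<String, Long> {

    @Override
    public Long createAccumulator() {
        return 0L;
    }

    // Only "a" corresponds to an argument passed from SQL.
    public void accumulate(Long acc, String a) {
        // update logic omitted in this sketch
    }

    @Override
    public String getValue(Long acc) {
        return String.valueOf(acc);
    }
}
{code}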



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ commented on a change in pull request #9801: [FLINK-13983][runtime] Launch task executor with new memory calculation logics

2019-09-29 Thread GitBox
KarmaGYZ commented on a change in pull request #9801: [FLINK-13983][runtime] 
Launch task executor with new memory calculation logics
URL: https://github.com/apache/flink/pull/9801#discussion_r329400418
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerServicesConfiguration.java
 ##
 @@ -104,6 +113,11 @@ public TaskManagerServicesConfiguration(
boolean preAllocateMemory,
float memoryFraction,
int pageSize,
+   MemorySize taskHeapMemorySize,
+   MemorySize taskOffHeapMemorySize,
+   MemorySize shffuleMemorySize,
+   MemorySize onHeapManagedMemorySize,
+   MemorySize offHeapManagedMemorySize,
 
 Review comment:
   Maybe we could directly pass `taskExecutorResourceSpec` to `TaskManagerServicesConfiguration`, or add a `Builder` for it, since the constructor takes too many parameters and easily becomes unwieldy.
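   For what it's worth, a rough sketch of the builder idea (class and field names are placeholders, not the actual Flink classes):

```java
// Illustrative builder for a configuration object with many constructor parameters;
// names are placeholders, not the real TaskManagerServicesConfiguration.
final class ServicesConfigSketch {
    private final int pageSize;
    private final long taskHeapBytes;

    private ServicesConfigSketch(Builder builder) {
        this.pageSize = builder.pageSize;
        this.taskHeapBytes = builder.taskHeapBytes;
    }

    static final class Builder {
        private int pageSize = 32 * 1024;
        private long taskHeapBytes = 0L;

        Builder setPageSize(int pageSize) {
            this.pageSize = pageSize;
            return this;
        }

        Builder setTaskHeapBytes(long taskHeapBytes) {
            this.taskHeapBytes = taskHeapBytes;
            return this;
        }

        ServicesConfigSketch build() {
            return new ServicesConfigSketch(this);
        }
    }
}
```

   Each new memory-related value would then become one more setter instead of another positional constructor argument.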


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #9796: [FLINK-14253][table-planner-blink] Add hash distribution and sort grouping only when dynamic partition insert

2019-09-29 Thread GitBox
JingsongLi commented on a change in pull request #9796: 
[FLINK-14253][table-planner-blink] Add hash distribution and sort grouping only 
when dynamic partition insert
URL: https://github.com/apache/flink/pull/9796#discussion_r329400159
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/calcite/LogicalSink.scala
 ##
 @@ -37,11 +37,12 @@ final class LogicalSink(
 traitSet: RelTraitSet,
 input: RelNode,
 sink: TableSink[_],
-sinkName: String)
+sinkName: String,
+val staticPartitions: Map[String, String])
 
 Review comment:
   Yes, actually the sink concept from `CatalogSinkModifyOperation` is `insert into ` in SQL. So I don't think it is strange for the Sink to contain static partition information.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #9796: [FLINK-14253][table-planner-blink] Add hash distribution and sort grouping only when dynamic partition insert

2019-09-29 Thread GitBox
KurtYoung commented on a change in pull request #9796: 
[FLINK-14253][table-planner-blink] Add hash distribution and sort grouping only 
when dynamic partition insert
URL: https://github.com/apache/flink/pull/9796#discussion_r329399640
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/calcite/LogicalSink.scala
 ##
 @@ -37,11 +37,12 @@ final class LogicalSink(
 traitSet: RelTraitSet,
 input: RelNode,
 sink: TableSink[_],
-sinkName: String)
+sinkName: String,
+val staticPartitions: Map[String, String])
 
 Review comment:
   Guess you're right, it's better to transfer the information through `LogicalSink`. We already added such information to `CatalogSinkModifyOperation`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14226) Subset of nightly tests fail due to No output has been received in the last 10m0s

2019-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940631#comment-16940631
 ] 

Dian Fu commented on FLINK-14226:
-

Hi [~sunjincheng121], I'm +1 to downgrade maven-shade-plugin to 3.1.0. I have created a [PR|https://github.com/apache/flink/pull/9817]; could you help take a look? [~gjy] [~hequn8128]

> Subset of nightly tests fail due to No output has been received in the last 
> 10m0s
> -
>
> Key: FLINK-14226
> URL: https://issues.apache.org/jira/browse/FLINK-14226
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Gary Yao
>Priority: Blocker
>
> https://travis-ci.org/apache/flink/builds/589469198
> https://api.travis-ci.org/v3/job/589469225/log.txt
> {noformat}
> 19:51:07.028 [INFO] --- maven-shade-plugin:3.2.1:shade (shade-flink) @ 
> flink-elasticsearch6-test ---
> 19:51:07.038 [INFO] Excluding 
> org.apache.flink:flink-connector-elasticsearch6_2.11:jar:1.10-SNAPSHOT from 
> the shaded jar.
> 19:51:07.045 [INFO] Excluding 
> org.apache.flink:flink-connector-elasticsearch-base_2.11:jar:1.10-SNAPSHOT 
> from the shaded jar.
> 19:51:07.046 [INFO] Excluding 
> org.elasticsearch.client:elasticsearch-rest-high-level-client:jar:6.3.1 from 
> the shaded jar.
> 19:51:07.046 [INFO] Excluding org.elasticsearch:elasticsearch:jar:6.3.1 from 
> the shaded jar.
> 19:51:07.046 [INFO] Excluding org.elasticsearch:elasticsearch-core:jar:6.3.1 
> from the shaded jar.
> 19:51:07.046 [INFO] Excluding 
> org.elasticsearch:elasticsearch-secure-sm:jar:6.3.1 from the shaded jar.
> 19:51:07.046 [INFO] Excluding 
> org.elasticsearch:elasticsearch-x-content:jar:6.3.1 from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.yaml:snakeyaml:jar:1.17 from the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> com.fasterxml.jackson.core:jackson-core:jar:2.8.10 from the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> com.fasterxml.jackson.dataformat:jackson-dataformat-smile:jar:2.8.10 from the 
> shaded jar.
> 19:51:07.047 [INFO] Excluding 
> com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:jar:2.8.10 from the 
> shaded jar.
> 19:51:07.047 [INFO] Excluding 
> com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.8.10 from the 
> shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-core:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> org.apache.lucene:lucene-analyzers-common:jar:7.3.1 from the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> org.apache.lucene:lucene-backward-codecs:jar:7.3.1 from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-grouping:jar:7.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-highlighter:jar:7.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-join:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-memory:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-misc:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-queries:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-queryparser:jar:7.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-sandbox:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-spatial:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> org.apache.lucene:lucene-spatial-extras:jar:7.3.1 from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-spatial3d:jar:7.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-suggest:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.elasticsearch:elasticsearch-cli:jar:6.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding net.sf.jopt-simple:jopt-simple:jar:5.0.2 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding com.carrotsearch:hppc:jar:0.7.1 from the shaded 
> jar.
> 19:51:07.047 [INFO] Excluding joda-time:joda-time:jar:2.5 from the shaded jar.
> 19:51:07.047 [INFO] Excluding com.tdunning:t-digest:jar:3.2 from the shaded 
> jar.
> 19:51:07.047 [INFO] Excluding org.hdrhistogram:HdrHistogram:jar:2.1.9 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.elasticsearch:jna:jar:4.5.1 from the shaded 
> jar.
> 19:51:07.047 [INFO] Excluding 
> org.elasticsearch.client:elasticsearch-rest-client:jar:6.3.1 from the shaded 
> jar.
> 19:51:07.047 [INFO] Excluding org.apache.httpcomponents:httpclient:jar:4.5.3 
> from the shaded jar.
> 19:51:07.048 [INFO] Excluding org.apache.httpcomponents:httpcore:jar:4.4.6 
> from the shaded jar.
> 19:51:07.048 [INFO] Excluding 
> 

[jira] [Assigned] (FLINK-14178) maven-shade-plugin 3.2.1 doesn't work on ARM for Flink

2019-09-29 Thread sunjincheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng reassigned FLINK-14178:
---

Assignee: Dian Fu

> maven-shade-plugin 3.2.1 doesn't work on ARM for Flink
> --
>
> Key: FLINK-14178
> URL: https://issues.apache.org/jira/browse/FLINK-14178
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 2.0.0
>Reporter: wangxiyuan
>Assignee: Dian Fu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.0.0
>
> Attachments: debug.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently, maven-shade-plugin was bumped from 3.0.0 to 3.2.1 by this
> [commit|https://github.com/apache/flink/commit/e7216eebc846a69272c21375af0f4db8009c2e3e].
> However, in my local test on ARM, the Flink build process gets jammed.
> After debugging, I found there is an infinite loop.
> Downgrading maven-shade-plugin to 3.1.0 can solve this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14288) Add Py4j NOTICE for source release

2019-09-29 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940630#comment-16940630
 ] 

Hequn Cheng commented on FLINK-14288:
-

Fixed in
1.9.1: c9c10d84f063a2a2bcf98563891e43cae2e95697
1.10.0: 2ac48f0730c42121f5f88daec79209c2fe9cbc2d

> Add Py4j  NOTICE for source release
> ---
>
> Key: FLINK-14288
> URL: https://issues.apache.org/jira/browse/FLINK-14288
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I just found that we should add Py4j  NOTICE for source release. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-14288) Add Py4j NOTICE for source release

2019-09-29 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-14288.
---
Fix Version/s: 1.10.0
   Resolution: Fixed

> Add Py4j  NOTICE for source release
> ---
>
> Key: FLINK-14288
> URL: https://issues.apache.org/jira/browse/FLINK-14288
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I just found that we should add Py4j  NOTICE for source release. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14288) Add Py4j NOTICE for source release

2019-09-29 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940629#comment-16940629
 ] 

Hequn Cheng commented on FLINK-14288:
-

[~sunjincheng121] [~dian.fu] thanks a lot for reporting and fixing the problem. 
Merged.

> Add Py4j  NOTICE for source release
> ---
>
> Key: FLINK-14288
> URL: https://issues.apache.org/jira/browse/FLINK-14288
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I just found that we should add Py4j  NOTICE for source release. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark in StreamSource may arrive the downstream operator early

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark 
in StreamSource may arrive the downstream operator early
URL: https://github.com/apache/flink/pull/9804#issuecomment-536287116
 
 
   
   ## CI report:
   
   * cb5b4da4d8b5d877e296dd84095a328208263c15 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129588487)
   * 2906e837e6a7f70dda5bc112658b51157801150e : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129624664)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9817: [FLINK-14178] Downgrade maven-shade-plugin to 3.1.0 as 3.2.1 doesn't work on ARM for Flink

2019-09-29 Thread GitBox
flinkbot commented on issue #9817: [FLINK-14178] Downgrade maven-shade-plugin 
to 3.1.0 as 3.2.1 doesn't work on ARM for Flink
URL: https://github.com/apache/flink/pull/9817#issuecomment-536386845
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 865d5701295da4da3fede13fee7f0455ce858fea (Mon Sep 30 
03:31:10 UTC 2019)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14178).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14178) maven-shade-plugin 3.2.1 doesn't work on ARM for Flink

2019-09-29 Thread wangxiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940628#comment-16940628
 ] 

wangxiyuan commented on FLINK-14178:


Agree to downgrade it if x86 hits the same issue.

> maven-shade-plugin 3.2.1 doesn't work on ARM for Flink
> --
>
> Key: FLINK-14178
> URL: https://issues.apache.org/jira/browse/FLINK-14178
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 2.0.0
>Reporter: wangxiyuan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.0.0
>
> Attachments: debug.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently, maven-shade-plugin was bumped from 3.0.0 to 3.2.1 by this
> [commit|https://github.com/apache/flink/commit/e7216eebc846a69272c21375af0f4db8009c2e3e].
> However, in my local test on ARM, the Flink build process gets jammed.
> After debugging, I found there is an infinite loop.
> Downgrading maven-shade-plugin to 3.1.0 can solve this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14178) maven-shade-plugin 3.2.1 doesn't work on ARM for Flink

2019-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14178:
---
Labels: pull-request-available  (was: )

> maven-shade-plugin 3.2.1 doesn't work on ARM for Flink
> --
>
> Key: FLINK-14178
> URL: https://issues.apache.org/jira/browse/FLINK-14178
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 2.0.0
>Reporter: wangxiyuan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.0.0
>
> Attachments: debug.log
>
>
> Recently, maven-shade-plugin was bumped from 3.0.0 to 3.2.1 by this
> [commit|https://github.com/apache/flink/commit/e7216eebc846a69272c21375af0f4db8009c2e3e].
> However, in my local test on ARM, the Flink build process gets jammed.
> After debugging, I found there is an infinite loop.
> Downgrading maven-shade-plugin to 3.1.0 can solve this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu opened a new pull request #9817: [FLINK-14178] Downgrade maven-shade-plugin to 3.1.0 as 3.2.1 doesn't work on ARM for Flink

2019-09-29 Thread GitBox
dianfu opened a new pull request #9817: [FLINK-14178] Downgrade 
maven-shade-plugin to 3.1.0 as 3.2.1 doesn't work on ARM for Flink
URL: https://github.com/apache/flink/pull/9817
 
 
   
   ## What is the purpose of the change
   
   *This pull request downgrades maven-shade-plugin to 3.1.0 as 3.2.1 doesn't 
work on ARM for Flink*
   
   ## Brief change log
   
 - *Downgrade maven-shade-plugin to 3.1.0*
   
   
   ## Verifying this change
   
   This change is a trivial rework without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 closed pull request #9816: [FLINK-14288][legal] Add Py4J NOTICE for source release

2019-09-29 Thread GitBox
hequn8128 closed pull request #9816: [FLINK-14288][legal] Add Py4J NOTICE for 
source release
URL: https://github.com/apache/flink/pull/9816
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #9796: [FLINK-14253][table-planner-blink] Add hash distribution and sort grouping only when dynamic partition insert

2019-09-29 Thread GitBox
JingsongLi commented on a change in pull request #9796: 
[FLINK-14253][table-planner-blink] Add hash distribution and sort grouping only 
when dynamic partition insert
URL: https://github.com/apache/flink/pull/9796#discussion_r329398357
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/calcite/LogicalSink.scala
 ##
 @@ -37,11 +37,12 @@ final class LogicalSink(
 traitSet: RelTraitSet,
 input: RelNode,
 sink: TableSink[_],
-sinkName: String)
+sinkName: String,
+val staticPartitions: Map[String, String])
 
 Review comment:
   So we need to add `getStaticPartitions` to `PartitionableTableSink`?
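   Something along these lines, perhaps (a hypothetical sketch only; the actual `PartitionableTableSink` interface may look different):

```java
import java.util.Map;

// Hypothetical sketch of a sink that can report back the static partition
// specification it was configured with, so the planner can distinguish
// static from dynamic partition inserts.
interface PartitionAwareSinkSketch {

    void setStaticPartition(Map<String, String> partitions);

    // The getter under discussion: returns the previously configured static partitions.
    Map<String, String> getStaticPartitions();
}
```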


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14268) YARN AM endless restarts when using wrong checkpoint path or wrong checkpoint

2019-09-29 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940627#comment-16940627
 ] 

Yang Wang commented on FLINK-14268:
---

Do you mean `yarn.application-attempts` does not work? I have tested this config option and it works as expected.

> YARN AM endless restarts when using wrong checkpoint path or wrong checkpoint
> -
>
> Key: FLINK-14268
> URL: https://issues.apache.org/jira/browse/FLINK-14268
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.7.2
> Environment: Flink: 1.7.2
> Deloyment: YARN Per Job
> YARN:2.7.2
> State backend:FSStateBackend with HDFS 
>  
>Reporter: Lsw_aka_laplace
>Priority: Critical
>
> I tried to start a streaming task and restore it from a checkpoint stored in HDFS.
> I set a wrong checkpoint path and something unexpected happened: the YARN AM restarted
> again and again. Since we had already set a restart strategy to prevent
> endless restarts, it should have restarted only a limited number of times.
> Since we made sure that the restart strategy works, we dived into the source code and
> made some changes, mainly in _ClusterEntrypoint_.
>  
> {code:java}
> // code placeholder
> // before
> @Override
> public void onFatalError(Throwable exception) {
>     LOG.error("Fatal error occurred in the cluster entrypoint.", exception);
>     System.exit(RUNTIME_FAILURE_RETURN_CODE);
> }
>
> // after
> @Override
> public void onFatalError(Throwable exception) {
>     LOG.error("Fatal error occurred in the cluster entrypoint.", exception);
>     if (ExceptionUtils.findThrowable(exception, PerJobFatalException.class).isPresent()) {
>         // PerJobFatalException is the FLAG
>         // In per-job mode, some fatal exceptions make the AM restart endlessly; it can never fail.
>         LOG.error("perjob fatal error");
>         System.exit(STARTUP_FAILURE_RETURN_CODE);
>     }
>     System.exit(RUNTIME_FAILURE_RETURN_CODE);
> }
> {code}
> We forced the failure return code to STARTUP_FAILURE_RETURN_CODE rather than
> RUNTIME_FAILURE_RETURN_CODE under some conditions, and *it DID WORK*.
>
> After discussing with [~Tison], I learned that FAILURE_RETURN_CODE seems to be used
> only for debugging, so I submitted this issue and look forward to ANY solution.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14178) maven-shade-plugin 3.2.1 doesn't work on ARM for Flink

2019-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940626#comment-16940626
 ] 

Dian Fu commented on FLINK-14178:
-

[~sunjincheng121] Downgrading the maven-shade-plugin to 3.1.0 makes sense to 
me. I'd like to take this ticket and fix it, could you assign this issue to me?

> maven-shade-plugin 3.2.1 doesn't work on ARM for Flink
> --
>
> Key: FLINK-14178
> URL: https://issues.apache.org/jira/browse/FLINK-14178
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 2.0.0
>Reporter: wangxiyuan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: debug.log
>
>
> Recently, maven-shade-plugin was bumped from 3.0.0 to 3.2.1 by this
> [commit|https://github.com/apache/flink/commit/e7216eebc846a69272c21375af0f4db8009c2e3e].
> However, in my local test on ARM, the Flink build process gets jammed.
> After debugging, I found there is an infinite loop.
> Downgrading maven-shade-plugin to 3.1.0 can solve this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14243) flink hiveudf needs some check when it is using cache

2019-09-29 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940624#comment-16940624
 ] 

Kurt Young commented on FLINK-14243:


The cache you mentioned is a normal class field, and AFAIK Flink will use different function objects for each task. Have you met any problems with this function?

> flink hiveudf needs some check when it is using cache
> -
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there will be problems when the original Hive UDF uses a cache. We know that Hive is process-level parallel based on the JVM, while Flink/Spark are task-level parallel. If Flink just calls the Hive UDF, thread-safety problems will exist when a cache is used.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set the Flink parallelism to 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14226) Subset of nightly tests fail due to No output has been received in the last 10m0s

2019-09-29 Thread sunjincheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940623#comment-16940623
 ] 

sunjincheng commented on FLINK-14226:
-

Thanks for reporting the issue and for sharing your thoughts [~gjy].

Since we found the ARM build issue for Flink in FLINK-14178, I suggest trying to downgrade the maven-shade-plugin version to 3.1.0.

What do you think? [~dianfu] [~hequn]

> Subset of nightly tests fail due to No output has been received in the last 
> 10m0s
> -
>
> Key: FLINK-14226
> URL: https://issues.apache.org/jira/browse/FLINK-14226
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Gary Yao
>Priority: Blocker
>
> https://travis-ci.org/apache/flink/builds/589469198
> https://api.travis-ci.org/v3/job/589469225/log.txt
> {noformat}
> 19:51:07.028 [INFO] --- maven-shade-plugin:3.2.1:shade (shade-flink) @ 
> flink-elasticsearch6-test ---
> 19:51:07.038 [INFO] Excluding 
> org.apache.flink:flink-connector-elasticsearch6_2.11:jar:1.10-SNAPSHOT from 
> the shaded jar.
> 19:51:07.045 [INFO] Excluding 
> org.apache.flink:flink-connector-elasticsearch-base_2.11:jar:1.10-SNAPSHOT 
> from the shaded jar.
> 19:51:07.046 [INFO] Excluding 
> org.elasticsearch.client:elasticsearch-rest-high-level-client:jar:6.3.1 from 
> the shaded jar.
> 19:51:07.046 [INFO] Excluding org.elasticsearch:elasticsearch:jar:6.3.1 from 
> the shaded jar.
> 19:51:07.046 [INFO] Excluding org.elasticsearch:elasticsearch-core:jar:6.3.1 
> from the shaded jar.
> 19:51:07.046 [INFO] Excluding 
> org.elasticsearch:elasticsearch-secure-sm:jar:6.3.1 from the shaded jar.
> 19:51:07.046 [INFO] Excluding 
> org.elasticsearch:elasticsearch-x-content:jar:6.3.1 from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.yaml:snakeyaml:jar:1.17 from the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> com.fasterxml.jackson.core:jackson-core:jar:2.8.10 from the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> com.fasterxml.jackson.dataformat:jackson-dataformat-smile:jar:2.8.10 from the 
> shaded jar.
> 19:51:07.047 [INFO] Excluding 
> com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:jar:2.8.10 from the 
> shaded jar.
> 19:51:07.047 [INFO] Excluding 
> com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.8.10 from the 
> shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-core:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> org.apache.lucene:lucene-analyzers-common:jar:7.3.1 from the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> org.apache.lucene:lucene-backward-codecs:jar:7.3.1 from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-grouping:jar:7.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-highlighter:jar:7.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-join:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-memory:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-misc:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-queries:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-queryparser:jar:7.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-sandbox:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-spatial:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding 
> org.apache.lucene:lucene-spatial-extras:jar:7.3.1 from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-spatial3d:jar:7.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding org.apache.lucene:lucene-suggest:jar:7.3.1 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.elasticsearch:elasticsearch-cli:jar:6.3.1 
> from the shaded jar.
> 19:51:07.047 [INFO] Excluding net.sf.jopt-simple:jopt-simple:jar:5.0.2 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding com.carrotsearch:hppc:jar:0.7.1 from the shaded 
> jar.
> 19:51:07.047 [INFO] Excluding joda-time:joda-time:jar:2.5 from the shaded jar.
> 19:51:07.047 [INFO] Excluding com.tdunning:t-digest:jar:3.2 from the shaded 
> jar.
> 19:51:07.047 [INFO] Excluding org.hdrhistogram:HdrHistogram:jar:2.1.9 from 
> the shaded jar.
> 19:51:07.047 [INFO] Excluding org.elasticsearch:jna:jar:4.5.1 from the shaded 
> jar.
> 19:51:07.047 [INFO] Excluding 
> org.elasticsearch.client:elasticsearch-rest-client:jar:6.3.1 from the shaded 
> jar.
> 19:51:07.047 [INFO] Excluding org.apache.httpcomponents:httpclient:jar:4.5.3 
> from the shaded jar.
> 19:51:07.048 [INFO] Excluding org.apache.httpcomponents:httpcore:jar:4.4.6 
> from the 

[jira] [Commented] (FLINK-14178) maven-shade-plugin 3.2.1 doesn't work on ARM for Flink

2019-09-29 Thread sunjincheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940622#comment-16940622
 ] 

sunjincheng commented on FLINK-14178:
-

Thanks for the discussion [~wangxiyuan] and [~dian.fu]. Since we found the e2e test issue in FLINK-14226, I suggest trying to downgrade the maven-shade-plugin version to 3.1.0.

What do you think?

> maven-shade-plugin 3.2.1 doesn't work on ARM for Flink
> --
>
> Key: FLINK-14178
> URL: https://issues.apache.org/jira/browse/FLINK-14178
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 2.0.0
>Reporter: wangxiyuan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: debug.log
>
>
> Recently, maven-shade-plugin was bumped from 3.0.0 to 3.2.1 by this
> [commit|https://github.com/apache/flink/commit/e7216eebc846a69272c21375af0f4db8009c2e3e].
> However, in my local test on ARM, the Flink build process gets jammed.
> After debugging, I found there is an infinite loop.
> Downgrading maven-shade-plugin to 3.1.0 can solve this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14289) Remove Optional fields from RecordWriter relevant classes

2019-09-29 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang updated FLINK-14289:
-
Description: 
Based on the code style guides for [Jave Optional|#java-optional] , the 
optional should not be used for class fields. 

So we remove the optional usages from RecordWriter, BroadcastRecordWriter and 
ChannelSelectorRecordWriter.

  was:
Based on the code style guides for [Jave Optional|#java-optional]] , the 
optional should not be used for class fields. 

So we remove the optional usages from RecordWriter, BroadcastRecordWriter and 
ChannelSelectorRecordWriter.


> Remove Optional fields from RecordWriter relevant classes
> -
>
> Key: FLINK-14289
> URL: https://issues.apache.org/jira/browse/FLINK-14289
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>
> Based on the code style guide for [Java Optional|https://flink.apache.org/contributing/code-style-and-quality-java.html#java-optional], Optional should not be used for class fields.
> So we remove the Optional usages from RecordWriter, BroadcastRecordWriter and
> ChannelSelectorRecordWriter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14289) Remove Optional fields from RecordWriter relevant classes

2019-09-29 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang updated FLINK-14289:
-
Description: 
Based on the code style guides for [Jave Optional|#java-optional]] , the 
optional should not be used for class fields. 

So we remove the optional usages from RecordWriter, BroadcastRecordWriter and 
ChannelSelectorRecordWriter.

  was:
Based on the code style guides for [Jave 
Optional|[https://flink.apache.org/contributing/code-style-and-quality-java.html#java-optional]|https://flink.apache.org/contributing/code-style-and-quality-java.html#java-optional],]
 , the optional should not be used for class fields. 

So we remove the optional usages from RecordWriter, BroadcastRecordWriter and 
ChannelSelectorRecordWriter.


> Remove Optional fields from RecordWriter relevant classes
> -
>
> Key: FLINK-14289
> URL: https://issues.apache.org/jira/browse/FLINK-14289
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>
> Based on the code style guide for [Java Optional|https://flink.apache.org/contributing/code-style-and-quality-java.html#java-optional], Optional should not be used for class fields.
> So we remove the Optional usages from RecordWriter, BroadcastRecordWriter and
> ChannelSelectorRecordWriter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14289) Remove Optional fields from RecordWriter relevant classes

2019-09-29 Thread zhijiang (Jira)
zhijiang created FLINK-14289:


 Summary: Remove Optional fields from RecordWriter relevant classes
 Key: FLINK-14289
 URL: https://issues.apache.org/jira/browse/FLINK-14289
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: zhijiang
Assignee: zhijiang


Based on the code style guide for [Java Optional|https://flink.apache.org/contributing/code-style-and-quality-java.html#java-optional], Optional should not be used for class fields.

So we remove the Optional usages from RecordWriter, BroadcastRecordWriter and ChannelSelectorRecordWriter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl
URL: https://github.com/apache/flink/pull/9689#issuecomment-531747685
 
 
   
   ## CI report:
   
   * bd2624914db1147588ea838ae542333c310290cc : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/127790175)
   * b5523d10152123f45cf883e446872b90532879c3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128113059)
   * 7c4c25b26aa9549fe83628315b816f16327000b1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129622336)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14243) flink hiveudf needs some check when it is using cache

2019-09-29 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-14243:
-
Description: 
Flink1.9 brings in hive connector, but it will have some problem when the 
original hive udf using cache. We konw that hive is  processed level parallel 
based on jvm, while flink/spark is task level parallel. If flink just calls the 
hive udf, it wll exists thread-safe problem when using cache.

So it may need check the hive udf code and if it is not thread-safe, and set 
the flink parallize=1

  was:
Flink1.9 bring in hive connector, but it will have some problem when the 
original hive udf using cache. We konw that hive is  processed level parallel 
based on jvm, while flink/spark is task level parallel. If flink just calls the 
hive udf, it wll exists thread-safe problem when using cache.

So it may need check the hive udf code and if it is not thread-safe, and set 
the flink parallize=1


> flink hiveudf needs some check when it is using cache
> -
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there will be problems when the original Hive UDF uses a cache. We know that Hive is process-level parallel based on the JVM, while Flink/Spark are task-level parallel. If Flink just calls the Hive UDF, thread-safety problems will exist when a cache is used.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set the Flink parallelism to 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14243) flink hiveudf needs some check when it is using cache

2019-09-29 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-14243:
-
Summary: flink hiveudf needs some check when it is using cache  (was: flink 
hiveudf need some check when it is using cache)

> flink hiveudf needs some check when it is using cache
> -
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there will be problems when the original Hive UDF uses a cache. We know that Hive is process-level parallel based on the JVM, while Flink/Spark are task-level parallel. If Flink just calls the Hive UDF, thread-safety problems will exist when a cache is used.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set the Flink parallelism to 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14243) flink hiveudf need some check when it is using cache

2019-09-29 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940619#comment-16940619
 ] 

jackylau commented on FLINK-14243:
--

[~ykt836] The original UDF uses a cache, as you can see in Hive system UDFs like get_json_object.

> flink hiveudf need some check when it is using cache
> 
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there can be problems when the 
> original Hive UDF uses a cache. We know that Hive parallelizes at the process 
> level (one JVM per task), while Flink/Spark parallelize at the task level 
> (multiple subtasks running as threads in one JVM). If Flink just calls the 
> Hive UDF as-is, thread-safety problems can appear when the UDF uses a cache.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set 
> the Flink parallelism to 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9816: [FLINK-14288][legal] Add Py4J NOTICE for source release

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9816: [FLINK-14288][legal] Add Py4J NOTICE 
for source release
URL: https://github.com/apache/flink/pull/9816#issuecomment-536370360
 
 
   
   ## CI report:
   
   * 2647196a7e0997b7a3a22a0212d062ee309f1db6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129621147)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14243) flink hiveudf need some check when it is using cache

2019-09-29 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-14243:
-
Summary: flink hiveudf need some check when it is using cache  (was: flink 
hiveudf need some check whether it is using cache)

> flink hiveudf need some check when it is using cache
> 
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there can be problems when the 
> original Hive UDF uses a cache. We know that Hive parallelizes at the process 
> level (one JVM per task), while Flink/Spark parallelize at the task level 
> (multiple subtasks running as threads in one JVM). If Flink just calls the 
> Hive UDF as-is, thread-safety problems can appear when the UDF uses a cache.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set 
> the Flink parallelism to 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14243) flink hiveudf need some check when it is using cache

2019-09-29 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-14243:
-
Description: 
Flink 1.9 brings in the Hive connector, but there can be problems when the original 
Hive UDF uses a cache. We know that Hive parallelizes at the process level (one JVM 
per task), while Flink/Spark parallelize at the task level (multiple subtasks running 
as threads in one JVM). If Flink just calls the Hive UDF as-is, thread-safety problems 
can appear when the UDF uses a cache.

So we may need to check the Hive UDF code and, if it is not thread-safe, set the 
Flink parallelism to 1.

  was:
Flink1.9 bring in hive connector, but it will have some problem when the 
original hive udf using cache. We konw that hive is  processed level parallel 
based on jvm, while flink/spark is task level parallel. If flink just calls the 
hive udf, it wll exists thread-safe problem when using cache.

So it may need check the hive udf code and if it is not thread-fase, and set 
the flink parallize=1


> flink hiveudf need some check when it is using cache
> 
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there can be problems when the 
> original Hive UDF uses a cache. We know that Hive parallelizes at the process 
> level (one JVM per task), while Flink/Spark parallelize at the task level 
> (multiple subtasks running as threads in one JVM). If Flink just calls the 
> Hive UDF as-is, thread-safety problems can appear when the UDF uses a cache.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set 
> the Flink parallelism to 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark in StreamSource may arrive the downstream operator early

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9804: [FLINK-14239] Fix the max watermark 
in StreamSource may arrive the downstream operator early
URL: https://github.com/apache/flink/pull/9804#issuecomment-536287116
 
 
   
   ## CI report:
   
   * cb5b4da4d8b5d877e296dd84095a328208263c15 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129588487)
   * 2906e837e6a7f70dda5bc112658b51157801150e : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #9796: [FLINK-14253][table-planner-blink] Add hash distribution and sort grouping only when dynamic partition insert

2019-09-29 Thread GitBox
KurtYoung commented on a change in pull request #9796: 
[FLINK-14253][table-planner-blink] Add hash distribution and sort grouping only 
when dynamic partition insert
URL: https://github.com/apache/flink/pull/9796#discussion_r329395856
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/calcite/LogicalSink.scala
 ##
 @@ -37,11 +37,12 @@ final class LogicalSink(
 traitSet: RelTraitSet,
 input: RelNode,
 sink: TableSink[_],
-sinkName: String)
+sinkName: String,
+val staticPartitions: Map[String, String])
 
 Review comment:
    I think such information should be carried by `PartitionableTableSink`, not by 
this `LogicalSink`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl
URL: https://github.com/apache/flink/pull/9689#issuecomment-531747685
 
 
   
   ## CI report:
   
   * bd2624914db1147588ea838ae542333c310290cc : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/127790175)
   * b5523d10152123f45cf883e446872b90532879c3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128113059)
   * 7c4c25b26aa9549fe83628315b816f16327000b1 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129622336)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14243) flink hiveudf need some check whether it is using cache

2019-09-29 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940614#comment-16940614
 ] 

Kurt Young commented on FLINK-14243:


What do you mean by using cache?

> flink hiveudf need some check whether it is using cache
> ---
>
> Key: FLINK-14243
> URL: https://issues.apache.org/jira/browse/FLINK-14243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> Flink 1.9 brings in the Hive connector, but there can be problems when the 
> original Hive UDF uses a cache. We know that Hive parallelizes at the process 
> level (one JVM per task), while Flink/Spark parallelize at the task level 
> (multiple subtasks running as threads in one JVM). If Flink just calls the 
> Hive UDF as-is, thread-safety problems can appear when the UDF uses a cache.
> So we may need to check the Hive UDF code and, if it is not thread-safe, set 
> the Flink parallelism to 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xuyang1706 commented on a change in pull request #9300: [FLINK-13513][ml] Add the FlatMapper and related classes for later al…

2019-09-29 Thread GitBox
xuyang1706 commented on a change in pull request #9300: [FLINK-13513][ml] Add 
the FlatMapper and related classes for later al…
URL: https://github.com/apache/flink/pull/9300#discussion_r329391447
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/mapper/Mapper.java
 ##
 @@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.mapper;
+
+import org.apache.flink.ml.api.misc.param.Params;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+/**
+ * Abstract class for mappers.
+ */
+public abstract class Mapper extends FlatMapper implements MapOperable {
+
+   public Mapper(TableSchema dataSchema, Params params) {
+   super(dataSchema, params);
+   }
+
+   /**
+* This method override the {@link FlatMapper#flatMap(Row, Collector)} 
to map a row
+* to a new row which collected to {@link Collector}.
+*
+* @param row the input row.
+* @param output the output collector
+* @throws Exception if {@link Collector#collect(Object)} throws 
exception.
+*/
+   @Override
+   public void flatMap(Row row, Collector output) throws Exception {
 
 Review comment:
    Thanks for your advice. Since one of the extending classes, `FlatModelMapper`, 
does not support the map() function, we cannot override its flatMap() function there. 
Thus, I cannot move this method up to the FlatMapper class.
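
   For readers following along, a small sketch of the delegation under discussion (not the PR code itself; it only assumes the flatMap signature shown in the diff above):

   ```java
   import org.apache.flink.types.Row;
   import org.apache.flink.util.Collector;

   // Sketch: Mapper narrows FlatMapper's one-to-many contract to exactly one output
   // row by routing flatMap() through an abstract map(). FlatModelMapper, which has
   // no row-to-row map(), keeps overriding flatMap() directly instead.
   public abstract class RowMapperSketch {

       /** Maps one input row to exactly one output row. */
       public abstract Row map(Row row) throws Exception;

       /** FlatMapper-style entry point: emits the single mapped row. */
       public void flatMap(Row row, Collector<Row> output) throws Exception {
           output.collect(map(row));
       }
   }
   ```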


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14123) Change taskmanager.memory.fraction default value to 0.6

2019-09-29 Thread liupengcheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940610#comment-16940610
 ] 

liupengcheng commented on FLINK-14123:
--

[~StephanEwen] Thanks for your reply. I tested the app with G1 GC and with CMS, and 
it succeeded in both cases, so I think this failure mainly affects the Parallel GC. 
Still, I think we should make the default GC work: many users are still on JDK 8, and 
G1 has known bugs on JDK 8. What do you think about adjusting the default value to 0.6?
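
For reference, a sketch of the flink-conf.yaml entries being discussed (0.6 is the proposed value, not the current default; the GC switch is only what made the test above pass):

{code}
# flink-conf.yaml (sketch of the settings discussed in this ticket)

# Fraction of free memory the TaskManager reserves as managed memory for batch
# operators (sorting, hashing). Lowering it leaves more free heap for user code,
# which eases GC pressure under the Parallel collector.
taskmanager.memory.fraction: 0.6

# Alternative tried above: run the TaskManager JVM with G1 instead of the
# JDK 8 default Parallel GC.
env.java.opts.taskmanager: "-XX:+UseG1GC"
{code}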

> Change taskmanager.memory.fraction default value to 0.6
> ---
>
> Key: FLINK-14123
> URL: https://issues.apache.org/jira/browse/FLINK-14123
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Affects Versions: 1.9.0
>Reporter: liupengcheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, we are testing Flink batch jobs such as terasort; however, the job 
> ran only a short while and then failed due to an OOM.
>  
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: a807e1d635bd4471ceea4282477f8850)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1007)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1080)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1080)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:259)
>   ... 23 more
> Caused by: java.lang.RuntimeException: Error obtaining the sorted input: 
> Thread 'SortMerger Reading Thread' terminated due to an exception: GC 
> overhead limit exceeded
>   at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:650)
>   at 
> org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1109)
>   at org.apache.flink.runtime.operators.NoOpDriver.run(NoOpDriver.java:82)
>   at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504)
>   at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Thread 'SortMerger Reading Thread' terminated 
> due to an exception: GC overhead limit exceeded
>   at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:831)
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
> 

[jira] [Commented] (FLINK-14250) Introduce separate "bin/flink runpy" and remove python support from "bin/flink run"

2019-09-29 Thread sunjincheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940609#comment-16940609
 ] 

sunjincheng commented on FLINK-14250:
-

I'm glad to see this proposal [~aljoscha]!(y)

I think simplifying the command-line parsing is a good idea. I went through the code 
and I would like to share my thoughts:

How about adding a new `RunPyOption` and putting all of the Python command-line 
parsing logic into it? We could add a language check in `CliFrontend#run()` and use 
`RunPyOption` when the program is a Python program. In this way we can extend it 
flexibly in the future if we want to add support for other languages such as Go.
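
For illustration, a rough sketch of that dispatch (all class and method names below are hypothetical, not existing Flink code):

{code:java}
// Hypothetical sketch of the proposed split: detect the language once, then hand
// the arguments to a language-specific options class, so the "run" help text for
// Java jobs no longer has to list Python-only flags.
final class RunDispatcherSketch {

    void run(String[] args) throws Exception {
        if (isPythonProgram(args)) {
            RunPyOptions pyOptions = RunPyOptions.parse(args);  // Python-only flags live here
            submitPython(pyOptions);
        } else {
            RunOptions javaOptions = RunOptions.parse(args);     // plain JVM job options
            submitJava(javaOptions);
        }
        // Supporting another language (e.g. Go) later would only add a branch and an options class.
    }

    private boolean isPythonProgram(String[] args) {
        for (String arg : args) {
            if ("-py".equals(arg) || "--python".equals(arg) || arg.endsWith(".py")) {
                return true;
            }
        }
        return false;
    }

    private void submitPython(RunPyOptions options) { /* hand off to the Python driver */ }

    private void submitJava(RunOptions options) { /* hand off to PackagedProgram execution */ }

    // Minimal stand-ins so the sketch is self-contained.
    static final class RunPyOptions {
        static RunPyOptions parse(String[] args) { return new RunPyOptions(); }
    }

    static final class RunOptions {
        static RunOptions parse(String[] args) { return new RunOptions(); }
    }
}
{code}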

What do you think?

> Introduce separate "bin/flink runpy" and remove python support from 
> "bin/flink run"
> ---
>
> Key: FLINK-14250
> URL: https://issues.apache.org/jira/browse/FLINK-14250
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Aljoscha Krettek
>Priority: Major
>
> Currently, "bin/flink run" supports both Java and Python programs and there 
> is quite some complexity in command line parsing and validation and the code 
> is spread across different classes.
> I think if we had a separate "bin/flink runpy" then we could simplify the 
> parsing quite a bit and the usage help of each command would only show those 
> options that are relevant for the given use case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9689: [FLINK-7151] add a basic function ddl
URL: https://github.com/apache/flink/pull/9689#issuecomment-531747685
 
 
   
   ## CI report:
   
   * bd2624914db1147588ea838ae542333c310290cc : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/127790175)
   * b5523d10152123f45cf883e446872b90532879c3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128113059)
   * 7c4c25b26aa9549fe83628315b816f16327000b1 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9816: [FLINK-14288][legal] Add Py4J NOTICE for source release

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9816: [FLINK-14288][legal] Add Py4J NOTICE 
for source release
URL: https://github.com/apache/flink/pull/9816#issuecomment-536370360
 
 
   
   ## CI report:
   
   * 2647196a7e0997b7a3a22a0212d062ee309f1db6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129621147)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung merged pull request #9795: [hotfix] [variable name] Typo in Table Planner RuleSets: JOIN_REORDER_PERPARE_RULES

2019-09-29 Thread GitBox
KurtYoung merged pull request #9795: [hotfix] [variable name] Typo in Table 
Planner RuleSets: JOIN_REORDER_PERPARE_RULES
URL: https://github.com/apache/flink/pull/9795
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhuzhurk commented on a change in pull request #9783: [FLINK-14040][travis] Enable MiniCluster tests based on schedulerNG in Flink cron build

2019-09-29 Thread GitBox
zhuzhurk commented on a change in pull request #9783: [FLINK-14040][travis] 
Enable MiniCluster tests based on schedulerNG in Flink cron build
URL: https://github.com/apache/flink/pull/9783#discussion_r329390451
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobExecutionITCase.java
 ##
 @@ -37,6 +39,7 @@
 /**
  * Integration tests for job scheduling.
  */
+@Category(AlsoRunWithSchedulerNG.class)
 public class JobExecutionITCase extends TestLogger {
 
 Review comment:
    @GJL do you think it's OK to merge this PR before annotating the other 
MiniCluster tests?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14271) Deprecate legacy RestartPipelinedRegionStrategy

2019-09-29 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940606#comment-16940606
 ] 

Zhu Zhu commented on FLINK-14271:
-

cc [~GJL]

> Deprecate legacy RestartPipelinedRegionStrategy
> ---
>
> Key: FLINK-14271
> URL: https://issues.apache.org/jira/browse/FLINK-14271
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Minor
> Fix For: 1.10.0
>
>
> The legacy {{RestartPipelinedRegionStrategy}} has been superseded by 
> {{AdaptedRestartPipelinedRegionStrategyNG}} in Flink 1.9.
> It heavily depends on ExecutionGraph components and becomes a blocker for a 
> clean scheduler re-architecture.
> We should deprecate it for further removal.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #9816: [FLINK-14288][legal] Add Py4J NOTICE for source release

2019-09-29 Thread GitBox
flinkbot commented on issue #9816: [FLINK-14288][legal] Add Py4J NOTICE for 
source release
URL: https://github.com/apache/flink/pull/9816#issuecomment-536370360
 
 
   
   ## CI report:
   
   * 2647196a7e0997b7a3a22a0212d062ee309f1db6 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] HuangZhenQiu commented on issue #9689: [FLINK-7151] add a basic function ddl

2019-09-29 Thread GitBox
HuangZhenQiu commented on issue #9689: [FLINK-7151] add a basic function ddl
URL: https://github.com/apache/flink/pull/9689#issuecomment-536370130
 
 
   @bowenli86 @danny0405 @suez1224 
   Would you please review this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] HuangZhenQiu commented on a change in pull request #9689: [FLINK-7151] add a basic function ddl

2019-09-29 Thread GitBox
HuangZhenQiu commented on a change in pull request #9689: [FLINK-7151] add a 
basic function ddl
URL: https://github.com/apache/flink/pull/9689#discussion_r329387849
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -155,6 +155,24 @@ SqlNodeList TableProperties():
 {  return new SqlNodeList(proList, span.end(this)); }
 }
 
+SqlCreate SqlCreateFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+SqlCharStringLiteral functionClassName = null;
+}
+{
+
+
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] HuangZhenQiu commented on a change in pull request #9689: [FLINK-7151] add a basic function ddl

2019-09-29 Thread GitBox
HuangZhenQiu commented on a change in pull request #9689: [FLINK-7151] add a 
basic function ddl
URL: https://github.com/apache/flink/pull/9689#discussion_r329387836
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -155,6 +155,24 @@ SqlNodeList TableProperties():
 {  return new SqlNodeList(proList, span.end(this)); }
 }
 
+SqlCreate SqlCreateFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+SqlCharStringLiteral functionClassName = null;
+}
+{
+
+
 
 Review comment:
   Good catch. Added.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9816: [FLINK-14288][legal] Add Py4J NOTICE for source release

2019-09-29 Thread GitBox
flinkbot commented on issue #9816: [FLINK-14288][legal] Add Py4J NOTICE for 
source release
URL: https://github.com/apache/flink/pull/9816#issuecomment-536366200
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2647196a7e0997b7a3a22a0212d062ee309f1db6 (Mon Sep 30 
01:32:56 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14288) Add Py4j NOTICE for source release

2019-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14288:
---
Labels: pull-request-available  (was: )

> Add Py4j  NOTICE for source release
> ---
>
> Key: FLINK-14288
> URL: https://issues.apache.org/jira/browse/FLINK-14288
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.1
>
>
> I just found that we should add a Py4J entry to the NOTICE for the source release. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu opened a new pull request #9816: [FLINK-14288][legal] Add Py4J NOTICE for source release

2019-09-29 Thread GitBox
dianfu opened a new pull request #9816: [FLINK-14288][legal] Add Py4J NOTICE 
for source release
URL: https://github.com/apache/flink/pull/9816
 
 
   
   ## What is the purpose of the change
   
   *This pull request adds Py4J to the NOTICE as it's bundled in the source 
release*
   
   
   ## Brief change log
   
 - *Adds Py4J to the NOTICE file*
   
   ## Verifying this change
   
   This change is a trivial rework without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14288) Add Py4j NOTICE for source release

2019-09-29 Thread sunjincheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940592#comment-16940592
 ] 

sunjincheng commented on FLINK-14288:
-

Done. Many thanks for taking this ticket, [~dian.fu]!

> Add Py4j  NOTICE for source release
> ---
>
> Key: FLINK-14288
> URL: https://issues.apache.org/jira/browse/FLINK-14288
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Blocker
> Fix For: 1.9.1
>
>
> I just found that we should add a Py4J entry to the NOTICE for the source release. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14288) Add Py4j NOTICE for source release

2019-09-29 Thread sunjincheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng reassigned FLINK-14288:
---

Assignee: Dian Fu

> Add Py4j  NOTICE for source release
> ---
>
> Key: FLINK-14288
> URL: https://issues.apache.org/jira/browse/FLINK-14288
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Blocker
> Fix For: 1.9.1
>
>
> I just found that we should add a Py4J entry to the NOTICE for the source release. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14288) Add Py4j NOTICE for source release

2019-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940591#comment-16940591
 ] 

Dian Fu commented on FLINK-14288:
-

Hi [~sunjincheng121], good catch! I'd like to take this issue. Would you please 
assign it to me?

> Add Py4j  NOTICE for source release
> ---
>
> Key: FLINK-14288
> URL: https://issues.apache.org/jira/browse/FLINK-14288
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Priority: Blocker
> Fix For: 1.9.1
>
>
> I just found that we should add a Py4J entry to the NOTICE for the source release. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14288) Add Py4j NOTICE for source release

2019-09-29 Thread sunjincheng (Jira)
sunjincheng created FLINK-14288:
---

 Summary: Add Py4j  NOTICE for source release
 Key: FLINK-14288
 URL: https://issues.apache.org/jira/browse/FLINK-14288
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.9.0
Reporter: sunjincheng
 Fix For: 1.9.1


I just found that we should add a Py4J entry to the NOTICE for the source release. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14274) RuntimeContext can be get from FunctionContext

2019-09-29 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940589#comment-16940589
 ] 

Kurt Young commented on FLINK-14274:


[~hailong wang] Once RuntimeContext is exposed to users, the framework has no way 
to stop them from using state inside a UDF. If you want capabilities from 
RuntimeContext other than state, you can open separate issues describing which 
methods you need and we can add them to FunctionContext.

I will close this for now; we can re-open it if needed. 
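
For reference, a short sketch of what FunctionContext already provides inside a UDF's open() method (loosely based on the documented ScalarFunction example; the metric and parameter names are illustrative):

{code:java}
import org.apache.flink.metrics.Counter;
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.ScalarFunction;

public class HashCodeFunction extends ScalarFunction {

    private transient Counter invocations;
    private int factor;

    @Override
    public void open(FunctionContext context) throws Exception {
        // Metrics, distributed-cache files and job parameters are reachable today.
        invocations = context.getMetricGroup().counter("hashCodeCalls");
        factor = Integer.parseInt(context.getJobParameter("hashcode_factor", "12"));
        // Keyed state is deliberately not reachable from here; exposing it is
        // exactly what this ticket asked for via RuntimeContext.
    }

    public int eval(String input) {
        invocations.inc();
        return input.hashCode() * factor;
    }
}
{code}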

>  RuntimeContext can be get from FunctionContext
> ---
>
> Key: FLINK-14274
> URL: https://issues.apache.org/jira/browse/FLINK-14274
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.10.0
>
>
> Now, we can get the metricGroup, cachedFile and jobParameter from the 
> FunctionContext in a UDF. If we expose the RuntimeContext from the 
> FunctionContext, users can get more useful variables such as state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-14274) RuntimeContext can be get from FunctionContext

2019-09-29 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-14274.
--
Resolution: Won't Fix

>  RuntimeContext can be get from FunctionContext
> ---
>
> Key: FLINK-14274
> URL: https://issues.apache.org/jira/browse/FLINK-14274
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.10.0
>
>
> Now, we can get the metricGroup, cachedFile and jobParameter from the 
> FunctionContext in a UDF. If we expose the RuntimeContext from the 
> FunctionContext, users can get more useful variables such as state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9815: [FLINK-14117][docs]Translate changes on documentation index page to Chinese

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9815: [FLINK-14117][docs]Translate changes 
on documentation index page to Chinese
URL: https://github.com/apache/flink/pull/9815#issuecomment-536343016
 
 
   
   ## CI report:
   
   * 615acdb2511760c55f8831934f710678a8962acc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129610971)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on issue #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-09-29 Thread GitBox
walterddr commented on issue #9373: [FLINK-13596][ml] Add two utils for Table 
transformations
URL: https://github.com/apache/flink/pull/9373#issuecomment-536348043
 
 
   Thanks for the contribution @xuyang1706. Looks like this PR depends on #9184, yes?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9355: [FLINK-13577][ml] Add an util class to build result row and generate …

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9355: [FLINK-13577][ml] Add an util class 
to build result row and generate …
URL: https://github.com/apache/flink/pull/9355#issuecomment-518089359
 
 
   
   ## CI report:
   
   * f8f3792f7ce55735f05200c90431b20b99a42d18 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121904586)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update 
Kinesis consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814#issuecomment-536328322
 
 
   
   ## CI report:
   
   * dafafa4f7974e75b63c1b2f353d9ead63c359ccf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129603380)
   * 40f87e7990abcb5963881c8a0aab561826433c5b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129610055)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #9355: [FLINK-13577][ml] Add an util class to build result row and generate …

2019-09-29 Thread GitBox
walterddr commented on a change in pull request #9355: [FLINK-13577][ml] Add an 
util class to build result row and generate …
URL: https://github.com/apache/flink/pull/9355#discussion_r329372704
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/OutputColsHelper.java
 ##
 @@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.types.Row;
+
+import org.apache.commons.lang3.ArrayUtils;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+
+/**
+ * Util for generating output schema.
+ *
+ * Need following information:
+ * 1) data schema
+ * 2) output column names
+ * 3) output column types
+ * 4) keep column names
+ *
+ * The following roles are followed:
+ * 1)If reserved columns is null, then reserve all columns from the origin 
dataSet.
+ * 2)If some of the reserved column names are the same as output column names, 
then they are
 
 Review comment:
   Does this mean you select a particular subset of columns to output? (And if 
null, output all from `outputColumnName`?)
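
   A hypothetical usage example to pin down the question (only the constructor shown in the diff above is used; the column names and types are made up):

   ```java
   import org.apache.flink.api.common.typeinfo.TypeInformation;
   import org.apache.flink.api.common.typeinfo.Types;
   import org.apache.flink.ml.common.utils.OutputColsHelper;
   import org.apache.flink.table.api.TableSchema;

   public class OutputColsHelperExample {

       public static void main(String[] args) {
           // data schema: [id, text, label]
           TableSchema dataSchema = TableSchema.builder()
               .field("id", Types.LONG)
               .field("text", Types.STRING)
               .field("label", Types.STRING)
               .build();

           // output columns  : [label, score]
           // reserved columns: [id, label]
           // By the rule in the Javadoc, the result columns should be
           // (reserved \ output) + output = [id] + [label, score] = [id, label, score],
           // with "label" taking the output value and type.
           new OutputColsHelper(
               dataSchema,
               new String[] {"label", "score"},
               new TypeInformation[] {Types.STRING, Types.DOUBLE},
               new String[] {"id", "label"});
       }
   }
   ```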


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #9355: [FLINK-13577][ml] Add an util class to build result row and generate …

2019-09-29 Thread GitBox
walterddr commented on a change in pull request #9355: [FLINK-13577][ml] Add an 
util class to build result row and generate …
URL: https://github.com/apache/flink/pull/9355#discussion_r329372857
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/OutputColsHelper.java
 ##
 @@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.types.Row;
+
+import org.apache.commons.lang3.ArrayUtils;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+
+/**
+ * Util for generating output schema.
+ *
+ * Need following information:
+ * 1) data schema
+ * 2) output column names
+ * 3) output column types
+ * 4) keep column names
+ *
+ * The following roles are followed:
+ * 1)If reserved columns is null, then reserve all columns from the origin 
dataSet.
+ * 2)If some of the reserved column names are the same as output column names, 
then they are
+ * replaced by the output at value and type, but with the kept column names 
have their order kept.
+ * 3)[result columns] = ([reserve columns] subtract [output columns]) + 
[output columns]
+ *
+ */
+public class OutputColsHelper implements Serializable {
+   private transient String[] dataColNames;
+   private transient TypeInformation[] dataColTypes;
+   private transient String[] outputColNames;
+   private transient TypeInformation[] outputColTypes;
+
+   private int resultLength;
+   private int[] reservedColIndices;
+   private int[] reservedToResultIndices;
+   private int[] outputToResultIndices;
+
+   public OutputColsHelper(TableSchema dataSchema, String outputColName, 
TypeInformation outputColType) {
+   this(dataSchema, outputColName, outputColType, null);
+   }
+
+   public OutputColsHelper(TableSchema dataSchema, String outputColName, 
TypeInformation outputColType,
+   String[] 
reservedColNames) {
+   this(dataSchema, new String[] {outputColName}, new 
TypeInformation[] {outputColType}, reservedColNames);
+   }
+
+   public OutputColsHelper(TableSchema dataSchema, String[] 
outputColNames, TypeInformation[] outputColTypes) {
+   this(dataSchema, outputColNames, outputColTypes, null);
+   }
+
+   public OutputColsHelper(TableSchema dataSchema, String[] 
outputColNames, TypeInformation[] outputColTypes,
+   String[] 
reservedColNames) {
+   this.dataColNames = dataSchema.getFieldNames();
+   this.dataColTypes = dataSchema.getFieldTypes();
+   this.outputColNames = outputColNames;
+   this.outputColTypes = outputColTypes;
+
+   HashSet<String> toReservedCols = new HashSet<>(
+   Arrays.asList(
+   reservedColNames == null ? this.dataColNames : 
reservedColNames
+   )
+   );
+
+   ArrayList<Integer> reservedColIndices = new ArrayList<>(toReservedCols.size());
+   ArrayList<Integer> reservedColToResultIndex = new ArrayList<>(toReservedCols.size());
+   outputToResultIndices = new int[outputColNames.length];
+   Arrays.fill(outputToResultIndices, -1);
+   int index = 0;
+   for (int i = 0; i < dataColNames.length; i++) {
+   int key = ArrayUtils.indexOf(outputColNames, 
dataColNames[i]);
+   if (key >= 0) {
+   outputToResultIndices[key] = index++;
+   continue;
+   }
+   if (toReservedCols.contains(dataColNames[i])) {
+   reservedColIndices.add(i);
+   reservedColToResultIndex.add(index++);
+   }
+   }
+   for (int i = 0; i < outputToResultIndices.length; i++) {
+   

[GitHub] [flink] walterddr commented on a change in pull request #9355: [FLINK-13577][ml] Add an util class to build result row and generate …

2019-09-29 Thread GitBox
walterddr commented on a change in pull request #9355: [FLINK-13577][ml] Add an 
util class to build result row and generate …
URL: https://github.com/apache/flink/pull/9355#discussion_r329372657
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/OutputColsHelper.java
 ##
 @@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.types.Row;
+
+import org.apache.commons.lang3.ArrayUtils;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+
+/**
+ * Util for generating output schema.
+ *
+ * Need following information:
+ * 1) data schema
+ * 2) output column names
+ * 3) output column types
+ * 4) keep column names
 
 Review comment:
   This should either be `reserved column names`, or the following Javadoc should 
change `reserved column` to `keep column`.
   
   Also, if we are using the word `keep`, I think it would be better to use `kept`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9815: [FLINK-14117][docs]Translate changes on documentation index page to Chinese

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9815: [FLINK-14117][docs]Translate changes 
on documentation index page to Chinese
URL: https://github.com/apache/flink/pull/9815#issuecomment-536343016
 
 
   
   ## CI report:
   
   * 615acdb2511760c55f8831934f710678a8962acc : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129610971)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update 
Kinesis consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814#issuecomment-536328322
 
 
   
   ## CI report:
   
   * dafafa4f7974e75b63c1b2f353d9ead63c359ccf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129603380)
   * 40f87e7990abcb5963881c8a0aab561826433c5b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129610055)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9815: [FLINK-14117][docs]Translate changes on documentation index page to Chinese

2019-09-29 Thread GitBox
flinkbot commented on issue #9815: [FLINK-14117][docs]Translate changes on 
documentation index page to Chinese
URL: https://github.com/apache/flink/pull/9815#issuecomment-536343016
 
 
   
   ## CI report:
   
   * 615acdb2511760c55f8831934f710678a8962acc : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9815: [FLINK-14117][docs]Translate changes on documentation index page to Chinese

2019-09-29 Thread GitBox
flinkbot commented on issue #9815: [FLINK-14117][docs]Translate changes on 
documentation index page to Chinese
URL: https://github.com/apache/flink/pull/9815#issuecomment-536341137
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 615acdb2511760c55f8831934f710678a8962acc (Sun Sep 29 
21:06:34 UTC 2019)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update 
Kinesis consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814#issuecomment-536328322
 
 
   
   ## CI report:
   
   * dafafa4f7974e75b63c1b2f353d9ead63c359ccf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129603380)
   * 40f87e7990abcb5963881c8a0aab561826433c5b : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14117) Translate changes on documentation index page to Chinese

2019-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14117:
---
Labels: pull-request-available  (was: )

> Translate changes on documentation index page to Chinese
> 
>
> Key: FLINK-14117
> URL: https://issues.apache.org/jira/browse/FLINK-14117
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: Fabian Hueske
>Assignee: Ricco Chen
>Priority: Major
>  Labels: pull-request-available
>
> The changes of commit 
> [ee0d6fdf0604d74bd1cf9a6eb9cf5338ac1aa4f9|https://github.com/apache/flink/commit/ee0d6fdf0604d74bd1cf9a6eb9cf5338ac1aa4f9#diff-1a523bd9fa0dbf998008b37579210e12]
>  on the documentation index page should be translated to Chinese.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] koonchen opened a new pull request #9815: [FLINK-14117][docs]Translate changes on documentation index page to Chinese

2019-09-29 Thread GitBox
koonchen opened a new pull request #9815: [FLINK-14117][docs]Translate changes 
on documentation index page to Chinese
URL: https://github.com/apache/flink/pull/9815
 
 
   ## What is the purpose of the change
   
   Translate changes on documentation index page to Chinese
   
   ## Brief change log
   
   - Update docs/index.zh.md


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update 
Kinesis consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814#issuecomment-536328322
 
 
   
   ## CI report:
   
   * dafafa4f7974e75b63c1b2f353d9ead63c359ccf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129603380)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address from LeaderContender

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address 
from LeaderContender 
URL: https://github.com/apache/flink/pull/9813#issuecomment-536324105
 
 
   
   ## CI report:
   
   * b7a7e7b6dc8a16d23097911aa98852e2cf4d9c00 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129601641)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9814: [FLINK-14283][kinesis][docs] Update 
Kinesis consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814#issuecomment-536328322
 
 
   
   ## CI report:
   
   * dafafa4f7974e75b63c1b2f353d9ead63c359ccf : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129603380)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-09-29 Thread GitBox
flinkbot commented on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis 
consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814#issuecomment-536328322
 
 
   
   ## CI report:
   
   * dafafa4f7974e75b63c1b2f353d9ead63c359ccf : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-09-29 Thread GitBox
flinkbot commented on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis 
consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814#issuecomment-536326818
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit dafafa4f7974e75b63c1b2f353d9ead63c359ccf (Sun Sep 29 
18:12:22 UTC 2019)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14283) Update Kinesis consumer documentation for watermarks and event time alignment

2019-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14283:
---
Labels: pull-request-available  (was: )

> Update Kinesis consumer documentation for watermarks and event time alignment
> -
>
> Key: FLINK-14283
> URL: https://issues.apache.org/jira/browse/FLINK-14283
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Kinesis
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Periodic per-shard watermarking and event time alignment have been added over 
> the past releases, but the documentation has not been updated.
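> For context, a rough sketch of how the two features are wired up from the user
> side (not taken from this PR): the stream name `my-stream`, the region, the CSV
> timestamp layout and the tracker name are made up for illustration, and the
> `setPeriodicWatermarkAssigner` / `setWatermarkTracker` hooks are used as I
> understand the Kinesis connector API around this release.
>
> ```java
> import java.util.Properties;
>
> import org.apache.flink.api.common.serialization.SimpleStringSchema;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
> import org.apache.flink.streaming.api.windowing.time.Time;
> import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
> import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;
> import org.apache.flink.streaming.connectors.kinesis.util.JobManagerWatermarkTracker;
>
> public class KinesisPerShardWatermarkSketch {
>
>     public static void main(String[] args) throws Exception {
>         final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>
>         // Hypothetical consumer configuration; only region and start position are set here.
>         Properties consumerConfig = new Properties();
>         consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1");
>         consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
>
>         FlinkKinesisConsumer<String> consumer =
>                 new FlinkKinesisConsumer<>("my-stream", new SimpleStringSchema(), consumerConfig);
>
>         // Periodic per-shard watermarks: the assigner is evaluated inside the consumer,
>         // once per shard, rather than once per source subtask.
>         consumer.setPeriodicWatermarkAssigner(
>                 new BoundedOutOfOrdernessTimestampExtractor<String>(Time.seconds(10)) {
>                     @Override
>                     public long extractTimestamp(String record) {
>                         // Assumed record layout: epoch millis in the first CSV field.
>                         return Long.parseLong(record.split(",")[0]);
>                     }
>                 });
>
>         // Event time alignment: a JobManager-backed tracker shares a global watermark
>         // so consumers of fast shards hold back instead of racing ahead of slow ones.
>         consumer.setWatermarkTracker(new JobManagerWatermarkTracker("kinesis-watermarks"));
>
>         env.addSource(consumer).print();
>         env.execute("Kinesis per-shard watermark sketch");
>     }
> }
> ```
>
> The per-shard assigner takes the place of a downstream `assignTimestampsAndWatermarks`
> step, and the tracker is what enables alignment across subtasks; the updated docs in
> this PR should be treated as the authoritative reference for the exact configuration keys.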



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] tweise opened a new pull request #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-09-29 Thread GitBox
tweise opened a new pull request #9814: [FLINK-14283][kinesis][docs] Update 
Kinesis consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814
 
 
   ## What is the purpose of the change
   
   Documentation update for Kinesis consumer.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address from LeaderContender

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9813: [FLINK-14287] Decouple leader address 
from LeaderContender 
URL: https://github.com/apache/flink/pull/9813#issuecomment-536324105
 
 
   
   ## CI report:
   
   * b7a7e7b6dc8a16d23097911aa98852e2cf4d9c00 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129601641)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9813: [FLINK-14287] Decouple leader address from LeaderContender

2019-09-29 Thread GitBox
flinkbot commented on issue #9813: [FLINK-14287] Decouple leader address from 
LeaderContender 
URL: https://github.com/apache/flink/pull/9813#issuecomment-536324105
 
 
   
   ## CI report:
   
   * b7a7e7b6dc8a16d23097911aa98852e2cf4d9c00 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9810: [FLINK-14284] Add shut down future to Dispatcher

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9810: [FLINK-14284] Add shut down future to 
Dispatcher
URL: https://github.com/apache/flink/pull/9810#issuecomment-536320582
 
 
   
   ## CI report:
   
   * 3ea0f47e7dd76ec115db4ef583b416685107604b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129600085)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9812: [FLINK-14286] Remove Akka specific parsing from LeaderConnectionInfo

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9812: [FLINK-14286] Remove Akka specific 
parsing from LeaderConnectionInfo
URL: https://github.com/apache/flink/pull/9812#issuecomment-536322371
 
 
   
   ## CI report:
   
   * c1ca1131b02a2c555cd8386fd07aa1cfccbab161 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129600831)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9809: [FLINK-14282] Simplify DispatcherResourceManagerComponent hierarchy

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9809: [FLINK-14282] Simplify 
DispatcherResourceManagerComponent hierarchy
URL: https://github.com/apache/flink/pull/9809#issuecomment-536318695
 
 
   
   ## CI report:
   
   * 531cd688ceb25b544833025d9b556a0d686b29c4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129599298)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9812: [FLINK-14286] Remove Akka specific parsing from LeaderConnectionInfo

2019-09-29 Thread GitBox
flinkbot commented on issue #9812: [FLINK-14286] Remove Akka specific parsing 
from LeaderConnectionInfo
URL: https://github.com/apache/flink/pull/9812#issuecomment-536322371
 
 
   
   ## CI report:
   
   * c1ca1131b02a2c555cd8386fd07aa1cfccbab161 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9811: [FLINK-14285] Remove generics from Dispatcher factories

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9811: [FLINK-14285] Remove generics from 
Dispatcher factories 
URL: https://github.com/apache/flink/pull/9811#issuecomment-536320588
 
 
   
   ## CI report:
   
   * 658cddd4424505c1a904a926ad6e626abbd85cd7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129600090)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9810: [FLINK-14284] Add shut down future to Dispatcher

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9810: [FLINK-14284] Add shut down future to 
Dispatcher
URL: https://github.com/apache/flink/pull/9810#issuecomment-536320582
 
 
   
   ## CI report:
   
   * 3ea0f47e7dd76ec115db4ef583b416685107604b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/129600085)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9808: [FLINK-14281] Add DispatcherRunner#getShutDownFuture

2019-09-29 Thread GitBox
flinkbot edited a comment on issue #9808: [FLINK-14281] Add 
DispatcherRunner#getShutDownFuture
URL: https://github.com/apache/flink/pull/9808#issuecomment-536316924
 
 
   
   ## CI report:
   
   * 02c93d5d7d69324c33461040fec5c4014eb2b8d6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129598507)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9813: [FLINK-14287] Decouple leader address from LeaderContender

2019-09-29 Thread GitBox
flinkbot commented on issue #9813: [FLINK-14287] Decouple leader address from 
LeaderContender 
URL: https://github.com/apache/flink/pull/9813#issuecomment-536322127
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b7a7e7b6dc8a16d23097911aa98852e2cf4d9c00 (Sun Sep 29 
17:18:32 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

