[GitHub] [flink] wangyang0918 commented on a change in pull request #10143: [FLINK-13184]Starting a TaskExecutor blocks the YarnResourceManager's main thread

2019-11-13 Thread GitBox
wangyang0918 commented on a change in pull request #10143: 
[FLINK-13184]Starting a TaskExecutor blocks the YarnResourceManager's main 
thread
URL: https://github.com/apache/flink/pull/10143#discussion_r346167378
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManager.java
 ##
 @@ -322,11 +323,7 @@ Resource getContainerResource() {
	public boolean stopWorker(final YarnWorkerNode workerNode) {
		final Container container = workerNode.getContainer();
		log.info("Stopping container {}.", container.getId());
-		try {
-			nodeManagerClient.stopContainer(container.getId(), container.getNodeId());
-		} catch (final Exception e) {
-			log.warn("Error while calling YARN Node Manager to stop container", e);
-		}
+		nodeManagerClient.stopContainerAsync(container.getId(), container.getNodeId());
 
 Review comment:
   The exception is not thrown by `startContainerAsync`, but by 
`createTaskExecutorLaunchContext`.
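For context, the difference this PR is after can be sketched without YARN: a synchronous stop occupies the calling thread until the call finishes, while the async variant hands the blocking work to a separate executor and returns immediately. Everything below is illustrative stand-in code, not Flink's or Hadoop's actual API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncStopSketch {

    // Stand-in for a slow NodeManager RPC: blocks until the gate opens.
    static void blockingStopContainer(CountDownLatch gate) throws InterruptedException {
        gate.await();
    }

    // Async variant: the caller (e.g. the RM main thread) returns at once;
    // the blocking call runs on a dedicated executor instead.
    static CompletableFuture<Void> stopContainerAsync(CountDownLatch gate, ExecutorService executor) {
        return CompletableFuture.runAsync(() -> {
            try {
                blockingStopContainer(gate);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, executor);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        CountDownLatch gate = new CountDownLatch(1);

        CompletableFuture<Void> stopped = stopContainerAsync(gate, executor);
        // The "main thread" reached this line without waiting for the RPC.
        System.out.println("returned immediately, done=" + stopped.isDone());

        gate.countDown(); // let the simulated RPC complete
        stopped.join();
        executor.shutdown();
    }
}
```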


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let YarnJobClusterEntrypoint use user code class loader

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let 
YarnJobClusterEntrypoint use user code class loader
URL: https://github.com/apache/flink/pull/10152#issuecomment-552497066
 
 
   
   ## CI report:
   
   * 5875fa6d987995f327cffae7912c47f4dc51e944 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135953858)
   * 1c9b982ef3ee82b3088ab2c6bf1c48971ad79cc8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136142763)
   * 7e5b99825bc1fd7ffe04163158e5cfcb7164bfb9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136272745)
   * a4ced0f532ca317e3495d35758faf46c0252d44b : UNKNOWN
   * 537133983af86ac1a25c784523be74b012ec8ee3 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10113: [FLINK-13749][client] Make PackagedProgram respect classloading policy

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10113: [FLINK-13749][client] Make 
PackagedProgram respect classloading policy
URL: https://github.com/apache/flink/pull/10113#issuecomment-550973543
 
 
   
   ## CI report:
   
   * fd65bc500b5539da9560f364ab6e8b1b71b2c1a8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135402939)
   * 46beb179d0ccc6c1065aecf66f937a8de2c443ea : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135888088)
   * ebbe3babc08bc88afd5680362e0ce54732b5a0bf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135891104)
   * afd5eac118548e16c60d18fcd5a7269b57ee03b2 : UNKNOWN
   
   
   




[jira] [Commented] (FLINK-14767) Remove TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR

2019-11-13 Thread vinoyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973997#comment-16973997
 ] 

vinoyang commented on FLINK-14767:
--

[~gjy] WDYT?

> Remove TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR
> --
>
> Key: FLINK-14767
> URL: https://issues.apache.org/jira/browse/FLINK-14767
> Project: Flink
>  Issue Type: Sub-task
>Reporter: vinoyang
>Priority: Major
>
> Since {{TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR}} is no longer used, 
> IMO we can remove this config option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14766) Remove volatile variable in ExecutionVertex

2019-11-13 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973996#comment-16973996
 ] 

Zhu Zhu commented on FLINK-14766:
-

[~wind_ljy] According to the Flink bylaws, please wait until a committer assigns 
you the ticket. 

> Remove volatile variable in ExecutionVertex
> ---
>
> Key: FLINK-14766
> URL: https://issues.apache.org/jira/browse/FLINK-14766
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Priority: Major
>
> Since operations are already single-threaded in {{ExecutionVertex}}, I 
> think we can remove the {{volatile}} modifier on {{currentExecution}} and 
> {{locationConstraint}}. 
>  
> The same applies to {{Execution}}.
>  
> cc [~zhuzh]





[jira] [Comment Edited] (FLINK-14766) Remove volatile variable in ExecutionVertex

2019-11-13 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973996#comment-16973996
 ] 

Zhu Zhu edited comment on FLINK-14766 at 11/14/19 7:53 AM:
---

[~wind_ljy] According to the Flink bylaws, please hold off on the PR until a 
committer assigns you the ticket. 


was (Author: zhuzh):
[~wind_ljy] According to Flink bylaw, please wait until you are assigned the 
ticket by the committer. 

> Remove volatile variable in ExecutionVertex
> ---
>
> Key: FLINK-14766
> URL: https://issues.apache.org/jira/browse/FLINK-14766
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Priority: Major
>
> Since operations are already single-threaded in {{ExecutionVertex}}, I 
> think we can remove the {{volatile}} modifier on {{currentExecution}} and 
> {{locationConstraint}}. 
>  
> The same applies to {{Execution}}.
>  
> cc [~zhuzh]





[jira] [Created] (FLINK-14767) Remove TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR

2019-11-13 Thread vinoyang (Jira)
vinoyang created FLINK-14767:


 Summary: Remove TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR
 Key: FLINK-14767
 URL: https://issues.apache.org/jira/browse/FLINK-14767
 Project: Flink
  Issue Type: Sub-task
Reporter: vinoyang


Since {{TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR}} is no longer used, IMO we 
can remove this config option.





[GitHub] [flink] flinkbot edited a comment on issue #10079: [FLINK-14594] Fix matching logics of ResourceSpec/ResourceProfile/Resource considering double values

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10079: [FLINK-14594] Fix matching logics of 
ResourceSpec/ResourceProfile/Resource considering double values
URL: https://github.com/apache/flink/pull/10079#issuecomment-549401980
 
 
   
   ## CI report:
   
   * 5b7e9e832ed26f476d0149663b2f0dec9f1bc427 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134874614)
   * 6915be3cf16f83c6850eb18115a333a4a1206080 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135161948)
   * 9627a892cd7076abf01174376dbcdfc9baf41c13 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135207817)
   * ab02aa81dd36f2836b958aec9e8370592dfeff2c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135445772)
   * 0c2901d2a1868d21842da49b578f5f68afbee911 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136060694)
   * 0bedd0f55447262655f03fd62dbe7023427c4044 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136137227)
   * c7a7b2ec081467b22458a494c6aec766d0bfeda2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136155232)
   * 4def7302330a8a3df33d5b410483440f90eebc09 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136161242)
   * bb562e32bc65a3c1355970d73948bd911c0e7f70 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136275151)
   * 2456b6e0c98433a8c2c988f704083e3209a68205 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136375186)
   * 34c92c28d42b726faf3692b895df1d1fe79d6bc6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/136468849)
   
   
   




[jira] [Commented] (FLINK-14594) Fix matching logics of ResourceSpec/ResourceProfile/Resource considering double values

2019-11-13 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973994#comment-16973994
 ] 

Zhu Zhu commented on FLINK-14594:
-

Thanks for the proposal [~azagrebin]. With your proposed changes we can do 
resource calculations and checks without worrying about precision, so we can 
keep using #equals to compare resources, and the commits that replace usages of 
#equals 
([8758104|https://github.com/apache/flink/pull/10079/commits/875810423ae7853d8499fe57712d5cb66db59c92]
 and 
[34c92c2|https://github.com/apache/flink/pull/10079/commits/34c92c28d42b726faf3692b895df1d1fe79d6bc6])
 are no longer needed.
Using {{BigDecimal}} along those lines should be a good choice.
I will give it a try.
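The delta problem is easy to reproduce with plain doubles. A minimal sketch of a tolerance-aware comparison alongside the {{BigDecimal}} route (the epsilon here is an arbitrary illustration, not a precision Flink has settled on):

```java
import java.math.BigDecimal;

public class ResourcePrecisionSketch {

    // Illustrative tolerance; the actual precision per resource is a design choice.
    static final double EPSILON = 1e-9;

    // Treat two double-valued resources as equal if they differ by at most EPSILON.
    static boolean hasSameValue(double a, double b) {
        return Math.abs(a - b) <= EPSILON;
    }

    public static void main(String[] args) {
        double d1 = 0.1;
        double d2 = 0.2;

        // (d1 + d2) - d1 does not round-trip exactly in binary floating point.
        double remaining = (d1 + d2) - d1;
        System.out.println(remaining == d2);             // false
        System.out.println(hasSameValue(remaining, d2)); // true

        // With BigDecimal the arithmetic is exact for decimal inputs.
        BigDecimal exact = new BigDecimal("0.1")
            .add(new BigDecimal("0.2"))
            .subtract(new BigDecimal("0.1"));
        System.out.println(exact.compareTo(new BigDecimal("0.2")) == 0); // true
    }
}
```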

> Fix matching logics of ResourceSpec/ResourceProfile/Resource considering 
> double values
> --
>
> Key: FLINK-14594
> URL: https://issues.apache.org/jira/browse/FLINK-14594
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some resources have double-typed values, like cpuCores in 
> ResourceSpec/ResourceProfile or any extended resources. These values can be 
> produced by a merge or a subtract, so small deltas can appear.
> Currently, in resource matching, these resources are matched without 
> considering the deltas, which may result in issues as below:
> 1. A shared slot cannot fulfill a slot request even if it should be able to 
> (because it is possible that {{(d1 + d2) - d1 < d2}} for double values)
> 2. if a shared slot is used up, an unexpected error may occur when 
> calculating its remaining resources in 
> SlotSharingManager#listResolvedRootSlotInfo -> ResourceProfile#subtract
> 3. an unexpected error may happen when releasing a single task slot from a 
> shared slot (in ResourceProfile#subtract)
> To solve this issue, I'd propose to:
> 1.  Introduce a ResourceValue which stores a double value and its acceptable 
> precision (the same kind of resource should use the same precision). It 
> provides {{compareTo}} method, in which two ResourceValues are considered 
> equal if the subtracted abs does not exceed the precision. It also provides 
> merge/subtract/validation operations.
> 2. ResourceSpec/ResourceProfile uses ResourceValue for cpuCores and fix 
> related logics (ctor/validation/subtract/matching). The usages of {{equals}} 
> should be replaced with another method {{hasSameResources}} which considers 
> the precision.
> 3. Resource uses ResourceValue to store its value. Also fix related logics.
> cc [~trohrmann] [~azagrebin] [~xintongsong]





[jira] [Updated] (FLINK-14717) JobExceptionsHandler show exceptions of prior attempts

2019-11-13 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14717:
---
Description: 
*Current*

The job's exceptions page only shows the current attempt's exceptions in the web 
UI.

If the job fails over, we can't see any prior attempts' exceptions.

*Proposal*

We could use executionVertex.getPriorExecutionAttempt to get the prior attempts 
in the JobExceptionsHandler.
{code:java}
for (int i = task.getAttemptNumber() - 1; i >= 0; i--) {
  task = executionVertex.getPriorExecutionAttempt(i);
}
{code}
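A rough sketch of how the handler could walk the prior attempts. The classes below are simplified stand-ins for the Flink types (the real {{ExecutionVertex}} API is richer), so treat this as an illustration of the loop, not the actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class PriorAttemptsSketch {

    // Simplified stand-in for Flink's Execution.
    static class Execution {
        final int attemptNumber;
        final String failureCause; // null if the attempt did not fail

        Execution(int attemptNumber, String failureCause) {
            this.attemptNumber = attemptNumber;
            this.failureCause = failureCause;
        }
    }

    // Simplified stand-in for Flink's ExecutionVertex.
    static class ExecutionVertex {
        final List<Execution> attempts = new ArrayList<>();

        Execution getCurrentExecutionAttempt() {
            return attempts.get(attempts.size() - 1);
        }

        Execution getPriorExecutionAttempt(int attemptNumber) {
            return attempts.get(attemptNumber);
        }
    }

    // Collect failure causes of the current attempt and all prior ones,
    // newest first, following the loop proposed in the description.
    static List<String> collectExceptions(ExecutionVertex vertex) {
        List<String> causes = new ArrayList<>();
        Execution task = vertex.getCurrentExecutionAttempt();
        if (task.failureCause != null) {
            causes.add(task.failureCause);
        }
        for (int i = task.attemptNumber - 1; i >= 0; i--) {
            Execution prior = vertex.getPriorExecutionAttempt(i);
            if (prior.failureCause != null) {
                causes.add(prior.failureCause);
            }
        }
        return causes;
    }

    public static void main(String[] args) {
        ExecutionVertex vertex = new ExecutionVertex();
        vertex.attempts.add(new Execution(0, "oom"));
        vertex.attempts.add(new Execution(1, "timeout"));
        vertex.attempts.add(new Execution(2, null)); // current attempt still running
        System.out.println(collectExceptions(vertex)); // [timeout, oom]
    }
}
```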

  was:
Now JobExceptionsHandler just shows current attempt exceptions. If job 
failovers, there may some prior attempts. But they didn't show in 
JobExceptionsHandler. We could use executionVertex.getPriorExecutionAttempt to 
get prior attempt.
{code:java}
for (int i = task.getAttemptNumber() - 1; i >= 0; i--) {
  task = executionVertex.getPriorExecutionAttempt(i);
}
{code}


> JobExceptionsHandler show exceptions of prior  attempts 
> 
>
> Key: FLINK-14717
> URL: https://issues.apache.org/jira/browse/FLINK-14717
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lining
>Priority: Major
>
> *Current*
> The job's exceptions page only shows the current attempt's exceptions in the 
> web UI.
> If the job fails over, we can't see any prior attempts' exceptions.
> *Proposal*
> We could use executionVertex.getPriorExecutionAttempt to get the prior attempts 
> in the JobExceptionsHandler.
> {code:java}
> for (int i = task.getAttemptNumber() - 1; i >= 0; i--) {
>   task = executionVertex.getPriorExecutionAttempt(i);
> }
> {code}





[jira] [Updated] (FLINK-14766) Remove volatile variable in ExecutionVertex

2019-11-13 Thread vinoyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang updated FLINK-14766:
-
Parent: FLINK-12036
Issue Type: Sub-task  (was: Improvement)

> Remove volatile variable in ExecutionVertex
> ---
>
> Key: FLINK-14766
> URL: https://issues.apache.org/jira/browse/FLINK-14766
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Priority: Major
>
> Since operations are already single-threaded in {{ExecutionVertex}}, I 
> think we can remove the {{volatile}} modifier on {{currentExecution}} and 
> {{locationConstraint}}. 
>  
> The same applies to {{Execution}}.
>  
> cc [~zhuzh]





[jira] [Commented] (FLINK-14766) Remove volatile variable in ExecutionVertex

2019-11-13 Thread Jiayi Liao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973991#comment-16973991
 ] 

Jiayi Liao commented on FLINK-14766:


[~zhuzh] I'll submit a PR soon :)

> Remove volatile variable in ExecutionVertex
> ---
>
> Key: FLINK-14766
> URL: https://issues.apache.org/jira/browse/FLINK-14766
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Priority: Major
>
> Since operations are already single-threaded in {{ExecutionVertex}}, I 
> think we can remove the {{volatile}} modifier on {{currentExecution}} and 
> {{locationConstraint}}. 
>  
> The same applies to {{Execution}}.
>  
> cc [~zhuzh]





[GitHub] [flink] AT-Fieldless commented on a change in pull request #10184: [FLINK-14481]Modify the Flink valid socket port check to 0 to 65535.

2019-11-13 Thread GitBox
AT-Fieldless commented on a change in pull request #10184: [FLINK-14481]Modify 
the Flink valid socket port check to 0 to 65535.
URL: https://github.com/apache/flink/pull/10184#discussion_r346161694
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java
 ##
 @@ -229,7 +229,7 @@ public void shutdown(Time timeout) {
			Collection<FileUpload> fileUploads,
			RestAPIVersion apiVersion) throws IOException {
		Preconditions.checkNotNull(targetAddress);
-		Preconditions.checkArgument(0 <= targetPort && targetPort < 65536, "The target port " + targetPort + " is not in the range (0, 65536].");
+		Preconditions.checkArgument(0 <= targetPort && targetPort < 65536, "The target port " + targetPort + " is not in the range [0, 65536).");
 
 Review comment:
   Well, I will take your advice.
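For reference, a self-contained version of the corrected check; valid port numbers run from 0 to 65535, which the message now states as the half-open interval [0, 65536):

```java
public class PortCheck {

    // Mirrors the Preconditions.checkArgument above: accept ports in [0, 65535].
    static int checkPort(int port) {
        if (port < 0 || port >= 65536) {
            throw new IllegalArgumentException(
                "The target port " + port + " is not in the range [0, 65536).");
        }
        return port;
    }

    public static void main(String[] args) {
        System.out.println(checkPort(0));     // 0 is allowed ("any port")
        System.out.println(checkPort(65535)); // highest valid port
    }
}
```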




[jira] [Updated] (FLINK-14730) Add pending slots for job

2019-11-13 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14730:
---
Description: 
*Current*

If the resource requested by the job can't be satisfied by the cluster, the job 
will remain in the scheduling state.

The user can't tell which slot request is blocking the scheduler.

*Proposal*

We could add a rest handler to show information about pending requests in 
SlotPoolImpl.

  was:
*Current*

if the resource requested by the job can‘t be satisfied by the cluster, the job 
will remain in scheduling state.

A user couldn't know the scheduler is blocked by which slot request.

*Proposal*

We could add a rest handler to show information about pending requests in 
SlotPoolImpl.


> Add pending slots for job
> -
>
> Key: FLINK-14730
> URL: https://issues.apache.org/jira/browse/FLINK-14730
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: lining
>Priority: Major
>
> *Current*
> If the resource requested by the job can't be satisfied by the cluster, the 
> job will remain in the scheduling state.
> The user can't tell which slot request is blocking the scheduler.
> *Proposal*
> We could add a rest handler to show information about pending requests in 
> SlotPoolImpl.





[jira] [Updated] (FLINK-14730) Add pending slots for job

2019-11-13 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14730:
---
Description: 
*Current*

if the resource requested by the job can‘t be satisfied by the cluster, the job 
will remain in scheduling state.

A user couldn't know the scheduler is blocked by which slot request.

*Proposal*

We could add a rest handler to show information about pending requests in 
SlotPoolImpl.

  was:
*Current*

if the resource requested by the job can‘t be satisfied by the cluster, the job 
will remain in the scheduling state.

A user couldn't tell which slotRequest was blocking the scheduler.

This is because there are some pendingSlotRequests, but there is currently no 
such information.


> Add pending slots for job
> -
>
> Key: FLINK-14730
> URL: https://issues.apache.org/jira/browse/FLINK-14730
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: lining
>Priority: Major
>
> *Current*
> if the resource requested by the job can‘t be satisfied by the cluster, the 
> job will remain in scheduling state.
> A user couldn't know the scheduler is blocked by which slot request.
> *Proposal*
> We could add a rest handler to show information about pending requests in 
> SlotPoolImpl.





[jira] [Commented] (FLINK-14766) Remove volatile variable in ExecutionVertex

2019-11-13 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973986#comment-16973986
 ] 

Zhu Zhu commented on FLINK-14766:
-

Sounds good to me.
cc [~gjy]

> Remove volatile variable in ExecutionVertex
> ---
>
> Key: FLINK-14766
> URL: https://issues.apache.org/jira/browse/FLINK-14766
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Priority: Major
>
> Since operations are already single-threaded in {{ExecutionVertex}}, I 
> think we can remove the {{volatile}} modifier on {{currentExecution}} and 
> {{locationConstraint}}. 
>  
> The same applies to {{Execution}}.
>  
> cc [~zhuzh]





[GitHub] [flink] dmvk commented on a change in pull request #10175: [FLINK-14746][web] Handle uncaught exceptions in HistoryServerArchive…

2019-11-13 Thread GitBox
dmvk commented on a change in pull request #10175: [FLINK-14746][web] Handle 
uncaught exceptions in HistoryServerArchive…
URL: https://github.com/apache/flink/pull/10175#discussion_r346160924
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ScheduledFutures.java
 ##
 @@ -0,0 +1,58 @@
+package org.apache.flink.runtime.util;
+
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.ThreadFactory;
+import org.apache.flink.shaded.guava18.com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+/**
+ * Utils related to {@link ScheduledFuture scheduled futures}.
+ */
+public class ScheduledFutures {
+
+	private static final ThreadFactory THREAD_FACTORY = new ThreadFactoryBuilder()
+		.setNameFormat("Flink-Scheduled-Future-SafeGuard")
+		.setDaemon(true)
+		.build();
+
+	/**
+	 * Guard {@link ScheduledFuture} with scheduled future uncaughtException handler, because
+	 * {@link java.util.concurrent.ScheduledExecutorService} does not respect the one assigned to
+	 * executing {@link Thread} instance.
+	 *
+	 * @param scheduledFuture Scheduled future to guard.
+	 * @param uncaughtExceptionHandler Handler to call in case of uncaught exception.
+	 * @param <T> Type the future returns.
+	 * @return Future with handler.
+	 */
+	public static <T> ScheduledFuture<T> withUncaughtExceptionHandler(
+			ScheduledFuture<T> scheduledFuture,
+			Thread.UncaughtExceptionHandler uncaughtExceptionHandler) {
+		final Thread safeguardThread = THREAD_FACTORY.newThread(() -> {
+			try {
+				scheduledFuture.get();
+			} catch (InterruptedException e) {
+				Thread.currentThread().interrupt();
 
 Review comment:
   We need to catch it because `Runnable` does not declare 
`InterruptedException` in its signature; we could replace 
`Thread.currentThread().interrupt();` with `// noop`, though.
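The alternative discussed in this review, wrapping the scheduled `Runnable` itself so a `ScheduledExecutorService` cannot silently capture the failure in its `ScheduledFuture`, might look roughly like this (a sketch, not the code under review):

```java
import java.util.concurrent.atomic.AtomicReference;

public class RunnableGuardSketch {

    // Route any Throwable thrown by the task to the given handler instead of
    // letting it disappear inside the executor's ScheduledFuture.
    static Runnable guard(Runnable task, Thread.UncaughtExceptionHandler handler) {
        return () -> {
            try {
                task.run();
            } catch (Throwable t) {
                handler.uncaughtException(Thread.currentThread(), t);
            }
        };
    }

    public static void main(String[] args) {
        AtomicReference<Throwable> seen = new AtomicReference<>();
        Runnable guarded = guard(
            () -> { throw new RuntimeException("boom"); },
            (thread, t) -> seen.set(t));
        guarded.run(); // does not propagate; the handler records the failure
        System.out.println(seen.get().getMessage()); // boom
    }
}
```

The guarded `Runnable` can then be handed to `scheduleAtFixedRate` as usual; periodic scheduling also keeps running because the wrapper never lets a `Throwable` escape.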





[GitHub] [flink] dmvk commented on a change in pull request #10175: [FLINK-14746][web] Handle uncaught exceptions in HistoryServerArchive…

2019-11-13 Thread GitBox
dmvk commented on a change in pull request #10175: [FLINK-14746][web] Handle 
uncaught exceptions in HistoryServerArchive…
URL: https://github.com/apache/flink/pull/10175#discussion_r346160519
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ScheduledFutures.java
 ##
 @@ -0,0 +1,58 @@
+package org.apache.flink.runtime.util;
+
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.ThreadFactory;
+import org.apache.flink.shaded.guava18.com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+/**
+ * Utils related to {@link ScheduledFuture scheduled futures}.
+ */
+public class ScheduledFutures {
+
+	private static final ThreadFactory THREAD_FACTORY = new ThreadFactoryBuilder()
+		.setNameFormat("Flink-Scheduled-Future-SafeGuard")
+		.setDaemon(true)
+		.build();
+
+	/**
+	 * Guard {@link ScheduledFuture} with scheduled future uncaughtException handler, because
+	 * {@link java.util.concurrent.ScheduledExecutorService} does not respect the one assigned to
+	 * executing {@link Thread} instance.
+	 *
+	 * @param scheduledFuture Scheduled future to guard.
+	 * @param uncaughtExceptionHandler Handler to call in case of uncaught exception.
+	 * @param <T> Type the future returns.
+	 * @return Future with handler.
+	 */
+	public static <T> ScheduledFuture<T> withUncaughtExceptionHandler(
+			ScheduledFuture<T> scheduledFuture,
+			Thread.UncaughtExceptionHandler uncaughtExceptionHandler) {
+		final Thread safeguardThread = THREAD_FACTORY.newThread(() -> {
 
 Review comment:
   We need a separate thread to `get()` the result of the `ScheduledFuture`, 
which is a blocking operation. 
   
   Wrapping the scheduled `Runnable` (in one that try/catches `Throwable` and 
calls the UEH) instead of guarding the `ScheduledFuture` should get us the same result.
   
   I'm OK with either approach. Which one do you prefer?




[GitHub] [flink] flinkbot edited a comment on issue #10187: [FLINK-13938][yarn] Enable configure shared libraries on YARN

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10187: [FLINK-13938][yarn] Enable configure 
shared libraries on YARN
URL: https://github.com/apache/flink/pull/10187#issuecomment-553746982
 
 
   
   ## CI report:
   
   * e1c1980b8e8715f9888d0cacb4eaa8bcd53a4af9 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460294)
   * 5363eb505b2f59abd62f3ce5b3c975e001b07593 : UNKNOWN
   
   
   




[GitHub] [flink] flinkbot commented on issue #10188: [FLINK-14066][python] Support to install pyflink in windows

2019-11-13 Thread GitBox
flinkbot commented on issue #10188: [FLINK-14066][python] Support to install 
pyflink in windows
URL: https://github.com/apache/flink/pull/10188#issuecomment-553762988
 
 
   
   ## CI report:
   
   * dc05221d34c50dbb1ec98a473eef66b4181eb8f5 : UNKNOWN
   
   
   




[jira] [Commented] (FLINK-13938) Use yarn public distributed cache to speed up containers launch

2019-11-13 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973983#comment-16973983
 ] 

Zili Chen commented on FLINK-13938:
---

I've sent a pull request for this issue, #10187. Please take a look and review 
it.

cc [~fly_in_gis]

> Use yarn public distributed cache to speed up containers launch
> ---
>
> Key: FLINK-13938
> URL: https://issues.apache.org/jira/browse/FLINK-13938
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / YARN
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> By default, the LocalResourceVisibility is APPLICATION, so resources are 
> downloaded only once and shared by all TaskManager containers of the same 
> application on the same node. However, different applications have to download 
> all the jars every time, including flink-dist.jar. I think we could use the 
> YARN public cache to eliminate the unnecessary jar downloads and make 
> container launching faster.
>  
> How to use the shared lib feature?
>  # Upload a copy of flink release binary to hdfs.
>  # Use the -ysl argument to specify the shared lib
> {code:java}
> ./bin/flink run -d -m yarn-cluster -p 20 -ysl 
> hdfs:///flink/release/flink-1.9.0/lib examples/streaming/WindowJoin.jar{code}
>  
> -ysl, --yarnsharedLib           Upload a copy of flink lib beforehand
>                                                           and specify the 
> path to use public
>                                                           visibility feature 
> of YARN NodeManager
>                                                           localizing 
> resources.
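To make concrete why PUBLIC visibility speeds up container launches, here is a toy, self-contained Java model of a NodeManager's localization cache. All class, method, and path names below are illustrative stand-ins, not the Hadoop YARN API: a PUBLIC resource is cached once per node and reused by every application, while an APPLICATION resource is keyed per application and re-downloaded each time.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model (not the Hadoop API) of a YARN NodeManager's localization cache.
 * PUBLIC resources share one node-wide cache entry; APPLICATION resources are
 * keyed by application, so each new application triggers a fresh download.
 */
public class LocalizationCacheSketch {
    enum Visibility { PUBLIC, APPLICATION }

    private final Map<String, String> cache = new HashMap<>();
    private int downloads = 0;

    /** Localize a resource for an application, downloading only on cache miss. */
    String localize(String appId, String resource, Visibility visibility) {
        // PUBLIC resources use a node-wide key; APPLICATION resources are
        // keyed per application and cannot be shared across applications.
        String key = (visibility == Visibility.PUBLIC)
                ? "public:" + resource
                : appId + ":" + resource;
        return cache.computeIfAbsent(key, k -> {
            downloads++;                  // simulate one HDFS download
            return "/local/" + k;
        });
    }

    int downloadCount() { return downloads; }

    public static void main(String[] args) {
        // Two applications localize the same flink-dist.jar on one node.
        LocalizationCacheSketch appScoped = new LocalizationCacheSketch();
        appScoped.localize("app-1", "flink-dist.jar", Visibility.APPLICATION);
        appScoped.localize("app-2", "flink-dist.jar", Visibility.APPLICATION);
        System.out.println("APPLICATION downloads: " + appScoped.downloadCount());

        LocalizationCacheSketch shared = new LocalizationCacheSketch();
        shared.localize("app-1", "flink-dist.jar", Visibility.PUBLIC);
        shared.localize("app-2", "flink-dist.jar", Visibility.PUBLIC);
        System.out.println("PUBLIC downloads: " + shared.downloadCount());
    }
}
```

With APPLICATION visibility the jar is fetched twice; with PUBLIC visibility only once, which is exactly the saving the `-ysl` proposal targets.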



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14730) Add pending slots for job

2019-11-13 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14730:
---
Description: 
*Current*

If the resources requested by the job can't be satisfied by the cluster, the job 
will remain in the scheduling state.

A user cannot tell which slotRequest is blocking the scheduler.

This is because there are some pendingSlotRequests, but this information is 
currently not exposed.

  was:Currently, if the resource requested by the job can‘t be satisfied by the 
cluster, the job will remain in the scheduling state. This is because there are 
some pendingSlotRequests, but there is currently no such information.


> Add pending slots for job
> -
>
> Key: FLINK-14730
> URL: https://issues.apache.org/jira/browse/FLINK-14730
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: lining
>Priority: Major
>
> *Current*
> If the resources requested by the job can't be satisfied by the cluster, the 
> job will remain in the scheduling state.
> A user cannot tell which slotRequest is blocking the scheduler.
> This is because there are some pendingSlotRequests, but this information is 
> currently not exposed.





[GitHub] [flink] flinkbot edited a comment on issue #10161: [FLINK-13986][runtime] Clean up legacy code for FLIP-49.

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10161: [FLINK-13986][runtime] Clean up 
legacy code for FLIP-49.
URL: https://github.com/apache/flink/pull/10161#issuecomment-552882313
 
 
   
   ## CI report:
   
   * 2c0501f41bea1da031777069dd46eb17c5ae8038 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136122509)
   * a93fe47a7f1a91c8a33e7cac2bfc095e3f17012b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136270502)
   * 649d050fe4173a390df026156f6e9bae4f346360 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136299400)
   * 08dafb5d2c6f38599bf86c06516465aeaa324941 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136328000)
   * 8700267a462544e3d51aa40baa60ea07482305c5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460282)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] TisonKun commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346158809
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteEnvironment.java
 ##
 @@ -19,51 +19,101 @@
  * limitations under the License.
  */
 
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.InvalidProgramException;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.Plan;
 import org.apache.flink.api.common.PlanExecutor;
 import org.apache.flink.api.scala.FlinkILoop;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.JarUtils;
+import org.apache.flink.util.function.TriFunction;
 
+import java.io.File;
+import java.io.IOException;
 import java.net.MalformedURLException;
 import java.net.URL;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
- * Special version of {@link org.apache.flink.api.java.RemoteEnvironment} that 
has a reference
- * to a {@link org.apache.flink.api.scala.FlinkILoop}. When execute is called 
this will
- * use the reference of the ILoop to write the compiled classes of the current 
session to
- * a Jar file and submit these with the program.
+ * A remote {@link ExecutionEnvironment} for the Scala shell.
 
 Review comment:
   Technically the configuration code path of `RemoteEnvironment` is different 
from `ScalaShellRemoteEnvironment`; the duplicated code is hidden behind 
`PlanExecutor`. But I think you're right that we can make this configuration 
flow appendable. Hopefully I will send a new pull request integrating this 
thought.




[GitHub] [flink] flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let YarnJobClusterEntrypoint use user code class loader

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let 
YarnJobClusterEntrypoint use user code class loader
URL: https://github.com/apache/flink/pull/10152#issuecomment-552497066
 
 
   
   ## CI report:
   
   * 5875fa6d987995f327cffae7912c47f4dc51e944 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135953858)
   * 1c9b982ef3ee82b3088ab2c6bf1c48971ad79cc8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136142763)
   * 7e5b99825bc1fd7ffe04163158e5cfcb7164bfb9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136272745)
   * a4ced0f532ca317e3495d35758faf46c0252d44b : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] TisonKun commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346158429
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteEnvironment.java
 ##
 @@ -19,51 +19,101 @@
  * limitations under the License.
  */
 
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.InvalidProgramException;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.Plan;
 import org.apache.flink.api.common.PlanExecutor;
 import org.apache.flink.api.scala.FlinkILoop;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.JarUtils;
+import org.apache.flink.util.function.TriFunction;
 
+import java.io.File;
+import java.io.IOException;
 import java.net.MalformedURLException;
 import java.net.URL;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
- * Special version of {@link org.apache.flink.api.java.RemoteEnvironment} that 
has a reference
- * to a {@link org.apache.flink.api.scala.FlinkILoop}. When execute is called 
this will
- * use the reference of the ILoop to write the compiled classes of the current 
session to
- * a Jar file and submit these with the program.
+ * A remote {@link ExecutionEnvironment} for the Scala shell.
 
 Review comment:
   Technically the configuration code path of `RemoteEnvironment` is different 
from `ScalaShellRemoteEnvironment`; the duplicated code is hidden behind 
`PlanExecutor`. If you regard the current `RemoteEnvironment` by its real usage 
as a `LocalRemoteEnvironment`, you will find that `LocalRemoteEnvironment` is 
at the same level as `ScalaShellRemoteEnvironment`. There is no inheritance here.




[jira] [Updated] (FLINK-14738) Job which status is finished, get operator metric

2019-11-13 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14738:
---
Description: 
When the job is running, we can get an operator's metrics, such as [subtask 
index].[operator name].[metric name].
However, once the job is finished, only the task's record metrics can be 
obtained. Is that because only the task's record metrics are stored by 
TaskIOMetricGroup.createSnapshot?

  was:When the job is running, we could get operators' metrics. But when it's 
finished, only get task‘s record num. If there any way could get any operator's 
metrics?


> Job which status is finished, get operator metric
> -
>
> Key: FLINK-14738
> URL: https://issues.apache.org/jira/browse/FLINK-14738
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Metrics
>Reporter: lining
>Priority: Major
>
> When the job is running, we can get an operator's metrics, such as [subtask 
> index].[operator name].[metric name].
> However, once the job is finished, only the task's record metrics can be 
> obtained. Is that because only the task's record metrics are stored by 
> TaskIOMetricGroup.createSnapshot?







[GitHub] [flink] flinkbot edited a comment on issue #10124: [FLINK-14637] Introduce framework off heap memory config

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10124: [FLINK-14637] Introduce framework 
off heap memory config
URL: https://github.com/apache/flink/pull/10124#issuecomment-551369957
 
 
   
   ## CI report:
   
   * 4f26ef56d72e62c90fd32ce1ad30766a138405cd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567455)
   * 10caf5a651b06909e67dd43093660be382f4e2b4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135782647)
   * 936edc53a0fe92c91a041cbcc81ba072876afe5d : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136440219)
   * a7086b26ad141b709cf6a37d6bd0c9451cc9b68a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136442059)
   * 9407537830c98e27cf96ed0dbf6b5acce8714cb1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136449830)
   * e9e139cea0b3d1ed18fd614c56563c6df036e30f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460258)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] zhijiangW removed a comment on issue #10180: [FLINK-14631] Account for netty direct allocations in direct memory limit (Netty Shuffle)

2019-11-13 Thread GitBox
zhijiangW removed a comment on issue #10180: [FLINK-14631] Account for netty 
direct allocations in direct memory limit (Netty Shuffle)
URL: https://github.com/apache/flink/pull/10180#issuecomment-553760932
 
 
   Thanks @azagrebin for this fix. 
   
   Some background information to share from my side: 
   
   At the moment the Netty internal memory overhead on the output side should 
be small. Once the zero-copy improvement for the input side works well via 
[7368](https://github.com/apache/flink/pull/7368), the whole Netty memory 
overhead should become very small, even negligible. I wish to make this happen 
in the next release, but cannot guarantee it. In the current situation, the 
Netty memory overhead can sometimes reach even 1GB in large-scale jobs. 
   
   The number of arenas was previously coupled with the number of slots by 
default, because this concept is a bit complex for users. Unless they are Netty 
experts, it is difficult for users to understand how much memory would be 
allocated within one arena. Actually, the number of arenas should be related to 
the number of physical connections and Netty threads. 
   
   Another option is to let users configure this portion of memory directly, 
which is easier to understand, and the framework can derive the number of 
arenas internally. My comment is not a blocker for merging this PR. :)




[GitHub] [flink] zhijiangW commented on issue #10180: [FLINK-14631] Account for netty direct allocations in direct memory limit (Netty Shuffle)

2019-11-13 Thread GitBox
zhijiangW commented on issue #10180: [FLINK-14631] Account for netty direct 
allocations in direct memory limit (Netty Shuffle)
URL: https://github.com/apache/flink/pull/10180#issuecomment-553760932
 
 
   Thanks @azagrebin for this fix. 
   
   Some background information to share from my side: 
   
   At the moment the Netty internal memory overhead on the output side should 
be small. Once the zero-copy improvement for the input side works well via 
[7368](https://github.com/apache/flink/pull/7368), the whole Netty memory 
overhead should become very small, even negligible. I wish to make this happen 
in the next release, but cannot guarantee it. In the current situation, the 
Netty memory overhead can sometimes reach even 1GB in large-scale jobs. 
   
   The number of arenas was previously coupled with the number of slots by 
default, because this concept is a bit complex for users. Unless they are Netty 
experts, it is difficult for users to understand how much memory would be 
allocated within one arena. Actually, the number of arenas should be related to 
the number of physical connections and Netty threads. 
   
   Another option is to let users configure this portion of memory directly, 
which is easier to understand, and the framework can derive the number of 
arenas internally. My comment is not a blocker for merging this PR. :)
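To illustrate the "how much memory per arena" point: Netty's pooled allocator carves each arena into chunks of `pageSize << maxOrder` bytes, which is 16 MiB under Netty's stock defaults (pageSize = 8192, maxOrder = 11). The sketch below is only a back-of-the-envelope estimate under those assumed defaults, not Flink's actual NettyBufferPool configuration:

```java
/** Back-of-the-envelope estimate of memory held by Netty's pooled arenas. */
public class ArenaMemorySketch {

    /** Chunk size of one pooled arena: pageSize << maxOrder. */
    static long chunkSizeBytes(int pageSize, int maxOrder) {
        return (long) pageSize << maxOrder;
    }

    /** Lower bound once every arena has allocated at least one chunk. */
    static long minArenaFootprintBytes(int numArenas, int pageSize, int maxOrder) {
        return numArenas * chunkSizeBytes(pageSize, maxOrder);
    }

    public static void main(String[] args) {
        int pageSize = 8192; // Netty default
        int maxOrder = 11;   // Netty default -> 16 MiB chunks
        long chunk = chunkSizeBytes(pageSize, maxOrder);
        System.out.println("chunk = " + (chunk >> 20) + " MiB");

        // If arenas are coupled to, say, 32 slots, the pool holds at least
        // 32 * 16 MiB = 512 MiB once every arena has touched one chunk.
        long footprint = minArenaFootprintBytes(32, pageSize, maxOrder);
        System.out.println("min footprint, 32 arenas = " + (footprint >> 20) + " MiB");
    }
}
```

This arithmetic shows how an arena count tied to the slot count can quietly approach the 1 GB overhead mentioned above in large jobs.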




[jira] [Created] (FLINK-14766) Remove volatile variable in ExecutionVertex

2019-11-13 Thread Jiayi Liao (Jira)
Jiayi Liao created FLINK-14766:
--

 Summary: Remove volatile variable in ExecutionVertex
 Key: FLINK-14766
 URL: https://issues.apache.org/jira/browse/FLINK-14766
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Affects Versions: 1.9.0
Reporter: Jiayi Liao


Since operations on {{ExecutionVertex}} are already single-threaded, I think we 
can remove the volatile modifier on {{currentExecution}} and 
{{locationConstraint}}. 

 

And the same applies to {{Execution}} too.

 

cc [~zhuzh]
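The single-thread-confinement argument can be sketched in plain Java: when every read and write of a field goes through one single-threaded executor (as with the main-thread executor introduced by FLINK-11417), program order within that thread already guarantees visibility, so `volatile` adds nothing. The class below is an illustrative stand-in, not Flink's `ExecutionVertex`:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Illustrative stand-in for a vertex whose state is confined to one thread. */
public class SingleThreadConfinement {
    // No 'volatile' needed: the field is only touched from mainThreadExecutor,
    // so program order within that single thread guarantees visibility.
    private String currentExecution = "CREATED";

    private final ExecutorService mainThreadExecutor =
            Executors.newSingleThreadExecutor();

    /** Mutations are enqueued onto the single main thread. */
    void transitionTo(String newState) {
        mainThreadExecutor.execute(() -> currentExecution = newState);
    }

    /** Reads go through the same executor, preserving the confinement. */
    String readState() throws Exception {
        return mainThreadExecutor.submit(() -> currentExecution).get();
    }

    void shutdown() throws InterruptedException {
        mainThreadExecutor.shutdown();
        mainThreadExecutor.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        SingleThreadConfinement vertex = new SingleThreadConfinement();
        vertex.transitionTo("SCHEDULED");
        vertex.transitionTo("DEPLOYING");
        System.out.println(vertex.readState()); // prints DEPLOYING
        vertex.shutdown();
    }
}
```

The executor processes tasks in submission order, so the read observes the last transition without any `volatile` or atomic updater; this is the invariant that would let the modifiers be dropped.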





[jira] [Commented] (FLINK-14765) Remove STATE_UPDATER in Execution

2019-11-13 Thread vinoyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973977#comment-16973977
 ] 

vinoyang commented on FLINK-14765:
--

[~azagrebin] and [~gjy] WDYT about this ticket? Can we remove this field? I 
have tried to locate and analyze the call chain. However, I am not sure.

> Remove STATE_UPDATER in Execution
> -
>
> Key: FLINK-14765
> URL: https://issues.apache.org/jira/browse/FLINK-14765
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: vinoyang
>Priority: Major
>
> After making access to ExecutionGraph single-threaded in FLINK-11417, we can 
> simplify execution state updates and get rid of STATE_UPDATER; whether the 
> volatile modifier can also be removed could be further investigated.





[jira] [Updated] (FLINK-14765) Remove STATE_UPDATER in Execution

2019-11-13 Thread vinoyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang updated FLINK-14765:
-
Description: After making access to ExecutionGraph single-threaded in 
FLINK-11417, we can simplify execution state updates and get rid of 
STATE_UPDATER; whether the volatile modifier can also be removed could be 
further investigated.  (was: After making access to ExecutionGraph 
single-threaded in FLINK-11417, we can simplify execution state update and get 
rid of STATE_UPDATER and about the volatile at the moment which could be 
further investigated.)

> Remove STATE_UPDATER in Execution
> -
>
> Key: FLINK-14765
> URL: https://issues.apache.org/jira/browse/FLINK-14765
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: vinoyang
>Priority: Major
>
> After making access to ExecutionGraph single-threaded in FLINK-11417, we can 
> simplify execution state updates and get rid of STATE_UPDATER; whether the 
> volatile modifier can also be removed could be further investigated.





[GitHub] [flink] flinkbot edited a comment on issue #10083: [FLINK-14472][runtime] Implement back-pressure monitor with non-blocking outputs.

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10083: [FLINK-14472][runtime] Implement 
back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#issuecomment-549714188
 
 
   
   ## CI report:
   
   * bdb7952a0a48b4e67f51a04db61cd96a1cbecbbc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134985484)
   * ec925b8f1f82ac3016e60f40a9d4ec37453d494e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135023924)
   * dd225a3645a95f9fb4cb43fff29164cbd7b27f8b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230081)
   * 326d344996ce0e11547231a7534a97482d6027b7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135504152)
   * ad0bab393757a8f56d9b4addf32a94f508469923 : UNKNOWN
   * 59d85d8289d0b1b34d210440359f209c63cbf799 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653981)
   * 26e80f7a3dff8ebe0431e7bb46ec7c47b4aeb034 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135750292)
   * 9e54a9a999a02dd362d052325d46fa5f84f59694 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136270449)
   * 1ed9ff6b34b2a73c817b89b5072aa315087b85d1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136343628)
   * b4556b599fedba739f67ac43396c11af20a17427 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460240)
   * 534a334db1a62be4c8246bb19177c5b35c9c423c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/136462523)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10079: [FLINK-14594] Fix matching logics of ResourceSpec/ResourceProfile/Resource considering double values

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10079: [FLINK-14594] Fix matching logics of 
ResourceSpec/ResourceProfile/Resource considering double values
URL: https://github.com/apache/flink/pull/10079#issuecomment-549401980
 
 
   
   ## CI report:
   
   * 5b7e9e832ed26f476d0149663b2f0dec9f1bc427 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134874614)
   * 6915be3cf16f83c6850eb18115a333a4a1206080 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135161948)
   * 9627a892cd7076abf01174376dbcdfc9baf41c13 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135207817)
   * ab02aa81dd36f2836b958aec9e8370592dfeff2c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135445772)
   * 0c2901d2a1868d21842da49b578f5f68afbee911 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136060694)
   * 0bedd0f55447262655f03fd62dbe7023427c4044 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136137227)
   * c7a7b2ec081467b22458a494c6aec766d0bfeda2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136155232)
   * 4def7302330a8a3df33d5b410483440f90eebc09 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136161242)
   * bb562e32bc65a3c1355970d73948bd911c0e7f70 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136275151)
   * 2456b6e0c98433a8c2c988f704083e3209a68205 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136375186)
   * 34c92c28d42b726faf3692b895df1d1fe79d6bc6 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Updated] (FLINK-14738) Job which status is finished, get operator metric

2019-11-13 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14738:
---
Description: When the job is running, we could get an operator's metrics. But 
once it is finished, we can only get the task's record count. Is there any way 
to get an operator's metrics?  (was: When the job is running, we could get 
operators' metrics. But when it's finished, just get task record num. If there 
any way could get any operator's metrics?)

> Job which status is finished, get operator metric
> -
>
> Key: FLINK-14738
> URL: https://issues.apache.org/jira/browse/FLINK-14738
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Metrics
>Reporter: lining
>Priority: Major
>
> When the job is running, we could get an operator's metrics. But once it is 
> finished, we can only get the task's record count. Is there any way to get an 
> operator's metrics?





[jira] [Created] (FLINK-14765) Remove STATE_UPDATER in Execution

2019-11-13 Thread vinoyang (Jira)
vinoyang created FLINK-14765:


 Summary: Remove STATE_UPDATER in Execution
 Key: FLINK-14765
 URL: https://issues.apache.org/jira/browse/FLINK-14765
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: vinoyang


After making access to ExecutionGraph single-threaded in FLINK-11417, we can 
simplify execution state updates and get rid of STATE_UPDATER; whether the 
volatile modifier can also be removed could be further investigated.





[jira] [Commented] (FLINK-14740) Create OperatorID for OperatorMetricGroup which in batch job

2019-11-13 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973973#comment-16973973
 ] 

lining commented on FLINK-14740:


I agree not to expose the operator id in the web UI. But the problem is that 
operators with the same VertexID and name will overwrite each other in the 
TaskMetricGroup.

> Create OperatorID for OperatorMetricGroup which in batch job 
> -
>
> Key: FLINK-14740
> URL: https://issues.apache.org/jira/browse/FLINK-14740
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Metrics
>Reporter: lining
>Priority: Major
>
> In current design:
> The DataSet job uses VertexID as the OperatorID in the OperatorMetricGroup 
> (ps:TaskMetricGroup.getOrAddOperator (string name)).
> If two operators in the same vertex have the same name, they will overwrite 
> each other in the TaskMetricGroup.
> Proposal:
> We could add the OperatorID to the operator of the dataset.
> {code:java}
> for (TaskInChain tic : this.chainedTasksInSequence) {
>TaskConfig t = new 
> TaskConfig(tic.getContainingVertex().getConfiguration());
>Integer nodeId = tic.getPlanNode().getOptimizerNode().getId();
>OperatorID operatorID = this.nodeId2OperatorId.get(nodeId);
>if(operatorID == null) {
>   operatorID = new OperatorID();
>   this.nodeId2OperatorId.put(nodeId, operatorID);
>}
>t.addChainedTask(tic.getChainedTask(), tic.getTaskConfig(), 
> tic.getTaskName(), operatorID.toString());
> }
> {code}
> Then we could get id from TaskInfo.





[jira] [Updated] (FLINK-14740) Create OperatorID for OperatorMetricGroup which in batch job

2019-11-13 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14740:
---
Description: 
*In current design:*

The DataSet job uses VertexID as the OperatorID in the OperatorMetricGroup 
(ps:TaskMetricGroup.getOrAddOperator (string name)).

If two operators in the same vertex have the same name, they will overwrite 
each other in the TaskMetricGroup.

*Proposal:*

We could add the OperatorID to the operator of the dataset.
{code:java}
for (TaskInChain tic : this.chainedTasksInSequence) {
   TaskConfig t = new TaskConfig(tic.getContainingVertex().getConfiguration());
   Integer nodeId = tic.getPlanNode().getOptimizerNode().getId();
   OperatorID operatorID = this.nodeId2OperatorId.get(nodeId);
   if(operatorID == null) {
  operatorID = new OperatorID();
  this.nodeId2OperatorId.put(nodeId, operatorID);
   }
   t.addChainedTask(tic.getChainedTask(), tic.getTaskConfig(), 
tic.getTaskName(), operatorID.toString());
}
{code}
Then we could get id from TaskInfo.

  was:
In current design:

The DataSet job uses VertexID as the OperatorID in the OperatorMetricGroup 
(ps:TaskMetricGroup.getOrAddOperator (string name)).

If two operators in the same vertex have the same name, they will overwrite 
each other in the TaskMetricGroup.

Proposal:

We could add the OperatorID to the operator of the dataset.
{code:java}
for (TaskInChain tic : this.chainedTasksInSequence) {
   TaskConfig t = new TaskConfig(tic.getContainingVertex().getConfiguration());
   Integer nodeId = tic.getPlanNode().getOptimizerNode().getId();
   OperatorID operatorID = this.nodeId2OperatorId.get(nodeId);
   if(operatorID == null) {
  operatorID = new OperatorID();
  this.nodeId2OperatorId.put(nodeId, operatorID);
   }
   t.addChainedTask(tic.getChainedTask(), tic.getTaskConfig(), 
tic.getTaskName(), operatorID.toString());
}
{code}
Then we could get id from TaskInfo.


> Create OperatorID for OperatorMetricGroup which in batch job 
> -
>
> Key: FLINK-14740
> URL: https://issues.apache.org/jira/browse/FLINK-14740
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Metrics
>Reporter: lining
>Priority: Major
>
> *In current design:*
> The DataSet job uses VertexID as the OperatorID in the OperatorMetricGroup 
> (ps:TaskMetricGroup.getOrAddOperator (string name)).
> If two operators in the same vertex have the same name, they will overwrite 
> each other in the TaskMetricGroup.
> *Proposal:*
> We could add the OperatorID to the operator of the dataset.
> {code:java}
> for (TaskInChain tic : this.chainedTasksInSequence) {
>TaskConfig t = new 
> TaskConfig(tic.getContainingVertex().getConfiguration());
>Integer nodeId = tic.getPlanNode().getOptimizerNode().getId();
>OperatorID operatorID = this.nodeId2OperatorId.get(nodeId);
>if(operatorID == null) {
>   operatorID = new OperatorID();
>   this.nodeId2OperatorId.put(nodeId, operatorID);
>}
>t.addChainedTask(tic.getChainedTask(), tic.getTaskConfig(), 
> tic.getTaskName(), operatorID.toString());
> }
> {code}
> Then we could get id from TaskInfo.
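The get / null-check / put sequence in the proposal can be collapsed into `Map.computeIfAbsent`, which makes the "one OperatorID per optimizer node id" invariant explicit. Below is a self-contained sketch with a stand-in `OperatorID` class — hypothetical, not Flink's real `OperatorID` from flink-runtime:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/** Sketch of assigning one stable operator id per optimizer node id. */
public class OperatorIdAssignment {
    /** Stand-in for Flink's OperatorID, which is essentially a 128-bit id. */
    static final class OperatorID {
        private final String id = UUID.randomUUID().toString();
        @Override public String toString() { return id; }
    }

    private final Map<Integer, OperatorID> nodeId2OperatorId = new HashMap<>();

    /** Returns a stable OperatorID per node id, creating it on first use. */
    OperatorID operatorIdFor(int nodeId) {
        // computeIfAbsent replaces the get / null-check / put sequence from
        // the proposal and keeps the mapping stable across chained tasks.
        return nodeId2OperatorId.computeIfAbsent(nodeId, unused -> new OperatorID());
    }

    public static void main(String[] args) {
        OperatorIdAssignment assignment = new OperatorIdAssignment();
        OperatorID first = assignment.operatorIdFor(7);
        OperatorID again = assignment.operatorIdFor(7);
        OperatorID other = assignment.operatorIdFor(8);
        // Same node id yields the same instance; a new node id yields a new one.
        System.out.println(first == again);
        System.out.println(first == other);
    }
}
```

Keying the metric group by such a per-node id, rather than by name, is what prevents two same-named operators in one vertex from overwriting each other.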





[GitHub] [flink] guoweiM commented on a change in pull request #10152: [FLINK-14466][runtime] Let YarnJobClusterEntrypoint use user code class loader

2019-11-13 Thread GitBox
guoweiM commented on a change in pull request #10152: [FLINK-14466][runtime] 
Let YarnJobClusterEntrypoint use user code class loader
URL: https://github.com/apache/flink/pull/10152#discussion_r345868426
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/entrypoint/YarnJobClusterEntrypoint.java
 ##
 @@ -61,10 +64,14 @@ protected String getRPCPortRange(Configuration 
configuration) {
}
 
@Override
-   protected DefaultDispatcherResourceManagerComponentFactory 
createDispatcherResourceManagerComponentFactory(Configuration configuration) {
+   protected DefaultDispatcherResourceManagerComponentFactory 
createDispatcherResourceManagerComponentFactory(Configuration configuration) 
throws IOException {
+   final YarnConfigOptions.UserJarInclusion userJarInclusion = 
configuration
+   .getEnum(YarnConfigOptions.UserJarInclusion.class, 
YarnConfigOptions.CLASSPATH_INCLUDE_USER_JAR);
+   final File usrLibDir =
+   userJarInclusion == 
YarnConfigOptions.UserJarInclusion.DISABLED ? tryFindUserLibDirectory().get() : 
null;
return 
DefaultDispatcherResourceManagerComponentFactory.createJobComponentFactory(
YarnResourceManagerFactory.getInstance(),
-   FileJobGraphRetriever.createFrom(configuration));
+   FileJobGraphRetriever.createFrom(configuration, 
usrLibDir));
 
 Review comment:
   If the user does not set `CLASSPATH_INCLUDE_USER_JAR` to `DISABLED`, this 
would add the jars in the usrlib directory to the user classpath. I think the 
jars in usrlib should not be added to the user classpath in this situation.
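The conditional under review can be sketched as follows. This is a hypothetical, simplified sketch: the class `UsrLibResolution`, the helper `resolveUsrLibDir`, and the enum constants are illustrative stand-ins for the Flink code in the diff above.

```java
import java.io.File;
import java.util.Optional;

// Hypothetical sketch of the reviewed logic: usrlib is resolved only when user
// jars are excluded from the system classpath (DISABLED); otherwise it stays
// null so those jars are not added to the user classpath a second time.
public class UsrLibResolution {
    enum UserJarInclusion { ORDER, FIRST, LAST, DISABLED }

    static File resolveUsrLibDir(UserJarInclusion inclusion, Optional<File> usrLib) {
        return inclusion == UserJarInclusion.DISABLED ? usrLib.orElse(null) : null;
    }
}
```

With this shape, the reviewer's concern translates to: for any inclusion mode other than `DISABLED`, the helper must return null.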


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346153834
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/JarUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.jar.JarFile;
+
+/**
+ * Utilities for jar.
+ */
+public enum JarUtils {
+   ;
 
 Review comment:
   Here is the context: https://github.com/apache/flink/pull/7528




[GitHub] [flink] zjffdu commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
zjffdu commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346151056
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteEnvironment.java
 ##
 @@ -19,51 +19,101 @@
  * limitations under the License.
  */
 
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.InvalidProgramException;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.Plan;
 import org.apache.flink.api.common.PlanExecutor;
 import org.apache.flink.api.scala.FlinkILoop;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.JarUtils;
+import org.apache.flink.util.function.TriFunction;
 
+import java.io.File;
+import java.io.IOException;
 import java.net.MalformedURLException;
 import java.net.URL;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
- * Special version of {@link org.apache.flink.api.java.RemoteEnvironment} that 
has a reference
- * to a {@link org.apache.flink.api.scala.FlinkILoop}. When execute is called 
this will
- * use the reference of the ILoop to write the compiled classes of the current 
session to
- * a Jar file and submit these with the program.
+ * A remote {@link ExecutionEnvironment} for the Scala shell.
 
 Review comment:
   From a semantic perspective, it is weird to call it a remote 
ExecutionEnvironment when it doesn't extend `RemoteEnvironment`. Could you 
explain more about the motivation?




[jira] [Updated] (FLINK-14740) Create OperatorID for OperatorMetricGroup which in batch job

2019-11-13 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14740:
---
Description: 
In current design:

The DataSet job uses VertexID as the OperatorID in the OperatorMetricGroup 
(ps:TaskMetricGroup.getOrAddOperator (string name)).

If two operators in the same vertex have the same name, they will overwrite 
each other in the TaskMetricGroup.

Proposal:

We could add the OperatorID to the operator of the dataset.
{code:java}
for (TaskInChain tic : this.chainedTasksInSequence) {
   TaskConfig t = new TaskConfig(tic.getContainingVertex().getConfiguration());
   Integer nodeId = tic.getPlanNode().getOptimizerNode().getId();
   OperatorID operatorID = this.nodeId2OperatorId.get(nodeId);
   if (operatorID == null) {
      operatorID = new OperatorID();
      this.nodeId2OperatorId.put(nodeId, operatorID);
   }
   t.addChainedTask(tic.getChainedTask(), tic.getTaskConfig(), tic.getTaskName(), operatorID.toString());
}
{code}
Then we could get id from TaskInfo.

  was:
Now the OperatorMetricGroup in a batch job uses the VertexId as the OperatorId. 
Chained operators therefore share the same id, so two chained operators with 
the same name collide. We could update this in JobGraphGenerator.compileJobGraph:
{code:java}
for (TaskInChain tic : this.chainedTasksInSequence) {
   TaskConfig t = new TaskConfig(tic.getContainingVertex().getConfiguration());
   Integer nodeId = tic.getPlanNode().getOptimizerNode().getId();
   OperatorID operatorID = this.nodeId2OperatorId.get(nodeId);
   if (operatorID == null) {
      operatorID = new OperatorID();
      this.nodeId2OperatorId.put(nodeId, operatorID);
   }
   t.addChainedTask(tic.getChainedTask(), tic.getTaskConfig(), tic.getTaskName(), operatorID.toString());
}
{code}
Then we could get id from TaskInfo.


> Create OperatorID for OperatorMetricGroup which in batch job 
> -
>
> Key: FLINK-14740
> URL: https://issues.apache.org/jira/browse/FLINK-14740
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Metrics
>Reporter: lining
>Priority: Major
>
> In current design:
> The DataSet job uses VertexID as the OperatorID in the OperatorMetricGroup 
> (ps:TaskMetricGroup.getOrAddOperator (string name)).
> If two operators in the same vertex have the same name, they will overwrite 
> each other in the TaskMetricGroup.
> Proposal:
> We could add the OperatorID to the operator of the dataset.
> {code:java}
> for (TaskInChain tic : this.chainedTasksInSequence) {
>    TaskConfig t = new TaskConfig(tic.getContainingVertex().getConfiguration());
>    Integer nodeId = tic.getPlanNode().getOptimizerNode().getId();
>    OperatorID operatorID = this.nodeId2OperatorId.get(nodeId);
>    if (operatorID == null) {
>       operatorID = new OperatorID();
>       this.nodeId2OperatorId.put(nodeId, operatorID);
>    }
>    t.addChainedTask(tic.getChainedTask(), tic.getTaskConfig(), tic.getTaskName(), operatorID.toString());
> }
> {code}
> Then we could get id from TaskInfo.





[GitHub] [flink] flinkbot edited a comment on issue #10187: [FLINK-13938][yarn] Enable configure shared libraries on YARN

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10187: [FLINK-13938][yarn] Enable configure 
shared libraries on YARN
URL: https://github.com/apache/flink/pull/10187#issuecomment-553746982
 
 
   
   ## CI report:
   
   * e1c1980b8e8715f9888d0cacb4eaa8bcd53a4af9 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460294)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10186: [FLINK-14759][flink-runtime]Remove unused class TaskManagerCliOptions

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10186: [FLINK-14759][flink-runtime]Remove 
unused class TaskManagerCliOptions
URL: https://github.com/apache/flink/pull/10186#issuecomment-553732980
 
 
   
   ## CI report:
   
   * bc053af0629f4f0b8db14aaaddac474b0a750353 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136456400)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] guoweiM commented on a change in pull request #10152: [FLINK-14466][runtime] Let YarnJobClusterEntrypoint use user code class loader

2019-11-13 Thread GitBox
guoweiM commented on a change in pull request #10152: [FLINK-14466][runtime] 
Let YarnJobClusterEntrypoint use user code class loader
URL: https://github.com/apache/flink/pull/10152#discussion_r346151236
 
 

 ##
 File path: docs/_includes/generated/writable_configuration.html
 ##
 @@ -0,0 +1,11 @@
+
 
 Review comment:
   I think this was introduced by https://issues.apache.org/jira/browse/FLINK-14493.
   I think other people might run into the same problem.
   Do you think excluding the `WritableConfig` makes sense?
   




[GitHub] [flink] flinkbot edited a comment on issue #10161: [FLINK-13986][runtime] Clean up legacy code for FLIP-49.

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10161: [FLINK-13986][runtime] Clean up 
legacy code for FLIP-49.
URL: https://github.com/apache/flink/pull/10161#issuecomment-552882313
 
 
   
   ## CI report:
   
   * 2c0501f41bea1da031777069dd46eb17c5ae8038 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136122509)
   * a93fe47a7f1a91c8a33e7cac2bfc095e3f17012b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136270502)
   * 649d050fe4173a390df026156f6e9bae4f346360 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136299400)
   * 08dafb5d2c6f38599bf86c06516465aeaa324941 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136328000)
   * 8700267a462544e3d51aa40baa60ea07482305c5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460282)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] zjffdu commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
zjffdu commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346151056
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteEnvironment.java
 ##
 @@ -19,51 +19,101 @@
  * limitations under the License.
  */
 
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.InvalidProgramException;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.Plan;
 import org.apache.flink.api.common.PlanExecutor;
 import org.apache.flink.api.scala.FlinkILoop;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.JarUtils;
+import org.apache.flink.util.function.TriFunction;
 
+import java.io.File;
+import java.io.IOException;
 import java.net.MalformedURLException;
 import java.net.URL;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
- * Special version of {@link org.apache.flink.api.java.RemoteEnvironment} that 
has a reference
- * to a {@link org.apache.flink.api.scala.FlinkILoop}. When execute is called 
this will
- * use the reference of the ILoop to write the compiled classes of the current 
session to
- * a Jar file and submit these with the program.
+ * A remote {@link ExecutionEnvironment} for the Scala shell.
 
 Review comment:
   From a semantic perspective, it is weird to call it a remote 
ExecutionEnvironment when it doesn't extend `RemoteEnvironment`. Could you 
explain more about the motivation?




[GitHub] [flink] flinkbot edited a comment on issue #10124: [FLINK-14637] Introduce framework off heap memory config

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10124: [FLINK-14637] Introduce framework 
off heap memory config
URL: https://github.com/apache/flink/pull/10124#issuecomment-551369957
 
 
   
   ## CI report:
   
   * 4f26ef56d72e62c90fd32ce1ad30766a138405cd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567455)
   * 10caf5a651b06909e67dd43093660be382f4e2b4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135782647)
   * 936edc53a0fe92c91a041cbcc81ba072876afe5d : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136440219)
   * a7086b26ad141b709cf6a37d6bd0c9451cc9b68a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136442059)
   * 9407537830c98e27cf96ed0dbf6b5acce8714cb1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136449830)
   * e9e139cea0b3d1ed18fd614c56563c6df036e30f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460258)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] zjffdu commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
zjffdu commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346149933
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/JarUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.jar.JarFile;
+
+/**
+ * Utilities for jar.
+ */
+public enum JarUtils {
+   ;
 
 Review comment:
   Ah, I didn't notice it is an enum. But why use an enum instead of a class?
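For context on the question, the "empty enum as utility holder" idiom can be sketched as below. This is a generic illustration, not Flink's `JarUtils`; the class and method names here are made up.

```java
// Hypothetical sketch of the idiom being asked about: an enum with no
// constants can never be instantiated (not even via reflection or
// deserialization), which is a stronger guarantee than a class with a
// private constructor. Static utility methods hang off it as usual.
public enum MathUtils {
    ; // no instances on purpose

    public static int checkedSquare(int x) {
        return Math.multiplyExact(x, x); // throws ArithmeticException on overflow
    }
}
```

Callers just use `MathUtils.checkedSquare(3)`; the design choice trades a slightly unusual declaration for instantiation safety with no boilerplate constructor.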




[GitHub] [flink] zjffdu commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
zjffdu commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346135396
 
 

 ##
 File path: 
flink-clients/src/main/java/org/apache/flink/client/program/PackagedProgram.java
 ##
 @@ -528,8 +529,9 @@ public static void deleteExtractedLibraries(List 
tempLibraries) {
 
private static void checkJarFile(URL jarfile) throws 
ProgramInvocationException {
 
 Review comment:
   Do we still need this checkJarFile method? It looks like we could call 
`JarUtils.checkJarFile` directly.




[GitHub] [flink] JingsongLi commented on issue #10022: [FLINK-14135][hive][orc] Introduce orc ColumnarRow reader for hive connector

2019-11-13 Thread GitBox
JingsongLi commented on issue #10022: [FLINK-14135][hive][orc] Introduce orc 
ColumnarRow reader for hive connector
URL: https://github.com/apache/flink/pull/10022#issuecomment-553751930
 
 
   > Add documentation to inform users when to use `FlinkOrcReader` v.s. 
`HiveRecordReader`?
   
   I think we can add some comments on the config option.
   We don't want users to see this configuration unless something goes wrong 
here, because I can't guarantee that `HiveVectorizedOrcSplitReader` can replace 
`HiveMapredSplitReader` in every scenario. If there are any corner cases, users 
can fall back.
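The fallback pattern described here can be sketched as a boolean switch between two reader implementations. This is a hypothetical sketch: `ReaderSelection`, `SplitReader`, and the reader names are illustrative stand-ins, not Flink's actual classes.

```java
// Hypothetical sketch of the fallback pattern: a boolean flag selects the new
// vectorized reader by default, while the old reader stays reachable for
// corner cases. All names are illustrative.
public class ReaderSelection {
    interface SplitReader { String name(); }

    static SplitReader createReader(boolean useVectorizedReader) {
        return useVectorizedReader
            ? () -> "vectorized-orc"   // preferred fast path
            : () -> "mapred-fallback"; // escape hatch if corner cases appear
    }
}
```

Keeping both paths behind one flag means a user who hits a bug in the new reader can recover with a single config change instead of waiting for a fix.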




[GitHub] [flink] flinkbot edited a comment on issue #10083: [FLINK-14472][runtime] Implement back-pressure monitor with non-blocking outputs.

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10083: [FLINK-14472][runtime] Implement 
back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#issuecomment-549714188
 
 
   
   ## CI report:
   
   * bdb7952a0a48b4e67f51a04db61cd96a1cbecbbc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134985484)
   * ec925b8f1f82ac3016e60f40a9d4ec37453d494e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135023924)
   * dd225a3645a95f9fb4cb43fff29164cbd7b27f8b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230081)
   * 326d344996ce0e11547231a7534a97482d6027b7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135504152)
   * ad0bab393757a8f56d9b4addf32a94f508469923 : UNKNOWN
   * 59d85d8289d0b1b34d210440359f209c63cbf799 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653981)
   * 26e80f7a3dff8ebe0431e7bb46ec7c47b4aeb034 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135750292)
   * 9e54a9a999a02dd362d052325d46fa5f84f59694 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136270449)
   * 1ed9ff6b34b2a73c817b89b5072aa315087b85d1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136343628)
   * b4556b599fedba739f67ac43396c11af20a17427 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460240)
   * 534a334db1a62be4c8246bb19177c5b35c9c423c : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] JingsongLi commented on a change in pull request #10022: [FLINK-14135][hive][orc] Introduce orc ColumnarRow reader for hive connector

2019-11-13 Thread GitBox
JingsongLi commented on a change in pull request #10022: 
[FLINK-14135][hive][orc] Introduce orc ColumnarRow reader for hive connector
URL: https://github.com/apache/flink/pull/10022#discussion_r346149217
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveOptions.java
 ##
 @@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connectors.hive;
+
+import org.apache.flink.configuration.ConfigOption;
+
+import static org.apache.flink.configuration.ConfigOptions.key;
+
+/**
+ * This class holds configuration constants used by hive connector.
+ */
+public class HiveOptions {
+
+   public static final ConfigOption HIVE_EXEC_ORC_FLINK_READER =
+   key("hive.exec.orc.flink-reader")
+   .defaultValue(true)
+   .withDescription(
+   "If it is true, using 
flink native reader to read orc files; " +
 
 Review comment:
   It is just a Flink configuration option.




[GitHub] [flink] flinkbot commented on issue #10188: [FLINK-14066][python] Support to install pyflink in windows

2019-11-13 Thread GitBox
flinkbot commented on issue #10188: [FLINK-14066][python] Support to install 
pyflink in windows
URL: https://github.com/apache/flink/pull/10188#issuecomment-553751552
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit dc05221d34c50dbb1ec98a473eef66b4181eb8f5 (Thu Nov 14 
06:51:12 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14066).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] TisonKun commented on a change in pull request #10143: [FLINK-13184]Starting a TaskExecutor blocks the YarnResourceManager's main thread

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10143: [FLINK-13184]Starting a 
TaskExecutor blocks the YarnResourceManager's main thread
URL: https://github.com/apache/flink/pull/10143#discussion_r346148941
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManager.java
 ##
 @@ -322,11 +323,7 @@ Resource getContainerResource() {
public boolean stopWorker(final YarnWorkerNode workerNode) {
final Container container = workerNode.getContainer();
log.info("Stopping container {}.", container.getId());
-   try {
-   nodeManagerClient.stopContainer(container.getId(), 
container.getNodeId());
-   } catch (final Exception e) {
-   log.warn("Error while calling YARN Node Manager to stop 
container", e);
-   }
+   nodeManagerClient.stopContainerAsync(container.getId(), 
container.getNodeId());
 
 Review comment:
   Thanks for your explanation. Judging from its signature, 
`startContainerAsync` is also not declared to throw any exceptions.
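The general pattern under discussion, moving a blocking stop call off the caller's main thread so failures surface via a callback rather than a checked exception at the call site, can be sketched generically. This is a hypothetical sketch and not YARN's `NMClientAsync` API; all names here are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: the blocking stop runs on an I/O pool, the caller's
// main thread returns immediately, and errors are reported asynchronously.
public class AsyncStopSketch {
    private final ExecutorService ioPool = Executors.newSingleThreadExecutor();

    CompletableFuture<Void> stopContainerAsync(String containerId, Runnable blockingStop) {
        return CompletableFuture
            .runAsync(blockingStop, ioPool) // off the main thread
            .whenComplete((ok, err) -> {
                if (err != null) {
                    System.err.println("Stopping container " + containerId + " failed: " + err);
                }
            });
    }

    void shutdown() {
        ioPool.shutdown();
    }
}
```

Because the method itself cannot fail synchronously, any exception handling has to live in the completion callback, which matches the observation that the async signatures declare no thrown exceptions.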




[jira] [Updated] (FLINK-14066) Pyflink building failure in master and 1.9.0 version

2019-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14066:
---
Labels: beginner build pull-request-available  (was: beginner build)

> Pyflink building failure in master and 1.9.0 version
> 
>
> Key: FLINK-14066
> URL: https://issues.apache.org/jira/browse/FLINK-14066
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System
>Affects Versions: 1.9.0, 1.10.0
> Environment: windows 10 enterprise x64(mentioned as build 
> environment, not development environment.)
> powershell x64
> flink source master and 1.9.0 version
> jdk-8u202
> maven-3.2.5
>Reporter: Xu Yang
>Priority: Blocker
>  Labels: beginner, build, pull-request-available
> Attachments: setup.py
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> ATTENTION: This is an issue about building pyflink, not development.
> While building pyflink...
> After building flink from the source code, a folder named "target" is 
> generated.
> Then, following the documentation, running "cd flink-python; python3 setup.py 
> sdist bdist_wheel" fails with an error.
> Root cause: in the setup.py file, line 75, "FLINK_HOME = 
> os.path.abspath("../build-target")", the program can't find the folder 
> "build-target"; however, the flink build generated a folder named 
> "target". That is why the error happens.
>  
> The right way:
> in ../flink-python/setup.py line 75, modify the code as follows:
> FLINK_HOME = os.path.abspath("../target")





[GitHub] [flink] dianfu opened a new pull request #10188: [FLINK-14066][python] Support to install pyflink in windows

2019-11-13 Thread GitBox
dianfu opened a new pull request #10188: [FLINK-14066][python] Support to 
install pyflink in windows
URL: https://github.com/apache/flink/pull/10188
 
 
   ## What is the purpose of the change
   
   *This pull request adds support to install pyflink in windows.*
   
   ## Brief change log
   
 - *Fixes setup.py to allow pyflink could be installed in windows*
   
   ## Verifying this change
   
   Manually tested that pyflink can be installed on Windows with both of the 
following commands:
   - "python setup.py install"
   - "python setup.py sdist && pip install .\dist\apache-flink-1.10.dev0.tar.gz"
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10022: [FLINK-14135][hive][orc] Introduce orc ColumnarRow reader for hive connector

2019-11-13 Thread GitBox
JingsongLi commented on a change in pull request #10022: 
[FLINK-14135][hive][orc] Introduce orc ColumnarRow reader for hive connector
URL: https://github.com/apache/flink/pull/10022#discussion_r346148106
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/read/HiveTableInputFormat.java
 ##
 @@ -0,0 +1,247 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connectors.hive.read;
+
+import org.apache.flink.api.common.io.LocatableInputSplitAssigner;
+import org.apache.flink.api.common.io.statistics.BaseStatistics;
+import org.apache.flink.api.java.hadoop.common.HadoopInputFormatCommonBase;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connectors.hive.FlinkHiveException;
+import org.apache.flink.connectors.hive.HiveOptions;
+import org.apache.flink.connectors.hive.HiveTablePartition;
+import org.apache.flink.core.io.InputSplitAssigner;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.dataformat.BaseRow;
+import org.apache.flink.table.types.DataType;
+
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.mapred.InputFormat;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.IntStream;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.hadoop.mapreduce.lib.input.FileInputFormat.INPUT_DIR;
+
+/**
+ * The HiveTableInputFormat is inspired by the HCatInputFormat and 
HadoopInputFormatBase.
+ * It's used to read from hive partitioned/non-partitioned tables.
+ */
+public class HiveTableInputFormat extends HadoopInputFormatCommonBase {
+
+   private static final long serialVersionUID = 1L;
 
 Review comment:
   According to flink code style, it should be `1L`.




[GitHub] [flink] TisonKun commented on a change in pull request #10143: [FLINK-13184]Starting a TaskExecutor blocks the YarnResourceManager's main thread

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10143: [FLINK-13184]Starting a 
TaskExecutor blocks the YarnResourceManager's main thread
URL: https://github.com/apache/flink/pull/10143#discussion_r346147881
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/BootstrapTools.java
 ##
 @@ -381,7 +381,8 @@ public static String getTaskManagerShellCommand(
boolean hasLogback,
boolean hasLog4j,
boolean hasKrb5,
-   Class<?> mainClass) {
+   Class<?> mainClass,
 
 Review comment:
   Make sense to do as is :-)




[GitHub] [flink] wuchong commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-13 Thread GitBox
wuchong commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r346144260
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -438,30 +446,35 @@ object GenerateUtils {
   def generateProctimeTimestamp(
   ctx: CodeGeneratorContext,
   contextTerm: String): GeneratedExpression = {
-val resultTerm = ctx.addReusableLocalVariable("long", "result")
+val resultType = new TimestampType(3)
+val resultTypeTerm = primitiveTypeTermForType(resultType)
+val resultTerm = ctx.addReusableLocalVariable(resultTypeTerm, "result")
 val resultCode =
   s"""
- |$resultTerm = $contextTerm.timerService().currentProcessingTime();
+ |$resultTerm = $SQL_TIMESTAMP.fromEpochMillis(
+ |  $contextTerm.timerService().currentProcessingTime());
  |""".stripMargin.trim
 // the proctime has been materialized, so it's TIMESTAMP now, not 
PROCTIME_INDICATOR
-GeneratedExpression(resultTerm, NEVER_NULL, resultCode, new 
TimestampType(3))
+GeneratedExpression(resultTerm, NEVER_NULL, resultCode, resultType)
   }
 
   def generateCurrentTimestamp(
   ctx: CodeGeneratorContext): GeneratedExpression = {
 new CurrentTimePointCallGen(false).generate(ctx, Seq(), new 
TimestampType(3))
   }
 
-  def generateRowtimeAccess(
+  def generateTimestampAccess(
 
 Review comment:
   Why change the method name? I think the original method name describes the logic 
more accurately.
   At first glance at the new method name, I thought it was used to access a 
timestamp field from a row or record. 




[GitHub] [flink] wuchong commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-13 Thread GitBox
wuchong commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r346143069
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java
 ##
 @@ -135,8 +138,26 @@ public RexNode visit(ValueLiteralExpression valueLiteral) 
{
return 
relBuilder.getRexBuilder().makeTimeLiteral(TimeString.fromCalendarFields(

valueAsCalendar(extractValue(valueLiteral, java.sql.Time.class))), 0);
case TIMESTAMP_WITHOUT_TIME_ZONE:
-   return 
relBuilder.getRexBuilder().makeTimestampLiteral(TimestampString.fromCalendarFields(
-   
valueAsCalendar(extractValue(valueLiteral, java.sql.Timestamp.class))), 3);
+   TimestampType timestampType = (TimestampType) 
type;
+   Class<?> clazz = 
valueLiteral.getOutputDataType().getConversionClass();
 
 Review comment:
   We can just use `valueLiteral.getValueAs(LocalDateTime.class)`. 




[GitHub] [flink] wuchong commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-13 Thread GitBox
wuchong commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r346121333
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeSystem.scala
 ##
 @@ -53,6 +56,10 @@ class FlinkTypeSystem extends RelDataTypeSystemImpl {
 case SqlTypeName.VARCHAR | SqlTypeName.CHAR | SqlTypeName.VARBINARY | 
SqlTypeName.BINARY =>
   Int.MaxValue
 
+// The maximal precision of TIMESTAMP is 3, change it to 9 to support 
nanoseconds precision
 
 Review comment:
   What does this "maximal precision of TIMESTAMP is 3" mean? Is that a typo? 




[jira] [Commented] (FLINK-14531) Add Kafka Metrics Reporter

2019-11-13 Thread chen yong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973957#comment-16973957
 ] 

chen yong commented on FLINK-14531:
---

[~gyfora]

Sorry for not replying to you for a long time. I've tried many times and can't use 
a Google account to log on to 
https://lists.apache.org/x/list.html?d...@flink.apache.org, and I don't have 
ASF LDAP. You can drive this discussion; once I have an account, I will 
participate in it.

> Add   Kafka Metrics Reporter
> 
>
> Key: FLINK-14531
> URL: https://issues.apache.org/jira/browse/FLINK-14531
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Metrics
> Environment: jdk1.8 + IDEA
>Reporter: chen yong
>Priority: Minor
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> When we developed monitoring metrics and reported them, we found that 
> there was no reporter suitable for us, because our company had its own 
> monitoring system, so we customized a KafkaReporter; the monitoring 
> system then reads the information and analyzes it itself. Maybe other companies 
> have similar situations. Although there is JMX, Kafka is more 
> versatile. Meanwhile, reporting metrics to Kafka has no impact 
> on the performance of the Flink program compared with other reporters that write 
> to the database.





[GitHub] [flink] wuchong commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-13 Thread GitBox
wuchong commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r346141018
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/DateTimeTypesTest.scala
 ##
 @@ -82,13 +82,29 @@ class TemporalTypesTest extends ExpressionTestBase {
 testTableApi(
   localDateTime2Literal(localDateTime("2040-09-11 00:00:00.000")),
   "'2040-09-11 00:00:00.000'.toTimestamp",
-  "2040-09-11 00:00:00.000")
+  "2040-09-11 00:00:00")
 
 testAllApis(
   "1500-04-30 12:00:00".cast(DataTypes.TIMESTAMP(3)),
   "'1500-04-30 12:00:00'.cast(SQL_TIMESTAMP)",
-  "CAST('1500-04-30 12:00:00' AS TIMESTAMP)",
-  "1500-04-30 12:00:00.000")
+  "CAST('1500-04-30 12:00:00' AS TIMESTAMP(3))",
+  "1500-04-30 12:00:00")
+
+testSqlApi(
+  "TIMESTAMP '1500-04-30 12:00:00.123456789'",
+"1500-04-30 12:00:00.123456789")
 
 Review comment:
   indent. 




[GitHub] [flink] wuchong commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-13 Thread GitBox
wuchong commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r345665286
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/SqlTimestamp.java
 ##
 @@ -117,6 +117,25 @@ public static SqlTimestamp fromEpochMillis(long 
millisecond, int nanoOfMilliseco
return new SqlTimestamp(millisecond, nanoOfMillisecond);
}
 
+   /**
+* Obtains an instance of {@code SqlTimestamp} from a millisecond.
 
 Review comment:
   minor: I would suggest to use `{@link SqlTimestamp}`.
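The `fromEpochMillis` overloads discussed above split a timestamp into epoch milliseconds plus a nano-of-millisecond remainder. A minimal standalone sketch of that decomposition (an illustrative re-implementation, not Flink's actual `SqlTimestamp`):

```java
// Illustrative re-implementation of the millis + nano-of-milli split;
// names mirror the diff above, but this is not Flink's SqlTimestamp.
final class Ts {
    final long millisecond;      // milliseconds since the epoch
    final int nanoOfMillisecond; // sub-millisecond part, 0..999_999

    private Ts(long millisecond, int nanoOfMillisecond) {
        this.millisecond = millisecond;
        this.nanoOfMillisecond = nanoOfMillisecond;
    }

    // Mirrors the single-argument overload added in the diff:
    // no sub-millisecond component, so nanoOfMillisecond is 0.
    static Ts fromEpochMillis(long millisecond) {
        return new Ts(millisecond, 0);
    }

    static Ts fromEpochMillis(long millisecond, int nanoOfMillisecond) {
        return new Ts(millisecond, nanoOfMillisecond);
    }

    long toEpochNanos() {
        return millisecond * 1_000_000L + nanoOfMillisecond;
    }
}
```

With this split, millisecond-precision values round-trip through the `long` field alone, while higher precision only touches `nanoOfMillisecond`.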




[GitHub] [flink] wuchong commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-13 Thread GitBox
wuchong commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r345662571
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/types/LogicalTypeDataTypeConverter.java
 ##
 @@ -98,6 +101,12 @@ protected LogicalType defaultMethod(LogicalType 
logicalType) {
} else if (typeInfo instanceof 
BigDecimalTypeInfo) {
BigDecimalTypeInfo decimalType = 
(BigDecimalTypeInfo) typeInfo;
return new 
DecimalType(decimalType.precision(), decimalType.scale());
+   } else if (typeInfo instanceof 
LegacyLocalDateTimeTypeInfo) {
 
 Review comment:
   The newly introduced `LegacyLocalDateTimeTypeInfo` is weird. In theory, we 
don't need such things unless somewhere converts DataType to TypeInformation 
and back again. I think we should find out the root cause of why we need this, 
create a JIRA to fix that root cause, and add comments on these classes with the 
JIRA id. 
   
   Otherwise, we won't know how to remove this temporary code in the future. 




[GitHub] [flink] wuchong commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-13 Thread GitBox
wuchong commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r346120431
 
 

 ##
 File path: flink-table/flink-table-runtime-blink/pom.xml
 ##
 @@ -74,6 +74,67 @@ under the License.
${janino.version}

 
+   <dependency>
+   <groupId>org.apache.calcite</groupId>
+   <artifactId>calcite-core</artifactId>
 
 Review comment:
   +1 for not introducing Calcite. We are aiming to keep the runtime Calcite-free 
to support different users' Calcite versions in the long term (shading Calcite is 
not easy). 
   




[GitHub] [flink] wuchong commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-13 Thread GitBox
wuchong commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r346140105
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/AggregateUtil.scala
 ##
 @@ -494,7 +494,9 @@ object AggregateUtil extends Enumeration {
 
   case DATE => DataTypes.INT
   case TIME_WITHOUT_TIME_ZONE => DataTypes.INT
-  case TIMESTAMP_WITHOUT_TIME_ZONE => DataTypes.BIGINT
+  case TIMESTAMP_WITHOUT_TIME_ZONE =>
+val dt = argTypes(0).asInstanceOf[TimestampType]
+DataTypes.TIMESTAMP(dt.getPrecision).bridgedTo(classOf[SqlTimestamp])
 
 Review comment:
   This is a little performance sensitive, because it relates to de/serialization.
   If the precision is less than 3, we can use BIGINT to get better 
performance. 
   
   Could you improve this a bit? Or create an issue and a TODO for it. 
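The suggestion above — fall back to BIGINT when the precision fits in epoch milliseconds — can be sketched as follows (a hypothetical helper for illustration, not the actual `AggregateUtil` code):

```java
// Hypothetical helper illustrating the reviewer's suggestion: use a plain
// long (BIGINT) for timestamps whose precision fits in milliseconds, and
// only pay for the richer representation when precision > 3.
final class TimestampRepr {
    static boolean fitsInBigint(int precision) {
        return precision <= 3; // millisecond precision or coarser
    }

    // Returns either a boxed Long (cheap to de/serialize) or a two-element
    // array standing in for a (millis, nanoOfMilli) pair.
    static Object encode(long epochMillis, int nanoOfMilli, int precision) {
        if (fitsInBigint(precision)) {
            return epochMillis;
        }
        return new long[] {epochMillis, nanoOfMilli};
    }
}
```

The de/serialization cost of the long-only path is a single fixed-width field, which is why the BIGINT fallback matters for aggregate state.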




[GitHub] [flink] flinkbot commented on issue #10187: [FLINK-13938][yarn] Enable configure shared libraries on YARN

2019-11-13 Thread GitBox
flinkbot commented on issue #10187: [FLINK-13938][yarn] Enable configure shared 
libraries on YARN
URL: https://github.com/apache/flink/pull/10187#issuecomment-553746982
 
 
   
   ## CI report:
   
   * e1c1980b8e8715f9888d0cacb4eaa8bcd53a4af9 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10161: [FLINK-13986][runtime] Clean up legacy code for FLIP-49.

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10161: [FLINK-13986][runtime] Clean up 
legacy code for FLIP-49.
URL: https://github.com/apache/flink/pull/10161#issuecomment-552882313
 
 
   
   ## CI report:
   
   * 2c0501f41bea1da031777069dd46eb17c5ae8038 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136122509)
   * a93fe47a7f1a91c8a33e7cac2bfc095e3f17012b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136270502)
   * 649d050fe4173a390df026156f6e9bae4f346360 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136299400)
   * 08dafb5d2c6f38599bf86c06516465aeaa324941 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136328000)
   * 8700267a462544e3d51aa40baa60ea07482305c5 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10124: [FLINK-14637] Introduce framework off heap memory config

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10124: [FLINK-14637] Introduce framework 
off heap memory config
URL: https://github.com/apache/flink/pull/10124#issuecomment-551369957
 
 
   
   ## CI report:
   
   * 4f26ef56d72e62c90fd32ce1ad30766a138405cd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567455)
   * 10caf5a651b06909e67dd43093660be382f4e2b4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135782647)
   * 936edc53a0fe92c91a041cbcc81ba072876afe5d : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136440219)
   * a7086b26ad141b709cf6a37d6bd0c9451cc9b68a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136442059)
   * 9407537830c98e27cf96ed0dbf6b5acce8714cb1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136449830)
   * e9e139cea0b3d1ed18fd614c56563c6df036e30f : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10083: [FLINK-14472][runtime] Implement back-pressure monitor with non-blocking outputs.

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10083: [FLINK-14472][runtime] Implement 
back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#issuecomment-549714188
 
 
   
   ## CI report:
   
   * bdb7952a0a48b4e67f51a04db61cd96a1cbecbbc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134985484)
   * ec925b8f1f82ac3016e60f40a9d4ec37453d494e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135023924)
   * dd225a3645a95f9fb4cb43fff29164cbd7b27f8b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230081)
   * 326d344996ce0e11547231a7534a97482d6027b7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135504152)
   * ad0bab393757a8f56d9b4addf32a94f508469923 : UNKNOWN
   * 59d85d8289d0b1b34d210440359f209c63cbf799 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653981)
   * 26e80f7a3dff8ebe0431e7bb46ec7c47b4aeb034 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135750292)
   * 9e54a9a999a02dd362d052325d46fa5f84f59694 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136270449)
   * 1ed9ff6b34b2a73c817b89b5072aa315087b85d1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136343628)
   * b4556b599fedba739f67ac43396c11af20a17427 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] wsry commented on issue #10083: [FLINK-14472][runtime] Implement back-pressure monitor with non-blocking outputs.

2019-11-13 Thread GitBox
wsry commented on issue #10083: [FLINK-14472][runtime] Implement back-pressure 
monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#issuecomment-553741266
 
 
   Thanks @zhijiangW . I have updated the PR and commit description.




[GitHub] [flink] TisonKun commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346139740
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteStreamEnvironment.java
 ##
 @@ -58,27 +80,47 @@
 *   user-defined input formats, or any 
libraries, those must be
 */
public ScalaShellRemoteStreamEnvironment(
-   String host,
-   int port,
-   FlinkILoop flinkILoop,
-   Configuration configuration,
-   String... jarFiles) {
+   String host,
+   int port,
+   FlinkILoop flinkILoop,
+   Configuration configuration,
+   String... jarFiles) {
+   if (!ExecutionEnvironment.areExplicitEnvironmentsAllowed()) {
+   throw new InvalidProgramException(
+   "The RemoteEnvironment cannot be instantiated 
when running in a pre-defined context " +
+   "(such as Command Line Client, Scala 
Shell, or TestEnvironment)");
+   }
 
-   super(host, port, configuration, jarFiles);
+   checkNotNull(host);
+   checkArgument(1 <= port && port < 0xffff);
+
+   this.host = host;
+   this.port = port;
this.flinkILoop = flinkILoop;
+   this.configuration = configuration != null ? configuration : 
new Configuration();
+
+   if (jarFiles != null) {
+   this.jarFiles = new ArrayList<>(jarFiles.length);
 
 Review comment:
   will be re-formatted.
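The constructor in the diff above guards its arguments with `checkNotNull`/`checkArgument`-style preconditions before assigning fields. A self-contained sketch of the same pattern (the `0xFFFF` upper bound is an assumption here, and this is not Flink's `Preconditions` class):

```java
// Standalone re-implementation of the host/port validation pattern from
// the constructor above; the 0xFFFF upper bound is assumed.
final class ConnectionSpec {
    final String host;
    final int port;

    ConnectionSpec(String host, int port) {
        if (host == null) {
            throw new NullPointerException("host must not be null");
        }
        if (!(1 <= port && port < 0xFFFF)) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        this.host = host;
        this.port = port;
    }
}
```

Failing fast in the constructor keeps invalid host/port pairs from ever reaching the cluster-connection code.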




[GitHub] [flink] TisonKun commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346140200
 
 

 ##
 File path: 
flink-clients/src/main/java/org/apache/flink/client/program/PackagedProgram.java
 ##
 @@ -528,8 +529,9 @@ public static void deleteExtractedLibraries(List<File> 
tempLibraries) {
 
private static void checkJarFile(URL jarfile) throws 
ProgramInvocationException {
 
 Review comment:
   It wraps the exception into `ProgramInvocationException`. I think whether or not 
to refactor this code can go into another story.




[GitHub] [flink] TisonKun commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346139932
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/JarUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.jar.JarFile;
+
+/**
+ * Utilities for jar.
+ */
+public enum JarUtils {
+   ;
 
 Review comment:
   it is necessary for an `enum` to indicate that there are no enum constants; 
otherwise it is a compile error
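The lone-semicolon idiom discussed here — an `enum` with an empty constant list used as a non-instantiable utility holder — looks like this in isolation (an illustrative example, not the `JarUtils` class above):

```java
// Illustrative utility class using the enum-with-no-constants idiom.
// The semicolon terminates the (empty) constant list; without it, the
// members below would not compile. No instances can ever be created.
enum PortUtils {
    ; // no enum constants

    static int checkPort(int port) {
        if (port < 1 || port >= 0xFFFF) {
            throw new IllegalArgumentException("invalid port: " + port);
        }
        return port;
    }
}
```

Compared with a `final class` with a private constructor, the empty enum also blocks reflective and serialization-based instantiation for free.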




[GitHub] [flink] flinkbot commented on issue #10187: [FLINK-13938][yarn] Enable configure shared libraries on YARN

2019-11-13 Thread GitBox
flinkbot commented on issue #10187: [FLINK-13938][yarn] Enable configure shared 
libraries on YARN
URL: https://github.com/apache/flink/pull/10187#issuecomment-553740506
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e1c1980b8e8715f9888d0cacb4eaa8bcd53a4af9 (Thu Nov 14 
06:09:18 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] TisonKun commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346139740
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteStreamEnvironment.java
 ##
 @@ -58,27 +80,47 @@
 *   user-defined input formats, or any 
libraries, those must be
 */
public ScalaShellRemoteStreamEnvironment(
-   String host,
-   int port,
-   FlinkILoop flinkILoop,
-   Configuration configuration,
-   String... jarFiles) {
+   String host,
+   int port,
+   FlinkILoop flinkILoop,
+   Configuration configuration,
+   String... jarFiles) {
+   if (!ExecutionEnvironment.areExplicitEnvironmentsAllowed()) {
+   throw new InvalidProgramException(
+   "The RemoteEnvironment cannot be instantiated 
when running in a pre-defined context " +
+   "(such as Command Line Client, Scala 
Shell, or TestEnvironment)");
+   }
 
-   super(host, port, configuration, jarFiles);
+   checkNotNull(host);
+   checkArgument(1 <= port && port < 0xffff);
+
+   this.host = host;
+   this.port = port;
this.flinkILoop = flinkILoop;
+   this.configuration = configuration != null ? configuration : 
new Configuration();
+
+   if (jarFiles != null) {
+   this.jarFiles = new ArrayList<>(jarFiles.length);
 
 Review comment:
   formatted.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
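The refactored constructor quoted above replaces the `super(...)` call with explicit precondition checks before assigning fields. As a minimal, self-contained sketch of that guard style (plain-Java stand-ins for Flink's `Preconditions` helpers; the class and method names here are made up for illustration):

```java
import java.util.Objects;

public class PortPreconditions {

    /** Null check with a message, in the style of Preconditions.checkNotNull. */
    public static String checkHost(String host) {
        return Objects.requireNonNull(host, "host must not be null");
    }

    /** Range check in the style of checkArgument: accepted range is [1, 0xffff). */
    public static int checkPort(int port) {
        if (!(1 <= port && port < 0xffff)) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;
    }

    public static void main(String[] args) {
        System.out.println(checkHost("localhost") + ":" + checkPort(6123));
    }
}
```

Failing fast in the constructor keeps an invalid environment from being created at all, instead of surfacing a bad host or port only at execute time.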


[GitHub] [flink] TisonKun commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
TisonKun commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346139693
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteEnvironment.java
 ##
 @@ -19,51 +19,101 @@
  * limitations under the License.
  */
 
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.InvalidProgramException;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.Plan;
 import org.apache.flink.api.common.PlanExecutor;
 import org.apache.flink.api.scala.FlinkILoop;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.JarUtils;
+import org.apache.flink.util.function.TriFunction;
 
+import java.io.File;
+import java.io.IOException;
 import java.net.MalformedURLException;
 import java.net.URL;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
- * Special version of {@link org.apache.flink.api.java.RemoteEnvironment} that 
has a reference
- * to a {@link org.apache.flink.api.scala.FlinkILoop}. When execute is called 
this will
- * use the reference of the ILoop to write the compiled classes of the current 
session to
- * a Jar file and submit these with the program.
+ * A remote {@link ExecutionEnvironment} for the Scala shell.
 
 Review comment:
   It still connects to a remote cluster from within the Scala shell.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun opened a new pull request #10187: [FLINK-13938][yarn] Enable configure shared libraries on YARN

2019-11-13 Thread GitBox
TisonKun opened a new pull request #10187: [FLINK-13938][yarn] Enable configure 
shared libraries on YARN
URL: https://github.com/apache/flink/pull/10187
 
 
   ## What is the purpose of the change
   
   Enable configure shared libraries on YARN. See also 
https://issues.apache.org/jira/browse/FLINK-13938
   
   ## Brief change log
   
   - Add command-line option `-ysl` `--yarnsharedLibs` and config option 
`yarn.shared-libraries`
   - respect options
   
   ## Verifying this change
   
   I am looking into adding a test case, but it seems hard without a real HDFS 
environment. Maybe e2e tests are required. But one can still verify it manually.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes, enable configure 
shared libs on YARN)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs)
   
   cc @wangyang0918 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13938) Use yarn public distributed cache to speed up containers launch

2019-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13938:
---
Labels: pull-request-available  (was: )

> Use yarn public distributed cache to speed up containers launch
> ---
>
> Key: FLINK-13938
> URL: https://issues.apache.org/jira/browse/FLINK-13938
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / YARN
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>
> By default, the LocalResourceVisibility is APPLICATION, so the jars will be 
> downloaded only once and shared by all taskmanager containers of the same 
> application on the same node. However, different applications still have to 
> download all jars every time, including flink-dist.jar. I think we could 
> use the YARN public cache to eliminate the unnecessary jar downloads and 
> make container launching faster.
>  
> How to use the shared lib feature?
>  # Upload a copy of flink release binary to hdfs.
>  # Use the -ysl argument to specify the shared lib
> {code:java}
> ./bin/flink run -d -m yarn-cluster -p 20 -ysl 
> hdfs:///flink/release/flink-1.9.0/lib examples/streaming/WindowJoin.jar{code}
>  
> -ysl, --yarnsharedLib    Upload a copy of the flink lib beforehand and
>                          specify the path, to use the public visibility
>                          feature of YARN NodeManager resource localization.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10186: [FLINK-14759][flink-runtime]Remove unused class TaskManagerCliOptions

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10186: [FLINK-14759][flink-runtime]Remove 
unused class TaskManagerCliOptions
URL: https://github.com/apache/flink/pull/10186#issuecomment-553732980
 
 
   
   ## CI report:
   
   * bc053af0629f4f0b8db14aaaddac474b0a750353 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/136456400)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10084: [FLINK-14382][yarn] Incorrect handling of FLINK_PLUGINS_DIR on Yarn

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10084: [FLINK-14382][yarn] Incorrect 
handling of FLINK_PLUGINS_DIR on Yarn
URL: https://github.com/apache/flink/pull/10084#issuecomment-549724300
 
 
   
   ## CI report:
   
   * 0fca3b7914151ce638faa282a6c75096c6f404ed : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134988810)
   * e3825d38d11026310b0fa2e295d1771822a5c63a : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135005005)
   * dfeb8e6057b760a74ea554114d4bf7974a727e9a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135010794)
   * 4a28fdcf8599d49f80cfef13a45c4325a2b7e55b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135157529)
   * 4125d88542be1f188bc85bd92e48abcc72c7bd3d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230098)
   * 9236fbc6bac84c59028dadc67fa38bf7ebf1e626 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135613273)
   * 6e10fa89a0dd482c6638bf0b66b33a28caf4ce3a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135617417)
   * 1c1ba364d57f2f1dec91606697982732b34a2048 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135628954)
   * 06ad7ff1d844e51dfdc22782eabb75d8dd45224a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135742136)
   * eefe895840f311491de77d5e4b422eec312f7f41 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136122485)
   * ca3242dd322f486591de1e7b790dc9c3982226af : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136452707)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjffdu commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
zjffdu commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346135396
 
 

 ##
 File path: 
flink-clients/src/main/java/org/apache/flink/client/program/PackagedProgram.java
 ##
 @@ -528,8 +529,9 @@ public static void deleteExtractedLibraries(List<File> 
tempLibraries) {
 
private static void checkJarFile(URL jarfile) throws 
ProgramInvocationException {
 
 Review comment:
   Don't we still need this checkJarFile method? It looks like we can call 
`JarUtils.checkJarFile` directly?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
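For context on what such a check involves: validating a jar needs only `java.util.jar` from the JDK, since `JarFile` verifies the zip structure on open. A hedged sketch of the idea (hypothetical helper, not Flink's actual `JarUtils.checkJarFile` implementation):

```java
import java.io.File;
import java.io.IOException;
import java.util.jar.JarFile;

public class JarCheck {

    /** Throws IOException if the file is missing or not a structurally valid jar. */
    public static void checkJarFile(File file) throws IOException {
        if (!file.exists()) {
            throw new IOException("Jar file does not exist: " + file);
        }
        // Opening a JarFile parses the zip central directory; a corrupt
        // or non-jar file fails here with a ZipException (an IOException).
        try (JarFile ignored = new JarFile(file)) {
            // opened successfully -> valid enough to submit
        }
    }
}
```

Whether this lives as a private method in `PackagedProgram` or as a shared `JarUtils` utility is exactly the question the review raises; the check itself is the same either way.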


[GitHub] [flink] zjffdu commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
zjffdu commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346135013
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/JarUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.jar.JarFile;
+
+/**
+ * Utilities for jar.
+ */
+public enum JarUtils {
+   ;
 
 Review comment:
   Unnecessary `;`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
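One note on the `;` flagged above: when an enum declares no constants but does declare members, the Java grammar requires a semicolon to terminate the (empty) constant list, so the line cannot simply be dropped. A small sketch of this zero-instance utility-holder idiom (illustrative class and method names, not Flink's `JarUtils`):

```java
/**
 * Utility holder as an enum with no constants: it cannot be instantiated
 * and cannot be subclassed, which is the point of the idiom.
 */
public enum MathUtils {
    ; // required: terminates the empty enum-constant list before members

    public static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }
}
```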


[GitHub] [flink] zjffdu commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
zjffdu commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346134785
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteStreamEnvironment.java
 ##
 @@ -58,27 +80,47 @@
 *   user-defined input formats, or any 
libraries, those must be
 */
public ScalaShellRemoteStreamEnvironment(
-   String host,
-   int port,
-   FlinkILoop flinkILoop,
-   Configuration configuration,
-   String... jarFiles) {
+   String host,
+   int port,
+   FlinkILoop flinkILoop,
+   Configuration configuration,
+   String... jarFiles) {
+   if (!ExecutionEnvironment.areExplicitEnvironmentsAllowed()) {
+   throw new InvalidProgramException(
+   "The RemoteEnvironment cannot be instantiated 
when running in a pre-defined context " +
+   "(such as Command Line Client, Scala 
Shell, or TestEnvironment)");
+   }
 
-   super(host, port, configuration, jarFiles);
+   checkNotNull(host);
+   checkArgument(1 <= port && port < 0xffff);
+
+   this.host = host;
+   this.port = port;
this.flinkILoop = flinkILoop;
+   this.configuration = configuration != null ? configuration : 
new Configuration();
+
+   if (jarFiles != null) {
+   this.jarFiles = new ArrayList<>(jarFiles.length);
 
 Review comment:
   Format issue? Looks like more than 4 spaces.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjffdu commented on a change in pull request #10104: [FLINK-14629][client] Refactor ScalaShellRemote(Stream)Environment to simplify inheritance

2019-11-13 Thread GitBox
zjffdu commented on a change in pull request #10104: [FLINK-14629][client] 
Refactor ScalaShellRemote(Stream)Environment to simplify inheritance
URL: https://github.com/apache/flink/pull/10104#discussion_r346134481
 
 

 ##
 File path: 
flink-scala-shell/src/main/java/org/apache/flink/api/java/ScalaShellRemoteEnvironment.java
 ##
 @@ -19,51 +19,101 @@
  * limitations under the License.
  */
 
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.InvalidProgramException;
 import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.Plan;
 import org.apache.flink.api.common.PlanExecutor;
 import org.apache.flink.api.scala.FlinkILoop;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.JarUtils;
+import org.apache.flink.util.function.TriFunction;
 
+import java.io.File;
+import java.io.IOException;
 import java.net.MalformedURLException;
 import java.net.URL;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
 /**
- * Special version of {@link org.apache.flink.api.java.RemoteEnvironment} that 
has a reference
- * to a {@link org.apache.flink.api.scala.FlinkILoop}. When execute is called 
this will
- * use the reference of the ILoop to write the compiled classes of the current 
session to
- * a Jar file and submit these with the program.
+ * A remote {@link ExecutionEnvironment} for the Scala shell.
 
 Review comment:
   Why do we still call it `remote`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10186: [FLINK-14759][flink-runtime]Remove unused class TaskManagerCliOptions

2019-11-13 Thread GitBox
flinkbot commented on issue #10186: [FLINK-14759][flink-runtime]Remove unused 
class TaskManagerCliOptions
URL: https://github.com/apache/flink/pull/10186#issuecomment-553732980
 
 
   
   ## CI report:
   
   * bc053af0629f4f0b8db14aaaddac474b0a750353 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10185: [FLINK-14762][client] Implement ClusterClientJobClientAdapter

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10185: [FLINK-14762][client] Implement 
ClusterClientJobClientAdapter
URL: https://github.com/apache/flink/pull/10185#issuecomment-553711316
 
 
   
   ## CI report:
   
   * 0e693452ffee7136507f8b1e57431dfba28f9647 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136447518)
   * 6802cc2a20b7faa25a2d72fd1db90663eb07a739 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136449843)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xintongsong commented on a change in pull request #10180: [FLINK-14631] Account for netty direct allocations in direct memory limit (Netty Shuffle)

2019-11-13 Thread GitBox
xintongsong commented on a change in pull request #10180: [FLINK-14631] Account 
for netty direct allocations in direct memory limit (Netty Shuffle)
URL: https://github.com/apache/flink/pull/10180#discussion_r346132449
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/configuration/NettyShuffleEnvironmentOptions.java
 ##
 @@ -175,7 +175,8 @@
 
public static final ConfigOption<Integer> NUM_ARENAS =
key("taskmanager.network.netty.num-arenas")
-   .defaultValue(-1)
+   .intType()
+   .defaultValue(1)
 
 Review comment:
   It seems that you changed the default value without updating/rebuilding the 
docs.
   There's a test case (`ConfigOptionsDocsCompletenessITCase`) failing because 
of this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xintongsong commented on a change in pull request #10180: [FLINK-14631] Account for netty direct allocations in direct memory limit (Netty Shuffle)

2019-11-13 Thread GitBox
xintongsong commented on a change in pull request #10180: [FLINK-14631] Account 
for netty direct allocations in direct memory limit (Netty Shuffle)
URL: https://github.com/apache/flink/pull/10180#discussion_r346129548
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/configuration/NettyShuffleEnvironmentOptions.java
 ##
 @@ -175,7 +175,8 @@
 
public static final ConfigOption<Integer> NUM_ARENAS =
key("taskmanager.network.netty.num-arenas")
-   .defaultValue(-1)
+   .intType()
 
 Review comment:
   Why do we need this line? Isn't this option already declared as 
`ConfigOption<Integer>`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
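For readers unfamiliar with `.intType()`: Flink's newer `ConfigOptions` builder fixes the option's value type explicitly instead of inferring it from the default value. A simplified, self-contained sketch of that typed-builder pattern (just the shape of the API, not Flink's actual classes):

```java
public final class ConfigOption<T> {
    private final String key;
    private final T defaultValue;

    private ConfigOption(String key, T defaultValue) {
        this.key = key;
        this.defaultValue = defaultValue;
    }

    public String key() { return key; }
    public T defaultValue() { return defaultValue; }

    /** Entry point, mirroring the key(...).intType().defaultValue(...) chain. */
    public static Builder key(String key) {
        return new Builder(key);
    }

    public static final class Builder {
        private final String key;
        private Builder(String key) { this.key = key; }

        // Declaring the type here keeps defaultValue(...) type-checked.
        public TypedBuilder<Integer> intType() { return new TypedBuilder<>(key); }
        public TypedBuilder<String> stringType() { return new TypedBuilder<>(key); }
    }

    public static final class TypedBuilder<T> {
        private final String key;
        private TypedBuilder(String key) { this.key = key; }

        public ConfigOption<T> defaultValue(T value) {
            return new ConfigOption<>(key, value);
        }
    }
}
```

With this shape, `ConfigOption.key("taskmanager.network.netty.num-arenas").intType().defaultValue(1)` yields a `ConfigOption<Integer>` whose type is known even before a default is supplied.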


[GitHub] [flink] JingsongLi commented on issue #10183: [FLINK-14743][table-blink] Optimize BaseRowSerializer to use Projection instead of switch

2019-11-13 Thread GitBox
JingsongLi commented on issue #10183: [FLINK-14743][table-blink] Optimize 
BaseRowSerializer to use Projection instead of switch
URL: https://github.com/apache/flink/pull/10183#issuecomment-553729738
 
 
   > What I meant is runtime should only compile generated codes to class, all 
code generation logic should happen in client, at least for now.
   
   I got your concern. In the future, we can run jobs in the cluster without 
non-runtime code (e.g. codegen).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on issue #10183: [FLINK-14743][table-blink] Optimize BaseRowSerializer to use Projection instead of switch

2019-11-13 Thread GitBox
KurtYoung commented on issue #10183: [FLINK-14743][table-blink] Optimize 
BaseRowSerializer to use Projection instead of switch
URL: https://github.com/apache/flink/pull/10183#issuecomment-553729348
 
 
   What I meant is runtime should only compile generated codes to class, all 
code generation logic should happen in client, at least for now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on issue #10183: [FLINK-14743][table-blink] Optimize BaseRowSerializer to use Projection instead of switch

2019-11-13 Thread GitBox
JingsongLi commented on issue #10183: [FLINK-14743][table-blink] Optimize 
BaseRowSerializer to use Projection instead of switch
URL: https://github.com/apache/flink/pull/10183#issuecomment-553726864
 
 
   > -1, I don't think any code generation logic should happen during runtime.
   
   How about moving the generation of `GeneratedProjection` to the constructor of 
`BaseRowSerializer`? Is that what you mean? Or should `BaseRowSerializer` just not 
see `ProjectionCodeGenerator`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10124: [FLINK-14637] Introduce framework off heap memory config

2019-11-13 Thread GitBox
flinkbot edited a comment on issue #10124: [FLINK-14637] Introduce framework 
off heap memory config
URL: https://github.com/apache/flink/pull/10124#issuecomment-551369957
 
 
   
   ## CI report:
   
   * 4f26ef56d72e62c90fd32ce1ad30766a138405cd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567455)
   * 10caf5a651b06909e67dd43093660be382f4e2b4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135782647)
   * 936edc53a0fe92c91a041cbcc81ba072876afe5d : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136440219)
   * a7086b26ad141b709cf6a37d6bd0c9451cc9b68a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136442059)
   * 9407537830c98e27cf96ed0dbf6b5acce8714cb1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136449830)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10186: [FLINK-14759][flink-runtime]Remove unused class TaskManagerCliOptions

2019-11-13 Thread GitBox
flinkbot commented on issue #10186: [FLINK-14759][flink-runtime]Remove unused 
class TaskManagerCliOptions
URL: https://github.com/apache/flink/pull/10186#issuecomment-553725151
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit bc053af0629f4f0b8db14aaaddac474b0a750353 (Thu Nov 14 
05:00:40 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


  1   2   3   4   5   6   >