[jira] [Commented] (FLINK-14164) Add a metric to show failover count regarding fine grained recovery

2019-10-31 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963983#comment-16963983
 ] 

Till Rohrmann commented on FLINK-14164:
---

The plan sounds good to me [~zhuzh]. I've assigned this ticket to you.

> Add a metric to show failover count regarding fine grained recovery
> ---
>
> Key: FLINK-14164
> URL: https://issues.apache.org/jira/browse/FLINK-14164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / Metrics
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
> Fix For: 1.10.0
>
>
> Previously, Flink used the restart-all strategy to recover jobs from 
> failures, and the metric "fullRestart" showed the count of failovers.
> However, with fine grained recovery introduced in 1.9.0, the "fullRestart" 
> metric only reveals how many times the entire graph has been restarted; it 
> does not include the number of fine grained failure recoveries.
> As many users want to build their job alerting based on failovers, I'd 
> propose adding a new metric {{numberOfRestarts}} which also counts fine 
> grained recoveries. The metric should be a meter ({{MeterView}}) so that 
> users can leverage the rate to detect newly occurred failures rather than 
> computing the rate themselves.
> The MeterView should be added in SchedulerBase to serve both the legacy 
> scheduler and the NG scheduler.
> The underlying counter of the MeterView is determined by the scheduler 
> implementation:
> 1. for the legacy scheduler, it's the 
> {{ExecutionGraph#numberOfRestartsCounter}} added in FLINK-14206
> 2. for the NG scheduler, it's a new counter added in 
> {{ExecutionFailureHandler}} that counts all task and global failures 
> reported to it.
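The meter-over-counter design described in the quoted proposal can be sketched in plain Java. This is a minimal, self-contained illustration; the class names and update mechanics below are simplified stand-ins, not Flink's actual Counter/MeterView API:

```java
// Hypothetical sketch of the proposed numberOfRestarts meter: a counter
// incremented on every (fine grained or global) recovery, wrapped by a meter
// that derives a rate, so users can alert on the rate instead of computing
// differences themselves.
public class RestartMeterSketch {

    /** Counter incremented by the scheduler on every restart. */
    static class RestartCounter {
        private long count;
        void inc() { count++; }
        long getCount() { return count; }
    }

    /** Meter deriving events-per-second from the counter, akin to a MeterView. */
    static class RestartMeter {
        private final RestartCounter counter;
        private long lastCount;
        private double rate;

        RestartMeter(RestartCounter counter) { this.counter = counter; }

        /** Called periodically by a metrics updater thread. */
        void update(long elapsedSeconds) {
            long current = counter.getCount();
            rate = (current - lastCount) / (double) elapsedSeconds;
            lastCount = current;
        }

        double getRate() { return rate; }
    }

    public static void main(String[] args) {
        RestartCounter counter = new RestartCounter();
        RestartMeter meter = new RestartMeter(counter);
        // three failovers (e.g. two fine grained, one global) within 10 seconds
        counter.inc(); counter.inc(); counter.inc();
        meter.update(10);
        System.out.println(counter.getCount() + " restarts, rate=" + meter.getRate());
    }
}
```

With this shape, the scheduler implementation only has to expose the counter; the meter and its rate derivation are shared.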



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14164) Add a metric to show failover count regarding fine grained recovery

2019-10-31 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-14164:
-

Assignee: Zhu Zhu

> Add a metric to show failover count regarding fine grained recovery
> ---
>
> Key: FLINK-14164
> URL: https://issues.apache.org/jira/browse/FLINK-14164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / Metrics
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
> Fix For: 1.10.0
>
>
> Previously, Flink used the restart-all strategy to recover jobs from 
> failures, and the metric "fullRestart" showed the count of failovers.
> However, with fine grained recovery introduced in 1.9.0, the "fullRestart" 
> metric only reveals how many times the entire graph has been restarted; it 
> does not include the number of fine grained failure recoveries.
> As many users want to build their job alerting based on failovers, I'd 
> propose adding a new metric {{numberOfRestarts}} which also counts fine 
> grained recoveries. The metric should be a meter ({{MeterView}}) so that 
> users can leverage the rate to detect newly occurred failures rather than 
> computing the rate themselves.
> The MeterView should be added in SchedulerBase to serve both the legacy 
> scheduler and the NG scheduler.
> The underlying counter of the MeterView is determined by the scheduler 
> implementation:
> 1. for the legacy scheduler, it's the 
> {{ExecutionGraph#numberOfRestartsCounter}} added in FLINK-14206
> 2. for the NG scheduler, it's a new counter added in 
> {{ExecutionFailureHandler}} that counts all task and global failures 
> reported to it.





[GitHub] [flink] wsry commented on a change in pull request #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-31 Thread GitBox
wsry commented on a change in pull request #9993: 
[FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting 
with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#discussion_r341136224
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPool.java
 ##
 @@ -269,6 +267,17 @@ private MemorySegment requestMemorySegmentFromGlobal() 
throws IOException {
if (segment != null) {
numberOfRequestedMemorySegments++;
return segment;
+   } else if (isBlocking) {
+   // if the future is completed before the 
callback is registered,
+   // the request thread will wait 2s before 
polling an available
+   // segment from the global pool, which is not a 
big problem.
 
 Review comment:
   I think it should work. The advantage of this implementation is that it can 
reduce useless wakeups and avoid the 2-second wait if the future is completed 
immediately. 
   Maybe we need to do two more things: one is to wrap the 
```ExecutionException``` as an ```IOException``` or ```InterruptedException``` 
to avoid changing the method signature (including the callers); the other is to 
complete the availability future when the buffer pool is destroyed, to unblock 
the main thread.
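The two follow-ups above (wrapping the exception to keep the method signature, and completing the future on destroy) might look roughly like this. This is an illustrative sketch with hypothetical names, not the actual LocalBufferPool code:

```java
// Hypothetical sketch: a requester blocks on an availability future; the
// ExecutionException is wrapped as IOException so callers keep their existing
// throws clause, and destroy() completes the future to release a blocked thread.
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class AvailabilityWaitSketch {
    private final CompletableFuture<Void> available = new CompletableFuture<>();
    private volatile boolean destroyed;

    /** Blocks until the pool signals availability (or is destroyed). */
    public void waitForAvailability() throws IOException, InterruptedException {
        try {
            available.get();
        } catch (ExecutionException e) {
            // wrap so callers keep their existing "throws IOException" clause
            throw new IOException("Buffer request failed.", e.getCause());
        }
        if (destroyed) {
            throw new IOException("Buffer pool was destroyed.");
        }
    }

    /** Called when memory segments are recycled back to the pool. */
    public void signalAvailable() {
        available.complete(null);
    }

    /** Destroy completes the future so a blocked requester is released. */
    public void destroy() {
        destroyed = true;
        available.complete(null);
    }
}
```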


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [flink] zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope 
of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341174345
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotAndTableFactory.java
 ##
 @@ -46,8 +48,8 @@
 
public static TaskSlotTable createTaskSlotTableWithDefaultSlots(int 
numberOfSlots) {
return createTaskSlotTableWithDefaultSlots(
-   numberOfSlots,
-   
createDefaultTimerService(DEFAULT_SLOT_TIMEOUT));
+   numberOfSlots,
 
 Review comment:
   belongs to previous commit




[GitHub] [flink] zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope 
of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341173178
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotAndTableFactory.java
 ##
 @@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor.slot;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.configuration.MemorySize;
+import org.apache.flink.runtime.clusterframework.types.AllocationID;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.memory.MemoryManager;
+import org.apache.flink.runtime.testingUtils.TestingUtils;
+
+import java.util.Collections;
+import java.util.List;
+
+/** Factory for {@link TaskSlotTable} and {@link TaskSlot}s. */
+public enum TaskSlotAndTableFactory {
 
 Review comment:
   maybe call this `ResourceTestUtils` or similar?




[GitHub] [flink] zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope 
of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341179474
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerServices.java
 ##
 @@ -258,15 +253,15 @@ public static TaskManagerServices fromConfiguration(

taskManagerServicesConfiguration.getTaskManagerAddress(),
dataPort);
 
-   // this call has to happen strictly after the network stack has 
been initialized
-   final MemoryManager memoryManager = 
createMemoryManager(taskManagerServicesConfiguration);
-
final BroadcastVariableManager broadcastVariableManager = new 
BroadcastVariableManager();
 
+   // this call has to happen strictly after the network stack has 
been initialized
 
 Review comment:
   let's clarify this comment; it is difficult to pinpoint which statement it 
applies to, since the MemoryManager is created 2 levels into the _next_ line




[GitHub] [flink] zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope 
of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341175289
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/memory/MemoryManager.java
 ##
 @@ -192,7 +171,6 @@ public boolean isShutdown() {
 *
 * @return True, if the memory manager is empty and valid, false if it 
is not empty or corrupted.
 */
-   @VisibleForTesting
 
 Review comment:
   ping




[GitHub] [flink] zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
zentol commented on a change in pull request #10034: [FLINK-14400] Shrink scope 
of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341172664
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotAndTableFactory.java
 ##
 @@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor.slot;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.configuration.MemorySize;
+import org.apache.flink.runtime.clusterframework.types.AllocationID;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.memory.MemoryManager;
+import org.apache.flink.runtime.testingUtils.TestingUtils;
+
+import java.util.Collections;
+import java.util.List;
+
+/** Factory for {@link TaskSlotTable} and {@link TaskSlot}s. */
+public enum TaskSlotAndTableFactory {
+   ;
+
+   private static final long DEFAULT_SLOT_TIMEOUT = 1L;
+
+   private static final ResourceProfile DEFAULT_RESOURCE_PROFILE =
+   new ResourceProfile(
+   Double.MAX_VALUE,
+   MemorySize.MAX_VALUE,
+   MemorySize.MAX_VALUE,
+   new MemorySize(10 * MemoryManager.MIN_PAGE_SIZE),
+   new MemorySize(0),
+   MemorySize.MAX_VALUE,
+   Collections.emptyMap());
+
+   public static TaskSlotTable createTaskSlotTableWithDefaultSlots(int 
numberOfSlots) {
 
 Review comment:
   I'd just call this `createTaskSlotTable`; that the table will contain _some_ 
kind of slot is a given anyway




[GitHub] [flink] GJL closed pull request #10043: [FLINK-14439][runtime] Enable RestartPipelinedRegionStrategy to leverage JM tracked partition availability in DefaultScheduler

2019-10-31 Thread GitBox
GJL closed pull request #10043: [FLINK-14439][runtime] Enable 
RestartPipelinedRegionStrategy to leverage JM tracked partition availability in 
DefaultScheduler
URL: https://github.com/apache/flink/pull/10043
 
 
   




[GitHub] [flink] GJL closed pull request #10044: [FLINK-14440][tests] Annotate BatchFineGrainedRecoveryITCase to enable scheduler NG testing for it

2019-10-31 Thread GitBox
GJL closed pull request #10044: [FLINK-14440][tests] Annotate 
BatchFineGrainedRecoveryITCase to enable scheduler NG testing for it
URL: https://github.com/apache/flink/pull/10044
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #10061: [FLINK-14581][python] Let python UDF execution no longer rely on the flink directory structure to support running python UDFs on yarn.

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10061: [FLINK-14581][python] Let python UDF 
execution no longer rely on the flink directory structure to support running 
python UDFs on yarn.
URL: https://github.com/apache/flink/pull/10061#issuecomment-548335375
 
 
   
   ## CI report:
   
   * 6f081dffc4e0da56df96f1535e796d2b6e8bb045 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380778)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10042: [FLINK-11466][e2e] Implement LocalStandaloneKafkaResource

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10042: [FLINK-11466][e2e] Implement 
LocalStandaloneKafkaResource 
URL: https://github.com/apache/flink/pull/10042#issuecomment-547792578
 
 
   
   ## CI report:
   
   * b523eace793d9168a2816e50891d6227183a7175 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134149629)
   * 876f0f86b208f6fce4d74488081791dcc7b89cf8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134326879)
   * 027d5fe9ca3832c9384a715f3f7e65df3fd64dcf : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #9993: [FLINK-14498][runtime]Introduce 
NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#issuecomment-546219804
 
 
   
   ## CI report:
   
   * f47a9f60dff6710ce5a7d5fe341a94d0fffb2d6d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133499201)
   * 501e86a6e9e8eab7fc26f030d284268d530e093e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134129471)
   * 3e3a090cfc7d9216701b68664e2c8fa4f34861f7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134135547)
   * bb53297bace0091789a1c0fa07e7261a339022b0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134324937)
   * 9af1c8d0e395ce1b197fef96e53512e7189aa12d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134356941)
   * d0a3717005a6e4145f7b51aeb5d04003808b31a0 : UNKNOWN
   * 1df7a8290658f9a7079bfa61524172a39001c6cc : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341116077
 
 

 ##
 File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/mailbox/TaskMailboxImplTest.java
 ##
 @@ -156,26 +156,26 @@ public void testLifeCycleClose() throws Exception {
try {
taskMailbox.take(DEFAULT_PRIORITY);
Assert.fail();
-   } catch (MailboxStateException ignore) {
+   } catch (IllegalStateException ignore) {
 
 Review comment:
   Maybe replace those `try`/`Assert.fail()`/`catch` blocks with 
   
   ```
   @Rule
   public ExpectedException expectedException = ExpectedException.none();
   
   @Test
   public void testLifeCycleClose() throws Exception {
       expectedException.expect(IllegalStateException.class);
       taskMailbox.take(DEFAULT_PRIORITY);
   }
   ```
   ?




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341157710
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/TaskMailboxImpl.java
 ##
 @@ -77,22 +112,30 @@ public boolean hasMail() {
 
@Override
public Optional tryTake(int priority) throws 
IllegalStateException {
+   Optional head = tryTakeFromBatch();
 
 Review comment:
   shouldn't the `tryTake` and `take` methods call `createBatch()` if 
`tryTakeFromBatch` returned `Optional.empty()`?
   
   Do we need the old non batched "take" methods? Shouldn't we just have two: 
`tryTakeFromBatch` and `takeFromBatch`?
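The batching scheme being discussed can be sketched as follows. This is an illustrative, hypothetical class, not TaskMailboxImpl's actual implementation: take() first drains a previously snapshotted batch and only re-creates the batch from the shared queue when the batch runs empty, so newly enqueued mails cannot starve already-batched ones.

```java
// Hypothetical mailbox with a consumer-local batch snapshotted from the
// shared queue; tryTake() falls back to createBatch() as suggested above.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Optional;

public class MailboxBatchSketch<T> {
    private final Deque<T> queue = new ArrayDeque<>();  // shared mailbox queue
    private final Deque<T> batch = new ArrayDeque<>();  // consumer-local batch

    public void put(T mail) {
        synchronized (queue) {
            queue.add(mail);
        }
    }

    /** Moves all currently enqueued mails into the local batch. */
    public boolean createBatch() {
        synchronized (queue) {
            while (!queue.isEmpty()) {
                batch.add(queue.poll());
            }
        }
        return !batch.isEmpty();
    }

    /** Takes only from the already-snapshotted batch. */
    public Optional<T> tryTakeFromBatch() {
        return Optional.ofNullable(batch.poll());
    }

    /** Variant that re-creates the batch when it is empty. */
    public Optional<T> tryTake() {
        Optional<T> head = tryTakeFromBatch();
        if (!head.isPresent()) {
            createBatch();
            head = tryTakeFromBatch();
        }
        return head;
    }
}
```

Mails enqueued after a batch was created are only seen once the current batch is drained, which is exactly the starvation-avoidance property under discussion.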




[GitHub] [flink] flinkbot edited a comment on issue #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10034: [FLINK-14400] Shrink scope of 
MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#issuecomment-547440502
 
 
   
   ## CI report:
   
   * 85d93043d366498b8bdaed5351b74ca47c77d575 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134010434)
   * 6dd1e38a230348bb9e8b489c6f99a6a125cd2cc8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134406685)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] pnowojski merged pull request #10049: [FLINK-14544][runtime] Fixing race condition during task cancellation

2019-10-31 Thread GitBox
pnowojski merged pull request #10049: [FLINK-14544][runtime] Fixing race 
condition during task cancellation
URL: https://github.com/apache/flink/pull/10049
 
 
   




[jira] [Closed] (FLINK-14544) Resuming Externalized Checkpoint after terminal failure (file, async) end-to-end test fails on travis

2019-10-31 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-14544.
--
Fix Version/s: 1.10.0
   Resolution: Fixed

merged to master as 8522f3b 

> Resuming Externalized Checkpoint after terminal failure (file, async) 
> end-to-end test fails on travis
> -
>
> Key: FLINK-14544
> URL: https://issues.apache.org/jira/browse/FLINK-14544
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Arvid Heise
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> From the log we can see the error message below, after which the job was 
> terminated because it exceeded the maximum log length. 
> {code}
> 2019-10-25 20:46:09,361 ERROR org.apache.flink.runtime.taskmanager.Task   
>   - Error while canceling task FailureMapper (1/1).
> java.util.concurrent.RejectedExecutionException: 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: 
> Mailbox is in state CLOSED, but is required to be in state OPEN for put 
> operations.
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:75)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.sendPriorityLetter(MailboxProcessor.java:176)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.allActionsCompleted(MailboxProcessor.java:172)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.cancel(StreamTask.java:540)
>   at 
> org.apache.flink.runtime.taskmanager.Task.cancelInvokable(Task.java:1191)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:764)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:521)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: 
> Mailbox is in state CLOSED, but is required to be in state OPEN for put 
> operations.
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkPutStateConditions(TaskMailboxImpl.java:199)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putHeadInternal(TaskMailboxImpl.java:141)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putFirst(TaskMailboxImpl.java:131)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:73)
>   ... 7 more
> {code}
> https://api.travis-ci.org/v3/job/602788586/log.txt
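A minimal sketch of the put-after-close race behind the quoted stack trace (hypothetical classes, not Flink's mailbox code): a put into an already-closed mailbox fails, so a cancellation path must either tolerate or prevent it.

```java
// Hypothetical mailbox: putFirst() on a CLOSED mailbox throws, reproducing
// the failure mode in the stack trace; tryPutFirst() is a tolerant variant
// a cancellation path could use instead.
import java.util.ArrayDeque;
import java.util.Deque;

public class ClosedMailboxSketch {
    enum State { OPEN, CLOSED }

    private final Deque<Runnable> mails = new ArrayDeque<>();
    private State state = State.OPEN;

    public synchronized void putFirst(Runnable mail) {
        if (state != State.OPEN) {
            throw new IllegalStateException(
                "Mailbox is in state " + state
                    + ", but is required to be in state OPEN for put operations.");
        }
        mails.addFirst(mail);
    }

    public synchronized void close() {
        state = State.CLOSED;
        mails.clear();
    }

    /** Tolerant variant: silently drop the mail if the mailbox already closed. */
    public synchronized boolean tryPutFirst(Runnable mail) {
        if (state != State.OPEN) {
            return false;
        }
        mails.addFirst(mail);
        return true;
    }
}
```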





[GitHub] [flink] lirui-apache opened a new pull request #10062: [FLINK-14588][hive] Support Hive version 1.0.0 and 1.0.1

2019-10-31 Thread GitBox
lirui-apache opened a new pull request #10062: [FLINK-14588][hive] Support Hive 
version 1.0.0 and 1.0.1
URL: https://github.com/apache/flink/pull/10062
 
 
   
   
   ## What is the purpose of the change
   
   To support Hive 1.0.0 and 1.0.1.
   
   
   ## Brief change log
   
 - Added profiles and shim implementations for these two versions.
 - New shim interfaces are added to create a Hive record writer and to get 
the schema from a deserializer, since the related util classes are 
incompatible between old and new Hive versions.
 - Misc changes to accommodate the two versions.
   
   
   ## Verifying this change
   
   Covered by existing tests; verified that tests pass for all profiles.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA
   




[GitHub] [flink] flinkbot edited a comment on issue #10002: [FLINK-14074][mesos] Forward configuration to Mesos TaskExecutor

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10002: [FLINK-14074][mesos] Forward 
configuration to Mesos TaskExecutor
URL: https://github.com/apache/flink/pull/10002#issuecomment-546597933
 
 
   
   ## CI report:
   
   * 4c13fd02d0f6f98e18ddb04bd11581e0c378a7ca : UNKNOWN
   * 083f6a67ae04af531615736c2fc518c11ab06626 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133664128)
   * a53452cfc9aa87118cea0e633edb80dcc19685a3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133668431)
   * 2a5a0bc1f8345ea254893b1a66b53aa86676d276 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133675789)
   * 6f65d099e13c8fcdc010965e908ac7c8acb4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133677952)
   * 6c1d4bbb93ef498d6c3e79b1ad3770e363ec8ccb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134033751)
   * 16002bf66f6f7137fb65b2d57b974af0403d9a6c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134040124)
   * 9faf100a695c6fbecd80df9b6ccbe10071e5bcdb : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134169902)
   * 80f7ca7f80283142e0607e96e16c198b556e2577 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134363520)
   * 049eb1d6f2f678e6796720fcee17c42a1c0d131d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380692)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10003: [BP-1.9][FLINK-14074][mesos] Forward configuration to Mesos TaskExecutor

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10003: [BP-1.9][FLINK-14074][mesos] Forward 
configuration to Mesos TaskExecutor 
URL: https://github.com/apache/flink/pull/10003#issuecomment-546599805
 
 
   
   ## CI report:
   
   * f51cd353402817c2a22946c857fe80cf6954fda3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133664795)
   * ee4bf5706f13dc7d138713c449413dcb74cf04c4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133668439)
   * 873cbb047958b21ce84ebb899ecc888fde61c94a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133674551)
   * f7f2f1e15448b13b6780e383f3047f60d9882e89 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133675793)
   * 680b47f939aabcc3985c37be38e1dca381d1df69 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/133677129)
   * 2854856b0293358139de51a4af3c45d92da5a41e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133677961)
   * 37d616bb52df01f8495bc65d1c55205bda68a01d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134033800)
   * 6542825ca4eca86082142d695f7bf0ca642a64f1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134040157)
   * f56db617b70806d025432398694fd9006a9d00e6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134046257)
   * 6e40853bee1d1caa44b08f577260b8c93b329651 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134169951)
   * 44be6b414297d000c5a871b1532d9ea667ed8602 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134363537)
   * 0af24d79504c217dd155d1d58cbca29c65d546e2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380713)
   
   
   




[jira] [Updated] (FLINK-14588) Support Hive version 1.0.0 and 1.0.1

2019-10-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14588:
---
Labels: pull-request-available  (was: )

> Support Hive version 1.0.0 and 1.0.1
> 
>
> Key: FLINK-14588
> URL: https://issues.apache.org/jira/browse/FLINK-14588
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on issue #10062: [FLINK-14588][hive] Support Hive version 1.0.0 and 1.0.1

2019-10-31 Thread GitBox
lirui-apache commented on issue #10062: [FLINK-14588][hive] Support Hive 
version 1.0.0 and 1.0.1
URL: https://github.com/apache/flink/pull/10062#issuecomment-548371621
 
 
   cc @xuefuz @bowenli86 @zjuwangg 




[GitHub] [flink] flinkbot edited a comment on issue #9972: [FLINK-14496][client] Exclude detach flag from ClusterClient

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #9972: [FLINK-14496][client] Exclude detach 
flag from ClusterClient
URL: https://github.com/apache/flink/pull/9972#issuecomment-545092005
 
 
   
   ## CI report:
   
   * 355939385e2951dd0da0e3fe1da8feb7dbc27f61 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133049329)
   * 3eaa1b59726a2695f71bc6bc8cfe75f6a085a5d0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133107682)
   * c08dc75534c18bc5bd35e6f270ef4e1eba9c39ea : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133109491)
   * b20e536e7f2b867cab66e7f4cef402664c31aa33 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133113346)
   * 63610043d2e1ba1ca58e70d35285d9eff500b31b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133133305)
   * 565fab1d144edf946e99cbe29398e3699dc43abd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133175206)
   * 090564e34d866c8d823312a218ff6c3aadd3b366 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/134189153)
   * a24ba575dbbd17f9cd06b3906703617f81c65761 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134194013)
   * 8fc8c7591b97a0687816b7df233103fa6bd7a3a4 : UNKNOWN
   * b8971704faa08bf64a2dabcd36899c16ec32719a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134229119)
   * ba1c4b1d076b25e09850d68a2856f8db169aa246 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/134389814)
   
   
   




[GitHub] [flink] tillrohrmann commented on a change in pull request #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #10006: 
[FLINK-14312][runtime] Support building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#discussion_r341145835
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/topology/DefaultLogicalTopology.java
 ##
 @@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobgraph.topology;
+
+import 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSet;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Default implementation of {@link LogicalTopology}.
+ * It is an adapter of {@link JobGraph}.
+ */
+public class DefaultLogicalTopology implements LogicalTopology<DefaultLogicalVertex, DefaultLogicalResult> {
+
+   private final boolean containsCoLocationConstraints;
+
+   private final List<DefaultLogicalVertex> verticesSorted;
+
+   private final Map<JobVertexID, DefaultLogicalVertex> idToVertexMap;
+
+   private final Map<IntermediateDataSetID, DefaultLogicalResult> idToResultMap;
+
+   private transient Set<LogicalPipelinedRegion> regions;
+
+   public DefaultLogicalTopology(final JobGraph jobGraph) {
+   checkNotNull(jobGraph);
+
+   this.containsCoLocationConstraints = 
IterableUtils.toStream(jobGraph.getVertices())
+   .map(JobVertex::getCoLocationGroup)
+   .anyMatch(Objects::nonNull);
+
+   this.verticesSorted = new 
ArrayList<>(jobGraph.getNumberOfVertices());
+   this.idToVertexMap = new HashMap<>();
+   this.idToResultMap = new HashMap<>();
+
+   buildVerticesAndResults(jobGraph);
+   }
+
+   private void buildVerticesAndResults(final JobGraph jobGraph) {
+   for (JobVertex jobVertex : 
jobGraph.getVerticesSortedTopologicallyFromSources()) {
+
+   final DefaultLogicalVertex logicalVertex = new 
DefaultLogicalVertex(jobVertex, this::getResult);
 
 Review comment:
   Not sure how bad this is but I think we create multiple lookup functions 
with `this::getResult` because it is computed within the for loop. One could 
move this out of the loop. Might be premature optimization, though.
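A standalone sketch of the hoisting being suggested (hypothetical class and method names, not the PR's code): a method reference that captures `this` may allocate a new function object each time the expression is evaluated, so capturing it once before the loop reuses a single instance.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical stand-in for the vertex/result lookup in DefaultLogicalTopology.
public class LookupSketch {

	private String lookup(String id) {
		return "result-" + id;
	}

	// Variant discussed in the review: `this::lookup` is evaluated on every
	// iteration and may allocate a new Function instance each time.
	public List<Function<String, String>> buildInLoop(List<String> ids) {
		List<Function<String, String>> fns = new ArrayList<>();
		for (String id : ids) {
			fns.add(this::lookup);
		}
		return fns;
	}

	// Suggested variant: capture the method reference once, outside the loop.
	public List<Function<String, String>> buildHoisted(List<String> ids) {
		final Function<String, String> lookupFn = this::lookup;
		List<Function<String, String>> fns = new ArrayList<>();
		for (String id : ids) {
			fns.add(lookupFn);
		}
		return fns;
	}
}
```

In the hoisted variant every list element is the same object; whether the in-loop variant actually allocates per iteration is a JVM implementation detail, which is why the comment calls it a possible premature optimization.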




[GitHub] [flink] flinkbot edited a comment on issue #10059: [FLINK-14543][table] Support partition for temporary table

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10059: [FLINK-14543][table] Support 
partition for temporary table
URL: https://github.com/apache/flink/pull/10059#issuecomment-548289939
 
 
   
   ## CI report:
   
   * 4937b8b139bab4957799947b841fb1ae3758ed48 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134352112)
   * 6de7bf454ccb8a05ec596954b7551ea9eddac297 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134385271)
   
   
   




[GitHub] [flink] tillrohrmann commented on a change in pull request #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #10006: 
[FLINK-14312][runtime] Support building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#discussion_r341146660
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/topology/DefaultLogicalTopology.java
 ##
 @@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobgraph.topology;
+
+import 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSet;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Default implementation of {@link LogicalTopology}.
+ * It is an adapter of {@link JobGraph}.
+ */
+public class DefaultLogicalTopology implements LogicalTopology<DefaultLogicalVertex, DefaultLogicalResult> {
+
+   private final boolean containsCoLocationConstraints;
+
+   private final List<DefaultLogicalVertex> verticesSorted;
+
+   private final Map<JobVertexID, DefaultLogicalVertex> idToVertexMap;
+
+   private final Map<IntermediateDataSetID, DefaultLogicalResult> idToResultMap;
+
+   private transient Set<LogicalPipelinedRegion> regions;
+
+   public DefaultLogicalTopology(final JobGraph jobGraph) {
+   checkNotNull(jobGraph);
+
+   this.containsCoLocationConstraints = 
IterableUtils.toStream(jobGraph.getVertices())
+   .map(JobVertex::getCoLocationGroup)
+   .anyMatch(Objects::nonNull);
+
+   this.verticesSorted = new 
ArrayList<>(jobGraph.getNumberOfVertices());
+   this.idToVertexMap = new HashMap<>();
+   this.idToResultMap = new HashMap<>();
+
+   buildVerticesAndResults(jobGraph);
+   }
+
+   private void buildVerticesAndResults(final JobGraph jobGraph) {
+   for (JobVertex jobVertex : 
jobGraph.getVerticesSortedTopologicallyFromSources()) {
+
+   final DefaultLogicalVertex logicalVertex = new 
DefaultLogicalVertex(jobVertex, this::getResult);
+   this.verticesSorted.add(logicalVertex);
+   this.idToVertexMap.put(logicalVertex.getId(), 
logicalVertex);
+
+   for (IntermediateDataSet intermediateDataSet : 
jobVertex.getProducedDataSets()) {
+   DefaultLogicalResult logicalResult = new 
DefaultLogicalResult(intermediateDataSet, this::getVertex);
+   idToResultMap.put(logicalResult.getId(), 
logicalResult);
+   }
+   }
+   }
+
+   @Override
+   public Iterable<DefaultLogicalVertex> getVertices() {
+   return verticesSorted;
+   }
+
+   @Override
+   public boolean containsCoLocationConstraints() {
+   return containsCoLocationConstraints;
+   }
+
+   private DefaultLogicalVertex getVertex(final JobVertexID vertexId) {
+   return Optional.ofNullable(idToVertexMap.get(vertexId))
+   .orElseThrow(() -> new IllegalArgumentException("can 
not find vertex: " + vertexId));
+   }
+
+   private DefaultLogicalResult getResult(final IntermediateDataSetID 
resultId) {
+   return Optional.ofNullable(idToResultMap.get(resultId))
+   .orElseThrow(() -> new IllegalArgumentException("can 
not find result: " + resultId));
+   }
+
+   public Set<LogicalPipelinedRegion> getLogicalPipelinedRegions() {
+   if (regions == null) {
+   regions = buildLogicalPipelinedRegions();
+   }
+   return regions;
+   }
 
 Review comment:
   Is there any particular reason why we are calculating the `regions` lazily? 
Could we simplify this class by eagerly calculating the regions?
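A minimal sketch of the eager alternative (simplified, with hypothetical types standing in for the Flink classes): computing the regions once in the constructor lets the field be `final` and removes the null check from the getter.

```java
import java.util.List;
import java.util.Set;

// Hypothetical simplification: regions computed eagerly in the constructor.
// Set.copyOf stands in for the real computation via PipelinedRegionComputeUtil.
public final class EagerRegionsSketch {

	// final and never null; no lazy-initialization branch needed
	private final Set<String> regions;

	public EagerRegionsSketch(List<String> vertices) {
		this.regions = Set.copyOf(vertices);
	}

	public Set<String> getLogicalPipelinedRegions() {
		return regions;
	}
}
```

The trade-off is paying the computation cost even for callers that never ask for the regions, which is presumably why the PR computed them lazily.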


[GitHub] [flink] flinkbot commented on issue #10062: [FLINK-14588][hive] Support Hive version 1.0.0 and 1.0.1

2019-10-31 Thread GitBox
flinkbot commented on issue #10062: [FLINK-14588][hive] Support Hive version 
1.0.0 and 1.0.1
URL: https://github.com/apache/flink/pull/10062#issuecomment-548389810
 
 
   
   ## CI report:
   
   * 47e63a50352681e156100c5a35084ff927292910 : UNKNOWN
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10062: [FLINK-14588][hive] Support Hive version 1.0.0 and 1.0.1

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10062: [FLINK-14588][hive] Support Hive 
version 1.0.0 and 1.0.1
URL: https://github.com/apache/flink/pull/10062#issuecomment-548389810
 
 
   
   ## CI report:
   
   * 47e63a50352681e156100c5a35084ff927292910 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134400911)
   
   
   




[GitHub] [flink] wsry commented on issue #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-31 Thread GitBox
wsry commented on issue #9993: [FLINK-14498][runtime]Introduce 
NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#issuecomment-548386177
 
 
   Many thanks to @pnowojski and @zhijiangW for the review. I adopted 
@pnowojski's implementation and have updated the PR.




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341181890
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/BatchReadableMailbox.java
 ##
 @@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.runtime.tasks.mailbox;
+
+import java.util.Optional;
+
+/**
+ * Extends {@link ReadableMailbox} to support taking messages in batches. A 
batch is a local view on the mailbox that
+ * does not contain simultaneously added mails similar to iterators of 
copy-on-write collections.
+ *
+ * A batch serves two purposes: it reduces synchronization if more than one 
mail is processable at the time of the
 creation of a batch. Furthermore, it allows dividing the work of a mailbox 
into smaller logical chunks, such that
 the task thread cannot be blocked by a mail that enqueues itself and thus 
causes input starvation.
+ *
+ * All methods can only be invoked by the mailbox thread, which is passed 
upon construction. To verify that the
+ * current thread is allowed to take any mail, use {@link #isMailboxThread()}, 
but all methods will perform the check
+ * themselves.
+ *
+ * Note that there is no blocking takeFromBatch as batches can only be 
created and consumed from the mailbox thread.
+ */
+public interface BatchReadableMailbox extends ReadableMailbox {
+
+   /**
+* Creates a batch of mails that can be taken with {@link 
#tryTakeFromBatch()}. The batch does not affect any
+* method from {@link ReadableMailbox}.
+*
+* The default batch is empty. Thus, this method must be invoked 
once before {@link #tryTakeFromBatch()}.
+*
+* If a batch is not completely consumed by {@link 
#tryTakeFromBatch()}, its elements are carried over to the
+* new batch.
+*
+* @return true if there is at least one element in the batch; that is, 
if there is any mail at all at the time
+* of the invocation.
+*/
+   boolean createBatch();
 
 Review comment:
   Do we need this method in the interface? Can't it be implicitly called by 
`tryTakeFromBatch` when the batch is empty?
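A simplified, standalone sketch of what the implicit variant could look like (hypothetical class, not the PR's `TaskMailboxImpl`): the batch is refilled from the shared queue whenever it runs empty, so callers never invoke `createBatch()` themselves.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Optional;

// Hypothetical mailbox sketch with an implicit batch refill.
public class AutoBatchMailboxSketch<T> {

	private final Deque<T> queue = new ArrayDeque<>();  // shared mailbox queue
	private final Deque<T> batch = new ArrayDeque<>();  // mailbox-thread-local view

	public synchronized void put(T mail) {
		queue.addLast(mail);
	}

	public Optional<T> tryTakeFromBatch() {
		if (batch.isEmpty()) {
			refillBatch(); // implicit createBatch()
		}
		return Optional.ofNullable(batch.pollFirst());
	}

	private synchronized void refillBatch() {
		// move all currently enqueued mails into the local batch; mails added
		// afterwards stay invisible until the next refill
		batch.addAll(queue);
		queue.clear();
	}
}
```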




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341145634
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/TaskMailboxImpl.java
 ##
 @@ -185,31 +171,19 @@ private Mail takeHeadInternal(int priority) throws 
IllegalStateException {
}
}
 
-   private boolean isEmpty() {
-   return count == 0;
-   }
-
-   private boolean isPutAbleState() {
-   return state == State.OPEN;
-   }
-
-   private boolean isTakeAbleState() {
-   return state != State.CLOSED;
-   }
-
private void checkPutStateConditions() {
final State state = this.state;
-   if (!isPutAbleState()) {
+   if (this.state != OPEN) {
 
 Review comment:
   `checkState()`?
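The one-word suggestion presumably refers to Flink's `Preconditions.checkState` utility. A self-contained sketch of the refactoring, with a local `checkState` helper standing in for `org.apache.flink.util.Preconditions`:

```java
// Sketch of the suggested refactoring; checkState is a local stand-in for
// org.apache.flink.util.Preconditions.checkState.
public class CheckStateSketch {

	public enum State { OPEN, QUIESCED, CLOSED }

	private State state = State.OPEN;

	public void setState(State s) {
		this.state = s;
	}

	private static void checkState(boolean condition, String message) {
		if (!condition) {
			throw new IllegalStateException(message);
		}
	}

	public void checkPutStateConditions() {
		// replaces the hand-rolled if + throw with a single precondition call
		checkState(state == State.OPEN,
			"Mailbox is in state " + state + ", but must be OPEN for put operations.");
	}
}
```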




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341114825
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/Mailbox.java
 ##
 @@ -37,33 +37,33 @@
 *
 * @return an optional with either the oldest mail from the mailbox 
(head of queue) if the mailbox is not empty or
 * an empty optional otherwise.
-* @throws  MailboxStateException if mailbox is already closed.
+* @throws IllegalStateException if mailbox is already closed.
 */
-   Optional<Mail> tryTake(int priority) throws MailboxStateException;
+   Optional<Mail> tryTake(int priority) throws IllegalStateException;
 
/**
 * This method returns the oldest mail from the mailbox (head of queue) 
or blocks until a mail is available.
 *
 * @return the oldest mail from the mailbox (head of queue).
 * @throws InterruptedException on interruption.
-* @throws  MailboxStateException if mailbox is already closed.
+* @throws IllegalStateException if mailbox is already closed.
 */
@Nonnull
-   Mail take(int priority) throws InterruptedException, 
MailboxStateException;
+   Mail take(int priority) throws InterruptedException;
 
 Review comment:
   One big difference between `MailboxStateException` and 
`IllegalStateException` is that the latter is a `RuntimeException`, as we can 
see in this missing `throws` declaration.
   
   The question is whether we think this is indeed an `IllegalState`, which 
indicates a bug and the code should crash, or whether such an exception 
should/could be handled by someone. I guess it's a bug.




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341157710
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/TaskMailboxImpl.java
 ##
 @@ -77,22 +112,30 @@ public boolean hasMail() {
 
@Override
	public Optional<Mail> tryTake(int priority) throws IllegalStateException {
+   Optional<Mail> head = tryTakeFromBatch();
 
 Review comment:
   shouldn't the `tryTake` and `take` methods call `createBatch()` if 
`tryTakeFromBatch` returned `Optional.empty()`?
   
   Why do we allow for non-batched takes? Shouldn't we have just 
`tryTakeFromBatch` and `takeFromBatch`?




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341119021
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/TaskMailboxImpl.java
 ##
 @@ -211,26 +215,27 @@ private void checkTakeStateConditions() {
 
@Override
public void quiesce() {
+   final ReentrantLock lock = this.lock;
 
 Review comment:
   why do we need this local variable?




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341120305
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/TaskMailboxImpl.java
 ##
 @@ -172,13 +172,17 @@ private Mail takeHeadInternal(int priority) throws 
IllegalStateException {
return null;
}
 
-   private void drainAllMails(List<Runnable> drainInto) {
-   assert lock.isHeldByCurrentThread();
-   for (Mail mail : queue) {
-   drainInto.add(mail.getRunnable());
+   @Override
+   public List<Runnable> drain() {
+   final ReentrantLock lock = this.lock;
+   lock.lock();
 
 Review comment:
   isn't double-acquiring the lock hurting performance?
   
   Usually the pattern we follow is to provide a synchronized `public` method 
with a non-synchronized/unsafe `private` counterpart (usually with an `Unsafe` 
suffix added to the method name; here that would be 
`private List<Runnable> drainUnsafe()`) for reuse in other public methods. 
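A standalone sketch of the described pattern (hypothetical names, not the PR's `TaskMailboxImpl`): the public method acquires the lock exactly once and delegates to a private counterpart that assumes the lock is already held, so other already-locked public methods can reuse it without re-acquiring.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical illustration of the public-locked / private-unsafe pattern.
public class DrainSketch {

	private final ReentrantLock lock = new ReentrantLock();
	private final Deque<Runnable> queue = new ArrayDeque<>();

	public void put(Runnable mail) {
		lock.lock();
		try {
			queue.addLast(mail);
		} finally {
			lock.unlock();
		}
	}

	public List<Runnable> drain() {
		lock.lock();
		try {
			return drainUnsafe(); // lock taken once, not re-acquired inside
		} finally {
			lock.unlock();
		}
	}

	/** Must only be called while {@code lock} is held. */
	private List<Runnable> drainUnsafe() {
		assert lock.isHeldByCurrentThread();
		List<Runnable> drained = new ArrayList<>(queue);
		queue.clear();
		return drained;
	}
}
```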




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341116077
 
 

 ##
 File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/mailbox/TaskMailboxImplTest.java
 ##
 @@ -156,26 +156,26 @@ public void testLifeCycleClose() throws Exception {
try {
taskMailbox.take(DEFAULT_PRIORITY);
Assert.fail();
-   } catch (MailboxStateException ignore) {
+   } catch (IllegalStateException ignore) {
 
 Review comment:
   Replace those `try`/`assert.fail()`/`catch` with 
   
   ```
@Rule
public ExpectedException expectedException = ExpectedException.none();
   
... 
  @Test
  public void foo() {
expectedException.expect(IllegalStateException.class);
  }
 
   ```
   ?




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341115208
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/Mailbox.java
 ##
 @@ -37,33 +37,33 @@
 *
 * @return an optional with either the oldest mail from the mailbox 
(head of queue) if the mailbox is not empty or
 * an empty optional otherwise.
-* @throws  MailboxStateException if mailbox is already closed.
+* @throws IllegalStateException if mailbox is already closed.
 */
-   Optional<Mail> tryTake(int priority) throws MailboxStateException;
+   Optional<Mail> tryTake(int priority) throws IllegalStateException;
 
 Review comment:
   remove `throws IllegalStateException;` (and the same in other places)? I 
guess there is no point in declaring `RuntimeExceptions` in general and also it 
seems strange to declare `IllegalStateException`, as it indicates some bug.




[GitHub] [flink] zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] 
Support building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#discussion_r341192702
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/topology/DefaultLogicalTopology.java
 ##
 @@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobgraph.topology;
+
+import 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSet;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Default implementation of {@link LogicalTopology}.
+ * It is an adapter of {@link JobGraph}.
+ */
+public class DefaultLogicalTopology implements LogicalTopology<DefaultLogicalVertex, DefaultLogicalResult> {
+
+   private final boolean containsCoLocationConstraints;
+
+   private final List<DefaultLogicalVertex> verticesSorted;
+
+   private final Map<JobVertexID, DefaultLogicalVertex> idToVertexMap;
+
+   private final Map<IntermediateDataSetID, DefaultLogicalResult> idToResultMap;
+
+   private transient Set<LogicalPipelinedRegion> regions;
+
+   public DefaultLogicalTopology(final JobGraph jobGraph) {
+   checkNotNull(jobGraph);
+
+   this.containsCoLocationConstraints = IterableUtils.toStream(jobGraph.getVertices())
+   .map(JobVertex::getCoLocationGroup)
+   .anyMatch(Objects::nonNull);
+
+   this.verticesSorted = new ArrayList<>(jobGraph.getNumberOfVertices());
+   this.idToVertexMap = new HashMap<>();
+   this.idToResultMap = new HashMap<>();
+
+   buildVerticesAndResults(jobGraph);
+   }
+
+   private void buildVerticesAndResults(final JobGraph jobGraph) {
+   for (JobVertex jobVertex : jobGraph.getVerticesSortedTopologicallyFromSources()) {
+
+   final DefaultLogicalVertex logicalVertex = new DefaultLogicalVertex(jobVertex, this::getResult);
 
 Review comment:
   done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
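As context for the constructor quoted in the diff above: the co-location check boils down to streaming an `Iterable` of vertices and testing whether any element carries a non-null co-location group. A minimal, self-contained sketch of that pattern, using a hypothetical `Vertex` stand-in and plain `StreamSupport` instead of Flink's `IterableUtils`:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.StreamSupport;

public class AnyMatchSketch {
    // Hypothetical stand-in for JobVertex: only the co-location group matters here.
    static class Vertex {
        final String coLocationGroup; // null when the vertex has no constraint
        Vertex(String coLocationGroup) { this.coLocationGroup = coLocationGroup; }
        String getCoLocationGroup() { return coLocationGroup; }
    }

    // Mirrors the toStream(...).map(...).anyMatch(Objects::nonNull) shape from the diff.
    static boolean containsCoLocationConstraints(Iterable<Vertex> vertices) {
        return StreamSupport.stream(vertices.spliterator(), false)
            .map(Vertex::getCoLocationGroup)
            .anyMatch(Objects::nonNull);
    }

    public static void main(String[] args) {
        List<Vertex> noConstraints = Arrays.asList(new Vertex(null), new Vertex(null));
        List<Vertex> oneConstraint = Arrays.asList(new Vertex(null), new Vertex("group-1"));
        System.out.println(containsCoLocationConstraints(noConstraints));  // false
        System.out.println(containsCoLocationConstraints(oneConstraint)); // true
    }
}
```

The `anyMatch` short-circuits on the first non-null group, so the full vertex list is only traversed when no constraint exists.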


[GitHub] [flink] zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] 
Support building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#discussion_r341192586
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobgraph/topology/DefaultLogicalResultTest.java
 ##
 @@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobgraph.topology;
+
+import org.apache.flink.runtime.jobgraph.IntermediateDataSet;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.util.IterableUtils;
+import org.apache.flink.util.TestLogger;
+
+import org.apache.flink.shaded.guava18.com.google.common.collect.Iterables;
+
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.runtime.executiongraph.ExecutionGraphTestUtils.createNoOpVertex;
+import static org.apache.flink.runtime.io.network.partition.ResultPartitionType.PIPELINED;
+import static org.apache.flink.runtime.jobgraph.DistributionPattern.ALL_TO_ALL;
+import static org.apache.flink.runtime.jobgraph.topology.DefaultLogicalVertexTest.assertVertexInfoEquals;
+import static org.apache.flink.runtime.jobgraph.topology.DefaultLogicalVertexTest.assertVerticesEquals;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Unit tests for {@link DefaultLogicalResult}.
+ */
+public class DefaultLogicalResultTest extends TestLogger {
+
+   private IntermediateDataSet result;
+
+   private DefaultLogicalResult logicalResult;
+
+   private Map<JobVertexID, JobVertex> vertexMap;
+
+   private JobVertex producerVertex;
+
+   private Set<JobVertex> consumerVertices;
+
+   @Before
+   public void setUp() throws Exception {
+   buildVerticesAndResults();
+
+   logicalResult = new DefaultLogicalResult(
+   result,
+   vid -> new DefaultLogicalVertex(vertexMap.get(vid), rid -> null));
+   }
+
+   @Test
+   public void testConstructor() {
+   assertResultInfoEquals(result, logicalResult);
+   }
+
+   @Test
+   public void testGetProducer() {
+   assertVertexInfoEquals(producerVertex, logicalResult.getProducer());
+   }
+
+   @Test
+   public void testGetConsumers() {
+   assertVerticesEquals(consumerVertices, logicalResult.getConsumers());
+   }
+
+   private void buildVerticesAndResults() {
+   vertexMap = new HashMap<>();
+   consumerVertices = new HashSet<>();
+
+   final int parallelism = 3;
+   producerVertex = createNoOpVertex(parallelism);
+   vertexMap.put(producerVertex.getID(), producerVertex);
+
+   result = producerVertex.createAndAddResultDataSet(PIPELINED);
+
+   for (int i = 0; i < 5; i++) {
+   final JobVertex consumerVertex = createNoOpVertex(parallelism);
+   consumerVertex.connectDataSetAsInput(result, ALL_TO_ALL);
+   consumerVertices.add(consumerVertex);
+   vertexMap.put(consumerVertex.getID(), consumerVertex);
+   }
+   }
+
+   static void assertResultsEquals(
+   final Iterable<IntermediateDataSet> results,
+   final Iterable<DefaultLogicalResult> logicalResults) {
+
+   assertEquals(Iterables.size(results), Iterables.size(logicalResults));
+
+   for (IntermediateDataSet result : results) {
+   final List<DefaultLogicalResult> matchedLogicResults = IterableUtils.toStream(logicalResults)
+   .filter(logicalResult -> logicalResult.getId().equals(result.getId()))
+   .collect(Collectors.toList());
+
+   assertEquals(1, matchedLogicResults.size());
+
+   assertResultInfoEquals(result, matchedLogicResults.get(0));
+   }
 
 Review comment:
   Good idea! done.
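The `assertResultsEquals` helper quoted above pairs each expected element with exactly one logical counterpart by id. A minimal sketch of that filter-and-expect-exactly-one pattern, using a hypothetical `Item` type in place of the Flink classes:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MatchById {
    // Hypothetical element pairing an id with a payload.
    static class Item {
        final String id;
        Item(String id) { this.id = id; }
        String getId() { return id; }
    }

    // Same shape as assertResultsEquals: for a given id, exactly one match must exist.
    static Item findUniqueMatch(List<Item> candidates, String id) {
        List<Item> matches = candidates.stream()
            .filter(item -> item.getId().equals(id))
            .collect(Collectors.toList());
        if (matches.size() != 1) {
            throw new AssertionError(
                "expected exactly one match for id " + id + ", got " + matches.size());
        }
        return matches.get(0);
    }

    public static void main(String[] args) {
        List<Item> items = Arrays.asList(new Item("a"), new Item("b"));
        System.out.println(findUniqueMatch(items, "a").getId()); // a
    }
}
```

Checking size-then-extract (rather than `findFirst`) also catches duplicate ids, which is why the test asserts `matchedLogicResults.size() == 1` before comparing contents.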


[GitHub] [flink] zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] 
Support building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#discussion_r341192879
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/topology/DefaultLogicalTopology.java
 ##
 @@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobgraph.topology;
+
+import org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSet;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Default implementation of {@link LogicalTopology}.
+ * It is an adapter of {@link JobGraph}.
+ */
+public class DefaultLogicalTopology implements LogicalTopology<DefaultLogicalVertex, DefaultLogicalResult> {
+
+   private final boolean containsCoLocationConstraints;
+
+   private final List<DefaultLogicalVertex> verticesSorted;
+
+   private final Map<JobVertexID, DefaultLogicalVertex> idToVertexMap;
+
+   private final Map<IntermediateDataSetID, DefaultLogicalResult> idToResultMap;
+
+   private transient Set<LogicalPipelinedRegion> regions;
 
 Review comment:
   Removed along with the lazy building.




[GitHub] [flink] zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] 
Support building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#discussion_r341192663
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/topology/DefaultLogicalTopology.java
 ##
 @@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobgraph.topology;
+
+import org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSet;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Default implementation of {@link LogicalTopology}.
+ * It is an adapter of {@link JobGraph}.
+ */
+public class DefaultLogicalTopology implements LogicalTopology<DefaultLogicalVertex, DefaultLogicalResult> {
+
+   private final boolean containsCoLocationConstraints;
+
+   private final List<DefaultLogicalVertex> verticesSorted;
+
+   private final Map<JobVertexID, DefaultLogicalVertex> idToVertexMap;
+
+   private final Map<IntermediateDataSetID, DefaultLogicalResult> idToResultMap;
+
+   private transient Set<LogicalPipelinedRegion> regions;
+
+   public DefaultLogicalTopology(final JobGraph jobGraph) {
+   checkNotNull(jobGraph);
+
+   this.containsCoLocationConstraints = IterableUtils.toStream(jobGraph.getVertices())
+   .map(JobVertex::getCoLocationGroup)
+   .anyMatch(Objects::nonNull);
+
+   this.verticesSorted = new ArrayList<>(jobGraph.getNumberOfVertices());
+   this.idToVertexMap = new HashMap<>();
+   this.idToResultMap = new HashMap<>();
+
+   buildVerticesAndResults(jobGraph);
+   }
+
+   private void buildVerticesAndResults(final JobGraph jobGraph) {
+   for (JobVertex jobVertex : jobGraph.getVerticesSortedTopologicallyFromSources()) {
+
+   final DefaultLogicalVertex logicalVertex = new DefaultLogicalVertex(jobVertex, this::getResult);
+   this.verticesSorted.add(logicalVertex);
+   this.idToVertexMap.put(logicalVertex.getId(), logicalVertex);
+
+   for (IntermediateDataSet intermediateDataSet : jobVertex.getProducedDataSets()) {
+   DefaultLogicalResult logicalResult = new DefaultLogicalResult(intermediateDataSet, this::getVertex);
+   idToResultMap.put(logicalResult.getId(), logicalResult);
+   }
+   }
+   }
+
+   @Override
+   public Iterable<DefaultLogicalVertex> getVertices() {
+   return verticesSorted;
+   }
+
+   @Override
+   public boolean containsCoLocationConstraints() {
+   return containsCoLocationConstraints;
+   }
+
+   private DefaultLogicalVertex getVertex(final JobVertexID vertexId) {
+   return Optional.ofNullable(idToVertexMap.get(vertexId))
+   .orElseThrow(() -> new IllegalArgumentException("can not find vertex: " + vertexId));
+   }
+
+   private DefaultLogicalResult getResult(final IntermediateDataSetID resultId) {
+   return Optional.ofNullable(idToResultMap.get(resultId))
+   .orElseThrow(() -> new IllegalArgumentException("can not find result: " + resultId));
+   }
+
+   public Set<LogicalPipelinedRegion> getLogicalPipelinedRegions() {
+   if (regions == null) {
+   regions = buildLogicalPipelinedRegions();
+   }
+   return regions;
+   }
 
 Review comment:
   done.



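The `getLogicalPipelinedRegions()` method quoted above used lazy caching in a `transient` field (a pattern the review removed along with the lazy building). A minimal sketch of that compute-once-on-first-access idiom, with a hypothetical `buildRegions()` and a `buildCount` field added purely for illustration:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class LazyRegions {
    // Cached result; transient so it is recomputed after deserialization,
    // mirroring the "private transient Set regions" field in the diff.
    private transient Set<String> regions;

    private int buildCount = 0; // for illustration only: counts actual builds

    public Set<String> getRegions() {
        if (regions == null) {
            regions = buildRegions();
        }
        return regions;
    }

    private Set<String> buildRegions() {
        buildCount++;
        // Hypothetical computation standing in for buildLogicalPipelinedRegions().
        return Collections.unmodifiableSet(
            new HashSet<>(Arrays.asList("region-1", "region-2")));
    }

    public static void main(String[] args) {
        LazyRegions lr = new LazyRegions();
        lr.getRegions();
        lr.getRegions();
        System.out.println(lr.buildCount); // 1 — built once despite two calls
    }
}
```

Note this unsynchronized form is only safe for single-threaded access; the eager building the reviewers settled on sidesteps that concern entirely.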
[GitHub] [flink] GJL commented on a change in pull request #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-31 Thread GitBox
GJL commented on a change in pull request #9974: [FLINK-14501][FLINK-14502] 
Decouple ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#discussion_r341242448
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
 ##
 @@ -709,22 +730,9 @@ private ApplicationReport startAppMaster(
systemShipFiles.add(file.getAbsoluteFile());
}
 
-   //check if there is a logback or log4j file
-   File logbackFile = new File(configurationDirectory + File.separator + CONFIG_FILE_LOGBACK_NAME);
-   final boolean hasLogback = logbackFile.exists();
-   if (hasLogback) {
-   systemShipFiles.add(logbackFile);
-   }
-
-   File log4jFile = new File(configurationDirectory + File.separator + CONFIG_FILE_LOG4J_NAME);
-   final boolean hasLog4j = log4jFile.exists();
-   if (hasLog4j) {
-   systemShipFiles.add(log4jFile);
-   if (hasLogback) {
-   // this means there is already a logback configuration file --> fail
-   LOG.warn("The configuration directory ('" + configurationDirectory + "') contains both LOG4J and " +
-   "Logback configuration files. Please delete or rename one of them.");
-   }
+   final String logConfigFilePath = configuration.getString(YarnConfigOptions.APPLICATION_LOG_CONFIG_FILE);
 
 Review comment:
   Smoke tests are passing now: deployment on YARN, standalone, and Mesos was 
tested.




[jira] [Commented] (FLINK-11127) Make metrics query service establish connection to JobManager

2019-10-31 Thread Hwanju Kim (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964210#comment-16964210
 ] 

Hwanju Kim commented on FLINK-11127:


[~tsubasa2oo2], in addition to what [~trohrmann] said (and it is interesting to 
me as well why cancellation led to a metric connection issue), I also wonder 
what workaround fix you used before 1.8 that worked fine in the same scenario 
without the error you mentioned above. The problematic DNS-related error would 
show an akka error with "Name or service not known". We have tested TM-to-RM 
registration but have not necessarily seen the metrics connection error that 
you showed.

> Make metrics query service establish connection to JobManager
> -
>
> Key: FLINK-11127
> URL: https://issues.apache.org/jira/browse/FLINK-11127
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Runtime / Coordination, Runtime 
> / Metrics
>Affects Versions: 1.7.0
>Reporter: Ufuk Celebi
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As part of FLINK-10247, the internal metrics query service has been separated 
> into its own actor system. Before this change, the JobManager (JM) queried 
> TaskManager (TM) metrics via the TM actor. Now, the JM needs to establish a 
> separate connection to the TM metrics query service actor.
> In the context of Kubernetes, this is problematic as the JM will typically 
> *not* be able to resolve the TMs by name, resulting in warnings as follows:
> {code}
> 2018-12-11 08:32:33,962 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink-metrics@flink-task-manager-64b868487c-x9l4b:39183] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@flink-task-manager-64b868487c-x9l4b:39183]] Caused 
> by: [flink-task-manager-64b868487c-x9l4b: Name does not resolve]
> {code}
> In order to expose the TMs by name in Kubernetes, users require a service 
> *for each* TM instance which is not practical.
> This currently results in the web UI not being able to display some basic metrics 
> about number of sent records. You can reproduce this by following the READMEs 
> in {{flink-container/kubernetes}}.
> This worked before, because the JM is typically exposed via a service with a 
> known name and the TMs establish the connection to it which the metrics query 
> service piggybacked on.
> A potential solution to this might be to let the query service connect to the 
> JM similar to how the TMs register.
> I tagged this ticket as an improvement, but in the context of Kubernetes I 
> would consider this to be a bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys commented on a change in pull request #10059: [FLINK-14543][table] Support partition for temporary table

2019-10-31 Thread GitBox
dawidwys commented on a change in pull request #10059: [FLINK-14543][table] 
Support partition for temporary table
URL: https://github.com/apache/flink/pull/10059#discussion_r341128392
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/descriptors/ConnectTableDescriptor.java
 ##
 @@ -136,14 +137,24 @@ public void createTemporaryTable(String path) {
" use 
registerTableSource/registerTableSink/registerTableSourceAndSink.");
 
 Review comment:
   That's what we are doing. The `schemaDescriptor` contains those properties.
   
   Previously I think it was possible that the `TableSource` produced the 
`TableSchema` itself without getting any schema from properties. We cannot 
support this scenario anymore. Though I think all the connectors we provide so 
far require the schema properties anyway.






[GitHub] [flink] azagrebin commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
azagrebin commented on a change in pull request #10034: [FLINK-14400] Shrink 
scope of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341145798
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotTable.java
 ##
 @@ -82,28 +81,15 @@
private boolean started;
 
public TaskSlotTable(
-   final Collection<ResourceProfile> resourceProfiles,
+   final List<TaskSlot> taskSlots,
final TimerService<AllocationID> timerService) {
 
-   int numberSlots = resourceProfiles.size();
-
-   Preconditions.checkArgument(0 < numberSlots, "The number of task slots must be greater than 0.");
 
 Review comment:
   added back




[GitHub] [flink] azagrebin commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
azagrebin commented on a change in pull request #10034: [FLINK-14400] Shrink 
scope of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341145693
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotTableBuilder.java
 ##
 @@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor.slot;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.clusterframework.types.AllocationID;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.memory.MemoryManager;
+import org.apache.flink.runtime.taskexecutor.TaskManagerServices;
+import org.apache.flink.runtime.testingUtils.TestingUtils;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/** Builder for {@link TaskSlotTable}. */
+public class TaskSlotTableBuilder {
+   private static final long DEFAULT_SLOT_TIMEOUT = 1L;
+
+   private static final ResourceProfile DEFAULT_RESOURCE_PROFILE =
+   TaskManagerServices.computeSlotResourceProfile(1, 10 * MemoryManager.MIN_PAGE_SIZE);
+
+   private List<TaskSlot> taskSlots;
+   private TimerService<AllocationID> timerService;
+
+   private TaskSlotTableBuilder setTaskSlots(List<TaskSlot> taskSlots) {
+   this.taskSlots = new ArrayList<>(taskSlots);
+   return this;
+   }
+
+   public TaskSlotTableBuilder setTimerService(TimerService<AllocationID> timerService) {
+   this.timerService = timerService;
+   return this;
+   }
+
+   public TaskSlotTableBuilder withTimerServiceTimeout(Time timeout) {
+   this.timerService = new TimerService<>(TestingUtils.defaultExecutor(), timeout.toMilliseconds());
+   return this;
+   }
+
+   public TaskSlotTable build() {
+   timerService = timerService == null ? createDefaultTimerService() : timerService;
+   return new TaskSlotTable(taskSlots, timerService);
+   }
+
+   private static TimerService<AllocationID> createDefaultTimerService() {
+   return new TimerService<>(TestingUtils.defaultExecutor(), DEFAULT_SLOT_TIMEOUT);
+   }
+
+   public static TaskSlotTableBuilder newBuilder() {
+   return newBuilderWithDefaultSlots(1);
+   }
+
+   public static TaskSlotTableBuilder newBuilderWithDefaultSlots(int numberOfDefaultSlots) {
+   return new TaskSlotTableBuilder().setTaskSlots(createDefaultSlotProfiles(numberOfDefaultSlots));
+   }
+
+   public static List<TaskSlot> createDefaultSlotProfiles(int numberOfDefaultSlots) {
 
 Review comment:
   let's refactor `TaskSlotTableBuilder` to `TaskSlotAndTableFactory`.
   That should also resolve other comments for this commit


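The `build()` method in the `TaskSlotTableBuilder` diff above falls back to a default `TimerService` when the caller configured none. A minimal sketch of that defaulting-builder idiom, with a hypothetical `TimerServiceStub` standing in for Flink's `TimerService`:

```java
// Hypothetical simplified stand-in for TimerService: only the timeout matters here.
class TimerServiceStub {
    final long timeoutMs;
    TimerServiceStub(long timeoutMs) { this.timeoutMs = timeoutMs; }
}

public class BuilderSketch {
    private static final long DEFAULT_SLOT_TIMEOUT = 1L; // mirrors the builder's default

    private TimerServiceStub timerService;

    public BuilderSketch setTimerService(TimerServiceStub ts) {
        this.timerService = ts;
        return this; // fluent style, as in TaskSlotTableBuilder
    }

    public TimerServiceStub build() {
        // Fall back to a default when nothing was configured, as build() does above.
        return timerService == null ? new TimerServiceStub(DEFAULT_SLOT_TIMEOUT) : timerService;
    }

    public static void main(String[] args) {
        TimerServiceStub def = new BuilderSketch().build();
        TimerServiceStub custom = new BuilderSketch()
            .setTimerService(new TimerServiceStub(500L))
            .build();
        System.out.println(def.timeoutMs);    // 1
        System.out.println(custom.timeoutMs); // 500
    }
}
```

Resolving defaults inside `build()` rather than in the constructor keeps the setters order-independent, which is why the reviewed builder does the null check only at build time.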


[GitHub] [flink] azagrebin commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
azagrebin commented on a change in pull request #10034: [FLINK-14400] Shrink 
scope of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341145951
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerServices.java
 ##
 @@ -302,6 +299,30 @@ public static TaskManagerServices fromConfiguration(
taskEventDispatcher);
}
 
+   private static TaskSlotTable createTaskSlotTable(
+   int numberOfSlots,
+   long managedMemorySize,
+   long timerServiceShutdownTimeout) {
+   final List<ResourceProfile> resourceProfiles =
+   Collections.nCopies(numberOfSlots, computeSlotResourceProfile(numberOfSlots, managedMemorySize));
+   final TimerService<AllocationID> timerService = new TimerService<>(
+   new ScheduledThreadPoolExecutor(1),
+   timerServiceShutdownTimeout);
+   return new TaskSlotTable(createTaskSlotsFromResources(resourceProfiles), timerService);
+   }
+
+   public static List<TaskSlot> createTaskSlotsFromResources(Collection<ResourceProfile> resourceProfiles) {
 
 Review comment:
   used in `TaskSlotTableBuilder`, true, not needed




[GitHub] [flink] flinkbot edited a comment on issue #10049: [FLINK-14544][runtime] Fixing race condition during task cancellation

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10049: [FLINK-14544][runtime] Fixing race 
condition during task cancellation
URL: https://github.com/apache/flink/pull/10049#issuecomment-547930818
 
 
   
   ## CI report:
   
   * 7e94dd883cdbfa5f32bdf727166366819f956a17 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134206876)
   * e1ade8d8bebde874fc376978f55241a62abb2ff0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380749)
   * 776504c9a21756a5111e60f6574b8f5a17209b6b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/134395066)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10042: [FLINK-11466][e2e] Implement LocalStandaloneKafkaResource

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10042: [FLINK-11466][e2e] Implement 
LocalStandaloneKafkaResource 
URL: https://github.com/apache/flink/pull/10042#issuecomment-547792578
 
 
   
   ## CI report:
   
   * b523eace793d9168a2816e50891d6227183a7175 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134149629)
   * 876f0f86b208f6fce4d74488081791dcc7b89cf8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134326879)
   * 027d5fe9ca3832c9384a715f3f7e65df3fd64dcf : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134395038)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] kl0u commented on issue #5521: [FLINK-8599] Improve the failure behavior of the FileInputFormat for …

2019-10-31 Thread GitBox
kl0u commented on issue #5521: [FLINK-8599] Improve the failure behavior of the 
FileInputFormat for …
URL: https://github.com/apache/flink/pull/5521#issuecomment-548392411
 
 
   I was thinking something like the `FileProcessingMode` in the `readFile` 
method.




[GitHub] [flink] flinkbot edited a comment on issue #10049: [FLINK-14544][runtime] Fixing race condition during task cancellation

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10049: [FLINK-14544][runtime] Fixing race 
condition during task cancellation
URL: https://github.com/apache/flink/pull/10049#issuecomment-547930818
 
 
   
   ## CI report:
   
   * 7e94dd883cdbfa5f32bdf727166366819f956a17 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134206876)
   * e1ade8d8bebde874fc376978f55241a62abb2ff0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380749)
   * 776504c9a21756a5111e60f6574b8f5a17209b6b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134395066)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Assigned] (FLINK-14372) Enable KeyedStateCheckpointingITCase to pass with scheduler NG

2019-10-31 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-14372:


Assignee: Gary Yao

> Enable KeyedStateCheckpointingITCase to pass with scheduler NG
> --
>
> Key: FLINK-14372
> URL: https://issues.apache.org/jira/browse/FLINK-14372
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Gary Yao
>Priority: Major
> Fix For: 1.10.0
>
>
> KeyedStateCheckpointingITCase currently fails with scheduler NG.
> The failure cause is that state restore is not supported in scheduler NG yet.
> We need to support the state restore in scheduler NG and annotate it with 
> AlsoRunWithSchedulerNG.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-14439) RestartPipelinedRegionStrategy leverage tracked partition availability for better failover experience in DefaultScheduler

2019-10-31 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao closed FLINK-14439.

Resolution: Fixed

1.10: a966e44b525e694dd90be30206ed83ee341ed9a4

> RestartPipelinedRegionStrategy leverage tracked partition availability for 
> better failover experience in DefaultScheduler 
> --
>
> Key: FLINK-14439
> URL: https://issues.apache.org/jira/browse/FLINK-14439
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In current region failover when using DefaultScheduler, most of the input 
> result partition states are unknown. Even though the failure cause is a 
> PartitionException, only one unhealthy partition can be identified.
> This may lead to multiple unsuccessful failovers before all the unhealthy but 
> needed partitions are identified and their producers are involved in the 
> failover as well. (unsuccessful failover here means the recovered tasks get 
> failed again soon due to some missing input partitions.)
> Using JM side tracked partition states to help the region failover to 
> identify unhealthy(missing) partitions earlier can help with this case.
> To achieve it, I'd propose as follows:
> 1. Change {{FailoverStrategy.Factory#create(FailoverTopology)}} to 
> {{FailoverStrategy.Factory#create(FailoverTopology, 
> ResultPartitionAvailabilityChecker)}}.
> 2. Add {{schedulerBase#getResultPartitionAvailabilityChecker}} which returns 
> {{getExecutionGraph().getResultPartitionAvailabilityChecker()}}
> 3. In DefaultScheduler use the ResultPartitionAvailabilityChecker from 
> SchedulerBase to create the failover strategy from the factory
> It also fails BatchFineGrainedRecoveryITCase due to unexpected failover 
> counts. This is because the legacy scheduler already has similar optimization 
> in FLINK-13055.
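The proposal above can be illustrated with a small stand-alone sketch. Everything below is simplified and hypothetical: the interfaces only mirror the shape of Flink's `FailoverStrategy.Factory` and `ResultPartitionAvailabilityChecker`, not their real signatures. The point is the extra checker parameter, which lets the strategy pull the producer of a missing partition into the same failover instead of discovering it through repeated restarts.

```java
import java.util.HashSet;
import java.util.Set;

public class FailoverFactorySketch {

    /** Simplified stand-in for a JM-side partition availability checker. */
    interface ResultPartitionAvailabilityChecker {
        boolean isAvailable(String partitionId);
    }

    /** Simplified failover strategy: decides which tasks to restart. */
    interface FailoverStrategy {
        Set<String> getTasksNeedingRestart(String failedTask, String inputPartition);

        /** Proposed factory shape: creation now also receives the checker. */
        interface Factory {
            FailoverStrategy create(ResultPartitionAvailabilityChecker checker);
        }
    }

    /** Toy strategy: restart the failed task plus the producer of any unavailable input. */
    static class RegionStrategy implements FailoverStrategy {
        private final ResultPartitionAvailabilityChecker checker;

        RegionStrategy(ResultPartitionAvailabilityChecker checker) {
            this.checker = checker;
        }

        @Override
        public Set<String> getTasksNeedingRestart(String failedTask, String inputPartition) {
            Set<String> tasks = new HashSet<>();
            tasks.add(failedTask);
            // With tracked partition state, an unavailable input immediately
            // pulls its producer into the failover, avoiding repeated restarts.
            if (!checker.isAvailable(inputPartition)) {
                tasks.add("producerOf:" + inputPartition);
            }
            return tasks;
        }
    }

    public static void main(String[] args) {
        FailoverStrategy.Factory factory = RegionStrategy::new;
        // Pretend every partition has been lost:
        FailoverStrategy strategy = factory.create(partitionId -> false);
        Set<String> tasks = strategy.getTasksNeedingRestart("mapTask", "p1");
        System.out.println(tasks.contains("producerOf:p1")); // true
    }
}
```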





[GitHub] [flink] azagrebin commented on a change in pull request #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
azagrebin commented on a change in pull request #10034: [FLINK-14400] Shrink 
scope of MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#discussion_r341146226
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotTable.java
 ##
 @@ -82,28 +81,15 @@
private boolean started;
 
public TaskSlotTable(
-   final Collection<ResourceProfile> resourceProfiles,
+   final List<TaskSlot> taskSlots,
final TimerService timerService) {
 
-   int numberSlots = resourceProfiles.size();
-
-   Preconditions.checkArgument(0 < numberSlots, "The number of 
task slots must be greater than 0.");
 
 Review comment:
   added back




[GitHub] [flink] flinkbot edited a comment on issue #9864: [FLINK-14254][table] Introduce FileSystemOutputFormat for batch

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #9864: [FLINK-14254][table] Introduce 
FileSystemOutputFormat for batch
URL: https://github.com/apache/flink/pull/9864#issuecomment-539910284
 
 
   
   ## CI report:
   
   * e66114b1aa73a82b4c6bcf5c3a16baedb598f8c3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131098600)
   * b7887760a3c3d28ca88eb31800ebd61084a520fc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131249622)
   * 19ff8f1384bf24b469fa6cac0566d603a332b31d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133329923)
   * 98c83564a56ddcc30095e83458384a106170443c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133770151)
   * 501f1c3266693d0cbb1b8b1c0195f354756b7526 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134385291)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-14558) Fix the ClassNotFoundException issue for run python job in standalone mode

2019-10-31 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964057#comment-16964057
 ] 

Hequn Cheng commented on FLINK-14558:
-

Fixed in 1.10.0 via 3a69d5b8af7fd131670fbf4479653b8700c8a3c4

> Fix the ClassNotFoundException issue for run python job in standalone mode
> --
>
> Key: FLINK-14558
> URL: https://issues.apache.org/jira/browse/FLINK-14558
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> java.lang.ClassNotFoundException: 
> org.apache.flink.table.runtime.operators.python.PythonScalarFunctionOperator 
> will be thrown when running a Python UDF job in a standalone cluster.





[jira] [Closed] (FLINK-14558) Fix the ClassNotFoundException issue for run python job in standalone mode

2019-10-31 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-14558.
---
Resolution: Fixed

> Fix the ClassNotFoundException issue for run python job in standalone mode
> --
>
> Key: FLINK-14558
> URL: https://issues.apache.org/jira/browse/FLINK-14558
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> java.lang.ClassNotFoundException: 
> org.apache.flink.table.runtime.operators.python.PythonScalarFunctionOperator 
> will be thrown when running a Python UDF job in a standalone cluster.





[GitHub] [flink] hequn8128 closed pull request #10045: [FLINK-14558][python] Fix ClassNotFoundException of PythonScalarFunctionOperator

2019-10-31 Thread GitBox
hequn8128 closed pull request #10045: [FLINK-14558][python] Fix 
ClassNotFoundException of PythonScalarFunctionOperator
URL: https://github.com/apache/flink/pull/10045
 
 
   




[GitHub] [flink] kl0u commented on a change in pull request #9972: [FLINK-14496][client] Exclude detach flag from ClusterClient

2019-10-31 Thread GitBox
kl0u commented on a change in pull request #9972: [FLINK-14496][client] Exclude 
detach flag from ClusterClient
URL: https://github.com/apache/flink/pull/9972#discussion_r341162969
 
 

 ##
 File path: flink-clients/src/main/java/org/apache/flink/client/ClientUtils.java
 ##
 @@ -95,15 +100,55 @@ public static ClassLoader 
buildUserCodeClassLoader(List<URL> jars, List<URL> cla
return FlinkUserCodeClassLoaders.parentFirst(urls, parent);
}
 
+   public static JobExecutionResult submitJob(
+   ClusterClient<?> client,
+   JobGraph jobGraph) throws ProgramInvocationException {
+   try {
 
 Review comment:
   You can add the necessary precondition checks like `checkNotNull()` for the 
`client` and the `jobGraph`.
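As a stand-alone illustration of the suggested fail-fast checks (the `checkNotNull` helper below is a minimal stand-in for Flink's `Preconditions.checkNotNull`, and the `submitJob` signature is simplified to plain `Object` parameters):

```java
public class PreconditionSketch {

    /** Minimal stand-in for Preconditions.checkNotNull: fail fast with a named argument. */
    static <T> T checkNotNull(T ref, String name) {
        if (ref == null) {
            throw new NullPointerException(name + " must not be null");
        }
        return ref;
    }

    /** Illustrative shape of submitJob with the precondition checks added up front. */
    static String submitJob(Object client, Object jobGraph) {
        checkNotNull(client, "client");
        checkNotNull(jobGraph, "jobGraph");
        return "submitted";
    }

    public static void main(String[] args) {
        System.out.println(submitJob(new Object(), new Object())); // submitted
        try {
            submitJob(null, new Object());
        } catch (NullPointerException e) {
            // Failing at the method boundary yields a clear message naming the argument,
            // instead of an NPE deep inside the submission pipeline.
            System.out.println(e.getMessage()); // client must not be null
        }
    }
}
```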




[GitHub] [flink] kl0u commented on a change in pull request #9972: [FLINK-14496][client] Exclude detach flag from ClusterClient

2019-10-31 Thread GitBox
kl0u commented on a change in pull request #9972: [FLINK-14496][client] Exclude 
detach flag from ClusterClient
URL: https://github.com/apache/flink/pull/9972#discussion_r341163172
 
 

 ##
 File path: flink-clients/src/main/java/org/apache/flink/client/ClientUtils.java
 ##
 @@ -95,15 +100,55 @@ public static ClassLoader 
buildUserCodeClassLoader(List<URL> jars, List<URL> cla
return FlinkUserCodeClassLoaders.parentFirst(urls, parent);
}
 
+   public static JobExecutionResult submitJob(
+   ClusterClient<?> client,
+   JobGraph jobGraph) throws ProgramInvocationException {
+   try {
+   return client
+   .submitJob(jobGraph)
+   .thenApply(JobSubmissionResult::getJobID)
+   .thenApply(DetachedJobExecutionResult::new)
+   .get();
+   } catch (InterruptedException | ExecutionException e) {
+   ExceptionUtils.checkInterrupted(e);
+   throw new ProgramInvocationException("Could not run job 
in detached mode.", jobGraph.getJobID(), e);
+   }
+   }
+
+   public static JobExecutionResult submitJobAndWaitForResult(
+   ClusterClient<?> client,
+   JobGraph jobGraph,
+   ClassLoader classLoader) throws 
ProgramInvocationException {
+   JobResult jobResult;
+
 
 Review comment:
   You can add the necessary precondition checks like `checkNotNull()` for the 
`client`, the `jobGraph` and the `classLoader`.




[GitHub] [flink] zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] 
Support building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#discussion_r341170248
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/topology/DefaultLogicalTopology.java
 ##
 @@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobgraph.topology;
+
+import 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSet;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Default implementation of {@link LogicalTopology}.
+ * It is an adapter of {@link JobGraph}.
+ */
+public class DefaultLogicalTopology implements 
LogicalTopology<DefaultLogicalVertex, DefaultLogicalResult> {
+
+   private final boolean containsCoLocationConstraints;
+
+   private final List<DefaultLogicalVertex> verticesSorted;
+
+   private final Map<JobVertexID, DefaultLogicalVertex> idToVertexMap;
+
+   private final Map<IntermediateDataSetID, DefaultLogicalResult> idToResultMap;
+
+   private transient Set regions;
 
 Review comment:
   You are right. It's not necessary. It was ported from `JobGraph` which was 
made to extend `LogicalTopology` in my previous trials.




[GitHub] [flink] zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
zhuzhurk commented on a change in pull request #10006: [FLINK-14312][runtime] 
Support building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#discussion_r341171431
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/topology/DefaultLogicalTopology.java
 ##
 @@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobgraph.topology;
+
+import 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSet;
+import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.util.IterableUtils;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Default implementation of {@link LogicalTopology}.
+ * It is an adapter of {@link JobGraph}.
+ */
+public class DefaultLogicalTopology implements 
LogicalTopology<DefaultLogicalVertex, DefaultLogicalResult> {
+
+   private final boolean containsCoLocationConstraints;
+
+   private final List<DefaultLogicalVertex> verticesSorted;
+
+   private final Map<JobVertexID, DefaultLogicalVertex> idToVertexMap;
+
+   private final Map<IntermediateDataSetID, DefaultLogicalResult> idToResultMap;
+
+   private transient Set regions;
+
+   public DefaultLogicalTopology(final JobGraph jobGraph) {
+   checkNotNull(jobGraph);
+
+   this.containsCoLocationConstraints = 
IterableUtils.toStream(jobGraph.getVertices())
+   .map(JobVertex::getCoLocationGroup)
+   .anyMatch(Objects::nonNull);
+
+   this.verticesSorted = new 
ArrayList<>(jobGraph.getNumberOfVertices());
+   this.idToVertexMap = new HashMap<>();
+   this.idToResultMap = new HashMap<>();
+
+   buildVerticesAndResults(jobGraph);
+   }
+
+   private void buildVerticesAndResults(final JobGraph jobGraph) {
+   for (JobVertex jobVertex : 
jobGraph.getVerticesSortedTopologicallyFromSources()) {
+
+   final DefaultLogicalVertex logicalVertex = new 
DefaultLogicalVertex(jobVertex, this::getResult);
+   this.verticesSorted.add(logicalVertex);
+   this.idToVertexMap.put(logicalVertex.getId(), 
logicalVertex);
+
+   for (IntermediateDataSet intermediateDataSet : 
jobVertex.getProducedDataSets()) {
+   DefaultLogicalResult logicalResult = new 
DefaultLogicalResult(intermediateDataSet, this::getVertex);
+   idToResultMap.put(logicalResult.getId(), 
logicalResult);
+   }
+   }
+   }
+
+   @Override
+   public Iterable<DefaultLogicalVertex> getVertices() {
+   return verticesSorted;
+   }
+
+   @Override
+   public boolean containsCoLocationConstraints() {
+   return containsCoLocationConstraints;
+   }
+
+   private DefaultLogicalVertex getVertex(final JobVertexID vertexId) {
+   return Optional.ofNullable(idToVertexMap.get(vertexId))
+   .orElseThrow(() -> new IllegalArgumentException("can 
not find vertex: " + vertexId));
+   }
+
+   private DefaultLogicalResult getResult(final IntermediateDataSetID 
resultId) {
+   return Optional.ofNullable(idToResultMap.get(resultId))
+   .orElseThrow(() -> new IllegalArgumentException("can 
not find result: " + resultId));
+   }
+
+   public Set getLogicalPipelinedRegions() {
+   if (regions == null) {
+   regions = buildLogicalPipelinedRegions();
+   }
+   return regions;
+   }
 
 Review comment:
  Sure we can. I just wanted to avoid calculating it twice, but currently we 
only need to invoke it once.
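The compute-once caching discussed here can be isolated into a tiny generic helper. This is a generic sketch of the pattern, not Flink's code, and it assumes single-threaded access (as in the JobMaster's main thread):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class LazyCache<T> {
    private final Supplier<T> compute;
    private T cached; // null until the first get(); reused afterwards

    public LazyCache(Supplier<T> compute) {
        this.compute = compute;
    }

    public T get() {
        if (cached == null) {
            cached = compute.get(); // runs once for non-null results
        }
        return cached;
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        LazyCache<String> regions = new LazyCache<>(() -> {
            calls.incrementAndGet(); // stands in for the region computation
            return "regions";
        });
        regions.get();
        regions.get();
        System.out.println(calls.get()); // 1: computed once despite two calls
    }
}
```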



[GitHub] [flink] flinkbot edited a comment on issue #10062: [FLINK-14588][hive] Support Hive version 1.0.0 and 1.0.1

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10062: [FLINK-14588][hive] Support Hive 
version 1.0.0 and 1.0.1
URL: https://github.com/apache/flink/pull/10062#issuecomment-548389810
 
 
   
   ## CI report:
   
   * 47e63a50352681e156100c5a35084ff927292910 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/134400911)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] pnowojski commented on a change in pull request #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #9993: 
[FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting 
with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#discussion_r341203502
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPool.java
 ##
 @@ -231,25 +230,39 @@ private BufferBuilder toBufferBuilder(MemorySegment 
memorySegment) {
 
@Nullable
private MemorySegment requestMemorySegment(boolean isBlocking) throws 
InterruptedException, IOException {
+   if (!isBlocking) {
 
 Review comment:
   Sorry for adding a bit more of the confusion, but I just spotted one more 
thing: `requestMemorySegment(boolean isBlocking)` is actually used in only two 
places, one blocking and the other non-blocking. I think we can get rid of 
this `if (!isBlocking)` condition by simply inlining this method.
   
   It would also get rid of the ugly `try/catch(InterruptedException)` in 
the non-blocking method.
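The shape of the suggested refactoring, sketched on a toy pool: a `BlockingQueue` of integers stands in for the memory segments (this is not Flink's `LocalBufferPool`). Splitting the boolean-flag method into two variants keeps `InterruptedException` confined to the blocking path:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SegmentPoolSketch {
    private final BlockingQueue<Integer> segments;

    public SegmentPoolSketch(int numSegments) {
        segments = new ArrayBlockingQueue<>(Math.max(1, numSegments));
        for (int i = 0; i < numSegments; i++) {
            segments.offer(i); // pre-fill the pool with "segments"
        }
    }

    /** Non-blocking variant: returns null right away when nothing is free,
     *  so there is no InterruptedException to catch and swallow. */
    public Integer requestSegment() {
        return segments.poll();
    }

    /** Blocking variant: the only method that must declare InterruptedException. */
    public Integer requestSegmentBlocking() throws InterruptedException {
        return segments.take();
    }

    public static void main(String[] args) {
        SegmentPoolSketch pool = new SegmentPoolSketch(1);
        System.out.println(pool.requestSegment()); // 0 (the single segment)
        System.out.println(pool.requestSegment()); // null (exhausted, no blocking)
    }
}
```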




[GitHub] [flink] flinkbot edited a comment on issue #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10006: [FLINK-14312][runtime] Support 
building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#issuecomment-546671910
 
 
   
   ## CI report:
   
   * 0697f19b0cce32ea7e70e0c13abe5a4a53d5 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/133705655)
   * bad1d64360f1c9efc6e77e47dcb309629e31e6e0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133706389)
   * 603e5b42d12209be8ded1283e0b3f0130eb5abaf : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133949732)
   * 347249231785b0f0d02c8410fa664c1c34a81023 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134046310)
   * e701b10679b0b9078d95a8aeda029c8268b3851c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134418810)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Created] (FLINK-14588) Support Hive version 1.0.0 and 1.0.1

2019-10-31 Thread Rui Li (Jira)
Rui Li created FLINK-14588:
--

 Summary: Support Hive version 1.0.0 and 1.0.1
 Key: FLINK-14588
 URL: https://issues.apache.org/jira/browse/FLINK-14588
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive
Reporter: Rui Li








[GitHub] [flink] azagrebin commented on issue #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
azagrebin commented on issue #10034: [FLINK-14400] Shrink scope of 
MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#issuecomment-548388263
 
 
   Thanks for the review @zentol, addressed comments
   The commit order looks correct now.




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341167377
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/component/AbstractUserClassPathJobGraphRetriever.java
 ##
 @@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.entrypoint.component;
+
+import org.apache.flink.util.FileUtils;
+
+import javax.annotation.Nullable;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Abstract class for JobGraphRetriever implementations that need the 
classpath the user's code depends on.
+ */
+public abstract class AbstractUserClassPathJobGraphRetriever implements 
JobGraphRetriever {
+
+   /* A collection of relative jar paths to the working directory */
private final List<URL> userClassPaths;
+
+   protected AbstractUserClassPathJobGraphRetriever(@Nullable final File 
jobDir) throws IOException {
+   if (jobDir == null) {
+   userClassPaths = Collections.emptyList();
+   } else {
+   final Collection<File> jarFiles = 
FileUtils.listFilesInPath(jobDir, file -> file.getName().endsWith(".jar"));
+   final Collection<File> relativeFiles = 
FileUtils.relativizeToWorkingDir(jarFiles);
+   this.userClassPaths = new 
ArrayList<>(FileUtils.toRelativeURLs(relativeFiles));
 
 Review comment:
   I'd be slightly in favour of @zhuzhurk's proposal. That way, we say that this 
class returns a `Collection` and whoever expects a `List` will have 
to transform it. I think this way gives a bit more flexibility because this 
class, as you've said, does not impose any particular order.




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341157747
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, 
final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
directory %s does not exist.", directory));
+   }
+   if (!Files.isDirectory(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
%s is not a directory.", directory));
+   }
+
+   final FilterFileVisitor filterFileVisitor = new 
FilterFileVisitor(fileFilter);
+
+   Files.walkFileTree(
+   directory.toPath(),
+   EnumSet.of(FileVisitOption.FOLLOW_LINKS),
+   Integer.MAX_VALUE,
+   filterFileVisitor);
+
+   return filterFileVisitor.getFiles();
+   }
+
+   /**
+* Convert a collection of {@code File}s to a collection of paths relative 
+* to the working directory.
+*
+* @param files a collection of files to be relativized
+* @return a collection of relative {@code File}s
+*/
+   public static Collection<File> relativizeToWorkingDir(final 
Collection<File> files) {
+   checkNotNull(files, "files");
+
+   if (files.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final java.nio.file.Path workingDirPath = 
Paths.get(System.getProperty("user.dir"));
+
+   final List<File> relativeFiles = new LinkedList<>();
+
+   for (File file : files) {
+   if (file.isAbsolute()) {
+   
relativeFiles.add(workingDirPath.relativize(file.toPath()).toFile());
+   } else {
+   relativeFiles.add(file);
+   }
+   }
+
+   return Collections.unmodifiableCollection(relativeFiles);
+   }
+
+   /**
+* Convert a collection of relative {@code File}s to a collection of 
relative {@code URL}s.
+*
+* @param relativeFiles a collection of relative {@code File}s
+* @return a collection of relative URLs
+*
+* @throws MalformedURLException if error occurs while construct a url.
 
 Review comment:
   ```suggestion
 * @throws MalformedURLException if error occurs while constructing a 
url.
   ```
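A self-contained version of the `walkFileTree` pattern used by `listFilesInPath` in the diff above. This is a sketch mirroring the PR's logic using `java.nio.file.Path` instead of `File`; it is not Flink's actual `FileUtils`:

```java
import java.io.IOException;
import java.nio.file.FileVisitOption;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;
import java.util.function.Predicate;

public class ListFilesSketch {

    /** Recursively list the files under dir that satisfy the filter,
     *  following symbolic links as the PR does. */
    public static List<Path> listFiles(Path dir, Predicate<Path> filter) throws IOException {
        if (!Files.isDirectory(dir)) {
            throw new IllegalArgumentException(dir + " is not a directory.");
        }
        List<Path> result = new ArrayList<>();
        Files.walkFileTree(dir, EnumSet.of(FileVisitOption.FOLLOW_LINKS), Integer.MAX_VALUE,
            new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                    if (filter.test(file)) {
                        result.add(file); // keep only matching files
                    }
                    return FileVisitResult.CONTINUE;
                }
            });
        return result;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("jobdir");
        Files.createFile(dir.resolve("user.jar"));
        Files.createFile(dir.resolve("notes.txt"));
        List<Path> jars = listFiles(dir, p -> p.getFileName().toString().endsWith(".jar"));
        System.out.println(jars.size()); // 1: only the .jar file matches
    }
}
```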




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341157466
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, 
final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
directory %s does not exist.", directory));
+   }
+   if (!Files.isDirectory(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
%s is not a directory.", directory));
+   }
+
+   final FilterFileVisitor filterFileVisitor = new 
FilterFileVisitor(fileFilter);
+
+   Files.walkFileTree(
+   directory.toPath(),
+   EnumSet.of(FileVisitOption.FOLLOW_LINKS),
+   Integer.MAX_VALUE,
+   filterFileVisitor);
+
+   return filterFileVisitor.getFiles();
+   }
+
+   /**
+* Convert a collection of {@code File}s to a collection of paths relative 
+* to the working directory.
+*
+* @param files a collection of files to be relativized
+* @return a collection of relative {@code File}s
+*/
+   public static Collection<File> relativizeToWorkingDir(final 
Collection<File> files) {
+   checkNotNull(files, "files");
+
+   if (files.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final java.nio.file.Path workingDirPath = 
Paths.get(System.getProperty("user.dir"));
+
+   final List<File> relativeFiles = new LinkedList<>();
+
+   for (File file : files) {
+   if (file.isAbsolute()) {
+   
relativeFiles.add(workingDirPath.relativize(file.toPath()).toFile());
+   } else {
+   relativeFiles.add(file);
+   }
+   }
+
+   return Collections.unmodifiableCollection(relativeFiles);
+   }
+
+   /**
+* Convert a collection of relative {@code File}s to a collection of 
relative {@code URL}s.
 
 Review comment:
   ```suggestion
 * Convert a collection of relative {@link File Files} to a collection 
of relative {@link URL URLs}.
   ```
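The `listFilesInPath` method quoted in this hunk walks the directory tree with `Files.walkFileTree` and collects matches. A minimal, self-contained sketch of that pattern (hypothetical demo class, not Flink's actual `FileUtils`):

```java
import java.io.IOException;
import java.nio.file.FileVisitOption;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;
import java.util.function.Predicate;

public class ListFilesDemo {

    /** Recursively lists the files under {@code directory} that satisfy {@code filter}. */
    static List<Path> listFiles(Path directory, Predicate<Path> filter) throws IOException {
        final List<Path> result = new ArrayList<>();
        Files.walkFileTree(
            directory,
            EnumSet.of(FileVisitOption.FOLLOW_LINKS), // follow symlinks, like the PR code
            Integer.MAX_VALUE,                        // unlimited depth
            new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                    if (filter.test(file)) {
                        result.add(file);
                    }
                    return FileVisitResult.CONTINUE;
                }
            });
        return result;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("list-files-demo");
        Files.createFile(dir.resolve("job.jar"));
        Files.createFile(dir.resolve("readme.txt"));
        List<Path> jars = listFiles(dir, p -> p.getFileName().toString().endsWith(".jar"));
        System.out.println(jars.size()); // prints 1
    }
}
```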


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341157954
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The directory %s does not exist.", directory));
+   }
+   if (!Files.isDirectory(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
%s is not a directory.", directory));
+   }
+
+   final FilterFileVisitor filterFileVisitor = new 
FilterFileVisitor(fileFilter);
+
+   Files.walkFileTree(
+   directory.toPath(),
+   EnumSet.of(FileVisitOption.FOLLOW_LINKS),
+   Integer.MAX_VALUE,
+   filterFileVisitor);
+
+   return filterFileVisitor.getFiles();
+   }
+
+   /**
+* Convert a collection of {@code File}s to a collection of paths relative to the working dir.
+*
+* @param files a collection of files to be relativized
+* @return a collection of relative {@code File}s
+*/
+   public static Collection<File> relativizeToWorkingDir(final Collection<File> files) {
+   checkNotNull(files, "files");
+
+   if (files.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final java.nio.file.Path workingDirPath = 
Paths.get(System.getProperty("user.dir"));
+
+   final List<File> relativeFiles = new LinkedList<>();
+
+   for (File file : files) {
+   if (file.isAbsolute()) {
+   
relativeFiles.add(workingDirPath.relativize(file.toPath()).toFile());
+   } else {
+   relativeFiles.add(file);
+   }
+   }
+
+   return Collections.unmodifiableCollection(relativeFiles);
+   }
+
+   /**
+* Convert a collection of relative {@code File}s to a collection of 
relative {@code URL}s.
+*
+* @param relativeFiles a collection of relative {@code File}s
+* @return a collection of relative URLs
+*
+* @throws MalformedURLException if an error occurs while constructing a URL.
+*/
+   public static Collection<URL> toRelativeURLs(final Collection<File> relativeFiles) throws MalformedURLException {
+   checkNotNull(relativeFiles, "relativeFiles");
+
+   if (relativeFiles.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final List<URL> urls = new LinkedList<>();
 
 Review comment:
   Same here with `LinkedList`




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341168070
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/entrypoint/component/AbstractUserClassPathJobGraphRetrieverTest.java
 ##
 @@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.entrypoint.component;
+
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.util.FileUtilsTest;
+import org.apache.flink.util.TestLogger;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import javax.annotation.Nonnull;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.util.Collection;
+
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Tests for the {@link AbstractUserClassPathJobGraphRetriever}.
+ */
+public class AbstractUserClassPathJobGraphRetrieverTest extends TestLogger {
+
+   @Rule
+   public final TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+   private static class TestJobGraphRetriever extends 
AbstractUserClassPathJobGraphRetriever {
+   public TestJobGraphRetriever(@Nonnull final File jobDir) throws 
IOException {
+   super(jobDir);
+   }
+
+   @Override
+   public JobGraph retrieveJobGraph(Configuration configuration) {
+   return null;
+   }
+   }
+
+   @Test
+   public void testGetUserClassPath() throws IOException {
+   final Path testJobDir = 
temporaryFolder.newFolder("_test_job").toPath();
+   final Tuple3<Collection<File>, Collection<File>, Collection<URL>> result = FileUtilsTest.prepareTestFiles(testJobDir);
+   final TestJobGraphRetriever testJobGraphRetriever = new 
TestJobGraphRetriever(testJobDir.toFile());
+   assertTrue(CollectionUtils.isEqualCollection(result.f2, 
testJobGraphRetriever.getUserClassPaths()));
+   }
+
+   @Test(expected = IllegalArgumentException.class)
+   public void 
testTheJobGraphRetrieverThrowExceptionBecauseJobDirDoesNotHaveAnyJars() throws 
IOException {
+   final Path testJobDir = 
temporaryFolder.newFolder("_test_job_").toPath();
+   new TestJobGraphRetriever(testJobDir.toFile());
+   }
 
 Review comment:
   I'm not sure about this behaviour. I think it would be better not to assume 
that the directory needs to contain at least one jar.




[GitHub] [flink] flinkbot edited a comment on issue #10035: [FLINK-14080][table-planner-blink] Introduce DateTime as internal representation of TIMESTAMP_WITHOUT_TIME_ZONE

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10035: [FLINK-14080][table-planner-blink] 
Introduce DateTime as internal representation of TIMESTAMP_WITHOUT_TIME_ZONE
URL: https://github.com/apache/flink/pull/10035#issuecomment-547440593
 
 
   
   ## CI report:
   
   * db25eacc6c332f35ea23e583b5363717c0f599da : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134010481)
   * 7161ec4c8020019c8b4976dbd262fc53a90c9f02 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134143018)
   * 6f0258442192155e8d19d4eece8c5818f68d3467 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134160149)
   * 7e4705478310fb41201a5b7f581a09bed3fc3c6f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134169990)
   * 1130be17a2024e3fda8b097acb17fa4135c01690 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134389782)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341164351
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The directory %s does not exist.", directory));
+   }
+   if (!Files.isDirectory(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
%s is not a directory.", directory));
+   }
+
+   final FilterFileVisitor filterFileVisitor = new 
FilterFileVisitor(fileFilter);
+
+   Files.walkFileTree(
+   directory.toPath(),
+   EnumSet.of(FileVisitOption.FOLLOW_LINKS),
+   Integer.MAX_VALUE,
+   filterFileVisitor);
+
+   return filterFileVisitor.getFiles();
+   }
+
+   /**
+* Convert a collection of {@code File}s to a collection of paths relative to the working dir.
+*
+* @param files a collection of files to be relativized
+* @return a collection of relative {@code File}s
+*/
+   public static Collection<File> relativizeToWorkingDir(final Collection<File> files) {
+   checkNotNull(files, "files");
+
+   if (files.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final java.nio.file.Path workingDirPath = 
Paths.get(System.getProperty("user.dir"));
+
+   final List<File> relativeFiles = new LinkedList<>();
+
+   for (File file : files) {
+   if (file.isAbsolute()) {
+   
relativeFiles.add(workingDirPath.relativize(file.toPath()).toFile());
+   } else {
+   relativeFiles.add(file);
+   }
+   }
+
+   return Collections.unmodifiableCollection(relativeFiles);
+   }
+
+   /**
+* Convert a collection of relative {@code File}s to a collection of 
relative {@code URL}s.
+*
+* @param relativeFiles a collection of relative {@code File}s
+* @return a collection of relative URLs
+*
+* @throws MalformedURLException if an error occurs while constructing a URL.
+*/
+   public static Collection<URL> toRelativeURLs(final Collection<File> relativeFiles) throws MalformedURLException {
 
 Review comment:
   Wouldn't it be enough to have a method `URL toURL(File)` which transforms a 
`File` into a `URL`? The `File` will then decide whether the URL is relative or 
absolute, depending on the `File`.
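A possible shape for the single `toURL(File)` helper suggested here (an assumption about what the reviewer has in mind, not the final Flink API). Note that `File#toURI()` alone will not do for relative files, because it resolves them against `user.dir`:

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;

public class ToUrlDemo {

    /**
     * Converts a file to a URL, preserving relativity: an absolute file
     * becomes an ordinary file: URL, while a relative file is turned into
     * a URL whose path is just the relative path (File#toURI() would
     * otherwise resolve it against the current working directory).
     */
    static URL toURL(File file) throws MalformedURLException {
        if (file.isAbsolute()) {
            return file.toURI().toURL();
        }
        // Same trick as the PR code: resolve the relative path against a bare "file:" base.
        return new URL(new URL(file.toURI().getScheme() + ":"), file.getPath());
    }

    public static void main(String[] args) throws MalformedURLException {
        URL relative = toURL(new File("lib/job.jar"));
        URL absolute = toURL(new File("/opt/flink/lib/job.jar"));
        System.out.println(relative);               // a relative file: URL
        System.out.println(absolute.getProtocol()); // file
    }
}
```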




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341156074
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
 
 Review comment:
   Given that we always convert `File` to `nio.file.Path`, I think we could 
change the signature to accept `nio.file.Path` and return a collection of 
`nio.file.Path`.




[GitHub] [flink] flinkbot edited a comment on issue #10034: [FLINK-14400] Shrink scope of MemoryManager from TaskExecutor to slot

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10034: [FLINK-14400] Shrink scope of 
MemoryManager from TaskExecutor to slot
URL: https://github.com/apache/flink/pull/10034#issuecomment-547440502
 
 
   
   ## CI report:
   
   * 85d93043d366498b8bdaed5351b74ca47c77d575 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134010434)
   * 6dd1e38a230348bb9e8b489c6f99a6a125cd2cc8 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341165709
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/component/AbstractUserClassPathJobGraphRetriever.java
 ##
 @@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.entrypoint.component;
+
+import org.apache.flink.util.FileUtils;
+
+import javax.annotation.Nullable;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ *  Abstract class for the JobGraphRetriever which supports getting user 
classpaths.
+ */
+public abstract class AbstractUserClassPathJobGraphRetriever implements 
JobGraphRetriever {
+
+   /** User classpaths in relative form to the working directory. */
+   private final List<URL> userClassPaths;
+
+   protected AbstractUserClassPathJobGraphRetriever(@Nullable final File 
jobDir) throws IOException {
+   if (jobDir == null) {
+   userClassPaths = Collections.emptyList();
+   } else {
+   final Collection<File> jarFiles = FileUtils.listFilesInPath(jobDir, file -> file.getName().endsWith(".jar"));
+   final Collection<File> relativeFiles = FileUtils.relativizeToWorkingDir(jarFiles);
+   this.userClassPaths = new ArrayList<>(FileUtils.toRelativeURLs(relativeFiles));
+   if (this.userClassPaths.isEmpty()) {
+   throw new 
IllegalArgumentException(String.format("The job dir %s does not have any 
jars.", jobDir));
+   }
 
 Review comment:
   I think we should not fail if the job directory is empty.




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341157610
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The directory %s does not exist.", directory));
+   }
+   if (!Files.isDirectory(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
%s is not a directory.", directory));
+   }
+
+   final FilterFileVisitor filterFileVisitor = new 
FilterFileVisitor(fileFilter);
+
+   Files.walkFileTree(
+   directory.toPath(),
+   EnumSet.of(FileVisitOption.FOLLOW_LINKS),
+   Integer.MAX_VALUE,
+   filterFileVisitor);
+
+   return filterFileVisitor.getFiles();
+   }
+
+   /**
+* Convert a collection of {@code File}s to a collection of paths relative to the working dir.
+*
+* @param files a collection of files to be relativized
+* @return a collection of relative {@code File}s
+*/
+   public static Collection<File> relativizeToWorkingDir(final Collection<File> files) {
+   checkNotNull(files, "files");
+
+   if (files.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final java.nio.file.Path workingDirPath = 
Paths.get(System.getProperty("user.dir"));
+
+   final List<File> relativeFiles = new LinkedList<>();
+
+   for (File file : files) {
+   if (file.isAbsolute()) {
+   
relativeFiles.add(workingDirPath.relativize(file.toPath()).toFile());
+   } else {
+   relativeFiles.add(file);
+   }
+   }
+
+   return Collections.unmodifiableCollection(relativeFiles);
+   }
+
+   /**
+* Convert a collection of relative {@code File}s to a collection of 
relative {@code URL}s.
+*
+* @param relativeFiles a collection of relative {@code File}s
 
 Review comment:
   ```suggestion
 * @param relativeFiles a collection of relative {@link File Files}
   ```




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341156734
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The directory %s does not exist.", directory));
+   }
+   if (!Files.isDirectory(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
%s is not a directory.", directory));
+   }
+
+   final FilterFileVisitor filterFileVisitor = new 
FilterFileVisitor(fileFilter);
+
+   Files.walkFileTree(
+   directory.toPath(),
+   EnumSet.of(FileVisitOption.FOLLOW_LINKS),
+   Integer.MAX_VALUE,
+   filterFileVisitor);
+
+   return filterFileVisitor.getFiles();
+   }
+
+   /**
+* Convert a collection of {@code File}s to a collection of paths relative to the working dir.
+*
+* @param files a collection of files to be relativized
+* @return a collection of relative {@code File}s
+*/
+   public static Collection<File> relativizeToWorkingDir(final Collection<File> files) {
+   checkNotNull(files, "files");
+
+   if (files.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final java.nio.file.Path workingDirPath = 
Paths.get(System.getProperty("user.dir"));
+
+   final List<File> relativeFiles = new LinkedList<>();
 
 Review comment:
   `LinkedList` is usually slower than `ArrayList`.
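For the append-then-iterate pattern in this helper, an `ArrayList` (pre-sized when the element count is known) is indeed the usual default; `LinkedList` pays an extra node allocation per element. A small illustration of the suggested change (demo names are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListChoiceDemo {

    /** Prefixes each jar name with "lib/", collecting into a pre-sized ArrayList. */
    static List<String> prefixAll(List<String> jarNames) {
        // Pre-sizing avoids intermediate array growth; no per-element
        // node objects as a LinkedList would allocate.
        List<String> result = new ArrayList<>(jarNames.size());
        for (String name : jarNames) {
            result.add("lib/" + name);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(prefixAll(Arrays.asList("a.jar", "b.jar"))); // [lib/a.jar, lib/b.jar]
    }
}
```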




[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341160757
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The directory %s does not exist.", directory));
+   }
+   if (!Files.isDirectory(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
%s is not a directory.", directory));
+   }
+
+   final FilterFileVisitor filterFileVisitor = new 
FilterFileVisitor(fileFilter);
+
+   Files.walkFileTree(
+   directory.toPath(),
+   EnumSet.of(FileVisitOption.FOLLOW_LINKS),
+   Integer.MAX_VALUE,
+   filterFileVisitor);
+
+   return filterFileVisitor.getFiles();
+   }
+
+   /**
+* Convert a collection of {@code File}s to a collection of paths relative to the working dir.
+*
+* @param files a collection of files to be relativized
+* @return a collection of relative {@code File}s
+*/
+   public static Collection<File> relativizeToWorkingDir(final Collection<File> files) {
+   checkNotNull(files, "files");
+
+   if (files.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final java.nio.file.Path workingDirPath = 
Paths.get(System.getProperty("user.dir"));
+
+   final List<File> relativeFiles = new LinkedList<>();
+
+   for (File file : files) {
+   if (file.isAbsolute()) {
+   
relativeFiles.add(workingDirPath.relativize(file.toPath()).toFile());
+   } else {
+   relativeFiles.add(file);
+   }
+   }
+
+   return Collections.unmodifiableCollection(relativeFiles);
+   }
+
+   /**
+* Convert a collection of relative {@code File}s to a collection of 
relative {@code URL}s.
+*
+* @param relativeFiles a collection of relative {@code File}s
+* @return a collection of relative URLs
+*
+* @throws MalformedURLException if an error occurs while constructing a URL.
+*/
+   public static Collection<URL> toRelativeURLs(final Collection<File> relativeFiles) throws MalformedURLException {
+   checkNotNull(relativeFiles, "relativeFiles");
+
+   if (relativeFiles.isEmpty()) {
+   return Collections.emptyList();
+   }
+
+   final List<URL> urls = new LinkedList<>();
+
+   for (File file : relativeFiles) {
+   checkArgument(!file.isAbsolute(), "the relative path is 
required");
+   urls.add(
+   new URL(
+   new URL(file.toURI().getScheme() + ":"),
+   file.getPath()
+   )
+   );
+   }
+
+   return Collections.unmodifiableCollection(urls);
+   }
+
+   private static final class FilterFileVisitor extends SimpleFileVisitor<java.nio.file.Path> {
+
+   private List<File> files;
+
+   private final Predicate<File> fileFilter;
+
+   FilterFileVisitor(Predicate<File> fileFilter) {
+   this.fileFilter = checkNotNull(fileFilter);
+   }
+
+   @Override
+   public FileVisitResult visitFile(java.nio.file.Path file, 
BasicFileAttributes attrs) throws IOException {
+   FileVisitResult fileVisitResult = super.visitFile(file, 
attrs);
+
+   if (this.fileFilter.test(file.toFile())) {
+   if (files == null) {
+   files = new ArrayList<>();
 
 Review comment:
   

[GitHub] [flink] tillrohrmann commented on a change in pull request #9950: [FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever

2019-10-31 Thread GitBox
tillrohrmann commented on a change in pull request #9950: 
[FLINK-14464][runtime] Introduce the AbstractUserClassPathJobGraphRetriever
URL: https://github.com/apache/flink/pull/9950#discussion_r341156511
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/FileUtils.java
 ##
 @@ -525,6 +540,123 @@ public static Path expandDirectory(Path file, Path 
targetDirectory) throws IOExc
return new Path(targetDirectory, rootDir);
}
 
+   /**
+* List the {@code directory} recursively and return the files that 
satisfy the {@code fileFilter}.
+*
+* @param directory the directory to be listed
+* @param fileFilter a file filter
+* @return a collection of {@code File}s
+*
+* @throws IOException if an I/O error occurs while listing the files 
in the given directory
+*/
+   public static Collection<File> listFilesInPath(final File directory, final Predicate<File> fileFilter) throws IOException {
+   checkNotNull(directory, "directory");
+   checkNotNull(fileFilter, "fileFilter");
+
+   if (!Files.exists(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The directory %s does not exist.", directory));
+   }
+   if (!Files.isDirectory(directory.toPath())) {
+   throw new IllegalArgumentException(String.format("The 
%s is not a directory.", directory));
+   }
+
+   final FilterFileVisitor filterFileVisitor = new 
FilterFileVisitor(fileFilter);
+
+   Files.walkFileTree(
+   directory.toPath(),
+   EnumSet.of(FileVisitOption.FOLLOW_LINKS),
+   Integer.MAX_VALUE,
+   filterFileVisitor);
+
+   return filterFileVisitor.getFiles();
+   }
+
+   /**
+* Convert a collection of {@code File}s to a collection of paths relative to the working dir.
+*
+* @param files a collection of files to be relativized
+* @return a collection of relative {@code File}s
+*/
+   public static Collection<File> relativizeToWorkingDir(final Collection<File> files) {
 
 Review comment:
   Same here. Let's change the signature to `nio.file.Path`.
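With a `nio.file.Path`-based signature, the relativization itself reduces to `Path#relativize`; a sketch of what the reworked helper could look like (an illustration, not the merged Flink code):

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class RelativizeDemo {

    /** Relativizes every absolute path against {@code workingDir}; relative paths pass through. */
    static Collection<Path> relativizeTo(Path workingDir, Collection<Path> paths) {
        List<Path> result = new ArrayList<>(paths.size());
        for (Path path : paths) {
            result.add(path.isAbsolute() ? workingDir.relativize(path) : path);
        }
        return result;
    }

    public static void main(String[] args) {
        // Assumes Unix-style paths; separators differ on Windows.
        Path workingDir = Paths.get("/opt/flink");
        System.out.println(relativizeTo(workingDir, Arrays.asList(
            Paths.get("/opt/flink/lib/job.jar"),   // absolute -> lib/job.jar
            Paths.get("conf/flink-conf.yaml"))));  // already relative, unchanged
    }
}
```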




[GitHub] [flink] pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid task starvation with mailbox

2019-10-31 Thread GitBox
pnowojski commented on a change in pull request #10009: [FLINK-14304] Avoid 
task starvation with mailbox
URL: https://github.com/apache/flink/pull/10009#discussion_r341114825
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/Mailbox.java
 ##
 @@ -37,33 +37,33 @@
 *
 * @return an optional with either the oldest mail from the mailbox 
(head of queue) if the mailbox is not empty or
 * an empty optional otherwise.
-* @throws  MailboxStateException if mailbox is already closed.
+* @throws IllegalStateException if mailbox is already closed.
 */
-   Optional<Mail> tryTake(int priority) throws MailboxStateException;
+   Optional<Mail> tryTake(int priority) throws IllegalStateException;
 
/**
 * This method returns the oldest mail from the mailbox (head of queue) 
or blocks until a mail is available.
 *
 * @return the oldest mail from the mailbox (head of queue).
 * @throws InterruptedException on interruption.
-* @throws  MailboxStateException if mailbox is already closed.
+* @throws IllegalStateException if mailbox is already closed.
 */
@Nonnull
-   Mail take(int priority) throws InterruptedException, MailboxStateException;
+   Mail take(int priority) throws InterruptedException;
 
 Review comment:
  One big difference between `MailboxStateException` and `IllegalStateException`
is that the latter is a `RuntimeException`, as the missing `throws` declaration
here shows.
  
  The question is whether we think this is indeed an illegal state that
indicates a bug, so the code should crash, or whether such an exception
should/could be handled by someone. I guess it's a bug.
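To make the checked vs. unchecked distinction concrete, here is a hypothetical, stripped-down mailbox (a stand-in for illustration, not Flink's actual `Mailbox`): with `IllegalStateException`, callers compile without any `throws` clause, and taking from a closed mailbox simply crashes the caller.

```java
import java.util.ArrayDeque;
import java.util.Optional;
import java.util.Queue;

// Hypothetical stand-in for the mailbox, for illustration only.
public class TinyMailbox {
    private final Queue<String> queue = new ArrayDeque<>();
    private boolean open = true;

    void put(String mail) { queue.add(mail); }

    void close() { open = false; }

    // Unchecked exception: no throws clause needed; a closed mailbox is
    // treated as a programming bug rather than a recoverable condition.
    Optional<String> tryTake() {
        if (!open) {
            throw new IllegalStateException("Mailbox is already closed.");
        }
        return Optional.ofNullable(queue.poll());
    }
}
```

Had `tryTake` kept a checked `MailboxStateException`, every caller would have to catch or re-declare it, which only makes sense if someone can actually handle the closed state.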




[jira] [Closed] (FLINK-14440) Enable BatchFineGrainedRecoveryITCase to pass with scheduler NG

2019-10-31 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao closed FLINK-14440.

Resolution: Fixed

1.10: 6a25bc59503397b9c959d957624c654ba5b3d8e5

> Enable BatchFineGrainedRecoveryITCase to pass with scheduler NG
> ---
>
> Key: FLINK-14440
> URL: https://issues.apache.org/jira/browse/FLINK-14440
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> BatchFineGrainedRecoveryITCase currently fails with scheduler NG.
> The failure cause is a higher failover count than expected, due to the lack of
> a region failover optimization; see FLINK-14439.
> We need to solve FLINK-14439 and then annotate this test with
> AlsoRunWithSchedulerNG.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13702) BaseMapSerializerTest.testDuplicate fails on Travis

2019-10-31 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963982#comment-16963982
 ] 

Dawid Wysakowicz commented on FLINK-13702:
--

I don't know exactly how the {{JoinedRow}} is used and therefore whether the 
{{equals/hashCode}} is needed. Correct me if I am wrong, but if we remove 
{{equals/hashCode}} from {{BinaryGeneric}}, doesn't that mean we can no longer 
use generic objects in a {{GroupBy}} clause? Isn't that a regression?

> BaseMapSerializerTest.testDuplicate fails on Travis
> ---
>
> Key: FLINK-13702
> URL: https://issues.apache.org/jira/browse/FLINK-13702
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Dawid Wysakowicz
>Priority: Critical
>  Labels: test-stability
>
> The {{BaseMapSerializerTest.testDuplicate}} fails on Travis with an 
> {{java.lang.IndexOutOfBoundsException}}.
> https://api.travis-ci.org/v3/job/570973199/log.txt





[GitHub] [flink] flinkbot edited a comment on issue #10035: [FLINK-14080][table-planner-blink] Introduce DateTime as internal representation of TIMESTAMP_WITHOUT_TIME_ZONE

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10035: [FLINK-14080][table-planner-blink] 
Introduce DateTime as internal representation of TIMESTAMP_WITHOUT_TIME_ZONE
URL: https://github.com/apache/flink/pull/10035#issuecomment-547440593
 
 
   
   ## CI report:
   
   * db25eacc6c332f35ea23e583b5363717c0f599da : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134010481)
   * 7161ec4c8020019c8b4976dbd262fc53a90c9f02 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134143018)
   * 6f0258442192155e8d19d4eece8c5818f68d3467 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134160149)
   * 7e4705478310fb41201a5b7f581a09bed3fc3c6f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134169990)
   * 1130be17a2024e3fda8b097acb17fa4135c01690 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/134389782)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot commented on issue #10062: [FLINK-14588][hive] Support Hive version 1.0.0 and 1.0.1

2019-10-31 Thread GitBox
flinkbot commented on issue #10062: [FLINK-14588][hive] Support Hive version 
1.0.0 and 1.0.1
URL: https://github.com/apache/flink/pull/10062#issuecomment-548373039
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 47e63a50352681e156100c5a35084ff927292910 (Thu Oct 31 
13:25:01 UTC 2019)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14588).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #10049: [FLINK-14544][runtime] Fixing race condition during task cancellation

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10049: [FLINK-14544][runtime] Fixing race 
condition during task cancellation
URL: https://github.com/apache/flink/pull/10049#issuecomment-547930818
 
 
   
   ## CI report:
   
   * 7e94dd883cdbfa5f32bdf727166366819f956a17 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134206876)
   * e1ade8d8bebde874fc376978f55241a62abb2ff0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380749)
   * 776504c9a21756a5111e60f6574b8f5a17209b6b : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Comment Edited] (FLINK-13702) BaseMapSerializerTest.testDuplicate fails on Travis

2019-10-31 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963982#comment-16963982
 ] 

Dawid Wysakowicz edited comment on FLINK-13702 at 10/31/19 1:40 PM:


I don't know exactly how the {{JoinedRow}} is used and therefore whether the 
{{equals/hashCode}} is needed. Correct me if I am wrong, but if we remove 
{{equals/hashCode}} from {{BinaryGeneric}}, doesn't that mean we can no longer 
use generic objects in a {{GroupBy}} clause? Isn't that a regression?

Edit: Ok, I think the answer is that the key selector always uses {{BinaryRow}}, 
where the {{BinaryGeneric}} is written as bytes. The objects are never compared.

I would then be in favor of removing the {{serializer}} from {{BinaryGeneric}}.


was (Author: dawidwys):
I don't know exactly how the {{JoinedRow}} is used and therefore if the 
{{equals/hashCode}} is needed. Correct me if I am wrong but if we remove the 
{{equals/hashCode}} from {{BinaryGeneric}}, doesn't it mean that we can no 
longer use generic objects in a {{GroupBy}} clause, no? Isn't that a regression?

> BaseMapSerializerTest.testDuplicate fails on Travis
> ---
>
> Key: FLINK-13702
> URL: https://issues.apache.org/jira/browse/FLINK-13702
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Dawid Wysakowicz
>Priority: Critical
>  Labels: test-stability
>
> The {{BaseMapSerializerTest.testDuplicate}} fails on Travis with an 
> {{java.lang.IndexOutOfBoundsException}}.
> https://api.travis-ci.org/v3/job/570973199/log.txt





[GitHub] [flink] flinkbot edited a comment on issue #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #9993: [FLINK-14498][runtime]Introduce 
NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#issuecomment-546219804
 
 
   
   ## CI report:
   
   * f47a9f60dff6710ce5a7d5fe341a94d0fffb2d6d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133499201)
   * 501e86a6e9e8eab7fc26f030d284268d530e093e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134129471)
   * 3e3a090cfc7d9216701b68664e2c8fa4f34861f7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134135547)
   * bb53297bace0091789a1c0fa07e7261a339022b0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134324937)
   * 9af1c8d0e395ce1b197fef96e53512e7189aa12d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134356941)
   * d0a3717005a6e4145f7b51aeb5d04003808b31a0 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-14569) use flink to deal with batch task, when parallelism greater than 1, always 1 or 2 records loss

2019-10-31 Thread kk (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964064#comment-16964064
 ] 

kk commented on FLINK-14569:


Thank you very much. Using the API 
"env.readCsvFile(path).fieldDelimiter(fieldDeli).lineDelimiter(lineDeli).types(...)"
 or "new 
CsvAppendTableSourceFactory().createTableSource(false,targetTableDescriptor.toProperties())"
 and then printing the dataset, the data always loses 1 
record (total: 18 records, parallelism: 24), but using the API 
"env.readTextFile(path)", I can get the right result!

> use flink to deal with batch task, when parallelism greater than 1, always 1
> or 2 records loss
> -
>
> Key: FLINK-14569
> URL: https://issues.apache.org/jira/browse/FLINK-14569
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet
>Affects Versions: 1.7.2
> Environment: hadoop:2.8.5
> flink: 1.7.1
> node machines: 3
> deploy: yarn
>Reporter: kk
>Priority: Critical
>
> When Flink reads from an HDFS file with parallelism > 1, occasionally 1 or
> 2 records are lost, whether the HDFS file is bigger or smaller.





[GitHub] [flink] kl0u edited a comment on issue #5521: [FLINK-8599] Improve the failure behavior of the FileInputFormat for …

2019-10-31 Thread GitBox
kl0u edited a comment on issue #5521: [FLINK-8599] Improve the failure behavior 
of the FileInputFormat for …
URL: https://github.com/apache/flink/pull/5521#issuecomment-548392411
 
 
   I was thinking something like the `FileProcessingMode` in the `readFile` 
method. One `mode` being something like `SKIP_MISSING_SPLITS` and the other 
`FAIL_ON_MISSING_SPLITS` or something like this.




[GitHub] [flink] flinkbot edited a comment on issue #9972: [FLINK-14496][client] Exclude detach flag from ClusterClient

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #9972: [FLINK-14496][client] Exclude detach 
flag from ClusterClient
URL: https://github.com/apache/flink/pull/9972#issuecomment-545092005
 
 
   
   ## CI report:
   
   * 355939385e2951dd0da0e3fe1da8feb7dbc27f61 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133049329)
   * 3eaa1b59726a2695f71bc6bc8cfe75f6a085a5d0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133107682)
   * c08dc75534c18bc5bd35e6f270ef4e1eba9c39ea : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133109491)
   * b20e536e7f2b867cab66e7f4cef402664c31aa33 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133113346)
   * 63610043d2e1ba1ca58e70d35285d9eff500b31b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133133305)
   * 565fab1d144edf946e99cbe29398e3699dc43abd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133175206)
   * 090564e34d866c8d823312a218ff6c3aadd3b366 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/134189153)
   * a24ba575dbbd17f9cd06b3906703617f81c65761 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134194013)
   * 8fc8c7591b97a0687816b7df233103fa6bd7a3a4 : UNKNOWN
   * b8971704faa08bf64a2dabcd36899c16ec32719a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134229119)
   * ba1c4b1d076b25e09850d68a2856f8db169aa246 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134389814)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9975: [FLINK-14477][coordination] Implement promotion logic on TaskExecutor

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #9975: [FLINK-14477][coordination] Implement 
promotion logic on TaskExecutor 
URL: https://github.com/apache/flink/pull/9975#issuecomment-545348136
 
 
   
   ## CI report:
   
   * 93c53cdbe080b7708e12f2f26dcd386684f8f125 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151875)
   * 9be6cd8ac4742ac30c0b00d11858bff2183d76a7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133375898)
   * 7f40891d03679b2a6f2d9d2734195a4ba026b66c : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-14565) Shutdown SystemResourcesCounter on (JM|TM)MetricGroup closed

2019-10-31 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964115#comment-16964115
 ] 

Chesnay Schepler commented on FLINK-14565:
--

I would disagree that the {{SystemResourcesCounter}} is bound to any metric 
group.
Ultimately, the SRC is a separate entity populating _some_ data-structure, from 
which _some_ metrics are created which are then passed to the metric system.

The counter may continue to exist with the group being closed, just as the 
metric group can continue to exist with the counter having shut down.

The {{SystemResourcesCounter}} probably shouldn't be spawning a separate thread 
in the first place; let me think a bit to find a way we can work around this.
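A minimal sketch of the lifecycle being discussed (the class and method names here are invented for illustration, not Flink's actual ones): a group that spawns a sampler thread must interrupt and join it on close, otherwise the thread outlives the group.

```java
// Illustrative only: a "metric group" that owns a background sampler thread
// and shuts it down on close, so closing the group cannot leak the thread.
import java.util.concurrent.atomic.AtomicLong;

public class SamplingGroup implements AutoCloseable {
    private final AtomicLong sample = new AtomicLong();
    private final Thread sampler;

    public SamplingGroup() {
        sampler = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                sample.incrementAndGet(); // stand-in for polling system resources
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // restore flag and exit loop
                }
            }
        }, "resource-sampler");
        sampler.setDaemon(true);
        sampler.start();
    }

    public boolean samplerAlive() {
        return sampler.isAlive();
    }

    @Override
    public void close() {
        sampler.interrupt();
        try {
            sampler.join(); // block until the sampler has actually exited
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Marking the thread as a daemon is a second line of defense for JVM shutdown, but it does not help the embedded-MiniCluster case described in the ticket, where the host process keeps running; the explicit interrupt-and-join on close is what prevents the leak.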

> Shutdown SystemResourcesCounter on (JM|TM)MetricGroup closed
> 
>
> Key: FLINK-14565
> URL: https://issues.apache.org/jira/browse/FLINK-14565
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, we start SystemResourcesCounter when initializing
> (JM|TM)MetricGroup. This thread doesn't exit when (JM|TM)MetricGroup is
> closed; there is not even any exit logic for it.
> This can cause a thread leak. For example, on our platform, which supports
> previewing sample SQL execution, a MiniCluster is started in the same process
> as the platform. When the preview job finishes, the MiniCluster is closed, and
> so is the (JM|TM)MetricGroup. However, these SystemResourcesCounter threads remain.
> I propose that when creating SystemResourcesCounter, we track it in
> (JM|TM)MetricGroup, and on (JM|TM)MetricGroup close, shut down
> SystemResourcesCounter. This way, we avoid thread leaks.
> CC [~chesnay] [~trohrmann]





[GitHub] [flink] flinkbot edited a comment on issue #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #9993: [FLINK-14498][runtime]Introduce 
NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#issuecomment-546219804
 
 
   
   ## CI report:
   
   * f47a9f60dff6710ce5a7d5fe341a94d0fffb2d6d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133499201)
   * 501e86a6e9e8eab7fc26f030d284268d530e093e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134129471)
   * 3e3a090cfc7d9216701b68664e2c8fa4f34861f7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134135547)
   * bb53297bace0091789a1c0fa07e7261a339022b0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134324937)
   * 9af1c8d0e395ce1b197fef96e53512e7189aa12d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134356941)
   * d0a3717005a6e4145f7b51aeb5d04003808b31a0 : UNKNOWN
   * 1df7a8290658f9a7079bfa61524172a39001c6cc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134400890)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10006: [FLINK-14312][runtime] Support building logical pipelined regions from JobGraph

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #10006: [FLINK-14312][runtime] Support 
building logical pipelined regions from JobGraph
URL: https://github.com/apache/flink/pull/10006#issuecomment-546671910
 
 
   
   ## CI report:
   
   * 0697f19b0cce32ea7e70e0c13abe5a4a53d5 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/133705655)
   * bad1d64360f1c9efc6e77e47dcb309629e31e6e0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133706389)
   * 603e5b42d12209be8ded1283e0b3f0130eb5abaf : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133949732)
   * 347249231785b0f0d02c8410fa664c1c34a81023 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134046310)
   * e701b10679b0b9078d95a8aeda029c8268b3851c : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9703: [FLINK-14038]Add default GC options for flink on yarn to facilitate debugging

2019-10-31 Thread GitBox
flinkbot edited a comment on issue #9703: [FLINK-14038]Add default GC options 
for flink on yarn to facilitate debugging
URL: https://github.com/apache/flink/pull/9703#issuecomment-532581942
 
 
   
   ## CI report:
   
   * 1b930d19f27909ad5e2759eb6c5471c2ce07e8b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128133485)
   * 977ccb5d91869e37027069d8b2b490bf850253ed : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/129659424)
   * 8347093d4cb32ed752bc01f5cd98abb2d803df94 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/130842273)
   * 796de65585c861a67c46ba8c578e08302ade2cdc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133371242)
   * 5817aa535fb834889eebb96478b7a40f936fb3c3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134200223)
   * 040b9878337aa7b919f16d2cfb1c9bc590b31a7e : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   



