[jira] [Updated] (FLINK-9779) Remove SlotRequest timeout

2018-07-11 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/FLINK-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

陈梓立 updated FLINK-9779:
---
Description: As discussed in FLINK-8643 and FLINK-8653, we use an external 
timeout to replace the internal timeout of slot requests. This raises the 
question: why not remove this timeout mechanism entirely? In our production 
case, this timeout mechanism causes unnecessary failures and makes resource 
allocation inaccurate.  (was: As discussed in FLINK-8643 and FLINK-8653, we 
use an external timeout to replace the internal timeout of slot requests. This 
raises the question: why not remove this timeout mechanism entirely? In our 
production case, this timeout mechanism causes unnecessary failures and makes 
resource allocation inaccurate.

I would propose getting rid of the slot request timeout. Instead, we handle TM 
failures in the RM, which properly cancels pending requests; and if a TM cannot 
offer a slot to the JM, we introduce a blacklist mechanism to nudge the RM to 
reallocate the pending requests.)
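The proposed behavior can be sketched as follows. This is a minimal, hedged 
sketch of the proposal itself, not Flink code; all names (SlotReallocSketch, 
onOfferFailed, etc.) are hypothetical:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SlotReallocSketch {
    private final Map<String, String> pendingRequestToTm = new HashMap<>(); // requestId -> tmId
    private final Set<String> blacklistedTms = new HashSet<>();

    void assignPending(String requestId, String tmId) {
        // A blacklisted TM never receives new pending requests.
        if (!blacklistedTms.contains(tmId)) {
            pendingRequestToTm.put(requestId, tmId);
        }
    }

    // TM failed: the RM cancels that TM's pending requests itself,
    // instead of relying on a slot request timeout to fail them.
    Set<String> onTaskManagerFailure(String tmId) {
        Set<String> cancelled = new HashSet<>();
        pendingRequestToTm.entrySet().removeIf(e -> {
            if (e.getValue().equals(tmId)) {
                cancelled.add(e.getKey());
                return true;
            }
            return false;
        });
        return cancelled; // these requests go back to the RM for reallocation
    }

    // TM could not offer the slot to the JM: blacklist it and free the request.
    void onOfferFailed(String requestId, String tmId) {
        blacklistedTms.add(tmId);
        pendingRequestToTm.remove(requestId);
    }

    boolean isBlacklisted(String tmId) {
        return blacklistedTms.contains(tmId);
    }

    public static void main(String[] args) {
        SlotReallocSketch rm = new SlotReallocSketch();
        rm.assignPending("r1", "tm1");
        System.out.println("cancelled on TM failure: " + rm.onTaskManagerFailure("tm1"));
    }
}
```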

> Remove SlotRequest timeout
> --
>
> Key: FLINK-9779
> URL: https://issues.apache.org/jira/browse/FLINK-9779
> Project: Flink
>  Issue Type: Improvement
>  Components: JobManager, ResourceManager, TaskManager
>Reporter: 陈梓立
>Priority: Major
>
> As discussed in FLINK-8643 and FLINK-8653, we use an external timeout to 
> replace the internal timeout of slot requests. This raises the question: why 
> not remove this timeout mechanism entirely? In our production case, this 
> timeout mechanism causes unnecessary failures and makes resource allocation 
> inaccurate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9649) TaskManagers are not scheduled on Mesos

2018-07-11 Thread Gary Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541160#comment-16541160
 ] 

Gary Yao commented on FLINK-9649:
-

[~lishim] You are right. Thanks for following up.

> TaskManagers are not scheduled on Mesos
> ---
>
> Key: FLINK-9649
> URL: https://issues.apache.org/jira/browse/FLINK-9649
> Project: Flink
>  Issue Type: Bug
>  Components: Mesos
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Leonid Ishimnikov
>Priority: Major
>
> Flink correctly registers as a framework, but does not schedule task managers.
> Command:
> {noformat}
> ./bin/mesos-appmaster.sh -Dmesos.master="zk://192.168.0.101:2181/mesos" 
> -Djobmanager.heap.mb=1024 -Djobmanager.rpc.address=$(hostname -i) 
> -Djobmanager.rpc.port=6123 -Djobmanager.web.address=$(hostname -i) 
> -Djobmanager.web.port=8080 -Dmesos.initial-tasks=2 
> -Dmesos.resourcemanager.tasks.mem=4096 -Dtaskmanager.heap.mb=3500 
> -Dtaskmanager.numberOfTaskSlots=2 -Dparallelism.default=10 
> -Dmesos.resourcemanager.tasks.cpus=1 
> -Dmesos.resourcemanager.framework.principal=someuser 
> -Dmesos.resourcemanager.framework.secret=somepassword 
> -Dmesos.resourcemanager.framework.name="Flink-Test"{noformat}
> Log:
> {noformat}
> 2018-06-22 17:39:27,082 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - TaskManagers 
> will be created with 2 task slots
> 2018-06-22 17:39:27,082 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - TaskManagers 
> will be started with container size 4096 MB, JVM heap size 2765 MB, JVM 
> direct memory limit 1331 MB, 1.0 cpus, 0 gpus
> ...
> 2018-06-22 17:39:27,304 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Starting 
> the SlotManager.
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Registering as new framework.
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> 
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  -  
> Mesos Info:
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Master URL: zk://192.168.0.101:2181/mesos
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  -  
> Framework Info:
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> ID: (none)
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Name: Flink-Test
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Failover Timeout (secs): 10.0
> 2018-06-22 17:39:27,305 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Role: *
> 2018-06-22 17:39:27,306 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Capabilities: (none)
> 2018-06-22 17:39:27,306 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Principal: someuser
> 2018-06-22 17:39:27,306 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Host: 192.168.0.100
> 2018-06-22 17:39:27,306 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> Web UI: (none)
> 2018-06-22 17:39:27,306 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - 
> 
> 2018-06-22 17:39:27,432 INFO  
> org.apache.flink.mesos.scheduler.ConnectionMonitor    - Connecting to 
> Mesos...
> 2018-06-22 17:39:27,434 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosResourceManager  - Mesos 
> resource manager initialized.
> 2018-06-22 17:39:27,444 INFO  
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher  - Dispatcher 
> akka.tcp://flink@192.168.0.100:6123/user/dispatcher was granted leadership 
> with fencing token 
> 2018-06-22 17:39:27,444 INFO  
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher  - Recovering 
> all persisted jobs.
> 2018-06-22 17:39:27,466 INFO  
> org.apache.flink.mesos.scheduler.ConnectionMonitor    - Connected to 
> Mesos as framework ID 7295a8f7-c0a9-41d1-a737-ae71c57b72bf-1141.{noformat}
> There is nothing further in the log after that.





[jira] [Assigned] (FLINK-9828) Resource manager should recover slot resource status after failover

2018-07-11 Thread shuai.xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shuai.xu reassigned FLINK-9828:
---

Assignee: shuai.xu

> Resource manager should recover slot resource status after failover
> ---
>
> Key: FLINK-9828
> URL: https://issues.apache.org/jira/browse/FLINK-9828
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Reporter: shuai.xu
>Assignee: shuai.xu
>Priority: Major
>
> After a resource manager failover, task executors report their slot 
> allocation status to the RM. But the report does not contain the slot's 
> resources, so the RM only knows the slots are occupied but cannot know how 
> much resource is used.
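The missing piece can be sketched as a slot report that carries a resource 
profile per slot. This is a hedged illustration with assumed names 
(SlotReportSketch, ResourceProfile), not Flink's actual classes:

```java
public class SlotReportSketch {
    // The resources one slot occupies; this is what the report was missing.
    static final class ResourceProfile {
        final double cpuCores;
        final int memoryMb;
        ResourceProfile(double cpuCores, int memoryMb) {
            this.cpuCores = cpuCores;
            this.memoryMb = memoryMb;
        }
    }

    // One entry of the TaskExecutor's slot report to the RM.
    static final class SlotStatus {
        final int slotIndex;
        final boolean allocated;
        final ResourceProfile resource; // included so the RM can recover usage after failover
        SlotStatus(int slotIndex, boolean allocated, ResourceProfile resource) {
            this.slotIndex = slotIndex;
            this.allocated = allocated;
            this.resource = resource;
        }
    }

    public static void main(String[] args) {
        SlotStatus status = new SlotStatus(0, true, new ResourceProfile(1.0, 4096));
        // After failover, the RM now knows both occupancy and resource usage.
        System.out.println("allocated=" + status.allocated + " memMb=" + status.resource.memoryMb);
    }
}
```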





[jira] [Commented] (FLINK-9807) Improve EventTimeWindowCheckpointITCase with parameterized

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541135#comment-16541135
 ] 

ASF GitHub Bot commented on FLINK-9807:
---

Github user yanghua commented on a diff in the pull request:

https://github.com/apache/flink/pull/6305#discussion_r201907959
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/EventTimeWindowCheckpointingITCase.java
 ---
@@ -871,19 +892,40 @@ public IntType(int value) {
}
}
 
-   protected int numElementsPerKey() {
-   return 300;
+   private int numElementsPerKey() {
+   switch (this.stateBackendEnum) {
+   case ROCKSDB_FULLY_ASYNC:
+   case ROCKSDB_INCREMENTAL:
+   case ROCKSDB_INCREMENTAL_ZK:
+   return 3000;
+   default:
+   return 300;
--- End diff --

it seems a tab is missing here


> Improve EventTimeWindowCheckpointITCase with parameterized
> --
>
> Key: FLINK-9807
> URL: https://issues.apache.org/jira/browse/FLINK-9807
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Congxian Qiu
>Priority: Major
>  Labels: pull-request-available
>
> Currently, `AbstractEventTimeWindowCheckpointingITCase` and 
> `AbstractLocalRecoveryITCase` need to be re-implemented for every backend; we 
> can improve this by using JUnit's parameterized runner.
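The parameterization idea can be sketched in plain Java. Conceptually, JUnit's 
Parameterized runner (`@RunWith(Parameterized.class)` plus a static 
`@Parameters` method) instantiates the test class once per parameter set; the 
sketch below mimics that loop without the JUnit dependency, and the enum values 
are assumed for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class ParameterizedSketch {
    enum StateBackendEnum { MEM, FILE_ASYNC, ROCKSDB_FULLY_ASYNC, ROCKSDB_INCREMENTAL }

    // A per-backend value, replacing the per-subclass override of the
    // abstract-base-class approach.
    static int numElementsPerKey(StateBackendEnum backend) {
        switch (backend) {
            case ROCKSDB_FULLY_ASYNC:
            case ROCKSDB_INCREMENTAL:
                return 3000;
            default:
                return 300;
        }
    }

    public static void main(String[] args) {
        // The Parameterized runner would iterate the @Parameters collection
        // like this, running every test once per backend.
        List<StateBackendEnum> parameters = Arrays.asList(StateBackendEnum.values());
        for (StateBackendEnum backend : parameters) {
            System.out.println(backend + " -> " + numElementsPerKey(backend));
        }
    }
}
```

With the real runner, each entry of the `@Parameters` collection is passed to 
the test's constructor, so one concrete test class covers all backends.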





[jira] [Commented] (FLINK-9807) Improve EventTimeWindowCheckpointITCase with parameterized

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541136#comment-16541136
 ] 

ASF GitHub Bot commented on FLINK-9807:
---

Github user yanghua commented on a diff in the pull request:

https://github.com/apache/flink/pull/6305#discussion_r201908022
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/EventTimeWindowCheckpointingITCase.java
 ---
@@ -871,19 +892,40 @@ public IntType(int value) {
}
}
 
-   protected int numElementsPerKey() {
-   return 300;
+   private int numElementsPerKey() {
+   switch (this.stateBackendEnum) {
+   case ROCKSDB_FULLY_ASYNC:
+   case ROCKSDB_INCREMENTAL:
+   case ROCKSDB_INCREMENTAL_ZK:
+   return 3000;
--- End diff --

changing this to a constant looks better~


> Improve EventTimeWindowCheckpointITCase with parameterized
> --
>
> Key: FLINK-9807
> URL: https://issues.apache.org/jira/browse/FLINK-9807
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Congxian Qiu
>Priority: Major
>  Labels: pull-request-available
>
> Currently, `AbstractEventTimeWindowCheckpointingITCase` and 
> `AbstractLocalRecoveryITCase` need to be re-implemented for every backend; we 
> can improve this by using JUnit's parameterized runner.





[jira] [Commented] (FLINK-9807) Improve EventTimeWindowCheckpointITCase with parameterized

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541138#comment-16541138
 ] 

ASF GitHub Bot commented on FLINK-9807:
---

Github user yanghua commented on a diff in the pull request:

https://github.com/apache/flink/pull/6305#discussion_r201908226
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/LocalRecoveryITCase.java
 ---
@@ -25,35 +25,46 @@
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.TestName;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
 
 import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
 
-import static 
org.apache.flink.test.checkpointing.AbstractEventTimeWindowCheckpointingITCase.StateBackendEnum;
+import static 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.StateBackendEnum;
+import static 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.StateBackendEnum.FILE_ASYNC;
+import static 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.StateBackendEnum.ROCKSDB_FULLY_ASYNC;
+import static 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.StateBackendEnum.ROCKSDB_INCREMENTAL_ZK;
 
 /**
- * This test delegates to instances of {@link 
AbstractEventTimeWindowCheckpointingITCase} that have been reconfigured
+ * This test delegates to instances of {@link 
EventTimeWindowCheckpointingITCase} that have been reconfigured
  * to use local recovery.
  *
- * TODO: This class must be refactored to properly extend {@link 
AbstractEventTimeWindowCheckpointingITCase}.
+ * TODO: This class must be refactored to properly extend {@link 
EventTimeWindowCheckpointingITCase}.
--- End diff --

is the TODO still needed?


> Improve EventTimeWindowCheckpointITCase with parameterized
> --
>
> Key: FLINK-9807
> URL: https://issues.apache.org/jira/browse/FLINK-9807
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Congxian Qiu
>Priority: Major
>  Labels: pull-request-available
>
> Currently, `AbstractEventTimeWindowCheckpointingITCase` and 
> `AbstractLocalRecoveryITCase` need to be re-implemented for every backend; we 
> can improve this by using JUnit's parameterized runner.





[jira] [Commented] (FLINK-9807) Improve EventTimeWindowCheckpointITCase with parameterized

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541137#comment-16541137
 ] 

ASF GitHub Bot commented on FLINK-9807:
---

Github user yanghua commented on a diff in the pull request:

https://github.com/apache/flink/pull/6305#discussion_r201908082
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/EventTimeWindowCheckpointingITCase.java
 ---
@@ -871,19 +892,40 @@ public IntType(int value) {
}
}
 
-   protected int numElementsPerKey() {
-   return 300;
+   private int numElementsPerKey() {
+   switch (this.stateBackendEnum) {
+   case ROCKSDB_FULLY_ASYNC:
+   case ROCKSDB_INCREMENTAL:
+   case ROCKSDB_INCREMENTAL_ZK:
+   return 3000;
+   default:
+   return 300;
+   }
}
 
-   protected int windowSize() {
-   return 100;
+   private int windowSize() {
+   switch (this.stateBackendEnum) {
+   case ROCKSDB_FULLY_ASYNC:
+   case ROCKSDB_INCREMENTAL:
+   case ROCKSDB_INCREMENTAL_ZK:
+   return 1000;
--- End diff --

changing this to a constant looks better to me


> Improve EventTimeWindowCheckpointITCase with parameterized
> --
>
> Key: FLINK-9807
> URL: https://issues.apache.org/jira/browse/FLINK-9807
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Congxian Qiu
>Priority: Major
>  Labels: pull-request-available
>
> Currently, `AbstractEventTimeWindowCheckpointingITCase` and 
> `AbstractLocalRecoveryITCase` need to be re-implemented for every backend; we 
> can improve this by using JUnit's parameterized runner.





[GitHub] flink pull request #6305: [FLINK-9807][tests] Optimize EventTimeWindowCheckp...

2018-07-11 Thread yanghua
Github user yanghua commented on a diff in the pull request:

https://github.com/apache/flink/pull/6305#discussion_r201907959
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/EventTimeWindowCheckpointingITCase.java
 ---
@@ -871,19 +892,40 @@ public IntType(int value) {
}
}
 
-   protected int numElementsPerKey() {
-   return 300;
+   private int numElementsPerKey() {
+   switch (this.stateBackendEnum) {
+   case ROCKSDB_FULLY_ASYNC:
+   case ROCKSDB_INCREMENTAL:
+   case ROCKSDB_INCREMENTAL_ZK:
+   return 3000;
+   default:
+   return 300;
--- End diff --

it seems a tab is missing here


---


[GitHub] flink pull request #6305: [FLINK-9807][tests] Optimize EventTimeWindowCheckp...

2018-07-11 Thread yanghua
Github user yanghua commented on a diff in the pull request:

https://github.com/apache/flink/pull/6305#discussion_r201908022
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/EventTimeWindowCheckpointingITCase.java
 ---
@@ -871,19 +892,40 @@ public IntType(int value) {
}
}
 
-   protected int numElementsPerKey() {
-   return 300;
+   private int numElementsPerKey() {
+   switch (this.stateBackendEnum) {
+   case ROCKSDB_FULLY_ASYNC:
+   case ROCKSDB_INCREMENTAL:
+   case ROCKSDB_INCREMENTAL_ZK:
+   return 3000;
--- End diff --

changing this to a constant looks better~


---


[GitHub] flink pull request #6305: [FLINK-9807][tests] Optimize EventTimeWindowCheckp...

2018-07-11 Thread yanghua
Github user yanghua commented on a diff in the pull request:

https://github.com/apache/flink/pull/6305#discussion_r201908226
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/LocalRecoveryITCase.java
 ---
@@ -25,35 +25,46 @@
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.TestName;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
 
 import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
 
-import static 
org.apache.flink.test.checkpointing.AbstractEventTimeWindowCheckpointingITCase.StateBackendEnum;
+import static 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.StateBackendEnum;
+import static 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.StateBackendEnum.FILE_ASYNC;
+import static 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.StateBackendEnum.ROCKSDB_FULLY_ASYNC;
+import static 
org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.StateBackendEnum.ROCKSDB_INCREMENTAL_ZK;
 
 /**
- * This test delegates to instances of {@link 
AbstractEventTimeWindowCheckpointingITCase} that have been reconfigured
+ * This test delegates to instances of {@link 
EventTimeWindowCheckpointingITCase} that have been reconfigured
  * to use local recovery.
  *
- * TODO: This class must be refactored to properly extend {@link 
AbstractEventTimeWindowCheckpointingITCase}.
+ * TODO: This class must be refactored to properly extend {@link 
EventTimeWindowCheckpointingITCase}.
--- End diff --

is the TODO still needed?


---


[GitHub] flink pull request #6305: [FLINK-9807][tests] Optimize EventTimeWindowCheckp...

2018-07-11 Thread yanghua
Github user yanghua commented on a diff in the pull request:

https://github.com/apache/flink/pull/6305#discussion_r201908082
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/EventTimeWindowCheckpointingITCase.java
 ---
@@ -871,19 +892,40 @@ public IntType(int value) {
}
}
 
-   protected int numElementsPerKey() {
-   return 300;
+   private int numElementsPerKey() {
+   switch (this.stateBackendEnum) {
+   case ROCKSDB_FULLY_ASYNC:
+   case ROCKSDB_INCREMENTAL:
+   case ROCKSDB_INCREMENTAL_ZK:
+   return 3000;
+   default:
+   return 300;
+   }
}
 
-   protected int windowSize() {
-   return 100;
+   private int windowSize() {
+   switch (this.stateBackendEnum) {
+   case ROCKSDB_FULLY_ASYNC:
+   case ROCKSDB_INCREMENTAL:
+   case ROCKSDB_INCREMENTAL_ZK:
+   return 1000;
--- End diff --

changing this to a constant looks better to me


---


[jira] [Created] (FLINK-9828) Resource manager should recover slot resource status after failover

2018-07-11 Thread shuai.xu (JIRA)
shuai.xu created FLINK-9828:
---

 Summary: Resource manager should recover slot resource status 
after failover
 Key: FLINK-9828
 URL: https://issues.apache.org/jira/browse/FLINK-9828
 Project: Flink
  Issue Type: Bug
  Components: Cluster Management
Reporter: shuai.xu


After a resource manager failover, task executors report their slot allocation 
status to the RM. But the report does not contain the slot's resources, so the 
RM only knows the slots are occupied but cannot know how much resource is used.





[jira] [Created] (FLINK-9827) ResourceManager may receive outdate report of slots status from task manager

2018-07-11 Thread shuai.xu (JIRA)
shuai.xu created FLINK-9827:
---

 Summary: ResourceManager may receive outdate report of slots 
status from task manager
 Key: FLINK-9827
 URL: https://issues.apache.org/jira/browse/FLINK-9827
 Project: Flink
  Issue Type: Bug
  Components: Cluster Management
Affects Versions: 1.5.0
Reporter: shuai.xu
Assignee: shuai.xu


The TaskExecutor reports its slot status to the resource manager in 
heartbeats, but this happens in a different thread from the main RPC thread. 
So it may happen that the RM requests a slot from a task executor but then 
receives a heartbeat saying the slot is not assigned. This causes the slot to 
be freed and assigned again.
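One common remedy for this kind of race is to version the slot state so stale 
reports can be detected. The sketch below is a hypothetical illustration of 
that idea (assumed names, not Flink's actual fix):

```java
public class SlotStatusSketch {
    private long version;     // bumped on every RM-initiated state change
    private boolean allocated;

    // RM requests the slot and records the version of its own action.
    synchronized long requestSlot() {
        allocated = true;
        return ++version;
    }

    // A heartbeat report carries the version the TaskExecutor last saw.
    // Reports older than the RM's latest action are ignored as stale.
    synchronized boolean applyReport(long reportedVersion, boolean reportedAllocated) {
        if (reportedVersion < version) {
            return false; // stale: the TM had not yet seen the slot request
        }
        allocated = reportedAllocated;
        return true;
    }

    synchronized boolean isAllocated() {
        return allocated;
    }

    public static void main(String[] args) {
        SlotStatusSketch slot = new SlotStatusSketch();
        long v = slot.requestSlot();
        // A heartbeat built before the request no longer frees the slot.
        System.out.println("stale applied: " + slot.applyReport(v - 1, false));
        System.out.println("allocated: " + slot.isAllocated());
    }
}
```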





[jira] [Created] (FLINK-9826) Implement FLIP-6 YARN Resource Manager for SESSION mode

2018-07-11 Thread shuai.xu (JIRA)
shuai.xu created FLINK-9826:
---

 Summary: Implement FLIP-6 YARN Resource Manager for SESSION mode
 Key: FLINK-9826
 URL: https://issues.apache.org/jira/browse/FLINK-9826
 Project: Flink
  Issue Type: New Feature
  Components: Cluster Management
Reporter: shuai.xu
Assignee: shuai.xu


The Flink YARN Session Resource Manager communicates with YARN's Resource 
Manager to acquire and release containers. It asks YARN for N containers 
according to the configuration.

It is also responsible for eagerly notifying the JobManager about container 
failures.





[jira] [Commented] (FLINK-9294) Improve type inference for UDFs with composite parameter or result type

2018-07-11 Thread Rong Rong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541084#comment-16541084
 ] 

Rong Rong commented on FLINK-9294:
--

Hmm, never mind. I think I mixed up the Scala/Java implicit conversions. I 
will ignore the Scala class for now.

> Improve type inference for UDFs with composite parameter or result type 
> 
>
> Key: FLINK-9294
> URL: https://issues.apache.org/jira/browse/FLINK-9294
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> Most UDF function signatures that include composite types such as *{{MAP}}*, 
> *{{ARRAY}}*, etc. require the user to override the *{{getParameterType}}* or 
> *{{getResultType}}* method explicitly. 
> It should be possible to resolve the composite type based on the function 
> signature, such as:
> {code:java}
> public String[] eval(Map mapArg) { /* ...  */ }
> {code}
> The function catalog search should do either of the following:
> - Automatically resolve that:
>   1. *{{ObjectArrayTypeInfo}}* is the result type.
>   2. *{{MapTypeInfo}}* is the parameter type.
> - Improve function mapping to find and locate functions with such signatures.
> During compilation, it should:
> - Resolve (scala.Map / java.util.Map) and (scala.Seq / Java arrays) 
> consistently.
> - Automatically inject a type cast function (see FLINK-9430) to match the 
> correct type, or automatically generate the counterpart Scala / Java 
> implementation of the eval function.
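The reflective idea behind such automatic resolution can be sketched with 
plain `java.lang.reflect`: inspect the `eval` method's signature to recover 
the composite parameter and result types. The class and the `Map<String, 
Integer>` signature below are assumed for illustration, not the Table API's 
actual resolution code:

```java
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Map;

public class UdfTypeSketch {
    // Example UDF-style method with a composite parameter and an array result.
    public String[] eval(Map<String, Integer> mapArg) {
        return new String[0];
    }

    public static void main(String[] args) throws Exception {
        Method eval = UdfTypeSketch.class.getMethod("eval", Map.class);

        // Result type: String[]; an array type info would be derived from
        // the component type.
        System.out.println("result component: " + eval.getReturnType().getComponentType());

        // Parameter type: the generic arguments of Map<String, Integer>
        // survive erasure in the method signature and can be read back.
        ParameterizedType mapType = (ParameterizedType) eval.getGenericParameterTypes()[0];
        for (Type arg : mapType.getActualTypeArguments()) {
            System.out.println("map type argument: " + arg.getTypeName());
        }
    }
}
```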





[jira] [Assigned] (FLINK-9825) Upgrade checkstyle version to 8.6

2018-07-11 Thread zhangminglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangminglei reassigned FLINK-9825:
---

Assignee: zhangminglei

> Upgrade checkstyle version to 8.6
> -
>
> Key: FLINK-9825
> URL: https://issues.apache.org/jira/browse/FLINK-9825
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: zhangminglei
>Priority: Minor
>
> We should upgrade checkstyle version to 8.6+ so that we can use the "match 
> violation message to this regex" feature for suppression. 





[jira] [Created] (FLINK-9825) Upgrade checkstyle version to 8.6

2018-07-11 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9825:
-

 Summary: Upgrade checkstyle version to 8.6
 Key: FLINK-9825
 URL: https://issues.apache.org/jira/browse/FLINK-9825
 Project: Flink
  Issue Type: Improvement
Reporter: Ted Yu


We should upgrade checkstyle version to 8.6+ so that we can use the "match 
violation message to this regex" feature for suppression. 





[GitHub] flink issue #6301: [FLINK-9794] [jdbc] JDBCOutputFormat does not consider id...

2018-07-11 Thread yanghua
Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/6301
  
@sihuazhou is right; reviewed, +1 from my side


---


[jira] [Commented] (FLINK-9794) JDBCOutputFormat does not consider idle connection and multithreads synchronization

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541077#comment-16541077
 ] 

ASF GitHub Bot commented on FLINK-9794:
---

Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/6301
  
@sihuazhou is right; reviewed, +1 from my side


> JDBCOutputFormat does not consider idle connection and multithreads 
> synchronization
> ---
>
> Key: FLINK-9794
> URL: https://issues.apache.org/jira/browse/FLINK-9794
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Affects Versions: 1.4.0, 1.5.0
>Reporter: wangsan
>Priority: Major
>  Labels: pull-request-available
>
> The current implementation of JDBCOutputFormat has two potential problems: 
> 1. The Connection is established when the JDBCOutputFormat is opened and is 
> used for its whole lifetime. But if this connection lies idle for a long 
> time, the database will force-close it, so errors may occur.
> 2. The flush() method is called when batchCount exceeds the threshold, but 
> it is also called while snapshotting state. So two threads may modify upload 
> and batchCount without synchronization.
> We need to fix these two problems to make JDBCOutputFormat more reliable.
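The two fixes the issue suggests can be sketched as follows. This is a hedged 
sketch with assumed names, not the actual JDBCOutputFormat code: re-validate 
an idle JDBC connection before reuse, and put one lock around the code paths 
that touch the batch counter:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class JdbcSinkSketch {
    private final String url;
    private Connection connection;
    private int batchCount;
    private final int batchInterval = 100;

    JdbcSinkSketch(String url) {
        this.url = url;
    }

    // Problem 1: the connection may have been closed server-side after
    // sitting idle, so check validity and reconnect instead of reusing it blindly.
    private Connection getConnection() throws SQLException {
        if (connection == null || !connection.isValid(5)) {
            connection = DriverManager.getConnection(url);
        }
        return connection;
    }

    // Problem 2: flush() is reached both when the batch threshold is hit and
    // while snapshotting state, so guard the shared counter with one lock.
    synchronized void flush() throws SQLException {
        if (batchCount > 0) {
            // executeBatch() on the prepared statement would go here.
            batchCount = 0;
        }
    }

    synchronized void addToBatch() throws SQLException {
        batchCount++;
        if (batchCount >= batchInterval) {
            flush();
        }
    }

    synchronized int pendingCount() {
        return batchCount;
    }

    public static void main(String[] args) throws SQLException {
        JdbcSinkSketch sink = new JdbcSinkSketch("jdbc:example://unused");
        for (int i = 0; i < 100; i++) {
            sink.addToBatch();
        }
        System.out.println("pending after threshold flush: " + sink.pendingCount());
    }
}
```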





[jira] [Commented] (FLINK-9514) Create wrapper with TTL logic for value state

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541073#comment-16541073
 ] 

ASF GitHub Bot commented on FLINK-9514:
---

Github user Aitozi commented on the issue:

https://github.com/apache/flink/pull/6186
  
Hi, after reading the whole implementation, I found that state is only expired 
when it is accessed. When stale data is stored in state and never queried, how 
can it be expired? Or is there ongoing work on this? @azagrebin 


> Create wrapper with TTL logic for value state
> -
>
> Key: FLINK-9514
> URL: https://issues.apache.org/jira/browse/FLINK-9514
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.6.0
>Reporter: Andrey Zagrebin
>Assignee: Andrey Zagrebin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> The TTL state decorator wraps the original state with a packed TTL value and 
> adds the TTL logic using a time provider:
> {code:java}
> TtlValueState<V> implements ValueState<V> {
>   ValueState<TtlValue<V>> underlyingState;
>   InternalTimeService timeProvider;
>   V value() {
>     TtlValue<V> valueWithTtl = underlyingState.get();
>     // ttl logic here (e.g. update timestamp)
>     return valueWithTtl.getValue();
>   }
>   void update() { ... underlyingState.update(valueWithTtl) ...  }
> }
> {code}
> The TTL decorators are applied to state produced by the normal state binder 
> in its TTL wrapper from FLINK-9513.
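The wrapping idea above can be made concrete with a minimal, self-contained 
sketch (assumed names, not Flink's actual API): the stored value is packed 
with its last-access timestamp, and reads check expiry against a pluggable 
time provider:

```java
import java.util.function.LongSupplier;

public class TtlSketch {
    // The value packed together with its last-access timestamp.
    static final class TtlValue<V> {
        final V value;
        final long lastAccessTs;
        TtlValue(V value, long lastAccessTs) {
            this.value = value;
            this.lastAccessTs = lastAccessTs;
        }
    }

    static final class TtlValueState<V> {
        private TtlValue<V> underlying; // stands in for the wrapped ValueState
        private final long ttlMillis;
        private final LongSupplier timeProvider;

        TtlValueState(long ttlMillis, LongSupplier timeProvider) {
            this.ttlMillis = ttlMillis;
            this.timeProvider = timeProvider;
        }

        void update(V value) {
            // Writes refresh the timestamp.
            underlying = new TtlValue<>(value, timeProvider.getAsLong());
        }

        V value() {
            TtlValue<V> withTtl = underlying;
            if (withTtl == null
                    || timeProvider.getAsLong() - withTtl.lastAccessTs >= ttlMillis) {
                return null; // expired or never set: behave as if absent
            }
            return withTtl.value;
        }
    }

    public static void main(String[] args) {
        long[] now = {0L};
        TtlValueState<String> state = new TtlValueState<>(100, () -> now[0]);
        state.update("hello");
        System.out.println(state.value()); // within TTL
        now[0] = 200;                      // advance the clock past the TTL
        System.out.println(state.value()); // expired
    }
}
```

Note this also illustrates the concern raised in the comment above: expiry is 
only observed on access; an entry that is never read again needs a separate 
cleanup path.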





[GitHub] flink issue #6186: [FLINK-9514,FLINK-9515,FLINK-9516] State ttl wrappers

2018-07-11 Thread Aitozi
Github user Aitozi commented on the issue:

https://github.com/apache/flink/pull/6186
  
Hi, after reading the whole implementation, I found that state is only expired 
when it is accessed. When stale data is stored in state and never queried, how 
can it be expired? Or is there ongoing work on this? @azagrebin 


---


[jira] [Commented] (FLINK-9794) JDBCOutputFormat does not consider idle connection and multithreads synchronization

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541068#comment-16541068
 ] 

ASF GitHub Bot commented on FLINK-9794:
---

Github user jrthe42 commented on the issue:

https://github.com/apache/flink/pull/6301
  
Hi @sihuazhou , I was not very familiar with the checkpoint mechanism of 
Flink, so I checked the source code again. 

Although ```RichSinkFunction#invoke()``` and 
```RichSinkFunction#snapshotState()``` are not executed in the same thread, 
there is already a synchronization mechanism in ```StreamTask```. 
```StreamTask``` uses a **checkpoint lock object** to make sure they won't be 
called concurrently. Check ```StreamTask#performCheckpoint()``` and 
```StreamInputProcessor#processInput()``` if you want to know more.

Thanks for your comment. I removed the synchronization here, and this PR is 
updated. cc @yanghua 
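The checkpoint-lock pattern described above can be sketched minimally (assumed 
names, not the actual StreamTask code): the task thread and the checkpointing 
machinery synchronize on one shared lock object, so record processing and 
state snapshotting never interleave:

```java
public class CheckpointLockSketch {
    private final Object checkpointLock = new Object();
    private int batchCount;

    // Called by the task thread for each record.
    void invoke(int record) {
        synchronized (checkpointLock) {
            batchCount++;
        }
    }

    // Called by the checkpointing machinery; it holds the same lock, so it
    // cannot observe (or corrupt) a half-updated batch counter.
    int snapshotState() {
        synchronized (checkpointLock) {
            int snapshot = batchCount;
            batchCount = 0; // flush-like behavior, safely under the lock
            return snapshot;
        }
    }

    public static void main(String[] args) {
        CheckpointLockSketch task = new CheckpointLockSketch();
        task.invoke(1);
        task.invoke(2);
        System.out.println("snapshotted: " + task.snapshotState());
    }
}
```

Because both paths take the same lock, the sink itself needs no extra 
synchronization, which is why the PR could drop it.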




> JDBCOutputFormat does not consider idle connection and multithreads 
> synchronization
> ---
>
> Key: FLINK-9794
> URL: https://issues.apache.org/jira/browse/FLINK-9794
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Affects Versions: 1.4.0, 1.5.0
>Reporter: wangsan
>Priority: Major
>  Labels: pull-request-available
>
> The current implementation of JDBCOutputFormat has two potential problems: 
> 1. The Connection is established when the JDBCOutputFormat is opened and is 
> used for its whole lifetime. But if this connection lies idle for a long 
> time, the database will force-close it, so errors may occur.
> 2. The flush() method is called when batchCount exceeds the threshold, but 
> it is also called while snapshotting state. So two threads may modify upload 
> and batchCount without synchronization.
> We need to fix these two problems to make JDBCOutputFormat more reliable.





[GitHub] flink issue #6301: [FLINK-9794] [jdbc] JDBCOutputFormat does not consider id...

2018-07-11 Thread jrthe42
Github user jrthe42 commented on the issue:

https://github.com/apache/flink/pull/6301
  
Hi @sihuazhou , I was not very familiar with the checkpoint mechanism of 
Flink, so I checked the source code again. 

Although ```RichSinkFunction#invoke()``` and 
```RichSinkFunction#snapshotState()``` are not executed in the same thread, 
there is already a synchronization mechanism in ```StreamTask```. 
```StreamTask``` uses a **checkpoint lock object** to make sure they won't be 
called concurrently. Check ```StreamTask#performCheckpoint()``` and 
```StreamInputProcessor#processInput()``` if you want to know more.

Thanks for your comment. I removed the synchronization here, and this PR is 
updated. cc @yanghua 




---


[jira] [Commented] (FLINK-9514) Create wrapper with TTL logic for value state

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541061#comment-16541061
 ] 

ASF GitHub Bot commented on FLINK-9514:
---

Github user Aitozi commented on a diff in the pull request:

https://github.com/apache/flink/pull/6186#discussion_r201895445
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/state/ttl/TtlListState.java ---
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.state.ttl;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.runtime.state.internal.InternalListState;
+import org.apache.flink.util.Preconditions;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.stream.Collectors;
+import java.util.stream.StreamSupport;
+
+/**
+ * This class wraps list state with TTL logic.
+ *
+ * @param <K> The type of key the state is associated to
+ * @param <N> The type of the namespace
+ * @param <T> Type of the user entry value of state with TTL
+ */
+class TtlListState<K, N, T> extends
+	AbstractTtlState<K, N, List<T>, List<TtlValue<T>>, InternalListState<K, N, TtlValue<T>>>
+	implements InternalListState<K, N, T> {
+	TtlListState(
+		InternalListState<K, N, TtlValue<T>> originalState,
+		TtlConfig config,
+		TtlTimeProvider timeProvider,
+		TypeSerializer<List<T>> valueSerializer) {
+		super(originalState, config, timeProvider, valueSerializer);
+	}
+
+	@Override
+	public void update(List<T> values) throws Exception {
+		updateInternal(values);
+	}
+
+	@Override
+	public void addAll(List<T> values) throws Exception {
+		Preconditions.checkNotNull(values, "List of values to add cannot be null.");
+		original.addAll(withTs(values));
+	}
+
+	@Override
+	public Iterable<T> get() throws Exception {
+		Iterable<TtlValue<T>> ttlValue = original.get();
+		ttlValue = ttlValue == null ? Collections.emptyList() : ttlValue;
+		if (updateTsOnRead) {
+			List<TtlValue<T>> collected = collect(ttlValue);
+			ttlValue = collected;
+			updateTs(collected);
+		}
+		final Iterable<TtlValue<T>> finalResult = ttlValue;
+		return () -> new IteratorWithCleanup(finalResult.iterator());
+	}
+
+	private void updateTs(List<TtlValue<T>> ttlValue) throws Exception {
+		List<TtlValue<T>> unexpiredWithUpdatedTs = ttlValue.stream()
+			.filter(v -> !expired(v))
+			.map(this::rewrapWithNewTs)
+			.collect(Collectors.toList());
+		if (!unexpiredWithUpdatedTs.isEmpty()) {
+			original.update(unexpiredWithUpdatedTs);
+		}
+	}
+
+	@Override
+	public void add(T value) throws Exception {
+		Preconditions.checkNotNull(value, "You cannot add null to a ListState.");
+		original.add(wrapWithTs(value));
+	}
+
+	@Override
+	public void clear() {
+		original.clear();
+	}
+
+	@Override
+	public void mergeNamespaces(N target, Collection<N> sources) throws Exception {
+		original.mergeNamespaces(target, sources);
+	}
+
+	@Override
+	public List<T> getInternal() throws Exception {
+		return collect(get());
+	}
+
+	private <E> List<E> collect(Iterable<E> iterable) {
--- End diff --

Hi @azagrebin , I have a small doubt about what you said:

> return Iterable and avoid querying backend if not needed

But when dealing with ListState, `original.get()` has already queried the 
original `Iterable` from RocksDB, hasn't it? Does this approach only query the 
iterable elements lazily in memory?


> Create wrapper with TTL logic for value state

[GitHub] flink pull request #6186: [FLINK-9514,FLINK-9515,FLINK-9516] State ttl wrapp...

2018-07-11 Thread Aitozi
Github user Aitozi commented on a diff in the pull request:

https://github.com/apache/flink/pull/6186#discussion_r201895445
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/state/ttl/TtlListState.java ---
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.state.ttl;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.runtime.state.internal.InternalListState;
+import org.apache.flink.util.Preconditions;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.stream.Collectors;
+import java.util.stream.StreamSupport;
+
+/**
+ * This class wraps list state with TTL logic.
+ *
+ * @param <K> The type of key the state is associated to
+ * @param <N> The type of the namespace
+ * @param <T> Type of the user entry value of state with TTL
+ */
+class TtlListState<K, N, T> extends
+	AbstractTtlState<K, N, List<T>, List<TtlValue<T>>, InternalListState<K, N, TtlValue<T>>>
+	implements InternalListState<K, N, T> {
+	TtlListState(
+		InternalListState<K, N, TtlValue<T>> originalState,
+		TtlConfig config,
+		TtlTimeProvider timeProvider,
+		TypeSerializer<List<T>> valueSerializer) {
+		super(originalState, config, timeProvider, valueSerializer);
+	}
+
+	@Override
+	public void update(List<T> values) throws Exception {
+		updateInternal(values);
+	}
+
+	@Override
+	public void addAll(List<T> values) throws Exception {
+		Preconditions.checkNotNull(values, "List of values to add cannot be null.");
+		original.addAll(withTs(values));
+	}
+
+	@Override
+	public Iterable<T> get() throws Exception {
+		Iterable<TtlValue<T>> ttlValue = original.get();
+		ttlValue = ttlValue == null ? Collections.emptyList() : ttlValue;
+		if (updateTsOnRead) {
+			List<TtlValue<T>> collected = collect(ttlValue);
+			ttlValue = collected;
+			updateTs(collected);
+		}
+		final Iterable<TtlValue<T>> finalResult = ttlValue;
+		return () -> new IteratorWithCleanup(finalResult.iterator());
+	}
+
+	private void updateTs(List<TtlValue<T>> ttlValue) throws Exception {
+		List<TtlValue<T>> unexpiredWithUpdatedTs = ttlValue.stream()
+			.filter(v -> !expired(v))
+			.map(this::rewrapWithNewTs)
+			.collect(Collectors.toList());
+		if (!unexpiredWithUpdatedTs.isEmpty()) {
+			original.update(unexpiredWithUpdatedTs);
+		}
+	}
+
+	@Override
+	public void add(T value) throws Exception {
+		Preconditions.checkNotNull(value, "You cannot add null to a ListState.");
+		original.add(wrapWithTs(value));
+	}
+
+	@Override
+	public void clear() {
+		original.clear();
+	}
+
+	@Override
+	public void mergeNamespaces(N target, Collection<N> sources) throws Exception {
+		original.mergeNamespaces(target, sources);
+	}
+
+	@Override
+	public List<T> getInternal() throws Exception {
+		return collect(get());
+	}
+
+	private <E> List<E> collect(Iterable<E> iterable) {
--- End diff --

Hi @azagrebin , I have a small doubt about what you said:

> return Iterable and avoid querying backend if not needed

But when dealing with ListState, `original.get()` has already queried the 
original `Iterable` from RocksDB, hasn't it? Does this approach only query the 
iterable elements lazily in memory?


---


[jira] [Assigned] (FLINK-9824) Support IPv6 literal

2018-07-11 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-9824:
---

Assignee: vinoyang

> Support IPv6 literal
> 
>
> Key: FLINK-9824
> URL: https://issues.apache.org/jira/browse/FLINK-9824
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Currently we use a colon as the separator when parsing host and port.
> We should support IPv6 literals in parsing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9824) Support IPv6 literal

2018-07-11 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9824:
-

 Summary: Support IPv6 literal
 Key: FLINK-9824
 URL: https://issues.apache.org/jira/browse/FLINK-9824
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


Currently we use a colon as the separator when parsing host and port.

We should support IPv6 literals in parsing.
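One common way to support IPv6 literals in a `host:port` string is the bracket notation used in RFC 3986 URIs, so that the many colons inside the address are not mistaken for the port separator. The following is a minimal sketch under that assumption; the class and method names are illustrative and are not Flink API:

```java
import java.net.InetSocketAddress;

public class HostPortParser {

    // Hypothetical parser: accepts "host:port", "1.2.3.4:port",
    // and bracketed IPv6 literals such as "[2001:db8::1]:6123".
    public static InetSocketAddress parseHostPort(String address) {
        final String host;
        final int portStart;
        if (address.startsWith("[")) {
            // IPv6 literal: the host is everything inside the brackets.
            int end = address.indexOf(']');
            if (end < 0 || end + 1 >= address.length() || address.charAt(end + 1) != ':') {
                throw new IllegalArgumentException("Invalid bracketed address: " + address);
            }
            host = address.substring(1, end);
            portStart = end + 2;
        } else {
            // Hostname or IPv4: split on the last colon so "host:port" still works.
            int colon = address.lastIndexOf(':');
            if (colon < 0) {
                throw new IllegalArgumentException("Missing port in: " + address);
            }
            host = address.substring(0, colon);
            portStart = colon + 1;
        }
        int port = Integer.parseInt(address.substring(portStart));
        // createUnresolved avoids a DNS lookup during parsing.
        return InetSocketAddress.createUnresolved(host, port);
    }
}
```

For example, `parseHostPort("[2001:db8::1]:6123")` yields host `2001:db8::1` and port `6123`, while `parseHostPort("localhost:8081")` keeps its current behavior.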



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9814) CsvTableSource lack of column warning

2018-07-11 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-9814:
---

Assignee: vinoyang

> CsvTableSource lack of column warning
> -
>
> Key: FLINK-9814
> URL: https://issues.apache.org/jira/browse/FLINK-9814
> Project: Flink
>  Issue Type: Wish
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: François Lacombe
>Assignee: vinoyang
>Priority: Minor
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The CsvTableSource class is built by defining the expected columns to be found in 
> the corresponding CSV file.
>  
> It would be great to throw an Exception when the CSV file doesn't have the 
> same structure as defined in the source.
> It can be easily checked against the file header, if one exists.
> Is this possible?
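A header check of the kind requested above could look like the following sketch. The class name, exception type, and delimiter handling are illustrative assumptions, not the actual `CsvTableSource` API:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.Arrays;
import java.util.List;

public class CsvHeaderValidator {

    // Hypothetical check: read the first line of the CSV input and compare it
    // against the declared column names, failing fast on any mismatch.
    public static void validateHeader(Reader csv, List<String> expectedColumns, String delimiter)
            throws IOException {
        try (BufferedReader reader = new BufferedReader(csv)) {
            String header = reader.readLine();
            if (header == null) {
                throw new IllegalStateException("CSV file is empty, no header to check.");
            }
            // -1 keeps trailing empty columns so a short header is detected.
            List<String> actual = Arrays.asList(header.split(delimiter, -1));
            if (!actual.equals(expectedColumns)) {
                throw new IllegalStateException(
                    "CSV header " + actual + " does not match declared columns " + expectedColumns);
            }
        }
    }
}
```

One caveat the issue itself hints at: the check can only run when the file actually carries a header row, so it would have to be opt-in for headerless files.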



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9806) Add a canonical link element to documentation HTML

2018-07-11 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-9806:
---

Assignee: vinoyang

> Add a canonical link element to documentation HTML
> --
>
> Key: FLINK-9806
> URL: https://issues.apache.org/jira/browse/FLINK-9806
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.5.0
>Reporter: Patrick Lucas
>Assignee: vinoyang
>Priority: Major
>
> Flink has suffered for a while with non-optimal SEO for its documentation, 
> meaning a web search for a topic covered in the documentation often produces 
> results for many versions of Flink, even preferring older versions since 
> those pages have been around for longer.
> Using a canonical link element (see references) may alleviate this by 
> informing search engines about where to find the latest documentation (i.e. 
> pages hosted under [https://ci.apache.org/projects/flink/flink-docs-master/).]
> I think this is at least worth experimenting with, and if it doesn't cause 
> problems, even backporting it to the older release branches to eventually 
> clean up the Flink docs' SEO and converge on advertising only the latest docs 
> (unless a specific version is specified).
> References:
>  * [https://moz.com/learn/seo/canonicalization]
>  * [https://yoast.com/rel-canonical/]
>  * [https://support.google.com/webmasters/answer/139066?hl=en]
>  * [https://en.wikipedia.org/wiki/Canonical_link_element]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (FLINK-9439) DispatcherTest#testJobRecovery dead locks

2018-07-11 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-9439.
--
Resolution: Fixed

Fixed via 
1.6.0: 3c4e59a7f78deeaccf41022e92699e1ef7510cc3
1.5.2: ddf550aa829a4d3a82b0faad3c525947864a0900

> DispatcherTest#testJobRecovery dead locks
> -
>
> Key: FLINK-9439
> URL: https://issues.apache.org/jira/browse/FLINK-9439
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Piotr Nowojski
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.5.2, 1.6.0
>
>
> Rarely DispatcherTest#testJobRecovery dead locks on master. Example deadlock 
> in my pull request:
> [https://api.travis-ci.org/v3/job/383147736/log.txt]
> afterwards I have stripped down `flink-runtime`, looped on travis this test 
> and it also dead locks on clean master branch:
> [https://github.com/pnowojski/flink/commits/loop-runtime-master]
> (note, that looped versions sometimes also fails with an exception from 
> setUp: {{ akka.actor.InvalidActorNameException: actor name 
> [dispatcher_testJobRecovery] is not unique! }} but this might be unrelated).
> Example failed build:
> [https://travis-ci.org/pnowojski/flink/builds/383650106]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (FLINK-9706) DispatcherTest#testSubmittedJobGraphListener fails on Travis

2018-07-11 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-9706.
--
Resolution: Fixed

Fixed via 
1.6.0: 3c4e59a7f78deeaccf41022e92699e1ef7510cc3
1.5.2: ddf550aa829a4d3a82b0faad3c525947864a0900

> DispatcherTest#testSubmittedJobGraphListener fails on Travis
> 
>
> Key: FLINK-9706
> URL: https://issues.apache.org/jira/browse/FLINK-9706
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.5.2, 1.6.0
>
>
> https://travis-ci.org/apache/flink/jobs/399331775
> {code:java}
> testSubmittedJobGraphListener(org.apache.flink.runtime.dispatcher.DispatcherTest)
>   Time elapsed: 0.103 sec  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: a collection with size <1>
>  but: collection size was <0>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.flink.runtime.dispatcher.DispatcherTest.testSubmittedJobGraphListener(DispatcherTest.java:294)
> testSubmittedJobGraphListener(org.apache.flink.runtime.dispatcher.DispatcherTest)
>   Time elapsed: 0.11 sec  <<< ERROR!
> org.apache.flink.runtime.util.TestingFatalErrorHandler$TestingException: 
> org.apache.flink.runtime.dispatcher.DispatcherException: Could not start the 
> added job b8ab3b7fa8a929bf608a5b65896a2b17
>   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandler.rethrowError(TestingFatalErrorHandler.java:51)
>   at 
> org.apache.flink.runtime.dispatcher.DispatcherTest.tearDown(DispatcherTest.java:219)
> Caused by: org.apache.flink.runtime.dispatcher.DispatcherException: Could not 
> start the added job b8ab3b7fa8a929bf608a5b65896a2b17
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$onAddedJobGraph$28(Dispatcher.java:845)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>   at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>   at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.util.FlinkException: Failed to submit job 
> b8ab3b7fa8a929bf608a5b65896a2b17.
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.submitJob(Dispatcher.java:254)
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$onAddedJobGraph$27(Dispatcher.java:836)
>   at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)

[jira] [Commented] (FLINK-9706) DispatcherTest#testSubmittedJobGraphListener fails on Travis

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540886#comment-16540886
 ] 

ASF GitHub Bot commented on FLINK-9706:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6279


> DispatcherTest#testSubmittedJobGraphListener fails on Travis
> 
>
> Key: FLINK-9706
> URL: https://issues.apache.org/jira/browse/FLINK-9706
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.5.2, 1.6.0
>
>
> https://travis-ci.org/apache/flink/jobs/399331775
> {code:java}
> testSubmittedJobGraphListener(org.apache.flink.runtime.dispatcher.DispatcherTest)
>   Time elapsed: 0.103 sec  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: a collection with size <1>
>  but: collection size was <0>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.flink.runtime.dispatcher.DispatcherTest.testSubmittedJobGraphListener(DispatcherTest.java:294)
> testSubmittedJobGraphListener(org.apache.flink.runtime.dispatcher.DispatcherTest)
>   Time elapsed: 0.11 sec  <<< ERROR!
> org.apache.flink.runtime.util.TestingFatalErrorHandler$TestingException: 
> org.apache.flink.runtime.dispatcher.DispatcherException: Could not start the 
> added job b8ab3b7fa8a929bf608a5b65896a2b17
>   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandler.rethrowError(TestingFatalErrorHandler.java:51)
>   at 
> org.apache.flink.runtime.dispatcher.DispatcherTest.tearDown(DispatcherTest.java:219)
> Caused by: org.apache.flink.runtime.dispatcher.DispatcherException: Could not 
> start the added job b8ab3b7fa8a929bf608a5b65896a2b17
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$onAddedJobGraph$28(Dispatcher.java:845)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>   at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>   at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.util.FlinkException: Failed to submit job 
> b8ab3b7fa8a929bf608a5b65896a2b17.
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.submitJob(Dispatcher.java:254)
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$onAddedJobGraph$27(Dispatcher.java:836)
>   at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> 

[GitHub] flink pull request #6279: [FLINK-9706] Properly wait for termination of JobM...

2018-07-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6279


---


[jira] [Commented] (FLINK-9706) DispatcherTest#testSubmittedJobGraphListener fails on Travis

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540875#comment-16540875
 ] 

ASF GitHub Bot commented on FLINK-9706:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/6279#discussion_r201869163
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java ---
@@ -536,7 +540,25 @@ private JobManagerRunner createJobManagerRunner(JobGraph jobGraph) throws Except
	private void removeJobAndRegisterTerminationFuture(JobID jobId, boolean cleanupHA) {
		final CompletableFuture<Void> cleanupFuture = removeJob(jobId, cleanupHA);
 
-		registerOrphanedJobManagerTerminationFuture(cleanupFuture);
+		registerJobManagerRunnerTerminationFuture(jobId, cleanupFuture);
	}
+
+	private void registerJobManagerRunnerTerminationFuture(JobID jobId, CompletableFuture<Void> jobManagerRunnerTerminationFuture) {
+		Preconditions.checkState(!jobManagerTerminationFutures.containsKey(jobId));
+
+		jobManagerTerminationFutures.put(jobId, jobManagerRunnerTerminationFuture);
+
+		// clean up the pending termination future
+		jobManagerRunnerTerminationFuture.thenRunAsync(
+			() -> {
+				final CompletableFuture<Void> terminationFuture = jobManagerTerminationFutures.remove(jobId);
+
+				//noinspection ObjectEquality
+				if (terminationFuture != null && terminationFuture != jobManagerRunnerTerminationFuture) {
+					jobManagerTerminationFutures.put(jobId, terminationFuture);
--- End diff --

It can happen because we also clear the termination future in the callback 
of the `Dispatcher#waitForTerminatingJobManager` method.
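The register-and-clean-up pattern discussed in this diff can be sketched in isolation as follows. This is a simplified illustration with hypothetical names, not the Dispatcher's actual code; it uses `ConcurrentHashMap#remove(key, value)` to express the conditional removal that the diff implements with an explicit identity check:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class TerminationTracker {

    private final Map<String, CompletableFuture<Void>> terminationFutures = new ConcurrentHashMap<>();

    // Track a per-job termination future and drop the map entry once it
    // completes -- but only if the entry still refers to this future, since a
    // newer future may have been registered under the same job id meanwhile.
    public void register(String jobId, CompletableFuture<Void> terminationFuture) {
        terminationFutures.put(jobId, terminationFuture);
        terminationFuture.thenRun(
            () -> terminationFutures.remove(jobId, terminationFuture));
    }

    public boolean isTracking(String jobId) {
        return terminationFutures.containsKey(jobId);
    }
}
```

The two-argument `remove(key, value)` only removes the mapping when it currently maps to exactly that value, which avoids the remove-then-put-back dance of the original callback while preserving its semantics.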


> DispatcherTest#testSubmittedJobGraphListener fails on Travis
> 
>
> Key: FLINK-9706
> URL: https://issues.apache.org/jira/browse/FLINK-9706
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.5.2, 1.6.0
>
>
> https://travis-ci.org/apache/flink/jobs/399331775
> {code:java}
> testSubmittedJobGraphListener(org.apache.flink.runtime.dispatcher.DispatcherTest)
>   Time elapsed: 0.103 sec  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: a collection with size <1>
>  but: collection size was <0>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.flink.runtime.dispatcher.DispatcherTest.testSubmittedJobGraphListener(DispatcherTest.java:294)
> testSubmittedJobGraphListener(org.apache.flink.runtime.dispatcher.DispatcherTest)
>   Time elapsed: 0.11 sec  <<< ERROR!
> org.apache.flink.runtime.util.TestingFatalErrorHandler$TestingException: 
> org.apache.flink.runtime.dispatcher.DispatcherException: Could not start the 
> added job b8ab3b7fa8a929bf608a5b65896a2b17
>   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandler.rethrowError(TestingFatalErrorHandler.java:51)
>   at 
> org.apache.flink.runtime.dispatcher.DispatcherTest.tearDown(DispatcherTest.java:219)
> Caused by: org.apache.flink.runtime.dispatcher.DispatcherException: Could not 
> start the added job b8ab3b7fa8a929bf608a5b65896a2b17
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$onAddedJobGraph$28(Dispatcher.java:845)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
>   at 
> 

[GitHub] flink issue #6279: [FLINK-9706] Properly wait for termination of JobManagerR...

2018-07-11 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/6279
  
Thanks for the review @GJL. Merging this PR.


---


[GitHub] flink pull request #6279: [FLINK-9706] Properly wait for termination of JobM...

2018-07-11 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/6279#discussion_r201869163
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java ---
@@ -536,7 +540,25 @@ private JobManagerRunner createJobManagerRunner(JobGraph jobGraph) throws Except
	private void removeJobAndRegisterTerminationFuture(JobID jobId, boolean cleanupHA) {
		final CompletableFuture<Void> cleanupFuture = removeJob(jobId, cleanupHA);
 
-		registerOrphanedJobManagerTerminationFuture(cleanupFuture);
+		registerJobManagerRunnerTerminationFuture(jobId, cleanupFuture);
	}
+
+	private void registerJobManagerRunnerTerminationFuture(JobID jobId, CompletableFuture<Void> jobManagerRunnerTerminationFuture) {
+		Preconditions.checkState(!jobManagerTerminationFutures.containsKey(jobId));
+
+		jobManagerTerminationFutures.put(jobId, jobManagerRunnerTerminationFuture);
+
+		// clean up the pending termination future
+		jobManagerRunnerTerminationFuture.thenRunAsync(
+			() -> {
+				final CompletableFuture<Void> terminationFuture = jobManagerTerminationFutures.remove(jobId);
+
+				//noinspection ObjectEquality
+				if (terminationFuture != null && terminationFuture != jobManagerRunnerTerminationFuture) {
+					jobManagerTerminationFutures.put(jobId, terminationFuture);
--- End diff --

It can happen because we also clear the termination future in the callback 
of the `Dispatcher#waitForTerminatingJobManager` method.


---


[jira] [Commented] (FLINK-9706) DispatcherTest#testSubmittedJobGraphListener fails on Travis

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540877#comment-16540877
 ] 

ASF GitHub Bot commented on FLINK-9706:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/6279
  
Thanks for the review @GJL. Merging this PR.


> DispatcherTest#testSubmittedJobGraphListener fails on Travis
> 
>
> Key: FLINK-9706
> URL: https://issues.apache.org/jira/browse/FLINK-9706
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.5.2, 1.6.0
>
>
> https://travis-ci.org/apache/flink/jobs/399331775
> {code:java}
> testSubmittedJobGraphListener(org.apache.flink.runtime.dispatcher.DispatcherTest)
>   Time elapsed: 0.103 sec  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: a collection with size <1>
>  but: collection size was <0>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.flink.runtime.dispatcher.DispatcherTest.testSubmittedJobGraphListener(DispatcherTest.java:294)
> testSubmittedJobGraphListener(org.apache.flink.runtime.dispatcher.DispatcherTest)
>   Time elapsed: 0.11 sec  <<< ERROR!
> org.apache.flink.runtime.util.TestingFatalErrorHandler$TestingException: 
> org.apache.flink.runtime.dispatcher.DispatcherException: Could not start the added job b8ab3b7fa8a929bf608a5b65896a2b17
>   at org.apache.flink.runtime.util.TestingFatalErrorHandler.rethrowError(TestingFatalErrorHandler.java:51)
>   at org.apache.flink.runtime.dispatcher.DispatcherTest.tearDown(DispatcherTest.java:219)
> Caused by: org.apache.flink.runtime.dispatcher.DispatcherException: Could not start the added job b8ab3b7fa8a929bf608a5b65896a2b17
>   at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$onAddedJobGraph$28(Dispatcher.java:845)
>   at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
>   at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
>   at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
>   at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
>   at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>   at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>   at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.util.FlinkException: Failed to submit job b8ab3b7fa8a929bf608a5b65896a2b17.
>   at org.apache.flink.runtime.dispatcher.Dispatcher.submitJob(Dispatcher.java:254)
>   at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$onAddedJobGraph$27(Dispatcher.java:836)
>   at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
>   at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
>   at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 

[jira] [Updated] (FLINK-9823) Add Kubernetes deployment files for standalone job cluster

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9823:
--
Labels: pull-request-available  (was: )

> Add Kubernetes deployment files for standalone job cluster
> --
>
> Key: FLINK-9823
> URL: https://issues.apache.org/jira/browse/FLINK-9823
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Similar to FLINK-9822, it would be helpful for the user to have example 
> Kubernetes deployment files to start a standalone job cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9823) Add Kubernetes deployment files for standalone job cluster

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540869#comment-16540869
 ] 

ASF GitHub Bot commented on FLINK-9823:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6320

[FLINK-9823] Add Kubernetes deployment ymls

## What is the purpose of the change

The Kubernetes files contain a job-cluster service specification, a job 
specification
for the StandaloneJobClusterEntryPoint and a deployment for TaskManagers.

This PR is based on #6319.

cc @GJL 
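
[Archive editor's note: the PR's actual files are not reproduced in this thread. A minimal sketch of what a job-cluster service plus TaskManager deployment of this shape could look like follows; the image name, labels, and ports are illustrative placeholders, not the PR's real contents.]

```yaml
# Hypothetical sketch only; the real specifications live in the PR.
apiVersion: v1
kind: Service
metadata:
  name: flink-job-cluster
spec:
  selector:
    app: flink
    component: job-cluster
  ports:
    - name: rpc
      port: 6123          # JobManager RPC port (placeholder)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
        - name: taskmanager
          image: flink-job:latest   # placeholder image name
          args: ["taskmanager"]
```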

## Verifying this change

- Tested manually

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (yes)
  - If yes, how is the feature documented? (README)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink containerEntrypoint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6320.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6320


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

commit 3b78e4099de1511bbe52c203fd2d05e5cfa03efa
Author: Till Rohrmann 
Date:   2018-07-10T09:24:26Z

[FLINK-9819] Add startup scripts for standalone job cluster entry point

commit 75fb8125e3ec270994628f46457b281cdb587874
Author: Till Rohrmann 
Date:   2018-07-10T21:23:59Z

[FLINK-9820] Forward dynamic properties to Flink configuration in 
ClusterEntrypoint

With this commit we can use dynamic properties to overwrite configuration 
values in the
ClusterEntrypoint.

commit b38683205961c625e8c99eff1552ef5a8142ee89
Author: Till Rohrmann 
Date:   2018-07-10T21:43:34Z

[FLINK-9821] Forward dynamic properties to configuration in 
TaskManagerRunner

With this commit we can use dynamic properties to overwrite configuration 
values in the
TaskManagerRunner.

commit 339a24fb2508c7f3cd041bc2cf9b15fd62980fcf
Author: Till Rohrmann 
Date:   2018-07-10T13:41:18Z

[FLINK-9822] Add Dockerfile for StandaloneJobClusterEntryPoint image

This commit adds a Dockerfile for a standalone job cluster image. The image
contains the Flink distribution and a specified user code jar. The 
entrypoint
will start the StandaloneJobClusterEntryPoint with the provided job 
classname.

commit c0f8ce88a1e5ce877add31214d9b2674acfbc90f
Author: Till Rohrmann 
Date:   2018-07-10T22:52:08Z

[FLINK-9823] Add Kubernetes deployment ymls

The Kubernetes files contain a job-cluster service specification, a job 
specification
for the StandaloneJobClusterEntryPoint and a deployment for TaskManagers.

commit b32a8f4149fecf953d24bd62af56f8620b360610
Author: Till Rohrmann 
Date:   2018-07-11T14:13:30Z

[hotfix] Support building a job image from a Flink archive

Extend the flink-container/docker/build.sh script to also accept a Flink 
archive to build
the image from. This makes it easier to build an image from one of the 
convenience releases.
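
[Archive editor's note: the class-loading step described in the FLINK-9488 commit message above can be sketched as follows. This is a hypothetical illustration, not Flink's actual entry-point code; the class names and the nested stand-in "user program" are invented for the example.]

```java
import java.lang.reflect.Method;

// Hypothetical sketch of the entry-point idea from FLINK-9488:
// resolve a user program on the classpath by class name and run its main method.
public class EntryPointSketch {

    public static void runUserProgram(String className, String[] args) throws Exception {
        // Load the user program class by its (binary) name.
        Class<?> programClass = Class.forName(className);
        // Look up the standard public static void main(String[]) method.
        Method main = programClass.getMethod("main", String[].class);
        // Invoke it; the Object cast prevents varargs expansion of the array.
        main.invoke(null, (Object) args);
    }

    // Invented stand-in "user program" so the sketch is self-contained.
    public static class UserProgram {
        public static String lastArg;
        public static void main(String[] args) {
            lastArg = args.length > 0 ? args[0] : null;
        }
    }

    public static void main(String[] args) throws Exception {
        runUserProgram("EntryPointSketch$UserProgram", new String[] {"--job", "demo"});
        System.out.println(UserProgram.lastArg);  // prints "--job"
    }
}
```

In the real entry point, the loaded program would build a JobGraph that is handed to the MiniDispatcher rather than setting a field.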




> Add Kubernetes deployment files for standalone job cluster
> --
>
> Key: FLINK-9823
>  


[jira] [Created] (FLINK-9823) Add Kubernetes deployment files for standalone job cluster

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9823:


 Summary: Add Kubernetes deployment files for standalone job cluster
 Key: FLINK-9823
 URL: https://issues.apache.org/jira/browse/FLINK-9823
 Project: Flink
  Issue Type: New Feature
  Components: Kubernetes
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


Similar to FLINK-9822, it would be helpful for the user to have example 
Kubernetes deployment files to start a standalone job cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9822) Add Dockerfile for StandaloneJobClusterEntryPoint image

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540863#comment-16540863
 ] 

ASF GitHub Bot commented on FLINK-9822:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6319

[FLINK-9822] Add Dockerfile for StandaloneJobClusterEntryPoint image

## What is the purpose of the change

This commit adds a Dockerfile for a standalone job cluster image. The image
contains the Flink distribution and a specified user code jar. The 
entrypoint
will start the StandaloneJobClusterEntryPoint with the provided job 
classname.

This PR is based on #6318.

cc @GJL 
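
[Archive editor's note: a minimal sketch of a Dockerfile with this structure follows. It is a hypothetical illustration, not the PR's actual file; the base image, file paths, script name, and example job class are assumptions.]

```dockerfile
# Hypothetical sketch; the real Dockerfile is part of the PR (flink-container).
FROM openjdk:8-jre-alpine

# Bake the Flink distribution and the user-code jar into the image
# (paths are placeholders).
COPY flink /opt/flink
COPY job.jar /opt/flink/lib/job.jar

ENV FLINK_HOME=/opt/flink

# The entrypoint starts the standalone job cluster entry point; the job
# class name is supplied (or overridden) at container start time.
# "standalone-job.sh" and "--job-classname" are assumed names here.
ENTRYPOINT ["/opt/flink/bin/standalone-job.sh"]
CMD ["--job-classname", "com.example.MyJob"]
```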

## Verifying this change

- Manually tested

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (yes)
  - If yes, how is the feature documented? (yes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink standaloneJobDockerfile

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6319.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6319


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

commit 3b78e4099de1511bbe52c203fd2d05e5cfa03efa
Author: Till Rohrmann 
Date:   2018-07-10T09:24:26Z

[FLINK-9819] Add startup scripts for standalone job cluster entry point

commit 75fb8125e3ec270994628f46457b281cdb587874
Author: Till Rohrmann 
Date:   2018-07-10T21:23:59Z

[FLINK-9820] Forward dynamic properties to Flink configuration in 
ClusterEntrypoint

With this commit we can use dynamic properties to overwrite configuration 
values in the
ClusterEntrypoint.

commit b38683205961c625e8c99eff1552ef5a8142ee89
Author: Till Rohrmann 
Date:   2018-07-10T21:43:34Z

[FLINK-9821] Forward dynamic properties to configuration in 
TaskManagerRunner

With this commit we can use dynamic properties to overwrite configuration 
values in the
TaskManagerRunner.

commit 339a24fb2508c7f3cd041bc2cf9b15fd62980fcf
Author: Till Rohrmann 
Date:   2018-07-10T13:41:18Z

[FLINK-9822] Add Dockerfile for StandaloneJobClusterEntryPoint image

This commit adds a Dockerfile for a standalone job cluster image. The image
contains the Flink distribution and a specified user code jar. The 
entrypoint
will start the StandaloneJobClusterEntryPoint with the provided job 
classname.




> Add Dockerfile for StandaloneJobClusterEntryPoint image
> ---
>
> Key: FLINK-9822
> URL: https://issues.apache.org/jira/browse/FLINK-9822
> Project: Flink
>  Issue Type: New Feature
>  Components: Docker
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Add a {{Dockerfile}} to create an image which contains the 
> {{StandaloneJobClusterEntryPoint}} and a specified user code jar. The 
> entrypoint of this image should start the {{StandaloneJobClusterEntryPoint}} 
> 

[jira] [Updated] (FLINK-9822) Add Dockerfile for StandaloneJobClusterEntryPoint image

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9822:
--
Labels: pull-request-available  (was: )

> Add Dockerfile for StandaloneJobClusterEntryPoint image
> ---
>
> Key: FLINK-9822
> URL: https://issues.apache.org/jira/browse/FLINK-9822
> Project: Flink
>  Issue Type: New Feature
>  Components: Docker
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Add a {{Dockerfile}} to create an image which contains the 
> {{StandaloneJobClusterEntryPoint}} and a specified user code jar. The 
> entrypoint of this image should start the {{StandaloneJobClusterEntryPoint}} 
> with the added user code jar. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (FLINK-9822) Add Dockerfile for StandaloneJobClusterEntryPoint image

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9822:


 Summary: Add Dockerfile for StandaloneJobClusterEntryPoint image
 Key: FLINK-9822
 URL: https://issues.apache.org/jira/browse/FLINK-9822
 Project: Flink
  Issue Type: New Feature
  Components: Docker
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


Add a {{Dockerfile}} to create an image which contains the 
{{StandaloneJobClusterEntryPoint}} and a specified user code jar. The 
entrypoint of this image should start the {{StandaloneJobClusterEntryPoint}} 
with the added user code jar. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9821) Let dynamic properties overwrite configuration settings in TaskManagerRunner

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540847#comment-16540847
 ] 

ASF GitHub Bot commented on FLINK-9821:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6318

[FLINK-9821] Forward dynamic properties to configuration in 
TaskManagerRunner

## What is the purpose of the change

With this commit we can use dynamic properties to overwrite configuration 
values in the
`TaskManagerRunner`.

This PR is based on #6317 

cc @GJL 
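
[Archive editor's note: the overwrite semantics described here, where command-line dynamic properties win over values loaded from the configuration file, can be sketched as follows. The method and class names are invented for illustration and are not Flink's actual API.]

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the FLINK-9820/FLINK-9821 idea: "-Dkey=value"
// arguments overwrite entries of the file-based configuration.
public class DynamicPropsSketch {

    public static Map<String, String> applyDynamicProperties(
            Map<String, String> fileConfig, String[] args) {
        // Start from the file-based configuration, then overlay dynamic properties.
        Map<String, String> result = new HashMap<>(fileConfig);
        for (String arg : args) {
            // Accept arguments of the form -Dkey=value.
            if (arg.startsWith("-D") && arg.contains("=")) {
                int eq = arg.indexOf('=');
                String key = arg.substring(2, eq);
                String value = arg.substring(eq + 1);
                result.put(key, value);  // the dynamic property wins
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> fileConfig = new HashMap<>();
        fileConfig.put("taskmanager.numberOfTaskSlots", "1");

        Map<String, String> effective = applyDynamicProperties(
                fileConfig, new String[] {"-Dtaskmanager.numberOfTaskSlots=4"});
        System.out.println(effective.get("taskmanager.numberOfTaskSlots"));  // prints "4"
    }
}
```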

## Verifying this change

- Tested manually 

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink forwardDynamicPropertiesTaskManager

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6318.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6318


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

commit 3b78e4099de1511bbe52c203fd2d05e5cfa03efa
Author: Till Rohrmann 
Date:   2018-07-10T09:24:26Z

[FLINK-9819] Add startup scripts for standalone job cluster entry point

commit 75fb8125e3ec270994628f46457b281cdb587874
Author: Till Rohrmann 
Date:   2018-07-10T21:23:59Z

[FLINK-9820] Forward dynamic properties to Flink configuration in 
ClusterEntrypoint

With this commit we can use dynamic properties to overwrite configuration 
values in the
ClusterEntrypoint.

commit b38683205961c625e8c99eff1552ef5a8142ee89
Author: Till Rohrmann 
Date:   2018-07-10T21:43:34Z

[FLINK-9821] Forward dynamic properties to configuration in 
TaskManagerRunner

With this commit we can use dynamic properties to overwrite configuration 
values in the
TaskManagerRunner.




> Let dynamic properties overwrite configuration settings in TaskManagerRunner
> 
>
> Key: FLINK-9821
> URL: https://issues.apache.org/jira/browse/FLINK-9821
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Similar to FLINK-9820 we should also allow dynamic properties to overwrite 
> configuration values in the {{TaskManagerRunner}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9821) Let dynamic properties overwrite configuration settings in TaskManagerRunner

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9821:
--
Labels: pull-request-available  (was: )

> Let dynamic properties overwrite configuration settings in TaskManagerRunner
> 
>
> Key: FLINK-9821
> URL: https://issues.apache.org/jira/browse/FLINK-9821
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Similar to FLINK-9820 we should also allow dynamic properties to overwrite 
> configuration values in the {{TaskManagerRunner}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (FLINK-9821) Let dynamic properties overwrite configuration settings in TaskManagerRunner

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9821:


 Summary: Let dynamic properties overwrite configuration settings 
in TaskManagerRunner
 Key: FLINK-9821
 URL: https://issues.apache.org/jira/browse/FLINK-9821
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


Similar to FLINK-9820 we should also allow dynamic properties to overwrite 
configuration values in the {{TaskManagerRunner}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9820) Let dynamic properties overwrite configuration settings in ClusterEntrypoint

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540801#comment-16540801
 ] 

ASF GitHub Bot commented on FLINK-9820:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6317

[FLINK-9820] Forward dynamic properties to Flink configuration in 
ClusterEntrypoint

## What is the purpose of the change

With this commit we can use dynamic properties to overwrite configuration 
values in the
`ClusterEntrypoint`.

This PR is based on #6316.

cc @GJL 

## Verifying this change

- Added `ConfigurationUtilsTest`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
forwardDynamicPropertiesClusterEntryPoint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6317.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6317


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

commit 3b78e4099de1511bbe52c203fd2d05e5cfa03efa
Author: Till Rohrmann 
Date:   2018-07-10T09:24:26Z

[FLINK-9819] Add startup scripts for standalone job cluster entry point

commit 75fb8125e3ec270994628f46457b281cdb587874
Author: Till Rohrmann 
Date:   2018-07-10T21:23:59Z

[FLINK-9820] Forward dynamic properties to Flink configuration in 
ClusterEntrypoint

With this commit we can use dynamic properties to overwrite configuration 
values in the
ClusterEntrypoint.




> Let dynamic properties overwrite configuration settings in ClusterEntrypoint
> 
>
> Key: FLINK-9820
> URL: https://issues.apache.org/jira/browse/FLINK-9820
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> The dynamic properties which are passed to the {{ClusterEntrypoint}} should 
> overwrite values in the loaded {{Configuration}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9820) Let dynamic properties overwrite configuration settings in ClusterEntrypoint

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9820:
--
Labels: pull-request-available  (was: )

> Let dynamic properties overwrite configuration settings in ClusterEntrypoint
> 
>
> Key: FLINK-9820
> URL: https://issues.apache.org/jira/browse/FLINK-9820
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> The dynamic properties which are passed to the {{ClusterEntrypoint}} should 
> overwrite values in the loaded {{Configuration}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6317: [FLINK-9820] Forward dynamic properties to Flink c...

2018-07-11 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6317

[FLINK-9820] Forward dynamic properties to Flink configuration in 
ClusterEntrypoint

## What is the purpose of the change

With this commit we can use dynamic properties to overwrite configuration 
values in the
`ClusterEntrypoint`.

This PR is based on #6316.

cc @GJL 

## Verifying this change

- Added `ConfigurationUtilsTest`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
forwardDynamicPropertiesClusterEntryPoint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6317.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6317


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

commit 3b78e4099de1511bbe52c203fd2d05e5cfa03efa
Author: Till Rohrmann 
Date:   2018-07-10T09:24:26Z

[FLINK-9819] Add startup scripts for standalone job cluster entry point

commit 75fb8125e3ec270994628f46457b281cdb587874
Author: Till Rohrmann 
Date:   2018-07-10T21:23:59Z

[FLINK-9820] Forward dynamic properties to Flink configuration in 
ClusterEntrypoint

With this commit we can use dynamic properties to overwrite configuration 
values in the
ClusterEntrypoint.




---


[jira] [Created] (FLINK-9820) Let dynamic properties overwrite configuration settings in ClusterEntrypoint

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9820:


 Summary: Let dynamic properties overwrite configuration settings 
in ClusterEntrypoint
 Key: FLINK-9820
 URL: https://issues.apache.org/jira/browse/FLINK-9820
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


The dynamic properties which are passed to the {{ClusterEntrypoint}} should 
overwrite values in the loaded {{Configuration}}.
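The intended overwrite behaviour can be sketched as follows. This is a minimal, self-contained illustration using a plain map as a stand-in for Flink's {{Configuration}}; the class and method names below are illustrative, not the actual Flink API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: -D dynamic properties win over values loaded from flink-conf.yaml.
// "Configuration" is modelled as a plain map; names are illustrative only.
public class DynamicPropertiesSketch {

    // Parse arguments of the form -Dkey=value into a map.
    static Map<String, String> parseDynamicProperties(String[] args) {
        Map<String, String> props = new HashMap<>();
        for (String arg : args) {
            if (arg.startsWith("-D") && arg.contains("=")) {
                int eq = arg.indexOf('=');
                props.put(arg.substring(2, eq), arg.substring(eq + 1));
            }
        }
        return props;
    }

    // Dynamic properties overwrite entries of the loaded configuration.
    static Map<String, String> applyOverrides(
            Map<String, String> loaded, Map<String, String> dynamic) {
        Map<String, String> result = new HashMap<>(loaded);
        result.putAll(dynamic);
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> loaded = new HashMap<>();
        loaded.put("jobmanager.rpc.port", "6123");
        Map<String, String> dynamic =
            parseDynamicProperties(new String[] {"-Djobmanager.rpc.port=6200"});
        System.out.println(
            applyOverrides(loaded, dynamic).get("jobmanager.rpc.port")); // prints 6200
    }
}
```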



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9819) Create start up scripts for the StandaloneJobClusterEntryPoint

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540777#comment-16540777
 ] 

ASF GitHub Bot commented on FLINK-9819:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6316

[FLINK-9819] Add startup scripts for standalone job cluster entry point

## What is the purpose of the change

Add startup shell scripts for the `StandaloneJobClusterEntryPoint`.

This PR is based on #6315.

cc @GJL 

## Verifying this change

- Tested manually

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink standaloneJobStartupScripts

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6316.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6316


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

commit 3b78e4099de1511bbe52c203fd2d05e5cfa03efa
Author: Till Rohrmann 
Date:   2018-07-10T09:24:26Z

[FLINK-9819] Add startup scripts for standalone job cluster entry point




> Create start up scripts for the StandaloneJobClusterEntryPoint
> --
>
> Key: FLINK-9819
> URL: https://issues.apache.org/jira/browse/FLINK-9819
> Project: Flink
>  Issue Type: New Feature
>  Components: Startup Shell Scripts
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> In order to start the {{StandaloneJobClusterEntryPoint}}, we need startup 
> scripts in {{flink-dist}}. We should extend the {{flink-daemon.sh}} and the 
> {{flink-console.sh}} scripts to support this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6316: [FLINK-9819] Add startup scripts for standalone jo...

2018-07-11 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6316

[FLINK-9819] Add startup scripts for standalone job cluster entry point

## What is the purpose of the change

Add startup shell scripts for the `StandaloneJobClusterEntryPoint`.

This PR is based on #6315.

cc @GJL 

## Verifying this change

- Tested manually

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink standaloneJobStartupScripts

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6316.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6316


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

commit 3b78e4099de1511bbe52c203fd2d05e5cfa03efa
Author: Till Rohrmann 
Date:   2018-07-10T09:24:26Z

[FLINK-9819] Add startup scripts for standalone job cluster entry point




---


[jira] [Updated] (FLINK-9819) Create start up scripts for the StandaloneJobClusterEntryPoint

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9819:
--
Labels: pull-request-available  (was: )

> Create start up scripts for the StandaloneJobClusterEntryPoint
> --
>
> Key: FLINK-9819
> URL: https://issues.apache.org/jira/browse/FLINK-9819
> Project: Flink
>  Issue Type: New Feature
>  Components: Startup Shell Scripts
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> In order to start the {{StandaloneJobClusterEntryPoint}}, we need startup 
> scripts in {{flink-dist}}. We should extend the {{flink-daemon.sh}} and the 
> {{flink-console.sh}} scripts to support this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9819) Create start up scripts for the StandaloneJobClusterEntryPoint

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9819:


 Summary: Create start up scripts for the 
StandaloneJobClusterEntryPoint
 Key: FLINK-9819
 URL: https://issues.apache.org/jira/browse/FLINK-9819
 Project: Flink
  Issue Type: New Feature
  Components: Startup Shell Scripts
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


In order to start the {{StandaloneJobClusterEntryPoint}}, we need startup 
scripts in {{flink-dist}}. We should extend the {{flink-daemon.sh}} and the 
{{flink-console.sh}} scripts to support this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9488) Create common entry point for master and workers

2018-07-11 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-9488:


Assignee: Till Rohrmann

> Create common entry point for master and workers
> 
>
> Key: FLINK-9488
> URL: https://issues.apache.org/jira/browse/FLINK-9488
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> To make the container setup easier, we should provide a single cluster entry 
> point which uses leader election to become either the master or a worker 
> which runs the {{TaskManager}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9675) Avoid FileInputStream/FileOutputStream

2018-07-11 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9675:
--
Description: 
They rely on finalizers (before Java 11), which create unnecessary GC load.

The alternative, Files.newInputStream, is as easy to use and doesn't have this 
issue.

  was:They rely on finalizers (before Java 11), which create unnecessary GC 
load. The alternatives, Files.newInputStream, are as easy to use and don't have 
this issue.


> Avoid FileInputStream/FileOutputStream
> --
>
> Key: FLINK-9675
> URL: https://issues.apache.org/jira/browse/FLINK-9675
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: zhangminglei
>Priority: Minor
>  Labels: filesystem
>
> They rely on finalizers (before Java 11), which create unnecessary GC load.
> The alternative, Files.newInputStream, is as easy to use and doesn't have 
> this issue.
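The suggested replacement can be sketched as below: reading a file through {{Files.newInputStream}} instead of {{FileInputStream}}, so no finalizer-backed stream object is created (relevant before Java 11). The file name is arbitrary.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: prefer Files.newInputStream over new FileInputStream(...) to avoid
// the finalizer-based cleanup of FileInputStream (pre Java 11).
public class StreamSketch {

    static String readAll(Path path) throws Exception {
        StringBuilder sb = new StringBuilder();
        // try-with-resources closes the stream deterministically; the stream
        // returned by Files.newInputStream carries no finalizer overhead.
        try (InputStream in = Files.newInputStream(path)) {
            int b;
            while ((b = in.read()) != -1) {
                sb.append((char) b);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("flink-demo", ".txt");
        Files.write(tmp, "hello".getBytes());
        System.out.println(readAll(tmp)); // prints hello
        Files.delete(tmp);
    }
}
```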



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9488) Create common entry point for master and workers

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540771#comment-16540771
 ] 

ASF GitHub Bot commented on FLINK-9488:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6315

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

## What is the purpose of the change

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

This PR creates a new `flink-container` module.

cc @GJL 

## Verifying this change

- Added `StandaloneJobClusterEntryPointTest` and 
`StandaloneClusterConfigurationParserFactoryTest`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
standaloneJobClusterEntryPoint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6315.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6315


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.




> Create common entry point for master and workers
> 
>
> Key: FLINK-9488
> URL: https://issues.apache.org/jira/browse/FLINK-9488
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> To make the container setup easier, we should provide a single cluster entry 
> point which uses leader election to become either the master or a worker 
> which runs the {{TaskManager}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9488) Create common entry point for master and workers

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9488:
--
Labels: pull-request-available  (was: )

> Create common entry point for master and workers
> 
>
> Key: FLINK-9488
> URL: https://issues.apache.org/jira/browse/FLINK-9488
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> To make the container setup easier, we should provide a single cluster entry 
> point which uses leader election to become either the master or a worker 
> which runs the {{TaskManager}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6315: [FLINK-9488] Add container entry point StandaloneJ...

2018-07-11 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6315

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

## What is the purpose of the change

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.

This PR creates a new `flink-container` module.

cc @GJL 

## Verifying this change

- Added `StandaloneJobClusterEntryPointTest` and 
`StandaloneClusterConfigurationParserFactoryTest`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
standaloneJobClusterEntryPoint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6315.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6315


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit d373b6b01ec9d5b63513718a8e6b7db87629a477
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

commit 2526bcc69ff4eb3d196a4a7ceba2e59a7f455922
Author: Till Rohrmann 
Date:   2018-07-09T21:54:55Z

[FLINK-9488] Add container entry point StandaloneJobClusterEntryPoint

The StandaloneJobClusterEntryPoint is the basic entry point for containers. 
It is started with
the user code jar in its classpath and the classname of the user program. 
The entrypoint will
then load this user program via the classname and execute its main method. 
This will generate
a JobGraph which is then used to start the MiniDispatcher.
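The "load the user program via the classname and execute its main method" step described above amounts to a reflective invocation; a minimal self-contained sketch follows, where {{UserProgram}} is a stand-in for a user's job class found on the entry point's classpath (not part of Flink's actual code).

```java
import java.lang.reflect.Method;

// Sketch: an entry point resolves the user program by class name and invokes
// its static main(String[]) via reflection. UserProgram is a hypothetical
// user job class used for demonstration.
public class EntryPointSketch {

    public static class UserProgram {
        public static String lastArg; // records the invocation for demonstration
        public static void main(String[] args) {
            lastArg = args.length > 0 ? args[0] : null;
        }
    }

    static void runMain(String className, String[] args) throws Exception {
        Class<?> clazz = Class.forName(className);
        Method main = clazz.getMethod("main", String[].class);
        // Static method: no receiver; the args array is passed as one Object.
        main.invoke(null, (Object) args);
    }

    public static void main(String[] args) throws Exception {
        runMain(UserProgram.class.getName(), new String[] {"--job-arg"});
        System.out.println(UserProgram.lastArg); // prints --job-arg
    }
}
```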




---


[jira] [Commented] (FLINK-9735) Potential resource leak in RocksDBStateBackend#getDbOptions

2018-07-11 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540754#comment-16540754
 ] 

Ted Yu commented on FLINK-9735:
---

Short term, we should fix the leaked DBOptions instance.

> Potential resource leak in RocksDBStateBackend#getDbOptions
> ---
>
> Key: FLINK-9735
> URL: https://issues.apache.org/jira/browse/FLINK-9735
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Here is the related code:
> {code}
> if (optionsFactory != null) {
>   opt = optionsFactory.createDBOptions(opt);
> }
> {code}
> opt, a DBOptions instance, should be closed before being overwritten.
> getColumnOptions has a similar issue.
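The leak and its short-term fix can be sketched as follows. To keep the example runnable without the RocksDB dependency, {{DBOptions}} and {{OptionsFactory}} below are tiny stand-ins for RocksDB's and Flink's classes; only the close-before-overwrite pattern is the point.

```java
// Sketch: close the previous DBOptions instance before the reference is
// overwritten by the factory's result. DBOptions/OptionsFactory here are
// simplified stand-ins, not the real RocksDB or Flink types.
public class OptionsLeakSketch {

    static class DBOptions implements AutoCloseable {
        boolean closed;
        @Override public void close() { closed = true; }
    }

    interface OptionsFactory {
        DBOptions createDBOptions(DBOptions current);
    }

    // Leaky shape from the issue: if the factory returns a fresh instance,
    // the original opt is overwritten without ever being closed.
    static DBOptions getDbOptionsLeaky(OptionsFactory factory) {
        DBOptions opt = new DBOptions();
        if (factory != null) {
            opt = factory.createDBOptions(opt);
        }
        return opt;
    }

    // Short-term fix: close the old instance before replacing it, unless the
    // factory handed the same object back.
    static DBOptions getDbOptionsFixed(OptionsFactory factory) {
        DBOptions opt = new DBOptions();
        if (factory != null) {
            DBOptions created = factory.createDBOptions(opt);
            if (created != opt) {
                opt.close();
            }
            opt = created;
        }
        return opt;
    }

    public static void main(String[] args) {
        DBOptions[] original = new DBOptions[1];
        OptionsFactory factory = current -> {
            original[0] = current;       // remember the instance that gets replaced
            return new DBOptions();      // factory ignores it and builds a new one
        };
        getDbOptionsFixed(factory);
        System.out.println(original[0].closed); // prints true
    }
}
```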



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9818) Add cluster component command line parser

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9818:
--
Labels: pull-request-available  (was: )

> Add cluster component command line parser
> -
>
> Key: FLINK-9818
> URL: https://issues.apache.org/jira/browse/FLINK-9818
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> In order to parse command line options for the cluster components 
> ({{TaskManagerRunner}}, {{ClusterEntrypoints}}), we should add a 
> {{CommandLineParser}} which supports the common command line options 
> ({{--configDir}}, {{--webui-port}} and dynamic properties which can override 
> configuration values).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9818) Add cluster component command line parser

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540744#comment-16540744
 ] 

ASF GitHub Bot commented on FLINK-9818:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6314

[FLINK-9818] Add cluster component command line parser

## What is the purpose of the change

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

cc @GJL

## Verifying this change

- Added `EntrypointClusterConfigurationParserFactoryTest` and 
`ClusterConfigurationParserFactoryTest`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink commandLineParser

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6314.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6314


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit b577425b67dd5b33a8094989f7a25145ac60d542
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.




> Add cluster component command line parser
> -
>
> Key: FLINK-9818
> URL: https://issues.apache.org/jira/browse/FLINK-9818
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.6.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> In order to parse command line options for the cluster components 
> ({{TaskManagerRunner}}, {{ClusterEntrypoints}}), we should add a 
> {{CommandLineParser}} which supports the common command line options 
> ({{--configDir}}, {{--webui-port}} and dynamic properties which can override 
> configuration values).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6314: [FLINK-9818] Add cluster component command line pa...

2018-07-11 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/6314

[FLINK-9818] Add cluster component command line parser

## What is the purpose of the change

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.

cc @GJL

## Verifying this change

- Added `EntrypointClusterConfigurationParserFactoryTest` and 
`ClusterConfigurationParserFactoryTest`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink commandLineParser

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6314.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6314


commit b798af824433657ca215f8598112094a23819ee0
Author: Till Rohrmann 
Date:   2018-07-11T15:30:53Z

[hotfix] Make PackagedProgram(Class, String...) constructor public

commit b577425b67dd5b33a8094989f7a25145ac60d542
Author: Till Rohrmann 
Date:   2018-07-11T15:41:27Z

[FLINK-9818] Add cluster component command line parser

The cluster component command line parser is responsible for parsing the 
common command line
arguments with which the cluster components are started. These include the 
configDir, webui-port
and dynamic properties.




---


[jira] [Created] (FLINK-9818) Add cluster component command line parser

2018-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9818:


 Summary: Add cluster component command line parser
 Key: FLINK-9818
 URL: https://issues.apache.org/jira/browse/FLINK-9818
 Project: Flink
  Issue Type: Improvement
  Components: Distributed Coordination
Affects Versions: 1.6.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.6.0


In order to parse command line options for the cluster components 
({{TaskManagerRunner}}, {{ClusterEntrypoints}}), we should add a 
{{CommandLineParser}} which supports the common command line options 
({{--configDir}}, {{--webui-port}} and dynamic properties which can override 
configuration values).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
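The common options described in the ticket above (--configDir, --webui-port, plus dynamic properties that override configuration values) can be sketched as a small stand-alone parser. This is a hypothetical illustration only; the class name, field names, and the "-D key=value" syntax below are assumptions, not Flink's actual CommandLineParser API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-alone sketch of a cluster-component command line parser.
// Option names follow the ticket (--configDir, --webui-port, dynamic
// properties); everything else is illustrative, not Flink's real code.
class ClusterCommandLineSketch {
    final String configDir;
    final int webUiPort;
    final Map<String, String> dynamicProperties = new HashMap<>();

    ClusterCommandLineSketch(String[] args) {
        String dir = null;
        int port = -1;
        for (int i = 0; i < args.length; i++) {
            switch (args[i]) {
                case "--configDir":
                    dir = args[++i];
                    break;
                case "--webui-port":
                    port = Integer.parseInt(args[++i]);
                    break;
                case "-D": // dynamic property overriding a config value
                    String[] kv = args[++i].split("=", 2);
                    dynamicProperties.put(kv[0], kv[1]);
                    break;
                default:
                    throw new IllegalArgumentException("Unknown option: " + args[i]);
            }
        }
        this.configDir = dir;
        this.webUiPort = port;
    }

    public static void main(String[] args) {
        ClusterCommandLineSketch c = new ClusterCommandLineSketch(new String[] {
            "--configDir", "/opt/flink/conf", "--webui-port", "8081",
            "-D", "jobmanager.heap.size=1024m"});
        System.out.println(c.configDir + " " + c.webUiPort + " "
            + c.dynamicProperties.get("jobmanager.heap.size"));
        // prints: /opt/flink/conf 8081 1024m
    }
}
```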


[jira] [Commented] (FLINK-9692) Adapt maxRecords parameter in the getRecords call to optimize bytes read from Kinesis

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540708#comment-16540708
 ] 

ASF GitHub Bot commented on FLINK-9692:
---

Github user tweise commented on the issue:

https://github.com/apache/flink/pull/6300
  
@tzulitai can you please take a look - would be good if we can get this 
into v1.6.0


> Adapt maxRecords parameter in the getRecords call to optimize bytes read from 
> Kinesis 
> --
>
> Key: FLINK-9692
> URL: https://issues.apache.org/jira/browse/FLINK-9692
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0, 1.4.2
>Reporter: Lakshmi Rao
>Assignee: Lakshmi Rao
>Priority: Major
>  Labels: performance, pull-request-available
>
> The Kinesis connector currently has a [constant 
> value|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java#L213]
>  set for maxRecords that it can fetch from a single Kinesis getRecords call. 
> However, in most realtime scenarios, the average size of the Kinesis record 
> (in bytes) changes depending on the situation i.e. you could be in a 
> transient scenario where you are reading large sized records and would hence 
> like to fetch fewer records in each getRecords call (so as to not exceed the 
> 2 MB/sec [per shard 
> limit|https://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetRecords.html]
>  on the getRecords call). 
> The idea here is to adapt the Kinesis connector to identify an average batch 
> size prior to making the getRecords call, so that the maxRecords parameter 
> can be appropriately tuned before making the call. 
> This feature can be behind a 
> [ConsumerConfigConstants|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ConsumerConfigConstants.java]
>  flag that defaults to false. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
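The adaptation the ticket describes — tune maxRecords from the observed average record size so a fetch stays under the per-shard read limit — can be sketched stand-alone as below. Constant and method names are assumptions for illustration, not the connector's actual fields.

```java
// Hypothetical sketch of FLINK-9692's idea: pick maxRecords for the next
// getRecords call so that, at the observed average record size, one fetch
// interval reads at most the 2 MB/sec per-shard limit.
class AdaptiveFetchSketch {
    static final int KINESIS_MAX_RECORDS_PER_GET = 10_000; // hard API cap
    static final long READ_LIMIT_BYTES_PER_SEC = 2 * 1024 * 1024;

    static int adaptiveMaxRecordsPerFetch(long averageRecordSizeBytes,
                                          long fetchIntervalMillis) {
        if (averageRecordSizeBytes <= 0) {
            // no size estimate yet: fall back to the API maximum
            return KINESIS_MAX_RECORDS_PER_GET;
        }
        // byte budget for one fetch interval, divided by the record size
        long budget = READ_LIMIT_BYTES_PER_SEC * fetchIntervalMillis / 1000;
        long records = budget / averageRecordSizeBytes;
        return (int) Math.max(1, Math.min(records, KINESIS_MAX_RECORDS_PER_GET));
    }

    public static void main(String[] args) {
        // 1 KB average records, fetching once per second -> 2048 records
        System.out.println(adaptiveMaxRecordsPerFetch(1024, 1000)); // prints 2048
    }
}
```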


[GitHub] flink issue #6300: [FLINK-9692][Kinesis Connector] Adaptive reads from Kines...

2018-07-11 Thread tweise
Github user tweise commented on the issue:

https://github.com/apache/flink/pull/6300
  
@tzulitai can you please take a look - would be good if we can get this 
into v1.6.0


---


[GitHub] flink pull request #6300: [FLINK-9692][Kinesis Connector] Adaptive reads fro...

2018-07-11 Thread tweise
Github user tweise commented on a diff in the pull request:

https://github.com/apache/flink/pull/6300#discussion_r201792650
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java
 ---
@@ -224,10 +232,19 @@ public void run() {

subscribedShard.getShard().getHashKeyRange().getStartingHashKey(),

subscribedShard.getShard().getHashKeyRange().getEndingHashKey());
 
+   long recordBatchSizeBytes = 0L;
+   long averageRecordSizeBytes = 0L;
+
for (UserRecord record : 
fetchedRecords) {
+   recordBatchSizeBytes += 
record.getData().remaining();

deserializeRecordForCollectionAndUpdateState(record);
}
 
+   if (useAdaptiveReads && 
fetchedRecords.size() != 0) {
--- End diff --

nit: && !isEmpty()


---


[jira] [Commented] (FLINK-9692) Adapt maxRecords parameter in the getRecords call to optimize bytes read from Kinesis

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540707#comment-16540707
 ] 

ASF GitHub Bot commented on FLINK-9692:
---

Github user tweise commented on a diff in the pull request:

https://github.com/apache/flink/pull/6300#discussion_r201793459
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java
 ---
@@ -330,4 +347,24 @@ private GetRecordsResult getRecords(String shardItr, 
int maxNumberOfRecords) thr
protected static List<UserRecord> deaggregateRecords(List<Record> 
records, String startingHashKey, String endingHashKey) {
return UserRecord.deaggregate(records, new 
BigInteger(startingHashKey), new BigInteger(endingHashKey));
}
+
+   /**
+* Adapts the maxNumberOfRecordsPerFetch based on the current average 
record size
+* to optimize 2 Mb / sec read limits.
+*
+* @param averageRecordSizeBytes
+* @return adaptedMaxRecordsPerFetch
+*/
+
+   private int getAdaptiveMaxRecordsPerFetch(long averageRecordSizeBytes) {
--- End diff --

Make this protected to allow for override? (Currently the shard consumer as 
a whole cannot be customized, but I think it should.)


> Adapt maxRecords parameter in the getRecords call to optimize bytes read from 
> Kinesis 
> --
>
> Key: FLINK-9692
> URL: https://issues.apache.org/jira/browse/FLINK-9692
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0, 1.4.2
>Reporter: Lakshmi Rao
>Assignee: Lakshmi Rao
>Priority: Major
>  Labels: performance, pull-request-available
>
> The Kinesis connector currently has a [constant 
> value|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java#L213]
>  set for maxRecords that it can fetch from a single Kinesis getRecords call. 
> However, in most realtime scenarios, the average size of the Kinesis record 
> (in bytes) changes depending on the situation i.e. you could be in a 
> transient scenario where you are reading large sized records and would hence 
> like to fetch fewer records in each getRecords call (so as to not exceed the 
> 2 MB/sec [per shard 
> limit|https://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetRecords.html]
>  on the getRecords call). 
> The idea here is to adapt the Kinesis connector to identify an average batch 
> size prior to making the getRecords call, so that the maxRecords parameter 
> can be appropriately tuned before making the call. 
> This feature can be behind a 
> [ConsumerConfigConstants|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ConsumerConfigConstants.java]
>  flag that defaults to false. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9692) Adapt maxRecords parameter in the getRecords call to optimize bytes read from Kinesis

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540706#comment-16540706
 ] 

ASF GitHub Bot commented on FLINK-9692:
---

Github user tweise commented on a diff in the pull request:

https://github.com/apache/flink/pull/6300#discussion_r201792650
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java
 ---
@@ -224,10 +232,19 @@ public void run() {

subscribedShard.getShard().getHashKeyRange().getStartingHashKey(),

subscribedShard.getShard().getHashKeyRange().getEndingHashKey());
 
+   long recordBatchSizeBytes = 0L;
+   long averageRecordSizeBytes = 0L;
+
for (UserRecord record : 
fetchedRecords) {
+   recordBatchSizeBytes += 
record.getData().remaining();

deserializeRecordForCollectionAndUpdateState(record);
}
 
+   if (useAdaptiveReads && 
fetchedRecords.size() != 0) {
--- End diff --

nit: && !isEmpty()


> Adapt maxRecords parameter in the getRecords call to optimize bytes read from 
> Kinesis 
> --
>
> Key: FLINK-9692
> URL: https://issues.apache.org/jira/browse/FLINK-9692
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0, 1.4.2
>Reporter: Lakshmi Rao
>Assignee: Lakshmi Rao
>Priority: Major
>  Labels: performance, pull-request-available
>
> The Kinesis connector currently has a [constant 
> value|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java#L213]
>  set for maxRecords that it can fetch from a single Kinesis getRecords call. 
> However, in most realtime scenarios, the average size of the Kinesis record 
> (in bytes) changes depending on the situation i.e. you could be in a 
> transient scenario where you are reading large sized records and would hence 
> like to fetch fewer records in each getRecords call (so as to not exceed the 
> 2 MB/sec [per shard 
> limit|https://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetRecords.html]
>  on the getRecords call). 
> The idea here is to adapt the Kinesis connector to identify an average batch 
> size prior to making the getRecords call, so that the maxRecords parameter 
> can be appropriately tuned before making the call. 
> This feature can be behind a 
> [ConsumerConfigConstants|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ConsumerConfigConstants.java]
>  flag that defaults to false. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6300: [FLINK-9692][Kinesis Connector] Adaptive reads fro...

2018-07-11 Thread tweise
Github user tweise commented on a diff in the pull request:

https://github.com/apache/flink/pull/6300#discussion_r201793459
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java
 ---
@@ -330,4 +347,24 @@ private GetRecordsResult getRecords(String shardItr, 
int maxNumberOfRecords) thr
protected static List<UserRecord> deaggregateRecords(List<Record> 
records, String startingHashKey, String endingHashKey) {
return UserRecord.deaggregate(records, new 
BigInteger(startingHashKey), new BigInteger(endingHashKey));
}
+
+   /**
+* Adapts the maxNumberOfRecordsPerFetch based on the current average 
record size
+* to optimize 2 Mb / sec read limits.
+*
+* @param averageRecordSizeBytes
+* @return adaptedMaxRecordsPerFetch
+*/
+
+   private int getAdaptiveMaxRecordsPerFetch(long averageRecordSizeBytes) {
--- End diff --

Make this protected to allow for override? (Currently the shard consumer as 
a whole cannot be customized, but I think it should.)


---


[jira] [Commented] (FLINK-9792) Cannot add html tags in options description

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540695#comment-16540695
 ] 

ASF GitHub Bot commented on FLINK-9792:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201844991
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/LineBreakElement.java
 ---
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+/**
+ * Represents a line break in the {@link Description}.
+ */
+public class LineBreakElement implements BlockElement {
+
+   /**
+* Creates a line break in the description.
+*/
+   public static LineBreakElement linebreak() {
+   return new LineBreakElement();
+   }
+
+   private LineBreakElement() {
+   }
+
+   @Override
+   public String format(Formatter formatter) {
+   return formatter.format(this);
--- End diff --

It is a basic Visitor Pattern approach. This way we have a typesafe way of 
implementing it. With the enum approach we would need to do casting if we have 
any custom parameters, e.g. for a link, where besides the URL we also want a 
visible name for the link.

Also I would be in favour of splitting into inline and block elements to 
control complexity of the structure. I don't think we should support highly 
nested structures just to minimize future efforts.


> Cannot add html tags in options description
> ---
>
> Key: FLINK-9792
> URL: https://issues.apache.org/jira/browse/FLINK-9792
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> Right now it is impossible to add any html tags in options description, 
> because all "<" and ">" are escaped. Therefore some links there do not work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6312: [FLINK-9792] Added custom Description class for Co...

2018-07-11 Thread dawidwys
Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201844991
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/LineBreakElement.java
 ---
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+/**
+ * Represents a line break in the {@link Description}.
+ */
+public class LineBreakElement implements BlockElement {
+
+   /**
+* Creates a line break in the description.
+*/
+   public static LineBreakElement linebreak() {
+   return new LineBreakElement();
+   }
+
+   private LineBreakElement() {
+   }
+
+   @Override
+   public String format(Formatter formatter) {
+   return formatter.format(this);
--- End diff --

It is a basic Visitor Pattern approach. This way we have a typesafe way of 
implementing it. With the enum approach we would need to do casting if we have 
any custom parameters, e.g. for a link, where besides the URL we also want a 
visible name for the link.

Also I would be in favour of splitting into inline and block elements to 
control complexity of the structure. I don't think we should support highly 
nested structures just to minimize future efforts.


---
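The double dispatch defended in the comment above can be shown with a minimal stand-alone sketch: each element calls back into the formatter, which supplies a type-specific overload, so no enum switch or instanceof cast is needed. Class names mirror the ones under discussion, but the bodies are illustrative, not Flink's actual code.

```java
// Visitor-style dispatch sketch: BlockElement.format(Formatter) calls back
// into the Formatter, and overload resolution picks the right method.
interface Formatter {
    String format(TextElement text);
    String format(LineBreakElement lineBreak);
}

interface BlockElement {
    String format(Formatter formatter);
}

class TextElement implements BlockElement {
    final String value;
    TextElement(String value) { this.value = value; }
    @Override public String format(Formatter formatter) { return formatter.format(this); }
}

class LineBreakElement implements BlockElement {
    @Override public String format(Formatter formatter) { return formatter.format(this); }
}

class HtmlFormatter implements Formatter {
    @Override public String format(TextElement text) { return text.value; }
    @Override public String format(LineBreakElement lineBreak) { return "<br/>"; }
}

class VisitorSketch {
    public static void main(String[] args) {
        Formatter html = new HtmlFormatter();
        BlockElement[] description = {
            new TextElement("first line"), new LineBreakElement(), new TextElement("second line")};
        StringBuilder sb = new StringBuilder();
        for (BlockElement e : description) {
            sb.append(e.format(html)); // double dispatch picks the right overload
        }
        System.out.println(sb); // prints: first line<br/>second line
    }
}
```

A second formatter (say, a plain-text one) only needs to implement the same two overloads; no element class changes.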


[jira] [Commented] (FLINK-9792) Cannot add html tags in options description

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540662#comment-16540662
 ] 

ASF GitHub Bot commented on FLINK-9792:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201838689
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/LineBreakElement.java
 ---
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+/**
+ * Represents a line break in the {@link Description}.
+ */
+public class LineBreakElement implements BlockElement {
+
+   /**
+* Creates a line break in the description.
+*/
+   public static LineBreakElement linebreak() {
+   return new LineBreakElement();
+   }
+
+   private LineBreakElement() {
+   }
+
+   @Override
+   public String format(Formatter formatter) {
+   return formatter.format(this);
--- End diff --

hmm..

Alright so this works but I still think it's kinda weird that the element 
calls into the formatter; this effectively gives the element control over how 
it is formatted, since it _could_ just return an arbitrary string.

Let me sketch out an alternative, the general idea being to generalize the 
parent interface (DescriptionElement) to accommodate all sub-classes, and 
introduce an enum for categorization.
With this the elements are just a container for data, and most (but not all 
logic unfortunately unless we go for instanceof checks) formatting logic is in 
the formatter.
```
public enum ElementType {
TEXT,
LINK,
LIST,
LINE_BREAK,
SEQUENCE // replaces your nested Text constructor
}

public interface Element {
String getValue()
List<Element> getChildren()
ElementType getType()
}

class HtmlFormatter implements Formatter {

format(Description description) {
description.getElements().stream()
.map(this::format)
.collect(Collectors.joining())
}

String format(Element element) {
switch (element.getType()) {
case TEXT:
return element.getValue()
case LIST:
StringBuilder sb = new StringBuilder()
for (Element item : element.getChildren()) {
sb.append("<li>")
sb.append(format(item))
sb.append("</li>")
}
return sb.toString()
case LINK:
return "<a href=\"" + element.getValue() + "\">" + element.getValue() + "</a>"
case LINE_BREAK:
return "<br/>"
case SEQUENCE:
StringBuilder sb = new StringBuilder()
for (Element item : element.getChildren()) {
sb.append(format(item))
}
return sb.toString()
}
}
}

class Link implements Element {
private final String value

Link(String text) {
this.value = text
}

getValue() {
return this.value
}

getChildren() {
return Collections.emptyList()
}

getType() {
return LINK
}
}

class Text implements Element {
Text(String text) {
this.value = text
this.children = Collections.emptyList()
}

getValue() {
return 

[GitHub] flink pull request #6312: [FLINK-9792] Added custom Description class for Co...

2018-07-11 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201838689
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/LineBreakElement.java
 ---
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+/**
+ * Represents a line break in the {@link Description}.
+ */
+public class LineBreakElement implements BlockElement {
+
+   /**
+* Creates a line break in the description.
+*/
+   public static LineBreakElement linebreak() {
+   return new LineBreakElement();
+   }
+
+   private LineBreakElement() {
+   }
+
+   @Override
+   public String format(Formatter formatter) {
+   return formatter.format(this);
--- End diff --

hmm..

Alright so this works but I still think it's kinda weird that the element 
calls into the formatter; this effectively gives the element control over how 
it is formatted, since it _could_ just return an arbitrary string.

Let me sketch out an alternative, the general idea being to generalize the 
parent interface (DescriptionElement) to accommodate all sub-classes, and 
introduce an enum for categorization.
With this the elements are just a container for data, and most (but not all 
logic unfortunately unless we go for instanceof checks) formatting logic is in 
the formatter.
```
public enum ElementType {
TEXT,
LINK,
LIST,
LINE_BREAK,
SEQUENCE // replaces your nested Text constructor
}

public interface Element {
String getValue()
List<Element> getChildren()
ElementType getType()
}

class HtmlFormatter implements Formatter {

format(Description description) {
description.getElements().stream()
.map(this::format)
.collect(Collectors.joining())
}

String format(Element element) {
switch (element.getType()) {
case TEXT:
return element.getValue()
case LIST:
StringBuilder sb = new StringBuilder()
for (Element item : element.getChildren()) {
sb.append("<li>")
sb.append(format(item))
sb.append("</li>")
}
return sb.toString()
case LINK:
return "<a href=\"" + element.getValue() + "\">" + element.getValue() + "</a>"
case LINE_BREAK:
return "<br/>"
case SEQUENCE:
StringBuilder sb = new StringBuilder()
for (Element item : element.getChildren()) {
sb.append(format(item))
}
return sb.toString()
}
}
}

class Link implements Element {
private final String value

Link(String text) {
this.value = text
}

getValue() {
return this.value
}

getChildren() {
return Collections.emptyList()
}

getType() {
return LINK
}
}

class Text implements Element {
Text(String text) {
this.value = text
this.children = Collections.emptyList()
}

getValue() {
return this.value
}

getChildren() {
return Collections.emptyList()
}

getType() {
return TEXT
}
}


class List implements Element {
private final List 

[jira] [Commented] (FLINK-9750) Create new StreamingFileSink that works on Flink's FileSystem abstraction

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540654#comment-16540654
 ] 

ASF GitHub Bot commented on FLINK-9750:
---

Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/6281
  
Hi @aljoscha and @StephanEwen . I have updated the PR, please have a look.


> Create new StreamingFileSink that works on Flink's FileSystem abstraction
> -
>
> Key: FLINK-9750
> URL: https://issues.apache.org/jira/browse/FLINK-9750
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Reporter: Stephan Ewen
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Using Flink's own file system abstraction means that we can add additional 
> streaming/checkpointing related behavior.
> In addition, the new StreamingFileSink should only rely on internal 
> checkpointed state what files are possibly in progress or need to roll over, 
> never assume enumeration of files in the file system.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #6281: [FLINK-9750] Add new StreamingFileSink with ResumableWrit...

2018-07-11 Thread kl0u
Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/6281
  
Hi @aljoscha and @StephanEwen . I have updated the PR, please have a look.


---


[jira] [Commented] (FLINK-9817) Version not correctly bumped to 5.0

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540625#comment-16540625
 ] 

ASF GitHub Bot commented on FLINK-9817:
---

NicoK commented on issue #45: [FLINK-9817] update version to 5.0, part 2
URL: https://github.com/apache/flink-shaded/pull/45#issuecomment-404303250
 
 
   ah - that's how the change was done...I'll adapt the script and update this 
PR


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Version not correctly bumped to 5.0
> ---
>
> Key: FLINK-9817
> URL: https://issues.apache.org/jira/browse/FLINK-9817
> Project: Flink
>  Issue Type: Bug
>  Components: flink-shaded.git
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Blocker
>  Labels: pull-request-available
>
> Current {{master}} of {{flink-shaded}} only made half of the changes needed 
> when updating version numbers: the suffix in each sub-module's version number 
> also needs to be adapted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9817) Version not correctly bumped to 5.0

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540623#comment-16540623
 ] 

ASF GitHub Bot commented on FLINK-9817:
---

zentol commented on issue #45: [FLINK-9817] update version to 5.0, part 2
URL: https://github.com/apache/flink-shaded/pull/45#issuecomment-404302844
 
 
   we have to fix the `tools/releasing/update_branch_version.sh` script to 
prevent this from happening again.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Version not correctly bumped to 5.0
> ---
>
> Key: FLINK-9817
> URL: https://issues.apache.org/jira/browse/FLINK-9817
> Project: Flink
>  Issue Type: Bug
>  Components: flink-shaded.git
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Blocker
>  Labels: pull-request-available
>
> Current {{master}} of {{flink-shaded}} only made half of the changes needed 
> when updating version numbers: the suffix in each sub-module's version number 
> also needs to be adapted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9817) Version not correctly bumped to 5.0

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540617#comment-16540617
 ] 

ASF GitHub Bot commented on FLINK-9817:
---

NicoK opened a new pull request #45: [FLINK-9817] update version to 5.0, part 2
URL: https://github.com/apache/flink-shaded/pull/45
 
 
   the previous commit, i.e. 49c9fa878cab53ab76f3e4302e4f48920566a2e2, did not 
adapt the sub-modules' version strings


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Version not correctly bumped to 5.0
> ---
>
> Key: FLINK-9817
> URL: https://issues.apache.org/jira/browse/FLINK-9817
> Project: Flink
>  Issue Type: Bug
>  Components: flink-shaded.git
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Blocker
>  Labels: pull-request-available
>
> Current {{master}} of {{flink-shaded}} only made half of the changes needed 
> when updating version numbers: the suffix in each sub-module's version number 
> also needs to be adapted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9817) Version not correctly bumped to 5.0

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9817:
--
Labels: pull-request-available  (was: )

> Version not correctly bumped to 5.0
> ---
>
> Key: FLINK-9817
> URL: https://issues.apache.org/jira/browse/FLINK-9817
> Project: Flink
>  Issue Type: Bug
>  Components: flink-shaded.git
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Blocker
>  Labels: pull-request-available
>
> Current {{master}} of {{flink-shaded}} only made half of the changes needed 
> when updating version numbers: the suffix in each sub-module's version number 
> also needs to be adapted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9817) Version not correctly bumped to 5.0

2018-07-11 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-9817:
--

 Summary: Version not correctly bumped to 5.0
 Key: FLINK-9817
 URL: https://issues.apache.org/jira/browse/FLINK-9817
 Project: Flink
  Issue Type: Bug
  Components: flink-shaded.git
Reporter: Nico Kruber
Assignee: Nico Kruber


Current {{master}} of {{flink-shaded}} only made half of the changes needed 
when updating version numbers: the suffix in each sub-module's version number 
also needs to be adapted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7789) Add handler for Async IO operator timeouts

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540611#comment-16540611
 ] 

ASF GitHub Bot commented on FLINK-7789:
---

Github user pranjal0811 commented on the issue:

https://github.com/apache/flink/pull/6091
  
Hi Team,

Would this feature be included in the Flink 1.5.1 version?

Cheers,
Pranjal


> Add handler for Async IO operator timeouts 
> ---
>
> Key: FLINK-7789
> URL: https://issues.apache.org/jira/browse/FLINK-7789
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Karthik Deivasigamani
>Assignee: blues zheng
>Priority: Major
> Fix For: 1.6.0
>
>
> Currently the Async IO operator does not provide a mechanism to handle timeouts. 
> When a request times out, an exception is thrown and the job is restarted. It 
> would be good to pass an AsyncIOTimeoutHandler which can be implemented by the 
> user and passed in the constructor.
> Here is the discussion from the Apache Flink users mailing list: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/async-io-operator-timeouts-tt16068.html
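A minimal sketch of what such a handler could look like. The interface name comes from the proposal above; its shape and the example implementation are assumptions, not Flink's actual API:

```java
// Hypothetical sketch of the proposed AsyncIOTimeoutHandler; the signature is
// an assumption from this thread, not Flink's actual API.
@FunctionalInterface
interface AsyncIOTimeoutHandler<IN, OUT> {
    // Invoked instead of failing the job when an async request exceeds its timeout.
    OUT handleTimeout(IN input);
}

// Example user implementation that emits a fallback value on timeout
// rather than letting the job restart.
class FallbackTimeoutHandler implements AsyncIOTimeoutHandler<String, Integer> {
    @Override
    public Integer handleTimeout(String input) {
        return -1; // fallback result instead of an exception
    }
}
```

A user would pass such a handler to the operator's constructor, and the operator would call it on timeout instead of throwing.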






[jira] [Commented] (FLINK-7789) Add handler for Async IO operator timeouts

2018-07-11 Thread Pranjal Shrivastava (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540607#comment-16540607
 ] 

Pranjal Shrivastava commented on FLINK-7789:


Hi Team,

Is there any chance that this feature will be included in the Flink 1.5.1 
version?

Cheers,
Pranjal

> Add handler for Async IO operator timeouts 
> ---
>
> Key: FLINK-7789
> URL: https://issues.apache.org/jira/browse/FLINK-7789
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Karthik Deivasigamani
>Assignee: blues zheng
>Priority: Major
> Fix For: 1.6.0
>
>
> Currently the Async IO operator does not provide a mechanism to handle timeouts. 
> When a request times out, an exception is thrown and the job is restarted. It 
> would be good to pass an AsyncIOTimeoutHandler which can be implemented by the 
> user and passed in the constructor.
> Here is the discussion from the Apache Flink users mailing list: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/async-io-operator-timeouts-tt16068.html





[jira] [Commented] (FLINK-9792) Cannot add html tags in options description

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540595#comment-16540595
 ] 

ASF GitHub Bot commented on FLINK-9792:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201819059
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/BlockElement.java
 ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+/**
+ * Part of description that represents a block e.g. some text, linebreak 
or a list.
+ */
+public interface BlockElement {
+
+   /**
+* Transforms itself into String representation using given format.
+*
+* @param formatter formatter to use.
+* @return string representation
+*/
+   String format(Formatter formatter);
--- End diff --

Added a common parent.


> Cannot add html tags in options description
> ---
>
> Key: FLINK-9792
> URL: https://issues.apache.org/jira/browse/FLINK-9792
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> Right now it is impossible to add any HTML tags in option descriptions, 
> because all "<" and ">" characters are escaped. Therefore some links there do not work.







[jira] [Commented] (FLINK-9792) Cannot add html tags in options description

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540592#comment-16540592
 ] 

ASF GitHub Bot commented on FLINK-9792:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201818972
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/Formatter.java
 ---
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Allows providing multiple formatters for the description. E.g. Html 
formatter, Markdown formatter etc.
+ */
+public abstract class Formatter {
+
+   /**
+* Formats the description into a String using format specific tags.
+*
+* @param description description to be formatted
+* @return string representation of the description
+*/
+   public String format(Description description) {
+   return description.getBlocks().stream().map(b -> 
b.format(this)).collect(Collectors.joining());
--- End diff --

Originally I thought this way there would be fewer loops in the 
implementation of concrete formatters, but after a second look I think that 
is not the case. I agree with your suggestion. In other words, I unnecessarily 
overcomplicated things. :(
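The double dispatch under discussion, where each element renders itself through the formatter and `Formatter#format` only joins the per-block results, can be sketched in a self-contained form. `HtmlFormatter` below is an assumed illustration, not code from the PR:

```java
import java.util.List;
import java.util.stream.Collectors;

// Self-contained model of the design in this thread: elements call back into
// the Formatter, so concrete formatters need no extra loops of their own.
interface BlockElement {
    String format(Formatter formatter);
}

class TextElement implements BlockElement {
    private final String text;
    TextElement(String text) { this.text = text; }
    public String format(Formatter f) { return f.formatText(text); }
}

abstract class Formatter {
    // Joins per-element output; mirrors Formatter#format(Description) in the diff.
    String format(List<? extends BlockElement> blocks) {
        return blocks.stream().map(b -> b.format(this)).collect(Collectors.joining());
    }
    abstract String formatText(String text);
}

// Assumed example formatter; each output format escapes or tags text its own way.
class HtmlFormatter extends Formatter {
    String formatText(String text) { return "<p>" + text + "</p>"; }
}
```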


> Cannot add html tags in options description
> ---
>
> Key: FLINK-9792
> URL: https://issues.apache.org/jira/browse/FLINK-9792
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> Right now it is impossible to add any HTML tags in option descriptions, 
> because all "<" and ">" characters are escaped. Therefore some links there do not work.





[jira] [Commented] (FLINK-9792) Cannot add html tags in options description

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540594#comment-16540594
 ] 

ASF GitHub Bot commented on FLINK-9792:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201819026
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/ListElement.java
 ---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Represents a list in the {@link Description}.
+ */
+public class ListElement implements BlockElement {
+
+   private final List entries;
--- End diff --

True, changed it.


> Cannot add html tags in options description
> ---
>
> Key: FLINK-9792
> URL: https://issues.apache.org/jira/browse/FLINK-9792
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> Right now it is impossible to add any HTML tags in option descriptions, 
> because all "<" and ">" characters are escaped. Therefore some links there do not work.






[jira] [Updated] (FLINK-9701) Activate TTL in state descriptors

2018-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9701:
--
Labels: pull-request-available  (was: )

> Activate TTL in state descriptors
> -
>
> Key: FLINK-9701
> URL: https://issues.apache.org/jira/browse/FLINK-9701
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Andrey Zagrebin
>Assignee: Andrey Zagrebin
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-9701) Activate TTL in state descriptors

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540571#comment-16540571
 ] 

ASF GitHub Bot commented on FLINK-9701:
---

GitHub user azagrebin opened a pull request:

https://github.com/apache/flink/pull/6313

[FLINK-9701] Add TTL in state descriptors

## What is the purpose of the change

This PR activates the TTL feature in state descriptors.

## Brief change log

  - add method enabling TTL with configuration in state descriptor
  - integrate TTL tests with heap and rocksdb backends
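The descriptor-side switch from the change log above can be illustrated with a minimal, self-contained sketch. The class names are simplified stand-ins for the PR's actual StateTtlConfig and StateDescriptor types, not the real API:

```java
// Simplified stand-in for the PR's StateTtlConfig.
final class TtlConfigSketch {
    final long ttlMillis;
    TtlConfigSketch(long ttlMillis) { this.ttlMillis = ttlMillis; }
}

// Simplified stand-in for a state descriptor: TTL is off by default and is
// switched on by enableTimeToLive(...), as in the PR.
class StateDescriptorSketch {
    private TtlConfigSketch ttlConfig; // null means TTL disabled

    void enableTimeToLive(TtlConfigSketch config) { this.ttlConfig = config; }

    boolean isTtlEnabled() { return ttlConfig != null; }
}
```

A backend would then wrap the descriptor's state with TTL logic whenever `isTtlEnabled()` returns true.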


## Verifying this change

TTL unit tests

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
  - The serializers: (yes)
  - The runtime per-record code paths (performance sensitive): (yes)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (yes)
  - If yes, how is the feature documented? (not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/azagrebin/flink FLINK-9701

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6313.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6313


commit 0f6a3e297b95defeafb1e31899e22195c5985702
Author: Andrey Zagrebin 
Date:   2018-07-03T17:23:41Z

[FLINK-9701] Add TTL in state descriptors




> Activate TTL in state descriptors
> -
>
> Key: FLINK-9701
> URL: https://issues.apache.org/jira/browse/FLINK-9701
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Andrey Zagrebin
>Assignee: Andrey Zagrebin
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Commented] (FLINK-9792) Cannot add html tags in options description

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540540#comment-16540540
 ] 

ASF GitHub Bot commented on FLINK-9792:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201806215
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/BlockElement.java
 ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+/**
+ * Part of description that represents a block e.g. some text, linebreak 
or a list.
+ */
+public interface BlockElement {
+
+   /**
+* Transforms itself into String representation using given format.
+*
+* @param formatter formatter to use.
+* @return string representation
+*/
+   String format(Formatter formatter);
--- End diff --

This class is effectively identical to `InlineElement`; at the very least 
this method could be moved into a shared parent class.
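The suggested deduplication can be sketched as follows; names other than `BlockElement`, `InlineElement`, and `Formatter` are assumptions, and the minimal `Formatter` here only exists to keep the sketch self-contained:

```java
// Sketch of the suggested common parent: both element kinds inherit a single
// format(...) contract instead of declaring it twice.
interface DescriptionElement {
    String format(Formatter formatter);
}

interface BlockElement extends DescriptionElement {}

interface InlineElement extends DescriptionElement {}

// Minimal placeholder Formatter so the sketch compiles on its own.
interface Formatter {
    String render(String raw);
}
```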


> Cannot add html tags in options description
> ---
>
> Key: FLINK-9792
> URL: https://issues.apache.org/jira/browse/FLINK-9792
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> Right now it is impossible to add any HTML tags in option descriptions, 
> because all "<" and ">" characters are escaped. Therefore some links there do not work.





[jira] [Commented] (FLINK-9792) Cannot add html tags in options description

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540541#comment-16540541
 ] 

ASF GitHub Bot commented on FLINK-9792:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/6312#discussion_r201805835
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/description/ListElement.java
 ---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration.description;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Represents a list in the {@link Description}.
+ */
+public class ListElement implements BlockElement {
+
+   private final List entries;
--- End diff --

This should allow any in-line element imo, why can't we have a list of 
links?


> Cannot add html tags in options description
> ---
>
> Key: FLINK-9792
> URL: https://issues.apache.org/jira/browse/FLINK-9792
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> Right now it is impossible to add any HTML tags in option descriptions, 
> because all "<" and ">" characters are escaped. Therefore some links there do not work.






