[jira] [Commented] (FLINK-7642) Upgrade maven surefire plugin to 2.21.0

2018-04-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433258#comment-16433258
 ] 

Ted Yu commented on FLINK-7642:
---

SUREFIRE-1439 is included in 2.21.0, which is needed for compiling with Java 10

> Upgrade maven surefire plugin to 2.21.0
> ---
>
> Key: FLINK-7642
> URL: https://issues.apache.org/jira/browse/FLINK-7642
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>
> The Surefire 2.19 release introduced more useful test filters which let us 
> run a subset of the tests.
> This issue is for upgrading maven surefire plugin to 2.21.0 which contains 
> SUREFIRE-1422



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9152) Harmonize BroadcastProcessFunction Context names

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432864#comment-16432864
 ] 

ASF GitHub Bot commented on FLINK-9152:
---

Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/5830#discussion_r180538022
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/co/CoBroadcastWithKeyedOperator.java
 ---
@@ -33,6 +33,8 @@
 import org.apache.flink.streaming.api.TimeDomain;
 import org.apache.flink.streaming.api.TimerService;
 import 
org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
+import 
org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction.Context;
--- End diff --

Unused import.


> Harmonize BroadcastProcessFunction Context names
> 
>
> Key: FLINK-9152
> URL: https://issues.apache.org/jira/browse/FLINK-9152
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Currently, the {{Context}} on {{KeyedBroadcastProcessFunction}} is called 
> {{KeyedContext}}, which is different from the name of the context on 
> {{BroadcastProcessFunction}}. This leads to the strange combination of
> {code:java}
> public abstract void processBroadcastElement(final IN2 value, final 
> KeyedContext ctx, final Collector out) throws Exception;
> {code}
> i.e. you're processing a broadcast element but the context is called a 
> "keyed" context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5830: [FLINK-9152] Harmonize BroadcastProcessFunction Co...

2018-04-10 Thread kl0u
Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/5830#discussion_r180541104
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/co/KeyedBroadcastProcessFunction.java
 ---
@@ -89,11 +89,11 @@
 * query the current processing/event time, and also query and update 
the internal
 * {@link org.apache.flink.api.common.state.BroadcastState broadcast 
state}. In addition, it
 * can register a {@link KeyedStateFunction function} to be applied to 
all keyed states on
-* the local partition. These can be done through the provided {@link 
Context}.
+* the local partition. These can be done through the provided {@link 
BaseBroadcastProcessFunction.Context}.
--- End diff --

Remove the `BaseBroadcastProcessFunction`.


---


[jira] [Commented] (FLINK-9152) Harmonize BroadcastProcessFunction Context names

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432865#comment-16432865
 ] 

ASF GitHub Bot commented on FLINK-9152:
---

Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/5830#discussion_r180541190
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/co/KeyedBroadcastProcessFunction.java
 ---
@@ -89,11 +89,11 @@
 * query the current processing/event time, and also query and update 
the internal
 * {@link org.apache.flink.api.common.state.BroadcastState broadcast 
state}. In addition, it
 * can register a {@link KeyedStateFunction function} to be applied to 
all keyed states on
-* the local partition. These can be done through the provided {@link 
Context}.
+* the local partition. These can be done through the provided {@link 
BaseBroadcastProcessFunction.Context}.
 * The context is only valid during the invocation of this method, do 
not store it.
 *
 * @param value The stream element.
-* @param ctx A {@link Context} that allows querying the timestamp of 
the element,
+* @param ctx A {@link BaseBroadcastProcessFunction.Context} that 
allows querying the timestamp of the element,
--- End diff --

Remove the `BaseBroadcastProcessFunction`.


> Harmonize BroadcastProcessFunction Context names
> 
>
> Key: FLINK-9152
> URL: https://issues.apache.org/jira/browse/FLINK-9152
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Currently, the {{Context}} on {{KeyedBroadcastProcessFunction}} is called 
> {{KeyedContext}}, which is different from the name of the context on 
> {{BroadcastProcessFunction}}. This leads to the strange combination of
> {code:java}
> public abstract void processBroadcastElement(final IN2 value, final 
> KeyedContext ctx, final Collector out) throws Exception;
> {code}
> i.e. you're processing a broadcast element but the context is called a 
> "keyed" context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5830: [FLINK-9152] Harmonize BroadcastProcessFunction Co...

2018-04-10 Thread kl0u
Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/5830#discussion_r180538022
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/co/CoBroadcastWithKeyedOperator.java
 ---
@@ -33,6 +33,8 @@
 import org.apache.flink.streaming.api.TimeDomain;
 import org.apache.flink.streaming.api.TimerService;
 import 
org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
+import 
org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction.Context;
--- End diff --

Unused import.


---


[jira] [Commented] (FLINK-9152) Harmonize BroadcastProcessFunction Context names

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432863#comment-16432863
 ] 

ASF GitHub Bot commented on FLINK-9152:
---

Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/5830#discussion_r180541104
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/co/KeyedBroadcastProcessFunction.java
 ---
@@ -89,11 +89,11 @@
 * query the current processing/event time, and also query and update 
the internal
 * {@link org.apache.flink.api.common.state.BroadcastState broadcast 
state}. In addition, it
 * can register a {@link KeyedStateFunction function} to be applied to 
all keyed states on
-* the local partition. These can be done through the provided {@link 
Context}.
+* the local partition. These can be done through the provided {@link 
BaseBroadcastProcessFunction.Context}.
--- End diff --

Remove the `BaseBroadcastProcessFunction`.


> Harmonize BroadcastProcessFunction Context names
> 
>
> Key: FLINK-9152
> URL: https://issues.apache.org/jira/browse/FLINK-9152
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Currently, the {{Context}} on {{KeyedBroadcastProcessFunction}} is called 
> {{KeyedContext}}, which is different from the name of the context on 
> {{BroadcastProcessFunction}}. This leads to the strange combination of
> {code:java}
> public abstract void processBroadcastElement(final IN2 value, final 
> KeyedContext ctx, final Collector out) throws Exception;
> {code}
> i.e. you're processing a broadcast element but the context is called a 
> "keyed" context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5830: [FLINK-9152] Harmonize BroadcastProcessFunction Co...

2018-04-10 Thread kl0u
Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/5830#discussion_r180541190
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/co/KeyedBroadcastProcessFunction.java
 ---
@@ -89,11 +89,11 @@
 * query the current processing/event time, and also query and update 
the internal
 * {@link org.apache.flink.api.common.state.BroadcastState broadcast 
state}. In addition, it
 * can register a {@link KeyedStateFunction function} to be applied to 
all keyed states on
-* the local partition. These can be done through the provided {@link 
Context}.
+* the local partition. These can be done through the provided {@link 
BaseBroadcastProcessFunction.Context}.
 * The context is only valid during the invocation of this method, do 
not store it.
 *
 * @param value The stream element.
-* @param ctx A {@link Context} that allows querying the timestamp of 
the element,
+* @param ctx A {@link BaseBroadcastProcessFunction.Context} that 
allows querying the timestamp of the element,
--- End diff --

Remove the `BaseBroadcastProcessFunction`.


---


[jira] [Commented] (FLINK-8205) Multi key get

2018-04-10 Thread Kostas Kloudas (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432789#comment-16432789
 ] 

Kostas Kloudas commented on FLINK-8205:
---

Hi [~sihuazhou]! I would suggest starting with multi-key get for one state; then we 
can move on to querying different states at the same time. 

> Multi key get
> -
>
> Key: FLINK-8205
> URL: https://issues.apache.org/jira/browse/FLINK-8205
> Project: Flink
>  Issue Type: New Feature
>  Components: Queryable State
>Affects Versions: 1.4.0
> Environment: Any
>Reporter: Martin Eden
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.6.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Currently the Java queryable state API only allows fetching one key at a 
> time. It would be extremely useful and more efficient if a similar call 
> existed for submitting multiple keys.
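
Until such an API exists, a client-side workaround is to fan out one request per key and join the futures. A rough sketch (method signatures as in the 1.4 queryable state client, state types chosen only for illustration):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.queryablestate.client.QueryableStateClient;

public class MultiKeyGetSketch {

    // Fan out one getKvState request per key, then wait for all lookups.
    public static List<Long> multiGet(QueryableStateClient client, JobID jobId,
            String stateName, List<String> keys) throws IOException {

        ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>(stateName, Long.class);

        List<CompletableFuture<ValueState<Long>>> futures = new ArrayList<>();
        for (String key : keys) {
            futures.add(client.getKvState(
                    jobId, stateName, key, BasicTypeInfo.STRING_TYPE_INFO, descriptor));
        }

        List<Long> results = new ArrayList<>();
        for (CompletableFuture<ValueState<Long>> future : futures) {
            results.add(future.join().value()); // blocks until this lookup completes
        }
        return results;
    }
}
{code}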



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7917) The return of taskInformationOrBlobKey should be placed inside synchronized in ExecutionJobVertex

2018-04-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432712#comment-16432712
 ] 

Ted Yu commented on FLINK-7917:
---

lgtm

> The return of taskInformationOrBlobKey should be placed inside synchronized 
> in ExecutionJobVertex
> -
>
> Key: FLINK-7917
> URL: https://issues.apache.org/jira/browse/FLINK-7917
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Currently in ExecutionJobVertex#getTaskInformationOrBlobKey:
> {code}
> }
> return taskInformationOrBlobKey;
> {code}
> The return should be placed inside the synchronized block.
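
A minimal sketch of the suggested shape (class and field types simplified for illustration; the helper is hypothetical):

{code:java}
public class ExecutionJobVertexSketch {

    private final Object stateMonitor = new Object();
    private Object taskInformationOrBlobKey; // type simplified for illustration

    public Object getTaskInformationOrBlobKey() {
        synchronized (stateMonitor) {
            if (taskInformationOrBlobKey == null) {
                taskInformationOrBlobKey = computeTaskInformation();
            }
            // returning inside the block, so the read is guarded by the same lock
            return taskInformationOrBlobKey;
        }
    }

    private Object computeTaskInformation() {
        return new Object(); // placeholder for the real (possibly expensive) computation
    }
}
{code}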



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9088) Upgrade Nifi connector dependency to 1.6.0

2018-04-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9088:
--
Description: 
Currently dependency of Nifi is 0.6.1


We should upgrade to 1.6.0

  was:
Currently dependency of Nifi is 0.6.1

We should upgrade to 1.6.0


> Upgrade Nifi connector dependency to 1.6.0
> --
>
> Key: FLINK-9088
> URL: https://issues.apache.org/jira/browse/FLINK-9088
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Reporter: Ted Yu
>Assignee: Hai Zhou
>Priority: Major
>
> Currently dependency of Nifi is 0.6.1
> We should upgrade to 1.6.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9008) End-to-end test: Quickstarts

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432650#comment-16432650
 ] 

ASF GitHub Bot commented on FLINK-9008:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5823
  
Thank you very much @zentol. I have updated the PR, though I am not sure whether 
it meets the requirements 100%. To verify that the job ran successfully, I used 
Elasticsearch2 as a sink and added an elasticsearch2 dependency to the archetype 
pom.xml. Could you take another look?


> End-to-end test: Quickstarts
> 
>
> Key: FLINK-9008
> URL: https://issues.apache.org/jira/browse/FLINK-9008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> We could add an end-to-end test which verifies Flink's quickstarts. It should 
> do the following:
> # create a new Flink project using the quickstarts archetype 
> # add a new Flink dependency to the {{pom.xml}} (e.g. Flink connector or 
> library) 
> # run {{mvn clean package -Pbuild-jar}}
> # verify that no core dependencies are contained in the jar file
> # Run the program



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5823: [FLINK-9008] [e2e] Implements quickstarts end to end test

2018-04-10 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5823
  
Thank you very much @zentol. I have updated the PR, though I am not sure whether 
it meets the requirements 100%. To verify that the job ran successfully, I used 
Elasticsearch2 as a sink and added an elasticsearch2 dependency to the archetype 
pom.xml. Could you take another look?


---


[jira] [Updated] (FLINK-9087) Change the method signature of RecordWriter#broadcastEvent() from BufferConsumer to void

2018-04-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9087:
--
Summary: Change the method signature of RecordWriter#broadcastEvent() from 
BufferConsumer to void  (was: Return value of broadcastEvent should be closed 
in StreamTask#performCheckpoint)

> Change the method signature of RecordWriter#broadcastEvent() from 
> BufferConsumer to void
> 
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.
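
For illustration, a minimal self-contained sketch of the suggested cleanup, using stand-in types rather than the real runtime classes:

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

public class BroadcastEventSketch {

    // Simplified stand-ins for the real runtime types (illustrative only).
    interface BufferConsumer extends Closeable {}
    interface StreamRecordWriter {
        BufferConsumer broadcastEvent(Object event) throws IOException;
    }

    // The quoted loop, with the returned BufferConsumer closed via try-with-resources.
    static void broadcastToAllWriters(List<StreamRecordWriter> writers, Object message)
            throws IOException {
        for (StreamRecordWriter writer : writers) {
            try (BufferConsumer consumer = writer.broadcastEvent(message)) {
                // nothing else to do; closing releases the backing buffer
            }
        }
    }
}
{code}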



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9070) Improve performance of RocksDBMapState.clear()

2018-04-10 Thread Sihua Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432548#comment-16432548
 ] 

Sihua Zhou commented on FLINK-9070:
---

Thanks, I will make a PR for this ticket for Flink 1.6 once Flink 1.5 is 
released.

> Improve performance of RocksDBMapState.clear()
> --
>
> Key: FLINK-9070
> URL: https://issues.apache.org/jira/browse/FLINK-9070
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.6.0
>Reporter: Truong Duc Kien
>Priority: Minor
> Fix For: 1.6.0
>
>
> Currently, RocksDBMapState.clear() is implemented by iterating over all the 
> keys and dropping them one by one. This iteration can be quite slow with: 
>  * Large maps
>  * High-churn maps with a lot of tombstones
> There are a few methods to speed-up deletion for a range of keys, each with 
> their own caveats:
>  * DeleteRange: still experimental, likely buggy
>  * DeleteFilesInRange + CompactRange: only good for large ranges
>  
> Flink can also keep a list of inserted keys in-memory, then directly delete 
> them without having to iterate over the Rocksdb database again. 
>  
> Reference:
>  * [RocksDB article about range 
> deletion|https://github.com/facebook/rocksdb/wiki/Delete-A-Range-Of-Keys]
>  * [Bug in DeleteRange|https://pingcap.com/blog/2017-09-08-rocksdbbug]
>  
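
For illustration, a rough sketch of the prefix-range approach with RocksJava (assuming a RocksDB version that exposes deleteRange; the key layout is a simplified assumption):

{code:java}
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class MapStateClearSketch {

    // Drop all entries whose keys share the given prefix with a single deleteRange call,
    // instead of iterating over them and deleting one by one.
    static void clearByPrefix(RocksDB db, ColumnFamilyHandle handle, byte[] prefix)
            throws RocksDBException {
        db.deleteRange(handle, prefix, prefixUpperBound(prefix));
    }

    // Exclusive upper bound: the prefix with its last byte incremented.
    // (Assumes the last byte is not 0xFF; real code must handle the carry.)
    private static byte[] prefixUpperBound(byte[] prefix) {
        byte[] end = prefix.clone();
        end[end.length - 1] += 1;
        return end;
    }
}
{code}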



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9070) Improve performance of RocksDBMapState.clear()

2018-04-10 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou reassigned FLINK-9070:
-

Assignee: Sihua Zhou

> Improve performance of RocksDBMapState.clear()
> --
>
> Key: FLINK-9070
> URL: https://issues.apache.org/jira/browse/FLINK-9070
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.6.0
>Reporter: Truong Duc Kien
>Assignee: Sihua Zhou
>Priority: Minor
> Fix For: 1.6.0
>
>
> Currently, RocksDBMapState.clear() is implemented by iterating over all the 
> keys and dropping them one by one. This iteration can be quite slow with: 
>  * Large maps
>  * High-churn maps with a lot of tombstones
> There are a few methods to speed-up deletion for a range of keys, each with 
> their own caveats:
>  * DeleteRange: still experimental, likely buggy
>  * DeleteFilesInRange + CompactRange: only good for large ranges
>  
> Flink can also keep a list of inserted keys in-memory, then directly delete 
> them without having to iterate over the Rocksdb database again. 
>  
> Reference:
>  * [RocksDB article about range 
> deletion|https://github.com/facebook/rocksdb/wiki/Delete-A-Range-Of-Keys]
>  * [Bug in DeleteRange|https://pingcap.com/blog/2017-09-08-rocksdbbug]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9070) Improve performance of RocksDBMapState.clear()

2018-04-10 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou updated FLINK-9070:
--
Fix Version/s: 1.6.0

> Improve performance of RocksDBMapState.clear()
> --
>
> Key: FLINK-9070
> URL: https://issues.apache.org/jira/browse/FLINK-9070
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.6.0
>Reporter: Truong Duc Kien
>Priority: Minor
> Fix For: 1.6.0
>
>
> Currently, RocksDBMapState.clear() is implemented by iterating over all the 
> keys and dropping them one by one. This iteration can be quite slow with: 
>  * Large maps
>  * High-churn maps with a lot of tombstones
> There are a few methods to speed-up deletion for a range of keys, each with 
> their own caveats:
>  * DeleteRange: still experimental, likely buggy
>  * DeleteFilesInRange + CompactRange: only good for large ranges
>  
> Flink can also keep a list of inserted keys in-memory, then directly delete 
> them without having to iterate over the Rocksdb database again. 
>  
> Reference:
>  * [RocksDB article about range 
> deletion|https://github.com/facebook/rocksdb/wiki/Delete-A-Range-Of-Keys]
>  * [Bug in DeleteRange|https://pingcap.com/blog/2017-09-08-rocksdbbug]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8205) Multi key get

2018-04-10 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou updated FLINK-8205:
--
Fix Version/s: (was: 1.5.0)
   1.6.0

> Multi key get
> -
>
> Key: FLINK-8205
> URL: https://issues.apache.org/jira/browse/FLINK-8205
> Project: Flink
>  Issue Type: New Feature
>  Components: Queryable State
>Affects Versions: 1.4.0
> Environment: Any
>Reporter: Martin Eden
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.6.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Currently the Java queryable state API only allows fetching one key at a 
> time. It would be extremely useful and more efficient if a similar call 
> existed for submitting multiple keys.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9087) Return value of broadcastEvent should be closed in StreamTask#performCheckpoint

2018-04-10 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432430#comment-16432430
 ] 

mingleizhang commented on FLINK-9087:
-

It seems that [~triones] does not have permission to perform the write 
operation at the moment. I could help with that, or a committer can give you 
permission so that you can do it yourself.

> Return value of broadcastEvent should be closed in 
> StreamTask#performCheckpoint
> ---
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9087) Return value of broadcastEvent should be closed in StreamTask#performCheckpoint

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432403#comment-16432403
 ] 

ASF GitHub Bot commented on FLINK-9087:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5802
  
@trionesadam  


> Return value of broadcastEvent should be closed in 
> StreamTask#performCheckpoint
> ---
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5802: [FLINK-9087] [runtime] change the method signature of Rec...

2018-04-10 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5802
  
@trionesadam 👍 


---


[jira] [Commented] (FLINK-9087) Return value of broadcastEvent should be closed in StreamTask#performCheckpoint

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432401#comment-16432401
 ] 

ASF GitHub Bot commented on FLINK-9087:
---

Github user trionesadam commented on the issue:

https://github.com/apache/flink/pull/5802
  
This PR is ready for another review. @NicoK, @tedyu, thank you. 
@tedyu, do we need to change the description of this JIRA?


> Return value of broadcastEvent should be closed in 
> StreamTask#performCheckpoint
> ---
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5802: [FLINK-9087] [runtime] change the method signature of Rec...

2018-04-10 Thread trionesadam
Github user trionesadam commented on the issue:

https://github.com/apache/flink/pull/5802
  
This PR is ready for another review. @NicoK, @tedyu, thank you. 
@tedyu, do we need to change the description of this JIRA?


---


[jira] [Commented] (FLINK-9087) Return value of broadcastEvent should be closed in StreamTask#performCheckpoint

2018-04-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432399#comment-16432399
 ] 

Ted Yu commented on FLINK-9087:
---

You can modify the description to match your fix.

Thanks

> Return value of broadcastEvent should be closed in 
> StreamTask#performCheckpoint
> ---
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9087) Return value of broadcastEvent should be closed in StreamTask#performCheckpoint

2018-04-10 Thread Triones Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432385#comment-16432385
 ] 

Triones Deng commented on FLINK-9087:
-

This PR is ready for another review. [~NicoK], [~yuzhih...@gmail.com], 
thank you. 

[~yuzhih...@gmail.com], do we need to change the description of this JIRA?

> Return value of broadcastEvent should be closed in 
> StreamTask#performCheckpoint
> ---
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6022) Don't serialise Schema when serialising Avro GenericRecord

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432341#comment-16432341
 ] 

ASF GitHub Bot commented on FLINK-6022:
---

Github user shashank734 commented on the issue:

https://github.com/apache/flink/pull/4943
  
@zentol Thanks, I think this is the wrong place to ask, but we actually tried 
to use AvroTypeInfo and it was unable to restore from the savepoint (note we 
changed the schema and the class by adding one extra variable). That is why I 
was asking for a very minimal example or hint, to check whether I am doing 
something wrong. I am using Scala.


> Don't serialise Schema when serialising Avro GenericRecord
> --
>
> Key: FLINK-6022
> URL: https://issues.apache.org/jira/browse/FLINK-6022
> Project: Flink
>  Issue Type: Improvement
>  Components: Type Serialization System
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>Priority: Major
> Fix For: 1.5.0
>
>
> Currently, Flink is serializing the schema for each Avro GenericRecord in the 
> stream.
> This leads to a lot of overhead over the wire/disk + high serialization costs.
> Therefore, I'm proposing to improve the support for GenericRecord in Flink by 
> shipping the schema to each serializer  through the AvroTypeInformation.
> Then, we can only support GenericRecords with the same type per stream, but 
> the performance will be much better.
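
For intuition, a rough sketch of the intended pattern using plain Avro APIs (illustrative only, not the actual Flink serializer code): the schema lives in the serializer, so only the binary record body is written per element.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class SchemaOnceSerializerSketch {

    private final GenericDatumWriter<GenericRecord> writer;

    public SchemaOnceSerializerSketch(Schema schema) {
        // The schema is configured once (e.g. shipped via the TypeInformation),
        // so it is not written out again for every record.
        this.writer = new GenericDatumWriter<>(schema);
    }

    public byte[] serialize(GenericRecord record) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(record, encoder); // binary payload only, no embedded schema
        encoder.flush();
        return out.toByteArray();
    }
}
{code}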



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #4943: [FLINK-6022] [avro] Use Avro to serialize Avro in flight ...

2018-04-10 Thread shashank734
Github user shashank734 commented on the issue:

https://github.com/apache/flink/pull/4943
  
@zentol Thanks, I think this is the wrong place to ask, but we actually tried 
to use AvroTypeInfo and it was unable to restore from the savepoint (note we 
changed the schema and the class by adding one extra variable). That is why I 
was asking for a very minimal example or hint, to check whether I am doing 
something wrong. I am using Scala.


---


[jira] [Commented] (FLINK-9124) Allow customization of KinesisProxy.getRecords read timeout and retry

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432254#comment-16432254
 ] 

ASF GitHub Bot commented on FLINK-9124:
---

Github user tweise commented on the issue:

https://github.com/apache/flink/pull/5803
  
@aljoscha @tzulitai can you take a look?


> Allow customization of KinesisProxy.getRecords read timeout and retry
> -
>
> Key: FLINK-9124
> URL: https://issues.apache.org/jira/browse/FLINK-9124
> Project: Flink
>  Issue Type: Task
>  Components: Kinesis Connector
>Affects Versions: 1.4.2
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Minor
>
> It should be possible to change the socket read timeout and all other 
> configuration parameters of the underlying AWS ClientConfiguration and also 
> have the option to retry after a socket timeout exception.
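
For context, a rough sketch of the AWS-side knobs involved (plain AWS SDK calls; how these get exposed through the connector's configuration properties is exactly what this ticket is about):

{code:java}
import com.amazonaws.ClientConfiguration;
import com.amazonaws.retry.PredefinedRetryPolicies;

public class KinesisClientConfigSketch {

    static ClientConfiguration buildClientConfiguration() {
        ClientConfiguration config = new ClientConfiguration();
        // Socket read timeout used for getRecords calls.
        config.setSocketTimeout(60_000);
        // Retry after transient failures such as socket timeouts.
        config.setRetryPolicy(
                PredefinedRetryPolicies.getDefaultRetryPolicyWithCustomMaxRetries(5));
        return config;
    }
}
{code}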



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5803: [FLINK-9124] [kinesis] Allow customization of KinesisProx...

2018-04-10 Thread tweise
Github user tweise commented on the issue:

https://github.com/apache/flink/pull/5803
  
@aljoscha @tzulitai can you take a look?


---


[jira] [Commented] (FLINK-9070) Improve performance of RocksDBMapState.clear()

2018-04-10 Thread Truong Duc Kien (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432213#comment-16432213
 ] 

Truong Duc Kien commented on FLINK-9070:


Sure. Go ahead.

> Improve performance of RocksDBMapState.clear()
> --
>
> Key: FLINK-9070
> URL: https://issues.apache.org/jira/browse/FLINK-9070
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.6.0
>Reporter: Truong Duc Kien
>Priority: Minor
>
> Currently, RocksDBMapState.clear() is implemented by iterating over all the 
> keys and dropping them one by one. This iteration can be quite slow with: 
>  * Large maps
>  * High-churn maps with a lot of tombstones
> There are a few methods to speed-up deletion for a range of keys, each with 
> their own caveats:
>  * DeleteRange: still experimental, likely buggy
>  * DeleteFilesInRange + CompactRange: only good for large ranges
>  
> Flink can also keep a list of inserted keys in-memory, then directly delete 
> them without having to iterate over the Rocksdb database again. 
>  
> Reference:
>  * [RocksDB article about range 
> deletion|https://github.com/facebook/rocksdb/wiki/Delete-A-Range-Of-Keys]
>  * [Bug in DeleteRange|https://pingcap.com/blog/2017-09-08-rocksdbbug]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9153) TaskManagerRunner should support rpc port range

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432141#comment-16432141
 ] 

ASF GitHub Bot commented on FLINK-9153:
---

Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5834
  
cc @tillrohrmann: in mode `new` (FLIP-6) the `TaskManagerRunner` cannot 
specify a port range. I suggest we merge this PR as a hotfix and include it 
in the 1.5 release.


> TaskManagerRunner should support rpc port range
> ---
>
> Key: FLINK-9153
> URL: https://issues.apache.org/jira/browse/FLINK-9153
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.4.0, 1.5.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
> Fix For: 1.5.0
>
>
> TaskManagerRunner currently supports just one specific port:
> {code:java}
> final int rpcPort = 
> configuration.getInteger(ConfigConstants.TASK_MANAGER_IPC_PORT_KEY, 0);
> {code}
> It should support a port range, as described in the documentation: 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#taskmanager-rpc-port
>  
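
For illustration, a rough sketch of reading the setting as a range (assuming NetUtils.getPortRangeFromString, which the legacy startup path uses; the actual wiring in TaskManagerRunner may differ):

{code:java}
import java.util.Iterator;

import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.NetUtils;

public class RpcPortRangeSketch {

    // Returns candidate ports from a value like "50100-50200" (or a single port).
    static Iterator<Integer> candidateRpcPorts(Configuration configuration) {
        String portRange =
                configuration.getString(ConfigConstants.TASK_MANAGER_IPC_PORT_KEY, "0");
        // Parses a single port or a range definition into concrete candidates.
        return NetUtils.getPortRangeFromString(portRange);
    }
}
{code}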



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5834: [FLINK-9153] TaskManagerRunner should support rpc port ra...

2018-04-10 Thread yanghua
Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5834
  
cc @tillrohrmann: in mode `new` (FLIP-6) the `TaskManagerRunner` cannot 
specify a port range. I suggest we merge this PR as a hotfix and include it 
in the 1.5 release.


---


[jira] [Commented] (FLINK-9153) TaskManagerRunner should support rpc port range

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432138#comment-16432138
 ] 

ASF GitHub Bot commented on FLINK-9153:
---

GitHub user yanghua opened a pull request:

https://github.com/apache/flink/pull/5834

[FLINK-9153] TaskManagerRunner should support rpc port range

## What is the purpose of the change

*This pull request makes `TaskManagerRunner` (FLIP-6) support an rpc port 
range*


## Brief change log

  - *Fixed a config item reading bug so that the TaskManager runner supports an 
rpc port range*

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / **not documented**)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yanghua/flink FLINK-9153

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5834.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5834


commit 212884207844d9485eb63e5e1ba32118e9fa1567
Author: yanghua 
Date:   2018-04-10T11:52:16Z

[FLINK-9153] TaskManagerRunner should support rpc port range




> TaskManagerRunner should support rpc port range
> ---
>
> Key: FLINK-9153
> URL: https://issues.apache.org/jira/browse/FLINK-9153
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.4.0, 1.5.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
> Fix For: 1.5.0
>
>
> TaskManagerRunner currently supports just one specific port:
> {code:java}
> final int rpcPort = 
> configuration.getInteger(ConfigConstants.TASK_MANAGER_IPC_PORT_KEY, 0);
> {code}
> It should support a port range, as described in the documentation: 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#taskmanager-rpc-port
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5834: [FLINK-9153] TaskManagerRunner should support rpc ...

2018-04-10 Thread yanghua
GitHub user yanghua opened a pull request:

https://github.com/apache/flink/pull/5834

[FLINK-9153] TaskManagerRunner should support rpc port range

## What is the purpose of the change

*This pull request makes `TaskManagerRunner` (FLIP-6) support an rpc port 
range*


## Brief change log

  - *Fixed a config item reading bug so that the TaskManager runner supports an 
rpc port range*

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / **not documented**)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yanghua/flink FLINK-9153

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5834.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5834


commit 212884207844d9485eb63e5e1ba32118e9fa1567
Author: yanghua 
Date:   2018-04-10T11:52:16Z

[FLINK-9153] TaskManagerRunner should support rpc port range




---


[jira] [Commented] (FLINK-6022) Don't serialise Schema when serialising Avro GenericRecord

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432133#comment-16432133
 ] 

ASF GitHub Bot commented on FLINK-6022:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4943
  
@shashank734 The commits are contained in 1.4 already. Have you read 
[this](https://github.com/apache/flink/pull/4943#issuecomment-342156083) 
comment?


> Don't serialise Schema when serialising Avro GenericRecord
> --
>
> Key: FLINK-6022
> URL: https://issues.apache.org/jira/browse/FLINK-6022
> Project: Flink
>  Issue Type: Improvement
>  Components: Type Serialization System
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>Priority: Major
> Fix For: 1.5.0
>
>
> Currently, Flink is serializing the schema for each Avro GenericRecord in the 
> stream.
> This leads to a lot of overhead over the wire/disk + high serialization costs.
> Therefore, I'm proposing to improve the support for GenericRecord in Flink by 
> shipping the schema to each serializer  through the AvroTypeInformation.
> Then, we can only support GenericRecords with the same type per stream, but 
> the performance will be much better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #4943: [FLINK-6022] [avro] Use Avro to serialize Avro in flight ...

2018-04-10 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4943
  
@shashank734 The commits are contained in 1.4 already. Have you read 
[this](https://github.com/apache/flink/pull/4943#issuecomment-342156083) 
comment?


---


[jira] [Created] (FLINK-9155) Provide message context information in DeserializationSchema

2018-04-10 Thread Alex Smirnov (JIRA)
Alex Smirnov created FLINK-9155:
---

 Summary: Provide message context information in 
DeserializationSchema
 Key: FLINK-9155
 URL: https://issues.apache.org/jira/browse/FLINK-9155
 Project: Flink
  Issue Type: Improvement
Reporter: Alex Smirnov


There's no way to retrieve more information about a corrupted message in the 
DeserializationSchema class. It is only possible to return null, which is a 
signal to skip the message, or to throw an exception, which will cause a job 
failure.

For investigation purposes it would be good to have more information, like:
 * the Kafka topic the message came from
 ** in Flink 1.4, it is possible to subscribe using a Pattern, so the topic 
name is not always evident
 * the Kafka topic offset

The idea is to write this information into the log file for further analysis. 
Having the topic name and offset makes it possible to retrieve the message and 
investigate it.
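
For the Kafka connectors specifically, a rough sketch of what the keyed schema variant already allows, since it receives topic, partition and offset (interface and package as in the 1.4 Kafka connector, stated here as an assumption; error handling simplified):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingStringDeserializationSchema implements KeyedDeserializationSchema<String> {

    private static final Logger LOG =
            LoggerFactory.getLogger(LoggingStringDeserializationSchema.class);

    @Override
    public String deserialize(byte[] messageKey, byte[] message,
            String topic, int partition, long offset) throws IOException {
        try {
            return new String(message, StandardCharsets.UTF_8);
        } catch (Exception e) {
            // Log where the corrupted record came from, then skip it by returning null.
            LOG.warn("Skipping corrupted record at topic={} partition={} offset={}",
                    topic, partition, offset, e);
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        return false;
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }
}
{code}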



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6022) Don't serialise Schema when serialising Avro GenericRecord

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432112#comment-16432112
 ] 

ASF GitHub Bot commented on FLINK-6022:
---

Github user shashank734 commented on the issue:

https://github.com/apache/flink/pull/4943
  
@StephanEwen Are these changes part of 1.5 or 1.4? Do you have any example of 
how I can use this with states and CEP? Please give me a hint. I have only seen 
test cases for input and output. State evolution is the main issue for us 
nowadays.


> Don't serialise Schema when serialising Avro GenericRecord
> --
>
> Key: FLINK-6022
> URL: https://issues.apache.org/jira/browse/FLINK-6022
> Project: Flink
>  Issue Type: Improvement
>  Components: Type Serialization System
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>Priority: Major
> Fix For: 1.5.0
>
>
> Currently, Flink is serializing the schema for each Avro GenericRecord in the 
> stream.
> This leads to a lot of overhead over the wire/disk + high serialization costs.
> Therefore, I'm proposing to improve the support for GenericRecord in Flink by 
> shipping the schema to each serializer  through the AvroTypeInformation.
> Then, we can only support GenericRecords with the same type per stream, but 
> the performance will be much better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9154) Include WebSubmissionExtension in REST API docs

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432113#comment-16432113
 ] 

ASF GitHub Bot commented on FLINK-9154:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5833

[FLINK-9154][REST][docs] Document WebSubmissionExtension handlers

## What is the purpose of the change

With this PR the handlers defined in the `WebSubmissionExtension` (for 
running jars, creating plans etc.) are now part of the REST API documentation.

## Brief change log

* ensure the WebSubmissionExtension can be loaded by the generator
* regenerate documentation


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9154

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5833.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5833


commit a6c9ac5029d0cc38e3bda69a4d3325c391656fa1
Author: zentol 
Date:   2018-04-10T11:38:10Z

[FLINK-9154][REST][docs] Document WebSubmissionExtension handlers




> Include WebSubmissionExtension in REST API docs
> ---
>
> Key: FLINK-9154
> URL: https://issues.apache.org/jira/browse/FLINK-9154
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, REST
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.5.0
>
>
> The handlers contained in the {{WebSubmissionExtension}} are currently not 
> documented in the REST API docs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5833: [FLINK-9154][REST][docs] Document WebSubmissionExt...

2018-04-10 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5833

[FLINK-9154][REST][docs] Document WebSubmissionExtension handlers

## What is the purpose of the change

With this PR the handlers defined in the `WebSubmissionExtension` (for 
running jars, creating plans etc.) are now part of the REST API documentation.

## Brief change log

* ensure the WebSubmissionExtension can be loaded by the generator
* regenerate documentation


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9154

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5833.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5833


commit a6c9ac5029d0cc38e3bda69a4d3325c391656fa1
Author: zentol 
Date:   2018-04-10T11:38:10Z

[FLINK-9154][REST][docs] Document WebSubmissionExtension handlers




---


[GitHub] flink issue #4943: [FLINK-6022] [avro] Use Avro to serialize Avro in flight ...

2018-04-10 Thread shashank734
Github user shashank734 commented on the issue:

https://github.com/apache/flink/pull/4943
  
@StephanEwen Are these changes part of 1.5 or 1.4? Do you have any example of 
how I can use this with states and CEP? Please give me a hint. I have only seen 
test cases for input and output. State evolution is the main issue for us 
nowadays.


---


[jira] [Created] (FLINK-9154) Include WebSubmissionExtension in REST API docs

2018-04-10 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9154:
---

 Summary: Include WebSubmissionExtension in REST API docs
 Key: FLINK-9154
 URL: https://issues.apache.org/jira/browse/FLINK-9154
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, REST
Affects Versions: 1.5.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.0


The handlers contained in the {{WebSubmissionExtension}} are currently not 
documented in the REST API docs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9008) End-to-end test: Quickstarts

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432035#comment-16432035
 ] 

ASF GitHub Bot commented on FLINK-9008:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5823
  
1. You should create a project _within the test_. This project must **not** 
be based on `https://flink.apache.org/q/quickstart.sh`; it has to be created 
with the archetype that is installed locally by `flink-quickstarts`. For this 
you will have to use the maven commands as outlined in the 
[documentation](https://ci.apache.org/projects/flink/flink-docs-master/quickstart/java_api_quickstart.html).
2. Correct, you should verify that none of the core flink classes are 
contained in the jar. The only classes that should be contained are those of 
the project.
3. No, this is not related to checkpointing. The point of this test is to 
ensure that job-jars created by a quickstart project actually work when 
submitted to a flink cluster. For example, they have to contain the job 
classes, like `StreamingJob`, as otherwise the job will fail outright since 
there's nothing to run. So you have to verify that the job ran successfully; 
the easiest way is to write some data to a file (like the WordCount tests) 
and verify the contents.


> End-to-end test: Quickstarts
> 
>
> Key: FLINK-9008
> URL: https://issues.apache.org/jira/browse/FLINK-9008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> We could add an end-to-end test which verifies Flink's quickstarts. It should 
> do the following:
> # create a new Flink project using the quickstarts archetype 
> # add a new Flink dependency to the {{pom.xml}} (e.g. Flink connector or 
> library) 
> # run {{mvn clean package -Pbuild-jar}}
> # verify that no core dependencies are contained in the jar file
> # Run the program



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5823: [FLINK-9008] [e2e] Implements quickstarts end to end test

2018-04-10 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5823
  
1. You should create a project _within the test_. This project must **not** 
be based on `https://flink.apache.org/q/quickstart.sh`; it has to be created 
with the archetype that is installed locally by `flink-quickstarts`. For this 
you will have to use the maven commands as outlined in the 
[documentation](https://ci.apache.org/projects/flink/flink-docs-master/quickstart/java_api_quickstart.html).
2. Correct, you should verify that none of the core flink classes are 
contained in the jar. The only classes that should be contained are those of 
the project.
3. No, this is not related to checkpointing. The point of this test is to 
ensure that job-jars created by a quickstart project actually work when 
submitted to a flink cluster. For example, they have to contain the job 
classes, like `StreamingJob`, as otherwise the job will fail outright since 
there's nothing to run. So you have to verify that the job ran successfully; 
the easiest way is to write some data to a file (like the WordCount tests) 
and verify the contents.


---


[jira] [Closed] (FLINK-9137) Merge TopSpeedWindowingExampleITCase into StreamingExamplesITCase

2018-04-10 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-9137.
---
Resolution: Won't Fix

> Merge TopSpeedWindowingExampleITCase into StreamingExamplesITCase
> -
>
> Key: FLINK-9137
> URL: https://issues.apache.org/jira/browse/FLINK-9137
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5816: [FLINK-9137][tests] Merge TopSpeedWindowingExample...

2018-04-10 Thread zentol
Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/5816


---


[jira] [Commented] (FLINK-9137) Merge TopSpeedWindowingExampleITCase into StreamingExamplesITCase

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432013#comment-16432013
 ] 

ASF GitHub Bot commented on FLINK-9137:
---

Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/5816


> Merge TopSpeedWindowingExampleITCase into StreamingExamplesITCase
> -
>
> Key: FLINK-9137
> URL: https://issues.apache.org/jira/browse/FLINK-9137
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9153) TaskManagerRunner should support rpc port range

2018-04-10 Thread vinoyang (JIRA)
vinoyang created FLINK-9153:
---

 Summary: TaskManagerRunner should support rpc port range
 Key: FLINK-9153
 URL: https://issues.apache.org/jira/browse/FLINK-9153
 Project: Flink
  Issue Type: Bug
  Components: TaskManager
Affects Versions: 1.4.0, 1.5.0
Reporter: vinoyang
Assignee: vinoyang
 Fix For: 1.5.0


TaskManagerRunner currently supports just one specific port:
{code:java}
final int rpcPort = 
configuration.getInteger(ConfigConstants.TASK_MANAGER_IPC_PORT_KEY, 0);
{code}
It should support a port range, as described in the documentation: 
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#taskmanager-rpc-port

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8961) Port JobRetrievalITCase to flip6

2018-04-10 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-8961:

Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-8700

> Port JobRetrievalITCase to flip6
> 
>
> Key: FLINK-8961
> URL: https://issues.apache.org/jira/browse/FLINK-8961
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-8702) Migrate tests from FlinkMiniCluster to MiniClusterResource

2018-04-10 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8702.
---
Resolution: Fixed

> Migrate tests from FlinkMiniCluster to MiniClusterResource
> --
>
> Key: FLINK-8702
> URL: https://issues.apache.org/jira/browse/FLINK-8702
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Aljoscha Krettek
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9151) standalone cluster scripts should pass FLINK_CONF_DIR to job manager and task managers

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431959#comment-16431959
 ] 

ASF GitHub Bot commented on FLINK-9151:
---

GitHub user facboy opened a pull request:

https://github.com/apache/flink/pull/5832

[FLINK-9151] [Startup Shell Scripts] Export FLINK_CONF_DIR to job manager 
and task managers in standalone cluster mode (master)

## What is the purpose of the change

This pull request makes the standalone cluster scripts pass FLINK_CONF_DIR 
to the launched job managers and task managers, rather than relying on the 
default config dir on the target host.

## Brief change log

- Added export FLINK_CONF_DIR to `config.sh` and `start-cluster.sh`

## Verifying this change

- I've only manually verified the change on 1.4.x.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: yes
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature? no
  - If yes, how is the feature documented? not applicable


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/facboy/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5832.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5832


commit ec023753dadd1b4fda1b4ab23e9b0ce459f44667
Author: Christopher Ng 
Date:   2018-04-10T08:39:41Z

FLINK-9151 Export FLINK_CONF_DIR to job manager and task managers in 
standalone cluster mode.




> standalone cluster scripts should pass FLINK_CONF_DIR to job manager and task 
> managers
> --
>
> Key: FLINK-9151
> URL: https://issues.apache.org/jira/browse/FLINK-9151
> Project: Flink
>  Issue Type: Improvement
>  Components: Startup Shell Scripts
>Affects Versions: 1.4.1
>Reporter: Christopher Ng
>Priority: Minor
>
> At the moment FLINK_CONF_DIR is not passed to the job manager and task 
> manager when they are started over SSH.  This means that if the user has a 
> locally set FLINK_CONF_DIR that is not configured by their login shell, it is 
> not used by the launched job manager and task managers, which can result in a 
> silent failure to launch if there are errors caused by Flink not using the 
> correct config dir.
> One particular inconsistency is that TaskManagers may be launched locally 
> (without ssh) on localhost, but JobManagers are always launched over ssh.  In 
> my particular case this meant that the TaskManager launched but the 
> JobManager silently failed to launch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5832: [FLINK-9151] [Startup Shell Scripts] Export FLINK_...

2018-04-10 Thread facboy
GitHub user facboy opened a pull request:

https://github.com/apache/flink/pull/5832

[FLINK-9151] [Startup Shell Scripts] Export FLINK_CONF_DIR to job manager 
and task managers in standalone cluster mode (master)

## What is the purpose of the change

This pull request makes the standalone cluster scripts pass FLINK_CONF_DIR 
to the launched job managers and task managers, rather than relying on the 
default config dir on the target host.

## Brief change log

- Added export FLINK_CONF_DIR to `config.sh` and `start-cluster.sh`

## Verifying this change

- I've only manually verified the change on 1.4.x.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: yes
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature? no
  - If yes, how is the feature documented? not applicable


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/facboy/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5832.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5832


commit ec023753dadd1b4fda1b4ab23e9b0ce459f44667
Author: Christopher Ng 
Date:   2018-04-10T08:39:41Z

FLINK-9151 Export FLINK_CONF_DIR to job manager and task managers in 
standalone cluster mode.




---


[jira] [Assigned] (FLINK-9151) standalone cluster scripts should pass FLINK_CONF_DIR to job manager and task managers

2018-04-10 Thread vinoyang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-9151:
---

Assignee: (was: vinoyang)

> standalone cluster scripts should pass FLINK_CONF_DIR to job manager and task 
> managers
> --
>
> Key: FLINK-9151
> URL: https://issues.apache.org/jira/browse/FLINK-9151
> Project: Flink
>  Issue Type: Improvement
>  Components: Startup Shell Scripts
>Affects Versions: 1.4.1
>Reporter: Christopher Ng
>Priority: Minor
>
> At the moment FLINK_CONF_DIR is not passed to the job manager and task 
> manager when they are started over SSH.  This means that if the user has a 
> locally set FLINK_CONF_DIR that is not configured by their login shell, it is 
> not used by the launched job manager and task manager, which can result in a 
> silent failure to launch if there are errors due to Flink not using the 
> correct config dir.
> One particular inconsistency is that TaskManagers may be launched locally 
> (without ssh) on localhost, but JobManagers are always launched over ssh.  In 
> my particular case this meant that the TaskManager launched but the 
> JobManager silently failed to launch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9151) standalone cluster scripts should pass FLINK_CONF_DIR to job manager and task managers

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431948#comment-16431948
 ] 

ASF GitHub Bot commented on FLINK-9151:
---

GitHub user facboy opened a pull request:

https://github.com/apache/flink/pull/5831

[FLINK-9151] [Startup Shell Scripts] Export FLINK_CONF_DIR to job manager 
and task managers in standalone cluster mode

## What is the purpose of the change

This pull request makes the standalone cluster scripts pass FLINK_CONF_DIR 
to the launched job managers and task managers, rather than relying on the 
default config dir on the target host.

## Brief change log

- Added export FLINK_CONF_DIR to `config.sh` and `start-cluster.sh`

## Verifying this change

- Manually verified the change by running a standalone cluster with a local 
FLINK_CONF_DIR environment variable set.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: yes
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature? no
  - If yes, how is the feature documented? not applicable


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/facboy/flink 1.4.x

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5831.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5831


commit 760d1a6bb75eb9519a4b93eb3cf34ad1605621da
Author: yew1eb 
Date:   2017-11-07T01:06:45Z

[hotfix][docs] Add type for numLateRecordsDropped metric in docs

commit 07830e7897a42b5d12f0b33c42933c6ca78e70d3
Author: zentol 
Date:   2017-11-07T11:16:04Z

[hotfix][rat] Add missing rat exclusions

Another set of RAT exclusions to prevent errors on Windows.

commit aab36f934548a5697c5c461b2a79c7cf3fd0d756
Author: kkloudas 
Date:   2017-11-06T11:43:18Z

[FLINK-7823][QS] Update Queryable State configuration parameters.

commit 819995454611be6a85e2933318d053b2c25a18f7
Author: kkloudas 
Date:   2017-11-06T16:21:45Z

[FLINK-7822][QS][doc] Update Queryable State docs.

commit 564c9934fd3aaba462a7415788b3d55486146f9b
Author: Aljoscha Krettek 
Date:   2017-11-07T17:27:16Z

[hotfix] Use correct commit id in GenericWriteAheadSink.notifyCheckpoint

commit 3cbf467ebdf639df4d7d4da78b7bc2929aa4b5d9
Author: Piotr Nowojski 
Date:   2017-11-06T13:03:16Z

[hotfix][kafka] Extract TransactionalIdsGenerator class from 
FlinkKafkaProducer011

This is a pure refactor without any functional changes.

commit 460e27aeb5e246aff0f8137448441c315123608c
Author: Piotr Nowojski 
Date:   2017-11-06T13:14:01Z

[FLINK-7978][kafka] Ensure that transactional ids will never clash

Previously transactional ids to use and to abort could clash between
subtasks. This could lead to a race condition between initialization
and writing the data, where one subtask is still initializing/aborting
some transactional id while a different subtask is already trying to write
the data using the same transactional id.

commit b677c8d69b81fb3594798ba2761fdb7e2edea5db
Author: Fabian Hueske 
Date:   2017-11-07T22:43:45Z

[hotfix] [docs] Improve Supported Types section of Table API & SQL docs.

commit dc1ca78a4e4cb339e9fbf0c90700f3204e091c53
Author: Fabian Hueske 
Date:   2017-11-07T23:12:49Z

[hotfix] [docs] Fix UDTF join description in SQL docs.

commit 5af710080eb72d23d8d2f6a77d1825f3d8a009ae
Author: zentol 
Date:   2017-11-07T10:40:15Z

[FLINK-8004][metrics][docs] Fix usage examples

commit 49dc380697627189f6ac2e8bf5a084ac85c21ed5
Author: zentol 
Date:   2017-11-07T14:36:49Z

[FLINK-8010][build] Bump remaining flink-shaded versions

commit 17aae5af4a7973348067d5786cd4f16fc9da2639
Author: Tzu-Li (Gordon) Tai 
Date:   2017-11-07T11:35:33Z

[FLINK-8001] [kafka] Prevent PeriodicWatermarkEmitter from violating IDLE 
status

Prior to this commit, a bug exists such that if a Kafka consumer subtask
initially marks itself as idle because it didn't have any partitions to
subscribe to, that idleness status will be violated when the
PeriodicWatermarkEmitter is fired.

The problem is that the PeriodicWatermarkEmitter incorrectly yields a 
Long.MAX_VALUE watermark even when there are no partitions to subscribe to.

[GitHub] flink pull request #5831: [FLINK-9151] [Startup Shell Scripts] Export FLINK_...

2018-04-10 Thread facboy
GitHub user facboy opened a pull request:

https://github.com/apache/flink/pull/5831

[FLINK-9151] [Startup Shell Scripts] Export FLINK_CONF_DIR to job manager 
and task managers in standalone cluster mode

## What is the purpose of the change

This pull request makes the standalone cluster scripts pass FLINK_CONF_DIR 
to the launched job managers and task managers, rather than relying on the 
default config dir on the target host.

## Brief change log

- Added export FLINK_CONF_DIR to `config.sh` and `start-cluster.sh`

## Verifying this change

- Manually verified the change by running a standalone cluster with a local 
FLINK_CONF_DIR environment variable set.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: yes
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature? no
  - If yes, how is the feature documented? not applicable


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/facboy/flink 1.4.x

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5831.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5831


commit 760d1a6bb75eb9519a4b93eb3cf34ad1605621da
Author: yew1eb 
Date:   2017-11-07T01:06:45Z

[hotfix][docs] Add type for numLateRecordsDropped metric in docs

commit 07830e7897a42b5d12f0b33c42933c6ca78e70d3
Author: zentol 
Date:   2017-11-07T11:16:04Z

[hotfix][rat] Add missing rat exclusions

Another set of RAT exclusions to prevent errors on Windows.

commit aab36f934548a5697c5c461b2a79c7cf3fd0d756
Author: kkloudas 
Date:   2017-11-06T11:43:18Z

[FLINK-7823][QS] Update Queryable State configuration parameters.

commit 819995454611be6a85e2933318d053b2c25a18f7
Author: kkloudas 
Date:   2017-11-06T16:21:45Z

[FLINK-7822][QS][doc] Update Queryable State docs.

commit 564c9934fd3aaba462a7415788b3d55486146f9b
Author: Aljoscha Krettek 
Date:   2017-11-07T17:27:16Z

[hotfix] Use correct commit id in GenericWriteAheadSink.notifyCheckpoint

commit 3cbf467ebdf639df4d7d4da78b7bc2929aa4b5d9
Author: Piotr Nowojski 
Date:   2017-11-06T13:03:16Z

[hotfix][kafka] Extract TransactionalIdsGenerator class from 
FlinkKafkaProducer011

This is a pure refactor without any functional changes.

commit 460e27aeb5e246aff0f8137448441c315123608c
Author: Piotr Nowojski 
Date:   2017-11-06T13:14:01Z

[FLINK-7978][kafka] Ensure that transactional ids will never clash

Previously transactional ids to use and to abort could clash between
subtasks. This could lead to a race condition between initialization
and writing the data, where one subtask is still initializing/aborting
some transactional id while a different subtask is already trying to write
the data using the same transactional id.

commit b677c8d69b81fb3594798ba2761fdb7e2edea5db
Author: Fabian Hueske 
Date:   2017-11-07T22:43:45Z

[hotfix] [docs] Improve Supported Types section of Table API & SQL docs.

commit dc1ca78a4e4cb339e9fbf0c90700f3204e091c53
Author: Fabian Hueske 
Date:   2017-11-07T23:12:49Z

[hotfix] [docs] Fix UDTF join description in SQL docs.

commit 5af710080eb72d23d8d2f6a77d1825f3d8a009ae
Author: zentol 
Date:   2017-11-07T10:40:15Z

[FLINK-8004][metrics][docs] Fix usage examples

commit 49dc380697627189f6ac2e8bf5a084ac85c21ed5
Author: zentol 
Date:   2017-11-07T14:36:49Z

[FLINK-8010][build] Bump remaining flink-shaded versions

commit 17aae5af4a7973348067d5786cd4f16fc9da2639
Author: Tzu-Li (Gordon) Tai 
Date:   2017-11-07T11:35:33Z

[FLINK-8001] [kafka] Prevent PeriodicWatermarkEmitter from violating IDLE 
status

Prior to this commit, a bug exists such that if a Kafka consumer subtask
initially marks itself as idle because it didn't have any partitions to
subscribe to, that idleness status will be violated when the
PeriodicWatermarkEmitter is fired.

The problem is that the PeriodicWatermarkEmitter incorrectly yields a
Long.MAX_VALUE watermark even when there are no partitions to subscribe
to. This commit fixes this by additionally ensuring that the aggregated
watermark in the PeriodicWatermarkEmitter is 

[jira] [Updated] (FLINK-9151) standalone cluster scripts should pass FLINK_CONF_DIR to job manager and task managers

2018-04-10 Thread Christopher Ng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Ng updated FLINK-9151:
--
Component/s: Startup Shell Scripts

> standalone cluster scripts should pass FLINK_CONF_DIR to job manager and task 
> managers
> --
>
> Key: FLINK-9151
> URL: https://issues.apache.org/jira/browse/FLINK-9151
> Project: Flink
>  Issue Type: Improvement
>  Components: Startup Shell Scripts
>Affects Versions: 1.4.1
>Reporter: Christopher Ng
>Assignee: vinoyang
>Priority: Minor
>
> At the moment FLINK_CONF_DIR is not passed to the job manager and task 
> manager when they are started over SSH.  This means that if the user has a 
> locally set FLINK_CONF_DIR that is not configured by their login shell, it is 
> not used by the launched job manager and task manager, which can result in a 
> silent failure to launch if there are errors due to Flink not using the 
> correct config dir.
> One particular inconsistency is that TaskManagers may be launched locally 
> (without ssh) on localhost, but JobManagers are always launched over ssh.  In 
> my particular case this meant that the TaskManager launched but the 
> JobManager silently failed to launch.
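
As a side note, a minimal illustration of why the variable gets lost, assuming an OpenSSH setup with default settings (host name and path are illustrative):

{code:bash}
# Visible in the local shell that runs the start scripts...
export FLINK_CONF_DIR=/home/me/my-flink-conf

# ...but a non-interactive ssh command starts with a fresh environment, so unless
# the remote login shell exports the variable itself, it is simply unset there.
ssh worker1 'echo "FLINK_CONF_DIR=${FLINK_CONF_DIR:-<unset>}"'
{code}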



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8205) Multi key get

2018-04-10 Thread Sihua Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431809#comment-16431809
 ] 

Sihua Zhou commented on FLINK-8205:
---

Hi [~kkl0u], I'm preparing to write the design document, and there is one question I 
would like to discuss with you briefly. Should we support {{multi key get}} for 
"one state with multiple keys" or for "multiple states with multiple keys"?

> Multi key get
> -
>
> Key: FLINK-8205
> URL: https://issues.apache.org/jira/browse/FLINK-8205
> Project: Flink
>  Issue Type: New Feature
>  Components: Queryable State
>Affects Versions: 1.4.0
> Environment: Any
>Reporter: Martin Eden
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.5.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Currently the Java queryable state API only allows fetching one key at a 
> time. It would be extremely useful and more efficient if a similar call 
> existed for submitting multiple keys.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)