[GitHub] flink issue #3497: [FLINK-6002] Documentation: 'MacOS X' section in Quicksta...

2017-03-08 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3497
  
+1 to merge.




[jira] [Commented] (FLINK-4545) Flink automatically manages TM network buffer

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901930#comment-15901930
 ] 

ASF GitHub Bot commented on FLINK-4545:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3467
  
I think this is a good change, merging this...

@zhijiangW The buffer management changes will come in follow-up PRs: first 
adjusting the local pools, then the global pool. Managing buffers in a global 
pool can help when caching data, such as for batch jobs. But we can take 
suggestions for follow-up improvements as a separate thread, after this 
improvement is in.


> Flink automatically manages TM network buffer
> -
>
> Key: FLINK-4545
> URL: https://issues.apache.org/jira/browse/FLINK-4545
> Project: Flink
>  Issue Type: Wish
>  Components: Network
>Reporter: Zhenzhong Xu
>
> Currently, the number of network buffers per task manager is preconfigured and 
> the memory is pre-allocated through the taskmanager.network.numberOfBuffers 
> config. In a job DAG with a shuffle phase, this number can go up very high 
> depending on the TM cluster size. The formula for calculating the buffer count 
> is documented here 
> (https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#configuring-the-network-buffers).
>   
> #slots-per-TM^2 * #TMs * 4
> In a standalone deployment, we may need to control the task manager cluster 
> size dynamically and then leverage the upcoming Flink feature to support 
> scaling job parallelism/rescaling at runtime. 
> If the buffer count config is static at runtime and cannot be changed without 
> restarting the task manager process, this may add latency and complexity to the 
> scaling process. I am wondering whether there is already any discussion around 
> whether the network buffers should be automatically managed by Flink, or at 
> least whether some API should be exposed to allow them to be reconfigured. Let 
> me know if there is any existing JIRA that I should follow.
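A minimal, hypothetical Java sketch of the buffer-count formula quoted in the issue 
description above; the class name and example cluster sizes are illustrative 
assumptions, not Flink code (the 32 KiB figure in the comment is the default 
network buffer segment size).

{code}
// Hypothetical helper, not part of Flink: evaluates the documented formula
// #slots-per-TM^2 * #TMs * 4 for a given cluster shape.
public class NetworkBufferEstimate {

    static long requiredBuffers(int slotsPerTaskManager, int taskManagers) {
        return (long) slotsPerTaskManager * slotsPerTaskManager * taskManagers * 4L;
    }

    public static void main(String[] args) {
        // Example: 8 slots per TM and 20 TMs -> 8 * 8 * 20 * 4 = 5120 buffers.
        // At the default 32 KiB segment size that is ~160 MiB of pre-allocated memory.
        System.out.println(requiredBuffers(8, 20)); // prints 5120
    }
}
{code}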





[GitHub] flink issue #3467: [FLINK-4545] preparations for removing the network buffer...

2017-03-08 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3467
  
I think this is a good change, merging this...

@zhijiangW The buffer management changes will come in follow-up PRs: first 
adjusting the local pools, then the global pool. Managing buffers in a global 
pool can help when caching data, such as for batch jobs. But we can take 
suggestions for follow-up improvements as a separate thread, after this 
improvement is in.




[jira] [Comment Edited] (FLINK-5785) Add an Imputer for preparing data

2017-03-08 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901830#comment-15901830
 ] 

Stavros Kontopoulos edited comment on FLINK-5785 at 3/8/17 8:26 PM:


[~beera] Let me know if you want any help. Also, let me know when you are 
finished so I can review your work.


was (Author: skonto):
[~beera] Let me know if you want any kind of help.

> Add an Imputer for preparing data
> -
>
> Key: FLINK-5785
> URL: https://issues.apache.org/jira/browse/FLINK-5785
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>
> We need to add an Imputer as described in [1].
> "The Imputer class provides basic strategies for imputing missing values, 
> either using the mean, the median or the most frequent value of the row or 
> column in which the missing values are located. This class also allows for 
> different missing values encodings."
> References
> 1. http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing
> 2. 
> http://scikit-learn.org/stable/auto_examples/missing_values.html#sphx-glr-auto-examples-missing-values-py
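To make the quoted strategies concrete, here is a small, self-contained Java 
sketch of the "mean" variant; it is not the flink-ml API, just an illustration of 
replacing missing entries (encoded here as NaN) with the column mean. The median 
and most-frequent strategies differ only in how the replacement value is computed.

{code}
import java.util.Arrays;

// Hypothetical illustration of mean imputation; not flink-ml code.
public class MeanImputerSketch {

    // Replaces NaN entries in a column with the mean of the present values.
    static double[] imputeMean(double[] column) {
        double sum = 0.0;
        int present = 0;
        for (double v : column) {
            if (!Double.isNaN(v)) {
                sum += v;
                present++;
            }
        }
        double mean = present > 0 ? sum / present : 0.0;
        double[] imputed = column.clone();
        for (int i = 0; i < imputed.length; i++) {
            if (Double.isNaN(imputed[i])) {
                imputed[i] = mean;
            }
        }
        return imputed;
    }

    public static void main(String[] args) {
        // [1.0, NaN, 3.0] -> [1.0, 2.0, 3.0]
        System.out.println(Arrays.toString(imputeMean(new double[]{1.0, Double.NaN, 3.0})));
    }
}
{code}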





[jira] [Comment Edited] (FLINK-5785) Add an Imputer for preparing data

2017-03-08 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901830#comment-15901830
 ] 

Stavros Kontopoulos edited comment on FLINK-5785 at 3/8/17 8:25 PM:


[~beera] Let me know if you want any kind of help.


was (Author: skonto):
[~beera] If you do that, please follow my approach here for raising exceptions:
https://github.com/skonto/flink/blob/6736a66ae1bd2c0efbaa29cf170cabd18b281a8a/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/preprocessing/Normalizer.scala#L127

I will finish that PR for unit scaling ASAP.

> Add an Imputer for preparing data
> -
>
> Key: FLINK-5785
> URL: https://issues.apache.org/jira/browse/FLINK-5785
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>
> We need to add an Imputer as described in [1].
> "The Imputer class provides basic strategies for imputing missing values, 
> either using the mean, the median or the most frequent value of the row or 
> column in which the missing values are located. This class also allows for 
> different missing values encodings."
> References
> 1. http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing
> 2. 
> http://scikit-learn.org/stable/auto_examples/missing_values.html#sphx-glr-auto-examples-missing-values-py





[jira] [Comment Edited] (FLINK-5785) Add an Imputer for preparing data

2017-03-08 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901830#comment-15901830
 ] 

Stavros Kontopoulos edited comment on FLINK-5785 at 3/8/17 8:25 PM:


[~beera] If you do that, please follow my approach here for raising exceptions:
https://github.com/skonto/flink/blob/6736a66ae1bd2c0efbaa29cf170cabd18b281a8a/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/preprocessing/Normalizer.scala#L127

I will finish that PR for unit scaling ASAP.


was (Author: skonto):
[~beera] If you do that, please follow my approach here:
https://github.com/skonto/flink/blob/6736a66ae1bd2c0efbaa29cf170cabd18b281a8a/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/preprocessing/Normalizer.scala#L127

I will finish that PR for unit scaling ASAP.

> Add an Imputer for preparing data
> -
>
> Key: FLINK-5785
> URL: https://issues.apache.org/jira/browse/FLINK-5785
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>
> We need to add an Imputer as described in [1].
> "The Imputer class provides basic strategies for imputing missing values, 
> either using the mean, the median or the most frequent value of the row or 
> column in which the missing values are located. This class also allows for 
> different missing values encodings."
> References
> 1. http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing
> 2. 
> http://scikit-learn.org/stable/auto_examples/missing_values.html#sphx-glr-auto-examples-missing-values-py





[jira] [Commented] (FLINK-5929) Allow Access to Per-Window State in ProcessWindowFunction

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901887#comment-15901887
 ] 

ASF GitHub Bot commented on FLINK-5929:
---

Github user sjwiesman commented on the issue:

https://github.com/apache/flink/pull/3479
  
@aljoscha I made the changes you asked for. Just a heads up, there are a 
number of files that were superficially changed when migrating from apply -> 
process but are otherwise untouched. 


> Allow Access to Per-Window State in ProcessWindowFunction
> -
>
> Key: FLINK-5929
> URL: https://issues.apache.org/jira/browse/FLINK-5929
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Aljoscha Krettek
>
> Right now, the state that a {{WindowFunction}} or {{ProcessWindowFunction}} 
> can access is scoped to the key of the window but not the window itself. That 
> is, state is global across all windows for a given key.
> For some use cases it is beneficial to keep state scoped to a window. For 
> example, if a user expects several {{Trigger}} firings (due to early and late 
> firings), they can keep per-window state to carry some information across 
> those firings.
> The per-window state has to be cleaned up in some way. For this I see two 
> options:
>  - Keep track of all state that a user uses and clean up when we reach the 
> window GC horizon.
>  - Add a method {{cleanup()}} to {{ProcessWindowFunction}} which is called 
> when we reach the window GC horizon that users can/should use to clean up 
> their state.
> On the API side, we can add a method {{windowState()}} on 
> {{ProcessWindowFunction.Context}} that retrieves the per-window state and 
> {{globalState()}} that would allow access to the (already available) global 
> state. The {{Context}} would then look like this:
> {code}
> /**
>  * The context holding window metadata
>  */
> public abstract class Context {
> /**
>  * @return The window that is being evaluated.
>  */
> public abstract W window();
> /**
>  * State accessor for per-key and per-window state.
>  */
> KeyedStateStore windowState();
> /**
>  * State accessor for per-key global state.
>  */
> KeyedStateStore globalState();
> }
> {code}
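A hypothetical usage sketch of the proposed accessor: a ProcessWindowFunction 
that counts how many times a window has fired by keeping a counter in per-window 
state. Only windowState() follows the draft Context above; the surrounding 
ProcessWindowFunction and state signatures are assumed from the 1.2-era 
DataStream API.

{code}
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Hypothetical sketch: counts trigger firings per key and window using the
// proposed windowState() accessor (not yet part of the released API).
public class FiringCounter
        extends ProcessWindowFunction<Long, String, String, TimeWindow> {

    private final ValueStateDescriptor<Integer> firings =
            new ValueStateDescriptor<>("firings", Integer.class, 0);

    @Override
    public void process(String key,
                        Context context,
                        Iterable<Long> elements,
                        Collector<String> out) throws Exception {
        // Scoped to this key AND this window, unlike the existing global state.
        ValueState<Integer> count = context.windowState().getState(firings);
        count.update(count.value() + 1);
        out.collect(key + " firing #" + count.value() + " for " + context.window());
    }
}
{code}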





[GitHub] flink issue #3479: [FLINK-5929] Allow Access to Per-Window State in ProcessW...

2017-03-08 Thread sjwiesman
Github user sjwiesman commented on the issue:

https://github.com/apache/flink/pull/3479
  
@aljoscha I made the changes you asked for. Just a heads up, there are a 
number of files that were superficially changed when migrating from apply -> 
process but are otherwise untouched. 




[jira] [Commented] (FLINK-5731) Split up CI builds

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901867#comment-15901867
 ] 

ASF GitHub Bot commented on FLINK-5731:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3344
  
I agree that we have this issue again.
If you want faster feedback on your pull requests, I recommend setting up 
Travis for your Flink fork. Then, every time you push to your repo, you'll 
immediately start your own build.


> Split up CI builds
> --
>
> Key: FLINK-5731
> URL: https://issues.apache.org/jira/browse/FLINK-5731
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Tests
>Reporter: Ufuk Celebi
>Assignee: Robert Metzger
>Priority: Critical
>
> Test builds regularly time out because we are hitting the Travis 50 min 
> limit. Previously, we worked around this by splitting up the tests into 
> groups. I think we have to split them further.





[GitHub] flink issue #3344: FLINK-5731 Spilt up tests into three disjoint groups

2017-03-08 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3344
  
I agree that we have this issue again.
If you want faster feedback on your pull requests, I recommend setting up 
Travis for your Flink fork. Then, every time you push to your repo, you'll 
immediately start your own build.




[jira] [Comment Edited] (FLINK-5785) Add an Imputer for preparing data

2017-03-08 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901830#comment-15901830
 ] 

Stavros Kontopoulos edited comment on FLINK-5785 at 3/8/17 7:25 PM:


[~beera] If you do that, please follow my approach here:
https://github.com/skonto/flink/blob/6736a66ae1bd2c0efbaa29cf170cabd18b281a8a/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/preprocessing/Normalizer.scala#L127

I will finish that PR for unit scaling ASAP.


was (Author: skonto):
[~beera] If you do that, please follow my approach here:
https://github.com/skonto/flink/blob/6736a66ae1bd2c0efbaa29cf170cabd18b281a8a/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/preprocessing/Normalizer.scala#L127
I will finish that PR ASAP.

> Add an Imputer for preparing data
> -
>
> Key: FLINK-5785
> URL: https://issues.apache.org/jira/browse/FLINK-5785
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>
> We need to add an Imputer as described in [1].
> "The Imputer class provides basic strategies for imputing missing values, 
> either using the mean, the median or the most frequent value of the row or 
> column in which the missing values are located. This class also allows for 
> different missing values encodings."
> References
> 1. http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing
> 2. 
> http://scikit-learn.org/stable/auto_examples/missing_values.html#sphx-glr-auto-examples-missing-values-py





[jira] [Commented] (FLINK-6002) Documentation: 'MacOS X' under 'Download and Start Flink' in Quickstart page is not rendered correctly

2017-03-08 Thread Bowen Li (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901831#comment-15901831
 ] 

Bowen Li commented on FLINK-6002:
-

Hi guys, how can I assign this issue to myself?

> Documentation: 'MacOS X' under 'Download and Start Flink' in Quickstart page 
> is not rendered correctly
> --
>
> Key: FLINK-6002
> URL: https://issues.apache.org/jira/browse/FLINK-6002
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Bowen Li
>Priority: Trivial
> Fix For: 1.2.1
>
>
> On 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
>  , command lines in "MacOS X" part are not rendered correctly.
> This is because the markdown is misformatted - it doesn't leave a blank line 
> between text and code block.
> So the fix is simple - add a blank line between text and code block.





[GitHub] flink pull request #3484: [FLINK-4460] Side Outputs in Flink

2017-03-08 Thread chenqin
Github user chenqin commented on a diff in the pull request:

https://github.com/apache/flink/pull/3484#discussion_r104997566
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java
 ---
@@ -60,6 +60,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import com.google.common.collect.Iterables;
--- End diff --

That sounds right, good catch! 
Thanks for fixing!




[jira] [Commented] (FLINK-4460) Side Outputs in Flink

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901828#comment-15901828
 ] 

ASF GitHub Bot commented on FLINK-4460:
---

Github user chenqin commented on a diff in the pull request:

https://github.com/apache/flink/pull/3484#discussion_r104997566
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java
 ---
@@ -60,6 +60,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import com.google.common.collect.Iterables;
--- End diff --

That sounds right, good catch! 
Thanks for fixing!


> Side Outputs in Flink
> -
>
> Key: FLINK-4460
> URL: https://issues.apache.org/jira/browse/FLINK-4460
> Project: Flink
>  Issue Type: New Feature
>  Components: Core, DataStream API
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Chen Qin
>Assignee: Chen Qin
>  Labels: latearrivingevents, sideoutput
>
> https://docs.google.com/document/d/1vg1gpR8JL4dM07Yu4NyerQhhVvBlde5qdqnuJv4LcV4/edit?usp=sharing





[jira] [Commented] (FLINK-5785) Add an Imputer for preparing data

2017-03-08 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901830#comment-15901830
 ] 

Stavros Kontopoulos commented on FLINK-5785:


[~beera] If you do that, please follow my approach here:
https://github.com/skonto/flink/blob/6736a66ae1bd2c0efbaa29cf170cabd18b281a8a/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/preprocessing/Normalizer.scala#L127
I will finish that PR ASAP.

> Add an Imputer for preparing data
> -
>
> Key: FLINK-5785
> URL: https://issues.apache.org/jira/browse/FLINK-5785
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>
> We need to add an Imputer as described in [1].
> "The Imputer class provides basic strategies for imputing missing values, 
> either using the mean, the median or the most frequent value of the row or 
> column in which the missing values are located. This class also allows for 
> different missing values encodings."
> References
> 1. http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing
> 2. 
> http://scikit-learn.org/stable/auto_examples/missing_values.html#sphx-glr-auto-examples-missing-values-py





[jira] [Updated] (FLINK-6002) Documentation: 'MacOS X' under 'Download and Start Flink' in Quickstart page is not rendered correctly

2017-03-08 Thread Bowen Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-6002:

Summary: Documentation: 'MacOS X' under 'Download and Start Flink' in 
Quickstart page is not rendered correctly  (was: "Setup: Download and Start 
Flink" / "MacOS X" in Quickstart page is not rendered correctly)

> Documentation: 'MacOS X' under 'Download and Start Flink' in Quickstart page 
> is not rendered correctly
> --
>
> Key: FLINK-6002
> URL: https://issues.apache.org/jira/browse/FLINK-6002
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Bowen Li
>Priority: Trivial
> Fix For: 1.2.1
>
>
> On 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
>  , command lines in "MacOS X" part are not rendered correctly.
> This is because the markdown is misformatted - it doesn't leave a blank line 
> between text and code block.
> So the fix is simple - add a blank line between text and code block.





[jira] [Commented] (FLINK-6002) "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not rendered correctly

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901803#comment-15901803
 ] 

ASF GitHub Bot commented on FLINK-6002:
---

GitHub user phoenixjiangnan opened a pull request:

https://github.com/apache/flink/pull/3497

[FLINK-6002] 'MacOS X' section in Quickstart page is not rendered correctly

Currently, ["MacOS X" section under "Setup: Download and Start Flink" in 
Quickstart 
page](https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink)
 is not rendered correctly. 

![before](https://cloud.githubusercontent.com/assets/1892692/23719173/28a8daaa-03ef-11e7-9663-26486723c652.png)


I fixed this markdown issue by simply adding a blank line between text 
paragraph and code block.

Now it looks like this:

![after](https://cloud.githubusercontent.com/assets/1892692/23719258/7b0645da-03ef-11e7-8c15-e8484d71364c.png)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phoenixjiangnan/flink FLINK-6002

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3497.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3497


commit d34944a05a2584d0f3880c100dc3f71b48ffe94e
Author: phoenixjiangnan 
Date:   2017-03-08T19:03:21Z

[FLINK-6002] 'MacOS X' under 'Download and Start Flink' in Quickstart page 
is not rendered correctly




> "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not 
> rendered correctly
> --
>
> Key: FLINK-6002
> URL: https://issues.apache.org/jira/browse/FLINK-6002
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Bowen Li
>Priority: Trivial
> Fix For: 1.2.1
>
>
> On 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
>  , command lines in "MacOS X" part are not rendered correctly.
> This is because the markdown is misformatted - it doesn't leave a blank line 
> between text and code block.
> So the fix is simple - add a blank line between text and code block.





[GitHub] flink pull request #3497: [FLINK-6002] 'MacOS X' section in Quickstart page ...

2017-03-08 Thread phoenixjiangnan
GitHub user phoenixjiangnan opened a pull request:

https://github.com/apache/flink/pull/3497

[FLINK-6002] 'MacOS X' section in Quickstart page is not rendered correctly

Currently, ["MacOS X" section under "Setup: Download and Start Flink" in 
Quickstart 
page](https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink)
 is not rendered correctly. 

![before](https://cloud.githubusercontent.com/assets/1892692/23719173/28a8daaa-03ef-11e7-9663-26486723c652.png)


I fixed this markdown issue by simply adding a blank line between text 
paragraph and code block.

Now it looks like this:

![after](https://cloud.githubusercontent.com/assets/1892692/23719258/7b0645da-03ef-11e7-8c15-e8484d71364c.png)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phoenixjiangnan/flink FLINK-6002

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3497.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3497


commit d34944a05a2584d0f3880c100dc3f71b48ffe94e
Author: phoenixjiangnan 
Date:   2017-03-08T19:03:21Z

[FLINK-6002] 'MacOS X' under 'Download and Start Flink' in Quickstart page 
is not rendered correctly






[jira] [Commented] (FLINK-4460) Side Outputs in Flink

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901801#comment-15901801
 ] 

ASF GitHub Bot commented on FLINK-4460:
---

Github user chenqin commented on a diff in the pull request:

https://github.com/apache/flink/pull/3484#discussion_r104996349
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamGraph.java
 ---
@@ -85,6 +86,7 @@
private Set sources;
private Set sinks;
private Map> virtualSelectNodes;
+   private Map> virtualOutputNodes;
--- End diff --

sounds good


> Side Outputs in Flink
> -
>
> Key: FLINK-4460
> URL: https://issues.apache.org/jira/browse/FLINK-4460
> Project: Flink
>  Issue Type: New Feature
>  Components: Core, DataStream API
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Chen Qin
>Assignee: Chen Qin
>  Labels: latearrivingevents, sideoutput
>
> https://docs.google.com/document/d/1vg1gpR8JL4dM07Yu4NyerQhhVvBlde5qdqnuJv4LcV4/edit?usp=sharing





[GitHub] flink pull request #3484: [FLINK-4460] Side Outputs in Flink

2017-03-08 Thread chenqin
Github user chenqin commented on a diff in the pull request:

https://github.com/apache/flink/pull/3484#discussion_r104996349
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamGraph.java
 ---
@@ -85,6 +86,7 @@
private Set sources;
private Set sinks;
private Map> virtualSelectNodes;
+   private Map> virtualOutputNodes;
--- End diff --

sounds good




[jira] [Commented] (FLINK-4460) Side Outputs in Flink

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901795#comment-15901795
 ] 

ASF GitHub Bot commented on FLINK-4460:
---

Github user chenqin commented on a diff in the pull request:

https://github.com/apache/flink/pull/3484#discussion_r104995971
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamGraph.java
 ---
@@ -333,32 +356,41 @@ public void addEdge(Integer upStreamVertexID, Integer 
downStreamVertexID, int ty
downStreamVertexID,
typeNumber,
null,
-   new ArrayList());
+   new ArrayList(), null);
 
}
 
private void addEdgeInternal(Integer upStreamVertexID,
Integer downStreamVertexID,
int typeNumber,
StreamPartitioner partitioner,
-   List outputNames) {
-
+   List outputNames,
+   OutputTag outputTag) {
 
-   if (virtualSelectNodes.containsKey(upStreamVertexID)) {
+   if (virtualOutputNodes.containsKey(upStreamVertexID)) {
+   int virtualId = upStreamVertexID;
+   upStreamVertexID = virtualOutputNodes.get(virtualId).f0;
+   if (outputTag == null) {
+   // selections that happen downstream override 
earlier selections
--- End diff --

sounds good to me!


> Side Outputs in Flink
> -
>
> Key: FLINK-4460
> URL: https://issues.apache.org/jira/browse/FLINK-4460
> Project: Flink
>  Issue Type: New Feature
>  Components: Core, DataStream API
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Chen Qin
>Assignee: Chen Qin
>  Labels: latearrivingevents, sideoutput
>
> https://docs.google.com/document/d/1vg1gpR8JL4dM07Yu4NyerQhhVvBlde5qdqnuJv4LcV4/edit?usp=sharing





[GitHub] flink pull request #3484: [FLINK-4460] Side Outputs in Flink

2017-03-08 Thread chenqin
Github user chenqin commented on a diff in the pull request:

https://github.com/apache/flink/pull/3484#discussion_r104995971
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamGraph.java
 ---
@@ -333,32 +356,41 @@ public void addEdge(Integer upStreamVertexID, Integer 
downStreamVertexID, int ty
downStreamVertexID,
typeNumber,
null,
-   new ArrayList());
+   new ArrayList(), null);
 
}
 
private void addEdgeInternal(Integer upStreamVertexID,
Integer downStreamVertexID,
int typeNumber,
StreamPartitioner partitioner,
-   List outputNames) {
-
+   List outputNames,
+   OutputTag outputTag) {
 
-   if (virtualSelectNodes.containsKey(upStreamVertexID)) {
+   if (virtualOutputNodes.containsKey(upStreamVertexID)) {
+   int virtualId = upStreamVertexID;
+   upStreamVertexID = virtualOutputNodes.get(virtualId).f0;
+   if (outputTag == null) {
+   // selections that happen downstream override 
earlier selections
--- End diff --

sounds good to me!




[jira] [Commented] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-08 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901756#comment-15901756
 ] 

Vladislav Pernin commented on FLINK-6001:
-

Another reproducer version, without a "sleeping map" but with a slow source 
function that tries to mimic reality:
https://github.com/vpernin/flink-window-npe/tree/slow-serializer

> NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and 
> allowedLateness
> ---
>
> Key: FLINK-6001
> URL: https://issues.apache.org/jira/browse/FLINK-6001
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Streaming
>Affects Versions: 1.2.0
>Reporter: Vladislav Pernin
>Priority: Critical
>
> I tried to isolate the problem in a small and simple reproducer by extracting 
> the data from my real setup.
> It fails with an NPE at:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
> ~[flink-runtime_2.11-1.2.0.jar:1.2.0]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}
> It fails only with the Thread.sleep; if you remove it, it won't fail.
> So, you may have to increase the sleep time depending on your environment.
> I know this is not a very rigorous test, but this is the only way I've found 
> to reproduce it.
> You can find the reproducer here:
> https://github.com/vpernin/flink-window-npe





[jira] [Updated] (FLINK-6002) "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not rendered correctly

2017-03-08 Thread Bowen Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-6002:

Description: 
On 
https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
 , command lines in "MacOS X" part are not rendered correctly.

This is because the markdown is misformatted - it doesn't leave a blank line 
between text and code block.

So the fix is simple - add a blank line between text and code block.



  was:
On 
https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
 , command lines in "MacOS X" part are not rendered correctly.

This is because the markdown is misformatted - it doesn't leave a blank line 
between text and code block.





> "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not 
> rendered correctly
> --
>
> Key: FLINK-6002
> URL: https://issues.apache.org/jira/browse/FLINK-6002
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Bowen Li
>Priority: Trivial
> Fix For: 1.2.1
>
>
> On 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
>  , command lines in "MacOS X" part are not rendered correctly.
> This is because the markdown is misformatted - it doesn't leave a blank line 
> between text and code block.
> So the fix is simple - add a blank line between text and code block.





[jira] [Updated] (FLINK-6002) "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not rendered correctly

2017-03-08 Thread Bowen Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-6002:

Description: 
On 
https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
 , command lines in "MacOS X" part are not rendered correctly.

This is because the markdown is misformatted - it doesn't leave a blank line 
between text and code block.




  was:
On 
https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
 , command lines in "MacOS X" part are not rendered correctly.





> "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not 
> rendered correctly
> --
>
> Key: FLINK-6002
> URL: https://issues.apache.org/jira/browse/FLINK-6002
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Bowen Li
>Priority: Trivial
> Fix For: 1.2.1
>
>
> On 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
>  , command lines in "MacOS X" part are not rendered correctly.
> This is because the markdown is misformatted - it doesn't leave a blank line 
> between text and code block.





[jira] [Updated] (FLINK-6002) "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not rendered correctly

2017-03-08 Thread Bowen Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-6002:

Description: 
On 
https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
 , command lines in "MacOS X" part are not rendered correctly.




> "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not 
> rendered correctly
> --
>
> Key: FLINK-6002
> URL: https://issues.apache.org/jira/browse/FLINK-6002
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Bowen Li
>Priority: Trivial
> Fix For: 1.2.1
>
>
> On 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html#setup-download-and-start-flink
>  , command lines in "MacOS X" part are not rendered correctly.





[jira] [Updated] (FLINK-6002) "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not rendered correctly

2017-03-08 Thread Bowen Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-6002:

Affects Version/s: 1.2.0
 Priority: Trivial  (was: Major)
Fix Version/s: 1.2.1
  Component/s: Documentation
  Summary: "Setup: Download and Start Flink" / "MacOS X" in 
Quickstart page is not rendered correctly  (was: Setup: Download and Start 
Flink/MacOS X in Quickstart page is not rendered correctly)

> "Setup: Download and Start Flink" / "MacOS X" in Quickstart page is not 
> rendered correctly
> --
>
> Key: FLINK-6002
> URL: https://issues.apache.org/jira/browse/FLINK-6002
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Bowen Li
>Priority: Trivial
> Fix For: 1.2.1
>
>






[jira] [Created] (FLINK-6002) Setup: Download and Start Flink/MacOS X in Quickstart page is not rendered correctly

2017-03-08 Thread Bowen Li (JIRA)
Bowen Li created FLINK-6002:
---

 Summary: Setup: Download and Start Flink/MacOS X in Quickstart 
page is not rendered correctly
 Key: FLINK-6002
 URL: https://issues.apache.org/jira/browse/FLINK-6002
 Project: Flink
  Issue Type: Bug
Reporter: Bowen Li








[GitHub] flink pull request #3494: [FLINK-5635] [docker] Improve Docker tooling

2017-03-08 Thread patricklucas
Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104986831
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

I see, just a terminology thing: "image" refers to the binary blob Docker Hub 
stores for download; the Flink repo contains a Dockerfile for generating that 
image. :)

This is mainly to follow suit with the other Apache projects I have found 
with official images; they all tend to host their Dockerfiles totally 
separately. Needless to say, the repo will be wholly licensed under the Apache 
license, just not maintained under the ASF.

And we're definitely committed to maintaining these official images going 
forward, updating them as a part of the Flink release process. If they need an 
individual maintainer point of contact, that can be me and I can hand it off 
later if I have to.




[jira] [Commented] (FLINK-5635) Improve Docker tooling to make it easier to build images and launch Flink via Docker tools

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901718#comment-15901718
 ] 

ASF GitHub Bot commented on FLINK-5635:
---

Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104986831
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

I see, just a terminology thing: "image" refers to the binary blob Docker Hub 
stores for download; the Flink repo contains a Dockerfile for generating that 
image. :)

This is mainly to follow suit with the other Apache projects I have found 
with official images; they all tend to host their Dockerfiles totally 
separately. Needless to say, the repo will be wholly licensed under the Apache 
license, just not maintained under the ASF.

And we're definitely committed to maintaining these official images going 
forward, updating them as a part of the Flink release process. If they need an 
individual maintainer point of contact, that can be me and I can hand it off 
later if I have to.


> Improve Docker tooling to make it easier to build images and launch Flink via 
> Docker tools
> --
>
> Key: FLINK-5635
> URL: https://issues.apache.org/jira/browse/FLINK-5635
> Project: Flink
>  Issue Type: Improvement
>  Components: Docker
>Affects Versions: 1.2.0
>Reporter: Jamie Grier
>Assignee: Patrick Lucas
>
> This is a bit of a catch-all ticket for general improvements to the Flink on 
> Docker experience.
> Things to improve:
>   - Make it possible to build a Docker image from your own flink-dist 
> directory as well as official releases.
>   - Make it possible to override the image name so a user can more easily 
> publish these images to their Docker repository
>   - Provide scripts that show how to properly run on Docker Swarm or similar 
> environments with overlay networking (Kubernetes) without using host 
> networking.
>   - Log to stdout rather than to files.
>   - Work properly with docker-compose for local deployment as well as 
> production deployments (Swarm/k8s)





[jira] [Commented] (FLINK-5635) Improve Docker tooling to make it easier to build images and launch Flink via Docker tools

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901705#comment-15901705
 ] 

ASF GitHub Bot commented on FLINK-5635:
---

Github user iemejia commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104984654
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

I am not talking about a published image on Docker Hub, but about the image that 
exists in the flink repo. The whole reason I jumped in and worked on this image 
was that I was aware the community was behind the image and that it would be 
improved/updated; people can't be aware of the image's existence/maintenance if 
we don't show it somewhere.

OK, for the image on dataArtisans it makes sense, but to be honest I would 
prefer it to still be at Apache; at least I hope it preserves the license 
(I hadn't seen you work there, nice).

And yes, when you propose an image, you have to show that you plan to 
maintain it, or that you are part of the community.
https://github.com/docker-library/official-images#maintainership
This is important because they (Docker) want images to get security updates 
/ improvements.




> Improve Docker tooling to make it easier to build images and launch Flink via 
> Docker tools
> --
>
> Key: FLINK-5635
> URL: https://issues.apache.org/jira/browse/FLINK-5635
> Project: Flink
>  Issue Type: Improvement
>  Components: Docker
>Affects Versions: 1.2.0
>Reporter: Jamie Grier
>Assignee: Patrick Lucas
>
> This is a bit of a catch-all ticket for general improvements to the Flink on 
> Docker experience.
> Things to improve:
>   - Make it possible to build a Docker image from your own flink-dist 
> directory as well as official releases.
>   - Make it possible to override the image name so a user can more easily 
> publish these images to their Docker repository
>   - Provide scripts that show how to properly run on Docker Swarm or similar 
> environments with overlay networking (Kubernetes) without using host 
> networking.
>   - Log to stdout rather than to files.
>   - Work properly with docker-compose for local deployment as well as 
> production deployments (Swarm/k8s)





[GitHub] flink pull request #3494: [FLINK-5635] [docker] Improve Docker tooling

2017-03-08 Thread iemejia
Github user iemejia commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104984654
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

I am not talking about a published image on Docker Hub, but about the image that 
exists in the flink repo. The whole reason I jumped in and worked on this image 
was that I was aware the community was behind the image and that it would be 
improved/updated; people can't be aware of the image's existence/maintenance if 
we don't show it somewhere.

OK, for the image on dataArtisans it makes sense, but to be honest I would 
prefer it to still be at Apache; at least I hope it preserves the license 
(I hadn't seen you work there, nice).

And yes, when you propose an image, you have to show that you plan to 
maintain it, or that you are part of the community.
https://github.com/docker-library/official-images#maintainership
This is important because they (Docker) want images to get security updates 
/ improvements.






[GitHub] flink pull request #3494: [FLINK-5635] [docker] Improve Docker tooling

2017-03-08 Thread patricklucas
Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104982158
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

Which existing image are you referring to that we need to keep updated? I'm 
not aware of a Flink project- or data Artisans-maintained image on Docker Hub.

I'm almost done with the basic functionality for the first Alpine image 
(and a script to template its Dockerfile) but there will be plenty of room for 
improvements. Once the new repo is live I'll ping you and you can take a look.

I don't know if I need to be the "official maintainer", but the repo will 
live under the [data Artisans GitHub org](https://github.com/dataArtisans), so 
if not me someone at dA will likely handle contributions to that repo.




[jira] [Commented] (FLINK-5635) Improve Docker tooling to make it easier to build images and launch Flink via Docker tools

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901687#comment-15901687
 ] 

ASF GitHub Bot commented on FLINK-5635:
---

Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104982158
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

Which existing image are you referring to that we need to keep updated? I'm 
not aware of a Flink project- or data Artisans-maintained image on Docker Hub.

I'm almost done with the basic functionality for the first Alpine image 
(and a script to template its Dockerfile) but there will be plenty of room for 
improvements. Once the new repo is live I'll ping you and you can take a look.

I don't know if I need to be the "official maintainer", but the repo will 
live under the [data Artisans GitHub org](https://github.com/dataArtisans), so 
if not me someone at dA will likely handle contributions to that repo.


> Improve Docker tooling to make it easier to build images and launch Flink via 
> Docker tools
> --
>
> Key: FLINK-5635
> URL: https://issues.apache.org/jira/browse/FLINK-5635
> Project: Flink
>  Issue Type: Improvement
>  Components: Docker
>Affects Versions: 1.2.0
>Reporter: Jamie Grier
>Assignee: Patrick Lucas
>
> This is a bit of a catch-all ticket for general improvements to the Flink on 
> Docker experience.
> Things to improve:
>   - Make it possible to build a Docker image from your own flink-dist 
> directory as well as official releases.
>   - Make it possible to override the image name so a user can more easily 
> publish these images to their Docker repository
>   - Provide scripts that show how to properly run on Docker Swarm or similar 
> environments with overlay networking (Kubernetes) without using host 
> networking.
>   - Log to stdout rather than to files.
>   - Work properly with docker-compose for local deployment as well as 
> production deployments (Swarm/k8s)





[GitHub] flink issue #3493: [FLINK-3026] Publish the flink docker container to the do...

2017-03-08 Thread iemejia
Github user iemejia commented on the issue:

https://github.com/apache/flink/pull/3493
  
Oops, not the ubuntu one, the non-alpine one :)




[jira] [Commented] (FLINK-3026) Publish the flink docker container to the docker registry

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901677#comment-15901677
 ] 

ASF GitHub Bot commented on FLINK-3026:
---

Github user iemejia commented on the issue:

https://github.com/apache/flink/pull/3493
  
Oops, not the ubuntu one, the non-alpine one :)


> Publish the flink docker container to the docker registry
> -
>
> Key: FLINK-3026
> URL: https://issues.apache.org/jira/browse/FLINK-3026
> Project: Flink
>  Issue Type: Task
>  Components: Build System, Docker
>Reporter: Omer Katz
>Assignee: Ismaël Mejía
>  Labels: Deployment, Docker
>
> There's a dockerfile that can be used to build a docker container already in 
> the repository. It'd be awesome to just be able to pull it instead of 
> building it ourselves.
> The dockerfile can be found at 
> https://github.com/apache/flink/tree/master/flink-contrib/docker-flink
> It also doesn't point to the latest version of Flink which I fixed in 
> https://github.com/apache/flink/pull/1366





[jira] [Commented] (FLINK-3026) Publish the flink docker container to the docker registry

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901673#comment-15901673
 ] 

ASF GitHub Bot commented on FLINK-3026:
---

Github user iemejia commented on the issue:

https://github.com/apache/flink/pull/3493
  
I will, and I'll add the commits for the ubuntu image; I'll probably create a 
JIRA for this.


> Publish the flink docker container to the docker registry
> -
>
> Key: FLINK-3026
> URL: https://issues.apache.org/jira/browse/FLINK-3026
> Project: Flink
>  Issue Type: Task
>  Components: Build System, Docker
>Reporter: Omer Katz
>Assignee: Ismaël Mejía
>  Labels: Deployment, Docker
>
> There's a dockerfile that can be used to build a docker container already in 
> the repository. It'd be awesome to just be able to pull it instead of 
> building it ourselves.
> The dockerfile can be found at 
> https://github.com/apache/flink/tree/master/flink-contrib/docker-flink
> It also doesn't point to the latest version of Flink which I fixed in 
> https://github.com/apache/flink/pull/1366





[GitHub] flink issue #3493: [FLINK-3026] Publish the flink docker container to the do...

2017-03-08 Thread iemejia
Github user iemejia commented on the issue:

https://github.com/apache/flink/pull/3493
  
I will, and I will add the commits for the Ubuntu image; I will probably create a 
JIRA for this.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5635) Improve Docker tooling to make it easier to build images and launch Flink via Docker tools

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901670#comment-15901670
 ] 

ASF GitHub Bot commented on FLINK-5635:
---

Github user iemejia commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104980420
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

Perfect, makes sense, but we should probably keep updating the existing 
image for some time, because this is how most people are aware that it 
exists, or add a README noting that it was moved and is maintained in the new repo.
Are you going to be the official maintainer? It is OK with me if you want 
to take the lead there; I just want to learn the whole process of creating an 
official image. I can also help with maintenance if you need it.


> Improve Docker tooling to make it easier to build images and launch Flink via 
> Docker tools
> --
>
> Key: FLINK-5635
> URL: https://issues.apache.org/jira/browse/FLINK-5635
> Project: Flink
>  Issue Type: Improvement
>  Components: Docker
>Affects Versions: 1.2.0
>Reporter: Jamie Grier
>Assignee: Patrick Lucas
>
> This is a bit of a catch-all ticket for general improvements to the Flink on 
> Docker experience.
> Things to improve:
>   - Make it possible to build a Docker image from your own flink-dist 
> directory as well as official releases.
>   - Make it possible to override the image name so a user can more easily 
> publish these images to their Docker repository
>   - Provide scripts that show how to properly run on Docker Swarm or similar 
> environments with overlay networking (Kubernetes) without using host 
> networking.
>   - Log to stdout rather than to files.
>   - Work properly with docker-compose for local deployment as well as 
> production deployments (Swarm/k8s)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3494: [FLINK-5635] [docker] Improve Docker tooling

2017-03-08 Thread iemejia
Github user iemejia commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104980420
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

Perfect, makes sense, but we should probably keep updating the existing 
image for some time, because this is how most people are aware that it 
exists, or add a README noting that it was moved and is maintained in the new repo.
Are you going to be the official maintainer? It is OK with me if you want 
to take the lead there; I just want to learn the whole process of creating an 
official image. I can also help with maintenance if you need it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3026) Publish the flink docker container to the docker registry

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901665#comment-15901665
 ] 

ASF GitHub Bot commented on FLINK-3026:
---

Github user patricklucas commented on the issue:

https://github.com/apache/flink/pull/3493
  
Since this PR is no longer directly related to publishing official Docker 
images, could you change its title? (See my reply to your comment on #3494)


> Publish the flink docker container to the docker registry
> -
>
> Key: FLINK-3026
> URL: https://issues.apache.org/jira/browse/FLINK-3026
> Project: Flink
>  Issue Type: Task
>  Components: Build System, Docker
>Reporter: Omer Katz
>Assignee: Ismaël Mejía
>  Labels: Deployment, Docker
>
> There's a dockerfile that can be used to build a docker container already in 
> the repository. It'd be awesome to just be able to pull it instead of 
> building it ourselves.
> The dockerfile can be found at 
> https://github.com/apache/flink/tree/master/flink-contrib/docker-flink
> It also doesn't point to the latest version of Flink which I fixed in 
> https://github.com/apache/flink/pull/1366



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3493: [FLINK-3026] Publish the flink docker container to the do...

2017-03-08 Thread patricklucas
Github user patricklucas commented on the issue:

https://github.com/apache/flink/pull/3493
  
Since this PR is no longer directly related to publishing official Docker 
images, could you change its title? (See my reply to your comment on #3494)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5635) Improve Docker tooling to make it easier to build images and launch Flink via Docker tools

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901660#comment-15901660
 ] 

ASF GitHub Bot commented on FLINK-5635:
---

Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104979246
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

I spoke with @jgrier and we decided to take the route of creating a 
separate repo for the official images with all the Dockerfile variants we need, 
like most other projects have.

In the meantime, I think we can go ahead and merge this change (and yours 
in #3493) as a general improvement for Flink developers and users who just want 
a nice way to create Docker images from a release or their own Flink repo.


> Improve Docker tooling to make it easier to build images and launch Flink via 
> Docker tools
> --
>
> Key: FLINK-5635
> URL: https://issues.apache.org/jira/browse/FLINK-5635
> Project: Flink
>  Issue Type: Improvement
>  Components: Docker
>Affects Versions: 1.2.0
>Reporter: Jamie Grier
>Assignee: Patrick Lucas
>
> This is a bit of a catch-all ticket for general improvements to the Flink on 
> Docker experience.
> Things to improve:
>   - Make it possible to build a Docker image from your own flink-dist 
> directory as well as official releases.
>   - Make it possible to override the image name so a user can more easily 
> publish these images to their Docker repository
>   - Provide scripts that show how to properly run on Docker Swarm or similar 
> environments with overlay networking (Kubernetes) without using host 
> networking.
>   - Log to stdout rather than to files.
>   - Work properly with docker-compose for local deployment as well as 
> production deployments (Swarm/k8s)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3494: [FLINK-5635] [docker] Improve Docker tooling

2017-03-08 Thread patricklucas
Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3494#discussion_r104979246
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -36,22 +31,24 @@ ENV PATH $PATH:$FLINK_HOME/bin
 EXPOSE 8081
 EXPOSE 6123
 
+# flink-dist can point to a directory or a tarball on the local system
+ARG flink_dist=NOT_SET
+
 # Install build dependencies and flink
+ADD $flink_dist $FLINK_INSTALL_PATH
 RUN set -x && \
-  mkdir -p $FLINK_INSTALL_PATH && \
-  apk --update add --virtual build-dependencies curl && \
-  curl -s $(curl -s 
https://www.apache.org/dyn/closer.cgi\?preferred\=true)flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-hadoop${HADOOP_VERSION}-scala_${SCALA_VERSION}.tgz
 | \
--- End diff --

I spoke with @jgrier and we decided to take the route of creating a 
separate repo for the official images with all the Dockerfile variants we need, 
like most other projects have.

In the meantime, I think we can go ahead and merge this change (and yours 
in #3493) as a general improvement for Flink developers and users who just want 
a nice way to create Docker images from a release or their own Flink repo.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3679) Allow Kafka consumer to skip corrupted messages

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901624#comment-15901624
 ] 

ASF GitHub Bot commented on FLINK-3679:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/3314
  
I've made some final general improvements in 
https://github.com/tzulitai/flink/tree/PR-FLINK-3679.

Doing a Travis run before merging:
https://travis-ci.org/tzulitai/flink/builds/209054624


> Allow Kafka consumer to skip corrupted messages
> ---
>
> Key: FLINK-3679
> URL: https://issues.apache.org/jira/browse/FLINK-3679
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Kafka Connector
>Reporter: Jamie Grier
>Assignee: Haohui Mai
>
> There are a couple of issues with the DeserializationSchema API that I think 
> should be improved.  This request has come to me via an existing Flink user.
> The main issue is simply that the API assumes that there is a one-to-one 
> mapping between inputs and outputs. In reality there are scenarios where one 
> input message (say from Kafka) might actually map to zero or more logical 
> elements in the pipeline.
> Particularly important here is the case where you receive a message from a 
> source (such as Kafka) and say the raw bytes don't deserialize properly.  
> Right now the only recourse is to throw IOException and therefore fail the 
> job.  
> This is definitely not good since bad data is a reality and failing the job 
> is not the right option.  If the job fails we'll just end up replaying the 
> bad data and the whole thing will start again.
> Instead in this case it would be best if the user could just return the empty 
> set.
> The other case is where one input message should logically be multiple output 
> messages.  This case is probably less important since there are other ways to 
> do this but in general it might be good to make the 
> DeserializationSchema.deserialize() method return a collection rather than a 
> single element.
> Maybe we need to support a DeserializationSchema variant that has semantics 
> more like that of FlatMap.
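
For illustration only, here is a minimal Scala sketch of the flatMap-style variant the 
description hints at. The trait and class names are made up; this is not the interface 
merged for this issue, only a sketch of the idea that a schema should be able to emit 
zero, one, or many records per raw message.

{code}
import java.nio.charset.StandardCharsets

import org.apache.flink.util.Collector

// Hypothetical collector-based schema: emit zero, one, or many records per raw
// message; emitting nothing is how corrupted input gets skipped.
trait CollectingDeserializationSchema[T] extends Serializable {
  def deserialize(message: Array[Byte], out: Collector[T]): Unit
}

// Toy example: drop payloads that are obviously not JSON objects instead of
// throwing an IOException and failing the job.
class SkipCorruptJsonSchema extends CollectingDeserializationSchema[String] {
  override def deserialize(message: Array[Byte], out: Collector[String]): Unit = {
    val text = new String(message, StandardCharsets.UTF_8)
    if (text.startsWith("{") && text.endsWith("}")) {
      out.collect(text)
    }
  }
}
{code}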



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3314: [FLINK-3679] DeserializationSchema should handle zero or ...

2017-03-08 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/3314
  
I've made some final general improvements in 
https://github.com/tzulitai/flink/tree/PR-FLINK-3679.

Doing a Travis run before merging:
https://travis-ci.org/tzulitai/flink/builds/209054624


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-08 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901616#comment-15901616
 ] 

Vladislav Pernin commented on FLINK-6001:
-

I have simplified the reproducer, but it fails less often. Please use the 
following branch:
https://github.com/vpernin/flink-window-npe/tree/simpler-but-fails-less-often

> NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and 
> allowedLateness
> ---
>
> Key: FLINK-6001
> URL: https://issues.apache.org/jira/browse/FLINK-6001
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Streaming
>Affects Versions: 1.2.0
>Reporter: Vladislav Pernin
>Priority: Critical
>
> I tried to isolate the problem in a small and simple reproducer by extracting 
> the data from my real setup.
> It fails with an NPE at:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
> ~[flink-runtime_2.11-1.2.0.jar:1.2.0]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}
> It fails only with the Thread.sleep in place; if you remove it, it won't fail.
> You may have to increase the sleep time depending on your environment.
> I know this is not a very rigorous test, but it is the only way I've found 
> to reproduce it.
> You can find the reproducer here:
> https://github.com/vpernin/flink-window-npe



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5731) Split up CI builds

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901611#comment-15901611
 ] 

ASF GitHub Bot commented on FLINK-5731:
---

Github user addisonj commented on the issue:

https://github.com/apache/flink/pull/3344
  
Even with this in place, I am still regularly seeing timeouts on quite a 
few PRs. In addition to the Travis queue being backed up, it's taking 12+ hours 
to get feedback, which is mostly just build timeouts :|


> Split up CI builds
> --
>
> Key: FLINK-5731
> URL: https://issues.apache.org/jira/browse/FLINK-5731
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Tests
>Reporter: Ufuk Celebi
>Assignee: Robert Metzger
>Priority: Critical
>
> Test builds regularly time out because we are hitting the Travis 50 min 
> limit. Previously, we worked around this by splitting up the tests into 
> groups. I think we have to split them further.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3344: FLINK-5731 Spilt up tests into three disjoint groups

2017-03-08 Thread addisonj
Github user addisonj commented on the issue:

https://github.com/apache/flink/pull/3344
  
Even with this in place, I am still regularly seeing timeouts on quite a 
few PRs. In addition to the Travis queue being backed up, it's taking 12+ hours 
to get feedback, which is mostly just build timeouts :|


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3489: [FLINK-5983] [table] Convert FOR into WHILE loops for agg...

2017-03-08 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3489
  
Thanks @fhueske. I could not find anything suspicious. I will rebase and 
merge this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-08 Thread Vladislav Pernin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pernin updated FLINK-6001:

Description: 
I tried to isolate the problem in a small and simple reproducer by extracting the 
data from my real setup.

It fails with an NPE at:
{noformat}
java.lang.NullPointerException: null
at 
org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272) 
~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
~[flink-runtime_2.11-1.2.0.jar:1.2.0]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
{noformat}

It fails only with the Thread.sleep in place; if you remove it, it won't fail.
You may have to increase the sleep time depending on your environment.
I know this is not a very rigorous test, but it is the only way I've found 
to reproduce it.

You can find the reproducer here:
https://github.com/vpernin/flink-window-npe

  was:
I tried to isolate the problem in a small and simple reproducer by extracting the 
data from my real setup.

It fails with an NPE at:
{noformat}
java.lang.NullPointerException: null
at 
org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272) 
~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
~[flink-runtime_2.11-1.2.0.jar:1.2.0]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
{noformat}

It fails only with the Thread.sleep in place; if you remove it, it won't fail.
I know this is not a very rigorous test, but it is the only way I've found 
to reproduce it.

You can find the reproducer here:
https://github.com/vpernin/flink-window-npe


> NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and 
> allowedLateness
> ---
>
> Key: FLINK-6001
> URL: https://issues.apache.org/jira/browse/FLINK-6001
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Streaming
>Affects Versions: 1.2.0
>Reporter: Vladislav Pernin
>Priority: Critical
>
> I tried to isolate the problem in a small and simple reproducer by extracting 
> the data from my real setup.
> It fails with an NPE at:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
>  

[jira] [Commented] (FLINK-1579) Create a Flink History Server

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901581#comment-15901581
 ] 

ASF GitHub Bot commented on FLINK-1579:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3460
  
@uce I've addressed the remaining comments. Still undecided on the handler 
re-use; tests/documentation still missing.


> Create a Flink History Server
> -
>
> Key: FLINK-1579
> URL: https://issues.apache.org/jira/browse/FLINK-1579
> Project: Flink
>  Issue Type: New Feature
>  Components: Distributed Coordination
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>
> Right now it is not possible to analyze the job results for jobs that ran on 
> YARN, because we'll lose the information once the JobManager has stopped.
> Therefore, I propose to implement a "Flink History Server" which serves the 
> results from these jobs.
> I haven't started thinking about the implementation, but I suspect it 
> involves some JSON files stored in HDFS :)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5984) Add resetAccumulator method for AggregateFunction

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901574#comment-15901574
 ] 

ASF GitHub Bot commented on FLINK-5984:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3496
  
Thanks @shaoxuan-wang. The PR looks good. 
FLINK-5983 has been merged and FLINK-5963 will be merged soon.
It would be great if you could rebase your code on master once that is 
done.

Thanks, Fabian


> Add resetAccumulator method for AggregateFunction
> -
>
> Key: FLINK-5984
> URL: https://issues.apache.org/jira/browse/FLINK-5984
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shaoxuan Wang
>
> Right now we have to create a new accumulator object if we just want to reset 
> it. We should allow passing the old one as a {{reuse}} object to 
> {{AggregateFunction#createAccumulator}}. The aggregate function can then 
> decide whether it wants to create a new object or reset the old one.
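
As a rough illustration of the proposal, here is a self-contained Scala sketch. It 
deliberately does not extend the real Table API classes (whose exact signatures are 
still being settled in the linked PRs); it only shows the idea of resetting the old 
accumulator in place instead of allocating a new one.

{code}
// Sketch only: plain Scala classes standing in for the Table API UDAGG types.
class CountAccumulator(var count: Long = 0L) extends Serializable

class CountAggregate extends Serializable {
  def createAccumulator(): CountAccumulator = new CountAccumulator()

  def accumulate(acc: CountAccumulator, value: Any): Unit = {
    if (value != null) acc.count += 1
  }

  // The proposed addition: reset an existing accumulator instead of
  // discarding it and creating a fresh object.
  def resetAccumulator(acc: CountAccumulator): Unit = {
    acc.count = 0L
  }

  def getValue(acc: CountAccumulator): Long = acc.count
}
{code}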



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3460: [FLINK-1579] Implement History Server

2017-03-08 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3460
  
@uce I've addressed the remaining comments. Still undecided on the handler 
re-use; tests/documentation still missing.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3496: [FLINK-5984] [table] add resetAccumulator method for Aggr...

2017-03-08 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3496
  
Thanks @shaoxuan-wang. The PR looks good. 
FLINK-5983 has been merged and FLINK-5963 will be merged soon.
It would be great if you could rebase your code on master once that is 
done.

Thanks, Fabian


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5983) Replace for/foreach/map in aggregates by while loops

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901566#comment-15901566
 ] 

ASF GitHub Bot commented on FLINK-5983:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3489


> Replace for/foreach/map in aggregates by while loops
> 
>
> Key: FLINK-5983
> URL: https://issues.apache.org/jira/browse/FLINK-5983
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Fabian Hueske
> Fix For: 1.3.0
>
>
> Right now there is a mixture of different kinds of loops within aggregate 
> functions. Although performance is not the main goal at the moment, we should 
> focus on performant execution, especially in these runtime functions, 
> e.g. {{DataSetTumbleCountWindowAggReduceGroupFunction}}.
> We should replace loops, maps, etc. with primitive while loops.
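
A minimal sketch of the kind of rewrite meant here, using a simplified field-wise sum 
rather than the actual DataSetTumbleCountWindowAggReduceGroupFunction code: the while 
version avoids the per-element function calls of the Scala collection operations.

{code}
// Simplified stand-in for an aggregate merge: sum rows field by field.
object WhileLoopSketch {

  def sumFieldsForeach(rows: Array[Array[Double]]): Array[Double] = {
    val acc = new Array[Double](rows(0).length)
    rows.foreach { row =>
      row.indices.foreach(i => acc(i) += row(i))
    }
    acc
  }

  // Same logic with primitive while loops, avoiding per-element closure calls.
  def sumFieldsWhile(rows: Array[Array[Double]]): Array[Double] = {
    val acc = new Array[Double](rows(0).length)
    var r = 0
    while (r < rows.length) {
      val row = rows(r)
      var i = 0
      while (i < row.length) {
        acc(i) += row(i)
        i += 1
      }
      r += 1
    }
    acc
  }
}
{code}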



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3489: [FLINK-5983] [table] Convert FOR into WHILE loops ...

2017-03-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3489


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-5983) Replace for/foreach/map in aggregates by while loops

2017-03-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-5983.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

Fixed in 1.3.0: adbf846f23881b98ab9dc5886a0b066b8aa1ded6

> Replace for/foreach/map in aggregates by while loops
> 
>
> Key: FLINK-5983
> URL: https://issues.apache.org/jira/browse/FLINK-5983
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Fabian Hueske
> Fix For: 1.3.0
>
>
> Right now there is a mixture of different kinds of loops within aggregate 
> functions. Although performance is not the main goal at the moment, we should 
> focus on performant execution, especially in these runtime functions, 
> e.g. {{DataSetTumbleCountWindowAggReduceGroupFunction}}.
> We should replace loops, maps, etc. with primitive while loops.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-08 Thread Vladislav Pernin (JIRA)
Vladislav Pernin created FLINK-6001:
---

 Summary: NPE on TumblingEventTimeWindows with 
ContinuousEventTimeTrigger and allowedLateness
 Key: FLINK-6001
 URL: https://issues.apache.org/jira/browse/FLINK-6001
 Project: Flink
  Issue Type: Bug
  Components: DataStream API, Streaming
Affects Versions: 1.2.0
Reporter: Vladislav Pernin
Priority: Critical


I tried to isolate the problem in a small and simple reproducer by extracting the 
data from my real setup.

It fails with an NPE at:
{noformat}
java.lang.NullPointerException: null
at 
org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272) 
~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
~[flink-runtime_2.11-1.2.0.jar:1.2.0]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
{noformat}

It fails only with the Thread.sleep in place; if you remove it, it won't fail.
I know this is not a very rigorous test, but it is the only way I've found 
to reproduce it.

You can find the reproducer here:
https://github.com/vpernin/flink-window-npe
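
For readers who do not want to open the repository, here is a minimal Scala sketch of 
the kind of pipeline named in the title. The element type, timestamps, and reduce 
function are made up for illustration; the actual reproducer is the linked project.

{code}
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger
import org.apache.flink.streaming.api.windowing.windows.TimeWindow

object WindowNpeSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

    env
      .fromElements(("a", 1000L, 1), ("a", 2000L, 2), ("b", 61000L, 3))
      .assignAscendingTimestamps(_._2)
      .keyBy(_._1)
      .window(TumblingEventTimeWindows.of(Time.minutes(1)))
      .trigger(ContinuousEventTimeTrigger.of[TimeWindow](Time.seconds(10)))
      .allowedLateness(Time.minutes(5))
      .reduce((a, b) => (a._1, math.max(a._2, b._2), a._3 + b._3))
      .print()

    env.execute("tumbling window, continuous trigger, allowed lateness")
  }
}
{code}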



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5983) Replace for/foreach/map in aggregates by while loops

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901545#comment-15901545
 ] 

ASF GitHub Bot commented on FLINK-5983:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3489
  
Thanks @fhueske. I could not find anything suspicious. I will rebase and 
merge this...


> Replace for/foreach/map in aggregates by while loops
> 
>
> Key: FLINK-5983
> URL: https://issues.apache.org/jira/browse/FLINK-5983
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Fabian Hueske
>
> Right now there is a mixture of different kinds of loops within aggregate 
> functions. Although performance is not the main goal at the moment, we should 
> focus on performant execution, especially in these runtime functions, 
> e.g. {{DataSetTumbleCountWindowAggReduceGroupFunction}}.
> We should replace loops, maps, etc. with primitive while loops.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5785) Add an Imputer for preparing data

2017-03-08 Thread Anna Beer (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901539#comment-15901539
 ] 

Anna Beer commented on FLINK-5785:
--

I just started to try it.

> Add an Imputer for preparing data
> -
>
> Key: FLINK-5785
> URL: https://issues.apache.org/jira/browse/FLINK-5785
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>
> We need to add an Imputer as described in [1].
> "The Imputer class provides basic strategies for imputing missing values, 
> either using the mean, the median or the most frequent value of the row or 
> column in which the missing values are located. This class also allows for 
> different missing values encodings."
> References
> 1. http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing
> 2. 
> http://scikit-learn.org/stable/auto_examples/missing_values.html#sphx-glr-auto-examples-missing-values-py
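
As a conceptual sketch of mean imputation only, here is a small example written against 
the plain DataSet API rather than the FlinkML Transformer interface such an Imputer 
would eventually implement; missing values are encoded as NaN, and the class name is 
made up.

{code}
import org.apache.flink.api.scala._

object MeanImputerSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Missing values are encoded as NaN.
    val rows: DataSet[Array[Double]] = env.fromElements(
      Array(1.0, Double.NaN),
      Array(3.0, 4.0),
      Array(Double.NaN, 8.0))

    // Per-column sums and counts over the non-missing entries,
    // collected to the client for brevity.
    val (sums, counts) = rows
      .map(r => (r.map(v => if (v.isNaN) 0.0 else v), r.map(v => if (v.isNaN) 0L else 1L)))
      .reduce { (a, b) =>
        (a._1.zip(b._1).map(p => p._1 + p._2), a._2.zip(b._2).map(p => p._1 + p._2))
      }
      .collect()
      .head
    val means = sums.zip(counts).map { case (s, c) => if (c > 0) s / c else 0.0 }

    // Replace every NaN with the mean of its column.
    rows
      .map(r => r.zip(means).map { case (v, m) => if (v.isNaN) m else v }.mkString(","))
      .print()
  }
}
{code}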



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5963) Remove preparation mapper of DataSetAggregate

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901518#comment-15901518
 ] 

ASF GitHub Bot commented on FLINK-5963:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3472
  
Thanks for the reviews @shaoxuan-wang and @sunjincheng121.
Will merge this PR


> Remove preparation mapper of DataSetAggregate
> -
>
> Key: FLINK-5963
> URL: https://issues.apache.org/jira/browse/FLINK-5963
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Minor
>
> With the new UDAGG interface we do not need the preparation mapper anymore. 
> It adds overhead because 
> - it is another operator
> - it prevents using {{AggregateFunction.accumulate()}} in a combiner or 
> reducer.
> Hence, it should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3472: [FLINK-5963] [table] Remove prepare mapper of DataSetAggr...

2017-03-08 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3472
  
Thanks for the reviews @shaoxuan-wang and @sunjincheng121.
Will merge this PR


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5804) Add [non-partitioned] processing time OVER RANGE BETWEEN UNBOUNDED PRECEDING aggregation to SQL

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901515#comment-15901515
 ] 

ASF GitHub Bot commented on FLINK-5804:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3491
  
Thanks for the update @sunjincheng121. 
The PR looks good to merge.


> Add [non-partitioned] processing time OVER RANGE BETWEEN UNBOUNDED PRECEDING 
> aggregation to SQL
> ---
>
> Key: FLINK-5804
> URL: https://issues.apache.org/jira/browse/FLINK-5804
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> The goal of this issue is to add support for OVER RANGE aggregations on 
> processing time streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT 
>   a, 
>   SUM(b) OVER (ORDER BY procTime() RANGE BETWEEN UNBOUNDED PRECEDING AND 
> CURRENT ROW) AS sumB,
>   MIN(b) OVER (ORDER BY procTime() RANGE BETWEEN UNBOUNDED PRECEDING AND 
> CURRENT ROW) AS minB
> FROM myStream
> {code}
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - Since no PARTITION BY clause is specified, the execution will be single 
> threaded.
> - The ORDER BY clause may only have procTime() as parameter. procTime() is a 
> parameterless scalar function that just indicates processing time mode.
> - bounded PRECEDING is not supported (see FLINK-5654)
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow up issues. If we find that some 
> of the restrictions are trivial to address, we can add the functionality in 
> this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER ROW aggregates
> - Translation from Calcite's RelNode representation (LogicalProject with 
> RexOver expression).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3491: [FLINK-5804] [table] Add support for procTime non-partiti...

2017-03-08 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3491
  
Thanks for the update @sunjincheng121. 
The PR looks good to merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3389: [FLINK-5881] [table] ScalarFunction(UDF) should support v...

2017-03-08 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3389
  
Thanks @clarkyzl. I will review the PR tomorrow.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5594) Use higher-resolution graphic on Flink project site homepage

2017-03-08 Thread Mike Winters (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901507#comment-15901507
 ] 

Mike Winters commented on FLINK-5594:
-

Addressed by this PR: https://github.com/apache/flink-web/pull/49

> Use higher-resolution graphic on Flink project site homepage
> 
>
> Key: FLINK-5594
> URL: https://issues.apache.org/jira/browse/FLINK-5594
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Mike Winters
>Assignee: Mike Winters
>Priority: Minor
>  Labels: newbie, website
> Attachments: flink-home-graphic.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> On some monitors, the graphic on the Flink project site homepage 
> (http://flink.apache.org/index.html) is blurry. It should be replaced with a 
> higher-resolution image. The current image's filename is 
> "flink-front-graphic-update.png".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5881) ScalarFunction(UDF) should support variable types and variable arguments

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901488#comment-15901488
 ] 

ASF GitHub Bot commented on FLINK-5881:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3389
  
Thanks @clarkyzl. I will review the PR tomorrow.


> ScalarFunction(UDF) should support variable types and variable arguments  
> -
>
> Key: FLINK-5881
> URL: https://issues.apache.org/jira/browse/FLINK-5881
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
>
> As a sub-task of FLINK-5826, we would like to support the ScalarFunction 
> first to make the review a little bit easier.
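
For context, a minimal Scala sketch of what the requested capability would look like 
from the user side, assuming the usual eval-method convention for Table API scalar 
functions; the class name and behaviour are made up, and this is not the code under 
review.

{code}
import org.apache.flink.table.functions.ScalarFunction

// A UDF whose eval method takes a variable number of arguments.
class ConcatAll extends ScalarFunction {
  def eval(args: String*): String = args.mkString("|")
}
{code}

Registering and calling such a function would presumably work like any other scalar UDF.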



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5047) Add sliding group-windows for batch tables

2017-03-08 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901480#comment-15901480
 ] 

Timo Walther commented on FLINK-5047:
-

Fixed for time-based windows in 1.3.0: 31a57c5a89d6d22ccb629c2adfe4ffb87441e6dd

> Add sliding group-windows for batch tables
> --
>
> Key: FLINK-5047
> URL: https://issues.apache.org/jira/browse/FLINK-5047
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Timo Walther
>
> Add Slide group-windows for batch tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
> There are two ways to implement sliding windows for batch:
> 1. replicate the output in order to assign keys for overlapping windows. This 
> is probably the more straightforward implementation and supports any 
> aggregation function but blows up the data volume.
> 2. if the aggregation functions are combinable / pre-aggregatable, we can 
> also find the largest tumbling window size from which the sliding windows can 
> be assembled. This is basically the technique used to express sliding windows 
> with plain SQL (GROUP BY + OVER clauses). For a sliding window Slide(10 
> minutes, 2 minutes) this would mean to first compute aggregates of 
> non-overlapping (tumbling) 2 minute windows and assembling consecutively 5 of 
> these into a sliding window (could be done in a MapPartition with sorted 
> input). The implementation could be done as an optimizer rule to split the 
> sliding aggregate into a tumbling aggregate and a SQL WINDOW operator. Maybe 
> it makes sense to implement the WINDOW clause first and reuse this for 
> sliding windows.
> 3. There is also a third, hybrid solution: Doing the pre-aggregation on the 
> largest non-overlapping windows (as in 2) and replicating these results and 
> processing those as in the 1) approach. The benefits of this are that it a) is 
> based on the implementation that supports non-combinable aggregates (which is 
> required in any case) and b) that it does not require the implementation of 
> the SQL WINDOW operator. Internally, this can be implemented again as an 
> optimizer rule that translates the SlidingWindow into a pre-aggregating 
> TumblingWindow and a final SlidingWindow (with replication).
> see FLINK-4692 for more discussion
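
A small self-contained Scala sketch of the arithmetic behind approach 2 above: one way 
to pick the pre-aggregation pane is the greatest common divisor of window size and 
slide, and each sliding window is then assembled from size/pane consecutive panes 
(5 panes of 2 minutes for the Slide(10 minutes, 2 minutes) example).

{code}
object SlidingWindowPanes {

  @annotation.tailrec
  private def gcd(a: Long, b: Long): Long = if (b == 0) a else gcd(b, a % b)

  // Largest tumbling pane from which the sliding windows can be assembled.
  def paneMillis(sizeMillis: Long, slideMillis: Long): Long = gcd(sizeMillis, slideMillis)

  def panesPerWindow(sizeMillis: Long, slideMillis: Long): Long =
    sizeMillis / paneMillis(sizeMillis, slideMillis)

  def main(args: Array[String]): Unit = {
    val size = 10 * 60 * 1000L   // Slide(10 minutes, 2 minutes)
    val slide = 2 * 60 * 1000L
    // Prints: pane=120000 ms, panes per window=5
    println(s"pane=${paneMillis(size, slide)} ms, panes per window=${panesPerWindow(size, slide)}")
  }
}
{code}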



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5047) Add sliding group-windows for batch tables

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901477#comment-15901477
 ] 

ASF GitHub Bot commented on FLINK-5047:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3364


> Add sliding group-windows for batch tables
> --
>
> Key: FLINK-5047
> URL: https://issues.apache.org/jira/browse/FLINK-5047
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Timo Walther
>
> Add Slide group-windows for batch tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
> There are two ways to implement sliding windows for batch:
> 1. replicate the output in order to assign keys for overlapping windows. This 
> is probably the more straightforward implementation and supports any 
> aggregation function but blows up the data volume.
> 2. if the aggregation functions are combinable / pre-aggregatable, we can 
> also find the largest tumbling window size from which the sliding windows can 
> be assembled. This is basically the technique used to express sliding windows 
> with plain SQL (GROUP BY + OVER clauses). For a sliding window Slide(10 
> minutes, 2 minutes) this would mean to first compute aggregates of 
> non-overlapping (tumbling) 2 minute windows and assembling consecutively 5 of 
> these into a sliding window (could be done in a MapPartition with sorted 
> input). The implementation could be done as an optimizer rule to split the 
> sliding aggregate into a tumbling aggregate and a SQL WINDOW operator. Maybe 
> it makes sense to implement the WINDOW clause first and reuse this for 
> sliding windows.
> 3. There is also a third, hybrid solution: Doing the pre-aggregation on the 
> largest non-overlapping windows (as in 2) and replicating these results and 
> processing those as in the 1) approach. The benefits of this are that it a) is 
> based on the implementation that supports non-combinable aggregates (which is 
> required in any case) and b) that it does not require the implementation of 
> the SQL WINDOW operator. Internally, this can be implemented again as an 
> optimizer rule that translates the SlidingWindow into a pre-aggregating 
> TumblingWindow and a final SlidingWindow (with replication).
> see FLINK-4692 for more discussion



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3364: [FLINK-5047] [table] Add sliding group-windows for...

2017-03-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3364


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5047) Add sliding group-windows for batch tables

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901463#comment-15901463
 ] 

ASF GitHub Bot commented on FLINK-5047:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/3364#discussion_r104948259
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/IncrementalAggregateAllWindowFunction.scala
 ---
@@ -53,7 +53,12 @@ class IncrementalAggregateAllWindowFunction[W <: Window](
 
 if (iterator.hasNext) {
   val record = iterator.next()
-  out.collect(record)
+  var i = 0
+  while (i < record.getArity) {
+output.setField(i, record.getField(0))
--- End diff --

Good point!


> Add sliding group-windows for batch tables
> --
>
> Key: FLINK-5047
> URL: https://issues.apache.org/jira/browse/FLINK-5047
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Timo Walther
>
> Add Slide group-windows for batch tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
> There are two ways to implement sliding windows for batch:
> 1. replicate the output in order to assign keys for overlapping windows. This 
> is probably the more straightforward implementation and supports any 
> aggregation function but blows up the data volume.
> 2. if the aggregation functions are combinable / pre-aggregatable, we can 
> also find the largest tumbling window size from which the sliding windows can 
> be assembled. This is basically the technique used to express sliding windows 
> with plain SQL (GROUP BY + OVER clauses). For a sliding window Slide(10 
> minutes, 2 minutes) this would mean to first compute aggregates of 
> non-overlapping (tumbling) 2 minute windows and assembling consecutively 5 of 
> these into a sliding window (could be done in a MapPartition with sorted 
> input). The implementation could be done as an optimizer rule to split the 
> sliding aggregate into a tumbling aggregate and a SQL WINDOW operator. Maybe 
> it makes sense to implement the WINDOW clause first and reuse this for 
> sliding windows.
> 3. There is also a third, hybrid solution: Doing the pre-aggregation on the 
> largest non-overlapping windows (as in 2) and replicating these results and 
> processing those as in the 1) approach. The benefits of this are that it a) is 
> based on the implementation that supports non-combinable aggregates (which is 
> required in any case) and b) that it does not require the implementation of 
> the SQL WINDOW operator. Internally, this can be implemented again as an 
> optimizer rule that translates the SlidingWindow into a pre-aggregating 
> TumblingWindow and a final SlidingWindow (with replication).
> see FLINK-4692 for more discussion



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3364: [FLINK-5047] [table] Add sliding group-windows for...

2017-03-08 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/3364#discussion_r104948259
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/IncrementalAggregateAllWindowFunction.scala
 ---
@@ -53,7 +53,12 @@ class IncrementalAggregateAllWindowFunction[W <: Window](
 
 if (iterator.hasNext) {
   val record = iterator.next()
-  out.collect(record)
+  var i = 0
+  while (i < record.getArity) {
+output.setField(i, record.getField(0))
--- End diff --

Good point!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3155) Update Flink docker version to latest stable Flink version

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901447#comment-15901447
 ] 

ASF GitHub Bot commented on FLINK-3155:
---

Github user kdombeck commented on a diff in the pull request:

https://github.com/apache/flink/pull/3490#discussion_r104945513
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -22,7 +22,7 @@ FROM java:8-jre-alpine
 RUN apk add --no-cache bash snappy
 
 # Configure Flink version
-ARG FLINK_VERSION=1.1.3
+ARG FLINK_VERSION=1.1.4
--- End diff --

The main reason for the upgrade I needed was that the mirrors did not 
have version 1.1.3; they only had 1.1.4 and 1.2.0. That is probably a 
different issue.

Example: http://mirrors.advancedhosters.com/apache/flink/


> Update Flink docker version to latest stable Flink version
> --
>
> Key: FLINK-3155
> URL: https://issues.apache.org/jira/browse/FLINK-3155
> Project: Flink
>  Issue Type: Task
>  Components: flink-contrib
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Maximilian Michels
>Priority: Minor
> Fix For: 1.0.0
>
>
> It would be nice to always set the Docker Flink binary URL to point to the 
> latest Flink version. Until then, this JIRA keeps track of the updates for 
> releases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3490: [FLINK-3155] Update Docker to use the latest 1.1.x...

2017-03-08 Thread kdombeck
Github user kdombeck commented on a diff in the pull request:

https://github.com/apache/flink/pull/3490#discussion_r104945513
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -22,7 +22,7 @@ FROM java:8-jre-alpine
 RUN apk add --no-cache bash snappy
 
 # Configure Flink version
-ARG FLINK_VERSION=1.1.3
+ARG FLINK_VERSION=1.1.4
--- End diff --

The main reason for the upgrade I needed was that the mirrors did not 
have version 1.1.3; they only had 1.1.4 and 1.2.0. That is probably a 
different issue.

Example: http://mirrors.advancedhosters.com/apache/flink/


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3155) Update Flink docker version to latest stable Flink version

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901439#comment-15901439
 ] 

ASF GitHub Bot commented on FLINK-3155:
---

Github user kdombeck commented on the issue:

https://github.com/apache/flink/pull/3490
  
Added to the existing JIRA 
[issue](https://issues.apache.org/jira/browse/FLINK-3155)


> Update Flink docker version to latest stable Flink version
> --
>
> Key: FLINK-3155
> URL: https://issues.apache.org/jira/browse/FLINK-3155
> Project: Flink
>  Issue Type: Task
>  Components: flink-contrib
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Maximilian Michels
>Priority: Minor
> Fix For: 1.0.0
>
>
> It would be nice to always set the Docker Flink binary URL to point to the 
> latest Flink version. Until then, this JIRA keeps track of the updates for 
> releases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3490: [FLINK-3155] Update Docker to use the latest 1.1.x versio...

2017-03-08 Thread kdombeck
Github user kdombeck commented on the issue:

https://github.com/apache/flink/pull/3490
  
Added to the existing JIRA 
[issue](https://issues.apache.org/jira/browse/FLINK-3155)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5414) Bump up Calcite version to 1.11

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901431#comment-15901431
 ] 

ASF GitHub Bot commented on FLINK-5414:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3426


> Bump up Calcite version to 1.11
> ---
>
> Key: FLINK-5414
> URL: https://issues.apache.org/jira/browse/FLINK-5414
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> The upcoming Calcite release 1.11 has a lot of stability fixes and new 
> features. We should update it for the Table API.
> E.g. we can hopefully merge FLINK-4864



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3338: [FLINK-5414] [table] Bump up Calcite version to 1....

2017-03-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3338


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-5414) Bump up Calcite version to 1.11

2017-03-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-5414.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

Fixed in 1.3.0: bec818d84a65a812290d49bca9cfd62de7379b1e

> Bump up Calcite version to 1.11
> ---
>
> Key: FLINK-5414
> URL: https://issues.apache.org/jira/browse/FLINK-5414
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
> Fix For: 1.3.0
>
>
> The upcoming Calcite release 1.11 has a lot of stability fixes and new 
> features. We should update it for the Table API.
> E.g. we can hopefully merge FLINK-4864



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5414) Bump up Calcite version to 1.11

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901430#comment-15901430
 ] 

ASF GitHub Bot commented on FLINK-5414:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3338


> Bump up Calcite version to 1.11
> ---
>
> Key: FLINK-5414
> URL: https://issues.apache.org/jira/browse/FLINK-5414
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> The upcoming Calcite release 1.11 has a lot of stability fixes and new 
> features. We should update it for the Table API.
> E.g. we can hopefully merge FLINK-4864



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3426: [FLINK-5414] [table] Bump up Calcite version to 1....

2017-03-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3426


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5985) Flink treats every task as stateful (making topology changes impossible)

2017-03-08 Thread Gyula Fora (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901424#comment-15901424
 ] 

Gyula Fora commented on FLINK-5985:
---

I was somehow assuming that this affects all jobs. Unfortunately I cannot send 
the program but I can try to reproduce it in a minimal example tomorrow. I know 
this used to work in 1.1 with pretty much the same job.

> Flink treats every task as stateful (making topology changes impossible)
> 
>
> Key: FLINK-5985
> URL: https://issues.apache.org/jira/browse/FLINK-5985
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0
>Reporter: Gyula Fora
>Priority: Critical
>
> It seems that Flink treats every task as stateful, so changing the topology 
> is not possible without setting a uid on every single operator.
> If the topology has an iteration, this is virtually impossible (or at least 
> gets super hacky).
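
For reference, the workaround described above relies on the DataStream API's explicit operator IDs. Below is a minimal sketch (the job, source, function, and uid strings are purely illustrative, not from the reporter's program) of pinning an operator's state to a stable ID:

{code}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExplicitUidJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("localhost", 9999)
           // Assigning a uid pins the operator's state to a stable identifier,
           // so a savepoint can still be matched after the topology changes.
           .map(new MapFunction<String, String>() {
               @Override
               public String map(String value) {
                   return value.toUpperCase();
               }
           }).uid("uppercase-map")
           .print();

        env.execute("explicit-uid-example");
    }
}
{code}

With explicit uids, a savepoint taken before a topology change can still be mapped onto the operators that kept their IDs; the complaint in this issue is that this currently has to be done for every operator, including those generated by iterations.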



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5984) Add resetAccumulator method for AggregateFunction

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901423#comment-15901423
 ] 

ASF GitHub Bot commented on FLINK-5984:
---

Github user shaoxuan-wang commented on the issue:

https://github.com/apache/flink/pull/3496
  
@fhueske , do you plan to merge FLINK-5983 and FLINK-5963 soon? 
This PR will probably need a rebase once your two PRs are merged, as the changes 
overlap.


> Add resetAccumulator method for AggregateFunction
> -
>
> Key: FLINK-5984
> URL: https://issues.apache.org/jira/browse/FLINK-5984
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shaoxuan Wang
>
> Right now we have to create a new accumulator object if we just want to reset 
> it. We should allow passing the old one as a {{reuse}} object to 
> {{AggregateFunction#createAccumulator}}. The aggregate function then can 
> decide if it wants to create a new object or reset the old one.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3496: [FLINK-5984] [table] add resetAccumulator method for Aggr...

2017-03-08 Thread shaoxuan-wang
Github user shaoxuan-wang commented on the issue:

https://github.com/apache/flink/pull/3496
  
@fhueske , do you plan to merge FLINK-5983 and FLINK-5963 soon? 
This PR will probably need a rebase once your two PRs are merged, as the changes 
overlap.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5984) Add resetAccumulator method for AggregateFunction

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901405#comment-15901405
 ] 

ASF GitHub Bot commented on FLINK-5984:
---

GitHub user shaoxuan-wang opened a pull request:

https://github.com/apache/flink/pull/3496

[FLINK-5984] [table] add resetAccumulator method for AggregateFunction

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shaoxuan-wang/flink F5984-submit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3496.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3496


commit d60ef3e67736c6cf366a486d0c57b13874c381bd
Author: shaoxuan-wang 
Date:   2017-03-08T15:10:28Z

[FLINK-5984] [table] add resetAccumulator method for AggregateFunction




> Add resetAccumulator method for AggregateFunction
> -
>
> Key: FLINK-5984
> URL: https://issues.apache.org/jira/browse/FLINK-5984
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shaoxuan Wang
>
> Right now we have to create a new accumulator object if we just want to reset 
> it. We should allow passing the old one as a {{reuse}} object to 
> {{AggregateFunction#createAccumulator}}. The aggregate function then can 
> decide if it wants to create a new object or reset the old one.
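
For illustration only, here is a self-contained sketch of the idea behind this change (not the code in this PR; the class, accumulator, and method names are hypothetical): instead of allocating a fresh accumulator whenever the aggregate is reset, the old object is cleared and reused through a resetAccumulator method.

{code}
// Hypothetical sketch, not the API added by this PR.
public class SumAggregate {

    /** Mutable accumulator holding the running sum. */
    public static class SumAcc {
        long sum;
    }

    public SumAcc createAccumulator() {
        return new SumAcc();    // today: a new allocation on every reset
    }

    public void resetAccumulator(SumAcc acc) {
        acc.sum = 0L;           // proposed: clear and reuse the existing object
    }

    public void accumulate(SumAcc acc, long value) {
        acc.sum += value;
    }

    public Long getValue(SumAcc acc) {
        return acc.sum;
    }
}
{code}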



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5414) Bump up Calcite version to 1.11

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901402#comment-15901402
 ] 

ASF GitHub Bot commented on FLINK-5414:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3426
  
Thanks for the update @haohui. I fixed the decimal issues by applying 
changes from @wuchong's PR. I think for now we should not do the primitive 
checking for scalar functions; that should be part of FLINK-5177. I will merge 
this now, but we should definitely solve FLINK-5177 soon. I will assign it to 
myself.


> Bump up Calcite version to 1.11
> ---
>
> Key: FLINK-5414
> URL: https://issues.apache.org/jira/browse/FLINK-5414
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> The upcoming Calcite release 1.11 has a lot of stability fixes and new 
> features. We should update it for the Table API.
> E.g. we can hopefully merge FLINK-4864



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FLINK-5177) Improve nullability handling

2017-03-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-5177:
---

Assignee: Timo Walther

> Improve nullability handling 
> -
>
> Key: FLINK-5177
> URL: https://issues.apache.org/jira/browse/FLINK-5177
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Currently, all fields of the Table API are marked as nullable by default. A 
> lot of null checking could be avoided if we would properly handle 
> nullability. Fields of tuples and POJOs with primitive fields can not be 
> null. Elements of primitive arrays too. It also includes parameters and 
> return types of user-defined scalar, table, and aggregate functions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3496: [FLINK-5984] [table] add resetAccumulator method f...

2017-03-08 Thread shaoxuan-wang
GitHub user shaoxuan-wang opened a pull request:

https://github.com/apache/flink/pull/3496

[FLINK-5984] [table] add resetAccumulator method for AggregateFunction

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shaoxuan-wang/flink F5984-submit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3496.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3496


commit d60ef3e67736c6cf366a486d0c57b13874c381bd
Author: shaoxuan-wang 
Date:   2017-03-08T15:10:28Z

[FLINK-5984] [table] add resetAccumulator method for AggregateFunction




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3426: [FLINK-5414] [table] Bump up Calcite version to 1.11

2017-03-08 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3426
  
Thanks for the update @haohui. I fixed the decimal issues by applying 
changes from @wuchong's PR. I think for now we should not do the primitive 
checking for scalar functions; that should be part of FLINK-5177. I will merge 
this now, but we should definitely solve FLINK-5177 soon. I will assign it to 
myself.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5971) JobLeaderIdService should time out registered jobs

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901398#comment-15901398
 ] 

ASF GitHub Bot commented on FLINK-5971:
---

Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/3488#discussion_r104937304
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * The set of configuration options relating to the ResourceManager
+ */
+@PublicEvolving
+public class ResourceManagerOptions {
+
+   public static final ConfigOption JOB_TIMEOUT = ConfigOptions
+   .key("resourcemanager.job.timeout")
--- End diff --

Maybe we can change the name to something like "inactive_job.timeout" or 
"idle_job.timeout" to make this config more specific.


> JobLeaderIdService should time out registered jobs
> --
>
> Key: FLINK-5971
> URL: https://issues.apache.org/jira/browse/FLINK-5971
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
>
> The {{JobLeaderIdService}} has no mechanism to time out inactive jobs. At the 
> moment it relies on the {{RunningJobsRegistry}} which only gives a heuristic 
> answer.
> We should remove the {{RunningJobsRegistry}} and register instead a timeout 
> for each job which does not have a job leader associated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3488: [FLINK-5971] [flip-6] Add timeout for registered j...

2017-03-08 Thread KurtYoung
Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/3488#discussion_r104937304
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * The set of configuration options relating to the ResourceManager
+ */
+@PublicEvolving
+public class ResourceManagerOptions {
+
+   public static final ConfigOption JOB_TIMEOUT = ConfigOptions
+   .key("resourcemanager.job.timeout")
--- End diff --

Maybe we can change the name to something like "inactive_job.timeout" or 
"idle_job.timeout" to make this config more specific.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-6000) Can not start HA cluster with start-cluster.sh

2017-03-08 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-6000:
---

 Summary: Can not start HA cluster with start-cluster.sh
 Key: FLINK-6000
 URL: https://issues.apache.org/jira/browse/FLINK-6000
 Project: Flink
  Issue Type: Bug
  Components: Startup Shell Scripts
Affects Versions: 1.2.0
Reporter: Dawid Wysakowicz


Right now it is impossible to start a cluster in zookeeper HA mode as 
described in the documentation by setting:

in conf/flink-conf.yaml:
{code}
high-availability: zookeeper
...
{code}

in conf/masters:
{code}
localhost:8081
localhost:8082
{code}

The problem is with the {{bin/config.sh}} file. If the value "zookeeper" is read 
from the config file, the variable {{HIGH_AVAILABILITY}} will be reset to "none" 
by the else branch. See the code below:

{code}
if [ -z "${HIGH_AVAILABILITY}" ]; then
 HIGH_AVAILABILITY=$(readFromConfig ${KEY_HIGH_AVAILABILITY} "" 
"${YAML_CONF}")
 if [ -z "${HIGH_AVAILABILITY}" ]; then
# Try deprecated value
DEPRECATED_HA=$(readFromConfig "recovery.mode" "" "${YAML_CONF}")
if [ -z "${DEPRECATED_HA}" ]; then
HIGH_AVAILABILITY="none"
elif [ ${DEPRECATED_HA} == "standalone" ]; then
# Standalone is now 'none'
HIGH_AVAILABILITY="none"
else
HIGH_AVAILABILITY=${DEPRECATED_HA}
fi
 else
 HIGH_AVAILABILITY="none" <-- it exits here
 fi
fi
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-5177) Improve nullability handling

2017-03-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-5177:

Description: Currently, all fields of the Table API are marked as nullable 
by default. A lot of null checking could be avoided if we would properly handle 
nullability. Fields of tuples and POJOs with primitive fields can not be null. 
Elements of primitive arrays too. It also includes parameters and return types 
of user-defined scalar, table, and aggregate functions.  (was: Currently, all 
fields of the Table API are marked as nullable by default. A lot of null 
checking could be avoided if we would properly handle nullability. Fields of 
tuples and POJOs with primitive fields can not be null. Elements of primitive 
arrays too. )

> Improve nullability handling 
> -
>
> Key: FLINK-5177
> URL: https://issues.apache.org/jira/browse/FLINK-5177
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> Currently, all fields of the Table API are marked as nullable by default. A 
> lot of null checking could be avoided if we would properly handle 
> nullability. Fields of tuples and POJOs with primitive fields can not be 
> null. Elements of primitive arrays too. It also includes parameters and 
> return types of user-defined scalar, table, and aggregate functions.
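
As a purely illustrative example (the POJO below is made up, not from the issue), this is the kind of type information that could be exploited: a primitive field can never hold null, so generated code would not need a null check for it, while a boxed field still would.

{code}
// Hypothetical POJO used only to illustrate the point above.
public class SensorReading {
    public int sensorId;        // primitive: can never be null, no null check needed
    public Double temperature;  // boxed: may be null, still needs a null check
    public long[] samples;      // primitive array: its elements can never be null
}
{code}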



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5653) Add processing time OVER ROWS BETWEEN x PRECEDING aggregation to SQL

2017-03-08 Thread Stefano Bortoli (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901391#comment-15901391
 ] 

Stefano Bortoli commented on FLINK-5653:


What is the plan for this issue? 

> Add processing time OVER ROWS BETWEEN x PRECEDING aggregation to SQL
> 
>
> Key: FLINK-5653
> URL: https://issues.apache.org/jira/browse/FLINK-5653
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Stefano Bortoli
>
> The goal of this issue is to add support for OVER ROWS aggregations on 
> processing time streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT 
>   a, 
>   SUM(b) OVER (PARTITION BY c ORDER BY procTime() ROWS BETWEEN 2 PRECEDING 
> AND CURRENT ROW) AS sumB,
>   MIN(b) OVER (PARTITION BY c ORDER BY procTime() ROWS BETWEEN 2 PRECEDING 
> AND CURRENT ROW) AS minB
> FROM myStream
> {code}
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - The PARTITION BY clause is optional (no partitioning results in single 
> threaded execution).
> - The ORDER BY clause may only have procTime() as parameter. procTime() is a 
> parameterless scalar function that just indicates processing time mode.
> - UNBOUNDED PRECEDING is not supported (see FLINK-5656)
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow up issues. If we find that some 
> of the restrictions are trivial to address, we can add the functionality in 
> this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER ROW aggregates
> - Translation from Calcite's RelNode representation (LogicalProject with 
> RexOver expression).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-5984) Add resetAccumulator method for AggregateFunction

2017-03-08 Thread Shaoxuan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaoxuan Wang updated FLINK-5984:
-
Summary: Add resetAccumulator method for AggregateFunction  (was: Allow 
reusing of accumulators in AggregateFunction)

> Add resetAccumulator method for AggregateFunction
> -
>
> Key: FLINK-5984
> URL: https://issues.apache.org/jira/browse/FLINK-5984
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shaoxuan Wang
>
> Right now we have to create a new accumulator object if we just want to reset 
> it. We should allow passing the old one as a {{reuse}} object to 
> {{AggregateFunction#createAccumulator}}. The aggregate function then can 
> decide if it wants to create a new object or reset the old one.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3468: [FLINK-5824] Fix String/byte conversions without explicit...

2017-03-08 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3468
  
Good change, thanks!
Merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5824) Fix String/byte conversions without explicit encoding

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901388#comment-15901388
 ] 

ASF GitHub Bot commented on FLINK-5824:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3468
  
Good change, thanks!
Merging this...


> Fix String/byte conversions without explicit encoding
> -
>
> Key: FLINK-5824
> URL: https://issues.apache.org/jira/browse/FLINK-5824
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Queryable State, State Backends, 
> Checkpointing, Webfrontend
>Reporter: Ufuk Celebi
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>
> In a couple of places we convert Strings to bytes and bytes back to Strings 
> without explicitly specifying an encoding. This can lead to problems when 
> client and server default encodings differ.
> The task of this JIRA is to go over the whole project and look for 
> conversions where we don't specify an encoding and fix it to specify UTF-8 
> explicitly.
> For starters, we can {{grep -R 'getBytes()' .}}, which already reveals many 
> problematic places.
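
A minimal sketch of the pattern being fixed (not code from any of the PRs; the string and class below are made up): the default-charset overloads are replaced by the explicit UTF-8 variants from {{java.nio.charset.StandardCharsets}}.

{code}
import java.nio.charset.StandardCharsets;

public class EncodingExample {
    public static void main(String[] args) {
        String key = "job-42/operator-state";

        // Problematic: uses the JVM's platform default charset,
        // which may differ between client and server.
        byte[] platformDependent = key.getBytes();

        // Fix described in the issue: name the charset explicitly.
        byte[] utf8 = key.getBytes(StandardCharsets.UTF_8);
        String roundTripped = new String(utf8, StandardCharsets.UTF_8);

        System.out.println(roundTripped.equals(key));                  // true
        System.out.println(platformDependent.length == utf8.length);   // JVM-dependent
    }
}
{code}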



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5781) Generation HTML from ConfigOption

2017-03-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901378#comment-15901378
 ] 

ASF GitHub Bot commented on FLINK-5781:
---

GitHub user dawidwys opened a pull request:

https://github.com/apache/flink/pull/3495

[FLINK-5781] Generation HTML from ConfigOption

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dawidwys/flink configHTML

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3495.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3495


commit 303e9dd06f895d8df33cb8b6494cd622a2dafaba
Author: Dawid Wysakowicz 
Date:   2017-03-08T14:57:33Z

[FLINK-5781] Generation HTML from ConfigOption




> Generation HTML from ConfigOption
> -
>
> Key: FLINK-5781
> URL: https://issues.apache.org/jira/browse/FLINK-5781
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Ufuk Celebi
>Assignee: Dawid Wysakowicz
>
> Use the ConfigOption instances to generate a HTML page that we can use to 
> include in the docs configuration page.
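
A rough sketch of the general idea (not the implementation in this PR; the reflection approach, class choice, and output layout are assumptions): collect the public static {{ConfigOption}} constants of an options class and emit one HTML table row per option, using only {{ConfigOption#key()}} and {{ConfigOption#defaultValue()}}.

{code}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

import org.apache.flink.configuration.ConfigOption;

// Sketch: prints an HTML table with one row per public static ConfigOption
// constant of the class named on the command line, e.g.
//   java ConfigOptionHtmlSketch org.apache.flink.configuration.HighAvailabilityOptions
public class ConfigOptionHtmlSketch {
    public static void main(String[] args) throws Exception {
        Class<?> optionsClass = Class.forName(args[0]);

        StringBuilder html = new StringBuilder("<table>\n")
            .append("<tr><th>Key</th><th>Default</th></tr>\n");

        for (Field field : optionsClass.getFields()) {
            if (Modifier.isStatic(field.getModifiers())
                    && ConfigOption.class.isAssignableFrom(field.getType())) {
                ConfigOption<?> option = (ConfigOption<?>) field.get(null);
                html.append("<tr><td>").append(option.key())
                    .append("</td><td>").append(option.defaultValue())
                    .append("</td></tr>\n");
            }
        }

        System.out.println(html.append("</table>"));
    }
}
{code}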



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3495: [FLINK-5781] Generation HTML from ConfigOption

2017-03-08 Thread dawidwys
GitHub user dawidwys opened a pull request:

https://github.com/apache/flink/pull/3495

[FLINK-5781] Generation HTML from ConfigOption

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dawidwys/flink configHTML

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3495.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3495


commit 303e9dd06f895d8df33cb8b6494cd622a2dafaba
Author: Dawid Wysakowicz 
Date:   2017-03-08T14:57:33Z

[FLINK-5781] Generation HTML from ConfigOption




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

