[jira] [Commented] (FLINK-3427) Add watermark monitoring to JobManager web frontend

2017-04-03 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953612#comment-15953612
 ] 

Vladislav Pernin commented on FLINK-3427:
-

What is the official way of monitoring backpressure?
The REST API is no longer documented and sends a "deprecated" response.

Shouldn't it be part of the metrics system?

> Add watermark monitoring to JobManager web frontend
> ---
>
> Key: FLINK-3427
> URL: https://issues.apache.org/jira/browse/FLINK-3427
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API, Webfrontend
>Reporter: Robert Metzger
> Fix For: 1.3.0
>
>
> Currently, it's quite hard to figure out issues with the watermarks.
> I think we can improve the situation by reporting the following information 
> through the metrics system:
> - Report the current low watermark for each operator (this way, you can see 
> if one operator is preventing the watermarks from rising)
> - Report the number of events that arrived after the low watermark (users 
> can see how accurate the watermarks are)
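As a sketch of what the first metric would report (illustrative plain Java, not Flink internals; the names are ours): the low watermark of a multi-input operator is simply the minimum of the last watermark seen on each input channel, which is why a single lagging input pins the whole operator.

```java
import java.util.Arrays;

// Hypothetical illustration (not Flink code): an operator's "current low
// watermark" is the minimum of the last watermark received per input channel,
// so one stalled input holds the whole operator back.
public class LowWatermarkDemo {

    static long lowWatermark(long[] lastWatermarkPerChannel) {
        return Arrays.stream(lastWatermarkPerChannel).min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        // Channel 1 is stuck at t=10 while the others advanced to t=90 / t=95.
        System.out.println(lowWatermark(new long[]{90, 10, 95})); // 10
    }
}
```

Surfacing this per operator in the web frontend would immediately show which operator is the laggard.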



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6214) WindowAssigners do not allow negative offsets

2017-03-31 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950531#comment-15950531
 ] 

Vladislav Pernin commented on FLINK-6214:
-

Let's take the example of a simple one-day window that has to begin at 
00:00:00, not at 01:00:00, and end at 00:00:00 the next day.
of(Time.days(1), Time.hours(-1)) would produce the expected window: J 00:00:00 
-> J+1 00:00:00.
of(Time.days(1), Time.hours(23)) would also work, but is not as easy to reason 
about.
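The equivalence of the two offsets can be checked with the window-start arithmetic in plain Java; the formula below mirrors the snap-to-boundary computation of Flink's tumbling assigner, but the class and method names here are ours:

```java
// Hypothetical sketch of tumbling-window start assignment; the formula mirrors
// the arithmetic behind Flink's TimeWindow.getWindowStartWithOffset, but the
// class and method names are illustrative.
public class WindowOffsetDemo {

    // Shift the timeline by `offset`, snap down to a window boundary, shift back.
    static long windowStart(long timestamp, long offset, long windowSize) {
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }

    public static void main(String[] args) {
        long hour = 3_600_000L;
        long day = 24 * hour;
        long ts = 5 * day + 12 * hour; // an event at 12:00 UTC on day J

        long startNeg = windowStart(ts, -hour, day);      // Time.hours(-1)
        long startPos = windowStart(ts, 23 * hour, day);  // Time.hours(23)

        // Both offsets yield the same window, starting at 23:00 UTC,
        // i.e. 00:00:00 in UTC+1.
        System.out.println(startNeg == startPos);         // true
        System.out.println(startNeg % day == 23 * hour);  // true
    }
}
```

So hours(-1) and hours(23) are mathematically interchangeable for a one-day window; allowing the negative form is purely a usability win.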

> WindowAssigners do not allow negative offsets
> -
>
> Key: FLINK-6214
> URL: https://issues.apache.org/jira/browse/FLINK-6214
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0
>Reporter: Timo Walther
>
> Both the website and the JavaDoc promote 
> ".window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8))) For 
> example, in China you would have to specify an offset of Time.hours(-8)". But 
> both the sliding and tumbling event time assigners do not allow the offset 
> to be negative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-2549) Add topK operator for DataSet

2017-03-22 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936398#comment-15936398
 ] 

Vladislav Pernin commented on FLINK-2549:
-

Is there a chance of seeing it merged within the next few months, or is this 
going to be abandoned?

> Add topK operator for DataSet
> -
>
> Key: FLINK-2549
> URL: https://issues.apache.org/jira/browse/FLINK-2549
> Project: Flink
>  Issue Type: New Feature
>  Components: Core, DataSet API
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
>Priority: Minor
>
> topK is a common operation for users; it would be great to have it in Flink. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6061) NPE on TypeSerializer.serialize with a RocksDBStateBackend calling entries() on a keyed state in the open() function

2017-03-15 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15926254#comment-15926254
 ] 

Vladislav Pernin commented on FLINK-6061:
-

Thanks for the hint. I'll try this.

> NPE on TypeSerializer.serialize with a RocksDBStateBackend calling entries() 
> on a keyed state in the open() function
> 
>
> Key: FLINK-6061
> URL: https://issues.apache.org/jira/browse/FLINK-6061
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, State Backends, Checkpointing, Streaming
>Affects Versions: 1.3.0
>Reporter: Vladislav Pernin
>Assignee: Stefan Richter
>
> With a default (heap) state, the call to state.entries() "nicely fails" with 
> an IllegalStateException:
> {noformat}
> Caused by: java.lang.IllegalStateException: No key set.
>   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
>   at 
> org.apache.flink.runtime.state.heap.HeapMapState.entries(HeapMapState.java:188)
>   at 
> org.apache.flink.runtime.state.UserFacingMapState.entries(UserFacingMapState.java:77)
>   at 
> org.apache.flink.Reproducer$FailingMapWithState.open(Reproducer.java:78)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:112)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:670)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> With a RocksDBStateBackend, it fails with an NPE:
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.api.common.typeutils.base.ShortSerializer.serialize(ShortSerializer.java:64)
>   at 
> org.apache.flink.api.common.typeutils.base.ShortSerializer.serialize(ShortSerializer.java:27)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeKey(AbstractRocksDBState.java:181)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeKeyWithGroupAndNamespace(AbstractRocksDBState.java:163)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeCurrentKeyWithGroupAndNamespace(AbstractRocksDBState.java:148)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.serializeCurrentKeyAndNamespace(RocksDBMapState.java:263)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.iterator(RocksDBMapState.java:196)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.entries(RocksDBMapState.java:143)
>   at 
> org.apache.flink.runtime.state.UserFacingMapState.entries(UserFacingMapState.java:77)
>   at 
> org.apache.flink.Reproducer$FailingMapWithState.open(Reproducer.java:78)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:112)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:670)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The reason is that the record is null, because backend.getCurrentKey() is 
> null (not yet set) in AbstractRocksDBState.
> This may also be the case for other RocksDBState implementations.
> You can find the reproducer, based on 1.3-SNAPSHOT (needed for the 
> MapState), here:
> https://github.com/vpernin/flink-rocksdbstate-npe
> The reproducer is a nonsensical application. There is no MapState with TTL or 
> expiration yet, so the goal is to try to shrink or expire the state at some 
> interval.
> This could be done by iterating over the entries of the state and removing 
> some of them.
> This can probably not be done in the open() method of a rich function.
> I also tried to implement CheckpointListener and to access the state content 
> in the notifyCheckpointComplete() method, but it fails too, I guess due to 
> the asynchronous nature of the checkpoint.
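The root cause described above can be sketched in plain Java (illustrative names of ours, not Flink's): keyed state is scoped to a "current key" that the runtime sets only while a record is being processed, so touching the state from open() finds no key at all.

```java
// Illustrative plain-Java sketch of the failure mode; class and method names
// are ours, not Flink's. Keyed state is scoped to a "current key" that the
// runtime sets per incoming record, so accessing it in open() finds none.
public class KeyedStateAccessDemo {

    private Object currentKey; // null until the runtime routes a record to us

    void setCurrentKey(Object key) { currentKey = key; }

    Object entriesForCurrentKey() {
        if (currentKey == null) {
            // The heap backend checks this precondition explicitly; per the
            // report, the RocksDB backend did not, hence the raw NPE instead.
            throw new IllegalStateException("No key set.");
        }
        return currentKey; // stand-in for the per-key map state
    }

    public static void main(String[] args) {
        KeyedStateAccessDemo op = new KeyedStateAccessDemo();
        try {
            op.entriesForCurrentKey(); // like calling state.entries() in open()
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // No key set.
        }
        op.setCurrentKey("k1"); // the runtime does this per incoming record
        System.out.println(op.entriesForCurrentKey()); // k1
    }
}
```

In other words, keyed state is only meaningfully accessible from per-record callbacks such as map() or processElement(), never from open().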



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6061) NPE on TypeSerializer.serialize with a RocksDBStateBackend calling entries() on a keyed state in the open() function

2017-03-15 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15926177#comment-15926177
 ] 

Vladislav Pernin commented on FLINK-6061:
-

Impressed by your reactivity, guys.

Any idea how to properly implement an expiry mechanism on a keyed state at 
some regular interval?
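One backend-agnostic way to sketch the idea (all names below are illustrative, not Flink API): store a last-update timestamp next to each keyed entry and prune entries older than a TTL whenever a periodic timer fires.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustrative sketch (not Flink API): keep a last-update timestamp alongside
// each keyed entry and prune stale entries when a periodic timer fires.
public class ExpiringKeyedState<K, V> {

    private static final class Entry<V> { V value; long lastUpdate; }

    private final Map<K, Entry<V>> state = new HashMap<>();
    private final long ttlMillis;

    ExpiringKeyedState(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(K key, V value, long now) {
        Entry<V> e = state.computeIfAbsent(key, k -> new Entry<>());
        e.value = value;
        e.lastUpdate = now;
    }

    V get(K key) {
        Entry<V> e = state.get(key);
        return e == null ? null : e.value;
    }

    // Would be invoked from a periodic (e.g. event-time) timer callback.
    void expire(long now) {
        for (Iterator<Entry<V>> it = state.values().iterator(); it.hasNext(); ) {
            if (now - it.next().lastUpdate > ttlMillis) {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        ExpiringKeyedState<String, Integer> s = new ExpiringKeyedState<>(100);
        s.put("a", 1, 0);
        s.put("b", 2, 50);
        s.expire(120);                  // "a" is 120ms old (> TTL), "b" only 70ms
        System.out.println(s.get("a")); // null
        System.out.println(s.get("b")); // 2
    }
}
```

The pruning work naturally belongs in a per-record or timer callback, where a current key is set, rather than in open().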

> NPE on TypeSerializer.serialize with a RocksDBStateBackend calling entries() 
> on a keyed state in the open() function
> 
>
> Key: FLINK-6061
> URL: https://issues.apache.org/jira/browse/FLINK-6061
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, State Backends, Checkpointing, Streaming
>Affects Versions: 1.3.0
>Reporter: Vladislav Pernin
>Assignee: Stefan Richter
>
> With a default (heap) state, the call to state.entries() "nicely fails" with 
> an IllegalStateException:
> {noformat}
> Caused by: java.lang.IllegalStateException: No key set.
>   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
>   at 
> org.apache.flink.runtime.state.heap.HeapMapState.entries(HeapMapState.java:188)
>   at 
> org.apache.flink.runtime.state.UserFacingMapState.entries(UserFacingMapState.java:77)
>   at 
> org.apache.flink.Reproducer$FailingMapWithState.open(Reproducer.java:78)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:112)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:670)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> With a RocksDBStateBackend, it fails with an NPE:
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.api.common.typeutils.base.ShortSerializer.serialize(ShortSerializer.java:64)
>   at 
> org.apache.flink.api.common.typeutils.base.ShortSerializer.serialize(ShortSerializer.java:27)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeKey(AbstractRocksDBState.java:181)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeKeyWithGroupAndNamespace(AbstractRocksDBState.java:163)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeCurrentKeyWithGroupAndNamespace(AbstractRocksDBState.java:148)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.serializeCurrentKeyAndNamespace(RocksDBMapState.java:263)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.iterator(RocksDBMapState.java:196)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.entries(RocksDBMapState.java:143)
>   at 
> org.apache.flink.runtime.state.UserFacingMapState.entries(UserFacingMapState.java:77)
>   at 
> org.apache.flink.Reproducer$FailingMapWithState.open(Reproducer.java:78)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:112)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:670)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The reason is that the record is null, because backend.getCurrentKey() is 
> null (not yet set) in AbstractRocksDBState.
> This may also be the case for other RocksDBState implementations.
> You can find the reproducer, based on 1.3-SNAPSHOT (needed for the 
> MapState), here:
> https://github.com/vpernin/flink-rocksdbstate-npe
> The reproducer is a nonsensical application. There is no MapState with TTL or 
> expiration yet, so the goal is to try to shrink or expire the state at some 
> interval.
> This could be done by iterating over the entries of the state and removing 
> some of them.
> This can probably not be done in the open() method of a rich function.
> I also tried to implement CheckpointListener and to access the state content 
> in the notifyCheckpointComplete() method, but it fails too, I guess due to 
> the asynchronous nature of the checkpoint.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6061) NPE on TypeSerializer.serialize with a RocksDBStateBackend calling entries() on a keyed state in the open() function

2017-03-15 Thread Vladislav Pernin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pernin updated FLINK-6061:

Summary: NPE on TypeSerializer.serialize with a RocksDBStateBackend calling 
entries() on a keyed state in the open() function  (was: NPE on 
TypeSerializer.serialize with a RocksDBStateBackend calling state.entries in 
the open() function)

> NPE on TypeSerializer.serialize with a RocksDBStateBackend calling entries() 
> on a keyed state in the open() function
> 
>
> Key: FLINK-6061
> URL: https://issues.apache.org/jira/browse/FLINK-6061
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, State Backends, Checkpointing, Streaming
>Affects Versions: 1.3.0
>Reporter: Vladislav Pernin
>
> With a default (heap) state, the call to state.entries() "nicely fails" with 
> an IllegalStateException:
> {noformat}
> Caused by: java.lang.IllegalStateException: No key set.
>   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
>   at 
> org.apache.flink.runtime.state.heap.HeapMapState.entries(HeapMapState.java:188)
>   at 
> org.apache.flink.runtime.state.UserFacingMapState.entries(UserFacingMapState.java:77)
>   at 
> org.apache.flink.Reproducer$FailingMapWithState.open(Reproducer.java:78)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:112)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:670)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> With a RocksDBStateBackend, it fails with an NPE:
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.api.common.typeutils.base.ShortSerializer.serialize(ShortSerializer.java:64)
>   at 
> org.apache.flink.api.common.typeutils.base.ShortSerializer.serialize(ShortSerializer.java:27)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeKey(AbstractRocksDBState.java:181)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeKeyWithGroupAndNamespace(AbstractRocksDBState.java:163)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeCurrentKeyWithGroupAndNamespace(AbstractRocksDBState.java:148)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.serializeCurrentKeyAndNamespace(RocksDBMapState.java:263)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.iterator(RocksDBMapState.java:196)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.entries(RocksDBMapState.java:143)
>   at 
> org.apache.flink.runtime.state.UserFacingMapState.entries(UserFacingMapState.java:77)
>   at 
> org.apache.flink.Reproducer$FailingMapWithState.open(Reproducer.java:78)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:112)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:670)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The reason is that the record is null, because backend.getCurrentKey() is 
> null (not yet set) in AbstractRocksDBState.
> This may also be the case for other RocksDBState implementations.
> You can find the reproducer, based on 1.3-SNAPSHOT (needed for the 
> MapState), here:
> https://github.com/vpernin/flink-rocksdbstate-npe
> The reproducer is a nonsensical application. There is no MapState with TTL or 
> expiration yet, so the goal is to try to shrink or expire the state at some 
> interval.
> This could be done by iterating over the entries of the state and removing 
> some of them.
> This can probably not be done in the open() method of a rich function.
> I also tried to implement CheckpointListener and to access the state content 
> in the notifyCheckpointComplete() method, but it fails too, I guess due to 
> the asynchronous nature of the checkpoint.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6061) NPE on TypeSerializer.serialize with a RocksDBStateBackend calling state.entries in the open() function

2017-03-15 Thread Vladislav Pernin (JIRA)
Vladislav Pernin created FLINK-6061:
---

 Summary: NPE on TypeSerializer.serialize with a 
RocksDBStateBackend calling state.entries in the open() function
 Key: FLINK-6061
 URL: https://issues.apache.org/jira/browse/FLINK-6061
 Project: Flink
  Issue Type: Bug
  Components: DataStream API, State Backends, Checkpointing, Streaming
Affects Versions: 1.3.0
Reporter: Vladislav Pernin


With a default (heap) state, the call to state.entries() "nicely fails" with an 
IllegalStateException:
{noformat}
Caused by: java.lang.IllegalStateException: No key set.
at 
org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
at 
org.apache.flink.runtime.state.heap.HeapMapState.entries(HeapMapState.java:188)
at 
org.apache.flink.runtime.state.UserFacingMapState.entries(UserFacingMapState.java:77)
at 
org.apache.flink.Reproducer$FailingMapWithState.open(Reproducer.java:78)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:112)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:670)
at java.lang.Thread.run(Thread.java:745)
{noformat}

With a RocksDBStateBackend, it fails with an NPE:
{noformat}
Caused by: java.lang.NullPointerException
at 
org.apache.flink.api.common.typeutils.base.ShortSerializer.serialize(ShortSerializer.java:64)
at 
org.apache.flink.api.common.typeutils.base.ShortSerializer.serialize(ShortSerializer.java:27)
at 
org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeKey(AbstractRocksDBState.java:181)
at 
org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeKeyWithGroupAndNamespace(AbstractRocksDBState.java:163)
at 
org.apache.flink.contrib.streaming.state.AbstractRocksDBState.writeCurrentKeyWithGroupAndNamespace(AbstractRocksDBState.java:148)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.serializeCurrentKeyAndNamespace(RocksDBMapState.java:263)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.iterator(RocksDBMapState.java:196)
at 
org.apache.flink.contrib.streaming.state.RocksDBMapState.entries(RocksDBMapState.java:143)
at 
org.apache.flink.runtime.state.UserFacingMapState.entries(UserFacingMapState.java:77)
at 
org.apache.flink.Reproducer$FailingMapWithState.open(Reproducer.java:78)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:112)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:670)
at java.lang.Thread.run(Thread.java:745)
{noformat}

The reason is that the record is null, because backend.getCurrentKey() is null 
(not yet set) in AbstractRocksDBState.
This may also be the case for other RocksDBState implementations.

You can find the reproducer, based on 1.3-SNAPSHOT (needed for the MapState), 
here:
https://github.com/vpernin/flink-rocksdbstate-npe

The reproducer is a nonsensical application. There is no MapState with TTL or 
expiration yet, so the goal is to try to shrink or expire the state at some 
interval.
This could be done by iterating over the entries of the state and removing some 
of them.

This can probably not be done in the open() method of a rich function.
I also tried to implement CheckpointListener and to access the state content in 
the notifyCheckpointComplete() method, but it fails too, I guess due to the 
asynchronous nature of the checkpoint.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-15 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925861#comment-15925861
 ] 

Vladislav Pernin commented on FLINK-6001:
-

Very nice, it works.
I was reluctant to push a PR with your NPE protection myself without being 
aware of possible side effects.

> NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and 
> allowedLateness
> ---
>
> Key: FLINK-6001
> URL: https://issues.apache.org/jira/browse/FLINK-6001
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Streaming
>Affects Versions: 1.2.0
>Reporter: Vladislav Pernin
>Priority: Critical
>
> I tried to isolate the problem in a small and simple reproducer by extracting 
> the data from my real setup.
> It fails with an NPE at:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
> ~[flink-runtime_2.11-1.2.0.jar:1.2.0]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}
> It fails only with the Thread.sleep in place; if you comment it out, it 
> won't fail.
> You may have to increase the sleep time depending on your environment.
> I know this is not a very rigorous test, but this is the only way I've found 
> to reproduce it.
> You can find the reproducer here:
> https://github.com/vpernin/flink-window-npe



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-14 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15924615#comment-15924615
 ] 

Vladislav Pernin commented on FLINK-6001:
-

Maybe related to FLINK-5713? I have to test.

> NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and 
> allowedLateness
> ---
>
> Key: FLINK-6001
> URL: https://issues.apache.org/jira/browse/FLINK-6001
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Streaming
>Affects Versions: 1.2.0
>Reporter: Vladislav Pernin
>Priority: Critical
>
> I tried to isolate the problem in a small and simple reproducer by extracting 
> the data from my real setup.
> It fails with an NPE at:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
> ~[flink-runtime_2.11-1.2.0.jar:1.2.0]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}
> It fails only with the Thread.sleep in place; if you comment it out, it 
> won't fail.
> You may have to increase the sleep time depending on your environment.
> I know this is not a very rigorous test, but this is the only way I've found 
> to reproduce it.
> You can find the reproducer here:
> https://github.com/vpernin/flink-window-npe



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-3089) State API Should Support Data Expiration

2017-03-13 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907304#comment-15907304
 ] 

Vladislav Pernin commented on FLINK-3089:
-

Another use case is idempotency.

It could be handled by an in-memory cache snapshotted at each checkpoint, but 
it has to stay reasonably small because it is held in memory.
Ex: 
https://github.com/jgrier/FilteringExample/blob/master/src/main/java/com/dataartisans/filters/DedupeFilterFunction.java

It could also be implemented by a MapState backed by RocksDB. It won't be in 
memory, so it can grow very large.
But it won't be possible to expire the state for entries that are never queried 
again, because they remain even without duplicates.
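The in-memory variant can be sketched as follows (a minimal illustration of the seen-set idea, not the linked DedupeFilterFunction itself):

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the seen-set idea (not the linked DedupeFilterFunction):
// remembering every id seen makes downstream processing idempotent, but the
// set grows without bound unless entries can eventually be expired.
public class DedupeFilter<T> {

    private final Set<T> seen = new HashSet<>();

    // Returns true the first time an id is seen, false for duplicates.
    boolean accept(T id) {
        return seen.add(id);
    }

    public static void main(String[] args) {
        DedupeFilter<String> f = new DedupeFilter<>();
        System.out.println(f.accept("evt-1")); // true
        System.out.println(f.accept("evt-1")); // false (duplicate dropped)
    }
}
```

The unbounded growth of `seen` is exactly why state-level expiration support would help here.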

> State API Should Support Data Expiration
> 
>
> Key: FLINK-3089
> URL: https://issues.apache.org/jira/browse/FLINK-3089
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API, State Backends, Checkpointing
>Reporter: Niels Basjes
>
> In some use cases (web analytics) there is a need to have a state per visitor 
> on a website (i.e. keyBy(sessionid)).
> At some point the visitor simply leaves and no longer creates new events (so 
> a special 'end of session' event will not occur).
> The only way to determine that a visitor has left is by choosing a timeout, 
> like "After 30 minutes without events we consider the visitor 'gone'".
> Only after this (chosen) timeout has expired should we discard this state.
> In the Trigger part of Windows we can set a timer and close/discard this kind 
> of information. But that introduces the buffering effect of the window (which 
> in some scenarios is unwanted).
> What I would like is to be able to set a timeout on a specific OperatorState 
> value which I can update afterwards.
> This makes it possible to create a map function that assigns the right value 
> and that discards the state automatically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-08 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901756#comment-15901756
 ] 

Vladislav Pernin commented on FLINK-6001:
-

Another version of the reproducer, without a "sleeping map" but with a slow 
source function that tries to mimic reality:
https://github.com/vpernin/flink-window-npe/tree/slow-serializer

> NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and 
> allowedLateness
> ---
>
> Key: FLINK-6001
> URL: https://issues.apache.org/jira/browse/FLINK-6001
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Streaming
>Affects Versions: 1.2.0
>Reporter: Vladislav Pernin
>Priority: Critical
>
> I tried to isolate the problem in a small and simple reproducer by extracting 
> the data from my real setup.
> It fails with an NPE at:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
> ~[flink-runtime_2.11-1.2.0.jar:1.2.0]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}
> It fails only with the Thread.sleep in place; if you comment it out, it 
> won't fail.
> You may have to increase the sleep time depending on your environment.
> I know this is not a very rigorous test, but this is the only way I've found 
> to reproduce it.
> You can find the reproducer here:
> https://github.com/vpernin/flink-window-npe



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-08 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901616#comment-15901616
 ] 

Vladislav Pernin commented on FLINK-6001:
-

I have simplified the reproducer, but it fails less often. Please use the 
following branch:
https://github.com/vpernin/flink-window-npe/tree/simpler-but-fails-less-often

> NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and 
> allowedLateness
> ---
>
> Key: FLINK-6001
> URL: https://issues.apache.org/jira/browse/FLINK-6001
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Streaming
>Affects Versions: 1.2.0
>Reporter: Vladislav Pernin
>Priority: Critical
>
> I tried to isolate the problem in a small and simple reproducer by extracting 
> the data from my real setup.
> It fails with an NPE at:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272)
>  ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
> ~[flink-runtime_2.11-1.2.0.jar:1.2.0]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}
> It fails only with the Thread.sleep; if you comment it out, it won't fail.
> So, you may have to increase the sleep time depending on your environment.
> I know this is not a very rigorous test, but this is the only way I've found 
> to reproduce it.
> You can find the reproducer here:
> https://github.com/vpernin/flink-window-npe



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-08 Thread Vladislav Pernin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pernin updated FLINK-6001:

Description: 
I tried to isolate the problem in a small and simple reproducer by extracting 
the data from my real setup.

It fails with an NPE at:
{noformat}
java.lang.NullPointerException: null
at 
org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272) 
~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
~[flink-runtime_2.11-1.2.0.jar:1.2.0]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
{noformat}

It fails only with the Thread.sleep; if you comment it out, it won't fail.
So, you may have to increase the sleep time depending on your environment.
I know this is not a very rigorous test, but this is the only way I've found 
to reproduce it.

You can find the reproducer here:
https://github.com/vpernin/flink-window-npe

  was:
I tried to isolate the problem in a small and simple reproducer by extracting 
the data from my real setup.

It fails with an NPE at:
{noformat}
java.lang.NullPointerException: null
at 
org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272) 
~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
~[flink-runtime_2.11-1.2.0.jar:1.2.0]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
{noformat}

It fails only with the Thread.sleep; if you comment it out, it won't fail.
I know this is not a very rigorous test, but this is the only way I've found 
to reproduce it.

You can find the reproducer here:
https://github.com/vpernin/flink-window-npe


> NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and 
> allowedLateness
> ---
>
> Key: FLINK-6001
> URL: https://issues.apache.org/jira/browse/FLINK-6001
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Streaming
>Affects Versions: 1.2.0
>Reporter: Vladislav Pernin
>Priority: Critical
>
> I tried to isolate the problem in a small and simple reproducer by extracting 
> the data from my real setup.
> It fails with an NPE at:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
>  

[jira] [Created] (FLINK-6001) NPE on TumblingEventTimeWindows with ContinuousEventTimeTrigger and allowedLateness

2017-03-08 Thread Vladislav Pernin (JIRA)
Vladislav Pernin created FLINK-6001:
---

 Summary: NPE on TumblingEventTimeWindows with 
ContinuousEventTimeTrigger and allowedLateness
 Key: FLINK-6001
 URL: https://issues.apache.org/jira/browse/FLINK-6001
 Project: Flink
  Issue Type: Bug
  Components: DataStream API, Streaming
Affects Versions: 1.2.0
Reporter: Vladislav Pernin
Priority: Critical


I tried to isolate the problem in a small and simple reproducer by extracting 
the data from my real setup.

It fails with an NPE at:
{noformat}
java.lang.NullPointerException: null
at 
org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger.onEventTime(ContinuousEventTimeTrigger.java:81)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onEventTime(WindowOperator.java:721)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:425)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.HeapInternalTimerService.advanceWatermark(HeapInternalTimerService.java:276)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:858)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:168)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
 ~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272) 
~[flink-streaming-java_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) 
~[flink-runtime_2.11-1.2.0.jar:1.2.0]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
{noformat}

It fails only with the Thread.sleep; if you comment it out, it won't fail.
I know this is not a very rigorous test, but this is the only way I've found 
to reproduce it.

You can find the reproducer here:
https://github.com/vpernin/flink-window-npe



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-2922) Add Queryable Window Operator

2017-02-24 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882872#comment-15882872
 ] 

Vladislav Pernin commented on FLINK-2922:
-

This ML message seems to indicate this is not in 1.2.0:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Queryable-State-and-Windows-td11212.html

In which JIRA is this tracked?

> Add Queryable Window Operator
> -
>
> Key: FLINK-2922
> URL: https://issues.apache.org/jira/browse/FLINK-2922
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Aljoscha Krettek
>  Labels: requires-design-doc
> Attachments: FLINK-2922.pdf
>
>
> The idea is to provide a window operator that allows to query the current 
> window result at any time without discarding the current result.
> For example, a user might have an aggregation window operation with tumbling 
> windows of 1 hour. Now, at any time they might be interested in the current 
> aggregated value for the currently in-flight hour window.
> The idea is to make the operator a two input operator where normal elements 
> arrive on input one while queries arrive on input two. The query stream must 
> be keyed by the same key as the input stream. If a query arrives for a key, 
> the current value for that key is emitted along with the query element so 
> that the user can map the result to the query.
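A minimal, framework-free sketch of this two-input idea (all names are hypothetical, not Flink API): elements on input one update per-key state, while queries on input two read the current in-flight value without discarding it:

```java
import java.util.HashMap;
import java.util.Map;

public class QueryableWindowSketch {
    private final Map<String, Long> currentAggregates = new HashMap<>();

    // Input one: normal elements update the in-flight aggregate for their key.
    public void onElement(String key, long value) {
        currentAggregates.merge(key, value, Long::sum);
    }

    // Input two: a query for a key returns the current (partial) result
    // without finalizing or discarding the window state.
    public long onQuery(String key) {
        return currentAggregates.getOrDefault(key, 0L);
    }

    public static void main(String[] args) {
        QueryableWindowSketch op = new QueryableWindowSketch();
        op.onElement("user-1", 10);
        op.onElement("user-1", 5);
        System.out.println(op.onQuery("user-1")); // prints 15
        op.onElement("user-1", 1);
        System.out.println(op.onQuery("user-1")); // prints 16
    }
}
```

In the real operator both inputs would be keyed streams; the sketch only illustrates why keying the query stream by the same key makes the lookup trivial.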



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-4034) Dependency convergence on com.101tec:zkclient and com.esotericsoftware.kryo:kryo

2016-07-12 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372977#comment-15372977
 ] 

Vladislav Pernin commented on FLINK-4034:
-

For sure, even though I have been very careful not to change any dependencies 
and to make the right choice in case of conflicts, I might have broken something.
So yes, 1.2 is a safe target.
I will be on holiday during the next few weeks, but I will do my best 
to integrate the comments in the PR and rebase from master.

> Dependency convergence on com.101tec:zkclient and 
> com.esotericsoftware.kryo:kryo
> 
>
> Key: FLINK-4034
> URL: https://issues.apache.org/jira/browse/FLINK-4034
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.0.3
>Reporter: Vladislav Pernin
>
> If dependency convergence is enabled and enforced in Maven, projects using 
> Flink do not compile.
> Example :
> {code}
> Dependency convergence error for com.esotericsoftware.kryo:kryo:2.24.0 paths 
> to dependency are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-java:1.0.3
> +-org.apache.flink:flink-core:1.0.3
>   +-com.esotericsoftware.kryo:kryo:2.24.0
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.twitter:chill-java:0.7.4
>   +-com.esotericsoftware.kryo:kryo:2.21
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.esotericsoftware.kryo:kryo:2.21
> {code}
>   
> {code}
> Dependency convergence error for com.101tec:zkclient:0.7 paths to dependency 
> are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.flink:flink-connector-kafka-base_2.11:1.0.3
>   +-com.101tec:zkclient:0.7
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.kafka:kafka_2.11:0.8.2.2
>   +-com.101tec:zkclient:0.3
> {code}
> I cannot submit a pull request without knowing which specific versions you 
> rely on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4034) Dependency convergence on com.101tec:zkclient and com.esotericsoftware.kryo:kryo

2016-07-06 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364933#comment-15364933
 ] 

Vladislav Pernin commented on FLINK-4034:
-

I couldn't agree more about the dependency convergence; that's why I'd like to 
help fix it.

How do you see the process of "splitting" my PR? I could try to enable 
dependency convergence on submodules or groups of modules first and submit 
incremental PRs. But I think that as soon as I enable it on hadoop-shaded 
... or another common module, it will be required everywhere, and some 
conflicts will have to be fixed by surgical dependency management in the 
parent pom on specific profiles, so it is visible/enabled by every module.

> Dependency convergence on com.101tec:zkclient and 
> com.esotericsoftware.kryo:kryo
> 
>
> Key: FLINK-4034
> URL: https://issues.apache.org/jira/browse/FLINK-4034
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.0.3
>Reporter: Vladislav Pernin
>
> If dependency convergence is enabled and enforced in Maven, projects using 
> Flink do not compile.
> Example :
> {code}
> Dependency convergence error for com.esotericsoftware.kryo:kryo:2.24.0 paths 
> to dependency are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-java:1.0.3
> +-org.apache.flink:flink-core:1.0.3
>   +-com.esotericsoftware.kryo:kryo:2.24.0
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.twitter:chill-java:0.7.4
>   +-com.esotericsoftware.kryo:kryo:2.21
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.esotericsoftware.kryo:kryo:2.21
> {code}
>   
> {code}
> Dependency convergence error for com.101tec:zkclient:0.7 paths to dependency 
> are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.flink:flink-connector-kafka-base_2.11:1.0.3
>   +-com.101tec:zkclient:0.7
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.kafka:kafka_2.11:0.8.2.2
>   +-com.101tec:zkclient:0.3
> {code}
> I cannot submit a pull request without knowing which specific versions you 
> rely on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4145) JmxReporterTest fails due to port conflicts

2016-07-03 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15360686#comment-15360686
 ] 

Vladislav Pernin commented on FLINK-4145:
-

Why not use something like:
{code}
public static int getAvailablePort() {
    ServerSocket ss = null;
    try {
        ss = new ServerSocket(0);
        return ss.getLocalPort();
    } catch (IOException e) {
        throw new IllegalStateException(e);
    } finally {
        if (ss != null && !ss.isClosed()) {
            try {
                ss.close();
            } catch (IOException e) {
                throw new IllegalStateException(e);
            }
        }
    }
}
{code}
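A more compact sketch of the same idea using try-with-resources (Java 7+), so the socket is closed automatically; note that a port obtained this way can still be grabbed by another process before the caller rebinds it:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortFinder {
    // Bind to port 0 so the OS picks a free ephemeral port; the
    // try-with-resources block closes the socket automatically.
    public static int getAvailablePort() {
        try (ServerSocket ss = new ServerSocket(0)) {
            return ss.getLocalPort();
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        int port = getAvailablePort();
        System.out.println(port > 0 && port <= 65535); // prints true
    }
}
```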

> JmxReporterTest fails due to port conflicts
> ---
>
> Key: FLINK-4145
> URL: https://issues.apache.org/jira/browse/FLINK-4145
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> I saw multiple failures of the {{JmxReporterTest}} most likely due to a port 
> conflicts. The test relies on the default JMX reporter port range, which 
> spans 5 ports. Running on Travis with multiple concurrent builds and bad 
> timings, this can lead to port conflicts.
> Some example failed runs:
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/141999066/log.txt (one 
> out of 5 jobs failed)
> https://travis-ci.org/uce/flink/builds/141917901 (all 5 jobs failed)
> I propose to take the fork number into account (like the forkable Flink 
> testing cluster) and configure a larger port range.
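The proposed fix (offset the port range by the fork number) can be sketched as follows; the base port and range size here are illustrative assumptions, not Flink's actual defaults:

```java
public class JmxPortRanges {
    // Hypothetical values, for illustration only.
    static final int BASE_PORT = 9010;
    static final int RANGE_SIZE = 50;

    // Give each Maven/Surefire fork its own disjoint port range so
    // concurrently running test JVMs cannot collide on JMX ports.
    static String portRangeFor(int forkNumber) {
        int start = BASE_PORT + forkNumber * RANGE_SIZE;
        int end = start + RANGE_SIZE - 1;
        return start + "-" + end;
    }

    public static void main(String[] args) {
        System.out.println(portRangeFor(0)); // prints 9010-9059
        System.out.println(portRangeFor(1)); // prints 9060-9109
    }
}
```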



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-4034) Dependency convergence on com.101tec:zkclient and com.esotericsoftware.kryo:kryo

2016-06-17 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335820#comment-15335820
 ] 

Vladislav Pernin edited comment on FLINK-4034 at 6/17/16 10:16 AM:
---

[~rmetzger] In case of "dependency divergence", I have to make a decision:
- always stick to the currently resolved versions, which would guarantee zero 
regression but leave us with different library version choices among the 
Flink modules and a complicated dependencyManagement/exclusions setup
- always upgrade to the biggest version (not the latest published library 
version, but the higher of the two conflicting versions), which has some 
possible impact
- a mixed strategy depending on the libraries and the hadoop1/hadoop2 profiles

Are we confident enough in the unit tests?
What do you think?




was (Author: vpernin):
[~rmetzger] In case of "dependency divergence", I have to make a decision:
- always stick to the currently resolved versions, which would guarantee zero 
regression but leave us with different library version choices among the 
Flink modules and a complicated dependencyManagement/exclusions setup
- always upgrade to the biggest version (not the latest published library 
version, but the higher of the two conflicting versions), which has some 
possible impact
- a mixed strategy depending on the libraries

Are we confident enough in the unit tests?
What do you think?



> Dependency convergence on com.101tec:zkclient and 
> com.esotericsoftware.kryo:kryo
> 
>
> Key: FLINK-4034
> URL: https://issues.apache.org/jira/browse/FLINK-4034
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.0.3
>Reporter: Vladislav Pernin
>
> If dependency convergence is enabled and enforced in Maven, projects using 
> Flink do not compile.
> Example :
> {code}
> Dependency convergence error for com.esotericsoftware.kryo:kryo:2.24.0 paths 
> to dependency are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-java:1.0.3
> +-org.apache.flink:flink-core:1.0.3
>   +-com.esotericsoftware.kryo:kryo:2.24.0
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.twitter:chill-java:0.7.4
>   +-com.esotericsoftware.kryo:kryo:2.21
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.esotericsoftware.kryo:kryo:2.21
> {code}
>   
> {code}
> Dependency convergence error for com.101tec:zkclient:0.7 paths to dependency 
> are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.flink:flink-connector-kafka-base_2.11:1.0.3
>   +-com.101tec:zkclient:0.7
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.kafka:kafka_2.11:0.8.2.2
>   +-com.101tec:zkclient:0.3
> {code}
> I cannot submit a pull request without knowing which specific versions you 
> rely on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4034) Dependency convergence on com.101tec:zkclient and com.esotericsoftware.kryo:kryo

2016-06-17 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335820#comment-15335820
 ] 

Vladislav Pernin commented on FLINK-4034:
-

[~rmetzger] In case of "dependency divergence", I have to make a decision:
- always stick to the currently resolved versions, which would guarantee zero 
regression but leave us with different library version choices among the 
Flink modules and a complicated dependencyManagement/exclusions setup
- always upgrade to the biggest version (not the latest published library 
version, but the higher of the two conflicting versions), which has some 
possible impact
- a mixed strategy depending on the libraries

Are we confident enough in the unit tests?
What do you think?
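The "biggest version" strategy amounts to a numeric comparison of the conflicting dotted version strings. A simplified sketch (real Maven version ordering also handles qualifiers like -SNAPSHOT, which this ignores):

```java
import java.util.Arrays;

public class VersionPick {
    // Compare dotted numeric versions, e.g. "2.24.0" vs "2.21".
    // Missing trailing components are treated as 0.
    static int compare(String a, String b) {
        int[] pa = Arrays.stream(a.split("\\.")).mapToInt(Integer::parseInt).toArray();
        int[] pb = Arrays.stream(b.split("\\.")).mapToInt(Integer::parseInt).toArray();
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int x = i < pa.length ? pa[i] : 0;
            int y = i < pb.length ? pb[i] : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    // Of two conflicting versions, pick the higher one.
    static String biggest(String a, String b) {
        return compare(a, b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(biggest("2.24.0", "2.21")); // prints 2.24.0
        System.out.println(biggest("0.3", "0.7"));     // prints 0.7
    }
}
```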



> Dependency convergence on com.101tec:zkclient and 
> com.esotericsoftware.kryo:kryo
> 
>
> Key: FLINK-4034
> URL: https://issues.apache.org/jira/browse/FLINK-4034
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.0.3
>Reporter: Vladislav Pernin
>
> If dependency convergence is enabled and enforced in Maven, projects using 
> Flink do not compile.
> Example :
> {code}
> Dependency convergence error for com.esotericsoftware.kryo:kryo:2.24.0 paths 
> to dependency are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-java:1.0.3
> +-org.apache.flink:flink-core:1.0.3
>   +-com.esotericsoftware.kryo:kryo:2.24.0
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.twitter:chill-java:0.7.4
>   +-com.esotericsoftware.kryo:kryo:2.21
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.esotericsoftware.kryo:kryo:2.21
> {code}
>   
> {code}
> Dependency convergence error for com.101tec:zkclient:0.7 paths to dependency 
> are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.flink:flink-connector-kafka-base_2.11:1.0.3
>   +-com.101tec:zkclient:0.7
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.kafka:kafka_2.11:0.8.2.2
>   +-com.101tec:zkclient:0.3
> {code}
> I cannot submit a pull request without knowing which specific versions you 
> rely on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4034) Dependency convergence on com.101tec:zkclient and com.esotericsoftware.kryo:kryo

2016-06-10 Thread Vladislav Pernin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324706#comment-15324706
 ] 

Vladislav Pernin commented on FLINK-4034:
-

I will enable dependencyConvergence in the enforcer rules, resolve conflicts 
one by one, and then double-check the resolved dependencies before and after.
If I'm successful, I will make a PR.

> Dependency convergence on com.101tec:zkclient and 
> com.esotericsoftware.kryo:kryo
> 
>
> Key: FLINK-4034
> URL: https://issues.apache.org/jira/browse/FLINK-4034
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.0.3
>Reporter: Vladislav Pernin
>
> If dependency convergence is enabled and enforced in Maven, projects using 
> Flink do not compile.
> Example :
> {code}
> Dependency convergence error for com.esotericsoftware.kryo:kryo:2.24.0 paths 
> to dependency are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-java:1.0.3
> +-org.apache.flink:flink-core:1.0.3
>   +-com.esotericsoftware.kryo:kryo:2.24.0
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.twitter:chill-java:0.7.4
>   +-com.esotericsoftware.kryo:kryo:2.21
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-streaming-java_2.11:1.0.3
> +-org.apache.flink:flink-runtime_2.11:1.0.3
>   +-com.twitter:chill_2.11:0.7.4
> +-com.esotericsoftware.kryo:kryo:2.21
> {code}
>   
> {code}
> Dependency convergence error for com.101tec:zkclient:0.7 paths to dependency 
> are:
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.flink:flink-connector-kafka-base_2.11:1.0.3
>   +-com.101tec:zkclient:0.7
> and
> +-groupidXXX:artifactidXXX:versionXXX
>   +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
> +-org.apache.kafka:kafka_2.11:0.8.2.2
>   +-com.101tec:zkclient:0.3
> {code}
> I cannot submit a pull request without knowing which specific versions you 
> rely on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4034) Dependency convergence on com.101tec:zkclient and com.esotericsoftware.kryo:kryo

2016-06-08 Thread Vladislav Pernin (JIRA)
Vladislav Pernin created FLINK-4034:
---

 Summary: Dependency convergence on com.101tec:zkclient and 
com.esotericsoftware.kryo:kryo
 Key: FLINK-4034
 URL: https://issues.apache.org/jira/browse/FLINK-4034
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Vladislav Pernin


If dependency convergence is enabled and enforced in Maven, projects using 
Flink do not compile.

Example :

{code}
Dependency convergence error for com.esotericsoftware.kryo:kryo:2.24.0 paths to 
dependency are:
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-java:1.0.3
+-org.apache.flink:flink-core:1.0.3
  +-com.esotericsoftware.kryo:kryo:2.24.0
and
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-streaming-java_2.11:1.0.3
+-org.apache.flink:flink-runtime_2.11:1.0.3
  +-com.twitter:chill_2.11:0.7.4
+-com.twitter:chill-java:0.7.4
  +-com.esotericsoftware.kryo:kryo:2.21
and
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-streaming-java_2.11:1.0.3
+-org.apache.flink:flink-runtime_2.11:1.0.3
  +-com.twitter:chill_2.11:0.7.4
+-com.esotericsoftware.kryo:kryo:2.21
{code}  

{code}
Dependency convergence error for com.101tec:zkclient:0.7 paths to dependency 
are:
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
+-org.apache.flink:flink-connector-kafka-base_2.11:1.0.3
  +-com.101tec:zkclient:0.7
and
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
+-org.apache.kafka:kafka_2.11:0.8.2.2
  +-com.101tec:zkclient:0.3
{code}

I cannot submit a pull request without knowing which specific versions you 
rely on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)