[jira] [Commented] (FLINK-3322) MemoryManager creates too much GC pressure with iterative jobs

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508776#comment-15508776 ]

ASF GitHub Bot commented on FLINK-3322:
---

Github user ramkrish86 commented on a diff in the pull request:

https://github.com/apache/flink/pull/2510#discussion_r79761429
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/operators/JoinDriver.java ---
@@ -128,17 +130,22 @@ public void prepare() throws Exception{

 			ConfigConstants.DEFAULT_RUNTIME_HASH_JOIN_BLOOM_FILTERS);

 		// create and return joining iterator according to provided local strategy.
-		if (objectReuseEnabled) {
-			switch (ls) {
-				case INNER_MERGE:
-					this.joinIterator = new ReusingMergeInnerJoinIterator<>(in1, in2,
+		if (reset) {
+			resetForIterativeTasks(in1, in2, serializer1, serializer2, comparator1, comparator2, pairComparatorFactory);
+			reset = false;
+		}
+		if (joinIterator == null) {
--- End diff --

@ggevay 
Can you have a look at this? If we can really see how cases like
PageRank can be solved, then maybe we can have the reset() method do the
actual reset.


> MemoryManager creates too much GC pressure with iterative jobs
> --
>
> Key: FLINK-3322
> URL: https://issues.apache.org/jira/browse/FLINK-3322
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 1.0.0
>Reporter: Gabor Gevay
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.0
>
> Attachments: FLINK-3322.docx, FLINK-3322_reusingmemoryfordrivers.docx
>
>
> When taskmanager.memory.preallocate is false (the default), released memory 
> segments are not added to a pool, but the GC is expected to take care of 
> them. This puts too much pressure on the GC with iterative jobs, where the 
> operators reallocate all memory at every superstep.
> See the following discussion on the mailing list:
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Memory-manager-behavior-in-iterative-jobs-tt10066.html
> Reproducing the issue:
> https://github.com/ggevay/flink/tree/MemoryManager-crazy-gc
> The class to start is malom.Solver. If you increase the memory given to the 
> JVM from 1 to 50 GB, performance gradually degrades by more than 10 times. 
> (It will generate some lookup tables in /tmp on first run for a few minutes.) 
> (I think the slowdown might also depend somewhat on 
> taskmanager.memory.fraction, because more unused non-managed memory results 
> in rarer GCs.)
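
For illustration, a minimal, self-contained sketch of the two allocation modes described above (a hypothetical SegmentPool, not Flink's actual MemoryManager): with preallocation, released segments are recycled through a pool; without it, each superstep allocates fresh segments that the GC must later reclaim.

{code}
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Hypothetical pool contrasting the two modes of taskmanager.memory.preallocate.
final class SegmentPool {
    private final ArrayDeque<ByteBuffer> pool = new ArrayDeque<>();
    private final boolean preallocate;

    SegmentPool(boolean preallocate) { this.preallocate = preallocate; }

    ByteBuffer allocate(int size) {
        // preallocate=true: reuse a pooled segment if one is available
        ByteBuffer cached = preallocate ? pool.poll() : null;
        return cached != null ? cached : ByteBuffer.allocate(size);
    }

    void release(ByteBuffer segment) {
        if (preallocate) {
            segment.clear();
            pool.push(segment);   // recycled; no garbage produced
        }
        // preallocate=false: drop the reference and let the GC reclaim it,
        // which is what creates the pressure in iterative jobs
    }
}
{code}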



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2510: FLINK-3322 Allow drivers and iterators to reuse th...

2016-09-20 Thread ramkrish86
Github user ramkrish86 commented on a diff in the pull request:

https://github.com/apache/flink/pull/2510#discussion_r79761429
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/operators/JoinDriver.java ---
@@ -128,17 +130,22 @@ public void prepare() throws Exception{

 			ConfigConstants.DEFAULT_RUNTIME_HASH_JOIN_BLOOM_FILTERS);

 		// create and return joining iterator according to provided local strategy.
-		if (objectReuseEnabled) {
-			switch (ls) {
-				case INNER_MERGE:
-					this.joinIterator = new ReusingMergeInnerJoinIterator<>(in1, in2,
+		if (reset) {
+			resetForIterativeTasks(in1, in2, serializer1, serializer2, comparator1, comparator2, pairComparatorFactory);
+			reset = false;
+		}
+		if (joinIterator == null) {
--- End diff --

@ggevay 
Can you have a look at this? If we can really see how cases like
PageRank can be solved, then maybe we can have the reset() method do the
actual reset.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2510: FLINK-3322 Allow drivers and iterators to reuse th...

2016-09-20 Thread ramkrish86
Github user ramkrish86 commented on a diff in the pull request:

https://github.com/apache/flink/pull/2510#discussion_r79755676
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/operators/JoinDriver.java ---
@@ -128,17 +130,22 @@ public void prepare() throws Exception{

 			ConfigConstants.DEFAULT_RUNTIME_HASH_JOIN_BLOOM_FILTERS);

 		// create and return joining iterator according to provided local strategy.
-		if (objectReuseEnabled) {
-			switch (ls) {
-				case INNER_MERGE:
-					this.joinIterator = new ReusingMergeInnerJoinIterator<>(in1, in2,
+		if (reset) {
+			resetForIterativeTasks(in1, in2, serializer1, serializer2, comparator1, comparator2, pairComparatorFactory);
+			reset = false;
+		}
+		if (joinIterator == null) {
--- End diff --

I can submit a PR with my updated changes. As said above, this time, even though
the drivers are ResettableDrivers, `reset()` does not have any implementation and
everything is done in `prepare()`.
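
A minimal, self-contained sketch of the reuse pattern described here (hypothetical names; Flink's real JoinDriver and ResettableDriver interfaces have more moving parts):

{code}
// Sketch only: reset() stays empty and all re-initialization happens in prepare().
final class ReusableDriverSketch {
    private Object joinIterator;   // created once, reused across supersteps
    private boolean reset;         // marked when a new superstep begins

    void prepare() {
        if (reset) {
            rebindInputs();        // point the existing iterator at the new inputs
            reset = false;
        }
        if (joinIterator == null) {
            joinIterator = createIterator();   // first superstep only
        }
    }

    void markReset() { reset = true; }   // the actual work is deferred to prepare()

    private void rebindInputs() { /* reuse already-allocated memory segments */ }
    private Object createIterator() { return new Object(); }
}
{code}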



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3322) MemoryManager creates too much GC pressure with iterative jobs

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508622#comment-15508622 ]

ASF GitHub Bot commented on FLINK-3322:
---

Github user ramkrish86 commented on a diff in the pull request:

https://github.com/apache/flink/pull/2510#discussion_r79755676
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/operators/JoinDriver.java ---
@@ -128,17 +130,22 @@ public void prepare() throws Exception{

 			ConfigConstants.DEFAULT_RUNTIME_HASH_JOIN_BLOOM_FILTERS);

 		// create and return joining iterator according to provided local strategy.
-		if (objectReuseEnabled) {
-			switch (ls) {
-				case INNER_MERGE:
-					this.joinIterator = new ReusingMergeInnerJoinIterator<>(in1, in2,
+		if (reset) {
+			resetForIterativeTasks(in1, in2, serializer1, serializer2, comparator1, comparator2, pairComparatorFactory);
+			reset = false;
+		}
+		if (joinIterator == null) {
--- End diff --

I can submit a PR with my updated changes. As said above, this time, even though
the drivers are ResettableDrivers, `reset()` does not have any implementation and
everything is done in `prepare()`.



> MemoryManager creates too much GC pressure with iterative jobs
> --
>
> Key: FLINK-3322
> URL: https://issues.apache.org/jira/browse/FLINK-3322
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 1.0.0
>Reporter: Gabor Gevay
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.0
>
> Attachments: FLINK-3322.docx, FLINK-3322_reusingmemoryfordrivers.docx
>
>
> When taskmanager.memory.preallocate is false (the default), released memory 
> segments are not added to a pool, but the GC is expected to take care of 
> them. This puts too much pressure on the GC with iterative jobs, where the 
> operators reallocate all memory at every superstep.
> See the following discussion on the mailing list:
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Memory-manager-behavior-in-iterative-jobs-tt10066.html
> Reproducing the issue:
> https://github.com/ggevay/flink/tree/MemoryManager-crazy-gc
> The class to start is malom.Solver. If you increase the memory given to the 
> JVM from 1 to 50 GB, performance gradually degrades by more than 10 times. 
> (It will generate some lookup tables in /tmp on first run for a few minutes.) 
> (I think the slowdown might also depend somewhat on 
> taskmanager.memory.fraction, because more unused non-managed memory results 
> in rarer GCs.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2369: [FLINK-4035] Add a streaming connector for Apache Kafka 0...

2016-09-20 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2369
  
@cjstehno I would expect this to be in the 1.2.0 major release, which would 
probably be ~2 months from now according to Flink's past release cycle. The 
Flink community usually doesn't release major new features like this between 
minor bugfix releases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508526#comment-15508526 ]

ASF GitHub Bot commented on FLINK-4035:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2369
  
@cjstehno I would expect this to be in the 1.2.0 major release, which would 
probably be ~2 months from now according to Flink's past release cycle. The 
Flink community usually doesn't release major new features like this between 
minor bugfix releases.


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.0.3
>Reporter: Elias Levy
>Assignee: Robert Metzger
>Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.
> Published messages now include timestamps, and compressed messages now include
> relative offsets.  As it is now, brokers must decompress publisher-compressed
> messages, assign offsets to them, and recompress them, which is wasteful and
> makes it less likely that compression will be used at all.
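
For context, the producer-side change looks roughly like this (a minimal sketch against the kafka-clients 0.10.0.0 API; broker address, topic, and values are placeholders):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TimestampedProduceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // As of 0.10.0.0, a record can carry an explicit timestamp
            // (topic, partition, timestamp, key, value).
            producer.send(new ProducerRecord<>(
                    "events", null, System.currentTimeMillis(), "key", "value"));
        }
    }
}
{code}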



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2369: [FLINK-4035] Add a streaming connector for Apache Kafka 0...

2016-09-20 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2369
  
Looks like we need to rebase this PR on the recently merged Kerberos 
support.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508517#comment-15508517 ]

ASF GitHub Bot commented on FLINK-4035:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/2369
  
Looks like we need to rebase this PR on the recently merged Kerberos 
support.


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.0.3
>Reporter: Elias Levy
>Assignee: Robert Metzger
>Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.
> Published messages now include timestamps, and compressed messages now include
> relative offsets.  As it is now, brokers must decompress publisher-compressed
> messages, assign offsets to them, and recompress them, which is wasteful and
> makes it less likely that compression will be used at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4632) when yarn nodemanager lost, flink hung

2016-09-20 Thread JIRA

[ https://issues.apache.org/jira/browse/FLINK-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508441#comment-15508441 ]

刘喆 commented on FLINK-4632:
---

The container is killed for two reasons:
1. YARN preemption
2. The YARN NodeManager becomes unhealthy and kills all its Java children

I killed one TaskManager manually; sometimes this reproduces the hang, but
sometimes it restarts successfully.

> when yarn nodemanager lost,  flink hung
> ---
>
> Key: FLINK-4632
> URL: https://issues.apache.org/jira/browse/FLINK-4632
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, Streaming
>Affects Versions: 1.2.0, 1.1.2
> Environment: cdh5.5.1  jdk1.7 flink1.1.2  1.2-snapshot   kafka0.8.2
>Reporter: 刘喆
>
> When running Flink streaming on YARN, using Kafka as the source, it runs well
> at the start. But after a long run, for example 8 hours and 60,000,000+
> messages processed, it hangs: no messages are consumed, one TaskManager is
> CANCELING, and the exception shows:
> org.apache.flink.runtime.io.network.netty.exception.LocalTransportException: 
> connection timeout
>   at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.exceptionCaught(PartitionRequestClientHandler.java:152)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
>   at 
> io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
>   at 
> io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
>   at 
> io.netty.channel.ChannelHandlerAdapter.exceptionCaught(ChannelHandlerAdapter.java:79)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:835)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:87)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: 连接超时 (connection timed out)
>   at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:192)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
>   at 
> io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311)
>   at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
>   at 
> io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
>   ... 6 more
> after applying https://issues.apache.org/jira/browse/FLINK-4625
> it shows:
> java.lang.Exception: TaskManager was lost/killed: 
> ResourceID{resourceId='container_1471620986643_744852_01_001400'} @ 
> 38.slave.adh (dataPort=45349)
>   at 
> org.apache.flink.runtime.instance.SimpleSlot.releaseSlot(SimpleSlot.java:162)
>   at 
> org.apache.flink.runtime.instance.SlotSharingGroupAssignment.releaseSharedSlot(SlotSharingGroupAssignment.java:533)
>   at 
> org.apache.flink.runtime.instance.SharedSlot.releaseSlot(SharedSlot.java:138)
>   at 
> org.apache.flink.runtime.instance.Instance.markDead(Instance.java:167)
>   at 
> 

[jira] [Assigned] (FLINK-4406) Implement job master registration at resource manager

2016-09-20 Thread zhuhaifeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuhaifeng reassigned FLINK-4406:
-

Assignee: zhuhaifeng

> Implement job master registration at resource manager
> -
>
> Key: FLINK-4406
> URL: https://issues.apache.org/jira/browse/FLINK-4406
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Wenlong Lyu
>Assignee: zhuhaifeng
>
> The Job Master needs to register with the Resource Manager when starting, then
> watch leadership changes of the RM and trigger re-registration.
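
A rough, hypothetical sketch of that flow (none of these names are Flink's actual classes; retries and failure handling are elided):

{code}
// Hypothetical registration flow: register at startup, re-register on leader change.
interface LeaderListener { void onNewLeader(String leaderAddress); }

final class JobMasterRegistrationSketch implements LeaderListener {
    private volatile String resourceManagerAddress;

    void start(String initialRmAddress) {
        onNewLeader(initialRmAddress);   // initial registration at startup
    }

    @Override
    public void onNewLeader(String leaderAddress) {
        // RM leadership changed: remember the new leader and re-register
        resourceManagerAddress = leaderAddress;
        register(leaderAddress);
    }

    private void register(String address) {
        // send a registration message; a real implementation would retry with backoff
        System.out.println("registering job master at resource manager " + address);
    }
}
{code}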



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4629) Kafka v 0.10 Support

2016-09-20 Thread Tzu-Li (Gordon) Tai (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508374#comment-15508374 ]

Tzu-Li (Gordon) Tai commented on FLINK-4629:


I just recalled that I have actually replied to you on the mailing list a few 
days ago: 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Kafka-v-0-10-Support-td13528.html.
Could it be that you did not subscribe to the mailing list, so you didn't 
receive the reply?

> Kafka v 0.10 Support
> 
>
> Key: FLINK-4629
> URL: https://issues.apache.org/jira/browse/FLINK-4629
> Project: Flink
>  Issue Type: Wish
>Reporter: Mariano Gonzalez
>Priority: Minor
>
> I couldn't find any repo or documentation about when Flink will start 
> supporting Kafka v 0.10.
> Is there any document you can point me to where I can see Flink's
> roadmap?
> Thanks in advance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2332: [FLINK-2055] Implement Streaming HBaseSink

2016-09-20 Thread delding
Github user delding commented on the issue:

https://github.com/apache/flink/pull/2332
  
Hi @fhueske , I have updated this PR to address your comments. In this
change, only one MutationActions and two ArrayLists are needed for the entire
stream, and MutationActions is now resettable. But I replaced `MutationActions
actions(IN value)` with `void actions(IN value, MutationActions actions)` instead
of `void actions(IN value, List<Mutation> actions)` as you suggested, because in
this case the user can still utilize MutationActions's API to handle the
Mutation-creation logic, which makes it easier for users to code. I also added a
connect() method as @ramkrish86 suggested, but didn't explicitly check the
existence of the table in the code, because once Admin#tableExists is added, a
NoSuchColumnFamilyException is thrown while running the example. I have no idea
why this happens... For now, if the table doesn't exist, a table-does-not-exist
exception is not thrown until HBaseClient#send() is called.
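
For reference, the rough shape of the callback being discussed, as inferred from this comment (signatures are approximations, not the PR's actual code):

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;

// The user fills a resettable MutationActions instead of returning a new list per record.
interface HBaseMapperSketch<IN> {
    void actions(IN value, MutationActionsSketch actions);
}

// Simplified stand-in for the PR's resettable MutationActions.
final class MutationActionsSketch {
    private final List<Mutation> mutations = new ArrayList<>();

    MutationActionsSketch addPut(byte[] row, byte[] family, byte[] qualifier, byte[] value) {
        mutations.add(new Put(row).addColumn(family, qualifier, value));
        return this;
    }

    void reset() { mutations.clear(); }   // reused for the entire stream
}
{code}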


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2055) Implement Streaming HBaseSink

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507872#comment-15507872 ]

ASF GitHub Bot commented on FLINK-2055:
---

Github user delding commented on the issue:

https://github.com/apache/flink/pull/2332
  
Hi @fhueske , I have updated this PR to address your comments. In this
change, only one MutationActions and two ArrayLists are needed for the entire
stream, and MutationActions is now resettable. But I replaced `MutationActions
actions(IN value)` with `void actions(IN value, MutationActions actions)` instead
of `void actions(IN value, List<Mutation> actions)` as you suggested, because in
this case the user can still utilize MutationActions's API to handle the
Mutation-creation logic, which makes it easier for users to code. I also added a
connect() method as @ramkrish86 suggested, but didn't explicitly check the
existence of the table in the code, because once Admin#tableExists is added, a
NoSuchColumnFamilyException is thrown while running the example. I have no idea
why this happens... For now, if the table doesn't exist, a table-does-not-exist
exception is not thrown until HBaseClient#send() is called.


> Implement Streaming HBaseSink
> -
>
> Key: FLINK-2055
> URL: https://issues.apache.org/jira/browse/FLINK-2055
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming, Streaming Connectors
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Erli Ding
>
> As per : 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Write-Stream-to-HBase-td1300.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-09-20 Thread Simone Robutti (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507774#comment-15507774 ]

Simone Robutti commented on FLINK-4035:
---

The PR for Kerberos integration for Kafka 0.9 has been merged recently. I don't
know how it works, but maybe the work to make it compatible with this
component should be done in this PR, so we don't have partial support for
Kerberos on Kafka.

> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.0.3
>Reporter: Elias Levy
>Assignee: Robert Metzger
>Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.
> Published messages now include timestamps, and compressed messages now include
> relative offsets.  As it is now, brokers must decompress publisher-compressed
> messages, assign offsets to them, and recompress them, which is wasteful and
> makes it less likely that compression will be used at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-3670) Kerberos: Improving long-running streaming jobs

2016-09-20 Thread Vijay Srinivasaraghavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Srinivasaraghavan resolved FLINK-3670.

   Resolution: Fixed
Fix Version/s: 1.2.0

Resolved as part of FLINK-3929

> Kerberos: Improving long-running streaming jobs
> ---
>
> Key: FLINK-3670
> URL: https://issues.apache.org/jira/browse/FLINK-3670
> Project: Flink
>  Issue Type: Improvement
>  Components: Client, Local Runtime
>Reporter: Maximilian Michels
>Assignee: Vijay Srinivasaraghavan
> Fix For: 1.2.0
>
>
> We have seen in the past that Hadoop's delegation tokens are subject to a
> number of subtle token-renewal bugs. In addition, they have a maximum lifetime
> that can be worked around, but this is very inconvenient for the user.
> As per [mailing list
> discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Kerberos-for-Streaming-amp-Kafka-td10906.html],
> a way to work around the maximum lifetime of DelegationTokens would be to
> pass the Kerberos principal and keytab upon job submission. A daemon could
> then periodically renew the ticket.
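
A minimal sketch of that keytab approach using Hadoop's UserGroupInformation (principal, keytab path, and renewal interval are placeholders):

{code}
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabRenewalSketch {
    public static void main(String[] args) throws Exception {
        // log in from the keytab passed at job submission
        UserGroupInformation.loginUserFromKeytab(
                "flink/host@EXAMPLE.COM", "/etc/security/flink.keytab");

        // a daemon periodically re-checks the TGT and re-logs in before expiry
        Thread renewer = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(60_000);
                    UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        renewer.setDaemon(true);
        renewer.start();
    }
}
{code}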



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-3670) Kerberos: Improving long-running streaming jobs

2016-09-20 Thread Vijay Srinivasaraghavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Srinivasaraghavan reassigned FLINK-3670:
--

Assignee: Vijay Srinivasaraghavan  (was: Eron Wright )

> Kerberos: Improving long-running streaming jobs
> ---
>
> Key: FLINK-3670
> URL: https://issues.apache.org/jira/browse/FLINK-3670
> Project: Flink
>  Issue Type: Improvement
>  Components: Client, Local Runtime
>Reporter: Maximilian Michels
>Assignee: Vijay Srinivasaraghavan
>
> We have seen in the past that Hadoop's delegation tokens are subject to a
> number of subtle token-renewal bugs. In addition, they have a maximum lifetime
> that can be worked around, but this is very inconvenient for the user.
> As per [mailing list
> discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Kerberos-for-Streaming-amp-Kafka-td10906.html],
> a way to work around the maximum lifetime of DelegationTokens would be to
> pass the Kerberos principal and keytab upon job submission. A daemon could
> then periodically renew the ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-3239) Support for Kerberos enabled Kafka 0.9.0.0

2016-09-20 Thread Vijay Srinivasaraghavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Srinivasaraghavan resolved FLINK-3239.

   Resolution: Fixed
Fix Version/s: 1.2.0

Resolved as part of FLINK-3929

> Support for Kerberos enabled Kafka 0.9.0.0
> --
>
> Key: FLINK-3239
> URL: https://issues.apache.org/jira/browse/FLINK-3239
> Project: Flink
>  Issue Type: New Feature
>Reporter: Niels Basjes
>Assignee: Vijay Srinivasaraghavan
> Fix For: 1.2.0
>
> Attachments: flink3239-prototype.patch
>
>
> In Kafka 0.9.0.0, support for Kerberos was added (KAFKA-1686).
> Request: Allow Flink to forward/manage the Kerberos tickets for Kafka
> correctly so that we can use Kafka in a secured environment.
> I expect the needed changes to be similar to FLINK-2977, which implements the
> same support for HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #934: Framesize fix

2016-09-20 Thread kl0u
Github user kl0u closed the pull request at:

https://github.com/apache/flink/pull/934


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-3929) Support for Kerberos Authentication with Keytab Credential

2016-09-20 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels resolved FLINK-3929.
---
   Resolution: Fixed
Fix Version/s: 1.2.0

> Support for Kerberos Authentication with Keytab Credential
> --
>
> Key: FLINK-3929
> URL: https://issues.apache.org/jira/browse/FLINK-3929
> Project: Flink
>  Issue Type: New Feature
>Reporter: Eron Wright 
>Assignee: Vijay Srinivasaraghavan
>  Labels: kerberos, security
> Fix For: 1.2.0
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> _This issue is part of a series of improvements detailed in the [Secure Data 
> Access|https://docs.google.com/document/d/1-GQB6uVOyoaXGwtqwqLV8BHDxWiMO2WnVzBoJ8oPaAs/edit?usp=sharing]
>  design doc._
> Add support for a keytab credential to be associated with the Flink cluster, 
> to facilitate:
> - Kerberos-authenticated data access for connectors
> - Kerberos-authenticated ZooKeeper access
> Support both the standalone and YARN deployment modes.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3929) Support for Kerberos Authentication with Keytab Credential

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507665#comment-15507665 ]

ASF GitHub Bot commented on FLINK-3929:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2275


> Support for Kerberos Authentication with Keytab Credential
> --
>
> Key: FLINK-3929
> URL: https://issues.apache.org/jira/browse/FLINK-3929
> Project: Flink
>  Issue Type: New Feature
>Reporter: Eron Wright 
>Assignee: Vijay Srinivasaraghavan
>  Labels: kerberos, security
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> _This issue is part of a series of improvements detailed in the [Secure Data 
> Access|https://docs.google.com/document/d/1-GQB6uVOyoaXGwtqwqLV8BHDxWiMO2WnVzBoJ8oPaAs/edit?usp=sharing]
>  design doc._
> Add support for a keytab credential to be associated with the Flink cluster, 
> to facilitate:
> - Kerberos-authenticated data access for connectors
> - Kerberos-authenticated ZooKeeper access
> Support both the standalone and YARN deployment modes.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-09-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2275


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-4640) Serialization of the initialValue of a Fold on WindowedStream fails

2016-09-20 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-4640:
-
Description: 
The following program

{code}
DataStream<Tuple2<String, Long>> src = env.fromElements(new Tuple2<String, Long>("a", 1L));

src
  .keyBy(1)
  .timeWindow(Time.minutes(5))
  .fold(TreeMultimap.create(), new FoldFunction<Tuple2<String, Long>, TreeMultimap<Long, String>>() {
    @Override
    public TreeMultimap<Long, String> fold(
        TreeMultimap<Long, String> topKSoFar,
        Tuple2<String, Long> itemCount) throws Exception
    {
      String item = itemCount.f0;
      Long count = itemCount.f1;
      topKSoFar.put(count, item);
      if (topKSoFar.keySet().size() > 10) {
        topKSoFar.removeAll(topKSoFar.keySet().first());
      }
      return topKSoFar;
    }
  });
{code}

throws this exception

{quote}
Caused by: java.lang.RuntimeException: Could not add value to folding state.
at 
org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:91)
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:382)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:176)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:266)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
com.google.common.collect.AbstractMapBasedMultimap.put(AbstractMapBasedMultimap.java:192)
at 
com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:121)
at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:78)
at 
org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:115)
at 
org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:109)
at 
org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:85)
... 6 more
{quote}

The exception is caused because the initial value was not correctly 
deserialized and is {{null}}.

The user reporting this issue said that using the same {{FoldFunction}} on a 
{{KeyedStream}} (without a window) works fine.

I tracked the problem down to the serialization of the {{StateDescriptor}}, 
i.e., the {{writeObject()}} and {{readObject()}} methods. The methods use 
Flink's TypeSerializers to serialize the default value. In case of the 
{{TreeMultiMap}} this is the {{KryoSerializer}} which fails to read the 
serialized data for some reason.

A quick workaround to solve this issue would be to check if the default value 
implements {{Serializable}} and use Java Serialization in this case. However, 
it would be good to track the root cause of this problem.

  was:
The following program

{code}
DataStream<Tuple2<String, Long>> src = env.fromElements(new Tuple2<String, Long>("a", 1L));

src
  .keyBy(1)
  .timeWindow(Time.minutes(5))
  .fold(TreeMultimap.create(), new FoldFunction<Tuple2<String, Long>, TreeMultimap<Long, String>>() {
    @Override
    public TreeMultimap<Long, String> fold(
        TreeMultimap<Long, String> topKSoFar,
        Tuple2<String, Long> itemCount) throws Exception
    {
      String item = itemCount.f0;
      Long count = itemCount.f1;
      topKSoFar.put(count, item);
      if (topKSoFar.keySet().size() > 10) {
        topKSoFar.removeAll(topKSoFar.keySet().first());
      }
      return topKSoFar;
    }
  });
{code}

throws this exception

{quote}
Caused by: java.lang.RuntimeException: Could not add value to folding state.
at 
org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:91)
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:382)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:176)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:266)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
com.google.common.collect.AbstractMapBasedMultimap.put(AbstractMapBasedMultimap.java:192)
at 
com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:121)
at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:78)
at 
org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:115)
at 

[jira] [Resolved] (FLINK-2662) CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."

2016-09-20 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske resolved FLINK-2662.
--
   Resolution: Fixed
Fix Version/s: 1.1.3
   1.2.0

Fixed for 1.1.3 with a7f6594b6b47b91242cbb0a13ea4efc5508adcfc
Fixed for 1.2.0 with 303f6fee99b731dd138e37513705271f97f76d72

> CompilerException: "Bug: Plan generation for Unions picked a ship strategy 
> between binary plan operators."
> --
>
> Key: FLINK-2662
> URL: https://issues.apache.org/jira/browse/FLINK-2662
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Gabor Gevay
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 1.2.0, 1.1.3, 1.0.0
>
> Attachments: FlinkBug.scala
>
>
> I have a Flink program which throws the exception in the jira title. Full 
> text:
> Exception in thread "main" org.apache.flink.optimizer.CompilerException: Bug: 
> Plan generation for Unions picked a ship strategy between binary plan 
> operators.
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.collect(BinaryUnionReplacer.java:113)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:72)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:170)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.WorksetIterationPlanNode.acceptForStepFunction(WorksetIterationPlanNode.java:194)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:49)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:162)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:127)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:520)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:402)
>   at 
> org.apache.flink.client.LocalExecutor.getOptimizerPlanAsJSON(LocalExecutor.java:202)
>   at 
> org.apache.flink.api.java.LocalEnvironment.getExecutionPlan(LocalEnvironment.java:63)
>   at malom.Solver.main(Solver.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> The execution plan:
> http://compalg.inf.elte.hu/~ggevay/flink/plan_3_4_0_0_without_verif.txt
> (I obtained this by commenting out the line that throws the exception)
> The code is here:
> https://github.com/ggevay/flink/tree/plan-generation-bug
> The class to run is "Solver". It needs a command line argument, which is a 
> directory where it would write output. (On first run, it generates some 
> lookup tables for a few minutes, which are then placed in /tmp/movegen)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2662) CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507622#comment-15507622 ]

ASF GitHub Bot commented on FLINK-2662:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2508


> CompilerException: "Bug: Plan generation for Unions picked a ship strategy 
> between binary plan operators."
> --
>
> Key: FLINK-2662
> URL: https://issues.apache.org/jira/browse/FLINK-2662
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Gabor Gevay
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 1.0.0
>
> Attachments: FlinkBug.scala
>
>
> I have a Flink program which throws the exception in the jira title. Full 
> text:
> Exception in thread "main" org.apache.flink.optimizer.CompilerException: Bug: 
> Plan generation for Unions picked a ship strategy between binary plan 
> operators.
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.collect(BinaryUnionReplacer.java:113)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:72)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:170)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.WorksetIterationPlanNode.acceptForStepFunction(WorksetIterationPlanNode.java:194)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:49)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:162)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:127)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:520)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:402)
>   at 
> org.apache.flink.client.LocalExecutor.getOptimizerPlanAsJSON(LocalExecutor.java:202)
>   at 
> org.apache.flink.api.java.LocalEnvironment.getExecutionPlan(LocalEnvironment.java:63)
>   at malom.Solver.main(Solver.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> The execution plan:
> http://compalg.inf.elte.hu/~ggevay/flink/plan_3_4_0_0_without_verif.txt
> (I obtained this by commenting out the line that throws the exception)
> The code is here:
> https://github.com/ggevay/flink/tree/plan-generation-bug
> The class to run is "Solver". It needs a command line argument, which is a 
> directory where it would write output. (On first run, it generates some 
> lookup tables for a few minutes, which are then placed in /tmp/movegen)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2508: [FLINK-2662] [dataSet] Translate union with multip...

2016-09-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2508


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4361) Introduce Flink's own future abstraction

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507587#comment-15507587 ]

ASF GitHub Bot commented on FLINK-4361:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2472
  
The naming is not super nice, I agree. But there is a clear benefit for
people familiar with Java 8 in sticking with something established.

+1 for this from my side.

Next time we should get involved in the Java community process to make sure
they have better names ;-)


> Introduce Flink's own future abstraction
> 
>
> Key: FLINK-4361
> URL: https://issues.apache.org/jira/browse/FLINK-4361
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Reporter: Stephan Ewen
>Assignee: Till Rohrmann
>
> In order to keep the abstraction Scala-independent, we should not rely on
> Scala Futures.
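
A hypothetical sketch of what such an abstraction could look like with Java-8-style combinator names (this is not Flink's actual interface):

{code}
import java.util.function.BiFunction;
import java.util.function.Function;

interface FlinkFutureSketch<T> {
    boolean isDone();
    T get() throws Exception;

    // combinators using the established java.util.concurrent.CompletableFuture names
    <R> FlinkFutureSketch<R> thenApply(Function<? super T, ? extends R> fn);
    <R> FlinkFutureSketch<R> handle(BiFunction<? super T, Throwable, ? extends R> fn);
    <U, R> FlinkFutureSketch<R> thenCombine(
            FlinkFutureSketch<U> other,
            BiFunction<? super T, ? super U, ? extends R> fn);
}
{code}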



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2472: [FLINK-4361] Introduce Flink's own future abstraction

2016-09-20 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2472
  
The naming is not super nice, I agree. But there is a clear benefit for
people familiar with Java 8 in sticking with something established.

+1 for this from my side.

Next time we should get involved in the Java community process to make sure
they have better names ;-)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4644) Deprecate "flink.base.dir.path" from ConfigConstants

2016-09-20 Thread Stephan Ewen (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507579#comment-15507579 ]

Stephan Ewen commented on FLINK-4644:
-

It was initially used by the web frontend, IIRC.

+1 for deprecating it.

> Deprecate "flink.base.dir.path" from ConfigConstants
> 
>
> Key: FLINK-4644
> URL: https://issues.apache.org/jira/browse/FLINK-4644
> Project: Flink
>  Issue Type: Wish
>Affects Versions: 1.2.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.2.0
>
>
> Flink's configuration entry "flink.base.dir.path" is not documented anywhere 
> but present in {{ConfigConstants}}. It is set in the JobManager but it is not 
> accessed anywhere in the code.
> I propose to deprecate the entry from ConfigConstants:
> public static final String FLINK_BASE_DIR_PATH_KEY = "flink.base.dir.path";
> I think this entry can only possibly be abused by users to write into the 
> Flink base dir or rely on some structure of the directory (which is different 
> for standalone, yarn/mesos). The only way I can see such a variable being
> useful is if it is present in the form of an environment variable to make it
> available to other systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4645) Hard to register Kryo Serializers due to generics

2016-09-20 Thread Stephan Ewen (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507572#comment-15507572 ]

Stephan Ewen commented on FLINK-4645:
-

I think both methods have the same erasure (binary signature): {{public void 
registerTypeWithKryoSerializer(Class type, Class serializerClass)}}.
The japicmp plugin also classifies them as equivalent.

Interestingly, this is a case where binary compatibility is preserved, but
whether source compatibility is preserved is trickier to answer. I think it is,
though, because {{Class<? extends Serializer>}} is strictly more generic than
{{Class<? extends Serializer<?>>}}.
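
To make the source-compatibility point concrete, a minimal standalone demo (it relies only on the fact that Kryo's JavaSerializer extends the raw Serializer type):

{code}
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.serializers.JavaSerializer;

public class SignatureDemo {
    // old signature: bound requires a subtype of Serializer<?>
    static void oldSignature(Class<?> type, Class<? extends Serializer<?>> serializerClass) {}
    // new signature: raw bound also accepts subtypes of the raw Serializer
    static void newSignature(Class<?> type, Class<? extends Serializer> serializerClass) {}

    public static void main(String[] args) {
        // oldSignature(Object.class, JavaSerializer.class);   // does not compile:
        // JavaSerializer extends the raw Serializer, which is not a subtype of Serializer<?>
        newSignature(Object.class, JavaSerializer.class);      // compiles without a cast
    }
}
{code}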


> Hard to register Kryo Serializers due to generics
> -
>
> Key: FLINK-4645
> URL: https://issues.apache.org/jira/browse/FLINK-4645
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.2
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> It currently does not work to do this:
> {code}
> env.registerTypeWithKryoSerializer(TreeMultimap.class, JavaSerializer.class);
> {code}
> instead one needs to do that:
> {code}
> env.registerTypeWithKryoSerializer(TreeMultimap.class, (Class<? extends Serializer<?>>) JavaSerializer.class);
> {code}
> The fix would be to change the signature of the environment method from
> {code}
> public void registerTypeWithKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass)
> {code}
> to
> {code}
> public void registerTypeWithKryoSerializer(Class<?> type, Class<? extends Serializer> serializerClass)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3930) Implement Service-Level Authorization

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-3930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507535#comment-15507535 ]

ASF GitHub Bot commented on FLINK-3930:
---

Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@rmetzger I have added an internals documentation section and provided details
on how the secure cookie is implemented. I will address the missing Netty data
transfer secure cookie part in FLINK-4635. Please take a look.


> Implement Service-Level Authorization
> -
>
> Key: FLINK-3930
> URL: https://issues.apache.org/jira/browse/FLINK-3930
> Project: Flink
>  Issue Type: New Feature
>  Components: Security
>Reporter: Eron Wright 
>Assignee: Vijay Srinivasaraghavan
>  Labels: security
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> _This issue is part of a series of improvements detailed in the [Secure Data 
> Access|https://docs.google.com/document/d/1-GQB6uVOyoaXGwtqwqLV8BHDxWiMO2WnVzBoJ8oPaAs/edit?usp=sharing]
>  design doc._
> Service-level authorization is the initial authorization mechanism to ensure 
> clients (or servers) connecting to the Flink cluster are authorized to do so. 
>   The purpose is to prevent a cluster from being used by an unauthorized 
> user, whether to execute jobs, disrupt cluster functionality, or gain access 
> to secrets stored within the cluster.
> Implement service-level authorization as described in the design doc.
> - Introduce a shared secret cookie
> - Enable Akka security cookie
> - Implement data transfer authentication
> - Secure the web dashboard
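
A hypothetical sketch of the shared-secret check (not the implementation in the PR; the cookie transport and configuration are elided):

{code}
import java.security.MessageDigest;

// Every connecting party presents a cookie that is compared, in constant time,
// against the cluster's configured shared secret.
final class SecureCookieCheckSketch {
    private final byte[] expected;

    SecureCookieCheckSketch(byte[] clusterSecret) { this.expected = clusterSecret; }

    boolean authorize(byte[] presentedCookie) {
        // MessageDigest.isEqual is a constant-time comparison, avoiding timing leaks
        return presentedCookie != null && MessageDigest.isEqual(expected, presentedCookie);
    }
}
{code}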



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-09-20 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@rmetzger I have added an internals documentation section and provided details
on how the secure cookie is implemented. I will address the missing Netty data
transfer secure cookie part in FLINK-4635. Please take a look.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4640) Serialization of the initialValue of a Fold on WindowedStream fails

2016-09-20 Thread Stephan Ewen (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507442#comment-15507442 ]

Stephan Ewen commented on FLINK-4640:
-

Have a fix and nice tests. Waiting for the CI to give green light, then merging 
this fix.

> Serialization of the initialValue of a Fold on WindowedStream fails
> ---
>
> Key: FLINK-4640
> URL: https://issues.apache.org/jira/browse/FLINK-4640
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Fabian Hueske
>Priority: Blocker
> Fix For: 1.2.0, 1.1.3
>
>
> The following program
> {code}
> DataStream<Tuple2<String, Long>> src = env.fromElements(new Tuple2<String, Long>("a", 1L));
> src
>   .keyBy(1)
>   .timeWindow(Time.minutes(5))
>   .fold(TreeMultimap.create(), new FoldFunction<Tuple2<String, Long>, TreeMultimap<Long, String>>() {
>     @Override
>     public TreeMultimap<Long, String> fold(
>         TreeMultimap<Long, String> topKSoFar,
>         Tuple2<String, Long> itemCount) throws Exception
>     {
>       String item = itemCount.f0;
>       Long count = itemCount.f1;
>       topKSoFar.put(count, item);
>       if (topKSoFar.keySet().size() > 10) {
>         topKSoFar.removeAll(topKSoFar.keySet().first());
>       }
>       return topKSoFar;
>     }
>   });
> {code}
> throws this exception
> {quote}
> Caused by: java.lang.RuntimeException: Could not add value to folding state.
>   at 
> org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:91)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:382)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:176)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:266)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> com.google.common.collect.AbstractMapBasedMultimap.put(AbstractMapBasedMultimap.java:192)
>   at 
> com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:121)
>   at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:78)
>   at 
> org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:115)
>   at 
> org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:109)
>   at 
> org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:85)
>   ... 6 more
> {quote}
> The exception is caused because the initial value was not correctly 
> deserialized and is {{null}}.
> Using the same {{FoldFunction}} on a {{KeyedStream}} (without a window) works 
> fine.
> I tracked the problem down to the serialization of the {{StateDescriptor}}, 
> i.e., the {{writeObject()}} and {{readObject()}} methods. The methods use 
> Flink's TypeSerializers to serialize the default value. In case of the 
> {{TreeMultiMap}} this is the {{KryoSerializer}} which fails to read the 
> serialized data for some reason.
> A quick workaround to solve this issue would be to check if the default value 
> implements {{Serializable}} and use Java Serialization in this case. However, 
> it would be good to track the root cause of this problem.
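
A sketch of the suggested workaround inside a hypothetical descriptor class (not Flink's actual StateDescriptor; the TypeSerializer path and the matching readObject() are elided):

{code}
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

abstract class DescriptorSketch<T> implements Serializable {
    transient T defaultValue;

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        if (defaultValue instanceof Serializable) {
            // fall back to plain Java serialization for Serializable defaults
            out.writeBoolean(true);
            out.writeObject(defaultValue);
        } else {
            // otherwise keep the TypeSerializer-based path (elided), which is
            // where the KryoSerializer read failure described above occurs
            out.writeBoolean(false);
        }
    }
}
{code}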



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4640) Serialization of the initialValue of a Fold on WindowedStream fails

2016-09-20 Thread Stephan Ewen (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507323#comment-15507323 ]

Stephan Ewen commented on FLINK-4640:
-

Okay, so the {{keyBy(1).fold(...)}} case can be fixed via 
{{env.registerTypeWithKryoSerializer(TreeMultimap.class, 
JavaSerializer.class);}}

The Window operator still fails if I do 
{{keyBy(1).timeWindow(Time.seconds(10)).fold(...)}}.
The bug is that serializer registrations are not properly forwarded.

Fixing that...

> Serialization of the initialValue of a Fold on WindowedStream fails
> ---
>
> Key: FLINK-4640
> URL: https://issues.apache.org/jira/browse/FLINK-4640
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Fabian Hueske
>Priority: Blocker
> Fix For: 1.2.0, 1.1.3
>
>
> The following program
> {code}
> DataStream<Tuple2<String, Long>> src = env.fromElements(new Tuple2<String, Long>("a", 1L));
> src
>   .keyBy(1)
>   .timeWindow(Time.minutes(5))
>   .fold(TreeMultimap.create(), new FoldFunction<Tuple2<String, Long>, TreeMultimap<Long, String>>() {
>     @Override
>     public TreeMultimap<Long, String> fold(
>         TreeMultimap<Long, String> topKSoFar,
>         Tuple2<String, Long> itemCount) throws Exception
>     {
>       String item = itemCount.f0;
>       Long count = itemCount.f1;
>       topKSoFar.put(count, item);
>       if (topKSoFar.keySet().size() > 10) {
>         topKSoFar.removeAll(topKSoFar.keySet().first());
>       }
>       return topKSoFar;
>     }
>   });
> {code}
> throws this exception
> {quote}
> Caused by: java.lang.RuntimeException: Could not add value to folding state.
>   at 
> org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:91)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:382)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:176)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:266)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> com.google.common.collect.AbstractMapBasedMultimap.put(AbstractMapBasedMultimap.java:192)
>   at 
> com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:121)
>   at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:78)
>   at 
> org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:115)
>   at 
> org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:109)
>   at 
> org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:85)
>   ... 6 more
> {quote}
> The exception is caused because the initial value was not correctly 
> deserialized and is {{null}}.
> Using the same {{FoldFunction}} on a {{KeyedStream}} (without a window) works 
> fine.
> I tracked the problem down to the serialization of the {{StateDescriptor}}, 
> i.e., the {{writeObject()}} and {{readObject()}} methods. The methods use 
> Flink's TypeSerializers to serialize the default value. In case of the 
> {{TreeMultimap}} this is the {{KryoSerializer}}, which fails to read the 
> serialized data for some reason.
> A quick workaround to solve this issue would be to check if the default value 
> implements {{Serializable}} and use Java Serialization in this case. However, 
> it would be good to track the root cause of this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4645) Hard to register Kryo Serializers due to generics

2016-09-20 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507285#comment-15507285
 ] 

Greg Hogan commented on FLINK-4645:
---

{{registerTypeWithKryoSerializer}} is in the public API. Is this not a breaking 
change?

> Hard to register Kryo Serializers due to generics
> -
>
> Key: FLINK-4645
> URL: https://issues.apache.org/jira/browse/FLINK-4645
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.2
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> It currently does not work to do this:
> {code}
> env.registerTypeWithKryoSerializer(TreeMultimap.class, JavaSerializer.class);
> {code}
> instead one needs to do this:
> {code}
> env.registerTypeWithKryoSerializer(TreeMultimap.class, (Class<? extends Serializer<?>>) JavaSerializer.class);
> {code}
> The fix would be to change the signature of the environment method from
> {code}
> public void registerTypeWithKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass)
> {code}
> to
> {code}
> public void registerTypeWithKryoSerializer(Class<?> type, Class<? extends Serializer> serializerClass)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3988) Add data set connector to DistributedLog

2016-09-20 Thread John Lonergan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507259#comment-15507259
 ] 

John Lonergan commented on FLINK-3988:
--

Anyone fancy collaborating?

> Add data set connector to DistributedLog
> 
>
> Key: FLINK-3988
> URL: https://issues.apache.org/jira/browse/FLINK-3988
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jia Zhai
>
> I would like to add a connector to DistributedLog, which was recently 
> published by Twitter.
> All the information about DistributedLog can be found here: 
> http://distributedlog.io/html
> And this JIRA ticket is to track the data set connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3987) Add data stream connector to DistributedLog

2016-09-20 Thread John Lonergan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507255#comment-15507255
 ] 

John Lonergan commented on FLINK-3987:
--

Anyone fancy collaborating?

> Add data stream connector to DistributedLog
> ---
>
> Key: FLINK-3987
> URL: https://issues.apache.org/jira/browse/FLINK-3987
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jia Zhai
>
> I would like to add a connector to DistributedLog, which was recently 
> published by Twitter.
> All the information about DistributedLog can be found here: 
> http://distributedlog.io/html
> And this JIRA ticket is to track the data stream connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4645) Hard to register Kryo Serializers due to generics

2016-09-20 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-4645:
---

 Summary: Hard to register Kryo Serializers due to generics
 Key: FLINK-4645
 URL: https://issues.apache.org/jira/browse/FLINK-4645
 Project: Flink
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.2
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.2.0


It currently does not work to do this:
{code}
env.registerTypeWithKryoSerializer(TreeMultimap.class, JavaSerializer.class);
{code}
instead one needs to do this:
{code}
env.registerTypeWithKryoSerializer(TreeMultimap.class, (Class<? extends Serializer<?>>) JavaSerializer.class);
{code}

The fix would be to change the signature of the environment method from
{code}
public void registerTypeWithKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass)
{code}
to
{code}
public void registerTypeWithKryoSerializer(Class<?> type, Class<? extends Serializer> serializerClass)
{code}
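
To illustrate why the cast is needed, a hedged sketch (Kryo's JavaSerializer 
is declared against the raw Serializer type, which is what trips up the 
generic bound):

{code}
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.serializers.JavaSerializer;

// JavaSerializer.class has type Class<JavaSerializer>, and JavaSerializer
// extends the raw Serializer, so:
Class<? extends Serializer> ok = JavaSerializer.class;        // compiles
// Class<? extends Serializer<?>> bad = JavaSerializer.class; // error: incompatible types
{code}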



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4629) Kafka v 0.10 Support

2016-09-20 Thread Mariano Gonzalez (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507154#comment-15507154
 ] 

Mariano Gonzalez commented on FLINK-4629:
-

[~tzulitai] The mailing list was my first option, but nobody replied to me; 
that is why I had to create this ticket :(

> Kafka v 0.10 Support
> 
>
> Key: FLINK-4629
> URL: https://issues.apache.org/jira/browse/FLINK-4629
> Project: Flink
>  Issue Type: Wish
>Reporter: Mariano Gonzalez
>Priority: Minor
>
> I couldn't find any repo or documentation about when Flink will start 
> supporting Kafka v 0.10.
> Is there any document you can point me to where I can see Flink's 
> roadmap?
> Thanks in advance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2458: [FLINK-4560] enforcer java version as 1.7

2016-09-20 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2458
  
We already gave up Java 6, and setting the source and target version in the 
Maven compiler plugin already prevents Java 6 usage. That is why I am 
wondering...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4560) enforcer java version as 1.7

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507137#comment-15507137
 ] 

ASF GitHub Bot commented on FLINK-4560:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2458
  
We already gave up Java 6, and setting the source and target version in the 
Maven compiler plugin already prevents Java 6 usage. That is why I am 
wondering...


> enforcer java version as 1.7
> 
>
> Key: FLINK-4560
> URL: https://issues.apache.org/jira/browse/FLINK-4560
> Project: Flink
>  Issue Type: Improvement
>Reporter: shijinkui
>
> 1. maven-enforcer-plugin: add a Java version enforcement rule
> 2. maven-enforcer-plugin: upgrade the plugin version to 1.4.1
> Explicitly require the Java version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4644) Deprecate "flink.base.dir.path" from ConfigConstants

2016-09-20 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4644:
-

 Summary: Deprecate "flink.base.dir.path" from ConfigConstants
 Key: FLINK-4644
 URL: https://issues.apache.org/jira/browse/FLINK-4644
 Project: Flink
  Issue Type: Wish
Affects Versions: 1.2.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels
Priority: Minor
 Fix For: 1.2.0


Flink's configuration entry "flink.base.dir.path" is not documented anywhere 
but present in {{ConfigConstants}}. It is set in the JobManager but it is not 
accessed anywhere in the code.

I propose to deprecate the entry from ConfigConstants:

public static final String FLINK_BASE_DIR_PATH_KEY = "flink.base.dir.path";

I think this entry can only possibly be abused by users to write into the Flink 
base dir or to rely on some structure of the directory (which is different for 
standalone and YARN/Mesos). The only way I can see such a variable being useful 
is if it is present in the form of an environment variable that makes it 
available to other systems.
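
A minimal sketch of what the proposed deprecation could look like (the Javadoc 
wording is illustrative):

{code}
/**
 * @deprecated Unused and undocumented; do not rely on the Flink base
 * directory layout, which differs between standalone and YARN/Mesos setups.
 */
@Deprecated
public static final String FLINK_BASE_DIR_PATH_KEY = "flink.base.dir.path";
{code}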



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4640) Serialization of the initialValue of a Fold on WindowedStream fails

2016-09-20 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507104#comment-15507104
 ] 

Stephan Ewen commented on FLINK-4640:
-

I actually reproduced the error BOTH on the {{KeyedStream}} and on the 
{{WindowedStream}}.

The problem is that Kryo cannot properly serialize the {{TreeMultimap}}. It 
uses Objenesis to instantiate the map on deserialization, which leaves a broken 
object that then causes the NullPointerException.
That is a Kryo/Guava incompatibility. Not sure there is anything we can do 
directly about that.

Should be fixable by registering a suitable serializer for the TreeMultimap:
{code}
env.registerTypeWithKryoSerializer(TreeMultimap.class, JavaSerializer.class);
{code}
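
The broken-object behavior can be reproduced directly with Objenesis, which is 
essentially what Kryo does on deserialization (a sketch; the raw-to-generic 
assignment is unchecked):

{code}
import com.google.common.collect.TreeMultimap;
import org.objenesis.ObjenesisStd;

// Objenesis allocates the object without running any constructor, so the
// multimap's internal backing map and comparators are never initialized:
TreeMultimap<Long, String> broken = new ObjenesisStd().newInstance(TreeMultimap.class);
broken.put(1L, "a"); // throws NullPointerException, as in the stack trace below
{code}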

> Serialization of the initialValue of a Fold on WindowedStream fails
> ---
>
> Key: FLINK-4640
> URL: https://issues.apache.org/jira/browse/FLINK-4640
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Fabian Hueske
>Priority: Blocker
> Fix For: 1.2.0, 1.1.3
>
>
> The following program
> {code}
> DataStream<Tuple2<String, Long>> src = env.fromElements(new Tuple2<String, Long>("a", 1L));
> src
>   .keyBy(1)
>   .timeWindow(Time.minutes(5))
>   .fold(TreeMultimap.create(), new FoldFunction<Tuple2<String, Long>, TreeMultimap<Long, String>>() {
> @Override
> public TreeMultimap<Long, String> fold(
> TreeMultimap<Long, String> topKSoFar, 
> Tuple2<String, Long> itemCount) throws Exception 
> {
>   String item = itemCount.f0;
>   Long count = itemCount.f1;
>   topKSoFar.put(count, item);
>   if (topKSoFar.keySet().size() > 10) {
> topKSoFar.removeAll(topKSoFar.keySet().first());
>   }
>   return topKSoFar;
> }
> });
> {code}
> throws this exception
> {quote}
> Caused by: java.lang.RuntimeException: Could not add value to folding state.
>   at 
> org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:91)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:382)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:176)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:266)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> com.google.common.collect.AbstractMapBasedMultimap.put(AbstractMapBasedMultimap.java:192)
>   at 
> com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:121)
>   at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:78)
>   at 
> org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:115)
>   at 
> org.apache.flink.streaming.examples.wordcount.WordCount$1.fold(WordCount.java:109)
>   at 
> org.apache.flink.runtime.state.memory.MemFoldingState.add(MemFoldingState.java:85)
>   ... 6 more
> {quote}
> The exception is caused because the initial value was not correctly 
> deserialized and is {{null}}.
> Using the same {{FoldFunction}} on a {{KeyedStream}} (without a window) works 
> fine.
> I tracked the problem down to the serialization of the {{StateDescriptor}}, 
> i.e., the {{writeObject()}} and {{readObject()}} methods. The methods use 
> Flink's TypeSerializers to serialize the default value. In case of the 
> {{TreeMultimap}} this is the {{KryoSerializer}}, which fails to read the 
> serialized data for some reason.
> A quick workaround to solve this issue would be to check if the default value 
> implements {{Serializable}} and use Java Serialization in this case. However, 
> it would be good to track the root cause of this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2519: [docs] Small improvements to the docs.

2016-09-20 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/2519#discussion_r79654882
  
--- Diff: docs/quickstart/run_example_quickstart.md ---
@@ -125,21 +125,21 @@ public class WikipediaAnalysis {
 }
 {% endhighlight %}
 
-I admit it's very bare bones now but we will fill it as we go. Note, that 
I'll not give
+I admit it's very bare bones now but we will fill it as we go. Note that 
I'll not give
--- End diff --

"bare bones" -> "basic"? Or perhaps "The program is very basic now but we 
will fill it in ..."


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2519: [docs] Small improvements to the docs.

2016-09-20 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/2519#discussion_r79655043
  
--- Diff: docs/quickstart/run_example_quickstart.md ---
@@ -125,21 +125,21 @@ public class WikipediaAnalysis {
 }
 {% endhighlight %}
 
-I admit it's very bare bones now but we will fill it as we go. Note, that 
I'll not give
+I admit it's very bare bones now but we will fill it as we go. Note that 
I'll not give
 import statements here since IDEs can add them automatically. At the end 
of this section I'll show
 the complete code with import statements if you simply want to skip ahead 
and enter that in your
 editor.
 
 The first step in a Flink program is to create a 
`StreamExecutionEnvironment`
 (or `ExecutionEnvironment` if you are writing a batch job). This can be 
used to set execution
-parameters and create sources for reading from external systems. So let's 
go ahead, add
+parameters and create sources for reading from external systems. So let's 
go ahead, and add
--- End diff --

Remove the comma? "go ahead and add"


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2519: [docs] Small improvements to the docs.

2016-09-20 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/2519#discussion_r79655828
  
--- Diff: docs/quickstart/run_example_quickstart.md ---
@@ -277,8 +277,8 @@ The number in front of each line tells you on which 
parallel instance of the pri
 was produced.
 
 This should get you started with writing your own Flink programs. You can 
check out our guides
-about [basic concepts]{{{ site.baseurl }}/apis/common/index.html} and the
-[DataStream API]{{{ site.baseurl }}/apis/streaming/index.html} if you want 
to learn more. Stick
+about [basic concepts]({{ site.baseurl }}/apis/common/index.html) and the
+[DataStream API]({{ site.baseurl }}/apis/streaming/index.html) if you want 
to learn more. Stick
--- End diff --

"if you want to learn more" -> "to learn more"

We can assume our users want to learn more :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4361) Introduce Flink's own future abstraction

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507066#comment-15507066
 ] 

ASF GitHub Bot commented on FLINK-4361:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2472#discussion_r79656257
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/concurrent/Future.java ---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.concurrent;
+
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Flink's basic future abstraction. A future represents an asynchronous 
operation whose result
+ * will be contained in this instance upon completion.
+ *
+ * @param <T> type of the future's result
+ */
+public interface Future<T> {
+
+   /**
+* Checks if the future has been completed. A future is completed, if 
the result has been
+* delivered.
+*
+* @return true if the future is completed; otherwise false
+*/
+   boolean isDone();
--- End diff --

I see. After all, it's a minor naming issue.
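
For comparison, {{isDone}} matches the JDK naming, where a future also counts 
as done once it has completed exceptionally or been cancelled (a sketch of 
java.util.concurrent, not Flink code):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

ExecutorService pool = Executors.newSingleThreadExecutor();
Future<String> f = pool.submit(() -> "done");
boolean done = f.isDone(); // true once the computation finished, in any way
pool.shutdown();
{code}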


> Introduce Flink's own future abstraction
> 
>
> Key: FLINK-4361
> URL: https://issues.apache.org/jira/browse/FLINK-4361
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Reporter: Stephan Ewen
>Assignee: Till Rohrmann
>
> In order to keep the abstraction Scala Independent, we should not rely on 
> Scala Futures



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2472: [FLINK-4361] Introduce Flink's own future abstract...

2016-09-20 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2472#discussion_r79656257
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/concurrent/Future.java ---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.concurrent;
+
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Flink's basic future abstraction. A future represents an asynchronous 
operation whose result
+ * will be contained in this instance upon completion.
+ *
+ * @param <T> type of the future's result
+ */
+public interface Future<T> {
+
+   /**
+* Checks if the future has been completed. A future is completed, if 
the result has been
+* delivered.
+*
+* @return true if the future is completed; otherwise false
+*/
+   boolean isDone();
--- End diff --

I see. After all, it's a minor naming issue.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4533) Unprotected access to meters in StatsDReporter#report()

2016-09-20 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507024#comment-15507024
 ] 

Chesnay Schepler commented on FLINK-4533:
-

There already is a comment within report() stating what I said.
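
For reference, a hedged sketch of the change the issue suggests (field and 
method names such as {{meters}} and {{reportMeter}} are assumptions based on 
the issue text); note the Won't Fix resolution below:

{code}
@Override
public void report() {
    // mirror AbstractReporter, whose mutations are guarded by AbstractReporter.this
    synchronized (this) {
        for (Map.Entry<Meter, String> entry : meters.entrySet()) {
            reportMeter(entry.getValue(), entry.getKey());
        }
    }
}
{code}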

> Unprotected access to meters in StatsDReporter#report()
> ---
>
> Key: FLINK-4533
> URL: https://issues.apache.org/jira/browse/FLINK-4533
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> Access to meters in AbstractReporter is protected by AbstractReporter.this.
> Access to meters in StatsDReporter#report() should do the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4533) Unprotected access to meters in StatsDReporter#report()

2016-09-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved FLINK-4533.
---
Resolution: Won't Fix

> Unprotected access to meters in StatsDReporter#report()
> ---
>
> Key: FLINK-4533
> URL: https://issues.apache.org/jira/browse/FLINK-4533
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> Access to meters in AbstractReporter is protected by AbstractReporter.this.
> Access to meters in StatsDReporter#report() should do the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4248) CsvTableSource does not support reading SqlTimeTypeInfo types

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507021#comment-15507021
 ] 

ASF GitHub Bot commented on FLINK-4248:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79652411
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/types/parser/SqlTimestampParserTest.java
 ---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+
+import java.sql.Timestamp;
+
+public class SqlTimestampParserTest extends ParserTestBase<Timestamp> {
+
+   @Override
+   public String[] getValidTestValues() {
+   return new String[] {
+   "1970-01-01 00:00:00.000", "1990-10-14 02:42:25.123", 
"1990-10-14 02:42:25.12301",
+   "1990-10-14 02:42:25.12302", "2013-08-12 
14:15:59.478", "2013-08-12 14:15:59.47",
+   "-01-01 00:00:00.000",
+   };
+   }
+
+   @Override
+   public Timestamp[] getValidTestResults() {
+   return new Timestamp[] {
+   Timestamp.valueOf("1970-01-01 00:00:00.000"), 
Timestamp.valueOf("1990-10-14 02:42:25.123"),
+   Timestamp.valueOf("1990-10-14 02:42:25.12301"), 
Timestamp.valueOf("1990-10-14 02:42:25.12302"),
+   Timestamp.valueOf("2013-08-12 14:15:59.478"), 
Timestamp.valueOf("2013-08-12 14:15:59.47"),
+   Timestamp.valueOf("-01-01 00:00:00.000")
+   };
+   }
+
+   @Override
+   public String[] getInvalidTestValues() {
+   return new String[] {
+   " 2013-08-12 14:15:59.479", "2013-08-12 14:15:59.479 ", 
"1970-01-01 00:00::00",
+   "00x00:00", "2013/08/12", "-01-01 00:00:00.f00", 
"2013-08-12 14:15:59.4788",
--- End diff --

Yes, the maximum precision is 9 digits.
The timestamp you suggest is also valid.
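
A quick illustration of that limit with plain java.sql.Timestamp (a sketch of 
JDK behavior, independent of the Flink parser):

{code}
import java.sql.Timestamp;

// Up to 9 fractional digits (nanosecond precision) are accepted:
Timestamp ok = Timestamp.valueOf("2013-08-12 14:15:59.123456789");
// A 10th fractional digit is rejected:
Timestamp bad = Timestamp.valueOf("2013-08-12 14:15:59.1234567891"); // IllegalArgumentException
{code}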


> CsvTableSource does not support reading SqlTimeTypeInfo types
> -
>
> Key: FLINK-4248
> URL: https://issues.apache.org/jira/browse/FLINK-4248
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
>
> The Table API's {{CsvTableSource}} does not support to read all Table API 
> supported data types. For example, it is not possible to read 
> {{SqlTimeTypeInfo}} types via the {{CsvTableSource}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4248) CsvTableSource does not support reading SqlTimeTypeInfo types

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507023#comment-15507023
 ] 

ASF GitHub Bot commented on FLINK-4248:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79652535
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/types/parser/SqlDateParser.java ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+import java.sql.Date;
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * Parses a text field into a {@link java.sql.Date}.
+ */
+@PublicEvolving
+public class SqlDateParser extends FieldParser<Date> {
+
+   private static final Date DATE_INSTANCE = new Date(0L);
+
+   private Date result;
+
+   @Override
+   public int parseField(byte[] bytes, int startPos, int limit, byte[] 
delimiter, Date reusable) {
+   int i = startPos;
+
+   final int delimLimit = limit - delimiter.length + 1;
--- End diff --

I will try to generalize it a bit.
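
For readers following along, delimLimit is the first index at which a complete 
delimiter can no longer start; a small worked sketch (values are illustrative):

{code}
byte[] bytes = "2013-08-12|rest".getBytes();
byte[] delimiter = "|".getBytes();
int limit = bytes.length;                       // 15
int delimLimit = limit - delimiter.length + 1;  // 15 - 1 + 1 = 15
// positions i with i >= delimLimit cannot contain a full delimiter
{code}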


> CsvTableSource does not support reading SqlTimeTypeInfo types
> -
>
> Key: FLINK-4248
> URL: https://issues.apache.org/jira/browse/FLINK-4248
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
>
> The Table API's {{CsvTableSource}} does not support to read all Table API 
> supported data types. For example, it is not possible to read 
> {{SqlTimeTypeInfo}} types via the {{CsvTableSource}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2303: [FLINK-4248] [core] [table] CsvTableSource does no...

2016-09-20 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79652535
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/types/parser/SqlDateParser.java ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+import java.sql.Date;
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * Parses a text field into a {@link java.sql.Date}.
+ */
+@PublicEvolving
+public class SqlDateParser extends FieldParser<Date> {
+
+   private static final Date DATE_INSTANCE = new Date(0L);
+
+   private Date result;
+
+   @Override
+   public int parseField(byte[] bytes, int startPos, int limit, byte[] 
delimiter, Date reusable) {
+   int i = startPos;
+
+   final int delimLimit = limit - delimiter.length + 1;
--- End diff --

I will try to generalize it a bit.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2303: [FLINK-4248] [core] [table] CsvTableSource does no...

2016-09-20 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79652411
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/types/parser/SqlTimestampParserTest.java
 ---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+
+import java.sql.Timestamp;
+
+public class SqlTimestampParserTest extends ParserTestBase<Timestamp> {
+
+   @Override
+   public String[] getValidTestValues() {
+   return new String[] {
+   "1970-01-01 00:00:00.000", "1990-10-14 02:42:25.123", 
"1990-10-14 02:42:25.12301",
+   "1990-10-14 02:42:25.12302", "2013-08-12 
14:15:59.478", "2013-08-12 14:15:59.47",
+   "-01-01 00:00:00.000",
+   };
+   }
+
+   @Override
+   public Timestamp[] getValidTestResults() {
+   return new Timestamp[] {
+   Timestamp.valueOf("1970-01-01 00:00:00.000"), 
Timestamp.valueOf("1990-10-14 02:42:25.123"),
+   Timestamp.valueOf("1990-10-14 02:42:25.12301"), 
Timestamp.valueOf("1990-10-14 02:42:25.12302"),
+   Timestamp.valueOf("2013-08-12 14:15:59.478"), 
Timestamp.valueOf("2013-08-12 14:15:59.47"),
+   Timestamp.valueOf("-01-01 00:00:00.000")
+   };
+   }
+
+   @Override
+   public String[] getInvalidTestValues() {
+   return new String[] {
+   " 2013-08-12 14:15:59.479", "2013-08-12 14:15:59.479 ", 
"1970-01-01 00:00::00",
+   "00x00:00", "2013/08/12", "-01-01 00:00:00.f00", 
"2013-08-12 14:15:59.4788",
--- End diff --

Yes, the maximum precision is 9 digits.
The timestamp you suggest is also valid.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4520) Integrate Siddhi as a lightweight CEP Library

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507010#comment-15507010
 ] 

ASF GitHub Bot commented on FLINK-4520:
---

Github user haoch commented on the issue:

https://github.com/apache/flink/pull/2487
  
@StephanEwen thanks for the comments. I think it's OK to keep this either in 
the core or as a separate project, but my concern is that it may be better for 
community development to centralize qualified libraries together. As an 
alternative solution to the concerns about test stability and dead code, would 
it be possible to create another code repository, say "flink-library"?

**BTW: here are the answers to your questions one by one:**

> How complete is the implementation?

Siddhi is a rich-featured CEP engine with its own community, and maybe the 
only open source CEP solution compatible with the Apache License. This 
library `flink-siddhi` is mainly focused on bringing Siddhi's capabilities to 
Flink users seamlessly by:

- Integrating the Siddhi CEP runtime with the Flink lifecycle
- Mapping schemas and DataStream sources
- Managing state and providing fault tolerance.

So I think it would be extremely lightweight but useful, and the current 
implementation should be almost complete. 

> Would you be up for maintaining this code?

Sure. First of all, I am personally very willing to keep continuously 
contributing to the Flink project in any way.  

Also, we have used Siddhi with distributed streaming systems a lot in 
production, and are currently considering supporting Flink as well, given its 
better state management and window support. So I would continuously maintain 
the code if it is merged; if not, I would maintain it at 
https://github.com/haoch/flink-siddhi to make sure it stays workable.

> Are you building this as an experiment, or building a production use case 
based on Siddhi on Flink?

We use Siddhi in streaming environments in production a lot; it currently 
supports Storm and Spark Streaming, and we are also considering extending it 
to Flink.


> Integrate Siddhi as a lightweight CEP Library
> -
>
> Key: FLINK-4520
> URL: https://issues.apache.org/jira/browse/FLINK-4520
> Project: Flink
>  Issue Type: New Feature
>  Components: CEP
>Affects Versions: 1.2.0
>Reporter: Hao Chen
>Assignee: Hao Chen
>  Labels: cep, library, patch-available
> Fix For: 1.2.0
>
>
> h1. flink-siddhi proposal
> h2. Abstraction
> Siddhi CEP is a lightweight and easy-to-use Open Source Complex Event 
> Processing Engine (CEP) released as a Java Library under `Apache Software 
> License v2.0`. Siddhi CEP processes events which are generated by various 
> event sources, analyses them, and notifies appropriate complex events 
> according to the user-specified queries. 
> It would be very helpful for Flink users (especially streaming application 
> developers) to provide a library to run Siddhi CEP queries directly in Flink 
> streaming applications.
> * http://wso2.com/products/complex-event-processor/
> * https://github.com/wso2/siddhi
> h2. Features
> * Integrate Siddhi CEP as a stream operator (i.e. 
> `TupleStreamSiddhiOperator`), supporting rich CEP features like
> * Filter
> * Join
> * Aggregation
> * Group by
> * Having
> * Window
> * Conditions and Expressions
> * Pattern processing
> * Sequence processing
> * Event Tables
> ...
> * Provide easy-to-use Siddhi CEP API to integrate Flink DataStream API (See 
> `SiddhiCEP` and `SiddhiStream`)
> * Register Flink DataStream associating native type information with 
> Siddhi Stream Schema, supporting POJO,Tuple, Primitive Type, etc.
> * Connect with single or multiple Flink DataStreams with Siddhi CEP 
> Execution Plan
> * Return output stream as DataStream with type intelligently inferred 
> from Siddhi Stream Schema
> * Integrate siddhi runtime state management with Flink state (See 
> `AbstractSiddhiOperator`)
> * Support siddhi plugin management to extend CEP functions. (See 
> `SiddhiCEP#registerExtension`)
> h2. Test Cases 
> * org.apache.flink.contrib.siddhi.SiddhiCEPITCase: 
> https://github.com/haoch/flink/blob/FLINK-4520/flink-contrib/flink-siddhi/src/test/java/org/apache/flink/contrib/siddhi/SiddhiCEPITCase.java
> h2. Example
> {code}
>  StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>  SiddhiCEP cep = SiddhiCEP.getSiddhiEnvironment(env);
>  cep.registerExtension("custom:plus",CustomPlusFunctionExtension.class);
>  cep.registerStream("inputStream1", input1, "id", "name", 
> "price","timestamp");
>  cep.registerStream("inputStream2", input2, "id", "name", 
> "price","timestamp");
>  DataStream<Map<String, Object>> output = 

[GitHub] flink issue #2487: [FLINK-4520][flink-siddhi] Integrate Siddhi as a light-we...

2016-09-20 Thread haoch
Github user haoch commented on the issue:

https://github.com/apache/flink/pull/2487
  
@StephanEwen thanks for the comments. I think it's OK to keep this either in 
the core or as a separate project, but my concern is that it may be better for 
community development to centralize qualified libraries together. As an 
alternative solution to the concerns about test stability and dead code, would 
it be possible to create another code repository, say "flink-library"?

**BTW: here are the answers to your questions one by one:**

> How complete is the implementation?

Siddhi is a rich-featured CEP engine with its own community, and maybe the 
only open source CEP solution compatible with the Apache License. This 
library `flink-siddhi` is mainly focused on bringing Siddhi's capabilities to 
Flink users seamlessly by:

- Integrating the Siddhi CEP runtime with the Flink lifecycle
- Mapping schemas and DataStream sources
- Managing state and providing fault tolerance.

So I think it would be extremely lightweight but useful, and the current 
implementation should be almost complete. 

> Would you be up for maintaining this code?

Sure. First of all, I am personally very willing to keep continuously 
contributing to the Flink project in any way.  

Also, we have used Siddhi with distributed streaming systems a lot in 
production, and are currently considering supporting Flink as well, given its 
better state management and window support. So I would continuously maintain 
the code if it is merged; if not, I would maintain it at 
https://github.com/haoch/flink-siddhi to make sure it stays workable.

> Are you building this as an experiment, or building a production use case 
based on Siddhi on Flink?

We use Siddhi in streaming environments in production a lot; it currently 
supports Storm and Spark Streaming, and we are also considering extending it 
to Flink.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4248) CsvTableSource does not support reading SqlTimeTypeInfo types

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507001#comment-15507001
 ] 

ASF GitHub Bot commented on FLINK-4248:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79651336
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/types/parser/SqlTimeParserTest.java 
---
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+
+import java.sql.Time;
+
+public class SqlTimeParserTest extends ParserTestBase<Time> {
+
+   @Override
+   public String[] getValidTestValues() {
+   return new String[] {
+   "00:00:00", "02:42:25", "14:15:51", "18:00:45", 
"23:59:58", "0:0:0"
+   };
+   }
+
+   @Override
+   public Time[] getValidTestResults() {
+   return new Time[] {
+   Time.valueOf("00:00:00"), Time.valueOf("02:42:25"), 
Time.valueOf("14:15:51"),
+   Time.valueOf("18:00:45"), Time.valueOf("23:59:58"), 
Time.valueOf("0:0:0")
+   };
+   }
+
+   @Override
+   public String[] getInvalidTestValues() {
+   return new String[] {
+   " 00:00:00", "00:00:00 ", "00:00::00", "00x00:00", 
"2013/08/12", " ", "\t"
--- End diff --

This is also not invalid. The seconds are converted to minutes.
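
To make the lenient behavior concrete, a sketch of the underlying JDK behavior 
(assuming a value with overflowing seconds, e.g. "00:00:90", was the one in 
question):

{code}
import java.sql.Time;

// Time.valueOf delegates to the deprecated Time(int, int, int) constructor,
// which normalizes out-of-range fields instead of rejecting them:
Time t = Time.valueOf("00:00:90");
System.out.println(t); // 00:01:30 -- overflowing seconds roll into minutes
{code}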


> CsvTableSource does not support reading SqlTimeTypeInfo types
> -
>
> Key: FLINK-4248
> URL: https://issues.apache.org/jira/browse/FLINK-4248
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
>
> The Table API's {{CsvTableSource}} does not support to read all Table API 
> supported data types. For example, it is not possible to read 
> {{SqlTimeTypeInfo}} types via the {{CsvTableSource}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2303: [FLINK-4248] [core] [table] CsvTableSource does no...

2016-09-20 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79651336
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/types/parser/SqlTimeParserTest.java 
---
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+
+import java.sql.Time;
+
+public class SqlTimeParserTest extends ParserTestBase<Time> {
+
+   @Override
+   public String[] getValidTestValues() {
+   return new String[] {
+   "00:00:00", "02:42:25", "14:15:51", "18:00:45", 
"23:59:58", "0:0:0"
+   };
+   }
+
+   @Override
+   public Time[] getValidTestResults() {
+   return new Time[] {
+   Time.valueOf("00:00:00"), Time.valueOf("02:42:25"), 
Time.valueOf("14:15:51"),
+   Time.valueOf("18:00:45"), Time.valueOf("23:59:58"), 
Time.valueOf("0:0:0")
+   };
+   }
+
+   @Override
+   public String[] getInvalidTestValues() {
+   return new String[] {
+   " 00:00:00", "00:00:00 ", "00:00::00", "00x00:00", 
"2013/08/12", " ", "\t"
--- End diff --

This is also not invalid. The seconds are converted to minutes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4248) CsvTableSource does not support reading SqlTimeTypeInfo types

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506986#comment-15506986
 ] 

ASF GitHub Bot commented on FLINK-4248:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79650020
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/types/parser/SqlDateParserTest.java 
---
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+
+import java.sql.Date;
+
+public class SqlDateParserTest extends ParserTestBase<Date> {
+
+   @Override
+   public String[] getValidTestValues() {
+   return new String[] {
+   "1970-01-01", "1990-10-14", "2013-08-12", "2040-05-12", 
"2040-5-12", "1970-1-1",
+   };
+   }
+
+   @Override
+   public Date[] getValidTestResults() {
+   return new Date[] {
+   Date.valueOf("1970-01-01"), Date.valueOf("1990-10-14"), 
Date.valueOf("2013-08-12"),
+   Date.valueOf("2040-05-12"), Date.valueOf("2040-05-12"), 
Date.valueOf("1970-01-01")
+   };
+   }
+
+   @Override
+   public String[] getInvalidTestValues() {
+   return new String[] {
+   " 2013-08-12", "2013-08-12 ", "2013-08--12", 
"13-08-12", "2013/08/12", " ", "\t",
--- End diff --

What does it make out of `"2000-14-01"`?
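
For what it's worth, the answer depends on how the parser delegates to 
java.sql.Date: some JDK versions normalize an out-of-range month via the 
deprecated Date(int, int, int) constructor, while newer ones validate the 
range first (a sketch; the exact behavior is worth verifying in the tests):

{code}
import java.sql.Date;

// Either normalized ("2000-14-01" rolls over to 2001-02-01) or rejected
// with an IllegalArgumentException, depending on the JDK version:
Date d = Date.valueOf("2000-14-01");
System.out.println(d);
{code}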


> CsvTableSource does not support reading SqlTimeTypeInfo types
> -
>
> Key: FLINK-4248
> URL: https://issues.apache.org/jira/browse/FLINK-4248
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
>
> The Table API's {{CsvTableSource}} does not support to read all Table API 
> supported data types. For example, it is not possible to read 
> {{SqlTimeTypeInfo}} types via the {{CsvTableSource}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2303: [FLINK-4248] [core] [table] CsvTableSource does no...

2016-09-20 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79650020
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/types/parser/SqlDateParserTest.java 
---
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+
+import java.sql.Date;
+
+public class SqlDateParserTest extends ParserTestBase<Date> {
+
+   @Override
+   public String[] getValidTestValues() {
+   return new String[] {
+   "1970-01-01", "1990-10-14", "2013-08-12", "2040-05-12", 
"2040-5-12", "1970-1-1",
+   };
+   }
+
+   @Override
+   public Date[] getValidTestResults() {
+   return new Date[] {
+   Date.valueOf("1970-01-01"), Date.valueOf("1990-10-14"), 
Date.valueOf("2013-08-12"),
+   Date.valueOf("2040-05-12"), Date.valueOf("2040-05-12"), 
Date.valueOf("1970-01-01")
+   };
+   }
+
+   @Override
+   public String[] getInvalidTestValues() {
+   return new String[] {
+   " 2013-08-12", "2013-08-12 ", "2013-08--12", 
"13-08-12", "2013/08/12", " ", "\t",
--- End diff --

What does it make out of `"2000-14-01"`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2487: [FLINK-4520][flink-siddhi] Integrate Siddhi as a light-we...

2016-09-20 Thread haoch
Github user haoch commented on the issue:

https://github.com/apache/flink/pull/2487
  
@mushketyk I have added lots of Javadocs as required in the latest commit: 
https://github.com/apache/flink/pull/2487/commits/4699f9c3dfc4ce0a9837eb60579c76d50b346f03;
 please continue to help review.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4520) Integrate Siddhi as a lightweight CEP Library

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506964#comment-15506964
 ] 

ASF GitHub Bot commented on FLINK-4520:
---

Github user haoch commented on the issue:

https://github.com/apache/flink/pull/2487
  
@mushketyk I have added lots of Javadocs as required in the latest commit: 
https://github.com/apache/flink/pull/2487/commits/4699f9c3dfc4ce0a9837eb60579c76d50b346f03;
 please continue to help review.


> Integrate Siddhi as a lightweight CEP Library
> -
>
> Key: FLINK-4520
> URL: https://issues.apache.org/jira/browse/FLINK-4520
> Project: Flink
>  Issue Type: New Feature
>  Components: CEP
>Affects Versions: 1.2.0
>Reporter: Hao Chen
>Assignee: Hao Chen
>  Labels: cep, library, patch-available
> Fix For: 1.2.0
>
>
> h1. flink-siddhi proposal
> h2. Abstraction
> Siddhi CEP is a lightweight and easy-to-use Open Source Complex Event 
> Processing Engine (CEP) released as a Java Library under `Apache Software 
> License v2.0`. Siddhi CEP processes events which are generated by various 
> event sources, analyses them, and notifies appropriate complex events 
> according to the user-specified queries. 
> It would be very helpful for Flink users (especially streaming application 
> developers) to provide a library to run Siddhi CEP queries directly in Flink 
> streaming applications.
> * http://wso2.com/products/complex-event-processor/
> * https://github.com/wso2/siddhi
> h2. Features
> * Integrate Siddhi CEP as a stream operator (i.e. 
> `TupleStreamSiddhiOperator`), supporting rich CEP features like
> * Filter
> * Join
> * Aggregation
> * Group by
> * Having
> * Window
> * Conditions and Expressions
> * Pattern processing
> * Sequence processing
> * Event Tables
> ...
> * Provide easy-to-use Siddhi CEP API to integrate Flink DataStream API (See 
> `SiddhiCEP` and `SiddhiStream`)
> * Register Flink DataStream associating native type information with 
> Siddhi Stream Schema, supporting POJO,Tuple, Primitive Type, etc.
> * Connect with single or multiple Flink DataStreams with Siddhi CEP 
> Execution Plan
> * Return output stream as DataStream with type intelligently inferred 
> from Siddhi Stream Schema
> * Integrate siddhi runtime state management with Flink state (See 
> `AbstractSiddhiOperator`)
> * Support siddhi plugin management to extend CEP functions. (See 
> `SiddhiCEP#registerExtension`)
> h2. Test Cases 
> * org.apache.flink.contrib.siddhi.SiddhiCEPITCase: 
> https://github.com/haoch/flink/blob/FLINK-4520/flink-contrib/flink-siddhi/src/test/java/org/apache/flink/contrib/siddhi/SiddhiCEPITCase.java
> h2. Example
> {code}
>  StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>  SiddhiCEP cep = SiddhiCEP.getSiddhiEnvironment(env);
>  cep.registerExtension("custom:plus",CustomPlusFunctionExtension.class);
>  cep.registerStream("inputStream1", input1, "id", "name", 
> "price","timestamp");
>  cep.registerStream("inputStream2", input2, "id", "name", 
> "price","timestamp");
>  DataStream<Map<String, Object>> output = cep
>   .from("inputStream1").union("inputStream2")
>   .sql(
> "from every s1 = inputStream1[id == 2] "
>  + " -> s2 = inputStream2[id == 3] "
>  + "select s1.id as id_1, s1.name as name_1, s2.id as id_2, s2.name as 
> name_2 , custom:plus(s1.price,s2.price) as price"
>  + "insert into outputStream"
>   )
>   .returns("outputStream");
>  env.execute();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4643) Average Clustering Coefficient

2016-09-20 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-4643:
-

 Summary: Average Clustering Coefficient
 Key: FLINK-4643
 URL: https://issues.apache.org/jira/browse/FLINK-4643
 Project: Flink
  Issue Type: New Feature
  Components: Gelly
Affects Versions: 1.2.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Minor


Gelly has Global Clustering Coefficient and Local Clustering Coefficient. This 
adds Average Clustering Coefficient. The distinction is discussed in 
[http://jponnela.com/web_documents/twomode.pdf] (pdf page 2, document page 32).
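
The difference in a nutshell: the average clustering coefficient is the 
unweighted mean of the per-vertex local coefficients, whereas the global 
coefficient weights each vertex by its number of neighbor pairs. A plain-Java 
sketch with made-up scores:

{code}
// hypothetical per-vertex local clustering coefficients:
double[] localCC = {1.0, 1.0, 0.0, 0.0};
double average = java.util.Arrays.stream(localCC).average().orElse(0.0); // 0.5
{code}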



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4560) enforcer java version as 1.7

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506901#comment-15506901
 ] 

ASF GitHub Bot commented on FLINK-4560:
---

Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/2458
  
@greghogan Exactly, 3.0.3 will be OK. This limit is required by some module; I 
forget which one, sorry.


> enforcer java version as 1.7
> 
>
> Key: FLINK-4560
> URL: https://issues.apache.org/jira/browse/FLINK-4560
> Project: Flink
>  Issue Type: Improvement
>Reporter: shijinkui
>
> 1. maven-enforcer-plugin: add a Java version enforcement rule
> 2. maven-enforcer-plugin: upgrade the plugin version to 1.4.1
> Explicitly require the Java version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2458: [FLINK-4560] enforcer java version as 1.7

2016-09-20 Thread shijinkui
Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/2458
  
@greghogan Exactly, 3.0.3 will be OK. This limit is required by some module; I 
forget which one, sorry.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2458: [FLINK-4560] enforcer java version as 1.7

2016-09-20 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2458
  
Would this be useful to prevent building Flink with Maven 3.3.x?
  https://maven.apache.org/enforcer/enforcer-rules/versionRanges.html
  https://maven.apache.org/enforcer/enforcer-rules/
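
For reference, both checks could live in a single enforcer execution; a pom.xml sketch, where the Maven range {{[3.0.3,3.3)}} is illustrative (allow 3.0.3 and later, exclude 3.3.x), not a bound the project has adopted:

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.4.1</version>
  <executions>
    <execution>
      <id>enforce-versions</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- require at least Java 1.7 -->
          <requireJavaVersion>
            <version>1.7</version>
          </requireJavaVersion>
          <!-- illustrative range: 3.0.3 and later, excluding 3.3.x -->
          <requireMavenVersion>
            <version>[3.0.3,3.3)</version>
          </requireMavenVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}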


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4560) enforcer java version as 1.7

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506858#comment-15506858
 ] 

ASF GitHub Bot commented on FLINK-4560:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2458
  
Would this be useful to prevent building Flink with Maven 3.3.x?
  https://maven.apache.org/enforcer/enforcer-rules/versionRanges.html
  https://maven.apache.org/enforcer/enforcer-rules/


> enforcer java version as 1.7
> 
>
> Key: FLINK-4560
> URL: https://issues.apache.org/jira/browse/FLINK-4560
> Project: Flink
>  Issue Type: Improvement
>Reporter: shijinkui
>
> 1. Add a Java version check to maven-enforcer-plugin
> 2. Upgrade maven-enforcer-plugin to 1.4.1
> Explicitly require the Java version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4248) CsvTableSource does not support reading SqlTimeTypeInfo types

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506812#comment-15506812
 ] 

ASF GitHub Bot commented on FLINK-4248:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79632225
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/types/parser/SqlDateParserTest.java 
---
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+
+import java.sql.Date;
+
+public class SqlDateParserTest extends ParserTestBase {
+
+   @Override
+   public String[] getValidTestValues() {
+   return new String[] {
+   "1970-01-01", "1990-10-14", "2013-08-12", "2040-05-12", 
"2040-5-12", "1970-1-1",
+   };
+   }
+
+   @Override
+   public Date[] getValidTestResults() {
+   return new Date[] {
+   Date.valueOf("1970-01-01"), Date.valueOf("1990-10-14"), 
Date.valueOf("2013-08-12"),
+   Date.valueOf("2040-05-12"), Date.valueOf("2040-05-12"), 
Date.valueOf("1970-01-01")
+   };
+   }
+
+   @Override
+   public String[] getInvalidTestValues() {
+   return new String[] {
+   " 2013-08-12", "2013-08-12 ", "2013-08--12", 
"13-08-12", "2013/08/12", " ", "\t",
--- End diff --

The Java SQL date parser supports this and converts this into "2000-03-01".


> CsvTableSource does not support reading SqlTimeTypeInfo types
> -
>
> Key: FLINK-4248
> URL: https://issues.apache.org/jira/browse/FLINK-4248
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
>
> The Table API's {{CsvTableSource}} does not support to read all Table API 
> supported data types. For example, it is not possible to read 
> {{SqlTimeTypeInfo}} types via the {{CsvTableSource}}.
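
For context, the string formats that a field parser for these types ultimately has to accept are fixed by the {{java.sql}} value classes backing {{SqlTimeTypeInfo}}; a minimal sketch:

{code}
// Sketch: the JDBC escape formats accepted by the java.sql value
// types that back Flink's SqlTimeTypeInfo.
import java.sql.Date;
import java.sql.Time;
import java.sql.Timestamp;

public class SqlTimeParsingSketch {
	public static void main(String[] args) {
		Date date = Date.valueOf("2013-08-12");                       // yyyy-[m]m-[d]d
		Time time = Time.valueOf("14:23:55");                         // hh:mm:ss
		Timestamp ts = Timestamp.valueOf("2013-08-12 14:23:55.123");  // yyyy-[m]m-[d]d hh:mm:ss[.f...]
		System.out.println(date + " " + time + " " + ts);
	}
}
{code}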



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2303: [FLINK-4248] [core] [table] CsvTableSource does no...

2016-09-20 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2303#discussion_r79632225
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/types/parser/SqlDateParserTest.java 
---
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.flink.types.parser;
+
+
+import java.sql.Date;
+
+public class SqlDateParserTest extends ParserTestBase {
+
+   @Override
+   public String[] getValidTestValues() {
+   return new String[] {
+   "1970-01-01", "1990-10-14", "2013-08-12", "2040-05-12", 
"2040-5-12", "1970-1-1",
+   };
+   }
+
+   @Override
+   public Date[] getValidTestResults() {
+   return new Date[] {
+   Date.valueOf("1970-01-01"), Date.valueOf("1990-10-14"), 
Date.valueOf("2013-08-12"),
+   Date.valueOf("2040-05-12"), Date.valueOf("2040-05-12"), 
Date.valueOf("1970-01-01")
+   };
+   }
+
+   @Override
+   public String[] getInvalidTestValues() {
+   return new String[] {
+   " 2013-08-12", "2013-08-12 ", "2013-08--12", 
"13-08-12", "2013/08/12", " ", "\t",
--- End diff --

The Java SQL date parser supports this and converts this into "2000-03-01".


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2523: [FLINK-4556] Make Queryable State Key-Group Aware

2016-09-20 Thread StefanRRichter
GitHub user StefanRRichter opened a pull request:

https://github.com/apache/flink/pull/2523

[FLINK-4556] Make Queryable State Key-Group Aware

This PR addresses [FLINK-4556]  and makes queryable state aware of 
key-groups.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StefanRRichter/flink key-group-queryable-state

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2523.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2523


commit aeb2033383418fe7015b28a71f6ef6c74039acb2
Author: Stefan Richter 
Date:   2016-09-20T14:36:16Z

[FLINK-4556] Make Queryable State Key-Group Aware




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-4642) Remove unnecessary Guava dependency from flink-streaming-java

2016-09-20 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-4642:
---

 Summary: Remove unnecessary Guava dependency from 
flink-streaming-java
 Key: FLINK-4642
 URL: https://issues.apache.org/jira/browse/FLINK-4642
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Affects Versions: 1.1.2
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.2.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2522: Parameterize Flink version in Quickstart bash script

2016-09-20 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2522
  
Looks good, merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2513: [hotfix] RescalingITCase

2016-09-20 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2513
  
Looks good to me, merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4556) Make Queryable State Key-Group Aware

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506795#comment-15506795
 ] 

ASF GitHub Bot commented on FLINK-4556:
---

GitHub user StefanRRichter opened a pull request:

https://github.com/apache/flink/pull/2523

[FLINK-4556] Make Queryable State Key-Group Aware

This PR addresses [FLINK-4556]  and makes queryable state aware of 
key-groups.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StefanRRichter/flink key-group-queryable-state

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2523.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2523


commit aeb2033383418fe7015b28a71f6ef6c74039acb2
Author: Stefan Richter 
Date:   2016-09-20T14:36:16Z

[FLINK-4556] Make Queryable State Key-Group Aware




> Make Queryable State Key-Group Aware
> 
>
> Key: FLINK-4556
> URL: https://issues.apache.org/jira/browse/FLINK-4556
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0
>Reporter: Aljoscha Krettek
>Assignee: Stefan Richter
>Priority: Blocker
>
> The recent introduction of key-grouped state breaks queryable state because 
> the JobManager does not yet forward the client to the correct TaskManager 
> based on key-group ranges.
> This will either have to be implemented on the JobManager side, i.e. in 
> {{AkkaKvStateLocationLookupService}} or on the {{TaskManager}} when state is 
> registered. The JobManager can know the mapping because it should know the 
> {{parallelism}}/{{maxParallelism}} which it can use to determine where the 
> state for a key-group is stored. The {{TaskManager}} sends a 
> {{NotifyKvStateRegistered}} message that already contains a {{keyGroupIndex}} 
> field that is not useful/correct at the moment, though.
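
The mapping itself is simple arithmetic once {{parallelism}} and {{maxParallelism}} are known; a sketch of the lookup, assuming key-groups are assigned to operator subtasks in contiguous ranges (not the actual Flink implementation):

{code}
// Sketch: which operator subtask (and hence which TaskManager)
// holds the state for a given key, assuming contiguous key-group
// ranges per subtask.
public final class KeyGroupLookupSketch {

	static int keyGroupForKey(Object key, int maxParallelism) {
		// non-negative hash spread over [0, maxParallelism)
		int mod = key.hashCode() % maxParallelism;
		return mod < 0 ? mod + maxParallelism : mod;
	}

	static int operatorIndexForKeyGroup(int keyGroup, int maxParallelism, int parallelism) {
		// subtask i owns key-groups
		// [i * maxParallelism / parallelism, (i + 1) * maxParallelism / parallelism)
		return keyGroup * parallelism / maxParallelism;
	}
}
{code}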



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4268) Add parsers for BigDecimal/BigInteger

2016-09-20 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4268.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 5c02988b05c56f524fc8c65b15e16b0c24278a5e.

> Add parsers for BigDecimal/BigInteger
> ---
>
> Key: FLINK-4268
> URL: https://issues.apache.org/jira/browse/FLINK-4268
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Timo Walther
>Assignee: Timo Walther
> Fix For: 1.2.0
>
>
> Since BigDecimal and BigInteger are basic types now, it would be great if we 
> also parsed them.
> FLINK-628 did this a long time ago. This feature should be reintroduced.
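
A minimal sketch of the conversion step itself; the actual Flink field parsers add delimiter handling and error states on top of this:

{code}
// Sketch: the core string-to-value conversion behind a
// BigDecimal/BigInteger field parser.
import java.math.BigDecimal;
import java.math.BigInteger;

public class BigNumberParsingSketch {
	public static void main(String[] args) {
		BigDecimal dec = new BigDecimal("3.14159265358979323846264338327950288");
		BigInteger big = new BigInteger("12345678901234567890");
		System.out.println(dec.add(BigDecimal.ONE));      // exact, no rounding
		System.out.println(big.multiply(BigInteger.TEN)); // arbitrary precision
	}
}
{code}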



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4268) Add parsers for BigDecimal/BigInteger

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506783#comment-15506783
 ] 

ASF GitHub Bot commented on FLINK-4268:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2304


> Add parsers for BigDecimal/BigInteger
> ---
>
> Key: FLINK-4268
> URL: https://issues.apache.org/jira/browse/FLINK-4268
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Since BigDecimal and BigInteger are basic types now. It would be great if we 
> also parse those.
> FLINK-628 did this a long time ago. This feature should be reintroduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2304: [FLINK-4268] [core] Add parsers for BigDecimal/Big...

2016-09-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2304


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (FLINK-4556) Make Queryable State Key-Group Aware

2016-09-20 Thread Stefan Richter (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Richter reassigned FLINK-4556:
-

Assignee: Stefan Richter

> Make Queryable State Key-Group Aware
> 
>
> Key: FLINK-4556
> URL: https://issues.apache.org/jira/browse/FLINK-4556
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0
>Reporter: Aljoscha Krettek
>Assignee: Stefan Richter
>Priority: Blocker
>
> The recent introduction of key-grouped state breaks queryable state because 
> the JobManager does not yet forward the client to the correct TaskManager 
> based on key-group ranges.
> This will either have to be implemented on the JobManager side, i.e. in 
> {{AkkaKvStateLocationLookupService}} or on the {{TaskManager}} when state is 
> registered. The JobManager can know the mapping because it should know the 
> {{parallelism}}/{{maxParallelism}} which it can use to determine where the 
> state for a key-group is stored. The {{TaskManager}} sends a 
> {{NotifyKvStateRegistered}} message that already contains a {{keyGroupIndex}} 
> field that is not useful/correct at the moment, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2522: Parameterize Flink version in Quickstart bash scri...

2016-09-20 Thread chobeat
GitHub user chobeat opened a pull request:

https://github.com/apache/flink/pull/2522

Parameterize Flink version in Quickstart bash script



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radicalbit/flink FLINK-4301

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2522.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2522






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4560) enforcer java version as 1.7

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506752#comment-15506752
 ] 

ASF GitHub Bot commented on FLINK-4560:
---

Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/2458
  
hi, @StephanEwen  
There are many differences between JDK 6 and JDK 7+. Staying compatible with 
older JDKs has some value, but Scala performs better with lambda support, and 
JDK 7 and 8 added many features.

JDK 7 is now widely used, so we can give up JDK 6 compatibility. That is 
clearer for developers: they can use JDK 7+ features freely.




> enforcer java version as 1.7
> 
>
> Key: FLINK-4560
> URL: https://issues.apache.org/jira/browse/FLINK-4560
> Project: Flink
>  Issue Type: Improvement
>Reporter: shijinkui
>
> 1. Add a Java version check to maven-enforcer-plugin
> 2. Upgrade maven-enforcer-plugin to 1.4.1
> Explicitly require the Java version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2458: [FLINK-4560] enforcer java version as 1.7

2016-09-20 Thread shijinkui
Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/2458
  
hi, @StephanEwen  
There are many differences between JDK 6 and JDK 7+. Staying compatible with 
older JDKs has some value, but Scala performs better with lambda support, and 
JDK 7 and 8 added many features.

JDK 7 is now widely used, so we can give up JDK 6 compatibility. That is 
clearer for developers: they can use JDK 7+ features freely.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2472: [FLINK-4361] Introduce Flink's own future abstract...

2016-09-20 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2472#discussion_r79623763
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/concurrent/Future.java ---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.concurrent;
+
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Flink's basic future abstraction. A future represents an asynchronous 
operation whose result
+ * will be contained in this instance upon completion.
+ *
+ * @param  type of the future's result
+ */
+public interface Future {
+
+   /**
+* Checks if the future has been completed. A future is completed, if 
the result has been
+* delivered.
+*
+* @return true if the future is completed; otherwise false
+*/
+   boolean isDone();
+
+   /**
+* Tries to cancel the future's operation. Note that not all future 
operations can be canceled.
+* The result of the cancelling will be returned.
+*
+* @param mayInterruptIfRunning true iff the future operation may be 
interrupted
+* @return true if the cancelling was successful; otherwise false
+*/
+   boolean cancel(boolean mayInterruptIfRunning);
+
+   /**
+* Gets the result value of the future. If the future has not been 
completed, then this
+* operation will block indefinitely until the result has been 
delivered.
+*
+* @return the result value
+* @throws CancellationException if the future has been cancelled
+* @throws InterruptedException if the current thread was interrupted 
while waiting for the result
+* @throws ExecutionException if the future has been completed with an 
exception
+*/
+   T get() throws InterruptedException, ExecutionException;
+
+   /**
+* Gets the result value of the future. If the future has not been 
done, then this operation
+* will block the given timeout value. If the result has not been 
delivered within the timeout,
+* then the method throws an {@link TimeoutException}.
+*
+* @param timeout the time to wait for the future to be done
+* @param unit time unit for the timeout argument
+* @return the result value
+* @throws CancellationException if the future has been cancelled
+* @throws InterruptedException if the current thread was interrupted 
while waiting for the result
+* @throws ExecutionException if the future has been completed with an 
exception
+* @throws TimeoutException if the future has not been completed within 
the given timeout
+*/
+   T get(long timeout, TimeUnit unit) throws InterruptedException, 
ExecutionException, TimeoutException;
+
+   /**
+* Gets the value of the future. If the future has not been completed 
when calling this
+* function, the given value is returned.
+*
+* @param valueIfAbsent value which is returned if the future has not 
been completed
+* @return value of the future or the given value if the future has not 
been completed
+* @throws ExecutionException if the future has been completed with an 
exception
+*/
+   T getNow(T valueIfAbsent) throws ExecutionException;
+
+   /**
+* Applies the given function to the value of the future. The result of 
the apply function is
+* the value of the newly returned future.
+* 
+* The apply function is executed asynchronously by the given executor.
+*
+* @param applyFunction function to apply to the future's value
+* @param executor used to execute the given apply function 
asynchronously
+* @param  type of the apply function's return value
+* @return future 

[jira] [Closed] (FLINK-4638) Fix exception message for MemorySegment

2016-09-20 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-4638.
-
Resolution: Fixed

Fixed in 7a25bf5cee9ab94525ed0284cbf399d2c33f70cf

> Fix exception message for MemorySegment
> ---
>
> Key: FLINK-4638
> URL: https://issues.apache.org/jira/browse/FLINK-4638
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.0.0, 1.1.2, 1.1.3
>Reporter: Liwei Lin
>Priority: Trivial
> Fix For: 1.2.0
>
>
> Please see the code snip below:
> {code:title=MemorySegment.java|borderStyle=solid}
> if (offHeapAddress >= Long.MAX_VALUE - Integer.MAX_VALUE) {
>   throw new IllegalArgumentException("Segment initialized with too large 
> address: " + address // here address has not been initialized yet; should 
> really be offHeapAddress 
>   + " ; Max allowed address is " + (Long.MAX_VALUE - Integer.MAX_VALUE - 
> 1));
> }
> {code}
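
For clarity, the corrected check reports the constructor parameter instead of the not-yet-assigned field, roughly as follows (a sketch, not necessarily the exact committed patch):

{code}
// Sketch of the fix: report the offending parameter value; the
// "address" field has not been assigned at this point.
if (offHeapAddress >= Long.MAX_VALUE - Integer.MAX_VALUE) {
	throw new IllegalArgumentException(
		"Segment initialized with too large address: " + offHeapAddress
		+ " ; Max allowed address is " + (Long.MAX_VALUE - Integer.MAX_VALUE - 1));
}
{code}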



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2472: [FLINK-4361] Introduce Flink's own future abstract...

2016-09-20 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2472#discussion_r79622945
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/concurrent/Future.java ---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.concurrent;
+
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Flink's basic future abstraction. A future represents an asynchronous 
operation whose result
+ * will be contained in this instance upon completion.
+ *
+ * @param  type of the future's result
+ */
+public interface Future {
+
+   /**
+* Checks if the future has been completed. A future is completed, if 
the result has been
+* delivered.
+*
+* @return true if the future is completed; otherwise false
+*/
+   boolean isDone();
--- End diff --

That's because I stuck to Java 8's `CompletableFuture` implementation, 
where it is named the same.

I'm not super happy with the naming either. But I also see the benefit of 
sticking to Java 8's `CompletableFuture` interface. This will allow us to 
easily replace it once we switch to Java 8.

I think we have to decide whether we want to stick to Java 8's 
`CompletableFuture` or not. In the latter case we can rename other methods as 
well.
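
The parity argument is easy to see side by side; a sketch of how the subset discussed here lines up with {{java.util.concurrent.CompletableFuture}} (the adapter class is hypothetical, while the delegate calls are the real JDK 8 API):

{code}
// Hypothetical adapter: if Flink's Future mirrors CompletableFuture's
// names and signatures, a later switch to Java 8 reduces to delegation.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CompletableFutureAdapter<T> {

	private final CompletableFuture<T> delegate;

	public CompletableFutureAdapter(CompletableFuture<T> delegate) {
		this.delegate = delegate;
	}

	public boolean isDone() {
		return delegate.isDone();
	}

	public boolean cancel(boolean mayInterruptIfRunning) {
		return delegate.cancel(mayInterruptIfRunning);
	}

	public T get() throws InterruptedException, ExecutionException {
		return delegate.get();
	}

	public T get(long timeout, TimeUnit unit)
			throws InterruptedException, ExecutionException, TimeoutException {
		return delegate.get(timeout, unit);
	}

	public T getNow(T valueIfAbsent) {
		return delegate.getNow(valueIfAbsent);
	}
}
{code}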


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2508: [FLINK-2662] [dataSet] Translate union with multiple outp...

2016-09-20 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/2508
  
Thanks, merging then


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4361) Introduce Flink's own future abstraction

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506714#comment-15506714
 ] 

ASF GitHub Bot commented on FLINK-4361:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2472#discussion_r79622945
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/concurrent/Future.java ---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.concurrent;
+
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Flink's basic future abstraction. A future represents an asynchronous 
operation whose result
+ * will be contained in this instance upon completion.
+ *
+ * @param  type of the future's result
+ */
+public interface Future {
+
+   /**
+* Checks if the future has been completed. A future is completed, if 
the result has been
+* delivered.
+*
+* @return true if the future is completed; otherwise false
+*/
+   boolean isDone();
--- End diff --

That's because I stuck to Java 8's `CompletableFuture` implementation, 
where it is named the same.

I'm not super happy with the naming either. But I also see the benefit of 
sticking to Java 8's `CompletableFuture` interface. This will allow us to 
easily replace it once we switch to Java 8.

I think we have to decide whether we want to stick to Java 8's 
`CompletableFuture` or not. In the latter case we can rename other methods as 
well.


> Introduce Flink's own future abstraction
> 
>
> Key: FLINK-4361
> URL: https://issues.apache.org/jira/browse/FLINK-4361
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Reporter: Stephan Ewen
>Assignee: Till Rohrmann
>
> In order to keep the abstraction Scala Independent, we should not rely on 
> Scala Futures



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4361) Introduce Flink's own future abstraction

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506724#comment-15506724
 ] 

ASF GitHub Bot commented on FLINK-4361:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2472#discussion_r79623763
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/concurrent/Future.java ---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.concurrent;
+
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Flink's basic future abstraction. A future represents an asynchronous 
operation whose result
+ * will be contained in this instance upon completion.
+ *
+ * @param  type of the future's result
+ */
+public interface Future {
+
+   /**
+* Checks if the future has been completed. A future is completed, if 
the result has been
+* delivered.
+*
+* @return true if the future is completed; otherwise false
+*/
+   boolean isDone();
+
+   /**
+* Tries to cancel the future's operation. Note that not all future 
operations can be canceled.
+* The result of the cancelling will be returned.
+*
+* @param mayInterruptIfRunning true iff the future operation may be 
interrupted
+* @return true if the cancelling was successful; otherwise false
+*/
+   boolean cancel(boolean mayInterruptIfRunning);
+
+   /**
+* Gets the result value of the future. If the future has not been 
completed, then this
+* operation will block indefinitely until the result has been 
delivered.
+*
+* @return the result value
+* @throws CancellationException if the future has been cancelled
+* @throws InterruptedException if the current thread was interrupted 
while waiting for the result
+* @throws ExecutionException if the future has been completed with an 
exception
+*/
+   T get() throws InterruptedException, ExecutionException;
+
+   /**
+* Gets the result value of the future. If the future has not been 
done, then this operation
+* will block the given timeout value. If the result has not been 
delivered within the timeout,
+* then the method throws an {@link TimeoutException}.
+*
+* @param timeout the time to wait for the future to be done
+* @param unit time unit for the timeout argument
+* @return the result value
+* @throws CancellationException if the future has been cancelled
+* @throws InterruptedException if the current thread was interrupted 
while waiting for the result
+* @throws ExecutionException if the future has been completed with an 
exception
+* @throws TimeoutException if the future has not been completed within 
the given timeout
+*/
+   T get(long timeout, TimeUnit unit) throws InterruptedException, 
ExecutionException, TimeoutException;
+
+   /**
+* Gets the value of the future. If the future has not been completed 
when calling this
+* function, the given value is returned.
+*
+* @param valueIfAbsent value which is returned if the future has not 
been completed
+* @return value of the future or the given value if the future has not 
been completed
+* @throws ExecutionException if the future has been completed with an 
exception
+*/
+   T getNow(T valueIfAbsent) throws ExecutionException;
+
+   /**
+* Applies the given function to the value of the future. The result of 
the apply function is
+* the value of the newly returned future.
+* 
+* The apply function is executed asynchronously by the given executor.
+*

[jira] [Commented] (FLINK-2662) CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506717#comment-15506717
 ] 

ASF GitHub Bot commented on FLINK-2662:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/2508
  
Thanks, merging then


> CompilerException: "Bug: Plan generation for Unions picked a ship strategy 
> between binary plan operators."
> --
>
> Key: FLINK-2662
> URL: https://issues.apache.org/jira/browse/FLINK-2662
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Gabor Gevay
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 1.0.0
>
> Attachments: FlinkBug.scala
>
>
> I have a Flink program which throws the exception in the jira title. Full 
> text:
> Exception in thread "main" org.apache.flink.optimizer.CompilerException: Bug: 
> Plan generation for Unions picked a ship strategy between binary plan 
> operators.
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.collect(BinaryUnionReplacer.java:113)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:72)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:170)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.WorksetIterationPlanNode.acceptForStepFunction(WorksetIterationPlanNode.java:194)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:49)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:162)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:127)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:520)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:402)
>   at 
> org.apache.flink.client.LocalExecutor.getOptimizerPlanAsJSON(LocalExecutor.java:202)
>   at 
> org.apache.flink.api.java.LocalEnvironment.getExecutionPlan(LocalEnvironment.java:63)
>   at malom.Solver.main(Solver.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> The execution plan:
> http://compalg.inf.elte.hu/~ggevay/flink/plan_3_4_0_0_without_verif.txt
> (I obtained this by commenting out the line that throws the exception)
> The code is here:
> https://github.com/ggevay/flink/tree/plan-generation-bug
> The class to run is "Solver". It needs a command line argument, which is a 
> directory where it would write output. (On first run, it generates some 
> lookuptables for a few minutes, which are then placed to /tmp/movegen)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4638) Fix exception message for MemorySegment

2016-09-20 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-4638:
--
Fix Version/s: 1.2.0

> Fix exception message for MemorySegment
> ---
>
> Key: FLINK-4638
> URL: https://issues.apache.org/jira/browse/FLINK-4638
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.0.0, 1.1.2, 1.1.3
>Reporter: Liwei Lin
>Priority: Trivial
> Fix For: 1.2.0
>
>
> Please see the code snip below:
> {code:title=MemorySegment.java|borderStyle=solid}
> if (offHeapAddress >= Long.MAX_VALUE - Integer.MAX_VALUE) {
>   throw new IllegalArgumentException("Segment initialized with too large 
> address: " + address // here address has not been initialized yet; should 
> really be offHeapAddress 
>   + " ; Max allowed address is " + (Long.MAX_VALUE - Integer.MAX_VALUE - 
> 1));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2304: [FLINK-4268] [core] Add parsers for BigDecimal/BigInteger

2016-09-20 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2304
  
Thanks again for the review @fhueske. I will fix your comments and merge 
this.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4268) Add parsers for BigDecimal/BigInteger

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506699#comment-15506699
 ] 

ASF GitHub Bot commented on FLINK-4268:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2304
  
Thanks again for the review @fhueske. I will fix your comments and merge 
this.


> Add parsers for BigDecimal/BigInteger
> ---
>
> Key: FLINK-4268
> URL: https://issues.apache.org/jira/browse/FLINK-4268
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Since BigDecimal and BigInteger are basic types now, it would be great if we 
> also parsed them.
> FLINK-628 did this a long time ago. This feature should be reintroduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4572) Convert to negative in LongValueToIntValue

2016-09-20 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-4572:
--
Fix Version/s: 1.2.0

> Convert to negative in LongValueToIntValue
> --
>
> Key: FLINK-4572
> URL: https://issues.apache.org/jira/browse/FLINK-4572
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.2.0
>
>
> The Gelly drivers expect that scale 32 edges, represented by the lower 32 
> bits of {{long}} values, can be converted to {{int}} values. Values between 
> 2^31 and 2^32 - 1 should be converted to negative integers, which is not 
> supported by {{MathUtils.checkedDownCast}}.
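
A sketch of the intended conversion: plain truncation keeps the lower 32 bits verbatim, so values in [2^31, 2^32 - 1] wrap to negative {{int}} values instead of failing a range check:

{code}
// Sketch: keep the lower 32 bits of a long, letting the "unsigned"
// upper half of the 32-bit range wrap to negative ints.
public final class LongToIntSketch {

	static int lowerBitsToInt(long value) {
		return (int) value; // truncation preserves bits 0..31
	}

	public static void main(String[] args) {
		System.out.println(lowerBitsToInt(4294967295L)); // 2^32 - 1 -> -1
		System.out.println(lowerBitsToInt(2147483648L)); // 2^31     -> Integer.MIN_VALUE
	}
}
{code}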



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4594) Validate lower bound in MathUtils.checkedDownCast

2016-09-20 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-4594:
--
Fix Version/s: 1.2.0

> Validate lower bound in MathUtils.checkedDownCast
> -
>
> Key: FLINK-4594
> URL: https://issues.apache.org/jira/browse/FLINK-4594
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
> Fix For: 1.2.0
>
>
> {{MathUtils.checkedDownCast}} only compares against the upper bound 
> {{Integer.MAX_VALUE}}, which has worked with current usage. 
> Rather than adding a second comparison we can replace
> {noformat}
> if (value > Integer.MAX_VALUE) {
> {noformat}
> with a cast and check
> {noformat}
> if ((int)value != value) { ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4594) Validate lower bound in MathUtils.checkedDownCast

2016-09-20 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-4594.
-
Resolution: Fixed

Fixed in 4f84a02f4b8be5948b233531fbcb07dfdb5e7930

> Validate lower bound in MathUtils.checkedDownCast
> -
>
> Key: FLINK-4594
> URL: https://issues.apache.org/jira/browse/FLINK-4594
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
> Fix For: 1.2.0
>
>
> {{MathUtils.checkedDownCast}} only compares against the upper bound 
> {{Integer.MAX_VALUE}}, which has worked with current usage. 
> Rather than adding a second comparison we can replace
> {noformat}
> if (value > Integer.MAX_VALUE) {
> {noformat}
> with a cast and check
> {noformat}
> if ((int)value != value) { ...
> {noformat}
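
Put together, the revised method would look roughly like this; the cast-and-compare catches overflow in both directions with a single check:

{code}
// Sketch of the revised method: (int) value != value holds exactly
// when value lies outside [Integer.MIN_VALUE, Integer.MAX_VALUE].
public static int checkedDownCast(long value) {
	int downCast = (int) value;
	if (downCast != value) {
		throw new IllegalArgumentException(
			"Cannot downcast long value " + value + " to integer.");
	}
	return downCast;
}
{code}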



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4638) Fix exception message for MemorySegment

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506668#comment-15506668
 ] 

ASF GitHub Bot commented on FLINK-4638:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2515


> Fix exception message for MemorySegment
> ---
>
> Key: FLINK-4638
> URL: https://issues.apache.org/jira/browse/FLINK-4638
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.0.0, 1.1.2, 1.1.3
>Reporter: Liwei Lin
>Priority: Trivial
>
> Please see the code snip below:
> {code:title=MemorySegment.java|borderStyle=solid}
> if (offHeapAddress >= Long.MAX_VALUE - Integer.MAX_VALUE) {
>   throw new IllegalArgumentException("Segment initialized with too large 
> address: " + address // here address has not been initialized yet; should 
> really be offHeapAddress 
>   + " ; Max allowed address is " + (Long.MAX_VALUE - Integer.MAX_VALUE - 
> 1));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4609) Remove redundant check for null in CrossOperator

2016-09-20 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-4609.
-
Resolution: Fixed

Fixed in 470b752e693ae4292d34fc7fc0c778f95fa04fd9

> Remove redundant check for null in CrossOperator
> 
>
> Key: FLINK-4609
> URL: https://issues.apache.org/jira/browse/FLINK-4609
> Project: Flink
>  Issue Type: Bug
>  Components: Java API
>Affects Versions: 1.1.2
>Reporter: Alexander Pivovarov
>Priority: Trivial
> Fix For: 1.2.0
>
>
> CrossOperator checks input1 and input2 for null after they were dereferenced



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4609) Remove redundant check for null in CrossOperator

2016-09-20 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-4609:
--
Fix Version/s: 1.2.0

> Remove redundant check for null in CrossOperator
> 
>
> Key: FLINK-4609
> URL: https://issues.apache.org/jira/browse/FLINK-4609
> Project: Flink
>  Issue Type: Bug
>  Components: Java API
>Affects Versions: 1.1.2
>Reporter: Alexander Pivovarov
>Priority: Trivial
> Fix For: 1.2.0
>
>
> CrossOperator checks input1 and input2 for null after they were dereferenced



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4572) Convert to negative in LongValueToIntValue

2016-09-20 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-4572.
-
Resolution: Fixed

Fixed in e58fd6e001b319aa7325a0544735f8c0680141d8

> Convert to negative in LongValueToIntValue
> --
>
> Key: FLINK-4572
> URL: https://issues.apache.org/jira/browse/FLINK-4572
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.2.0
>
>
> The Gelly drivers expect that scale 32 edges, represented by the lower 32 
> bits of {{long}} values, can be converted to {{int}} values. Values between 
> 2^31 and 2^32 - 1 should be converted to negative integers, which is not 
> supported by {{MathUtils.checkedDownCast}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4572) Convert to negative in LongValueToIntValue

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506667#comment-15506667
 ] 

ASF GitHub Bot commented on FLINK-4572:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2469


> Convert to negative in LongValueToIntValue
> --
>
> Key: FLINK-4572
> URL: https://issues.apache.org/jira/browse/FLINK-4572
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> The Gelly drivers expect that scale 32 edges, represented by the lower 32 
> bits of {{long}} values, can be converted to {{int}} values. Values between 
> 2^31 and 2^32 - 1 should be converted to negative integers, which is not 
> supported by {{MathUtils.checkedDownCast}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4594) Validate lower bound in MathUtils.checkedDownCast

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1550#comment-1550
 ] 

ASF GitHub Bot commented on FLINK-4594:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2481


> Validate lower bound in MathUtils.checkedDownCast
> -
>
> Key: FLINK-4594
> URL: https://issues.apache.org/jira/browse/FLINK-4594
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
> Fix For: 1.2.0
>
>
> {{MathUtils.checkedDownCast}} only compares against the upper bound 
> {{Integer.MAX_VALUE}}, which has worked with current usage. 
> Rather than adding a second comparison we can replace
> {noformat}
> if (value > Integer.MAX_VALUE) {
> {noformat}
> with a cast and check
> {noformat}
> if ((int)value != value) { ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4609) Remove redundant check for null in CrossOperator

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506669#comment-15506669
 ] 

ASF GitHub Bot commented on FLINK-4609:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2490


> Remove redundant check for null in CrossOperator
> 
>
> Key: FLINK-4609
> URL: https://issues.apache.org/jira/browse/FLINK-4609
> Project: Flink
>  Issue Type: Bug
>  Components: Java API
>Affects Versions: 1.1.2
>Reporter: Alexander Pivovarov
>Priority: Trivial
>
> CrossOperator checks input1 and input2 for null after they were dereferenced



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

