[jira] [Commented] (FLINK-4632) when yarn nodemanager lost, flink hung

2016-09-26 Thread JIRA

[ 
https://issues.apache.org/jira/browse/FLINK-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522303#comment-15522303
 ] 

刘喆 commented on FLINK-4632:
---

I tried the latest GitHub version; the problem is still there.
While the job is running, kill one TaskManager: the job then becomes CANCELING
and hangs.

> when yarn nodemanager lost,  flink hung
> ---
>
> Key: FLINK-4632
> URL: https://issues.apache.org/jira/browse/FLINK-4632
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, Streaming
>Affects Versions: 1.2.0, 1.1.2
> Environment: cdh5.5.1  jdk1.7 flink1.1.2  1.2-snapshot   kafka0.8.2
>Reporter: 刘喆
>
> When running Flink streaming on YARN with Kafka as the source, the job runs 
> well at first. But after a long run, for example 8 hours and 60,000,000+ 
> messages processed, it hangs: no messages are consumed, one TaskManager is 
> CANCELING, and the exception shows:
> org.apache.flink.runtime.io.network.netty.exception.LocalTransportException: 
> connection timeout
>   at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.exceptionCaught(PartitionRequestClientHandler.java:152)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
>   at 
> io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
>   at 
> io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
>   at 
> io.netty.channel.ChannelHandlerAdapter.exceptionCaught(ChannelHandlerAdapter.java:79)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:835)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:87)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: 连接超时 (connection timed out)
>   at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:192)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
>   at 
> io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311)
>   at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
>   at 
> io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
>   ... 6 more
> After applying https://issues.apache.org/jira/browse/FLINK-4625, it shows:
> java.lang.Exception: TaskManager was lost/killed: 
> ResourceID{resourceId='container_1471620986643_744852_01_001400'} @ 
> 38.slave.adh (dataPort=45349)
>   at 
> org.apache.flink.runtime.instance.SimpleSlot.releaseSlot(SimpleSlot.java:162)
>   at 
> org.apache.flink.runtime.instance.SlotSharingGroupAssignment.releaseSharedSlot(SlotSharingGroupAssignment.java:533)
>   at 
> org.apache.flink.runtime.instance.SharedSlot.releaseSlot(SharedSlot.java:138)
>   at 
> org.apache.flink.runtime.instance.Instance.markDead(Instance.java:167)
>   at 
> org.apache.flink.runtime.instance.InstanceManager.unregisterTaskManager(InstanceManager.java:224)

[jira] [Assigned] (FLINK-4489) Implement TaskManager's SlotAllocationTable

2016-09-26 Thread zhangjing (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangjing reassigned FLINK-4489:


Assignee: zhangjing

> Implement TaskManager's SlotAllocationTable
> ---
>
> Key: FLINK-4489
> URL: https://issues.apache.org/jira/browse/FLINK-4489
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Till Rohrmann
>Assignee: zhangjing
>
> The {{SlotManager}} is responsible for managing the available slots on the 
> {{TaskManager}}. This basically means to maintain the mapping between slots 
> and the owning {{JobManagers}} and to offer tasks which run in the slots 
> access to the owning {{JobManagers}}.
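The slot-to-owner mapping described above can be sketched as a minimal, self-contained table. All names and signatures below are hypothetical illustrations for this sub-task, not Flink's actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch: tracks which JobManager owns each TaskManager slot
// and lets tasks look up the owner of the slot they run in.
class SlotAllocationTable {
    private final Map<Integer, String> slotOwners = new HashMap<>();
    private final int numberOfSlots;

    SlotAllocationTable(int numberOfSlots) {
        this.numberOfSlots = numberOfSlots;
    }

    /** Assigns a free slot to a JobManager; false if occupied or out of range. */
    boolean allocate(int slotIndex, String jobManagerId) {
        if (slotIndex < 0 || slotIndex >= numberOfSlots
                || slotOwners.containsKey(slotIndex)) {
            return false;
        }
        slotOwners.put(slotIndex, jobManagerId);
        return true;
    }

    /** Gives a task running in a slot access to its owning JobManager's id. */
    Optional<String> ownerOf(int slotIndex) {
        return Optional.ofNullable(slotOwners.get(slotIndex));
    }

    /** Frees a slot, e.g. when the owning JobManager releases it. */
    void release(int slotIndex) {
        slotOwners.remove(slotIndex);
    }
}
```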



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4673) TypeInfoFactory for Either type

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522384#comment-15522384
 ] 

ASF GitHub Bot commented on FLINK-4673:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2545
  
I will review this PR soon.


> TypeInfoFactory for Either type
> ---
>
> Key: FLINK-4673
> URL: https://issues.apache.org/jira/browse/FLINK-4673
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> I was able to resolve the requirement to specify an explicit 
> {{TypeInformation}} in the pull request for FLINK-4624 by creating a 
> {{TypeInfoFactory}} for the {{Either}} type.
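The general idea of such a factory can be sketched with inline stand-in types. Flink's real `TypeInfoFactory.createTypeInfo` has a different signature; everything below is illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

// Inline stand-ins, for illustration only; not Flink's actual classes.
class TypeInformation<T> {
    final String name;
    TypeInformation(String name) { this.name = name; }
}

class EitherTypeInformation extends TypeInformation<Object> {
    EitherTypeInformation(TypeInformation<?> left, TypeInformation<?> right) {
        super("Either<" + left.name + ", " + right.name + ">");
    }
}

// A factory derives the type information from the generic parameters the
// extractor has already resolved, so users need not pass it explicitly.
class EitherTypeInfoFactory {
    TypeInformation<?> createTypeInfo(Map<String, TypeInformation<?>> genericParameters) {
        TypeInformation<?> left = genericParameters.get("L");
        TypeInformation<?> right = genericParameters.get("R");
        if (left == null || right == null) {
            throw new IllegalStateException("Either must be parameterized with generics");
        }
        return new EitherTypeInformation(left, right);
    }
}
```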





[GitHub] flink issue #2545: [FLINK-4673] [core] TypeInfoFactory for Either type

2016-09-26 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2545
  
I will review this PR soon.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2332: [FLINK-2055] Implement Streaming HBaseSink

2016-09-26 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2332
  
@ramkrish86 that is correct.




[jira] [Commented] (FLINK-2055) Implement Streaming HBaseSink

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522387#comment-15522387
 ] 

ASF GitHub Bot commented on FLINK-2055:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2332
  
@ramkrish86 that is correct.


> Implement Streaming HBaseSink
> -
>
> Key: FLINK-2055
> URL: https://issues.apache.org/jira/browse/FLINK-2055
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming, Streaming Connectors
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Erli Ding
>
> As per : 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Write-Stream-to-HBase-td1300.html





[jira] [Updated] (FLINK-4632) when yarn nodemanager lost, flink hung

2016-09-26 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-4632:

Priority: Blocker  (was: Major)

> when yarn nodemanager lost,  flink hung
> ---
>
> Key: FLINK-4632
> URL: https://issues.apache.org/jira/browse/FLINK-4632
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, Streaming
>Affects Versions: 1.2.0, 1.1.2
> Environment: cdh5.5.1  jdk1.7 flink1.1.2  1.2-snapshot   kafka0.8.2
>Reporter: 刘喆
>Priority: Blocker
>

[jira] [Commented] (FLINK-4632) when yarn nodemanager lost, flink hung

2016-09-26 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522404#comment-15522404
 ] 

Stephan Ewen commented on FLINK-4632:
-

I have tried to reproduce this, but it works well on my machine.

Can you send the logs of the following again:
  - the JobManager
  - the hanging TaskManager

> when yarn nodemanager lost,  flink hung
> ---
>
> Key: FLINK-4632
> URL: https://issues.apache.org/jira/browse/FLINK-4632
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, Streaming
>Affects Versions: 1.2.0, 1.1.2
> Environment: cdh5.5.1  jdk1.7 flink1.1.2  1.2-snapshot   kafka0.8.2
>Reporter: 刘喆
>

[jira] [Assigned] (FLINK-3674) Add an interface for EventTime aware User Function

2016-09-26 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-3674:
---

Assignee: Aljoscha Krettek  (was: ramkrishna.s.vasudevan)

> Add an interface for EventTime aware User Function
> --
>
> Key: FLINK-3674
> URL: https://issues.apache.org/jira/browse/FLINK-3674
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming
>Affects Versions: 1.0.0
>Reporter: Stephan Ewen
>Assignee: Aljoscha Krettek
>
> I suggest to add an interface that UDFs can implement, which will let them be 
> notified upon watermark updates.
> Example usage:
> {code}
> public interface EventTimeFunction {
> void onWatermark(Watermark watermark);
> }
> public class MyMapper implements MapFunction<String, String>, 
> EventTimeFunction {
> private long currentEventTime = Long.MIN_VALUE;
> public String map(String value) {
> return value + " @ " + currentEventTime;
> }
> public void onWatermark(Watermark watermark) {
> currentEventTime = watermark.getTimestamp();
> }
> }
> {code}
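A self-contained sketch of how a runtime operator might drive this interface: the types are re-declared inline here, and the operator logic is a guess for illustration, not Flink's actual implementation:

```java
// Inline stand-ins mirroring the proposal above.
interface MapFunction<IN, OUT> { OUT map(IN value); }

interface EventTimeFunction { void onWatermark(Watermark watermark); }

class Watermark {
    private final long timestamp;
    Watermark(long timestamp) { this.timestamp = timestamp; }
    long getTimestamp() { return timestamp; }
}

class MyMapper implements MapFunction<String, String>, EventTimeFunction {
    private long currentEventTime = Long.MIN_VALUE;

    public String map(String value) { return value + " @ " + currentEventTime; }

    public void onWatermark(Watermark watermark) {
        currentEventTime = watermark.getTimestamp();
    }
}

class EventTimeFunctionDemo {
    // A hypothetical operator checks whether its UDF also implements
    // EventTimeFunction and, if so, forwards watermark updates to it.
    static <IN, OUT> OUT process(MapFunction<IN, OUT> udf, Watermark wm, IN record) {
        if (udf instanceof EventTimeFunction) {
            ((EventTimeFunction) udf).onWatermark(wm);
        }
        return udf.map(record);
    }

    public static void main(String[] args) {
        System.out.println(process(new MyMapper(), new Watermark(42L), "event"));
        // prints: event @ 42
    }
}
```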





[jira] [Updated] (FLINK-3674) Add an interface for Time aware User Function

2016-09-26 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-3674:

Summary: Add an interface for Time aware User Function  (was: Add an 
interface for EventTime aware User Function)

> Add an interface for Time aware User Function
> -
>
> Key: FLINK-3674
> URL: https://issues.apache.org/jira/browse/FLINK-3674
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming
>Affects Versions: 1.0.0
>Reporter: Stephan Ewen
>Assignee: Aljoscha Krettek
>
> I suggest to add an interface that UDFs can implement, which will let them be 
> notified upon watermark updates.
> Example usage:
> {code}
> public interface EventTimeFunction {
> void onWatermark(Watermark watermark);
> }
> public class MyMapper implements MapFunction<String, String>, 
> EitherTypeInfo needs
> private long currentEventTime = Long.MIN_VALUE;
> public String map(String value) {
> return value + " @ " + currentEventTime;
> }
> public void onWatermark(Watermark watermark) {
> currentEventTime = watermark.getTimestamp();
> }
> }
> {code}





[jira] [Updated] (FLINK-3674) Add an interface for Time aware User Functions

2016-09-26 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-3674:

Summary: Add an interface for Time aware User Functions  (was: Add an 
interface for Time aware User Function)

> Add an interface for Time aware User Functions
> --
>
> Key: FLINK-3674
> URL: https://issues.apache.org/jira/browse/FLINK-3674
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming
>Affects Versions: 1.0.0
>Reporter: Stephan Ewen
>Assignee: Aljoscha Krettek
>
> I suggest to add an interface that UDFs can implement, which will let them be 
> notified upon watermark updates.
> Example usage:
> {code}
> public interface EventTimeFunction {
> void onWatermark(Watermark watermark);
> }
> public class MyMapper implements MapFunction<String, String>, 
> EventTimeFunction {
> private long currentEventTime = Long.MIN_VALUE;
> public String map(String value) {
> return value + " @ " + currentEventTime;
> }
> public void onWatermark(Watermark watermark) {
> currentEventTime = watermark.getTimestamp();
> }
> }
> {code}





[GitHub] flink issue #2547: [FLINK-4552] Refactor WindowOperator/Trigger Tests

2016-09-26 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2547
  
There is a failing test: 
`TimestampsAndPeriodicWatermarksOperatorTest.testTimestampsAndPeriodicWatermarksOperator(TimestampsAndPeriodicWatermarksOperatorTest.java:74)`




[jira] [Commented] (FLINK-4552) Refactor WindowOperator/Trigger Tests

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522565#comment-15522565
 ] 

ASF GitHub Bot commented on FLINK-4552:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2547
  
There is a failing test: 
`TimestampsAndPeriodicWatermarksOperatorTest.testTimestampsAndPeriodicWatermarksOperator(TimestampsAndPeriodicWatermarksOperatorTest.java:74)`


> Refactor WindowOperator/Trigger Tests
> -
>
> Key: FLINK-4552
> URL: https://issues.apache.org/jira/browse/FLINK-4552
> Project: Flink
>  Issue Type: Improvement
>  Components: Windowing Operators
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> Right now, tests for {{WindowOperator}}, {{WindowAssigner}}, {{Trigger}} and 
> {{WindowFunction}} are all conflated in {{WindowOperatorTest}}. All of these 
> test that a certain combination of a {{Trigger}}, {{WindowAssigner}} and 
> {{WindowFunction}} produce the expected output.
> We should modularize these tests and spread them out across multiple files, 
> possibly one per trigger, for the triggers. Also, we should extend/change the 
> tests in some key ways:
>  - {{WindowOperatorTest}} test should just verify that the interaction 
> between {{WindowOperator}} and the various other parts works as expected, 
> that the correct methods on {{Trigger}} and {{WindowFunction}} are called at 
> the expected time and that snapshotting, timers, cleanup etc. work correctly. 
> These tests should also verify that the different state types and 
> {{WindowFunctions}} work correctly.
>  - {{Trigger}} tests should present elements to triggers and verify that they 
> fire at the correct times. The actual output of the {{WindowFunction}} is not 
> important for these tests. We should also test that triggers correctly clean 
> up state and timers.
>  - {{WindowAssigner}} tests should test each window assigner and also verify 
> that, for example, the offset parameter of time-based windows works correctly.
> There is already {{WindowingTestHarness}} but it is not used by tests, I 
> think we can expand on that and provide more thorough test coverage while 
> also making the tests more maintainable ({{WindowOperatorTest.java}} is 
> nearing 3000 lines of code).
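As a flavor of the proposed trigger-only tests, here is a self-contained sketch. The `Trigger` interface below is a heavily simplified stand-in for Flink's, kept only to show the "present elements, assert firing times" style:

```java
// Simplified stand-in for a trigger: per element, decide FIRE or CONTINUE.
interface Trigger<T> {
    enum Result { CONTINUE, FIRE }
    Result onElement(T element);
}

// Fires on every maxCount-th element, then resets its counter (state cleanup).
class CountTrigger<T> implements Trigger<T> {
    private final int maxCount;
    private int count;

    CountTrigger(int maxCount) { this.maxCount = maxCount; }

    public Result onElement(T element) {
        if (++count >= maxCount) {
            count = 0;
            return Result.FIRE;
        }
        return Result.CONTINUE;
    }
}

class CountTriggerTest {
    public static void main(String[] args) {
        Trigger<String> trigger = new CountTrigger<>(3);
        // The trigger should fire on exactly every third element; the actual
        // output of the window function is irrelevant to this kind of test.
        assert trigger.onElement("a") == Trigger.Result.CONTINUE;
        assert trigger.onElement("b") == Trigger.Result.CONTINUE;
        assert trigger.onElement("c") == Trigger.Result.FIRE;
        assert trigger.onElement("d") == Trigger.Result.CONTINUE;
    }
}
```

Run with `java -ea` so the assertions are checked.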





[GitHub] flink issue #2547: [FLINK-4552] Refactor WindowOperator/Trigger Tests

2016-09-26 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2547
  
@zentol yep, thanks, working on that





[jira] [Commented] (FLINK-4552) Refactor WindowOperator/Trigger Tests

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522569#comment-15522569
 ] 

ASF GitHub Bot commented on FLINK-4552:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2547
  
@zentol yep, thanks, working on that



> Refactor WindowOperator/Trigger Tests
> -
>
> Key: FLINK-4552
> URL: https://issues.apache.org/jira/browse/FLINK-4552
> Project: Flink
>  Issue Type: Improvement
>  Components: Windowing Operators
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> Right now, tests for {{WindowOperator}}, {{WindowAssigner}}, {{Trigger}} and 
> {{WindowFunction}} are all conflated in {{WindowOperatorTest}}. All of these 
> test that a certain combination of a {{Trigger}}, {{WindowAssigner}} and 
> {{WindowFunction}} produce the expected output.
> We should modularize these tests and spread them out across multiple files, 
> possibly one per trigger, for the triggers. Also, we should extend/change the 
> tests in some key ways:
>  - {{WindowOperatorTest}} test should just verify that the interaction 
> between {{WindowOperator}} and the various other parts works as expected, 
> that the correct methods on {{Trigger}} and {{WindowFunction}} are called at 
> the expected time and that snapshotting, timers, cleanup etc. work correctly. 
> These tests should also verify that the different state types and 
> {{WindowFunctions}} work correctly.
>  - {{Trigger}} tests should present elements to triggers and verify that they 
> fire at the correct times. The actual output of the {{WindowFunction}} is not 
> important for these tests. We should also test that triggers correctly clean 
> up state and timers.
>  - {{WindowAssigner}} tests should test each window assigner and also verify 
> that, for example, the offset parameter of time-based windows works correctly.
> There is already {{WindowingTestHarness}} but it is not used by tests, I 
> think we can expand on that and provide more thorough test coverage while 
> also making the tests more maintainable ({{WindowOperatorTest.java}} is 
> nearing 3000 lines of code).





[GitHub] flink pull request #2545: [FLINK-4673] [core] TypeInfoFactory for Either typ...

2016-09-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2545#discussion_r80437853
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java 
---
@@ -675,38 +673,6 @@ else if (isClassType(t) && 
Tuple.class.isAssignableFrom(typeToClass(t))) {
return new TupleTypeInfo(typeToClass(t), subTypesInfo);

}
-   // check if type is a subclass of Either
-   else if (isClassType(t) && 
Either.class.isAssignableFrom(typeToClass(t))) {
-   Type curT = t;
-
-   // go up the hierarchy until we reach Either (with or 
without generics)
-   // collect the types while moving up for a later 
top-down
-   while (!(isClassType(curT) && 
typeToClass(curT).equals(Either.class))) {
-   typeHierarchy.add(curT);
-   curT = typeToClass(curT).getGenericSuperclass();
-   }
-
-   // check if Either has generics
-   if (curT instanceof Class) {
-   throw new InvalidTypesException("Either needs 
to be parameterized by using generics.");
-   }
-
-   typeHierarchy.add(curT);
-
-   // create the type information for the subtypes
-   final TypeInformation[] subTypesInfo = 
createSubTypesInfo(t, (ParameterizedType) curT, typeHierarchy, in1Type, 
in2Type, false);
-   // type needs to be treated a pojo due to additional 
fields
-   if (subTypesInfo == null) {
-   if (t instanceof ParameterizedType) {
-   return (TypeInformation) 
analyzePojo(typeToClass(t), new ArrayList(typeHierarchy), 
(ParameterizedType) t, in1Type, in2Type);
--- End diff --

What happens if the Either class is extended? That whole check is missing 
when using a factory. It needs to go into the `createTypeInfo` method. Also, I 
just realized that `EitherTypeInfo` actually needs a second constructor that 
takes the subclass, similar to `TupleTypeInfo`. The `t` parameter of 
`createTypeInfo` can then be passed to this constructor.
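A rough sketch of the second constructor suggested here, with stand-in types; all names below are guesses for illustration, not the actual Flink code:

```java
// Stand-ins, for illustration only.
abstract class Either<L, R> { }
class TypeInformation<T> { }

// Hypothetical shape of EitherTypeInfo after the suggested change: besides the
// left/right type information it records the concrete Either subclass, so the
// runtime can instantiate the subtype (analogous to TupleTypeInfo).
class EitherTypeInfo<L, R> extends TypeInformation<Either<L, R>> {
    private final Class<?> clazz;   // Either.class or a user-defined subclass
    private final TypeInformation<L> leftType;
    private final TypeInformation<R> rightType;

    // Suggested constructor: a factory's createTypeInfo would resolve its `t`
    // parameter to the concrete subclass and pass it here.
    EitherTypeInfo(Class<?> clazz, TypeInformation<L> leftType,
                   TypeInformation<R> rightType) {
        this.clazz = clazz;
        this.leftType = leftType;
        this.rightType = rightType;
    }

    Class<?> getTypeClass() { return clazz; }
    TypeInformation<L> getLeftType() { return leftType; }
    TypeInformation<R> getRightType() { return rightType; }
}
```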




[GitHub] flink pull request #2545: [FLINK-4673] [core] TypeInfoFactory for Either typ...

2016-09-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2545#discussion_r80439161
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeExtractorTest.java
 ---
@@ -1857,12 +1856,6 @@ public void testEitherHierarchy() {
Assert.assertEquals(expected, ti);
}
 
-   @Test(expected=InvalidTypesException.class)
-   public void testEitherFromObjectException() {
-   Either> either = Either.Left("test");
-   TypeExtractor.getForObject(either);
--- End diff --

This test is necessary. The exception is not thrown anymore, but the type 
information that is currently generated is invalid (the right type information 
is `null`, which should never happen).




[jira] [Commented] (FLINK-4673) TypeInfoFactory for Either type

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522590#comment-15522590
 ] 

ASF GitHub Bot commented on FLINK-4673:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2545#discussion_r80439161
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeExtractorTest.java
 ---
@@ -1857,12 +1856,6 @@ public void testEitherHierarchy() {
Assert.assertEquals(expected, ti);
}
 
-   @Test(expected=InvalidTypesException.class)
-   public void testEitherFromObjectException() {
-   Either> either = Either.Left("test");
-   TypeExtractor.getForObject(either);
--- End diff --

This test is necessary. The exception is not thrown anymore but the type 
information that is currently generated is invalid (the right type information 
is `null`, which should never happen).


> TypeInfoFactory for Either type
> ---
>
> Key: FLINK-4673
> URL: https://issues.apache.org/jira/browse/FLINK-4673
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> I was able to resolve the requirement to specify an explicit 
> {{TypeInformation}} in the pull request for FLINK-4624 by creating a 
> {{TypeInfoFactory}} for the {{Either}} type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4673) TypeInfoFactory for Either type

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522592#comment-15522592
 ] 

ASF GitHub Bot commented on FLINK-4673:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2545#discussion_r80424649
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/EitherTypeInfoFactory.java
 ---
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.types.Either;
+
+import java.lang.reflect.Type;
+import java.util.Map;
+
+public class EitherTypeInfoFactory<L, R> extends TypeInfoFactory<Either<L, R>> {
+
+   @Override
+   public TypeInformation<Either<L, R>> createTypeInfo(Type t, Map<String, TypeInformation<?>> genericParameters) {
+   return new EitherTypeInfo(genericParameters.get("L"), genericParameters.get("R"));
--- End diff --

Can you check here that the key is not null? Btw., we should add a null check to 
the constructor of EitherTypeInfo.
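The suggested guard can be sketched with `Objects.requireNonNull`, using a hypothetical, simplified stand-in for `EitherTypeInfo` (the real class lives in `org.apache.flink.api.java.typeutils` and takes `TypeInformation` arguments):

```java
import java.util.Objects;

public class Main {
    // Hypothetical stand-in for Flink's EitherTypeInfo; only the null
    // checks proposed in the review are illustrated (type names are plain
    // Strings here instead of TypeInformation instances).
    static final class EitherTypeInfo<L, R> {
        final String leftType;
        final String rightType;

        EitherTypeInfo(String leftType, String rightType) {
            // Fail fast in the constructor instead of propagating nulls.
            this.leftType = Objects.requireNonNull(leftType, "left type information must not be null");
            this.rightType = Objects.requireNonNull(rightType, "right type information must not be null");
        }
    }

    public static void main(String[] args) {
        boolean rejected = false;
        try {
            new EitherTypeInfo<String, String>(null, "String");
        } catch (NullPointerException expected) {
            rejected = true;
        }
        System.out.println(rejected ? "null left type rejected" : "null left type accepted");
    }
}
```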


> TypeInfoFactory for Either type
> ---
>
> Key: FLINK-4673
> URL: https://issues.apache.org/jira/browse/FLINK-4673
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> I was able to resolve the requirement to specify an explicit 
> {{TypeInformation}} in the pull request for FLINK-4624 by creating a 
> {{TypeInfoFactory}} for the {{Either}} type.





[GitHub] flink pull request #2545: [FLINK-4673] [core] TypeInfoFactory for Either typ...

2016-09-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2545#discussion_r80424649
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/EitherTypeInfoFactory.java
 ---
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.types.Either;
+
+import java.lang.reflect.Type;
+import java.util.Map;
+
+public class EitherTypeInfoFactory<L, R> extends TypeInfoFactory<Either<L, R>> {
+
+   @Override
+   public TypeInformation<Either<L, R>> createTypeInfo(Type t, Map<String, TypeInformation<?>> genericParameters) {
+   return new EitherTypeInfo(genericParameters.get("L"), genericParameters.get("R"));
--- End diff --

Can you check here that the key is not null? Btw., we should add a null check to 
the constructor of EitherTypeInfo.


---
---


[jira] [Commented] (FLINK-4673) TypeInfoFactory for Either type

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522591#comment-15522591
 ] 

ASF GitHub Bot commented on FLINK-4673:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/2545#discussion_r80437853
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java 
---
@@ -675,38 +673,6 @@ else if (isClassType(t) && 
Tuple.class.isAssignableFrom(typeToClass(t))) {
return new TupleTypeInfo(typeToClass(t), subTypesInfo);

}
-   // check if type is a subclass of Either
-   else if (isClassType(t) && 
Either.class.isAssignableFrom(typeToClass(t))) {
-   Type curT = t;
-
-   // go up the hierarchy until we reach Either (with or 
without generics)
-   // collect the types while moving up for a later 
top-down
-   while (!(isClassType(curT) && 
typeToClass(curT).equals(Either.class))) {
-   typeHierarchy.add(curT);
-   curT = typeToClass(curT).getGenericSuperclass();
-   }
-
-   // check if Either has generics
-   if (curT instanceof Class) {
-   throw new InvalidTypesException("Either needs 
to be parameterized by using generics.");
-   }
-
-   typeHierarchy.add(curT);
-
-   // create the type information for the subtypes
-   final TypeInformation[] subTypesInfo = 
createSubTypesInfo(t, (ParameterizedType) curT, typeHierarchy, in1Type, 
in2Type, false);
-   // type needs to be treated a pojo due to additional 
fields
-   if (subTypesInfo == null) {
-   if (t instanceof ParameterizedType) {
-   return (TypeInformation) 
analyzePojo(typeToClass(t), new ArrayList(typeHierarchy), 
(ParameterizedType) t, in1Type, in2Type);
--- End diff --

What happens if the Either class is extended? The whole check is missing 
when using a factory. It needs to go into the `createTypeInfo` method. What I 
just recognized: the `EitherTypeInfo` actually needs a second constructor that 
takes the subclass, similar to `TupleTypeInfo`. The `t` parameter of 
`createTypeInfo` can then be passed to this constructor.
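The proposed second constructor can be sketched as follows; all class names below are hypothetical, simplified stand-ins for the Flink types (`EitherTypeInfo`, `org.apache.flink.types.Either`), and the `Class` argument models what the `t` parameter of `createTypeInfo` would supply:

```java
public class Main {
    // Stand-in for org.apache.flink.types.Either and a user subclass of it.
    static class EitherStandIn {}
    static class MyEither extends EitherStandIn {}

    // Hypothetical sketch of the reviewer's proposal: a second constructor
    // that records the concrete Either subclass, mirroring TupleTypeInfo.
    static class EitherTypeInfo<L, R> {
        final Class<?> typeClass;   // Either.class or a user subclass
        final String leftType;
        final String rightType;

        EitherTypeInfo(String leftType, String rightType) {
            // Default: the type is the Either base class itself.
            this(EitherStandIn.class, leftType, rightType);
        }

        EitherTypeInfo(Class<?> subclass, String leftType, String rightType) {
            this.typeClass = subclass;
            this.leftType = leftType;
            this.rightType = rightType;
        }
    }

    public static void main(String[] args) {
        // In the factory, the `t` parameter of createTypeInfo would supply
        // the concrete subclass here.
        EitherTypeInfo<String, Long> info =
            new EitherTypeInfo<>(MyEither.class, "String", "Long");
        System.out.println(info.typeClass.getSimpleName()); // prints "MyEither"
    }
}
```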


> TypeInfoFactory for Either type
> ---
>
> Key: FLINK-4673
> URL: https://issues.apache.org/jira/browse/FLINK-4673
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> I was able to resolve the requirement to specify an explicit 
> {{TypeInformation}} in the pull request for FLINK-4624 by creating a 
> {{TypeInfoFactory}} for the {{Either}} type.



--


[GitHub] flink issue #2465: [FLINK-4447] [docs] Include NettyConfig options on Config...

2016-09-26 Thread uce
Github user uce commented on the issue:

https://github.com/apache/flink/pull/2465
  
Very good! +1 to add this with a note that the defaults should work fine 
out of the box.


---
---


[jira] [Commented] (FLINK-4447) Include NettyConfig options on Configurations page

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522597#comment-15522597
 ] 

ASF GitHub Bot commented on FLINK-4447:
---

Github user uce commented on the issue:

https://github.com/apache/flink/pull/2465
  
Very good! +1 to add this with a note that the defaults should work fine 
out of the box.


> Include NettyConfig options on Configurations page
> --
>
> Key: FLINK-4447
> URL: https://issues.apache.org/jira/browse/FLINK-4447
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
> Fix For: 1.2.0
>
>
> {{NettyConfig}} looks for the following configuration options which are not 
> listed in the Flink documentation.
> {noformat}
>   public static final String NUM_ARENAS = "taskmanager.net.num-arenas";
>   public static final String NUM_THREADS_SERVER = 
> "taskmanager.net.server.numThreads";
>   public static final String NUM_THREADS_CLIENT = 
> "taskmanager.net.client.numThreads";
>   public static final String CONNECT_BACKLOG = 
> "taskmanager.net.server.backlog";
>   public static final String CLIENT_CONNECT_TIMEOUT_SECONDS = 
> "taskmanager.net.client.connectTimeoutSec";
>   public static final String SEND_RECEIVE_BUFFER_SIZE = 
> "taskmanager.net.sendReceiveBufferSize";
>   public static final String TRANSPORT_TYPE = "taskmanager.net.transport";
> {noformat}
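These options would go into `flink-conf.yaml`. The values below are purely illustrative, not the documented defaults; consult `NettyConfig` for the actual default behavior (e.g. the transport type accepts `nio`, `epoll`, or `auto`):

```yaml
# Illustrative values only -- check NettyConfig for the real defaults.
taskmanager.net.num-arenas: 4
taskmanager.net.server.numThreads: 4
taskmanager.net.client.numThreads: 4
taskmanager.net.server.backlog: 0
taskmanager.net.client.connectTimeoutSec: 120
taskmanager.net.sendReceiveBufferSize: 0
taskmanager.net.transport: nio
```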



--


[jira] [Assigned] (FLINK-4671) Table API can not be built

2016-09-26 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4671:
---

Assignee: Timo Walther

> Table API can not be built
> --
>
> Key: FLINK-4671
> URL: https://issues.apache.org/jira/browse/FLINK-4671
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Running {{mvn clean verify}} in {{flink-table}} results in a build failure.
> {code}
> [ERROR] Failed to execute goal on project flink-table_2.10: Could not resolve 
> dependencies for project org.apache.flink:flink-table_2.10:jar:1.2-SNAPSHOT: 
> Failure to find org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {code}
> However, the master can be built successfully.



--


[GitHub] flink pull request #2524: [FLINK-4653] [Client] Refactor JobClientActor to a...

2016-09-26 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2524#discussion_r80441549
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobClient/JobClientUtils.java
 ---
@@ -0,0 +1,257 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobClient;
+
+import akka.actor.ActorSystem;
+import akka.actor.Address;
+import akka.util.Timeout;
+import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.akka.ListeningBehaviour;
+import org.apache.flink.runtime.blob.BlobCache;
+import org.apache.flink.runtime.blob.BlobKey;
+import org.apache.flink.runtime.client.JobExecutionException;
+import org.apache.flink.runtime.client.JobRetrievalException;
+import org.apache.flink.runtime.jobmaster.JobMasterGateway;
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
+import org.apache.flink.runtime.messages.JobManagerMessages;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.akka.AkkaRpcService;
+import org.apache.flink.util.NetUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import scala.Option;
+import scala.Some;
+import scala.Tuple2;
+import scala.concurrent.Await;
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.URL;
+import java.net.URLClassLoader;
+import java.util.List;
+import java.util.concurrent.TimeoutException;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * JobClientUtils is a utility class for the client.
+ * It offers the following methods:
+ * <ul>
+ *     <li>{@link #startJobClientRpcService(Configuration)} Starts an RPC service for the client</li>
+ *     <li>{@link #retrieveRunningJobResult(JobID, JobMasterGateway, RpcService, LeaderRetrievalService, boolean, FiniteDuration, Configuration)}
+ *          Attaches to a running job using the JobID and waits for its job result</li>
+ *     <li>{@link #awaitJobResult(JobInfoTracker, ClassLoader)} Awaits the result of the job execution which the jobInfoTracker listens for</li>
+ *     <li>{@link #retrieveClassLoader(JobID, JobMasterGateway, Configuration)} Reconstructs the class loader by first requesting information
+ *          about it from the JobMaster and then downloading missing jar files</li>
+ * </ul>
+ */
+public class JobClientUtils {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(JobClientUtils.class);
+
+
+   /**
+    * Starts an RPC service for the client.
+    *
+    * @param config the Flink configuration
+    * @return the started RpcService
+    * @throws IOException
+    */
+   public static RpcService startJobClientRpcService(Configuration config)
+   throws IOException
+   {
+   LOG.info("Starting JobClientUtils rpc service");
+   Option<Tuple2<String, Object>> remoting = new Some<>(new Tuple2<String, Object>("", 0));
+
+   // start a remote actor system to listen on an arbitrary port
+   ActorSystem system = AkkaUtils.createActorSystem(config, 
remoting);
+   Address address = system.provider().getDefaultAddress();
+
+   String hostAddress = address.host().isDefined() ?
+   
NetUtils.ipAddressToUrlString(InetAddress.getByName(address.host().get())) :
+   "(unknown)";
+   int port = address.port().isDefined() ? ((Integer) 
address.port().get()) : -1;
+   LOG.info("Started JobClientUtils actor system at " + 
hostAddress + ':' + port);
+
+   Timeout timeout = new 
Timeout(AkkaUtils.getClientTimeout(config));
+  

[jira] [Commented] (FLINK-4653) Refactor JobClientActor to adapt to the new Rpc framework and new cluster managerment

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522610#comment-15522610
 ] 

ASF GitHub Bot commented on FLINK-4653:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2524#discussion_r80441549
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobClient/JobClientUtils.java
 ---
@@ -0,0 +1,257 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobClient;
+
+import akka.actor.ActorSystem;
+import akka.actor.Address;
+import akka.util.Timeout;
+import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.akka.AkkaUtils;
+import org.apache.flink.runtime.akka.ListeningBehaviour;
+import org.apache.flink.runtime.blob.BlobCache;
+import org.apache.flink.runtime.blob.BlobKey;
+import org.apache.flink.runtime.client.JobExecutionException;
+import org.apache.flink.runtime.client.JobRetrievalException;
+import org.apache.flink.runtime.jobmaster.JobMasterGateway;
+import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService;
+import org.apache.flink.runtime.messages.JobManagerMessages;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.akka.AkkaRpcService;
+import org.apache.flink.util.NetUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import scala.Option;
+import scala.Some;
+import scala.Tuple2;
+import scala.concurrent.Await;
+import scala.concurrent.Future;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.URL;
+import java.net.URLClassLoader;
+import java.util.List;
+import java.util.concurrent.TimeoutException;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * JobClientUtils is a utility class for the client.
+ * It offers the following methods:
+ * <ul>
+ *     <li>{@link #startJobClientRpcService(Configuration)} Starts an RPC service for the client</li>
+ *     <li>{@link #retrieveRunningJobResult(JobID, JobMasterGateway, RpcService, LeaderRetrievalService, boolean, FiniteDuration, Configuration)}
+ *          Attaches to a running job using the JobID and waits for its job result</li>
+ *     <li>{@link #awaitJobResult(JobInfoTracker, ClassLoader)} Awaits the result of the job execution which the jobInfoTracker listens for</li>
+ *     <li>{@link #retrieveClassLoader(JobID, JobMasterGateway, Configuration)} Reconstructs the class loader by first requesting information
+ *          about it from the JobMaster and then downloading missing jar files</li>
+ * </ul>
+ */
+public class JobClientUtils {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(JobClientUtils.class);
+
+
+   /**
+    * Starts an RPC service for the client.
+    *
+    * @param config the Flink configuration
+    * @return the started RpcService
+    * @throws IOException
+    */
+   public static RpcService startJobClientRpcService(Configuration config)
+   throws IOException
+   {
+   LOG.info("Starting JobClientUtils rpc service");
+   Option<Tuple2<String, Object>> remoting = new Some<>(new Tuple2<String, Object>("", 0));
+
+   // start a remote actor system to listen on an arbitrary port
+   ActorSystem system = AkkaUtils.createActorSystem(config, 
remoting);
+   Address address = system.provider().getDefaultAddress();
+
+   String hostAddress = address.host().isDefined() ?
+   
NetUtils.ipAddressToUrlString(InetAddress.getByName(address.host().get())) :
+   "(unknown)";
+   int port = address.por

[jira] [Commented] (FLINK-4596) RESTART_STRATEGY is not really pluggable

2016-09-26 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522615#comment-15522615
 ] 

Till Rohrmann commented on FLINK-4596:
--

Hi [~nguraja], you're right that the class loading will fail due to the lower 
casing of the restart strategy. This is clearly a bug and I'll fix it.

At the moment, the restart strategies are not intended to be pluggable. That's 
also why you are not allowed to create your own {{RestartStrategyConfigurations}}. 
A problem, for example, is that the default restart strategy will be created 
when you start the job manager. Thus, there won't be any user code classes 
around. If you want to use this, then you would have to put the custom restart 
strategy manually on the classpath of the {{JobManager}}.

If you need custom restart strategies, then we can change this restriction. We 
could then introduce a {{CustomRestartStrategyConfiguration}} interface which 
is capable of instantiating {{RestartStrategies}}. However, this class would have 
to be serializable.
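The proposed interface can be sketched in a few lines; both interface names below are hypothetical stand-ins (Flink's real `RestartStrategy` lives in `org.apache.flink.runtime.executiongraph.restart`), and the key design point is that the configuration, not the strategy itself, is what gets serialized and shipped to the JobManager:

```java
import java.io.Serializable;

public class Main {
    // Hypothetical, simplified stand-in for Flink's RestartStrategy.
    interface RestartStrategy {
        boolean canRestart();
    }

    // Sketch of the proposed CustomRestartStrategyConfiguration: a
    // serializable factory that user code could ship with the job graph
    // and that is only instantiated into a strategy on the JobManager.
    interface CustomRestartStrategyConfiguration extends Serializable {
        RestartStrategy createRestartStrategy();
    }

    public static void main(String[] args) {
        // A user-defined configuration producing an always-restart strategy.
        CustomRestartStrategyConfiguration config = () -> () -> true;
        System.out.println(config.createRestartStrategy().canRestart()); // prints "true"
    }
}
```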

> RESTART_STRATEGY is not really pluggable
> 
>
> Key: FLINK-4596
> URL: https://issues.apache.org/jira/browse/FLINK-4596
> Project: Flink
>  Issue Type: Bug
>Reporter: Nagarjun Guraja
>
> Standalone cluster config accepts an implementation(class) as part of the 
> yaml config file but that does not work either as cluster level restart 
> strategy or streaming job level restart strategy
> CLUSTER LEVEL CAUSE: createRestartStrategyFactory converts configured value 
> of strategyname to lowercase and searches for class name using lowercased 
> string.
> JOB LEVEL CAUSE: Checkpointed streams have specific code to add 
> fixeddelayrestartconfiguration if no RestartConfiguration is specified in  
> the job env. Also, jobs cannot provide their own custom restart strategy 
> implementation and are constrained to pick up one of the three restart 
> strategies provided by flink. 
> FIX: Do not lower case the strategy config value, support a new 
> restartconfiguration to fallback to cluster level restart strategy and 
> support jobs to provide custom implementation of the strategy class itself.



--


[jira] [Updated] (FLINK-4596) RESTART_STRATEGY is not really pluggable

2016-09-26 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-4596:
-
Issue Type: Wish  (was: Bug)

> RESTART_STRATEGY is not really pluggable
> 
>
> Key: FLINK-4596
> URL: https://issues.apache.org/jira/browse/FLINK-4596
> Project: Flink
>  Issue Type: Wish
>Affects Versions: 1.2.0
>Reporter: Nagarjun Guraja
>
> Standalone cluster config accepts an implementation(class) as part of the 
> yaml config file but that does not work either as cluster level restart 
> strategy or streaming job level restart strategy
> CLUSTER LEVEL CAUSE: createRestartStrategyFactory converts configured value 
> of strategyname to lowercase and searches for class name using lowercased 
> string.
> JOB LEVEL CAUSE: Checkpointed streams have specific code to add 
> fixeddelayrestartconfiguration if no RestartConfiguration is specified in  
> the job env. Also, jobs cannot provide their own custom restart strategy 
> implementation and are constrained to pick up one of the three restart 
> strategies provided by flink. 
> FIX: Do not lower case the strategy config value, support a new 
> restartconfiguration to fallback to cluster level restart strategy and 
> support jobs to provide custom implementation of the strategy class itself.



--


[jira] [Updated] (FLINK-4596) RESTART_STRATEGY is not really pluggable

2016-09-26 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-4596:
-
Affects Version/s: 1.2.0

> RESTART_STRATEGY is not really pluggable
> 
>
> Key: FLINK-4596
> URL: https://issues.apache.org/jira/browse/FLINK-4596
> Project: Flink
>  Issue Type: Wish
>Affects Versions: 1.2.0
>Reporter: Nagarjun Guraja
>
> Standalone cluster config accepts an implementation(class) as part of the 
> yaml config file but that does not work either as cluster level restart 
> strategy or streaming job level restart strategy
> CLUSTER LEVEL CAUSE: createRestartStrategyFactory converts configured value 
> of strategyname to lowercase and searches for class name using lowercased 
> string.
> JOB LEVEL CAUSE: Checkpointed streams have specific code to add 
> fixeddelayrestartconfiguration if no RestartConfiguration is specified in  
> the job env. Also, jobs cannot provide their own custom restart strategy 
> implementation and are constrained to pick up one of the three restart 
> strategies provided by flink. 
> FIX: Do not lower case the strategy config value, support a new 
> restartconfiguration to fallback to cluster level restart strategy and 
> support jobs to provide custom implementation of the strategy class itself.



--


[jira] [Updated] (FLINK-4596) RESTART_STRATEGY is not really pluggable

2016-09-26 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-4596:
-
Component/s: Distributed Coordination

> RESTART_STRATEGY is not really pluggable
> 
>
> Key: FLINK-4596
> URL: https://issues.apache.org/jira/browse/FLINK-4596
> Project: Flink
>  Issue Type: Wish
>  Components: Distributed Coordination
>Affects Versions: 1.2.0
>Reporter: Nagarjun Guraja
>
> Standalone cluster config accepts an implementation(class) as part of the 
> yaml config file but that does not work either as cluster level restart 
> strategy or streaming job level restart strategy
> CLUSTER LEVEL CAUSE: createRestartStrategyFactory converts configured value 
> of strategyname to lowercase and searches for class name using lowercased 
> string.
> JOB LEVEL CAUSE: Checkpointed streams have specific code to add 
> fixeddelayrestartconfiguration if no RestartConfiguration is specified in  
> the job env. Also, jobs cannot provide their own custom restart strategy 
> implementation and are constrained to pick up one of the three restart 
> strategies provided by flink. 
> FIX: Do not lower case the strategy config value, support a new 
> restartconfiguration to fallback to cluster level restart strategy and 
> support jobs to provide custom implementation of the strategy class itself.



--


[jira] [Commented] (FLINK-4379) Add Rescalable Non-Partitioned State

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522624#comment-15522624
 ] 

ASF GitHub Bot commented on FLINK-4379:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2512
  
I think we should also update the documentation as part of this PR (there's 
a page dedicated to working with state)


> Add Rescalable Non-Partitioned State
> 
>
> Key: FLINK-4379
> URL: https://issues.apache.org/jira/browse/FLINK-4379
> Project: Flink
>  Issue Type: New Feature
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Stefan Richter
>
> This issue is associated with [FLIP-8| 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-8%3A+Rescalable+Non-Partitioned+State].



--


[GitHub] flink issue #2512: [FLINK-4379] Rescalable non-partitioned state

2016-09-26 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2512
  
I think we should also update the documentation as part of this PR (there's 
a page dedicated to working with state)


---
---


[jira] [Commented] (FLINK-4379) Add Rescalable Non-Partitioned State

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522638#comment-15522638
 ] 

ASF GitHub Bot commented on FLINK-4379:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2512
  
I would like to go for a follow-up PR. This is currently blocking some 
other work, so getting it in soon would help.


> Add Rescalable Non-Partitioned State
> 
>
> Key: FLINK-4379
> URL: https://issues.apache.org/jira/browse/FLINK-4379
> Project: Flink
>  Issue Type: New Feature
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Stefan Richter
>
> This issue is associated with [FLIP-8| 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-8%3A+Rescalable+Non-Partitioned+State].



--


[GitHub] flink issue #2512: [FLINK-4379] Rescalable non-partitioned state

2016-09-26 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/2512
  
I would like to go for a follow-up PR. This is currently blocking some 
other work, so getting it in soon would help.


---
---


[GitHub] flink issue #2526: [FLINK-4580] [rpc] Report rpc invocation exceptions to th...

2016-09-26 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2526
  
Thanks for the review @StephanEwen. Will merge this PR then.


---
---


[jira] [Commented] (FLINK-4580) Check that the RpcEndpoint supports the specified RpcGateway

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522651#comment-15522651
 ] 

ASF GitHub Bot commented on FLINK-4580:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2526
  
Thanks for the review @StephanEwen. Will merge this PR then.


> Check that the RpcEndpoint supports the specified RpcGateway
> 
>
> Key: FLINK-4580
> URL: https://issues.apache.org/jira/browse/FLINK-4580
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.2.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> When calling {{RpcService.connect}} the user specifies the type of the 
> {{RpcGateway}}. At the moment, it is not checked whether the {{RpcEndpoint}} 
> actually supports the specified {{RpcGateway}}.
> I think it would be good to add a runtime check that the corresponding 
> {{RpcEndpoint}} supports the specified {{RpcGateway}}. If not, then we can 
> let the connect method fail fast.
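A minimal sketch of the kind of fail-fast check the issue proposes, assuming the endpoint's class and the requested gateway interface are available at connect time. The class and method names below are illustrative only, not Flink's actual API.

```java
// Hypothetical sketch for FLINK-4580: verify that the endpoint class
// implements the requested gateway interface before completing connect,
// so a mismatched connect fails fast instead of hanging.
public final class GatewayCheck {

    /** Returns true if the endpoint class implements the requested gateway. */
    public static boolean supportsGateway(Class<?> endpointClass, Class<?> gatewayClass) {
        return gatewayClass.isAssignableFrom(endpointClass);
    }

    /** Fail-fast variant: throw immediately on a mismatch. */
    public static void checkGateway(Class<?> endpointClass, Class<?> gatewayClass) {
        if (!supportsGateway(endpointClass, gatewayClass)) {
            throw new IllegalArgumentException(
                "Endpoint " + endpointClass.getName()
                + " does not support gateway " + gatewayClass.getName());
        }
    }
}
```

The check itself is a one-line `isAssignableFrom` test; the value is in surfacing the error at connect time rather than on the first failed invocation.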





[jira] [Updated] (FLINK-4489) Implement TaskManager's SlotAllocationTable

2016-09-26 Thread zhangjing (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangjing updated FLINK-4489:
-
Assignee: (was: zhangjing)

> Implement TaskManager's SlotAllocationTable
> ---
>
> Key: FLINK-4489
> URL: https://issues.apache.org/jira/browse/FLINK-4489
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Till Rohrmann
>
> The {{SlotManager}} is responsible for managing the available slots on the 
> {{TaskManager}}. This basically means maintaining the mapping between slots 
> and the owning {{JobManagers}}, and giving tasks that run in the slots 
> access to their owning {{JobManagers}}.
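The core of the description above is a slot-to-owner mapping. The following is an illustrative-only sketch of that mapping; the real SlotAllocationTable would carry a richer API (allocation IDs, leases, timeouts, etc.), and all names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the slot -> owning-JobManager mapping
// described in FLINK-4489.
public final class SlotAllocationTable {

    // slot index -> ID of the JobManager that currently owns the slot
    private final Map<Integer, String> owners = new HashMap<>();

    /** Record that a JobManager owns the given slot; false if already taken. */
    public boolean allocate(int slotIndex, String jobManagerId) {
        return owners.putIfAbsent(slotIndex, jobManagerId) == null;
    }

    /** Look up the owning JobManager for a slot, if any. */
    public Optional<String> ownerOf(int slotIndex) {
        return Optional.ofNullable(owners.get(slotIndex));
    }

    /** Release a slot when the owning JobManager gives it back. */
    public void free(int slotIndex) {
        owners.remove(slotIndex);
    }
}
```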





[GitHub] flink issue #2332: [FLINK-2055] Implement Streaming HBaseSink

2016-09-26 Thread nielsbasjes
Github user nielsbasjes commented on the issue:

https://github.com/apache/flink/pull/2332
  
@fhueske Should the TableInputFormat also be updated to use the HBase 1.1.2 
API? It would make things a bit cleaner.




[jira] [Commented] (FLINK-2055) Implement Streaming HBaseSink

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522744#comment-15522744
 ] 

ASF GitHub Bot commented on FLINK-2055:
---

Github user nielsbasjes commented on the issue:

https://github.com/apache/flink/pull/2332
  
@fhueske Should the TableInputFormat also be updated to use the HBase 1.1.2 
API? It would make things a bit cleaner.


> Implement Streaming HBaseSink
> -
>
> Key: FLINK-2055
> URL: https://issues.apache.org/jira/browse/FLINK-2055
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming, Streaming Connectors
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Erli Ding
>
> As per : 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Write-Stream-to-HBase-td1300.html





[GitHub] flink issue #2332: [FLINK-2055] Implement Streaming HBaseSink

2016-09-26 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/2332
  
@nielsbasjes We would break compatibility with older HBase versions. Before 
we do that, we should start a discussion on the user and dev mailing lists.




[jira] [Commented] (FLINK-2055) Implement Streaming HBaseSink

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522823#comment-15522823
 ] 

ASF GitHub Bot commented on FLINK-2055:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/2332
  
@nielsbasjes We would break compatibility with older HBase versions. Before 
we do that, we should start a discussion on the user and dev mailing lists.


> Implement Streaming HBaseSink
> -
>
> Key: FLINK-2055
> URL: https://issues.apache.org/jira/browse/FLINK-2055
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming, Streaming Connectors
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Erli Ding
>
> As per : 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Write-Stream-to-HBase-td1300.html





[jira] [Created] (FLINK-4676) Merge flink-batch-con

2016-09-26 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-4676:


 Summary: Merge flink-batch-con
 Key: FLINK-4676
 URL: https://issues.apache.org/jira/browse/FLINK-4676
 Project: Flink
  Issue Type: Task
Reporter: Fabian Hueske








[jira] [Updated] (FLINK-4676) Merge flink-batch-connectors and flink-streaming-connectors modules

2016-09-26 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-4676:
-
Affects Version/s: 1.2.0

> Merge flink-batch-connectors and flink-streaming-connectors modules
> ---
>
> Key: FLINK-4676
> URL: https://issues.apache.org/jira/browse/FLINK-4676
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Priority: Minor
> Fix For: 1.2.0
>
>






[jira] [Updated] (FLINK-4676) Merge flink-batch-connectors and flink-streaming-connectors modules

2016-09-26 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-4676:
-
Summary: Merge flink-batch-connectors and flink-streaming-connectors 
modules  (was: Merge flink-batch-con)

> Merge flink-batch-connectors and flink-streaming-connectors modules
> ---
>
> Key: FLINK-4676
> URL: https://issues.apache.org/jira/browse/FLINK-4676
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
> Fix For: 1.2.0
>
>






[jira] [Updated] (FLINK-4676) Merge flink-batch-connectors and flink-streaming-connectors modules

2016-09-26 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-4676:
-
Component/s: Build System

> Merge flink-batch-connectors and flink-streaming-connectors modules
> ---
>
> Key: FLINK-4676
> URL: https://issues.apache.org/jira/browse/FLINK-4676
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Priority: Minor
> Fix For: 1.2.0
>
>






[jira] [Updated] (FLINK-4676) Merge flink-batch-connectors and flink-streaming-connectors modules

2016-09-26 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-4676:
-
Priority: Minor  (was: Major)

> Merge flink-batch-connectors and flink-streaming-connectors modules
> ---
>
> Key: FLINK-4676
> URL: https://issues.apache.org/jira/browse/FLINK-4676
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Priority: Minor
> Fix For: 1.2.0
>
>






[jira] [Updated] (FLINK-4676) Merge flink-batch-connectors and flink-streaming-connectors modules

2016-09-26 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-4676:
-
Fix Version/s: 1.2.0

> Merge flink-batch-connectors and flink-streaming-connectors modules
> ---
>
> Key: FLINK-4676
> URL: https://issues.apache.org/jira/browse/FLINK-4676
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Priority: Minor
> Fix For: 1.2.0
>
>






[GitHub] flink pull request #2439: [FLINK-4450]update storm verion to 1.0.0 in flink-...

2016-09-26 Thread liuyuzhong
Github user liuyuzhong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2439#discussion_r80458706
  
--- Diff: 
flink-contrib/flink-storm/src/test/java/org/apache/flink/storm/wrappers/WrapperSetupHelperTest.java
 ---
@@ -186,15 +177,15 @@ public void testCreateTopologyContext() {
.shuffleGrouping("bolt2", 
TestDummyBolt.groupingStreamId)
.shuffleGrouping("bolt2", 
TestDummyBolt.shuffleStreamId);
 
-   LocalCluster cluster = new LocalCluster();
--- End diff --

It doesn't work, so I deleted it first.




[GitHub] flink pull request #2439: [FLINK-4450]update storm verion to 1.0.0 in flink-...

2016-09-26 Thread liuyuzhong
Github user liuyuzhong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2439#discussion_r80458736
  
--- Diff: 
flink-contrib/flink-storm/src/test/java/org/apache/flink/storm/util/SpoutOutputCollectorObserverTest.java
 ---
@@ -17,14 +17,15 @@
  */
 package org.apache.flink.storm.util;
 
-import backtype.storm.spout.SpoutOutputCollector;
+import org.apache.storm.spout.SpoutOutputCollector;
 
 import org.junit.Assert;
 import org.junit.Test;
 
 import static org.mockito.Mockito.mock;
 
-public class SpoutOutputCollectorObserverTest {
--- End diff --

Fixed.




[jira] [Commented] (FLINK-4450) update storm version to 1.0.0

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522837#comment-15522837
 ] 

ASF GitHub Bot commented on FLINK-4450:
---

Github user liuyuzhong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2439#discussion_r80458706
  
--- Diff: 
flink-contrib/flink-storm/src/test/java/org/apache/flink/storm/wrappers/WrapperSetupHelperTest.java
 ---
@@ -186,15 +177,15 @@ public void testCreateTopologyContext() {
.shuffleGrouping("bolt2", 
TestDummyBolt.groupingStreamId)
.shuffleGrouping("bolt2", 
TestDummyBolt.shuffleStreamId);
 
-   LocalCluster cluster = new LocalCluster();
--- End diff --

It doesn't work, so I deleted it first.


> update storm version to 1.0.0
> -
>
> Key: FLINK-4450
> URL: https://issues.apache.org/jira/browse/FLINK-4450
> Project: Flink
>  Issue Type: Improvement
>  Components: flink-contrib
>Reporter: yuzhongliu
> Fix For: 2.0.0
>
>
> The Storm package path changed in the new version.
> Storm old version package:
> backtype.storm.*
> Storm new version package:
> org.apache.storm.*
> Shall we update the flink/flink-storm code to the new Storm version?





[jira] [Updated] (FLINK-4676) Merge flink-batch-connectors and flink-streaming-connectors modules

2016-09-26 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-4676:
-
Description: 
We have two separate Maven modules for batch and streaming connectors 
(flink-batch-connectors and flink-streaming-connectors) that contain modules 
for the individual external systems and storage formats such as HBase, 
Cassandra, Avro, Elasticsearch, etc.

Some of these systems, for instance HBase, Cassandra, and Elasticsearch, can 
be used in streaming as well as batch jobs.
However, due to the separate main modules for streaming and batch connectors, 
we currently need to decide where to put a connector. 
For example, the flink-connector-cassandra module is located in 
flink-streaming-connectors but includes a CassandraInputFormat and 
CassandraOutputFormat (i.e., a batch source and sink).

This issue is about merging flink-batch-connectors and 
flink-streaming-connectors into a joint flink-connectors module.
Names of moved modules should not be changed (although this leads to an 
inconsistent naming scheme: flink-connector-cassandra vs. flink-hbase) to keep 
the change of code structure transparent to users. 

> Merge flink-batch-connectors and flink-streaming-connectors modules
> ---
>
> Key: FLINK-4676
> URL: https://issues.apache.org/jira/browse/FLINK-4676
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Priority: Minor
> Fix For: 1.2.0
>
>
> We have two separate Maven modules for batch and streaming connectors 
> (flink-batch-connectors and flink-streaming-connectors) that contain modules 
> for the individual external systems and storage formats such as HBase, 
> Cassandra, Avro, Elasticsearch, etc.
> Some of these systems, for instance HBase, Cassandra, and Elasticsearch, can 
> be used in streaming as well as batch jobs.
> However, due to the separate main modules for streaming and batch connectors, 
> we currently need to decide where to put a connector. 
> For example, the flink-connector-cassandra module is located in 
> flink-streaming-connectors but includes a CassandraInputFormat and 
> CassandraOutputFormat (i.e., a batch source and sink).
> This issue is about merging flink-batch-connectors and 
> flink-streaming-connectors into a joint flink-connectors module.
> Names of moved modules should not be changed (although this leads to an 
> inconsistent naming scheme: flink-connector-cassandra vs. flink-hbase) to 
> keep the change of code structure transparent to users. 





[jira] [Commented] (FLINK-4450) update storm version to 1.0.0

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522838#comment-15522838
 ] 

ASF GitHub Bot commented on FLINK-4450:
---

Github user liuyuzhong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2439#discussion_r80458736
  
--- Diff: 
flink-contrib/flink-storm/src/test/java/org/apache/flink/storm/util/SpoutOutputCollectorObserverTest.java
 ---
@@ -17,14 +17,15 @@
  */
 package org.apache.flink.storm.util;
 
-import backtype.storm.spout.SpoutOutputCollector;
+import org.apache.storm.spout.SpoutOutputCollector;
 
 import org.junit.Assert;
 import org.junit.Test;
 
 import static org.mockito.Mockito.mock;
 
-public class SpoutOutputCollectorObserverTest {
--- End diff --

Fixed.


> update storm version to 1.0.0
> -
>
> Key: FLINK-4450
> URL: https://issues.apache.org/jira/browse/FLINK-4450
> Project: Flink
>  Issue Type: Improvement
>  Components: flink-contrib
>Reporter: yuzhongliu
> Fix For: 2.0.0
>
>
> The Storm package path changed in the new version.
> Storm old version package:
> backtype.storm.*
> Storm new version package:
> org.apache.storm.*
> Shall we update the flink/flink-storm code to the new Storm version?





[GitHub] flink issue #2517: [FLINK-4564] [metrics] Delimiter should be configured per...

2016-09-26 Thread ex00
Github user ex00 commented on the issue:

https://github.com/apache/flink/pull/2517
  
zentol, thanks for your review.
I have pushed the edited implementation.




[jira] [Commented] (FLINK-4564) [metrics] Delimiter should be configured per reporter

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522860#comment-15522860
 ] 

ASF GitHub Bot commented on FLINK-4564:
---

Github user ex00 commented on the issue:

https://github.com/apache/flink/pull/2517
  
zentol, thanks for your review.
I have pushed the edited implementation.


> [metrics] Delimiter should be configured per reporter
> -
>
> Key: FLINK-4564
> URL: https://issues.apache.org/jira/browse/FLINK-4564
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Anton Mushin
>
> Currently, the delimiter used for the scope string is based on a configuration 
> setting shared by all reporters. However, different reporters may have 
> different requirements with regard to the delimiter, so we should allow 
> reporters to use a different delimiter.
> We can keep the current setting as a global default that is used if no 
> reporter-specific setting was set.
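A small sketch of the resolution order the issue proposes: prefer a reporter-specific delimiter when configured, otherwise fall back to the global setting. The configuration keys below are illustrative, not Flink's actual key names.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch for FLINK-4564: per-reporter delimiter with a
// global fallback. Key names are made up for illustration.
public final class DelimiterConfig {

    private final Map<String, String> config = new HashMap<>();

    public void set(String key, String value) {
        config.put(key, value);
    }

    /** Delimiter for a reporter, falling back to the global default. */
    public String delimiterFor(String reporterName) {
        // 1. reporter-specific setting wins
        String specific = config.get("metrics.reporter." + reporterName + ".scope.delimiter");
        if (specific != null) {
            return specific;
        }
        // 2. otherwise the shared global setting, defaulting to "."
        return config.getOrDefault("metrics.scope.delimiter", ".");
    }
}
```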





[jira] [Assigned] (FLINK-4563) [metrics] scope caching not adjusted for multiple reporters

2016-09-26 Thread Anton Mushin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Mushin reassigned FLINK-4563:
---

Assignee: Anton Mushin

> [metrics] scope caching not adjusted for multiple reporters
> ---
>
> Key: FLINK-4563
> URL: https://issues.apache.org/jira/browse/FLINK-4563
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Anton Mushin
>
> Every metric group contains a scope string, representing which entities 
> (job/task/etc.) a given metric belongs to; it is calculated on demand. 
> Before this string is cached, a CharacterFilter is applied to it, which is 
> provided by the caller, usually a reporter. This was done since different 
> reporters have different requirements with regard to valid characters. The 
> filtered string is cached so that we don't have to refilter it every 
> time.
> This all works fine with a single reporter; with multiple reporters, however, 
> it is completely broken, as only the first filter is ever applied.
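One obvious remedy, sketched below under the assumption that reporters are identified by a stable index: cache one filtered scope string per reporter instead of a single shared cache. All names are hypothetical, not Flink's actual classes.

```java
import java.util.function.UnaryOperator;

// Hypothetical sketch for FLINK-4563: keep a filtered-scope cache slot
// per registered reporter, so each reporter's CharacterFilter is honored
// instead of only the first one ever applied.
public final class ScopeCache {

    private final String rawScope;
    private final String[] cache;  // one slot per registered reporter

    public ScopeCache(String rawScope, int numReporters) {
        this.rawScope = rawScope;
        this.cache = new String[numReporters];
    }

    /** Apply the reporter's character filter once; reuse on later calls. */
    public String scopeFor(int reporterIndex, UnaryOperator<String> filter) {
        if (cache[reporterIndex] == null) {
            cache[reporterIndex] = filter.apply(rawScope);
        }
        return cache[reporterIndex];
    }
}
```

The memory cost is one string per reporter per group, which seems acceptable given that the number of reporters is small and fixed at startup.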





[jira] [Updated] (FLINK-4676) Merge flink-batch-connectors and flink-streaming-connectors modules

2016-09-26 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-4676:
--
Component/s: Streaming Connectors
 Batch Connectors and Input/Output Formats

> Merge flink-batch-connectors and flink-streaming-connectors modules
> ---
>
> Key: FLINK-4676
> URL: https://issues.apache.org/jira/browse/FLINK-4676
> Project: Flink
>  Issue Type: Task
>  Components: Batch Connectors and Input/Output Formats, Build System, 
> Streaming Connectors
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Priority: Minor
> Fix For: 1.2.0
>
>
> We have two separate Maven modules for batch and streaming connectors 
> (flink-batch-connectors and flink-streaming-connectors) that contain modules 
> for the individual external systems and storage formats such as HBase, 
> Cassandra, Avro, Elasticsearch, etc.
> Some of these systems can be used in streaming as well as batch jobs as for 
> instance HBase, Cassandra, and Elasticsearch. 
> However, due to the separate main modules for streaming and batch connectors, 
> we currently need to decide where to put a connector. 
> For example, the flink-connector-cassandra module is located in 
> flink-streaming-connectors but includes a CassandraInputFormat and 
> CassandraOutputFormat (i.e., a batch source and sink).
> This issue is about merging flink-batch-connectors and 
> flink-streaming-connectors into a joint flink-connectors module.
> Names of moved modules should not be changed (although this leads to an 
> inconsistent naming scheme: flink-connector-cassandra vs. flink-hbase) to 
> keep the change of code structure transparent to users. 





[jira] [Resolved] (FLINK-4672) TaskManager accidentally decorates Kill messages

2016-09-26 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-4672.
-
Resolution: Fixed

Fixed in
  - 1.1.3 via caa0fbb2157de56c9bdc4bbf8aedb73df90edede
  - 1.2.0 via 6f237cfe6f70b5b72fedd3dea6fbeb6c929631e8

> TaskManager accidentally decorates Kill messages
> 
>
> Key: FLINK-4672
> URL: https://issues.apache.org/jira/browse/FLINK-4672
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.2.0, 1.1.3
>
>






[jira] [Closed] (FLINK-4672) TaskManager accidentally decorates Kill messages

2016-09-26 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-4672.
---

> TaskManager accidentally decorates Kill messages
> 
>
> Key: FLINK-4672
> URL: https://issues.apache.org/jira/browse/FLINK-4672
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.2.0, 1.1.3
>
>






[GitHub] flink pull request #2544: [FLINK-4218] [checkpoints] Do not rely on FileSyst...

2016-09-26 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2544#discussion_r80463412
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java ---
@@ -47,7 +49,7 @@
 * If the the size is not known, return {@code 0}.
 *
 * @return Size of the state in bytes.
-* @throws Exception If the operation fails during size retrieval.
+* @throws IOException If the operation fails during size retrieval.
 */
-   long getStateSize() throws Exception;
+   long getStateSize() throws IOException;
--- End diff --

I think we should do that.




[jira] [Resolved] (FLINK-4218) Sporadic "java.lang.RuntimeException: Error triggering a checkpoint..." causes task restarting

2016-09-26 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-4218.
-
   Resolution: Fixed
 Assignee: Stephan Ewen
Fix Version/s: 1.2.0

Fixed via 95e9004e36fffae755eab7aa3d5d0d5e8bfb7113

> Sporadic "java.lang.RuntimeException: Error triggering a checkpoint..." 
> causes task restarting
> --
>
> Key: FLINK-4218
> URL: https://issues.apache.org/jira/browse/FLINK-4218
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Sergii Koshel
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> I sporadically see the exception below, and the task restarts because of it.
> {code:title=Exception|borderStyle=solid}
> java.lang.RuntimeException: Error triggering a checkpoint as the result of 
> receiving checkpoint barrier
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:785)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:775)
>   at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:203)
>   at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:129)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:183)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:265)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:588)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3:///flink/checkpoints/ece317c26960464ba5de75f3bbc38cb2/chk-8810/96eebbeb-de14-45c7-8ebb-e7cde978d6d3
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:996)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
>   at 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.getFileStatus(HadoopFileSystem.java:351)
>   at 
> org.apache.flink.runtime.state.filesystem.AbstractFileStateHandle.getFileSize(AbstractFileStateHandle.java:93)
>   at 
> org.apache.flink.runtime.state.filesystem.FileStreamStateHandle.getStateSize(FileStreamStateHandle.java:58)
>   at 
> org.apache.flink.runtime.state.AbstractStateBackend$DataInputViewHandle.getStateSize(AbstractStateBackend.java:482)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskStateList.getStateSize(StreamTaskStateList.java:77)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:604)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:779)
>   ... 8 more
> {code}
> The file actually exists on S3. 
> I suppose this is related to some race condition in S3, but it would be good 
> to retry a few times before stopping task execution.
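The retry the reporter suggests could look roughly like the sketch below: re-attempt a metadata lookup a few times before failing the task, to ride out S3's eventual-consistency window. The helper name, attempt count, and backoff are all illustrative assumptions, not Flink's actual code.

```java
import java.io.FileNotFoundException;
import java.util.concurrent.Callable;

// Hedged sketch for FLINK-4218: retry an S3 metadata lookup a few times
// before giving up, since a freshly written object may not be visible yet.
public final class Retry {

    public static <T> T withRetries(Callable<T> action, int attempts, long backoffMs)
            throws Exception {
        FileNotFoundException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return action.call();
            } catch (FileNotFoundException e) {
                last = e;                 // object may not be visible yet
                Thread.sleep(backoffMs);  // wait before the next attempt
            }
        }
        throw last;  // all attempts failed; surface the last error
    }
}
```

A call site would wrap the failing lookup, e.g. `Retry.withRetries(() -> fs.getFileStatus(path).getLen(), 3, 500)`, leaving the final-failure behavior unchanged.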





[GitHub] flink pull request #2526: [FLINK-4580] [rpc] Report rpc invocation exception...

2016-09-26 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2526




[jira] [Commented] (FLINK-4218) Sporadic "java.lang.RuntimeException: Error triggering a checkpoint..." causes task restarting

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522909#comment-15522909
 ] 

ASF GitHub Bot commented on FLINK-4218:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2544#discussion_r80463412
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java ---
@@ -47,7 +49,7 @@
 * If the the size is not known, return {@code 0}.
 *
 * @return Size of the state in bytes.
-* @throws Exception If the operation fails during size retrieval.
+* @throws IOException If the operation fails during size retrieval.
 */
-   long getStateSize() throws Exception;
+   long getStateSize() throws IOException;
--- End diff --

I think we should do that.


> Sporadic "java.lang.RuntimeException: Error triggering a checkpoint..." 
> causes task restarting
> --
>
> Key: FLINK-4218
> URL: https://issues.apache.org/jira/browse/FLINK-4218
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Sergii Koshel
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> I sporadically see the exception below, and the task restarts because of it.
> {code:title=Exception|borderStyle=solid}
> java.lang.RuntimeException: Error triggering a checkpoint as the result of 
> receiving checkpoint barrier
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:785)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:775)
>   at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:203)
>   at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:129)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:183)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:265)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:588)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3:///flink/checkpoints/ece317c26960464ba5de75f3bbc38cb2/chk-8810/96eebbeb-de14-45c7-8ebb-e7cde978d6d3
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:996)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
>   at 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.getFileStatus(HadoopFileSystem.java:351)
>   at 
> org.apache.flink.runtime.state.filesystem.AbstractFileStateHandle.getFileSize(AbstractFileStateHandle.java:93)
>   at 
> org.apache.flink.runtime.state.filesystem.FileStreamStateHandle.getStateSize(FileStreamStateHandle.java:58)
>   at 
> org.apache.flink.runtime.state.AbstractStateBackend$DataInputViewHandle.getStateSize(AbstractStateBackend.java:482)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskStateList.getStateSize(StreamTaskStateList.java:77)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:604)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:779)
>   ... 8 more
> {code}
> The file actually exists on S3. 
> I suppose this is related to some race condition in S3, but it would be good 
> to retry a few times before stopping task execution.





[jira] [Closed] (FLINK-4580) Check that the RpcEndpoint supports the specified RpcGateway

2016-09-26 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-4580.

Resolution: Fixed

Fixed via 2a61e74b9108835ffb5f8ab89d67cb7105801594

> Check that the RpcEndpoint supports the specified RpcGateway
> 
>
> Key: FLINK-4580
> URL: https://issues.apache.org/jira/browse/FLINK-4580
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.2.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> When calling {{RpcService.connect}} the user specifies the type of the 
> {{RpcGateway}}. At the moment, it is not checked whether the {{RpcEndpoint}} 
> actually supports the specified {{RpcGateway}}.
> I think it would be good to add a runtime check that the corresponding 
> {{RpcEndpoint}} supports the specified {{RpcGateway}}. If not, then we can 
> let the connect method fail fast.
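A fail-fast check of this kind could be sketched as follows. The gateway and endpoint types here are simplified stand-ins for Flink's {{RpcGateway}}/{{RpcEndpoint}}, and `connect` is a hypothetical simplification of {{RpcService.connect}}:

```java
// Sketch of a runtime check that an endpoint supports the requested gateway;
// the interfaces below are stand-ins, not Flink's real RPC types.
public class GatewayCheck {

    interface RpcGateway {}
    interface TaskGateway extends RpcGateway { void submit(); }
    interface JobGateway extends RpcGateway { void cancel(); }

    static class TaskEndpoint implements TaskGateway {
        public void submit() {}
    }

    /** Fails fast when the endpoint does not implement the requested gateway. */
    static <G extends RpcGateway> G connect(Object endpoint, Class<G> gatewayType) {
        if (!gatewayType.isInstance(endpoint)) {
            throw new IllegalArgumentException(
                endpoint.getClass().getName()
                    + " does not support gateway " + gatewayType.getName());
        }
        return gatewayType.cast(endpoint);
    }

    public static void main(String[] args) {
        TaskEndpoint endpoint = new TaskEndpoint();
        connect(endpoint, TaskGateway.class);          // supported: returns the gateway
        try {
            connect(endpoint, JobGateway.class);       // unsupported: fails fast
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```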





[jira] [Commented] (FLINK-4580) Check that the RpcEndpoint supports the specified RpcGateway

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522905#comment-15522905
 ] 

ASF GitHub Bot commented on FLINK-4580:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2526


> Check that the RpcEndpoint supports the specified RpcGateway
> 
>
> Key: FLINK-4580
> URL: https://issues.apache.org/jira/browse/FLINK-4580
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.2.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> When calling {{RpcService.connect}} the user specifies the type of the 
> {{RpcGateway}}. At the moment, it is not checked whether the {{RpcEndpoint}} 
> actually supports the specified {{RpcGateway}}.
> I think it would be good to add a runtime check that the corresponding 
> {{RpcEndpoint}} supports the specified {{RpcGateway}}. If not, then we can 
> let the connect method fail fast.





[GitHub] flink pull request #2544: [FLINK-4218] [checkpoints] Do not rely on FileSyst...

2016-09-26 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2544#discussion_r80463624
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsCheckpointStreamFactory.java
 ---
@@ -301,9 +301,16 @@ public StreamStateHandle closeAndGetHandle() throws 
IOException {
}
else {
flush();
+
+   long size = -1;
--- End diff --

Stream position should be okay to determine the state size. All instances I 
checked were accurate there.
Also, given that the size is more informational and should not be relied 
upon, it should be all the less critical.
As 
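The idea of deriving the state size from the stream's own write position, rather than querying the file system, can be sketched like this. This is a simplified stand-in, not the actual FsCheckpointStreamFactory code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: track the write position locally so a getStateSize() lookup never
// has to call FileSystem.getFileStatus() (the call that raced with S3 above).
public class PositionTrackingStream extends OutputStream {

    private final OutputStream out;
    private long pos;

    PositionTrackingStream(OutputStream out) { this.out = out; }

    @Override public void write(int b) throws IOException {
        out.write(b);
        pos++; // every byte written advances the position
    }

    /** Number of bytes written so far, i.e. the size of the state. */
    long getPos() { return pos; }

    public static void main(String[] args) throws IOException {
        PositionTrackingStream s = new PositionTrackingStream(new ByteArrayOutputStream());
        s.write(new byte[]{1, 2, 3});
        System.out.println(s.getPos()); // 3
    }
}
```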


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4218) Sporadic "java.lang.RuntimeException: Error triggering a checkpoint..." causes task restarting

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522914#comment-15522914
 ] 

ASF GitHub Bot commented on FLINK-4218:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2544#discussion_r80463624
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsCheckpointStreamFactory.java
 ---
@@ -301,9 +301,16 @@ public StreamStateHandle closeAndGetHandle() throws 
IOException {
}
else {
flush();
+
+   long size = -1;
--- End diff --

Stream position should be okay to determine the state size. All instances I 
checked were accurate there.
Also, given that the size is more informational and should not be relied 
upon, it should be all the less critical.
As 


> Sporadic "java.lang.RuntimeException: Error triggering a checkpoint..." 
> causes task restarting
> --
>
> Key: FLINK-4218
> URL: https://issues.apache.org/jira/browse/FLINK-4218
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Sergii Koshel
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> Sporadically I see the exception below, and the task restarts because of it.
> {code:title=Exception|borderStyle=solid}
> java.lang.RuntimeException: Error triggering a checkpoint as the result of 
> receiving checkpoint barrier
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:785)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:775)
>   at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:203)
>   at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:129)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:183)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:265)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:588)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3:///flink/checkpoints/ece317c26960464ba5de75f3bbc38cb2/chk-8810/96eebbeb-de14-45c7-8ebb-e7cde978d6d3
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:996)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
>   at 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.getFileStatus(HadoopFileSystem.java:351)
>   at 
> org.apache.flink.runtime.state.filesystem.AbstractFileStateHandle.getFileSize(AbstractFileStateHandle.java:93)
>   at 
> org.apache.flink.runtime.state.filesystem.FileStreamStateHandle.getStateSize(FileStreamStateHandle.java:58)
>   at 
> org.apache.flink.runtime.state.AbstractStateBackend$DataInputViewHandle.getStateSize(AbstractStateBackend.java:482)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskStateList.getStateSize(StreamTaskStateList.java:77)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:604)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:779)
>   ... 8 more
> {code}
> File actually exists on S3. 
> I suppose it is related to some race conditions with S3 but would be good to 
> retry a few times before stop task execution.





[jira] [Closed] (FLINK-4218) Sporadic "java.lang.RuntimeException: Error triggering a checkpoint..." causes task restarting

2016-09-26 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-4218.
---

> Sporadic "java.lang.RuntimeException: Error triggering a checkpoint..." 
> causes task restarting
> --
>
> Key: FLINK-4218
> URL: https://issues.apache.org/jira/browse/FLINK-4218
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Sergii Koshel
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> Sporadically I see the exception below, and the task restarts because of it.
> {code:title=Exception|borderStyle=solid}
> java.lang.RuntimeException: Error triggering a checkpoint as the result of 
> receiving checkpoint barrier
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:785)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:775)
>   at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:203)
>   at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:129)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:183)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:265)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:588)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3:///flink/checkpoints/ece317c26960464ba5de75f3bbc38cb2/chk-8810/96eebbeb-de14-45c7-8ebb-e7cde978d6d3
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:996)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
>   at 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.getFileStatus(HadoopFileSystem.java:351)
>   at 
> org.apache.flink.runtime.state.filesystem.AbstractFileStateHandle.getFileSize(AbstractFileStateHandle.java:93)
>   at 
> org.apache.flink.runtime.state.filesystem.FileStreamStateHandle.getStateSize(FileStreamStateHandle.java:58)
>   at 
> org.apache.flink.runtime.state.AbstractStateBackend$DataInputViewHandle.getStateSize(AbstractStateBackend.java:482)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskStateList.getStateSize(StreamTaskStateList.java:77)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:604)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$3.onEvent(StreamTask.java:779)
>   ... 8 more
> {code}
> File actually exists on S3. 
> I suppose it is related to some race conditions with S3 but would be good to 
> retry a few times before stop task execution.





[jira] [Commented] (FLINK-4410) Split checkpoint times into synchronous and asynchronous part

2016-09-26 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522934#comment-15522934
 ] 

Stephan Ewen commented on FLINK-4410:
-

I will take up this issue. I have a pretty good plan for how to do this and 
will do some overdue cleanup in the process.

> Split checkpoint times into synchronous and asynchronous part
> -
>
> Key: FLINK-4410
> URL: https://issues.apache.org/jira/browse/FLINK-4410
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Ufuk Celebi
>Assignee: Stephan Ewen
>Priority: Minor
>
> Checkpoint statistics contain the duration of a checkpoint. We should split 
> this time into the synchronous and asynchronous part. This will give more 
> insight into the inner workings of the checkpointing mechanism and help users 
> better understand what's going on.
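A minimal sketch of such a split, timing the synchronous snapshot on the task thread and the asynchronous materialization separately. All names and the simulated work are illustrative, not Flink's actual checkpointing code:

```java
import java.util.concurrent.CompletableFuture;

public class CheckpointTimingSplit {

    static final class Stats {
        long syncDurationMillis;
        long asyncDurationMillis;
    }

    static Stats checkpoint(Runnable syncSnapshot, Runnable asyncMaterialize) throws Exception {
        Stats stats = new Stats();

        long syncStart = System.nanoTime();
        syncSnapshot.run();                       // blocks the task thread
        stats.syncDurationMillis = (System.nanoTime() - syncStart) / 1_000_000;

        long asyncStart = System.nanoTime();
        CompletableFuture.runAsync(asyncMaterialize).join();  // background part
        stats.asyncDurationMillis = (System.nanoTime() - asyncStart) / 1_000_000;
        return stats;
    }

    public static void main(String[] args) throws Exception {
        Stats s = checkpoint(
            () -> sleep(20),   // simulated synchronous state snapshot
            () -> sleep(50));  // simulated asynchronous write-out
        System.out.println("sync=" + s.syncDurationMillis + "ms async=" + s.asyncDurationMillis + "ms");
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}
```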





[jira] [Assigned] (FLINK-4410) Split checkpoint times into synchronous and asynchronous part

2016-09-26 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen reassigned FLINK-4410:
---

Assignee: Stephan Ewen

> Split checkpoint times into synchronous and asynchronous part
> -
>
> Key: FLINK-4410
> URL: https://issues.apache.org/jira/browse/FLINK-4410
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Ufuk Celebi
>Assignee: Stephan Ewen
>Priority: Minor
>
> Checkpoint statistics contain the duration of a checkpoint. We should split 
> this time into the synchronous and asynchronous part. This will give more 
> insight into the inner workings of the checkpointing mechanism and help users 
> better understand what's going on.





[jira] [Updated] (FLINK-4630) add netty tcp/restful pushed source support

2016-09-26 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-4630:
-
Summary: add netty tcp/restful pushed source support  (was: add netty tcp 
source support)

> add netty tcp/restful pushed source support
> ---
>
> Key: FLINK-4630
> URL: https://issues.apache.org/jira/browse/FLINK-4630
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: shijinkui
>
> When the source stream starts, it listens on a provided TCP port and receives 
> stream data from the user's data source.
> This Netty TCP source stays alive end-to-end, i.e. from the business 
> system directly to the Flink worker. 
> Such a source service is genuinely needed in production.
> The source in detail:
> 1. The source runs as a Netty TCP server.
> 2. The user provides a TCP port; if the port is in use, increment the port 
> number within 1024 to 65535. The source can run in parallel.
> 3. Call back the provided URL to report the real port being listened on.
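Step 2 above could be sketched like this; a plain ServerSocket stands in for the Netty server to keep the example self-contained:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch: try the requested port and walk upward through the 1024-65535
// range until a free port is found. A real source would bind a Netty
// ServerBootstrap instead of a plain ServerSocket.
public class PortProbe {

    static ServerSocket bindWithFallback(int requestedPort) throws IOException {
        int port = Math.max(requestedPort, 1024);
        while (port <= 65535) {
            try {
                return new ServerSocket(port);   // success: this is the real port
            } catch (IOException inUse) {
                port++;                          // occupied: try the next one
            }
        }
        throw new IOException("no free port in [1024, 65535]");
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket first = bindWithFallback(15000);
             ServerSocket second = bindWithFallback(15000)) {
            // The second bind falls through to the next free port, which would
            // then be reported back via the callback URL (step 3).
            System.out.println(first.getLocalPort() + " " + second.getLocalPort());
        }
    }
}
```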





[jira] [Commented] (FLINK-4509) Specify savepoint directory per savepoint

2016-09-26 Thread Ufuk Celebi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522980#comment-15522980
 ] 

Ufuk Celebi commented on FLINK-4509:


What you describe makes sense, but I meant something orthogonal. I guess my 
JIRA title was misleading. I meant to allow setting the directory parameter per 
savepoint instead of requiring a per cluster default. This could then be 
naturally extended to let the complete checkpoint go to that directory (what 
you describe).

> Specify savepoint directory per savepoint
> -
>
> Key: FLINK-4509
> URL: https://issues.apache.org/jira/browse/FLINK-4509
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>
> Currently, savepoints go to a per cluster configured default directory 
> (configured via {{savepoints.state.backend}} and 
> {{savepoints.state.backend.fs.dir}}).
> We shall allow to specify the directory per triggered savepoint in case no 
> default is configured.





[jira] [Assigned] (FLINK-4511) Schedule periodic savepoints

2016-09-26 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi reassigned FLINK-4511:
--

Assignee: Ufuk Celebi

> Schedule periodic savepoints
> 
>
> Key: FLINK-4511
> URL: https://issues.apache.org/jira/browse/FLINK-4511
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>
> Allow triggering of periodic savepoints, which are kept in a bounded queue 
> (like completed checkpoints currently, but separate).
> If periodic checkpointing is not enabled, only periodic savepoints should be 
> scheduled.
> If periodic checkpointing is enabled, the periodic savepoints should not be 
> scheduled independently, but instead the checkpoint scheduler should trigger 
> a savepoint instead. This will ensure that no unexpected interference between 
> checkpoints and savepoints happens. For this, I would restrict the savepoint 
> interval to be a multiple of the checkpointing interval (if enabled).
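The proposed restriction could be validated like this sketch; the method and parameter names are illustrative:

```java
// Sketch: when periodic checkpointing is enabled, the savepoint interval must
// be a positive multiple of the checkpoint interval so savepoints can ride on
// the checkpoint scheduler instead of being scheduled independently.
public class SavepointIntervalCheck {

    static void validate(long checkpointIntervalMs, long savepointIntervalMs) {
        if (savepointIntervalMs <= 0) {
            throw new IllegalArgumentException("savepoint interval must be positive");
        }
        if (checkpointIntervalMs > 0 && savepointIntervalMs % checkpointIntervalMs != 0) {
            throw new IllegalArgumentException(
                "savepoint interval " + savepointIntervalMs
                    + " ms is not a multiple of checkpoint interval "
                    + checkpointIntervalMs + " ms");
        }
    }

    public static void main(String[] args) {
        validate(60_000, 300_000);      // ok: every 5th checkpoint is a savepoint
        try {
            validate(60_000, 90_000);   // rejected: not a multiple
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```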





[jira] [Assigned] (FLINK-4512) Add option for persistent checkpoints

2016-09-26 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi reassigned FLINK-4512:
--

Assignee: Ufuk Celebi

> Add option for persistent checkpoints
> -
>
> Key: FLINK-4512
> URL: https://issues.apache.org/jira/browse/FLINK-4512
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>
> Allow periodic checkpoints to be persisted by writing out their meta data. 
> This is what we currently do for savepoints, but in the future checkpoints 
> and savepoints are likely to diverge with respect to guarantees they give for 
> updatability, etc.
> This means that the difference between persistent checkpoints and savepoints 
> in the long term will be that persistent checkpoints can only be restored 
> with the same job settings (like parallelism, etc.)
> Regular and persisted checkpoints should behave differently with respect to 
> disposal in *globally* terminal job states (FINISHED, CANCELLED, FAILED): 
> regular checkpoints are cleaned up in all of these cases whereas persistent 
> checkpoints only on FINISHED. Maybe with the option to customize behaviour on 
> CANCELLED or FAILED.
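The disposal rule described above, as a sketch; the enum and method are illustrative, not Flink's API:

```java
// Sketch: regular checkpoints are cleaned up in every globally terminal job
// state, persistent checkpoints only on FINISHED (keeping them on
// CANCELLED/FAILED so the job can be restored).
public class CheckpointDisposal {

    enum JobStatus { FINISHED, CANCELLED, FAILED }

    static boolean shouldDispose(JobStatus terminalStatus, boolean persistent) {
        if (!persistent) {
            return true;                                  // regular: always clean up
        }
        return terminalStatus == JobStatus.FINISHED;      // persistent: keep otherwise
    }

    public static void main(String[] args) {
        System.out.println(shouldDispose(JobStatus.FAILED, false));   // true
        System.out.println(shouldDispose(JobStatus.FAILED, true));    // false
        System.out.println(shouldDispose(JobStatus.FINISHED, true));  // true
    }
}
```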





[jira] [Updated] (FLINK-4630) add netty tcp/restful pushed source support

2016-09-26 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-4630:
-
Description: 
When the source stream starts, it listens on a provided TCP port and receives 
stream data from the user's data source.
This Netty TCP source stays alive end-to-end, i.e. from the business system 
directly to the Flink worker.

Such a source service is genuinely needed in production.

The source in detail:

1.  The source runs as a Netty TCP server.
2.  The user provides a TCP port; if the port is in use, increment the port 
number within 1024 to 65535. The source can run in parallel.
3.  Call back the provided URL to report the real port being listened on.
4.  The user pushes streaming data to the Netty server, which collects the data into Flink.

  was:
When the source stream starts, it listens on a provided TCP port and receives 
stream data from the user's data source.
This Netty TCP source stays alive end-to-end, i.e. from the business system 
directly to the Flink worker.

Such a source service is genuinely needed in production.

The source in detail:

1.  The source runs as a Netty TCP server.
2.  The user provides a TCP port; if the port is in use, increment the port 
number within 1024 to 65535. The source can run in parallel.
3.  Call back the provided URL to report the real port being listened on.


> add netty tcp/restful pushed source support
> ---
>
> Key: FLINK-4630
> URL: https://issues.apache.org/jira/browse/FLINK-4630
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: shijinkui
>
> When the source stream starts, it listens on a provided TCP port and receives 
> stream data from the user's data source.
> This Netty TCP source stays alive end-to-end, i.e. from the business 
> system directly to the Flink worker. 
> Such a source service is genuinely needed in production.
> The source in detail:
> 1. The source runs as a Netty TCP server.
> 2. The user provides a TCP port; if the port is in use, increment the port 
> number within 1024 to 65535. The source can run in parallel.
> 3. Call back the provided URL to report the real port being listened on.
> 4. The user pushes streaming data to the Netty server, which collects the data into Flink.





[jira] [Created] (FLINK-4677) Jars with no job executions produces NullPointerException in ClusterClient

2016-09-26 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4677:
-

 Summary: Jars with no job executions produces NullPointerException 
in ClusterClient
 Key: FLINK-4677
 URL: https://issues.apache.org/jira/browse/FLINK-4677
 Project: Flink
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.2, 1.2.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels
Priority: Minor
 Fix For: 1.2.0, 1.1.3


When the user jar contains no job executions, the command-line client displays 
a NullPointerException. This is not a big issue but should be changed to 
something more descriptive.
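A descriptive guard replacing the NullPointerException could look like this sketch; the exception type and message are assumptions, not the actual ClusterClient code:

```java
// Sketch: fail with a descriptive exception when the user jar's main method
// never triggered a Flink job execution, instead of surfacing an NPE later.
public class NoExecutionGuard {

    static class ProgramMissingJobException extends RuntimeException {
        ProgramMissingJobException(String msg) { super(msg); }
    }

    /** jobGraph is null when the user's main() never called execute(). */
    static void checkExecuted(Object jobGraph) {
        if (jobGraph == null) {
            throw new ProgramMissingJobException(
                "The program did not submit a Flink job. "
                    + "Did the main method forget to call execute()?");
        }
    }

    public static void main(String[] args) {
        try {
            checkExecuted(null);
        } catch (ProgramMissingJobException e) {
            System.out.println(e.getMessage());
        }
    }
}
```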





[GitHub] flink pull request #2548: [FLINK-4677] fail if user jar contains no executio...

2016-09-26 Thread mxm
GitHub user mxm opened a pull request:

https://github.com/apache/flink/pull/2548

[FLINK-4677] fail if user jar contains no executions

If the user has an empty main method with no Flink executions, this will 
print a nicer error message.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mxm/flink FLINK-4677

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2548.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2548


commit 55f8ff766674b7c112a364de287b30a7afa4a1ee
Author: Maximilian Michels 
Date:   2016-09-26T13:00:32Z

[FLINK-4677] fail if user jar contains no executions






[jira] [Commented] (FLINK-4677) Jars with no job executions produces NullPointerException in ClusterClient

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15522999#comment-15522999
 ] 

ASF GitHub Bot commented on FLINK-4677:
---

GitHub user mxm opened a pull request:

https://github.com/apache/flink/pull/2548

[FLINK-4677] fail if user jar contains no executions

If the user has an empty main method with no Flink executions, this will 
print a nicer error message.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mxm/flink FLINK-4677

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2548.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2548


commit 55f8ff766674b7c112a364de287b30a7afa4a1ee
Author: Maximilian Michels 
Date:   2016-09-26T13:00:32Z

[FLINK-4677] fail if user jar contains no executions




> Jars with no job executions produces NullPointerException in ClusterClient
> --
>
> Key: FLINK-4677
> URL: https://issues.apache.org/jira/browse/FLINK-4677
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.0, 1.1.2
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.2.0, 1.1.3
>
>
> When the user jar contains no job executions, the command-line client 
> displays a NullPointerException. This is not a big issue but should be 
> changed to something more descriptive.





[jira] [Commented] (FLINK-4671) Table API can not be built

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15523020#comment-15523020
 ] 

ASF GitHub Bot commented on FLINK-4671:
---

GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/2549

[FLINK-4671] [table] Table API can not be built

This PR solves problems introduced by FLINK-3929 / 25a622f. Now the plugin 
is configured globally. All projects that depend on `flink-test-utils` should 
build again.

@mxm what do you think?



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-4671

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2549.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2549


commit 4f90737247a561e53069aabb7179f483bcee6507
Author: twalthr 
Date:   2016-09-26T13:05:40Z

[FLINK-4671] [table] Table API can not be built




> Table API can not be built
> --
>
> Key: FLINK-4671
> URL: https://issues.apache.org/jira/browse/FLINK-4671
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Running {{mvn clean verify}} in {{flink-table}} results in a build failure.
> {code}
> [ERROR] Failed to execute goal on project flink-table_2.10: Could not resolve 
> dependencies for project org.apache.flink:flink-table_2.10:jar:1.2-SNAPSHOT: 
> Failure to find org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {code}
> However, the master can be built successfully.





[GitHub] flink pull request #2549: [FLINK-4671] [table] Table API can not be built

2016-09-26 Thread twalthr
GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/2549

[FLINK-4671] [table] Table API can not be built

This PR solves problems introduced by FLINK-3929 / 25a622f. Now the plugin 
is configured globally. All projects that depend on `flink-test-utils` should 
build again.

@mxm what do you think?



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-4671

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2549.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2549


commit 4f90737247a561e53069aabb7179f483bcee6507
Author: twalthr 
Date:   2016-09-26T13:05:40Z

[FLINK-4671] [table] Table API can not be built






[GitHub] flink pull request #2549: [FLINK-4671] [table] Table API can not be built

2016-09-26 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2549#discussion_r80472473
  
--- Diff: pom.xml ---
@@ -1081,6 +1081,15 @@ under the License.


 
+   
--- End diff --

Would change "e.g." to "i.e." because it is the only dependency which uses 
the bundle mechanism.




[GitHub] flink issue #2549: [FLINK-4671] [table] Table API can not be built

2016-09-26 Thread mxm
Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2549
  
Looks good! +1 to merge when the tests have passed.




[jira] [Commented] (FLINK-4671) Table API can not be built

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15523041#comment-15523041
 ] 

ASF GitHub Bot commented on FLINK-4671:
---

Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2549
  
Looks good! +1 to merge when the tests have passed.


> Table API can not be built
> --
>
> Key: FLINK-4671
> URL: https://issues.apache.org/jira/browse/FLINK-4671
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Running {{mvn clean verify}} in {{flink-table}} results in a build failure.
> {code}
> [ERROR] Failed to execute goal on project flink-table_2.10: Could not resolve 
> dependencies for project org.apache.flink:flink-table_2.10:jar:1.2-SNAPSHOT: 
> Failure to find org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {code}
> However, the master can be built successfully.





[GitHub] flink pull request #2549: [FLINK-4671] [table] Table API can not be built

2016-09-26 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2549#discussion_r80472617
  
--- Diff: pom.xml ---
@@ -1081,6 +1081,15 @@ under the License.


 
+   
--- End diff --

Could we also keep the link for convenience? 
https://issues.apache.org/jira/browse/DIRSHARED-134




[jira] [Commented] (FLINK-4671) Table API can not be built

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15523043#comment-15523043
 ] 

ASF GitHub Bot commented on FLINK-4671:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2549#discussion_r80472617
  
--- Diff: pom.xml ---
@@ -1081,6 +1081,15 @@ under the License.


 
+   
--- End diff --

Could we also keep the link for convenience? 
https://issues.apache.org/jira/browse/DIRSHARED-134


> Table API can not be built
> --
>
> Key: FLINK-4671
> URL: https://issues.apache.org/jira/browse/FLINK-4671
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Running {{mvn clean verify}} in {{flink-table}} results in a build failure.
> {code}
> [ERROR] Failed to execute goal on project flink-table_2.10: Could not resolve 
> dependencies for project org.apache.flink:flink-table_2.10:jar:1.2-SNAPSHOT: 
> Failure to find org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {code}
> However, the master can be built successfully.





[jira] [Commented] (FLINK-4671) Table API can not be built

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15523042#comment-15523042
 ] 

ASF GitHub Bot commented on FLINK-4671:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2549#discussion_r80472473
  
--- Diff: pom.xml ---
@@ -1081,6 +1081,15 @@ under the License.


 
+   
--- End diff --

Would change "e.g." to "i.e." because it is the only dependency which uses 
the bundle mechanism.


> Table API can not be built
> --
>
> Key: FLINK-4671
> URL: https://issues.apache.org/jira/browse/FLINK-4671
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Running {{mvn clean verify}} in {{flink-table}} results in a build failure.
> {code}
> [ERROR] Failed to execute goal on project flink-table_2.10: Could not resolve 
> dependencies for project org.apache.flink:flink-table_2.10:jar:1.2-SNAPSHOT: 
> Failure to find org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {code}
> However, the master can be built successfully.





[jira] [Commented] (FLINK-4606) Integrate the new ResourceManager with the existing FlinkResourceManager

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15523054#comment-15523054
 ] 

ASF GitHub Bot commented on FLINK-4606:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2540#discussion_r80473835
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ---
@@ -66,15 +67,16 @@
  * {@link #requestSlot(SlotRequest)} requests a slot from the 
resource manager
  * 
  */
-public class ResourceManager extends RpcEndpoint 
implements LeaderContender {
+public abstract class ResourceManager extends RpcEndpoint implements 
LeaderContender {
--- End diff --

I believe this should be 

```java
 ResourceManager
extends RpcEndpoint
```
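
A self-contained sketch of the pattern the comment suggests, keeping the gateway type parameter visible on `RpcEndpoint`. All class and method names below are illustrative stand-ins, not Flink's actual flip-6 signatures:

```java
// Marker for any RPC gateway interface.
interface RpcGateway {}

// Gateway through which peers talk to a resource manager (hypothetical).
interface ResourceManagerGateway extends RpcGateway {
    String requestSlot(String slotRequest);
}

// RpcEndpoint stays parameterized by the gateway type it exposes,
// so subclasses declare which gateway callers get.
abstract class RpcEndpoint<C extends RpcGateway> {
    public abstract C getSelfGateway();
}

// The abstract ResourceManager binds the type parameter, as suggested:
// "ResourceManager extends RpcEndpoint<...>" rather than a raw RpcEndpoint.
abstract class ResourceManager extends RpcEndpoint<ResourceManagerGateway> {
}

class StandaloneResourceManager extends ResourceManager {
    @Override
    public ResourceManagerGateway getSelfGateway() {
        // Minimal stub: grant every slot request.
        return request -> "granted:" + request;
    }
}

public class Sketch {
    public static void main(String[] args) {
        ResourceManager rm = new StandaloneResourceManager();
        // prints "granted:slot-1"
        System.out.println(rm.getSelfGateway().requestSlot("slot-1"));
    }
}
```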


> Integrate the new ResourceManager with the existing FlinkResourceManager
> 
>
> Key: FLINK-4606
> URL: https://issues.apache.org/jira/browse/FLINK-4606
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: zhangjing
>Assignee: zhangjing
>
> Integrate the new ResourceManager with the existing FlinkResourceManager.





[jira] [Created] (FLINK-4678) Add SessionRow row-windows for streaming tables (FLIP-11)

2016-09-26 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-4678:


 Summary: Add SessionRow row-windows for streaming tables (FLIP-11)
 Key: FLINK-4678
 URL: https://issues.apache.org/jira/browse/FLINK-4678
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.2.0
Reporter: Fabian Hueske


Add SessionRow row-windows for streaming tables as described in FLIP-11. 

This task requires implementing a custom stream operator and integrating it with 
checkpointing and timestamp / watermark logic.
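
For context, a session row-window groups rows into windows that close once the gap to the next row exceeds a configured timeout. A minimal, dependency-free sketch of just that grouping rule (plain Java, not Flink's operator API, which additionally has to hook into checkpointing and watermarks as described above):

```java
import java.util.ArrayList;
import java.util.List;

public class SessionGrouping {
    /**
     * Groups ascending event timestamps into sessions: a new session
     * starts whenever the gap to the previous timestamp exceeds
     * {@code gap}. Illustrates the windowing rule only; a real operator
     * would also checkpoint its state and emit a session only once a
     * watermark guarantees no more rows can join it.
     */
    public static List<List<Long>> sessionize(List<Long> timestamps, long gap) {
        List<List<Long>> sessions = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        Long previous = null;
        for (long ts : timestamps) {
            if (previous != null && ts - previous > gap) {
                sessions.add(current);
                current = new ArrayList<>();
            }
            current.add(ts);
            previous = ts;
        }
        if (!current.isEmpty()) {
            sessions.add(current);
        }
        return sessions;
    }

    public static void main(String[] args) {
        // Gap of 10: 1,5,8 form one session; 30,35 the next; 100 its own.
        System.out.println(sessionize(List.of(1L, 5L, 8L, 30L, 35L, 100L), 10));
    }
}
```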





[jira] [Updated] (FLINK-4678) Add SessionRow row-windows for streaming tables (FLIP-11)

2016-09-26 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-4678:
-
Description: 
Add SessionRow row-windows for streaming tables as described in 
[FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
 

This task requires implementing a custom stream operator and integrating it with 
checkpointing and timestamp / watermark logic.

  was:
Add SessionRow row-windows for streaming tables as described in FLIP-11. 

This task requires to implement a custom stream operator and integrate it with 
checkpointing and timestamp / watermark logic.


> Add SessionRow row-windows for streaming tables (FLIP-11)
> -
>
> Key: FLINK-4678
> URL: https://issues.apache.org/jira/browse/FLINK-4678
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>
> Add SessionRow row-windows for streaming tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
>  
> This task requires implementing a custom stream operator and integrating it 
> with checkpointing and timestamp / watermark logic.





[jira] [Created] (FLINK-4679) Add TumbleRow row-windows for streaming tables

2016-09-26 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-4679:


 Summary: Add TumbleRow row-windows for streaming tables
 Key: FLINK-4679
 URL: https://issues.apache.org/jira/browse/FLINK-4679
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.2.0
Reporter: Fabian Hueske


Add TumbleRow row-windows for streaming tables as described in 
[FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
 

This task requires implementing a custom stream operator and integrating it with 
checkpointing and timestamp / watermark logic.
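
For context, a count-based tumbling row-window partitions the stream into non-overlapping groups of a fixed number of rows. A dependency-free sketch of that grouping rule (plain Java, not Flink's operator API):

```java
import java.util.ArrayList;
import java.util.List;

public class TumbleGrouping {
    /**
     * Count-based tumbling row-windows: every {@code size} consecutive
     * rows form one window, with no overlap. The last window may be
     * partial. Illustrates the grouping rule only; the actual operator
     * must also handle checkpointing, timestamps and watermarks.
     */
    public static <T> List<List<T>> tumble(List<T> rows, int size) {
        List<List<T>> windows = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += size) {
            windows.add(new ArrayList<>(rows.subList(i, Math.min(i + size, rows.size()))));
        }
        return windows;
    }

    public static void main(String[] args) {
        // Windows of 3 rows over [a..e]: [a, b, c], [d, e]
        System.out.println(tumble(List.of("a", "b", "c", "d", "e"), 3));
    }
}
```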





[jira] [Created] (FLINK-4680) Add SlideRow row-windows for streaming tables

2016-09-26 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-4680:


 Summary: Add SlideRow row-windows for streaming tables
 Key: FLINK-4680
 URL: https://issues.apache.org/jira/browse/FLINK-4680
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.2.0
Reporter: Fabian Hueske


Add SlideRow row-windows for streaming tables as described in 
[FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
 

This task requires implementing a custom stream operator and integrating it with 
checkpointing and timestamp / watermark logic.
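
For context, a count-based sliding row-window starts a new window every {{slide}} rows, so consecutive windows overlap when the slide is smaller than the window size. A dependency-free sketch of the grouping rule (plain Java, not Flink's operator API; whether trailing partial windows are emitted is a design choice this sketch does not decide authoritatively):

```java
import java.util.ArrayList;
import java.util.List;

public class SlideGrouping {
    /**
     * Count-based sliding row-windows: a window of up to {@code size}
     * rows starts every {@code slide} rows. Trailing partial windows
     * are emitted here for illustration.
     */
    public static <T> List<List<T>> slide(List<T> rows, int size, int slide) {
        List<List<T>> windows = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += slide) {
            windows.add(new ArrayList<>(rows.subList(i, Math.min(i + size, rows.size()))));
        }
        return windows;
    }

    public static void main(String[] args) {
        // size 3, slide 2 over [1..5]: [1, 2, 3], [3, 4, 5], [5]
        System.out.println(slide(List.of(1, 2, 3, 4, 5), 3, 2));
    }
}
```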





[GitHub] flink pull request #2540: [FLINK-4606] [cluster management] Integrate the ne...

2016-09-26 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2540#discussion_r80473835
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ---
@@ -66,15 +67,16 @@
  * {@link #requestSlot(SlotRequest)} requests a slot from the 
resource manager
  * 
  */
-public class ResourceManager extends RpcEndpoint 
implements LeaderContender {
+public abstract class ResourceManager extends RpcEndpoint implements 
LeaderContender {
--- End diff --

I believe this should be 

```java
 ResourceManager
extends RpcEndpoint
```


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-4681) Add SessionRow row-windows for batch tables.

2016-09-26 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-4681:


 Summary: Add SessionRow row-windows for batch tables.
 Key: FLINK-4681
 URL: https://issues.apache.org/jira/browse/FLINK-4681
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.2.0
Reporter: Fabian Hueske


Add SessionRow row-windows for batch tables as described in 
[FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
 





[jira] [Created] (FLINK-4682) Add TumbleRow row-windows for batch tables.

2016-09-26 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-4682:


 Summary: Add TumbleRow row-windows for batch tables.
 Key: FLINK-4682
 URL: https://issues.apache.org/jira/browse/FLINK-4682
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.2.0
Reporter: Fabian Hueske


Add TumbleRow row-windows for batch tables as described in 
[FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
 





[jira] [Created] (FLINK-4683) Add SlideRow row-windows for batch tables

2016-09-26 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-4683:


 Summary: Add SlideRow row-windows for batch tables
 Key: FLINK-4683
 URL: https://issues.apache.org/jira/browse/FLINK-4683
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.2.0
Reporter: Fabian Hueske


Add SlideRow row-windows for batch tables as described in 
[FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
 





[GitHub] flink pull request #2540: [FLINK-4606] [cluster management] Integrate the ne...

2016-09-26 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2540#discussion_r80478098
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ---
@@ -66,15 +67,16 @@
  * {@link #requestSlot(SlotRequest)} requests a slot from the 
resource manager
  * 
  */
-public class ResourceManager extends RpcEndpoint 
implements LeaderContender {
+public abstract class ResourceManager extends RpcEndpoint implements 
LeaderContender {
--- End diff --

The `RpcCompletenessTest` might have to be adapted for this to work.




[jira] [Commented] (FLINK-4606) Integrate the new ResourceManager with the existing FlinkResourceManager

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15523114#comment-15523114
 ] 

ASF GitHub Bot commented on FLINK-4606:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2540#discussion_r80478098
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ---
@@ -66,15 +67,16 @@
  * {@link #requestSlot(SlotRequest)} requests a slot from the 
resource manager
  * 
  */
-public class ResourceManager extends RpcEndpoint 
implements LeaderContender {
+public abstract class ResourceManager extends RpcEndpoint implements 
LeaderContender {
--- End diff --

The `RpcCompletenessTest` might have to be adapted for this to work.


> Integrate the new ResourceManager with the existing FlinkResourceManager
> 
>
> Key: FLINK-4606
> URL: https://issues.apache.org/jira/browse/FLINK-4606
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: zhangjing
>Assignee: zhangjing
>
> Integrate the new ResourceManager with the existing FlinkResourceManager.





[GitHub] flink issue #2540: [FLINK-4606] [cluster management] Integrate the new Resou...

2016-09-26 Thread mxm
Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2540
  
Thank you for your changes. I'm trying to incorporate them in `flip-6` now.




[jira] [Commented] (FLINK-4606) Integrate the new ResourceManager with the existing FlinkResourceManager

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15523115#comment-15523115
 ] 

ASF GitHub Bot commented on FLINK-4606:
---

Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2540
  
Thank you for your changes. I'm trying to incorporate them in `flip-6` now.


> Integrate the new ResourceManager with the existing FlinkResourceManager
> 
>
> Key: FLINK-4606
> URL: https://issues.apache.org/jira/browse/FLINK-4606
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: zhangjing
>Assignee: zhangjing
>
> Integrate the new ResourceManager with the existing FlinkResourceManager.





[GitHub] flink issue #2535: [FLINK-4662] Bump Calcite version up to 1.9

2016-09-26 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2535
  
Thanks @wuchong. LGTM, will merge...




[jira] [Commented] (FLINK-4662) Bump Calcite version up to 1.9

2016-09-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15523119#comment-15523119
 ] 

ASF GitHub Bot commented on FLINK-4662:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2535
  
Thanks @wuchong. LGTM, will merge...


> Bump Calcite version up to 1.9
> --
>
> Key: FLINK-4662
> URL: https://issues.apache.org/jira/browse/FLINK-4662
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> Calcite just released the 1.9 version. We should adopt it also in the Table 
> API especially for FLINK-4294.




